Columns: id (string, 3-9 chars) · source (string, 1 value) · version (string, 1 value) · text (string, 1.54k-298k chars) · added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25) · created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00) · metadata (dict)
234425320
pes2o/s2orc
v3-fos-license
Determination of minimum stock on system retail using forecast, economic order quantity and reorder point methods

In the globalization era, where retail business competition is getting tougher, a retail business must have advantages such as competitive selling prices and stock availability. To offer competitive selling prices, a retail business should be able to plan goods availability so as to maintain the balance between demand and existing stock (supply). Information on the availability of stock that matches the sales that occur is therefore very important: it is needed to avoid goods accumulating excessively in the warehouse, or shortages of goods. If this information about stock, both in the warehouse and already sold, is not properly provided, the company's profit can be reduced, because staff may spend money buying goods on the guess that stock of certain items is running low, or, miscalculating the other way, buy more of goods that are actually still in the warehouse in sufficient quantities. These situations often occur when a retail business has no tool for estimating the stock purchases needed for a certain selling period. A system is therefore required to help companies carry out retail business activities, using forecasting methods to estimate quantitatively, from relevant past data, what will happen to stock in the future. In accordance with the results of the identification of demand patterns, the forecasting methods used are Moving Average (MA), Weighted Moving Average (WMA), Single Exponential Smoothing (SES) and Double Exponential Smoothing (DES). After forecasting, the best forecast method is selected as the one with the lowest Mean Absolute Deviation (MAD). The Economic Order Quantity (EOQ) method then helps determine the optimal purchase quantity and frequency; through these, optimal inventory control is obtained. Finally, the Reorder Point method can be used to find the minimum stock level at which to reorder, so that stock is not excessive and sales needs are met optimally.

Background
The development of science and technology in the era of globalization is advancing rapidly, bringing positive impacts and facilitating daily life. Its results are very widely used by the community, at home, in offices, companies, schools, universities, and other public places, and have brought many changes in various fields of life, ranging from social life to the use of technology in the business world, including retail businesses. This increasing development has also increased the role of information systems in supplying the accurate information that gives a company a competitive advantage; to meet these information needs, information technology that can process data accurately is required.
Retail businesses today really need a system to help and facilitate business activities, such as providing better customer service, increasing the security of customer data, facilitating and speeding up the process of buying and selling goods, managing stock, and producing accurate financial reports and data. The system for determining the minimum stock in a retail system uses the Forecast, Economic Order Quantity and Reorder Point methods and is designed based on data obtained from spare-part shops and motorcycle repair shops.

Literature Study
In this system, research is conducted that aims to establish an appropriate forecasting system for determining the inventory stock that must be carried, in accordance with the past sales data that have occurred. The method used in this research is forecasting to determine reorder points: a forecasting method forecasts future stock needs; an Economic Order Quantity (EOQ) calculation then determines the optimal purchase quantity and frequency, i.e. the inventory that can be ordered in a period so as to minimize the cost of the inventory; and finally a Reorder Point calculation determines when to reorder stock. The data used are sales data for several periods that have occurred.

Method and Materials
The methods used in the research are the Forecast, Economic Order Quantity and Reorder Point methods.

Moving Average
The Moving Average method forecasts by taking a group of observations and using their average value as the forecast for the coming period [1]. The steps are as follows:
Step 1: Calculate the forecast for period t + 1:
F_{t+1} = (A_t + A_{t-1} + ... + A_{t-n+1}) / n
Step 2: Calculate the absolute deviation for period t:
AD_t = |A_t - F_t|, and MAD = (Σ AD_t) / n
Where:
F_{t+1} = forecast for period t + 1
F_t = forecast for period t
A_t = actual demand in period t
n = the amount of demand data involved
AD_t = absolute deviation in period t
MAD = Mean Absolute Deviation

Weighted Moving Average
The Weighted Moving Average method is the same as the Moving Average, except that the most recent values in the series are given a greater weight when calculating the forecast. Each available past datum is given a different weight, on the assumption that the most recent data are the most relevant for forecasting: the closer a datum is to the forecast period, the greater its weight, because the data closest to the forecast most strongly affect its result [2]. The steps are as follows:
Step 1: Calculate the forecast for period t:
F_t = (w_1 D_{t-1} + w_2 D_{t-2} + ... + w_n D_{t-n}) / (w_1 + w_2 + ... + w_n), with larger weights w_i on more recent periods
Step 2: Calculate the absolute error for period t:
AE_t = |D_t - F_t|
Where:
F_t = forecast for period t
A_t = actual demand in period t
D_t = actual data for period t
AE_t = absolute error in period t
w_i = the weight given to each month
n = the amount of demand data involved
MAD = Mean Absolute Deviation
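To illustrate the two averaging methods just described, a minimal Python sketch follows; the function names and the sample demand series are ours, not the paper's:

```python
def moving_average_forecast(demand, n):
    """MA: forecast the next period as the mean of the last n actuals."""
    return sum(demand[-n:]) / n

def weighted_moving_average_forecast(demand, weights):
    """WMA: weighted mean of the most recent actuals; the last weight
    applies to the most recent period, so recent data count for more."""
    recent = demand[-len(weights):]
    return sum(w * d for w, d in zip(weights, recent)) / sum(weights)

def mean_absolute_deviation(actuals, forecasts):
    """MAD: average absolute gap between actual demand and its forecast."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly sales of one spare part:
sales = [42, 40, 45, 50, 48, 52]
print(moving_average_forecast(sales, n=3))                 # (50+48+52)/3 = 50.0
print(weighted_moving_average_forecast(sales, [1, 2, 3]))  # 302/6, about 50.33
```

In the paper's workflow, each method's MAD would be computed over the historical periods and the method with the lowest MAD kept for the actual forecast.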
Single Exponential Smoothing
The Single Exponential Smoothing (SES) method is a development of the Moving Average method. It uses very little recorded past data and assumes that the data fluctuate around a fixed average value, without following patterns or trends [3]. The steps are as follows:
Step 1: Calculate the forecast for period t + 1:
F_{t+1} = F_t + α (A_t - F_t)

Double Exponential Smoothing
The Double Exponential Smoothing (DES) method is a linear model proposed by Brown, in which the smoothing process is carried out twice. The premise of Brown's linear exponential smoothing is similar to that of the linear moving average: both the single and the double smoothing values lag behind the actual data when a trend is present. The difference between the single and double smoothing values can therefore be added to the single smoothing value and adjusted for the trend. This method is used in this application because the sales data contain a trend, occurring in several important months, such as the Eid month and the New Year, when sales generally increase [4]. The steps are as follows:
Step 1: Calculate the single exponential smoothing for period t:
S'_t = α X_t + (1 - α) S'_{t-1}
Step 2: Calculate the double exponential smoothing for period t:
S''_t = α S'_t + (1 - α) S''_{t-1}
Step 3: Calculate the constant (level) value:
a_t = 2 S'_t - S''_t
Step 4: Calculate the slope value:
b_t = (α / (1 - α)) (S'_t - S''_t)
Step 5: Calculate the forecast for period t + m:
F_{t+m} = a_t + b_t m

Ordering Cost
Ordering costs are all costs incurred in the process of ordering an item. They are variable costs, changing with the order frequency. With annual demand D, order quantity Q and cost per order S, the total ordering cost per year is (D / Q) × S.

Holding Cost
Holding costs are the costs incurred by the company to store a purchased item, that is, to keep inventory for a certain period [5]. With a holding cost per unit per year H, the total holding cost is (Q / 2) × H.

Mean Absolute Deviation
Mean Absolute Deviation (MAD) is the average absolute error over a certain period, regardless of whether the forecast result is greater or smaller than reality; in other words, MAD is the average of the absolute values of the deviations, where the intended deviation is the difference between the actual data and the forecast result for a certain period [6].

Economic Order Quantity
Economic Order Quantity (EOQ) is an inventory management method that determines how many orders or purchases must be made and how many units must be ordered each time so that the total cost (the sum of the ordering costs and the holding costs) is minimum. The economical order quantity is thus the Q that minimizes this sum, Q* = √(2DS / H).

Conclusion
The following conclusions can be drawn from this calculation:
1. The system for determining the minimum stock in a retail system using the Forecast, Economic Order Quantity and Reorder Point methods is designed based on data obtained from spare-part shops and motorbike repair shops.
2. The results of testing the data on
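To make the smoothing and inventory formulas above concrete, a minimal Python sketch follows. The α value, demand figures and costs are hypothetical; the flat initialisation S'_0 = S''_0 = X_0 is one common convention rather than the paper's stated choice, and the reorder-point formula shown (demand per period × lead time, without safety stock) is the standard textbook form, since the paper's own ROP formula does not survive in this copy:

```python
import math

def ses(actuals, alpha):
    """SES: F_{t+1} = F_t + alpha * (A_t - F_t); F_1 seeded with A_1."""
    f = actuals[0]
    for a in actuals:
        f = f + alpha * (a - f)
    return f  # forecast for the period after the last actual

def des_brown(actuals, alpha, m=1):
    """Brown's DES: double smoothing, then level a_t and slope b_t;
    returns the forecast F_{t+m} = a_t + b_t * m."""
    s1 = s2 = actuals[0]                   # flat initialisation (a convention)
    for x in actuals:
        s1 = alpha * x + (1 - alpha) * s1  # S'_t, single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2 # S''_t, double smoothing
    a_t = 2 * s1 - s2                      # constant (level)
    b_t = alpha / (1 - alpha) * (s1 - s2)  # slope (trend)
    return a_t + b_t * m

def eoq(d, s, h):
    """EOQ: Q* = sqrt(2DS/H) minimises ordering plus holding cost."""
    return math.sqrt(2 * d * s / h)

def reorder_point(demand_per_period, lead_time_periods):
    """Standard ROP without safety stock: reorder when stock reaches d * L."""
    return demand_per_period * lead_time_periods

# Hypothetical figures for one spare part:
sales = [42, 40, 45, 50, 48, 52]
print(ses(sales, alpha=0.3), des_brown(sales, alpha=0.3, m=1))
q = eoq(d=600, s=50_000, h=2_000)  # yearly demand, cost/order, cost/unit/year
print(q, 600 / q)                  # optimal order size and yearly frequency
print(reorder_point(demand_per_period=50, lead_time_periods=0.5))
```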
2021-01-07T09:07:10.189Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "b5fe24aca5d40acac7a148d906cace6233f98531", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1007/1/012180", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "046fac5569889a33767073fc064d857c9d11bd55", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Physics", "Economics" ] }
227246248
pes2o/s2orc
v3-fos-license
Leishmaniasis and phlebotomine sand flies in Oman Sultanate

There are few data on leishmaniases and sandflies in the Sultanate of Oman. We carried out an eco-epidemiological study in 1998 in the two main mountain regions of the country, the Sharqiyah and the Dhofar. This study allowed us to isolate and identify three Leishmania strains from patients exhibiting cutaneous leishmaniasis. Typing by isoenzymatic study and by molecular biology gave congruent results: two strains of Leishmania donovani zymodeme (Z) MON-31 isolated in the Sharqiyah, and one L. tropica ZROM102 (a ZMON-39 variant for 4 isoenzymes) from the Dhofar. No strain was isolated from canids. The study of sandflies identified 14 species distributed in the genera Phlebotomus, Sergentomyia and Grassomyia: Ph. papatasi, Ph. bergeroti, Ph. duboscqi, Ph. alexandri, Ph. saevus, Ph. sergenti, Se. fallax, Se. baghdadis, Se. cincta, Se. christophersi, Se. clydei, Se. tiberiadis, Se. africana, and Gr. dreyfussi. In the Sharqiyah, the only candidate for the transmission of L. donovani was Ph. alexandri, but the low densities observed for this species do not argue in favour of any role. In the Dhofar, Ph. sergenti is the most important proven vector of L. tropica, but Ph. saevus, a locally much more abundant species, constitutes a good candidate for transmission.

Introduction
Leishmaniasis, as well as the fauna of its vectors, the Phlebotomine sandflies, remains poorly documented in the Sultanate of Oman [5]. A few case reports of both visceral (VL) [6, 12, 21-23, 30, 63, 64] and cutaneous (CL) [64,65,71] leishmaniases are available in the literature. However, to our knowledge, no parasite has been cultured and typed according to gold-standard methods (isoenzymes, PCR-RFLP or sequencing of targeted markers), except one strain of L. tropica isolated from a Pakistani patient continuously resident in Oman for the 18 months before parasite isolation [65]. No record of affected animals such as dogs has been documented. The agents of VL belong to the L. donovani complex, without identification at the specific level (L. donovani s. st. or L. infantum) [63]. A few studies have been carried out to identify the sandflies of the Sultanate [36]. Visceral leishmaniasis is confined principally to children in the Sharqiyah [50] and Dhofar [25] governorates. Between 1992 and 1995, the annual incidence of VL in Oman varied from 14 to 40 cases, but many children treated empirically for kala-azar are not reported [64]. Annual incidence has since decreased to 15-20 CL and 2-4 VL cases yearly [8]. The goal of the present study was to isolate, culture and identify Leishmania strains from humans and from wild or domestic canids, and to carry out an inventory of the Phlebotomine sandflies of the country in order to determine candidate(s) for Leishmania transmission according to the eco-epidemiological concept. This concept started in the 1950s and was applied in Mediterranean foci (France, Italy, Spain, Tunisia, Algeria, Morocco, Syria) and the Arabian peninsula (Yemen).

Ethical approval
The study (inclusion of patients, animals and captures of Phlebotomine sandflies) was carried out in 1998 in agreement with the World Health Organization, the Omani Ministry of Health, and the Omani Ministry of Agriculture. All laws and regulations were strictly followed. At that time, no ethics committee existed in Oman [7]. In all cases, patient records and information were anonymised and de-identified prior to analysis.
Study sites
We prospected the two main biogeographical regions of Oman from September 26 to October 26, 1998: the Sharqiyah and the Dhofar. In the Sharqiyah region, analysis of places where CL and VL cases occurred was performed on selected farms in the Ibra alluvial basin. Houses were occupied by one or two families living mostly with some cows and herds of sheep and goats. The cows remained in the stable, while the herds were driven into the steppes and the surrounding hills. To complete the vector sampling, trapping with adhesive traps was carried out on the rockslides, cracks, holes and caves of the Wadi Mouqal cliffs. The Dhofar region was given special attention, not only because of the presence of VL and CL, but also because of its phyto- and zoo-geographical originality (Afro-tropical elements, endemism) [26,46]. The transect method led us to sample the different phyto-ecological climaxes, from Salalah to Herwouib, through the Qara and Qamar djebels, the Jejouel reg, the Wadi Afaoul and the Wadi Herwouib (Figs. 1, 2 and 3). Among these climaxes, two were chosen because of their endemic richness: the slopes watered with Anogeissus, and the thalwegs and wadis with Acacia, Boswellia and Dracaena.

Phlebotomine sandfly sampling
The trapping sites were selected according to both field observations (orography, geology, geomorphology, vegetation, human habitat) and information acquired prior to the field work: human cases of VL or CL, entomological and parasitological studies, and expert reports. Sampling was carried out by combining miniature CDC light traps and sticky traps. One to three CDC miniature light traps were installed in the previously selected biotopes: houses, stables, sheepfolds, caves, and various vegetation. They were placed at the end of the afternoon and picked up the next morning before sunrise. Sandflies were stored in 95% ethanol.

(Figure legend, after [46], modified: Dh1: shores with Avicennia marina (residual mangrove); Dh2: piedmont with Boscia arabica; Dh3: mountain flanks and humid escarpments with Anogeissus dhofarica, and hilly plateaus with steppes and grasses; Dh4: arid plateau with Euphorbia balsamifera; Dh5: scree and perarid reg desert with Boswellia sacra; Dh6: wadis and perarid cliffs with Acacia ethbaica and Dracaena serrulata. The sampled bioclimatic levels are indicated in grey.)

We express the relative frequencies of species as the number of sandflies caught per "night/trap" (s/n/t). Sticky traps were made of white paper sheets (20 × 20 cm, giving a double-sided active surface of 800 cm²) impregnated with castor oil. They were placed in the crevices of walls or rocks, at the opening of burrows, and in stables, sheepfolds, and henhouses. The number of traps deposited is shown in Table 1. Sandflies were collected from the trap with a small brush and stored in 100% ethanol. After identification, results were expressed as the number of specimens (males, females, total) per species and per square meter of trap (relative densities). Grouping the stations allowed calculation of densities by climax [58,59]. During the present field work, 21 stations were sampled in 18 localities using CDC traps (29 night/traps) and/or sticky traps (815 traps, representing a total interception area of 65.2 m²). These stations were grouped by locality and/or biotope (Table 1). Most of the sandflies were processed to be mounted in toto according to the following protocol.
Soft tissues were lysed in a bath of 10% KOH (12 h), washed four times in distilled water, cleared in Marc-André solution (12 h), and, after dehydration in successive alcohol baths and then clove oil, mounted individually between microscope slide and cover slide in Canada balsam for species identification. A few specimens were processed individually to allow molecular biology processing [16,19]; these were mounted in chloral gum directly after the Marc-André solution step. Visual analysis of the specimens was performed with a BX61 microscope (Olympus, Japan). Measurements and counts were made using Stream Motion software (Olympus, Japan) and a video camera connected to the microscope. Identifications were made thanks to the original descriptions of each of the species encountered, as well as available keys and papers [1,10,18,39,41-43].

Vertebrate host sampling; Leishmania detection, isolation and identification

Human leishmaniasis
Among the human leishmaniasis cases reported in Oman, only cutaneous forms were observed. All patients were hospitalised in Muscat and Ibra provinces (Sharqiyah region) and Salalah province (Dhofar region). Samples were biopsied with an Arouette bistoury (single-use punch biopsy) under local anaesthesia with lidocaine (Xylocaine®), and collected with fine scissors or hooked pliers. After crushing the samples in a Potter mortar in a sterile solution of NaCl (0.9%) plus penicillin G (200,000 U/mL), cultures were initiated on NNN medium (4 tubes per sample), with three drops of sterilised urine and a few drops of heart/brain solution, and incubated at 24 °C (23-26 °C). Cultures were checked 5 days later and subcultured every 8 days; they were considered sterile after four subcultures. Simultaneously, smears were prepared on slides from fresh cutaneous samples; the slides were fixed with methanol, stained with Giemsa, and examined by direct microscopy using a 50× oil-immersion objective. Based on the leishmanin skin test method, the survey included a school population living close to Ibra. The L. major antigen, prepared by the Istituto Superiore di Sanità of Rome (Italy), was injected intradermally by Dermo-jet. Reactions were read 48 h later; the criterion for positivity was a papule of 5 mm diameter or more. Traditional leishmanin antigens react positively in most CL cases of the "Old World", regardless of the Leishmania species.

Animal leishmaniases
We focused mainly on domestic canids (dogs) and wild canids (foxes). Animals were slaughtered under the control of the police services. The necropsies (one fox and 5 dogs) were carried out in veterinary centres (dogs) or in the field (foxes). Spleen, liver and bone marrow were targeted. Giemsa-stained smears and cultures on NNN medium were performed according to the protocol detailed above for human leishmaniasis cases.

Leishmania species identification
Isolated Leishmania strains were characterised by Multilocus Enzyme Electrophoresis (MLEE) and confirmed by molecular identification. The identifications were performed at the Leishmania Identification Centre of the Unit of Vector-borne Diseases of the Istituto Superiore di Sanità of Rome (Italy).

Molecular identification
The isolated Leishmania strains were typed by PCR-RFLP analysis targeting ITS-1 (internal transcribed spacer 1) [62] and Heat Shock Protein 70 (HSP70) [70], following the protocols detailed in the original techniques.
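The trap-effort arithmetic used in this survey can be reproduced directly from the counts reported here (815 sticky papers of 20 × 20 cm, double-sided; 707 sandflies on sticky traps; 360 sandflies over 29 night/traps). A minimal Python sketch, with variable names ours:

```python
# Sticky traps: 815 paper sheets, each 20 cm x 20 cm, double-sided.
sheets = 815
area_m2 = sheets * 2 * (0.20 * 0.20)  # 65.2 m² of interception surface
overall_density = 707 / area_m2       # ~10.8 sandflies per m² of trap

# CDC miniature light traps: 360 sandflies over 29 night/traps.
s_n_t = 360 / 29                      # ~12.4 sandflies per night/trap

print(round(area_m2, 1), round(overall_density, 2), round(s_n_t, 2))
```

The per-region densities reported below (7.85 and 16.32 sandflies/m²) follow the same calculation restricted to the trap area deployed in each region.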
Phlebotomine sandflies
During the present study, a total of 707 sandflies were captured with sticky traps: 331 in the Sharqiyah, at an overall density of 7.85 sandflies/m² of trap, and 376 in the Dhofar, at an average density of 16.32 sandflies/m² of trap (Table 2). Using the CDC miniature light traps, a total of 360 sandflies were captured: 115 in the Sharqiyah (an average of 6.76 sandflies/night/trap) and 245 in the Dhofar (an average of 20 sandflies/night/trap) (Table 3). The species caught belonged to the genera Phlebotomus, Sergentomyia and Grassomyia.

Comments on sandfly species
The terminology used in this paper is that recently proposed [24].

Phlebotomus (Phlebotomus) papatasi (Scopoli, 1786)
The male is characterised by two spines at the end of the surstyle, by a group of more than ten large setae at the distal part of the gonocoxite, and by the upper branch of the paramere being longer than the others and covered with setae along its full length. The ascoids are relatively short and never reach the next articulation. The female is identified by its annulated spermathecae with sessile head wrapped in a cloud. Its pharyngeal armature presents teeth with, at the anterior part, many comb-like ones. Ascoids never reach the next articulation. The distribution of this major vector of L. major [34] is very large, from Bangladesh to Morocco and from Crimea to Sudan. Limited to the north of the Sahara in West Africa, the species extends further south in East Africa. In Oman, the species is absent from the Dhofar but dominates in the Sharqiyah. It was previously recorded in the Wahiba sands of the Sultanate [36].

Phlebotomus (Phlebotomus) bergeroti Parrot, 1934
The male is characterised by two spines at the end of the surstyle. The gonocoxite has a subapical tuft of no more than ten setae. The upper branch of the paramere is slightly longer than the others and covered with bristles in its distal half only. The female is identified by its annulated spermathecae with sessile head, by a pharynx armed with teeth without spines or denticles on the posterior part, and by the presence of anterior bilateral teeth. Ascoids reach or exceed the next articulation. The distribution of Ph. bergeroti is wide, from Morocco to Iran; its southern limit is Sudan. In Oman, it is a scarce species in the Sharqiyah and the semi-arid zone of the Dhofar, but it becomes abundant in the perarid part of the latter region. The strong anthropophily of Ph. bergeroti and its abundance in certain perarid areas [2,44] are arguments in favour of a significant vector role for L. major in the extreme deserts of Africa and the Arabian peninsula.

Phlebotomus (Phlebotomus) duboscqi Neveu-Lemaire, 1906
The male is characterised by four to seven spines at the end of the surstyle. The gonocoxite has about ten setae in its distal half, and usually two in its proximal part. The upper branch of the paramere is slightly shorter than the others and is covered with setae throughout its length. The female is identified by its annulated spermathecae with sessile head. Its pharyngeal armature exhibits neither comb-like nor lateral teeth. Ascoids sometimes reach the next articulation. Its distribution runs south of the Sahara to the Equator in Africa, and extends into the Arabian Peninsula. In Oman, the species is absent from the Dhofar and seems rare in the Sharqiyah, where we captured only two specimens. In some foci, this species constitutes a good alternative to Ph.
papatasi for the transmission of L. major [15].

Phlebotomus (Paraphlebotomus) sergenti Parrot, 1917
The male has, like all Paraphlebotomus, a basal lobe on the gonocoxite, and its gonostyle carries four spines. It is nevertheless easily identified by the curved shape of this basal lobe and the brush of setae it carries, by its hooked parameral sheath, and by its globular gonostyle. The female of Ph. sergenti is very difficult to separate from that of Ph. saevus. In Ph. sergenti, the well-developed pharyngeal armature contains strong elongated teeth, which are less numerous than in Ph. saevus. The geographical distribution of Ph. sergenti is very wide: from the Canary Islands to India and from Ukraine to Kenya. However, the diagnosis is delicate with respect to an affine species, Ph. similis, whose distribution area, initially thought to be limited to the north-east of the Mediterranean basin [17], is in fact greater, with a large area of sympatry in the Middle East [47]. In Oman, we identified Ph. sergenti in small numbers (10 males), always in wild sites (cavities and rocky chaos), in the Sharqiyah (Wadi Mouqal, at altitudes ranging from 550 to 600 m) and in the Dhofar (Wadi Herwouib, altitude 600 m). This species had already been recorded in Oman. Ph. sergenti is the most important proven vector of L. tropica [4,28].

(Table 2. Captures made using sticky traps in locations mentioned in Figures 1 and 2. ♂ = males, ♀ = females, ♂ + ♀ = males and females, s/m² = density of sandflies per m² of sticky paper.)

Phlebotomus (Paraphlebotomus) saevus Parrot & Martin, 1939
The male of Ph. saevus has a straight, non-hooked parameral sheath and a large basal lobe of the gonocoxite, with a weakly dilated distal portion carrying many long and slightly curved setae. The female of Ph. saevus is difficult to distinguish from that of Ph. sergenti: its pharyngeal armature is well developed and contains more teeth than that of Ph. sergenti (Fig. 4). Ph. saevus has a distribution including East Africa and Arabia. This is its first record in Oman. We caught Ph. saevus only in the Dhofar (Djebel Qara), at the Dh3 capture site, an isolated farm where a female patient with leishmaniasis caused by L. tropica (LCO 4) lived. Ph. saevus is a suspected vector of L. tropica in households where Ph. sergenti is absent, as in Kenya [45] or Yemen [14].

Phlebotomus (Paraphlebotomus) alexandri Sinton, 1928
The male and female are easily identifiable thanks to their short first flagellomere (= AIII). Moreover, in the male, there is a short basal lobe of the gonocoxite, with a spherical head bearing radiating, generally rectilinear setae. The apical spine of the style is inserted on a long process, far from the subapical spine. The female exhibits a pharyngeal armature of rectangular overall appearance without anterior extension, consisting of strongly chitinised, spiniform scales forming a thick network. Ph. alexandri occupies a vast geographical area: from Morocco to Mongolia and down to Sudan. In Oman, Ph. alexandri is a fairly abundant Phlebotomus, especially in the Dhofar, where it is mainly found in the desert zone with Boswellia (incense tree), and more particularly at the bottom of the wadi (Herwouib). With the exception of the isolation of L. donovani in China [27], the role of Ph. alexandri as a vector is still under discussion. Its low abundance in the prospected areas of the Sharqiyah does not yet support a potential role in the transmission of L. donovani.
Grassomyia dreyfussi (Parrot, 1933)
This species is distinguished from the other Omani species by the absence, in both sexes, of ascoids on the first flagellomere, a characteristic of the genus Grassomyia. Moreover, both male and female harbour strong spines on each femur (pro-, meso- and metafemur). The female is recognised by the very characteristic pattern of her spermathecae, shaped like an opium-poppy capsule. The distribution of Gr. dreyfussi extends from Morocco to Iran and down to Kenya. It has recently been recorded in the Arabian Peninsula [20], so its record in Oman is not surprising. In the Sultanate, we captured very rare specimens in the Sharqiyah and the Dhofar. Its role in the transmission of a Leishmania has never been mentioned.

Sergentomyia (Sergentomyia) fallax (Parrot, 1921)
The male genitalia have a long and narrow gonostyle with a non-deciduous seta implanted very distally. The female has a large pharynx with a well-developed armature consisting of monomorphic teeth forming a heart-shaped pattern. The cibarium is armed with 15-23 pointed teeth, equal or sub-equal, arranged in an arch. The sclerotised area (= pigment patch) is oval. The distribution of Se. fallax is wide: it extends from the Canary Islands and Morocco to Pakistan, covers the Arabian Peninsula, and remains north of the Sahara. In Oman, Se. fallax is abundant in the Dhofar, while it is rather rare in the Sharqiyah. The role of this species in the transmission of a Leishmania has never been mentioned, although its vicariant Se. dubia is a possible vector of L. infantum in Senegal [68].

Se. cincta is mainly distributed throughout eastern Africa, but has also been reported in West Africa [1,67,69] and was recently found in Cameroon [69]. Taking the number of cibarial teeth as a valid specific character, we identified the female specimens from Oman as Se. cincta, together with the associated males, pending revision of this group with molecular tools to check whether Se. cincta is distinct from Se. antennata. This is the first record of Se. cincta in the Arabian Peninsula. In Oman, we recorded it mostly in the Sharqiyah, and a few specimens in the Dhofar.

Se. (Sintonius) christophersi (Sinton, 1927)
As a member of the subgenus Sintonius, the male exhibits a pointed parameral sheath, whereas the female has annulated spermathecae, a common character in the genus Phlebotomus but an original one in the genus Sergentomyia, shared only by the members of the subgenus Trouilletomyia [55]. The identification of the male is based on a few teeth (3-7) in the cibarial armature and the existence of a row of a few vertical teeth. Similarly, the female exhibits a few cibarial teeth (2-5) and a few anterior vertical teeth (4-6) arranged along a line. The distribution area of Se. christophersi is wide, from Morocco to India, including Cameroon [69] and the Arabian Peninsula [20,42]. In Oman, we found Se. christophersi both in the Sharqiyah and in the Dhofar.

Se. (Sin.) clydei (Sinton, 1928)
As a member of the subgenus Sintonius, the male exhibits a pointed parameral sheath, whereas the female has annulated spermathecae. The identification of the male is based on the presence of 16-35 small cibarial teeth. The female exhibits a row of 10-15 cibarial teeth and a row of vertical teeth in variable number (from 4 to about 20) [19]. The distribution of Se. clydei is wide and was recently revised [19]: from Senegal to Afghanistan, through the Arabian Peninsula and the Seychelles.
In Oman, we recorded a limited number of specimens, more in the Sharqiyah than in the Dhofar. Se. clydei feeds on humans as well as on reptiles [1,68], but no Leishmania vectorial role has been demonstrated for this species.

The male of Se. tiberiadis is identified by one row of 10-15 cibarial teeth, the median ones smaller than the lateral ones, and 6-10 anterior vertical teeth; the female exhibits one row of 10-20 cibarial teeth, the median ones smaller than the lateral ones, and two anterior rows of vertical teeth. Se. tiberiadis is a species of the Middle East, including the Arabian Peninsula. It has never been involved in the transmission of Leishmania. It was previously recorded in the Wahiba sands [36].

The male of Se. africana shows a palisade-like cibarial armature of 20-35 teeth. Se. africana is a member of a species complex, the Africana group, which requires revision by molecular tools, as some identifications refer to the group rather than to the species sensu stricto. Its distribution area is wide, including Africa and the Middle East, as well as the Arabian Peninsula. This species has never been reported to be involved in the transmission of Leishmania.

In Se. baghdadis, the male can be identified thanks to its cibarium with an angle-shaped notch and 14-16 teeth; identification is also easy thanks to the deep notch on the cibarium, which exhibits about 30 teeth (Fig. 4). Its distribution is limited to a zone ranging from Iraq to India. The record in Oman is the first in the Arabian Peninsula. We recorded it only in the Sharqiyah, not in the Dhofar. It has never been suspected of transmitting human Leishmania.

Leishmaniases and Leishmania species identification

Skin tests
A total of 213 students (114 boys and 99 girls) from the school of Kafaïfa (Ibra province) underwent skin tests. Among them, 17 (8%) were positive, exhibiting an erythematous and pruritic papule of 1 cm or more in diameter. Girls were more often positive (13.3%) than boys (10.1%). The low positivity rate seems to be due to a weak interaction between the young population and the Leishmania parasite in this focus (20 km from Ibra). It could be related to a low density of sandfly vectors or to their weak anthropophily.

Human CL case reports, sampling and cultures
Four suspected human CL cases were observed, and 3 Leishmania strains were cultured and identified. Case LCO 1: female, 17 years old, from the Muscat area (Sharqiyah region); lesion on the right leg, beginning two years before. The CL diagnosis was confirmed by the presence of amastigotes in the lesion on microscopical examination.

Treatment

Animal leishmaniases
No Leishmania strain was isolated from canids.

Discussion
The results presented in this work were collected in 1998 and had not been published until now; no data related to the Sultanate of Oman have been published since this field work [5]. Although dating back more than 20 years, these data remain particularly interesting for this poorly explored region as regards leishmaniases and Phlebotomine sandflies. Some data related to sandflies in Oman exist in the literature: in the 1980s, a work focusing mainly on the sandflies of Saudi Arabia [40] reported a batch of 14 sandflies collected around Muscat, including Ph. alexandri, Se. fallax and Se. tiberiadis, as well as atypical specimens possibly belonging to new taxa close to Se. christophersi and Se. schwetzi.
Our sampling showed the presence of three species of Paraphlebotomus, whereas the subgenera Larroussius, Synphlebotomus and Euphlebotomus, confirmed or putative vectors of the L. donovani complex, were absent; the hypothesis of their role will be discussed later. The detail of captures by province highlights the differences between the Sharqiyah and the Dhofar (Tables 2 and 3). Thus, Ph. papatasi is observed mainly in the Sharqiyah, while Ph. bergeroti dominates in the Dhofar. A similar contrast is observed for Ph. saevus and Ph. alexandri, species well represented in the Dhofar and rare or absent in the Sharqiyah. The CDC miniature light traps positioned in the villages yielded little or nothing in areas where a strong anti-malaria organisation regularly applies long-lasting insecticides. Thus, in several villages of the Sharqiyah (Batem, Hedna, Rawda) where leishmaniases have been detected, the CDC traps remained systematically empty, in contrast to the sticky traps positioned in wild sites a few kilometres away. The study carried out in the Dhofar confirmed this observation.

Comments on Leishmania species identification

Leishmania (Leishmania) tropica (Wright, 1903)
The finding and isolation of L. tropica in Oman is not surprising. This parasite, suspected of being responsible for CL [64], was recently isolated in Muscat from a Pakistani resident [65]. Moreover, several countries in the Middle East (Yemen, Saudi Arabia, Jordan, Syria, Iraq, Lebanon, Iran, Pakistan) and East Africa (Ethiopia, Kenya, Sudan) are known foci of L. tropica CL [8,13,53]. In neighbouring Yemen, there is an increasing trend of L. tropica CL [31-33]. Our results show that the strain isolated in the present study is related to Middle Eastern strains, confirming the high polymorphism of the species L. tropica, and describe a new variant zymodeme, ROM102 (ZMON-39 cluster). The isolation of L. tropica from a young girl who had never left the Dhofar region (case LCO 4, strain IBM-105) suggests a hypothetical vectorial role of Ph. saevus, the only Paraphlebotomus well represented on the site, with 56 specimens caught (Tables 2 and 3), and the only member of the genus Phlebotomus there except for one male of Ph. bergeroti.

Leishmania (Leishmania) donovani (Laveran & Mesnil, 1903)
The record of L. donovani ZMON-31 in Oman leads us to briefly discuss the taxonomic status and geographical distribution of this zymodeme and closely related zymodemes. Starting in the 1980s, enzymatic taxonomy studies led to the Linnean taxa L. donovani and L. infantum being considered as two distinct phenetic groups. Cladistic analysis confirmed these results by showing their monophyly [37,48,60]. In fact, these phylogenetic groups (or complexes) possessed a series of synapomorphic states, such as G6PD 100, G6PD 102, G6PD 105, GPI 86, GPI 100, GOT1 100, GOT2 100, GOT1 113, and GOT2 113. Some of these states were common to both branches, others present in only one of them. This was the case for PGM 100, present in the donovani-infantum set (complex synapomorphy), for GOT1 100 and GOT2 100, present only in the subset L. infantum, and for GOT1 113 and GOT2 113, present only in the subset L. donovani (specific synapomorphies). The taxonomic status of the donovani-infantum group changed at the end of the 1980s. Enzymatic analysis of human Leishmania strains isolated in Sudan [49,60] and from the vector Ph. orientalis [11] showed an original zymodeme (MON-82, L. archibaldi) characterised by a heterozygous structure for GOT1 (100/113) and GOT2 (100/113), and highlighted the complexity of the systematics of L.
donovani s. l. [54]. Moreover, hybrids could develop differently in sandfly vectors [66]. Regarding the clinical aspect, even though L. donovani s. st. is poorly studied for tissue tropism, the complex L. donovani-L. infantum appears to cause both CL and VL. We therefore consider L. donovani s. st. as the most probable agent of VL in Oman. However, L. infantum s. st. would be in second place if its presence were confirmed, as it is in Yemen (Taëz), a country where L. donovani, L. infantum and L. tropica are sympatric [57]. In Oman, the scarcity of dogs, domestic or feral, relative to the significant number of reported human cases seems more in agreement with the circulation of L. donovani than with that of L. infantum. Among the sandflies collected, the proven or candidate vectors of L. donovani were not recorded: Ph. (Eup.) argentipes [34], Ph. (Lar.) orientalis, Ph. (Syn.) celiae, Ph. (Syn.) martini, and Ph. (Ana.) rodhaini [3]. Consequently, the candidate vectors for L. donovani transmission in the Sharqiyah region remain unknown. For us, the only candidate could be Ph. alexandri, a species suspected in China [27] and in Cyprus [9,38], but the low densities of this species observed in Oman do not argue in favour of any role. The low positivity rate of the skin test (8%) reported in children of Ibra province suggests a weak interaction between the young population and the Leishmania parasite, consistent with a low density of sandfly vectors or their weak anthropophily. Consequently, in order to better understand the epidemiology of the leishmaniases in Oman, we encourage the isolation and typing of Leishmania strains in the future, because each Leishmania complex often corresponds to a specific vector and a particular parasitic cycle (anthroponosis/zoonosis). This was the problem previously raised when L. infantum was suspected to be the causative agent of VL in Oman [30], whereas L. donovani s. l. was later incriminated as the agent of VL [63].
2020-12-02T14:11:32.127Z
2020-11-27T00:00:00.000
{ "year": 2020, "sha1": "c0a872b5f3e11aa68e7d7dd20f5e5670ee04ba54", "oa_license": "CCBY", "oa_url": "https://www.parasite-journal.org/articles/parasite/pdf/2020/01/parasite200118.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ae33bff3398e4fa1f62a7727f9107f123957d8d", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
4359082
pes2o/s2orc
v3-fos-license
Maintenance inhaler preference, attribute importance, and satisfaction in prescribing physicians and patients with asthma, COPD, or asthma–COPD overlap syndrome consulting for routine care

Background: In respiratory disorders, patient- and physician-perceived satisfaction with the maintenance inhaler device is an important factor driving treatment compliance and outcomes. We examine inhaler preferences in asthma and COPD from patient and physician perspectives, particularly focusing on the relative importance of individual device attributes and the patient characteristics guiding inhaler choice.
Materials and methods: Real-world data from >7,300 patients with asthma, COPD, or asthma–COPD overlap syndrome (ACOS) consulting for routine care were derived from respiratory Disease Specific Programs conducted in Europe, the USA, Japan, and China. Outcome variables included the current pattern of inhaled maintenance therapy and device type, physician preference, patient-reported device attribute importance, and satisfaction.
Results: The most commonly prescribed inhalers for maintenance therapy of asthma, COPD, and ACOS were dry powder inhalers (62.8%–88.5% of patients) and pressurized metered dose inhalers (18.9%–35.3% of patients). One-third of physicians stated no preference for maintenance device when prescribing treatment, and less than one-third of patients reported being "extremely satisfied" with any attribute of their device. Instructions being "simple and easy to follow" was the inhaler attribute most commonly selected as important. For approximately one-third of patients across all groups, "ease of use/suitability of inhaler device" was a reason for the prescribing decision, as stated by the physician. Device characteristics were more likely to impact the prescribing decision in older patients (in asthma and COPD; P<0.01) and those with worse disease severity (in COPD; P<0.001).
Conclusion: A relatively high proportion of physicians had no preference for inhaler type across asthma, COPD, and ACOS. Simplicity of use was the most important inhaler attribute from a patient's perspective. Physicians appeared to place most importance on ease of use and device suitability when selecting inhalers for older patients and those with more severe disease, particularly in COPD.

Introduction
Inhaled medications form the mainstay of maintenance treatment in patients with asthma and COPD, both in terms of symptom control and exacerbation risk reduction. 1,2 For patients with asthma, regular treatment with a low dose of inhaled corticosteroid (ICS), with or without a bronchodilator (long-acting β2-agonist [LABA] and/or long-acting muscarinic antagonist [LAMA]), has been shown to reduce symptoms and exacerbations, improve lung function, and enhance quality of life. 1 Similar benefits are observed following inhaled maintenance treatment for COPD, although the magnitude of the effect on lung function may not be as large as for asthma. 2 Typically, the choice of therapeutic agent and delivery platform falls to the prescribing physician. An ever-increasing number of inhaled delivery options currently exists for respiratory medicines, including variations of pressurized metered dose inhalers (pMDIs), dry powder inhalers (DPIs), breath-actuated MDIs (baMDIs), and a soft mist inhaler (SMI).
The choice of inhaler has an important bearing on the outcome of any treatment regimen, given that poor inhaler technique has been associated with suboptimal drug delivery, increased adverse events and, consequently, poor adherence and/or impaired disease control. 1,3 It is, therefore, important to tailor the selection of the inhaler device to the individual patient, taking into account their needs, functional ability, and the complexity of the medication regimen. 1,2 A study investigating inhaler preference, acceptability, and usability of different inhalers, including a single-dose inhaler and multi-dose inhalers, found that patients were quicker to learn how to correctly use multi-dose devices, which required fewer maneuvers prior to actuation. 4 Availability and the cost of the inhaler device will also factor into any treatment decision. 3,5 Certain patient characteristics can guide the choice of inhaler device. For example, in COPD, the selection of a device that is not breath-actuated (eg, pMDI, SMI) may be preferable to a DPI in patients with poor lung function parameters (ie, forced expiratory volume in 1 second [FEV1] and forced vital capacity) and a consequently reduced ability to inhale efficiently, 6,7 particularly elderly patients. 6 Similarly, some groups of patients may be more suited to certain devices than others. 2,8 For example, elderly patients with arthritis, muscle weakness, or impaired vision may encounter difficulties with large or bulky inhalers, or may be confused by complex medication regimens requiring multiple devices. 9 Young children tend to show a preference for medium-high resistance inhalers that are easy to handle and have an oblong mouthpiece. 10 It is the responsibility of the physician to ensure that patients are competent in the use of their inhaler and that they understand the importance of good inhaler technique. 3,8 Provision of training in correct inhaler usage can greatly improve inhalation technique in patients. 11 Additionally, provision of training to physicians themselves can help to improve inhalation technique in their patients. 12 Patient-perceived satisfaction with the maintenance inhaler device is an important factor in driving treatment compliance in COPD, with inhaler satisfaction closely linked to improved health status. 13 Indeed, health care professionals cite patient satisfaction as one of the most important attributes of an inhaler. 14 It follows that the identification of specific inhaler features of particular importance to the patient could help to improve adherence and, ultimately, disease control. 3 Previous studies have identified durability, ergonomics, and ease of use as important inhaler characteristics. 4,10,13 In this cross-sectional analysis of data derived from the respiratory Disease Specific Program (DSP), we examine inhaler preferences in asthma and COPD, from both a patient's and a physician's perspective, with a particular focus on the relative importance of individual device attributes and the patient characteristics guiding inhaler choice.

Materials and methods
The respiratory DSP is a cross-sectional survey of patients with asthma, COPD, or asthma-COPD overlap syndrome (ACOS) consulting for routine care, conducted in the USA, Europe (France, Germany, Italy, Spain, UK), Japan, and China. It is designed to provide impartial observations of real-world clinical practice from a physician's and a matched patient's perspective, with a view to improving standards of care. 15
Quantitative and qualitative patient and physician data together provide an accurate snapshot of the perception of a particular disease within a real-world setting, without preselection of patients. The survey can be viewed as four discrete stages. In Stage A, primary care and specialist physicians are screened and recruited with a view to obtaining nationally representative samples. This is followed by individual face-to-face interviews with the physician (Stage B). Stage C is the prospective completion of patient record forms by the physician for the next five consecutive patients with asthma and the next five patients with COPD (including ACOS). Finally, in Stage D, patients fill out a self-completion record, with no influence or input from a health care professional. 15 The respiratory DSPs were conducted in the fourth quarter of 2013 in France, Germany, Italy, Spain, UK, and the USA; the fourth quarter of 2012 in Japan (patient record form data only); and the fourth quarter of 2010 in China (specialists only).

Study populations

Patients and physicians
The patient population comprised three groups: asthma-only (patients >12 years of age with a physician-confirmed diagnosis of asthma), COPD-only (patients >40 years of age with confirmed airflow obstruction and a diagnosis of COPD that included emphysema and chronic bronchitis), and patients with ACOS, who had a dual diagnosis of asthma and COPD. All patients were currently prescribed at least one inhaler for maintenance therapy (ICS, LAMA, ICS/LABA, LABA/LAMA, or LABA) at the time of enrollment. Physicians were eligible for participation in the study if they had become medically qualified within the last 5-35 years and were responsible for the treatment of both patients with asthma and patients with COPD, with the exception of allergists, who treated only patients with asthma.

Variables
Outcome variables were recorded directly by the patient or physician, or were derived from the physician- or patient-completed record form (which included the COPD Assessment Test and the modified Medical Research Council breathlessness scale). Descriptive variables included age, gender, time since diagnosis, physician-perceived severity of respiratory disease, comorbidities, most recent FEV1 (% predicted), and current treatment/device type. Patient preferences were measured according to perceived satisfaction with, and importance of, individual inhaler attributes. Physicians also provided information pertaining to their specialty, maintenance inhaler type preference, and inhaler prescribing practice.

Patient satisfaction and perceived importance of individual inhaler attributes
Patients indicated the three most important attributes of an inhaler from a predefined list of 12 attributes (detailed in Figure 1). For each of the 12 inhaler-specific attributes, patients were then asked to rate their satisfaction on a scale of 1 (not at all satisfied) to 5 (extremely satisfied).

Statistical analyses
The use of the various inhaler devices (pMDI, baMDI, DPI, and SMI) was described in terms of the proportion of patients receiving each inhaler type. Physician-reported preference for inhaler device was described for the entire population, and was also stratified according to physician specialty (primary care physician [PCP], pulmonologist, or allergist [asthma-only]). Device attribute importance and satisfaction were determined using a five-point Likert scale.
Disease groups were stratified according to whether or not "ease of use or suitability of inhaler device" was considered by the physician to be a reason for the prescribing decision, and a univariate test (Mann-Whitney) was used to determine whether the difference between the "yes" and "no" groups was statistically significant. All analyses were performed using Stata 13.1 (StataCorp LP, College Station, TX, USA; 2013). The DSP was conducted as a survey adhering to market research guidelines and codes of conduct according to the International Chamber of Commerce/European Society for Opinion and Marketing Research international code on observational research. Therefore, ethics approval was not required and was not sought. Patients provided informed consent to participate in the survey via a tick box on the front of the patient self-completion questionnaire.

Results

Study population
The 1,205 participating physicians included 449 PCPs, 646 pulmonologists, and 110 allergists. Of the total patient sample from the respiratory DSP survey with data collected within the observation period, 7,305 patients were eligible for inclusion in the study population (asthma, n=3,736; COPD, n=3,326; ACOS, n=243). Most of these patients suffered from comorbidities; no comorbidities were reported for 26.4%, 19.3%, and 11.4% of patients with asthma, COPD, and ACOS, respectively. The most frequently reported comorbidities overall were hypertension, allergic rhinitis, elevated cholesterol, anxiety, gastroesophageal reflux disease, and diabetes (Table S1). The mean (95% CI) numbers of concomitant daily medications prescribed were 1.6 (1.5, 1.7), 3.4 (3.3, 3.5), and 4.3 (3.8, 4.7) for patients with asthma, COPD, and ACOS, respectively.

Current pattern of inhaled maintenance therapy
Globally, a DPI was the most commonly used inhaler for maintenance therapy, prescribed to 62.8%, 88.5%, and 84.0% of patients with asthma, COPD, and ACOS, respectively (Table 1). DPIs were also the most commonly prescribed inhaler type in each of the countries included in the survey (Table S2). MDIs were the next most commonly used maintenance device; among patients with asthma, approximately one-third (35.3%) of prescribed devices were MDIs, with slightly lower proportions among patients with COPD and ACOS (18.9% and 27.2%, respectively; Table 1). The proportions of patients using an SMI or baMDIs were much lower across all groups (Table 1).

Physician device preferences
Roughly one-third of physicians stated no preference for maintenance device when prescribing treatment, regardless of respiratory condition (Figure 2). Similarly, the device preferences of pulmonologists and PCPs were consistent across all three respiratory conditions. In asthma management, the preferences of allergists were more closely aligned with those of PCPs than with those of pulmonologists (Figure 2).

Patient device satisfaction
From the patients' perspective, "instructions are simple and easy to follow" was the inhaler attribute considered the most important across indications (Table 2 and Table S3). While the relative importance of device attributes differed, the robustness of the inhaler, its ease of handling, and its transportability were consistently rated as important (Table 2). Less than one-third of patients reported being extremely satisfied with any single attribute of their device (Figure 1).
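As an aside on the methods: the stratified "yes"/"no" comparison described under Statistical analyses can be sketched in a few lines of Python. The age values below are invented for illustration only, and scipy's mannwhitneyu is assumed to be available:

```python
from scipy.stats import mannwhitneyu

# Hypothetical patient ages, stratified by whether the physician cited
# "ease of use/suitability of inhaler device" as a prescribing reason.
age_reason_yes = [71, 68, 74, 80, 66, 77, 69]
age_reason_no = [58, 63, 55, 61, 67, 59, 64]

# Two-sided Mann-Whitney U test between the "yes" and "no" groups.
stat, p = mannwhitneyu(age_reason_yes, age_reason_no, alternative="two-sided")
print(f"U = {stat}, P = {p:.4f}")  # a small P suggests the groups differ in age
```

The same comparison, run per disease group on age, time since diagnosis, or symptom scores, is what yields the P values reported below.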
Across indications, the highest relative dissatisfaction was reported for the attributes "inhaler locking when empty", "being able to reuse the inhaler for more than one month", and "feedback on whether the dose has been inhaled correctly" (see Figure S1). Other, secondary areas of dissatisfaction included irritation in the mouth and throat, inability to see how many doses are left, and having to breathe in while simultaneously pressing the inhaler, or having to breathe hard to inhale the medicine (see Figure S1). The proportions of patients satisfied or dissatisfied with specific attributes were generally consistent across indications, with the exception of the need to load the inhaler before use, for which satisfaction rates were lowest for patients with COPD (Figures 1 and S1).

Physician's prescribing decision
From the perspective of the physician, "ease of use/suitability of inhaler device" formed the basis of the prescribing decision for approximately one-third of patients across indications. Increases in patient age (in asthma and COPD; P<0.01) and disease severity (in COPD; P<0.001) played a significant role in the consideration of device type when making a prescribing decision (Table 3). The time elapsed since diagnosis also had a significant effect on the likelihood of prescribing an inhaler for these criteria (P<0.0001 and P<0.05 for asthma and COPD, respectively). Patients with asthma or COPD had been diagnosed for significantly longer where the physician selected "ease of use/suitability of inhaler device" as a reason for the prescribing choice (Table 3). For patients with COPD, a higher impact of symptoms (assessed using the COPD Assessment Test and the modified Medical Research Council dyspnea scale) was also associated with "ease of use/suitability of inhaler device" forming part of the prescribing decision (P<0.05; Table 3). However, post-bronchodilator FEV1 was not associated with the physician's decision to prescribe a device based on ease of use/suitability (P=0.406). "Ease of use/suitability of inhaler device" was associated with being a reason for prescribing in both asthma and COPD for patients with one or more comorbidities (P<0.0001 in each case; see Table S4). Conversely, for patients with no comorbidities, "ease of use/suitability of inhaler device" was associated with "not" being a reason for prescribing in asthma and COPD (asthma P<0.0001; COPD P<0.005; see Table S4). Regarding whether the current device type was associated with the likelihood of a physician selecting "ease of use/suitability of inhaler device" as a reason for the prescribing choice in patients with COPD, no significant difference was found except for the SMI (P=0.1278, P=0.5214, and P=0.0672 for the MDI, baMDI, and DPI, respectively; SMI P=0.0202; see Table S5).

Discussion
This study investigated the prescribing patterns for inhaled maintenance medications among patients with asthma and/or COPD, with particular emphasis on the impact of device type, patient and clinical characteristics, and patient and physician preferences on the choice of inhaler device. The most commonly prescribed maintenance inhaler device type was a DPI, a trend most pronounced for patients with COPD. This may have been due, in part, to the wide global availability of DPI maintenance products compared with products for MDIs. The leading LAMA, tiotropium, has, for example, been primarily available in a single-dose DPI to date.
The comparatively low numbers of SMIs and baMDIs currently prescribed to patients reflect the fact that they are less widely available and, in the case of an SMI, not approved in all markets. At first glance, these observations appear inconsistent with findings from a retrospective evaluation of inhaler sales in Europe between 2002 and 2008, using data from the IMS sales database, in which pMDIs accounted for 47.5% of the total sales (DPIs and nebulizers taking 39.5% and 13% of the market, respectively). 16 However, the retrospective evaluation included short-acting β-adrenergic and short-acting antimuscarinic bronchodilators in the analysis, which are almost exclusively prescribed in an MDI, whereas this study excluded the inhaler type for short-acting bronchodilators. While the pMDI was the most frequently prescribed inhaler for bronchodilators, sales of DPIs and pMDIs were similar for ICSs. 16 DPI sales were higher in the case of inhalers with a combined long-acting β-agonist and corticosteroid. 16 Overall, the high variability in inhaler prescription between European countries was ascribed not only to differences in health policy, costs, and reimbursement, but also to prescriber and patient preference.

Physician preferences and prescribing decisions
In our study, around two-thirds of physicians stated a preference for inhaler type. Given that physicians were unlikely to prescribe the same medication in two different types of inhaler, it could be that inhaler choice was reflective of underlying prescribing habits rather than a conscious, evidence-based decision. Within specialties, pulmonologists demonstrated some preference for DPIs across all indications compared with other inhaler types, while PCPs and allergists did not establish a consensus on a preferred device. These differences may reflect the type of patient most often encountered in clinical practice, with people seeking, or being referred to specialist care from, a pulmonologist tending to be more severely affected by their disease. Pulmonologists may prescribe more combination products, which are more readily available in DPIs. Despite the increasing number of inhalers available, one-third of physicians stated no current preference for the type of inhaler they prescribe. Given that medication and inhaler are so often inherently linked, it can be difficult to determine whether the inhaler type truly forms an integral part of the prescribing decision. Examination of physician-reported feedback revealed that the inhaler type influenced prescribing choice for around one-third of patients across indications. The association between increasing age and disease severity (in COPD only) and the inhaler attribute "ease of use/suitability" is perhaps reflective of physicians' awareness of the unique physical challenges faced by these patients. Older patients are more likely to have comorbidities, as they are more prone to arthritis and general muscle weakness, as well as deficits in vision and cognition, which may limit the viability of certain inhaler types. 1 Here, an inverse association was observed between comorbidities and "ease of use/suitability", meaning that "ease of use/suitability" was less likely to influence prescribing decisions in patients with no comorbidities than in those with comorbidities.
The link between lung function parameters (FEV1 and forced vital capacity) and a patient's inhalation capacity means that inhalation efficiency is likely to be impaired in severe cases of COPD in which lung function is more heavily compromised. 7 Suboptimal peak inspiratory flow rates have been reported in ~20% of patients with advanced COPD >60 years of age using a DPI. 6 If the physician is cognizant of this association, they may be more likely to prescribe a pMDI or an SMI, devices that are not reliant on flow rate for optimal delivery of medication. Younger children are also more likely to have reduced inspiratory flow than adults, so may not be able to actuate DPIs. 10 This may, in part, explain the greater prescribing levels for pMDIs versus DPIs in patients with asthma (includes patients >12 years), compared with patients with COPD (includes patients >40 years). The absence of an association between asthma severity and prescribing based on the ease of use of inhaler device is perhaps reflective of the limited availability of medications in a range of delivery devices. Our finding that the ease of use/suitability as a reason for choosing a particular inhaler is linked to time since diagnosis suggests that physicians are prioritizing these device attributes for those patients who have been diagnosed for longer. If a patient is comfortable using an inhaler and is experiencing clinical benefit, they may be reluctant to switch to alternative devices that are less familiar. Non-consensual switching of inhaler device can result in patient discontent, reduced confidence in the medication, and uncertainty regarding the degree of disease control. 5 Furthermore, when a patient is nonconsensually switched to a new device, they may be more likely to show poor inhalation technique, unless adequate training is provided. 17 Physicians are more likely to prioritize other device attributes in more recently diagnosed patients, who will not yet have settled into a therapeutic routine and may be more open to change or experimentation.

Patient preference and satisfaction
From the perspective of the patient, the simplicity of the inhaler operating instructions was the attribute of greatest importance across all indications. This finding is consistent with an online survey of patients with COPD and health care professionals, in which ease of use was cited as an important attribute by patients and physicians alike, 14 and it highlights the importance of providing patients with simple and easy-to-follow instructions for their device. Studies comparing single-dose and multi-dose DPIs found that patients were more satisfied with, and preferred, multi-dose devices compared to single-dose devices. [18][19][20] In our study, patient prioritization of other inhaler attributes differed according to respiratory condition, with patients with asthma ranking convenience highly (portability and minimal preparation of dose), in contrast to patients with COPD, who favored robustness of the inhaler and reliability and reproducibility of dose. Differences between inhaler attribute preferences of patients with COPD and patients with asthma were also noted in a study of 294 patients, wherein those with asthma most valued fewer dose preparation steps, while patients with COPD most valued an inhaler that could be used during episodes of breathing difficulties. 21
There was a broad range in the level of satisfaction that patients reported with their inhaler device, with each attribute associated with high and low levels of satisfaction. Typically, patients reported low satisfaction with the feedback mechanism on their inhaler that indicated correct inhalation of medication, and reported irritation of the mouth as an adverse event. Both of these characteristics can be indicative of suboptimal inhaler technique and may suggest that the patient has not been adequately trained in the correct use of the inhaler, 1 emphasizing the importance of training patients to improve their inhalation technique. 11 It is not clear if patients were dissatisfied with how the feedback mechanism indicated whether the inhalation was correct or not, or if patients were dissatisfied with its lack of availability on all devices. Although it has been reported that patients would prefer an inhaler that provided feedback on their performance after use, 22 the feedback mechanism was not ranked as one of the five most important attributes in any patient group in this study. From the published literature, key drivers of inhaler device preference include ergonomic design, mouthpiece fit, dose counter visibility, and ease of interpretation of the dose counter. 23 Lack of a dose counter on an inhaler was an issue raised by patients with asthma concerned about not having sufficient medication left in the device. 22 These features link closely to the grievances reported in our study, given that a device that patients find comfortable and easy to use would help to improve inhaler technique. It should be recognized that while patients may have preferences for certain inhaler types or functions, they may not have access to them due to reimbursement restrictions operating in their locality. This may lead to a poorer outcome for the patient if the inhaler does not match their preferences and they consequently do not achieve optimal use. Across patient populations, levels of satisfaction with individual inhaler attributes were broadly similar, with the exception of preloading of the inhaler prior to use. Patients with COPD showed the highest level of dissatisfaction for this feature, which may relate to the common use of single-dose DPIs in this patient group and ties in with the physician-reported preference for ease of use in patients with COPD who have advanced disease severity, as well as the fact that there are both single- and multi-dose DPIs available. If a patient is experiencing difficulties using their device, these are likely to be worsened by the need to reload the device before every use, as is required with single-dose DPIs. Published studies reveal higher rates of patient satisfaction with multi-dose over single-dose inhalers, with some, but not all, reporting lower rates of critical inhaler error when using multi-dose devices. [18][19][20] In particular, patients with asthma have been noted as valuing devices with a single dose preparation step compared to numerous steps. 21 However, analysis of inhaler technique in patients with COPD and asthma has shown high rates of inhaler error, regardless of whether a preferred inhaler was being used or not, 24 or whether the patient was satisfied with their inhaler. 22 This emphasizes the importance of correct instruction from the prescribing physician. Previous studies have shown that provision of training can improve competence in inhaler use in patients, with different types of training leading to different levels of improvement.
Patients who received training as part of a group were less likely to make errors, with 97% of these patients showing good inhalation technique 6 months after training had been delivered. 11 The importance of patient satisfaction with an inhaler device should not be underestimated. In COPD, patient satisfaction with inhaler characteristics is rated more highly than factors such as complexity of medication regimen and severity of symptoms, and is inextricably linked to overall medication adherence or compliance. 13 Patient compliance plays a key role in maximizing the efficacy of the medication regimen. Failing to understand the correct use of an inhaler represents a common form of unintentional noncompliance on the part of the patient, which can negatively impact disease control. 25 Patients need fewer attempts to learn how to correctly use device types requiring fewer maneuvers prior to actuation, suggesting that such devices are more likely to be used successfully and that such devices are usually preferred by patients. 4 The relationship between satisfaction with inhaler and clinical efficacy may be more complex than it initially appears, with a lack of evidence available for a correlation between these factors. 26 A systematic literature review of randomized controlled trials in COPD showed an apparent disconnect between patient satisfaction and improvement in clinical efficacy. 27 It therefore appears that an individualized approach to device selection should be utilized, with consideration of the patient's ability to effectively use an inhaler. 28 Further investigation within real-world treatment settings will help to more clearly delineate the role of the inhaler in treatment outcomes for respiratory conditions. Studies that show benefits to the patients, in addition to improving their satisfaction, will likely strengthen the appeal of new devices to both patients and physicians alike.

Study limitations
There are some potential limitations associated with this study. While "ease of use" or "suitability" of the inhaler covers a number of device attributes, it may not cover all the attributes a physician considers important. As such, the impact of the inhaler as part of the prescribing decision process may be higher than reported here. It is also important to note that the list of inhaler attributes is not a validated one, although it was developed by respiratory experts. Furthermore, patients could only rank attributes based on knowledge of inhaler types they currently or previously used, without experience of all available inhaler types. There is also a possibility that other attributes not assessed in the current study were of importance to patients and/or physicians. This study evaluated maintenance inhaler preference only, and a potential difference in preference between maintenance and reliever inhalers was not evaluated. This could be a further area of research, in order to understand whether there are any differences in inhaler preference characteristics between maintenance and reliever inhalers. Additionally, inhaler devices are an area of increasing promotion by pharmaceutical companies, with the introduction of several new devices since 2013.
As the DSPs were conducted between 2010 and 2013, physician preferences could potentially have developed further since the conclusion of our studies; however, this would only affect physician preference and not the other analyses of importance and satisfaction. Nevertheless, this study offers valuable insight into both physician- and patient-led preferences that could be used to inform the development of next-generation inhaler devices for respiratory disease.

Conclusion
A high proportion of physicians had no preference for the inhaler type, irrespective of the disease state, and when preferences were stated, there was no clear consensus on a particular device type. For patients, the most important attribute of an inhaler was that its instructions were easy and simple to follow. Physicians appeared to place most importance on ease of use and suitability of device type when selecting inhalers for older patients and those with more severe disease, particularly in COPD. Given that patients and physicians value ease of use and suitability of device to patient needs (both subjective measures), it is important that a variety of device types are available for all classes of maintenance therapy.

Data sharing statement
The data from this study are not available as open access, but the raw data files can be requested depending on the purpose for obtaining these files. For example, additional analysis using the data would not be permitted unless conducted by Adelphi Real World.
Debugging of Behavioural Models with CLEAR
This paper presents a tool for debugging behavioural models being analysed using model checking techniques. It consists of three parts: (i) one for annotating a behavioural model given a temporal formula, (ii) one for visualizing the erroneous part of the model with a specific focus on decision points that make the model correct or incorrect, and (iii) one for abstracting counterexamples, thus providing an explanation of the source of the bug.

Introduction
Model checking [2] is an established technique for automatically verifying that a behavioural model satisfies a given temporal property, which specifies some expected requirement of the system. In this work, we use Labelled Transition Systems (LTS) as behavioural models of concurrent programs. An LTS consists of states and labelled transitions connecting these states. An LTS can be produced from a higher-level specification of the system described with a process algebra, for instance. Temporal properties are usually divided into two main families: safety and liveness properties [2]. Both are supported in this work. If the LTS does not satisfy the property, the model checker returns a counterexample, which is a sequence of actions leading to a state where the property is not satisfied.

Understanding this counterexample for debugging the specification is a complicated task for several reasons: (i) the counterexample may consist of many actions; (ii) the debugging task is mostly achieved manually (satisfactory automatic debugging techniques do not yet exist); (iii) the counterexample does not explicitly point out the source of the bug that is hidden in the model; (iv) the most relevant actions are not highlighted in the counterexample; (v) the counterexample does not give a global view of the problem.

The CLEAR toolset (Fig. 1) aims at simplifying the debugging of concurrent systems whose specification compiles into a behavioural model. To do so, we propose a novel approach for improving the comprehension of counterexamples by highlighting some of the states in the counterexample that are of prime importance, because from those states the specification can reach a correct part of the model or an incorrect one. These states correspond to decisions or choices that are particularly interesting because they usually provide an explanation of the source of the bug. The first component of the CLEAR toolset computes these specific states from a given LTS (AUT format) and a temporal property (MCL logic [5]). Second, visualization techniques are provided in order to graphically observe the whole model and see how those states are distributed over that model. Third, explanations of the bug are built by abstracting away irrelevant parts of the counterexample, which results in a simplified counterexample. The CLEAR toolset has been developed mainly in Java and consists of more than 10K lines of code. All source files and several case studies are available online [1]. CLEAR has been applied to many examples, and the results turn out to be quite positive, as presented in an empirical evaluation which is also available online.
The rest of this paper is organised as follows. Section 2 overviews the LTS and property manipulations in order to compute annotated or tagged LTSs. Sections 3 and 4 present successively our techniques for visualizing tagged models and for abstracting counterexamples, with the final objective in both cases to simplify the debugging steps. Section 5 describes experiments we carried out for validating our approach on case studies. Section 6 concludes the paper.

Tagged LTSs
The first step of our approach is to identify the parts of the LTS corresponding to correct or incorrect behaviours. This is achieved using several algorithms that we define and that are presented in [3,4]. We use different techniques depending on the property family. As far as safety properties are concerned, we compute an LTS consisting of all counterexamples and compare it with the full LTS. As for liveness properties, for each state, we compute the set of prefixes and suffixes. Then, we use this information for tagging transitions as correct, incorrect or neutral in the full LTS. A correct transition leads to a behaviour that always satisfies the property, while an incorrect one leads to a behaviour that always violates the property. A neutral transition is common to correct and incorrect behaviours.

Once we have this information about transitions, we can identify specific states in the LTS where there is a choice that directly affects the compliance with the property. We call these states, together with the transitions incoming to/outgoing from those states, neighbourhoods. There are four kinds of neighbourhoods, which differ by their outgoing transitions (Fig. 2, from left to right): (1) with at least one correct transition (and no incorrect transition), (2) with at least one incorrect transition (and no correct transition), (3) with at least one correct transition and one incorrect transition, but no neutral transition, (4) with at least one correct transition, one incorrect transition and one neutral transition. The transitions contained in a neighbourhood of type (1) highlight a choice that can lead to behaviours that always satisfy the property. Note that neighbourhoods with only correct outgoing transitions are not possible, since they would not correspond to a problematic choice. Consequently, this type of neighbourhood always presents at least one outgoing neutral transition. The transitions contained in a neighbourhood of type (2), (3) or (4) highlight a choice that can lead to behaviours that always violate the property. It is worth noting that both the visualization and the counterexample abstraction techniques share the computation of the tagged LTS (correct/incorrect/neutral transitions) and of the neighbourhoods.
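As an illustration of these definitions, the sketch below classifies the states of an already-tagged LTS into the four neighbourhood types. This is a minimal Python sketch over assumed data shapes, not the CLEAR implementation (whose algorithms are given in [3,4]); following the note above, a type (1) neighbourhood is taken to require a neutral outgoing transition as well.

```python
# Minimal sketch: classify states of a tagged LTS into neighbourhood types.
# Assumed input shape (an illustration, not CLEAR's internals): a list of
# transitions (source, label, tag, target), tag in {"correct", "incorrect", "neutral"}.
from collections import defaultdict

def classify_neighbourhoods(transitions):
    outgoing = defaultdict(set)
    for src, _label, tag, _dst in transitions:
        outgoing[src].add(tag)

    kinds = {}
    for state, tags in outgoing.items():
        c = "correct" in tags
        i = "incorrect" in tags
        n = "neutral" in tags
        if c and not i and n:
            kinds[state] = 1  # correct choice available (neutral always present)
        elif i and not c:
            kinds[state] = 2  # at least one incorrect choice, no correct one
        elif c and i and not n:
            kinds[state] = 3  # correct and incorrect choices, no neutral
        elif c and i and n:
            kinds[state] = 4  # correct, incorrect and neutral choices
    return kinds  # states absent from the result are not neighbourhoods
```

A state whose outgoing transitions are all neutral is deliberately left out of the result: it offers no property-relevant choice.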
Visualization Techniques
The CLEAR visualizer provides support for visualizing the erroneous part of the LTS and emphasizes all the states (a.k.a. neighbourhoods) where a choice makes the specification either head to correct or incorrect behaviour. This visualization is very useful from a debugging perspective to get a global point of view, and not only to focus on a specific erroneous trace (that is, a counterexample).

More precisely, the CLEAR visualizer supports the visualization of tagged LTSs enriched with neighbourhoods. These techniques have been developed using Javascript, the AngularJS framework, the bootstrap CSS framework, and the 3D force graph library. These 3D visualization techniques make use of different colors to distinguish correct (green), incorrect (red) and neutral (black) transitions on the one hand, and all kinds of neighbourhoods (represented with different shades of yellow) on the other hand. The tool also provides several functionalities in order to explore tagged LTSs for debugging purposes, the main one being the step-by-step animation starting from the initial state or from any chosen state in the LTS. This animation keeps track of the already traversed states/transitions, and it is possible to move backward in that trace. Beyond visualizing the whole erroneous LTS, another functionality allows one to focus on one specific counterexample and rely on the animation features introduced beforehand for exploring the details of that counterexample (correct/incorrect transitions and neighbourhoods).

Figure 3 gives a screenshot of the CLEAR visualizer. The legend on the left hand side of this figure depicts the different elements and colors used in the LTS visualization. All functionalities appear in the bottom part. When the LTS is loaded, one can also load a counterexample. On the right hand side are the name of the file and the list of states/transitions of the current animation. Note that transition labels are not shown; they are only displayed through mouseover. This choice allows the tool to provide a clearer view of the LTS. From a methodological point of view, it is advised to use the CLEAR visualizer first during the debugging process, for taking a global look at the erroneous part of the LTS and possibly noticing interesting structures in that LTS that may guide the developer to specific kinds of bug. Step-by-step animation is also helpful for focusing on specific traces and for looking more carefully at some transitions and neighbourhoods on those traces. If the developer does not identify the bug using these visualization techniques, (s)he can make use of the CLEAR abstraction techniques presented in the next section.
Abstraction Techniques
In this section, once the LTS has been tagged using the algorithms overviewed in Sect. 2, the developer can use abstraction techniques that aim at simplifying a counterexample produced from the LTS and a given property. To do so, we make a joint analysis of the counterexample and of the LTS enriched with the neighbourhoods computed previously. This analysis can be used for obtaining different kinds of simplifications, such as: (i) an abstracted counterexample, which allows one to remove from a counterexample actions that do not belong to neighbourhoods (and thus represent noise); (ii) a shortest path to a neighbourhood, which retrieves the shortest sequence of actions that leads to a neighbourhood; (iii) improved versions of (i) and (ii), where the developer provides a pattern representing a sequence of non-contiguous actions, in order to allow the developer to focus on a specific part of the model; (iv) techniques focusing on a notion of distance to the bug in terms of neighbourhoods. For the sake of space, we focus on the abstracted counterexample in this paper.

Abstracted Counterexample. This technique takes as input an LTS where neighbourhoods have been identified and a counterexample. Then, it removes all the actions in the counterexample that do not represent incoming or outgoing transitions of neighbourhoods. Figure 4 shows an example of a counterexample where two neighbourhoods, highlighted on the right side, have been detected and allow us to identify the actions that are preserved in the abstracted counterexample.
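The abstraction step itself can be pictured in a few lines; this is an illustrative Python sketch over assumed data shapes, not the CLEAR code:

```python
# Minimal sketch of counterexample abstraction: keep only the actions that
# are incoming or outgoing transitions of a neighbourhood state.
def abstract_counterexample(steps, neighbourhood_states):
    """steps: the counterexample as a list of (source, action, target);
    neighbourhood_states: set of state identifiers flagged as neighbourhoods."""
    return [
        action
        for source, action, target in steps
        if source in neighbourhood_states or target in neighbourhood_states
    ]
```

Everything filtered out here is, in the terminology above, noise: actions that do not take part in any property-relevant choice.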
Experiments
We carried out experiments on about 100 examples. For each one, we used as input a process algebraic specification that was compiled into an LTS model, and a temporal property. As far as computation time is concerned, the time is quite low for small examples (a few seconds), while it tends to increase w.r.t. the size of the LTS when we deal with examples with hundreds of thousands of transitions and states (a few minutes). In this case, it is mainly due to the computation of tagged LTSs, which is quite costly because it is based on several graph traversals. Visualization techniques allowed us to identify several examples of typical bugs with their corresponding visual models. This showed that the visualizations exhibit specific structures that characterize the bug and are helpful for supporting the developer during his/her debugging tasks. As for abstraction techniques, we observed a clear gain in length (up to 90%) between the original counterexample and the abstracted one, which keeps only relevant actions using our approach and thus facilitates the debugging task for the developer.

We also carried out an empirical study to validate our approach. We asked 17 developers, with different degrees of expertise, to find bugs in two test cases by taking advantage of the abstracted counterexample techniques. The developers were divided into two groups, in order to evaluate both test cases with and without the abstracted counterexample. The developers were asked to discover the bug and measure the total time spent in debugging each test case. We measured the results in terms of time, comparing for both test cases the time spent with and without the abstracted counterexample. We observed a gain of about 25% of the total average time spent in finding the bug for the group using our approach. We finally asked the developers' opinion about the benefit given by our method in detecting the bug. Most of them agreed that our approach is helpful.

The CLEAR toolset is available online [1] jointly with several case studies and the detailed results of the empirical study.

Concluding Remarks
In this paper, we have presented the CLEAR toolset for simplifying the comprehension of erroneous behavioural specifications under validation using model checking techniques. To do so, we are able to detect the choices in the model (neighbourhoods) that may lead to a correct or incorrect behaviour, and generate a tagged LTS as a result. The CLEAR visualizer takes as input a tagged LTS and provides visualization techniques for the whole erroneous part of the model, as well as animation techniques that help the developer to navigate in the model for better understanding what is going on and hopefully detect the source of the bug. The counterexample abstraction techniques are finally helpful for building abstractions from counterexamples by keeping only relevant actions from a debugging perspective. The experiments we carried out show that our approach is useful in practice to help the designer in finding the source of the bug(s).
Streptococcus constellatus Brain Abscess in a Middle-Aged Man With an Undiagnosed Patent Foramen Ovale
Brain abscess is a rare diagnosis. Common sources of infection include direct spread from otic sources, sinuses, or oral cavities, and hematogenous spread from distant sources, including the heart and lungs. Brain abscess with cultures growing oral flora species may, in rare cases, develop from bacteria in the oral cavity entering the bloodstream and then traveling to the brain via a patent foramen ovale. This report highlights a case of brain abscess caused by Streptococcus constellatus in a middle-aged man with an undiagnosed patent foramen ovale.

Introduction
Brain abscess is a rare diagnosis, with a reported incidence of 0.3 to 1.3 per 100,000 people per year [1][2][3]. In nearly 40% of cases, the source of infection is unknown [4,5]. Infection etiologies are divided into primary sources and secondary sources. Primary sources include the direct introduction of bacteria from penetrating cranial injuries, facial trauma, or brain surgery. Secondary sources include infections with non-neural origins that spread via contiguous or hematogenous routes. Contiguous spread of infection may seed from otic sources, including otitis and mastoiditis; paranasal, frontal, or ethmoid sinuses; or oral sources, primarily dental infections. Hematogenous spread commonly originates from pulmonary sources, including lung abscesses, empyema, pulmonary arteriovenous malformations, or bronchopulmonary fistula [6]. Cardiac pathologies can also lead to the formation of a brain abscess, including cyanotic congenital heart defects in children, bacterial endocarditis, ventricular aneurysm, and thrombosis [6]. Brain abscess secondary to bacteremia characteristically shows multiple abscesses. Brain abscesses are typically caused by Streptococcus and Staphylococcus species, primarily Streptococcus viridans and Staphylococcus aureus [6]. Rare organisms that may be involved in brain abscess formation include the Streptococcus anginosus group (Streptococcus anginosus, Streptococcus constellatus, and Streptococcus intermedius). These organisms are known oral flora recognized for their tendency to form abscesses; however, they are rarely involved in the formation of brain abscesses. This report presents a rare case of brain abscess caused by Streptococcus constellatus, in the setting of septicemia, in a male patient in his 60s with an undiagnosed patent foramen ovale (PFO).

Case Presentation
A 63-year-old male presented to the emergency department (ED) with intermittent headaches for three months and new-onset fatigue and altered mental status one day after being diagnosed with pneumonia. His presenting vital signs were a temperature of 39.4°C, oxygen saturation of 94% on room air, heart rate of 125 beats per minute, and blood pressure of 119/75 mmHg. Physical examination was positive for tachycardia without murmurs, and neurologic findings included left superior quadrantanopia and left upper extremity pronator drift. Initial laboratory evaluation revealed leukocytosis of 25,100 cells/mm3 with left shift. Computed tomography (CT) of the brain performed in the ED was suggestive of a subacute infarct in the right inferior parietal lobe with a small subdural hematoma in the superior right parietal lobe. The patient was diagnosed with sepsis secondary to pneumonia and admitted to the hospital for treatment with empiric antibiotics, ceftriaxone and azithromycin, to cover his known pneumonia.
Shortly after admission, the patient had two focal tonic-clonic seizures, predominantly affecting the left upper extremity. The patient's treatment was changed to cover for meningitis with acyclovir, vancomycin, and ampicillin. Electroencephalogram (EEG) showed right frontotemporal-central focal impaired awareness seizures. Interictal findings included right temporal intermittent rhythmic delta activity (TIRDA) and right frontal lateralized periodic discharges (LPDs), suggesting an increased risk for focal seizures. Continuous right frontotemporal slowing indicated underlying structural or functional cerebral abnormalities. Lumbar puncture was significant for an elevated white blood cell count of 23 cells/mm3 (24% neutrophils) and elevated protein of 102.1 mg/dL, with negative Gram stain and culture on cerebrospinal fluid. Brain magnetic resonance imaging (MRI) showed multifocal ring-enhancing lesions in the right inferior parietal occipital region, with additional hemorrhagic ring-enhancing lesions noted diffusely throughout the brain, concerning for hemorrhagic metastases (Figure 1). Additional workup in the setting of these imaging findings included CT of the chest, abdomen, and pelvis, which showed no signs of metastatic or primary malignancy. However, there were partial cavitary opacities in the medial right lower lobe that resembled an inflammatory or infectious etiology, consistent with his pneumonia. A transthoracic echocardiogram (TTE) was performed to rule out emboli secondary to infective endocarditis; it showed no evidence of endocarditis. At this time, the neurosurgery team was consulted and proceeded with a craniotomy to obtain a brain biopsy. During the procedure, purulent material was observed, and cultures of the surgical specimen were sent for aerobic, anaerobic, and fungal organisms. Anaerobic cultures grew rare Streptococcus constellatus, confirming a diagnosis of brain abscess. While a diagnosis of brain abscess was confirmed, at this point the source of the infection was still unknown. Since Streptococcus constellatus is commonly found in oral flora and is an organism known to cause abscesses, direct extension from the oral cavity was suspected. The patient was noted to have poor dentition and a cracked right mandibular molar, increasing his risk for contiguous spread from the oral cavity; however, he exhibited no overt signs of dental infection, and a CT of the face showed no evidence of soft tissue infection. Without direct seeding from the oral cavity to the brain, it was suspected that the abscess may have developed from a paradoxical bacterial embolism through a PFO. A transesophageal echocardiography (TEE) bubble study was performed, confirming the presence of a PFO. The patient was successfully treated with intravenous ceftriaxone and metronidazole during his hospitalization, with a plan to complete treatment as an outpatient via a peripherally inserted central catheter (PICC) for a total of four weeks. Post-treatment imaging was not obtained, as the patient was unfortunately lost to follow-up.

Discussion
Even in the absence of infective endocarditis on echocardiogram, brain abscess should still be considered in the setting of new-onset encephalopathy and undiagnosed brain lesions on imaging with recent or concurrent bacteremia. Cardiac and vascular anomalies such as PFOs and arteriovenous malformations (AVMs) may create an opportunistic environment for the hematogenous spread of oral flora to the brain, resulting in brain abscess formation [5,[7][8][9].
To our knowledge, 19 such cases have been reported [7]. Sadahiro et al. highlighted seven patients with reported brain abscesses in the setting of a right-to-left shunt, including PFO (n = 6) and pulmonary arteriovenous shunt (n = 1), identified in the TEE bubble study [7]. Four patients were found to have Streptococcus intermedius bacteria, and cultures of three patients grew normal oral flora, all of whom had periodontal disease [7]. Due to the higher sensitivity and specificity of TEE compared to TTE in identifying small PFOs, physicians should perform a TEE bubble study in patients with brain abscesses caused by oral flora organisms [8,10]. Additionally, in the setting of recurrent brain abscess, closure of a PFO or AVM may be considered to decrease the risk of future abscesses. However, further research is needed to better understand the potential benefits and outcomes of PFO closure in these cases [8,11]. Recognized antibiotic treatment for brain abscesses includes empiric coverage of Streptococcus species and oral anaerobic species. In undifferentiated cases, additional empiric coverage for methicillin-resistant Staphylococcus aureus (MRSA) may be considered [12]. A six-to-eight-week antibiotic course is typically recommended for brain abscesses; however, studies have shown shorter courses may be sufficient following surgical intervention, as in this patient [3,13]. Among the few reported cases of Streptococcus constellatus brain abscess, routes of infectious spread included direct seeding from dental infection and hematogenous spread originating from infective endocarditis or paraspinal abscess [14][15][16][17]. This patient's case not only represents a rare route of infectious spread among cases of brain abscess but may also be the first reported case of Streptococcus constellatus brain abscess caused by hematogenous spread through a PFO. Given the association between brain abscess and oral cavity bacteria, it is important for physicians to educate patients about the value of oral hygiene and to improve access to dental care for all patients. These preventative measures may limit the risk of brain abscess formation [18].

Conclusions
While brain abscesses are uncommon, physicians should consider brain abscess as a differential diagnosis in patients with neurologic symptoms and suspicious imaging findings. Brain abscesses, in rare cases, may originate from oral flora organisms entering the circulation and spreading to the brain through a PFO, even in the absence of endocarditis. Patients with oral flora bacteria isolated from brain abscesses with an unknown source of infection should be assessed for PFO with a TEE bubble study.
Pollen Dispersal and Pollination Patterns Studies in Pati Kopyor Coconut using Molecular Markers
Parentage analysis has been used to evaluate pollen dispersal in Kopyor coconut (Cocos nucifera L.). Investigations were undertaken to elucidate (i) the dispersal of pollen, (ii) the rate of self- and cross-pollination, and (iii) the distance of pollen travel in the Pati kopyor coconut population. The findings of these activities should be beneficial to kopyor coconut farmers to increase their kopyor fruit harvest and to support breeding of this unique coconut mutant. As many as 84 progenies were harvested from 15 female parents. As many as 95 adult coconut palms surrounding the female parents were analyzed as the potential male parents for the progenies. The adult coconut palms were mapped according to their GPS positions. All samples were genotyped using six SSR and four SNAP marker loci. Parentage analysis was done using the CERVUS version 2.0 software. Results of the analysis indicated that the evaluated markers were effective for assigning candidate male parents to all evaluated seedlings. There was no specific direction of donated pollen movement from assigned donor parents to the female ones. The donated pollens could come from assigned male parents in any direction relative to the female parent positions. Cross-pollination occurred in as many as 82.1% of the progenies analyzed. Outcrossing among tall by tall (TxT), dwarf by dwarf (DxD), hybrid by hybrid (HxH), TxD, DxT, TxH, DxH, and HxD coconuts was observed. Self-pollination (TxT and DxD) occurred in as many as 17.9% of the progenies. The dwarf coconut was not always self-pollinated; the presence of DxD, TxD, and HxD outcrossing was also observed. The donated pollens could come from pollen donors in a range of 0-58 m from the evaluated female recipients. Therefore, in addition to the wind, insect pollinators may have played an important role in Kopyor coconut pollination.

Introduction
Kopyor coconuts are natural coconut mutants having abnormal endosperm and only exist in Indonesia. The endosperm is soft, crumbly, and detached from the shell, forming flakes filling up the shell (Maskromo et al. 2007; Novarianto et al. 2014). The Makapuno coconut grown in the Philippines and other Asian countries is another example of a coconut mutant exhibiting endosperm abnormality (Samonthe et al. 1989; Wattanayothin, 2010). This mutant has been used as a parent for hybridizations in coconut breeding (Wattanayothin, 2005). The Macapuno coconut exhibits a soft and jelly-like endosperm (Santos, 1999) that is phenotypically different from the Indonesian Kopyor coconut. The kopyor coconut mutant phenotype is genetically inherited from parents to their progenies (Sukendah 2009) and most probably is controlled by a single locus (the K locus) regulating the endosperm development of coconut. However, the identity of the regulatory locus has not yet been resolved. The abnormal endosperm phenotype in kopyor coconut is controlled by the recessive k allele; therefore, the genotype of a kopyor fruit would be homozygous kk for the zygotic embryo and homozygous kkk for the endosperm. On the other hand, the genotype of a normal fruit would be either homozygous KK or heterozygous Kk for the zygotic embryo and either homozygous KKK, heterozygous KKk, or heterozygous Kkk for the endosperm, respectively.
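As a worked illustration of this single-locus model (a sketch, assuming simple Mendelian segregation and that the two polar nuclei carry the same allele as the egg cell, since all three derive from one megaspore), the expected fraction of kopyor fruits on a heterozygous Kk palm pollinated by Kk pollen can be enumerated directly:

```python
# Expected kopyor fraction from a Kk x Kk cross under the single-locus model.
# Assumption: a kopyor (kkk) endosperm requires a k-carrying maternal gamete
# (the polar nuclei share the egg's allele) fertilized by k pollen.
from fractions import Fraction

p_k_maternal = Fraction(1, 2)  # a Kk palm transmits k half the time
p_k_pollen = Fraction(1, 2)    # a Kk pollen source likewise

p_kopyor = p_k_maternal * p_k_pollen
print(p_kopyor)  # 1/4 -> roughly 25% kopyor fruits expected per bunch
```

Under these assumptions, pollination of a heterozygous palm by pollen from a normal homozygous KK palm yields no kopyor fruits at all, which is why the identity of the pollen donor matters so much for kopyor yield.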
The origin of the Kopyor coconut mutant is not well documented; however, kopyor palms are currently found in a number of areas in Java and the southern part of Sumatera (Novarianto and Miftahorrachman 2000). The district of Pati, Central Java Province, is recognized as one of the Kopyor coconut production centers. Kopyor coconuts have existed in this region for generations, especially the dwarf type of Kopyor coconut. Although only in limited numbers, Kopyor Tall and Kopyor Hybrid coconut types also exist alongside the dwarf one. The tall and dwarf coconuts have different morphological characters and pollination strategies. Tall coconuts are generally outcrossing, since the male flowers mature earlier than their female counterparts in the same inflorescence. Dwarf coconut tends to self-pollinate because of an overlapping maturation period between male and female flowers (Deb Mandal and Shyamapada 2011). Pollination in coconut is most probably assisted by insect pollinators or by the wind (Ramirez et al. 2004). The families Diptera, Coleoptera, and Hymenoptera are reported as effective pollinators of coconut (Ramirez et al. 2004). The distance of pollen transfer between male and female parents may be used to predict the type of pollinator assisting pollination in coconut. Such questions may be answered by studying pollen dispersal. Evaluating pollen dispersal in various plant species usually uses an approach based on parent-progeny genotypes (Austerlitz et al. 2004). Evaluations have been done in pines (Schuster and Mitoon 2000), Dinizia excelsa, Fabaceae (Dick et al. 2003), Quercus garryana, Fagaceae (Marsico et al. 2009), and teak (Prabha et al. 2011). The availability of molecular markers capable of identifying the genotypes of parents and their progenies should assist pollen dispersal studies. Using such markers, it should also be possible to estimate the self-pollination and outcrossing rates in a certain population (Milleron et al. 2012). To our understanding, pollen dispersal has not previously been evaluated in coconuts. With the development of kopyor coconut in Indonesia, the availability of information associated with pollen dispersal should be beneficial, considering the recessive nature of the kopyor character. Such a coconut pollen dispersal evaluation requires the availability of coconut progeny arrays and polymorphic molecular marker loci of the coconut genome. Co-dominant markers, such as SSR and SNAP markers for coconut, have been developed and routinely evaluated at the PMB Lab, Department of Agronomy and Horticulture, Faculty of Agriculture, Bogor Agricultural University (IPB), Bogor, Indonesia for a number of plant species. These include coconut, cacao (Kurniasih 2012), and nutmeg, Myristica sp. (Soenarsih 2012). Moreover, gene-specific SNAP markers have also been developed and used successfully in coconut (Sudarsono et al. 2014). SSR markers have successfully been used in gene flow analysis of pines (Lian et al. 2001; Burczyk and Koralewski 2005). SNAP markers have also been reported as effective co-dominant markers for plant analysis (Morin et al. 2004; Sutanto et al. 2013), proven to generate better data quality for the majority of samples in plant genetic studies (Brumfield et al. 2003) and population structure analysis (Herrera et al. 2007). The objectives of this research were to evaluate (i) the dispersal of pollen, (ii) the rate of self- and cross-pollination, and (iii) the distance of pollen travel in the Pati kopyor coconut population.
The findings of these activities should be beneficial to kopyor coconut farmers to increase their kopyor fruit yield and to support breeding and cultivar development of this unique mutant.

Time and Location of Research
This research was conducted during the period of July 2012 up to January 2014. The field activities were at the Kopyor coconut plantation belonging to local farmers at Sambiroto, Pati District, Central Java, Indonesia. The research site was at the following GPS location: S 6 32.182 E 11 03.354. The soil in the evaluated Kopyor coconut plantation is sandy. The laboratory activities were done at the Plant Molecular Biology Laboratory (PMB Lab), Department of Agronomy and Horticulture, Faculty of Agriculture, Bogor Agricultural University, Bogor, Indonesia.

Selection of Parents and Progeny Arrays
There were 164 adult coconut trees in the field research site, consisting of a mixture of both kopyor heterozygous Kk and normal homozygous KK coconut trees. Only 95 out of the 164 adult coconut trees, in one block of 100x100 m2, were sampled in this evaluation. Based on coconut type, the sampled population consisted of 68 dwarf, 14 tall, and 13 hybrid coconuts. Moreover, based on their phenotype, they were recognized as 22 normal homozygous KK and 73 kopyor heterozygous Kk coconuts. A map of the existing coconuts in the research site was generated using the GPS positions of all individuals. Six dwarf, seven tall, and two hybrid coconuts among the kopyor heterozygous Kk trees were selected as female parents. They were selected using purposive random sampling to represent different sites in the sampled population. A single fruit bunch from each female parent, containing 2-10 fruits/bunch, was harvested 10-11 months after pollination. The harvested fruits were collected and identified as either kopyor or normal fruits. The identified normal fruits were germinated, and DNA was isolated from leaf tissue of the germinated seedlings (63 seedlings of normal fruits). Kopyor fruits are not able to germinate naturally, since this character is lethal. Zygotic embryos were therefore isolated from the identified kopyor fruits, and DNA was isolated directly from the whole zygotic embryo tissues (21 zygotic embryos). Among the 84 DNA samples, 26 samples were from tall, 45 from dwarf, and 13 from hybrid female parents.

Genotyping of Parents and Progenies
DNA isolation was conducted using the CTAB method (Rohde et al. 1995). Either young coconut leaf or zygotic embryo tissue (0.3-0.4 g) was homogenized in 2 ml of lysis buffer containing 0.007 g PVP and 10 μl 2-mercaptoethanol. The homogenized tissues were then incubated in a 65°C waterbath for 60 minutes, and the mixtures were centrifuged at 11000 rpm for 10 minutes using an Eppendorf 5416 centrifuge. The supernatant was then transferred to an Eppendorf tube, and an equal volume of chloroform:isoamyl-alcohol (24:1) was added. The mixtures were mixed well and centrifuged at 11000 rpm for 10 minutes, and the supernatant was transferred into a new microtube. Cold isopropanol (0.8 volume of supernatant) and sodium acetate (0.1 volume of supernatant) were added to the supernatant. After overnight incubation, the mixture was centrifuged at 11000 rpm for 10 minutes and the DNA pellet was retained. The DNA pellet was washed using 500 μl of cold 70% ethanol, centrifuged, and air dried before being diluted in 100 μl aquabidest. RNA contaminants were removed using RNase treatment following standard procedures (Sambrook and Russel 2001). SSR markers at 37 loci (Lebrun et al.
2001) were evaluated for their polymorphism, and six polymorphic loci were selected. In addition, four SNAP marker loci, developed based on nucleotide sequence variability of the SUS and WRKY genes, were also used to genotype all of the parents and progeny arrays. To generate markers, PCR amplifications were conducted using the following reaction mixtures: 2 µl of DNA, 0.625 µl of primers, 6.25 µl PCR mix (KAPA Biosystem), and 3 µl ddH2O. Amplifications were conducted using the following steps: one cycle of pre-amplification at 95°C for 3 minutes; 35 cycles of amplification at 95°C for 15 seconds (template denaturation), annealing temperature for 15 seconds (primer annealing), and 72°C for 5 seconds (primer extension); and one cycle of final extension at 72°C for 10 minutes, as suggested by the KAPA Biosystem kit. The generated SSR markers were separated using vertical 6% polyacrylamide gel electrophoresis (PAGE) with SB 1x buffer (Brody and Kern 2004) and stained using silver staining. The silver staining was done following the methods developed by Creste et al. (2001). Electrophoregrams were visualized over the light table and used to determine the genotypes of the evaluated samples. The generated SNAP markers were separated using 1% agarose gel electrophoresis with TBE 1x buffer and stained using standard DNA staining procedures (Sambrook and Russel 2001). The electrophoregrams were visualized over the UV transilluminator and recorded using a digital camera. The recorded pictures were used to determine the genotypes of the evaluated samples.

Identification of the Candidate Male Parents
Each sample of the progeny arrays has a known female parent but an unknown pollen donor (the male parent). The candidate male parents could be any one of the sampled adult population, including the female parents. Studies were conducted to determine the assigned male parent donating pollen to generate each fruit in the progeny arrays. Identification of the assigned male parent was done by analyzing the genotype of a progeny and its respective female parent versus the genotypes of all adult trees in the selected samples. The ID of the potential male parent for any progeny was determined based on the results of parentage analysis. A simulation was conducted to determine the threshold levels for the 80% (relaxed) and 95% (strict) confidence intervals before the final parentage analysis step. Parentage analysis using the genotypes of progenies, female parents, and potential male parents was done using the CERVUS version 2.0 software (Marshall et al. 1998). The most-likely-parent approach (the potential male parent with the highest LOD score), based on the matching genotypes of progeny, female parent, and potential male parent, was used as the basis for assigning a certain adult individual as the potential male parent or pollen donor of a progeny. The progeny and female parent genotypes were compared with those of the other adult trees, and the assigned male parent was selected based on the output of the CERVUS version 2.0 analysis (Marshall et al. 1998).

Pattern of Pollen Dispersal
The locations of the female and the assigned male parents were plotted on the map of adult individuals generated by the Garmin MapSource GPS mapping software version 76C5x. The distance between the known female parent and the assigned male parent was calculated using the same software. The distances and positions of both female and male parents in the generated map were then used to illustrate the pattern of pollen dispersal in the location.
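The distances themselves were computed in the mapping software; an equivalent computation from decimal-degree GPS coordinates can be sketched with the standard haversine formula (a spherical-Earth approximation, more than adequate at the sub-100 m distances involved here):

```python
# Sketch: great-circle distance between two palms from GPS coordinates
# (decimal degrees). Equivalent in spirit to the MapSource measurement;
# assumes a spherical Earth, which is fine at distances of a few tens of m.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def palm_distance_m(lat1, lon1, lat2, lon2):
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# A result of 0 m (the female and the assigned male are the same palm)
# marks a self-pollination event.
```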
Self-pollination was recorded if the assigned male parent was the same palm as the female parent; otherwise, the event was assigned as outcrossing. The outcrossing events were further grouped as outcrossing between tall coconuts (a tall parent pollinated another tall), dwarf coconuts (a dwarf parent pollinated another dwarf), or hybrid coconuts (a hybrid parent pollinated another hybrid); outcrossing between dwarf and either tall or hybrid coconuts (either a tall or a hybrid parent pollinated by a dwarf); and outcrossing between tall and hybrid coconuts (a tall parent pollinated a hybrid), or vice versa. The numbers of both self-pollination and the respective cross-pollination events were calculated.

The Parents and Progeny Arrays
A map of the existing coconut palms in the research site is presented in Fig. 1. As indicated, the sampled coconut population consists of a mixture of normal homozygous KK and kopyor heterozygous Kk individuals and a mixture of dwarf, tall, and hybrid coconuts. All of these adult trees were treated as potential male parents capable of donating pollen to and pollinating the selected female parents and generating the evaluated progeny arrays. The positions of the selected female parents (6 dwarf, 7 tall, and 2 hybrid kopyor heterozygous Kk coconuts) are indicated in Fig. 1. The harvested progenies from the selected female parents ranged from 2-10 progenies per female parent. Out of 84 selected progenies, 21 were kopyor nuts and 63 were normal ones. They were harvested from tall (26 progenies), dwarf (45 progenies), and hybrid (13 progenies) female parents, respectively.

Genotyping of Parents and Progeny Arrays
The selected SSR and SNAP marker loci generated polymorphic markers in the evaluated coconut population. Examples of the polymorphic markers generated by the selected SSR (CnCir_56 locus) and SNAP (SUS 1_3 locus) primer pairs are presented in Figs. 2 and 3. In Fig. 2, the evaluated individuals are either homozygous cc (sample #1), homozygous bb (samples #7-10), heterozygous bc (samples #2-6 and 11), or heterozygous ab (sample #12) for the CnCir_56 SSR locus. On the other hand, four of the evaluated individuals (samples #1, 3, 4, 6) are heterozygous for the reference and alternate SNAP alleles and the other two (samples #2 and 5) are homozygous for the reference allele (Fig. 3). All individuals were genotyped using the same approaches. A summary of the genotyping results for a total of 179 individuals using six SSR and four SNAP marker loci is presented in Table 1. The marker loci generated a range of 2-4 alleles per locus (Table 1). The mean number of alleles per locus was 3.4, and the mean PIC for all marker loci was 0.47. The polymorphic information content (PIC) for the SSR marker loci ranges from 0.31-0.68, while that of the SNAP markers ranges from 0.28-0.37 (Table 1). The PIC value represents a measure of polymorphism between genotypes at a locus using information on the allele numbers (Sajib et al. 2012). The total exclusionary power using the ten marker loci is either 0.85 (first parent) or 0.97 (second parent), indicating the SSR and SNAP markers should be informative enough for analyzing the evaluated coconut population. (Note to Table 1: T = tall coconut, D = dwarf coconut, and H = hybrid coconut.)

Identification of the Candidate Male Parents
Results of the simulation analysis, using 10,000 iterations, 95 candidate male parents, and the known female parent for each progeny, predicted the rate of success in identifying male parents to be 32% at the 95% (strict) confidence level and 62% at the 80% (relaxed) confidence level.
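The assignment rule applied in the next step can be sketched as follows. LOD scores and the simulated thresholds are taken as given from CERVUS; its actual confidence criterion, based on simulated score distributions, is richer than this illustration.

```python
# Sketch of the most-likely-parent rule: for one progeny, pick the candidate
# male with the highest LOD score and report which confidence band it clears.
# Thresholds are assumed to come from the CERVUS simulation step above.
def assign_male_parent(candidate_lods, strict_threshold, relaxed_threshold):
    """candidate_lods: dict mapping candidate male id -> LOD score."""
    best_id, best_lod = max(candidate_lods.items(), key=lambda item: item[1])
    if best_lod >= strict_threshold:
        band = "95% (strict)"
    elif best_lod >= relaxed_threshold:
        band = "80% (relaxed)"
    elif best_lod > 0:
        band = "below 80% (LOD > 0, plausible parent)"
    else:
        band = "unassigned"
    return best_id, best_lod, band
```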
Parentage analysis was able to resolve the identity of the male parent for every individual in the 84 progeny arrays using the most-likely-parent approach. Moreover, the results indicated that the assignments of the predicted male parents were at a minimum of 95% confidence for 20% (17 individuals) of the progenies and at a minimum of 80% confidence for 43% (36 individuals). The assignments of the male parents for the other 57% (48 individuals) of the progenies were at a confidence level below 80%. Although the confidence level was below 80%, the male parent assignments for these progenies showed LOD (likelihood of odds) values higher than 0. A positive LOD value indicates that the suspected male parent might be the true parent. According to Marshall et al. (1998), the higher the LOD value, the higher the possibility for the assigned male parent to be the actual parent.

Cross pollination is the pollination of a female flower by pollen from a different parent, and it produces half-sib progenies. The tall, dwarf, and hybrid coconuts could reciprocally donate their pollen. Based on the assigned male parents of the 84 progeny arrays, cross pollination occurred in as many as 69 events (82.1%). Among those identified as outcrossing, 4 events were cross pollination between tall x tall (TxT), 16 tall by dwarf (TxD), and 4 tall by hybrid (TxH) parents. Moreover, outcrossing among DxD (15 events), DxT (6 events), DxH (11 events), HxH (2 events) and HxD coconuts (11 events) was also observed. The complete scheme and pollination types identified based on the results of pollen dispersal analysis are presented in Table 2.

The general understanding is that, because of the open flower morphology and the differences in flower maturation, tall coconut is probably always cross pollinated (Ramirez et al. 2004; Maskromo et al. 2011). However, our data indicated at least 2.38% self pollination among the tall coconuts (Table 2). Self pollination is characterized by the pollination of a female flower by pollen of the same parent, and it produces full-sib progenies. Self pollination was observed in as many as 15 events (17.9%) in the evaluated progeny arrays (Table 2). These consist of two self pollination events in the tall kopyor coconuts (2.38%) and 13 self pollination events in the dwarf kopyor ones (15.48%). Based on the 13 progeny arrays harvested from the hybrid parents, no self pollination in the hybrid coconut was recorded (Table 2).

The general understanding also states that, because of the overlapping period between male and female flower maturation, dwarf coconut is always self pollinated (Maskromo et al. 2011). However, our data indicated the dwarf coconut is not always self pollinated. Contrary to this basic understanding, our data indicated the presence of substantial dwarf to dwarf (15 events, 17.86%), dwarf to tall (6 events, 7.14%) and dwarf to hybrid (11 events, 13.1%) outcrossing (Table 2). Findings by Rajesh et al. (2008) previously indicated that cross pollination did occur in dwarf coconuts. The availability of new tools, such as molecular markers, for analyzing outcrossing rates may change the previous understanding. Such changes have been shown in Hymenaea courbaril, which was previously reported as mostly cross pollinated because of self incompatibility (Dunphy et al. 2004); more recent pollen dispersal studies indicated that H. courbaril is more self pollinated (Carneiro et al. 2011).
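The percentages quoted above can be verified with a short tally of the event counts cited from Table 2; this is only a consistency check of the arithmetic, not part of the original analysis.

```python
# Pollination event counts as reported in the text (from Table 2); 84 progenies total
selfing = {"tall": 2, "dwarf": 13, "hybrid": 0}
outcrossing = {"TxT": 4, "TxD": 16, "TxH": 4,
               "DxD": 15, "DxT": 6, "DxH": 11,
               "HxH": 2, "HxD": 11}

total = 84
n_self = sum(selfing.values())      # 15 events
n_out = sum(outcrossing.values())   # 69 events
assert n_self + n_out == total

print(f"outcrossing: {n_out} events = {100 * n_out / total:.1f} %")    # 82.1 %
print(f"selfing:     {n_self} events = {100 * n_self / total:.1f} %")  # 17.9 %
print(f"tall selfing rate: {100 * selfing['tall'] / total:.2f} %")     # 2.38 %
```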
An alternative explanation for these findings is that they represent a special case at the evaluated site. In the study site, coconut palms were planted at high density. Moreover, a population of honey bees exists in the coconut plantation. Honey bees are known to roam around the male and female flowers and function as effective pollinators for coconuts. The high-density planting and the availability of pollinators may have caused the unexpected pollination patterns observed here.

One assigned male parent may donate one or more pollens to the evaluated female coconut parents, with a range of 1-5 pollens per assigned male parent. The number of assigned male parents donating certain numbers of pollen to the evaluated female parents is presented in Fig. 4. The data indicate that most of the assigned male parents contributed only one pollen to the evaluated female parents. Only three assigned male parents (two dwarf and one hybrid coconuts) donated 4 or 5 pollens to the surrounding female parents. The same female parent may receive donated pollens from different numbers of assigned male parents, with a range of 1-7 assigned male parents donating pollen to the same female parent. The number of female parents receiving donated pollens from different numbers of assigned male parents is presented in Fig. 5. The data indicated that a single female parent most frequently received pollens from 2, 4 or 5 different assigned male parents. Only three female parents evaluated in this experiment (two dwarf and one hybrid coconuts) were found receiving pollens from at least 6 assigned male parents (Fig. 5).

Pattern of Pollen Dispersal
The distances between the female and the assigned male parents were determined based on their GPS positions. The distance of pollen travel between assigned male and female parents, as measured in this evaluation, ranged from 0-58 m. The numbers of pollination events in each distance class from the assigned male to the female coconut parents are presented in Fig. 6. The assigned male parents are distributed almost evenly over the different distance classes from the female parents. A 0 m distance between parents indicates a self pollination event.

To evaluate the pattern of pollen dispersal from the assigned male parents to a female, the positions of the assigned male parents serving as pollen donors to one female parent were plotted on a map using their GPS positions. Representative samples of the assigned male parent positions relative to a single female recipient parent are presented in Figs. 7-11. As a female parent, Hybrid kopyor #059 (Fig. 7) received 6 donated pollens from six different assigned pollen donors. The pollen contributors to the progeny array harvested from the Hybrid kopyor #059 female parent were all kopyor heterozygous Kk coconuts. However, the seven progenies harvested from this female parent were all phenotypically normal, i.e., genetically either normal heterozygous Kk or homozygous KK. The positions of the assigned male parents relative to the female parent #059 in the study site are presented in Fig. 7.

Figure 7 (caption): Pattern of pollen movement to female parent #059 inferred from parentage analysis; the markers indicate the positions of the Dwarf kopyor and Hybrid kopyor assigned male (pollen donor) parents and of hybrid kopyor #59 as the female recipient, respectively.

The Dwarf kopyor #067 (Fig. 8) received 10 donated pollens from eight different assigned male parents. The assigned pollen contributors to the Dwarf kopyor #067 female parent were all kopyor heterozygous Kk coconuts.
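The distributions behind Figs. 4 and 5 follow directly from the list of (female parent, assigned male parent) pairs produced by the parentage analysis. A minimal counting sketch is shown below; the IDs and pairs are hypothetical, and the real input would be the CERVUS assignments for all 84 progenies.

```python
from collections import Counter

# Hypothetical (female_id, assigned_male_id) pairs, one per progeny;
# the real input would be the CERVUS parentage output for all 84 progenies.
assignments = [("F059", "M012"), ("F059", "M034"), ("F067", "M089"),
               ("F067", "M089"), ("F067", "M021"), ("F084", "M056")]

# Basis of Fig. 4: how many pollens each assigned male parent donated
pollens_per_male = Counter(male for _, male in assignments)

# Basis of Fig. 5: how many distinct donors each female parent received from
donors_per_female = {f: len({m for ff, m in assignments if ff == f})
                     for f in {f for f, _ in assignments}}

print(pollens_per_male)   # e.g. M089 donated 2 pollens
print(donors_per_female)  # e.g. F067 received pollen from 2 donors
```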
Only one out of the 10 progenies harvested from this female parent was phenotypically kopyor. The assigned male parent for the harvested kopyor fruit was the tall kopyor #089. The positions of the assigned male parents relative to the female parent #067 are presented in Fig. 8.

Dwarf kopyor #068 (Fig. 9) received 9 donated pollens from four assigned male parents. The progenies were the result of outcrossing with either hybrid (#59) or dwarf (#87 or #90) parents and of self pollination. The assigned pollen contributors to the Dwarf kopyor #068 were all kopyor heterozygous Kk coconuts. Three out of the 9 progenies harvested from Dwarf kopyor #068 were phenotypically kopyor. These kopyor fruits each received one donated pollen from either the hybrid kopyor #059, dwarf kopyor #68 or #87. The positions of the assigned male parents relative to the female parent #68 are presented in Fig. 9.

Dwarf kopyor #084 (Fig. 10) received 8 donated pollens from surrounding pollen donors. The pollen contributors to the Dwarf kopyor #084 female parent were all kopyor coconuts. Only two out of the 8 progenies harvested from Dwarf kopyor #084 were phenotypically kopyor. These two kopyor fruits received donated pollens from two assigned male parents, dwarf kopyor #056 and hybrid kopyor #057, each of which contributed one pollen to the evaluated progenies. Moreover, assigned male parent #32 was the most distant pollen contributor among the evaluated trees. The positions of the assigned male parents (pollen contributors) relative to the female parent #084 are presented in Fig. 10.

Dwarf kopyor #089 (Fig. 11) received 7 donated pollens from surrounding pollen donors. The pollen contributors to the Dwarf kopyor #089 female parent were all kopyor coconuts. None of the 7 progenies harvested from Dwarf kopyor #089 was phenotypically kopyor. The positions of the assigned male parents relative to the female parent #089 are presented in Fig. 11.

In the research location, the wind blows from left to right during the night and from right to left during the day. If wind were the major pollinator, there should be a specific pattern of pollen movement, and the pollen dispersal distances should be short, with fruit set close to the pollen donors. Our data do not support wind as the only major pollinator of kopyor coconut, since pollens dispersed in random directions and the assigned male parents were as far as 58 m from the evaluated female recipients. Our data also indicated that insect pollinators may play an important role in kopyor coconut pollination. Numbers of insects are associated with the inflorescences of kopyor coconuts. Such insects may aid pollination and promote cross pollination in kopyor coconuts, as happens in other plant species (Bown 1988). These findings, however, do not rule out a role of wind in kopyor coconut pollination, especially from closely spaced male pollen donors.

This may be the first report of using molecular markers to study pollen dispersal in coconut. The results of this study point to new findings about pollen dispersal and pollination, and about selfing and outcrossing rates among dwarf, hybrid, and tall coconuts. However, further research and evaluation are necessary to generalize the findings, since the present study is specific to the current study site.

Conclusion
The evaluated markers were effective for assigning candidate male parents to all evaluated seedlings.
There is no specific direction of donated pollen movement from the assigned donor parents to the female ones; the donated pollens could come from assigned male parents in any direction relative to the female parent positions. Based on the assigned male parents of the 84 progeny arrays, cross pollination occurred in as many as 69 events (82.1%), including tall by tall (TxT), dwarf by dwarf (DxD) and hybrid by hybrid (HxH) cross pollination events. Moreover, outcrossing among TxD, TxH, DxH and vice versa was also observed. This finding also indicated that the dwarf coconut is not always self pollinated: 17.86% DxD, 19.05% TxD and 13.10% HxD outcrossing was observed. In kopyor coconut, pollen could travel from pollen donors as far as 58 m from the evaluated female recipients. Therefore, insect pollinators may have played an important role in long-distance pollen dispersal in kopyor coconut.
Heterochromatin variation in chromosomes of Anopheles (Nyssorhynchus) darlingi Root and A. (N.) nuneztovari Gabaldón (Diptera: Culicidae)

Using the C-banding technique, variation in the marking of heterochromatic blocks was detected in the chromosomes of A. darlingi and A. nuneztovari from Manaus, Amazonas, and Macapá, Amapá, Brazil. The sex chromosomes of both species showed two forms of X chromosomes, and the Y chromosome was entirely heterochromatic. In the X1 chromosome of A. darlingi the marking extended over 1/3 of the chromosome, while in the X2 chromosome it was restricted to the centromeric region. In the autosomes of both species the markings were constant in the centromeric regions, and chromosome II of A. darlingi showed a heterochromatic block on one of the arms. A. nuneztovari showed size polymorphism for the X chromosome, the larger X (X1) having three heterochromatic blocks and the smaller (X2) two. Homozygous (X2X2) females were not detected in either locality. In A. darlingi males, both X1 and X2 chromosomes were found, whereas in A. nuneztovari males only the X1 chromosome was detected. Only intraspecific variation of heterochromatic blocks in the X chromosomes and autosomes was recorded in the two populations of both species studied in each locality.

Chromosomal studies of A. darlingi populations from Minas Gerais, Brazil, and of other South American species showed a karyotype of 2n = 6 (Schreiber and Guedes, 1959), as in other Anopheles species (Coluzzi, 1988). Rafael and Tadei (1998) reported an identical karyotype (2n = 6) for A. darlingi and A. nuneztovari populations from the Amazon region. C-banding analysis of mitotic chromosomes of Anopheles species from continental Asia (Baimai et al., 1995) revealed species complexes which included Anopheles dirus (Baimai, 1984; Hii, 1985) and Anopheles maculatus in the Neocellia series (Cellia) (Baimai et al., 1993). C-banding was reported to be useful for identifying sibling species based on differences in the morphology, quantity and distribution of heterochromatic blocks, principally in the X and Y chromosomes (Baimai et al., 1993). In spite of the epidemiological importance of A. darlingi and A. nuneztovari in the Amazon region, there are no data on C-banding of the metaphase chromosomes of these species. We studied the variation in heterochromatic block markings in metaphase chromosomes to determine the heterochromatic patterns in the Manaus and Macapá populations of these species.

MATERIAL AND METHODS
Two natural populations of A. darlingi were sampled, with 20 individuals from Manaus (3°08'S, 60°01'W), Amazonas State, and 14 from Macapá (0°02'S, 51°03'W), Amapá State. For A. nuneztovari, 17 individuals from Manaus and 11 from Macapá were analyzed. Slides were prepared from fourth instar larval brain ganglia, treated with a 0.005% colchicine-hypotonic solution, as described by Imai et al. (1988). The slides were washed with distilled water, air dried and stored at room temperature for 72 h. C-banding was done using the method of Sumner (1972), with a reduction in the barium exposure time (3 min). The best preparations were photographed using a phase-contrast microscope fitted with a green filter.

RESULTS
The C-banding patterns of 76 out of 103 A. darlingi metaphases from Manaus and 57 out of 74 from Macapá, as well as 63 A. nuneztovari metaphases out of 86 from Manaus and 46 out of 53 from Macapá, were photographed and analyzed. A. darlingi and A.
nuneztovari populations from both localities showed two types of X chromosomes (X1 and X2), which differed in the content and distribution of heterochromatic blocks (Figure 1). In A. darlingi from Manaus, the sex chromosomes had centromeric markings that extended to 1/3 of X1, while the Y chromosome was entirely heterochromatic (Figure 2). The X2 chromosomes of samples from Macapá (Figure 2B) showed fewer markings, which extended only to the centromeric region. These marking patterns were the same as those of A. darlingi from Manaus. Chromosomes given a longer barium exposure (4 min) were more discolored than the other preparations, although centromeric markings were still seen in autosomes II and III and in the X1X1 sex pairs (Figure 2C).

The C-banding pattern in the autosomes of the A. darlingi population from Macapá was the same as that of A. darlingi from Manaus (Figure 1). In these populations, chromosomes II and III had well-marked centromeric regions (Figure 2B and C). All of the II chromosomes had a band which extended from the centromere along half the length of one arm of the chromatid in each population (Figure 2B).

The variations in heterochromatic block markings in the X1, X2 and autosomal chromosomes of A. nuneztovari from Manaus were the same as those of A. nuneztovari from Macapá (Figure 1). The X1 chromosome (longer) consisted of three heterochromatic blocks (two telomeric and one centromeric), and the X2 chromosome (shorter) contained two heterochromatic blocks, one telomeric and the other centromeric (Figure 3A, B and C). The X2 chromosomes of female A. nuneztovari had two heterochromatic blocks (Figure 3). Centromeric heterochromatin markings of the autosomes were found in this species (Figure 3D and E).

X2X2 females of A. darlingi and A. nuneztovari were not found (Table I). X1 and X2 males were found in A. darlingi, while A. nuneztovari males had only the X1 chromosome.

DISCUSSION
C-banding studies of mitotic and meiotic chromosomes have provided important information on inter- and intraspecific population variation in Anopheles species, and the technique has proven to be an excellent tool for identifying species complexes (Baimai et al., 1993). In this study, the analysis of mitotic chromosomes of A. darlingi and A. nuneztovari described above revealed intraspecific variation in the quantity and distribution of heterochromatic blocks in the sex chromosomes and in the centromeric regions of the autosomes (Figure 1). Kitzmiller (1977) and Tadei (1985) suggested that in the genus Anopheles the X chromosome is more sensitive to rearrangements than the autosomes. Intraspecific variation in sex chromosomes through the acquisition of constitutive heterochromatin is a common phenomenon in Southeast Asian anophelines. Baimai et al. (1996) reported two types of X chromosomes with floating frequencies in natural populations of Anopheles willmori. The X chromosomes in Amazonian populations of A. darlingi and A. nuneztovari most likely have similar mechanisms of adaptation in order to survive in these populations.

The difference in size between the X1 and X2 chromosomes of A. nuneztovari may have resulted from the addition or loss of part of one of these chromosomes. The addition or loss of chromosomal heterochromatin has played an important role in chromosomal evolution in Anopheles species (Vasantha et al., 1982; Baimai et al., 1993, 1996). The X2 chromosome in Amazonian populations of A.
nuneztovari could have been derived from the presumed X1 through the loss of an extra heterochromatic block at the distal end of the chromosome arm. The heterochromatic blocks of A. darlingi and A. nuneztovari are similar to those of Anopheles (Kerteszia) cruzii, according to Ramírez (1989) and Ramírez and Dessen (1994, 1996). The inversions in the latter species probably arose from differences in the homolog chromosomes of the same specimen. However, the inversion polymorphism detected in A. darlingi (Kreutzer et al., 1972; Tadei et al., 1982; Tadei, 1985) and A. nuneztovari (Kitzmiller et al., 1973; Conn et al., 1993) does not necessarily mean that inversions alone positioned the heterochromatic blocks in the chromosomes of these species. Rather, these blocks may have originated from differences accumulated during evolution, as proposed by Gatti et al. (1982) to account for differences in the heterochromatic patterns of Anopheles gambiae and Anopheles arabiensis.

The C-banding analysis of the present A. darlingi and A. nuneztovari populations exhibited only intraspecific variation of the heterochromatic blocks in the X chromosomes and autosomes. The X chromosomes presented greater variation in the content and distribution of heterochromatic blocks than did the autosomes.

Figure 1 (caption): Diagrammatic comparison of metaphase karyotypes of Anopheles darlingi and Anopheles nuneztovari from Manaus and Macapá. Only one set of autosomes (II and III) is shown. Variable heterochromatic portions are indicated in black. Chromosomes and heterochromatic portions are shown as a percentage of the total length. c = centromeric region; sc = secondary constriction.

Table I (caption): X1 and X2 chromosomes in females and males of Anopheles darlingi and Anopheles nuneztovari populations from Manaus (MAO) and Macapá (MC).
Plasma-neutral gas interactions in various space environments: Assessment beyond simplified approximations as a Voyage 2050 theme

In this White Paper, submitted in response to the European Space Agency (ESA) Voyage 2050 Call, we present the importance of advancing our knowledge of plasma-neutral gas interactions, and of deepening our understanding of the partially ionized environments that are ubiquitous in the upper atmospheres of planets and moons, and elsewhere in space. In future space missions, the above task requires addressing the following fundamental questions: (A) How and by how much do plasma-neutral gas interactions influence the re-distribution of externally provided energy to the composing species? (B) How and by how much do plasma-neutral gas interactions contribute toward the growth of heavy complex molecules and biomolecules? Answering these questions is an absolute prerequisite for addressing the long-standing questions of atmospheric escape, the origin of biomolecules, and their role in the evolution of planets, moons, or comets under the influence of energy sources in the form of electromagnetic and corpuscular radiation, because the relevant low-energy ion-neutral cross-sections cannot be reproduced quantitatively in laboratories under conditions that simultaneously involve, in particular, (1) low temperatures, (2) tenuous media with strong gradients or layering, and (3) low gravity. Measurements with a minimum core instrument package (< 15 kg) can be used to perform such investigations in many different conditions and should be included in all deep-space missions. These investigations, if specific ranges of background parameters are considered, can also be pursued for Earth, Mars, and Venus.

Introduction
One of the fundamental questions regarding the Universe is how the different types of matter interact and shape stellar systems, planetary environments, and the specific environments that allow life forms to emerge. The small-scale limit of such interactions is between quarks and photons, and belongs to high-energy physics. The large-scale limit includes dark matter and dark energy, and belongs to cosmology. For the habitable part of the Universe, such as planetary and exoplanetary systems, the evolution is driven by interactions between visible (baryonic) matter through radiation, collisions, and collective forces such as electric and magnetic forces, in addition to gravity. For example, the lifetime of a comet is strongly affected by the solar radiation, the solar wind plasma interaction, and the tidal forces near perihelion. Chemical interactions start forming complicated molecules, including biomolecules, from a mixture of low-energy (< 1 keV) ions and neutrals (both in the gas phase and the condensed phase) that are exposed to strong external energy sources (such as cosmic rays and extreme ultraviolet radiation), particularly at low temperature, such as in the interstellar medium [1] and the upper atmospheres of planets and satellites. For example, the thermosphere and mesosphere of the Earth contain complicated molecules and even aerosols, such as ion-water cluster molecules [2-4] and noctilucent clouds [5, 6].
Considering that the habitable part of the Universe is composed of low-temperature ions and neutral species (T < 0.04 eV), and that heavy molecules are exposed to extremely low-temperature space plasma before being trapped by ice or dust in interstellar space, understanding the actual plasma-neutral gas interactions at low energy through in-situ observations is very important. If organic matter is formed in low-temperature plasma, a similar process involving plasma-neutral gas interactions might have taken place during the formation of the Solar System, which should have had an effect on comets in, e.g., the Oort Cloud. Looking back 4.6 billion years, the formation of the Solar System might have undergone a period when plasma-neutral gas interactions played a critical role, relevant not only for the formation of heavier materials but also for re-distributing (partitioning) the energy among all components, or even for degrading the material. On the other hand, plasma-neutral gas interactions in the thermosphere and ionosphere have substantially influenced planetary evolution through atmospheric escape [7-10].

However, our knowledge of the actual plasma-neutral gas interactions in tenuous space plasmas with some neutral particle content (either gaseous or in the form of icy grains) is still incomplete. The chemical pathways to forming heavy (organic) molecules are only partially understood. This is partly because low-energy ion-neutral and electron-neutral interactions in low-temperature plasma vary across different environments, depending on the external DC and AC electric and magnetic fields, as described in Section 3 below. Although the cross section of a single interaction between a simple ion and a neutral particle in the gaseous phase, without complicated forces or energy inputs, is known from both theory and laboratory experiments [11], a substantial change in the plasma conditions (composition and velocity distribution of the ions and neutrals) or in the ambient energy (electric and magnetic fields, radiation, and temperature) can cause a significant change in the ion-neutral and electron-neutral interactions, particularly in a tenuous plasma. Also, the electron impact ionization properties [12] are not well understood in distant environments and for different neutral species, although they provide a dominant source of ionization, especially far from the Sun, where photo-ionization is subdominant because of the significantly lower solar UV flux. With such a variety of environments and ambient energy distributions, it is not easy to pinpoint the exact conditions for relevant laboratory experiments without in-situ measurements in space. This makes the plasma-neutral gas interactions in actual space environments less well understood than the dynamics and interactions in fully ionized space plasma. Consequently, our knowledge of the interaction between low-energy ions and neutral species is far from complete (cf. Section 5).

In this paper, based on a White Paper prepared for the ESA Voyage 2050 Call, we show that our knowledge of plasma-neutral gas interactions and of partially ionized environments is far from sufficient, although such environments are ubiquitous in the upper atmospheres of planets and moons, and elsewhere in space.
In order to advance such knowledge, we advocate for measurements addressing the following fundamental questions in future space missions: (A) How and by how much do plasma-neutral gas interactions influence the re-distribution of externally provided energy to the composing species? (B) How and by how much do plasma-neutral gas interactions contribute toward the growth of heavy complex molecules and biomolecules?

Sections 2 and 3 present an overview of current knowledge of plasma-neutral gas interactions in different environments. Section 4 proposes specific science questions to be addressed. Here, we do not include the cases in which either the neutral component or the ion plasma component is in solid form, because such interaction problems open up another world of fundamental questions. For instance, electrostatic charging of icy grains under the influence of cosmic ray or UV radiation may grow or dismantle the grains [13, 14], thus controlling the grain size distribution and also the total grain surface area available for chemical reactions. Inversely, the plasma is also affected by charged grain and dusty plasma behavior, as is the case at Saturn's rings [15]. Although in this paper we do not discuss such science themes relevant to dust and grains (which belong to "dusty plasma science"), they also involve the plasma-neutral gas interaction. In Section 5 we propose a strategy for obtaining the measurements needed to address these questions, with a specific terrestrial mission case elaborated in Section 6. Technology challenges are discussed in Section 7.

On the other hand, collisions play a fundamental role in the dynamics and energetics of ionospheres. They are responsible for the production of ions, diffusion of plasma from high to low density regions, conduction of heat from hot to cold regions, and exchange of energy between different species, among other processes. The collisional processes can be either elastic or inelastic, and some interactions lead to chemical reactions. All these processes await future investigation. As mentioned earlier, it is difficult to learn from the interaction when the neutral particles are in the form of grains, which deserves its own study field of dusty plasma. The plasma-grain interactions, often taking place at the grain surfaces, are an important ingredient. The interstellar medium embodies this situation, where equilibrium is assumed between the ambient interstellar gas and dust grain condensation nuclei. However, modeling the composition depends on the condensation temperature, ambient UV flux, cosmic ray radiation field, charging, and surface area during the growth of the grain. The UV flux may also lead to grain charging and dusty plasma effects. Regarding protoplanetary discs, for which such interaction is under the influence of a young star, significant progress has been made in view of the growing body of observational data (e.g., with ALMA), but more can also be learned from the study of comets and asteroids as relics of Solar System formation that have undergone limited alteration since then. Even minor species can play a major dynamic role, as they may behave as catalysts, changing the surface albedo or the sublimation temperature of ices. Thus, plasma-grain interactions and the ion-neutral problem constitute fundamental questions that are still unsolved, and they motivate us to tackle the ion-neutral gas interaction in the gas phase (including heavy molecules) as a separate problem.
Limitations
The exact nature of the collision process depends both on the relative kinetic energy of the colliding particles and on the type of particles. In general, elastic collisions dominate at low energies, but as the relative kinetic energy increases, inelastic collisions become progressively more important. The excess energy during inelastic collisions is normally converted, in order, into rotational, vibrational, and electronic excitation, and finally ionization, as the relative kinetic energy increases. However, the different collision processes can affect the continuity, momentum, and energy equations in several different ways. This complicates the plasma dynamics and chemistry, especially when the externally provided variable energy density exceeds the pre-existing quasi-static energy density, because collisions take place under the influence of short-range external fields, and adding an AC electric field further alters the collision configuration. In such cases, it is difficult to evaluate the external force terms in the multi-fluid equations and the collective effects on the collisional term. Even the resulting distribution can already violate the Maxwellian assumption. Since the ratio between the variable external energy and the pre-existing energy must play an important role, unexpected plasma behaviors can arise in low-energy-density plasma, with low density and low temperature. By contrast, even if the external field is not strong, its (collective) effects on the collision process cannot be ignored. Such forces are formulated with, e.g., the quasi-linear approximation to describe the ion dynamics. However, quantitative verification of the Boltzmann collision integrals in the real space environment is not easy, even if the distribution function is known. What makes space environments special are the unique combinations of phenomena and their interplay. Studying plasma-neutral gas interactions becomes more challenging when some of the neutral particles condense to form clouds or heavy molecules, though not as heavy as grains or dust.

Observation-model discrepancy
For high-density collisional regions such as the lower thermosphere, existing models provide a good estimate of the bulk ion properties from bulk neutral properties when compared with ionospheric and thermospheric observations. However, when the partially ionized plasma becomes tenuous, with very low collision rates both for neutral species and for ions, such as at altitudes above 300 km in the Earth's case, observations start to depart from what we expect from combinations of empirical and theoretical models and from laboratory experiments, because the assumptions of Maxwellian distributions are no longer valid, particularly near the exobase.

Earth: neutral particle behavior in the upper thermosphere and exosphere
Ultraviolet observations of N2 and O density and temperature profiles in the Earth's upper thermosphere by NASA's TIMED satellite [18] found a significant discrepancy with the empirical NRLMSIS model [19]. Even the scale height of the density is not yet clear: for hydrogen, UV observations of the exosphere beyond 3 R_E indicate a scale height of about 20000 km for one order of magnitude decrease [20], as shown in Fig. 1a, whereas it is only 3000 km for the thermospheric model based on outdated in-situ measurements [21]. For the range between 500 km and 3 R_E, where the exospheric profiles are basically obtained from the hydrostatic assumption, as shown in Fig. 1b, there is no reliable information.
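The two "per decade" figures quoted above can be put on a common footing with the barometric law, assuming an isothermal, hydrostatic profile n(z) = n0 exp(-z/H): one order of magnitude of density decrease corresponds to a distance of H ln 10. A minimal sketch of the conversion:

```python
import math

def efold_from_decade(decade_km):
    """Convert the distance for a 10x density decrease into the e-folding
    scale height H of the barometric law n(z) = n0 * exp(-z / H)."""
    return decade_km / math.log(10.0)

for label, decade_km in [("UV exosphere observations [20]", 20000.0),
                         ("thermospheric model [21]", 3000.0)]:
    h = efold_from_decade(decade_km)
    print(f"{label}: one decade over {decade_km:.0f} km -> H ~ {h:.0f} km")
```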
This comes partly from insufficient observational knowledge of the energy re-distribution in that region, and partly from the lack of sufficient in-situ observations of neutral species and ions in the upper thermosphere and above [22]. Modern spacecraft that carry accelerometers for total density measurements do not cover that altitude range, and we still rely on measurements from the 1960's-1970's that are already 50 years old (except DE-2, from 1981).

Fig. 1 (caption): a Exospheric hydrogen observations [20]. b Example altitude profile from the NRLMSISE-00 model (https://ccmc.gsfc.nasa.gov/modelweb/models/nrlmsise00.php); the smoothness of the profile (nearly exponential above 200 km for neutral species) comes from the lack of observations beyond the 1960's [21].

Fig. 2 (caption): a CHAMP observation of density (using the accelerometer) and fine-scale field-aligned currents derived from the magnetic field above 400 km altitude [22]. b Simulation of neutral wind in a narrow channel [23].

The discrepancy, or unexpected dynamics, becomes more significant when a massive energy input is provided from space, e.g., near the cusp and during geomagnetic storms. Figure 2 shows the cusp case. Narrow channels with a density increase of neutral particles are observed near the cusp by the CHAMP satellite [22]. Since strong field-aligned currents, both DC and AC, continuously provide electromagnetic energy to the ionosphere in a narrow region in the cusp, such a density enhancement is conventionally considered to be the result of an upward neutral wind sustained by Joule heating at 120-130 km altitude [22]. However, simulations of neutral wind in a narrow channel [23] do not reproduce a simple upward flow but require downward flow at the sides. Strong structuring of density and temperature has also been found by TIMED, which showed structure within 5° in latitude during geomagnetic storms, while models predict essentially constant density (less than a factor of 2 change) [18]. During geomagnetic storms, when the energy flow from the magnetosphere to the thermosphere is enhanced, neutral properties deviate over an area that is wider than the local cusp where the energy input is locally high. The TIMED satellite [18, 24-26] showed large variability of the neutral temperature and of the O and N2 densities, responding to both the solar EUV flux and the magnetospheric activity, significantly different from the model predictions for average conditions [19, 27, 28]. For example, during geomagnetic storm periods, the temperature nearly doubled and the N2 density increased by one order of magnitude within 10 days. Figure 3 shows observations of neutral density during major geomagnetic storms by the GRACE satellite [29] and the TWINS satellites [30]. The enhancement of the neutral density in response to major geomagnetic storms (black line in Fig. 3a) is much greater than predicted by the empirical model (red line in Fig. 3a) for similar but stable conditions. This also indicates that the enhanced energy inflow caused an unexpected response of the neutral atmosphere [31]. In summary, our knowledge is not sufficient to understand even the basic behavior of the terrestrial upper ionosphere and upper thermosphere. This equally applies to ion-neutral phenomena in the mesosphere, such as polar mesospheric summer and winter echoes, which seem to be partially influenced by solar activity [32].
ESA's candidate Earth Explorer 10 mission Daedalus [33] would have contributed to improving our understanding of the plasma-neutral gas interactions in the lower thermosphere and ionosphere, in view of its special orbit (with perigee between 120 and 150 km altitude) and its instrument suite consisting of ion, neutral, and electromagnetic field instruments. In February 2021, unfortunately, Daedalus was not selected to go forward to Phase A studies.

Fig. 3 (caption): a Neutral densities from GRACE measurements (black) and the empirical thermosphere JB08 model (red) using average daily indices in November 2004 [29]. The sudden increase due to the X2.0 solar flare is marked as 1, whereas the impact of an interplanetary coronal mass ejection (ICME) a few hours later is marked as 2. b TWINS Lyman-alpha observations of the relative variation in the column density of exospheric H, expressed relative to the total solar Lyman-alpha flux (%), during a large geomagnetic storm [30].

Venus and Titan: super-rotation and fast ion flow
The cause of the super-rotation of Venus' atmosphere [34] is a long-standing mystery. Such a large-scale atmospheric convection, much faster than the surface rotation, was also found on Titan [35], suggesting that this might be a common feature of atmospheric dynamics on planets or moons with a sufficient atmosphere and slow solid-body rotation. There were two fundamentally different ideas regarding the driver: (i) the momentum of the extremely slow surface motion keeps transferring a massive total momentum to the upper atmosphere, so that it flows 100 times faster than the ground; and (ii) the plasma transfers a sufficient amount of momentum to the neutral atmosphere, despite the plasma density being much smaller than the neutral density. The amount of transferred energy does not have to be very large (unlike the terrestrial global circulation), because intrinsic modes of planetary convection might exist, such that a very small momentum transfer from either below (the ground) or above (space) may maintain the mode. The Japanese Venus mission Akatsuki (https://akatsuki.isas.jaxa.jp/en/mission/) is dedicated to this problem, examining details of the atmospheric convection to evaluate the energy transfer from smaller-scale to larger-scale convection. Akatsuki's observations indicated a strong third candidate scenario: (iii) the thermal tide at the cloud layer can play a major role (like a piston), leaving the relation to, and the contribution from, the ionospheric convection as an unsolved problem. While Akatsuki did not include the instrumentation to examine the second scenario due to mass limitations, both Venus Express [36, 37] and Pioneer Venus Orbiter [38] showed strong ion convection in the super-rotation direction, with velocities 10 times faster in the Venus Express observations, as shown in Fig. 4. The observations suggest that the ionospheric super-rotation and the atmospheric super-rotation are related [36, 39], raising the possibility of a much more effective momentum transfer than predicted by any model of plasma-neutral gas interaction.

In addition to super-rotation, Titan has several other mysteries that are relevant to the plasma-neutral gas interaction. One is the cause of the massive cold ion outflow from Titan's ionosphere, which is believed to be too cold to produce such an outflow. Unexpectedly high-density cold ions in Titan's upper ionosphere, and high escape rates with ~100 km/s velocity, were found by the Cassini spacecraft [40, 41], as shown in Fig. 5.
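Scenario (ii) can be bounded with the standard ion-drag term of the neutral momentum equation, rho_n du_n/dt = rho_i nu_in (u_i - u_n), which gives a spin-up timescale tau ~ rho_n / (rho_i nu_in). The sketch below evaluates this timescale; both input numbers are illustrative assumptions, not measured Venus or Titan values.

```python
# Order-of-magnitude spin-up time of neutrals by ion drag.
# Standard coupling term of the neutral momentum equation:
#   rho_n * du_n/dt = rho_i * nu_in * (u_i - u_n)
# so tau_spinup ~ rho_n / (rho_i * nu_in).
# ALL numbers below are illustrative assumptions.
rho_i_over_rho_n = 1e-5  # assumed ion-to-neutral mass density ratio
nu_in = 1e-1             # assumed ion-neutral collision frequency [1/s]

tau_s = 1.0 / (rho_i_over_rho_n * nu_in)
print(f"spin-up timescale ~ {tau_s:.1e} s ~ {tau_s / 86400:.0f} days")
```

Whether such a timescale is short enough to matter depends entirely on the actual ion density and collision frequency profiles, which is precisely the kind of quantity the in-situ measurements advocated here would constrain.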
If the ion velocity is maintained by the magnetospheric convection of Saturn or by other plasma processes, scenario (ii) for the super-rotation may apply here, because thermal tides at Titan, far away from the Sun, are very small. However, the momentum may also be transferred from the neutral atmosphere's super-rotation (which is much slower than the observed ion flow) to the ion flow. In both cases, the observations suggest a momentum transfer that is higher than predicted by any model of plasma-neutral gas interaction.

Cold environments such as interstellar space: formation of organic matter
Another important issue relevant to the plasma-neutral gas interaction at Titan is the formation of heavy molecules, including organic matter [42]. The process is more chemistry-led in a collisional atmosphere, rather than being controlled by the collective effects of external forces. In the terrestrial middle atmosphere, cold environments are known to enhance certain types of chemical reactions, such as ozone depletion [43] and the formation of noctilucent clouds and heavier particles causing specific radar echoes near the mesopause [44]. Similarly, the enhanced plasma-neutral gas chemistry in Titan's upper atmosphere is expected to behave as a purely chemical system [45]. However, this chemistry is initiated by external energy provided by the solar UV, high-energy photons, electrons, and ions, through ionization of the major neutral species like nitrogen and methane. The Cassini mission confirmed that Titan has one of the most compositionally complex ionospheres in the Solar System, with roughly 50 molecular ions at or above the detection threshold, most of which are composed of C, H, and N, as shown in Fig. 6 [46].

Fig. 5 (caption): a High-density cold ions in Titan's upper ionosphere [41]. b Summary of cold ion escape observed by Cassini [40].

Unlike terrestrial atmospheric chemistry, where heavy molecules imply water compounds [2, 47], the observed composition at Titan should naturally lead to the formation of amino acids [48], although Cassini's instruments were not capable of identifying them. It appears that much of the interesting chemistry, even that forming heavy species, occurs in the upper atmosphere rather than at lower altitudes, which indicates that energetic particles from above may be one of the key elements in addition to the UV irradiation. The formation of organic matter, including amino acids, in cold tenuous environments may also occur in the formation region of comets (such as the Kuiper belt and the Oort cloud) and in interstellar space, where the environment is very cold [49]. Figure 7 shows observations of a cometary amino acid by the Rosetta spacecraft [50]. Because of the intimate relationship between the condensed and gas phases in molecular clouds and their exposure to strong UV from young OB stars in star-forming regions, and also due to their long-duration immersion in the cosmic ray background, the formation pathways leading to organic matter can be multiple and complex, beyond what we can model so far. Such cold environments make the situation more favorable when an external one-time energy deposit exceeds the background energy density in non-thermal processes (e.g., cosmic ray energy deposition inside an icy dust grain triggering the formation of chemical radicals), but interaction mechanisms in such environments are impossible to examine in a laboratory experiment [51].
Comet: unexpected structures in plasma-neutral gas mixed plasma
Comets are fascinating laboratories for studying plasma-neutral gas interactions, both for the formation of heavy molecules and for the solar wind interaction with the outgassing neutral species. Rosetta was able to explore the diamagnetic cavity, a completely field-free region, at comet 67P/Churyumov-Gerasimenko. Figure 8 shows an example of these measurements [52]. The boundary normal direction is variable, which implies that the structure is not spherical in shape. Henri et al. [53] found that the boundary location can be organized by the electron-neutral collision rate, but the formation mechanism has not been elucidated yet. At high-activity comets, like 1P/Halley, ion-neutral collisions are an important mechanism of energy transfer in the inner coma. There, the ions are efficiently cooled and remain coupled to the neutral species. The influence of collisions with neutral species is therefore deemed important in forming these diamagnetic cavities at comets. Indeed, the neutral particle density and composition determine the amount of ionization and thus the mass-loading of the solar wind plasma, which then affects the size of the diamagnetic cavity. The role of charged dust in the coma remains largely unexplored.

Fig. 8 (caption): Diamagnetic cavity measurements at 67P [52]: b the observed location and the boundary normal direction (bar direction); c zoom of (b), illustrating the temporal boundary of the cavity (solid line), which is rippled relative to the average boundary (dashed line) but whose physics is still unclear. The x-axis points sunward and the z-axis points northward in the orbital plane.

Meteor: air burst
Meteors are known to produce a shock front leading to enhanced ionization of the ambient atmosphere (because of intense heating at the shock) and of ablated meteor material (through high-speed collisions with air) [54]. The Chelyabinsk meteor burst in 2013 deposited more energy in the atmosphere than the traditional models predicted: a large part of its kinetic energy was unexpectedly consumed in the atmosphere rather than at ground impact, causing various effects in the geomagnetic field, lithosphere, and atmosphere. Most of the energy was released as a result of disruption (airburst) at around 27-30 km altitude, affecting the ionospheric electron density over a wide area, as illustrated in Fig. 9 [55]. This indicates that the energy conversion from the meteor motion to the atmosphere, through the plasma around the meteoroid, was more effective in terms of the plasma-neutral gas interaction than our present-day knowledge suggests.

Fig. 9 (caption): Illustration of how the Chelyabinsk meteoroid airburst affected the ionosphere [55].

Past climate change: solar influence
The role of the Sun's plasma and magnetic activity in the paleoclimate (4 billion years ago) and in climate change over the past millennia and longer (but prior to major anthropogenic impact on climate) has been the subject of a long-standing debate for nearly 30 years, following the introduction of non-linear methods to correlate solar activity (length of the solar cycle, length of solar minimum, or strength of the solar dipole magnetic field, instead of the simple sunspot number or solar irradiance) with the terrestrial climate (not only the average temperature but also regional pressure patterns such as the North Atlantic Oscillation, or cloud coverage) [56-61]. Figure 10 shows the correlation study by Stauning [60]. The reason that solar impacts have been ignored
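For orientation, the airburst energy budget is set by the bolide's kinetic energy, E = m v^2 / 2. The sketch below evaluates it with commonly cited Chelyabinsk parameters from the open literature (mass ~1.2e7 kg, entry speed ~19 km/s); these values are quoted for illustration and are not taken from [55].

```python
# Kinetic energy of the Chelyabinsk bolide, E = m * v**2 / 2.
# Input values are commonly cited literature estimates, used for illustration.
m = 1.2e7          # mass [kg] (order-of-magnitude literature estimate)
v = 19e3           # entry speed [m/s]
MT_TNT = 4.184e15  # 1 megaton TNT in joules

E = 0.5 * m * v**2
print(f"E ~ {E:.2e} J ~ {E / MT_TNT:.2f} Mt TNT")  # roughly half a megaton
```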
in climate models is the lack of an established understanding of a physical link between solar activity and tropospheric temperature or meteorological phenomena [62]. In the upper thermosphere, the ion density and the neutral density are comparable. If the plasma-neutral gas interactions are strongly intensified when the external energy source grows, as has been suggested by the observational discrepancies (cf. §3.1 above), the neutral atmosphere could be affected too, via the vertical coupling of energy in the atmosphere through, e.g., gravity waves. Thus, the influence of the Sun or of other external sources (e.g., galactic cosmic rays) on the atmospheric chemistry that shapes past climate variability cannot be evaluated unless we understand the interaction between the plasma and the neutral atmosphere.

Fig. 10 (caption): Northern hemisphere average temperature (red line) and length of the sunspot cycle (blue line). Before the anthropogenic effect took over during the 1980's, they are strongly correlated [60].

Science questions related to the plasma-neutral gas interactions
As discussed above, there are many unexplained observations related to the plasma-neutral gas interactions. The above examples are mainly focused on ion-neutral interactions, but our understanding of electron-neutral interactions also needs to be updated.

First fundamental question: dynamics aspect
The behavior of comet atmospheres and magnetospheres, including diamagnetic cavities (§3.4), indicates that the plasma-neutral gas interactions can play an important role in the dynamics. The terrestrial examples (§3.1, §3.5, and §3.6) illustrate the importance of the plasma-neutral gas interactions for the energy re-distribution between energized ions and neutral species, and how the ambient environment has often been overlooked, i.e., how external energy feeds into the energization of ions, of neutral species, or of the background populations (acceleration and thermalization). Either ions or neutral species could depend more sensitively on the external drivers than the present models predict. This problem is formulated as the first fundamental question:

(A) How and by how much do plasma-neutral gas interactions influence the re-distribution of externally provided energy to the composing species?

The plasma-neutral gas interaction and the subsequent energy re-distribution are expected to be largely modified when the external energy (characterized by the energy flux density) is large compared to the background energy (characterized by the pressure), which is more readily the case in the upper ionosphere near the exobase. Although the most affected region has a limited extent, the consequences of the enhanced energy re-distribution through the plasma-neutral gas interaction can be far-reaching. They are classified into the following four major topics.

(A1) Impact of atmospheric particle energization on long-term large-scale evolution: A more extreme energy re-distribution, such as focusing the energy into a specific form, causes more atmospheric heating of both ions and neutral species directly in the upper thermosphere, in addition to the Joule heating in the lower thermosphere.
This is important for understanding the origin of certain types of ion escape that require neutral species to reach altitudes at which adiabatic acceleration and ion acceleration through ambipolar and/or auroral electric fields become effective, for assessing the present-day atmospheric escape rate and its effects on magnetospheric circulation and dynamics, and for understanding atmospheric escape over geological time [62]. This relates to the question of whether or not a magnetic field protects an atmosphere against escape [63, 64]. The problem can be generalized to the Solar System scale, e.g., to the interaction region between the heliosphere and the interstellar medium.

(A2) Structures and variability in the upper thermosphere and exosphere: Since the external energy that is provided from space to the upper thermosphere is localized in the dayside cusp and highly variable in the auroral oval, the energy re-distribution in the upper thermosphere through the ion-neutral interaction should be enhanced locally in space and/or variable globally (§3.1), causing structure and variability near the exobase and in the exosphere. The exobase altitude and temperature, and the scale height of the exosphere, determine a large part of the neutral escape from Mars and the ancient Earth. Hence, local anomalies and/or temporal changes in this region significantly modify the neutral escape and the related neutral dynamics from the lower part of the atmosphere. Through vertical coupling via gravity waves and other mechanisms, such variability may influence even the lower thermosphere and the mesosphere, as well as the local plasma-neutral gas interactions, particularly during severe magnetospheric activity such as magnetic storms. An improved understanding of the thermosphere will have immediate benefits for technological applications such as satellite drag in low Earth orbits, space debris management, and spacecraft re-entry. An anomalous exobase altitude is a natural feature of comets, which have localized outgassing regions, and the same argument applies to moons with plumes. A non-uniform neutral density is even expected at the heliospheric boundary.

(A3) The turbulent energy cascade and the Kolmogorov scale in partially ionized plasma: Different interactions imply different scale sizes (in both space and time) in the energy transfer. At the small-scale limit of the turbulent energy cascade, an enhanced energy re-distribution through the plasma-neutral gas interaction will change the scale size of the energy cascade, influencing even the Kolmogorov scale. This cascade becomes even more complex in layered regions where the ionization rate and the collision frequency change rapidly, as at a comet and in the ionosphere. Since the turbulence energy is small, such a modification through the plasma-neutral interaction is expected to be substantial.

(A4) Roles of ions in the transfer of angular momentum and energy to neutrals: The possibility that the dawn-dusk asymmetric plasma motion helps to maintain the super-rotation of the atmosphere of Venus (§3.2) points to the possibility of a more effective momentum and energy transfer from the Sun or the protoplanetary disc to the protoplanets, with more significant roles for the plasma and the electromagnetic fields in the formation of the Solar System than understood so far. A higher energy transfer from the protoplanetary disc to the protoplanets may even have heated the protoplanets more during their formation, when the Sun was colder than at present.
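For topic (A3), the small-scale end of a hydrodynamic cascade is conventionally estimated from the Kolmogorov scale eta = (nu^3 / eps)^(1/4); in a partially ionized gas, an effective viscosity modified by ion-neutral coupling would enter in place of nu. A minimal sketch with placeholder values:

```python
# Kolmogorov dissipation scale: eta = (nu**3 / eps)**0.25,
# where nu is the kinematic viscosity [m^2/s] and eps is the energy
# dissipation rate per unit mass [W/kg]. Both values below are
# placeholders; in partially ionized plasma, nu should be an effective
# viscosity that includes ion-neutral coupling.
nu = 1.0e3    # assumed effective kinematic viscosity [m^2/s]
eps = 1.0e-4  # assumed dissipation rate [W/kg]

eta = (nu**3 / eps) ** 0.25
print(f"Kolmogorov scale ~ {eta:.2e} m")
```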
For example, strong solar flares are expected to have taken place during the ancient Earth era [10, 65], and may even provide the extra heat needed to solve the faint Sun paradox. Since the effect is expected to be large-scale but slow, it is also relevant to the plasma effects on the upper atmosphere on the time scale of climate variability. A more effective energy re-distribution also means a larger plasma energy input to the neutral atmosphere, which particularly affects the mesospheric climate, with a possible long-term influence on the stratosphere and even on the tropopause.

Second fundamental question: chemistry aspect
The observations of organic matter in Titan, comets, and the interstellar medium (§3.3) indicate that the plasma-neutral gas interactions in low-density and low-temperature environments might contribute to chemistry and changes in composition, including the formation of heavy molecules and organic matter such as biomolecules and amino acids. Also, sputtering chemistry through the ion-atmosphere interaction is another candidate for forming heavy particles, in analogy with surface sputtering chemistry. These indications lead to the second fundamental question:

(B) How and by how much did plasma-neutral gas interactions contribute toward the growth of heavy complex molecules and biomolecules?

Here, the interaction includes both the chemical reactions and the plasma physical interaction, but excludes the surface interaction, which has its own important science, as mentioned in Section 1. Both types of reaction offer major topics of scientific study.

(B1) Favorable conditions of plasma and external energy for enhancing the chemical reactions: Since the basic elements of organic matter (N2, NH3, NO, CH4, CO2, CO, H2, H2O, O2) have very low condensation temperatures, low-temperature conditions are considered favorable for developing organic molecules in space. However, the conditions in space where organic matter is most likely formed (the interstellar medium, the Oort cloud, and Titan's ionosphere) are impossible to reproduce in the laboratory without in-situ measurements that provide the exact parameters, although some of the pure chemical interactions and reaction efficiencies have been determined fairly well with laboratory experiments and quantum chemistry modeling (cf. §5.1).

(B2) Formation of plasma structures that may work as catalysts in tenuous environments: Terrestrial stratospheric chemistry is enhanced when thin stratospheric clouds (i.e., layers of condensed molecules) are formed through rarefaction that is sustained by the horizontal wind [66]. Thus, density structures sustained by the neutral dynamics (e.g., rarefied layers) may work as catalysts for the condensation and chemistry resulting from photolysis and electron impact. Identifying such structures in the Solar System (as mentioned in A2) and examining their relation to the chemical reactions and to the roles of neutral species provides fundamental information about chemistry in space, not limited to the Earth. For example, the organic matter in Titan is found at altitudes where vertical convection is weak, while we have no knowledge about structures in the formation regions of comets or in interstellar space. Since the energy density is very low and the plasma is collocated with substantial amounts of neutral species, a small amount of external energy may cause large modifications in the plasma-neutral gas interactions, and such interactions may complicate the reactions significantly.
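One reason low temperatures do not shut down ion-neutral chemistry, relevant to (B1), is that capture-limited (Langevin) ion-molecule rate coefficients are independent of temperature. A commonly quoted numerical form is k_L ~ 2.34e-9 * sqrt(alpha/mu) cm^3/s, with alpha the neutral polarizability in Angstrom^3 and mu the reduced mass in amu. The sketch below applies it to an example pair relevant to Titan; treat the prefactor and the polarizability value as approximate literature numbers, not results of this paper.

```python
import math

def langevin_rate(alpha_A3, mu_amu):
    """Langevin capture rate coefficient in cm^3/s:
    k_L ~ 2.342e-9 * sqrt(alpha / mu), alpha in Angstrom^3, mu in amu.
    Temperature independent, hence relevant for cold environments."""
    return 2.342e-9 * math.sqrt(alpha_A3 / mu_amu)

# Example relevant to Titan: N2+ reacting with CH4
# (CH4 polarizability ~2.6 Angstrom^3; reduced mass 28*16/(28+16) amu)
k = langevin_rate(2.6, 28 * 16 / (28 + 16))
print(f"k_L ~ {k:.2e} cm^3/s")  # order 1e-9, regardless of temperature
```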
Measurement strategy

As outlined in the examples in Section 3, our current knowledge of plasma-neutral gas interactions is not sufficient, particularly for cold and low-density environments. This stems partly from a lack of missions to such environments, but mainly from a lack of appropriate instrumentation on all missions up to now, including those to the Earth's upper atmosphere and Venus. To improve our knowledge of the plasma-neutral gas interaction at low energy (< 1 keV), the most fundamental observations are those of the velocity and density distributions of ions and neutral species. In addition, at a minimum we need some information on the composition. The most dramatic lack of observations is that of ions and neutral species at energies below 10 eV, which for ion energy spectrometers corresponds to the ±1 eV limit of controlling the spacecraft potential by existing methods. As a reference, a spacecraft velocity of 7.6 km/s (circular orbital velocity at the terrestrial exobase) corresponds to a ram energy of only 0.3 eV for H, 1.2 eV for He, and 5 eV for O, much lower than 10 eV, so that these species have remained largely undetected. Moreover, in-situ detection requires a trade-off between mass resolution and energy resolution. Even for the Earth, past neutral particle measurements are limited to bulk (moment) information such as density, bulk velocity, and temperature. Using the Doppler shift in optical remote sensing measurements to derive velocity and temperature requires a high enough density in the target region for the emission or absorption to be intense, but the regions we consider are low-density regions. Also, the optical (emission and absorption) method cannot reveal the dynamics much below the exobase. Recent developments in in-situ instrumentation can solve some of what was impossible in past and ongoing missions. A trade-off should also be considered when combining remote sensing and in-situ observations, and between single- and multi-point measurements. This means that we must define mandatory measurements for each major topic outlined in points (A1) to (B2).

Why do we need in-situ observations in space in addition to laboratory experiments? The ion-neutral interaction problem is in principle reduced to the cross-section problem, which belongs to the field of fundamental microphysics and should first be determined by laboratory experiments with the aid of collision models. However, the microphysical process of collisions in a plasma is affected by the local magnetic and electric fields. These electric and magnetic fields are set by the macroscopic plasma environment, and their parameters are eventually measured through macroscopic quantities. Such influences are more significant for lower-energy particles and under larger gradients in the environment. Therefore, laboratory experiments may not fully represent real space environments with large-scale gradients and many external parameters (cf. Figure 11). In fact, space observations show notable discrepancies between model predictions and observations at higher altitudes in Earth's geospace (with larger discrepancies at higher altitudes), as described in Section 3, and our knowledge of these cross-sections is not sufficient for evaluating the effects of ion-neutral and electron-neutral interactions in many space plasma environments ranging from the Earth to the interplanetary cloud. Some of these fundamental processes in space can be qualitatively reproduced and studied in the laboratory, but not quantitatively.
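As a quick cross-check of the ram energies quoted above, the kinetic energy is simply E = ½mv². A minimal sketch (the constants are CODATA values and the species masses are standard atomic masses; the script itself is illustrative and not part of the White Paper):

```python
# Ram energy E = 0.5 * m * v^2 of an atom arriving at the spacecraft
# at orbital speed, expressed in eV.
AMU = 1.66053906660e-27  # atomic mass unit, kg
EV = 1.602176634e-19     # electron volt, J

def ram_energy_ev(mass_amu: float, v: float) -> float:
    """Kinetic energy (eV) of a particle of mass `mass_amu` at speed v (m/s)."""
    return 0.5 * mass_amu * AMU * v**2 / EV

v_exobase = 7.6e3  # m/s, circular orbital velocity near the terrestrial exobase
for species, m in [("H", 1.008), ("He", 4.003), ("O", 15.999)]:
    print(f"{species}: {ram_energy_ev(m, v_exobase):.2f} eV")
# Output: H: 0.30 eV, He: 1.21 eV, O: 4.82 eV -- consistent with the
# 0.3 / 1.2 / ~5 eV figures in the text, all well below the 10 eV limit.
```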
For example, the field-aligned potential drop (double layer), which is a fundamental process in space, was first demonstrated in laboratory experiments [67] before it was found in space [68], and the laboratory experiments were improved after the parallel electric field was found in space. (Fig. 11: Illustration of the complicated background conditions in space that are difficult to reproduce in a terrestrial laboratory.) However, many space missions were still needed to study double layers in space, because it is not possible to reproduce the space environment in a quantitatively representative manner in the laboratory. Thus, the theme of "Plasma-neutral gas interactions in various space environments" must be addressed by space missions in various environments in terms of the macroscopic plasma conditions related to collisions.

Required measurements for ions and neutral species

For (A1): To examine the role of plasma-neutral gas interactions in atmospheric escape, we must know the density profile and the velocity distributions at different altitudes for the most relevant species near the exobase and in the exosphere (H, He, N, N2, NO, O, O2, CO, CO2 for the Earth's case), both for the thermal ion and neutral background components and for the non-thermal escaping components of ions and neutral particles. Here we note that the current empirical models of the exosphere and upper thermosphere, such as the MSIS model, are outdated [18] and not suitable for modeling the thermal escape. The TIMED results even differ from the estimates from ground-based observations of airglow [69]. Even the baseline densities of the most abundant species (O and N2) are observed to be 20-30% below those of the empirical model, while the temperature has not been directly measured so far, but is estimated from the density gradient (scale height). The model cannot be tuned for the best fit because the discrepancy changes from year to year. To explain the mismatch, a higher cross-section for producing low-energy Energetic Neutral Atoms (ENA; < 100 eV) than the current estimate for different solar conditions has been suggested as one possibility [18]. In this respect, covering H, He, N, N2, and O could already be sufficient for the Earth's case. Separation of N and O for the non-thermal component, which was difficult before but is now technically possible for < 100 eV, is needed to estimate the efficiency of nitrogen-related chemical and photochemical reactions, for which no good observational knowledge exists near the exobase and above. On the other hand, N-O separation is not required for the background velocity in the thermal energy range. For (A2): To reveal the structures of these regions, the spatial distribution of bulk quantities (density, bulk drift velocity, and ideally the temperatures of the thermal and non-thermal components) becomes more important than their velocity distributions. To obtain data from different altitudes simultaneously, we need to combine the optical "snapshot" from line-of-sight integrated measurements with in-situ observations. This is possible even with a single spacecraft if the target region of the optical observation is along the spacecraft trajectory (assuming no significant change during this traversal) or is conjugate with the spacecraft during its traverse, as for the Reimei satellite, when conjugacy along the geomagnetic field can be assumed [70,71]. A more standard method is a combination of two spacecraft, or a combination with ground-based optical and radar measurements.
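Regarding the scale-height estimate mentioned under (A1) above, a minimal sketch under the standard assumption of an isothermal layer in hydrostatic equilibrium, where the density scale height is H = k_B T / (m g); the numbers below are illustrative, not mission data:

```python
# Infer an exospheric temperature from a measured density scale height,
# assuming an isothermal layer in hydrostatic equilibrium: H = k_B * T / (m * g).
K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg
G0, R_E = 9.81, 6371e3   # surface gravity (m/s^2) and Earth radius (m)

def gravity(alt_m: float) -> float:
    """Local gravitational acceleration at a given altitude."""
    return G0 * (R_E / (R_E + alt_m)) ** 2

def temperature_from_scale_height(h_m: float, mass_amu: float, alt_m: float) -> float:
    """Invert H = k_B T / (m g) for the temperature T (K)."""
    return h_m * mass_amu * AMU * gravity(alt_m) / K_B

# Illustrative case: a 60 km density scale height for atomic O near 500 km altitude
print(f"{temperature_from_scale_height(60e3, 16.0, 500e3):.0f} K")  # ~970 K
```

This inversion is exactly why an anomaly in the measured density gradient translates directly into an anomaly in the inferred temperature when no direct temperature measurement exists.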
To estimate the dynamics, isotope fractionation above the homopause can also be used. Therefore, very high accuracy ion composition measurements are recommended, with a mass resolution of m/∆m > 1000. This applies to both the Earth and extraterrestrial environments. For (A3): To examine the small-scale limit, we need multi-point measurements of ions and neutral species with high time resolution. Since this does not require fine composition information, small sub-satellites can provide the necessary information. For (A4): To measure the momentum transfer from ions to neutral species through tangential stress, altitude profiles of the velocity distribution for both ions and neutral species are needed. Since the composition information for such measurements can be at the minimum level, the requirement for (A1) and (A2) is sufficient. For (B1): To diagnose the conditions favourable for growing heavy molecules, the bulk properties of ions and neutral species (density, bulk velocity, and temperature) need to be known. Here, the temperature can be common for the cold backgrounds, and the velocity can be measured only as an average over a long integration time. However, the separation of different species is needed in the density distribution measurements for H, N2, NO, NH3, O, H2O, CH4, CO, and CO2. These requirements are covered by the measurements required for (A1) and (A2). In addition, detection of heavy molecules and organic molecules is needed. This means that we need a mass spectrometer of high mass resolution (m/∆m > 1000) that has a very high mass limit (m > 100) and a high sensitivity (supported by extensive ground-based efforts) for the determination of fragmentation patterns, since untangling the contributions of the neutral species to the recorded fragment intensities is a major puzzle for heavy species. High mass resolution is also required to obtain the isotope ratios of simple molecules, because these give essential information on the location and process of molecular formation [72-74]. For (B2): The extra requirement beyond (B1) is measuring the differential velocities between species, for at least two major species (e.g., O and N2). Such instruments are more difficult to build than what is required for (A2) because the velocities are expected to be very slow. Therefore, they have not been used in past and present space observations, but they are under development.

Required measurements of the background plasma

In addition to the background magnetic field, which is mandatory in describing the plasma, the electron temperature and the DC electric field are needed. Ideally, electromagnetic or electrostatic waves with slow group velocities should also be known, but most of them do not influence the plasma-neutral gas interactions unless the energy density of the waves is very high or they cause some type of resonance with the ions. In this sense, low-sensitivity wave measurements at low frequencies (below 100 Hz) could be sufficient, although high-frequency measurements can be useful as well, particularly for the Earth. The gravity field must also be considered, but it will in any case be known before any mission.

Required measurements for the external energy source

Obvious energy sources are non-thermal ions and energetic neutral species.
Each target environment is characterized by energy sources with their characteristic energies, such as the solar wind for comets and for the Earth, magnetospheric particles (keV-MeV) for the Earth and for moons, and cosmic rays, including solar energetic particles, for the Earth, planets, and interstellar space. Another obvious energy source is radiation, which directly triggers photochemistry at short wavelengths (UV and soft X-ray) and absorption and scattering at long wavelengths (infrared and mm waves). The solar source and a large fraction of cosmic sources can be monitored by other spacecraft and space weather monitoring, while planetary and galactic source fluxes in the outer Solar System might need local measurements (i.e., by the same space mission). However, the energy flux is probably low, and therefore such measurements might be optional. Electromagnetic waves from local sources, for which the energy flux is very high, might be more important to measure, as mentioned in §5.3.

Summary of relevant measurements

Table 1 summarizes mandatory and optional measurement requirements, in which we also specify the minimum measurement requirements: (i) neutral and ion density for major species (which includes composition and ionization rate), (ii) neutral velocity distribution for major species (which may also provide the average neutral temperature, depending on the design), (iii) cold ion (< 10 eV) energy spectra for major species, and (iv) the approximate direction of the background DC magnetic field (the Venus Express level of electromagnetic cleanliness is sufficient). The other instrumentation required to study the problems of plasma-neutral gas (including ion-neutral) interactions depends on how many of the sub-themes are to be studied. They include (v) gravity, (vi) internal energy such as velocity differences between different species, (vii) external free energy such as radiation, cosmic rays, and large-scale electric fields, and (viii) the existence of catalysts such as the surfaces of dust grains, clouds (layers of condensed molecules), or catalytic structures such as non-mixing layers. Not all the measurements in Table 1 have been possible in past and present missions. Even the minimum mandatory measurements (i)-(iv) are not yet available: instruments for (i) and (iv) are mature, whereas the instrumentation for (iii) needs improvements, and instruments for (ii) are still under development (present Technology Readiness Level is 3). Here, "mature" does not necessarily mean that the size is optimized for missions with severe mass and power limitations. However, instrument technology has significantly improved (for optical remote sensing, plasma spectrometers, mass spectrometers, and electric and magnetic field instruments) for near-future missions. The highest-rank ("1" in Table 1, or (i)-(iv) above) in-situ measurements can constitute a small plasma package with a total payload mass of < 20 kg at present, and this will become < 15 kg within a decade. With such a minimum package, a large part of the important questions on the plasma-neutral gas interactions (A1)-(B2) can be studied at planets or moons with an atmosphere, by orbiting through the upper ionosphere or exosphere. A detailed description of how these measurements answer the questions in the listed topics (A1)-(B2) is given in Section 6, where a mission for terrestrial observations is presented as an example.
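To put the mass-resolution requirement quoted earlier (m/∆m > 1000) in perspective, the resolving power needed to separate two species of nearly equal mass is m/∆m = m/|m1 − m2|. A minimal sketch using standard atomic masses (illustrative only; not an instrument specification from this paper):

```python
# Resolving power m/dm needed to separate isobaric species, using standard
# atomic masses (u). Illustrative of why m/dm > 1000 is demanded.
MASS = {"H": 1.00782503, "C": 12.0, "N": 14.0030740, "O": 15.9949146}

def mol_mass(formula: dict) -> float:
    """Molecular mass (u) from a {element: count} formula."""
    return sum(MASS[el] * n for el, n in formula.items())

pairs = [
    ("N2 vs CO",   {"N": 2}, {"C": 1, "O": 1}),
    ("N2 vs C2H4", {"N": 2}, {"C": 2, "H": 4}),
]
for label, a, b in pairs:
    ma, mb = mol_mass(a), mol_mass(b)
    print(f"{label}: m/dm ~ {ma / abs(ma - mb):.0f}")
# N2 vs C2H4 needs m/dm of roughly 1100, while N2 vs CO needs roughly 2500,
# so m/dm > 1000 separates N2 from common hydrocarbon fragments, but fully
# isobaric pairs such as N2/CO remain demanding.
```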
Recent and future improvements are also needed in spacecraft technology, such as the automated operation of multiple spacecraft, including low-cost sub-spacecraft of less than 50 kg (the Swedish Innosat platform has already achieved this for a 15 kg payload). Improvements are also needed in the methods for combining in-situ and remote sensing measurements, and in upgraded ground infrastructure. These technologies have improved rapidly in the past decade. For example, in the mid-2010s, Comet Interceptor [75] was not possible even as an ESA M-class mission. We expect further improvements within a decade.

Destinations of relevant missions: almost all Solar System objects

Possible parameters that influence the ion-neutral interactions (cf. (i)-(viii) in §5.5) can also be summarized as: (1) temperature, (2) density and degree of ionization, (3) gravity, (4) external free energy, (5) internal energy, and (6) the existence of catalysts. While there is room to improve the knowledge on (6) and parts of (4) and (5) through laboratory experiments, extreme values of (1)-(3), e.g., the low-value extremes, are difficult to achieve in ground laboratory experiments (cf. §5.1). Limiting the discussion just to parameters (1)-(3), the Solar System is full of different environments. Table 2 summarizes a number of possible missions that may contribute to the plasma-neutral gas interaction theme; most of these are self-explanatory. (Table 2: Possible destinations. Table notes: *1: ranges from very low to high along the orbit; *2: A1-B2 refer to the science questions, see text for details; *3: cost class, where LL indicates the need to collaborate with another agency, either for cost reasons or to obtain an RTG generator, P indicates S-class/F-class, at least much less than an M-class, or a piggy-back opportunity, and M indicates an M-class mission.) The "artificial comet" mission refers to a massive release of water or other "light materials" in the solar wind. The "planetary L2 composition" mission aims at measuring the composition of pick-up ions of planetary origin near planetary L2 points, since there might be several piggy-back opportunities for Earth L2 telescopes or for Martian missions in the 2040s. Other destinations have been under discussion by other space agencies. For each destination, temperature (T), density (n), and gravity (g) are classified from extremely low to extremely high; each mission can therefore address the topics only in that range. Considering the wide range of environments to be investigated, this theme is better pursued as just one of the science objectives in as many missions to Solar System objects as possible, including comets, all planets (except Mercury), and moons with a substantial atmosphere (e.g., Titan, Enceladus, Europa, Triton). Such an approach enhances the mission science for all Solar System missions (except solar missions). This is not limited to atmospheric and plasma missions, but extends also to surface missions and sample return missions if the spacecraft (orbiter) traverses the upper ionosphere or exosphere. The most comprehensive knowledge can be obtained with a comet rendezvous mission with aphelion reaching the Kuiper belt, so that the spacecraft can measure both the comet environment and the space plasma in the Kuiper belt. The advantage of such a comet target lies in the extremely large range of temperature and UV environments along the highly elliptic orbit. This produces various levels of outgassing, which also lead to various levels of density and gradients.
Therefore, we can examine the plasma-neutral gas interactions in various space environments with a single mission in terms of points (1)-(6) mentioned above. Such a mission is also useful as a step toward the Halley comet rendezvous in the 2060s, and in fact at least two such missions are proposed in White Papers in this issue (a cryogenic comet sample return mission by D. Bockelée-Morvan and a comet plasma mission by C. Goetz). Since the required measurements can be performed with a suite of small instruments (~ 15 kg) as described in §5.5, this theme can easily be added even to a sample return mission from a comet. On the other hand, other missions can also contribute to improving our knowledge of the plasma-neutral gas interactions to answer many of the fundamental questions on topics (A1)-(B2). By combining different targets (e.g., Earth and Venus with the same instrumentation, like Mars Express and Venus Express), even missions to study high-temperature or high-density environments help in understanding the solar energy conversion, which plays different roles in the evolution of the Solar System, and may even help in understanding whether low-temperature stars with neutral species can exist.

Terrestrial mission case

Since the plasma-neutral gas interaction problem is very fundamental, a mission in the terrestrial environment has a large advantage because of strong support from remote sensing measurements by ground-based instruments, including EISCAT_3D. The mandatory altitude range to cover is 500-3000 km, i.e., the upper thermosphere and lower exosphere, where our knowledge of even the ground state is poor for both neutral species and cold thermal ions (except hydrogen atoms), as described in §3.1, and which is an important subject even for the ground-based observation community. Here, the apogee of > 3000 km comes from the requirement to obtain the scale height for H. Since ion and neutral mass spectrometers have a dynamic range of six orders of magnitude, tuning the instruments for the exosphere can provide unprecedented new information. Figure 12 shows one example of such a mission design. The apogee altitude is 1.5 RE (about 10,000 km) in the figure, but this is flexible between 3000 km and 30,000 km (below the forbidden region for geosynchronous orbits, and preferably avoiding the radiation belts). To combine the optical and in-situ measurements and to address topic (A3), a multi-spacecraft mission with three or more spacecraft is ideal. For both options in Fig. 12, the mission is composed of a main spinning spacecraft for in-situ observation (payload 100-120 kg), a 3-axis stabilized sub-spacecraft dedicated to optical remote sensing observation (payload 10-15 kg), and one or two sub-spacecraft (the attitude control can be either spinning or 3-axis stabilized) for multi-point in-situ measurements, flying at separations from the main spacecraft ranging from the ion scale to the fluid scale (payload 5-10 kg). A spinning platform is preferable for the main spacecraft in order to cover 3D for the particle instruments, for which a substantial portion of the field-of-view (FOV) would otherwise be blocked by the spacecraft body on a 3-axis stabilized platform, and to have good coverage for DC electric field measurements.
The difference between the left and right panels of Fig. 12 is where to locate the remote sensing observation spacecraft: (a) it can be placed in a completely different orbit from the in-situ spacecraft, so that the camera's FOV includes the in-situ spacecraft for real-time comparison, or (b) it can fly together with the main spacecraft and look along the orbit, so that the camera's FOV covers the region of in-situ observation with some time delay. The first option has the advantage of not looking at the high-density region, whereas the second option has the advantage of avoiding conjugacy problems. In the first option, the in-situ spacecraft is not always in good conjugacy with the remote sensing spacecraft, even though the spacecraft inclination is adjusted such that the longitudinal drift velocities match. (Fig. 12: Observation strategy using multiple spacecraft.) The telemetry for the in-situ sub-spacecraft is routed through the main spacecraft, both for downlink and uplink, so that the cost of ground operations can be minimized. In this sense, the second option reduces the cost because the telemetry for the remote sensing spacecraft can also go through the main spacecraft. Using a despun platform could be less expensive than having a separate spacecraft for remote sensing, but by 2030 the cost of building and operating very small (< 50 kg) sub-satellites will be significantly reduced, so the trade-off has to be re-evaluated. The telemetry link through the main spacecraft also opens up the possibility of having nearly identical platforms (with some differences in thermal design, radiation protection, antenna design, and the number of payloads or sub-spacecraft). The other requirements for the orbit and spacecraft are:
• The 3-year radiation dose shall not be so excessive that it would require an unreasonable amount of shielding.
• Orbital parameters must be designed to require as few maneuvers as possible (e.g., free drift) to avoid contamination of the composition measurements by (chemical) propulsion exhaust.
• To study ion-neutral interactions properly, the spacecraft should cover the auroral regions, and thus the inclination must be as close to 90° as possible. This automatically facilitates conjugate observations with ground-based radars (e.g., EISCAT_3D) and optical instruments (e.g., Fabry-Pérot interferometers for airglow) that are mainly located in the polar regions.
• At mission completion, all spacecraft can be de-orbited.

Payload

Tables 3 and 4 summarize a model payload. Compared to the Cluster mission, measurements of the neutral gas have to be added, ion spectrometers with sufficient mass separation ability have to be used, and the detection of low-energy ions has to be improved, whereas we do not require a very wide frequency range for waves. Still, most of the instruments are already available with acceptable masses and sensitivities, and developments are ongoing to further miniaturise neutral gas mass spectrometers for use on a CubeSat platform [76]. As of 2020, instruments for neutral gas velocity and temperature with sufficient sensitivity are not ready. However, a miniature prototype with very low sensitivity already flew on the Dellingr CubeSat in 2017 [77,78], and another design has been proposed by Shimoyama (private communication, 2019), as shown in Fig. 13. This challenge can therefore most likely be solved in the near future. The addition of a strong suite of neutral gas and composition measurements makes this mission unique compared to past missions.
The last mission that obtained density profiles of the thermosphere and exosphere is 40 years old (Dynamics Explorer 2) and did not cover a very high altitude range (309 km × 1012 km). All recent missions studying density profiles use line-of-sight integrated measurements, which rely strongly on models (for example, the temperature of the exosphere is derived from the scale height under an assumption of nearly constant temperature). By combining in-situ measurements and remote sensing measurements, we can construct density profiles without the assumptions that are inevitable for line-of-sight observations. In addition, no systematic measurements of isotope ratios exist for this region.

Support from the ground-based observations

In Tables 3 and 4, ground-based observations are included in support of many measurements. In principle, the mission is self-contained without such support, but the accuracy, and particularly the three-dimensional (3D) spatial coverage, improve significantly with ground-based conjugate observations. The relevant facilities that already exist or are planned to be installed in the near future are:
Incoherent scatter (IS) radars: Located in different regions (e.g., Europe, North America, and Asia), these radars, operating in the 200-1000 MHz range, can measure some key parameters described in §5.3 (electron temperature, electron density, ion temperature, and line-of-sight ion velocity) at different altitudes nearly simultaneously.
EISCAT_3D (https://eiscat.se): The next-generation IS radar will be able to cover a large 3D volume of space in a very short time thanks to phased antenna arrays, as opposed to a large parabolic dish, which can scan through a volume of space only very slowly. With high-power 5-10 MW transmitting and receiving antennas at the core site and 10,000 high-sensitivity receiving antennas at remote sites, EISCAT_3D covers a very wide area (> 300 km diameter at 500 km altitude with three sites); it is under construction toward operation in 2023 for the first three sites, and two additional sites are planned afterward. Even at a spacecraft velocity of 10 km/s (perigee velocity of a highly elliptic orbit), the spacecraft stays continuously within this volume for over 30 s during diameter traversals (instead of passing at some distance from the radar line of sight, as in current systems). Since the Earth is magnetized, often only geomagnetic conjugacy is required, which allows a much longer time sequence of continuous conjugate observations.
SuperDARN: With much lower power than IS radars, and with the array direction optimized to be nearly horizontal, HF radars (8-22 MHz) can provide the ion line-of-sight velocity over a wide region of Earth's upper atmosphere and ionosphere. SuperDARN is a network of such HF radars at more than 30 sites, and continuously provides the ionospheric convection. Although the detection altitude is below 150 km, this gives important information on the plasma motion.
Fabry-Pérot interferometer: The mid-thermosphere (around 240 km altitude) is the region where the airglow intensity is at its maximum. Optical interferometric measurement of the Doppler spectrum is a standard method to obtain the neutral motion and temperature at the altitude where the airglow emission is maximized [79]. Although this altitude is lower than the spacecraft perigee, this gives another important piece of information on the momentum transfer and convection in the upper thermosphere.
A scanning Doppler imager (SDI) is capable of measuring 2D wind and temperature fields from a snapshot of the all-sky image. The wide field-of-view increases the opportunity to obtain simultaneous measurements with satellites. An international team is working to deploy multiple SDIs in northern Scandinavia.
Magnetometers: Global and regional networks of magnetometers provide the most traditional ground-based support, indicating the ionospheric dynamics and electric currents that are important in evaluating the external energy, although there is no altitude resolution and the estimates are not perfect [80,81]. In recent years, Iridium satellites have also provided extra information about geomagnetic activity, improving the accuracy of the electric current system estimated from ground geomagnetic data.
OH airglow imager: The upper mesosphere-lower thermosphere temperature can be permanently monitored based on OH airglow measurements around the globe. This will provide input data on temperature variations due to plasma-neutral gas interactions in the upper atmosphere.

Science Closure

By covering the energy distributions of both the background neutrals and ions, together with their macroscopic parameters (n, v, T), the energy transfer can be measured at the distribution function level, e.g., whether or not a double-peaked distribution is formed when the velocities of ions and neutrals differ at low collision frequency. Thus, the effect of the external energy can be examined at the distribution function level, i.e., over a wide energy range. This addresses (A1) and (A4). By adding extra measurement points using small sub-satellites, such measurements can even address (A3). A combination of the in-situ and remote sensing measurements (remote sensing from both the spacecraft and the ground) will reveal the layer structure in the global context, addressing (A2). The composition data capture the chemical interactions, including photochemistry, addressing (B1). Combined with the observations of layer structure, this also addresses (B2).

Requirements for the spacecraft

For Earth missions, power and telemetry are not a major issue. Since the mission is oriented toward neutral species and ions rather than waves and fields, the magnetic cleanliness and EMC requirements do not exceed the Cluster level, e.g., a linearly regulated power system and a distributed single-point-ground power system. The other requirements for the orbit and spacecraft are:
• Cold xenon propulsion is preferable for orbit maneuvers and attitude control instead of ordinary propulsion that contains nitrogen (N), because atomic nitrogen and nitrogen ions are major components of the thermosphere and exosphere and any nitrogen contamination should be avoided. Similarly, propane (C3H8) should be avoided because it is easily dissociated into C2H3+, C2H4+ and C2H5+, covering the mass range of 14N2 (mass 28) and 14N15N (mass 29).
• External conductive surfaces are needed to keep the spacecraft potential as constant and as low as possible, together with active spacecraft potential control.
• The required pointing accuracy is 1° (0.1° knowledge) for both the main spacecraft and the remote sensing sub-spacecraft.
• A Sun-pointing constant attitude is preferable to maintain a constant exposure of the spacecraft surfaces to sunlight.
This helps to avoid evaporation of any condensed volatiles on the night side of the spacecraft, i.e., outgassing from the spacecraft associated with attitude changes (lessons from Rosetta [82,83]).
• All particle instruments shall be placed with unobstructed FOVs so as not to compromise the 3D observations.
With these requirements, a spacecraft with a dry mass of 350 kg is sufficient for the main spacecraft when the required power for the payload is less than 200 W (a reasonable requirement). For the sub-spacecraft, a dry mass of 50-60 kg for remote sensing and a dry mass of 20-30 kg each for multi-point measurements are feasible. This means that the total launch mass for all spacecraft would be < 700 kg for the Earth mission. A similar mission could be devised for Venus. In that case, some on-board processing of the data before sending them to Earth would be needed. Such processing could be done by each science payload. The requirement on the spacecraft would be sufficient mass memory (> 10 GByte). A main spacecraft dry mass of < 900 kg would be sufficient.

Technological challenges

As described in Section 6, new developments of, or improvements toward, lightweight instruments and low-cost small spacecraft are required to make the proposed observations feasible. For the instruments, the following developments and improvements are the technological challenges that must be solved for both the terrestrial mission and the Solar System missions:
• Develop a new instrument to measure the temperature of the background neutral gas for at least two major components other than H. For the exobases of the Earth and Venus, it should be able to measure N2 and O with an accuracy over a 10 s integration corresponding to the largest temperature variation: ∆T = 300 K (or 30% for heated events with > 1000 K). For velocity, an accelerometer can give an accuracy of up to tens of m/s.
• Develop a new instrument to measure the 2D velocity distribution of the background neutral particles for at least two major components other than H, with 10% sensitivity relative to the Maxwellian peak. The need for this measurement arises from extraordinary interactions beyond theoretical predictions that can result in multiple peaks in the velocity distribution.
• Improve existing instruments, or develop new ones, to measure the velocity and energy distributions of the background cold ions.
• Improve low-energy ENA instruments (< 100 eV) toward high angular resolution. This significantly improves the estimate of the substantial cross-section that produces ENAs in the 10-100 eV energy range, for which our knowledge even in the laboratory is poor.
These challenges are being addressed at many places, and the design shown in Fig. 13 is one such attempt. In addition, the spacecraft and operations side have some technological issues to address regarding reliability and cost, to make the proposed multi-spacecraft missions mentioned in Section 6 possible. Besides the required further optimization of the inter-spacecraft telemetry and the low-cost manufacturing (including management cost) of small sub-satellites, we have another challenge:
• The Sun-pointing constant attitude requirement implies frequent maneuver operations, because a maximum off-pointing from the solar direction cannot be maintained due to Earth's motion around the Sun (about 1° per day). Therefore, autonomous maneuvers using a Sun sensor should be developed to keep the cost low.
If not, as an alternative, a cold trap should be developed for the controlled condensation and re-evaporation of volatiles. This will avoid interference from the spacecraft background.
• Another autonomous system to be developed is radiation belt detection, so that all instruments can be turned on and off automatically. This is possible by using on-board data (an energetic particle detector and the background noise in particle instruments that use a microchannel plate) combined with a radiation belt model and space weather predictions.

Summary

In this White Paper, prepared in response to the European Space Agency (ESA) Voyage 2050 Call, we advocated the importance of advancing our knowledge of plasma-neutral gas interactions, and of deepening our understanding of the partially ionized environments that are ubiquitous in the upper atmospheres of planets and moons, and elsewhere in space. In future space missions, this task requires addressing the following fundamental questions: (A) How and by how much do plasma-neutral gas interactions influence the redistribution of externally provided energy to the composing species? (B) How and by how much do plasma-neutral gas interactions contribute toward the growth of heavy complex molecules and biomolecules? Most matter in stars and interstellar space is composed of free ions and electrons, whereas the majority of planets, satellites, small bodies, and their envelopes are composed of neutral species. Given our very limited understanding of plasma-neutral gas interactions, the small amount of neutral species in space above the exobase and the effects of electric charges on neutrals have been underestimated in considering plasma dynamics and the formation of planets, exoplanets, satellites, small bodies, and their atmospheres. However, recent space observations in the upper thermosphere and exosphere, where plasma-neutral gas collisions become important compared to neutral-neutral interactions, suggest that this lack of knowledge of the plasma-neutral gas interactions is a serious drawback when trying to describe neutral behavior in a tenuous plasma such as the upper thermosphere and exosphere. This raises the first question (A). Furthermore, the finding of organic matter, including amino acids and other building blocks of life, in comets and in interstellar space indicates that these are formed in low-temperature environments where neutral-ion interactions cannot be neglected with respect to neutral-neutral interactions. This is the chemical aspect of the energy re-distribution problem. Since the amount and types of the required energy differ from those of the physical energy re-distribution, the chemical aspect raises its own question (B). Answering these questions is an absolute prerequisite for addressing the long-standing questions of atmospheric escape and the origin of biomolecules, and their roles in the evolution of planets, moons, and comets under the influence of energy sources in the form of electromagnetic and corpuscular radiation. The study of ion-neutral and electron-neutral interactions requires accurate measurements of plasma and neutral species in the relevant partially ionized media, including the composition of the neutral and ion species and the velocity distributions of ions and electrons, as well as the ambient energy, which is characterized by electric and magnetic fields, radiation, and temperature.
Since such complicated environments, particularly under the influence of various electromagnetic fields and with complicated composition, temperature, and radiation fluxes, cannot easily be reproduced in a laboratory, the only way to understand the plasma-neutral gas interactions in space is through in-situ observations in various space environments, a task suited to space missions. In particular, observations are needed in low-density environments with substantial neutral particle content, for example, in the upper ionosphere near the exobase of a planet or natural satellite, in comets, or in interstellar space. Ideally, measurements should be performed in partially ionized plasmas under diverse thermal conditions, for example, from extremely low to moderately high temperatures. A long-period comet is one obvious candidate because it covers wide density and temperature ranges. The diversity of the target environments can also be achieved through several different missions in collaboration with other space agencies. In this respect, we can start with a mission at a nearby planet (Earth or Venus) as a small- or medium-class mission, while we can also contribute relevant instrumentation to possible large-class Solar System missions (e.g., an interstellar probe, an ice giant mission, or a long-period comet mission). In this article, we have described one possible mission scenario for the Earth's upper atmosphere, which can be copied for Venus, while the space physics community is simultaneously submitting White Papers devoted to the other relevant environments.
A multi-parameter diagnostic clinical decision tree for the rapid diagnosis of tuberculosis in HIV-positive patients presenting to an emergency centre

Background: Early diagnosis is essential to reduce the morbidity and mortality of HIV-associated tuberculosis. We developed a multi-parameter clinical decision tree to facilitate rapid diagnosis of tuberculosis using point-of-care diagnostic tests in HIV-positive patients presenting to an emergency centre.
Methods: A cross-sectional study was performed in a district hospital emergency centre in a high-HIV-prevalence community in South Africa. Consecutive HIV-positive adults with ≥1 WHO tuberculosis symptom were enrolled over a 16-month period. Point-of-care ultrasound (PoCUS) and the urine lateral flow lipoarabinomannan (LF-LAM) assay were done according to standardized protocols. Participants also received a chest X-ray. The reference standard was the detection of Mycobacterium tuberculosis using Xpert MTB/RIF or culture. Logistic regression models were used to investigate the independent association between prevalent microbiologically confirmed tuberculosis and clinical and biological variables of interest. A decision tree model to predict tuberculosis was developed using the classification and regression tree algorithm.
Results: There were 414 participants enrolled: 171 male, median age 36 years, median CD4 cell count 86 cells/mm3. Tuberculosis prevalence was 42% (n=172). Significant variables used to build the classification tree included ≥2 WHO symptoms, antiretroviral therapy use, LF-LAM, the independent PoCUS features (pericardial effusion, ascites, intra-abdominal lymphadenopathy) and chest X-ray. LF-LAM was positioned after WHO symptoms (75% true positive rate, representing 17% of the study population). Chest X-ray should be performed next if LF-LAM is negative. The presence of ≤1 PoCUS independent feature in those with 'possible or unlikely tuberculosis' on chest X-ray represented 47% of non-tuberculosis participants (true negative rate 83%). In a prediction tree which included only true point-of-care tests, a negative LF-LAM and the presence of ≤2 independent PoCUS features had a 71% true negative rate (representing 53% of the sample).
Conclusions: LF-LAM should be performed in all adults with suspected HIV-associated tuberculosis (regardless of CD4 cell count) presenting to the emergency centre.

Introduction

Tuberculosis remains an important cause of morbidity and mortality globally, despite ongoing control efforts 1 . The early diagnosis and successful treatment of people with tuberculosis should reduce the risk of mortality and morbidity, and decrease the transmission of tuberculosis 2 . Factors associated with delays in the diagnosis of tuberculosis include the limitations of tuberculosis diagnostic tests, the limited availability of these tests in high-burden settings, and the reduced diagnostic performance of tuberculosis tests in people living with HIV (PLWH) 3-5 . In PLWH with advanced immunosuppression, the diagnosis of active tuberculosis is challenging due to more atypical clinical presentations; other opportunistic infections with similar presentations; a high proportion of patients unable to produce sputum or with negative sputum smears; and high rates of extra-pulmonary and disseminated tuberculosis 6-11 . Autopsy studies in HIV-positive adults report a very high proportion with tuberculosis (32% to 47%), almost half (46%) of which was undiagnosed pre-mortem 12 .
The WHO recommends that HIV-positive patients should be systematically screened for active tuberculosis when visiting a healthcare facility 2 . Many patients access the healthcare system through hospital emergency centres. The prevalence of HIV-related admissions to emergency centres varies, with up to 43% documented in Uganda 13 . These patients are often severely ill and would benefit from prompt diagnosis and treatment of tuberculosis to decrease mortality 14 . The use of point-of-care diagnostic tests would facilitate the rapid diagnosis of tuberculosis. Lateral flow lipoarabinomannan (LF-LAM) is currently the only true point-of-care test, with other tests (e.g. smear microscopy, Xpert MTB/RIF, Xpert MTB/RIF Ultra, GeneXpert OMNI, and portable digital chest X-ray) being near point-of-care tests 15 . Point-of-care ultrasound (PoCUS) is also a potentially useful test for extra-pulmonary or disseminated tuberculosis 16 . No evidence-based algorithm incorporating clinical information, individual PoCUS features, and urine LF-LAM for diagnosing tuberculosis in HIV-positive patients currently exists. We performed a cross-sectional diagnostic study and developed a multi-parameter clinical decision tree to facilitate rapid diagnosis of tuberculosis in HIV-positive patients presenting to an emergency centre.

Methods

Study setting and participants

Khayelitsha is a township with a mix of formal and informal housing in Cape Town, South Africa. The Khayelitsha Health sub-district has an antenatal HIV prevalence of 34% 17 , and an annual tuberculosis notification rate of 917 per 100,000 persons 18 . The emergency centre of Khayelitsha Hospital (a district-level hospital) manages approximately 35,000 patients per annum, with an admission rate of around 30%. The HIV prevalence of patients managed in the resuscitation unit is 23% 19 . Inclusion criteria were: adults (≥18 years); HIV-positive (HIV status was determined by laboratory confirmation or from the clinical records); and the presence of at least one symptom of the WHO's recommended four-symptom screening rule for tuberculosis in PLWH (cough of any duration, fever, drenching night sweats, or weight loss) 20 . Exclusion criteria were: presenting to the emergency centre more than 24 hours before screening; having received anti-tuberculosis treatment within 3 months of screening; pregnancy; a main clinical presentation of meningitis syndrome or new focal neurology; and trauma, gynaecological or psychiatric presentations. Data from this cohort relating to LF-LAM and PoCUS were previously published 21,22 . These manuscripts described the use of LF-LAM in an acute care setting and identified PoCUS features independently associated with HIV-associated tuberculosis 21,22 . All participants provided written informed consent using a two-phase consent process. Severely ill participants were provided with a short one-page consent form indicating what extra tests would be done and how these would be used.
Procedures and samples

Consecutive patients evaluated at the emergency centre were screened for eligibility from June 2016 through October 2017. A standardized data collection form was used to record demographic and clinical information. Urine, sputum and blood samples were obtained from all patients whenever possible (see Extended data) 23 . Fresh urine samples were tested using the Xpert MTB/RIF assay (GX4) (Cepheid Inc., Sunnyvale, CA, USA) and for the presence of LAM (Alere Determine™ TB LAM Ag test, Alere Inc., Waltham, MA, USA); LF-LAM was performed in the emergency centre 21 . The Alere Determine™ TB LAM Ag test was used since it was the only commercially available test at the time. Sputum specimens were tested using the Xpert MTB/RIF assay (GX4) and cultured in mycobacterial growth indicator tubes (MGIT; Becton Dickinson, Sparks, MD, USA). Mycobacterial blood cultures were performed using the BACTEC MYCO/F Lytic blood culture bottle (Becton Dickinson, Sparks, MD, USA). The MTBDRplus assay (Hain Lifescience, Nehren, Germany) was used to identify culture isolates as M. tuberculosis complex. Complete blood counts and CD4 cell counts were done as part of routine clinical care. CD4 cell count results were accepted if performed within 3 months of enrolment. The National Health Laboratory Service performed all the tests. The ultrasound examination was performed in the emergency centre and the findings were documented on a standardized assessment form. A single emergency physician (with adequate training and credentials as specified by the International Federation of Emergency Medicine's Emergency Ultrasound Special Interest Group 24 ) performed the ultrasound examinations using either a Mindray M5™ ultrasound system with a 3C5s (2.5-6.5 MHz) convex probe and a 7L4s (5.0-10 MHz) linear probe (Mindray DS USA, Inc., Mahwah, NJ, USA) or a NanoMaxx™ ultrasound system with an L38n (10-5 MHz) linear array probe and a C60n (5-2 MHz) curved array probe (SonoSite Inc., Bothell, WA, USA). Ultrasound examinations were performed before any specimens were collected.
At the time of the ultrasound, the point-of-care sonographer had access to the clinical information but not to the results of the reference standard (detection of M. tuberculosis by Xpert MTB/RIF and/or culture on any specimen obtained from any anatomical site). Chest x-rays were reviewed by a single radiologist using a standardized assessment form (see Extended data) 23 . Chest x-rays were classified as unlikely tuberculosis, possible tuberculosis, or likely tuberculosis. The radiologist had no access to clinical information or the reference standard.

Statistical analyses

The sample size was determined with the aim of allowing more than the recommended 10 candidate predictors (including interaction terms) in the multivariable logistic regression analyses 25 . The tuberculosis prevalence in HIV-positive patients in the emergency centre is around 25% 19 , and a sample size of 400 HIV-positive participants was deemed adequate to include 100 tuberculosis cases. Data were analysed using SAS/STAT® software. Individual PoCUS features were determined by univariable analysis using a 10% significance level 22 . For PoCUS features where different thresholds for positivity exist (e.g., the size of intra-abdominal lymph nodes), the lowest threshold was included. Individual PoCUS features included any sized pericardial effusion, pleural effusion, ascites, any focal splenic lesion, and any sized intra-abdominal lymphadenopathy 22 . Independent PoCUS features were determined by multivariable logistic regression 22 . The PoCUS features independently associated with tuberculosis were pericardial effusion of any size, ascites, and intra-abdominal lymphadenopathy of any size 22 . For correlated variables, when more than one index was significant in a univariate model, the one with the more significant effect on the -2 log L statistic was entered into the multivariable model first. However, in the final model, the effect of substituting variables was also assessed. When more than one correlated variable was significant in multivariable models, the final model selected was the one associated with the smallest Akaike information criterion (AIC), a statistic derived from the -2 log L statistic. Multivariable model building was based on the combination of variables significant in univariable models (based on a threshold of p<0.10). A model comprising the WHO screening symptoms and a history of current antiretroviral therapy use was used as the starting model 20 . The ability of the logistic regression models to discriminate between participants who had and those who did not have microbiologically confirmed tuberculosis was assessed using the area under the receiver operating characteristic curve (AUC) and the relative integrated discrimination improvement (RIDI), which measures the percentage increase in discrimination when an extra variable is added to a prediction model 27,28 . AUC comparisons used nonparametric methods 29 . Bootstrap techniques, based on 1000 replications, were used to derive the 95% confidence intervals (CI) for the RIDI estimates. We developed a decision tree model to predict microbiologically confirmed tuberculosis, including variables from the best performing multivariable logistic regression model, using the classification and regression tree (CART) algorithm and the rpart package (version 4.1-11) of the R statistical software. The CART algorithm builds a tree model through recursive partitioning, a process by which the data are successively split into increasingly homogeneous subgroups.
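As an illustration of such recursive partitioning (the splitting criteria are detailed further below), a minimal sketch using scikit-learn's CART implementation in Python; the study itself used R's rpart as stated above, and the feature names and synthetic labels here are placeholders, not study data:

```python
# Fit a CART classifier on synthetic stand-ins for the study's predictors
# and print the resulting splits. Real fitting would use the cohort data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 414  # cohort size reported in the study
features = ["who_symptoms_ge2", "on_art", "lf_lam_positive",
            "n_pocus_features", "cxr_class"]
X = np.column_stack([
    rng.integers(0, 2, n),  # >=2 WHO symptoms (0/1)
    rng.integers(0, 2, n),  # current ART use (0/1)
    rng.integers(0, 2, n),  # urinary LF-LAM result (0/1)
    rng.integers(0, 4, n),  # number of independent PoCUS features (0-3)
    rng.integers(0, 3, n),  # chest x-ray: 0 unlikely, 1 possible, 2 likely
])
y = rng.integers(0, 2, n)   # confirmed tuberculosis (synthetic labels)

tree = DecisionTreeClassifier(
    criterion="gini",     # the impurity function named in the text
    min_samples_leaf=5,   # mirrors the stopping rule of <5 participants
).fit(X, y)
print(export_text(tree, feature_names=features))
```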
At each stage (also known as a node), the algorithm selects a predictor and a cut-point associated with the predictor's best ability to discriminate participants with tuberculosis from those without. This was less of an issue in the current analyses, which had no continuous predictors. However, for class variables with more than two levels, the algorithm could collapse levels in order to achieve the best discrimination. CART starts with one predictor, then adds other predictors (and nodes) until it reaches homogeneous groups, arrives at subgroups with few participants (<5), or exhausts the predictors that can contribute further to subgroup refinement. Due to the small size of the achieved tree, no pre- or post-pruning was applied. CART uses a generalization of the binomial variance (the Gini index) as its impurity function, and employs 10-fold cross-validation to estimate error rates. The algorithm code is available as Extended data 30 .

Results

Demographic and clinical characteristics of participants with and without confirmed tuberculosis are presented in Table 2. The median CD4 cell count was 86 cells/mm3 (25th-75th percentile, 30-218). The alternative diagnoses and the reasons for a clinical tuberculosis diagnosis in participants without microbiologically confirmed tuberculosis are presented in Table 3 and Table 4. The all-cause in-hospital mortality was 7.2% (n=30); 15 of these patients had confirmed tuberculosis (representing an in-hospital mortality of 8.7% in those with confirmed tuberculosis). These individual-level data are available at Zenodo 31 .

Multivariable model

Measures of model performance are summarized in Table 6. The initial model (WHO screening symptoms ≥2, antiretroviral therapy use) had poor discriminatory power in predicting confirmed tuberculosis, with an AUC of 0.615. The addition of either the PoCUS independent features or the PoCUS individual features to the initial model improved both the model's goodness of fit and its discriminatory power; however, the model with the PoCUS independent features had a greater AUC and a smaller AIC. The further addition of urinary LF-LAM and chest x-ray improved the model. Adding the CD4 cell count did not improve the performance of the model (Table 6). Based on the RIDI% estimates, adding urinary LF-LAM, the PoCUS independent features, and chest x-ray to the initial and subsequent models conferred similar levels of improvement for tuberculosis prediction (Table 7). The change in RIDI% was negligible when the CD4 cell count was added to the model comprising the WHO symptom screen, antiretroviral therapy use, PoCUS independent features, urinary LF-LAM and chest X-ray (RIDI% 2.6 (2.4-2.7)).

Prediction tree

Significant variables (Model F in Table 7) were included in the splitting process to build the classification tree for microbiologically confirmed tuberculosis. The CART created for confirmed tuberculosis is shown in Figure 2, and the CART as applied to a theoretical cohort of 1000 patients is presented in Figure 3. The CART analysis suggests that, once patients are screened via WHO symptoms as eligible for further diagnostic investigation, the number of WHO symptoms present does not add further to the discrimination of people with tuberculosis from those without. Furthermore, CART positions urinary LF-LAM as the next screening test after WHO symptoms, with 75% of people with a positive urinary LF-LAM test (17% of all those with positive WHO symptoms) having a definitive diagnosis of microbiologically confirmed tuberculosis (Figure 2 and Figure 3). For those with a negative urinary LF-LAM, CART positions chest x-ray as the next screening test.
Results
Demographic and clinical characteristics of participants with and without confirmed tuberculosis are presented in Table 2. The median CD4 cell count was 86 cells/mm3 (25th-75th percentile, 30-218). The alternative diagnoses and the reasons for a clinical tuberculosis diagnosis in participants without microbiologically confirmed tuberculosis are presented in Table 3 and Table 4. The all-cause in-hospital mortality was 7.2% (n=30), 15 of whom had confirmed tuberculosis (representing 8.7% of those with confirmed tuberculosis). These individual-level data are available at Zenodo31.

Table 4. Reasons for a diagnosis of tuberculosis without microbiological confirmation (diagnostic test: n)
Suggestive formal abdominal ultrasound done in radiology department: 19
Suggestive chest X-ray: 9
Positive urine lateral flow lipoarabinomannan (LF-LAM): 7
Suggestive formal abdominal ultrasound and suggestive chest X-ray: 6
Not improving on empiric antibiotics: 4
Raised adenosine deaminase (ADA) in effusion fluid (pleural or ascitic): 4
Cerebrospinal fluid suggestive of tuberculous meningitis (TBM): 4
Suggestive chest X-ray and positive urine LF-LAM: 3
Suggestive formal abdominal ultrasound and positive urine LF-LAM: 2
Psoas abscess on formal ultrasound: 2
Caseous necrosis on biopsy (histology): 1
Suggestive computer tomography (CT) scan of abdomen: 1
Suggestive chest X-ray and raised ADA in effusion fluid: 1
Total: 63

Multivariable model
Measures of model performance are summarized in Table 6. The initial model (WHO screening symptoms ≥2, antiretroviral therapy use) had poor discriminatory power in predicting confirmed tuberculosis, with an AUC of 0.615. The addition of either the PoCUS independent features or the PoCUS individual features to the initial model improved both model goodness of fit and its discriminatory power; however, the model with the PoCUS independent features had a greater AUC and a smaller AIC. The further addition of urinary LF-LAM and chest x-ray improved the model. Adding CD4 cell count did not improve the performance of the model (Table 6). Based on RIDI% estimates, adding urinary LF-LAM, PoCUS independent features, and chest x-ray to the initial and subsequent models conferred similar levels of improvement for tuberculosis prediction (Table 7). The change in RIDI% was not meaningful when CD4 cell count was added to the model comprising the WHO symptom screen, antiretroviral therapy use, PoCUS independent features, urinary LF-LAM and chest x-ray (RIDI% 2.6 (2.4-2.7)).

Prediction tree
Significant variables (Model F in Table 7) were included in the splitting process to build the classification tree for microbiologically confirmed tuberculosis. The CART created for confirmed tuberculosis is shown in Figure 2, and the CART as applied to a theoretical cohort of 1000 patients is presented in Figure 3. The CART analysis suggests that, once screened via WHO symptoms as eligible for further diagnostic investigations, the number of WHO symptoms present does not add further to the discrimination of people with tuberculosis from those without. Furthermore, CART positions urinary LF-LAM as the next screening test after WHO symptoms, with 75% of people with a positive urinary LF-LAM test (17% of all those with positive WHO symptoms) having a definitive diagnosis of microbiologically confirmed tuberculosis (Figure 2 and Figure 3). For those with negative urinary LF-LAM, CART positions chest x-ray as the next screening test. Chest x-ray appears twice, but with complementary and not overlapping contributions. The first appearance of chest x-ray (after those with negative urinary LF-LAM) serves to separate participants with 'likely tuberculosis' on chest x-ray from those with 'possible or unlikely tuberculosis' on chest x-ray. The presence of one or no PoCUS independent features in those with 'possible or unlikely tuberculosis' on chest x-ray (47% of the starting sample) isolates 83% of this subgroup (representing 39% of the starting sample) in whom tuberculosis was not microbiologically confirmed (Figure 2 and Figure 3). The second appearance of chest x-ray occurs in participants with ≥2 PoCUS independent features and serves to separate those with 'possible tuberculosis' on chest x-ray from those with 'unlikely tuberculosis' on chest x-ray. The validation of the decision tree is presented in Figure 4. We created a second decision tree to make it more clinically applicable by removing the history of antiretroviral therapy (ART) status, because ART interruption is often not disclosed and ART status may be unavailable in confused patients (Figure 5 and Figure 6). The branch on the original tree relating to antiretroviral therapy no longer expands, narrowing down what to decide for the 24% of the sample with negative urinary LF-LAM and 'likely tuberculosis' on chest x-ray. Just over half (56%) of these participants will have confirmed tuberculosis. We created a third prediction tree by excluding only chest x-ray, which is not a true point-of-care test (Figure 7 and Figure 8). CART positions PoCUS as the next screening test for those with a negative urinary LF-LAM. The presence of two or fewer independent PoCUS features (75% of the starting sample) had a true negative rate of 71% (representing 53% of the starting sample) in the subgroup where tuberculosis was not microbiologically confirmed.
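Read as a bedside triage sequence, the branch order reported above can be paraphrased as a plain rule-based function. The sketch below is an illustrative simplification of Figures 2, 5 and 7, not the fitted CART model itself, and the quoted rates only echo the cohort-specific figures given in the text.

# Simplified triage order suggested by the trees; illustrative only.
triage <- function(who_positive, lam_positive,
                   cxr_available = TRUE, cxr_likely_tb = NA,
                   n_pocus_features = NA) {
  if (!who_positive) return("not eligible for further TB work-up at this step")
  if (lam_positive)  return("treat as probable TB (75% confirmed in this cohort)")
  if (cxr_available) {
    if (isTRUE(cxr_likely_tb)) return("substantial TB probability; test further")
    return("count PoCUS independent features; <=1 feature argued against TB here")
  }
  # Point-of-care-only branch (no chest x-ray available)
  if (!is.na(n_pocus_features) && n_pocus_features == 3)
    return("rule in: 100% true positive rate in this cohort")
  "further diagnostic testing needed; PoCUS could not rule TB out"
}

triage(TRUE, TRUE)                                               # LF-LAM positive
triage(TRUE, FALSE, cxr_available = FALSE, n_pocus_features = 3) # PoCUS rule-in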
Discussion
We developed a prediction tree to diagnose HIV-associated tuberculosis in an emergency centre in a high-burden setting. The variables selected on multivariable analysis for inclusion in the final model were the presence of ≥2 WHO screening symptoms, current antiretroviral therapy use, urinary LF-LAM, independent PoCUS features, and chest x-ray. The CART analysis positioned urinary LF-LAM as the first test to perform in participants with positive WHO screening symptoms, followed by chest x-ray. We also developed a simplified prediction tree by excluding chest x-ray, which is not a true point-of-care test: CART positioned PoCUS as the next screening test for those with a negative urinary LF-LAM. Urinary LF-LAM was the predictor with the best ability to create pure groups (either with or without tuberculosis), classifying almost 25% of the study sample (75% of which were true positives) regardless of CD4 cell count. The false positive rate of 25% is lower than in a recent Cochrane review, in which 33% of participants with tuberculosis symptoms had a false positive urinary LF-LAM result for microbiologically confirmed tuberculosis32. However, inappropriate exclusions (e.g. participants unable to produce sputum), different enrolment criteria and different CD4 cell counts could potentially explain the higher false positive rate seen in the Cochrane review32. Another urine-based LAM assay, Fujifilm SILVAMP TB LAM (FujiLAM; Fujifilm, Tokyo, Japan), has [...].

The performance of PoCUS when chest x-ray is available is limited (Figure 2 and Figure 3). One of every 11 PoCUS examinations will be 'positive' (i.e. two or more PoCUS independent features), but an evaluation of the chest x-ray would still be needed to refine the classification of patients with and without tuberculosis. A 'negative' PoCUS examination (i.e. the presence of ≤1 PoCUS independent feature) will only rule out 39% of all patients with a clinical suspicion of tuberculosis. This supports other studies and the current WHO guidelines, which regard ultrasound as an additional diagnostic tool that should not replace chest x-ray as the initial imaging step to diagnose tuberculosis in HIV-positive patients20,37. However, chest x-ray is not a true point-of-care test, unlike PoCUS. In acute care settings where chest x-ray is not readily available, PoCUS has a 100% true positive rate when all 3 independent features are detected, indicating its potential value as a rule-in test; however, 39 PoCUS examinations would need to be performed to confidently diagnose one additional patient among those who had a negative LAM. This number needed to scan is likely to increase when the approach is used in areas with a lower tuberculosis prevalence (and vice versa). The presence of ≤2 PoCUS independent features will rule out 53% of patients with a clinical suspicion of tuberculosis in situations where chest x-ray is not available; however, the high false negative rate (29%, 218/750) indicates that PoCUS cannot be used as a rule-out test and these patients will need to undergo further testing.

The use of urinary LF-LAM should be prioritised in all HIV-positive patients (regardless of CD4 cell count and clinical condition) who present to the emergency centre with WHO tuberculosis symptoms. Although a result can be obtained after 25 minutes, obtaining a urine sample can add substantial time. The history of current ART use should be obtained if the patient's condition allows, as it further refines the diagnostic ability of the algorithm by increasing both the true positive and the true negative rate. Chest x-ray should still be performed if available. In these settings, the value of PoCUS becomes doubtful due to the low positive yield (5%) and the need for further interpretation of a chest x-ray to better classify cases and non-cases. Although 47% of patients will have negative results for urinary LF-LAM, chest x-ray and PoCUS, the true negative rate is only 83%, too low to confidently rule tuberculosis out. In emergency centres without chest x-ray availability (e.g. limited resources, restricted radiology consulting times), physicians can confidently diagnose tuberculosis in patients in whom all three independent PoCUS features are present (true positive rate 100%). However, only 2% of the PoCUS examinations are expected to be positive, and one can argue whether the time spent performing the PoCUS is worthwhile. The 71% true negative rate again indicates the need for further diagnostic testing.
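The "number needed to scan" arithmetic used above is simply the reciprocal of the yield of additional confirmed diagnoses per examination; a toy check (the first yield echoes the cohort figure, the second is an invented lower-prevalence scenario):

nns <- function(yield_per_scan) 1 / yield_per_scan   # scans per extra diagnosis

nns(1 / 39)          # ~39, the figure reported for this cohort
nns(0.5 * (1 / 39))  # halving the yield (lower TB prevalence) doubles NNS to ~78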
Our study has some limitations. Our findings may not be generalizable, as the study was conducted in a single emergency centre in a high TB/HIV-prevalence setting; a single, experienced operator performed all the PoCUS examinations; and the chest x-rays were interpreted by a single experienced radiologist. The individual and independent PoCUS features were based on a single study and need further evaluation. The main strength of our study is the robust microbiological reference standard composed of TB culture and Xpert MTB/RIF performed on multiple samples from different anatomical sites. However, it is still possible that some TB cases were missed by the reference standard. The study was also performed under routine conditions experienced in the emergency centre. Lastly, robust analytic strategies were used to develop and validate the diagnostic decision tree.

Conclusion
We developed a near-patient and point-of-care decision tree for the diagnosis of HIV-associated tuberculosis in acute care settings. Implementing this decision tree following screening via WHO symptoms can allow immediate initiation of TB treatment within the emergency centre in about a quarter of suspected patients, among whom 75% would have microbiologically confirmed tuberculosis, or withhold such treatment in nearly half of suspected patients, among whom less than 18% will have microbiologically confirmed tuberculosis. Urinary LF-LAM had a 75% true positive rate, representing 17% of participants with positive WHO screening symptoms regardless of CD4 cell count, and its use should be prioritised. The contribution of PoCUS in the context of urinary LF-LAM and chest x-ray availability was limited, due to the low positive yield, the need for further chest x-ray interpretation and the high false negative rate. In acute care settings without chest x-ray availability, PoCUS has a 100% true positive rate, but will only affect 2% of eligible patients. The role of PoCUS as a rule-in test to diagnose HIV-associated tuberculosis in the emergency centre needs to be further investigated.

Peter MacPherson
Liverpool School of Tropical Medicine, Liverpool, UK

Thanks for asking me to review this manuscript. Very nice study, and clearly reported. I have only a few comments the authors should address.

Major comments
1. Prevalence of microbiologically-confirmed TB is very high in this population, and health facilities seem to be reasonably well resourced. The authors should add text to the Discussion to discuss generalisability to settings in Africa outside of the Khayelitsha area.
2. The CD4 cell count profile is low, and antiretroviral therapy coverage considerably lower than we have seen in other settings. It would be good to add a sentence in the discussion to reflect on how the accuracy, performance, and applicability of the final decision tree may be affected in other settings on the African continent where we see different patterns of ART coverage.
3. Presumably HIV viral load measurements were not available for inclusion in the model? We have seen in several studies now a moderately high prevalence of detectable viraemia in HIV-positive adults who reported taking ART, and it is strongly associated with adverse clinical outcomes. I could imagine that HIV viral load measurement (e.g. using the GeneXpert platform) might be a useful diagnostic predictor of prevalent TB.
It would be worth adding a sentence to discuss ART coverage and viral failure in the limitations section of the Discussion.
4. In Figure 1, it is not clear what the reasons were for "research related problems" in the 55 participants who were excluded. The authors should add a sentence to describe these participants in more detail in the results, and provide reassurance that these exclusions have not resulted in selection/spectrum bias.
5. Discussion: the sentence about coverage of urine LAM is a little outdated, and several more countries have rolled out this test, including Malawi.

POCUS is critically dependent on trained operators and high levels of quality assurance, which are not usually available in most settings in Africa. In most settings, operators trained in POCUS will not be available, whereas digital chest x-ray with CAD is often available, meaning that, pragmatically, the decision-tree placement of POCUS prior to CXR doesn't necessarily make sense. I would perhaps temper the paragraph about the utility of POCUS, especially given the suboptimal performance found here and in other studies.

If applicable, is the statistical analysis and its interpretation appropriate?
Yes
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: HIV, TB, Public Health, epidemiology, randomised controlled trials, diagnostic accuracy evaluations, global health

Infectious and Tropical Diseases Department, San Bortolo Hospital, Vicenza, Italy

This manuscript aims to describe the use of LF-LAM in an acute care setting and the PoCUS features identified as independently associated with HIV-associated tuberculosis. The study then proposes a decision algorithm for the diagnosis of TB in HIV patients, including point-of-care tests, as the result of a regression tree algorithm (CART). This algorithm combines the only two tests really available at the patient bedside with the true characteristics of a point-of-care test, LF-LAM and PoCUS, and the "classic" tests, CXR and GeneXpert PCR plus culture of the sputum or pus; the two latter tests are now considered the gold standard. The scenario is peculiar: the patients accessing this hospital have a high prevalence of HIV and are very deeply immunosuppressed, with very low CD4 counts (never treated before for both conditions, or only treated for HIV).

General comments: Portable CXR and GeneXpert, reported in the text as "near point-of-care tests", have really improved the opportunity for TB diagnosis in the last decades and are now the standard diagnostic tools in many hospitals across different resource settings. However, the clinical impact of these technologies on the TB epidemic and mortality has not been so impressive, particularly if we consider patients living with HIV. Point-of-care tests like LF-LAM and PoCUS are available also in resource-limited settings (RLS) at affordable costs and can be performed in small and rural hospitals with basic labs, without the need for referral centers. This would be a great advantage and an addition in the COVID era.

Limitations of the study (some of the limitations are already discussed in the study):
1. The authors do not report the TB treatment nor considerations of the patients' clinical outcome, so these tests are evaluated only on their performance in the diagnosis of TB (compared with the gold standard of culture), not considering the efficacy of treatment.
2. In the file "clinical data of the patients", data on HIV treatment are very limited (see questions).
3. As at least two different types of LF-LAM tests are commercially available and have different performances, the authors should explain their choice (LAM Alere Ag test) in the methods section and discuss in depth the performances of the different tests commercially available.
4. Intrinsic limitations of the sensitivity of LF-LAM were described in a recent paper by Tlali et al. (2020)1; the authors should discuss this paper.
5. POCUS examinations were performed by a single, experienced operator. A second blinded reviewer of the clips and/or sonographer would have added value to the results, considering that US is a repeatable test.
6. The difference between the "individual" and "independent" aspect of the US is based on weak evidence, and considering that the result of this differentiation is that the focal splenic lesions and pleural effusions are dropped, I'd delete it. Or, the authors should just propose it separately and say that it needs further evaluation.

Questions and suggestions: HAART is a relevant issue in these patients. In the database, a considerable proportion of patients appear to be on HAART, so it is unclear why these patients were so deeply immunosuppressed; failure of the therapy? Non-compliance? The authors report that the patients are often confused at admission. This is true. However, failure of HAART could either have had a role in, or be a consequence of, TB infection: a brief comment on this aspect in the discussion would be appreciated. Also, I've noticed that no data on the VL are available; probably the test was not done. Please add it.

Surprisingly, stratifying the patients in the cohort into different CD4 groups (<100 cells/mm3, 100-200 cells/mm3, >200 cells/mm3) does not have any impact. Probably, you should have considered another group with a higher level of CD4 and a working active therapy in the stratification (if that kind of patient was included in the cohort). If CD4 doesn't matter, then HIV status could also have been not as relevant as presumed for the performance of LF-LAM. You should add some considerations on these aspects.

Neurological presentation was considered an exclusion criterion (Figure 1), but, in Table 1, 13 patients appear to have a CSF sample. Please explain why these patients had a lumbar puncture; it would be interesting.

In Table 3, I'd link with * the 63 patients with clinical diagnoses of tuberculosis in Table 3 with Table 4.

Surprisingly, not one patient was diagnosed with ARL (AIDS-related lymphoma), which is the principal differential diagnosis (both clinically and on US) in TB-HIV patients: I would speculate some selection bias (is it possible that patients with suspected lymphoma are preselected for referral centers? Sometimes it happens in African settings).

Only 1 case of NTM was diagnosed in this cohort: this is interesting because in the work of Nel et al. (2017)2 a considerable rate of false-positive LF-LAM tests was found, and the cohort includes a high number of patients with a low CD4 count.
In Table 4, some of the "Reasons for diagnosis of tuberculosis without microbiological confirmation", such as "Suggestive formal abdominal ultrasound done in radiology department", are really unclear; data on the true outcome of patients, if available, would be of particular interest.

I partially disagree with the final consideration "The contribution of PoCUS in the context of urinary LF-LAM and chest X-ray availability was limited, due to the low positive yield, the need for further chest x-ray interpretation and the high false-negative rate....", because if you consider that LAM has good sensitivity but lower specificity, PoCUS, whose specificity is very high, could be the perfect tool. Please add this consideration to the conclusions.

Final comment: it's common thinking that "we have CXR, GeneXpert, and QGT, and we don't need further tests". Theoretically, it seems difficult to support the need for further tests for TB. But, in "real life", there is a considerable "grey area" of challenging cases, particularly in patients with extrapulmonary TB, needing expensive second-level tests, like CT scan, PET-CT, laparoscopy with biopsy, staining and pathologists, all not available in the low-resource settings (LRS) where most patients with TB live. The alternative is to start an "empirical treatment" without any scientific evidence of TB except the WHO symptoms. But this is a rough approach in the MDR-TB era. The authors present a scenario with an extremely high prevalence of TB (47%) and tremendously poor treatment of HIV. This peculiar situation might have influenced the results: further studies should be carried out in different settings, including a limited-resource setting with a lower prevalence of HIV (and TB); a database including treatment and outcomes of the patients would add a lot. I suggest adding these considerations to the conclusions. To my knowledge, urinary LAM is increasingly used in LRS hospitals, but the test has some limitations and probably should be improved. However, there probably isn't a perfect test in TB, and the combination of different tests is the better choice at the present moment.

I apologize to the editor and authors, but my knowledge of statistics is limited, so my review is incomplete and only focused on the clinical aspects of the work. I think a mathematics or statistics expert should be included as a reviewer of this manuscript.

Thank you for asking me to review this manuscript. I read it with high interest, and I found it accurate, written with conviction, and powerfully argued.

Reviewer Expertise: I'm a full-time clinician in Infectious Diseases and Tropical Medicine in San Bortolo Hospital, Vicenza, Italy. I started using ultrasound, POCUS and the interventional US in the setting of infective patients, including HIV/TB patients, in 1995 and exported my experience to remote settings with very constrained resources.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.

Surprisingly, stratifying the patients into different CD4 groups (<100 cells/mm3, 100-200 cells/mm3, >200 cells/mm3) does not have any impact. Probably, you should have considered another group with a higher level of CD4 and a working active therapy in the stratification (if that kind of patient was included in the cohort). If CD4 doesn't matter, then HIV status could also have been not as relevant as presumed for the performance of LF-LAM.
You should add some considerations on these aspects.

We unfortunately did not include a 4th group representing higher CD4 cell counts. We did comment on the use of urinary LF-LAM in participants with positive WHO screening symptoms regardless of CD4 cell count. This is an area that needs further exploration.

The neurological presentation was considered an exclusion criterion (Figure 1), but, in Table 1, 13 patients appear to have a CSF sample. Please explain why these patients had a lumbar puncture; it would be interesting.

Only patients with a main neurological presentation were excluded. All laboratory tests performed by the hospital clinicians (non-research group) were evaluated and included if the tests related to tuberculosis. There is a very low threshold at the hospital for performing a lumbar puncture as part of the evaluation of immunodeficient patients, hence the reason for including 13 CSF samples.

In Table 3, I'd link with * the 63 patients with clinical diagnoses of tuberculosis in Table 3 with Table 4.

We have added a footnote to Table 3 linking the 63 patients to Table 4.

Surprisingly, not one patient was diagnosed with ARL (AIDS-related lymphoma), which is the principal differential diagnosis (both clinically and on US) in TB-HIV patients: I would speculate some selection bias (is it possible that patients with suspected lymphoma are preselected for referral centers? Sometimes it happens in African settings).

We are not aware that patients with suspected lymphoma are pre-selected for referral centers at the study hospital. We can only speculate on why AIDS-related lymphoma was not diagnosed.

Only 1 case of NTM was diagnosed in this cohort: this is interesting because in the work of Nel et al. (2017)2 a considerable rate of false-positive LF-LAM tests was found, and the cohort includes a high number of patients with a low CD4 count.

We can only speculate on why non-tuberculous mycobacterial infection was cultured in only one patient.

In Table 4, some of the "Reasons for diagnosis of tuberculosis without microbiological confirmation", such as "Suggestive formal abdominal ultrasound done in radiology department", are really unclear; data on the true outcome of patients, if available, would be of particular interest.

We purposefully used the term 'suggestive' as there is no robust evidence on this (hence a motivation for the current study). The clinical diagnosis of tuberculosis also differs between attending physicians, as some will only use certain ultrasound features (e.g. hypoechoic splenic lesions), whereas others want to see more than one ultrasound feature (e.g. pericardial effusion and splenic lesions). We unfortunately did not follow patients up to see whether their clinical condition actually improved after the empiric initiation of anti-tuberculous treatment.

I partially disagree with the final consideration "The contribution of PoCUS in the context of urinary LF-LAM and chest X-ray availability was limited, due to the low positive yield, the need for further chest x-ray interpretation and the high false-negative rate....", because if you consider that LAM has good sensitivity but lower specificity, PoCUS, whose specificity is very high, could be the perfect tool. Please add this consideration to the conclusions.

We have added that the role of POCUS as a rule-in test to diagnose HIV-associated tuberculosis needs to be further investigated.

Final comment: it's common thinking that "we have CXR, GeneXpert, and QGT, and we don't need further tests".
Theoretically, it seems difficult to support the need for further tests for TB. But, in "real life", there is a considerable "grey area" of challenging cases, particularly in patients with extrapulmonary TB, needing expensive second-level tests, like CT scan, PET-CT, laparoscopy with biopsy, staining and pathologists, all not available in the low-resource settings (LRS) where most patients with TB live. The alternative is to start an "empirical treatment" without any scientific evidence of TB except the WHO symptoms. But this is a rough approach in the MDR-TB era. The authors present a scenario with an extremely high prevalence of TB (47%) and tremendously poor treatment of HIV. This peculiar situation might have influenced the results: further studies should be carried out in different settings, including a limited-resource setting with a lower prevalence of HIV (and TB); a database including treatment and outcomes of the patients would add a lot. I suggest adding these considerations to the conclusions. To my knowledge, urinary LAM is increasingly used in LRS hospitals, but the test has some limitations and probably should be improved. However, there probably isn't a perfect test in TB, and the combination of different tests is the better choice at the present moment. I apologize to the editor and authors, but my knowledge of statistics is limited, so my review is incomplete and only focused on the clinical aspects of the work. I think a mathematics or statistics expert should be included as a reviewer of this manuscript.

Thank you for asking me to review this manuscript. I read it with high interest, and I found it accurate, written with conviction, and powerfully argued.

Table 4 details the cases diagnosed with TB without microbiological confirmation: a total of 63, compared with a total of 172 microbiologically proven cases. This group would equate to 25% of patients finally treated for TB. I assume, reading the paper, that these 25% were not included in the final statistics, as the abstract clearly states that the variables were assessed against the gold standard of microbiologically confirmed TB.

This data raises a few points of interest. I note in Table 4 that almost 30% of the group diagnosed without microbiological proof were considered to have TB based on formal US findings. With my PoCUS interest, it would be of great interest if the authors were able to review any reasons for differences in the US findings between the expert and PoCUS findings; this may be of interest for future improvement of the accuracy of any PoCUS algorithm. These 25% are also of potential interest when considering the results (see 4.2).

4.1 This covers all the valid points/shortcomings of the study and gives clear, evidenced recommendations. Although the authors comment that a weakness of the study was that it was performed under routine conditions in the emergency department, this in my view makes the research more applicable.

4.2 Table 4 shows a number of patients in whom TB was diagnosed in the absence of microbiology, which is over 25% of the final cases. The main scientific thrust of the paper has to be on confirmed cases as a scientific gold standard. However, I do wonder, in the final discussion, if it is worth subjecting a few of the main decision trees and some of the univariable factors in Table 6 to data analysis which includes these cases, to see how they might perform in a "real world" setting. This would obviously not be the main thrust of the scientific process but would be of great interest.
Therein specifically, in light of the 30% being US findings, I would be especially interested to see how PoCUS performed including the cohort of those eventually treated, and whether it affects the number needed to scan.

In regard to the performance of PoCUS, specifically the 39 scans needed to opt in one case: is it worth highlighting that this is likely to increase in low-prevalence and decrease in higher-prevalence areas?

Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?

2 Study Design
2.1 The study design is appropriate for investigation of the subject and formulation of a decision tree.
2.2 The study population, as indicated, is one of high prevalence and so appropriate to study, though, as the authors clearly indicate, the applicability to areas of lower prevalence is more difficult to predict.
2.3 Recruitment was good. With respect to the exclusion criteria, some are obvious, others less so, with no reason given for the exclusion of gynae/obstetric cases and CNS cases. A reasonably large number were excluded (132 in total), and some exclusions would favour male over female inclusion (i.e. gynae/obstetric, though these appear small in number).

Not all gynae/obstetric cases and CNS cases were excluded; only patients with a main clinical presentation of meningitis syndrome or new focal neurology, or a gynaecological or psychiatric presentation.

I note, however, that the male study population was only 40%. With respect to the exclusion criteria, a clearer statement of any differences from the study group would be of value.

3.1 The use of the phraseology "PoCUS independent and PoCUS individual features" is slightly unclear and could be clarified.

(... Syndr. 2020;83(4):415-423; PMID: 31904699; DOI: 10.1097/QAI.0000000000002279); however, we've reworded the paragraph under statistical analysis to better clarify the difference between the two types of POCUS features.

Table 4 details the cases diagnosed with TB without microbiological confirmation: a total of 63, compared with a total of 172 microbiologically proven cases. This group would equate to 25% of patients finally treated for TB. I assume, reading the paper, that these 25% were not included in the final statistics, as the abstract clearly states that the variables were assessed against the gold standard of microbiologically confirmed TB.

This is correct. Only participants with microbiological confirmation were included in the 172 cases.

This data raises a few points of interest. I note in Table 4 that almost 30% of the group diagnosed without microbiological proof were considered to have TB based on formal US findings. With my PoCUS interest, it would be of great interest if the authors were able to review any reasons for differences in the US findings between the expert and PoCUS findings; this may be of interest for future improvement of the accuracy of any PoCUS algorithm.

We did not compare PoCUS with radiology-performed ultrasounds. The 30% referred to relates to participants without microbiological proof and does not reflect any difference with POCUS findings. We will consider determining the correlation between radiology-performed ultrasounds and POCUS findings, as it would be of interest.

These 25% are also of potential interest when considering the results (see 4.2).

4.1 This covers all the valid points/shortcomings of the study and gives clear, evidenced recommendations.
Although the authors comment that a weakness of the study was that it was performed under routine conditions in the emergency department, this in my view makes the research more applicable.

4.2 Table 4 shows a number of patients in whom TB was diagnosed in the absence of microbiology, which is over 25% of the final cases. The main scientific thrust of the paper has to be on confirmed cases as a scientific gold standard. However, I do wonder, in the final discussion, if it is worth subjecting a few of the main decision trees and some of the univariable factors in Table 6 to data analysis which includes these cases, to see how they might perform in a "real world" setting. This would obviously not be the main thrust of the scientific process but would be of great interest. Therein specifically, in light of the 30% being US findings, I would be especially interested to see how PoCUS performed including the cohort of those eventually treated, and whether it affects the number needed to scan.

Thanks for the comment. It would be interesting to see how POCUS performed when clinical cases were included. However, we did not follow up patients to determine whether they actually improved on anti-tuberculous treatment, and this would severely limit the interpretation of such results.

In regard to the performance of PoCUS, specifically the 39 scans needed to opt in one case: is it worth highlighting that this is likely to increase in low-prevalence and decrease in higher-prevalence areas?

Thanks for the suggestion. We've incorporated it in the discussion section.

Competing Interests: No competing interests were disclosed.
Reduction of the Body Weight-adapted Volume of Contrast Material by Increasing the Injection Rate in 320-detector Row Coronary CT Angiography

Background: The 320-detector row dynamic volume computed tomography (CT) scanner is widely applied in coronary CT angiography (CCTA), which makes it possible to reduce the volume of contrast material (CM) used. Some studies have reported the feasibility of reducing the CM in 320-detector row CCTA using a weight-adapted injection protocol. However, the significance of increasing the injection rate with a lower volume of CM has not been investigated.
Objective: To investigate the feasibility of reducing the body weight-adapted volume of CM by increasing the injection rate in 320-detector row CCTA.
Methods: A total of 116 patients who underwent 320-detector row CCTA were divided into three groups. Group A received 0.7 ml/kg of CM (350 mg I/ml) at an injection rate of 5.0 ml/s (n = 40); group B received 0.6 ml/kg of CM at 5.5 ml/s (n = 39); group C received 0.5 ml/kg of CM at 6.0 ml/s (n = 37). A 30-ml 0.9% saline chaser was administered after the CM. Enhancement values of the cardiovascular territories and coronary arteries were measured and compared. Image quality was also evaluated and compared among the three groups.
Results: Enhancement values of the proximal coronary arterial segments for group C were significantly lower than those for groups A and B (all, P < 0.05), whereas there were no significant differences between groups A and B (all, P > 0.05). Similar statistical results were found for the proportion of proximal coronary arterial segments with enhancement values ≥ 300 HU, the image quality ratings, and the proportion of the main coronary arterial segments with image quality scores ≥ 3 on both per-vessel and per-patient analyses.
Conclusion: At least 0.6 ml/kg of CM at 350 mg I/ml and a 5.5 ml/s injection rate is required to achieve a sufficient and credible evaluation of the coronary artery in 320-detector row CCTA.

Introduction
With the rapid advancement of multi-detector computed tomography (CT) and improvements in image quality and acquisition speed, coronary CT angiography (CCTA) has become a standard noninvasive imaging modality, with high spatial and temporal resolution, for the diagnosis of coronary artery disease [1,2]. Unfortunately, patients undergoing CCTA are inevitably exposed to iodinated contrast media (CM) [3]. It has been reported that contrast-induced nephropathy (CIN) is an important iatrogenic complication of CM use [4]; it can have a poor prognosis and may result in additional healthcare costs [5]. A number of studies have focused on how to prevent CIN, and the core recommendation is to use the lowest possible volume of CM, because the incidence of CIN is highly associated with the volume of CM used [6]. Therefore, minimizing the total volume of CM used in CCTA is important, especially for patients with significant coronary artery stenosis who may be exposed to more CM during subsequent coronary artery stenting or angioplasty [7]. In recent years, the 320-detector row dynamic volume CT scanner has been widely applied in clinical practice. It has a z-coverage width of 160 mm and allows acquisition of the entire heart in a single rotation and within a single heartbeat, with a minimum temporal resolution of 175 ms [8]. The non-helical volume scan mode of the entire heart makes it possible to reduce the volume of CM used [9,10].
Some studies have reported the feasibility of reducing the CM volume in 320-detector row CCTA using a weight-adapted injection protocol [11,12]. However, to our knowledge, few studies have investigated the significance of increasing the injection rate with a lower volume of CM in 320-detector row CCTA. Our study was therefore designed to investigate the feasibility of using the lowest possible volume of CM with a body weight-adapted injection protocol, by increasing the CM injection rate and precisely determining the CM bolus arrival time, to achieve a sufficient and credible evaluation of the coronary artery with 320-detector row CCTA.

Patients
This study was performed according to the principles of the Declaration of Helsinki and was approved by our institutional review board. Informed consent was obtained from all patients before the CCTA examination. Before this clinical study, a preliminary retrospective investigation was performed to determine the lowest possible CM volume and injection rate in 320-detector row CCTA. From February 2019 to September 2019, a total of 116 patients (68 males and 48 females; mean age, 60.42 ± 12.35 years; range, 29-86 years) who were scheduled to undergo 320-detector row CCTA were consecutively recruited for this study. All patients were suspected of having coronary artery disease based on electrocardiography (ECG) findings or clinical symptoms, and had no history of coronary artery stenting or bypass surgery. Patients who had a previous allergic reaction to iodinated CM, respiratory failure, severe arrhythmias, congestive heart failure, or renal failure (serum creatinine > 1.5 mg/dl [133 µmol/l]), or who were unable to achieve a heart rate below 75 beats per minute (bpm) with the use of beta-blocking agents, as well as women who were potentially pregnant, were excluded. Metoprolol (Metoprolol tartrate tablets, AstraZeneca AB, Sweden), 25 mg or 50 mg as a single dose, was administered orally to 23 patients 1-2 hours before the examination in order to meet the heart rate inclusion criterion. The 116 enrolled patients were randomly divided into three groups using a table of random numbers. Group A (n = 40) received 0.7 ml/kg of CM at an injection rate of 5.0 ml/s, group B (n = 39) received 0.6 ml/kg of CM at 5.5 ml/s, and group C (n = 37) received 0.5 ml/kg of CM at 6.0 ml/s. All patients received nonionic CM (Optiray, Ioversol Injection; 350 mg of iodine per milliliter; Tyco Healthcare, Quebec, Canada). A 30-ml 0.9% saline chaser was administered at the same injection rate after the injection of CM. They were injected using a dual-shot injector (Dual Shot Alpha; Nemoto-Kyorindo, Tokyo, Japan) through a 20-gauge or 18-gauge (injection rate of 6.0 ml/s) intravenous injection catheter (BD Intima II; Becton Dickinson Medical Devices, New Jersey, USA) inserted into an antecubital vein.
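For concreteness, the weight-adapted protocol above implies simple per-patient arithmetic for the CM volume, iodine load, and injection duration. A minimal R sketch (the 70-kg example patient is hypothetical; the 350 mg I/ml concentration is the agent used in the study):

cm_protocol <- function(weight_kg, dose_ml_per_kg, rate_ml_per_s,
                        iodine_mg_per_ml = 350) {
  volume   <- weight_kg * dose_ml_per_kg        # total CM volume (ml)
  duration <- volume / rate_ml_per_s            # injection duration (s)
  iodine   <- volume * iodine_mg_per_ml / 1000  # iodine load (g)
  c(volume_ml = volume, duration_s = duration, iodine_g = iodine)
}

cm_protocol(70, 0.7, 5.0)  # group A: 49.0 ml over 9.8 s
cm_protocol(70, 0.6, 5.5)  # group B: 42.0 ml over ~7.6 s
cm_protocol(70, 0.5, 6.0)  # group C: 35.0 ml over ~5.8 s

These example durations sit close to the mean injection durations reported in the Results (9.70, 7.85 and 6.14 s), which is what the weight-adapted design intends.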
The delay between the start of the CM injection and scanning was set with the help of automatic bolus-tracking technology (Real Prep technique; Toshiba Corporation Medical Systems, Tokyo, Japan). Dynamic monitoring scanning (120 kV, tube current of 50 mA) was performed at the midlevel of the heart. Two regions of interest (ROIs) were placed in the left ventricle (LV) and the descending aorta (DA), and the threshold values were set at 100 and 280 Hounsfield units (HU), respectively. Twelve seconds after the initiation of intravenous CM injection, dynamic monitoring scanning was started to obtain a dynamic monitoring image every second. The patient was instructed to take a breath and hold it when the enhancement value in the LV reached 100 HU (first threshold). After approximately 5.5 s, when the enhancement value in the DA reached 280 HU (second threshold), diagnostic scanning was performed automatically while the patient held his/her breath (Figure 1).

Figure 1. Two regions of interest (ROIs) were placed in the left ventricle (LV) (solid line) and the descending aorta (DA) (dashed line).

The reconstruction phase was determined at the system's console using cardiac-phase search software (Phase Navi; Toshiba Corporation Medical Systems, Tokyo, Japan). Images were reconstructed using a segmented reconstruction algorithm at 75% of the R-R interval, or at the automatically selected best phase, with a slice thickness of 0.5 mm and a reconstruction interval of 0.25 mm (image matrix 512 × 512). If motion artifacts were still present in any coronary artery at this phase, additional reconstructions were performed with the reconstruction window offset by 5% toward the beginning or end of the cardiac cycle, or at intervals of 10 ms. The image with the fewest motion artifacts among all of the reconstructed images was chosen and transferred to an off-line 3D workstation (Vitrea FX; Vital Images, Minnesota, USA) for post-processing.

Data Measurement and Image Quality Evaluation
Enhancement values of vascular structures were measured in all patients by an experienced radiologist using a manually defined circular ROI cursor. The radiologist was blinded to the injection protocol performed and to the patient grouping. Enhancement values of the cardiovascular territories were measured on axial images at two representative slice levels: level 1, at the origin of the left main coronary artery (LMCA), was used to measure the enhancement values of the ascending aorta (AA) and pulmonary trunk (PT) (Figure 2a); level 2, at the midlevel of the heart, was used to measure the enhancement values of the right atrium (RA), right ventricle (RV), left atrium (LA), LV, and DA (Figure 2b). The mean enhancement values based on two measurements by the radiologist were used for analysis. Enhancement values of the coronary arteries were measured at the following five points on cross-sectional images in which the vessel lumen was easily identified: the LMCA, the proximal segments of the left anterior descending artery (LAD), left circumflex artery (LCX), and right coronary artery (RCA), and the distal segment of the RCA. The mean enhancement values based on three measurements obtained from three ROIs at each target point were used for analysis. Figure 2c depicts the method of measurement for the proximal RCA (RCA-p). Calcifications, soft plaques, papillary muscles, and areas of stenosis were carefully avoided. To avoid the influence of partial volume effects, the coronary arteries were required to be more than 2 mm in diameter. Image quality was evaluated independently by another two cardiovascular radiologists, each with more than 5 years of experience in cardiac CT imaging, using a 5-point grading scale for the main coronary arterial segments: 5 = excellent, 4 = good, 3 = acceptable, 2 = suboptimal, and 1 = nondiagnostic. The main segments of the coronary arteries were assessed qualitatively by scoring all vessels with a diameter of at least 1.5 mm, including the LMCA and the proximal, middle, and distal segments of the LAD, LCX, and RCA. Every segment of the coronary artery with a score of 3 or higher was considered diagnosable.
Statistical Analysis
Statistical analysis was performed using SPSS software version 21.0 (SPSS, Chicago, IL, USA). Quantitative data are expressed as the mean ± standard deviation (SD). Means were compared among the three groups with one-way analysis of variance (ANOVA). To evaluate the homogeneity of contrast enhancement along the entire coronary artery, a paired t-test was used to compare the enhancement values between the proximal and distal portions of the RCA in each group. Statistical significance was accepted at P < 0.05. Image quality ratings of the main coronary arterial segments for the three groups were compared using the nonparametric Kruskal-Wallis test, and pair-wise comparisons of groups were performed using the Mann-Whitney U test. Inter-observer agreement on the image quality ratings was calculated using kappa statistics. The Pearson chi-square test was performed to examine the differences in the proportions of the proximal coronary arterial segments with enhancement values ≥ 300 HU and in the proportions of the main coronary arterial segments with a score of 3 or higher on per-vessel and per-patient analyses. On the per-patient analysis, the proportions of patients with all proximal segments ≥ 300 HU and with all assessed coronary arterial segments scoring ≥ 3 were calculated. In pair-wise comparisons, statistical significance was accepted at P < 0.05/3 = 0.017 by Bonferroni correction.
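A compact sketch of this analysis plan in R (the study itself used SPSS); the data frame d and its columns (group, enhancement, rca_p, rca_d, quality_score, enh_ge_300) are hypothetical stand-ins for the study data:

# Means across the three groups (one-way ANOVA)
summary(aov(enhancement ~ group, data = d))

# Homogeneity along the RCA within one group (paired t-test)
with(subset(d, group == "C"), t.test(rca_p, rca_d, paired = TRUE))

# Ordinal image-quality scores: Kruskal-Wallis, then pairwise Mann-Whitney U
kruskal.test(quality_score ~ group, data = d)
wilcox.test(quality_score ~ group,
            data = droplevels(subset(d, group %in% c("A", "B"))))

# Proportions (e.g., proximal segments with enhancement >= 300 HU)
chisq.test(table(d$group, d$enh_ge_300))
alpha_pairwise <- 0.05 / 3   # Bonferroni-adjusted pairwise level, 0.017
# Inter-observer agreement could use, e.g., irr::kappa2(cbind(reader1, reader2))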
Results
CCTA was performed successfully in all 116 patients without any technical problems or adverse reactions to the CM. Significant coronary artery stenosis (lumen obstruction of ≥ 50%) was noted in 34 patients (29.31%). There were no significant differences in demographic characteristics or CT scanning parameters, including scan delay, scan time, breath-holding time (BHT), and dose-length product (DLP), among the three groups (all, P > 0.05) (Table 1). The mean total volume of CM used was 46.53 ± 6.64 ml in group A, 40.59 ± 6.12 ml in group B, and 33.92 ± 4.47 ml in group C. The injection duration was 9.70 ± 1.44 s in group A, 7.85 ± 1.18 s in group B, and 6.14 ± 0.75 s in group C.

Table 1 note: Unless otherwise specified, the data are means ± SD. *The statistic for the male/female ratio is the χ² value. BMI, body mass index; bpm, beats per minute; CM, contrast material; BHT, breath-holding time; DLP, dose-length product; SD, standard deviation.

Enhancement values of the cardiovascular territories are shown in Figure 3. There were no significant differences in the mean enhancement values of the RA, RV, and PT among the three groups (all, P > 0.05). The mean enhancement values of the LA, LV, AA, and DA exhibited a declining trend from group A to group B to group C; those for group C were lower than those for groups A and B (all, P < 0.05), whereas there were no significant differences between group A and group B (all, P > 0.05). The mean enhancement values of the LMCA, proximal LAD (LAD-p), proximal LCX (LCX-p), proximal RCA (RCA-p), and distal RCA (RCA-d) for group C were lower than those for groups A and B (all, P < 0.05), whereas there were no significant differences between group A and group B (all, P > 0.05) (Table 2). There were no significant differences in the enhancement values between the RCA-p and RCA-d in groups A and B (P = 0.319 and P = 0.941, respectively), whereas a significant difference was noted in group C (t = 3.269, P = 0.002). The proportions of the proximal coronary arterial segments with enhancement values ≥ 300 HU on per-vessel and per-patient analysis were 71.17% (79/111) and 62.16% (23/37) in group C, as compared to 90.83% (109/120) and 90.00% (36/40) in group A, and 88.89% (104/117) and 87.18% (34/39) in group B, respectively. These proportions in group C were significantly lower than those in groups A and B on both per-vessel and per-patient analysis (all, P < 0.012), but there were no significant differences between group A and group B (χ² = 0.248, P = 0.620; χ² = 0.156, P = 0.693). Image quality ratings of the main coronary arterial segments were 4.60 ± 0.83 or 4.52 ± 0.92 in group A, 4.54 ± 0.95 or 4.44 ± 1.04 in group B, and 3.87 ± 1.39 or 3.72 ± 1.41 in group C, by reader 1 or reader 2, respectively. The image quality for group C was significantly worse than that for groups A and B (all, P < 0.001), but there were no significant differences between group A and group B (Z = 0.467, P = 0.641 or Z = 0.509, P = 0.611) for reader 1 or reader 2. Inter-observer agreement regarding the image quality ratings of each measured segment of the coronary arteries was good (κ ≥ 0.730). The results for the proportion of the main coronary arterial segments with a score of 3 or higher on per-vessel and per-patient analysis by reader 1 or reader 2 are shown in Table 3. The proportions of coronary arteries with a score ≥ 3 on per-vessel and per-patient analysis for group C were significantly lower than those for groups A and B (all, P < 0.012), but there were no statistically significant differences between group A and group B (χ² = 2.907, P = 0.088; χ² = 2.414, P = 0.120 or χ² = 0.614, P = 0.433; χ² = 0.156, P = 0.693).

Discussion
In 320-detector row CCTA, volume data acquisition is completed at once by the CT scanner when coronary artery enhancement reaches its peak; three-dimensional images of the coronary artery are then reconstructed with computer post-processing [13]. In general, consistent and sufficiently high vascular contrast enhancement is considered a prerequisite for adequate evaluation of the coronary artery, and most studies consider that this enhancement value should be no less than 300 HU [14]. Enhancement values of the coronary arteries are affected by numerous interacting factors, including the scan delay, scan time, scan direction, and scan mode; the CM concentration, volume, injection rate, injection duration, and injection mode; and patient age, body weight, cardiac output, and other factors [15]. Some researchers consider body weight a cardinal patient-related factor affecting the magnitude of vascular contrast enhancement [11,16], and suggest that the CM volume be adjusted according to patient body weight in CCTA. On the other hand, the injection rate is another primary injection-related factor affecting the magnitude of vascular contrast enhancement. In the current study, we compared three groups according to the volume of CM administered with a body weight-adapted injection protocol: patients received 0.7 ml/kg of CM at an injection rate of 5.0 ml/s (group A), 0.6 ml/kg of CM at 5.5 ml/s (group B), or 0.5 ml/kg of CM at 6.0 ml/s (group C). The results showed no significant differences in the mean enhancement values of the coronary artery or in image quality between group A and group B. This demonstrates that increasing the CM injection rate allows a 12.77% reduction in the total volume of CM, from 0.7 ml/kg (mean 46.53 ml) to 0.6 ml/kg (mean 40.59 ml), for CCTA without affecting image quality.
We tried to reduce the volume of CM to 0.5 ml/kg (mean 33.92 ml). However, the mean enhancement values in the cardiovascular territories and coronary arteries all decreased significantly, and the image quality also deteriorated. Using a lower CM volume and a higher injection rate implies that the injection duration and the peak time of contrast enhancement in the coronary artery will be shorter [13]. If the injection duration is too short, the bolus will not be enough to maintain adequate enhancement during volume data acquisition, which will lead to inhomogeneous enhancement between the proximal and distal portions of the coronary artery. In our study, the injection duration was 9.70 ± 1.44 s in group A, 7.85 ± 1.18 s in group B, and 6.14 ± 0.75 s in group C. The results showed no significant difference in the enhancement values between the proximal and distal portions of the RCA in groups A and B, indicating that the injection duration was adequate. However, the enhancement values of the proximal and distal portions of the RCA were significantly different in group C, demonstrating that it was difficult to maintain consistent contrast enhancement throughout the whole coronary artery. In consideration of partial volume effects, measurements were not taken in the distal portions of the LAD and LCX, because the diameter of these vessels was ≤ 1.5 mm in most patients. With a lower CM volume and shorter injection duration, exact determination of the CM arrival time is crucial to synchronize CT image acquisition with optimal coronary enhancement. Determination of the CM arrival time is typically done using either the test bolus (TB) technique or the automatic bolus-tracking (BT) technique. However, the TB technique was not used here because it requires an additional small volume (10-20 ml) of CM, which is incompatible with the goal of reducing the CM volume used in CCTA. Automatic BT is based on real-time monitoring of the main bolus during injection, with the acquisition of a series of dynamic low-dose monitoring scans at the midlevel of the heart, where the CM can be visually observed passing through the RA, RV, and pulmonary circulation and finally reaching the LA and LV at the "first pass". Tatsugami et al. reported that the use of a two-threshold setting in the AA could reduce inter-patient variability using the automatic trigger mode of BT in 320-detector row CCTA [17]. Nevertheless, when the two thresholds are set in the AA, the patient only has approximately 3 s to complete the movement of taking a breath and holding it, which is difficult to achieve, especially for some elderly patients. In our study, we selected two ROIs, in the LV and the DA, with threshold values of 100 and 280 HU, respectively. When the CM arrived at the LV, the first threshold (100 HU) was triggered to instruct the patient to take a breath and hold it. After approximately 5.5 s, the enhancement value in the DA reached 280 HU (second threshold), and the diagnostic scan was performed automatically while the patient was holding his/her breath. The mean BHT was 8.22 ± 0.69 s in group A, 8.23 ± 0.73 s in group B, and 8.27 ± 0.80 s in group C. Compared with the traditional method, in which the patient is instructed to begin the breath hold 14 s after the initiation of intravenous CM injection, this method shortens the BHT, especially for some patients with reduced pulmonary circulation. It also avoids respiratory motion artifacts.
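A toy illustration (with invented enhancement curves) of the two-threshold trigger logic described above: monitoring starts 12 s after injection, the breath-hold instruction fires when the LV ROI reaches 100 HU, and the diagnostic scan fires when the DA ROI reaches 280 HU.

monitor <- data.frame(
  t_s = 12:24,  # one low-dose monitoring image per second
  lv  = c(20, 40, 70, 105, 160, 230, 290, 330, 350, 355, 350, 340, 320),
  da  = c(10, 15, 25, 40, 70, 110, 170, 240, 295, 330, 345, 350, 345)
)
breath_hold_at <- min(monitor$t_s[monitor$lv >= 100])  # first threshold (LV)
scan_at        <- min(monitor$t_s[monitor$da >= 280])  # second threshold (DA)
c(breath_hold = breath_hold_at, scan = scan_at,
  gap_s = scan_at - breath_hold_at)
# In the study, the LV-to-DA gap was approximately 5.5 s, which is the
# window the patient has to take a breath and hold it.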
Conclusion
In conclusion, at least 0.6 ml/kg of CM at a concentration of 350 mg I/ml and an injection rate of 5.5 ml/s is required to achieve a sufficient and credible evaluation of the coronary artery in 320-detector row CCTA. Increasing the injection rate can, to some extent, compensate for the lower coronary arterial enhancement caused by the minimized CM volume.
QUALITY FORMING PATTERNS IN THE CUPCAKE ENRICHED WITH PUMPKIN SLICES

This paper reports a study into the effect of different quantities and shapes of fresh pumpkin slices on the technological properties of the cupcake. A comparative analysis of the technological properties of the cupcake with the addition of different quantities and shapes of pumpkin slices has been carried out. A change in the technological properties of the cupcake depending on the volume of pumpkin slices has been established. The use of fresh pumpkin slices reliably improves shrinkage during baking, as well as the humidity and acidity of the cupcake. The volume of the cupcake is significantly reduced in this case. Porosity is significantly impaired when adding 30-50 % of slices. The slice shape does not significantly affect the technological parameters of the cupcake. The use of pumpkin slices makes it possible to reduce the volume of dough in the finished product. The devised recommendations could be used by low-productivity grain processing enterprises when making flour confectionery.

Introduction
The priority task in the industry of healthy food is to devise technologies for food products enriched with functional food ingredients [1]. One way to implement it is to use non-traditional raw materials of plant origin. A promising object of modification is flour confectionery, as a mass segment of regular consumption products [2,3]. Flour confectionery is rich in protein, fat, and carbohydrates. It has high energy value and taste properties and is in great demand among people of all age groups [4]. Cupcakes belong to the category of high-calorie, easily digestible food [5]. Each component plays an important role and affects the structure, physical appearance, and nutritional properties of the finished product [6]. A significant disadvantage of these products is the low content of biologically active substances [7,8]. Numerous studies [9,10] show that the use of plant raw materials in confectionery products contributes to an increase in the content of biologically active substances. This helps improve health and reduce the risk of many diseases. In addition, adverse environmental factors predetermine the adjustment of the biochemical composition of food [11,12]. Pumpkin is a promising additive in the production of cupcakes since it has a number of advantages [13]. Pumpkins have highly stable productivity and nutritional value, a long shelf life, and are easily transported [14]. Pumpkin color can be green, white, blue-gray, yellow, orange, or red depending on the species. Pumpkin is used as a vegetable at both full and technical ripeness. The flesh is tasty fried, stewed, boiled, or baked [15]. Pumpkin contains water: 75.8-91.3 %; carbohydrates: 3.1-13.0 %; protein: 0.2-2.7 %; fiber: 1.0-1.8 %; fat: 1.0-1.4 %; ash: 0.5-2.1 %; carotene: 2.4-5.2 mg/100 g [16]. However, adding pumpkin to confectionery changes its technological properties [17]. The results of research into the use of pumpkin in cupcake technology are necessary for production because the range of products of higher biological value will increase. In addition, adding pumpkin to the cupcake will reduce the volume of sugar in it. Therefore, studies on the use of pumpkin in the technology of flour confectionery production are relevant.

Literature review and problem statement
An important role in nutrition belongs to confectionery.
Cupcake technology typically involves fat, sugar, eggs, and wheat flour. The biochemical composition of fat and flour indicates an insufficient content of biologically active substances [18]. Several studies are underway aimed at improving the biological value of food [19]. Paper [20] proved the expediency of the use of uncommon crops in the technology of food production. In cupcake technology, solid fat is replaced with vegetable oil [21]. In addition, fruit and vegetable raw materials are added, as well as products and waste from their processing [22]. Study [23] proved the advantages of adding dried apples, raspberries, medicinal calendula leaves, and pumpkin oil to the formulation of cookies. In the recipe of the filling, cherry plum and zucchini jam was used, enriched with a preparation of eggshell with lemon juice. The composition of «Clear Sun» cookies was supplemented with a powder of medicinal medunka leaves, apricot, and sea buckthorn oil. Sea buckthorn jam and calendula syrup were used as fillings. The advantages of applying such components according to technological parameters have been scientifically proved. However, the results of that research relate to cookies. The technological process of cupcake production is different from that of cookies. In addition, the components used differ in properties from fresh pumpkins. In the technology of bakery and flour confectionery products, fresh pumpkin pulp, juice, powder, paste, and peel are used [24]. Thus, the use of 0.85 and 1.7 % of pumpkin powder by weight of dough in cupcake technology improved its technological parameters. The porosity score of such products increased from 5.02 to 7.36-7.57 points [25]. Study [26] also proved the advantage of using pumpkin powder. The results of the research show that the highest biological value was obtained by adding 20 % of the powder to the product. However, sensory analysis confirmed the use of 15 % during the production of cupcakes. The importance of pumpkin powder is also emphasized in a number of other studies. Thus, it is established that in cupcake technology it is advisable to use up to 20 % of pumpkin powder. Such a product had an attractive color and an improved overall culinary rating [27]. However, the improved cupcake formulation parameters cannot be applied to fresh pumpkin slices. Study [28] examined in detail the biochemical component of pumpkin powder depending on drying modes. The powder was added during the manufacture of cookies, from 5 to 20 %. It was found that the addition of 10 % of pumpkin powder was optimal. It should be noted that its addition to the recipe of cookies improved the sensory quality indicators of the finished product. In addition, the nutritional value of the finished product was determined depending on the content of the pumpkin powder. The expediency of using pumpkin in cookie technology has been proved. However, the results can be used only in the manufacture of cookies. Cupcake technology is significantly different from cookie technology. In addition, the properties of pumpkin powder differ from those of fresh pumpkin pulp. Moreover, the properties of the product with powder differ compared to the use of fresh pumpkins. In study [29], pumpkin paste was used. Adding it to a cupcake in the volume of 27 % by weight of dough changed the properties of the product. Thus, the humidity of the cupcake increased from 34.4 to 35.7 %, and the porosity estimate from 8.25 to 8.95 points. 
It should be noted that the paste is similar in moisture content to fresh pulp. However, the paste in the cupcake is evenly distributed without visible particles. The use of fresh pumpkin in the form of slices will change the appearance of the cupcake crumb. In addition, the cited study considered only one option using pumpkin paste. It was found that the addition of 15-25 % of fresh pumpkin puree contributed to an increase in the humidity of the cupcake dough from 23.8 % to 24.2-25.3 %. The porosity of the finished products was 62.8-65.5 % depending on the content of the puree [30]. However, those results cannot be used for fresh pumpkin slices. It is proven that the addition of unconventional raw materials can reduce moisture loss during baking [31]. Other studies have shown that the addition of ground nuts can improve the organoleptic properties of the product, but its volume decreases [32]. In other studies [33], the addition of non-traditional raw materials increased the volume and porosity of the finished product, but the elasticity deteriorated. The use of non-traditional types of raw materials changes the rheological properties of the dough and necessitates adjusting the technological parameters of making a cupcake. It is established that the addition of pumpkin seeds and buckwheat flour extends the baking time by 3 minutes. It is necessary to reduce the temperature in the chamber by 5 °C to ensure the better formation of the height of products and avoid burning their surface. Consequently, a slight increase in the duration of baking will not cause excessive electricity consumption [34]. It should be noted that in the cited studies, the use of fresh pumpkin and its paste helps reduce the energy value of the cupcake. At the same time, the content of carotene in it increases significantly. The technological parameters of confectionery products meet the established requirements. However, those studies did not investigate the effects of different pumpkin content on the technological parameters of the cupcake. In addition, the addition of fresh pumpkins was not sufficiently studied. The aim and objectives of the study The purpose of our study was to determine the features of the formation of the quality of a cupcake enriched with pumpkin slices. The proposed solutions would make it possible to expand the range of finished products (cupcakes) through the use of common and cheap raw materials (pumpkin). The results could be valuable for farms that grow pumpkins, as well as for low-productivity enterprises whose main activity is the production of confectionery products. To accomplish the aim, the following tasks have been set: -to establish indicators of the quality of the cupcake enriched with pumpkin slices depending on their shape and quantity in the finished product; -to substantiate the rational formulation of the production of a cupcake enriched with pumpkin slices. 1. Raw materials to produce a cupcake enriched with pumpkin slices Pumpkin was added to the cupcake in the form of slices. Depending on the shape and size of the resulting pieces, four variants of pumpkin slices were obtained (Table 1). Table 1 Characteristics of pieces of pumpkin slices depending on the shape and size 2. Program, methodology, equipment for studying the properties of a cupcake enriched with pumpkin slices The research was conducted in the laboratory at the Department of Food Technologies, the Uman National University of Horticulture (Uman, Ukraine). 
The dough for the cupcake was prepared according to the following recipe: flour - 70 g; powdered sugar - 50 g; margarine (fat content 72 %) - 50 g; eggs - 50 g; salt - 0.2 g; baking powder (baking soda + sodium phosphate) - 2.5 g; vanilla sugar - 0.3 g. First, we prepared the dough. Salt and vanilla sugar were added to margarine at room temperature. Then it was whisked for 5-7 minutes in a dough-mixing machine (Royalty Line RL-PKM1900.7, Germany) at a speed of 60-65 rpm. After that, sugar powder was added and the mixture was whipped for another 5-7 minutes. Then we added eggs and whisked for 10 minutes. After that, wheat flour of the highest grade was added and mixed in a mixer for 3-5 minutes. Fresh pumpkin slices were added to the prepared dough in accordance with the experiment scheme (Table 2). The baking temperature was 180-185 °C. The added free moisture and the shape of pumpkin slices necessitated prolonging the duration of cupcake baking. The consolidated temperature regimes of cupcake baking are given in Table 2. Table 2 The duration of baking cupcakes with fresh pumpkin slices, min Cupcake shrinkage at baking was determined from the formula Y = ((m1 - m2)/m1) × 100, where Y is the cupcake shrinkage at baking, %; m1 is the mass of the dough, g; m2 is the mass of a hot cupcake, g. The specific volume was determined from the formula Vp = V/m, where Vp is the specific volume, cm³/g; V is the volume of a cupcake, cm³; m is the weight of a cupcake, g. The humidity of the cupcake was determined by the thermogravimetric method. The volume was determined from the difference between the volume of the container filled with a fine-seed crop without a cupcake and with it. The acidity was determined by titration of 50 cm³ of the filtrate with 0.1 N NaOH solution. The porosity was assessed organoleptically on a scale: 9 - pores small, thin-walled or thick-walled, uniform; 7 - pore-free or other parts of the crumb occupy up to 25 % of the cross-section; 5 - pore-free or other parts of the crumb occupy 26-50 % of the cross-section; 3 - pore-free or other parts of the crumb occupy 51-75 % of the cross-section; 1 - pore-free or other parts of the crumb occupy 76-100 % of the cross-section. 3. Statistical treatment of experimental data The experiments were carried out four times and randomized over time. The results were treated using the Microsoft Excel 2010 and Statistica 12 software in accordance with the guidelines from [35,36]. 4. Social surveys The initiators of the survey were scientists at the Department of Food Technologies, Uman NUS, Ukraine. The focus groups consisted of potential consumers from different age categories. The study place was the town of Uman, Ukraine. The number of respondents involved was 526. The study period was Q4 2021. 5. Results of studying the cupcake quality depending on the shape and share of added pumpkin 1. The properties of a cupcake enriched with pumpkin slices depending on their shapes and shares in the finished product Previous studies [37] have revealed a low probability of the effect of varietal properties of grain raw materials on the technological properties of the cupcake and its culinary quality. In the case of adding raw materials with a high moisture content (pumpkin paste), there are more significant changes in the rates of shrinkage at baking, shrinkage at drying, the humidity, volume, and porosity of cupcakes [38]. Physical appearance is quite important for the modern consumer, and is therefore especially important for modern food manufacturers. The opinions of potential consumers are valuable information for social analysis. 
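Returning to the two determination formulas above, a minimal computational sketch of shrinkage at baking and specific volume is given below; the sample masses and volume are hypothetical placeholders, not measurements from the study:

```python
# Illustrative sketch of the two formulas above; the sample values are
# hypothetical, not measurements from the paper.

def shrinkage_at_baking(dough_mass_g: float, hot_cupcake_mass_g: float) -> float:
    """Y = (m1 - m2) / m1 * 100, in percent."""
    return (dough_mass_g - hot_cupcake_mass_g) / dough_mass_g * 100.0

def specific_volume(volume_cm3: float, mass_g: float) -> float:
    """Vp = V / m, in cm^3 per gram."""
    return volume_cm3 / mass_g

if __name__ == "__main__":
    m1, m2 = 250.0, 228.0  # hypothetical dough and hot-cupcake masses, g
    print(f"shrinkage Y = {shrinkage_at_baking(m1, m2):.1f} %")          # 8.8 %
    print(f"specific volume = {specific_volume(190.0, 85.0):.2f} cm^3/g")
```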
A significant number of respondents (22.2 %), regardless of their age, are early innovators, and, therefore, the probability of their choosing a new product for themselves is quite high. At the same time, for most respondents, the physical appearance of the product is important. Only 5.6 % of the respondents said that they did not pay attention to the physical appearance of the product, while 5.5 % preferred only products with a good physical appearance. Other respondents indicated that the physical appearance of products was an important prerequisite for the purchase of food. A significant number of respondents are trying to join the current trends of «healthy eating» and systematically monitor their health; 38.9 % of the respondents say that they pay attention every time to the chemical composition of food products before purchasing them. Our results of the survey of potential consumers indicate the relevance of creating new types of food, including the expansion of the range of cupcakes. The addition of vegetable raw materials will change the chemical composition of the product and increase its attractiveness to the modern consumer. At the same time, the physical appearance of the product plays a key role for a significant number of potential consumers, which should be taken into consideration when creating such products. When making cupcakes, the formation of physical appearance significantly depends on the indicators of shrinkage at baking, porosity, and volume. The acidity of the cupcake and its humidity affect the condition of the finished product during the period of its storage [39]. The effect of the volume of added slices on the cupcake shrinkage during baking (Table 3) is reliable. The indicator of cupcake shrinkage at baking reliably depended on the volume of pumpkin slices added (partial eta-squared = 0.93). However, the effect of the shape of pumpkin slices on the indicator of cupcake shrinkage at baking was unlikely (p = 0.65). It is highly likely that there is a relationship between the factors of the volume of pumpkin slices added and their shapes (partial eta-squared = 0.27). The lowest value of the indicator of cupcake shrinkage at baking, depending on the volume of slices added, was registered in the control sample (Fig. 1). The increase in the volume of added pumpkin slices led to an increase in the indicator of cupcake shrinkage at baking. With the maximum addition of pumpkin slices (50 %), the indicator of cupcake shrinkage at baking increased by 4 %. The most rapid increase in the indicator of cupcake shrinkage at baking was recorded when adding pumpkin slices in the volume of 5 to 15 %. The shape of the added pumpkin slices had no effect on the humidity of the finished cupcake (p = 0.70). In addition, there is no mutual effect of the shape of the slices and their content on the humidity of the cupcake (p = 0.99). The formation of the humidity of the finished product was predetermined by the volume of pumpkin slices (Fig. 2). An increase in the volume of added pumpkin slices increased the humidity of the product. With the maximum addition of pumpkin (50 %), there was an increase in the cupcake moisture content by 13 %, compared with the control value (5.9 %). The formation of the volume of the cupcake was reliably influenced by factors A and B; in addition, a mutual relationship was registered between factors A and B and the indicator of the volume of the cupcake (Table 4). 
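The partial eta-squared statistic quoted in this section has a simple definition; a minimal sketch is below, with hypothetical sums of squares rather than the actual Statistica 12 output:

```python
# Minimal sketch of partial eta-squared for a two-way ANOVA effect.
# The SS values are hypothetical placeholders, not the paper's actual output.

def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """partial eta^2 = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

ss_volume, ss_shape, ss_error = 13.3, 0.05, 1.0  # hypothetical sums of squares
print(f"volume effect: {partial_eta_squared(ss_volume, ss_error):.2f}")  # ~0.93
print(f"shape effect:  {partial_eta_squared(ss_shape, ss_error):.2f}")
```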
The volume of the cupcake followed an inverse dependence on the volume of pumpkin slices added (Fig. 3). Adding the maximum volume of pumpkin slices (50 %) reduced the volume of the cupcake by 60 % compared to the control. High indicators of cupcake volume were registered in samples that were enriched with slices of types No. 1 and No. 2, regardless of their volume added to the cupcakes (Fig. 3, b). The volume of the cupcakes when adding slices of types No. 3 and No. 4 was similar. Trends in the change in the specific volume of the cupcake depending on the type of pumpkin slices added and their total volume were similar to those of the cupcake volume indicator (Fig. 4). Fig. 4. The specific volume of the cupcake, depending on the shape and quantity of added pumpkin slices: a, the volume of pumpkin slices, %; b, the type of slices. The specific volume of the cupcakes decreased with an increase in the volume of added pumpkin slices, regardless of their shape. At the same time, the specific volumes of the cupcakes enriched with slices of different types were reliably different from each other. The largest specific volume was registered when adding slices of type No. 1, regardless of their volume (Fig. 4, b). The acidity of the cupcakes reliably depended on the volume of pumpkin slices (Fig. 5). The probability of the effect of the type of slices on the acidity index was low. The increase in the proportion of pumpkin slices led to an increase in the acidity of the cupcakes. When adding the maximum volume of pumpkin slices (50 %), the acidity increased by 30 % compared to the control sample. The porosity of the cupcake significantly depended on the volume of added pumpkin slices and their shape (Fig. 6). Reliable differences in porosity were found in slice sample No. 4, which was 0.5 points smaller than in the samples of the cupcake enriched with other types of slices (Fig. 6, a). More significant changes occurred with the use of various formulations; in particular, with an increase in the volume of pumpkin slices, there was a decrease in the porosity of the finished product (Fig. 6, b). The porosity of the cupcake was at a high level (9.0 points) when adding up to 25 % of pumpkin slices. Fig. 6. The cupcake porosity depending on the shape and quantity of added pumpkin slices: a, the volume of pumpkin slices, %; b, the type of slices. A further increase in the proportion of pumpkin slices in the cupcake led to a sharp decrease in the porosity of the cupcake. With the addition of 30 % of pumpkin slices, the porosity value decreased by 11.1 % compared to control. With the addition of 35 % of pumpkin, we registered a decrease in porosity by 27.7 %; 40 % of pumpkin reduced the porosity by 50 %; 45 % of pumpkin, by 83.3 %; and 50 % of pumpkin, by 88.9 %. 2. Improving the recipe to bake a cupcake enriched with pumpkin slices The addition of pumpkin slices adversely affected the technological parameters of the finished product in the general case. Such a pattern may be associated with the influence of free moisture of the added raw materials. The best results in a comprehensive assessment were registered with the minimum addition of pumpkin slices. The level of expectation (desirability) with the minimum addition of pumpkin was maximal (89.9 %). A further increase in the volume of pumpkin added reduced the overall effect of desirability (Fig. 7). The best results in terms of the shape of pieces of pumpkin slices were demonstrated by the second sample. 
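As a quick consistency check, the relative porosity decreases reported above map back onto the 9-point scale as follows (the control value of 9.0 points is taken from the text; the rest is arithmetic):

```python
# Convert the reported relative porosity decreases back to scale points.
# The control porosity of 9.0 points is taken from the text above.
control_porosity = 9.0
decrease_pct = {30: 11.1, 35: 27.7, 40: 50.0, 45: 83.3, 50: 88.9}

for slices_pct, drop in decrease_pct.items():
    points = control_porosity * (1 - drop / 100.0)
    print(f"{slices_pct}% slices -> porosity ~ {points:.1f} points")
# 30% -> 8.0, 35% -> 6.5, 40% -> 4.5, 45% -> 1.5, 50% -> 1.0 points,
# consistent with the 6.5-8.0 range quoted for 30-35% in the Conclusions.
```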
However, given the minimal impact of the shape and size of pumpkin slices on the examined properties of cupcakes, the advantage of this option compared to other variants of the experiment was minimal. In general, the effect exerted on the condition and physical appearance of the finished product (cupcake) enriched with pumpkin slices was negative. A large volume of added moisture caused significant changes in the structure of the dough and reliably influenced the quality of the product. Therefore, the request of potential consumers to expand the range of products of increased biological value by enriching cupcakes with pumpkin slices should be balanced, and the results of our study are to be taken into consideration when forming a development strategy for enterprises manufacturing such products. The addition of slices No. 3 significantly reduced the volume of the cupcake (Fig. 3) since their linear dimensions were the largest. When baking, it was more difficult for the generated gas to loosen the structure of such dough. Slices No. 4 were the smallest, which led to the release of free moisture into the dough. Therefore, the volume of the cupcake was also reliably smaller. The tendency of the influence of slice type on the specific volume was similar (Fig. 4). The indicator was calculated using the volume of the cupcake. Adding the smallest slices to the dough increased the content of free moisture in it (Fig. 6). During baking, it was also more difficult for gas to loosen such a dough structure. Therefore, the porosity of the cupcake was significantly lower compared to the use of other types of slices. The use of other types of slices released less moisture into the dough. Therefore, the porosity of the cupcake was higher. Along with a decrease in the indicators of the physical appearance of cupcakes enriched with pumpkin slices (Fig. 7), the samples containing from 5 to 15 % of the added moisture-containing raw materials demonstrated quite high indicators. Based on a comprehensive analysis of the properties of cupcakes enriched with pumpkin slices, the reliable effect of the volume of slices and their shape has been proved. This predetermines the relevance of the further study of the possibilities of modernization of pumpkin processing technologies. There are new opportunities for enriching bakery products with pumpkin pulp in fresh form. The results of our study, including the social survey, indicate a potentially attractive program for investors to expand the range of cupcakes. A special feature of our results, in comparison with most available studies [26,28], is the use of fresh pumpkin pulp instead of its powder. Currently, the production of a powder from a moisture-containing product is an energy-consuming technological process, which can be avoided by using fresh pumpkin pulp; this is consistent with the results reported in [24]. At present, the energy efficiency of the technological process is a relevant area of scientific work, and therefore an additional comparative analysis of the energy intensity of the proposed technological process in comparison with the alternative solutions given in [24,28] is required. It is highly likely that the proposed technique to prepare pumpkin slices has lower energy costs compared to the production of fresh pumpkin paste; this requires additional investigation and is a disadvantage of this study. The proposed solutions and recommendations could expand the ways of effective utilization of pumpkins. 
The results reported in the current paper have limitations related to the properties of pumpkins used to enrich the cupcakes. To obtain identical results in production, it is necessary to use nutmeg pumpkin (Cucurbita moschata (Duch.) Duch. ex Poir.). The trends in the food market identified during the study predetermine the feasibility of further research into the purchasing power of potential consumers and their attitude to the culinary quality of the product, investigating the biological value and culinary quality of cupcakes enriched with pumpkin slices, and establishing the energy cost of pumpkin slice production. Conclusions 1. The technological parameters for cupcake quality with the addition of fresh pumpkin slices of different shapes and quantities have been determined. The use of fresh pumpkin slices reliably increases the shrinkage at baking, the humidity, and the acidity of the cupcake. Its volume reliably decreases at the same time. The porosity is significantly impaired by adding 30-50 % of slices. The shape of the slices does not significantly affect the technological parameters of the cupcake. The use of 5-25 % of pumpkin slices does not reduce the porosity of the cupcake. The moisture content and acidity of the cupcake meet the current requirements. The use of 30-35 % of slices reduces porosity to 6.5-8.0 points, which still corresponds to a high level. 2. In the technology of cupcake production, it is recommended to add 5-25 % of fresh pumpkin slices of various shapes by weight of the dough. The use of this volume of slices makes it possible to obtain a cupcake with a porosity of 9 points, a shrinkage at baking of 6.9-8.5 %, a moisture content of 6.9-12.8 %, a volume of 176-203 cm³, and an acidity of 1.5-1.7 degrees. In addition, it is possible to use 30-35 % of pumpkin slices. A cupcake with such a formulation has porosity at the level of 6.5-8.0 points, a shrinkage at baking of 8.8-9.7 %, a moisture content of 13.4-14.8 %, a volume of 156-172 cm³, and an acidity of 1.8-1.9 degrees.
2022-05-19T15:12:54.296Z
2022-04-30T00:00:00.000
{ "year": 2022, "sha1": "03479cafcafd2e3996aa5e7dfb6bd06558762753", "oa_license": "CCBY", "oa_url": "http://journals.uran.ua/eejet/article/download/255646/252921", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f4f3f59e828aa031fb1bcfdca5af742c26223e57", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
237412587
pes2o/s2orc
v3-fos-license
How I do it: the trans-laminar, facet-joint sparing minimal invasive approach for ventral dural repair in spontaneous intracranial hypotension—a 2-dimensional operative video Background We describe the minimally invasive, facet-sparing postero-lateral approach to the thoracic spine for a ventral dural repair in a patient with intracranial hypotension secondary to a spontaneous dural breach. Methods We performed a minimally invasive approach using a short paramedian posterior skin incision followed by a 10 × 10 mm targeted trans-laminar approach, to achieve a microsurgical repair of a symptomatic ventral dural defect causing severe disability. Conclusion The facet-sparing postero-lateral approach is safe and effective in the surgical management of thoracic dural tears, even the most anterior ones, and avoids the traditional costotransversectomy. Supplementary Information The online version contains supplementary material available at 10.1007/s00701-021-04987-w. Introduction Spontaneous intracranial hypotension (SIH) is a rare disease with an estimated incidence of 5/100,000, primarily affecting patients in their fourth and fifth decades [2]. It is caused by a spinal dural cerebrospinal fluid (CSF) leak, which mostly occurs along the cervico-thoracic spine [1], and the dural tear is often inflicted by a degenerative calcified discogenic microspur. In the case of a significant CSF leakage, brain sagging puts meninges, veins, and cranial nerves under tension and may cause symptoms like orthostatic headache [3]. Whenever SIH is suspected, dynamic CT myelography is the gold standard imaging to precisely localize the CSF leak [2,4] and the associated microspur. We present the case of a 40-year-old healthy female with symptomatic SIH. The diagnosis and the technical aspects are thoroughly presented and the most recent literature on the topic is discussed. Relevant surgical anatomy The macroscopic and microscopic surgical anatomy of the approach is shown in Fig. 1. The skin incision is paramedian, allowing a larger working angle and a higher range of motion for instruments during the surgical procedure. Attention should be paid to keeping the surgical field as dry as possible, avoiding unnecessary bleeding during surgical exposure. A subperiosteal dissection of the muscles should be carefully performed. However, full exposure of the entire zygapophyseal articulation is unnecessary, and the joint capsule must not be violated. During the intradural step, attention must be paid not to push against the spinal cord while exposing the ventral dural defect. To facilitate spinal cord mobility, the denticulate ligament may be cut. In our case, this was not necessary. Description of the technique After the identification of the CSF leak on a myelo-CT (Fig. 2), the surgical indication was retained. The patient was positioned prone with the head maintained in a neutral position with a Mayfield head clamp. Cefazolin 2 g was given i.v. as a prophylactic antibiotic. Motor and somato-sensory evoked potentials were used. Precise localization of the level was verified using fluoroscopy, and the surgical site was disinfected with Betadine followed by sterile draping. A straight, slightly paramedian 3-cm-long skin incision was performed (right-sided), centered at the level of the CSF fistula. The lumbodorsal fascia was incised with the monopolar, and the ipsilateral paravertebral muscles were dissected. A careful and thorough hemostasis was achieved, followed by the installation of a Fehling retractor. 
The bone plane was exposed using the monopolar. Thereafter, the interlaminar window was enlarged to 10 × 10 mm using a 4-mm sharp matchstick drill bit. Bone wax was used to secure the hemostasis. The flavum was resected using a Kerrison punch, exposing the underlying dura. Throughout the procedure, immaculate epidural hemostasis was secured, allowing uninterrupted and pristine microsurgery. This is paramount, especially after the dural opening, in order to avoid blood in the subarachnoid space, carrying the risk of secondary inflammatory arachnoiditis. Prolene 5-0 stitches were used to suspend the dura, which had been incised longitudinally. The dural incision was performed dorso-laterally to allow the surgeon to work under the spinal cord, with minimal retraction (Fig. 3). CSF was released and the dural suspensions completed. A careful inspection of the ventral dura using a microdissector was undertaken with special attention to the ventral spinal cord. A thorough circumferential inspection of the ventral dural defect was achieved (Fig. 4), and the underlying bone spur could be removed using a microcurette, until no more bone irregularities were palpated. The dura surrounding the defect was dissected from the bone, so as to make room for a piece of TachoSil®. Running stitches with Prolene 9-0 were placed at the dural edges and the dura repaired using a TachoSil® "sandwich technique" (Fig. 5). In detail, the first TachoSil® sponge was placed extradurally, facing the dural defect, and the Prolene suture was tightened to reduce the dural defect. Gentle pressure was applied in order to secure a good adhesion between the dura and the TachoSil®. A second piece of TachoSil® was then placed intradurally, with the sticky side facing the dural defect and the first sponge, and gentle pressure was applied to the construct for about 1 min. The sizes of these pieces were about twice the size of the dural defect so as to ensure good adhesion between the TachoSil® and the surrounding dura. A Prolene 6-0 running suture was used to close the postero-lateral dural opening. A piece of TachoSil® was added extradurally, followed by a regular myofascial and subcutaneous closure using a Monocryl 2-0 running suture and a skin closure using a Prolene 4-0 running suture. To repair the fistula, the dural edges are approximated with a Prolene 9-0 running suture followed by a TachoSil® sandwich technique. The suture loops are initially kept loose to allow for the passing of a TachoSil® sponge extradurally as an outlay. The suture is then tightened and tied, whereafter a supplementary TachoSil® sponge is placed intradurally as an inlay. Note that the sticky (yellow) faces are placed against the dura. After surgery, the patient was kept in bed for 48 h before gradual mobilization (due to her significant subdural hygromas). The postoperative course was free of any complication and the symptoms completely resolved. A 3D reconstruction of the postoperative CT scan shows the minimally invasive interlaminar bone window (Fig. 6). Postoperative CT scans show the complete resolution of the subdural hematomas (Fig. 7). Indications The postero-lateral trans-laminar, facet-sparing approach is suitable for lesions of the spinal canal and allows a safe, straightforward, and effective approach to the ventral dura, without the need to resect parts of the zygapophyseal joint or rib to access the anterior part of the spinal canal. 
This approach is a valid alternative to the potentially destabilizing costotransversectomy, traditionally used to access the most ventral aspect of the thoracic dura. Limitations While the surgical management of SIH may appear technically demanding, it is not a major obstacle to curing the patient. In our opinion, the accurate diagnosis of SIH with proper imaging and accurate visualization of the exact level of the fistula is of paramount importance. However, the surgery should be performed by a trained microneurosurgeon, mastering advanced microsurgical techniques and sutures. The facet-sparing approach is not suitable in the case of large anterior lesions, such as spinal meningiomas. Patients with zygapophyseal hypertrophy may not be candidates for the minimally invasive, facet-sparing approach: a careful preoperative radiological assessment should be undertaken to ensure safety and feasibility for a specific patient. How to avoid complications Careful patient positioning and radiological identification of the correct level are paramount. Spinal navigation can be used to ensure that an adequate angle of view is obtained. During muscle dissection, keep the surgical field dry and clean, since blood contamination can prevent optimal visualization of the critical structures, such as the dura (and dural tear) and the spinal cord. Furthermore, arachnoidal contamination with blood can cause subsequent inflammatory arachnoiditis and adhesions, which are very difficult to cure, even with adequate management. Therefore, immaculate epidural hemostasis is paramount for allowing uninterrupted pristine microsurgery, especially after the dural opening. Watertight closure of the dural tear using running sutures with inlay and outlay reinforcement is also important, since surgical failure requires re-operation. In case of bleeding of the spinal cord, electrocautery must never be used. Instead, gentle compression with cottonoids along with smooth rinsing is sufficient. Whenever blood contamination occurs during the intradural phase, a thorough low-pressure rinsing must be performed. Do not use the drill to resect the bone spurs, since the spinal cord does not tolerate even minimal heating. Specific information for the patient Patients should be warned about general risks of the surgery, e.g., hematoma, postoperative infection, CSF leaks, pseudomeningocele, and failure-to-cure. It is of utmost importance that patients understand that while the symptoms are mostly cranial, the problem comes from the spinal canal. Patients should be warned that the surgery can be converted to a standard costotransversectomy in case of failure to achieve adequate exposure, increasing the risk of postoperative pain and secondary destabilization of the spine. 10 key point summary 1. SIH secondary to a spontaneous CSF leak is a rare disease, but can cause severe symptoms. 2. Myelo-CT is the best imaging modality to identify and precisely localize CSF leaks in case of SIH. 3. Microsurgical closure is the first-line therapy of SIH. The trans-laminar, facet-joint sparing technique is much less invasive than the standard costotransversectomy, but requires advanced microsurgical skills and should be performed by a senior microneurosurgeon. 4. Careful perioperative localization using fluoroscopy should be performed in order to ensure the minimally invasive aspect of the procedure. 5. Avoid unnecessary bleeding during the extradural approach, since arachnoidal contamination with blood may result in postoperative adhesions. 6. 
Use the so-called "sandwich technique" with an inlay and outlay in addition to the suture of the dural tear (products with fibrin-covered surfaces may be preferred). 7. If necessary, sectioning of the denticulate ligament can increase the surgical view. 8. When bone spurs must be removed, do not use the high-speed drill in the vicinity of the spinal cord. 9. Whenever bleeding of the spinal cord occurs, apply gentle pressure using cottonoids; do not use electrocautery. 10. When achieved successfully, the surgical closure of the dural tear results in immediate postoperative pain relief. Conclusion The minimally invasive trans-laminar, facet-joint sparing approach to repair anterior thoracic CSF fistulas is safe and effective, allowing for speedy recovery of symptomatic patients. A postero-lateral dural incision avoids unnecessary and harmful spinal cord mobilization. A watertight closure using a combination of Prolene 9-0 suture and a TachoSil® "sandwich technique" secures the dural repair and enables early mobilization. Funding Open Access funding provided by Université de Genève. Declarations Ethics approval Not required. Consent for publication Written patient consent was obtained for use and publication of images after complete information. The patient consented to the surgery. Conflict of interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-09-05T13:27:38.050Z
2021-09-04T00:00:00.000
{ "year": 2021, "sha1": "28878eabcf542c8d1bfe0297a712521a25cc4090", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00701-021-04987-w.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "28878eabcf542c8d1bfe0297a712521a25cc4090", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239993226
pes2o/s2orc
v3-fos-license
Biorational method for controlling the abundance of Cydia pomonella L. in apple agrocenoses of the Krasnodar region In the Krasnodar Territory, Cydia pomonella L. belongs to the dominant pests of the apple tree, against which 8-10 treatments with insecticides are carried out during the growing season. In world practice, the pheromones Shin-Etsu® MD CTT, D and BRIZ® are used in apple agrocenoses for the control of C. pomonella. Abstention from or reduction of insecticidal treatments leads to a decrease in the pesticide load on the agrocenosis of the garden by a factor of two or more. The objective of our research was to determine the biological effectiveness of pheromones in controlling the abundance of C. pomonella. The test was carried out in two horticultural zones of the Krasnodar Territory, in areas with different numbers of the phytophage. As a result of the experiment, it was found that in the experimental plots the percentage of damaged fruits in the drop was 1.3-1.5%. Fruit damage was not observed in the removable crop, which corresponds to the results of the standard variant, with the use of insecticidal treatments. It was found that the use of pheromones in the Black Sea horticultural zone of the Krasnodar Territory is economically feasible. The cost reduction for the purchase of insecticides amounted to 9089.2 rub/ha, with a decrease in pesticide load by 70%. Introduction In recent years, the need has arisen to switch to new strategies for protection against lepidopteran pests in fruit plantations, in particular against Cydia pomonella. This concept should primarily be based on phytosanitary optimization of agroecosystems and expanding the list of plant protection products. The main conditions for the applied pesticides are selective ability and ecological safety of the preparations. Currently, insecticides are used from the group of organophosphorus compounds, carbamates, and pyrethroids, which are highly toxic to pests and the garden agrocenosis. The contemporary substances of a different type include bioregulators that do not directly affect the insect's body, but are involved in the transmission of chemical signals that control life processes at the physiological level. Cydia pomonella is one of the main destructive species in apple plantations of the Krasnodar Territory. The phytophage develops in three full generations; therefore, 8-10 insecticidal treatments must be carried out to protect the crop in the region [1]. Frequent use of pesticides over a series of years leads to significant changes in the agrobiocenosis, resulting in the formation of resistance, the disappearance of entomophages, and rotation of the prevailing species. With the advent of pheromones and modern technologies, which release volatile substances from dispensers in a dosed and regular manner over a long period, it has become possible to reduce the number of chemical treatments. The specific advantages of pheromones are effective control of the phytophage population at an economically imperceptible level, selectivity of action, absence of toxicity for mammals and entomophages, as well as high efficiency with minimal amounts and rapid degradation in the environment [2][3][4][5]. In 2015, twin-tubes (plastic dispensers) Shin-Etsu® MD CTT, D were created in Japan and produced by Summit Agro, and German scientists from BASF synthesized pheromones with the trade name BRIZ® (RAK in Europe) [6][7]. Analysis of the global pheromone market revealed that Lymantria dispar and C. pomonella account for about 68% of the total world pheromone consumption [8]. 
The purpose of our research was to determine the biological effectiveness of the pheromones Shin-Etsu® MD CTT, D and BRIZ® in controlling the number of C. pomonella. The research tasks included: to evaluate the biological effectiveness of the pheromones Shin-Etsu® MD CTT, D and BRIZ® in the control of C. pomonella in areas with different initial numbers of the phytophage; to determine whether the pheromones have a side effect on other types of pests in apple plantations; and to develop a technology for regulating the number of C. pomonella based on a complex of communication methods and pest activity. In production field experiments (3 ha), two types of pheromones were used: Shin-Etsu® MD CTT, D (dispenser of E,E-8,10-dodecadien-1-ol, 2.2×10⁻⁴ kg/dispenser; dispenser of 1-dodecanol, 1.2×10⁻⁴ kg/dispenser; dispenser of 1-tetradecanol, 2.76×10⁻⁵ kg/dispenser) and BRIZ®, a complex three-component pheromone, a vapor-generating product in a dispenser (178 mg of codlemone + 42 mg of n-tetradecyl acetate). Materials and methods The tests were carried out according to the male disorientation method, with the placement scheme recommended by the manufacturers. The dispensers were hung in the garden during the "beginning of flowering" phenophase, i.e. before the overwintered generation of C. pomonella began to fly. The dynamics of the phytophage flight was monitored using pheromone traps; the number of other garden pests was counted in accordance with the methods for registration tests of insecticides [9]. Results and discussion The system of protecting apple trees from diseases and pests is based on the use of chemical pesticides of the second or third hazard classes, while every year there is an increase in the frequency of treatments during one growing season to 20 or more. Therefore, the search for alternative methods for controlling the number of pests in the apple orchard is relevant and timely. Today, the mating disruption (MD) strategy is the most promising technology for the use of sex pheromones in the world, due to which the use of insecticides against lepidopteran pests has been significantly reduced [10][11][12][13][14]. The first use of pheromones began in 1959, but they were not widely used, because there were no technologies capable of maintaining a constant concentration of a substance in the open air; the low efficiency did not allow this method to compete with insecticides [15]. Over the past two decades, the number of studies on the biosynthesis of insect pheromones and the mechanisms of their practical application in controlling the behavior of pests has increased. A high abundance of C. pomonella was noted at the experimental site of ZAO Loris; up to 68 individuals were caught per trap in three days. As a result of the experiment, it was found that the damage to the fruits in the drop was 1.3-1.5%; in the removable harvest, no damage to the fruits was noted, which corresponds to the results of the standard variant, with the use of insecticidal treatments. In the control, the indicators of fruit damage were 83% in the drop and 61% in the harvest. In addition to C. pomonella, a high abundance of Aphis pomi, Quadraspidiotus perniciosus, Tortricidae, and Euzophera bigella Zell. was noted. In the control variant, the damage to fruits by E. bigella was more than 50%. As a result of the research, it was confirmed that the pheromones Shin-Etsu® MD CTT, D and BRIZ® have no side effects on other phytophages of the apple orchard; therefore, five insecticide sprayings were carried out against these pests. 
In the experimental plot of the agricultural sector of the Novomikhaylovskoye agricultural enterprise, the number of C. pomonella did not exceed the economic injury level, and a moderate presence of E. bigella and Grapholitha molesta was also noted in the garden. During the season, two treatments were carried out against the pest complex, compared with ten sprays in the standard variant. The effectiveness of the biorational method for controlling the number of C. pomonella using Shin-Etsu® MD CTT, D was 98.8-100%, which is equivalent to the indicators obtained in the variant with the use of a chemical protection system (Fig. 1). When testing BRIZ® pheromones, results similar in efficiency were obtained. The highest efficiency of the use of twin-tubes Shin-Etsu® MD CTT, D was noted in the Black Sea horticultural zone of the Krasnodar Territory, 98.8-100%, which is equivalent to the standard. The cost reduction for the purchase of insecticides was 9089.2 rub/ha, the profitability was 138%, and the pesticide pressure was reduced by 70%. The crop yield in the variants of the experiment was 30.5 t/ha, and the share of standard fruit was 98%. The use of attractants in the central zone is not cost-effective, since the pheromones on the market are products of targeted action on only one species of phytophage (C. pomonella), and the number of other fruit-damaging pests in the garden is higher than the economic injury level. Experience of using Shin-Etsu® MD CTT in the 2016-2018 seasons in all regions of the Russian Federation showed high reliability and efficiency of the disorientation method. Studies have shown that the scheme can be applied both independently (for example, in organic gardens) and in combination with chemical treatments (integrated protection system) in the presence of a wide range of pests, with a constant influx of C. pomonella into gardens, and with an extremely high flight peak. Conclusion The results obtained indicate that the Shin-Etsu® MD CTT, D and BRIZ® (RAK) disorientators are an equivalent alternative to insecticides used against Cydia pomonella. The use of pheromones in apple agrocenoses is recommended against the background of a low number of lepidopteran (Tortricidae, Tineidae) pests; this will reduce pesticide treatments by up to 70%. When the number of C. pomonella is higher than the economic injury level in apple plantations, 2-3 treatments are necessary, but even with this, the pesticide load on apple cenoses decreases by 20-90%. "The list of means of production for use in the organic farming system on the basis of international principles of organic agriculture" allowed the use of pheromones for organic gardening.
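As a side note on how such effectiveness percentages are conventionally obtained, a minimal sketch is given below; it uses an Abbott-style comparison with the untreated control and the damage figures quoted above, as our illustration rather than the authors' stated procedure:

```python
# Biological effectiveness relative to an untreated control (Abbott's formula):
# E = (C - T) / C * 100, where C and T are damage in control and treated plots.
# The damage values come from the text above; pairing them in this formula is
# our illustration of the conventional calculation, not the authors' exact
# procedure.

def abbott_effectiveness(control_damage_pct: float, treated_damage_pct: float) -> float:
    return (control_damage_pct - treated_damage_pct) / control_damage_pct * 100.0

drop_eff = abbott_effectiveness(83.0, 1.5)      # damaged fruit in the drop
harvest_eff = abbott_effectiveness(61.0, 0.0)   # no damage in the removable crop
print(f"effectiveness (drop):    {drop_eff:.1f}%")    # ~98.2%
print(f"effectiveness (harvest): {harvest_eff:.1f}%")  # 100.0%
```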
2021-10-21T15:17:24.699Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a1b18984b2133cd90f4f6b5747de40be096eba55", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2021/06/bioconf_biphv2021_04013.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c6e99c80ee1f51ba133a2dc5252506f8e690fc9d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
225816001
pes2o/s2orc
v3-fos-license
Smoke-free Zone in Indonesia: Who is Doing What Now BACKGROUND: Although all environments that applied smoke-free zones (SFZs) have a sufficient compliance rate (over 80%) in Indonesia, particularly in Bogor City, it is still unclear who is doing what now in SFZ activities to assess the effectiveness and efficiency of this tobacco control program. OBJECTIVES: This review aimed to present the evidence of tobacco control in SFZ programs and the activities of these zones based on the several indicators set by the local government's regulation. MATERIALS AND METHODS: A review was conducted of the SFZ local regulation archives. Data were derived from secondary sources and observation data of law enforcement teams' generic activities and programs in Bogor City in the Province of Jawa Barat, Indonesia. RESULTS: There were eight (8) zones designated as SFZs according to the local regulation, namely: (1) public places, (2) workplaces, (3) places of worship, (4) children's playgrounds and/or other gathering places, (5) public transportation, (6) teaching and learning environments, (7) health facilities, and (8) sports facilities. It was found that 55% of these zones still did not comply with the SFZ regulation. This remains unfinished tobacco control work in Indonesia, considering that Indonesia is the largest of the six developing countries that have not ratified the Framework Convention on Tobacco Control of the World Health Organization. CONCLUSION: The role of the SFZ enforcement team, which consists of relevant stakeholders, is crucial to optimize the activities and programs of the SFZ regulations with clear targeting, rewards, and punishments. However, further studies are needed to determine the effectiveness of non-smoking areas specifically. Background Indonesia is the largest of the six countries that have not ratified the Framework Convention on Tobacco Control of the World Health Organization. However, in an effort to control and confine the tobacco industry, which has dominated decision-making regarding public health policies in the central government, several regions (provinces and municipalities) such as Bogor City have already largely implemented a policy against the tobacco industry by establishing smoke-free zones (SFZs). To maintain sustained compliance, the SFZ local regulation is not only assessed at a single moment, as mentioned, but has also developed both short-term (4-6 month) and long-term (1-3 year) assessments [1]. Short-term indicators comprise: an SFZ sign is installed; a smoking room exists in accordance with the applicable terms; and promotion and socialization regarding SFZs take place. Long-term indicators consist of: the SFZ policy is accepted and upheld by the management and visitors of public places; supporting facilities related to this regulation are complied with and utilized; and no smoking, no cigarette sales, and no cigarette smoke are found in these environments [2]. However, several studies decided only to monitor and evaluate how these activities and programs progressed [3], [4], [5], [6]. We consider ways to briefly describe the kinds of activities and programs, along with their targets and implementers, that are relevant and being considered proactively for family medicine's enforcement toward SFZ implementation. These focused on activities and programs for the implementation of the SFZ local regulation in Bogor City. 
This review describes several programs and activities that were implemented by stakeholders to support actions against tobacco abuse. Study design and data collection Analyses of the activities and programs were based on a review of their availability as local government regulations in 2014. Data were derived from secondary sources and observation data of law enforcement teams' generic activities and programs in Bogor City in the Province of Jawa Barat, Indonesia, based on the current SFZ implementations by government stakeholders. Ethics statement This research was conducted from ... All activities and programs are generated by law enforcement teams for SFZ implementation that supervised and regulated the SFZ local regulation of Bogor City's local government. These consist of stakeholders of Bogor City's local government, such as its Public Health Department and Office of Tourism. Data evaluation of the SFZ local regulation in Bogor City found that several events were held regarding its implementation (Table 1). Discussion In an effort to control the harmful impacts of tobacco use, Bogor City initiated the implementation of local regulation No. 12 of 2009 on SFZs and Bogor's Mayor Regulation (Perwali) No. 7 of 2010 on the Implementation Guidelines of the local regulation on SFZs [7]. The implementation and enforcement of SFZ local regulation No. 12 of 2009 began in May 2010, a year after completing its socialization activities and programs, through anti-cigarette campaign activities, sympathetic actions, SFZ notification, minor crime enforcement, strengthening the role of the community through the establishment of non-smoking communities, smoking-cessation counseling, etc. [3]. Adequate targeting of societies is considered the core for maximizing the dissemination of information about SFZ implementation [8]. Position level is one of the factors that influence knowledge [9], especially in regard to SFZ managers. Previously, enforcement teams have striven through socialization to convince managers to comply [10], [11]. Therefore, a powerful formal regulation is essential, one that can compel SFZ management not only to attend socialization activities and programs but also to implement them proactively. Naturally, the terms of successful implementation of the rules are that all parties involved, namely those who are regulated, protected, make the rules, oversee the rules, and/or enforce the rules, know and support them [12]. The same understanding will prevent ambiguity and ensure consistency of the SFZ local regulation. Learning from the New York City model, which follows the Implementation Guidelines of the 2003 "Clean Act in Space" [13], a source that offers practical suggestions in clear language in the form of questions and answers, or frequently asked questions, has proven effective for the socializing of a regulation. Conclusion The role of the SFZ enforcement team, which consists of relevant stakeholders, is crucial to optimize socialization through brief tasks conducted through activities and programs of information on SFZ regulations, with clear targeting, rewards, and punishments. Better compliance indicators are considered essential for revising the articles in the SFZ local regulation, especially in relation to the rules for smoking areas in environments with no additional land. Otherwise, every SFZ manager needs to emphasize the importance of these local regulations at various service levels. 
Moreover, further studies are needed to determine the effectiveness of non-smoking areas specifically.
2020-10-28T18:56:43.618Z
2020-05-25T00:00:00.000
{ "year": 2020, "sha1": "0a8d7fdd693b3b91a5a014798a667bfcda7dee2e", "oa_license": "CCBYNC", "oa_url": "https://www.id-press.eu/mjms/article/download/4091/4831", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8f78c6beae0d871a041117091dbc4a1e776629f3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119625458
pes2o/s2orc
v3-fos-license
Proof of the averaged null energy condition in a classical curved spacetime using a null-projected quantum inequality Quantum inequalities are constraints on how negative the weighted average of the renormalized stress-energy tensor of a quantum field can be. A null-projected quantum inequality can be used to prove the averaged null energy condition (ANEC), which would then rule out exotic phenomena such as wormholes and time machines. In this work we derive such an inequality for a massless minimally coupled scalar field, working to first order in the Riemann tensor and its derivatives. We then use this inequality to prove ANEC on achronal geodesics in a curved background that obeys the null convergence condition. I. INTRODUCTION In general relativity it is possible to have exotic spacetimes that allow superluminal travel, closed timelike curves, or wormholes, so long as the appropriate stress-energy tensor T_µν is available. Although general relativity does not provide any restrictions on T_µν, quantum field theory does. These constraints are called energy conditions or quantum (energy) inequalities. The simplest energy conditions are bounds on projections of the stress-energy tensor at each point in spacetime, but those are easily violated by quantum fields, even by free fields in flat spacetime. But by averaging, we can produce conditions that are not so easily violated. A quantum inequality bounds an average of T_µν over a localized part of a timelike path, and an averaged energy condition bounds the energy along an entire geodesic. A good technique to rule out exotic phenomena [1] is to use the achronal averaged null energy condition (achronal ANEC), which requires that the null-projected stress-energy tensor cannot be negative when averaged along any complete achronal null geodesic, ∫_γ dλ ⟨T_ab⟩ ℓ^a ℓ^b ≥ 0, (1) where γ is an achronal null geodesic (also called a null line), i.e., no two points of γ can be connected by a timelike path, and ℓ^a is the tangent vector to γ. Ref. [2] proved ANEC for null geodesics traveling in flat spacetime (though there could be curvature elsewhere) using a quantum inequality. In previous work [3] we studied ANEC in the case of a classical curved background, meaning a spacetime generated by matter that obeys the null energy condition, T_ab ℓ^a ℓ^b ≥ 0, (2) at all points and for all null vectors ℓ. We conjectured a particular form for a curved-space quantum inequality, and from that we were able to show that a quantum scalar field in a classical curved background would obey achronal ANEC. Here we complete the proof by demonstrating a curved-space quantum inequality (somewhat different from the one we conjectured before) and using it to prove the same conclusion. The rest of the paper is structured as follows. In Sec. II we state our assumptions and present the ANEC theorem we will prove. We begin the proof by constructing a parallelogram which can be understood as a congruence of null geodesic segments or of timelike paths, as in Ref. [3]. In Sec. III we present and discuss the general quantum inequality of Fewster and Smith [4]. Secs. IV-VI apply that general inequality to the specific case needed here, using results from our previous application of Fewster and Smith's inequality in Ref. [5]. In Sec. VII we present the proof of the ANEC theorem of Sec. II using the quantum inequality. Finally, Sec. VIII is a summary of our results and discussion of some open problems. We use the sign convention (−, −, −) in the classification of Misner, Thorne and Wheeler [6]. 
Latin indices (in small or capital letters) from the beginning of the alphabet will denote all coordinates; those from the middle of the alphabet will denote only spatial coordinates. A. Assumptions We consider a spacetime M containing a null geodesic γ with tangent vector ℓ, and define a "tubular neighborhood" M ′ around γ, which is composed of a congruence of null geodesics as in Ref. [3]. Then we define Fermi-like coordinates [7] on M ′ as follows [3]. First pick some point p on the geodesic γ. Let E (u) = ℓ, and pick a null vector E (v) at p such that E a (v) ℓ a = 1, and two unit spacelike vectors E (x) and E (y) at p, perpendicular to E (u) and E (v) and to each other, giving a pseudo-orthonormal tetrad. Then the point q = (u, v, x, y) in these coordinates is found by traveling unit distance along the geodesic generated by vE (v) + xE (x) + yE (y) , parallel transporting E (u) , and then unit distance along the geodesic generated by uE (u) . We suppose that the curvature inside the tubular neighborhood M ′ obeys the null convergence condition, R ab V a V b ≥ 0 for any null vector V . This will be true if the matter generating this curvature obeys the null energy condition, Eq. (2). We require that in M ′ the curvature is smooth and obeys the bounds, and in the coordinate system described above, where the greek indices α, β, γ, . . . take values v, x, y but not u, and R max , R ′ max , R ′′ max , R ′′′ max are finite numbers but not necessarily small. These bounds need not apply outside M ′ . Finally, we consider a quantum scalar field in M. Inside M ′ it is massless, free, and minimally coupled, but outside M ′ we allow interactions and different curvature couplings. For further details see Sec. II E of Ref. [3]. B. The theorem Theorem 1. Let (M, g) be a spacetime and γ an achronal null geodesic, and suppose that around γ there is a tubular neighborhood M ′ . We suppose that the curvature is bounded in the sense of Sec. II A and the causal structure of M ′ is not affected by conditions outside M ′ [3]. Let T ab be the renormalized expectation value of the stress-energy tensor of a minimally coupled quantum field in some Hadamard state ω. Then the ANEC integral, cannot converge uniformly to negative values on all geodesics Γ(λ) in M ′ . C. The parallelogram We will use the (u, v, x, y) coordinates of the Fermi-like coordinate system defined in Sec. II A. Let r be a positive number small enough such that whenever |v|, |x|, |y| < r, the point (0, v, x, y) is inside the tubular neighborhood M ′ defined in Sec. II A. Then the point (u, v, x, y) ∈ M ′ for any u. Define the points with v fixed and u varying. Write the ANEC integral As in Ref. [3] we suppose that, contrary to Theorem 1, Eq. (7) converges uniformly to negative values, and show that this leads to a contradiction. Given any positive number v 0 < r we can find a negative number −A greater than all A(v) with v ∈ (−v 0 , v 0 ). By uniform continuity, it is then possible to find some number u 1 large enough that for any v ∈ (−v 0 , v 0 ) as long as We define a sequence of parallelograms in the (u, v) plane, and integrate over each parallelogram in null and timelike directions.
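The displayed formulas for the curvature bounds (Eq. (3)) and for the ANEC integral (Eq. (7)) were lost in extraction; the reconstruction below is a plausible reading of the surrounding prose, with the exact index placement and normalization being our assumptions:

```latex
% Assumed form of the curvature bounds in M', Eq. (3):
\[
  |R_{abcd}| \le R_{\max}, \qquad
  |R_{abcd,\alpha}| \le R'_{\max}, \qquad
  |R_{abcd,\alpha\beta}| \le R''_{\max}, \qquad
  |R_{abcd,\alpha\beta\gamma}| \le R'''_{\max},
\]
% with \alpha, \beta, \gamma ranging over v, x, y only (not u).
% Assumed form of the ANEC integral along the geodesic at transverse
% position v, Eq. (7):
\[
  A(v) = \int_{-\infty}^{\infty} du \, \langle T_{uu} \rangle_{\omega}(u, v, 0, 0).
\]
```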
The parallelograms have the form where u − (v), u + (v) are linear functions of v defined below. Let f be a smooth sampling function supported only on (−1, 1) and normalized Then we can take a weighted integral over the whole parallelogram, We choose a velocity V and define the Doppler shift parameter We pick any fixed number α with 0 < α < 1/3 and let and Then as V → 0, δ → ∞ and t 0 , v 0 → 0. We define The points with |η| < η 0 and |t| < t 0 are the same parallelogram described above, but parameterized in a different way (see Fig. 1). For constant η the paths are timelike and in flat space parametrized by proper time. In curved spacetime t is approximately the proper time as shown in Ref. [3]. Now we change variables in Eq. (12) using the Jacobian to get We will show that this upper bound conflicts with a lower bound that we will derive using quantum inequalities on the paths given by fixing η and varying t in Φ V (η, t). III. A GENERAL QUANTUM INEQUALITY Quantum inequalities are bounds on weighted averages along a timelike path of projections of the stress-energy tensor T ab . The general form is where w(t) is a timelike path parametrized by t, V is a vector field onto which the stress-energy tensor will be projected, f (t) is a smooth sampling function, and B is some positive number depending on the choice of quantum field, the spacetime, the projection direction V , and the function f . In this paper we will apply the general quantum inequality of Fewster and Smith [4] to the case of T uu (Φ V ) appearing in Eq. (19). Following Refs. [4,8], we define the renormalized stress-energy tensor, The quantities appearing in Eq. (21) are defined as follows. The operator T split ab ′ is the point-split energy density operator, which is applied to the difference between the two-point function and the Hadamard series, We have introduced a length l so that the argument of the logarithm in Eq. (23) is dimensionless. The possibility of changing this scale creates an ambiguity in the definition of H, but this ambiguity for curved spacetime can be absorbed into the ambiguity involving local curvature terms discussed below [4]. For simplicity of notation, we will work in units where l = 1. In the first term ∆ 1/2 is the Van Vleck-Morette determinant, and σ is the squared invariant length of the geodesic between x and x ′ , negative for timelike distance. In flat space. By F (σ + ), for some function F , we mean the distributional limit where In some parts of the calculation it is possible to assume that the two points have the same spatial coordinates, so we define and write where The Hadamard series can be written where the subscript j shows the power of σ in the term. Following the notation of Ref. [9], we let H (j) denote the sum of all terms from H −1 through H j . The quantity Q is added "by hand" to ensure that the stress-energy tensor is conserved [8]. But since we will be interested here in projection on a null vector ℓ, Q will not contribute, because g ab ℓ a ℓ b = 0. The term C ab handles the possibility of including local curvature terms with arbitrary coefficients in the definition of the stress-energy tensor. From Ref. [10] we find that these terms include (1) So we must include a term in Eq. (21) given by a linear combination of Eqs. (31a) and (31b). However, we keep only first order in R, and ignore those terms that vanish on null projection, so for our purposes, where a and b are undetermined constants. From Ref.
[4] we have the definition of Ẽ, where iE is the antisymmetric part of the two-point function. We will let E j be the part of E involving σ j , define a "remainder term", and let H̃ . We will use the Fourier transform convention We can now state the quantum inequality of Ref. [4], on a timelike path w(t) with the stress-energy tensor contracted with null vector field ℓ a where g(t) is a smooth function with compact support and the operator θ * denotes the pullback of the function to the path, The subscript (5) means that we include only terms through j = 5 in the sums of Eq. (23). However, as we proved in Ref. [9], terms of order j > 1 make no contribution to Eq. (37). Thus we can write Eq. (37) with w(t) = Φ V (η, t) for a specific value of η and the stress-energy tensor null-contracted with vector field ℓ a pointing only in the u direction with ℓ u = 1, where F denotes the Fourier transform in both arguments according to Eq. (36), and we used the fact that R uu = 0 according to Ref. [3]. We will now evaluate Eq. (40) in the case of interest. In this section we will calculate T split H̃ (1) and thus F (t, t ′ ). In Sec. V, we will Fourier transform F (t, t ′ ), and in Sec. VI we will find the form of B in terms of limits on the curvature and its derivatives. To simplify the calculation we will evaluate T split H̃ (1) in a coordinate system (t, x, y, z) where the timelike path w(t) points only in the t direction, the z direction is perpendicular to it, and x and y are the previously defined ones. More specifically t and z are where we extend the definition of t from Sec. II to cover the whole spacetime. The new null coordinates ũ and ṽ are defined by and are connected with u and v. The operator T split uu ′ can be written If we define ζ = z − z ′ and ū as the ũ coordinate of x̄, the center point between x and x ′ , we have A. Derivatives of H̃ −1 For the derivatives of H̃ −1 it is simpler to use Eq. (45). We have In flat spacetime it is straightforward to apply the derivatives to H̃ −1 . However in curved spacetime, there will be corrections first order in the Riemann tensor to both σ and its derivatives. We are considering a path w whose tangent vector is constant in the coordinate system described in Sec. II A. The length of this path can be written where ∆x = x − x ′ and x ′′ = x ′ + λ∆x since dx a /dλ is a constant. Now σ is the negative squared length of the geodesic connecting x ′ to x. This geodesic might be slightly different from the path w. However, this deviation results from the connection, which is first order in the curvature (times the coordinate distance from the origin; see Eq. (9) of Ref. [7]). Thus the distance between the two paths is first order, and the difference in the metric is second order in the curvature (see Eqs. (25,27) of Ref. [7]). The difference in length in the same metric due to the different path between the same two points is also second order. All these effects can be neglected, and so we take σ = −s 2 . Now using Ref. [7] we can write the first-order correction to the metric, where F ab is given by Eq. (29) of Ref. [7] because the first step for x = y = 0 is in the ṽ direction and the second in the ũ direction. By the symmetries of the Riemann tensor the only non-zero component is where we took into account the different sign conventions. Putting this in Eq. (48) gives So to first order in the curvature, We define the zeroth order σ, and the first order, where we defined ℓ ≡ x ′′ ũ and changed variables to y = κℓ.
Now to first order, and the derivatives, Now we can take the derivatives of σ , Similarly, For the two derivatives of σ (1) , Now we can assume purely temporal separation, so ∆x ũ = ∆x ṽ = τ / √ 2 and where z̄ = (z + z ′ )/2 and t ′′ = t ′ + λτ . Then the derivatives of H̃ −1 are Let us define the locations x̄ κ = (κ x̄ ũ , x̄ ṽ ) and Then Eq. (61) can be written The derivatives of H̃ −1 can thus be written where the y i 's are smooth functions of the curvature, where x ′′ and x ′′ κ are defined in terms of t ′′ by Eqs. (60) and (62). B. Derivatives with respect to τ and ū Ref. [5] calculated H̃ (1) , but for points separated only in time. Let us use coordinates (T, Z, X, Y ) to denote a coordinate system where the coordinates of x and x ′ differ only in T . Ref. [5] gives The order-0 remainder term is where dΩ means to integrate over solid angle with unit 3-vectors Ω̂ , the 4-vector Ω = (0, Ω̂ ), the subscript R means the radial direction, and we define X The order-1 remainder term is where G AB is the remainder after subtracting the second-order Taylor series. We can write When we apply the τ and ū derivatives from Eq. (46), we can take (T, Z, X, Y ) = (t, z, x, y) and calculate ∂ 2 For the derivatives with respect to τ we have and in the τ → 0 limit. Applying ū derivatives to R 0 gives where x ′′′ = x̄ + rΩ and x ′′′ s = x̄ + srΩ. Now we have to take the second derivative of R 1 with respect to τ , which is T − T ′ in this case. This appears in three places: the argument of sgn in Eq. (72), the limit of integration in Eq. (73), and the term in parentheses in Eq. (73). When we differentiate the sgn, we get δ(τ ) and δ ′ (τ ), but since G AB ∼ τ 3 , there are enough powers of τ to cancel the δ or δ ′ , so this gives no contribution. When we differentiate the limit of integration, the term in parentheses in Eq. (73) vanishes immediately. The one remaining possibility gives C. Derivatives with respect to ζ To differentiate with respect to ζ, we must consider the possibility that x and x ′ are not purely temporally separated. We will suppose that the separation is only in the t and z directions and construct new coordinates (T, Z) using a Lorentz transformation that leaves x unchanged and maps the interval (T − T ′ , 0) in the new coordinates to (τ, ζ) in the old coordinates. Then and the transformation from (T, Z) to (t, z) is given by with the x and y coordinates unchanged. Then Now let M be some tensor appearing in H̃ (1) . The components in the new coordinate system are given in terms of those in the old by We would like to differentiate such an object with respect to ζ and then set ζ = 0. The only place ζ can appear is in the Lorentz transformation matrix, where we see and similarly, To simplify notation, we will define P and Q to be the matrices on the right hand sides. Reinstating x and y, Now we can write the derivative of M ABC... as where p abc... ABC... is a rank-n matrix of 0's and 1's. With two derivatives, we have where q abc... ABC... is a rank-n matrix of nonnegative integers. There are also places where T − T ′ appears explicitly in H̃ 1 . We can differentiate it using Eq. (79), Now we apply the operators ∂ 2 ζ and ∂ τ ∂ ζ to H̃ 0 , H̃ 1 , and R 1 . First we apply one ζ derivative¹ to Eq. (69b) using Eq. (87), and two ζ derivatives using Eqs. (88) and (89a), Then we apply one ζ derivative to H̃ 1 , and two ζ derivatives to H̃ 1 , Finally we have to apply the derivatives to the remainder R 1 . We can apply the ζ derivatives in two places, the Lorentz transformations and G AB .
Since the three terms are very similar we will apply the derivatives to one of them where we defined Y a ≡ (1/2)|T −T ′ |Λ a I Ω I . Then using Eqs. (87) and (89a), we find that ∂ ζ Y a | ζ=0 = (1/2)p a i Ω i sgn τ and taking into account the properties of Taylor expansions, where G ab,c is the remainder of the Taylor expansion of G ab,c after subtracting the first-order Taylor series. (96) ¹ The Lorentz transformation technique we use here is not quite sufficient to determine the singularity structure of the distribution ∂ ζ H̃ 0 at coincidence. Instead we can use Eq. (47) of Ref. [5] to compute the non-logarithmic term in H̃ 0 for arbitrary x and x ′ , which is then Differentiating this term gives Eq. (90) and explains the presence of τ − instead of τ in the denominator. The first term of Eq. (91) arises similarly. Using G (3) from Eq. (73), Eq. (96) becomes We could simplify further by using the explicit values of the p matrices, but our strategy here is to show that all terms are bounded by some constants without computing the constants explicitly, since the actual constant values will not matter to the proof. Applying the τ derivative gives We do not have to differentiate sgn τ here, because the rest of the term is O(τ 2 ) and so a term involving δ(τ ) would not contribute. The same procedure can be applied to all three terms. Terms involving X ′′ s will get an extra power of s each time G is differentiated. The final result is For two ζ derivatives we can apply both on the Lorentz transforms, both on the Einstein tensor or one on each, Using Eqs. (89b) and (88), since q t i = 0 and Ω t = 0. Using properties of the Taylor series as before, we can write so Eq. (101) becomes Using G (1) as in Eq. (71) and G (2) and G (3) from Eqs. (97) and (73) this becomes For all three terms where c 1 , c 2 , and c 3 are smooth and have no τ dependence and c 4 is odd, C 1 , and bounded. As mentioned in Sec. IV A, the functions y i depend on τ but are smooth. Explicit expressions for the c i are given in Appendix A. We now put the terms of Eq. (108) into Eq. (40), and Fourier transform them, following the procedure of Sec. IV of Ref. [9], to obtain the bound B in the form The first term in Eq. (108) is 1/(π 2 τ 4 − ), and we proceed exactly as Ref. [9], except for the different numerical coefficient, to obtain Putting only Eq. (110) into Eq. (109) gives the result for flat space. Fewster and Eveson [11] found a result of the same form, but they considered T tt instead of T uu , so the multiplying constant is different. Fewster and Roman [12] found the result for null projection. Where we have 1/24, they had (v · ℓ) 2 /12, where v is the unit tangent vector to the path of integration. Here v · ℓ = ℓ t = 1/(δ √ 2), from Eq. (42), so the results agree. The remaining τ −4 − term requires more attention, because of the τ dependence in y 1 . We write with Then [9] Applying the τ derivatives to G 1 gives where the terms with an odd number of derivatives of the product of the sampling functions vanish after taking τ = 0. Now y 1 depends on τ and t̄ only through t ′′ = t̄ + (λ − 1/2)τ , so using Eq. (65), we can write d dτ Then we integrate by parts and put all the derivatives on the sampling functions g, Since we set τ = 0, Y 1 has no λ dependence and we can perform the integral.
The result is For the term proportional to τ −3 − , we have where We calculate this Fourier transform in Appendix B and the result is Applying the derivatives to G 2 gives Again the only dependence of y 2 on τ is in the form of t ′′ so we can integrate by parts and perform the λ integrals For the term proportional to τ −2 − , we have where Ref. [9] calculated this Fourier transform, but the Fourier transform of 1/τ 2 − given by Ref. [13] was cited with the wrong sign in Eq. (105) of Ref. [9]. The correct result is Applying the derivatives to G 3 gives As before, we integrate by parts Integrating in λ gives The three remaining terms have Fourier transforms given in Ref. [9], so we find² where we added which is the local curvature term from Eq. (40). The bound is now given by Eqs. VI. THE INEQUALITY We would like to bound the correction terms B 1 through B 6 using bounds on the curvature and its derivatives. Using Eq. (3) in Eq. (66a), we find We can use Eq. (132) in Eq. (117) to get a bound on |B 1 |. But we will not be interested in specific numerical factors, only the form of the quantities that appear in our bounds. So we will write where J 1 [g] is an integral of some combination of the sampling function and its derivatives appearing in Eq. (117). We will need many similar functionals J Similar analyses apply to B 2 and B 3 and the results are Among the rest of the terms in B there are some components of the form R abcd,ũ which diverge after boosting to the null geodesic, as shown in Ref. [3]. However we can show that these derivatives are not a problem since we can integrate them by parts. Suppose we have a term of the form where L n (τ, t̄) is a function that contains the sampling function g and its derivatives. The ũ derivative on the Riemann tensor can be written The term can be reorganized the following way by grouping the terms with t and ṽ, x, y derivatives where A abcd... n are arrays with constant components and the subscript n denotes the term they come from. Here the greek indices α, β, · · · = ṽ, x, y. The term with one derivative on α can be bounded, while the term with one derivative on t can be integrated by parts, where the primes denote derivatives with respect to t̄. The sampling function is C ∞ 0 so L ′ (τ, t̄) is still smooth and the boundary terms vanish. Now it is possible to bound this term, where we defined The same method can be applied with more than one ũ derivative. Now we apply this method to the integrals B 4 , B 5 and B 6 of Eq. (130). We start with B 4 , which has the form where After integration by parts Taking the bound gives Reorganizing B 5 based on the number of t derivatives gives where and the bound is Finally the remainder term is where we changed variables to λ = r/τ and now arrays A abcd... We define constants a and now we can take the bound Putting everything together gives We can change the argument of the sampling function, writing g(t) = f (t/t 0 ), where f is defined in Sec. II and normalized according to Eq. (11), so Eq. (154) becomes where we used J (k) We can simplify the inequality by defining Then Eq. (155) becomes We will use this result to prove the achronal ANEC. VII. THE PROOF OF THE THEOREM We use Eq. (159) with w(t) = Φ V (η, t) and integrate in η to get As δ → ∞, t 0 → 0 but F (m) , F (n) , R max , and R (m) max are constant. Now x̄ ũ = x̄ u /δ, and using Eqs. (10), (15), (16), |x u | < u 1 + √ 2δt 0 . Thus as δ → ∞, x̄ ũ → 0. Therefore only the first term in braces in Eq.
(160) survives, so the bound goes to zero as Equation (160) is a lower bound. It says that its left-hand side can be no more negative than the bound, which declines as δ 2α−1 . But Eq. (19) gives an upper bound on the same quantity, saying that it must be more negative than −At 0 /2, which goes to zero as t 0 ∼ δ −α . Since α < 1/3, the lower bound goes to zero more rapidly, and therefore for sufficiently large δ, the lower bound will be closer to zero than the upper bound, and the two inequalities cannot be satisfied at the same time. This contradiction proves Theorem 1. The ambiguous local curvature terms do not contribute in the limit η 0 → ∞ because they are total derivatives proportional to ∫ −η 0 η 0 dη R ,uu (x) = 0 . (162) VIII. CONCLUSIONS This work completes the proof of ANEC in curved spacetime for a minimally coupled, free scalar field, on achronal geodesics traveling through a spacetime that obeys NEC. The techniques are similar to those of Ref. [3], but that paper required an unproven conjecture. Here, we use the general absolute quantum inequality of Fewster and Smith [4] to derive a null-projected quantum inequality, slightly different from our previous conjecture, and use that inequality, Eq. (155), to prove achronal ANEC. Equation (155) has the form of the flat-space null-projected quantum inequality of Fewster and Roman [12], plus correction terms which vanish as one considers more and more highly boosted timelike paths with smaller and smaller total proper time in the limiting process above. The result of this paper concerns integrals of the stress-energy tensor of a quantum field in a background spacetime; we have so far not been concerned about the back-reaction of the stress-energy tensor on the spacetime curvature. This analysis is correct in the case where the quantum field under consideration produces only a small perturbation of the spacetime. Thus we have shown that no spacetime that obeys NEC can be perturbed by a minimally-coupled quantum scalar field into one which violates achronal ANEC. Hence no such perturbation of a classical spacetime would allow wormholes, superluminal travel, or construction of time machines³ [1]. What possibilities remain for the generation of such exotic phenomena? One is that the quantum field is not just a perturbation but generates enough NEC violation to permit itself to violate ANEC also. We argued against this idea on dimensional grounds in Ref. [3]. Another is that there is a field that violates NEC but obeys ANEC, and a second field, propagating in the background generated by the first, that violates ANEC. This three-step process seems unlikely but is open to future investigation. There is also the possibility of different fields. We have not studied higher-spin fields, but these typically obey the same energy conditions as minimally-coupled scalars. Of more interest is the possibility of a non-minimally coupled scalar field. Such fields can produce ANEC violations even classically [16,17] with large enough (Planck-scale) field values. However these situations seem unphysical since the effective Newton's constant becomes negative as the field value increases. In the case of a wormhole [18], the effective Newton's constant must be negative not only inside the wormhole but in one of the asymptotic regions. If one disallows Planck-scale field values, there are restrictions on non-minimally coupled classical [19] and quantum [20] fields, but these restrictions are not in the form of the usual quantum inequalities.
Whether there is a self-consistent achronal ANEC for non-minimally coupled scalar fields remains an open question. Using f ′ (ξ) = −iξf (ξ), we get The function G 2 is odd but with three derivatives it becomes even, so we can extend the integral
2015-10-27T02:27:20.000Z
2015-07-01T00:00:00.000
{ "year": 2015, "sha1": "ed1d7c2ddc0738004abbaf03e476e844c83ccfea", "oa_license": null, "oa_url": "https://eprints.whiterose.ac.uk/130425/1/1507.00297v2.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ed1d7c2ddc0738004abbaf03e476e844c83ccfea", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
15138264
pes2o/s2orc
v3-fos-license
A Connection between Good Rate-distortion Codes and Backward DMCs Let $X^n\in\mathcal{X}^n$ be a sequence drawn from a discrete memoryless source, and let $Y^n\in\mathcal{Y}^n$ be the corresponding reconstruction sequence that is output by a good rate-distortion code. This paper establishes a property of the joint distribution of $(X^n,Y^n)$. It is shown that for $D>0$, the input-output statistics of an $R(D)$-achieving rate-distortion code converge (in normalized relative entropy) to the output-input statistics of a discrete memoryless channel (dmc). The dmc is "backward" in that it is a channel from the reconstruction space $\mathcal{Y}^n$ to source space $\mathcal{X}^n$. It is also shown that the property does not necessarily hold when normalized relative entropy is replaced by variational distance. I. INTRODUCTION Consider a discrete memoryless source with generic distribution P X and a per-symbol distortion measure d(x, y). Given a distortion allowance D, the minimum achievable rate of compression (in bits per source symbol) is given by rate-distortion theory as R(D) = min P XY ∈ P(D) I(X; Y ), where P(D) denotes the set of joint distributions consistent with the source distribution P X that satisfy E[d(X, Y )] ≤ D. One intriguing achievability proof of this classic theorem was given by Wolfowitz in [1] (see also [2, Theorem 7.3]) and goes roughly as follows. A joint distribution P XY ∈ P(D) gives rise to a random transformation P X|Y from the reproduction alphabet to the source alphabet. Using Feinstein's maximal code construction, create a channel code designed for the "backward" dmc ∏ n i=1 P X|Y (x i |y i ); here, "backward" refers to the reversed flow of information from the reconstruction space to the source space. The resulting channel code can be transformed into a rate-distortion code by using the channel decoder as a source encoder and the channel encoder as a source decoder. In [1], it is shown that the distortion criterion is met as long as the channel code has large enough error probability, thus demonstrating that good rate-distortion codes can be constructed from certain channel codes. In this paper, we explore another connection between lossy source coding and backward dmc's, one which involves the input-output statistics of good rate-distortion codes. Briefly, the result is as follows. Consider an arbitrary R(D)-achieving rate-distortion code¹ that maps source sequences X n to reconstruction codewords Y n . The code induces a joint distribution P X n Y n on the pair (X n , Y n ) (see Figure 1a). Using the corresponding codebook, select a codeword uniformly at random as the input to a backward dmc ∏ n i=1 P X|Y (x i |y i ), where P X|Y is derived from the minimizer of R(D).² This channel coding operation induces a joint distribution Q X n Y n on the pair (X̃ n , Ỹ n ), where Ỹ n is the randomly selected codeword and X̃ n is the channel output (see Figure 1b). ¹ More precisely, a sequence of codes. Fig. 1: Description of the true joint distribution P X n Y n (Fig. 1a) and the approximating joint distribution Q X n Y n (Fig. 1b). (a) A rate-distortion code is a pair (f n , g n ) that maps a source sequence X n to a reconstruction codeword Y n ; the code induces a distribution P X n Y n on the pair (X n , Y n ). (b) Select a codeword Ỹ n uniformly at random from the codebook corresponding to (f n , g n ), then pass Ỹ n through a memoryless channel P X|Y ; the pair (X̃ n , Ỹ n ) induces a distribution Q X n Y n .
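As an aside (not part of the paper), the rate-distortion function in the opening formula can be computed numerically with the standard Blahut-Arimoto algorithm. The sketch below uses illustrative names and a Bernoulli/Hamming example whose answer is known in closed form (R(D) = h(p) − h(D)):

```python
import numpy as np

def rate_distortion_point(p_x, d, beta, n_iter=500):
    """One point on the R(D) curve via Blahut-Arimoto.

    p_x:  source distribution, shape (nx,)
    d:    distortion matrix d[x, y], shape (nx, ny)
    beta: Lagrange multiplier (> 0) controlling the distortion level
    Returns (D, R) with R in bits per source symbol.
    """
    nx, ny = d.shape
    q_y = np.full(ny, 1.0 / ny)                  # output marginal, start uniform
    for _ in range(n_iter):
        w = q_y[None, :] * np.exp(-beta * d)     # unnormalized test channel
        w /= w.sum(axis=1, keepdims=True)        # P(y|x)
        q_y = p_x @ w                            # updated output marginal
    D = float(np.sum(p_x[:, None] * w * d))      # E[d(X, Y)]
    R = float(np.sum(p_x[:, None] * w * np.log2(w / q_y[None, :])))  # I(X; Y)
    return D, R

# Bernoulli(0.3) source, Hamming distortion: known answer R(D) = h(0.3) - h(D).
p_x = np.array([0.7, 0.3])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for beta in (1.0, 2.0, 4.0):
    D, R = rate_distortion_point(p_x, d, beta)
    print(f"beta = {beta}: D = {D:.3f}, R = {R:.3f} bits")
```

Sweeping beta traces out the whole R(D) curve; the minimizing test channel P Y |X recovered at convergence is exactly the distribution whose backward channel P X|Y the paper uses.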
We show that, provided some mild necessary conditions are satisfied, lim n→∞ (1/n) D(P X n Y n ||Q X n Y n ) = 0. (1) That is, the input-output statistics of nearly all R(D)-achieving sequences of rate-distortion codes converge (in the sense of normalized relative entropy) to the output-input statistics of a backward dmc acting on the rate-distortion codebook.³ The property in (1) is analogous to the property of capacity-achieving codes for memoryless channels established in [4, Theorem 15], namely that the channel output statistics converge (in normalized relative entropy) to a memoryless distribution. More precisely, a capacity-achieving sequence of codes satisfies lim n→∞ (1/n) D(P Y n || ∏ n i=1 P Y ) = 0, (2) where P Y n is the true distribution of the channel output and where P Y is the unique capacity-achieving output distribution. There are various properties of good rate-distortion codes that have been examined in the past (see, for example, [5] and [6]). Notably, [6] showed that the empirical kth-order distribution of a good rate-distortion code converges in distribution almost surely to the minimizer of the kth-order rate-distortion function (when that minimizer is unique). Note that the property in (1), in contrast, concerns the actual (not empirical) joint distribution and k = n. In some sense, (1) complements [6] in the same way that (2) complements the results in [7] on the kth-order empirical input distribution of good channel codes. In order to show that good rate-distortion codes yield (1), we will first prove in Section II that the property holds for good empirical coordination codes. Empirical coordination, studied in [8], is similar to rate-distortion except for the distortion criterion, which is replaced by the requirement that the variational distance between the joint empirical distribution and a target joint distribution P XY converges in probability. Thus, one aims to achieve coordination pairs (R, P Y |X ) instead of rate-distortion pairs (R, D). Upon demonstrating that (1) holds for good empirical coordination codes, we show in Section III that the property holds for good rate-distortion codes, as well. In Section IV, we show that the property can fail to hold when the distance measure is replaced by variational distance or unnormalized relative entropy. Although we do not prove it here, we are able to use the property in (1) to solve a problem in information-theoretic secrecy relating to Yamamoto's "Rate-distortion theory of the Shannon cipher system" [9]. Specifically, one can use the property to show that the results of [10] can be achieved simply by using good rate-distortion codes, instead of the particular stochastic encoders that [10] asserts the existence of. It is likely that the property can provide a solution or give insight into other secrecy problems, as well. II. GOOD EMPIRICAL COORDINATION CODES We begin by introducing empirical coordination codes. All results in this paper will assume memoryless sources and finite alphabets. Furthermore, we assume for simplicity that the source satisfies P X (x) > 0, ∀x ∈ X . We first give the definition of a coordination code (see Figure 1a). Definition 1. An (n, R n ) coordination code consists of an encoder-decoder pair (f n , g n ) operating at rate R n , where A coordination code acts on a memoryless source X n with generic distribution P X . For a fixed source sequence x n , the code produces a codeword y n = g(f (x n )).
The empirical distribution of the resulting pair (x n , y n ) is defined for all (x, y) ∈ X × Y by T x n y n (x, y) = (1/n) |{i : (x i , y i ) = (x, y)}| . The empirical distribution of the pair of random variables (X n , Y n ) is itself a random variable and is denoted by T X n Y n . Variational distance, a measure of the distance between two distributions P and Q with common alphabet, is defined by ‖P − Q‖ = (1/2) Σ x |P (x) − Q(x)| . Definition 2. The pair (R, P Y |X ) is achievable if there exists a sequence of (n, R n ) coordination codes such that and where P XY = P X P Y |X . The rate boundary in Theorem 1 justifies the following definition of a "good" coordination code. To each good sequence of coordination codes for P Y |X , we associate two sequences of joint distributions {P X n Y n } ∞ n=1 and {Q X n Y n } ∞ n=1 . The first, P X n Y n , is the distribution of the pair (X n , Y n ) induced by the code. That is, where is the memoryless source distribution and is the composition of the encoder with the decoder. The second distribution, Q X n Y n , is the distribution of the pair (X̃ n , Ỹ n ), where Ỹ n is a codeword selected uniformly at random and X̃ n is the output of the backward dmc when the input is Ỹ n . That is, where is the uniform distribution over the codebook (which might contain duplicate codewords) and is the backward dmc with generic channel P X|Y derived from the joint distribution P XY = P X P Y |X . Our main result is the following theorem.⁴ Theorem 2. Let P XY = P X P Y |X ∈ A, where A is the set of joint distributions such that P X|Y (x|y) > 0 for all (x, y) ∈ X × Y. Then, for any good sequence of coordination codes for P Y |X , it holds that where P X n Y n and Q X n Y n are defined in (12)-(17). Furthermore, if P XY ∉ A, then there exists a good sequence of coordination codes for P Y |X such that Proof: We will need the following property of variational distance, which is easily verified. Let ε > 0 and let f (x) be a function bounded by b ∈ R. Then We also need the following chain rule of relative entropy: To begin the proof of Theorem 2, fix P XY ∈ A and a good sequence of coordination codes for P Y |X . We first show that such a sequence has the property⁵ where I(X n ; Y n ) is evaluated with respect to the true distribution P X n Y n . Throughout the proof, bear in mind that all expectations and mutual information expressions involving (X n , Y n ) are evaluated with respect to the true distribution P X n Y n . To show (23), we first introduce an auxiliary random variable J ∼ Unif{1, . . . , n} independent of (X n , Y n ). ⁴ We exclude the single pathological case P XY (x, y) = (1/|X |) 1{x = y}, in which it is possible that there are some codebooks such that P X n Y n = Q X n Y n and other codebooks such that D(P X n Y n ||Q X n Y n ) = ∞. ⁵ In [3], the assertion is that the theorem follows from (23). However, this is not the case. It is necessary to establish the steps in (38)-(41), which rely on the property of coordination codes in (36). Regurgitating some of the standard steps found in the converse to the lossy source coding theorem, we have where (a) follows from X J ⊥ J. If we can show that then the proof of the property in (23) will be complete by (10) and the squeeze theorem. To that end, we use several observations from [8]. By the boundedness of variational distance, (11) implies Upon noting that we have where (a) follows from Jensen's inequality. Therefore, Since mutual information is continuous with respect to variational distance for finite alphabets (this follows from (21)), we see that (36) yields (31). Thus, the property in (23) holds. We remark that the property in (36) underlies the reason that we are considering empirical coordination codes.
In brief, it arises more naturally in an empirical coordination setting than in a rate-distortion setting. We will invoke (36) again shortly. With (23) in hand, we now show that To start, we have To see how (a) follows, first note that the function is bounded due to the restriction P XY ∈ A (in fact, this is the only step where the restriction is needed). Then, use (36) along with (21). Continuing, we have where (a) is due to (41) and (b) is due to (23). This proves the property in (37). Finally, write where (a) follows from the squeeze theorem. To complete the first part of the theorem, invoke the chain rule of relative entropy in (22). To show the second part of Theorem 2, fix P XY ∉ A and a good sequence of coordination codes for the corresponding P Y |X . The condition P XY ∉ A implies the existence of a pair (x, y) such that P X|Y (x|y) = 0. For every n, append a codeword y n to the codebook and associate with it a sequence x n such that |i : (x i , y i ) = (x, y)| > 0. Accordingly, modify f n and g n so that y n = g(f (x n )). Such a modification maintains the goodness of the code, but now P X n Y n has support on (x n , y n ), while Q X n Y n does not. Consequently, 1 n D(P X n Y n ||Q X n Y n ) diverges. III. GOOD RATE-DISTORTION CODES In this section, we establish the counterpart to Theorem 2 for good rate-distortion codes. A rate-distortion code is defined according to Definition 1. The notion of good is also similar; in this case, a good code is an R(D)-achieving one. Definition 4. Given a source P X and a distortion measure d(x, y), a sequence of (n, R n ) rate-distortion codes and For a fixed per-letter distortion measure d(x, y), the rate-distortion function is defined for D ≥ D min , where D min = E[min y d(X, y)]. Without loss of generality, we assume that D min = 0. In view of the restriction in Theorem 2 to P XY ∈ A, the following lemma is useful. Lemma 1. Accordingly, the reproduction symbol y may be deleted from Y without affecting R(D). Thus, we have that for any D > 0 we can reduce the reproduction alphabet Y, without penalty, to an alphabet Y * (D) such that any P XY minimizing R(D) satisfies P XY (x, y) > 0 for all (x, y) ∈ X × Y * . In particular, P XY ∈ A. It is shown in [11] that this does not hold for D = 0. From this point on, we assume that Y has been reduced according to Lemma 1, so that Theorem 2 can be invoked. Although the minimizer of R(D) need not be unique, it turns out that the corresponding backward channel P X|Y is unique. This is analogous to the fact that the capacity-achieving output distribution is unique, even though the input distribution is not. We now state the counterpart to Theorem 2. The proof is immediate once we use the fact that good rate-distortion codes are good empirical coordination codes. Theorem 3. Let D > 0, and assume that the reproduction alphabet has been reduced to Y * (D). Then, for any good sequence of rate-distortion codes for D, it holds that where P X n Y n (x n , y n ) = ∏ n i=1 P X (x i ) 1{y n = g n (f n (x n ))} and where P X|Y is the unique backward channel corresponding to D. Proof: From [8, Theorem 11] or [6, Theorem 9], we have that a good rate-distortion code for D is a good empirical coordination code for some P Y |X minimizing R(D). Due to the reduction to Y * (D), we have P XY ∈ A, which allows us to invoke Theorem 2. IV. VARIATIONAL DISTANCE In this section, we show that Theorem 2 does not hold when we replace normalized divergence by variational distance.
From Pinsker's inequality, this implies that it does not hold in unnormalized relative entropy, either. Theorem 4. There exists P XY ∈ A and a sequence of good coordination codes for the corresponding P Y |X such that where P X n Y n and Q X n Y n are defined in (12)-(17). Proof: Let P XY ∈ A be such that P Y is a capacity-achieving input distribution of the channel P X|Y . Fix a sequence of good empirical coordination codes {(f n , g n )} ∞ n=1 for P Y |X such that the decoder is bijective and for some δ > 0. This is possible by Theorem 1. By way of contradiction, suppose that To reach a contradiction, we first define joint distributions P X n Y n M and Q X n Y n M by P X n Y n M (x n , y n , m) = P X n Y n (x n , y n ) 1{m = f n (x n )}, Q X n Y n M (x n , y n , m) = Q X n Y n (x n , y n ) 1{m = f n (x n )}. Observe that Q X n Y n M is the joint distribution governing the triple (X n , Y n , M ) in the following channel coding setting: Thus, we have turned the rate-distortion code (f n , g n ) into a channel code by identifying the channel encoder as the source decoder and the channel decoder as the source encoder. Because g n is bijective, the error event for the channel coding is given by and the probability of error is Pr{Error(n)} = Q(E n ). On the other hand, notice that under the distribution P X n Y n M , it holds that g n (M ) = g n (f n (X n )) = Y n , and thus P (E n ) = 0. Now, since variational distance has the property we have by (61) that Therefore, by the definition of variational distance, Thus, we have demonstrated a sequence of channel codes whose rates approach the channel capacity slowly⁶ from above, yet whose probability of error vanishes. This is impossible due to the strong converse to the channel coding theorem (e.g., [12, Theorem 5.8.5]), yielding a contradiction.
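To see concretely why the 1/n normalization matters (an illustrative aside, not from the paper): for i.i.d. product measures the normalized relative entropy is constant in n, while the variational distance tends to 1. For Bernoulli marginals the variational distance between the product measures can be computed exactly through the binomial law of the number of ones, since the likelihood ratio depends only on that count:

```python
import numpy as np
from scipy.stats import binom

def kl_bits(p, q):
    """Relative entropy D(p||q) in bits (support of p assumed inside q's)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

p = np.array([0.5, 0.5])   # marginal of the first product measure
q = np.array([0.6, 0.4])   # marginal of the second product measure
d1 = kl_bits(p, q)

for n in (1, 10, 100, 1000):
    # Chain rule for product measures: D(p^n || q^n) = n * D(p || q),
    # so the normalized relative entropy equals d1 for every n, while
    # the TV distance equals the TV distance between binomial count laws
    # and tends to 1 as the measures become nearly mutually singular.
    k = np.arange(n + 1)
    tv = 0.5 * np.sum(np.abs(binom.pmf(k, n, p[1]) - binom.pmf(k, n, q[1])))
    print(f"n = {n:5d}: (1/n) D = {d1:.4f} bits, ||p^n - q^n|| = {tv:.4f}")
```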
2013-07-29T17:41:16.000Z
2013-07-29T00:00:00.000
{ "year": 2013, "sha1": "09124ce9c9f92abf0d365257c605fdf2c7258404", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1307.7770", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "09124ce9c9f92abf0d365257c605fdf2c7258404", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
54741046
pes2o/s2orc
v3-fos-license
Effect of normal stresses on the results of thermoplastic mold filling simulation The paper deals with the effect of the normal stresses on the predicted flow front during the filling stage of thermoplastic injection molding. The normal stresses are predicted using the non-linear Criminale-Ericksen-Filbey model (a variant of the second-order fluid rheological model with viscosity, first and second normal stress coefficients dependent upon magnitude of shear rate) incorporated into a comprehensive 3D simulation software for mold-filling analysis. The additional stress term allows the prediction of the so-called ear-flow effect (melt racing on the edges of the cavity). Introduction Thermoplastic injection molding is the most common manufacturing process for producing plastic parts. Material is fed into a heated barrel, mixed, and forced into a mold cavity where it cools and hardens to the configuration of the cavity. Significant progress has been achieved in three-dimensional finite element simulation of plastic filling the mold (mold filling analysis) [1]. Typically, in commercial simulations the polymer melt is considered a generalized Newtonian fluid, where the deviatoric stress tensor is proportional to the deviatoric deformation rate; the scalar coefficient connecting the shear rate and shear stress, known as viscosity, is dependent upon temperature, shear rate invariants, pressure and other factors [2]. However, the generalized Newtonian fluid model does not predict any normal stress differences during simple shear flow, whereas real polymers usually exhibit significant normal stress differences. In this current work, we develop a finite element mold-filling program that allows incorporation of the Criminale-Ericksen-Filbey viscoelastic model that can accurately predict normal stress differences in a wide range of temperatures and shear rates [3]. One of the motivators for this development was potentially improving the prediction of ear-flow, a little-understood phenomenon of the more rapid advance of the flow front on the edge of a mold cavity than in the center of the cavity [4]. Ear-Flow Phenomenon Numerous cases have been observed in industrial injection molding practice of amorphous materials which exhibit a race-track or ear-flow effect. This is the more rapid advance of the flow front at the edges of the molding cavity than in the center of the cavity, usually observed at elevated injection speed. This flow-leading effect at the edge cannot be explained by differences in cavity wall section thickness. In the worst cases, this race-tracking leads to air-traps and visual defects in the molded part. A typical flow front propagation demonstrating the ear-flow in a polystyrene material is shown in Figure 1. As was originally suggested by experimental observations of Murata et al. [5], and confirmed by mold-filling simulations of Costa et al. [4], in many cases the ear flow is caused by the higher polymer temperatures in the edge region. The temperature rise is caused by shear heating in the high-shear region (close to the boundary) of the runners and gates. This temperature rise is then convected into the cavity, favoring the cavity edge due to the distribution pattern in the gate. The effect of the preferential convection of the temperature rise from the shear regions of the runners is also similar to the development of flow imbalance in geometrically balanced feed systems explained by Cook et al. [6].
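As a rough sanity check of this shear-heating mechanism (an illustrative back-of-the-envelope estimate, not a calculation from the paper; every parameter value below is an assumption typical of a polymer melt), a temperature rise of a few tens of kelvin in the runner is indeed plausible:

```python
# Adiabatic estimate of shear heating in a runner:
# volumetric viscous dissipation = eta * gamma_dot**2, so over a residence
# time t_res the temperature rise is roughly eta * gamma_dot**2 * t_res / (rho * c_p).
eta = 50.0         # Pa*s, melt viscosity at the wall shear rate (assumed)
gamma_dot = 2.0e3  # 1/s, wall shear rate in a runner at high injection speed (assumed)
t_res = 0.2        # s, residence time in the runner/gate region (assumed)
rho = 940.0        # kg/m^3, melt density (assumed)
c_p = 2100.0       # J/(kg*K), melt specific heat (assumed)

dT = eta * gamma_dot**2 * t_res / (rho * c_p)
print(f"estimated adiabatic temperature rise: {dT:.0f} K")  # ~ 20 K
```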
However, in many practical injection molding cases the ear flow phenomenon is observed for conditions where very little shear heating is predicted, thus raising suspicions that there is also another mechanism responsible for the ear flow effect. Since normal stress differences caused by in-plane shear near the cavity edges may push polymer perpendicular to the flow, they are a candidate for such a mechanism. Our simulation was used to test this possibility. Figure 1 Flow front shapes obtained by Murata et al. [5] for a polystyrene material in a glass insert mold. Mathematical model The filling stage of the injection molding process is described by the combination of the momentum equation (1) and the energy equation (3), together with appropriate material equations and boundary conditions described in [1] and [2]. In the simplest mold filling case the material equations include the generalized Newtonian rheological equation (4). Boundary conditions are set on the mold-plastic interface, on the melt flow front and on the injection surfaces. On the mold-plastic interface the boundary conditions are set as: On the melt flow front (F) the boundary conditions are set as, where n F is the normal to the flow front F. On the injection surface (I) the boundary conditions are set as, where n I is the normal to the injection surface I and Q(t) is the injection flow rate. The system of equations (1)-(12) is solved using a specialized finite element method customized for the typical conditions of the injection molding process described in [1]. In order to incorporate the effect of normal stresses we implemented a material rheological function connecting the stresses with the flow conditions that follows the Criminale-Ericksen-Filbey model [3], equation (13). In equation (13), V is the deviatoric stress tensor, D is the deformation rate tensor, equation (14), v is velocity, and the remaining functions are the first and the second normal stress difference functions. Equation (13) is an extension of the generalized Newtonian equation (4). Following established practice for mold-filling simulation we use the Cross-WLF model [2] for the viscosity, where D and n are empirical coefficients. The Autodesk Moldflow material library stores quite extensive data of the Cross-WLF parameters for thousands of material grades. To estimate the first normal stress difference function when the viscosity is known, we followed the so-called Cox-Merz Abnormal Rule as described by V. Sharma and G.H. McKinley [7]; as shown in [7], this approach allows relatively accurate estimation of the first normal stress difference function if the Cross-WLF parameters are known. Finally, to estimate the second normal stress difference function we assume that it is proportional to the first normal stress difference function. The set of equations (1)-(17) was integrated into a special build of the 3D flow solver of the Autodesk Moldflow Insight 2017 software and the resulting program was used for simulation of injection molding processes. The addition included calculation of the additional normal stress tensor field using equations (15)-(18). Then we apply additional nodal forces N i to each of the filled nodes i, calculated as shown in equation (19), where S i is the surface of the control volume of the node i. At each time step the algorithm iterated the velocity-pressure solver together with the calculations of the normal stress by equations (18) and (19) until they converge. The rest of the mold filling simulation algorithm described in [1] was left intact. All other rheological, thermal and mechanical material parameters were taken from the standard material library. Filling of a thin rectangular cavity Case study In order to estimate the effect of normal stresses on flow front propagation we use a filling simulation of a simple thin rectangular plaque, 100 mm in length, 20 mm in width and 2 mm in thickness, shown in Figure 2. The plaque is filled with a generic polymethylmethacrylate (PMMA) material. The filling time is 1 second. The cavity is meshed by 4-node tetrahedral elements with at least 10 layers of elements through the thickness. Melt inlet boundary conditions are applied along one of the short edges of the plaque in the style of a film or fan gate. Three rheological models were considered: no normal stresses; first normal stress difference estimated from the Cox-Merz Abnormal Rule but no second normal stress difference (proportionality coefficient equal to 0); and first normal stress difference estimated from the Cox-Merz Abnormal Rule together with a large second normal stress difference (proportionality coefficient equal to -0.5). All three simulations do not show any significant shear heating in the edge area (see Figure 3), as was expected because the runners and gates were not modeled. No normal stresses case The simulation without normal stresses does not predict any ear flow phenomenon, as shown in Figure 4. First normal stress differences only When only the first normal stress difference is included, the flow front propagation pattern is very similar to the case of no normal stress. As shown in Figure 5, there is no ear flow phenomenon in this case either. First normal differences and maximal second normal differences Figure 6 Flow front positions for the case with the first and second normal stress differences.
The flow front distribution from the simulation using first and second normal stress differences is shown in Figure 6. A significant ear flow effect can be seen. The effect is quite prominent despite a moderate magnitude of normal stresses of up to ~30 kPa (see Figure 7). The results of these simulations show that the second normal stress difference can be an important contributing factor to the ear flow phenomenon. Conclusions An integrated system of 3D mold filling simulation that takes into account nonlinear rheological properties of normal stress differences is presented. The second normal stress difference appears to be an important contributing factor to the ear flow phenomenon, while the first normal stress difference does not much affect the flow front propagation. Figure 2 Illustrative molding case. Figure 3 Temperature distribution at the end of fill, no normal stresses case; the cutting plane is in the middle-thickness plane of the cavity. Figure 4 Flow front positions for the case with no normal stress. Figure 5 Flow front positions for the case with the first normal stress differences only.
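For readers who want to experiment with the viscosity model named above, here is a minimal, self-contained sketch of the Cross-WLF function (not code from the paper; the seven-parameter form and the placeholder values follow the common Moldflow convention, which is an assumption on our part since the paper does not list parameter values):

```python
import math

def cross_wlf_viscosity(gamma_dot, T, p=0.0,
                        n=0.30, tau_star=3.0e4,
                        D1=5.0e11, D2=373.0, D3=0.0,
                        A1=28.0, A2_tilde=51.6):
    """Cross-WLF viscosity in Pa*s.

    gamma_dot: shear rate [1/s]; T: temperature [K]; p: pressure [Pa].
    All parameter values are illustrative placeholders, not data for any
    specific material grade.
    """
    T_star = D2 + D3 * p                  # pressure-shifted reference temperature
    A2 = A2_tilde + D3 * p
    if T <= T_star:
        return float("inf")               # WLF form is not valid at or below T*
    # Zero-shear viscosity from the WLF temperature dependence
    eta0 = D1 * math.exp(-A1 * (T - T_star) / (A2 + (T - T_star)))
    # Cross shear-thinning correction
    return eta0 / (1.0 + (eta0 * gamma_dot / tau_star) ** (1.0 - n))

# Shear thinning at a typical melt temperature:
for gd in (1.0, 1.0e2, 1.0e4):
    eta = cross_wlf_viscosity(gd, T=513.0)
    print(f"gamma_dot = {gd:8.1f} 1/s -> eta = {eta:8.1f} Pa*s")
```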
2018-12-08T00:26:36.970Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "f09cb8e42b424889b2ca61dc7d0a5728300579df", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/43/matecconf_numi2016_16004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f09cb8e42b424889b2ca61dc7d0a5728300579df", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
214329428
pes2o/s2orc
v3-fos-license
Accentuate the Positive: Strengths-Based Therapy for Adolescents Purpose: The field of psychiatry has conventionally employed a medical model in which mental health disorders are diagnosed and treated. However, the evidence is amassing that using a strengths-based approach that promotes wellness by engaging the patient’s assets and interests may work in synergy with the medical model to promote recovery. This harmonizes with the patient-centered care model that has been promoted by the Institute of Medicine. Methods: The article uses a clinical case to highlight the attributes of a strength-based model in the psychiatric treatment of adolescents. Results: Outcome metrics from a number of studies have demonstrated enhanced youth and parent satisfaction and decreased use of hospital level of care with the implementation of strengths-based therapeutic modalities. Implications: Incorporating strengths-based interventions into conventional psychiatric practice provides a multi-faceted treatment approach that promotes recovery in children and adolescents with psychiatric disorders. CASE Zoe is an 18-year-old college freshman who presents to your clinic. Zoe describes excessive worry about a variety of life challenges. She becomes easily overwhelmed by her academic performance and social situations. In the face of perceived failure or criticism, Zoe frequently resorts to cutting her forearm with a razor for emotional relief. She begins to panic and worry whenever she walks into a lecture. Zoe calls home to cry every night and eventually decides to move out of the dormitory and move back home with her parents, where she is able to commute to college. Despite moving home, her anxiety symptoms persist and her grades continue to fall as she cannot focus in class. Eventually, Zoe requests medical leave from college for a year. Prior to college, Zoe excelled in environments with structure and clear expectations. She was bright, goal-oriented and organized. Zoe was one of the popular girls in high school, with many friends, but she was vulnerable to sadness and anxiety when facing separation and transitions. In the 6 th grade, Zoe's best friend left town to attend a different middle school. Losing her friend while transitioning into puberty was difficult. Zoe experienced crying spells, had little interest in her favorite activities and did not feel like making new friends or hanging out with siblings. Zoe did not know how to explain her situation to family and friends, and began to avoid going to school entirely. Her mother initially arranged home-bound tutors, but later decided to quit her full-time job to focus her full attention on Zoe's homeschooling. Gradually, Zoe began to see and spend more time with friends and eventually slowly transitioned back to public school. In high school, Zoe performed extremely well while juggling school work, extracurricular activities, and part-time jobs. Zoe's volleyball coach was particularly impressed by her leadership and dedication to the team. With a competitive SAT score and high school volleyball championship in her resume, she received a full-ride volleyball scholarship offered by an out-of-state university. Before the semester started, Zoe's mother began to voice her concerns that Zoe would not be able to adjust or thrive being so far from home. This led to a strained mother-daughter relationship. Eventually, Zoe agreed with her mother, declined the scholarship, and chose to attend a college near home. 
DISCUSSION Using a conventional medical treatment model, we would start with a diagnostic formulation based on symptom presentation. Zoe met the criteria for major depressive disorder in middle school and evolving generalized and social anxiety disorder in the context of separation and transition to college. The treatments of choice may include a selective serotonin reuptake inhibitor (SSRI) with or without psychotherapy. Cognitive behavioral therapy (CBT), which would be clinically indicated in Zoe's case, would focus on improving Zoe's mood and anxiety by addressing cognitive distortions and problematic behaviors. Zoe's recovery, as reflected by a reduction in psychiatric symptoms, may be tracked by commonly used assessment scales, such as the Beck Depression Inventory (BDI) and Hamilton Anxiety Rating Scale (HAM-A). The conventional medical model in psychiatry arose with pharmacological discovery and the introduction of DSM-III in the 1980s. In contrast to psychoanalytical theories, the medical approach emphasizes a systematic process of gathering data, identifying symptoms, creating a differential diagnosis and a working diagnostic formulation based on Diagnostic and Statistical Manual (DSM) (American Psychiatric Association, 2013) criteria, and then implementing treatment modalities that target the disorders and/or symptoms (Mayes & Horwitz, 2005). The treatment usually utilizes the clinician's clinical acumen and expertise and external treatment resources. However, some have suggested that medicalization of the field has diminished the conceptual richness of the inner life of patients, including aspirations, ego strengths, and complex family dynamics (Sedler, 2016). The conventional medical model often focuses on rectifying deficits and challenges and does not always consider the child's existing strengths and abilities or environmental resources that may be leveraged towards treatment progress. In the 1990s, Martin Seligman, then president of the American Psychological Association, advocated for the shift within the field of psychology to "positive psychology," emphasizing human strengths, virtues and well-being (Rettew, 2019;Seligman, Steen, Park, & Peterson, 2005). Subsequently, the term "positive psychiatry" was coined by Dr. Dilip Jeste in 2012 when he was the President of the American Psychiatric Association. Positive psychiatry is defined as "the science and practice of psychiatry that seeks to understand and promote wellbeing through assessments and interventions aimed at enhancing positive psychosocial factors among people who have or are at high risk for developing mental and physical illness" (Boxorgnia, 2018). Donald O. Clifton, known as "the father of strengths-based therapy," (Buckingham & Clifton, 2001) proposed that individuals can achieve far more when efforts are spent on reinforcing their greatest strengths, rather than on highlighting their weaknesses. Helping adolescents understand and utilize their strengths can assist in building hope and confidence about their ability to overcome challenges. How can we utilize this approach to enhance Zoe's treatment, to inspire her own sense of agency, and to improve treatment adherence? If we were to utilize a strengths-based approach, where would we start? Let's conceptualize Zoe's situation through the lens of a brand-new college student caught in the transition of leaving home. 
Complications include a parent whose significant scaffolding and accommodation have contributed to Zoe's doubts about her ability to individuate, feeding her current anxiety and depression. The following are important issues to consider when deciding on a treatment plan. How has the experience been for Zoe to leave the social comfort zone of a high school where she was popular and surrounded by friends, and how does this experience echo her previous childhood separation from her best friend? How has it been for her to have a strained relationship with her mother, who had formerly been both a protective and authoritative figure? How does Zoe's mother handle the separation from Zoe, and how does her mother's response make her feel? How does Zoe perceive herself when adjusting to a new environment? Whom can she identify at college to provide support in addition to her mother? How do we introduce medication, such as an SSRI, while considering Zoe's assets and her own goals? Partnering with Zoe and her family, five immediate goals, strengths and target plans are identified. (1) Zoe is a very self-disciplined person who works well with consistency. Recognizing this strength, we co-construct a structured daily routine that helps her manage college life, such as a scheduled bedtime, behaviors that promote a healthy sleep/wake cycle, regular exercise and a healthy diet, and scheduled study time and personal time. (2) Zoe's parents are warm, nurturing and supportive. We encourage Zoe to schedule a set time to communicate with her parents, and for them to reflect on and share their mutual thoughts and feelings around adjusting to college life. (3) Zoe is able to engage in meaningful relationships. She joins an extracurricular group of interest on campus where she can make friends. (4) Zoe is skilled at sports and finds exercise to be a source of coping and stress relief. She decides to continue her athletic interest by joining college intramural volleyball. (5) During high school, Zoe had a tremendous relationship with her volleyball coach, who inspired her to work hard and be a leader. Zoe's hope is to become a coach herself and motivate other children and adolescents to strive to do their best in volleyball and in life. We continue to support Zoe's passion to inspire others. Throughout the therapeutic work, the clinician guides reflection, reassurance and encouragement of Zoe's competencies. This becomes critical as Zoe learns to trust her own abilities and transitions to independence in college. In addition, working with her mother promotes the mother's confidence both in her daughter's ability to succeed independently and in her own ability as a mother to continue to provide support and stay connected with her daughter. The strengths-based approach can also be implemented in combination with pharmacological intervention. A clinician can discuss the probable benefits of medication in improving Zoe's quality of life by reducing her symptoms to a more manageable level. Potential adverse reactions of medication for the treatment of anxiety and depression should also be carefully specified and openly discussed. As the clinician communicates to Zoe a belief in her competence to assume agency over decisions regarding her care, she begins to gain self-confidence. In the maintenance stage, Zoe's individual strengths and desired goals guide ongoing treatment and recovery.
Open-ended inquiry into Zoe's understanding of the components of healing helps Zoe clarify her goals and priorities. Sensitivity to her values regarding spiritual, cultural, and lifestyle identity and choices is fundamental to strengths-based and patient-centered care. Six months later, Zoe has successfully transitioned back to college and moved back into the dormitory with classmates. Her grades are back to As and Bs. Zoe's college success also alleviates her mother's anxiety and tendency towards overprotective parenting. They maintain a close relationship, and her mother is proud of her daughter for having a healthy college life. Zoe continues to enjoy playing volleyball in college, and plans to apply for a summer internship as an assistant volleyball coach in a summer enrichment program for elementary-aged youth.

IMPLICATIONS

Strengths-based intervention focuses on the patient's attributes that promote wellness and can work synergistically with the conventional medical model of treating disease. The strengths-based approach is predicated on the assumption that each individual has a unique set of goals and possesses internal strengths and external resources that can help them achieve these goals. It also aims to activate a patient's hopefulness through a strengthened relationship with him/herself, family, therapeutic supports, community, and culture. Empathic parental guidance for Zoe's mother helped her gain confidence in her daughter's resilience, which activated hope for Zoe's prognosis in both Zoe and her mother. Restoring a stable family relationship provides a long-term resilient foundation for the child, and should be one of the key elements of strengths-based intervention, especially with child and adolescent patients. The strengths-based model is patient-centered (Institute of Medicine, 2001), and provides the patient with greater agency in the recovery process, while the clinician collaborates with the patient to identify and bolster their strengths to progress towards recovery. It not only guides the direction of psychotherapy but also enhances engagement in psychopharmacological intervention, since the outcome is tied directly to the patient's recovery goals. One strengths-based therapy, solution-focused brief therapy (SFBT), has demonstrated efficacy with patients from diverse racial and ethnic groups, with those who are resistant to change, and in school, specialty mental health outpatient treatment and medical settings (Zhang, Franklin, Currin-McCulloch, Park, & Kim, 2018). SFBT has demonstrated enhanced patient-doctor communication, medication adherence, and the promotion of health-related behaviors (Zhang et al., 2018). Strengths-based cognitive-behavior therapy (CBT) promotes resilience by incorporating strengths-based elements such as focusing on the development and utilization of resilient beliefs and behaviors instead of identifying and challenging cognitive distortions (Padesky & Mooney, 2012). Evidence has shown that strengths-based interventions that promote wellness generate positive outcomes in children with psychiatric disorders and disadvantaged backgrounds. For example, physical exercise has been shown to be beneficial for children with attention-deficit hyperactivity disorder (ADHD) (Vysniauske, Verburgh, Oosterlaan, & Molendijk, 2016), whereas mindfulness practices help teens to cope with anxiety and low self-esteem (Biegel, Brown, Shapiro, & Schubert, 2009).
Similar treatment models also extend to the entire family's wellness, with the understanding that "positive parenting promotes children's health" (Hudziak & Ivanova, 2016). Furthermore, a strengths-based approach has been shown to reduce hospitalization rates and improve self-efficacy and sense of hope (Tse et al., 2016), as well as parent satisfaction and appointment adherence in the outpatient setting (Cox, 2006). Yet, there are limitations to strengths-based intervention. Patients who are unable to engage in therapy, such as those in an acute psychotic or manic episode, may not respond to strengths-based therapy as an initial intervention, and will likely require stabilization of symptoms through medication and other intensive treatments. Solution-focused brief therapy has been shown to be effective in patients with internalizing disorders such as anxiety or depression, but less so in those with externalizing disorders (Zhang et al., 2018). High-functioning patients may have acquired greater resilience than more severely psychosocially challenged patients and families, and thus may respond more readily to a strengths-based approach. In many cases, strengths-based intervention can be used in conjunction with the conventional medical approach, rather than as a monotherapy. This will require clinical discretion to determine which elements of each modality are best suited to the clinical presentation. In addition, there can be instances in which a trait that is adaptive in certain settings proves to be maladaptive in other settings. For example, Zoe's tendency to thrive within structured environments served as a strength in high school, but became maladaptive in college when life was less structured. Clinicians may need to redefine strengths and goals as patients transition to different settings and situations. Although strengths-based approaches have demonstrated efficacy in a number of studies, further research into the benefits and limitations of this approach is needed. In closing, incorporating strengths-based intervention into conventional practice provides a multi-faceted treatment approach that promotes optimism for recovery in children and adolescents with psychiatric disorders.
The diagonal lemma as the formalized Grelling paradox

Since the diagonal lemma plays a key role in the proof of the main limitative theorems of logic, its proof could shed light on the very essence of these fundamental theorems. Yet the lemma is often characterized as one of those important logical results that lack an insightful explanatory proof. By making explicit that the well-known proof of the lemma is just a straightforward translation of the Grelling paradox into first-order arithmetic, the proof can be made completely transparent.

Notation

Our formal language is that of first-order arithmetic. Q stands for Robinson arithmetic, while ω is the set of natural numbers. g is any one of the standard Gödel numberings and Fmₙ is the set of formulas with all free variables among the first n ones. For the sake of simplicity, we shall denote the closed terms corresponding to natural numbers by the numbers themselves. Further, N denotes the set of Gödel numbers of formulas in Fm₁. Finally, the result of substituting a term t for the only free variable of a formula ϕ ∈ Fm₁ is denoted by ϕ(t).

Diagonal lemma

For any formula ϕ ∈ Fm₁, there is a sentence λ such that Q ⊢ λ ←→ ϕ(g(λ)).

Proof idea

First we show how to construct, out of Grelling's paradox, an ordinary language sentence that, on the one hand, says of itself that it has a given property and, on the other hand, consists of components with easily identifiable formal first-order counterparts. The straightforward formalization of this ordinary language sentence leads to the desired formal sentence (as can be expected, since the lemma is just about the existence of a first-order sentence that, informally speaking, says of itself that it has a given property). As is well known, the Grelling paradox consists in the fact that the sentence

(1) 'heterological' is heterological

shares with the Liar sentence the remarkable property that its truth implies its own falsity and vice versa, i.e., in effect, it says of itself that it is false. What is truly important is that, contrary to the Liar, this paradoxical sentence achieves self-reference without using an indexical. Since our aim is to construct a sentence that (a) is not about an adjective but about a sentence and (b) instead of asserting its own falsehood, says of itself that it has an arbitrary (but fixed) property, we have to slightly modify (1) accordingly. In order to satisfy the first requirement, in place of an adjective A, we consider the open sentence 'x is A'. Obviously, in this case, the transformation corresponding to the application of an adjective A to a linguistic object O will be the substitution of the name of O (following common practice, the name of a linguistic object is the object itself between quotation marks) for the variable x in the open sentence corresponding to A. Consequently, the sentence associated with the self-application of any adjective A in this way is " 'x is A' is A". In particular, the counterpart of (1) is:

(2) 'x is heterological' is heterological

Note that the notion of heterologicality occurring here is already a property of open sentences with single variables. Since, on the one hand, to be heterological is to have the property that its application to itself yields a false sentence and, on the other, as we noted above, in the case of sentences, 'applied to itself' means 'its name is substituted for the variable in it', for any open sentence x with a single variable, we have

(3) x is heterological just in case the sentence obtained by substituting the name of x for the variable in it is false.
Finally, if we replace 'being false' by 'having property p', (2) and (3) together yield:

(4) the sentence obtained by substituting the name of 'the sentence obtained by substituting the name of x for the variable in it has property p' for the variable in it has property p.

It can directly be checked that this sentence indeed says of itself that it has property p (and says nothing else), since it is built up in such a way that if we perform the substitution described in it, then we obtain the sentence itself, which is stated to have property p. (J.N. Findlay used sentences of the same structure to examine the incompleteness theorem informally; cf. [3].) Now, let s denote the open sentence between the quotation marks in (4):

the sentence obtained by substituting the name of x for the variable in it has property p.

Then, clearly, the whole sentence (4) is s('s'), where, mimicking the formal notation, for any ordinary-language open sentence o with a single variable, we abbreviate by o(q) the result of substituting a linguistic phrase q for the variable in o. That is, the formalization process should consist of two steps. In the first step we have to find the formal version η ∈ Fm₁ of s, and then the second step is obvious: the desired sentence λ will simply be η(g(η)) (Gödel numbering is, of course, the formal counterpart of naming).

Proof

Let ϕ ∈ Fm₁ be arbitrary and let its informal counterpart be the open sentence 'x has property p'. (The formulas in Fm₁ are formal versions of open sentences with single variables asserting the possession of a property, and, taking into consideration only those informal concepts that have formal counterparts, the formalization of attributing a property to an object is the substitution of the formal name, i.e. the Gödel number, of the corresponding formal object for the only free variable of the formula that formalizes the open sentence asserting the possession of the property concerned.) Certainly, x(g(x)) is the formal version of the phrase 'the sentence obtained by substituting the name of x for the variable in it', and hence ϕ(g[x(g(x))]) is the formal version of s. Clearly, ϕ(g[x(g(x))]), with a variable x running over formulas in Fm₁, is not a formula itself; it becomes a formula only if we replace the variable x by a formula. Therefore, we cannot continue the formalization process unless we find a formula that can play the role of ϕ(g[x(g(x))]), that is, a formula η ∈ Fm₁ such that η(g(ψ)) is provably equivalent in Q to ϕ(g[ψ(g(ψ))]) for every ψ ∈ Fm₁, or equivalently (denoting the inverse of g by g⁻¹), for any n ∈ N,

Q ⊢ η(n) ←→ ϕ(g[g⁻¹(n)(n)]).

In order to find the appropriate formula η, let us consider the expression substituted into the formula ϕ, and define the function f : ω → ω accordingly: f(n) = g[g⁻¹(n)(n)] if n ∈ N and f(n) = 0 otherwise. Since this function is obviously recursive and hence representable in Q, and, up to provable equivalence in Q, the result of substituting a representable function into a formula can also be expressed by a formula, there is a formula η ∈ Fm₁ such that, for any n ∈ N,

(5) Q ⊢ η(n) ←→ ϕ(f(n)).

Thus we have obtained what we need: we have shown that there exists an η ∈ Fm₁ that can be considered to be the formal version of s.
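Before carrying out the final formal step, the self-reference of the informal sentence (4) can be checked mechanically. The following toy sketch (in Python, purely illustrative; the substitution convention o(q) follows the abbreviation introduced above) extracts the open sentence quoted inside (4) and verifies that performing the substitution it describes reproduces (4) itself:

```python
# Toy, string-level check of the self-reference in sentence (4).
# sub(o, q) substitutes the name of q (q between quotation marks) for the
# variable 'x' in the open sentence o -- the informal operation o(q).
def sub(o: str, q: str) -> str:
    return o.replace("x", "'" + q + "'")

s = ("the sentence obtained by substituting the name of x "
     "for the variable in it has property p")

sentence4 = sub(s, s)  # this is s('s'), i.e., sentence (4)

# The sentence that (4) talks about is obtained by applying the described
# substitution to the open sentence quoted inside (4); it is (4) itself.
quoted = sentence4.split("'")[1]
assert quoted == s
assert sub(quoted, quoted) == sentence4
print(sentence4)
```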
Now, all that remains to be done is straightforward: since f(g(ψ)) = g[ψ(g(ψ))] for every ψ ∈ Fm₁, it follows from (5) that

Q ⊢ η(g(ψ)) ←→ ϕ(g[ψ(g(ψ))]),

which, in turn, choosing ψ to be η, yields

Q ⊢ η(g(η)) ←→ ϕ(g[η(g(η))]),

showing that the sentence λ = η(g(η)) indeed has the desired property.
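For reference, the whole construction can be compressed into one chain of displays (a LaTeX restatement of what was proved above, introducing no new material):

```latex
\begin{align*}
f(n) &= g\bigl[g^{-1}(n)\,(n)\bigr] \quad (n \in N) && \text{(the diagonal function)}\\
Q &\vdash \eta(n) \leftrightarrow \varphi(f(n)) && \text{(representability of } f\text{, i.e., (5))}\\
Q &\vdash \eta(g(\psi)) \leftrightarrow \varphi\bigl(g[\psi(g(\psi))]\bigr) && (\psi \in Fm_1)\\
Q &\vdash \lambda \leftrightarrow \varphi(g(\lambda)) && (\lambda := \eta(g(\eta)),\ \text{i.e., } \psi := \eta).
\end{align*}
```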
Gait Training with Bilateral Rhythmic Auditory Stimulation in Stroke Patients: A Randomized Controlled Trial

The aim of this study was to investigate the effect of gait training with bilateral rhythmic auditory stimulation (RAS) on lower extremity rehabilitation in stroke patients. Forty-four participants (<6 months after stroke) were randomly allocated to the gait training with bilateral rhythmic auditory stimulation (GTBR) group (n = 23) and the control group (n = 21). The GTBR group had gait training with bilateral RAS for 30 min a day, 5 days a week, for 6 weeks, in addition to conventional therapy. The control group had gait training without RAS, plus conventional therapy. Outcome measures included gait symmetry, gait ability, balance ability, and lower extremity function. Gait symmetry on step time showed significant improvements compared to baseline (p < 0.05) in the GTBR group, but not in the control group. Gait ability was significantly improved in both groups relative to baseline values (p < 0.05), and the GTBR group showed significantly greater improvement than the control group (p < 0.05). Both groups showed significant improvements in the Timed Up and Go test (TUG), Berg Balance Scale (BBS), and Fugl-Meyer Assessment (FMA) compared to baseline (p < 0.05). GTBR is an effective therapeutic method of improving symmetric gait in stroke rehabilitation. Moreover, we found that a GTBR beat frequency matched to a faster-than-comfortable step time might be even more beneficial in improving gait symmetry. Future studies may develop methods of applying RAS to both step time and step length for improvement of gait symmetry in stroke patients.

Introduction

Stroke patients have gait disorders, impairments in the lower extremities, changes in gait pattern, weakness, loss of sensation, spasticity, abnormal movement timing, and balance disorders [1]. Patients with stroke highlight that changes in gait are among the most urgent impairments needing attention. For an effective rehabilitation program, improvement in gait pattern should be included as a primary treatment goal [2]. Moreover, restoration of gait can be viewed as the most significant goal in the rehabilitation of stroke patients [3]. Gait disorders in stroke patients result from several factors, such as declining paretic muscular strength, spasticity, and paralysis [4]. Gait dysfunction in stroke patients can also include limitations with respect to walking speed and movement, due to flexor synergy of the paretic upper extremity as well as weakness of the extensors [5]. Moreover, because of abnormal muscular activity and spasticity, asymmetries between the non-paretic and paretic sides appear in the gait pattern [6]. Additionally, stroke patients show asymmetric gait because their paretic side moves more slowly than their non-paretic side. RAS has previously been applied to stroke patients, but the results are unclear. This is because stroke patients do not experience a rhythm problem in walking, like patients with Parkinson's disease, but an asymmetry problem. Therefore, in order to address this problem, this study aimed to measure each individual's step time, rather than using the metronome settings applied in existing studies, to provide RAS to stroke patients and to improve the asymmetry of their walking. The purpose of this study was to investigate the effects of gait training with bilateral RAS on gait symmetry, walking ability, and balance ability of stroke patients.
Subjects

Participants were inpatients who had developed a stroke at least 6 months previously, and were selected from U rehabilitation center (Gyeonggi Province, South Korea). Individuals were included when they (1) were able to walk independently for a minimum of 10 min, (2) had a score over 21 on the Korean version of the Mini-Mental State Examination, and (3) were able to follow instructions; they were excluded when they had (1) auditory system problems or (2) other conditions of the lower extremities, such as fractures or digital neuropathy. All participants signed informed consent forms after receiving detailed explanations of the study objectives and requirements. The study was approved by the Institutional Review Board of Sahmyook University (date of approval: 12 October 2012; No. SYUIRB2012-059).

Procedure

A total of 45 participants were enrolled in the trial and randomly assigned to the gait training with bilateral rhythmic auditory stimulation (GTBR) group (n = 23) or the control group (n = 22) after undergoing a preliminary test. Random Allocation Software was used to minimize selection bias. The GTBR group underwent GTBR for 30 min a day, 5 days a week, for 6 weeks, in addition to conventional rehabilitation. The control group had conventional rehabilitation with gait training without RAS (acoustic cue). The posttest was conducted 1 day after the 6-week intervention period. All assessment data were collected by 3 physical therapists who were blinded to the treatment allocations; the training was conducted by 2 therapists who were not blinded, so the study was assessor-blind only. Subjects who were unable to participate in the study or who were not tested after the training were excluded from the final analysis. In the GTBR group, the final analysis was performed on all 23 patients. In the control group, one patient was excluded because of discharge from the hospital, leaving 21 patients for the final analysis (Figure 1).

Auditory Stimulation Sound Production Process

In order to determine walking speed, subjects walked at various speeds without an assistant or support, and the most comfortable speed for each subject was set. While patients were walking at a comfortable speed, the step times of both legs were calculated using a gait analysis system (OptoGait, Microgate S.r.l, Bozen, Italy). Based on the calculated step time, the auditory stimulation sound was produced at a rate 10% faster than the comfortable step time for the paretic side and 5% faster for the non-paretic side. Previous studies have shown that gait velocity and symmetry improved when patients walked to an acoustic cue faster than their comfortable speed. To reduce the asymmetry between the two step times, fast acoustic cues of 10% and 5% were applied differently to the two sides. The auditory stimulation sound was generated using digital audio editing software (GoldWave v5., GoldWave Inc., St. John's, NL, Canada). We provided sounds of different pitch to distinguish the cues applied to the two legs. Measurements were taken every 2 weeks and acoustic cues were adjusted according to the changed step times.
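As an illustration of how the individualized cue tempo could be derived, the sketch below (Python; the step times are hypothetical, and the interpretation of a "10% faster" cue as a 10% higher beat frequency is an assumption, since the paper does not spell out the conversion) computes inter-beat intervals from measured step times:

```python
# Sketch: deriving bilateral RAS cue intervals from measured step times.
# Assumption (not stated explicitly in the paper): a cue "10% faster" than
# the comfortable step time means a 10% higher beat frequency, so the
# inter-beat interval is the step time divided by 1.10.

def cue_interval(step_time_s: float, speedup: float) -> float:
    """Inter-beat interval (s) for a cue `speedup` fraction faster."""
    return step_time_s / (1.0 + speedup)

paretic_step_time = 0.68       # s, hypothetical OptoGait measurement
non_paretic_step_time = 0.55   # s, hypothetical

print(f"paretic cue:     {cue_interval(paretic_step_time, 0.10):.3f} s")      # 10% faster
print(f"non-paretic cue: {cue_interval(non_paretic_step_time, 0.05):.3f} s")  # 5% faster
```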
Gait Training with Bilateral RAS (GTBR)

GTBR was performed once a day for 30 min; each session consisted of a 5-min warm-up, 20 min of gait training, and a 5-min cool-down. The warm-up was done to increase adaptability to RAS, to allow practice with the RAS beat, and to reduce spasticity of the legs. During the gait training, the auditory stimulation sound set for each individual was used. Before the gait training, the participants nodded their heads in time with the sound, tapped the floor with their feet while sitting, and marched in place to adapt to the RAS. The gait training was performed in an independent treatment space with an elliptical track structure, including a linear section of 3300 cm and a rotation section of 879.2 cm. While subjects walked, each sound was matched to the heel strike of the corresponding foot. To prevent disturbance due to external interference and noise during walking, the subjects heard the sound through Bluetooth wireless headphones (MDR-RF4000K, Sony, Tokyo, Japan). One physical therapist monitored the patient's heel strikes against the bilateral RAS and confirmed, via the same Bluetooth wireless headphones, that the subject walked in time with the cues. Before the experiment, subjects did not receive any training or electrical stimulation that could affect walking, and all subjects were asked to wear the same type of shoes for accurate measurement. Anyone who felt vertigo during training, or who was tired and unable to walk any longer, was given sufficient rest.

Conventional Rehabilitation

Conventional rehabilitation programs consisted of therapeutic exercise, occupational therapy, and electrical stimulation therapy. Therapeutic exercise was based on proprioceptive neuromuscular facilitation and consisted of upper extremity movement. Occupational therapy consisted of upper extremity functional exercise to improve activities of daily living. Electrical stimulation therapy consisted of applying a passive functional electrical stimulus to the wrist. Each session consisted of 30 min of therapeutic exercise, 20 min of occupational therapy, and 10 min of electrical stimulation therapy.
Outcome Measurements

To collect data for quantitative gait analysis, a gait analysis system (OptoGait) was used to measure temporal and spatial gait parameters. The OptoGait system used in this study consisted of two 3-m-long bars, a transmitting bar and a receiving bar, and a webcam (Logitech Webcam Pro 9000, Logitech International S.A., Lausanne, Switzerland). The bars were placed 1 m apart. The transmitting bar sent infrared signals, from light-emitting diodes (LEDs) set 1 cm apart, to the receiving bar. The patients' gait was detected between the bars, which collected the data on temporal and spatial variables. The webcam saved the video information and accurately synced it with the gait data (i.e., the order of the feet and recognition of errors from overlapping feet). The collected temporal and spatial data were processed by OptoGait Version 1.5.0.0. The software was also used to connect the long axes of the reflective markers attached to the lower extremities in order to measure the flexion angle of the knee joint in the stance phase. Gait symmetry was calculated with the data collected by the gait analyzer software as described below. The symmetry index (SI) is the ratio of the difference to the sum of the non-paretic and paretic side values [24]. Gait asymmetry (GA) was calculated by multiplying the logarithm of the paretic/non-paretic ratio by 100 [25]. The symmetry ratio (SR) was obtained by dividing the paretic side value by the non-paretic side value. In this study, the step times of both sides were used in calculating symmetry [26] (a short computational sketch of these indices follows at the end of this section). To assess balance ability, we used the Timed Up and Go test (TUG) and Berg Balance Scale (BBS). For the TUG, the participant sat on a chair with armrests and, upon hearing the command "begin", got up from the chair, walked to a mark 3 m ahead, returned to the chair, and sat down; the time from start to finish was measured for each participant. The BBS is used to assess the balance of older people and people with neurological disorders who are at high risk of falling. It is a functional balance test that considers 3 aspects: maintenance of posture, postural control through voluntary exercise, and reaction to external stimuli. The BBS has 14 items: movement from sitting to standing, standing unsupported, sitting with back unsupported, movement from standing to sitting, transfer from one chair to another, standing with eyes closed, standing unsupported with feet together, standing and reaching forward with outstretched arm, standing and picking up an object from the floor, turning to look behind over the left and right shoulders, turning 360°, alternating feet placed on a platform, standing with one foot in front, and standing on one leg. Each item is scored on a scale of 0 to 4, with a total score of 56 [27]. Lower extremity function was measured by the Fugl-Meyer Assessment (FMA). The FMA is used to assess motor recovery of the lower extremities and comprises items with respect to the hip, knee, and ankle: 3 for reflexes, 3 for flexor synergy, 4 for extensor synergy, 4 for volitional movement, 1 for normal reflex activity, and 3 for coordination (maximum score 34 points) [28].

Statistical Analysis

All statistical analyses were performed using SPSS Version 19.0 (SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test was used to confirm the normal distribution of all outcome variables.
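As a concrete recap of the three symmetry measures defined above, here is a minimal sketch (Python; the step times are hypothetical, and the exact SI normalization is an assumption, since the text describes SI only as the ratio of the difference to the sum, while some authors additionally scale it):

```python
# Sketch of the step-time symmetry measures described above.
import math

paretic, non_paretic = 0.68, 0.55  # hypothetical step times (s)

si = (non_paretic - paretic) / (non_paretic + paretic)  # symmetry index (difference/sum)
ga = 100 * math.log(paretic / non_paretic)              # gait asymmetry, 100 * ln(ratio)
sr = paretic / non_paretic                              # symmetry ratio (1.0 = symmetric)

print(f"SI = {si:.3f}, GA = {ga:.1f}, SR = {sr:.2f}")
```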
The paired t-test was used to compare dependent variables within groups, whereas the independent t-test and chi-squared test were used to compare variables between the 2 groups. Statistical significance was set at a p value < 0.05.

Results

One of the 22 participants in the control group was excluded from the analysis because the patient was discharged from the hospital. Accordingly, a total of 44 participants were included for analysis, of whom 23 were in the GTBR group and 21 were in the control group. There were no significant differences between the two groups in general characteristics or dependent variables (Table 1). Values are expressed as mean ± standard deviation (SD). The independent t-test and chi-squared test were used to compare variables between groups. GTBR = gait training with bilateral rhythmic auditory stimulation, MCA = middle cerebral artery, ACA = anterior cerebral artery. Outcome measures of gait symmetry, gait ability, balance ability, and lower extremity function of the GTBR and control groups are shown in Table 2. Gait symmetry on step time was significantly improved in the GTBR group relative to baseline (p < 0.05). In addition, the decrease in step-time asymmetry was significantly greater in the GTBR group than in the control group (p < 0.05). However, gait symmetry on step length did not significantly improve in either the GTBR or the control group. Gait ability in terms of velocity and cadence was significantly improved in both groups relative to baseline values (p < 0.05). However, the GTBR group showed significantly greater improvement in velocity and cadence than the control group (p < 0.05). Both groups showed significant improvement in the TUG test, BBS, and FMA relative to baseline values (p < 0.05). However, the difference in change between groups was not statistically significant.

Discussion

In comparison to standard gait training, a 6-week program including GTBR proved to be more effective for stroke patients' gait symmetry, gait ability, balance ability, and lower extremity function. GTBR, a way of inducing synchronization between RAS and motor areas, promoted stroke patients' gait symmetry. It also improved their gait ability, balance ability, and lower extremity function. Additionally, the control of gait pattern with use of bilateral RAS reduced the gait asymmetry (GA) of stroke patients, demonstrating that it can be an effective component of rehabilitation programs for stroke patients. The auditory cues used in previous studies were based on the subjects' cadence, measured by stopwatch or gait measurement sensors. In contrast to previous studies [29][30][31], the cues in this study were set according to the step time calculated by the gait analysis system. On the paretic side, RAS with a 10% faster tempo than the step time was applied, and on the non-paretic side, RAS with a 5% faster tempo than the step time was applied. The present study thus applied a faster tempo than the average tempo suggested by existing studies on auditory stimulation. Because previous studies used various tempos and gait training was conducted on a treadmill [29][30][31], it was not possible to accurately suggest which tempo would be more effective in the case of normal gait.
However, in consideration of the results of this study, it could be suggested that applying auditory stimulation with a faster tempo to the paretic and non-paretic sides would be an effective rehabilitation program for stroke patients. In this study, the gait symmetry of step time in the GTBR group increased significantly, but that of step length showed no change. In order to investigate change in symmetry, this study utilized changes in time and length values based on the step. In comparison, previous research generated a symmetry ratio using values of stance time, swing time, and stride length. Thus, it is impossible to directly compare existing research with the results of the present study. However, it could be confirmed through GTBR that it is possible to improve the asymmetric gait pattern of stroke patients. Although symmetry of step time improved through GTBR, symmetry of step length saw no change. This was because the auditory stimulation was created on the basis of step time. As gait training was performed in accordance with fast auditory stimulation designed on the basis of the step times of both the paretic and non-paretic legs, the step time of the experimental group could be controlled. Subsequently, gait speed actually increased due to walking at the controlled tempo. These experimental results show that, because of the auditory stimulation, step time became faster and, at the same time, the temporal symmetry of the gait pattern of stroke patients improved. According to the study by Patterson et al. [24], healthy people typically show a step-length SR of 1.03, whereas stroke patients show an SR of 1.13. The researchers reported that SRs of the paretic and non-paretic sides close to 1 can be regarded as normal and symmetric. In this study, the experimental group showed a significant SR decrease from 1.14 to 1.11. With GTBR, stroke patients' gait symmetry improved. Gait training performed in accordance with auditory stimulation of a regular and faster tempo affected pattern generation so as to change the gait pattern shown before the training. GTBR improved the asymmetric gait pattern of stroke patients and developed movement of the paretic side. In consideration of these results, gait symmetry is regarded as having improved. There was no change in the symmetry of step length in this study. Given this result, it is difficult to improve the gait symmetry of spatial parameters using only step time, the basis of the auditory stimulation. A training method based on spatial data may be needed, rather than a change in gait pattern driven by auditory stimulation based on step time, which is temporal data. The step time utilized in this research is temporal data reflecting the movement of the lower extremities during walking, so it may be insufficient to change step length, which is a spatial index. In this regard, further research is required to achieve both spatial symmetry and a rehabilitation effect at the same time. As a result of this study, both groups showed significant improvements in gait ability, with a greater improvement in the GTBR group, as demonstrated in the comparison between the two groups. Throughout the gait training, while performing heel strikes according to bilateral RAS at a faster tempo than their normal step time, patients moved their lower extremities faster than usual.
As such rapid movement of the lower extremities was repeated, gait patterns changed, and gait speed increased to a speed similar to the tempo of the given auditory stimulation. Moreover, with stepping in accordance with the auditory stimulation of a rapid tempo, stride length changed and cadence increased. In the study by Hurt et al. [32], the group who underwent gait training with RAS showed an increase in speed from 38.8 m/min to 57.6 m/min after the training. In the experiment by Schauer and Mauritz [19], speed increased by 27%. In addition, in the study by Thaut et al. [11], a group performing gait training with RAS showed a significant increase in gait speed, from 14.1 to 34.5 m/min over the course of the training, whereas the control group did not improve significantly. On the basis of the similar results of their 6-week study, Thaut et al. [15] asserted that the amount of the increase was higher and that the period of training affected the results. GTBR improved one-leg stance, so that the change in gait pattern increased gait parameters such as stride length and stride time [22]. Then, paralysis of both the upper and lower extremities decreased [33], and due to the increased control of the trunk and lower extremities, stride length and time were enhanced [20]. It can be concluded that GTBR induced changes in gait pattern, improved movements of the trunk and lower extremities, and thereby improved gait ability. With the change in gait pattern resulting from auditory stimulation, a direct improvement appeared in cadence. Furthermore, auditory stimulation of a rapid tempo improved gait speed. To examine balance ability, this study utilized the TUG and BBS and showed that GTBR improved the balance ability of stroke patients. This improvement was also shown in the control group. Balance and gait abilities are significantly correlated, and both gait speed and balance ability increased [34]. The amounts of increase in gait speed and balance ability showed a similar tendency. Our results were similar to the work of Combs et al. [35], which found that gait training improved BBS scores by 4.2 points while speed changed from 0.62 to 0.73 m/s. Thus, it can be observed that both gait and balance improved simultaneously. Next, in the research by Yavuzer et al. [36], movement of the pelvis improved after training that aimed to enhance balance. As movements of the upper and lower extremities, as well as thoracic spine rotation around the axis of the pelvis, were developed by GTBR, movement of the trunk was enhanced, so that balance ability ultimately improved. Additionally, pelvic paralysis decreased through GTBR. Subsequently, with stability and improved function of the hip joint, compensatory activity decreased and balance increased. The increase in activities for maintaining balance affected balance ability [33,37]. With gait training for 6 weeks, lower extremity function and trunk control developed. To examine how GTBR would change lower extremity function, this study used the FMA scale. In this study, both the GTBR and control groups showed significant improvements, and there was no difference between the groups. Noting the significant effects in both groups, it could be observed that repeated gait training is required for functional improvement of the lower extremities.
Moreover, a greater change was found in the values calculated within the GTBR group, suggesting that GTBR is an effective method for developing lower extremity function. If training were performed for a longer term than in the present study, it could be suggested as an effective rehabilitation program for functional recovery of stroke patients' lower extremities. The effects of GTBR found in this experiment are not generalizable, because our sample of stroke patients was small. We suggest that GTBR be applied to a larger sample to explore more closely its effects on functional improvements such as balance and lower extremity function.

Conclusions

GTBR is useful as a rehabilitation program for functional locomotor recovery of stroke patients. It can also be used as a home exercise program for outpatients. Future studies may follow four directions to further establish the role of RAS in gait rehabilitation: (1) to examine specifically the effects on symmetry, gait variables should be analyzed in various ways; (2) to discover the most effective tempo, a variety of auditory stimulation tempos should be applied; (3) a study providing visual cues for effects on step length should be conducted; (4) to explore the effects on balance and lower extremity function, long-term training should be performed.
The Habitual Diet of Dutch Adult Patients with Eosinophilic Esophagitis Has Pro-Inflammatory Properties and Low Diet Quality Scores

We determined the nutritional adequacy and overall quality of the diets of adult patients with eosinophilic esophagitis (EoE). Dietary intakes stratified by sex and age were compared to Dietary Reference Values (DRV). Overall diet quality was assessed by two independent Diet-Quality-Indices scores, the PANDiet and DHD-index, and compared to age- and gender-matched subjects from the general population. Lastly, food and nutrient intakes of EoE patients were compared to intakes of the general population. Saturated fat intake was significantly higher and dietary fiber intake significantly lower than the DRV in both males and females. In males, the DRV were not reached for potassium, magnesium, selenium, and vitamins A and D. In females, the DRV were not reached for iron, sodium, potassium, selenium, and vitamins A, B2, C and D. EoE patients had a significantly lower PANDiet and DHD-index compared to the general population, although the relative intake (per 1000 kcal) of vegetables/fruits/olives was significantly higher (yet still up to 65% below the recommended daily amounts) and alcohol intake was significantly lower compared to the general Dutch population. In conclusion, the composition of the habitual diet of adult EoE patients has several pro-inflammatory and thus unfavorable immunomodulatory properties, just as that of the general Dutch population, and EoE patients had lower overall diet quality scores than the general population. Due to the observational character of this study, further research is needed to explore whether this contributes to the development and progression of EoE.

Introduction

Eosinophilic esophagitis (EoE) is a chronic, immune-mediated condition of the esophagus affecting both adults and children [1,2]. The exact pathogenic mechanism of EoE is not fully understood.

Study Population

Adult patients diagnosed with active EoE (≥15 eosinophils/high power field (HPF) and symptoms of esophageal dysfunction), who participated in two trials (Trialregister.nl NTR4052 and NTR4892) performed in the Academic Medical Center, Amsterdam, the Netherlands, were included between 2013 and 2015. Patients participating in Trial NTR4052 followed an allergen-microarray-guided dietary intervention as treatment of EoE [19]. Patients participating in Trial NTR4892 were treated with an elemental diet for four weeks [9,20]. From these two studies, baseline data, i.e., data collected before starting any (elimination) diet other than self-imposed dietary measures, were used in the present study. Information on atopic status and food avoidance was collected by a medical and allergy-focused diet history.

Dietary Intake Assessment

All patients completed a 3-day non-weighed food diary (2 working days and 1 weekend day) as described previously (online supporting information in [15]). In short: patients recorded amounts of foods and drinks in detail, as well as dietary supplements (type; used yes/no; used infrequently). Portion size was assessed using household measures. The average of the 3-day diaries (without supplements) was calculated using the Dutch NEVO-online Food Composition Database [21] and Evry software [22] for energy (kcal) and 32 nutrients (Supplementary Table S1). Additionally, all foods consumed were allocated to one of 38 food groups, based on the Dutch National Food Consumption Survey (DNFCS) [23] (see Supplementary Table S1).
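To make the averaging step concrete, a minimal sketch follows (Python; the nutrient values are hypothetical, as the study itself derived them with the NEVO-online database and Evry software):

```python
# Sketch: averaging per-day nutrient totals from a 3-day food diary.
from statistics import mean

# Hypothetical totals for one patient (2 working days, 1 weekend day),
# already converted from recorded foods to nutrients.
diary = [
    {"energy_kcal": 2150, "fiber_g": 18.2, "sat_fat_g": 31.0},
    {"energy_kcal": 1980, "fiber_g": 21.5, "sat_fat_g": 27.4},
    {"energy_kcal": 2400, "fiber_g": 16.9, "sat_fat_g": 35.8},
]

habitual_intake = {k: mean(day[k] for day in diary) for k in diary[0]}
print(habitual_intake)
```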
Dutch Dietary Reference Values

The 3-day mean habitual intake of nutrients, stratified by age group and gender, was compared to the Dutch DRV for males and females [24][25][26].

Overall Diet Quality

For each patient, two DQIs were calculated from the dietary data, i.e., the Probability of Adequate Nutrient intake (PANDiet) [27] and the Dutch Healthy Diet Index (DHD-index) [28]. Since the PANDiet score is based on nutritional recommendations (i.e., the intake of macro- and micronutrients) and the DHD-index on food-based dietary guidelines (i.e., intakes of fruit, vegetables, fish, etc.), the scores were selected to complement each other.

PANDiet

The PANDiet is a nutrient-based DQI which measures the adequacy of intake of 25 macro- and micronutrients in comparison to nutritional recommendations: protein, total fat, saturated fatty acids (SFA), linoleic acid (LA), α-linolenic acid (ALA), the sum of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), total carbohydrate, dietary fiber, vitamins A, B1, B2, B6, B12, C, D, E, calcium, copper, iodine, iron, magnesium, potassium, selenium, sodium, and zinc [29]. Further details of the PANDiet and its validation in the general Dutch population can be found in Supplementary File S1.

DHD-Index

The DHD-index is a DQI that ranks participants according to their adherence to food-based dietary guidelines, i.e., in the Netherlands, the Dutch Guidelines for a Healthy Diet of 2006 [30]. These guidelines consist of ten components (i.e., physical activity and the intake of vegetables, fruit (juices), fiber, fish, saturated fatty acids (SFA), trans fatty acids, acidic drinks and foods, sodium and alcohol), which are divided into adequacy components and moderation components. The DHD-index was previously (2012) validated in the general Dutch population [28]. Further details of the DHD-index and its application to the present study can be found in Supplementary File S1.

Data from the General Dutch Population

The 3-day average habitual intake of foods and nutrients in EoE patients, stratified by age group and gender, was compared to reference intakes of males 18 to 69 years (n = 1114) and females 18 to 50 years (n = 763), according to the classification of the Dutch National Food Consumption Survey (DNFCS, 2007-2010) [23]. The average dietary assessment in the DNFCS was based on two non-consecutive 24-hour dietary recalls per subject, using the Dutch NEVO-online Food Composition Database [23]. Nutrient intakes from supplements were not assessed.

Statistical Analyses

Descriptive statistics were used to summarize findings. All data were checked for normality using Shapiro-Wilk tests and histograms. Normally distributed data were presented as means and standard deviations (SD). Skewed data were presented as medians with interquartile ranges (IQR). Categorical data were reported as n with percentages of the total. To compare dietary intake with the DRV for adults, according to age and sex, skewed data were log-transformed prior to analyses. One-tailed Student's t-tests were performed to test whether dietary intake of the different nutrients was significantly lower (at p = 0.05) or significantly higher (at p = 0.05) than the reference value (given by the DRV), based on the hypothesis that the intake of the tested nutrient would be unhealthier than recommended.
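A minimal sketch of such a one-tailed test is given below (Python/scipy for illustration only; the study itself used SAS and SPSS, and the intake values here are hypothetical):

```python
# Sketch: one-tailed one-sample t-test of log-transformed intakes against a DRV.
import numpy as np
from scipy import stats

drv_sat_fat_en_pct = 10.0                    # DRV: saturated fat below 10 en%
intakes = np.array([13.5, 12.1, 14.8, 11.9,  # hypothetical en% values
                    13.0, 12.7, 15.2, 12.4])

log_intakes = np.log(intakes)                # skewed data are log-transformed
# H1: intake is *higher* than the maximum recommended value
t_stat, p_value = stats.ttest_1samp(
    log_intakes, np.log(drv_sat_fat_en_pct), alternative="greater"
)
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```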
For example, we tested whether fat intake was significantly higher in EoE patients than the maximum recommended intake; however, for most nutrients we tested whether the intake was significantly lower than recommended (examples are calcium and vitamins A, C and D). The results were presented as the mean and the upper or lower bound of the 95% confidence interval (CI) (depending on whether the intake of a nutrient was tested as significantly lower or higher than recommended, respectively) on the original (back-transformed) scale. The DQI scores of EoE patients were compared to the scores of the general Dutch population using an independent two-tailed samples t-test. Patient characteristics were compared between the EoE patients and the general Dutch population using independent samples t-tests or Mann-Whitney U tests. Food and nutrient intakes of the EoE patients and the general Dutch population were compared using Mann-Whitney U tests. These comparisons were made after the intakes of nutrients were corrected for energy intake (consumption per 1000 kcal/day), because energy intake in men of 31-50 years differed significantly between the two groups. Statistical analyses were performed using SAS Enterprise Guide v.6.1 and IBM SPSS Statistics v.20.0. p-Values of <0.05 were considered statistically significant.

Patient Characteristics

Baseline characteristics of the 34 EoE patients were presented previously (Table 1, adapted from [15]). In short, the majority of EoE patients (79%) reported having one or more concomitant atopic diseases. Sixty-two percent of the patients reported having one or more food allergies, including pollen-food syndrome (previously known as Oral Allergy Syndrome). Nineteen patients (56%) avoided specific foods: 12 (35%) because of pollen-food syndrome, 7 (21%) because of food allergy other than pollen-food syndrome and 12 (35%) due to dysphagia, food impaction or dyspepsia. The most frequently avoided food groups were fruit, n = 13 (38%), nuts/peanut/seeds, n = 13 (38%) and vegetables, n = 4 (12%) [15]. Twenty-nine percent of the patients (all male) took dietary supplements; however, patients changed brands and types frequently. Age and BMI of EoE patients and the general Dutch population were comparable [23]. However, among EoE patients a higher percentage was male (76.5% versus 59%, p = 0.041). Table 2 shows that the average percentage of energy (en%) from protein, carbohydrates and total fat in EoE patients was in line with dietary guidelines, although protein and total fat intakes were relatively high. Intake of saturated fat was significantly higher than the DRV (below 10 en%) in males (13.2 en%; p < 0.001), whereas dietary fiber intake was significantly lower than the DRV (30 g/d), both in males (19.6 g/d; p < 0.001) and in females. Table 3 shows that the PANDiet and DHD-index of the EoE patients and the general Dutch population were statistically significantly different, with the EoE patients having lower DQI scores than the general Dutch population.

Comparison of Intake by EoE Patients with Intake by the General Dutch Population

Male EoE patients had lower energy intakes than the general Dutch population: EoE males (n = 26), median 2228 kcal/day, IQR 1709-2556, vs. the general population (n = 1114), median 2582 kcal/day, IQR 2131-3088, p = 0.003. However, for the different age groups these differences were not statistically significant, except for males aged 18-30 years.
For EoE women, there were no statistically significant differences in energy intake. After correction for energy intake, male EoE patients had a statistically significantly higher energy percentage from fat and lower intakes of omega-3 fatty acids (EPA and DHA), vitamin D and alcohol per 1000 kcal, while female EoE patients had lower intakes of omega-3 fatty acids, iodine and vitamin C per 1000 kcal than the general Dutch population (Table 4) [23]. Table 5 shows that, after correction for energy intake, male EoE patients consumed significantly more vegetables, fruit/nuts/olives, egg/egg products and miscellaneous food groups and, in contrast, fewer alcoholic beverages and less added fat than the general Dutch population. Female EoE patients consumed more potatoes/other tubers and vegetables and fewer non-alcoholic beverages.

Discussion

EoE is an emerging chronic disease, affecting individuals at any age with a predominance of Caucasian males under the age of 50 [31]. To our knowledge, this is the first study in which the nutritional adequacy and overall diet quality of the habitual diets of adult patients with EoE, prior to any elimination diets other than self-imposed dietary measures, have been assessed by diet scores. In addition, this study is the first in which intake data of adult patients with EoE were compared to those of the general population. In a previous study, we found a relationship between nutrition and the degree of inflammation and mucosal integrity in EoE patients, pointing towards a possible protective effect of a healthy diet consisting of more dietary fiber, fermented dairy and plant-based foods and less fat, animal foods and omega-6-rich oil [15]. In the current study, we showed that for several nutrients the intake of adult EoE patients did not meet the DRV. Among the nutrients whose intakes differed statistically significantly from the DRV, several are known for their beneficial or adverse effects on the microbiome, immune system, inflammation or mucosal integrity. For example, the high intake of (saturated) fat in EoE patients is likely to induce a shift in microbiome composition associated with inflammatory processes [32,33]. In addition, intakes of total fat and protein in EoE patients, although within the DRV range, were relatively high, which has been shown to negatively impact the microbiome and inflammatory processes as well [32,34]. These findings point towards a high intake of animal foods, typical of a Western diet. In addition, the low intake of dietary fiber in EoE patients is unfavorable for a healthy gut microbiota [35]. The colonic fermentation of dietary fiber results in the production of short-chain fatty acids, which have anti-inflammatory and immune-regulatory benefits [36,37] and are important for the maintenance of epithelial integrity [38]. Moreover, the vitamin A metabolite retinoic acid and the vitamin D metabolite 1,25-dihydroxyvitamin D3 have direct effects on immune cells, i.e., by enhancing the induction of regulatory T cells and by controlling Th1 and Th17 differentiation [39,40]. Both vitamins A and D, as well as vitamin C, selenium and iron, are recognized by the European Food Safety Authority (EFSA) for their local and systemic immunomodulatory properties [41]. Intakes of these nutrients were low in male and/or female EoE patients and may induce an imbalance in their immune system. Nutritional deficiencies might therefore gain relevance in EoE.
There are a few studies on nutritional deficiencies in EoE patients, predominantly in children [42]. In this systematic review, it was found that vitamin D levels of children with EoE, both pre- and post-intervention, were low. One study in adults [43] found that a positive skin prick test reaction to peanut was more common in patients who had vitamin D insufficiency (adjusted odds ratio 7.57; p = 0.009). However, higher vitamin D levels correlated with higher histologic eosinophil counts (R = 0.61; p = 0.03). When comparing the diet composition of our study population to intake levels of the general Dutch population, male EoE patients had significantly lower energy intakes. After correction for energy intake, we found that intakes of several nutrients differed between EoE patients and the general Dutch population. EoE patients (males and/or females) had higher total fat intakes and lower intakes of vitamins C and D (all p < 0.05). Moreover, EoE patients had lower intakes of omega-3 fatty acids. These differences all point towards a diet with more pro-inflammatory, fewer anti-inflammatory and more unfavorable immunomodulatory properties, which overall seems less healthy. Although a significant percentage of EoE patients avoided fruit/vegetables due to pollen-food syndrome, food allergy or EoE symptoms, EoE patients still had a higher relative intake of well-tolerated fruits and vegetables (per 1000 kcal) than the general Dutch population. However, fruit and vegetable consumption in both groups was far below (up to 65% below) the recommended daily amounts. Remarkably, male EoE patients consumed less alcohol than male subjects from the general Dutch population. This may be due to the pain and discomfort caused by alcohol in EoE. Both DQI scores revealed that EoE patients consumed a less healthy diet than the general Dutch population. For the PANDiet score, the lower overall diet quality can be explained by differences in nutrient intakes between the two populations (i.e., in total fat, omega-3 fatty acids, iodine and vitamins C and D). The difference in DHD-index score might be explained by differences in the intake of fatty acids. In EoE patients, the intake of total fat was higher and the intake of omega-3 fatty acids was lower than in the general Dutch population, which led to a lower DHD-index in EoE patients. In contrast to fruits and vegetables, which were avoided by half of the EoE patients due to pollen-food syndrome, food allergy or EoE, only a few patients (9%) avoided fish [15], so avoidance contributes only to a limited extent to explaining the low intake of this food group in the EoE patients. The differences in DQI scores between the total EoE patient population and the general Dutch population might also partly be explained by a difference in energy intake. Overall, the subjects in the general Dutch population had a higher mean energy intake than the EoE patients, which may have contributed to a better diet quality in the general population, as individuals with a higher total energy consumption will meet the requirements for specific nutrients or food groups more easily [17]. The strengths of this study include the use of a detailed, standardized way of recording diet history and the fact that, in addition to the assessment of individual nutrient and food intakes, we used DQI scores to assess the overall diet quality of EoE patients.
DQI scores examine the effects of the overall dietary pattern and represent a broader picture of food and nutrient intakes, as the combination of foods and nutrients in complex eating patterns and their potential synergistic effects are taken into account. The use of DQI scores may therefore be more predictive of disease risk/severity than individual foods or nutrients. The findings of this study are in line with our previous findings on the relationship between habitual diet and severity of disease in the same study population [15]. Possible limitations of the present study are the small sample size of the EoE population (especially the low number of females), the fact that we did not take into account the potential influence of self-imposed dietary measures due to food allergy or EoE symptoms on food and nutrient intakes, the differences in timeframes between the dietary assessment of the general Dutch population (between 2007 and 2010) and the EoE patients (between 2013 and 2015), and the cross-sectional design of the study. The latter hampers the ability to draw conclusions about the causal relationship between diet quality and disease risk and severity. Large prospective cohort studies or intervention studies are needed to assess whether there is a causal relationship between diet quality and EoE. Moreover, the dietary intake of the EoE patients was assessed by a 3-day food record. It is known that the dietary intake of individuals may vary from day to day, and hence the intake of infrequently consumed foods, such as fish, may be underestimated. We did not correct for multiple comparisons in our analyses, due to the explorative character of this study. However, although several significant comparisons would lose significance after correction, the main conclusion of our findings remains intact, namely that the diet of EoE patients has several pro-inflammatory properties, just as the diet of the general Dutch population does. Lastly, as in the DNFCS, we did not calculate the intake of supplements, because of the inconsistent use of amounts and types of supplements by patients, which may have influenced disease outcomes.

In conclusion, intakes of dietary fiber and several micronutrients were below the DRV, while intakes of saturated fat were higher than the DRV, in adult patients suffering from EoE. Total protein and total fat intakes were relatively high, yet within the range of the DRV, pointing towards a high intake of animal-based foods. Compared to the general Dutch population, the overall diet of EoE patients, as assessed by two independent diet quality scores, was generally less healthy, with the exception of the relative intakes (per 1000 kcal) of vegetables/fruits/olives, which were significantly higher (yet still far below the recommended daily amounts, by up to 65%), and the relative intake of alcohol, which was significantly lower. Thus, the habitual dietary intake of Dutch adult EoE patients has several pro-inflammatory and unfavorable immunomodulatory properties. These results support the hypothesis that an unhealthy diet is associated with the development and progression of EoE. The results of this study are complementary to the results of our recently published cross-sectional study in this population [15]. We are unable to determine whether the diet changed because of the disease, or whether an unhealthy diet precedes the development of EoE.
Further prospective and interventional studies are needed to demonstrate causal relationships and effects of diet on the development of EoE. Once the relationship between nutrition and EoE has been established, anti-inflammatory nutrition advice about which foods and nutrients to include and which to avoid could be provided in order to prevent EoE and to maintain remission.

Supplementary Materials: The following are available online at https://www.mdpi.com/2072-6643/13/1/214/s1, Table S1: Nutrients calculated on the basis of 3-day food diaries and distribution of food groups used; File S1: Calculation and validation of the Diet Quality Indices (DQI).

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board AMC (NL42608.018.12 and NL49502.018.14).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. A data sharing agreement will be requested.
Assessment of the Domestic Water Profile of the Region Surrounding Al-Ghadir River, Mount Lebanon

The World Health Organization (WHO), in its Guidelines for Drinking Water Quality, defines domestic water as the "water used for all usual domestic purposes including consumption, bathing and food preparation". Today, securing adequate safe drinking water and proper sanitation has become a major challenge facing Lebanon. This work is a case study whose objective is the assessment of the domestic water profile of the region surrounding Al-Ghadir River at Kfarshima and Al-Sahra. Samples were collected from 3 types of household water sources (municipality water, private wells and Water Vended Gallons) and assessed for their physicochemical and bacteriological profile. Results showed a deterioration pattern in the domestic water quality profile across the three water sources. The measured physicochemical and bacteriological parameters indicate the degree of deterioration of private well sources by seawater and wastewater infiltration, necessitating the enforcement of legislation governing the use and management of private wells, municipality water and privately vended water.

Introduction

The World Health Organization (WHO), in its Guidelines for Drinking Water Quality, defines domestic water as the "water used for all usual domestic purposes including consumption, bathing and food preparation" [1]. Lebanon is today facing a deterioration in its water resources, specifically its drinking water and groundwater, due not only to climate change and saline intrusion but also to increasing anthropogenic activities affecting water and the improper integrated management of its water resources. The provision of safe drinking water and proper sanitation has become a major challenge facing this country. The key problems are encountered in providing water to overpopulated cities, such as Beirut and its suburbs [2]. These suburbs are the residence of about 3.8 million citizens, which constitute one third of Lebanon's total population according to the projected figure for the year 2001 [3]. The surface area of Beirut's suburbs is about 28 km², with a population density of 25,400 persons/km² [4]. Recent attempts to estimate water demands and availability in Lebanon and Beirut have revealed a significant deficit. Domestic water need in Lebanon is estimated at 850,000 m³/day, with 450,000 m³/day available [5]. Beirut alone needs 280,000 m³/day, of which only 180,000 m³/day is accessible [6] [7]. Beirut water authorities are the main suppliers of water for the capital and its suburbs. But the scarcity of water in Beirut has led these authorities to apply drastic measures, such as rationing the water supply to 10 hours every other day [7]. Therefore, it is highly probable that water contamination is also induced throughout the distribution system by the negative pressure and inward suction during cutoff periods [8]. Water rationing, as a remedial action, has been a firmly established practice for the past four decades. Consumers are therefore resorting to other, complementary water sources. These sources are provided by water vendors, by the industrial sector and by pumping private wells. Exploitation of groundwater through private wells is uncontrolled and is still increasing at the present time [5]. The excessive exploitation of groundwater over the years has led to the infiltration of seawater and the deterioration of the freshwater aquifer [8]-[11].
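As a rough illustration of the scale of the shortfall cited above, the following sketch computes the deficit and supply coverage from the demand and availability figures quoted in the text; the function and variable names are illustrative only.

# Rough illustration of the supply deficits cited above (m3/day).
# The figures are those quoted in the text; names are illustrative.

def coverage(demand_m3_day: float, supply_m3_day: float):
    """Return (deficit in m3/day, supply coverage as a fraction of demand)."""
    return demand_m3_day - supply_m3_day, supply_m3_day / demand_m3_day

for region, demand, supply in [("Lebanon", 850_000, 450_000),
                               ("Beirut", 280_000, 180_000)]:
    deficit, frac = coverage(demand, supply)
    print(f"{region}: deficit {deficit:,.0f} m3/day, {frac:.0%} of demand met")
# Lebanon: deficit 400,000 m3/day, 53% of demand met
# Beirut: deficit 100,000 m3/day, 64% of demand met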
Parallel to the extraction of water from private wells, water shops are mushrooming. The number of these shops cannot even be estimated owing to the complete absence of quality-control monitoring legislation. These practices have exposed citizens to contaminated water and its resulting health problems [7] [12] [13], such as gastrointestinal diseases, which are mainly due to fecal contamination of drinking water resulting from deficiencies in storage tanks and cross-connections of sewer pipes with domestic water [8]. Even though assessment of the relative disease burden is deficient, the disease registry of the Public Health Ministry reports increasing incidence rates of diarrhea, dysentery and typhoid [12] [14] [15], arising not only from contaminated water but also from vegetables irrigated with water from contaminated wells [16]. In addition to this problem, the rapid increase in urban population challenges the ability of the public sector to meet water demands [17]-[19]; households therefore turn to a number of alternative or complementary water sources to satisfy their needs. These sources range from private wells to water vending, vended water bottles and bottled water [7] [8] [13] [20], and this has in turn aggravated the health problems. Therefore, the objective of this study is an assessment of the domestic water profile for the Al-Ghadir region in the suburbs of Beirut. This assessment is attained through physical, chemical and microbiological analysis of domestic water samples.

Sampling

Domestic water was collected during the dry season from the house taps of 75 houses. The sampling sites (houses) were chosen based on the availability of municipality and/or complementary well water. In addition to the house tap samples, 75 Water Vended Gallon samples from 3 companies were also collected from the same houses. Samples were taken from the Kfarshima and Al-Sahra region near Al-Ghadir River. This region is highly populated, with moderate drilling of private wells. The region suffers from a shortage of municipality water, especially during the dry season, and therefore depends on well water. Municipality and well waters are not used for all domestic purposes; they are used only for cleaning and bathing, while water vended bottles are used for consumption and food preparation. Water samples (300 ml) were collected in borosilicate glass bottles for bacteriological analysis. In addition, a 1-L polyethylene bottle soaked overnight in 10% v/v nitric acid was also used for water sample collection. The methods of sampling and collection are in accordance with Standard Methods for the Examination of Water and Wastewater [21]. During sampling, a survey was also conducted to determine the purposes for which each type of water is used.

Field Analysis

Parameters sensitive to environmental changes were measured on site. Temperature, electrical conductivity (ECw), pH, Eh, dissolved oxygen (DO) and total dissolved solids (TDS) were measured using a real-time data logger (model YK-2005WA).

Laboratory Analysis

The collected water samples were divided into two bottles. One bottle was acidified with nitric acid to pH < 2 and stored at 4°C for the analysis of Na by flame photometry and Fe by AAS. Working standard solutions were prepared by dilution of stock solutions (1 mg metal/ml in 2% HNO₃) with Milli-Q water.
The other bottle was stored at 4°C without the addition of preservatives for the analysis of the major water parameters: titration procedures were used for alkalinity (0.02 N H₂SO₄), Cl⁻ (0.014 N mercuric nitrate), and Ca, Mg and total hardness (0.01 M EDTA), and spectrophotometric methods for NO₃⁻ (cadmium reduction), SO₄²⁻ (turbidimetry) and PO₄³⁻ (ascorbic acid). The bacteriological quality was determined by the membrane filtration technique (Millipore).

Statistical Analysis

The statistical analysis of the physicochemical parameters was performed using the SPSS software.

Water Quality Profile of Well Water, Municipality Water and Water Vended Gallons

The mean values of the various measured physicochemical parameters are presented in Table 1 for well water, Table 2 for municipality water and Table 3 for Water Vended Gallons. Water samples were collected during the dry season from houses in the Kfarshima and Al-Sahra regions near Al-Ghadir River. This sampling was done to assess the water quality profile of the region, which proved to be poor, not only because of the presence of the highly polluted Al-Ghadir River but also because of the different domestic water types utilized by the people living in the region. This very poor and highly populated region utilizes all three water source types for its domestic activities.

Well Water

During the dry season, groundwater recharge is nil, which limits the dilution of water constituents, while water use is at its peak. During this season, the Lebanese Water Authorities also supplement the deficient water supply to this region, which explains the high mineral content present in both well and municipality waters. Beginning with well waters, a very high mineral content was observed in these samples. The conductivity, the TDS levels and the concentrations of Cl⁻, Na⁺, SO₄²⁻ and Fe²⁺ were above the drinking water standards recommended by the USEPA [22]. The mean conductivity value for the collected samples is 3,669 µS/cm, almost three times the recommended upper limit of 1,250 µS/cm. The chloride (Cl⁻) concentration of 1,622 mg/l is more than six times the USEPA recommended upper level of 250 mg/l. The concentrations of Ca (151 mg/l) and Mg (89 mg/l) are also higher than the standards set by the USEPA. Though the WHO [19] does not indicate health hazards resulting from a considerable excess of ions such as Cl, Mg and Ca, and data relevant to human health effects of high concentrations of these ions are lacking, these ions still affect household infrastructure and contribute to the corrosion of domestic pipes, the leaching of metals and altered water taste [8] [19]. The high conductivity and Mg²⁺ and Cl⁻ concentrations reported in well water samples are primarily due to seawater intrusion and high rates of water extraction, possibly compounded by domestic wastewater infiltration, and cannot be attributed to a naturally occurring water type. This high chloride content does not imply a health hazard, but it affects the taste of the water [1]. The mean pH value for well water samples (pH = 8) was typical of water arising from carbonate bedrock [23]. Examination of additional parameters, such as NO₃⁻, showed a mean concentration of 14.51 mg/l and a maximum of 27.71 mg/l. Both the mean and the maximum values exceed the standards recommended by the USEPA and the WHO (10 mg/l).
Nitrate concentrations higher than 10 mg/l are the cause of methemoglobinemia (blue-baby syndrome) [19]. The presence of NO₃⁻ in water reflects a further facet of the water deterioration profile, resulting from the improper management of domestic sewage. Although regulations require the provision of septic tanks, these are replaced by cesspools because of the improper enforcement of those regulations.

Municipality Water

Compared to the well water, a lower mineral content was recorded for the municipality water, which was also below the recommended values set by both the USEPA and the WHO. The mean conductivity value (1,550 µS/cm) was lower than that of well water but still higher than the lower USEPA recommended level for drinking water (400 µS/cm). The chloride concentrations in municipality water were higher than the maximum limit (250 mg/l) recommended by the USEPA for drinking water. These high Cl⁻ concentrations in municipality water are most probably the outcome of mixing well water with municipality water during the dry season in houses that use both water sources (municipality and well).

Water Vended Gallons

The average concentrations of the major water indicators in Water Vended Gallons (WVG) were within the acceptable standard levels (Table 3).

Bacteriological Water Quality Profile

According to the WHO guidelines [1], all water intended for drinking must be free from fecal coliform bacteria. Assessment of the microbial profile of the collected water samples revealed that the most contaminated domestic water source is well water. Fecal coliforms were reported in 62% of well water samples. This high contamination is due to (a) the infiltration of wastewater into aquifers or wells, resulting from the old, deteriorating sewage network in this region, (b) the use of cesspools and (c) the cross-connection between domestic sewer pipes and domestic water pipes. It should be noted that this water is also used for irrigation of the agricultural lands in the region, and a study of the irrigated vegetables showed that they were contaminated. Fecal coliforms were reported in 39% of municipality water samples. This high contamination level is due to insufficient free residual chlorine to cope with contamination in the distribution network, and could also be due to the mixing of well water in shared household municipality water tanks and/or cross-connection of wastewater pipes with domestic water pipes. Water Vended Gallons showed the lowest contamination level, with fecal coliforms reported in 10% of samples. Even this percentage is considered high, especially since the community in this region depends heavily on this water for cooking, drinking and sometimes washing, knowing that neither well nor municipality water is clean and having experienced several cases of sickness (vomiting and diarrhea) related to drinking well and/or municipality water. The degree of contamination of this water has decreased during the last ten years. This decrease is due to the regulations put in place for water shops and companies selling potable water (a decree issued in 1976 and enforced in 1983), which require a permit from the Ministry of Health and another from the Ministry of Trade, as well as compliance with drinking water standards and minimal labeling requirements [13].

Conclusion

This study has assessed the domestic water quality of one of the most populated regions in the suburbs of Beirut.
Based on the results, it is evident that the situation is deteriorating at a fast pace due to the contamination of domestic well water by seawater and wastewater intrusions. Wastewater intrusions arise from the cross-connection of sewer pipes with domestic pipes. These results emphasize the need to: (a) promote awareness among end users of their water quality, (b) protect the groundwater aquifer, (c) provide safe, adequate water supplies, and (d) implement proper management of domestic wastewater for the suburbs of Beirut. Initiating and sustaining these activities will protect and promote public health, reduce the disease burden and support socioeconomic growth and development.
A smartphone application for enhancing educational skills to support and improve the safety of autistic individuals

This paper presents a smartphone application that provides learning and communication support to children with autism spectrum disorder (ASD), especially in emergency situations. The application provides learning through video modeling for disaster situations, i.e., fire and rain, to instruct ASD children in safety skills. In addition, the application facilitates collaboration between caregivers and ASD children. A single-subject design is used to measure the usefulness of the application, and the analysis is performed for two male and one female ASD children. The results show that the proposed application enhances the satisfaction level of all the participants, with significant improvement in learning skills.

Introduction

Autism spectrum disorder (ASD) is a developmental disorder that produces difficulties in thinking, social contact and verbal/non-verbal communication, as well as challenging behaviors such as hyperactivity and increased anger. It affects how a person perceives and interacts with other people [1,2]. The intensity of autism spectrum disorder (ASD) varies from low to high, depending on the individual's cognitive functioning and the observed level of deficits. Computerized technologies have provided a huge advantage to researchers and clinicians over the last ten years in the form of remedial and educational tools for people with ASD [3]. These days, smartphone apps are being used by individuals with ASD to help in various aspects of their lives (e.g., communication, social interaction, daily living, and vocational independence) [4]. Video modeling has been useful in teaching different skills, including behavioral, social, and functional skills, to people with ASD. It gives learners a chance to watch a model demonstrating target skills before being asked to perform them [5]. People with ASD strive to enhance their independence, and they also endeavor to learn how to respond when unpleasant situations occur. Parents of individuals diagnosed with ASD are concerned about their safety because in certain situations they could be in danger; for instance, they may not be able to judge correctly whether they are lost or in an emergency situation, or they might fail to ask for help in order to be reunited with their caregivers [6]. This is one of the areas where location detection is very helpful for caregivers to keep track of their autistic children [7]. Few studies have examined the actual use of GPS technology for autism spectrum disorder, dementia, or developmental disability [8]. In this study, a smartphone application is introduced for enhancing educational skills to support and improve the safety of ASD children. The proposed application is divided into two segments. The first segment, designed for autistic individuals, is further divided into two parts (i.e., learning and emergency). The first part uses video modeling for teaching safety skills (e.g., fire and rain). The second part provides a one-touch interface so that autistic individuals can contact their caregivers in emergency situations. The second segment is designed purely for caregivers: they can easily track the location of their autistic children, and they can set a safe zone to ensure the children's safety. Once the child moves outside the safe zone, the application notifies the caregivers by sending the child's current location.
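The safe-zone check described above amounts to a simple geofence test. The following is a minimal sketch of that logic, assuming a circular zone and the haversine great-circle distance; the function names, coordinates and radius are illustrative and are not taken from the application's source code.

import math

# Minimal geofence sketch for the safe-zone feature described above.
# Assumes a circular safe zone; all names and values are illustrative.

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def outside_safe_zone(child, center, radius_m):
    """Return True if the child's position lies outside the safe zone."""
    return haversine_m(*child, *center) > radius_m

# Example: a zone of radius 500 m around a hypothetical school location
if outside_safe_zone((33.8600, 35.5400), (33.8547, 35.5289), 500):
    print("Notify caregiver with the child's current location")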
In the remainder of the paper, Section 2 covers related work describing technological support for people with autism. Section 3 describes the proposed methodology. Section 4 presents the experimental details along with the data collection and analysis techniques. Section 5 describes the experimental results. Section 6 presents the discussion, and finally Section 7 presents the conclusion and future directions.

Related work

Increasing independence and interaction with society is an important objective for people with disabilities. However, a lack of support from the community can raise major safety issues. In [9], a mobile application is presented that allows a person with autism to indicate their level of panic during a panic attack. Once the level of the panic attack is selected, the device detects the context automatically and helps the person by calling a caregiver. Another study [10] evaluated the benefits of iPhone 4 use by adults with a mild intellectual disability, enabling them to send their location using video captions whenever they get lost in public. Goel et al. [11] used smart bands as a means of communication between children and their parents so that parents could keep track of their kids. The training of safety skills is often neglected for people with autism spectrum disorder (ASD). However, the importance of safety skills cannot be ignored, as they could prove life-saving when needed, for example in emergency situations such as fire or rainfall [12]. According to the United States Fire Administration [13], young children are at higher risk of being injured than older children due to a lack of cognitive ability; the importance of smoke alarms for alerting children to danger is also described. Various studies have been conducted to teach these skills to children with autism spectrum disorder, whether or not they had yet been needed. One study [14] evaluated the fire safety skills taught to five people with autism spectrum disorder (ASD) using a virtual reality computer program that covers the detection of fire-related hazards and evacuation in such situations. Results show that four participants did very well on a fire drill. Morrongiello et al. [15] designed a computer game for young children with autism to teach them fire safety skills; the children's task is to get an animated character out of a fire hazard situation. It was concluded that the game effectively improved the children's knowledge of fire safety skills. A number of recent studies show the importance of video modeling for teaching safety skills to young people with autism using portable digital devices, including laptop computers, handheld personal digital assistant (PDA) devices, iPods, and portable augmentative and alternative communication (AAC) devices used to display video models [16]-[22]. Taylor et al. [23] taught children with autism to seek help when lost using behavioral training; the target skills were taught with the help of video modeling and physical guidance in school and community settings. Another study evaluated the use of a spherical video-based virtual reality (SVVR) intervention smartphone app to teach adaptive skills to adults with autism spectrum disorder. The evaluation process consisted of content expert reviews and actual testing with adults with ASD.
Results indicated the usefulness of the proposed application, as all the participants found SVVR easy to use [24]. In [25], the impact of embodied digital technology (DT) on four adults with autism spectrum disorder was assessed for improving daily living skills such as doing laundry and washing dishes. A reversal single-subject design (RSSD) was used for the evaluation, and the collected data show that the participants completed the task activities without the educators' help. Ying et al. [26] used a smartphone storytelling application with five children with ASD to enhance their awareness of road safety. Class teachers were asked to assess the behavior of the participating children. Two types of storytelling techniques were used, i.e., a social stories storytelling technique and a digital storytelling technique, both applied to gauge the children's awareness level and later to support them regarding road safety. Engaging children with ASD in road safety awareness is difficult, and the obtained results showed the study to be beneficial for raising awareness, with both techniques proving equally important in this regard.

Proposed methodology

Fig. 1 shows the core components of the application along with the communication procedure between the two user types. The proposed smartphone application is divided into two segments. The first segment is designed for autistic individuals, while the second segment is designed for their caregivers. The first segment is further divided into two parts (i.e., learning support and emergency support). In learning support, video modeling is used to teach safety skills (e.g., fire and rain) to autistic individuals. In emergency support, ASD children send their location to caregivers via a one-touch interface if they face emergency situations related to fire or rain. In the second segment (i.e., additional support), caregivers can easily track the location of the autistic individuals. They can set a safe zone to ensure the safety of their children; once the child moves outside the safe zone, the application notifies the caregivers by sending the child's current location.

Participants and data analysis

Three participants with ASD, two males and one female aged between 14 and 18 years, took part in this study. The Childhood Autism Rating Scale (CARS), a behavior rating scale developed by Schopler et al. [27], was used to diagnose autism in the children; the main purpose of the scale is to differentiate autistic children from those with other developmental disabilities. All the participants were recruited from a nonprofit organization for children with special disabilities, whose main aims are to provide them with free public education and to teach them life skills (e.g., positive social skills and etiquette, verbal/non-verbal communication, self-confidence, overcoming stage fright, work ethics, and so on). All participants had vision within the normal range and, for this reason, were considered good candidates for learning from video modeling. Moreover, no child had previously received video-based instruction. Informal observation indicated that all participants had imitation skills, but no formal assessment was conducted. The study was reviewed and ethically approved by the university ethics committee.
Before the study was conducted, an informed consent form was completed and returned by the caregivers of the autistic individuals. As the participants were unable to read and understand the consent form, approval was obtained from their caregivers. Demographic information for each participant, including name, age, gender, and diagnosis, is provided in Table 1. The participant selection criteria included some knowledge of using a smartphone; satisfactory vision and hearing, as revealed by the school system's hearing and vision tests; IEP goals connected to self-help and vocational skills; the ability to attend to a short video segment; and generalized motor imitation.

Safety skill task and equipment

The focus of the intervention was mainly on instructing the participants to complete safety skill tasks related to both (a) fire and (b) rain (see Table 2). These tasks were considered important for each participant in order to enhance safety skills. The participants' teacher also identified these tasks as vital and indicated that the participants had not previously received any instruction on fire and rain safety. The task analyses for the two safety skills (Table 2) each comprised eight sequential steps, including the following:

Fire safety task:
3. The child exits through the door and walks to the playground area
4. The child opens the smartphone
5. The child touches the application thumbnail
6. The child touches the "Fire Picture" thumbnail
7. The child waits for the confirmation dialog
8. The child remains there until the caregiver approaches

Rain safety task:
1. The child remains calm on the thunder sound
2. The child locates the nearest shelter
3. The child walks to the shelter
4. The child opens the smartphone
5. The child touches the application thumbnail
6. The child touches the "Rain Picture" thumbnail
7. The child waits for the confirmation dialog
8. The child remains there until the caregiver approaches

Video modeling: Rain and fire safety skills were taught using video modeling shown on a mobile phone. Two videos were recorded with the help of an adult model and shot from the performer's perspective [28] to teach ASD children the target behaviors for the safety skills (i.e., fire and rain). Each video consists of eight sequential steps depicting the target behaviors for the fire and rain safety skills. A digital video camera was used for recording, and the videos were uploaded onto a computer for editing. Before each step, the step number was displayed on the screen; after that, a video clip of that particular step was played (e.g., a video of a hand touching the "Fire Picture" thumbnail to send the current location). Each video was almost 2 min long. Participants watched a video and learned how to perform the task by watching the performer go through the sequential steps to achieve the target behaviors linked with the fire and rain safety skills.

Setting: All sessions with the participants took place in a classroom devoted to children with moderate ASD, equipped with a table and chairs. Two mobile phones, a Motorola G4 Plus (running Android 7.0) and a Huawei Y6 (running Android 5.0), were used. All the participants used the first phone, i.e., the Motorola G4 Plus, to perform the tasks. The other device, the Huawei Y6, was used by the caregiver to receive the current state and location of the autistic children. Moreover, the participants were taught to touch the fire or rain thumbnail to alert the caregivers to their current situation (Tables 3 and 4).
Outcome measures and data collection

The main dependent measure was the percentage of correct responses for both tasks, recorded each time the participant heard the fire alarm sound (for the fire safety task) or the thunder sound (for the rain safety task). The first author of this manuscript and a female educator from the school acted as observers during the sessions. The role of the observers was to maintain the checklist and evaluate the target behaviors of all the participants. Every participant was evaluated on each target behavior (i.e., fire and rain) after the sound of the fire alarm/thunderstorm. If the target behavior was correct, it was marked with a check [✓] sign; if the target behavior was wrong, it was marked with an [x] sign. Each target behavior was counted as an opportunity for the child to make a free response. A behavior was regarded as correct if the step was initiated within 10 s and completed within 20 s. Incorrect behavior was defined in different ways: if the student could not finish the step within 20 s, if the student did not initiate a target behavior within 10 s, or if the student completed a step out of order according to the task sequence. To obtain the percentage of correct responses, the total number of correct responses was divided by the total number of steps in the given task analysis. Training sessions took place two or three times a week, and data were collected during these one-to-one sessions. Each session was almost 15 min long, and after each session the participants were given verbal praise for participating.

Data analysis

A single-subject design (SSD) [29], specifically an A-B design, was used in this study, along with a maintenance phase after the intervention. In the design, "A" corresponds to the baseline phase and "B" corresponds to the intervention phase. In the baseline phase, the participants were not taught fire and rain safety skills by video modeling, and it was assumed that all the participants were weak in displaying safety skills. In the intervention phase, safety skills were taught to the participants with ASD with the help of video modeling on a mobile phone. After they had mastered the set of safety skills in the intervention phase, the maintenance phase was conducted over the next two weeks. In the maintenance phase, a child's skills were evaluated by playing the sound of the fire alarm or thunder; no video model was shown to the children. To determine the size of the intervention effect on the children's performance, the PND (percentage of non-overlapping data) approach was used. This approach has been labeled a "meaningful index of treatment effectiveness" [30]. The non-overlap calculation provides the percentage of treatment or intervention phase data points that surpass the maximum value of the pre-treatment or baseline phase [31]. One important task of visual analysis is to detect the amount of difference, or non-overlap, in the data points across successive conditions; visual analysis therefore pairs naturally with non-overlap methods, which deliver important information about treatment effects [32]. A non-overlap score above 90% is considered very effective, a score in the range of 70-90% effective, 50-70% questionable, and below 50% suggests that the treatment was not effective.
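To make the PND calculation described above concrete, the following sketch computes the percentage of intervention-phase points that exceed the baseline maximum and maps the score to the effectiveness bands quoted in the text; the example session data are hypothetical, not the study's measurements.

# Minimal sketch of the PND (percentage of non-overlapping data) calculation
# described above. Example data are hypothetical, not study data.

def pnd(baseline, intervention):
    """Return PND: percentage of intervention points above max(baseline)."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100.0 * above / len(intervention)

def interpret(score):
    """Map a PND score to the effectiveness bands quoted in the text."""
    if score > 90:
        return "very effective"
    if score >= 70:
        return "effective"
    if score >= 50:
        return "questionable"
    return "not effective"

baseline = [25, 25, 25]  # percent of steps correct per baseline session
intervention = [25, 50, 75, 75, 88, 88, 100, 100, 100]
score = pnd(baseline, intervention)
print(f"PND = {score:.0f}% -> {interpret(score)}")  # PND = 89% -> effective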
Moreover, an aggregated analysis was conducted for all phases by finding the average percentage of skill proficiency per subject and overall.

Procedure

Prior to baseline: Two training sessions were held before the baseline phase. Session one included instructions on fire safety skills, and session two included instructions on rain safety skills. Training sessions were conducted twice per week, and each session lasted almost 15 min at most.

Baseline: During the baseline phase, participants had to perform the desired tasks for both safety skills. In the fire safety skill session, the participant was brought to the classroom, sat on a chair, and was told to perform the required fire safety tasks after hearing the sound of a fire alarm. For the rain safety skill session, the participant was brought to the school grounds and asked to perform the desired rain safety tasks after hearing the sound of thunder. Evaluation of each task was carried out within three sessions or until the baseline data stabilized. If a participant performed a step inaccurately or could not respond, the observer intervened, completed the step himself/herself, and then gave the student the opportunity to complete the next step in the list. Throughout these sessions, the observer recorded the number of correctly performed steps. The session was ended if the participant could not initiate the first step within 10 s or failed to finish the previous step within 20 s.

Intervention: During the intervention phase, the participants were provided with the smartphone with the application installed, module two already opened, and the video for the targeted task ready to play. Video modeling was used for training. Participants were directed to carry out a task through an instruction such as "Watch this." The participant then touched the screen and watched the video showing how to do the task. They were then asked to perform the task after the observer said, "Now you do it," and tried to copy the behavior shown in the video clip about fire or rain safety once they heard the sound of the fire alarm or thunder. Participants also received verbal praise (e.g., "Nice job") for performing correctly after every third step. They were given 10 s to initiate the task and 2 min to finish it. If the participant failed to finish the task within 2 min or failed to initiate it within 10 s, the session was dismissed. Unsuccessful tasks were left incomplete because the steps had to be completed in the order specified by the respective task analysis. No other prompts, responses, or instructions were delivered. Throughout the intervention phase, the safety skills of the participants were assessed twice a week, for a total of nine data points.

Maintenance: In the maintenance phase, participants were not shown any video clips of the fire and rain safety skills. They were brought to the classroom and the school grounds for the fire and rain safety skill tasks, respectively. They heard the sound of the fire alarm (fire task) or thunder (rain task) and were then told to perform the behaviors taught during the intervention sessions. A maximum of two sessions was required for the evaluation of each task.
If performance declined below the accepted level, students were allowed to watch the videos again to see whether their performance recovered.

Results

The percentage of steps in the fire safety skill task performed correctly by each participant is presented in Fig. 2. Child A completed 14 sessions, distributed across the three phases. At baseline, he showed very low proficiency (25%) on one of the three data points in the fire safety drill. He improved his skill proficiency during weeks 1-4 of the training phase and completed all the steps in the last session of the intervention phase, attaining 100% proficiency. He performed all the fire safety skill steps with 100% proficiency in the maintenance phase. The PND for Child A in the fire safety skill was 78% in the intervention phase and 100% in the maintenance phase. On the basis of these results, the PNDs in the intervention and maintenance phases suggest that video modeling was effective in improving the child's fire safety skills. At the baseline phase, Child B was assessed as having very low proficiency (25%) in the fire safety skill. He increased his proficiency significantly from session four (25%) to session six (75%) during the intervention stage. Child B achieved 100% proficiency in the fire safety skills at the tenth session. However, this was not maintained in the eleventh session, as the child reverted to 88% proficiency. He succeeded in completing all fire safety skill steps by the twelfth session, displaying very high skill proficiency (100%). In the maintenance phase, Child B showed 88% proficiency in the fire safety skills, as depicted in Fig. 2. The PND for Child B in the fire safety skill was 89% in the intervention phase and 100% in the maintenance phase. On the basis of these results, the PNDs in the intervention and maintenance phases suggest that video modeling was very helpful and effective in improving the fire safety skill of Child B. During baseline, Child C obtained 13% proficiency in the fire safety skill, which gradually increased to 100% in the intervention phase. In the maintenance phase, Child C exhibited 100% proficiency in the fire safety skill, as shown in Fig. 2. This result was obtained without the use of a video model, and all steps were retained during this two-session follow-up. The PND for Child C in the fire safety skill was 100% in both the intervention and maintenance phases, suggesting that video modeling was very helpful and effective in improving the fire safety skill of Child C. The percentage of steps in the rain safety skill task performed correctly by each participant is shown in Fig. 3. When Child A was evaluated in the baseline phase, he had very low rain safety skill proficiency (13%); at that point he only showed the ability to perform the first step of rain safety, "Child remains calm on thunder sound." Child A significantly increased his proficiency during the intervention stage, from the fourth session (13%) to the end of the intervention phase (100%). In the maintenance phase, Child A maintained 100% proficiency in the demonstration of the rain safety skill. The PND for Child A in the rain safety skill was 89% in the intervention phase and 100% in the maintenance phase. These results suggest that video modeling was effective in improving the rain safety skill of Child A.

Fig. 3. Graph describing the percent of steps performed correctly by all three participants during baseline, intervention, and maintenance in the fire safety skill.
Figure 3 illustrates the 14 sessions attended by Child B, which were divided into three phases, i.e., baseline, intervention, and maintenance. In the baseline phase, he had very low proficiency (13%) across the three data points. He gradually improved his proficiency from 13% at the beginning to 100% by the end of the intervention phase. At the end of the tenth and eleventh sessions, Child B attained 75% proficiency, and in the last session of the intervention phase, i.e., the twelfth session, he completed all the steps and showed 100% proficiency. The PND for Child B in the rain safety skill was 100% in both the intervention and maintenance phases. Based on these results, the PNDs in the intervention and maintenance phases suggest that video modeling was effective in improving the rain safety skill of Child B. Child C completed 14 sessions, distributed across the three phases. At baseline, she was found to have very low proficiency (13%). At the beginning of the intervention sessions, she initiated the first three rain safety skill steps. At the end of the seventh session, Child C showed a relatively higher proficiency of 75% in the rain safety skill. However, at session eight, Child C regressed to 63% proficiency. She then improved her proficiency throughout the remaining sessions and reached 100% proficiency by the twelfth session, as shown in Fig. 3. Moreover, she maintained her rain safety skill proficiency during the maintenance phase as well. The resulting PNDs are 100% in the intervention phase and 100% in the maintenance phase. Hence, this shows that learning with the help of video modeling was more effective for improving safety skills than traditional teaching techniques (Fig. 4).

Aggregated scores

In the baseline phase, all three children had very low proficiency in demonstrating fire safety skills, averaging 19%. This proficiency improved in the intervention phase, during which all three children showed relatively better proficiency in demonstrating fire safety skills, averaging 75%. This rate increased in the maintenance phase, in which the children showed higher proficiency than in the other two phases, averaging 96% in fire safety skills. In the baseline phase, all three children had very low proficiency in demonstrating rain safety skills, averaging 13%. This proficiency improved in the intervention phase, during which all three children showed relatively better proficiency, averaging 64%. This rate increased in the maintenance phase, in which the children showed higher proficiency than in the other two phases, averaging 96% in rain safety skills.

Discussion

Safety is a major problem for young people with autism; thus, it is a concern for caregivers, because these individuals are at a higher risk of being hurt. Safety skills education is important for children with ASD as it promotes changes in behavior. There are different types of safety skills, and this paper has focused on two of them: (1) fire safety and (2) rain safety skills. The current study adds to the literature on fire and rain safety skills for children with ASD by developing a smartphone application.
It assists ASD children in unpleasant situations and instructs them on how to deal with such situations. To measure the effectiveness of the application in assisting children with ASD, the percentage of non-overlapping data (PND) technique was used. The PND score of all the participants was more than 70%, which shows that the proposed application is quite useful for providing assistance related to fire and rain safety skills. Additionally, video modeling is an effective approach for teaching children with ASD. The mean proficiencies in the baseline, intervention, and maintenance phases were 19%, 75%, and 96% for the fire safety skills and 13%, 64%, and 96% for the rain safety skills, which clearly shows that the proposed application effectively improves the learning skills of autistic children. Furthermore, the satisfaction level of the autistic individuals was measured with the help of questionnaires filled in by their teachers and caregivers. The questionnaire results indicate that they were satisfied with the use of the proposed application. Moreover, teachers and caregivers reported that the children were not annoyed and did not show any hyperactivity while using the application.

Conclusion and future guidelines

Developmental disabilities such as autism spectrum disorder (ASD) pose numerous challenges to autistic individuals in certain areas of life, especially communication, social interaction, imagination, learning, self-help, and independent living. At the same time, safety is a major issue for these individuals, and lacking safety skills could be harmful. Therefore, individuals with ASD need to be aware of potential dangers in the environment and become familiar with the proper safety skills to stay safe. With the help of video modeling, different types of skills can be taught to individuals with autism spectrum disorder; it is an effective way of teaching from which numerous learners can benefit at the same time. This study provides assistance regarding safety skills (e.g., fire and rain) to individuals with ASD.

Fig. 4. Graph depicting the percentage of steps performed correctly by all three participants during baseline, intervention, and maintenance in the rain safety skill.

It also reports that the learning skills of the autistic individuals were gradually enhanced with the help of video modeling. Moreover, the results show that the autistic individuals felt satisfied and remained active while using the proposed smartphone application. There are several lines of future work related to the current study. In the future, the generalization phase will be considered, and evaluation will be made using several different sounds (alarms or thunder) that deliver different auditory stimuli to trigger the fire or rain safety behaviors. It would also be valuable to generalize these skills to different environments (e.g., home and work settings). Future research should consider increasing the number of participants, which would reinforce validity. Lastly, extending the range of age groups could make the intervention procedures stronger.
Secukinumab Provides Sustained Improvements in the Signs and Symptoms of Psoriatic Arthritis: Final 5-year Results from the Phase 3 FUTURE 1 Study

Objective: To report the 5-year efficacy and safety of secukinumab in the treatment of patients with psoriatic arthritis (PsA) in the FUTURE 1 study (NCT01392326).

Methods: Following the 2-year core trial, eligible patients receiving subcutaneous secukinumab entered a 3-year extension phase. Results are presented for key efficacy endpoints for the secukinumab 150-mg group (n = 236), including patients who escalated from 150 to 300 mg (approved doses) starting at week 156. Safety is reported for all patients (n = 587) who received 1 dose or more of study treatment.

Results: Overall, 81.8% (193 of 236) of patients in the secukinumab 150-mg group completed 5 years of treatment, of whom 36.4% (86 of 236) had dose escalation from 150 to 300 mg. Sustained improvements were achieved with secukinumab across all key efficacy endpoints through 5 years. Overall, 71.0%/51.8%/36.3% of patients achieved American College of Rheumatology (ACR) 20/50/70 responses at 5 years. Efficacy improved in patients requiring dose escalation from 150 to 300 mg and was comparable with that in patients who did not require dose escalation. Exposure-adjusted incidence rates for selected adverse events per 100 patient-years for any secukinumab dose were serious infections (1.8), Crohn's disease (0.2), Candida infection (0.9), and major adverse cardiac events (0.5).

Conclusion: Secukinumab provided sustained improvements in the signs and symptoms in the major clinical domains of PsA. Efficacy improved for patients requiring dose escalation from 150 to 300 mg during the study. Secukinumab was well tolerated, with no new safety signals.

INTRODUCTION

Psoriatic arthritis (PsA) is a chronic, inflammatory disease characterized by peripheral arthritis, axial disease, dactylitis, enthesitis, and skin and nail psoriasis (1,2). PsA can negatively affect patients' daily functioning and quality of life as a result of permanent joint damage and disability (3). The reported prevalence of PsA in the general population is up to 1%, and it affects around 30% of patients with psoriasis (2,4,5). Biologic therapies, such as anti-tumor necrosis factor (TNF) and anti-interleukin (IL)-17A antibodies, are recommended for the treatment of PsA in patients who experience an inadequate response to first-line treatment with nonsteroidal anti-inflammatory drugs (NSAIDs) and/or disease-modifying antirheumatic drugs (DMARDs) (6-8). The proinflammatory cytokine IL-17A mediates multiple biological functions that result in joint and entheseal inflammation and structural damage, which are characteristic of PsA (9,10). Recommendations from the European League Against Rheumatism (EULAR) (8) and the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) (11) recognize targeting IL-17A as a therapeutic strategy to manage all the main clinical manifestations of PsA. Secukinumab, a human monoclonal antibody that directly inhibits IL-17A, provided rapid and significant improvements in all key clinical manifestations of PsA in the FUTURE 1 study (NCT01392326), with improvements sustained through 3 years (12-14). These clinical benefits have been observed in patients naïve to biologic therapy and in those with an intolerance or inadequate response to agents targeting TNF (3,8,9,11,13-16).
Data from FUTURE 1 have also shown that secukinumab significantly inhibits joint structural damage through 24 weeks (14), with benefits sustained out to 3 years (3); it is worth noting that radiographic data were only collected up to 3 years in this study (3,17). Here, we present the final 5-year efficacy and safety results, including efficacy results in patients who had a dose escalation from 150 to 300 mg during the study.

METHODS

Study population. The study design of FUTURE 1 has been described in detail elsewhere (3). In brief, the population of the core study consisted of adults diagnosed with PsA, as classified by the Classification Criteria for Psoriatic Arthritis (CASPAR) (18), with moderate to severe symptoms for 6 months or more, who were required to have 3 or more of 78 tender joints and 3 or more of 76 swollen joints at baseline. Patients were classed as anti-TNF inadequate responders (anti-TNF-IR) if they had been taking an anti-TNF agent at an adequate dose for 3 months or longer and had experienced an inadequate response, or had stopped treatment because of tolerability issues. The clinical judgement of the investigator was used to assess the suitability of a patient to enter the extension study, based upon overall improvement and response to therapy during the core study. Only participants who had completed the 2-year core study and had signed the new informed consent form were included in the extension phase. Key exclusion criteria for the extension included patients who were deemed not to be benefiting from the study treatment based upon lack of improvement or worsening of their symptoms, patients receiving therapy with biologic immunomodulation agents other than secukinumab, and those with inflammatory disorders other than PsA, including active ongoing inflammatory bowel disease (IBD), which might confound the evaluation of therapy.

Study oversight and design. This study was approved by the institutional review boards or ethics committees of each study center and was conducted in accordance with the Declaration of Helsinki. All patients provided written informed consent prior to participation and again at the start of the extension study. This extension study employed a parallel-group, double-blind design for the first year (up to and excluding week 156 [the 2-year core study plus the first year of the extension study]), followed by an open-label design for the next two years (week 156 onward). Blinding during the first year of the extension study was required to ensure that all patients had completed the core study and the data were locked before unblinding occurred. Investigators used their clinical judgment to decide if it was beneficial for patients to enter the extension study based upon overall improvement and response to therapy during the 2-year period of the core study. At 2 years in the core study, eligible patients completed assessments and continued on the same dose as that received during the core study (secukinumab at a dosage of 150 mg or 75 mg every 4 weeks). Study treatment dose adjustments were not permitted until week 156. During the open-label study period starting at week 156, the secukinumab dose could be escalated from 75 to 150 or 300 mg, or from 150 to 300 mg, for patients whose signs and symptoms were not fully controlled with the current dose, as judged by the investigator.
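For readers who find a formal statement clearer, the dose-escalation rule described above can be encoded as a simple permitted-transition check. The sketch below is purely illustrative: the names and the boolean flag standing in for the investigator's judgment are assumptions, not trial source code.

# Illustrative encoding of the protocol's dose-escalation rule: from week
# 156, 75 mg could escalate to 150 or 300 mg, and 150 mg to 300 mg, when
# signs and symptoms were not fully controlled (investigator judgment).

ALLOWED_ESCALATIONS = {75: (150, 300), 150: (300,), 300: ()}

def may_escalate(week: int, current_mg: int, target_mg: int,
                 inadequately_controlled: bool) -> bool:
    """Return True if the protocol permits escalating to target_mg."""
    return (week >= 156
            and inadequately_controlled
            and target_mg in ALLOWED_ESCALATIONS.get(current_mg, ()))

print(may_escalate(160, 150, 300, True))   # True
print(may_escalate(100, 150, 300, True))   # False: before week 156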
Here, we report long-term results for the secukinumab 150-mg group, including patients who switched from placebo to secukinumab 150 mg at week 16 or 24 following the end of the placebo-controlled period. We also report efficacy results for patients who escalated from secukinumab 150 to 300 mg (approved doses). For all patients who discontinued or withdrew from the study, the investigator was to ensure that the patient completed an end-of-treatment visit (corresponding to the last visit for the patient's current period of treatment) 4 weeks after the last study treatment, and also returned after an additional 8 weeks for a final follow-up visit (12 weeks after the last study treatment).

Endpoints and assessments. The primary endpoint of this extension study was to evaluate the long-term efficacy of secukinumab with respect to the proportion of patients achieving 20% or greater, 50% or greater, or 70% or greater improvement in the American College of Rheumatology (ACR) criteria for improvement (ACR20, ACR50, and ACR70, respectively) over time, up to 5 years, in patients with active PsA who completed the core study. ACR20/50/70 responses are reported for those patients who were originally randomized to secukinumab 150 mg during the core study, to show the full 5-year efficacy, and separately for those who entered the extension study in the secukinumab 150-mg group (including patients both originally randomized to secukinumab 150 mg and those switched from placebo to secukinumab 150 mg at week 16 or 24). Other assessments that continued for up to 5 years included resolution of dactylitis and enthesitis, which were measured using the Leeds Dactylitis and Leeds Enthesitis Indices, respectively; change from baseline in Health Assessment Questionnaire-Disability Index (HAQ-DI); Short Form 36-item (SF-36) health survey physical component summary (PCS); improvement (75% or greater and 90% or greater) in Psoriasis Area and Severity Index responses (PASI 75/90); change from baseline in Disease Activity Score-28 (DAS28; utilizing high-sensitivity C-reactive protein); the proportion of patients achieving low disease activity (LDA) and disease remission, based on DAS28 scores of 3.2 or less and 2.6 or less, respectively; and the proportion of patients achieving minimal disease activity (MDA; defined as having 5 out of 7 of the following: tender joint count ≤ 1, swollen joint count ≤ 1, PASI ≤ 1 or Investigator's Global Assessment [IGA] score ≤ 1, patient pain visual analog scale [VAS] ≤ 15, patient global VAS ≤ 20, HAQ-DI ≤ 0.5, tender entheseal points ≤ 1). Long-term safety and tolerability of secukinumab were assessed by monitoring vital signs, clinical laboratory variables, and treatment-emergent adverse events (AEs) over time, up to 84 days after the last administration of treatment.

Statistical analysis. Efficacy data are presented using all observed data at the given time point of analysis, based on the Full Analysis Set (FAS), which included all subjects with at least one efficacy assessment during the extension. For patients who discontinued during a specific period, the end-of-treatment visit (ie, final assessment 4 weeks after the last study treatment) was considered the last week of the corresponding period. PASI assessments were conducted for subjects in whom at least 3% of the body surface area was affected by psoriatic skin involvement at baseline. Dactylitis/enthesitis assessments are presented among patients with dactylitis/enthesitis at baseline.
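The MDA definition above is a simple 5-of-7 cutoff rule, which a short sketch can make concrete. The snippet below is illustrative only and is not part of the study's analysis code; the function and argument names are invented, while the seven thresholds are those quoted in the definition.

```python
# Hypothetical illustration of the MDA rule quoted above (not study code):
# a patient is in minimal disease activity when at least 5 of 7 criteria hold.

def is_mda(tender_joints, swollen_joints, pasi_or_iga, pain_vas,
           global_vas, haq_di, entheseal_points):
    """Return True if 5 or more of the 7 MDA criteria are satisfied."""
    criteria = [
        tender_joints <= 1,     # tender joint count <= 1
        swollen_joints <= 1,    # swollen joint count <= 1
        pasi_or_iga <= 1,       # PASI <= 1 or IGA score <= 1
        pain_vas <= 15,         # patient pain VAS <= 15
        global_vas <= 20,       # patient global VAS <= 20
        haq_di <= 0.5,          # HAQ-DI <= 0.5
        entheseal_points <= 1,  # tender entheseal points <= 1
    ]
    return sum(criteria) >= 5

# Exactly 5 criteria met, so the patient counts as being in MDA
print(is_mda(1, 0, 2, 10, 25, 0.25, 0))  # True
```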
Analysis is presented in a descriptive manner through week 260. Graphical representations (line plots) of ACR20, ACR50, and ACR70 responses over time are provided for patients initially randomized to the 150-mg dose. Sankey-style bar charts for the ACR and PASI responses for all patients (including patients who were initially randomized to placebo followed by secukinumab 150 mg) who escalated from 150 to 300 mg and had efficacy assessments both up to 32 and up to 56 weeks after escalation were also drawn from the observed data; the Sankey-style overlay presents ACR and PASI responses using mutually exclusive definitions.

[Table 1 footnote: BMI, body mass index; DAS28-CRP, Disease Activity Score-28-C-reactive protein; HAQ-DI, Health Assessment Questionnaire-Disability Index; PsA, psoriatic arthritis; TNF, tumor necrosis factor. Baseline characteristics were recorded at baseline of the core study; results are mean (standard deviation) unless otherwise stated.]

The safety set included all patients who took at least one dose of study treatment during the treatment period in the core or extension study. Evaluation of AEs is based on the secukinumab dose taken prior to the AE and presented as exposure-adjusted incidence rates (EAIRs) per 100 patient-years over the entire treatment period, for each treatment dose and overall. If a patient experienced an AE after dose escalation, the patient was counted at the escalated dose.

RESULTS

Baseline characteristics and subject disposition. Baseline demographics and disease characteristics for patients in the secukinumab 150-mg FAS group (including patients who switched from placebo to secukinumab 150 mg) are shown in Table 1. The patient retention rates in this 5-year study were high, with over 80% of patients who entered the extension study completing the full treatment period (Figure 1). Of the 236 FAS patients in the secukinumab 150-mg group, 193 (81.8%) completed 5 years of treatment, with 86 of 236 (36.4%) patients having escalated to 300 mg; of these 193 patients, 149 (77.2%) were TNF-naïve.

Disease activity assessed by ACR responses. The ACR20/50/70 responses reported during the core study for the group of patients who were originally randomized to secukinumab 150 mg were sustained through 5 years (Figure 2), with responses of 67.9%/52.7%/37.4% at 5 years. ACR responses were similar in the secukinumab 150-mg group that included patients who switched from placebo at week 16 or 24 (Table 2). ACR response rates achieved in the core study were sustained through 5 years in both anti-TNF-naïve and anti-TNF-IR groups, with generally higher responses observed in anti-TNF-naïve patients across the entire treatment period (Figure 2). ACR responses were generally similar regardless of whether patients were receiving concomitant methotrexate (Supplementary Table 1). DAS28-CRP responses achieved (in patients originally randomized to secukinumab 150 mg) at 2 years during the core study improved yearly through 5 years (Table 2). At 5 years, 60.8% and 75.3% of patients in the secukinumab 150-mg group achieved DAS28-CRP disease remission and LDA states, respectively.

Resolution of enthesitis and dactylitis. Resolution of dactylitis and enthesitis achieved with secukinumab 150 mg up to 2 years was sustained through 5 years, with 93.9% and 78.8% of patients achieving complete resolution of dactylitis and enthesitis, respectively, at 5 years (Table 2).
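The exposure-adjusted incidence rates used for the safety evaluation described above reduce to a single normalisation: the number of patients with at least one event, divided by the total patient-years of exposure and scaled to 100 patient-years. A minimal sketch, with invented numbers:

```python
# Sketch of an exposure-adjusted incidence rate (EAIR) per 100 patient-years.
# The event count and exposure below are invented for illustration.

def eair_per_100py(patients_with_event: int, patient_years: float) -> float:
    """EAIR = 100 * patients with at least one event / patient-years."""
    return 100.0 * patients_with_event / patient_years

# e.g. 4 patients with a given AE over 2000 patient-years of exposure
print(eair_per_100py(4, 2000.0))  # 0.2
```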
Efficacy across other endpoints. Efficacy responses reported during the core study with secukinumab 150 mg were also sustained throughout the extension study for the following parameters: PASI90 and PASI75, HAQ-DI response, SF-36 PCS, and MDA (Table 2). At 5 years, high PASI75 (80.6%) and PASI90 (67.0%) responses were observed, and the MDA response was 39.5% (Table 2).

Safety. Safety is reported as EAIR/100 patient-years for all patients (N = 587) who were administered at least one dose of study treatment during the core or the extension study. The type, incidence, and severity of AEs over the 260-week treatment period were consistent with those reported at year 1 and year 2. The EAIRs of AEs and serious AEs observed with secukinumab are shown in Table 3. The majority of AEs with the highest EAIRs were infections, usually involving the upper respiratory tract (EAIR = 8.0 for any dose of secukinumab), and the most common AE was nasopharyngitis (EAIR = 8.6 for any dose of secukinumab). Six deaths were reported during this 5-year study. Two of these deaths occurred during the core study: one patient died of stroke/cerebrovascular accident (a patient receiving secukinumab 75 mg) and one died of myocardial infarction (a patient receiving secukinumab 150 mg). The other four deaths occurred during the extension phase: one patient receiving secukinumab 75 mg died from squamous cell carcinoma of the pharynx, and the other three patients, in the secukinumab 150-mg group, died from acute myocardial infarction, cardiac failure, and septic shock, respectively.

The EAIR for Crohn's disease was 0.2 for the entire 5-year period. Confirmed cases of Crohn's disease were reported during the core study for one patient in the secukinumab 75-mg group (a 23-year-old male with no history of IBD; the event was graded as severe and the patient discontinued; the investigator did not suspect a relationship to study treatment) and one patient in the placebo group (exacerbation). One additional confirmed case of Crohn's disease was reported for one patient in the secukinumab 75-mg group during the extension study (a 68-year-old male with no history of IBD; the event was graded as moderate in severity and the patient remained in the study; the investigator did not suspect a relationship to study treatment). The EAIR for Candida infections was 0.9 for patients treated with any dose of secukinumab during the entire treatment period. The infections were located in the skin and mucous membranes, none were systemic or invasive, and all except one were nonserious, mild or moderate in severity, responded to standard therapy, and did not lead to study treatment discontinuation. The EAIR for malignancies was 0.8 for patients treated with any dose of secukinumab during the entire treatment period.

Overall, the incidence of treatment-emergent antidrug antibodies (ADAs) was low (detected in eight patients) throughout the 5-year period; three patients were positive at baseline and continued to show ADAs, and two patients showed neutralizing antibodies at week 24. A clear relationship between ADA formation and deviating pharmacokinetics or immunogenicity-related AEs was not observed.

DISCUSSION Secukinumab demonstrated sustained improvements in the signs and symptoms, function, and health-related quality of life in patients with active PsA. The patient retention rates in this study were high, with over 80% of patients who entered the extension study completing the full 5-year treatment period.
These are the first 5-year findings reported for secukinumab in PsA, and the data add to the growing body of evidence supporting the use of IL-17 inhibitors for the treatment of PsA, as recognized in the guidelines from EULAR (8), GRAPPA (11), and recently the ACR/National Psoriasis Foundation (NPF) (19). This robust study also provides data on the benefits of dose escalation in patients whose symptoms are inadequately controlled on 150 mg. Improvement in ACR and PASI responses was achieved in patients who escalated from secukinumab 150 to 300 mg, indicating a benefit of dose escalation in patients who require additional control of symptoms.

In agreement with previous studies (3,8,9,11,(13)(14)(15)(16), secukinumab treatment in FUTURE 1 was shown to be efficacious in both anti-TNF-naïve and anti-TNF-IR patients, with clinical responses generally higher in anti-TNF-naïve patients than in anti-TNF-IR patients. These results support the effectiveness of long-term treatment with secukinumab for biologic-naïve patients as well as for patients who have previously failed anti-TNF therapy.

The safety profile of secukinumab was consistent with that previously reported for PsA (3,9,(12)(13)(14)(15)(16)(17) and psoriasis (6). No new or unexpected safety signals were observed in patients with PsA over a treatment period of up to 5 years. The rates of Candida infections, major adverse cardiac events, and malignancies reported were low throughout the 5-year treatment period and consistent with the results of an analysis of the long-term safety of secukinumab in patients with psoriasis, PsA, and ankylosing spondylitis (20). The low rate of Crohn's disease reported in the study is in line with the findings of a recent retrospective analysis of 21 clinical trials in psoriasis, PsA, and ankylosing spondylitis, which found that IBD events were uncommon with secukinumab treatment (21). It should be noted that patients with active ongoing inflammatory disease were excluded from this study, so patients with a history of IBD were likely not enrolled.

Limitations of this study included the lack of a long-term comparator, because long-term treatment with placebo is considered unethical; the placebo-controlled period of the core trial was, therefore, only up to week 16. No active comparator was included, and results could be biased because the patients remaining in the study are those benefiting from secukinumab. Although efficacy responses in the current study were sustained irrespective of previous anti-TNF exposure, patients eligible for inclusion in this study could have been treated with no more than three anti-TNF agents; this may be viewed as a limitation of the study. The inclusion of patients in this extension study based on the investigator's judgement of their overall response to therapy likely resulted in a population of preselected responders.

Results from this long-term extension study confirm the benefit of IL-17 inhibition with secukinumab for sustained improvements in the signs and symptoms of the major clinical domains of PsA. With no new safety concerns over a treatment period of up to 5 years, results from the 5-year phase III FUTURE 1 study support the long-term efficacy, safety, and tolerability of secukinumab in the treatment of patients with PsA.

ACKNOWLEDGMENTS Medical writing support was provided by Martin Wallace, PhD, of Novartis Ireland, Ltd., in accordance with Good Publication Practice (GPP3) guidelines (http://www.ismpp.org/gpp3).
The funding for this writing support was provided by Novartis. Additionally, Novartis is committed to sharing with qualified external researchers access to patient-level data and supporting clinical documents from eligible studies. These requests are reviewed and approved by an independent review panel on the basis of scientific merit. All data provided are anonymized to respect the privacy of patients who have participated in the trial, in line with applicable laws and regulations. This trial data availability is according to the criteria and process described on www.clinicalstudydatarequest.com.
Research on application of nozzles for ampoule bottle production machine based on ANSYS fluent

In order to determine the combustion temperature suitable for ampoule bottle production, the nozzle and combustion field were numerically simulated and analysed using ANSYS Fluent software, varying the inlet diameter and the inclination angle of the nozzle, respectively. The influence of the inlet diameter and inclination angle on the central temperature of the combustion flame was determined, providing a reference for the conditions of ampoule bottle forming. The results show that the inlet diameter has a great influence on the central temperature of the combustion flame: a smaller inlet diameter yields a more favourable processing temperature range, whereas changing the inclination angle has no significant effect on the central temperature of the combustion flame. When the inlet diameter is 0.6 mm and the inclination angle is 20°, the highest processing temperature range is obtained. This study can provide a reference for improving the production efficiency of ampoule bottles.

Introduction Ampoule bottles are currently widely used in the medical field as containers for high-purity chemicals that contain injection preparations and must be insulated from the air. They are mostly made of glass [1]. Ampoule bottles are produced by secondary processing of finished glass tubes. The key steps are heating, firing, coloring, neck marking, annealing, and cooling. Firing is the key step in forming the bottle body: if an effective flame temperature is not obtained during the sintering process, the bottle body is likely to deform or fail to meet its size and specification requirements. In the manufacture of ampoule bottles from glass tubes, the flame sprayed by the nozzle is needed to heat and soften the glass tubes. At present, with the rapid development of computational fluid dynamics (CFD) technology [2], it is possible to use CFD to simulate the dynamic working process of fluid machinery. In actual production, the internal structure of the nozzle has a great influence on the combustion temperature. Therefore, studying the structure of nozzles and their key influencing factors is a major technical problem that technicians in this field currently need to solve. Figure 1 is a schematic diagram of the nozzle structure. The left end of the nozzle is the intake port, for which different inlet diameters and inclination angles were set; a uniform inlet pressure is applied in the numerical simulation.

Grid partition In this study, the symmetrical structure of the nozzle is exploited, and the model is simplified to a plane axisymmetric one. After simplification the geometry is simple, and a structured quadrilateral mesh can meet the simulation requirements [3]. A region 80 mm wide and 300 mm long, distributed symmetrically about the nozzle outlet, was taken as the outer combustion flow field, as shown in Figure 2; quadrilateral meshes were generated with Gambit software for the nozzle and the external combustion flow field. To study the flame behaviour, the inlet diameter and inclination angle were varied, giving the nine model configurations analysed below.

Boundary condition In the numerical simulation, to ensure that the gas does not flow back, the two inlets are set to the same pressure, and the inlet boundary condition is set as pressure-inlet.
According to the actual working conditions, an inlet pressure of 3 kPa was taken as the research condition, and the outlet was set as pressure-outlet. In order to simulate the real working conditions, the outlet pressure was set to 0 Pa relative to atmospheric pressure, and the temperature to 300 K [4].

Gas fuel pre-treatment The fuel used in the calculation must be pre-treated before the numerical simulation of combustion [5]. The fuel used in this paper is natural gas; its main component is methane (chemical formula CH4), and other trace components are not considered [6]. The oxidant is the O2 in air. Only the N2 and O2 in air are taken into account, accounting for 79% and 21% respectively; other trace components are excluded. The chemical reaction formula for the combustion of methane is:

CH4 + 2O2 → CO2 + 2H2O

Mathematical model Fluid flow must obey the physical conservation laws: the law of conservation of mass, the law of conservation of momentum, and the law of conservation of energy. All flow phenomena in nature can be described by two equations: the continuity equation (that is, the mass conservation equation) and the Navier-Stokes equations (that is, the momentum conservation equations) [7]. This paper is studied under a temperature condition of 300 K. In the standard k-ε model, the transport equation for the turbulent kinetic energy is derived from the exact equation, whereas the dissipation-rate equation is obtained by physical reasoning and mathematical modelling of the analogous exact equation [9]. The combustion flow is a fully turbulent flow in which the molecular viscosity can be ignored, so the standard k-ε model was chosen as the turbulence model [10]; this model is widely used in practical engineering problems and scientific research. The standard k-ε model is a semi-empirical model based mainly on the turbulent kinetic energy k and the turbulence energy dissipation rate ε: the k equation is exact, while the ε equation is derived from empirical formulae. To ensure closure of the model, assumptions are made according to their physical significance [11], modelling the generation term, the gradient diffusion term, and the dissipative term of the transport equations.

Glass characteristic analysis Glass is an amorphous material with no fixed melting point that softens at high temperatures. Medical glass is generally made of neutral borosilicate glass, which has better performance; the mass fraction of SiO2 is about 75%, and that of Fe2O3 is 0.05%. The flame temperature for secondary reprocessing of glass is 1350 °C [12]. Since ampoule bottles are produced by reprocessing glass tubes, that is, by a second reprocessing of the glass, 1350 °C is taken as the processing temperature of the ampoule bottles in this paper, and the temperature distribution of the combustion field is analysed accordingly.

Numerical simulation results of nozzle and combustion field This study used ANSYS Fluent software to perform numerical simulations on the 9 experimental models. The results are shown in Figure 3. It can be observed that as the flame moves away from the exit, the fuel concentration in the combustion zone gradually becomes thinner, the combustion temperature begins to decrease, the flame propagation speed decreases, the flame thickness increases, and the flame area gradually expands.
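As a sanity check on the fuel and oxidant composition described above, the stoichiometric air requirement of methane follows directly from the reaction equation and the assumed 21 % O2 / 79 % N2 split. The sketch below is illustrative only and is not part of the ANSYS Fluent setup:

```python
# Stoichiometric air demand of methane, from CH4 + 2 O2 -> CO2 + 2 H2O,
# with air taken as 21 % O2 and 79 % N2 by volume as assumed in the text.

M_CH4, M_O2, M_N2 = 16.04, 32.00, 28.01     # molar masses in g/mol

o2_per_ch4 = 2.0                            # mol O2 per mol CH4
air_per_ch4 = o2_per_ch4 / 0.21             # mol air per mol CH4 (~9.5)

# Mass of stoichiometric air per unit mass of fuel
air_mass = o2_per_ch4 * M_O2 + o2_per_ch4 * (0.79 / 0.21) * M_N2
afr_mass = air_mass / M_CH4                 # ~17.1 kg air per kg CH4

print(f"molar air/fuel ratio: {air_per_ch4:.2f}")
print(f"mass air/fuel ratio:  {afr_mass:.2f}")
```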
The combustion temperature in all combustion fields can reach a maximum of 2180 K, while the production and processing temperature of the ampoule bottle must reach 1350 °C, that is, 1623 K.

Numerical comparison and analysis of combustion field In order to show the effect of the different adjusted parameters on the combustion temperature field accurately and intuitively, the simulated data of the central temperature of the combustion field under the condition of the same inclination angle α and different nozzle inlet diameters d were extracted to determine the influence of the nozzle inlet diameter on the combustion temperature, as shown in Figure 4. Comparing the effects of different inlet diameters, it can be found that the smaller the diameter, the easier it is to obtain a higher combustion temperature and the larger the maximum combustion-temperature range; moreover, the processing temperature of 1623 K can be obtained at a position closer to the nozzle, and the suitability of a given diameter is inversely related to its size. The simulation data of the central temperature of the combustion field under the conditions of the same nozzle inlet diameter d and different inclination angles α were then extracted to determine the influence of the inclination angle on the combustion temperature, as shown in Figure 5. Comparing the effects of different inclination angles, it can be found that the larger the inclination angle, the higher the combustion temperature that can be obtained; however, the effect is not pronounced.

Conclusion (1) The CFD numerical simulation method was used to simulate the combustion temperature at different inlet inclination angles. It was found that the change of the inlet inclination angle had a certain influence on the combustion temperature: the higher the inclination angle, the higher the combustion temperature, but the effect was not obvious. This reveals how the combustion temperature varies with the inlet inclination angle. (2) The CFD numerical simulation method was used to simulate the combustion temperature for different nozzle inlet diameters. It was found that the change of the inlet diameter had a great influence on the combustion temperature. With other conditions being equal, a more suitable combustion temperature can be obtained with a diameter of 0.6 mm. The maximum processing temperature range can be obtained at position (a). At the same time, the best ampoule production position was determined to be 155.182 mm from the outlet of the gas mixing valve. (3) The research on the nozzle parameters of the ampoule production machine can provide a reference for engineers and technicians seeking to improve the qualification rate of ampoules through nozzle parameter selection.
Technical Note: On the effect of water-soluble compounds removal on EC quantification by TOT analysis in urban aerosol samples

In this work, three different thermal protocols were tested on untreated and water-washed aerosol samples to study the influence of soluble organic and inorganic compounds on EC measurements. Moreover, analyses on the water-soluble extracts were carried out. The aim was to find out the most suitable protocol to analyse samples collected in a heavily polluted area. Indeed, the tests were performed on real samples collected at an urban background station in the Po Valley, which is one of the main pollution hot-spots in Europe. The main differences among the tested protocols were the maximum temperature of the He step (i.e. 870 °C, 650 °C, and 580 °C) and the duration of the plateaus during the heating procedure. Our measurements evidenced the presence of a significant amount of weakly light-absorbing carbonaceous aerosol evolving during the highest temperature step in He (i.e. 870 °C), which makes lower-temperature protocols unsuitable for EC determination in samples collected in heavily polluted areas like Milan.

Introduction At the state of the art, the identification of organic (OC) and elemental (EC) carbon in aerosol samples using thermal protocols is ambiguous; therefore, they are operationally defined. Indeed, part of the EC thermally evolves in an oxidising atmosphere, and part of the OC can char, especially in an oxygen-poor atmosphere, giving origin to a refractory component (pyrolytic carbon, PyC) similar to EC (Watson et al., 2005; and therein cited literature). Among the different approaches used for OC/EC quantitative evaluation, thermal-optical analyses are the most widespread (Chow et al., 1993; Birch and Cary, 1996). Different heating ramps, both in the He and the He/O2 phase, are reported in the literature for thermal-optical analyses. It is noteworthy that temperature and duration of the plateaus have already been identified as the main factors influencing OC/EC separation (Chow et al., 2001, 2004; Schmid et al., 2001; Conny et al., 2003; Schauer et al., 2003; Subramanian et al., 2006; Zhi et al., 2009); a twofold difference in the EC quantification by different thermal-optical approaches is quite usual (e.g. Schmid et al., 2001; Watson et al., 2005; and therein literature).

PyC is considered one of the main interfering components in the EC quantification. Previous works (Chow et al., 2001; Subramanian et al., 2006) evidenced that EC and/or PyC pre-combustion can occur in the He phase at high temperature (i.e. about 850 °C), especially when inorganic catalytic compounds are present in the sample (Novakov and Corrigan, 1995; Chow et al., 2001; Yu et al., 2002; Wang et al., 2010). Moreover, the same authors singled out the presence of heavy, tar-like organic compounds which do not evolve until the highest temperature in the He atmosphere is reached.

The water-soluble organic compounds (WSOC) removal has been identified as an effective procedure for a better EC quantification, as WSOC have an important role in PyC formation (Novakov and Corrigan, 1995; Yu et al., 2002);
nevertheless, as far as we know, a systematic analysis of the results obtained by thermal-optical analyses on real samples after WSOC removal has never been carried out. Moreover, some inorganic catalytic compounds, as well as some polymeric, partially aromatic, coloured organic products of combustion which evolve only at high temperature, are water-soluble; thus, they can be removed by washing the filter before analysis in order to reduce possible catalytic effects or interferences in the EC determination (Andreae and Gelencsér, 2006).

The influence of the sample composition on the thermal behaviour of carbonaceous species makes it difficult to find a universal thermal method for OC/EC separation suitable for aerosol samples collected in different environments. In this work, tests were carried out with the aim of identifying the most suitable protocol for OC/EC measurements on samples collected in an urban area of the Po Valley, which is one of the major pollution hot-spots in Europe. Three protocols, mainly differing in the highest temperature in the inert atmosphere (i.e. 870 °C, 650 °C, and 580 °C), were tested. The novelty of this work is that the tests were performed both on untreated and on water-washed samples. In addition, the WSOC extracted from our samples were also analysed to study their thermal behaviour and to gain further information on the different carbonaceous aerosol components.

Samplings The sampling campaigns were carried out at an urban background station in the Milan university campus, at about 10 m above ground level. PM10 was sampled on two parallel quartz fibre filters (2500 QAO-UP, Pall Corporation, 47 mm diameter) pre-fired at 700 °C for 1 h (Vecchi et al., 2009) using low-volume samplers (flow rate: 2.3 m3 h-1). 26 parallel samplings were carried out from 17 January to 9 February 2010. The sampling strategy was to perform 9 h samplings (from 09:00 to 18:00 and from 21:00 to 06:00 LT) in order to limit the filter loading and to operate the carbon analyses in optimal conditions (see Sect. 3.2).

During the sampling period the weather was cloudy or foggy (snow was registered on 5 February 2010 and no other precipitation occurred in the period). Temperature ranged from -1 °C to 10 °C. PM10 mass was 74 µg m-3 on average (range 35-111 µg m-3), and total carbon (TC) accounted for about 25 % of the PM10 mass on average.

Thermal-optical transmittance analysis In this work, a thermal-optical transmittance (TOT) analyser by Sunset Laboratory Inc. was used to quantify EC, OC, and TC in aerosol samples. The carbonate carbon component was not considered in this work, as previous studies reported that carbonate is negligible in PM10 at most European areas; exceptions are coastal sites in south Europe (ten Brink et al., 2004; Sillanpää et al., 2005; Perrone et al., 2011) or peculiar situations (Querol et al., 2009; Cuccia et al., 2011).

Briefly, in the first part of the TOT analysis the sample is heated in an inert atmosphere (He) using different thermal ramps depending on the protocol in use. Then, the second part of the analysis is carried out in an oxidising atmosphere (He/O2 mixture, 90/10 %) (Birch and Cary, 1996). The carbon evolving during heating is completely oxidised to CO2 by a MnO2 catalyst and then reduced to CH4 to be quantified by a flame ionisation detector (FID).
To account for PyC formation, the transmission of a laser beam through the sample is constantly monitored during the analysis. Transmittance usually decreases throughout the He step, indicating the formation of light-absorbing PyC. In the He/O2 phase, an increase of the laser signal is registered, and the PyC evolution is conventionally assumed to be complete when the transmittance reaches its initial value. Carbon evolving after this point (called the split-point) is then considered as EC.

In this work, three thermal protocols mainly differing in the highest temperature in the He atmosphere were tested (see Table 1 for details). The protocol called He-870 is very similar to NIOSH2 (Maenhaut and Claeys, 2007) and ACE-Asia base (Schauer et al., 2003). He-580 is a low-temperature, time-variable protocol which is a proxy of the Desert Research Institute IMPROVE A protocol (Chow et al., 2007) implemented on a thermal-optical transmittance instrument. The third protocol is EUSAAR 2 (Cavalli et al., 2010), which has recently been proposed as a standard for carbon analysis on samples collected at European regional background sites. It is noteworthy that a few samples analysed in our work by EUSAAR 2 required the last step in the oxidising atmosphere to be prolonged in order to obtain a complete carbon evolution, as previously reported by other authors (Kuhlbusch et al., 2009; Gilardoni et al., 2011).

The protocols differ in temperature and duration of the heating steps. It is important to point out that the plateau temperatures and step time-lengths in He-870 and EUSAAR 2 are fixed. As for He-580, the plateau temperatures are fixed but the step time-lengths are variable; indeed, this protocol allows the complete evolution of the carbon at each step (i.e. each plateau lasts until the FID signal approaches zero).

Of the two quartz filters sampled in parallel, one was analysed as-is (in the following named the untreated sample) and the other was water-washed to remove water-soluble compounds, as explained in Sect. 2.3. We analysed 26 untreated and 26 parallel washed samples using the three protocols. For each sample, only a 1 cm2 punch was analysed with each protocol.

Washing procedure To set up the filter washing procedure, PM10 samples were collected at the location previously described (Sect. 2.1) using a high-volume sampler (flow rate: 30 m3 h-1) on 150 mm diameter quartz fibre filters (QAT-UP, Pall Corporation), which allowed multiple tests on the same material.

Water-soluble compounds were removed by washing portions of the high-volume sampled filters with MilliQ (Millipore) water. Each portion of the filter was placed in a filtration assembly, similar to the one reported by Yttri et al. (2009), suitably designed for this application. The washed area was 37 mm in diameter (i.e. smaller than the total deposit area) to avoid the possible loss of sampled particles from the filter edge.
Tests were carried out to determine the amount of water needed for a complete removal of water-soluble compounds: different punches of the same sampled filter were washed using increasing water quantities. Residual TC concentrations measured by TOT were used to check the washing efficiency with different water quantities. It was noticed that the residual TC on the filter decreased as the water used for the washing increased, until a minimum TC quantity (TCw) was reached. The water quantity (V(H2O)) necessary to reach TCw was different for each filter and depended on the initial TC load. A linear relationship (R2 = 0.99) was found between the TC load of the sample (in the range 20-100 µg cm-2) and V(H2O); this relation was used to estimate the water quantity to be used for the washing of the low-volume samples collected for testing the different protocols. It is noteworthy that the minimum water quantity used corresponded to the amount necessary for washing a 20 µg cm-2 TC-loaded sample. After washing, filters were placed in open but dust-protected sieve-trays and air-dried at room temperature for 24 h. The uniformity of the washed filter was tested by measuring the TC concentration on three different punches (area: 1 cm2) taken from the same filter and washed with V(H2O). Differences among the three values were lower than 10 %.

TC measurements TC was measured on untreated and water-washed samples using the three protocols. In Fig. 1 the comparison between TC measured by the protocols is shown. As expected, a good agreement was found among the different protocols; indeed, TC quantification is not dependent on the thermal treatment, as previously found in different inter-comparison exercises (Schmid et al., 2001; Watson et al., 2005; and therein cited literature).

The TC results variability on each sample was evaluated as the ratio between the standard deviation of the three TC measurements and the average concentration value. In the untreated samples, the variability was 4 % on average (range of TC concentration: 17.6-57.0 µg cm-2). This estimate is consistent with the ±5 % measurement precision on TC values and the 0.2 µg cm-2 minimum uncertainty reported for this instrument (Subramanian et al., 2006). Nine field blanks were also analysed. The TC content in untreated field blanks was in the range 0.4-3.5 µg cm-2.

The average TC variability in the washed samples was 9 %. Four field blanks were also washed, and their TC content was in the range 0.7-3.2 µg cm-2. These values are comparable to those measured on untreated field blanks; therefore, our washing procedure did not introduce any systematic filter contamination. Moreover, the comparison between the TC variability on untreated and washed samples indicates that our washing procedure did not significantly affect the filter uniformity (i.e. by no more than 5 %).
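Since the EC results discussed next depend entirely on where the OC/EC split point falls, a minimal sketch of the split-point rule from the TOT analysis (Sect. 2.2) may be useful. This is not the Sunset analyser's software, only an illustration of the convention that carbon evolving after the transmittance regains its initial value is counted as EC; the data layout and names are assumptions.

```python
# Minimal sketch of the OC/EC split-point convention: in the He/O2 phase,
# carbon is attributed to PyC until the laser transmittance recovers its
# initial value; carbon evolving afterwards is counted as EC.

def split_point_index(transmittance, he_o2_start):
    """Index at which transmittance first regains its initial value,
    searching only within the He/O2 phase."""
    initial = transmittance[0]
    for i in range(he_o2_start, len(transmittance)):
        if transmittance[i] >= initial:
            return i
    return len(transmittance) - 1  # signal never recovered

# Toy laser trace: darkening from charring in He, recovery in He/O2
trace = [1.00, 0.80, 0.55, 0.40, 0.45, 0.70, 0.95, 1.01, 1.05]
print(split_point_index(trace, he_o2_start=4))  # 7
```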
EC measurements EC results obtained by the three protocols were compared for both the untreated and the washed samples. There was one outlier in the dataset, with an EC concentration exceeding 15 µg cm-2. Previous works evidenced that the transmittance variation through heavily loaded filters (i.e. samples with EC > 15 µg cm-2) cannot be correctly monitored because the initial laser signal is too low (Subramanian et al., 2006; Wallén et al., 2010); thus the outlier was excluded from the database. It is noteworthy that the EC concentration was higher than 5.5 µg cm-2 in about half of the 9 h samples collected at our station. Therefore, if 24 h instead of 9 h samplings had been carried out, the EC concentration would have been higher than 15 µg cm-2 in one half of the samples, preventing the detection of transmittance variations during the He step. This observation is of particular interest because it evidences that traditional 24 h samplings in heavily polluted areas can limit the possibility of obtaining reliable EC concentrations by the TOT method.

EC results obtained by the three protocols on both untreated and washed samples showed a good correlation (R2 > 0.87). Agreement in EC determination by the low-temperature protocols was found (the He-580 vs. EUSAAR 2 slope was 1.06 and 1.12 for untreated and washed samples, respectively). On the contrary, a disagreement up to a factor 1.6 for untreated samples was found when He-870 was compared to the lower-temperature protocols (Fig. 2a, b). It is noteworthy that the disagreement between EUSAAR 2 and He-870 reduces from 1.49 to 1.24 (-17 %) after filter washing. As for He-580, the reduction was lower (from 1.59 to 1.42, -11 %, after filter washing), probably because the He-580 protocol allowed the complete carbon evolution at each temperature step, thus reducing pyrolysis even in untreated samples and limiting the advantages of washing the filters.

As expected, these results show that the removal of water-soluble compounds from the filter is effective in reducing the differences among the EC values measured by different protocols. However, the removal of soluble compounds is not enough to obtain full agreement among protocols, as EC quantification depends also on other parameters, as will be shown in the following sections.

Another finding was that EC concentrations were generally (in 83 %, 67 % and 79 % of the cases for He-870, EUSAAR 2 and He-580, respectively) higher in the washed than in the untreated samples; the increases were up to 54 % (with He-870), 24 % (with EUSAAR 2), and 43 % (with He-580) of the EC measured on untreated filters. This result suggests that measurements on untreated filters can lead to EC underestimation. One explanation might be that the untreated samples contain soluble compounds that catalyse the premature combustion of EC, which in turn yields lower EC values (see also Sect. 3.3).

The role of the carbon fraction evolving in He at high temperature (see Sect. 3.3) and PyC formation (see Sect. 3.5) were explored to better understand the differences among protocols and between washed and untreated samples, in order to choose the most suitable thermal treatment for our samples.

Matching the different protocols: the nature of carbon evolving at high temperature in the inert atmosphere As mentioned before, EC results were protocol-dependent in both untreated and washed samples. One of the main differences among the tested protocols is the highest temperature in the He step. Subramanian et al.
(2006) identified the carbon evolving at high temperature (about 850 °C) in the He atmosphere as responsible for the disagreement observed in EC results.

In our tests, a good agreement was found when comparing the sum of the EC and the carbon fraction evolving in the He4 step of the He-870 protocol (in the following called C(He4-870)) to the EC values obtained by EUSAAR 2 and He-580, for both untreated and washed samples (Fig. 3). This result suggests that, to obtain the most reliable EC estimate, it is mandatory to understand the chemical nature of the C(He4-870) fraction (i.e. whether it is OC or EC) in the samples collected in the area of investigation. It is useful to point out that the concentration of this fraction in our samples is comparable to the EC content; thus its wrong attribution can strongly affect EC determination.

Opposite to what is reported in the literature (Chow et al., 2004; Subramanian et al., 2006), no significant increase in the laser signal was registered during the He4-870 step (see Fig. 4), especially in the case of washed filters. The laser attenuation by the sample was calculated at the starting point of the analysis, at the maximum attenuation point, and at the end of the He step as ATN = -100 ln(I_j/I_0), where I_j (with j = 1...3) is the laser transmission measured at the three points of interest and I_0 is the laser signal at the end of the analysis (see Fig. 4a). The increase in the laser signal registered during the He4-870 step corresponded on average to a 4 % (range 2-7 %) variation of the initial attenuation on the washed samples and a 6 % (range 3-12 %) variation on the untreated ones. The slight difference between washed and untreated samples is probably due to soluble compounds (e.g. sulphates or other salts) which can alter the thermal behaviour of the light-absorbing carbonaceous species and are removed by the washing procedure (Novakov and Corrigan, 1995; Yu et al., 2002; Hitzenberger and Rosati, 2010; Wang et al., 2010; and therein cited literature).

C(He4-870) is not expected to be chemically homogeneous. However, the apparent average attenuation coefficient related to C(He4-870) (defined as the ratio between the ATN variation and the carbon evolved during the He4-870 step) was calculated to gain information on the optical properties of this fraction. The average attenuation coefficient was about 2.6 m2 g-1 and about 3 m2 g-1 for washed and untreated samples, respectively. These values are much lower than those reported in the literature for EC (about 20 m2 g-1) and PyC (about 40-50 m2 g-1) on filters (Chow et al., 2004; Subramanian et al., 2006; Boparai et al., 2008), indicating that most of the C(He4-870) in our samples is not strongly light-absorbing. Therefore, the C(He4-870) in this kind of samples is likely composed mainly of organic compounds showing a weak light attenuation, and only a slight evolution of the light-absorbing material is registered (i.e. about 10 % of the C(He4-870) is estimated to be light-absorbing). Moreover, a rough estimate of the EC evolved in the He4-870 step was carried out assuming an EC attenuation coefficient of 20 m2 g-1. Comparing this value to the measured EC concentration, and considering that PyC can also contribute to the light-absorbing carbon evolution during He4-870, an upper limit for EC pre-combustion was calculated (12 % and 5 % of the measured EC for untreated and washed samples, respectively).
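The attenuation bookkeeping above reduces to a few lines of arithmetic. The sketch below illustrates the ATN definition and the apparent attenuation coefficient; the laser signals and the carbon loading are invented, chosen only so the result falls near the 2.6-3 m2 g-1 range reported:

```python
import math

def atn(i_j, i_0):
    """ATN = -100 ln(I_j / I_0), relative to the end-of-analysis signal."""
    return -100.0 * math.log(i_j / i_0)

def apparent_attenuation_coefficient(atn_drop, carbon_ug_cm2):
    """Delta-ATN divided by the carbon evolved in the step. With carbon in
    ug cm-2 (= 0.01 g m-2), the factor 100 in ATN cancels the unit change,
    so the ratio comes out directly in m2 g-1."""
    return atn_drop / carbon_ug_cm2

atn_before = atn(0.30, 1.0)                # before the He4 step
atn_after = atn(0.33, 1.0)                 # slight recovery after the step
coeff = apparent_attenuation_coefficient(atn_before - atn_after, 3.5)
print(round(coeff, 2))                     # ~2.7 m2 g-1
```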
With the aim of understanding the higher EC values measured by the lower-temperature protocols compared to He-870, the EUSAAR 2 temperatures at the split point were analysed. As the split point in EUSAAR 2 occurred in all cases at temperatures in the 560-710 °C range, it is possible that the organic compounds evolving in He4-870 burnt together with the EC fraction when EUSAAR 2 was applied. Therefore, EC can be overestimated when our samples are analysed using the EUSAAR 2 protocol. However, it should be taken into account that this hypothesis needs further investigation, because the thermal evolution occurs in different atmospheres (He and He/O2) and the organic compounds might behave differently. Similar results were found using He-580, where the split points were always detected during the oxygen step at 580 °C. It is noteworthy that the C(He4-870) value was lower (-28 % on average) in washed than in untreated samples. The presence of soluble compounds in the C(He4-870) is another indication of the existence of organic compounds evolving at high temperatures in He; these organics can increase the measured PyC and EC concentrations when using low-temperature protocols.

The reduction of this organic fraction, which contributes to the protocols disagreement (Andreae and Gelencsér, 2006), can further explain the slight improvement in protocols comparability reported in Sect. 3.2.

WSOC analysis For a better comprehension of the differences between the results obtained on untreated and washed samples, filter extracts were also analysed to determine the thermal behaviour of WSOC. To analyse water-soluble compounds, the water extracts obtained from all samples were mixed to obtain one single sample, vacuum-dried, and rinsed with methanol (Piazzalunga et al., 2010a). A drop of this solution was then deposited on 1 cm2 punches of a pre-fired quartz fibre filter. The punches were placed in open but dust-protected sieve-trays, air-dried (2 h at room temperature), and analysed using the three protocols. In Fig. 5 thermograms of WSOC obtained by the He-870, EUSAAR 2, and He-580 protocols are shown.

As already found in previous works (Andreae and Gelencsér, 2006; Wallén et al., 2010), the He1 and He4 carbon fractions gave the highest signal when WSOC were analysed by He-870. It is noteworthy that a substantially higher carbon quantity (about +77 %) evolved in the He/O2 phase with the low-temperature protocols than with the He-870 protocol, indicating that the He4-870 step was important to allow the WSOC evolution in the He phase.

The WSOC laser signal increased in the He4-870 step. The apparent attenuation coefficient for the C(He4-870) was about 5 m2 g-1, indicating again that most of this carbon fraction was weakly light-absorbing. Moreover, the little light-absorbing carbon evolving in He4-870 was PyC in this case, as no EC was expected in WSOC owing to the extraction procedure. This is a further confirmation that the EC pre-combustion estimates given in Sect. 3.3 have to be considered as upper limits.

Results on water extracts obtained by the three protocols (see Table 2) showed that no EC was measured during the analysis by the He-870 protocol, while about 1 µgC cm-2 was observed using the low-temperature protocols.
This observation further suggests that in the Milan urban atmosphere organic compounds exist which evolve in the He step at a temperature in the 650-870 °C range (about 20 % of WSOC) and are weakly light-absorbing. In this work, these compounds have been demonstrated to interfere with EC determination when low-temperature protocols are used for the analysis, leading to a possible EC overestimation.

Part of these compounds can be ascribed to HULIS (HUmic-Like Substances), which are weakly absorbing materials with a biogenic origin or generated during biomass burning (Andreae and Crutzen, 1997; Andreae and Gelencsér, 2006; Iinuma et al., 2007; Schmidl et al., 2008). HULIS are mainly oxidised at temperatures higher than 600 °C. It should be noted that wood burning is a non-negligible emission source in the Milan urban area during wintertime. The contribution of its primary component was estimated in the range 6-17 % of the PM10 mass in Milan during winter periods (Bernardoni et al., 2011; Piazzalunga et al., 2011). Following Varga et al. (2001), preliminary HULIS measurements had been carried out by our group (Fermo et al., 2009); the results showed that HULIS can account for 30-50 % of the OC in the Milan urban area during wintertime.

In the same samples analysed by the three protocols, levoglucosan, a marker for biomass burning (Simoneit and Elias, 2001), was also measured using the methodology reported in detail in Piazzalunga et al. (2010b). The presence of levoglucosan (concentration range: 0.6-4.0 µg cm-2 and levoglucosan carbon/TC = 3 % on average) in our samples indicated a non-negligible contribution of wood burning during the investigated period, which can explain the presence of an important refractory, not strongly light-absorbing fraction.

It is noteworthy that the significant contribution of biomass burning products makes low-temperature protocols unsuitable for a correct assessment of the EC content in our samples, because low-temperature TOT protocols can strongly overestimate it.

Pyrolytic carbon formation It is well known that PyC formation is another important source of trouble for OC/EC quantification. Indeed, literature studies (Yang and Yu, 2002) showed that neither of the following statements, both necessary for a correct EC quantification, is respected in thermal-optical analysis: (a) pyrolytic carbon evolves before EC during the analysis; (b) pyrolytic carbon and OC have the same light absorption coefficient. Therefore, if PyC formation is minimised, the analysis results are more reliable.

Figure 6 shows the comparison between the thermal evolution of one untreated and one washed sample with the EUSAAR 2 and He-870 protocols. Strong reductions in the carbon evolving from the washed filter can be noticed both in the He and in the He/O2 phase. A reduction of the carbon signal in the He/O2 phase was found also for the He-580 protocol, but no thermogram superimposition is possible due to the variable time lengths in the analysis performed by this protocol.
The carbon evolving in the He/O2 step was compared among the tested protocols to gain information on PyC formation. It was assumed that the carbon evolving in oxygen with the EUSAAR 2 and He-580 protocols is comparable to that evolving with He-870, but only with the addition of the C(He4-870) contribution (in the following all these quantities will be called He/O2 carbon), according to the considerations reported in Sect. 3.3. The He/O2 carbon represents the sum of the native EC, the refractory fraction of OC, and the PyC formed during the thermal treatment. The He/O2 carbon concentrations are represented in Fig. 7, separately for untreated and washed samples. Since the native EC on the filter does not depend on the chosen protocol, differences in He/O2 measurements among protocols can be ascribed only to differences in PyC formation in the first part of the analysis. Few differences were registered between the He-870 and EUSAAR 2 He/O2 carbon (on average 7 % in the untreated samples and 12 % in the washed samples), whereas the He-580 data were much lower (on average about -32 % in both cases). Thus, the He-870 and EUSAAR 2 protocols seemed to produce comparable PyC quantities in samples collected in a polluted urban area; on the contrary, the PyC formed by He-580 was much lower. This finding is opposite to what was reported by Cavalli et al. (2010) for the EUSAAR 2 protocol, but they referred to samples collected at regional background stations in Europe, likely characterised by a very different chemical composition.

The observed differences in the He/O2 carbon could be due to the duration of the He steps in the different protocols. In the He-580 protocol the steps are variable: the duration of each temperature plateau is such as to allow the complete evolution of carbon at a specific temperature step (i.e. until the corresponding FID peak is fully evolved). Previous works (Yu et al., 2002) showed the importance of allowing a complete carbon evolution at each step to limit PyC formation. As for our samples, Fig. 8 shows that when the He step duration in EUSAAR 2 was comparable to the He-580 one, differences in the He/O2 carbon between these protocols approached zero. On the contrary, the shorter the step time in EUSAAR 2 compared to the He-580 one, the higher the He/O2 carbon (i.e. PyC) formed using EUSAAR 2.

Focusing on the differences between untreated and washed samples, a reduction of the He/O2 carbon was registered after washing (-40 % for He-870 and EUSAAR 2 and -30 % for He-580). Therefore, the missing fraction in the washed samples was either soluble or produced by the thermal treatment of soluble compounds. Thus, the washing procedure applied to our samples reduced possible interferences in the measurements (e.g. presence of refractory organic carbon and of inorganic compounds which modify the thermal behaviour, as well as PyC formation), improving the protocols agreement on EC measurements as shown in Sect. 3.2. The lower reduction in the He-580 protocol can be justified considering that this protocol is less prone to pyrolysis than the others even in our untreated urban samples, therefore limiting the possible reduction of the pyrolysing carbon component after soluble compounds removal.

Conclusions This work aimed at the evaluation of the best approach to analyse EC and OC by TOT in real aerosol samples collected in a polluted urban area. Tests were carried out on both washed and untreated samples collected during a winter period in Milan (Po Valley, Italy). Results obtained by TOT analysis using three different thermal protocols were discussed.
As expected, the EC values measured by the three tested protocols were different. The main difference was ascribed to the carbon fraction evolving during the step at 870 °C in the He atmosphere (C(He4-870)). It was demonstrated that C(He4-870) evolved in the He/O2 atmosphere with the two lower-temperature protocols. The evolution of this fraction before or after the split-point can differently affect the EC measured by the lower-temperature protocols. It was proved that C(He4-870) was mainly not light-absorbing in our samples; thus it had to be considered mainly OC. It is noteworthy that the C(He4-870) quantity in our samples was comparable to the EC one, whereas the upper limit to EC pre-combustion using He-870 was estimated to be 12 % and 6 % of the EC measured on untreated and washed samples, respectively. Therefore, the He-870 protocol prevented EC overestimation when analysing samples collected in polluted urban environments like Milan. WSOC were also analysed, as they are significant contributors to PyC. The washing procedure used to remove WSOC from our samples resulted in a more reliable EC estimation, as PyC formation was limited. Moreover, as WSOC contribute to part of the C(He4-870), their removal also allowed a slight improvement in the protocols comparability.

In summary, our tests on real samples, characterised by a chemical composition typical of an area affected by a complex mixture of pollution sources, suggest that protocols reaching high temperatures in the He atmosphere are preferable to low-temperature ones to get rid of the typical interferences which affect EC results. Moreover, our results indicate that the best approach to analyse carbon in urban aerosol samples should consider steps long enough for complete carbon evolution, in order to reduce pyrolysis formation.

Fig. 1. Comparison between the TC results obtained by EUSAAR 2 and He-870 (a), and by He-580 and He-870 (b), on untreated and washed filters.

Fig. 4. Thermograms of untreated and washed samples for low (a and c) and high (b and d) loaded samples. In Fig. 4a, the beginning of the analysis (j = 1), the point of minimum laser signal (j = 2), and the switching point (j = 3, dashed vertical line) between the He and He/O2 atmospheres are also reported.

Fig. 8. Differences in the He/O2 carbon evolved applying the EUSAAR 2 and He-580 protocols (ΔHe/O2 carbon) as a function of the He step duration in the He-580 protocol.

Table 1. Thermal protocols tested in this work. (a) Proxy of the NIOSH2 protocol. (b) Proxy of the IMPROVE A protocol.

Table 2. WSOC results obtained by the different protocols, in µg cm-2.
Predictors of response in the treatment of moderate depression

Objective: To identify neurocognitive and sociodemographic variables that could be associated with clinical response to three modalities of treatment for depression, as well as variables that predicted superior response to one treatment over the others. Method: The present study derives from a research project in which depressed patients (n=272) received one of three treatments – long-term psychodynamic psychotherapy (n=90), fluoxetine therapy (n=91), or a combination thereof (n=91) – over a 24-month period. Results: Sociodemographic variables were not found to be predictive. Six predictive neurocognitive variables were identified: three prognostic variables related to working memory and abstract reasoning; one prescriptive variable related to working memory; and two variables found to be moderators. Conclusions: The results of this study indicate subgroups of patients who might benefit from specific therapeutic strategies and subgroups that seem to respond well to long-term psychodynamic psychotherapy and combined therapy. The moderators found suggest that abstract reasoning and processing speed may influence the magnitude and/or direction of clinical improvement.

Introduction

Psychodynamic psychotherapy, pharmacotherapy with fluoxetine, and a combination of psychotherapy and pharmacotherapy are all effective treatment methods for depression. 1,2 Despite the efficacy of these methods in reducing depressive symptoms, the processes that mediate improvement in a treatment outcome are not clear. Predictive variables are treatment outcome mediators. A predictive variable is a pretreatment variable that can influence treatment outcome or the natural history of a disease, and may therefore become a prognostic, prescriptive, and/or moderator variable. 3 A prognostic variable predicts outcome independently of the treatment being used, while a prescriptive variable anticipates different patterns of outcome between two or more types of treatment. 4 A moderator variable, by definition, has the ability to influence the magnitude and/or direction of the relationship between the dependent and independent variables being studied. 3

When two or more treatment modalities do not have significantly different impacts on the healing process and treatment outcome, an investigation may be necessary to identify the predictive variables that are capable of mediating the healing process, by identifying patients who improve more substantially than others do. These predictive variables can be prescriptive and/or prognostic. The identification of predictive variables promotes clinical gain for patients and mental health professionals alike, because it enables individual patients to receive the most efficacious treatment for their condition. In addition, predictive variables are likely to speed treatment decisions and improve cost-benefit aspects. Prognostic variables also help identify patients who are usually treatment-resistant, regardless of the treatment modality; this helps determine which patients need alternative therapeutic interventions. One example of a predictive variable is therapeutic alliance (i.e., the relationship between a healthcare professional and a patient), as described in one study on psychodynamic psychotherapy 5 which found that changes in therapeutic alliance in early therapy predicted symptom change at the end of treatment. Similarly, other studies have shown that therapeutic alliance predicts improvements in symptoms. 6
Most studies focused on identifying response predictors have not compared different types of treatments for depression. An extensive literature review on response predictors is beyond the scope of this study, but we found a few studies that have produced notable results on predictors within one type of treatment for depression. 7-9 These studies reported that patient age did not interfere with clinical outcome in adults with depression treated with antidepressants. They also found that depression severity in inpatients was negatively associated with patient discharge, while physical health status and level of education were positively associated with discharge. An important limitation of single-treatment studies seeking prediction variables is that they have little prescriptive potential. In assessing only one type of treatment, these studies offer information on the prognosis for the treatment studied but fail to discuss appropriate treatments for patients with different needs and characteristics. 10

Predictive analysis studies comparing two or more types of treatments for depression have identified important predictive variables. One study compared pharmacotherapy, cognitive psychotherapy, and placebo, and found that patients with more severe depression tended to have better responses to medication. 11 Thus, the prescriptive variable was severity of depression at the onset of treatment. In another study that used the Beck Depression Inventory (BDI), the authors found predictive variables among patient statements in the measure. 12 A close look at the findings revealed that the BDI items pessimism and lack of energy helped identify patients who needed additional or alternative therapeutic methods. Some studies have tried to associate patient sociodemographic variables with mood/anxiety disorders by comparing short-term therapy (STT) and long-term psychodynamic psychotherapy (LTPP). 13 Results suggested that married patients with high levels of education responded better to STT, while patients coming from single-parent homes and divorced patients responded better to LTPP or did not respond to either treatment. In a prospective trial of 180 patients randomized to receive cognitive psychotherapy, antidepressant medication, or placebo, the authors identified prognostic variables (chronic depression, older age, and lower intelligence) and prescriptive variables (marriage, unemployment, and having experienced a greater number of recent life events). These findings showed that alternative treatments were beneficial to some patients, while cognitive psychotherapy was beneficial to others. 10

Until recently, only minor attention had been given to neurocognitive performance as a moderator of treatment outcome in studies on depression, 14 despite studies on the association between cognitive variables and depressive symptoms. 15,16 Some authors have described the relevance of neurocognitive variables for predicting symptomatic improvement. 17 In one study, however, neurocognitive function at baseline was not predictive of symptom improvement in depression. 18 Although some speculate that cognition may mediate change in psychotherapy, 3 a definite statement that neurocognitive processes serve as mediators of therapeutic change would be premature, because researchers are only beginning to study the relationship between neurocognition and treatment outcome. For this reason, using neurocognitive variables as possible predictors is relevant for intervention research.
At present, the literature offers little evidence on which variables might serve as prescriptive, prognostic, and/or moderator variables in different treatments for depression. Furthermore, intrinsic problems in statistical analysis tend to complicate results and cast doubt on some existing conclusions. A contributing factor to these complications is that the statistical power associated with interaction effects between a predictive variable and a certain treatment group tends to be low. These complications have been noted in the standard statistical approaches for identifying predictive variables, 3,10 such as covariance analyses, multiple regressions, and logistic regression models. 19 In the present study, multilevel techniques were used to identify pretreatment variables that might be associated with the clinical response of outpatients with moderate depression. The use of multilevel model techniques (or mixed models) for analysis enables statisticians to make fewer statistical assumptions, producing more precise estimates and parameters, along with the ability to use all of the data collected at a given moment in time. 20 Potential predictor variables were viewed as belonging to two different predictor domains: sociodemographic and neurocognitive. Because of the high number of possible predictor variables in these domains, using multilevel model techniques for statistical analysis maximizes the odds of identifying response markers (i.e., prognostic, prescriptive, and/or moderator variables) without increasing the possibility that results would be due to chance. After identifying predictive variables in this study, we reexamined each one to verify that all variables remained statistically significant as predictors of treatment outcome when all predictors were tested simultaneously in the same final statistical model. This model followed recommendations presented in previous studies. 10

Design

The present study derives from a randomized clinical trial comparing LTPP, fluoxetine treatment (FLU), and a combination of the two (COM) in adult patients with moderate depression. Patients were assessed five times: at baseline and at 6, 12, 18, and 24 months of treatment. Independent researchers blinded to treatment allocation carried out the assessments. The full design and results of this project are described elsewhere. 21,22

Participants

Participants were adult outpatients treated in a psychiatric clinic (women, 62%; mean age, 30 years). The inclusion criteria were presence of major depressive disorder, according to the DSM-IV-TR criteria; a moderate level of depressive symptoms (as measured by the BDI); and provision of written informed consent. The exclusion criteria were DSM-IV-TR Axis I and II comorbidities (as assessed by the structured clinical interview for DSM disorders, SCID-I and II), suicide risk, use of other medications that influence mental functioning, severe somatic diseases, and contraindications for fluoxetine use. Patients who agreed to participate (n=272) were randomized into three treatment groups (LTPP, n=90; FLU, n=91; COM, n=91). There were no significant differences between groups regarding clinical and sociodemographic features at baseline.

Treatments

The model of LTPP used in this study was similar to one previously described and widely used. 23 LTPP sessions were held once a week for 24 months. FLU was administered at 20 mg/day for the first 2 weeks.
Intake was evaluated and adjusted at bimonthly psychiatric appointments until an appropriate dosage was reached (up to 60 mg/day). Subsequently, patients attended monthly appointments, where they received the medication and treatment compliance was verified. Combined therapy included both LTPP and FLU, received concomitantly by a given patient. No statistically significant differences were found between conditions for variables related to the psychotherapists (women, n=16; men, n=8; mean duration of clinical experience, 11 years; mean age, 35 years) and the psychiatrists (women, n=3; men, n=3; mean duration of clinical experience, 6 years; mean age, 31 years).

The Wechsler Intelligence Scale for Adults, Third Edition (WAIS-III) was the main instrument used to assess neurocognitive function and intelligence. 26 This scale, which has been adapted and validated for Brazilian populations, 27 is used throughout Brazil and Latin America, but its successor (WAIS-IV) has not been validated in Brazil. The WAIS-III consists of 14 subtests that assess specific neurocognitive functions: Vocabulary, Similarities (SIM), Arithmetic, Digit span, Information, Comprehension, Letter-number sequencing (LNS), Picture completion, Digit-symbol coding (DSC), Block design, Matrix reasoning (MR), Picture arrangement, Symbol search, and Object assembly. We conducted a psychometric reliability study of the WAIS-III to ensure that results would be reliable in depressed patients. The results showed that all subtests of the WAIS-III had good levels of reliability. 28 The association between cognitive problems and depression has been reported in previous studies, 29 and confirmed by neuroimaging studies. 30 Some researchers believe that, in addition to monitoring neurocognitive functioning, the WAIS-III may be extremely useful in predicting clinical response to treatment. 15

Potential predictors

Potential predictors of response to treatment were measured prior to randomization. Available variables were assigned to two different domains. The first domain included sociodemographic data; the variables measured were sex, age, marital status, and level of education. The second domain included neurocognitive variables, assessed by the WAIS-III. Previous researchers have viewed cognitive variables as potential predictors, as evidence suggests that cognitive changes occur prior to clinical improvement in depressed patients. 31 The possibility exists that neurocognitive deficits precede the onset of depression symptoms, 15 and the pretreatment status of cognitive processes (e.g., abstract reasoning) is considered a possible mediator of therapeutic change in psychotherapy. 3 Therefore, each WAIS-III subtest was considered a variable. These are listed in Table 1.

Statistical analysis

Mixed model analyses, commonly used for longitudinal data, are appropriate for evaluating the relationship between the dependent variable and time. Regression curves are adjusted for each subject, and regression coefficients are allowed to vary randomly among subjects. Because this variation occurs in both intercepts and slopes, we adjusted a random coefficient model (which arithmetically describes the relationship between observations and time). The growth curves of the groups consider the parameters of the individual growth curves, as well as the average growth curve. 32 A diagonal covariance structure was used to model the correlation between intercepts and slopes. All available data were analyzed under the intention-to-treat assumption.
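As a rough illustration of this kind of random-coefficient growth model, the sketch below fits BDI scores over the five assessment waves with a subject-level random intercept and an independent random slope for time (approximating the diagonal covariance structure), estimated by maximum likelihood with Python's statsmodels. The data and variable names are synthetic placeholders, not the study's dataset, and this is not the authors' original analysis code.

```python
# Minimal sketch (synthetic data): growth curve / random coefficient model
# for BDI scores. A random intercept per subject plus an independent random
# slope for time gives a diagonal covariance of the random effects, as
# described in the text; reml=False requests maximum likelihood estimation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
waves = [0, 6, 12, 18, 24]                 # assessment months, as in the trial
rows = []
for subject in range(60):
    intercept = 25 + rng.normal(0, 4)      # subject-specific baseline BDI
    slope = -0.5 + rng.normal(0, 0.2)      # subject-specific rate of change
    for t in waves:
        rows.append({"subject": subject, "time": t,
                     "bdi": intercept + slope * t + rng.normal(0, 2)})
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "bdi ~ time", df, groups="subject",
    re_formula="1",                        # random intercept
    vc_formula={"time": "0 + time"},       # uncorrelated random slope for time
)
result = model.fit(reml=False)             # ML, as used for all models here
print(result.summary())
```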
Maximum likelihood procedures were used for all models. The first statistical analyses aimed to investigate the association between possible predictive variables of improvement (prognostic, prescriptive, and/or moderator) and outcome over the 24-month study period. BDI results were analyzed using growth curve models (i.e., with the time measure used as a covariate), and subjects were set as random effects. Therefore, the BDI results and growth curve of each subject at the end of treatment were derived from a collection of that subject's specific parameters. To identify relevant predictors, we considered three approaches proposed in earlier studies. 3,4,10 Interactions among treatment conditions (groups), time, and the predictors of interest were examined. Prognostic variables were those in which the lower-order term was statistically significant (i.e., representing only the main variable). In this study, treatment outcome depended on the score of this predictor, regardless of the treatment received by the subject. Prescriptive variables were those in which the lower-order term and the term representing the variable × treatment interaction were significantly related to outcome. In this study, these variables indicated that different treatment effects were occurring (depending on the value of the variable in question). Finally, moderator variables were those in which the lower-order term and the term representing the variable × time interaction were statistically significant. This means that there were interactions with the linear time effect (i.e., over time, the BDI score depends on the variable value, indicating that some characteristic of this variable influences the magnitude and/or direction of the relationship between intervention and outcome).

A stepwise model was used within each domain. The first step was to verify whether the model that included all the variables of each domain was statistically significant. The second step was to keep only those predictors significant at p < 0.20. The third step retained predictors with p < 0.10, while the fourth retained only those with p < 0.05. Once all predictive variables had been identified, they were entered into a final model containing all remaining significant predictors. Thus, the effects of each variable were tested again while simultaneously controlling for the others.

Results

Because the results of our stepwise analysis of the sociodemographic domain (first domain) did not reach statistical significance, this domain was removed from the predictor model. The second domain included the neurocognitive variables defined on the basis of the subtests of the WAIS-III (see Table 1) and their interactions with time and treatment. Six significant predictors were identified: SIM, LNS, and MR as lower-order terms (i.e., prognostic); SIM × Time and DSC × Treatment as terms that moderate outcome; and LNS × Treatment as a treatment-prescriptive term (Table 2). For lower-order effects (variables without interactions), b estimates can be interpreted as representing changes in mean BDI scores for each unit of change in the predictor variable, controlling for all other predictors in the model. An isolated predictive term indicates only a main effect of the variable (i.e., the variable's pretreatment value is associated with BDI mean scores in all periods). As the variable does not depend on group or time, it is considered prognostic.
On the other hand, when the term in question is statistically significant as a main term and the variable × time term is also statistically significant, this means that there is an interaction of the term in question with the effect of time. In this case, the magnitude or direction of change in the BDI mean score over time depends on the value of the interaction variable. The variable is then considered a moderator. Similarly, when the term interacts with treatment (variable × treatment) but the main term is not statistically significant, the variable can also be considered a moderator, as this indicates that the term in question has a characteristic that causes a difference in the magnitude or direction of the relationship between outcome (dependent variable) and treatment (independent variable). Finally, when the term in question is statistically significant as the main term and interacts with treatment (variable × treatment), it becomes a prescriptive variable.

With regard to prognostic variables, higher baseline scores in SIM predicted higher BDI scores in all periods, whereas higher baseline scores in LNS and MR predicted lower BDI scores. More specifically, each single unit added to the SIM baseline score yielded a 0.31-point rise in the BDI mean score each time it was measured. On the other hand, a one-unit increase in LNS and MR baseline scores was associated with a reduction of 1.17 and 0.47 points in the BDI mean score, respectively, each time it was measured. The LNS variable was considered treatment-prescriptive. Differences in BDI improvement between LTPP and COM were not statistically significant; however, both treatments stood out significantly when compared to FLU. Patients with the same LNS baseline score responded better to LTPP and COM than to FLU. The SIM × Time interaction showed that lower SIM baseline scores predicted smaller decreases in mean BDI scores over time, an effect which did not depend on treatment but did depend on time. The DSC × Treatment interaction indicated that the direction of the relationship between the independent variable (treatment) and the dependent variable (BDI score) depended on the DSC variable. Higher baseline scores on the DSC were negatively associated with BDI scores in the FLU treatment, whereas higher baseline DSC scores were positively associated with BDI scores in LTPP and COM. In other words, higher baseline DSC scores predicted lower BDI scores in patients who received FLU, and higher BDI scores in patients who received LTPP or COM. Therefore, the DSC variable moderated the direction of the relationship between treatment and outcome.

Final model with all significant predictors

Once identified, the prescriptive, prognostic, and/or moderator markers were simultaneously entered into a final model, so that the effect of each marker could be estimated while controlling for the effects of the other markers. As Table 2 shows, each effect remained statistically significant when all effects were covaried. Table 3 shows the final BDI mean scores for the variables SIM, LNS, and MR (lower-order terms) at five different baseline scores (i.e., the sample mean, one and two SDs above the mean, and one and two SDs below the mean). This division of baseline scores (above and below the mean) was carried out deliberately to demonstrate specific subgroup differences. The variables LNS and DSC had a statistically significant interaction with treatment.
Table 4 shows these variables in each treatment at different baseline scores, to illustrate the differences in within- and between-treatment subgroups. A positive association was noted between SIM baseline scores and BDI scores. Each SIM unit above the baseline sample mean represented more BDI-measured symptoms at each time point of measurement. On the other hand, a negative association was noted between BDI scores and the variables LNS and MR at the end of treatment. Each LNS and MR point above the baseline sample mean was associated with fewer BDI points for each time it was measured. Subjects with baseline LNS scores one SD below the mean did not reach the clinical cutoff point for treatment outcome, regardless of treatment modality. Therefore, low pretreatment LNS scores may be prognostic of poor treatment outcome. For instance, patients receiving FLU treatment would need to have very high LNS scores to achieve the BDI clinical cutoff point. This suggests that low LNS scores may contraindicate FLU alone for patients with moderate depression. There were no statistically significant differences between LTPP and COM, but both were significantly different from FLU. Figure 1 shows differences between treatments in terms of final BDI mean scores associated with the LNS prescriptive variable, t(419) = 4.23, p < 0.001, Cohen's d = 1.43 ± 0.59 (95% confidence interval). As LNS baseline scores increased, participants in the LTPP and COM groups had significantly fewer BDI points at the end of treatment when compared to the participants in the FLU group. The final model, with three prognostic variables, one prescriptive variable, and two moderator variables, accounted for approximately 46% of the between-subject variance among treatments. This variance occurred in the final BDI mean scores and in the linear slope estimates.

Figure 1. Linear slope estimates for the LNS × Treatment interaction. The slope of a line is the ratio of the change in y over the change in x ("rise over run"); it describes the line's steepness, and a higher slope value indicates a steeper incline. BDI = Beck Depression Inventory; COM = combined therapy; FLU = fluoxetine treatment; LTPP = long-term psychodynamic psychotherapy.

Discussion

This study aimed to investigate outcome predictors for three different treatments for depression. The objective was to identify prognostic, prescriptive, and/or moderator variables that could help guide clinical protocols. Although a number of studies on the association between cognitive variables and depressive symptoms have been published, 15,16 we found no studies using neurocognitive markers (obtained from the WAIS-III) as predictive variables when comparing different treatments. Three studies that used the WAIS-III in a long-term psychoanalytic psychotherapy context reported only test-retest changes in patients and did not attempt to find predictive outcome variables. 16,21,33

In the present study, the sociodemographic domain did not result in statistically significant predictive variables (i.e., age, sex, marital status, and level of education). Some studies, however, identified age as a predictor of slower response to treatment, 7,26 while others found no association between age and treatment outcome. 7,8 Level of education may be positively associated with treatment outcome. 9 Other examples of potential demographic predictors of treatment outcome include gender, marital status, family history of treatment response, and socioeconomic level. 34 It is worth noting that the sample of patients for this study was very homogeneous in terms of sociodemographic and clinical profiles, which constitutes a definite limitation of this study, and that this homogeneity may have contributed to the lack of statistical significance in the sociodemographic domain. Participants were mostly young women with good socioeconomic level and educational attainment, who tended to adapt to psychotherapy more easily. 35 The possibility exists that other patients (i.e., older women, or men and women with lower social and education levels) might have difficulty adapting to psychotherapy or other long-term treatments. This discrepancy in adaptation to long-term treatment would reduce the external validity of our results. Homogeneous samples generally are not representative of the huge variety of outpatients. The antithesis between external and internal validity of data has been widely discussed. 36

In this study, most participants had a moderate level of depression (as measured by the BDI). Two studies have shown a correlation between severity of depression and treatment outcome. 9,11 This finding appears to be more likely when a wider range of patients with depression is evaluated. Predictors for treatment outcome include symptoms, patient treatment preference, early life stress, personality characteristics, and previous treatment.

With regard to neurocognitive variables with prognostic features, higher LNS baseline scores (which are associated with working memory and attention) predicted lower BDI results at outcome. Two studies have associated working memory function with depression treatment outcome. 14,17 Similarly, it has been reported that lower attention predicts poorer response to depression treatment. 37 These findings suggest that patients who are less impaired in functions such as working memory and attention have a better prognosis, regardless of the type of treatment they are receiving. Another prognostic variable is abstract reasoning (MR), which some researchers believe may be important in clinical improvement. 3 The MR subtest, however, demands special consideration. Some studies have associated the results of this test at least partially with performance in tests that assess executive functioning, 38 such as the Wisconsin Card Sorting Test. Poor executive functioning tends to lead to a poor prognosis in patients with depression. In sum, better baseline performance in executive functions suggests better treatment outcomes. These findings are consistent with those of the present study, and appear to support the idea that executive functioning and working memory are related to treatment outcome in patients with moderate depression. Likewise, according to evidence showing an association between neuropsychological abnormalities and alterations on functional neuroimaging, 39 data from the present study suggest an association between the frontotemporal circuitry of the brain and treatment outcomes in depression. Neuroimaging studies corroborate this association, identifying other brain regions that are potentially involved in clinical improvement mechanisms.
Changes in the prefrontal limbic region in patients with depression 15 months after LTPP, 40 as well as changes in some cortical regions (prefrontal, anterior cingulate, and insula), may be considered biological markers for treatment response and predictors of treatment outcome in patients with depression. 41 A higher baseline score in verbal abstract reasoning (SIM) was considered a prognostic predictor of slightly higher BDI scores with time. SIM is also considered an excellent test of general mental ability. 26 A possible interpretation is that a subject's level of verbal abstract reasoning may act upon his or her interpretations of BDI statements and, consequently, upon the choice of statement to be checked in the scale. Hypothetically, depressed subjects with higher levels of verbal abstract reasoning tend to be more pessimistic when choosing BDI statements, tending to score slightly higher, regardless of the treatment they are receiving. This hypothesis is supported by consistent findings showing that clinical depression is accompanied by negative alterations in perceptive content, which could cause negative thoughts, considerations, and judgments. 42 Findings such as this may indicate a very limited prognostic effect of the SIM variable and must be interpreted with caution. At the same time, SIM was also considered an outcome moderator due to its interaction with the time variable, suggesting that some characteristic of this variable actually may interfere with outcome. Further studies must be carried out with the cognitive construct of verbal abstract reasoning in patients with depression in order to clarify the time variable's possible interference with outcome, as it could be an anomalous finding in the context of so many predictors.

DSC is another moderator variable, one usually associated with processing speed and working memory. This variable apparently has characteristics that alter the direction of the relationship between the independent (treatment) and dependent (BDI score) variables, indicating a trend of different patterns of response in the three treatments compared. In the LTPP and COM groups, the DSC variable apparently moderated a positive relationship, whereas in the FLU group that relationship was negative. These data suggest that differences within treatment processes may be involved. Regarding the prescriptive variable LNS, linear slopes indicated that LTPP and COM were more clinically efficient than FLU in patients with a higher capacity of working memory and attention. Other studies have pointed out that combined treatments tend to be more effective than FLU alone in patients with mild to moderate depression. 43 We found no data on working memory capacity and its interaction with LTPP. This gap caused some LTPP findings in this study to appear relatively obscure. Secondary analyses dividing the FLU group into treatment responders and non-responders should help clarify this.

Many factors that can influence patients' cognition have been emphasized in the literature: everyday experiences, physiology, psychological alterations, and cultural factors. Understanding the underlying mechanisms of therapeutic change can provide inputs for the enhancement of clinical treatment results. 44 The lack of answers regarding fundamental questions on how treatments trigger clinical changes in patients precludes many patients from receiving the benefits of more adequate treatments for their individual profiles. 3
In addition, an individual's neurocognitive profile may be a marker of prognostic and prescriptive criteria for the treatment of depression. The results of the present study should be considered cautiously because of the possible presence of multiple predictor variables. Furthermore, identification of predictive (prognostic, prescriptive, and/or moderator) effects may provide more information for creating better tools for clinical decision-making. Understanding treatment moderators is key to choosing appropriate treatments and guiding clinical practice. Further studies are strongly suggested. Finally, some of the limitations of this study have been reported elsewhere. 21,22 Because of the sample homogeneity, our findings are applicable only to a particular profile of outpatients, i.e., those diagnosed with moderate depression and treated for 24 months with the therapies used in this study. Other patients or treatments could result in different predictive variables. Another limitation is that the selection of possible variables used here was made among variables that were readily available. Inclusion of other variables could also alter the results of the analyses employed.

Disclosure

The authors report no conflicts of interest.
Social curiosity as a way to overcome death anxiety: perspective of terror management theory

Social curiosity has been found to have great benefits in human life, especially in fostering interpersonal relationships. Nevertheless, there are indications of another benefit of social curiosity that has not yet been explored, namely overcoming the anxiety of death. This indication is based on previous research which found a positive relationship between anxiety and social curiosity. In this study, social curiosity is framed as a representation of symbolic immortality, which people use to overcome the terror of death. To support this conjecture, two studies were conducted using the Terror Management Theory (TMT) framework. Study 1 (N = 352, M age = 19.39) found a positive relationship between death anxiety and social curiosity. In Study 2 (N = 507, M age = 20.68) it was found that intolerance of uncertainty and desire for self-verification mediated the relationship between death anxiety and social curiosity. The results of this study indicate that an increasing interest in obtaining information about how other people think, feel, or act is a form of mechanism that people use to control anxiety related to death.

Introduction

Social curiosity is defined as an interest in obtaining new information and knowledge about the social world (Renner, 2006). This type of curiosity is widely known to have an important role in social interaction and human relations (Han et al., 2013; Hartung and Renner, 2013; Kashdan et al., 2018). Social curiosity enables individuals to make more accurate personal judgments about their interaction partners (Hartung and Renner, 2011). As such, social curiosity increases one's ability to adapt and to survive. Although the importance of social curiosity has been explored in past literature, there is a dearth of research on the antecedents of social curiosity as well as on the mechanisms linking those antecedents and social curiosity. This series of studies aimed to examine social curiosity within the framework of Terror Management Theory (TMT).

Terror management theory (TMT)

According to TMT, human psychological needs are primarily rooted in existential dilemmas. People are born with instinctive tendencies for self-preservation and continued existence to increase the chances of survival. People are equipped with intellectual abilities making them aware of their unavoidable vulnerability and death (Rosenblatt et al., 1989). The awareness that they are vulnerable has the potential to create a paralyzing terror. The term terror refers to the emotional manifestation of the self-preservation instinct in people who are intelligent enough to know that one day they will die (Greenberg et al., 1992). This intense experience of anxiety has the potential to disrupt people's lives. Thus, people need to be able to control this existential anxiety (Hayes et al., 2008; Harmon-Jones, Simon, Pyszczynski, Solomon and McGregor, 1997). TMT postulates that people use a dual-component cultural anxiety buffer. The first component is the cultural worldview, a set of valued standards that provides an explanation of existence. The second is self-esteem, which people obtain by believing that they meet the value standards of the cultural worldview they hold (Greenberg et al., 2000). Cultural worldview and self-esteem are symbolic forms of immortality.
As symbolic forms of immortality, cultural worldview and self-esteem enable people to feel valuable as part of something bigger, more significant, and longer-lasting than individual human existence (Dechesne et al., 2003).

Death anxiety

Death makes people experience uncertainty because they do not know when and how death will occur (Greenberg et al., 1994). Death is the only event that cannot be avoided in the future, and it will extinguish human motivation and desire (Greenberg et al., 2010). As such, death creates extraordinary anxiety, resulting in terror in human life (Hayes et al., 2008). Death anxiety is a result of people living in the shadow of death. Death anxiety exists in various cultures and is a main motivation in human behavior (Cicirelli, 2002). Past studies have found a relation between anxiety and curiosity. Studies conducted by Trudewind (2000) and Litman and Pezzo (2007) found a positive relationship between anxiety and curiosity, thus indicating a tendency of people to seek social information when experiencing anxiety. As such, seeking interpersonal information helps anxious people to regain control of their environment (Renner, 2006). Based on the above findings, it can be argued that death anxiety leads people to engage in social curiosity, urging them to collect social information as a way to mitigate the death anxiety.

Intolerance of uncertainty

Intolerance of uncertainty is seen as a broad construct that represents cognitive, emotional, and behavioral reactions to uncertainty in everyday life situations (Freeston et al., 1994). Intolerance of uncertainty is explained as an excessive tendency of individuals to consider the occurrence of negative events unacceptable, however small the possibility of their occurrence (Buhr and Dugas, 2002). Lowe and Harris (2019) found a positive relationship between death anxiety and intolerance of uncertainty. It might be that death anxiety creates an unpredictable situation, in which people live in limbo. Living in limbo triggers cognitive, emotional, and behavioral reactions. Hence, death anxiety stimulates intolerance of uncertainty. Experiencing intolerance of uncertainty prompts people to take action to overcome the terror of death. Intolerance of uncertainty arises because uncertainty is unacceptable and must be avoided: it causes stress, triggers frustration, and prevents action (Buhr and Dugas, 2002). In intolerance of uncertainty, efforts are directed at controlling the future and avoiding uncertainty (Freeston et al., 1994). Intolerance of uncertainty encourages people to take action; however, it can also increase worry (Ladouceur et al., 2000). This worry occurs because the intolerance of uncertainty makes people feel uncertain about many aspects of their lives (Buhr and Dugas, 2002). Hence, intolerance of uncertainty makes the self-concept unstable (Kusec et al., 2016). To maintain a stable self-view, people need to stabilize their self-concept. One way is by engaging in self-verification (Swann et al., 1989).

Desire for self-verification

Self-verification refers to a very strong desire to obtain confirmation and stabilization of one's self-view (Kwang and Swann, 2010). Through self-verification, people become more coherent about themselves (Swann and Buhrmester, 2003). This psychological coherence is interpreted as the feeling that the self and the world are as expected (North and Swann, 2009). People use social interaction as a means to verify and confirm their self-concept (Swann and Read, 1981).
The desire to self-verify motivates people to intensively seek information that confirms their beliefs. Whether or not they realize it, people construct a self-confirmatory social environment. People selectively choose to interact with those who can provide self-verification (Swann and Buhrmester, 2003). Through this selective social interaction, people affirm their self-view (Kwang and Swann, 2010; Swann et al., 2007). Selective social interaction can be achieved when people know who they can depend on, including those who are supportive of their self-view. In order to gain this knowledge, people need to have social curiosity. Social curiosity enables people to obtain information about other people, including their interests (Renner, 2006). Therefore, social curiosity helps people to make accurate judgments about other people (Hartung and Renner, 2011), as it facilitates the understanding of social information (Hartung and Renner, 2013).

Present research

Information about people is a very valuable resource to have (Han et al., 2013), as it facilitates survival and adaptation; it is therefore very important for people to develop and/or increase social curiosity. In order to increase social curiosity, it is necessary to know the antecedents of social curiosity. There has been an attempt by Loewenstein (1994) to explain the causes of curiosity in general. Information gap theory explains curiosity as a form of cognitive deprivation that occurs because of gaps in knowledge or understanding. This theory has been used extensively in various studies on curiosity. However, information gap theory is not well suited to explaining the causes of social curiosity. Information about people is different from other types of information, because the former is complex and has special value in the social environment, for example as a basis for social comparison (Litman and Pezzo, 2007). The characteristics that distinguish information about people from other types of information indicate that people's motives for obtaining information about other people go beyond merely filling an information gap. This study proposes understanding social curiosity from another perspective, namely by looking at social curiosity through the TMT framework. The ability of TMT to explain various human behaviors has been demonstrated. This theory can explain behavior by looking at people's most basic motives, namely the conflict between life and death (Basset, 2007; Echabe and Perez, 2016; Greenberg et al., 2010; Pyszczynski, Greenberg, & Solomon, 1997); the urge to survive despite knowing that at any time people will die drives people to create a symbolic immortality (Florian and Mikulincer, 1998). In this study, social curiosity is proposed as a form of symbolic immortality that people try to attain. To test the use of social curiosity as a form of symbolic immortality in dealing with death anxiety, two studies were conducted. Study 1 is the foundational research needed to empirically establish the relationship between death anxiety and social curiosity. Study 2 is needed to explain in detail the process by which death anxiety can increase social curiosity. In Study 2 the relationship between death anxiety and social curiosity is examined through two mediators, namely intolerance of uncertainty and desire for self-verification. These two variables were chosen because they are correlated with both death anxiety and social curiosity.
In addition, these two mediators serve to explain how personal death anxiety can direct people's attention to their social environment.

Study 1

The purpose of Study 1 was to examine the relationship between death anxiety and social curiosity. Previous studies have examined the relationship between anxiety (as trait anxiety or social anxiety) and social curiosity (Renner, 2006). However, results among past studies on anxiety and social curiosity have been inconsistent. Anxiety was found to be negatively correlated with curiosity (Kashdan, 2002, 2007; Kashdan and Roberts, 2004; Kashdan, Rose and Fincham, 2004). On the contrary, anxiety was also found to be positively correlated with curiosity (Litman and Pezzo, 2007; Trudewind, 2000; Renner, 2006). Furthermore, no previous studies on the relationship between death anxiety and social curiosity have been noted by the authors. Considering that death anxiety could be postulated as a precursor of social anxiety within the TMT framework, Study 1 aimed to examine the relationship between death anxiety and social curiosity.

Participants and procedures

The participants (N = 352) were undergraduate students majoring in Psychology at a private university in the Jakarta Greater Area, Indonesia. Initially there were 355 participants, but three were eliminated: one for not filling out the questionnaire, and two because their age differed greatly from that of the other participants. The participants were 81% female (n = 285), with a mean age of 19.39 years. Data were collected by distributing the research questionnaires in classes. Prior to data collection, this study was granted approval by the Ethics Committee of the Faculty of Psychology, Universitas Indonesia. Informed consent was obtained at the beginning of the study measurement.

Revised death anxiety scale (RDAS; Thorson and Powell, 1992). Death anxiety was measured using the revised death anxiety scale. The RDAS was chosen as the measure of death anxiety based on the study conducted by Cicirelli (2002), in which death was made salient and the fear of death was raised to the level of consciousness through the use of a death anxiety measure. The RDAS consists of 25 items with a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), with 25 and 125 as minimum and maximum scores, respectively. An example of an RDAS item is 'The total isolation of death is frightening to me'. A higher RDAS score indicated greater anxiety about death.

Social curiosity scale (SCS; Renner, 2006). The SCS was used to assess social curiosity. It aimed to determine the level of interest that an individual has in how others think, feel, or act. The SCS consists of 10 items, rated on a 4-point Likert scale from 1 (strongly disagree) to 4 (strongly agree). An example item is 'I like to look into other people's lit windows'. Scoring was done by totaling the answers on all items. A higher SCS score indicated a higher interest in obtaining information about other people.

Statistical analysis

IBM SPSS ver. 25 was used for the statistical analyses (IBM Corp, 2017). Reliability analysis was computed to determine the reliability coefficient for each measure. Bivariate correlations among variables were determined using the Pearson correlation, and hierarchical regression analysis controlling for gender was used to examine the relationship between the predictor and the outcome (a minimal sketch of this two-step regression follows below).
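As a hedged illustration of the hierarchical regression just described (not the authors' SPSS code), the sketch below enters gender in step 1 and adds death anxiety in step 2, reading the increase in R^2 as the variance in social curiosity attributable to death anxiety. The data are synthetic and the column names are placeholders.

```python
# Minimal sketch of a two-step hierarchical regression: Step 1 enters gender
# only; Step 2 adds death anxiety; the R^2 increase indicates the additional
# variance in social curiosity explained by death anxiety.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 352
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),                 # 0 = male, 1 = female
    "death_anxiety": rng.normal(75, 12, n),          # RDAS total (range 25-125)
})
df["social_curiosity"] = (20 + 0.05 * df["death_anxiety"]
                          + 0.5 * df["gender"] + rng.normal(0, 4, n))

step1 = smf.ols("social_curiosity ~ gender", df).fit()
step2 = smf.ols("social_curiosity ~ gender + death_anxiety", df).fit()
print(f"Step 1 R^2 = {step1.rsquared:.3f}")
print(f"Step 2 R^2 = {step2.rsquared:.3f} "
      f"(delta R^2 = {step2.rsquared - step1.rsquared:.3f})")
```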
Gender was controlled because it might influence social curiosity, following research conducted by Taubman-Ben-Ari et al. (2002).

Results

Means, standard deviations, Cronbach's alphas, and bivariate associations among variables are presented in Table 1. The reliability of the death anxiety and social curiosity scales was acceptable. A positive correlation between death anxiety and social curiosity was found: higher death anxiety was related to higher curiosity. Gender did not have a significant relationship with social curiosity, but it had a significant negative correlation with death anxiety. Women had higher death anxiety scores than men. From Table 2 it can be seen that gender alone did not significantly predict social curiosity. The addition of death anxiety significantly improved the model; death anxiety explained a 2.7% increase in the variance in social curiosity. In this model, death anxiety (β = .175; p < .01) was a better predictor of social curiosity than gender (β = .112; p < .05). Note. N = 352. *p < .05; **p < .01; ***p < .001.

Discussion

The results of Study 1 supported findings from previous studies in which anxiety predicted curiosity (Litman and Pezzo, 2007; Trudewind, 2000; Renner, 2006). In particular, this study provided preliminary evidence of a positive association between death anxiety and social curiosity. This result served as a basis to further illuminate the role of social curiosity as a mechanism to mitigate the impact of death anxiety. In addition, gender was found to play a role in predicting social curiosity, although only when death anxiety was the antecedent of social curiosity. Gender was correlated with death anxiety; in this study it was found that women have higher death anxiety than men. These results were in line with previous studies (Abdel-Khalek and El Nayal, 2019; MacLeod et al., 2016; Pierce et al., 2007; Russac et al., 2007).

Study 2

The purpose of Study 2 was to examine intolerance of uncertainty and self-verification as mediators in the relationship between death anxiety and social curiosity. Within the TMT framework, various human behaviors, such as aggression, prosocial behavior, and sexual attitudes, are motivated by fear and anxiety of death (DeWall and Baumeister, 2014). This indicates that there are many ways to control the anxiety of death. Thus, Study 2 was conducted to understand social curiosity as a mechanism for mitigating death anxiety. In addition, this study was needed to strengthen the results of Study 1 by explaining how death anxiety can increase social curiosity.

Participants and procedures

The number of participants in Study 2 was 507. Initially, 511 participants were recruited; however, four did not meet the inclusion criterion of being undergraduate students. Data were collected by distributing online research questionnaires to undergraduate students in Jakarta, Indonesia. The age of the participants ranged from 18 to 25 years (mean = 20.68). There were 368 (72.6%) female participants. Prior to data collection, this study was granted approval by the Ethics Committee of the Faculty of Psychology, Universitas Indonesia. Informed consent was obtained at the beginning of the study measurement.

The revised death anxiety scale (RDAS; Thorson and Powell, 1992) was the same one used in Study 1. This instrument consists of 25 items, rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). An example item is 'I fear dying a painful death.' A higher RDAS score indicated greater anxiety about death.
The intolerance of uncertainty scale (IUS; Buhr and Dugas, 2002) is a measure based on the idea that uncertainty is unacceptable and should be avoided: being uncertain reflects badly on a person, creates frustration and stress, and fosters an inability to take action. The IUS consists of 27 items, rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). An example item is 'uncertainty stops me from having a strong opinion'. A higher IUS total score indicated a greater inability to tolerate uncertainty.

The desire for self-verification measure (Wiesenfeld et al., 2007) assessed self-verification of the personal self. This measure consists of 2 items; an example is 'I want others to understand who I am'. It is rated on a 7-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). The Cronbach's alpha value of this instrument fell just below .70; although the coefficient was less than .70, this value is adequate for research on human behavior (Vaske et al., 2017). A higher total score indicated a stronger impetus for personal self-verification.

The social curiosity scale (SCS; Renner, 2006) used in Study 2 was the same as the one in Study 1. The SCS consists of 10 items; an example is 'I am interested in people'. It is rated on a 4-point Likert scale, ranging from 1 (strongly disagree) to 4 (strongly agree). A higher score indicated a greater interest in obtaining information about other people.

Statistical analyses

IBM SPSS ver. 25 was used for the statistical analyses (IBM Corp, 2017). Reliability analysis was computed to determine the reliability coefficient for each measure. Bivariate correlations among variables were determined using the Pearson correlation. The PROCESS macro for SPSS (version 3.4, model 6; Hayes, 2018) was used for the mediation analysis.

Results

Means, standard deviations, Cronbach's alphas, and bivariate associations of all the main variables in this study are shown in Table 3. The four variables in Study 2 were all correlated with each other. Consistent with Study 1, death anxiety was positively correlated with social curiosity. Gender was included in the serial mediation analysis as a covariate, based on the results of Study 1 and the bivariate correlations in Study 2. A mediation analysis with intolerance of uncertainty and desire for self-verification as mediators 1 and 2, and gender as a covariate, was performed. The analyses used the PROCESS macro for SPSS with 10,000 bootstrap samples and 95% confidence intervals. The results can be found in Figure 1. Based on the results, it was found that intolerance of uncertainty and desire for self-verification significantly mediated (p < .001) the relationship between death anxiety and social curiosity. Higher death anxiety led to higher intolerance of uncertainty (p < .001); higher intolerance of uncertainty, in turn, led to a stronger desire for self-verification (p < .001); and a stronger desire for self-verification led to higher social curiosity (p < .001). The relationship between death anxiety and social curiosity could also be mediated directly by intolerance of uncertainty (p < .05) and by desire for self-verification (p < .001). In this mediation model, gender was not found to have a significant influence.

Discussion

The results of Study 2 provided more support for the positive relationship between death anxiety and social curiosity.
Intolerance of uncertainty and desire for self-verification were found to serially mediate the relationship between death anxiety and social curiosity. Death anxiety drove individuals to face the unknown, giving rise to intolerance of uncertainty. Experiencing intolerance of uncertainty motivated individuals to verify themselves, which took the form of a tendency to want to know about other people, hence social curiosity. These findings are valuable for understanding the mechanism leading from death anxiety to social curiosity. Separately, each mediator can mediate the relationship between death anxiety and social curiosity: intolerance of uncertainty can directly mediate the relationship, and likewise the desire for self-verification. In addition, gender was found not to affect the relationships between the variables in this model. This finding reinforces the results of research conducted by Taubman-Ben-Ari et al. (2002).

General discussion

The current study examined the role of social curiosity as a means of overcoming or reducing anxiety, particularly death anxiety. The empirical results of Study 1 supported the claim that death anxiety increases social curiosity. These results were further strengthened in Study 2, in which higher death anxiety led to stronger social curiosity through intolerance of uncertainty and desire for self-verification. Thus, based on these two studies, it can be concluded that social curiosity has potential as a means to overcome death anxiety. As a representation of symbolic immortality, social curiosity has characteristics that buffer death anxiety. Reminders of death make people feel uncertain, because death cannot be predicted (Nyatanga and De Vocht, 2006), which then raises anxiety related to death. When people experience death anxiety, negative affect arises in the form of fear, threat, unease, and discomfort (Nienaber and Goedereis, 2015). Experiencing these various discomforts makes people unable to tolerate uncertainty, and they try to reduce it by increasing the drive to believe in immortality (Conn et al., 1996). One form of symbolic immortality is biological symbolic immortality, which manifests in a feeling of connectedness with other people and larger entities (Steele et al., 2014). This connection means people no longer fear that after death no one will know them and that they will leave no impression on this world (Mikulincer et al., 2003). To achieve this symbolic immortality, people need to have a good self-view. Having a stable self-view is important, as it leads to feeling confident and increases the ability to predict and control the social world, to direct behavior, and to maintain a sense of coherence, place, and continuity (Swann and Buhrmester, 2003; Swann and Read, 1980). Threatening conditions, in contrast, make people doubt themselves and increase the need to reconfirm their self-view through self-verification (Swann and Brooks, 2012). Even so, people hope that they will be remembered through others' impressions of them after death, and they are specific about the impressions by which they want to be remembered. Thus, self-verification is important, as people will try to receive social feedback that confirms their self-conceptions (Swann and Read, 1980). One way this can be done is to interact with people who support those conceptions (Swann and Buhrmester, 2003). Doing so requires people to recognize the supportive people around them, which in turn requires an interest in obtaining social information.
However, social curiosity is not only driven by the need to be selective in making contact; it is also driven by the need to achieve biological symbolic immortality. Social curiosity helps people to establish and maintain relationships with others (Renner, 2006), so that people continue to feel part of a larger entity. At the same time, social curiosity is useful for maintaining existing connections, as it keeps people constantly updated on other people's conditions. Having a connection with a social group provides a collective social identity and provides symbolic immortality at the biosocial level (Lifton and Olson, 1974; Vigilant and Williamson, 2003). When people continue to develop social curiosity, they need not worry too much about death, because they will still be known even after death. Based on the explanation above, it can be said that social curiosity is at the core of the actualization of biological symbolic immortality. From the results of Study 1 and Study 2 it can be concluded that social curiosity is indirectly driven by death anxiety and directly by the desire for self-verification. These findings show that the interest in obtaining information about others is fundamentally driven by the need to overcome death anxiety and directly driven by the need for self-verification. This research illustrates how death anxiety can change the focus of people's attention. Initially, death anxiety makes people focus only on themselves, that is, on what they feel and think about themselves when thinking about death. Death anxiety then moves people to focus on others, in order to overcome the anxiety they have. This redirection of attention to others rests on a belief in the importance of others to one's existence. The direct influence of desire for self-verification on social curiosity also provides specifics about what increases social curiosity. There are three motives underlying social curiosity, namely, obtaining information for learning, having control over one's social environment, and building and maintaining relationships with others (Hartung and Renner, 2011). Desire for self-verification is a depiction of one of these three motives, namely, having control over one's environment. The direct influence of desire for self-verification on social curiosity in this study indicates that other variables can also have a direct effect on increasing social curiosity. This is very plausible considering that desire for self-verification describes only one of the three social curiosity motives. In addition, although in this study desire for self-verification proved to have a sufficient effect to increase social curiosity, the effect of intolerance of uncertainty on desire for self-verification was not strong. This suggests that other variables may be stronger mediators between intolerance of uncertainty and social curiosity. Based on previous research, a potential mediator is social comparison. Intolerance of uncertainty can predict social comparison (Butzer and Kuiper, 2006). The process of social comparison includes the desire to affiliate with others, the desire for information about others, and self-evaluation (Taylor and Lobel, 1989), all of which can encourage increased social curiosity. An interesting finding in this study is that death anxiety correlates with total social curiosity.
Previous research by Renner (2006) found that neuroticism and social anxiety only correlated with sub-factors of the SCS (neuroticism was positively correlated with covert social curiosity; social anxiety was positively correlated with covert social curiosity and negatively correlated with general social curiosity). Although death anxiety, social anxiety, and neuroticism are different variables, all three are related because they contain an element of anxiety. Neuroticism correlates with death anxiety (Abdel-Khalek, 1986; Frazier and Foss Goodman, 1988; Templer, 1972). Anxiety is a facet of neuroticism (Soto et al., 2011); therefore, neuroticism is often used to represent anxiety. Social anxiety is basically rooted in death anxiety. Fear of death is at the core of psychological threats, and so death anxiety underlies various kinds of anxiety and phobias (Furer and Walker, 2008). People who have social anxiety actually experience death anxiety. Social anxiety occurs because people are afraid of receiving negative evaluations from others in social situations (Beidel et al., 1985). On the other hand, people with social anxiety have poor social performance that gives rise to negative responses from others, so they can experience social rejection (Voncken et al., 2008). When experiencing social rejection, the person has, in effect, experienced social death (Steele et al., 2014). The similarities between death anxiety, neuroticism, and social anxiety, all of which represent anxiety, would lead one to expect results not much different from previous studies. Starting from this assumption, the difference between the results of this study and previous research needs to be explained. This difference may be influenced by cultural factors. This research was conducted in a country with a collective culture. In a collective culture there is a great need for social information so that people remain recognized as part of the group. In order to avoid exclusion or rejection from the group, all strategies for obtaining information about other people will be used to stay 'connected' to the group. People in a collective culture are motivated to find ways to adapt to relevant others, to fulfill obligations, and to be part of various interpersonal relationships (Markus and Kitayama, 1991). The role of gender in predicting social curiosity remains questionable; by itself, gender does not correlate directly with social curiosity. However, when gender is included in a model that explains the occurrence of social curiosity through death anxiety, gender is found to have a role, although it does not always have a significant influence in every relationship between death anxiety and social curiosity. Gender can influence social curiosity if 1) death anxiety is used as an antecedent of social curiosity, and 2) there are no other variables that mediate the relationship between death anxiety and social curiosity. Limitations and future directions This study uses a correlational method to examine the usefulness of social curiosity as a means of overcoming death anxiety. This approach is based on previous research conducted by Cicirelli (2002). Measurement of death anxiety at the level of awareness can contribute to TMT, because it extends the TMT idea. However, this study cannot be used to make causal inferences about the relationship between death anxiety and social curiosity. Thus, further research using an experimental design is recommended.
Further research can be directed to prove that the fulfillment of social curiosity can indeed reduce death anxiety. This direction is in line with the aim of supporting the TMT premise: if a psychological mechanism can buffer death anxiety, then reminding individuals of death will increase reliance on that mechanism, and strengthening this structure should reduce the attention to and accessibility of thoughts related to death (Yaakobi, 2015). In the future, experimental research can also be complemented by the use of Agent-Based Models (ABM). Research on TMT using ABM can be found in papers by Shults et al. (2017, 2018). Agent-Based Modeling is a relatively new method of experimentation: it simulates large numbers of autonomous agents that interact with each other and with environmental stimuli, and observes the patterns that emerge from these interactions (Smith and Conrey, 2007). If ABM is used, agents can be modeled at the individual level, with roles based on their levels of intolerance of uncertainty and desire for self-verification. The agents would interact with an environment designed to induce death anxiety, and the interaction between agents and their environment would show whether social curiosity emerges. ABM is useful for testing and developing theories (Smaldino et al., 2015). This study seeks to provide a new perspective in understanding social curiosity, namely, using TMT to understand its mechanism. It is expected that the use of ABM can further support the use of TMT in explaining social curiosity. Despite its advantages, ABM also has weaknesses in external validity, so this method will be more effective if it is complemented with direct experiments (Jackson et al., 2017). ABM itself serves as a complement to traditional or laboratory experiments (Eberlen et al., 2017). In this study, the effect of death anxiety on social curiosity, both directly and through mediators, was weak. However, the effect of death anxiety on social curiosity increased when accompanied by mediators. This shows that a mediator is needed in the relationship between death anxiety and social curiosity. Based on the results of this study, future studies can explore other variables that might strengthen the effects of death anxiety on social curiosity. Research that specifically addresses social curiosity is still limited, so it cannot be concluded clearly which variables have a major influence on social curiosity. As a guideline, the motives underlying social curiosity can be used, namely, to obtain information, to build and maintain interpersonal relationships, and to control the social environment (Hartung and Renner, 2011). Future studies can also consider involving cultural factors in research on social curiosity. The difference between this study, which found a correlation of anxiety components with total social curiosity, and previous studies, which found that anxiety components correlated only with sub-factors of social curiosity (covert social curiosity), could be caused by cultural factors. This assumption must, of course, be verified. Comparing social curiosity in two different cultures would be very useful for better understanding the construct of social curiosity, especially the benefits and expressions of social curiosity in each culture. The importance of involving culture in curiosity research was also raised by Birenbaum et al. (2019).
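As a concrete illustration of the ABM direction sketched above, the following is a minimal, hypothetical simulation. The agent attributes and update rules are illustrative assumptions, not a published model: each agent carries trait levels of intolerance of uncertainty (iu) and desire for self-verification (dsv), an environmental "mortality salience" event raises death anxiety, and anxiety propagates serially through iu and dsv into social curiosity, mirroring the mediation chain found in Study 2.

```python
# Hypothetical agent-based sketch: death anxiety -> IU -> DSV -> curiosity.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    iu: float              # trait intolerance of uncertainty, in [0, 1]
    dsv: float             # trait desire for self-verification, in [0, 1]
    curiosity: float = 0.0 # accumulated social curiosity

    def step(self, death_anxiety: float) -> None:
        # Serial pathway assumed from the study's mediation model.
        aroused_iu = self.iu * death_anxiety
        aroused_dsv = self.dsv * aroused_iu
        self.curiosity += 0.5 * aroused_dsv   # weight is an arbitrary choice

def simulate(n_agents=100, n_steps=50, salience_prob=0.2, seed=42):
    rng = random.Random(seed)
    agents = [Agent(iu=rng.random(), dsv=rng.random()) for _ in range(n_agents)]
    for _ in range(n_steps):
        # The environment occasionally makes mortality salient for everyone.
        anxiety = 1.0 if rng.random() < salience_prob else 0.1
        for agent in agents:
            agent.step(anxiety)
    return sum(a.curiosity for a in agents) / n_agents  # mean social curiosity

if __name__ == "__main__":
    print(f"mean social curiosity: {simulate():.3f}")
```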
Conclusions This research has demonstrated a benefit of social curiosity, namely, to reduce or overcome the anxiety of death. In particular, the widely known benefits of social curiosity, namely, forming and maintaining interpersonal relationships (Renner, 2006), are part of the more basic benefit of overcoming death anxiety. Death awareness is a critical motivation that drives human behavior (Vail et al., 2012). This research contributes to enriching the understanding of the social curiosity construct. This is very useful considering that research that specifically addresses social curiosity is still very limited. In addition, this study also provides a new perspective for explaining the occurrence of curiosity, especially social curiosity. The occurrence of curiosity is mostly explained using information gap theory (Loewenstein, 1994), but here it is shown that social curiosity can also be explained using the TMT framework. The results of this study also strengthen TMT. This theory states that reducing or overcoming anxiety is a basic motivation in people (Echabe and Perez, 2016; Greenberg et al., 2010; Pyszczynski, Greenberg and Solomon, 1997). This research provided evidence that social curiosity is driven by death anxiety. Declarations Author contribution statement R. A. Fitri: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. S. R. Asih: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. B. Takwin: Conceived and designed the experiments; Analyzed and interpreted the data. Funding statement This work was supported by DRPM of Universitas Indonesia (Directorate of Research and Community Service of Universitas Indonesia).
Exploring Resiliency to Natural Image Corruptions in Deep Learning using Design Diversity In this paper, we investigate the relationship between diversity metrics, accuracy, and resiliency to natural image corruptions of Deep Learning (DL) image classifier ensembles. We investigate the potential of an attribution-based diversity metric to improve the known accuracy-diversity trade-off of the typical prediction-based diversity. Our motivation is based on analytical studies of design diversity that have shown that a reduction of common failure modes is possible if diversity of design choices is achieved. Using ResNet50 as a comparison baseline, we evaluate the resiliency of multiple individual DL model architectures against dataset distribution shifts corresponding to natural image corruptions. We compare ensembles created with diverse model architectures trained either independently or through a Neural Architecture Search technique and evaluate the correlation of prediction-based and attribution-based diversity to the final ensemble accuracy. We evaluate a set of diversity enforcement heuristics based on negative correlation learning to assess the final ensemble resilience to natural image corruptions and inspect the resulting prediction, activation, and attribution diversity. Our key observations are: 1) model architecture is more important for resiliency than model size or model accuracy, 2) attribution-based diversity is less negatively correlated to the ensemble accuracy than prediction-based diversity, 3) a balanced loss function of individual and ensemble accuracy creates more resilient ensembles for natural image corruptions, 4) architecture diversity produces more diversity in all explored diversity metrics: predictions, attributions, and activations. Introduction In the context of Deep Learning (DL), it has been empirically discovered that the use of ensembles can improve the model's accuracy in tasks such as regression and classification. It has been speculated [13] that the main reason behind these improvements is the implicit diversity of the solutions found, which, when aggregated as an ensemble, yield better predictions. In this work, we evaluate the resiliency of diverse deep learning classifiers and the role that different kinds of diversity play in improving them. The case for design diversity Design diversity [23] is a technique to increase the resilience of safety-critical systems. It is established as a best practice in standards such as vehicle functional safety [18] to prevent dependent failures, safety of the intended functionality [19] to address system limitations of machine-learning-based components, and avionic software [12]. A common pitfall is to mistake independent development for design diversity. In [47], the designers of a safety-critical system preferred to let multiple teams collaborate, although the purpose of having multiple teams is to produce multiple designs of a single specification. This was justified with the claim that specification problems can be better mitigated with such collaboration, but it came at the cost of the sought independence. The key problem is that independent development can (and will) produce designs with common failures, mainly because independent development does not enforce diverse design choices. In fact, it has been statistically proven that independently developed software results in dependent failure behavior on randomly selected inputs [11].
In [22], it has been shown that what is needed to reduce dependent failure behavior is diversity in design choices. If the choices are made satisfying certain properties, negatively correlated failure behavior can be expected (in the average case), i.e., better than independent. In DL, however, design choices are not made explicitly by the human designers but are a result of the architecture, data, and optimization approach. Furthermore, existing diversity metrics are not directly related to the DL model's design choices and are known to have a diversity-accuracy trade-off [21]. We investigate whether a diversity metric closer to the model design choices can improve model resilience compared to existing metrics. Main research questions We aim to answer the following research questions: RQ1: Is model accuracy, size, or architecture the main explanatory factor of resilience against natural image corruptions? RQ2: Can a diversity metric closer to design choices improve the known accuracy-diversity trade-off? RQ3: Which diversity enforcement heuristic produces the most resilient models? RQ4: How diverse are the predictions, activations, and input feature attributions of models created with a diversity enforcement learning approach? The rest of the paper is structured as follows: Section 2 provides a brief overview of the current state-of-the-art in diversity enforcement and measurement. Next, the methodology is stated in Section 3. Our experiments are then presented in Section 4, followed by a discussion of the outcomes and conclusions in Section 5. Related work 2.1. Ensemble creation techniques The most relevant techniques for ensemble creation are: a) Ensembles of independently trained models, where diversity originates from the randomness of the training process, e.g., the seed. Each ensemble member loss is a function notated as

$\mathcal{L}_i = \ell(h_i, y) \quad (1)$

where y is the ground-truth label and $h_i$ is the output of the ith single ensemble member. [6] presents an analysis of the resilience of independently trained ensembles. b) Bagging [1] reduces the variance of multiple models by averaging the outcomes of models created with different training data subsets. c) Boosting [37] sequentially trains models to reduce bias by sampling incorrectly classified inputs more often in the next model. d) Negative Correlation Learning (NCL) [24] trains models in parallel with a shared penalty term in their loss function to enforce prediction diversity. Generalized NCL (GNCL) [3] proposes two extensions for NCL: i) a generalized loss function for the ensemble members:

$\mathcal{L} = \frac{1}{M}\sum_{i=1}^{M}\left[\ell(h_i, y) - \lambda\,\tfrac{1}{2}\,d_i^{\top} D\, d_i\right] \quad (2)$

where M is the total number of members in the ensemble, $d_i$ is the difference $h_i - f$ with f as the ensemble prediction, D is the 2nd derivative of the loss function, and λ is a weighting hyper-parameter; ii) an implicit enforcement of diversity by balancing the ensemble and the individual loss:

$\mathcal{L} = \frac{1}{M}\sum_{i=1}^{M}\left[\lambda\,\ell(f, y) + (1-\lambda)\,\ell(h_i, y)\right] \quad (3)$

Figure 1: Model behavior diversity. a) Invariant decision boundary and diverse sample representation. b) Diverse decision boundary, measured by prediction errors. c) Diverse decision boundary, measured by feature relevance. Diversity metrics in DL There are many proposed metrics for diversity. [2] presents a survey and taxonomy of diversity metrics, and [14] presents a survey of diversity for ML. In this work, we focus on behavior diversity metrics of a DL model. Input data diversity, such as different modalities, and implementation aspects, such as the number of layers, are not considered.
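As a concrete illustration, the following is a minimal sketch of the GNCL loss of Eq. 2 for a regression ensemble trained with squared error, in which case the second derivative D is the constant 2. This is our interpretation of the description above rather than the reference implementation of [3]; the tensor shapes and batch reduction are assumptions.

```python
# Sketch of the GNCL loss (Eq. 2) for squared-error regression (D = 2).
# member_outputs: tensor of shape (M, batch); y: tensor of shape (batch,).
import torch

def gncl_loss(member_outputs: torch.Tensor, y: torch.Tensor,
              lambda_: float = 0.2) -> torch.Tensor:
    f = member_outputs.mean(dim=0)           # ensemble prediction
    individual = (member_outputs - y) ** 2   # per-member losses l(h_i, y)
    d = member_outputs - f                   # deviations d_i = h_i - f
    diversity = 0.5 * 2.0 * d ** 2           # (1/2) d_i^T D d_i with D = 2
    # Average over members (the 1/M factor) and over the batch.
    return (individual - lambda_ * diversity).mean()
```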
Prediction (output, failures) diversity Multiple prediction-based diversity metrics have been proposed. [21] presents a comprehensive evaluation of this class of metrics. Pair-wise measures based on the correct and incorrect statistics of two models include the Q-statistic, the correlation coefficient ρ, and the disagreement measure:

$\text{dis} = \frac{N^{01} + N^{10}}{N^{00} + N^{01} + N^{10} + N^{11}} \quad (4)$

where the indexes a, b of the counts $N^{ab}$ indicate the correctness of the two classifiers, i.e., $(h_p = y) \Rightarrow (a = 1)$ and $(h_p \neq y) \Rightarrow (a = 0)$. Non-pairwise measures that evaluate non-binary diversity include entropy, coincident failure diversity, cosine similarity, Kullback-Leibler divergence [10,31], and the Shannon equitability index [32,6]:

$E_H = \frac{-\sum_{i=1}^{S} p_i \ln p_i}{\ln S} \quad (5)$

where S is the total number of prediction species/classes and $p_i$ is the proportion of observed species i. Representation (activation) diversity The intermediate representations (IR) can also be used to measure diversity. [13] compares the diversity in representation space of independently created ensembles and ensembles from variational approaches. Measuring IR diversity is challenging due to the size of the space and semantic ambiguity, i.e., the same semantic concept can be represented in many different ways. A naive use of a diversity metric such as cosine similarity could give semantically irrelevant diversity scores. In [20], the Centered Kernel Alignment (CKA) metric is proposed to obtain a statistical measure, across a dataset, of the similarity of any two layers of a DL model:

$\text{CKA}(K, L) = \frac{\text{HSIC}(K, L)}{\sqrt{\text{HSIC}(K, K)\,\text{HSIC}(L, L)}} \quad (6)$

where K and L are similarity matrices of the two feature maps being compared and HSIC is the Hilbert-Schmidt Independence Criterion, which measures statistical independence. Figure 2: Attribution map diversity: two models may predict the same outcome but based on different evidence. The feature maps may be layer activations or attention maps such as Saliency, Integrated Gradients, and Grad-CAM [40,39,41]. [34] proposed the use of the pull-away loss term from generative adversarial networks to induce diversity of such activations. Self-attention [45] (not related to attention maps) is one of the key techniques in the transformer architecture. In [33], the embeddings used to feed attention heads are masked in such a way as to enforce diversity of activations. In zero-shot learning, the attribute concept is used to enable training models that can, later on, predict unseen labels. These attributes can be considered for IR diversity as well [48]. Closely related to NCL, the self-supervised approach of contrastive learning [38,7] trains two models to produce latent features that are diverse for false positives and similar for true positives through a loss function, such as the triplet loss, that enforces the models to learn the similarity metric. Input feature attribution diversity The importance of input features can be used to measure behavior diversity, which, to the best of our knowledge, has not been explored. Figure 1 shows the relationship of an attribution-based metric w.r.t. prediction and representation diversity. Note that attribution is not the same as attention, nor as attributes in the context of zero-shot learning. Attention maps, such as those obtained from the activation of intermediate layers of CNNs, reflect the excitation of a network given an input. This activation, however, is not necessarily correlated with the final prediction, e.g., it could be an inhibitory factor. Attribution, on the other hand, indicates the importance of a feature to the final decision.
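To make these metrics concrete, here are small NumPy sketches of the disagreement measure (Eq. 4), the Shannon equitability index (Eq. 5), and CKA (Eq. 6) in its common linear-kernel form; the linear-kernel simplification and the array layouts are our assumptions.

```python
import numpy as np

def disagreement(preds_p, preds_q, labels):
    """Eq. 4: fraction of samples on which exactly one classifier is correct."""
    correct_p = preds_p == labels
    correct_q = preds_q == labels
    n01 = np.sum(~correct_p & correct_q)
    n10 = np.sum(correct_p & ~correct_q)
    return (n01 + n10) / len(labels)

def shannon_equitability(preds):
    """Eq. 5: normalized Shannon entropy of the predicted class distribution."""
    _, counts = np.unique(preds, return_counts=True)
    p = counts / counts.sum()
    s = len(p)
    return 0.0 if s == 1 else float(-np.sum(p * np.log(p)) / np.log(s))

def linear_cka(x, y):
    """Eq. 6 with a linear kernel; x, y are (samples, features) activations."""
    x = x - x.mean(axis=0)   # center each feature
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    return hsic / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))
```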
See Figure 2, where Saliency is used to display the original pixels masked by the attribution scores from each model. A change to a pixel with high attribution (brighter) will have a stronger influence on the model prediction than a change to a pixel with low attribution. Other diversity-based resilience approaches Augmenting the training data by applying affine transformations such as rotations and scaling, geometric distortions such as blurring, and texture transfer helps DL models to generalize better with limited training data [27]. Figure 3: Relation between diverse design methodologies and the difficulty function in the LM model [22]. Adversarial training increases robustness to intended attacks with adversarial samples, limiting the model's vulnerability to input perturbations [15]. Such training data approaches are effective and complementary to the design diversity approaches of this study, which address model diversity. Modality and point-of-view diversity [35] is an approach to address the failure modes of sensors such as cameras, radar, and lidar. The design diversity of DL models explored in this study is orthogonal to this approach, as model diversity can be applied to every single modality. Methodology We propose three sets of experiments: 1. Evaluate resiliency of diverse architectures and training approaches. 2. Measure diversity of prediction and diversity of attribution from independently created models of diverse architectures and evaluate robustness correlation. 3. Enforce diversity with NCL, evaluate the resulting robustness, and inspect three kinds of diversity: prediction, representation, and attribution. In addition to addressing the main research questions, we put the following hypothesis to the test: that attribution-based diversity (Equation 7) can be positively correlated with ensemble resiliency if a better accuracy trade-off is achieved compared to prediction-based diversity:

$\text{div}_{\text{attr}} = \frac{1}{|C|\,|P|}\sum_{c \in C}\sum_{p \in P}\operatorname{Var}_i\!\left(a_{i,c,p}\right) \quad (7)$

where $a_{i,c,p}$ is the input attribution score of model i at color channel c and pixel coordinate p. The computation of the input attribution scores a is performed with an attribution method, such as Saliency. This hypothesis is inspired by the theoretical result of the Littlewood and Miller (LM) model [22] that diverse design choices can produce fewer common failures. Diverse attribution maps of correct classifications imply that the models make predictions based on independent factors, which is not the case in prediction diversity. Probability model for design diversity (LM model) The Littlewood and Miller (LM) model [22] defines a probabilistic framework to analyze the impact of methodological diversity on the expected failure behavior. The model defines: 1) an input space $X = \{x_1, x_2, ...\}$, representing all possible inputs x to a program, and 2) a program space $P = \{\pi_1, \pi_2, ...\}$ of all possible programs π that could implement a program specification. A given design methodology determines the probability of coming up with a program π, denoted as $S_A(\pi)$. Another design methodology $S_B(\pi)$ will assign a different probability to the same program.
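A minimal NumPy sketch of the score in Eq. 7, under the assumption that the metric is the variance of attribution scores across ensemble members, averaged over all channel and pixel positions:

```python
import numpy as np

def attribution_diversity(attributions: np.ndarray) -> float:
    """Eq. 7 sketch. attributions: shape (n_models, channels, height, width),
    e.g., stacked Saliency maps for one input. Variance is taken across
    models at each (channel, pixel) location, then averaged to a scalar."""
    return float(np.var(attributions, axis=0).mean())
```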
The model uses the concept of a difficulty function $\theta_M(x)$ that measures the probability that a randomly chosen program π from a given methodology distribution $S_M(\pi)$ will fail on a particular input x ∈ X. The key insight consists in noticing that $\theta_A(x)$ can differ from $\theta_B(x)$ for a different methodology, i.e., for one methodology a certain input may be difficult, while for another it may be easy. See Figure 3 for a visual representation of these spaces. An analysis of this model concludes that if the design methodologies produce different difficulty functions θ, then the expected failure behavior on a random input will be negatively correlated, due to the fact that the covariance of the θ's can be negative. With this model, it is finally shown that a design methodology with diverse design choices that satisfy the following three properties will result in fewer common failures: 1) the choices are logically unrelated (one decision is independent of the other), 2) common failures of a decision are due to different factors, and 3) there is indifference to the selection of each methodology (no methodology is superior). Loss function to enforce attribution diversity We perform a first attempt to enforce attribution diversity with the following loss:

$\mathcal{L}_i = \ell(h_i, y) - \lambda\,\frac{1}{|C|\,|P|}\sum_{c \in C}\sum_{p \in P}\operatorname{Var}_i\!\left(a_{i,c,p}\right) \quad (8)$

This loss computes the variance of attribution scores in an ensemble and uses it as a penalty term weighted by λ. Failures addressed In this study, we evaluate resilience to covariate dataset distribution shifts, i.e., when the distribution of input features of the test dataset does not match the distribution of the training dataset. We use four natural image perturbations from the ImageNet-C dataset [28] that are plausible in vision application domains, such as obstructions or liquid contaminants. Our scope is not to evaluate robustness against adversarial attacks, label shift variations, or resiliency to noise variations such as Gaussian, brown, etc. To understand the relationship between accuracy, size, and resiliency to natural corruptions of DL models, we evaluate a set of architectures (convolutional NNs, transformers, and subnetworks from neural architecture search (NAS)) and training approaches (supervised, self-supervised, and knowledge distillation) on both the ImageNet validation dataset and on the corrupted version ImageNet-C "Lines" (strength of 1.6). See Table 1. Observations to Table 1: Although the model size is highly correlated with the final accuracy and resilience on the corrupted dataset, the architecture seems to be a more determining factor. The smallest transformer, with only 28M parameters, is superior to other CNNs with 2 or 3x more parameters. Self-supervision slightly decreases both metrics, as can be appreciated when comparing the ResNet50 and ViT models that use supervised learning. SwinV2 is an exception, but this model introduced more architectural innovations too. Knowledge distillation from a CNN teacher shows a slight improvement over supervised ViTs. To understand the effect of other corruptions, we evaluate a ResNet50 model 1 on six different data sets: ImageNet [8], ImageNetv2 [36], and four corruptions at a fixed perturbation strength from the ImageNet-C dataset [28]: Plasma (4.0), Checkerboard (4.0), Waterdrop (7.0) and Lines (1.6). See Figure 4. The first two are in-distribution, i.e., the covariates (input features) and labels of the validation set follow a similar distribution to the training data set. The last four are out-of-distribution, as the model has never seen such corruptions of the input images during training.
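The following PyTorch sketch shows one possible realization of the loss in Eq. 8. It approximates Saliency attributions by input gradients and treats the ensemble-wide attribution variance as a penalty; the use of the loss gradient (rather than a top-logit gradient) and the reduction details are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def attribution_diversity_loss(models, x, y, lam=0.001):
    """Eq. 8 sketch: mean member loss minus lam * attribution variance."""
    x = x.clone().requires_grad_(True)
    losses, attribs = [], []
    for model in models:
        loss = F.cross_entropy(model(x), y)
        losses.append(loss)
        # Saliency-style attribution: gradient of the loss w.r.t. the input.
        # create_graph=True keeps the graph so the penalty itself is trainable.
        grad = torch.autograd.grad(loss, x, create_graph=True)[0]
        attribs.append(grad.abs())
    attribs = torch.stack(attribs)           # (M, batch, C, H, W)
    diversity = attribs.var(dim=0).mean()    # variance across ensemble members
    return torch.stack(losses).mean() - lam * diversity
```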
Observations to Figure 4: Different corruptions have different effects on the model performance, and a typical "good" classifier with accuracy close to 80% can have a tremendous performance decrease in situations where a human would probably not. Figure 4: Top-1 accuracy of ResNet50 on in-distribution data sets (ImageNet and ImageNetv2 [36]) and out-of-distribution datasets (ImageNet-C). The resilience of ResNet50 drops significantly against natural corruptions. Diversity of ensembles from heterogeneous architectures To understand the diversity/accuracy trade-off of the attribution-based metric in comparison to the established prediction-based diversity approach, we perform two different experiments: First, we create multiple ensembles of independently trained models with a wide diversity in architecture. Second, we create multiple ensembles using models discovered in a weight-sharing super-network [4], i.e., models whose architecture has been found using neural architecture search (NAS) and not by manual design. The architectures explored here are CNNs (ResNext [46] & SqueezeNet [17]), vision transformers (DeiT [44]) and NAS (MNASNET [42] & BootstrapNAS [29]) using supervised or self-supervised training 2 . In total, 14 models were trained with different hyperparameters to create three-member ensembles of all possible combinations. Figure 5 shows the ensemble performance of all 364 ensembles created from these 14 models using an averaging consensus mechanism, i.e., the logit outputs of all ensemble members are averaged first and then the highest score is used to make the prediction. The left-hand side shows on the X-axis the attribution-based diversity metric (Eq. 7). The right-hand side shows the disagreement prediction diversity metric (Eq. 4). Each point is an ensemble evaluated on the entire validation dataset of ImageNet. The color indicates the final average accuracy of the ensemble. The Y-axis indicates the average benefit of creating an ensemble: $Y = A_{ens} - A_{top}$, where $A_{ens}$ is the ensemble accuracy and $A_{top}$ is the accuracy of its most accurate member, i.e., how much accuracy improvement was obtained in comparison to a single model (the most accurate in the ensemble). In this way, it can be appreciated when an ensemble makes sense: it has to lie above the zero line (dashed). The ensemble cost is measured by the number of parameters, which has a direct influence on the memory and the number of operations required. The ideal ensemble is one with the brightest color at the smallest bubble size. Figure 5: Evaluation on the ImageNet validation dataset of 364 three-member ensembles from heterogeneous architectures using averaging as the consensus mechanism. Y-axis: improvement of the ensemble against its own top ensemble member. X-axis: normalized diversity metric. Color: absolute ensemble accuracy. Bubble size: model parameter size. The attribution diversity metric is not negatively correlated with the ensemble improvement as disagreement diversity is. Observations to Figure 5: The figure shows how attribution diversity is positively correlated with ensemble improvement, while it corroborates the known fact that prediction-based diversity is negatively correlated [21]. Figure 6 shows the same ensemble combinations, but this time using a majority voting consensus mechanism, i.e., the prediction with the highest number of votes wins. Draws are randomly resolved. Figure 6: Evaluation on the ImageNet validation dataset of the same ensembles as in Figure 5 but using voting as the consensus mechanism.
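For reference, the two consensus mechanisms compared here can be sketched as follows; the logits are assumed to be stacked per member, and note that the tie-breaking below is deterministic, whereas the study resolves draws randomly.

```python
import torch

def average_consensus(logits: torch.Tensor) -> torch.Tensor:
    """logits: (M, batch, classes); average the logits, then take the argmax."""
    return logits.mean(dim=0).argmax(dim=-1)

def majority_vote(logits: torch.Tensor) -> torch.Tensor:
    """Each member votes with its argmax; the most frequent class wins.
    Ties are broken deterministically here, unlike the random resolution
    used in the study."""
    votes = logits.argmax(dim=-1)        # (M, batch) member predictions
    return votes.mode(dim=0).values
```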
In contrast to averaging, voting mostly produces ensembles that decrease the final performance instead of improving it. Observations to Figure 6: The same correlation trends can be observed with majority voting. However, the most interesting aspect is that the vast majority of the ensembles here reside under the zero line. This means that majority voting with three-member ensembles tends on average to produce less accurate models. This corroborates the findings of [21]. We evaluate the same ensembles on five more validation datasets and verify that the trend observed on the validation dataset applies to natural corruptions. In addition, we compare the two diversity metrics to a simple validation accuracy metric. See Figure 7. Observations to Figure 7: Attribution-based diversity is better correlated here as well. These results serve as evidence that the diversity-accuracy trade-off is better for attribution than for prediction diversity. However, the metric of averaging the individual accuracies of the ensemble members is more strongly correlated with the ensemble improvement on corruptions. Next, Figure 8 presents the results of the second experiment on architectures created with NAS. We used the open-source framework BootstrapNAS [29,30] to create a weight-sharing super-network. The super-network is trained from an initial ResNet50 model. We then sample 11 subnetworks with different configurations but similar complexity by varying the width and depth of the CNN. Observations to Figure 8: Although the correlations seem strong for all metrics, the actual ensemble improvement is very low, i.e., less than 0.04%. To identify the effect of the complexity of the chosen attribution method, we evaluated the pair-wise diversity on the entire validation set on six subnetworks using the Saliency and Integrated Gradients attribution methods with 1, 2, 10, and 50 backpropagation passes. See Figure 9. The average correlation coefficient of the normalized diversity scores of all methods is 0.998. Using Saliency-based attribution is therefore sufficient. Enforcing diversity in homogeneous ensembles We perform a set of training experiments to enforce diversity in the ensembles through the loss function via the Negative Correlation Learning paradigm. We use ResNet50 for all ensemble members and evaluate different heuristics: a) independently trained members using cross-entropy as the loss in Equation 1; four different consensus approaches in GNCL using Equation 2: b) average, c) median, d) geometric mean, e) majority vote; f) GNCL with averaging consensus but masking the penalty term for incorrect classifications, i.e., $(h_i \neq y) \Rightarrow (\lambda = 0)$; and g) balancing a loss function between the team and the individual members (Equation 3). The optimization method in all cases was AdaBelief [49] for 100 epochs with a learning rate of 1e-3 decaying 10% every 30 epochs, epsilon of 1e-8, betas (0.9, 0.999), batch size of 64 and a λ factor of 0.2. In ImageNet classification, we empirically observed that bigger λ values in Equation 2 fail to learn. Results for these heuristics are presented in Figure 10. First attempt at enforcing attribution diversity We perform a first attempt to enforce attribution diversity using the loss of Equation 8 and the same optimization parameters used in GNCL. The computational overhead to calculate the attributions is 2x using the Saliency method. Empirically, we tried five different λ values: {10, 1, 0.1, 0.01, 0.001} but found training instabilities.
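Heuristic g), the balanced member/ensemble loss of Eq. 3, can be sketched for classification as follows; cross-entropy and a logit-averaged ensemble prediction are assumptions consistent with the setup described above.

```python
import torch
import torch.nn.functional as F

def balanced_loss(member_logits: torch.Tensor, y: torch.Tensor,
                  lam: float = 0.2) -> torch.Tensor:
    """Eq. 3: lam * ensemble loss + (1 - lam) * mean individual loss.
    member_logits: (M, batch, classes); y: (batch,) class indices."""
    f = member_logits.mean(dim=0)                       # ensemble logits
    ensemble_term = F.cross_entropy(f, y)
    individual_term = torch.stack(
        [F.cross_entropy(h, y) for h in member_logits]).mean()
    return lam * ensemble_term + (1.0 - lam) * individual_term
```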
The smallest λ value resulted in convergence up to epoch 21, reaching 63.7% top-1 accuracy. We believe that the penalty term of Eq. 8 is in conflict with the original loss, and it would be more appropriate to investigate a better penalty term than to optimize this hyper-parameter in future work. Table 2 shows three types of diversity (attribution, prediction, and intermediate representation) for three ensembles created through independently trained heterogeneous architectures, prediction diversity enforcement, and attribution diversity enforcement, with the following top-1 accuracies on the ImageNet validation dataset: 78.2%, 76.1%, and 63.7%. Diversity of NCL ensembles Prediction diversity. In Table 2, the Shannon equitability index metric (Eq. 5) is shown for correctly and incorrectly classified samples for three ensembles: attribution diversity (Eq. 8), prediction diversity (Eq. 4) and heterogeneous architectures, on all six datasets. The heterogeneous ensemble produces more diverse predictions in general. Attribution diversity. We present a few resulting attribution maps in Figure 11 for the NCL-based prediction-diversity enforcement, attribution-diversity enforcement (at epoch 21), and independently trained architectures. Observations to Figure 11: Independently trained heterogeneous architectures and attribution-diversity enforcement produce more diverse attribution maps than homogeneous models trained to have diverse prediction outcomes. Representation diversity. In Figure 12, we investigate the resulting diversity/similarity of the internal layers via CKA (Equation 6) of two ensemble members for three different diversity enforcing techniques. Observations to Figure 12: The CKA visualization reflects that the enforcement of attribution diversity produces less similarity in the layers than output diversity enforcement or independently trained heterogeneous architectures. Discussion and conclusions In this section, we discuss and interpret the results of Section 4 and summarize the answers to the research questions. Answers to research questions RQ1: In our experiments, it is observed that model architecture is more important for resiliency than model accuracy or size. On RQ2, we consistently observed that attribution-based diversity is more positively correlated with accuracy than prediction-based disagreement diversity. Answering RQ3, balancing the loss of the individual members and the ensemble provided a significant advantage in 3 out of 4 natural corruptions when compared to the prediction diversity enforcement variants. RQ4: Prediction diversity was higher for heterogeneous architectures trained independently than for NCL with prediction or attribution diversity. Attribution diversity is significantly lower when enforcing prediction diversity compared to heterogeneous architectures trained independently. Activation diversity is low at the last layers for both prediction and attribution diversity enforcement, while for heterogeneous architectures trained independently, the middle layers showed less diversity. Results discussions Advantage of Transformers vs CNNs: The superiority of the transformer architecture in terms of resilience against natural corruptions could be attributed to its capability to pay attention globally instead of locally as CNNs do; thus, transformers may be able to construct more useful intermediate representations that suffer less from perturbations. NAS ensembles in our experiments are not diverse enough: Only small improvements are obtained from ensembles created from sub-networks.
These are jointly trained as part of a single weight-sharing super-network and thus have little diversity. Larger search/design spaces should be explored in future work. Balancing member vs ensemble accuracy: Explicitly enforcing prediction diversity is outperformed by implicit enforcement through the balance of ensemble and individual accuracy (Equation 3). This could be due to the use of two less conflicting objectives than prediction diversity. Diversity-accuracy trade-off improvement: In contrast to the disagreement metric, attribution diversity does not require models to deviate from a correct prediction. The correlation is, however, not very strong, as models tend to find similar features for prediction, and implicit attribution deviations may be the product of imperfect learning. The enforcement of attribution diversity with the proposed loss proved, however, to be insufficient, and better heuristics need to be explored. Diverse architectures produced more diversity than NCL-based methods: An ensemble selected from a combination of independently trained heterogeneous architectures and training approaches resulted in higher levels of diversity in prediction, attribution, and intermediate layers. However, mixing different architectures does not consistently produce good ensembles, as observed in the many ensembles under the zero line in Figure 7: a good model may not benefit from ensembling with a less good one if their common failure modes are not negatively correlated. Conclusions and next steps In this study, we explored different approaches to measure and enforce diversity in ensembles and evaluated their impact on resiliency to natural data corruptions. The key takeaways are: 1) model architecture is more important for resiliency than model size or model accuracy, 2) attribution-based diversity is less negatively correlated to the ensemble accuracy than prediction-based diversity, 3) a balanced loss function of individual and ensemble accuracy creates more resilient ensembles for natural image corruptions, and 4) architecture diversity produces more diversity in all explored diversity metrics: predictions, attributions, and activations. In addition, other valuable findings are: a) Saliency attribution can be sufficient to measure input attribution diversity, b) ensembles created from models of similar complexity discovered by weight-sharing Neural Architecture Search barely provided any accuracy improvement in our experiments, and c) enforcing attribution-based diversity during training through a variance-based penalty term is not stable and needs further research.
Effects of working environments with minimum night lighting on night-shift nurses' fatigue and sleep, and patient safety Objective Nurses working rotating shifts often suffer from insomnia or similar disorders because exposure to room lighting at night inhibits melatonin secretion, resulting in a disturbed circadian rhythm. This study investigated whether dark room lighting would be preferable to brighter rooms in terms of (1) fatigue and sleepiness while working, (2) quality of sleep and (3) non-interference with work performance among nurses. Methods This study used a non-randomised open-label trial comparing night shifts under dark (110 lx) and bright (410 lx) room lighting, measured on the desk surface. A total of 20 nurses were enrolled in the trial from November 2015 to February 2016 at a hospital in Japan. All participants worked first with dark room lighting and then with bright room lighting. The participants completed a self-administered questionnaire at enrolment, which was collected at the end of the intervention. Results Fatigue and sleepiness were significantly higher in dark room lighting than in bright room conditions (p<0.05). There were no significant differences in sleep quality between the dark and well-lit conditions. We detected no significant differences in the number of reported incidents or accidents comparing the two types of environments. Conclusion Dark room lighting did not ameliorate fatigue and sleepiness during night shifts. Additionally, there was no evidence of improvement in sleep quality among nurses. These findings are important, however, in terms of managing hospital risk. BACKGROUND Currently, hospital nurses in Japan work in two shifts: in the daytime (08:45 to 17:15 hours at the study hospital; 8 hours 30 min with a break), and at night (16:45 to 09:15 the next day; 16 hours 30 min with a break). Night-shift nurses take a break or nap for 2 hours during these long working hours. Some hospitals employ nurses who work night shifts only, but nurses generally work both day and night shifts. In humans, a lighting environment of >300 lx inhibits melatonin secretion even with a short exposure time of 1-2 hours. [1][2][3] With an exposure time of more than a certain number of hours, a lighting environment of 120 lx or higher also inhibits melatonin secretion. 4 This inhibition has a harmful effect on the circadian rhythm of night-shift nurses working in the usual lighting environment, disrupting naps or sleep rhythms after the night shift. 5 Reportedly, consecutive shift changes between night and day disrupt circadian rhythms and lead to problems such as insomnia, which is one of the reasons why nurses leave their jobs. [6][7][8] Additionally, lifestyle factors (including exercise, sleep patterns, timing of meals and alcohol consumption) affect the circadian rhythms of nurses. It is important for hospital managers to help reduce nurse turnover caused by work environment issues. Humans temporarily stop feeling sleepy in an extremely well-lit environment of approximately 5000 lx. 8 Some reports state that short exposure to extremely bright light changes sleepiness in night-shift workers. [9][10][11] Temporary periods of wakefulness during the night may disrupt circadian rhythms. Since there is little scientific evidence on how minimum lighting at night influences nurses' health and work, we aimed to find an ideal solution that minimises disruption of nurses' circadian rhythms and maximises hospital safety management.
Study aims The authors hypothesised that disruption of shift workers' circadian rhythms could be prevented if night lighting was kept below 120 lx (hereafter, 'a dark environment'), a level that does not inhibit melatonin even with long-term exposure. [1][2][3][4] It should be noted that 120 lx complies with the Japanese regulations of the Industrial Safety and Health Act, which set 70 lx as the minimum requirement for work involving strenuous activities, and 100 lx for a hospital room. The objectives of this study were to investigate whether dark environments (1) reduce nurses' fatigue and sleepiness while working, (2) improve their quality of sleep and (3) reduce interference with work performance (malpractice/incidents/accidents). METHODS In this quasi-experimental study, participants received a control treatment, followed by a washout period, and then received the intervention. The study was conducted at a 430-bed general hospital that mainly provides acute care, located in a Japanese city with a population of approximately 330 000. The hospital was originally designed to use minimal lighting. Some nurses in the hospital complained that, when working a night shift, they had to go back and forth between dark patient rooms and much brighter work stations. All wards, therefore, except for the intensive care unit and emergency department, were constructed to have 'dark conditions' with only 110 lx in the work spaces. Dark conditions were defined as approximately 110 lx (colour temperature: 3500 K, colour rendering index (CRI): Ra85) on the desk at the staff station, while well-lit conditions were defined as approximately 410 lx (colour temperature: 3500 K, CRI: Ra85) on the desk, with additional ceiling lights used to create a bright environment during the second half of the study period. During daytime, well-lit conditions of approximately 630 lx were maintained, while the dark condition was maintained at 600 lx; these were determined to be equivalent to each other. Nurses who worked rotating shifts in a general ward and who regularly had a night shift about five times per month, with more frequent day shifts, participated in the study. We calculated the sample size by assuming a tolerance of 18%, a CI of 95% and a response rate of 60%, finally arriving at a sample size of 28 participants. Thirty participants would potentially provide statistically significant differences in the main outcome indicators between the two conditions. The study measured the impact of the night shift with or without dark lighting through a questionnaire completed on the last day of a run of consecutive day shifts and on the first day shift after a night shift. Over the 4-month study period, the participants underwent two phases, one for each lighting condition. The dark condition phase was conducted from 1 November to 31 December 2015, during a period when the ward was routinely used under this condition. The well-lit phase was conducted from 1 January to 29 February 2016. This time period was intentionally selected because of its minimal daylight. Study period Questionnaires were collected from the participants during the latter half of the second month of each condition. The latter half of the second month for the dark condition ran from 17 to 30 November 2015, and that for the well-lit condition ran from 16 to 29 February 2016. Reports of medical treatment problems related to work performance were analysed for the entire month of exposure.
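As a rough illustration, the stated parameters can be plugged into the standard sample-size formula for estimating a proportion. The assumed proportion p = 0.5 and the exact handling of the 60% response rate are our guesses, since the paper does not spell out its procedure, so this sketch reproduces the order of magnitude rather than the exact figure of 28.

```python
# Hypothetical reconstruction of the sample-size calculation:
# n = z^2 * p * (1 - p) / d^2, with d = 0.18 (tolerance), z = 1.96 (95% CI).
from math import ceil

z, p, d = 1.96, 0.5, 0.18
n_required = z ** 2 * p * (1 - p) / d ** 2   # about 29.6 completed responses
print(ceil(n_required))                      # -> 30, close to the stated 28-30
```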
Reports for the dark conditions were recorded for the period from 1 to 30 November 2015, and those for the well-lit conditions were taken from the period from 1 to 29 February 2016. Questionnaires A self-administered questionnaire was given to the participants at enrolment and collected at the end of the intervention. The questions about fatigue and sleepiness were taken from 'Subjective Symptoms (2002)', developed by the Industrial Fatigue Research Committee of the Japan Association of Industrial Health. 12 This assessment is performed before and after work for comparison, with higher scores indicating stronger fatigue or sleepiness. The questions about sleep quality, assessed at the time of waking, were taken from the 'Oguri-Shirakawa-Azumi Sleep Inventory MA Version', 13 which is a self-assessment of sleep quality. The questions about 'sleep quality' covered five factors: sleepiness on waking, sleep induction and maintenance, dreaming, recovery from fatigue and sleep duration. These five factors comprised 16 questions. Higher scores indicate better sleep quality. The questionnaire could be completed at any time during and after the shift. For the night shift, a question about 'feeling sleepy on waking after a nap' was added. Questions about malpractice/incidents/accidents were quantitatively compared by severity level based on all the reports compiled at this hospital. From these reports, we examined whether there was any mention of the effect of the lighting environment, such as light intensity or visibility. The severity was classified into eight levels, ranging from a near-miss (level 0) to death (level 5). Levels 3 and 4 were further divided into 'a' and 'b'; 'b' was more severe than 'a'. Level 3b or higher was defined as an accident. 14 Level 3b was defined as 'a temporary injury of severe degree, for which extensive treatment was needed (severe change in vital signs, ventilator, surgery, extension of hospital stays, hospital admission as an outpatient, bone fracture, etc)'. Thus, levels above 3a included all serious issues. Data collection and analyses were performed using this classification. The analysis used the final severity level decided by an independent clinical safety committee of the hospital and was not based on the report itself. Analysis method A t-test was used to evaluate the difference in average values. Fisher's exact test was used to determine differences in numbers by group. Statistical significance was set at p<0.05. We compared results between the two conditions, and results before and after night shifts under the same conditions. An analysis of incident and accident reports was also performed. It examined the results in a 2×2 table divided by condition and severity for both day shifts and night shifts. Furthermore, regarding the total number of reports and comparisons between conditions, the total numbers of day and night shifts in the surveyed ward were also compared with those in other wards over the same periods. Patient and public involvement Participants of this study were nurses working at a general hospital. Although patients at the hospital were not involved, participants were fully informed of the significance of this study and the possible benefits its results could bring them. Participant demographics The ward had 30 beds, and there were 30 rotating-shift nurses. Of the 30 nurses who had the study explained to them in the ward, 20 were enrolled in the study after providing informed consent.
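For concreteness, the two tests named in the analysis method can be sketched with SciPy as follows; all numbers are hypothetical placeholders, not the study's data, and the unpaired t-test is our assumption given that the two conditions had different sets of respondents.

```python
# Sketch of the statistical tests described above, using SciPy.
from scipy import stats

# Difference in mean symptom scores between conditions (hypothetical values).
dark_scores   = [2.7, 1.9, 2.1, 2.4, 3.0]
bright_scores = [1.7, 1.3, 1.8, 2.0]
t_stat, p_value = stats.ttest_ind(dark_scores, bright_scores)

# Fisher's exact test on a 2x2 table of report counts
# (rows: lighting condition; columns: day shift vs night shift).
table = [[3, 4],
         [1, 4]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"t-test p={p_value:.3f}, Fisher p={p_fisher:.3f}")
```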
One participant withdrew from the study after being transferred to another ward. Another participant, who was transferred from another ward in the hospital, joined in the middle of the study period. Of the 20 participants, 19 completed questionnaires for the dark exposure phase. The data for two participants were excluded, as one used sleeping pills and the other failed to respond to some questions. Data of the remaining 17 subjects were analysed. For the well-lit condition, data from 10 participants were used for the analysis. Nine did not respond to the questionnaire for the well-lit condition. There were no differences in age, work experience or corrected eyesight (table 1). Fatigue and sleepiness Questions on 'subjective symptoms' related to fatigue and sleepiness covered five factors: instability, uneasiness, grogginess, lethargy and drowsiness. Table 2 shows the means and SDs by factor. Overall mean scores were higher for the dark condition than for the well-lit condition, indicating that nurses experienced more subjective symptoms in the dark conditions. The items that were statistically significantly higher for the dark conditions compared with well-lit conditions were drowsiness before work on the last day shift before a night shift (2.71±1.19 vs 1.73±1.00), and lethargy on the first day after the night shift (1.96±0.87 vs 1.32±0.48; p<0.05). For grogginess, lethargy and drowsiness, the dark condition showed a trend towards higher scores than the well-lit condition (p<0.1). On a night-shift day, no significant differences were found for any symptom. Table 3 shows the self-assessment of sleep quality. Sleep induction and maintenance showed a trend towards higher scores under well-lit conditions on the last day shift before a night shift than under dark conditions (well-lit conditions, 52.13±9.94 vs dark conditions, 46.33±7.84; p<0.1). However, there were no significant differences in any items between the dark and well-lit conditions. Table 4 lists the numbers of incident and accident reports. These were divided into accidents (3b or higher) and incidents (3a or lower). The percentages of the number of reports for the whole hospital were compared with those during the day and night shifts, but no significant differences were found. Regarding the ratio of the total number of reports for day versus night shifts, there was no significant difference in the study ward between the different conditions. During the well-lit condition, however, the study ward had significantly more night-shift reports than the whole hospital excluding the study ward (p<0.05). Regarding medical treatment problems, there were no light-intensity-related reports during the entire study period. DISCUSSION No significant differences were observed in symptoms during a night shift under the dark conditions. A previous study, however, pointed to satisfaction with lighting and ease of concentration at work, 15 as well as to workplace brightness and glare in the work area caused by lighting fixtures. Differences in illumination relative to the background, and in the brightness entering the visual field, might affect symptoms at work; further analysis (such as luminance analysis) will be required in the future to ascertain this. Regarding feelings about sleep quality, no significant differences were found in any items between the dark and well-lit conditions.
DISCUSSION No significant differences were observed in symptoms during a night shift in the dark conditions. A previous study, however, pointed out issues of satisfaction with the lighting and ease of concentration at work, 15 as well as brightness at work and glare in the work area caused by lighting fixtures. The difference in illumination from the background, and in the brightness entering the visual field, might affect symptoms at work; further analysis (such as luminance analysis) will be required in the future to ascertain this. Regarding feelings about sleep quality, no significant differences were found in any items between the dark and well-lit conditions. Our results did not clearly indicate whether a dark environment prevents disruption of the circadian rhythm in night-shift nurses. [1][2][3][4] A recent intervention study 16 reported a positive effect on sleep among nurses by adjusting light exposure using both a portable light box (for 40 min of exposure to bright light before the night shift) and sunglasses (for avoiding bright light after the night shift). Environmental lighting improvement should be combined with individual-level, subtle adjustments to obtain a clearer effect. 17 This study found that differences in the lighting environment did not cause problems in work performance. For the dark condition, the ratio of incidents during the day shift to those occurring during the night shift was 3:4, while for the well-lit condition the ratio was 1:4. The number of night-shift incidents was significantly larger in well-lit conditions. A previous study reported that a bright lighting environment for ICU nurses working night shifts reduced sleepiness but increased the number of psychomotor errors. 16 The study ward at the hospital was designed so that when the nurses' work space was bright, patient rooms would also be bright. At nighttime after 21:00 hours, however, the patient rooms become dark to help patients sleep well. Therefore, bright light coming from a well-lit working station might have affected patients' sleeping conditions, causing a relatively high frequency of incidents like slipping from the bed. Study limitation The available sample size was insufficient to detect any statistical significance. It could also be true that there was no significant effect of the dark condition during the night shift. There may have been confounding factors regarding fatigue and feeling sleepy (such as exposure to sunlight, exercise and alcohol consumption) for which information was not collected. Other missing information included the patients' medical conditions, new hospitalisation cases during the night shifts and the number of empty beds. Since this study was conducted at only one institution, we note that it may not be fully representative or generalisable. The results obtained from the present study merely suggest the importance of monitoring the lighting environment in hospitals and conducting further studies. Despite the major limitation of sample size, our trial suggests a way of evaluating the working environment in hospitals using a quasi-experimental design with minimal interference to routine work. CONCLUSION Caring for the health of night-shift workers to prevent rapid turnover of staff due to unfavourable work environments is important for hospital management as well as for patient safety. The objectives of this study were to investigate whether dark environments bring improvement in terms of (1) fatigue and sleepiness at work, (2) quality of sleep and (3) unhindered work performance (no malpractice/incidents/accidents) among nurses. Among these three variables, we could not find significant results for (1) and (2), partly due to the small sample size. However, we showed that the dark lighting environment did not interfere with work performance. Although a further large-scale study with more rigorous data collection on lighting should be conducted, this study suggests that lower lighting levels on night shifts are acceptable for nurses' work environment and safety management in general wards of hospitals. To our knowledge, minimum lighting has not yet been used in hospitals.
The study site continues to employ the dark condition, and a few other hospitals now follow this method, which may enable a multisite evaluation in the future.
2022-01-20T06:24:03.921Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "4931d7d1ad9402082670309e95bdacadc1441095", "oa_license": "CCBYNC", "oa_url": "https://bmjopenquality.bmj.com/content/bmjqir/11/1/e001638.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9f865f1452d2d63b95368d9358abfac1307b424", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266425366
pes2o/s2orc
v3-fos-license
Do sociodemographic factors influence the levels of health and oral literacy? A cross-sectional study Background Oral health literacy (OHL) has gained importance in dental literature, and its relationship with oral health status and its association with health literacy (HL) have been reported. Thus, an association between the levels of HL and OHL could be expected. This study aimed to assess the levels of HL and OHL according to sociodemographic factors and to explore a possible association between HL and OHL. Methods The European Health Literacy Survey and the Oral Health Literacy Adults Questionnaire were applied to a convenience sample of Portuguese individuals. Sociodemographic factors such as sex, age, schooling level of the participants and their parents, and whether the participants were professionals or students in the health field were also assessed. To analyze the data, the Kruskal-Wallis and Mann-Whitney U tests were used to compare sociodemographic variables and the levels of literacy in general and oral health. The Spearman correlation test assessed the correlation between the levels of HL and OHL. Results HL results showed that 45.1% of the volunteers were considered at a "problematic level" and 10.3% at an "excellent level". However, 75% presented an adequate level of OHL. Regarding the levels of HL in each sociodemographic variable, significantly higher frequencies of the "excellent level" were found in health professionals and students when compared with participants not related to the health area (p < 0.001). Comparisons between the levels of OHL in each sociodemographic variable showed significant differences regarding sex (p < 0.05), age (p < 0.001), levels of schooling of the participants and their parents (p < 0.009 and p < 0.001) and relationship with the health field (p < 0.001). A significant positive but weak correlation was found between HL and OHL (p < 0.001). Conclusions HL and OHL levels are associated and could be influenced by sociodemographic factors. Background Numerous definitions of health literacy (HL) have been proposed [1,2]. Notwithstanding, almost all definitions embrace the same elements, which describe a set of observable literacy skills that allow individuals to obtain, understand, appraise, and use information to make decisions and take actions that will influence health status, and which vary from individual to individual [3,4]. Therefore, limited HL represents an important challenge for health policies and practices across the world, since poor levels of health literacy make it difficult to read, understand and apply health information (e.g., wording on medication bottles, discharge instructions, informed consent documents, insurance applications, and health education materials) [3]. In the United States, the 2003 National Assessment of Adult Literacy (NAAL) reported that 36% of the U.S. adult population has basic or below-basic HL [3]. On the other hand, the European Health Literacy Project (HLS-EU), which consisted of nine organizations from eight European Union (EU) member states, reported 12.4% inadequate HL, with substantial differences between member states [2]. Taken together, these results point out the existence of specific vulnerable groups which are influenced by sociodemographic variables [4]. In addition, the HLS-EU showed that financial deprivation, social status, education, age, and gender are predictors of limited HL [2].
In this scenario, oral health literacy (OHL) [5] has gained importance in dental literature in the last decade [6]. Studies have concluded that OHL is crucial in diminishing oral health disparities and in promoting oral health [7]. On the other hand, populations with limited OHL have an elevated risk of developing oral diseases [6], problems with the use of preventive services, poor adherence to medical instructions and self-management skills, higher health care costs and higher mortality risks [8,9]. Furthermore, the Carolina Oral Health Literacy (COHL) study, together with other reports, demonstrated the strong influence of OHL on health behaviors and outcomes [10][11][12]. Notwithstanding, a systematic review assessing the scientific evidence regarding the association between OHL and oral conditions concluded that the evidence is weak and that this association remains unsubstantiated, mainly because of the low quality of the available studies. However, it has been pointed out that health-related decisions made by people influence their health, which is also influenced by health literacy and modulated by sociodemographic factors. It has also been explained that a relationship between OHL and health status exists [13], and that OHL is associated with oral health status [6]. Therefore, it could be hypothesized that there is also an association between the levels of HL and OHL, since health determinants like income, education and personal characteristics influence health behaviors and oral health outcomes [6]. Thus, the aim of this study was to assess the levels of HL and OHL according to sociodemographic factors and to explore a possible association between the degrees of HL and OHL. Methods This research was approved by the Research Ethics Committee of Egas Moniz School of Health and Science, Almada, Portugal (N°: 1078) and conducted in accordance with the ethical principles of the Declaration of Helsinki. All individuals were informed about the research purposes and signed a voluntary informed consent form. This observational cross-sectional study was conducted following the recommendations of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [14]. The convenience sample was obtained from Portuguese individuals over 16 years old. Data collection was conducted from May 24 to June 21, 2022, using an online form via Google Forms (Google; Mountain View, CA, USA). Briefly, the first page of the online questionnaire presented the Informed Consent Form, which described the research aims and potential risks and benefits. Volunteers who accepted to participate in the study were required to digitally sign the Informed Consent Form before proceeding to fill out the structured questionnaires. The average time to fill out the entire questionnaire was approximately 12 min. Participants were invited to participate in the study by email and WhatsApp®, from which they received a link to access the complete online form.
Health, oral health levels and sociodemographic factors European Health Literacy Survey (HLS-EU-PT-Q16 short version) The HLS-EU-PT-Q16 consists of 16 questions based on 3 domains, embracing health care, health promotion and disease prevention. Using a 4-point scale, the survey rates the degree of difficulty in carrying out tasks related to each domain. A score is then obtained by summing up the answers (0 to 50), providing a standardized metric of the level of health literacy across four levels, depending on the score obtained: inadequate (0 to 25), problematic (>25 to 33), sufficient (>33 to 42) and excellent (>42 to 50) [15,16]. Oral Health Literacy Adults Questionnaire (OHL-AQ) The OHL-AQ is composed of 17 items divided into 4 different sections: reading comprehension, numeracy, active listening, and decision-making. The total score (0 to 17) is obtained through the sum of all the questions answered correctly, each of which is given a score of one. The total score is categorized into three different levels: inadequate (0 to 9), marginal (10 to 11) and adequate (12 to 17) [17,18]. Sociodemographic factors To obtain a detailed characterization of the studied sample, the following sociodemographic characteristics were assessed: sex, age, schooling level of the participants and their parents, and whether the participants were health professionals or students in the health field. Statistical analysis The data collected on the digital platform were exported and tabulated. Descriptive statistics were performed to identify frequencies and distributions of the outcomes. Since the data presented no normal distribution, the Kruskal-Wallis and Mann-Whitney U tests were used to compare sociodemographic variables and the levels of literacy in general and oral health. The correlation between the levels of literacy in health and in oral health was assessed by the Spearman correlation test. Analyses were performed with SPSS software, version 28.0 (IBM Statistical Package for Social Sciences) with a 5% significance level.
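To make the two scoring rules concrete, the sketch below encodes the cut-offs described above; the function names are ours, and assigning scores that fall exactly on a cut point to the lower level follows the ranges as written.

```python
def hl_level(score: float) -> str:
    """HLS-EU-PT-Q16 index (0-50) -> health literacy level."""
    if score <= 25:
        return "inadequate"
    if score <= 33:
        return "problematic"
    if score <= 42:
        return "sufficient"
    return "excellent"

def ohl_level(score: int) -> str:
    """OHL-AQ total score (0-17, one point per correct answer) -> OHL level."""
    if score <= 9:
        return "inadequate"
    if score <= 11:
        return "marginal"
    return "adequate"

print(hl_level(30), ohl_level(13))  # -> problematic adequate
```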
Results A total of 205 participants' answers were obtained in our study. However, since the HLS-EU-PT-Q16 questionnaire is considered valid only when at least 80% of its questions have been answered, our study considered a total of 204 valid questionnaires. The mean age of the studied population was 30.6 (±6.3). Most of the participants included were females, and the group related to the health area was composed mainly of students (70%). Participants' distribution according to sociodemographic factors is shown in Table 1. Regarding the levels of health literacy, 45.1% of the volunteers were considered at a "problematic level", 29.9% at a "sufficient level", 14.7% at an "inadequate level", and 10.3% at an "excellent level" (Fig. 1). On the other hand, most of the participants (75%) presented an adequate level of oral health literacy, while 25% presented an inadequate level (Fig. 2). The comparisons between the levels of health literacy in each sociodemographic variable (Table 2) showed no significant differences regarding sex (p > 0.91), age (p > 0.94), schooling level (p > 0.24) and parents' schooling level (p > 0.19). However, professionals or students in the health area showed higher frequencies of the "excellent level" of health literacy when compared with participants not related to the health area (p < 0.001). In addition, participants not related to the health area showed greater frequencies of the "inadequate level" when compared with professionals or students in the health area (p < 0.001). On the other hand, the comparisons between the levels of oral health literacy in each sociodemographic variable (Table 3) showed significant differences regarding sex, with females presenting higher frequencies of the "adequate level" and lower frequencies of the "inadequate level" (p < 0.05). With participants divided into age sub-groups, higher frequencies of the "adequate level" were found in all sub-groups, and younger participants presented higher values of the "adequate level" when compared with the other sub-groups (p < 0.001). In the same way, higher schooling levels of the participants and their parents were associated with higher levels of adequate oral health knowledge (p < 0.009 and p < 0.001). Moreover, professionals or students in the health area showed higher values of the "adequate level" when compared with participants not related to the health area (p < 0.001). Considering the correlation between the levels of literacy in health and in oral health (Table 4), a significant positive but weak correlation was found between these two variables (p < 0.001).
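The correlation reported in Table 4 can be computed with the Spearman test as in the sketch below; the paired scores shown are invented examples, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented paired scores: HLS-EU-PT-Q16 index (0-50) and OHL-AQ total (0-17).
hl = np.array([28, 35, 22, 41, 30, 45, 26, 33, 38, 24])
ohl = np.array([11, 13, 9, 15, 12, 16, 10, 12, 14, 9])

rho, p_value = spearmanr(hl, ohl)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # positive rho indicates an association
```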
Discussion In spite of the growing attention being paid to health and oral health literacy among European health policymakers, data regarding the status of these variables in Europe remain scarce. Our study found that 14.7% and 25% of the total surveyed population had an inadequate level of HL and OHL, respectively. Also, our study demonstrated that the participants' relationship with the health area increased the frequencies of the "excellent level" of HL, and that factors like sex, age, schooling levels and relationship with the health area raised the frequencies of the "adequate level" of OHL. Furthermore, a significant positive correlation was found between the levels of HL and OHL. It is noteworthy that our results regarding inadequate and problematic levels of HL (59.8%) are in line with a previous study in the same population (Portuguese participants), which reported that 61% of the assessed sample presented low levels of HL [16], and differ from a study reporting only 30% with low levels of HL [19]. The latter study included participants from other countries (Brazil and Angola), which could explain the discordance with our study [19]. The HLS-EU reported a frequency of 12.4% of the "inadequate level" of HL as the mean of the total studied sample, which is in line with our study, which reported 14.7%. However, considering the frequencies reported by each of the assessed countries in the mentioned study, our results presented higher frequencies of the "inadequate level" of HL when compared with Ireland, the Netherlands, Poland, Spain and Germany, and lower frequencies when compared with Austria and Bulgaria [2]. Therefore, the considerable proportion of people with inadequate health literacy implies that the health literacy deficit is a challenge for public health in European countries. Differences in health programs, health policies and economic conditions could explain these differences. Regarding the levels of OHL, 75% of our sample presented an "adequate level", which is in contrast with Almeida et al. (2022) [20], who reported lower frequencies of "adequate levels" of OHL. Certainly, the fact that the mentioned study assessed a population of a different country than ours, with lower levels of schooling and income, affected the results. On the other hand, our results are in line with Mendes (2019) [18] and Flynn et al. (2016) [5], which also reported higher frequencies of individuals with an "adequate level" of OHL. It is of main importance to know the levels of OHL in a population, since studies have concluded that low levels of OHL are associated with poor oral health knowledge, which may influence self-care behavior, the capacity to understand health instructions, or the perceived importance of preventive dental procedures [21][22][23][24][25]. In addition, a higher prevalence of dental caries, periodontal disease and extracted teeth has been reported in individuals with low OHL [26,27]. However, most of these results come from studies with methodological drawbacks, which could call the validity of the results into question [28]. The HLS-EU has reported that sociodemographic factors like social status, education, age, and sex could influence low levels of HL. In addition, the authors concluded that financial deprivation is the strongest predictor of low HL [2]. While our study found that the relationship of the participants with the health area influenced the levels of HL, meaning that health professionals or students present higher levels of HL, as expected, it also showed that sex, age, the education level of the participants and their parents, and the relationship with the health area could influence the levels of OHL. In addition, our study found that women presented higher levels of OHL. This was an expected finding, since the literature has concluded that women seek dental treatment more often than men [29]. Regarding the age of our sample, the youngest participants presented higher frequencies of adequate OHL; this was also an expected result, since nowadays there are more programs for the promotion of oral health in Portugal with a special focus on the 16-24 age group, which may have led to greater awareness and education in the context of oral health. Considering the results for educational levels and the relationship with the health area, greater levels of education and working or studying in the health area allow individuals to acquire more knowledge about oral health and to better understand preventive oral health instructions and procedures, which may explain the higher levels of OHL. As a final remark, as far as we know, our study is the first to demonstrate a positive, albeit weak, correlation between both levels of literacy (HL and OHL), meaning that the levels of each one could significantly affect the levels of the other. In this direction, Macek et al. (2010) [13] provided the rationale for including a measure of conceptual health knowledge in future investigations of OHL, presented a new conceptualization of the pathway between HL and oral health, and highlighted the importance of assessing HL in dental care.
Although this research has obtained important answers on HL and OHL, some limitations should be considered. First, it is important to note that our study used subjective tools to assess HL and OHL, and no objective items are included in these tools to measure functional HL and OHL. Second, the data were collected from a small non-probabilistic convenience sample, in which most of the participants in the group related to the health area were students. Therefore, our results should be analyzed with caution and not be extrapolated to other samples, since the above-mentioned factors certainly influenced the results of the present study. Third, the cross-sectional design of this study prevents any elaboration on cause and effect. Finally, the authors strongly recommend that future studies assess whether HL is associated with more detailed measures of oral health care utilization, in studies with larger sample sizes and non-convenience populations. Conclusions It can be concluded that: -Low levels of HL and high levels of OHL are prevalent in the studied population. -Sociodemographic factors could influence the levels of HL and OHL. -HL and OHL levels are associated. Fig. 1 General Health Literacy frequencies in the studied population Table 1 Demographic characteristics of the studied sample Table 2 Distribution (%) and comparisons of General Health Literacy levels considering different sociodemographic variables Table 3 Distribution and comparisons of Oral Health Literacy levels considering different sociodemographic variables Table 4 Correlation between General Health and Oral Health Literacy scores
2023-12-22T05:13:53.110Z
2023-12-20T00:00:00.000
{ "year": 2023, "sha1": "bb9197e8fc864c896f6fed5bdedd6e6c6c9bc0bd", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "bb9197e8fc864c896f6fed5bdedd6e6c6c9bc0bd", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
251505031
pes2o/s2orc
v3-fos-license
A preliminary study on the intelligent model of k-nearest neighbor for agarwood oil quality grading Essential oils extracted from trees have various uses, such as perfumes, incense, aromatherapy and traditional medicine, which increase their popularity in the global market. In Malaysia, the recognition system for identifying essential oil quality still does not reach a proper standard, since oils are mostly graded using human sensory evaluation. However, previous researchers have developed modern techniques to determine the quality of essential oils by analysing their chemical compounds. Agarwood essential oil was chosen for the proposed integrated intelligent models with the implementation of k-nearest neighbor (k-NN), due to its high demand and its status as an expensive natural raw resource worldwide. k-NN with the Euclidean distance metric had better performance in terms of its confusion matrix, sensitivity, precision, accuracy and specificity. This paper presents an overview of essential oils as well as their previous analysis techniques. The review of k-NN is done to show, based on its performance, that the technique is suitable for future research studies. INTRODUCTION Essential oil is a commodity that captures volatile aromatic essences extracted from different parts of trees. According to Medical News Today, essential oil therapy is also one of the alternative medicines for psychological treatment. It is commonly used in the practice of aromatherapy [1], [2]. Recently, it has been valued in many cultures where it is used to treat various illnesses and as perfumery and incense for religious and spiritual ceremonies [3]-[5]. Currently, essential oil quality is measured and graded manually using sensory evaluation based on physical properties. Based on human perception and experience, an essential oil of the highest grade has a lot of resin, a dark oil color, a strong odor and a long-lasting aroma [3], [6], [7].
However, the sensory evaluation method is somewhat inaccurate, since different people may come to different perceptions and decisions with the technique. There is no guarantee that grading using human sensory evaluation can secure the purity or quality of the essential oils. The trained human grader technique has a significant disadvantage in terms of objectivity and repeatability when dealing with a bulk of samples at once in a continuous process, making it highly labor-intensive and time-consuming [8]-[10]. As a result, several methods have been proposed and implemented to verify essential oil quality using intelligent techniques [8], [9], [11]-[17]. Agarwood oil is commonly used for medical purposes, rituals and fragrances. In today's modern society, agarwood oil has become a hot topic among customers due to its strong odor and high content of resins. Review and summary of extraction and analysis methods: - Alcohol-soluble extract: The alcohol-soluble extract method is used to extract the essential oil, while GC-MS is used to analyze the chemical compounds in agarwood oil. Results: The yield was shown to be less than 10%, reduced from 15%. The high-quality agarwood proved to have over 66.47% of 2-[2-(4-methoxyphenyl)ethyl] chromone and 2-(2-phenylethyl) chromone [18]. - Alcohol-soluble extract: The researcher used a 1 g agarwood sample with ether added. The solution was filtered and treated with low-temperature ultrasound for 30 min. The agarwood sample underwent the alcohol-soluble extraction process and was analyzed by GC-MS. Results: The fungi T. marchalianum, S. podzolica, H. grisea, G. butleri and C. bulbillosum were the species with high oil content and high quality [33]. - Steam distillation: The essential oil was extracted from A. sinensis leaves using steam distillation and separated using capillary column chromatography. The agar pieces are chipped into very small pieces and placed in water for one to five weeks. The fermented agar chips were then taken to a distillation plant to extract the oil. Results: The low-quality agarwood was used to make light incense after grinding [19]. - Distillation: In a 1:5 (weight/volume) ratio, agarwood chips and water were fed to the distiller. It was left overnight to ensure that all of the agarwood chips were wet and completely soaked in the water. Results: The chips demonstrated structural degradation due to long-term heat exposure during the water distillation process. Fourteen days were required to obtain the maximum oil yield for soaked agarwood chips [34]. - Hydro distillation: A study investigated agarwood oil extracted by hydro distillation. Results: Fatty acids, hydrocarbon sesquiterpenes, oxygenated sesquiterpenes and monoterpenes were the main chemical compounds produced by hydro distillation. All of these chemical profiles in the agarwood oil samples contribute to the sweetness of the fragrant wood aroma and the unique odor of the oil. The agarwood extraction also shows a high percentage of yield, even though it is time- and energy-consuming [35]. - Supercritical carbon dioxide: GC-MS technology is used to determine the agarwood metabolite composition generated either naturally or artificially, with an emphasis on the volatile components of agarwood, particularly sesquiterpene derivatives from essential oils. Results: The substituted PEC derivative agarotetrol has been proven to have a favourable connection with agarwood quality and is utilised as a biomarker to evaluate agarwood quality [31].
Quality grading system of essential oils Essential oil compounds are susceptible to high temperatures and degrade, which affects their quality. Hence, liquid extraction with a solvent, rather than distillation, is a suitable process for preserving those compound properties [24]. The current method of grading oil quality commonly uses sensory evaluation, which refers to physical appearance as judged by consumer perception: color, odor and high fixative quality. In other words, there is still no approved, consistent oil grading standard practised in the industry [4], [12]. Quality differences are quite impossible to spot with the naked eye. Grading the essential oil according to its chemical properties is one of the advanced techniques that has been introduced to counter the manual sensory evaluation technique [36]. A grading system for agarwood oil based on data produced using GC-MS and graphical analysis was used in [21], [35], [37]. The general flow of the data analysis is illustrated in Figure 1 [35]. The missing values ratio was used for dimension reduction, while a correlation matrix was computed to select the data with the fewest missing values. Samples were removed if they had 75% or more missing values. The results showed that only 19 compounds were left as the best data out of the 106 compounds of the 22 agarwood oil samples.
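A minimal pandas sketch of the missing-values-ratio reduction just described follows; the file name is illustrative, and interpreting the 75% rule as dropping compound columns (consistent with 19 of 106 compounds being retained) is our assumption.

```python
import pandas as pd

# Hypothetical GC-MS table: rows = 22 agarwood oil samples,
# columns = 106 compound abundances (%); the file name is illustrative.
df = pd.read_csv("agarwood_gcms.csv")

# Missing-values ratio per compound; drop compounds with 75% or more missing.
missing_ratio = df.isna().mean()
reduced = df.loc[:, missing_ratio < 0.75]
print(f"{reduced.shape[1]} compounds retained out of {df.shape[1]}")
```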
The Z-score technique and GC-MS have also been used in a research study for data transformation and normalization. That study consisted of 11 samples of Kaffir lime oils from various Malaysian products [3]. The application of the Z-score technique was found to have the advantages of being sensitive to data outliers as well as being robust and effective in the normalization process [11]. Six significant compounds were identified in the Kaffir lime oil samples: limonene, citronellal, β-pinene, terpine-4-ol, E-caryophyllene and terpinolene [11]. These compounds can be used as a guideline to classify Kaffir lime oil into two qualities, high and low [11]. A modern grading system for agarwood oil using a linear regression model fed into a feedforward neural network (FFNN) was discussed in [8]. The best regression line over the hidden neurons is identified to discriminate the quality of Gaharu oil from high to low. The Levenberg-Marquardt (LM) algorithm was implemented for training the dataset because it is the most commonly used optimization algorithm in many research studies [13], [38], [39]. The findings showed a best-fit linear regression line with an R value of exactly 1 at 2 hidden neurons, the lowest number compared with the other configurations [40]. Regression analysis is used to study the interdependence of multiple variables, while stepwise regression analysis is frequently used to discover the ideal appropriate regression model and study the interdependence of variables in more depth [41], [42]. In one research study, the performances of the k-NN and artificial neural network (ANN) intelligent techniques were measured [8]. The inputs and outputs measured in the research were the abundances of significant chemical compounds (%) and the agarwood oil qualities, high and low. Sensitivity, precision, the confusion matrix and specificity were used to test the training and testing performance of the k-NN classification system. Based on the results, the accuracy of the k-NN model was in the range of 81-86%, while the ANN model showed an excellent accuracy of 100%. These high accuracies are a solid reason to develop the technique further into an intelligent application for agarwood oil quality classification [43]. K-nearest neighbor k-NN is a non-parametric classification algorithm [44]. The k-NN classifier model is widely implemented as one of the best-known algorithms and is easy to use for solving classification problems as well as identifying samples [13], [15], [45]. The algorithm requires a 'k' value to find the closest data based on distance computation and to determine the class of the new data. It works by looking for the class of the 'k' nearest training data points related to the objects in the new (test) data [23], [46]. In artificial intelligence, machine learning permits a machine to evolve through a learning process. There are two types of machine learning: unsupervised and supervised learning. k-NN falls into the supervised learning category, where labeled datasets are used [15]. The important parameters observed when analysing with the k-NN method are the distance metric and the classification rule. In one study, the k-NN algorithm was used to classify breast cancer [44]. To decide how to classify a sample, different values of 'k', distances (Euclidean and Manhattan) and rules (majority, consensus and random) were evaluated to assess the overall k-NN performance [44], [47]. The k-NN intelligent model has been applied in various fields, such as medical diagnosis, grading essential oils, fake incense detection and others [30], [48], [49]. The classifier has successfully classified olive oil into the correct groups with a 'k' value equal to 5. The datasets used the Euclidean distance, and the results showed that the k-NN model performed well on the tested classification problems between different quality types of olive oil, with only a 5% difference in overall accuracy compared with the SVM method [15]. Agarwood oil classification using the k-NN method has been done with high accuracy in the range of 81-86% [13]. The Euclidean distance was also applied in that study. The high-accuracy results also indicate the opportunity to develop the technique further into a dedicated intelligent agarwood oil quality grading application [19]. Criteria of k-NN as a good classifier A review has been done on k-NN. There are various implementations of distance metrics to measure the performance for agarwood oil. Some criteria are listed below to show that a k-NN application is capable of quality grading classification: Distance metrics A distance measures the length of a straight line between two objects for agarwood compound classification [44]. The distances allow classifying samples as similar or dissimilar [43]. In agarwood oil quality grading classification, the Euclidean distance metric (EU) has been implemented as one of the tuned parameters in the study [43], [50]. It is the square root of the sum of squared differences between the coordinates of a pair of objects [7], as in (1):

$d_{st} = \sqrt{\sum_{j} (x_{sj} - y_{tj})^{2}}$ (1)

where $x_{sj}$ is an object at coordinate $sj$, $y_{tj}$ is another object at coordinate $tj$, and $d_{st}$ is the distance between them. The advantage of using the EU distance metric in a k-NN model is that it is the most universal and works well for low-dimensional data [50], [51]. Existing work in [43] conducted research on classifying agarwood oil into high and low quality. The accuracy results for both distance variations (Euclidean and city-block) show 100% for testing and training datasets at k=1 until k=5. Results showed that the Euclidean distance metric achieved 100% accuracy, while other metrics achieved accuracies in the range of 78.5% to 100%. Researchers have also compared the Euclidean distance metric with other metrics such as Cosine and Correlation in k-NN [12], and found that the Euclidean distance metric had better performance in terms of accuracy due to its greater efficiency; it can be concluded to be the most appropriate distance metric for agarwood oil classification.
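To make the pipeline concrete, here is a minimal, self-contained sketch of k-NN classification with the Euclidean metric at k=5, evaluated with the measures discussed next; the synthetic compound abundances and the quality-label rule are invented stand-ins for real GC-MS data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Invented stand-in for GC-MS data: abundances (%) of 6 significant
# compounds per oil sample, with a made-up rule defining quality labels.
X = rng.uniform(0, 30, size=(60, 6))
y = (X[:, 0] + X[:, 1] > 30).astype(int)   # 1 = high quality, 0 = low quality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)

# Derive sensitivity, specificity, precision and accuracy from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test, knn.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("precision:  ", tp / (tp + fp))
print("accuracy:   ", (tp + tn) / (tp + fp + fn + tn))
```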
The performance measures Confusion matrix, accuracy, sensitivity, precision and specificity were used in [19] to describe the behaviour of the classifier. A confusion matrix is tabulated in Table 2 [12]. A k-NN model was used in [50] to classify the quality of agarwood oil into two classes, high and low. The Euclidean metric was implemented. Based on [13], [43], k-NN yielded the highest accuracy with the Euclidean metric for both training and testing datasets. The sensitivity, specificity, precision and accuracy reached 100% for the Euclidean distance variation. The k-NN classifier can discover the k most similar training samples and predict the majority class among them. The advantage of using the Euclidean distance in k-NN is the efficiency of its implementation [12], [52]. The existing framework in [43] showed the Euclidean distance to be a natural benchmark for assessing the coefficient of dissimilarity because it relates to the everyday physical notion of distance. In addition, the researchers in [43] compared the Euclidean, City-block, Cosine and Correlation distance metrics in k-NN and found that Euclidean and City-block had better performance in terms of accuracy than the Cosine and Correlation distances. CONCLUSION The review showed that a k-NN model with the Euclidean distance metric can be implemented for grading the quality of essential oil. The k-NN technique has been proven to be a good classifier on the performance criteria for grading essential oils. Agarwood oil is in high demand, given its benefits not only in medicine but also in religious and other fields. It can be seen that distillation is the most common method for oil extraction due to its low cost and ease of use. As a result, the k-NN technique will be employed in future studies on grading agarwood essential oil.
2022-08-12T15:04:15.459Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "7c45ef54cee858804f0ae0b4c4c90eb30beb3df2", "oa_license": "CCBYNC", "oa_url": "https://ijeecs.iaescore.com/index.php/IJEECS/article/download/27734/16622", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "1883be199ddc72fc2878a02731db2564623ecec8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
270602950
pes2o/s2orc
v3-fos-license
Short-Term Forecasting of Photovoltaic Power Using Multilayer Perceptron Neural Network, Convolutional Neural Network, and k-Nearest Neighbors' Algorithms Governments and energy providers all over the world are moving towards the use of renewable energy sources. Solar photovoltaic (PV) energy is one of the providers' favourite options because it is comparatively cheaper, clean, available, abundant, and comparatively maintenance-free. Although the PV energy source has many benefits, its output power is dependent on continuously changing weather and environmental factors, so there is a need to forecast the PV output power. Many techniques have been employed to predict the PV output power. This work focuses on the short-term forecast horizon of PV output power. Multilayer perceptron (MLP), convolutional neural network (CNN), and k-nearest neighbour (kNN) neural networks have been used singly or in a hybrid (with other algorithms) to forecast solar PV power or global solar irradiance with success. The performances of these three algorithms have been compared with other algorithms, singly or in a hybrid (with other methods), but not with one another. This study aims to compare the predictive performance of a number of neural network algorithms in solar PV energy yield forecasting under different weather conditions and to showcase their robustness in making predictions in this regard. The performances of MLPNN, CNN, and kNN are compared using solar PV (hourly) data for Grahamstown, Eastern Cape, South Africa. The choice of location is part of the study parameters, to provide insight into renewable energy power integration in specific areas in South Africa that may be prone to extreme weather conditions. Our data do not have many missing records or data spikes. The kNN algorithm was found to have an RMSE value of 4.95% and an MAE value of 2.74% at its worst performance, and an RMSE value of 1.49% and an MAE value of 0.85% at its best performance. It outperformed the others by a good margin, and kNN could serve as a fast, easy, and accurate tool for forecasting solar PV output power. Considering the performance of the kNN algorithm across the different seasons, this study shows that kNN is a reliable and robust algorithm for forecasting solar PV output power. Introduction The world's energy suppliers are shifting towards using clean, renewable energy sources to reduce the pollution caused by fossil fuel energy sources. Photovoltaic and wind energy sources are the most favoured renewable energy alternatives because they have zero emissions, require minimal maintenance, and their initial installation cost is also coming down [1,2]. The output power of solar photovoltaic (PV) energy systems is highly dependent on constantly changing weather and environmental conditions like solar irradiance, wind speed, ambient temperature, cloud coverage, module temperature, etc. Forecasting its output power is necessary to effectively plan and integrate the solar PV energy system into the main grid.
Many approaches and techniques have been used to predict solar PV output power. The physical models, the statistical models, and the hybrid (combination of physical and statistical) models [3][4][5][6] are some of the major approaches that have been used to model and predict PV output power. The physical approach designs its model by simulating the conversion of global solar irradiance to electricity, using weather parameters as input to a mathematical model (which describes the solar PV system) to predict the PV output power [7]. The total sky imager and satellite image techniques [8] are examples of the implementation of the physical method. These techniques make highly accurate predictions when the weather conditions are stable throughout the prediction period. The statistical techniques are designed mainly from the principle of persistence. Using tested scientific processes, they predict the PV output power by establishing a relationship between the input variables (vectors) and the target output power. The input vectors are the weather parameters (solar irradiance, wind speed, ambient temperature, module temperature, rain, humidity, etc.) that directly or indirectly affect the solar panels' electricity generation, while the PV output power is the predicted output. Traditional statistical methods [9] use regression analyses to produce models that forecast the PV output power. Artificial intelligence (AI) or machine learning (ML) is another way of applying this technique. Good examples of the AI techniques that have been used to forecast PV output power are artificial neural networks (ANN) [10], long short-term memory (LSTM) [11][12][13], support vector machines (SVM) [9,10,14], etc. The multilayer perceptron neural network (MLPNN) [15], the convolutional neural network (CNN) [16,17], gated recurrent units (GRU) [18][19][20], and the k-nearest neighbour (kNN) [14,19,21,22] are some instances of such networks which have been successfully used to model and forecast solar PV output power. Even with the success of these forecasting methods, they have limitations. The SVM algorithm is computationally expensive, and one may need help interpreting the results [23]. The ANN algorithm requires a large amount of data to make accurate predictions. The kNN technique requires no training time; hence, it is fast, but its prediction accuracy decreases when the input data have lots of spikes and/or lots of missing data. Ratshilengo et al.
[5] compared the results of modelling global solar irradiance with the genetic algorithm (GA), recurrent neural network (RNN), and kNN techniques and showed that GA outperformed the others in accuracy. Most of this research focused on a single technique or forecasted solar irradiation (when they worked with more than one technique), but in this study, we aim to compare the predictive performance of modelling the actual solar PV output power using the MLPNN, CNN, and kNN algorithms and show that the kNN method had the best overall performance on our data. It is more beneficial to model the solar PV output power instead of solar irradiance, because the generated PV output power also captures the impact of the ambient and module temperatures, whose rise negatively affects the PV output power, and the impact of other factors that affect solar irradiance. A comparative performance analysis has not been conducted on these three modelling algorithms for forecasting solar PV output power. kNN is a simple algorithm that can serve as a fast and easy-to-use tool in forecasting solar PV output power. It is essential to mention that our data had few spikes and no missing or corrupted records. The layout of this study is as follows. Section 2 presents a brief review of PV output power forecasting, and Section 3 presents a detailed review of artificial neural networks. Section 4 presents the data description, variable selection, and evaluation metrics. Section 5 presents the results and discussion. Section 6 considers the challenges of PV output power forecasting, while conclusions are drawn in Section 7. A Brief Overview of Solar PV Power Prediction in the Literature Numerous studies have been published on forecasting PV output power. When solar panels receive irradiance, they convert the incident irradiance to electricity. Hence, solar irradiation strongly correlates with solar PV panels' output power. Machine learning techniques like ANN [24], support vector machines (SVMs) [25], kNN, etc., have been used to forecast solar irradiance. ML techniques are equipped with the ability to capture complex nonlinear mappings between input and output data. Efforts have been made to model solar PV output power with ANNs. Liu and Zhang [12] modelled the solar PV output power using kNN and analysed the performance of their model for cloudy, clear-sky and overcast weather conditions. Ratshilengo et al. [5] compared the performance of the genetic algorithm (GA), recurrent neural networks (RNN), and kNN in modelling solar irradiance. They found GA outperformed the other two using their performance metrics. A combination of autoregressive and dynamic system approaches for hour-ahead global solar irradiance forecasting was proposed by [26]. Table 1 summarises some previous studies on solar PV output power prediction. Some ways to forecast solar PV power are by modelling irradiance (indirectly modelling PV output power) or by directly modelling the PV output power. A lot of research has been published in this regard.
Artificial Neural Network ANN is one technique that has been used extensively to model and forecast solar PV output power with high accuracy [31,32]. This comes from its ability to capture the complex nonlinear relationship between the input features (weather and environmental data) and the corresponding output power. ANN is a set of computational systems composed of many simple processing units inspired by the human nervous system. Figure 1a shows a schematic representation of a basic ANN, with the input, hidden, and output layers, connections, and neurons. Data of the (input) features are fed into the input layer. The hidden layer (of which there could be more than one) processes and analyses these input data. The output layer completes the process by finalising and providing the network output. The connections connect neurons in adjacent layers together with the updated weights. Figure 1b presents a pictorial representation of the mathematical model of an ANN cell [6]. It shows that the neuron of a basic ANN cell is made of two parts: the combination and activation functions. The network sums up all the input values using the combination function, and the activation function then acts like a squeezing transfer function on this sum to produce the output results. Some commonly used activation functions are sigmoid, linear, hyperbolic tangent sigmoid, bipolar linear, and unipolar step. The basic mathematical expression of an ANN is given as follows [33]:

$U_j = f\left(b + \sum_{k=1}^{N} W_k I_k\right)$

where $U_j$ is the predicted network output, $b$ is the bias weight, $N$ is the number of inputs, $W_k$ is the connection weight, and $I_k$ is the network input. There are many types of neurons and interconnections used in ANNs. Some examples of this are feedforward and backpropagation NNs. Feedforward NNs pass information/data in one forward direction only. The backpropagation NN allows the process to cycle through over again. It loops back, and information learned in the previous iteration is used to update the hyperparameters (weights) during the next iteration to improve prediction. Deep learning is a type of ANN where the layers are arranged hierarchically to learn complex features from simple ones [16]. One weakness of deep learning NNs is that they take a relatively long time to train. There are two basic stages of the ANN: training and testing. The data for modelling PV output power are often split into training and test sets. Generally, 80% of the data are set aside for training, while 20% are reserved for testing. During the training stage, the neural network uses the training dataset to learn and find a mapping relationship between the input data by updating the synaptic weights. Prediction errors are calculated using the forecasted and measured values. The magnitude of the errors is used to update the weights and biases, and the process is repeated until the desired accuracy level is achieved. The testing dataset is used to test the final model produced in the training stage, and the ANN model's performance is evaluated. A statistical approach that considers each experimental run as a test, called the design of experiment approach, was described by [34] for use with ANNs. A neural network having a single hidden layer is usually enough to solve most data modelling problems, but complex nonlinear mapping patterns between the input and output data may require the use of two or more hidden layers to obtain accurate results. Multilayer feedforward neural networks (MLFFNN) [35], adaptive neuro-fuzzy inference systems [36][37][38][39], multilayer perceptron neural networks (MLPNN) [15,40], and convolutional neural networks (CNN) [16,40] are some examples of ANNs with multiple layers. In this study, we will compare the results of modelling solar PV output power using the MLPNN, CNN, and kNN models. Subsequent sections will present a brief overview of these techniques.
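As a minimal numerical sketch of the basic ANN cell expression above, the snippet below computes $U_j = f(b + \sum_k W_k I_k)$ with a sigmoid activation; the weights, bias and inputs are arbitrary illustrative values.

```python
import numpy as np

def neuron_output(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """U_j = f(b + sum_k W_k * I_k), with a sigmoid activation f."""
    z = bias + np.dot(weights, inputs)   # combination function: weighted sum plus bias
    return 1.0 / (1.0 + np.exp(-z))      # activation function: squeezes z into (0, 1)

I = np.array([0.8, 0.3, 0.5])   # e.g., scaled irradiance, temperature, wind speed
W = np.array([0.6, -0.2, 0.1])
print(neuron_output(I, W, bias=0.05))
```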
Multilayer Perceptron Neural Networks (MLPNN) MLPNN is a special type of ANN organised in layers, and it can be used for classification or regression depending on the activation function used. A typical MLPNN has three layers, like most ANNs: the input, output and hidden layers. The hidden layer can have more than one hidden unit depending on the complexity of the problem at hand. Let $I_p$ be the p-th point in an N-dimensional input to the MLPNN, the output be $Y_p$, and the weight of the hidden layer be $W_h$. To keep the discussion simple, take the case of a single-layer MLP. The output of the first hidden unit $L_1$ can be expressed as follows: $L_1 = f(W_h^{\top} I_p)$. A linear activation function could be given as follows: $f(z) = z$. The nonlinear activation function could be given, for example, by the sigmoid: $f(z) = 1/(1 + e^{-z})$. The MLPNN algorithm applies the weight of the previous iteration when calculating that of the next iteration. Let $W_1$ be the weight of the input to the hidden layer and $W_2$ that of the hidden to the output layer. Then, the overall output $Y_p$ is given as follows [41]: $Y_p = f(W_2 f(W_1 I_p))$. Every layer of the MLP receives input from the previous layer and sends its output to the next layer, which receives it as input, and so on. Hence, every layer has input, weight, bias, and output vectors. The input layer has an activation function but no thresholds. It connects and transfers data to successive layers. The hidden and the output layers have weights assigned to them together with their thresholds. At each layer, the input vectors are multiplied by the layer's corresponding weights and passed through the activation function, which could be linear or nonlinear [42]. Backpropagation is an example of a training method employed by MLPNN during its training phase. It involves two major steps: forward propagation, where the input data are fed into the network to make predictions, and backward propagation, where the errors of the prediction are fed back into the network during the next iteration to update the weights and improve prediction accuracy. Some of the advantages of MLPNN are that it requires no prior assumptions, no relative importance to be assigned to the input dataset, and that the weights are adjusted during the training stage [43,44].
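A minimal sketch of the MLPNN workflow just described, using scikit-learn's MLPRegressor as a stand-in (not the paper's exact configuration) with backpropagation-style training and the customary 80/20 split; the synthetic data merely stand in for hourly weather features and PV output power.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-ins for hourly weather features and PV output power.
X = rng.uniform(size=(500, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

# The customary 80/20 train/test split mentioned earlier.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

mlp = MLPRegressor(hidden_layer_sizes=(32,),   # a single hidden layer
                   activation="relu",          # nonlinear activation
                   solver="adam",              # gradient-based weight updates
                   max_iter=2000, random_state=1)
mlp.fit(X_train, y_train)                      # backpropagation-style training
print(f"R^2 on the test set: {mlp.score(X_test, y_test):.2f}")
```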
Convolutional Neural Networks (CNNs)

The CNNs are another commonly used deep learning feedforward NN used to model PV output power, whose inputs are tensors. They have many hidden convolutional layers that can be combined with other types of layers, such as the pooling layer. CNNs have been used effectively in image processing, signal processing, audio classification, and time series data processing. When this network is applied in image processing, the input image is a two-dimensional pixel grid, but time series data represent two-dimensional data with time steps along the rows and input features (e.g., output power, irradiance, ambient temperature, wind speed, etc.) along the columns.

Figure 2 presents a schematic illustration of the CNN with a one-dimensional convolutional layer. It shows the input and one-dimensional convolution layers, a dropout layer, a dense layer of fully connected neurons, a flattening layer, and the output layer. The 1D convolutional layers apply filters to the input data and extract relevant features from them [45]. To prevent overfitting, the dropout layer randomly removes some neurons during the training step. The extracted features received by the fully connected dense layer are passed to the flattening layer to turn the feature maps into a one-dimensional vector. Finally, the output layer produces the result for prediction. A few authors have used CNN to forecast PV output power, singly or in a hybrid with other algorithms. An example is [45], who used CNN and a CNN-LSTM hybrid to accurately predict PV output power, leveraging their ability to capture complex variations in time series data. Another is [46], who applied CNN-GRU and CNN-LSTM hybrid techniques to forecast PV output power.
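A sketch of the 1D-CNN architecture in Figure 2 using the Keras API; the filter count, kernel size, dropout rate, and window shape are assumptions for illustration and would be tuned in practice.

```python
from tensorflow.keras import layers, models

n_steps, n_features = 24, 5   # hypothetical window: 24 time steps x 5 input features

model = models.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(filters=32, kernel_size=3, activation="relu"),  # extracts local features
    layers.Dropout(0.2),                  # randomly drops neurons to curb overfitting
    layers.Flatten(),                     # turns the feature maps into a 1D vector
    layers.Dense(50, activation="relu"),  # fully connected dense layer
    layers.Dense(1),                      # output layer: predicted PV power
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```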
k-Nearest Neighbour (kNN)

The kNN is a simple supervised ML algorithm that can be applied to solve regression and classification problems [47]. Supervised ML is a type of ML technique that requires the use of labelled input and output data, while unsupervised ML is the process of analysing unlabelled data. The supervised ML model tries to learn the mapping relationship between the labelled input features and output data, and the model is fine-tuned until the desired forecasting accuracy is achieved. The kNN algorithm, like most forecasting algorithms, works by using training data as the "basis" for predicting future values. In the algorithm, neighbours are chosen from the basis and sorted according to certain similarity criteria between the attributes of the training data and those of the testing data. The attributes are the training (and testing) data's weather and PV output power data, while the target is the residual of the difference between them. The mean of the target values of the neighbours is used to forecast the PV power. The measure of similarity (e.g., the Manhattan distance) is given as follows [48]:

d_j = Σ_{k=1}^{n} W_k |x_train(j,k) − x_test(k)|,

where d_j is the distance between the j-th training instance and the test data, W_k is the weight of the k-th attribute, the attribute values of the training and test data are x_train and x_test, respectively, j and k are the indices of the training instances and the attributes, respectively, and n is the number of attributes. The weights were calculated using k-fold cross-validation [49].
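Before turning to the residual forecast, a minimal NumPy sketch of the weighted Manhattan distance just defined; the attribute values and weights below are hypothetical.

```python
import numpy as np

def weighted_manhattan(x_train_row, x_test, w):
    # d_j = sum_k w_k * |x_train[j, k] - x_test[k]| over the n attributes
    return float(np.sum(w * np.abs(x_train_row - x_test)))

x_test = np.array([0.60, 25.0, 3.2])          # e.g., irradiance, temperature, wind speed
x_train = np.array([[0.55, 24.0, 3.0],
                    [0.90, 30.0, 1.0]])
w = np.array([1.0, 0.5, 0.2])                 # attribute weights (found via k-fold CV)

distances = [weighted_manhattan(row, x_test, w) for row in x_train]
print(distances)   # the smallest distance identifies the nearest neighbour
```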
The k target values are used to forecast the residual F_R as follows:

F_R = (1/M) Σ_{k=1}^{M} v_k D_train(k),

where D_train is the training-data target value, k is the index of the chosen neighbouring training data, v_k is the weight of the corresponding target value, and M represents the total number of nearest neighbours. One advantage of the kNN is that it requires no training time. Another is that it is simple to apply, and new data samples can easily be added. The kNN also has a few disadvantages. These include the fact that it is ineffective in handling very large data and performs poorly with high-dimensional data. Another disadvantage is that it is sensitive to noisy data (that is, data having outliers and missing values).

The kNN algorithm (Algorithm 1) works as follows [47]. Assume one has a set of labelled training data, "train_data"; "test_data" is the test instance whose label one wants to predict; "calc_distance" is a method to calculate the distance between two instances; "sort" is a method to sort the distances; "get_max" is a method that obtains the label with the maximum count; and k is the number of nearest neighbours to consider. The kNN algorithm computes the distance between the "test_data" and every instance in the "train_data", selects the k nearest neighbours, and then predicts the label of the "test_data" based on the majority label among its k nearest neighbours.
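The pseudocode itself did not survive extraction, so the following is a hedged Python reconstruction of Algorithm 1 as described: calc_distance, sort, and get_max correspond to the steps named above, and the toy dataset is hypothetical.

```python
import numpy as np
from collections import Counter

def calc_distance(a, b):
    # Distance between two instances (Manhattan, matching the similarity measure above)
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

def get_max(labels):
    # Label with the maximum count among the neighbours
    return Counter(labels).most_common(1)[0][0]

def knn_predict(train_data, train_labels, test_data, k):
    distances = [calc_distance(x, test_data) for x in train_data]
    nearest = np.argsort(distances)[:k]            # sort and keep the k nearest
    return get_max([train_labels[i] for i in nearest])

train_data = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]
train_labels = ["low", "low", "high", "high"]
print(knn_predict(train_data, train_labels, [1.2, 1.9], k=3))  # -> "low"
```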
Data Description

We have hourly time series data with fields for PV output power, normal global irradiance, diffused irradiance, sun height, ambient temperature, reflected irradiance, wind speed, and a 24-h time cycle in Grahamstown, Eastern Cape, South Africa, for the period from 2009 to 2020. Figure 3 presents the graph of the data (the PV output power).

Selecting Input Variables

The more variables used as input, the better the performance of the algorithms, but the higher the execution time and the higher the chances of overfitting. To select the variables that will serve as inputs to the algorithms, we consider the interaction between the variables and their correlation with the output power. Figure 4 presents scatterplots of all pairs of attributes; this figure helps one see the relationships between the variables. The diagonal plots display the Gaussian distribution of the values of each variable. As expected, there is a strong correlation between global (and diffused) solar irradiance and PV power, but there is no correlation between reflected irradiance and PV power. This fact will be demonstrated more quantitatively later using the Lasso regression analysis. One cannot say as much for the other variables. We excluded the reflected solar irradiance from the list of input variables.

Prediction Intervals and Performance Evaluation

Prediction Intervals

The prediction interval (PI) helps energy providers and operators assess the uncertainty level in the electrical energy they supply [50,51]. It is a great tool for measuring uncertainty in model predictions. We will subsequently take a brief look at prediction interval widths.

The prediction interval width (PIW_t) is the estimated difference between the upper (U_t) and lower (L_t) limits of the values, given as follows:

PIW_t = U_t − L_t.

The PI coverage probability (PICP) and PI normalised average width (PINAW) are used to assess the performance of the prediction intervals. The PICP is used to estimate the reliability of the PIs, while PINAW is used to assess the width of the PIs. These two are expressed mathematically as follows [52]:

PICP = (1/N) Σ_{t=1}^{N} c_t, where c_t = 1 if L_t ≤ y_t ≤ U_t and c_t = 0 otherwise,

PINAW = (1 / (N (y_max − y_min))) Σ_{t=1}^{N} (U_t − L_t),

where y_t is the data, and y_min and y_max are the minimum and maximum values of the measured data, respectively. The PIs are weighted against a predetermined confidence interval (CI) value; one has valid PI values when the value of PICP is greater than or equal to that predefined CI value. The PI normalised average deviation (PINAD) defines the degree of deviation of the actual value from the PIs and is expressed mathematically as follows [52]:

PINAD = (1 / (N (y_max − y_min))) Σ_{t=1}^{N} D_t, where D_t = L_t − y_t if y_t < L_t, D_t = 0 if L_t ≤ y_t ≤ U_t, and D_t = y_t − U_t if y_t > U_t.

Performance Metrics

A good number of performance measurement tools are available in the literature. Some are better fits for particular contexts and target objectives.

The mean absolute error (MAE) is the average of the absolute difference between the measured (y_t) and predicted (ŷ_t) data. For a total of N predictions, the MAE is given as follows:

MAE = (1/N) Σ_{t=1}^{N} |y_t − ŷ_t|.

The relative MAE (rMAE) gives an MAE value comparable to the measured values. The rMAE is given mathematically as follows:

rMAE = (MAE / ȳ) × 100%.

The root mean squared error (RMSE) is the square root of the average of the squared differences between the measured and predicted values (the average of the squared prediction residuals). It is always non-negative and is given as follows:

RMSE = sqrt((1/N) Σ_{t=1}^{N} (y_t − ŷ_t)²).

The relative RMSE (rRMSE) gives a percentage RMSE value. The rRMSE is given as follows:

rRMSE = (RMSE / ȳ) × 100%,

where ȳ is the average of y_t, t = 1, 2, 3, . . ., N.
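The metrics above translate directly into a few lines of NumPy; this sketch (with made-up numbers) computes the deterministic errors, the R² score defined next, and, when interval bounds are supplied, PICP and PINAW.

```python
import numpy as np

def evaluate(y, y_hat, L=None, U=None):
    """Deterministic error metrics, plus interval metrics when bounds are given."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    mae = np.mean(np.abs(y - y_hat))
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    out = {"MAE": mae, "rMAE%": 100 * mae / y.mean(),
           "RMSE": rmse, "rRMSE%": 100 * rmse / y.mean(), "R2": r2}
    if L is not None and U is not None:
        L, U = np.asarray(L, float), np.asarray(U, float)
        out["PICP"] = np.mean((y >= L) & (y <= U))             # interval coverage
        out["PINAW"] = np.mean(U - L) / (y.max() - y.min())    # normalised width
    return out

print(evaluate([1.0, 2.0, 3.0], [1.1, 1.9, 3.2],
               L=[0.5, 1.5, 2.5], U=[1.5, 2.5, 3.5]))
```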
The smaller the values of these error metrics, the more accurate the forecast. The R² score is another commonly used metric to measure the performance of a forecast. The R² score can be expressed mathematically as follows:

R² = 1 − Σ_{t=1}^{N} (y_t − ŷ_t)² / Σ_{t=1}^{N} (y_t − ȳ)².

The closer the value of R² is to 1, the more accurate the prediction of the true value.

It is common practice to normalise (or scale) data before the training step, but we did not do so in our case because our data had only a few missing records and outliers.

Selecting Input Variables Using Lasso

It is common practice to use Lasso analysis to perform variable selection, which uses the ℓ1 loss-function penalty given as follows [5]:

β̂ = argmin_β { Σ_{i=1}^{N} (y_i − x_i^T β)² + λ Σ_{j=1}^{p} |β_j| },

where λ ≥ 0 controls the strength of the penalty and p is the number of candidate variables. In Table 2, we show the parametric coefficients of the Lasso regression analysis. All the variables except for the reflected irradiance are important forecasting variables.

Results

Python TensorFlow and scikit-learn (version 1.2.2) are the software packages we used for all our investigations. The implementation details are as follows.

The MLPNN model started with a fully connected layer having 128 neurons and a ReLU activation function, followed by a final output layer consisting of a single neuron. The model was compiled with MSE as the loss function and the Adam optimiser, and was trained on the training data for 50 epochs.

The CNN model starts with a one-dimensional convolutional layer to extract the features from the input data, then a max pooling layer to reduce the dimensionality of the feature maps (using a pooling size of 8). The data are then flattened and passed through a dense layer with 50 units having a ReLU activation function. Finally, the output layer consists of a single unit used to predict the target value. The model was compiled with MSE as the loss function and Adam as the optimiser, and was also trained on the training data for 50 epochs.

The kNN regressor model is initialised with number of neighbours = 5, algorithm = auto (to allow it to select the best algorithm), leaf size = 30, metric = Minkowski, p = 2 (the L2 norm), and weights = uniform. The initialised model is trained on the training data, and prediction is made on the test data.

The hyperparameters of each of the models were varied to see whether better results could be obtained, but the configurations above produced the best results on our data and are presented below.
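These three configurations can be sketched as follows; the kNN settings are exactly those stated above, while the Conv1D filter count and kernel size are not reported in the text and are assumptions for illustration.

```python
from sklearn.neighbors import KNeighborsRegressor
from tensorflow.keras import layers, models

# kNN regressor, initialised as stated in the text
knn = KNeighborsRegressor(n_neighbors=5, algorithm="auto", leaf_size=30,
                          metric="minkowski", p=2, weights="uniform")

def build_mlp(n_features):
    # 128-neuron ReLU layer followed by a single output neuron
    m = models.Sequential([layers.Input(shape=(n_features,)),
                           layers.Dense(128, activation="relu"),
                           layers.Dense(1)])
    m.compile(optimizer="adam", loss="mse")
    return m

def build_cnn(n_steps, n_features):
    # Conv1D feature extraction, pooling size 8, 50-unit ReLU dense layer
    m = models.Sequential([layers.Input(shape=(n_steps, n_features)),
                           layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
                           layers.MaxPooling1D(pool_size=8),
                           layers.Flatten(),
                           layers.Dense(50, activation="relu"),
                           layers.Dense(1)])
    m.compile(optimizer="adam", loss="mse")
    return m

# Either Keras model is then trained as described:
#   model.fit(X_train, y_train, epochs=50)
# and the kNN regressor is fitted and queried directly:
#   knn.fit(X_train_flat, y_train); y_pred = knn.predict(X_test_flat)
```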
Prediction Results

Figure 5 presents plots of the data and the fits of the different models used in this study for short-term forecasting (38 h ahead) of the solar PV output power on two clear-sky days and two cloudy days. The graph in blue is the measured data, while those in red, green, and black are the MLPNN, CNN, and kNN models' forecasts, respectively. We can see visually from these plots that the prediction produced by kNN best fits the data under both conditions. MLPNN also produces a reasonably good fit on a clear-sky day.

In Figure 6, the density plots of the measured solar PV output power and the different models' predictions are presented. The solid blue line graph is the measured data, while the dashed lines represent the models' forecasts. From these graphs, it can be observed that the kNN prediction best matches the data, followed closely by the MLPNN predictions. We will subsequently present a quantitative evaluation of these models' performance.

Table 3 presents the results of evaluating our models' performance using the MAE, rMAE, RMSE, rRMSE, and R² metrics for the four weather conditions. The kNN has the overall best performance on these metrics, followed by the MLPNN and then the CNN.

Prediction Accuracy Analysis

This section evaluates how the models' predictions are centred using PIs and the forecast error distribution.

Prediction Interval Evaluation

In Table 4, we compare the confidence-interval performance of these models' predictions using PICP, PINAW, and PINAD with a preset confidence level of 95%. Only the kNN model has a PICP value greater than 95% on clear-sky days. The model with the lowest value of PINAD and the narrowest PINAW is the model that best fits the data [52]. kNN has the smallest PINAD and the best overall performance with respect to these prediction interval metrics.

Analysing Residuals

In Table 5, statistical analyses of the residuals of all the models' predictions are presented for the MLPNN, CNN, and kNN models (with a confidence level of 95%) on a summer clear-sky day. The table shows that kNN has the smallest standard deviation among the three models under investigation, which implies that it produces the best fit to the data. MLPNN has the next best fit. kNN and MLPNN have skewness close to zero, meaning their errors have a near-normal distribution. All the models have a kurtosis value of less than 3.
Discussion of Results

This work focused on modelling and forecasting solar PV (hourly) output power for Grahamstown, Eastern Cape, South Africa. We modelled PV output power data from January 2009 to December 2020. The data were split into 80% training and 20% test data. We modelled the data with the MLPNN, CNN, and kNN techniques and used the RMSE, rRMSE, MAE, rMAE, and R² performance evaluation metrics to evaluate the models on cloudy and clear days in the summer and winter seasons. The kNN algorithm at its best performance had RMSE = 1.49%, rRMSE = 2.01%, MAE = 0.85%, and rMAE = 0.04%, and at its worst performance RMSE = 4.95%, rRMSE = 3.64%, MAE = 2.74%, and rMAE = 0.11%. The kNN models always had an R² value of 1, while the other methods under investigation had values of less than 1 in most cases. Also, when a confidence-interval analysis of the models with a preset confidence interval of 95% was performed, kNN had a PICP value above 95%. All these evaluation metrics show that the kNN algorithm produced the best prediction. One can draw the same conclusion from the whisker and box plots of the residuals of the forecasts made by the models under investigation for the four weather conditions, where the kNN model had the smallest tails (compared to those of the other models). The kNN is the best model for our data. Note that the data under investigation have very few spikes (or outliers) and missing records (and are not too noisy), so the kNN model predicted the data almost perfectly. Again, while MLPNN and CNN each take several minutes to train their respective models, kNN has no training step; it goes straight into modelling the PV output power. So, when it comes to execution time, kNN still wins the contest.

We were inspired by the works of [5,24,53]. Mutavhatsindi et al. [53] analysed the performance of support vector regression, principal component regression, feedforward neural networks, and LSTM networks. Ratshilengo et al. [5] compared the GA algorithm with the RNN and kNN algorithms in forecasting global horizontal irradiance and found the GA algorithm to have the best overall forecast performance. The kNN model in this study produced lower metric values for RMSE, MAE, rRMSE, and rMAE than those produced by [5], although they modelled global solar irradiance while we modelled solar PV output power.

Challenges of Photovoltaic Power Forecasting

Forecasting solar PV output power has some challenges. One of these is that it depends on the accuracy of the future weather forecast. Since most PV output power prediction techniques take future weather forecast data as an input parameter, the accuracy of the PV output power prediction is highly dependent on the accuracy of the underlying input weather data [54]. Another challenge is having an enormous amount of data. Even though large datasets can help some prediction algorithms make more accurate predictions, processing large data can consume a lot of machine resources, thereby compromising output speed, especially in cases where real-time data processing is a requirement.
It is often thought that complex models, like most statistical and hybrid models, will yield more accurate results. This is not always the case, as simpler methods can produce accurate results if the input vectors are properly preprocessed and filtered. Selecting the right model and input parameters is itself a challenge, as shown by the views held by [55].

Additionally, the problem of PV solar panel module degradation and site-specific losses exists, which negatively affects medium- and long-term forecast-horizon estimates. Solar PV power forecasting models depend on historical data; the forecasted data may differ significantly from the actual PV panels' output power because of ageing and panel degradation. Hence, although site-specific models have been generated, there is a need to constantly review a model's input parameters over time based on the degradation of the solar PV modules.

Conclusions

This study carried out a performance evaluation of the MLPNN, CNN, and kNN methods in modelling solar PV output power for (a solar PV installation in) Grahamstown, Eastern Cape, South Africa, over a short-term forecast horizon. Several works are available in the literature in which the authors modelled solar irradiance with great success; this gives a good indication of the potential electrical energy solar PV systems can provide. This study modelled the actual solar PV output power. It is more beneficial to model the PV output power instead of solar irradiance because it captures the impact of ambient temperature, module temperature, and degradation (as well as other factors) whose rise negatively affects the PV module's efficiency. After training the models, we analysed their prediction results on sunny and cloudy sky days in summer and winter. The RMSE, rRMSE, MAE, rMAE, and R² performance evaluators are commonly used model evaluation metrics. Applying these performance evaluators to the results of the models under investigation showed that while the CNN model had the worst performance, the kNN model had the overall best performance, followed by the MLPNN model. Statistical analysis performed on the models' prediction residuals shows that the kNN model had the smallest standard deviation, which implies that it was the best fit for the data. The skewness values of both kNN and MLPNN are close to zero, which indicates a good fit for the data. This study's findings will be a useful tool for energy providers (both private and public) who want quick and easy but accurate forecasts of their solar photovoltaic installations, to plan energy distribution and expansion of installations in a sustainable and environmentally friendly way.

Figure 1. (a) Schematic representation of a typical ANN having the input, hidden, and output layers. (b) A pictorial presentation of a mathematical model of an ANN cell [6].

Figure 2. Schematic representation of a convolutional network.

Figure 3. Plot of the PV output power from 2009 to 2020.

Figure 4. Plots of the variables.
Figure 5. Plots of the solar PV output power data together with the graphs of the MLPNN, CNN, and kNN models' predictions (dashed lines) on a clear summer sky day (a) and a cloudy day (b). The same plots are shown for a clear winter sky day (c) and a cloudy day (d). The solid lines represent the measured data, while the dashed lines represent the predictions.

Figure 6. Density plots of the measured data (solid line) together with each model's forecast (dashed lines). The top row shows the models' predictions on a clear summer sky day (a) and a cloudy day (b), while the bottom row presents the same for a clear winter sky day (c) and a cloudy day (d). The kNN model's density graph produced the closest match to the measured data for all four weather conditions under investigation.

Figure 7 presents the whisker and box plots of the residuals of the forecasts made with the MLPNN, CNN, and kNN models for clear-sky and cloudy days during the summer and winter seasons. The residuals of the kNN model have the smallest tails compared to the others, followed by the forecasts made with MLPNN, although MLPNN made the worst prediction on the summer cloudy day under investigation. This also shows that the kNN model produced the best overall forecast.
Figure 7. Whisker and box plots of the residuals of the forecasts made with the MLPNN, CNN, and kNN models on clear-sky (a) and cloudy-sky (b) days during the summer season, and on clear-sky (c) and cloudy-sky (d) days during the winter season.

Table 1. A summary literature review of PV power output forecasting showing references, forecast horizon, technique, and results.

Table 2. Parameter coefficients of the Lasso regression.

Table 3. Evaluation of the models' performances on a clear summer sky day (a) and a cloudy summer sky day (b), and on clear and cloudy sky days in winter ((c) and (d), respectively).

Table 4. Comparison of the performance of the models using PICP, PINAW, and PINAD at a confidence level of 95% on clear-sky and cloudy summer days ((a) and (b), respectively), and on clear-sky and cloudy winter days ((c) and (d), respectively).

Table 5. Comparison of the residuals of the models' predictions.
Mixed-Infection of Papaya Ring Spot Virus and Tomato Leaf Curl New Delhi Virus in Coccinia grandis in India

Introduction

Natural mixed infections of plant viruses are found frequently all over the world, leading to variations in symptoms, infectivity, vector transmissibility, and economic loss. Mixed infections with a potyvirus and a geminivirus have been reported in several hosts over a wide geographic area (Martín and Elena, 2009; Verma et al., 2014). Potyvirus is the largest genus of the family Potyviridae. It contains more than 200 definite and tentative species (Berger et al., 2005), which cause significant losses in agricultural, pasture, horticultural, and ornamental plants (Ward and Shukla, 1991). Potyviruses infect a wide range of monocotyledonous and dicotyledonous plant species and have been found in most parts of the world (Gibbs and Ohshima, 2010). Their distribution is worldwide, and they are most prevalent in tropical and subtropical countries (Shukla et al., 1998). Geminiviruses, on the other hand, make up a large, diverse family of plant viruses that infect a broad variety of food and fiber crops and cause significant losses worldwide. The majority of begomoviruses have a genome comprising two similar-sized DNA components (DNA A and DNA B). The DNA A component encodes a replication-associated protein (Rep) that is essential for viral DNA replication, a replication enhancer protein (REn), the coat protein (CP), and a transcription activator protein (TrAP) that controls late gene expression. The DNA B component encodes a nuclear shuttle protein (NSP) and a movement protein (MP), both of which are essential for systemic infection of plants (Hanley-Bowdoin et al., 1999; Gafni and Epel, 2002). C. grandis is a weed belonging to the family Cucurbitaceae; it is distributed in tropical Asia and Africa and is also commonly found in India, Pakistan, and Sri Lanka (Farrukh et al., 2008). In Southeast Asia, C. grandis is grown for its edible young shoots and fruits.
Every part of this plant, including the leaves, fruits, stem, and roots, is valuable in medicine, and various preparations have been described in the indigenous system of medicine for skin diseases, bronchial catarrh, and bronchitis, and in the Unani system of medicine for ringworm, psoriasis, smallpox, and scabies (Perry, 1980). Infected plants exhibited multiple symptoms, including leaf reduction, mosaic, chlorosis, and curling of leaves and stems (Figure 1). In the present study, an attempt was made to characterize the mixed infection of a potyvirus and a geminivirus in C. grandis and to characterize the viruses at the molecular level.

Materials and Methods

To investigate the mixed infection of the potyvirus and the geminivirus, symptomatic leaves of Coccinia grandis were collected from Barasat, West Bengal, India, and stored at -80 °C for further identification and characterization of the viruses. For the detection of potyvirus in symptomatic C. grandis leaf tissues, we used serological diagnosis and RT-PCR. Virus accumulation was assessed by antigen-coated plate ELISA (ACP-ELISA) according to the instructions of the manufacturer (Agdia, USA). Briefly, fresh samples (100 mg) were ground in 1 ml of indirect sample extraction buffer (0.159 g Na2CO3, 0.290 g NaHCO3, 2 g PVP, and 0.02 g NaN3, adjusted to pH 9.6), and 100 μl of each sample was used for the test. ELISA plates were coated with 100 μl of plant extract per well and incubated for 1 hour at room temperature. The plant extracts were removed from the plate, which was washed seven times with phosphate-buffered saline (PBS) containing 0.05% Tween 20 (PBST), pH 7.4. The plate was then incubated with detection antibody (diluted 1:200 in ECI buffer) for 2 hours at room temperature or overnight at 4 °C, and then washed eight times with PBST. Next, the alkaline phosphatase conjugate (Agdia, USA) was diluted 1:200 in ECI buffer [0.2 g BSA; 2 g PVP], 100 μl was added to each well, and the plate was incubated at room temperature for 1 hour. After the plate was washed with PBST, 100 μl of p-nitrophenyl phosphate (pNPP) was added as substrate for alkaline phosphatase in the dark, and the plate was incubated at room temperature for 60 min. Absorbance at 405 nm was measured with an ELISA plate reader. Experiments were performed in triplicate. Samples with absorbance values greater than or equal to three times the average of the negative samples were considered positive in ELISA.

Total RNA was extracted from 100 mg of Coccinia leaf tissue using the Trizol method and was used in RT-PCR for amplification of the potyvirus. RT was performed with 50 ng of total RNA mixed with oligo(dT) primers and the Super Reverse Transcriptase MuLV kit (Biobharti). Reaction mixtures were incubated at 42 °C for 50 min to synthesize first-strand cDNA, and the reaction was then inactivated by heating at 70 °C for 15 min. RT products were heated at 94 °C for 2 min, and amplification was performed with 35 cycles of 30 sec for strand separation at 94 °C, 1 min for primer annealing at 50 °C, and 1 min for synthesis at 72 °C, followed by 10 min at 72 °C for final extension, using a pair of potyvirus-specific degenerate primers (MJ1 and MJ2) (Marie-Jeanne et al., 2000).

For the detection of geminivirus, two techniques were used: 1) PCR amplification using an indigenously designed geminivirus-specific primer pair, and 2) Southern blot analysis using biotin-labeled probes specific for geminivirus.
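As an aside, the ELISA positivity rule stated above (absorbance at least three times the mean of the negative controls) is simple arithmetic; a minimal Python sketch with hypothetical A405 readings:

```python
def elisa_positive(a405, negative_readings):
    # Positive when absorbance >= 3 x the average of the negative samples
    cutoff = 3.0 * (sum(negative_readings) / len(negative_readings))
    return a405 >= cutoff

negatives = [0.08, 0.10, 0.09]           # hypothetical healthy-plant A405 values
print(elisa_positive(0.42, negatives))   # True: sample scored positive
print(elisa_positive(0.15, negatives))   # False: below the 3x cutoff (0.27)
```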
Total DNA was extracted from C. grandis leaves using our new modified CTAB method and tested for the presence of geminiviruses by PCR using the indigenously designed geminivirus-specific degenerate primer pair (Roy et al., 2015). The PCR was performed under the following conditions: initial denaturation at 95 °C for 5 min, followed by 35 cycles of 94 °C for 30 sec, 48 °C for 45 sec, and 72 °C for 1 min, and a final extension at 72 °C for 7 min. The expected amplicon size was about 760 nt. The PCR products were eluted from 1% agarose using a gel extraction kit (XcelGen, Xcelris Genomics) and sent for sequencing. For the Southern blot analysis, we designed a biotin-labeled probe and used it for the detection of geminiviruses in total sap of the infected plant samples. Briefly, about 5 µl of freshly prepared sap was blotted onto a nitrocellulose membrane, and the membrane was air-dried thoroughly. The membrane was then UV-crosslinked for 30 min in a UV crosslinker (GeNei, India). Prehybridization, hybridization, and washing of the membrane were carried out according to the Southern blot protocol for biotin-labeled probes (Weigel et al., 2015). Both PCR-product sequences were compared with sequences obtained from the GenBank database using Multalin and BLASTn, and pairwise identity scores were calculated using SDTv1.2 (Sequence Demarcation Tool version 1.2).

Results and Discussion

During the survey, leaves of C. grandis were found to be positive in ELISA against the potyvirus-specific monoclonal antibody (Agdia, USA); absorbance readings more than three times those of the control were considered positive (Figure 2a). For the detection of potyvirus, an RT-PCR product of the expected size of 327 bp, encoding the core region of the coat protein gene, was amplified (Figure 2b). The PCR product was eluted with the gel elution kit (XcelGen, Xcelris) and sent for sequencing, and the sequence was submitted to the GenBank database under accession number LC194215. For the geminivirus, a fragment of approximately 760 bp covering parts of the AV1, AC3, and AC2 genes was amplified from C. grandis, indicating geminivirus infection of the plants (Figure 3a). This PCR product was likewise eluted and sequenced, and the sequence was submitted to the GenBank database under accession number LC194216. In the Southern hybridization experiment, symptomatic Coccinia plants hybridized with the probe, whereas samples extracted from non-symptomatic plants were negative (Figure 3b). Hybridization of the geminivirus probe with the DNA samples on the nitrocellulose membrane indicates that these probes can also be used for the detection of begomoviruses, and the strong signal showed that the virus titer in C. grandis is high. The amplicon of the 327 bp RT-PCR product obtained with the potyvirus-specific primers shared up to 92% identity at the nucleotide level with other Indian Papaya ring spot virus isolates, and the amplicon of the ~760 bp PCR product obtained with the geminivirus-specific primers shared up to 98% sequence identity with Tomato leaf curl New Delhi virus at the nucleotide level (Figures 4a and 4b).
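For intuition on the pairwise identity scores reported above, a naive percent-identity calculation for two pre-aligned, equal-length sequences might look as follows (toy sequences; dedicated tools such as SDT handle alignment and gaps properly):

```python
def percent_identity(seq_a, seq_b):
    # Fraction of aligned positions with matching, non-gap residues
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

print(percent_identity("ATGGCCTTA", "ATGACCTTA"))  # 88.9: one mismatch in nine sites
```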
Phylogenetic trees were constructed using Vector NTI, BioEdit, and Neighbor-joining analysis with the Phylip programs. For the potyvirus analysis, Pepper mild mottle virus partial coat protein (acc. no. AM491598) was used as an outgroup and marked with a solid square; for the geminivirus analysis, Banana bunchy top virus (acc. no. AY534140) was used as an outgroup and marked with a solid square. Both of our sequences are marked with solid triangles (Figures 5A and 5B). To our knowledge, this is the first evidence of mixed infection of PRSV and ToLCNDV in C. grandis from India, although these two viruses were reported separately earlier (Nagendran et al., 2016; Noochoo et al., 2015). Many potyviruses and geminiviruses have been emerging and re-emerging in recent years, infecting different hosts and threatening economically important crops that are susceptible to these viruses. Therefore, it is essential to study the spread of the disease further, to characterize the viruses in detail at the molecular level, and to investigate the interactions between the hosts and the vectors of the viruses; these are significant areas for future research.
MtGA2ox10 encoding C20-GA2-oxidase regulates rhizobial infection and nodule development in Medicago truncatula

Gibberellin (GA) plays a controversial role in the legume-rhizobium symbiosis. Recent studies have shown that the GA level in legumes must be precisely controlled for successful rhizobial infection and nodule organogenesis. However, regulation of the GA level via catabolism in legume roots has not been reported to date. Here, we investigate a novel GA-inactivating C20-GA2-oxidase gene, MtGA2ox10, in Medicago truncatula. RNA sequencing analysis and quantitative polymerase chain reaction revealed that MtGA2ox10 was induced as early as 6 h post-inoculation (hpi) with rhizobia and reached peak transcript abundance at 12 hpi. A promoter::β-glucuronidase fusion showed that the promoter activity was localized in the root infection/differentiation zone during the early stage of rhizobial infection and in the vascular bundle of the mature nodule. CRISPR/Cas9-mediated deletion mutation of MtGA2ox10 suppressed infection thread formation, which resulted in reduced development and retarded growth of nodules on Agrobacterium rhizogenes-transformed roots. Over-expression of MtGA2ox10 in stable transgenic plants caused dwarfism, which was rescued by GA3 application, and increased infection thread formation but inhibited nodule development. We conclude that MtGA2ox10 plays an important role in rhizobial infection and the development of root nodules through fine catabolic tuning of GA in M. truncatula.

Nodulation is the mutual interaction between legume plants and rhizobial bacteria that forms symbiotic nitrogen-fixing nodules. The process is tightly controlled by the host plant via the nodulation signaling pathway, in which plant hormones including cytokinin, auxin, ethylene, and gibberellin (GA) participate (reviewed by Oldroyd 1 ). The roles of GA in nodulation of legume species are controversial, and both positive and negative effects have been reported. Pea na, a loss-of-function mutant of the ent-kaurenoic acid oxidase gene (KAO), was characterized by a reduction in the size and number of nodules, indicating that GA is required to support nodule formation 2 . In contrast, other studies have indicated negative roles of GA in nodulation. In Lotus japonicus and Medicago truncatula, exogenous GA application at concentrations of 0.1 to 1 µM resulted in inhibition of rhizobial infection and nodule organogenesis 3,4 . Considering the fact that root hair deformation was also reduced by GA application, the negative effect of GA on nodulation was proposed to act at the very early stage of Nod factor signaling 3 . Negative regulation of the number of nodules formed by exogenous GA was shown to be mediated by the DELLA protein, which can interact with NSP2 and NF-YA1 in vitro 4 . Over-expression of MtDELLA1 increased infection thread formation without changes in nodule number. However, null della mutants or RNAi knockdown plants had reduced numbers of infection threads and nodules 2,4,5 . Nodules formed in the della lines were similar in appearance to those of the wild types and, in pea, still fixed the same amount of N as the wild types. In addition, GA-deficient mutant plants recovered normal nodule organogenesis via knockout of DELLA 5 . Based on these results, a dual role of GA in two distinct stages of nodule organogenesis was proposed: suppression of infection thread formation and promotion of nodule development 6 .
A recent study validated this hypothesis by using various mutant pea plants with defective GA biosynthesis or signaling pathways 5 . In higher plants, biosynthesis of GA occurs first in the plastid, where trans-geranylgeranyl diphosphate is converted to ent-copalyl diphosphate and then to ent-kaurene by the serial action of ent-copalyl diphosphate synthase (CPS) and ent-kaurene synthase (KS). The tetracyclic diterpene ent-kaurene is oxidized to ent-kaurenoic acid by ent-kaurene oxidase (KO) and further converted to GA12 by KAO on the membrane of the endoplasmic reticulum. GA12 can be oxidized to GA53 by GA13-oxidase (GA13ox). In the cytosol, GA12 and GA53 are further oxidized to bioactive GAs through the early 13-hydroxylation pathway or the non-hydroxylation pathway by the serial action of GA20-oxidase (GA20ox) and GA3-oxidase (GA3ox). At each step, intermediate or bioactive GAs can be oxidized by GA2-oxidase (GA2ox), leading to inactivation of these hormone molecules 7 . There are two types of GA2ox in the catabolic pathway for GAs 8 . The initially identified GA2oxs utilize bioactive C19 GAs (GA1 and GA4) and their immediate precursors (GA20 and GA9) as substrates. Later, a novel type of GA2ox was discovered, which contains three unique conserved amino acid motifs and catalyzes only the earlier intermediate C20 GAs (GA12 and GA53) (Fig. 1A). The 'Janus face' of GA in nodulation 9 suggests that GA biosynthesis and inactivation must be precisely regulated in accordance with the progress of nodule organogenesis. Therefore, the root GA concentration should be maintained at a low level at the early stage of epidermal rhizobial infection and then at a high level at the later stage of nodule organogenesis. The cellular level of bioactive GA can be regulated in several ways, including transport of precursors or active forms of GA into the cells, inactivation of bioactive GA, or transcriptional regulation of genes involved in the biosynthesis and catabolic pathways (reviewed by Olszewski et al. 10 ). As demonstrated in the reproductive transition of rice 11 and Lolium 12 , regulation of GA transport via the vascular system is responsible for controlled organ development. GA12, the first GA compound produced by the GA biosynthesis pathway, is imported into the cytosol; it is then further oxidized by GA oxidases and converted to the bioactive forms of GAs 10 . Recently, GA12 was identified as the major form of GA responsible for long-distance transport through the vascular system 13,14 . This finding is consistent with the expectation that GAs involved in long-distance transport should be inactive to avoid nonspecific effects and then be converted to an active form at the location where active GAs are required. The GA-deficient pea mutant na shows dwarfism and decreased nodule formation due to disrupted production of the GA12 precursor, which ultimately leads to a reduction in bioactive GA1 15 . Therefore, control of GA12 metabolism is expected to be an effective means of regulating the pools of precursors for downstream GA biosynthesis. The cellular GA level can also be changed through inactivation of the bioactive forms by GA2ox 13 . The major GA inactivation enzymes are the C19 GA2oxs 16 , and the significance of C20 GA2ox was demonstrated in floral initiation in Arabidopsis thaliana 17 .
Over the last decade, transcriptional regulation of genes related to the GA biosynthesis pathway in legume plants has been investigated, which has provided a comprehensive understanding of the dynamic nature of GA regulation. Gene expression studies revealed that GA biosynthetic pathway genes are regulated in response to rhizobial inoculation or Nod factor treatment. For example, SrGA20ox1 of Sesbania rostrata was upregulated during lateral root-based nodulation, and its infection-related expression pattern was dependent on Nod factors 18 . Similarly, several GA20ox and GA3ox genes of soybean were upregulated during the early stage of nodulation, at 12 and 48 h after rhizobial inoculation 19,20 . Early GA precursor biosynthesis genes were also highly expressed upon rhizobium inoculation of the root hair cells of M. truncatula 21 . Most of our current understanding of the roles of GA in symbiotic nodulation is based on mutant or gene studies of GA biosynthesis-related genes in pea and of DELLA in L. japonicus and M. truncatula [2][3][4][5]15,22,23 . However, genes related to inactivation or catabolic regulation of GA during symbiotic nodulation of legume plants have not been studied to date. Previously, we investigated massive temporal transcriptome dynamics of nodulation signaling in M. truncatula wild-type cv. Jemalong A17, compared to mutants with absent (nfp 24 ) or decreased Nod factor sensitivity (lyk3 25 ) and an ethylene-insensitive mutant (skl 26 ), at the early symbiotic stages (0 to 48 h post-inoculation [hpi]) with rhizobia 27 . Among the thousands of novel candidate genes undergoing Nod factor-dependent and ethylene-regulated expression, GA biosynthesis and signaling pathway genes were enriched at 12 hpi, when root hair deformation and branching occurred. We surveyed the GA-related genes in a list of symbiosis-specific genes whose transcription was activated by Nod factors and found a partial complementary DNA (cDNA) sequence showing similarity to GA2ox that mapped to the Medtr4g074130 locus in the recent M. truncatula genome release (Mt4.0). In this study, we first report the functional characterization of MtGA2ox10, encoding a novel C20 GA catabolic enzyme, in symbiotic nodulation. We combine phylogenetic sequence comparison, expression analyses using RNA sequencing (RNA-seq) data and quantitative polymerase chain reaction (qPCR), native promoter::β-glucuronidase (GUS) fusion, CRISPR/Cas9-mediated gene deletion, and over-expression experiments. Our findings suggest that MtGA2ox10 plays important roles in both rhizobial infection at an early stage and nodule development at a late stage of symbiotic nodulation in M. truncatula.

Genome-wide identification of the MtGA2ox gene family

The MtGA2ox gene family members identified in the M. truncatula genome are summarized in Supplementary Table S1. None of the M. truncatula orthologs to AtGA2ox3, AtGA2ox7, and AT3G47190.1 were identified, whereas C20 GA-specific GA2ox genes were present in M. truncatula and outnumbered those of A. thaliana by six genes (MtGA2ox8 to 13) to one gene (AtGA2ox8). The phylogenetic relationship of the MtGA2ox gene family with its homologs in the sequenced plant genomes was reconstructed to investigate and characterize the phylogenetic patterns of the subgroups (Fig. 1B). A total of 113 deduced amino acid sequences of GA2ox and GAOL identified from eight sequenced plant genomes, including A. thaliana, Brassica rapa, Glycine max, L. japonicus, M. truncatula, Oryza sativa, Solanum lycopersicum, and Vitis vinifera, were aligned to construct a phylogenetic tree.
A Maximum-Likelihood tree based on the protein sequences of the GA2ox genes showed that the plant GA2ox gene family is divided into four major clades: Groups I to III consist of GA2ox, and Group IV includes only GAOL. Interestingly, Groups I and II contain C19 GA-specific GA2ox (C19 GA2ox), whereas Group III comprises C20 GA-specific GA2ox (C20 GA2ox). Moreover, Group III GA2ox genes contain three unique conserved amino acid motifs that are absent in C19 GA2ox (Supplementary Fig. S1) and were relatively abundant in legume species (4-15 genes) compared to the non-legume species (1-4 genes). In each Group, legume (G. max, L. japonicus, and M. truncatula) and crucifer (A. thaliana and B. rapa) genes clustered into taxa-specific subgroups, indicating the close evolutionary relationship of genes in the same family.

MtGA2ox10 is the unique gene of the MtGA2ox gene family induced by rhizobium inoculation

To examine the expression characteristics of each MtGA2ox gene, as well as other genes related to GA biosynthesis, in response to rhizobial infection, we investigated the expression patterns of the genes by searching the Medicago truncatula Gene Expression Atlas (MtGEA 29 ) database and by transcriptome analysis based on our large-scale RNA-seq data from A17, nfp, lyk3, and skl roots inoculated with Sinorhizobium medicae ABS7M 27 . In MtGEA, none of the genes related to GA biosynthesis and inactivation exhibited nodule-specific expression (data not shown). In the transcriptome analysis using the RNA-seq data, 19 out of 22 GA biosynthesis-related genes (6 GA20ox, 2 GA3ox, and 14 GAOL) and 11 out of 14 GA2ox genes were expressed in M. truncatula roots (Supplementary Table S2). Among these genes, one GA biosynthesis-related gene (MtGA3ox1) and two GA inactivation-related genes (MtGA2ox10 and MtGAOL15) showed transcriptional changes between the genotypes, which occurred between several hours and 2 days post-inoculation (dpi) with S. medicae (Fig. 2A,B). Their transcription responded to rhizobium inoculation in the wild type at 12 or 24 hpi and was markedly enhanced in skl (Fig. 2C). Of particular interest, MtGA2ox10 was transcriptionally up-regulated at 6 hpi, peaked at 12 hpi, when its expression level was approximately 3- to 5-fold higher than that in nfp and lyk3, and slowly declined over the rest of the time course. In contrast, MtGA3ox1 and MtGAOL15 showed similar expression patterns in A17, nfp, and lyk3 over the time course; the peak expression of these genes in A17 at 24 hpi was only 1.4- to 1.5-fold higher than that in nfp and lyk3 (Fig. 2C). Therefore, MtGA2ox10 was a unique member of the GA metabolic pathway genes in M. truncatula that showed up-regulation in a rhizobia-dependent and ethylene-regulated manner between 6 and 48 hpi. Moreover, the rhizobia-dependent induction of MtGA2ox10 required NFP and LYK3, indicating that its transcription occurs downstream of Nod-factor recognition.
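For reference, fold-change statements like those above are typically derived from qPCR Ct values by the 2^-ΔΔCt method; the sketch below uses hypothetical Ct values and a hypothetical reference gene, not the study's actual measurements.

```python
def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    # 2^-ddCt: normalise the target gene to a reference gene, then to the control
    d_ct_trt = ct_target_trt - ct_ref_trt
    d_ct_ctl = ct_target_ctl - ct_ref_ctl
    return 2.0 ** -(d_ct_trt - d_ct_ctl)

# Hypothetical Ct values: inoculated root at 12 hpi vs. un-inoculated control
print(fold_change(ct_target_trt=22.0, ct_ref_trt=18.0,
                  ct_target_ctl=28.2, ct_ref_ctl=18.0))  # ~73.5-fold induction
```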
However, the transcript level increased from 6 hpi, peaked at 12 hpi with a ~73-fold increase in transcript abundance compared to 0 hpi, and then gradually declined until 48 hpi. Expression of MtGA2ox10 was also detected in the mature nodule and other tissues, including roots in the absence of rhizobia, leaves, flowers, and pods; however, the levels were lower than those of the rhizobium-inoculated roots at 36 hpi.

MtGA2ox10 is expressed in symbiotic tissues and nodules. Transcriptional fusion of the native MtGA2ox10 promoter and the GUS reporter gene was used to examine temporal and spatial patterns of expression in transformed hairy roots of wild-type A17 plants. The MtGA2ox10pro::GUS fusion construct exhibited an expression pattern nearly identical to that in the qPCR experiment, with GUS activity detected from 1 hpi, peaking at 12 hpi and declining thereafter (Supplementary Fig. S2). To characterize the tissue-level activation of the MtGA2ox10 promoter in roots and nodules, the distribution of GUS activity in symbiotic tissues was assessed by histochemical staining and microscopic analyses of the specimens. In the absence of rhizobium inoculation, MtGA2ox10pro::GUS expression was not detected in roots (Fig. 3A). Inoculation of transgenic roots with S. medicae induced strong expression of MtGA2ox10pro::GUS at 12 hpi, with GUS activity differing between different zones; GUS activity was detected in the entire root area (epidermis, cortex, and vascular tissues) in the differentiation or maturation zone, in the vascular tissues in the elongation zone, and in the apical meristem and apices of the root cap (Fig. 3B). Interestingly, only infected or deformed root hairs in the differentiation zone exhibited GUS staining (Fig. 3C). GUS activity was reduced but localized to infected root hairs and cortex tissues, where the infection thread extended at 24 hpi (Fig. 3E). At 5 dpi, strong expression of the MtGA2ox10 promoter was detected in both nascent nodules and vascular tissues (Fig. 3F). MtGA2ox10pro::GUS expression in functional nodules was observed throughout the outer layers of developing nodules at 2 wpi, and in the meristem and infection zone of mature nodules at 4 wpi, without any overlap with the Magenta-Gal-stained bacterial LacZ expression in the nitrogen fixation zone (Fig. 3G,H). Root vascular bundles at 4 wpi also showed GUS activity.

To introduce a large deletion in motif 6 of the GA2ox family, which functions as an oxygenase 31, co-expression of two distinct guide RNAs was carried out; two single guide RNAs (sgRNAs; G851 and G907) were designed on exon 3 of MtGA2ox10 (Fig. 4A,B) and placed together in a single vector under the control of the MtU6-8 promoter, resulting in a dual sgRNA construct (G851.907; Fig. 4C). Screening of transformed roots by PCR-restriction fragment length polymorphism (RFLP) with BsrD I and Eco105 I, as well as PCR amplicon sequencing, revealed that 19% (7 out of 36) of the transformed roots expressing green fluorescent protein (GFP) harbored deletion mutations in the target region (Fig. 4D,E and Supplementary Fig. S3). Among the transformed roots with edited MtGA2ox10, three samples (G851.907 KO-1 to 3) were selected and further analyzed. G851.907 KOs were characterized by heterozygous biallelic sequences with large deletions between the G851 and G907 target regions, resulting in an in-frame deletion, frame shift, or premature stop codon in the reading frame (Fig. 4F). Deletion of MtGA2ox10 strongly affected both nodule number and development on the transformed roots of M. truncatula (Fig. 5).
Root growth over 2 months in pots of Perlite showed no significant difference in root length between G851.907 KOs and the control roots transformed with the empty vector (Fig. 5A,F). In contrast, the number and size of the nodules were significantly reduced in the G851.907 KO roots (Fig. 5B,C). Unlike the fully grown, cylindrical pink nodules, which measured ~2.5 mm in length on the control roots, G851.907 KO roots formed pale, white, immature nodules that measured <1 mm in length (p < 0.001; Fig. 5G) and were on average 3.7-fold fewer in number (p < 0.001; Fig. 5H). Interestingly, there were no significant differences in rhizobial colonization or zonal organization of similar-sized nodules between G851.907 KOs and the control, as seen by staining of LacZ activity (Fig. 5D). On the other hand, the number of infection threads per cm in the differentiation zone of G851.907 KO roots was 4.0-fold fewer than in the control (p < 0.001), indicating that epidermal infection of rhizobia was highly affected in G851.907 KO roots (Fig. 5E,H).

Over-expression of MtGA2ox10 (MtGA2ox10 OE) affected plant architecture. Two-month-old transgenic plants grown in pots showed characteristics of GA-deficient phenotypes: dwarfism, small dark-green leaves, and reduced stem and root growth. Biomass of the MtGA2ox10 OE plants was only 7.8% of that of the control plants (Fig. 6A-C). Moreover, all of the T0 plants of MtGA2ox10 OE failed to yield seeds even with application of GA3. MtGA2ox10 OE in A. rhizogenes-transformed hairy roots also showed a ~1.8-fold decrease in root mass compared to the control (Supplementary Fig. S4). To test whether exogenous application of GA could rescue the dwarf phenotypes of the MtGA2ox10 OE stable transgenic plants, nine independent transgenic plants were treated with GA3 at concentrations of 10 µM or 100 µM through irrigation. GA3 application resulted in a dose-dependent recovery of plant growth within two weeks of the application (Fig. 6D,E). The transgenic plants showed a different sensitivity of the growth response to GA3 compared with the control lines. Changes in the number and length of stem internodes were obvious in the MtGA2ox10 OE lines, but not in the control lines, at 10 µM GA3 (Fig. 6F to H). qPCR analysis of GA metabolic pathway genes in the MtGA2ox10 OE transgenic plants displayed more than a 2-fold increased expression of ent-kaurene synthesis-related genes (KS in root and KAO in leaf) and GA oxidase genes (CYP714A1 and GA3ox in root and CYP714C1 in leaf) (Supplementary Fig. S5). This result showed that the over-expression of MtGA2ox10 differentially altered the relative transcript levels of GA synthesis pathway genes in root and leaf of transgenic plants compared with the control lines. MtGA2ox10 OE also significantly affected nodulation (Fig. 7). In the control lines (n = 4), a number of nodules formed at 3 weeks post inoculation of S. medicae ABS7M (Fig. 7A). In contrast, lines over-expressing MtGA2ox10 had a 23-fold increase in the number of infection threads compared with the control line (p < 0.001). However, no nodules were detected on the roots of MtGA2ox10 OE stable transgenic plants (n = 6) even at 4 weeks post rhizobium inoculation (Fig. 7B to D). Meanwhile, approximately 1.9-fold fewer nodules formed per A. rhizogenes-transformed plant; however, no prominent difference in nodule structure or rhizobial colonization was observed in the mature nodule (Supplementary Fig. S4).
Discussion

Symbiotic nodule organogenesis is a complex developmental reprograming process that requires tight regulation of the interaction between the rhizobium and the host plant. Plant hormones are important positive or negative regulators of legume-rhizobial symbiosis, as they affect the expression of symbiotic genes. Larrainzar et al. 27 noted that symbiosis-specific transcriptional activation of biosynthetic pathways for multiple plant hormones, such as ethylene, cytokinin, abscisic acid, GA, and strigolactone, takes place within hours of inoculation with the rhizobium, suggesting that these hormones likely interact to regulate downstream symbiotic responses. Interestingly, this study also reported on nuanced aspects of the GA anabolic and catabolic pathways. Both GA biosynthesis and inactivation pathway genes were upregulated, with temporal differences, in a Nod factor-dependent manner. Consistent with previous suggestions 3-5, our findings provide new insights into the activity of GA during nodulation and show that spatiotemporal regulation of GA in nodule development must be considered not only in biosynthesis, but also in catabolism. Previous studies of the roles of GA in nodulation have focused on the GA biosynthesis genes or the DELLA-mediated downstream signaling pathway. A low GA concentration is essential for the initial stage of infection, but inhibits the normal progress of nodule organogenesis. Therefore, GA levels must be regulated dynamically and differentially during the separate stages of nodulation, epidermal infection and nodule organogenesis 4,5. In contrast, little attention has focused on the inactivation or transport of GA compared to biosynthesis and signaling in nodulation. In this study, we characterized the molecular function of MtGA2ox10, encoding the C20 GA-specific inactivation enzyme GA 2-oxidase, in symbiotic nodule organogenesis. This novel MtGA2ox gene exhibited rhizobium-dependent induction in the 6 to 36 hpi window, and negative regulation by ethylene, in the M. truncatula root. Gene expression was induced as early as 6 hpi and peaked at 12 hpi in wild-type A17; it was highly enhanced in skl but was markedly low in nfp and lyk3. Native promoter::GUS fusion analysis confirmed that transcriptional activation of the MtGA2ox10 promoter was associated with rhizobium infection and nodule development. The formation of infection threads, as well as the number and size of nodules, was reduced by CRISPR/Cas9-mediated deletion of MtGA2ox10. Additionally, plant architecture and nodulation were also affected by over-expression of MtGA2ox10, whereas exogenous application of GA3 rescued the dwarf phenotype. These findings collectively suggested that MtGA2ox10 is a unique member of the MtGA2ox gene family, controlling the low concentration of GA by catabolic inactivation of C20 GA in roots during epidermal infection of the rhizobium. Therefore, it acts as a catabolic regulator of symbiotic nodule organogenesis. MtGA2ox10 clustered into subgroup III GA2ox, with substrate specificity to C20 GA but not to active C19 GAs (Fig. 1). A number of studies have reported on the significance of C20 GA regulation for plant responses and organ development.
Two C20 GA2ox genes, AtGA2ox7 and AtGA2ox8, control plant architecture and floral initiation in A. thaliana 17,32. C20 GA2ox is also related to tillering and root development 8, as well as to salt tolerance and root gravity responses 33, in rice, and over-expression of a C20 GA2ox in switchgrass changes the plant architecture, for example through increased tillering, a short internode length, and reduced plant height 34. It was interesting to note that all of the reported phenotypes of C20 GA2ox over-expression showed less severe dwarfism compared to C19 GA2ox over-expression, suggesting that C20 GA2ox does not completely deplete the pools of diverse GAs and may have a more specialized role in plant development. Meanwhile, MtGA2ox10 OE in the stable transgenic plants resulted in dwarfism with low fertility and inhibition of nodule development despite increased root infection, presumably due to ectopic inactivation of earlier intermediate C20 GAs (GA12 and GA53) or disruption of the GA pool by altered expression of KS, KAO, GA13ox, and GA3ox. These results were consistent with the previous report on the pea na-1 mutant 5 and therefore clarified the roles of GA at the different stages of nodulation (suppression of infection and activation of nodule formation). Of particular interest, the stable transgenic plants of MtGA2ox10 OE showed a different root growth and nodulation pattern compared with the hairy root transformation lines (almost normal development of root and nodule). We anticipate that GAs transported to the A. rhizogenes-transformed roots from the aerial parts might compensate for the effect of MtGA2ox10 OE, as demonstrated by grafting experiments in GA-deficient mutants of pea and A. thaliana 14,35. GA biosynthesis is a complex and multistep process with diverse intermediates. Therefore, it is difficult to determine the exact spatial localization of GA biosynthesis. Other studies have suggested that GAs are mobile signaling molecules in plants. The successful completion of a number of development processes requires GAs to be mobile 36. A study of pea using radiolabeled forms of GA19, GA20, and GA1 showed that GA20 was the major mobile form of GA in the pea 35. In A. thaliana, the biologically inactive C20 GA12 is the major transported form of GA 13,14. The membrane permeability of GA12 allows it to serve as a long-distance transport molecule 36. Considering the fact that the A. rhizogenes-transformed hairy roots of MtGA2ox10 OE formed normal nodules and that MtGA2ox10pro::GUS expression occurred in the vascular bundles of the roots and mature nodules but not near the base of mature nodules, GA transport through the vascular system in M. truncatula is expected to be under catabolic regulation by C20 GA-specific MtGA2ox, with GA precursors converted to active forms at the location where the nodule develops. Additionally, expression of MtGA2ox10 in the mature nodule suggests that it may inhibit nodule over-growth by quantitative regulation of GA, which is a known regulator of cell expansion and cell cycle activation. Further analysis, such as grafting of wild-type scions onto rootstocks of stable transgenic over-expression and knockout lines, or measurement of GA content in the transgenic plants, will test this hypothesis. In conclusion, this study described the importance of fine catabolic tuning of GA for nodule development in M. truncatula. We clarified that MtGA2ox10 is a unique member of the MtGA2ox gene family regulating rhizobium infection and nodule organogenesis.
This is the first report on the roles of a GA catabolic pathway gene in nodulation of legume plants, and it contributes towards a more comprehensive understanding of the dynamic nature of the GA regulatory mechanism. Research is underway to establish and characterize stable transformed plants with loss-of-function for MtGA2ox, to further understand the roles of GA and its regulation through catabolism and transport for symbiotic nodule development.

Methods

Plant growth conditions and inoculation of rhizobium bacteria. M. truncatula cv. Jemalong A17 seeds were scarified, germinated, and grown in a growth room at 22 °C under 16 h light/8 h dark conditions. For rhizobium inoculation of the seedlings, germinated 1-day-old seedlings were planted on the aeroponic caisson, a large plastic chamber with a perforated lid on top and a humidifier that sits on the bottom 37, where they were misted with Lullien's aeroponic culture medium 38.

Phylogenetic analysis of the GA2-oxidase gene family. For phylogenetic analysis of the GA2ox gene family in the sequenced plant genomes, putative GA2ox genes in the genomes of B. rapa, G. max, L. japonicus, M. truncatula, O. sativa, S. lycopersicon, and V. vinifera were identified based on a BLASTP search (E-value cutoff of 1e-10 and query coverage of 50%) using A. thaliana GA2ox genes as the seed queries. At the same time, the GA2ox protein sequences of tomato 31, rice 8, and grapevine 39 were downloaded from the National Center for Biotechnology Information (NCBI) GenBank database and combined with the BLASTP search results. The deduced amino acid sequences of the GA2ox genes were aligned using the ClustalW program 40 with the default parameters. The phylogenetic tree was constructed using the Maximum-Likelihood method in MEGA7 41, with bootstrap analysis of 1,000 replicates for stability testing of the tree nodes. Identification of other GA biosynthesis pathway genes, including CPS, KS, KO, KAO, GA13ox, and GA3ox, in the M. truncatula genome (Mt4.0) was also performed by BLASTP search (E-value cutoff of 1e-10 and query coverage of 50%) using the previously reported GA biosynthesis genes of M. truncatula 42 as the seed queries.

Transcriptional expression analyses. For the transcriptome analysis, our RNA-seq data, which were deposited at NCBI under the BioProject accession number PRJNA269201, were mapped to the recent M. truncatula genome assembly Mt4.0, as described previously 27. Read counts were normalized using the trimmed mean of M-values (TMM) method 43. Average TMM values for the GA metabolic pathway genes per sample were selected and analyzed by hierarchical clustering using Cluster 3 44. A heat map was drawn with the log-transformed fold changes of the TMM values compared to 0 hpi of A17 as a control. For the qPCR analysis of MtGA2ox10, plant roots were harvested at 0, 6, 12, 24, and 48 hpi and 2 weeks post-inoculation (wpi) with S. medicae ABS7M. Leaves and flowers were sampled from 8-week-old plants. Un-inoculated roots from 4-week-old plants were included as a control. Total RNA was extracted using the CTAB method 45 combined with LiCl precipitation and DNase treatment using the TURBO DNA-free kit (Ambion, Life Technologies, Carlsbad, CA, USA). First-strand cDNA was synthesized using the TOPscript™ cDNA synthesis kit (Enzynomics, Daejeon, Korea) with oligo-dT.
The cDNAs were diluted 10-fold and qPCR was performed using TOPreal™ qPCR premix (Enzynomics) and a CFX96™ Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). The comparative cycle threshold (Ct) method, also known as the 2^(-ΔΔCt) method 46, was employed for relative quantification using the GAPDH gene (Medtr3g085850) as a reference gene. qPCR analysis of other GA biosynthesis pathway genes (CPS, KS, KO, KAO, GA13ox, and GA3ox) was also performed using oligonucleotide primers designed to amplify the target genes distinctly from closely related family genes (Supplementary Table S3).

Gene cloning and plasmid construction. All of the primers used in plasmid construction are listed in Supplementary Table S4. To construct the promoter::GUS reporter fusion, 2.1 kb of the 5′-flanking region upstream of the MtGA2ox10 gene (Medtr4g074130) was amplified from the genomic DNA of M. truncatula A17 using Phusion High Fidelity DNA polymerase (Thermo Fisher Scientific, Waltham, MA, USA). The resulting PCR amplicon was purified by agarose gel electrophoresis and cloned into pDONR221 using BP Clonase II (Thermo Fisher Scientific). The binary destination vector pRNGWFS7 (Kim, unpublished) was constructed by replacing NPT II in pKGWFS7 47 with the DsRed::NPT II translational fusion under the CaMV 35S promoter. The entry plasmid was recombined with pRNGWFS7 in the presence of LR Clonase II (Thermo Fisher Scientific) to obtain a transcriptional fusion of the MtGA2ox10 promoter to GFP and GUS (MtGA2ox10pro::GUS). To construct the over-expression vector, the full-length coding sequence (CDS) of MtGA2ox10 was amplified from the first-strand cDNA, which was synthesized with the total RNA isolated from the S. medicae-infected root tissues of M. truncatula A17. The amplicon was cloned into pDONR221 using BP Clonase II (Thermo Fisher Scientific) and recombined with pK7WG2D 47 using LR Clonase II (Thermo Fisher Scientific) to obtain the binary construct for over-expression of the MtGA2ox10 CDS under the CaMV 35S promoter. The binary Cas9 expression vector pGK3304 and the sgRNA cloning vector pGK2223 were constructed as follows: the Cas9 expression cassette, consisting of the CaMV 35S promoter, Cas9::NLS::HA, and the CaMV 35S terminator, was PCR-amplified from pBAtC 48. A Hind III site within Cas9 was removed by overlap PCR and the Cas9 expression cassette was transferred to pKGWD 47 by replacing the GFP expression cassette between the Sac I and Hind III sites. The resulting plasmid was named pKGWC. A fragment of the CaMV 35S promoter, GFP(S64T)::BAR, and the NOS terminator was amplified from pGK2720 (Kim, unpublished) and replaced the kanamycin-resistance gene in pKGWC to yield pGK3304. A sgRNA cloning vector was constructed by placing the gRNA cloning site and gRNA scaffold of pBAtC 48 under the MtU6-8 small nuclear RNA gene promoter in pENTR_MtU6.8::gus0::UT 30. The resulting Gateway-compatible sgRNA cloning vector was named pGK2206, and the Aar I cloning site in pGK2206 was replaced by a Bsa I cloning site to yield pGK2223.

CRISPR/Cas9-mediated deletion. For the CRISPR/Cas9-mediated deletion of MtGA2ox10, two sgRNAs were designed on exon 3 of the MtGA2ox10 gene using Cas-Designer 49. The complementary oligonucleotides were annealed and cloned into the Bsa I cloning site of the entry vector pGK2223 using the Golden Gate assembly method 50. Briefly, two complementary oligonucleotides were phosphorylated using T4 polynucleotide kinase (NEB, Ipswich, MA, USA) and annealed in a kinase buffer.
The annealed oligonucleotides were mixed with the pGK2223 plasmid, Bsa I, and T4 DNA ligase (NEB). The reaction mixture was incubated at 37 °C for 30 min, and then subjected to 30 cycles of 5 min at 37 °C and 10 min at 24 °C. After a final incubation at 50 °C for 30 min, the Golden Gate assembly was transformed into E. coli TOP10 cells. Two entry plasmids with different sgRNAs were assembled in tandem using a restriction cloning method. One sgRNA expression cassette was cut out from the entry plasmid using Xba I and Spe I and inserted into the other sgRNA entry plasmid, which had been digested with Xba I and dephosphorylated. The resulting dual sgRNA entry plasmid was recombined with the binary CRISPR/Cas9 vector pGK3304 using LR Clonase II (Thermo Fisher Scientific).

Plant transformation. For A. rhizogenes-mediated hairy root transformation, the binary constructs were electroporated into A. rhizogenes MSU440 and transformed roots were generated in M. truncatula A17 as previously described 51. To select the plantlets, the glufosinate herbicide BASTA™ (Bayer Crop Science, Monheim am Rhein, Germany) was added to the medium at a concentration of 4 mg/L, and the growing hairy roots were selected by detection of GFP using an IZX2-ILLB stereomicroscope equipped with a GFP filter set (Olympus, Tokyo, Japan). One transformed root was left for each plantlet, while all non-transformed roots were removed. Four-week-old composite plantlets with transformed roots were transferred to Perlite in a 1 L pot and grown in a growth room as described above. For A. tumefaciens-mediated stable transformation, the binary constructs were electroporated into A. tumefaciens EHA105 and stable transgenic plants of M. truncatula A17 were generated as previously described 52. Briefly, sterilized leaf explants of M. truncatula A17 were co-cultivated with A. tumefaciens on the P4 medium, and callus was induced on the P4 medium containing 5 µM GA3 (Sigma-Aldrich, https://www.sigmaaldrich.com), 40 mg/L Kanamycin (Sigma-Aldrich), and 400 mg/L Cefotaxime (Sigma-Aldrich). The transgenic somatic embryos were removed from the callus tissue and plated onto MS medium containing 10 g/L sucrose, 50 mg/L Kanamycin, and 0.25% Gelrite for development into plantlets. When sufficiently grown, plantlets were transferred to Perlite in a 1 L pot and grown in a growth room as described above.

Histochemical staining and fluorometric quantification of LacZ and GUS expression. Plant roots were harvested at 6, 12, 24, and 48 hpi and 2 wpi with S. medicae ABS7M. Transformed roots were selected by detecting GFP under a fluorescence stereomicroscope as described above. The constitutive expression of LacZ in S. medicae ABS7M was detected using X-Gal as a substrate according to a standard protocol 53. Dual staining of LacZ and GUS was carried out according to the protocol in the L. japonicus handbook 54. The reaction was monitored overnight to avoid over-staining. Fluorometric quantification of GUS activity was conducted using 4-methylumbelliferyl β-D-glucuronide as a substrate 55.

Genotyping by PCR-RFLP and sequencing. Genomic DNA was extracted from the transformed hairy roots of the A. rhizogenes-transformed composite plantlets or from leaves of the stable transgenic plants using the standard CTAB method 56 for PCR, cloning, and sequencing. In parallel, a simple boiling method in 25 mM NaOH was applied for genotyping by RFLP.
The CRISPR/Cas9-targeted region of MtGA2ox10 was amplified with the 2289-F and 2905-R primers, using Phusion High Fidelity DNA polymerase (Thermo Fisher Scientific). The amplicons were digested using the BsrD I (Thermo Fisher Scientific) or Eco105 I (Enzynomics) restriction enzymes and analyzed by agarose gel electrophoresis. Additionally, the amplicon was sequenced using the 2347-F primer after being cloned in the pLPS-TOPO Blunt vector (Elpis Biotech, Daejeon, Korea). Genotyping of the stable transgenic plants was performed by PCR amplification of the MtGA2ox10 coding sequence in the binary plasmid using the G512-F and P35S-SF primers.

GA treatment and statistical tests. GA3 (Sigma-Aldrich) was dissolved in ethanol at a stock concentration of 10 mM. Two-month-old stable transgenic plants grown in pots were supplemented with nitrogen-free mFM medium containing 10 µM or 100 µM GA3 at final concentration. Changes in plant architecture were recorded for four weeks. To statistically test the differences in measurements, the independent t-test was performed using SPSS.

Data Availability. The RNA-seq data used in this study have been deposited in NCBI's BioProject collection under the BioProject ID PRJNA269201.
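For readers unfamiliar with the relative quantification used in the qPCR analyses above, the following is a minimal sketch of the 2^(-ΔΔCt) calculation in Python; the Ct triplicates are invented for illustration (they are not the measured values), with MtGA2ox10 as the target, GAPDH as the reference, and the 0 hpi sample as the calibrator.

```python
import statistics

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative quantification by the 2^(-DDCt) method of Livak & Schmittgen."""
    d_sample = statistics.mean(ct_target) - statistics.mean(ct_ref)
    d_calibrator = statistics.mean(ct_target_cal) - statistics.mean(ct_ref_cal)
    return 2.0 ** -(d_sample - d_calibrator)

# Invented Ct triplicates: MtGA2ox10 vs. GAPDH at 12 hpi, calibrated to 0 hpi.
fold = relative_expression([22.1, 22.3, 22.0], [18.5, 18.6, 18.4],
                           [28.4, 28.6, 28.5], [18.7, 18.6, 18.8])
print(f"relative expression: {fold:.1f}-fold")   # ~72-fold in this toy example
```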
Accurate characterization of the stellar and orbital parameters of the exoplanetary system WASP-33 b from orbital dynamics

By using the most recently published Doppler tomography measurements and accurate theoretical modeling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast rotating star WASP-33. In particular, the measurements of the orbital inclination $i_{\rm p}$ to the plane of the sky and of the sky-projected spin-orbit misalignment $\lambda$ at two epochs about six years apart allowed for the determination of the longitude of the ascending node $\Omega$ and of the orbital inclination $I$ to the apparent equatorial plane at the same epochs. As a consequence, average rates of change $\dot\Omega_{\rm exp},~\dot I_{\rm exp}$ of these two orbital elements, accurate to a $\approx 10^{-2}~{\rm deg}~{\rm yr}^{-1}$ level, were calculated as well. By comparing them to general theoretical expressions $\dot\Omega_{J_2},~\dot I_{J_2}$ for their precessions induced by an oblate star whose symmetry axis is arbitrarily oriented, we were able to determine the angle $i^{\star}$ between the line of sight and the star's spin $S^{\star}$, and its first even zonal harmonic $J_2^{\star}$, obtaining $i^{\star} = 142^{+10}_{-11}~{\rm deg},~J_2^{\star} = (2.1^{+0.8}_{-0.5})\times 10^{-4}$. As a by-product, the angle between $S^{\star}$ and the orbital angular momentum $L$ is as large as about $\psi \approx 100$ deg $(\psi^{2008} = 99^{+5}_{-4}~{\rm deg},~\psi^{2014} = 103^{+5}_{-4}~{\rm deg})$, and changes at a rate $\dot\psi = 0.7^{+1.5}_{-1.6}~{\rm deg}~{\rm yr}^{-1}$. The predicted general relativistic Lense-Thirring precessions, of the order of $\approx 10^{-3}~{\rm deg}~{\rm yr}^{-1}$, are, at present, about one order of magnitude below the measurability threshold.

INTRODUCTION

Steady observations of a test particle orbiting its primary over time intervals much longer than its orbital period P_b can reveal peculiar cumulative features of its orbital motion which may turn out to be valuable tools to either put fundamental theories to the test or characterize the physical properties of the central body acting as the source of the gravitational field. This has been the case so far in several different astronomical and astrophysical scenarios, ranging, e.g., from the early pioneering determinations of the multipole moments of the non-central gravitational potential of the Earth with artificial satellites (Kozai 1961; King-Hele 1962; Cook 1962) to the celebrated corroborations of the Einsteinian General Theory of Relativity (GTR) with the explanation of the anomalous (at that time) perihelion precession of Mercury (Einstein 1915), observationally known for decades (Le Verrier 1859), several binary systems hosting at least one emitting pulsar (Hulse & Taylor 1975; Burgay et al. 2003; Lyne et al. 2004; Kramer et al. 2006), and Earth's satellites (Lucchesi & Peron 2010), implementing earlier ideas put forth since the dawn of the space era and beyond (Lapaz 1954; Cugusi & Proverbio 1978). Plans exist to use in a similar way the stars revolving around the supermassive black hole in Sgr A* (Ghez et al. 2008; Gillessen et al. 2009; Angélil, Saha & Merritt 2010; Zhang, Lu & Yu 2015). With over 1500 planets discovered so far and counting (Han et al. 2014),
most of which orbit very close to their parent stars (Howard 2013), extrasolar systems (Perryman 2014), in principle, represent ideal probes to determine or, at least, constrain some physical parameters of their stellar partners through their orbital dynamics. One of them is the quadrupole mass moment $J_2$, accounting for the flattening of the star. It is connected with fundamental properties of the stellar interior such as, e.g., the non-uniform distribution of both velocity rates and mass (Rozelot, Damiani & Pireaux 2009; Damiani et al. 2011; Rozelot & Fazel 2013). GTR itself may also turn out to be a valuable goal for exoplanet analysts from a practical point of view. Indeed, by assuming its validity, it may be used as a tool for dynamically characterizing the angular momentum S of the host stars via the so-called Lense-Thirring effect (Lense & Thirring 1918). Such a dynamical variable is able to provide relevant information about the inner properties of stars and their activity. Furthermore, it plays the role of an important diagnostic for putting theories of stellar formation to the test. The angular momentum can also have a crucial impact on stellar evolution, in particular towards the higher masses (Tarafdar & Vardya 1971; Wolff, Edwards & Preston 1982; Vigneron et al. 1990; Wolff & Simon 1997; Herbst & Mundt 2005; Jackson, MacGregor & Skumanich 2005). As a naive measure of the relevance of the Einsteinian theory of gravitation in a given binary system characterized by mass M, proper angular momentum S and extension r, the magnitude of the ratios of some typical gravitational lengths to r can be assumed. By taking (Bertotti, Farinella & Vokrouhlický 2003)
$$l_M = \frac{GM}{c^2}, \qquad (1)$$
$$l_S = \frac{S}{Mc}, \qquad (2)$$
where G and c are the Newtonian gravitational constant and the speed of light in vacuum, respectively, it can be easily noted that, for exoplanets hosted by Sun-like stars at, say, r = 0.005 au, Eqs 1 to 2 yield ratios substantially at the same level of, or even larger than, those of the double pulsar (Burgay et al. 2003; Lyne et al. 2004; Kramer et al. 2006). This shows that, in principle, some of the extrasolar planetary systems may well represent important candidates for performing tests of relativistic orbital dynamics as well.
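As a quick back-of-the-envelope check of these orders of magnitude, here is a sketch that assumes the gravitoelectric length GM/c^2 of Eq. 1 and round solar values; the numbers are illustrative and not taken from the paper.

```python
G, c = 6.674e-11, 2.998e8        # SI: gravitational constant, speed of light
M_sun = 1.989e30                 # kg, a Sun-like host star
au = 1.496e11                    # m

r = 0.005 * au                   # orbital size of a very close-in exoplanet
l_M = G * M_sun / c**2           # gravitoelectric length GM/c^2 (assumed form)
print(f"l_M/r = {l_M / r:.1e}")  # ~2e-6, comparable to the double pulsar
```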
In the present work, we will deal with WASP-33 b (Collier Cameron et al. 2010). It is a planet closely transiting a fast rotating and oblate main sequence star along a circular, short-period (P_b = 1.21 d) orbit which is highly inclined to the stellar equator. In Iorio (2011b) it was suggested that, in view of the relatively large size of some classical and general relativistic orbital effects, they could be used to better characterize its parent star as long as sufficiently accurate data records were available. This has now become possible in view of the latest Doppler tomography measurements processed by Johnson et al. (2015), and of more accurate theoretical models of the orbital precessions involved (Iorio 2011c, 2012). The plan of the paper is as follows. In Section 2, we illustrate our general analytical expressions for the averaged classical and relativistic precessions of some Keplerian orbital elements in the case of an arbitrary orientation of the stellar symmetry axis and of an unrestricted orbital geometry. Section 3 describes the coordinate system adopted in this astronomical laboratory. Our theoretical predictions of the orbital rates of change are compared to the corresponding phenomenologically measured precessions in Section 4, where tight constraints on some key stellar parameters are inferred, and the perspectives of measuring the Lense-Thirring effect are discussed. Section 5 is devoted to summarizing our findings.

THE MATHEMATICAL MODEL OF THE ORBITAL PRECESSIONS

A particle at distance r from a central rotating body with symmetry axis direction $\hat{S} = (\hat{S}_x, \hat{S}_y, \hat{S}_z)$ experiences an additional non-central acceleration (Vrbik 2005), given by Equation 7, which causes long-term orbital precessions. For a generic orientation of $\hat{S}$ in a given coordinate system, they were analytically worked out by Iorio (2011c). Among them, the node and inclination rates $\dot\Omega_{J_2}, \dot I_{J_2}$ (Eqs 8 to 9) will be relevant for our purposes. In Eqs 8 to 9, a is the semimajor axis, $n_{\rm b} = \sqrt{GMa^{-3}}$ is the Keplerian mean motion, e is the eccentricity, I is the inclination of the orbital plane with respect to the coordinate {x, y} plane adopted, and Ω is the longitude of the ascending node counted in the {x, y} plane from a reference x direction to the intersection of the orbital plane with the {x, y} plane itself. Note that if the body's equatorial plane is assumed as the {x, y} plane, i.e. if $\hat{S}_x = \hat{S}_y = 0,~\hat{S}_z = 1$, Eqs 8 to 9 reduce to the well-known expressions (Bertotti, Farinella & Vokrouhlický 2003)
$$\dot\Omega_{J_2} = -\frac{3}{2}\, n_{\rm b} J_2 \left(\frac{R}{a}\right)^2 \frac{\cos I}{(1-e^2)^2}, \qquad \dot I_{J_2} = 0, \qquad (10)\textrm{-}(11)$$
where R is the equatorial radius of the central body; with this particular choice, I coincides with the angle ψ between S and the particle's orbital angular momentum L. It is important to stress that, in the general case, the cumbersome multiplicative geometrical factor in Equation 8, depending on the spatial orientation of the orbit and of the spin axis, does not reduce to cos ψ, as will become explicitly clear in Section 3. On the other hand, this can be easily guessed from the fact that cos ψ is linear in the components of $\hat{S}$, while the acceleration of Equation 7 is quadratic in them, whatever parametrization is adopted. Such an extrapolation of a known result, valid only in specific cases, is rather widespread in the literature (see, e.g., Iorio (2011b); Barnes et al. (2013); Johnson et al. (2015)), and may lead to errors when accurate results are looked for. Eqs 8 to 9 are completely general, and can be used with any coordinate system provided that the proper identifications pertaining to the angular variables are made. The general relativistic gravitomagnetic field due to the angular momentum S of the central body induces the Lense-Thirring effect (Lense & Thirring 1918), whose relevant orbital precessions, valid for an arbitrary orientation of S, are given by Eqs 12 to 13 (Iorio 2012). In the special case in which S is directed along the reference z axis, Eqs 12 to 13 reduce to the textbook results (Renzetti 2013)
$$\dot\Omega_{\rm LT} = \frac{2GS}{c^2 a^3 (1-e^2)^{3/2}}, \qquad \dot I_{\rm LT} = 0. \qquad (14)\textrm{-}(15)$$
The perspectives of detecting general relativity, mainly in its spin-independent, Schwarzschild-type manifestations, with exoplanets have been studied so far by several authors (Iorio 2006).

THE COORDINATE SYSTEM ADOPTED

For consistency with the conventions adopted by Johnson et al. (2015), who, in turn, followed Queloz et al. (2000), the coordinate system used in the present analysis is as follows (see Figure 1). The line of sight, directed towards the observer, is assumed as the reference y axis, while the z axis is determined by the projection of the stellar spin axis $\hat{S}$ onto the plane of the sky, which is inferred from observations.
The x axis is chosen perpendicular to both the other two axes in such a way as to form a right-handed coordinate system; it generally does not point towards the Vernal Equinox at a reference epoch. With the present choice, the coordinate {x, y} plane does not coincide with the plane of the sky which, instead, is now spanned by the z and x axes; the {x, y} plane is known as the apparent equatorial plane (Queloz et al. 2000). The planetary longitude of the ascending node Ω lies in it, being counted from the x axis to the intersection of the orbital plane with the apparent equatorial plane itself; thus, in general, Ω does not stay in the plane of the sky. Moreover, with such conventions, the angle I between the orbital plane and the coordinate {x, y} plane entering Eqs 8 to 9 and Eqs 12 to 13 is not the orbital inclination i_p, which refers to the plane of the sky and is one of the orbital parameters directly accessible to observations. Instead, I, which is also the angle from the unit vector k of the z axis to the planetary orbital angular momentum L, has to be identified with the angle α of Queloz et al. (2000). By considering it as a colatitude angle of $\hat{L}$ in a spherical coordinate system, the components of the unit vector of the planetary orbital angular momentum are
$$\hat{L}_x = \sin I \sin\Omega, \quad \hat{L}_y = -\sin I \cos\Omega, \quad \hat{L}_z = \cos I, \qquad (16)\textrm{-}(18)$$
which concisely summarize the geometry entering Eqs 8 to 9 and Eqs 12 to 13. Another angle which is measurable is the projected spin-orbit misalignment λ. It lies in the plane of the sky, and is delimited by the projections of both the stellar spin axis and of the planetary orbital angular momentum. In our coordinate system, λ, i_p are the longitude and the colatitude spherical angles, respectively, with λ reckoned from the z axis to the projection of $\hat{L}$ onto the plane of the sky. As such, the components of the planetary orbital angular momentum versor can also be written as
$$\hat{L}_x = \sin i_{\rm p}\sin\lambda, \quad \hat{L}_y = \cos i_{\rm p}, \quad \hat{L}_z = \sin i_{\rm p}\cos\lambda. \qquad (21)\textrm{-}(23)$$
In general, both I and Ω, which explicitly enter Eqs 8 to 9 and Eqs 12 to 13, are not directly measurable; they must be expressed in terms of the observable angles i_p, λ. To this aim, it is useful to use the unit vector $\hat{N}$ directed along the line of the nodes towards the ascending node, which is defined as
$$\hat{N} = \frac{\hat{k}\times\hat{L}}{|\hat{k}\times\hat{L}|}. \qquad (24)$$
From Eqs 21 to 23, its components are
$$\hat{N}_x = \frac{-\cos i_{\rm p}}{\sqrt{\cos^2 i_{\rm p} + \sin^2 i_{\rm p}\sin^2\lambda}}, \quad \hat{N}_y = \frac{\sin i_{\rm p}\sin\lambda}{\sqrt{\cos^2 i_{\rm p} + \sin^2 i_{\rm p}\sin^2\lambda}}, \quad \hat{N}_z = 0. \qquad (25)\textrm{-}(27)$$
Eqs 16 to 18 and the definition of Equation 24 allow the components of $\hat{N}$ to be expressed in terms of I, Ω as
$$\hat{N}_x = \cos\Omega, \quad \hat{N}_y = \sin\Omega, \quad \hat{N}_z = 0. \qquad (28)\textrm{-}(30)$$

Figure 1. The coordinate system adopted. The axes x and z span the plane of the sky in such a way that the projection of the stellar spin S onto it defines the z axis. The y axis is directed along the line of sight towards the observer. The {x, y} plane is the apparent equatorial plane. The inclination of the orbital plane to the plane of the sky is i_p, while i⋆ is the angle between the line of sight and the star's spin axis. The sky-projected spin-orbit misalignment angle λ lies in the plane of the sky, and is delimited by the projections of S and L onto it. The unit vector $\hat{N}$ of the line of the nodes lies in the apparent equatorial plane perpendicularly to the projection of L onto it. The longitude of the ascending node Ω is counted in the {x, y} plane from the x axis to the line of the nodes. The inclination of the orbital plane to the apparent equatorial plane is I. The angle between $\hat{S}$ and $\hat{L}$ is ψ. The values of the angles used to produce the picture were arbitrarily chosen just for illustrative purposes; they do not correspond to the actual configuration of WASP-33 b.
By adopting the convention 0 ≤ Ω ≤ 2π, Equation 28 yields
$$\Omega = \arccos\hat{N}_x \quad {\rm for}~\hat{N}_y \geq 0, \qquad \Omega = 2\pi - \arccos\hat{N}_x \quad {\rm for}~\hat{N}_y < 0, \qquad (31)\textrm{-}(32)$$
where $\hat{N}_x, \hat{N}_y$ are expressed in terms of i_p, λ by means of Eqs 25 to 26. The inclination I, defined in the range 0 ≤ I ≤ π, is obtained in terms of i_p, λ from
$$I = \arccos\left(\sin i_{\rm p}\cos\lambda\right) \qquad (33)$$
and Eqs 21 to 23. If i⋆ is the angle from the line of sight to $\hat{S}$, the components of the star's spin axis in our coordinate system are
$$\hat{S}_x = 0, \quad \hat{S}_y = \cos i^{\star}, \quad \hat{S}_z = \sin i^{\star}. \qquad (34)\textrm{-}(36)$$
The angle ψ between the stellar angular momentum S and the planetary orbital angular momentum L can be computed from Eqs 34 to 36 and Eqs 21 to 23 as
$$\hat{S}\cdot\hat{L} = \cos\psi = \cos i_{\rm p}\cos i^{\star} + \sin i_{\rm p}\sin i^{\star}\cos\lambda. \qquad (37)$$
Note that the multiplicative geometrical factor entering Equation 8 is not generally proportional to cos ψ, as previously remarked in Section 2. Finally, the configurations i_p, λ, i⋆ and π − i_p, −λ, π − i⋆ are physically equivalent, since they correspond to looking at the planetary system from the opposite sides of the plane of the sky (Masuda 2015). In both cases, the angle ψ remains the same, as explicitly shown by Equation 37. According to Eqs 8 to 9, the node precession remains unaltered, while the rate of I changes sign (Equation 38).

Using the precessions of I and Ω

Generally speaking, while the magnitude of the classical precessions driven by the star's oblateness is at the ≈ deg yr^−1 level, the relativistic gravitomagnetic ones are about three orders of magnitude smaller. Despite this discrepancy, if, on the one hand, the current state-of-the-art in the orbital determination of WASP-33 b (Johnson et al. 2015), based on data records 5.89 years long (from Nov 12, 2008 to Oct 4, 2014), does not yet allow for a measurement of the relativistic effects, on the other hand, they might exceed the measurability threshold in a not so distant future. Indeed, they are just ≈ 4-8 times smaller than the present-day errors, which amount to ≈ 2-8 × 10^−2 deg yr^−1 (Johnson et al. 2015) for the node. In the following, we will reasonably assume that the measured orbital precessions of WASP-33 b are entirely due to the star's oblateness. This will allow us to put much tighter constraints on both i⋆ and J_2. Our approach is as follows. The fortunate availability of the measurements of both i_p and λ at two different epochs some years apart leads to the calculation of the unobservable orbital parameters Ω, I from Eqs 31 to 33 at the same epochs. According to the values of i_p, λ measured by Johnson et al. (2015), it is $\hat{N}_y < 0$, so that Equation 32 must be used, yielding the values of Ω at the two epochs (Eqs 39 to 40); the values by Johnson et al. (2015) differ from Eqs 39 to 40 by π, likely due to the different convention adopted for the node. The values of I returned by Equation 33 are such that Eqs 16 to 18 and Eqs 21 to 23 agree both in magnitude and in sign. It is straightforward to compute the average rates of change $\dot\Omega_{\rm exp}, \dot I_{\rm exp}$ by simply taking the ratios of the differences ∆Ω, ∆I of their values at the measurement epochs to the time span, which in our case is ∆t = 5.89 yr. Our results are in Table 1. Eqs 8 to 9 provide us with an accurate mathematical model of the oblateness-driven precessions which, in view of its generality, can be straightforwardly applied to the present case. Eqs 8 to 9 can be viewed as two functions of the two independent variables i⋆, J_2. By allowing them to vary within their physically admissible ranges (Iorio 2011b), it is possible to equate $\dot\Omega_{J_2}, \dot I_{J_2}$ to $\dot\Omega_{\rm exp}, \dot I_{\rm exp}$, obtaining certain stripes in the (i⋆, J_2) plane whose widths are fixed by the experimental ranges of the observationally determined precessions quoted in Table 1.
If our model is correct and if it describes the empirical results adequately, the two stripes must overlap somewhere in the considered portion of the (i⋆, J_2) plane, thereby determining an allowed region of admissible values for the inclination of the stellar spin axis to the line of sight and the star's dimensionless quadrupole mass moment. This is just the case, as depicted in the upper row of Figure 2. From it, it turns out that
$$i^{\star} = 142^{+10}_{-11}~{\rm deg}, \qquad J_2 = \left(2.1^{+0.8}_{-0.5}\right)\times 10^{-4}.$$
As a consequence, the angle between the orbital plane and the stellar equator and its precession are as reported in Table 1. The lower row of Figure 2 depicts the physically equivalent case with π − i_p, −λ. Now, $\hat{N}_y > 0$, and Equation 31 must be used. While the stripe for $\dot\Omega$ is the same, it is not so for $\dot I$, as expected from Equation 38; the intersection between the $\dot I, \dot\Omega$ curves now corresponds to the supplementary spin inclination π − i⋆. It must be noted that J_2 is unchanged.

Constraining the oblateness of Kepler-13 Ab

An opportunity to apply the present method to another exoplanet is offered by Kepler-13 Ab, also known as KOI-13.01 (Szabó et al. 2012; Shporer et al. 2014; Johnson et al. 2014; Masuda 2015). By using the values of its physical and orbital parameters determined with the gravity-darkened transit light curves and other observations (Masuda 2015), it is possible to compute analytically the rate of change of cos i_p in terms of $\dot\Omega, \dot I$ by means of Equation 17 and Equation 22, and compare it to its accurately measured value (Masuda 2015) in order to infer J_2. We obtain a value in agreement with Masuda (2015), who seemingly used a different dynamical modelization. We calculated our uncertainty with a straightforward error propagation in our analytical expression of J_2, thought of as a function of the parameters d|cos i_p|/dt, cos i_p, i⋆, P_b, a/R⋆, λ affected by experimental uncertainties (Shporer et al. 2014; Masuda 2015). The impact parameter
$$b = \frac{a}{R_{\star}}\cos i_{\rm p}, \qquad (49)$$
valid for a circular orbit, along with Equation 17 and Equation 22, allows us to also use the value of $\dot b_{\rm exp}$ independently measured by Szabó et al. (2012) with the transit duration variation, although it is accurate only to 27%. We get a value which is not in disagreement with Equation 48. From Equation 49, it turns out that the analytical expressions of $\dot b$ and d cos i_p/dt are not independent, so that the availability of independently measured values for both of them does not allow any further dynamical effect beyond J_2 to be determined or constrained. Luckily, it seems that other precessions, independent of $\dot b$ and d cos i_p/dt, should be measurable via Doppler tomography in the next few years (Johnson et al. 2015; Masuda 2015). Depending on the final accuracy reached, such an important measurement will allow, at least in principle, the stellar spin to also be dynamically measured or, at least, constrained by means of the Lense-Thirring effect through, e.g., $\dot\lambda$ calculated with Eqs 12 to 13.

Table 1. Measured and derived parameters for the WASP-33 system according to Table 1 of Johnson et al. (2015) and the present study. Our values for Ω, I were inferred by assuming 0 ≤ Ω ≤ 2π and calculating Eqs 32 to 33 with the measured values of the orbital inclination i_p and the sky-projected spin-orbit misalignment angle λ released by Johnson et al. (2015), while the errors were found by numerically determining the maxima and minima of Eqs 32 to 33, thought of as functions of i_p, λ varying in the rectangle delimited by their measurement errors as per Table 1 of Johnson et al. (2015). The same procedure was adopted for the errors in $\dot\Omega_{\rm exp}, \dot I_{\rm exp}$, assumed as functions of Ω_2008, Ω_2014, I_2008, I_2014 varying in the rectangle determined by the errors in them previously calculated. The time span adopted for calculating the precessions is ∆t = 5.89 yr. The different values of the node quoted by Johnson et al. (2015) with respect to ours are likely due to a different convention adopted by them for the ascending node.

SUMMARY AND CONCLUSIONS

The use of a general model of the orbital precessions caused by the primary's oblateness, applied to recent phenomenological measurements of some planetary orbital parameters of WASP-33 b taken at different epochs 5.89 years apart, allowed us to tightly constrain the inclination i⋆ of the spin S⋆ of WASP-33 to the line of sight and its dimensionless quadrupole mass moment J_2.
Our analytical expressions are valid for arbitrary orbital geometries and spatial orientations of the body's symmetry axis. By comparing our theoretical orbital rates of change of the longitude of the ascending node Ω and of the inclination I of the orbital plane with respect to the apparent equatorial plane with the observationally determined ones, we obtained i⋆ = 142^{+10}_{−11} deg, J_2 = (2.1^{+0.8}_{−0.5}) × 10^−4. Furthermore, the angle between the stellar and orbital angular momenta at the two epochs is ψ_2008 = 99^{+5}_{−4} deg, ψ_2014 = 103^{+5}_{−4} deg. Thus, it varies at a rate ψ̇ = 0.7^{+1.5}_{−1.6} deg yr^−1. In view of the fact that WASP-33 b should transit its host star until 2062 or so, and of the likely improvements in the measurement accuracy over the years, such an extrasolar planet will prove a very useful tool for an increasingly accurate characterization of the key physical and geometrical parameters of its parent star via its orbital dynamics. Moreover, also the determination of the general relativistic Lense-Thirring effect, whose predicted size is currently just one order of magnitude smaller than the present-day accuracy level in determining the planetary orbital precessions, may become a realistic target to be pursued over the next decades. Furthermore, in view of its generality, our approach can be straightforwardly applied to any other exoplanetary system, already known or still to be discovered, for which at least the same parameters as for WASP-33 b are or will become accessible to observation. A promising candidate, whose orbital precessions should be measurable via Doppler tomography in the next years, is Kepler-13 Ab. For the moment, we applied our method to it by exploiting its currently known parameters, and we were able to constrain its oblateness in agreement with the bounds existing in the literature. Finally, in principle, also the periastron, if phenomenologically measurable at different epochs as in the present case, can become a further means to investigate the characteristics of highly eccentric exoplanetary systems, and to test general relativity as well, along the guidelines illustrated here.

Figure 2. Upper row: the darkest region in the plot is the experimentally allowed area in the (i⋆, J_2) plane, which is enlarged in the right panel. It is determined by the overlapping of the permitted shaded stripes set by the precessions of the node Ω and the orbital inclination to the apparent equatorial plane I. We assumed that the experimental precessions $\dot\Omega_{\rm exp}, \dot I_{\rm exp}$ are entirely due to the stellar oblateness J_2, within the experimental errors. For $\dot\Omega_{J_2}, \dot I_{J_2}$, we used the mathematical model of Eqs 8 to 9 calculated with the values quoted in Table 1; the values for R⋆, M⋆, a were taken from Collier Cameron et al. (2010).
The curves inside the shaded areas correspond to the best estimates for $\dot\Omega_{\rm exp}, \dot I_{\rm exp}$; their intersection is given by i⋆ = 142 deg, J_2 = 2.1 × 10^−4. Lower row: same as in the upper row, but with π − i_p, −λ. Note that the stripe for $\dot I$ is different, in agreement with Equation 38. The solution for the stellar spin axis inclination corresponds to π − i⋆.
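To make the geometric chain of Section 3 concrete, here is a minimal numerical sketch (Python with NumPy) of Eqs 21 to 37: it maps the observables i_p, λ to Ω, I, and ψ. The input angles are purely illustrative placeholders, not the measured WASP-33 b values.

```python
import numpy as np

def node_and_inclination(i_p, lam):
    """Omega and I (Eqs 31 to 33) from the observables i_p and lambda (rad)."""
    # Eqs 21 to 23: orbit normal in the frame of Fig. 1 (y = line of sight).
    L = np.array([np.sin(i_p) * np.sin(lam),
                  np.cos(i_p),
                  np.sin(i_p) * np.cos(lam)])
    n = np.array([-L[1], L[0], 0.0])          # k x L, along the line of nodes
    n /= np.linalg.norm(n)                    # Equation 24
    Omega = np.arccos(n[0]) if n[1] >= 0 else 2.0 * np.pi - np.arccos(n[0])
    I = np.arccos(L[2])                       # Equation 33
    return Omega, I

def spin_orbit_angle(i_p, lam, i_star):
    """psi from Equation 37."""
    return np.arccos(np.cos(i_p) * np.cos(i_star) +
                     np.sin(i_p) * np.sin(i_star) * np.cos(lam))

# Illustrative angles only (not the measured WASP-33 b values), in degrees:
ip, lm = np.radians(88.0), np.radians(-110.0)
Om, I = node_and_inclination(ip, lm)
psi = spin_orbit_angle(ip, lm, np.radians(142.0))
print(np.degrees(Om), np.degrees(I), np.degrees(psi))
```

Repeating the first function at two epochs and differencing, as done in Section 4, gives the average rates ∆Ω/∆t and ∆I/∆t.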
Effects of alternate moistube-irrigation on soil water infiltration

Alternate moistube-irrigation is a new type of water-saving irrigation, and research on water infiltration with alternate moistube-irrigation is important for the design of irrigation schemes and helpful for understanding and applying this technology. The effects of the pressure head (1.0 m and 1.5 m) and tube spacing (10 cm, 20 cm, and 30 cm between two moistubes, respectively) on soil water infiltration in alternate moistube-irrigation were studied in laboratory experiments, and the cumulative infiltration, discharge of the moistube, and shape and water distribution of the cross-section of the wetting front were determined. The cumulative infiltration increased quickly and linearly with the infiltration time at 0-96 h (R² > 0.99), and changed smoothly at 96-192 h with a basically steady infiltration rate. The discharge of the moistube increased rapidly at the beginning of irrigation, then decreased before stabilizing. The cumulative infiltrations and discharges of the moistube under the 1.5 m pressure head were greater than those under the 1.0 m pressure head. The shape of the cross-section of the wetting front for a single moistube was similar to concentric circles. With the increase of tube spacing, the interaction between the water infiltrations of the two moistubes decreased. The soil water distributions around the two moistubes were similar to each other under the 1.0 m pressure head and large tube spacing. When the tube spacing was 20 cm, the soil water distribution was more uniform around the two moistubes.

Introduction

The shortage of freshwater resources has become a bottleneck restricting agricultural development and global food security [1-4]. In order to alleviate the contradiction between the shortage of freshwater resources and rising world food demand, many countries are actively developing water-saving irrigation methods [5-7]. Moistube-irrigation, also called semi-permeable membrane irrigation, is a new type of water-saving irrigation technology that has arisen in recent years in China [8]. Moistube-irrigation takes advantage of the special properties of semi-permeable membranes to provide timely and adequate moisture to crop root zones in a continuous flow mode, so that the soil is always kept moist [9-11]. As a result of the implementation of underground continuous irrigation by means of micro and slow release, deep seepage and surface evaporation are effectively controlled, resulting in savings of irrigation water. In addition, the system only needs a low water-pressure head and the negative pressure potential of soil water to operate, thereby also saving energy. At present, moistube-irrigation is gradually being promoted and applied to production in China [12-15]. Research on moistube-irrigation now mainly includes two aspects: soil box simulation tests and plant cultivation tests. The soil box simulation tests mainly focus on the effects of pressure head, soil texture, and bulk density on the characteristics of the wetting front, and on the outflow and anti-clogging performance of the tube [10,11]. The plant cultivation tests mainly focus on the effects of pressure head, buried depth, and spacing of tubes on crop growth and yield [13-15]. However, most research currently focuses on conventional continuous irrigation, and research on other irrigation modes is relatively rare. As early as the 1970s, alternate row irrigation or alternate furrow irrigation was attempted for some crops.
Since the 1990s, some scholars have thoroughly studied the principle of plant root signals under water stress, providing a theoretical basis for alternate irrigation [16-19]. Controlled alternate partial root-zone irrigation is a water-saving irrigation technology that can not only satisfy crop water demand but also control ineffective transpiration. It can reduce plant transpiration and ineffective evaporation of soil moisture by irrigating part of the root zone alternately during some or all growth stages of crops, while the other root zones are under artificial water stress, so as to save water and improve water use efficiency. At present, many studies on alternate furrow irrigation [20-24] and alternate drip irrigation [25-29] have been carried out on many crops. Wei et al. [30] reported that, compared with conventional moistube-irrigation, alternate moistube-irrigation with a watering interval of 2 d significantly improved tomato water use efficiency without significantly reducing the fruit yield. The reason was that alternate moistube-irrigation stimulated a compensatory effect on tomato root absorbency, enhancing the ability of the roots to absorb soil water. However, research on the combination of moistube-irrigation and alternate irrigation is rare, and research on the infiltration and migration of soil water under alternate moistube-irrigation is still scarce. The objective of this study was to determine the effects of the pressure head and tube spacing on cumulative infiltration, discharge of the moistube, and the shape and water distribution of the cross-section of the wetting front in alternate moistube-irrigation through laboratory experiments.

Experimental details

The experiments were carried out in the College of Water Conservancy and Engineering, Taiyuan University of Technology, China, from March to June in 2018. The equipment used in the laboratory experiment (presented in Figure 1) includes a Mariotte bottle, moistube pipe, water delivery pipe, soil box, and movable bracket. Two Mariotte bottles were used to maintain a constant pressure head, and different pressure heads were produced by placing the Mariotte bottles on an adjustable-height bracket. Two water delivery pipes of black polyethylene (PE) with inner diameters of 16 mm were connected to the Mariotte bottles and moistube pipes. Water supply was controlled by installed valves, and alternate moistube-irrigation was carried out by opening and closing the valves at different times. The water used in the experiment was filtered urban tap water. The moistube pipe was 1 m long with an inner diameter of 16 mm and a wall thickness of 1 mm. The moistube pipe was produced by Shenzhen Moistube Irrigation Co., Ltd. The soil box was made of transparent plexiglass and was 100 cm × 40 cm × 40 cm (length, width, and height). Holes with different spacing distances (10 cm, 20 cm, and 30 cm) in both short side panels of the soil box were used to accommodate the moistube pipes, and the short side panels of the soil box were detachable. Samples of clay loam soil were evenly mixed and screened by a 2 mm sieve after drying and rolling. As determined by an MS 2000 laser particle size analyzer, the mass fractions in the particle size ranges d ≤ 0.002 mm, 0.002 < d ≤ 0.02 mm, and 0.02 < d ≤ 2 mm were 23.30%, 40.58%, and 36.12%, respectively. The soil bulk density was set at 1.3 g/cm³, and the initial soil water content was 1.38%.
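As a small worked example of preparing such a test, the sketch below estimates the soil mass required for packing; it assumes the 1.38% water content is gravimetric and that the box is packed to its full 40 cm depth, neither of which the text states explicitly.

```python
# Dry and moist soil mass needed to pack the 100 cm x 40 cm x 40 cm box at the
# target bulk density of 1.3 g/cm^3 (oven-dry basis assumed).
length, width, depth = 100.0, 40.0, 40.0   # cm
bulk_density = 1.3                         # g/cm^3
w_init = 0.0138                            # g water per g dry soil (assumption)

dry_mass = bulk_density * length * width * depth / 1000.0   # kg
moist_mass = dry_mass * (1.0 + w_init)
print(f"dry soil: {dry_mass:.0f} kg, as-packed moist soil: {moist_mass:.1f} kg")
```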
Treatments and measurements

Treatments in the laboratory experiment consisted of the factorial combinations of (i) two pressure heads of 1.0 m and 1.5 m (H1 and H2), and (ii) three tube spacings of 10 cm, 20 cm, and 30 cm (S1, S2, and S3). According to the required bulk density, a certain amount of soil sample was loaded into the soil box and, when the soil thickness reached 30 cm, two moistube pipes were laid horizontally with different tube spacings, and then another 10 cm of soil was loaded. Three replicates were adopted in all experiments. The water levels of the two Mariotte bottles were recorded before the start of the test, and then the valve of moistube 1 (M1) was opened to supply water. After 4 d, the valve of M1 was closed, and then the valve of moistube 2 (M2) was opened to supply water for another 4 d. The total testing time for each of the treatments H1S1, H1S2, H1S3, H2S1, H2S2, and H2S3 was 8 d. The total testing time for each of the treatments H1S2-2 and H2S2-2 was 16 d, with the valves of M1 and M2 opened and closed a second time for another 4 d each. For the first 12 h of water supply, the water level of the Mariotte bottle was recorded every 2 h, and then the water level was recorded every 12 h. Cumulative infiltration and the discharge of the moistube were calculated according to the time period. The wetting front position on both sides of the soil box was drawn, and the shape of the cross-section of the wetting front was depicted with AutoCAD. Soil water contents in the cross-section of the wetting front were measured by a drying method at the end of the test. The short side panel of the soil box near the end of the moistube pipe was removed, and soil samples were taken from the soil cross-section to determine the soil moisture content. Soil sampling points were 5 cm, 10 cm, 15 cm, 20 cm, 25 cm, 30 cm, and 35 cm longitudinally from the surface of the soil cross-section, and 5 cm, 10 cm, 15 cm, 20 cm, 25 cm, 30 cm, and 35 cm horizontally from the left side (near M1) of the soil cross-section.

Data analysis

Analysis of variance was performed to determine the effect of alternate moistube-irrigation on cumulative infiltration and discharge of the moistube using Tukey's Honest Significant Difference (HSD) test. Statistical analysis was performed using IBM SPSS Statistics version 20.0 (IBM Corporation, Somers, New York).
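As a concrete illustration of the statistical treatment described above, the minimal Python sketch below runs a Tukey HSD comparison of treatment means with statsmodels rather than SPSS. The treatment labels match the paper, but the 96-h infiltration values are hypothetical stand-ins, not the measured data.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 96-h cumulative infiltration (L) for three replicates per
# treatment; the real values are shown in the paper's figures.
data = pd.DataFrame({
    "treatment": ["H1S1"] * 3 + ["H1S2"] * 3 + ["H2S1"] * 3 + ["H2S2"] * 3,
    "infiltration": [3.1, 3.0, 3.2,  3.3, 3.4, 3.2,
                     4.4, 4.6, 4.5,  4.7, 4.8, 4.6],
})

# Tukey's HSD compares every pair of treatment means at alpha = 0.05,
# mirroring the analysis described in the text.
result = pairwise_tukeyhsd(endog=data["infiltration"],
                           groups=data["treatment"], alpha=0.05)
print(result.summary())
```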
Cumulative infiltration

Cumulative infiltration under different pressure heads and tube spacings in alternate moistube-irrigation is shown in Figure 2. The cumulative infiltration of M1 and M2 increased linearly with the infiltration time at 0-96 h (R² > 0.99). Pressure head was an important factor affecting water infiltration: the greater the pressure head, the greater the cumulative infiltration. The cumulative infiltration of M1 and M2 under the 1.5 m pressure head was significantly more than that under the 1.0 m pressure head (p < 0.05). For the treatments H1S1, H2S1, and H2S2, the cumulative infiltration of M1 was significantly more than that of M2 (p < 0.05), while for the treatments H1S2, H1S3, and H2S3, the cumulative infiltration of M1 was nearly equal to that of M2. When the tube spacing was S1, the soil wetting front of M1 had moved to the vicinity of M2 before M2 began to supply water, thus the cumulative infiltration of M2 was low due to the high soil water content. When the tube spacing was S2, the soil wetting front of M1 migrated a small distance under the 1.0 m pressure head, which had little effect on M2, but under the 1.5 m pressure head the soil wetting front of M1 migrated a large distance, thereby affecting the infiltration of M2. When the tube spacing was S3, the infiltrations of M1 and M2 had little effect on each other. The cumulative infiltration of M1 and M2 increased quickly with the infiltration time at 0-96 h, and changed smoothly at 96-192 h with a basically steady infiltration rate. At 0-192 h, the relationship between the cumulative infiltration of M1 and M2 and the infiltration time can be expressed by a polynomial equation (R² > 0.99).

Figure 2 Cumulative infiltration in alternate moistube-irrigation. Note: H1 and H2 represent pressure heads of 1.0 m and 1.5 m; S1, S2, and S3 represent tube spacings of 10 cm, 20 cm, and 30 cm; M1 and M2 represent moistubes 1 and 2; y represents the cumulative infiltration and x represents the infiltration time.

Discharge of the moistube

The discharge of the moistube under different pressure heads and tube spacings in alternate moistube-irrigation is shown in Figure 3. The discharges of M1 and M2 for the treatments H1S1, H1S2, H1S3, H2S1, H2S2, and H2S3 increased rapidly at 0-6 h or 0-8 h, then decreased at 6-24 h or 8-24 h, and changed smoothly at 24-96 h. The discharges of M1 and M2 under the 1.5 m pressure head were significantly more than those under the 1.0 m pressure head (p < 0.05). For the treatments H1S1, H2S1, and H2S2, the discharges of M1 were significantly more than those of M2 (p < 0.05), while for the treatments H1S2, H1S3, and H2S3, the discharges of M1 were nearly equal to those of M2. When the tube spacing was S1 or S2, the difference between the discharges of M1 and M2 under the 1.5 m pressure head was larger than that under the 1.0 m pressure head. For the treatments H1S2-2, H1S3-2, H2S2-2, and H2S3-2, the discharges of the moistube at 96-192 h were lower than those at 24-96 h, as the soil was wetter when the moistube began to supply water the second time than before irrigation. The discharge of the moistube increased rapidly at the beginning of irrigation, then decreased and remained at a stable level as time elapsed. There was an induction period, probably within 24 h from the start of moistube-irrigation, and the discharge of the moistube remained stable after 24 h of irrigation. Niu et al. [31] reported that the moistube had a weak and short duration of self-regulated flow with changes in soil moisture content at approximately 44 h, and that the flow increased quickly and then decreased to a steady state after 48 h of irrigation. The difference in the time needed for the stable discharge of the moistube may be related to the pressure head, soil bulk density, soil texture, initial soil water content, and/or different test conditions.

Figure 3 Discharge of the moistube in alternate moistube-irrigation
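The linear (0-96 h) and polynomial (0-192 h) relationships between cumulative infiltration and time reported above are straightforward to reproduce. The sketch below fits both forms to a hypothetical cumulative-infiltration series (the measured curves are those of Figure 2) and reports R².

```python
import numpy as np

# Hypothetical time series: infiltration time (h) and cumulative
# infiltration (L), standing in for the measured data of Figure 2.
t = np.array([2, 12, 24, 48, 72, 96, 120, 144, 168, 192], dtype=float)
q = np.array([0.1, 0.55, 1.05, 2.0, 2.9, 3.8, 4.4, 4.9, 5.3, 5.6])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# 0-96 h: linear fit, as reported in the text (R^2 > 0.99).
early = t <= 96
slope, intercept = np.polyfit(t[early], q[early], 1)
print("linear 0-96 h:  q = %.4f t + %.4f, R^2 = %.4f"
      % (slope, intercept,
         r_squared(q[early], slope * t[early] + intercept)))

# 0-192 h: second-order polynomial fit for the full record.
coeffs = np.polyfit(t, q, 2)
print("poly 0-192 h:   q = %.2e t^2 + %.4f t + %.4f, R^2 = %.4f"
      % (coeffs[0], coeffs[1], coeffs[2],
         r_squared(q, np.polyval(coeffs, t))))
```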
Shape of the cross-section of the wetting front

The shape of the cross-section of the wetting front under different pressure heads and tube spacings in alternate moistube-irrigation is shown in Figure 4. The shape of the cross-section of the wetting front for a single moistube was similar to a concentric circle, and the area of the cross-section of the wetting front under the 1.5 m pressure head was larger than that under the 1.0 m pressure head. The wetting fronts of M1 and M2 were superposed when the tube spacing was S1, were slightly superposed when the tube spacing was S2, and did not affect each other when the tube spacing was S3. Zhang et al. [32] reported that the wetted soil body under moistube-irrigation looked like a cylindrical object with the pipe at the axle center of its cross-section; for clay loam soil the cross-section was approximately circular, and for sandy soil it was of obpyriform shape. The same result for clay loam soil was obtained in this experiment. Figure 5 shows the water distribution in the cross-section of the wetting front in alternate moistube-irrigation at the end of the test. For the treatments H1S1, H1S2, H2S1, and H2S2, in which the test ended in 8 d, the soil around M2 had a higher water content than that far away from M2, as the water supply from M2 had just finished, and the water migrated longer distances under the 1.5 m pressure head than under the 1.0 m pressure head. When the test ended in 16 d, the soil near M1 had a higher water content for the treatments H1S2-2 and H2S2-2 than for the treatments H1S2 and H2S2, and the soil water distribution was more uniform around M1 and M2. For the treatment H1S3-2, as for the treatment H1S3, the soil water distributions around M1 and M2 were similar to each other, and the difference between H1S3-2 and H1S3 was that the soil had a higher water content for H1S3-2. For the treatment H2S3-2, the soil near M1 had a higher water content than for H2S3, and the water distribution range for H2S3-2 was larger than that for H2S3.

Conclusions

The effects of the pressure head and tube spacing on soil water infiltration in alternate moistube-irrigation were studied in laboratory experiments, and the cumulative infiltration, discharge of the moistube, and shape and water distribution of the cross-section of the wetting front were determined. From 0 to 96 h of infiltration, the cumulative infiltration of M1 and M2 increased rapidly and linearly (R² > 0.99), while from 96 h to 192 h it changed smoothly and the infiltration rate was basically stable. Pressure head was an important factor affecting water infiltration: the greater the pressure head, the greater the cumulative infiltration. The cumulative infiltrations of M1 and M2 under the 1.5 m pressure head were more than those under the 1.0 m pressure head. With increased tube spacing, the interaction between the water infiltration of M1 and M2 decreased. The discharges of M1 and M2 under the 1.5 m pressure head were more than those under the 1.0 m pressure head. At the beginning of moistube-irrigation, the discharge of water increased rapidly, then decreased and remained at a stable level over time. A water induction period might exist within 24 h after the start of irrigation, and the water flow remained stable after 24 h of irrigation. The cross-sectional shape of the wetting front of a single moistube resembled a concentric circle. The cross-sectional area of the wetting front under the 1.5 m pressure head was greater than that under the 1.0 m pressure head. With increased tube spacing, the interaction of the wetting bodies of M1 and M2 decreased. The soil water distributions around M1 and M2 were similar to each other under the 1.0 m pressure head and the large tube spacing of S3. When the tube spacing was S2, the soil near M1 had a higher water content when the test ended in 16 d compared to when it ended in 8 d; furthermore, the soil water distribution was more uniform around M1 and M2.
The laboratory experiments differed in a few ways from what could be expected with actual field conditions, and the effect of alternate moistube-irrigation on plant growth should be investigated in the field.
A Hybrid Improved Whale Optimization Algorithm with Support Vector Machine for Short-Term Photovoltaic Power Prediction

ABSTRACT: At present, the grid-connected scale of photovoltaic (PV) systems is getting higher among renewable power generation sources. However, the PV output power can be affected by different meteorological conditions owing to the randomness and volatility of PV generation. Accurate PV power prediction therefore allows reasonable generation plans to be arranged among the various types of energy sources, reducing the effect of the PV system on the grid. To resolve this problem, a PV output power prediction model, namely IMWOASVM, is proposed based on the combination of an improved whale optimization algorithm (IMWOA) and a support vector machine (SVM). The IMWOA is used to optimize the kernel function parameter and penalty coefficient in the SVM. The optimal parameter and coefficient values can then be input to the SVM to enhance the PV prediction. The performance results verify that the coefficient of determination of the IMWOASVM model can reach beyond 99% on both sunny and cloudy days. Simultaneously, the mean absolute errors on sunny and cloudy days are 0.0251 and 0.0705, respectively. The root mean square errors on sunny and cloudy days are 2.17% and 1.03%, respectively. The results confirm that the proposed model effectively increases the accuracy of the PV output power prediction and is superior to existing methods.

Introduction

With increasing global energy demand, the utilization and development of renewable energy have become more and more important in the power industry (Yu et al. 2019). Solar is one of the most crucial renewable energy resources (Wang, Qi, and Liu 2019). Therefore, the development of PV power technology is considered an effective solution to alleviate the world energy crisis (Carvajal-Romo et al. 2019). In 2016, the annual photovoltaic (PV) power generation exceeded wind power, and the installed capacity of the global PV system was 48% higher compared with 2015 (Gurung, Naetiladdanon, and Sangswang 2019). However, PV power generation is fluctuating and intermittent at all times due to the uncertainty of light intensity and other meteorological conditions (Liu et al. 2018). In addition, factors like weather and season pose more difficulty for PV power dispatching in the grid (Gandoman, Raeisi, and Ahmadi 2016; VanDeventer et al. 2019). To address this issue, the prediction of PV power generation can provide important information for reasonable grid power planning and economic dispatching (Chai et al. 2019; Han et al. 2019; Liu, Zhan, and Bai 2019; Wang et al. 2018a). The prediction methods for PV output power can be classified as follows: long- and medium-term forecasts are used for the maintenance and operation management of photovoltaic power stations in weekly units; short-term forecasts are used to arrange reasonable daily power generation in hourly or daily units (Ni et al. 2017); ultra-short-term forecasting is used for real-time dispatching of power grids in minutes or one hour (Monfared et al. 2019). In the economic dispatching of the power grid, He et al. (2019) suggested that short-term prediction plays a decisive role, directly influencing the security and stability of system operation. Semero, Zheng, and Zhang (2018) also pointed out that planning based on short-term prediction in the PV system can ensure a reliable and economical power supply.
Alternatively, the forecast methods of PV output power can be classified into direct and indirect ones. The indirect method estimates the output power according to predicted variables. Due to complex and changeable weather conditions, the current prediction accuracy of this approach is still insufficient (Pierro et al. 2017). The direct method directly takes historical data as input variables to predict the power output. It usually uses linear time-series forecast models, nonlinear models, or hybrid models of the two. The autoregressive moving average (ARMA) model and the autoregressive (AR) model belong to the time-series models (Bae et al. 2019). Among the nonlinear prediction methods, there are increasing application cases using, for example, the extreme learning machine (ELM), support vector machine (SVM) and back propagation neural network (BP), aiming at minimum prediction error (Lin et al. 2018; Rana, Koprinska, and Agelidis 2016). Li et al. (2016) used a hidden Markov model and support vector machine regression to predict short-term PV generation from solar radiation intensity. Li et al. (2019) applied the SVM model combined with a hybrid improved multi-verse optimizer for short-term PV power output prediction. Wang, Qi, and Liu (2019) revealed that photovoltaic power prediction is of great help to the stable operation of the photovoltaic system. To enhance the forecast ability, the direct prediction method is selected to predict the PV output power in this study. The support vector machine model is used as the prediction model, and the improved whale optimization algorithm (IMWOA) is developed to search for the optimal parameter combination in the support vector machine model. The article consists of six major sections. Section 2 gives literature reviews on photovoltaic power generation prediction methods. Section 3 introduces the construction of the integrated prediction model, including the improved whale optimization algorithm and the support vector machine model. The results and analysis of photovoltaic power generation prediction are provided in Section 4. The discussion is presented in Section 5. The conclusions are made in Section 6.

Literature Review

Wang et al. (2018b) used ARMA, BP and SVM models to predict PV power generation. The results showed that the proposed method could effectively increase the prediction accuracy. Xie et al. (2018) proposed a short-term hybrid forecast model that mixed a deep belief network (DBN) and variational mode decomposition (VMD) with ARMA, which could better regulate the operation of the power system. Raza, Nadarajah, and Ekanayake (2017) used a hybrid model, including wavelet transform (WT), ARMA, radial basis function (RBF) and neural network, to predict short-term PV power. However, such models may show large deviations due to the lack of a nonlinear mechanism. To enhance the precision of PV power generation forecasts, Al-Dahidi et al. (2019) proposed an artificial neural network model that combined 10 different learning algorithms and 23 different training data sets. However, the proposed model was complex, with limitations in its application scenarios. Hua et al. (2019) reported a long short-term memory back propagation (LSTM-BP) method for power generation forecasting. Unfortunately, the training speed of this algorithm was slow, as all network parameters needed to be updated during each training process. Al-Dahidi et al. (2018) developed an extreme learning machine (ELM) model to predict PV power generation in a 264 kWp PV system.
The simulation results revealed that the forecast with the ELM model was more accurate than with the BP neural network model. Cheng, Liu, and Zhang (2019) proposed an optimization model that enhanced the ELM model parameters using the genetic algorithm (GA), and a Gaussian mixture model (GMM) was used to correct the forecasted values of PV power generation. Liu et al. (2020) introduced a chicken flock optimizer to optimize the ELM parameters for forecasting PV power under various meteorological conditions. The results showed that a better forecast precision was achieved. Mojumder et al. (2016) simplified the complex mathematical problems in PV power prediction using an SVM model combined with wavelets, radial basis functions and the firefly algorithm. To effectively solve the security problems in grid-connected PV systems, Eseye, Zhang, and Zheng (2018) proposed a particle swarm optimization SVM (PSOSVM) model, showing better short-term PV power generation forecasts than the SVM models. van der Meer et al. (2018) combined a genetic algorithm with the SVM model to achieve more accurate prediction than SVM models. Yang, Zhu, and Peng (2020) applied gray correlation theory to find the main factors that may affect the consumption of clean energy. The results showed that the proposed model has a good forecast performance. Currently, some research has been working on the improvement of the whale optimization algorithm (WOA). For example, Xiong, Hu, and Guo (2021) improved the WOA convergence speed by introducing a nonlinear adjustment scheme. It was then used to optimize a gray seasonal variation index model to achieve high prediction accuracy with fast speed. For the initialization of the whale population, Gao et al. (2022) applied a random method and a chaotic sequence method to generate two initial populations, which enhanced the diversity of individuals. Two different convergence strategies were also introduced to boost the search ability of the algorithm. It was then used to optimize the ELM model for better prediction accuracy with less time required. Liu et al. (2021a) integrated SVM into the WOA. The initial population became diverse and the optimization ability was thus enhanced. The improved algorithm was then used to optimize the SVM model to improve the prediction accuracy, but the complex nonlinear relationship behind the data was not deeply considered.

Principle and Improvement of WOA

WOA is based on the unique predation strategy of humpback whales. In the optimization process, three stages form the main parts of search and optimization (Mirjalili and Lewis 2016a; Simhadri and Mohanty 2019; Yuan et al. 2018).

(1) Foraging encirclement stage. When a whale is close to the prey location, the whale group will immediately work together to approach the target for rounding up. The whale position updating process is

x_{t+1} = x*_t − A · D    (1)

D = |C · x*_t − x_t|    (2)

where x_t is the position of an individual of the whale group, x*_t is the position of the optimal individual in the whale group, x_{t+1} is the individual position of the whale group after the update, and D represents the distance between the whale and the optimal individual. A and C are coefficients, defined as

A = 2a · n_1 − a    (3)

C = 2 · n_2    (4)

where n_1 and n_2 are randomly chosen between 0 and 1. The value of a lies between 0 and 2, and it decreases with increasing iteration in a linear downward trend.

(2) Bubble predation stage. The bubble predation behavior of humpback whales includes two processes: shrink encirclement and spiral rise.
Shrinking encirclement means that the individual closest to the prey is selected as the best search agent in the whale population, and the other whales move closer to the currently selected individual. Each whale updates its position according to the current optimal position of the population, and adjusting the coefficient values of A and C can control the whale to search near its prey. Whales can perform the spiral contraction encirclement behavior according to the value of a. The spiral rising process is used to simulate the whale's spiral motion, and the whale position updating equation is

x_{t+1} = D′ · e^{bl} · cos(2πl) + x*_t    (5)

D′ = |x*_t − x_t|    (6)

where D′ is the distance between the prey and the whale, l is randomly chosen between −1 and 1, and b is a constant representing the shape of the helix. To simulate the simultaneous occurrence of contraction encirclement and spiral rise, a mathematical model for whale position updating is constructed:

x_{t+1} = x*_t − A · D,  if p < 0.5
x_{t+1} = D′ · e^{bl} · cos(2πl) + x*_t,  if p ≥ 0.5    (7)

The value of p is randomly chosen and uniformly distributed between 0 and 1. According to the value of p, the whale chooses the spiral model or contraction encirclement to change its position during the optimization process. When p is less than 0.5, the whale performs the contraction encirclement process; if p is greater than 0.5, the whale moves in a spiral. Note that p is uniformly distributed in [0, 1], so the probability of choosing either mode is 50%.

(3) Food search stage. The whale food search behavior is realized by changing the value of A. In Equation (3), A is a random number in [−a, a]. When |A| is greater than 1, a search agent is stochastically chosen to change the position of the other whales, enhancing the WOA exploration capability. Consequently, the whale can accomplish the global search by approaching the position of the randomly selected whale. The whale position x_t is updated to x_{t+1} as follows:

x_{t+1} = x_rand − A · D_rand    (8)

D_rand = |C · x_rand − x_t|    (9)

where x_rand is the position of a randomly selected whale, and D_rand represents the distance between the whale and the randomly selected individual. The WOA starts with random positions, and each search agent changes its position in every iteration according to the currently available optimal individual or a randomly selected search agent. When |A| > 1, the stochastic search agent is selected; when |A| < 1, the optimal position is used to update the position of the search agent. The value of p determines whether WOA carries out contraction encirclement or spiral motion. Finally, the WOA process stops once the specified iteration number is reached. When WOA is applied to high-dimensional problems, it may only obtain a locally optimal solution, which leads to the deterioration or even failure of the optimization effect. To prevent the WOA from being trapped in a locally optimal solution, an adaptive factor Q is introduced into the position update Equation (1). As the number of iterations increases, the value of Q gradually decreases from 1 to 0. The improved position update enables whales to conduct local optimization while approaching prey, thus improving the local search capability of WOA. To further promote the WOA global search capability, a mutation operator is introduced, in which Cauchy(t) denotes a random variable that obeys the Cauchy distribution; it increases the search randomness of the whale optimization algorithm, and thus the global search capability is enhanced.
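To make the update rules above concrete, the following Python sketch implements a WOA-style minimizer with the two modifications described in the text. Since the source does not display the improved equations, the placement of the adaptive factor Q in Equation (1) and the exact form of the Cauchy mutation are assumptions, marked as such in the comments.

```python
import numpy as np

def imwoa(fitness, dim, bounds, n_agents=30, n_iter=500, b=1.0, seed=0):
    """Minimize `fitness` with a whale-optimization-style search.

    Sketch of the IMWOA described in the text: standard WOA updates plus
    (i) an adaptive factor Q decaying from 1 to 0 and (ii) a Cauchy
    mutation of the best solution (both assumed placements)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_agents, dim))   # whale positions
    fit = np.apply_along_axis(fitness, 1, x)
    best, best_fit = x[fit.argmin()].copy(), fit.min()

    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)                # a: 2 -> 0 linearly
        Q = 1.0 - t / n_iter                        # adaptive factor: 1 -> 0
        for i in range(n_agents):
            A = 2.0 * a * rng.random(dim) - a       # Eq. (3)
            C = 2.0 * rng.random(dim)               # Eq. (4)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):         # shrink encirclement
                    D = np.abs(C * best - x[i])     # Eq. (2)
                    x[i] = best - Q * A * D         # Eq. (1) with Q (assumed)
                else:                               # global food search, Eq. (8)
                    x_rand = x[rng.integers(n_agents)]
                    x[i] = x_rand - A * np.abs(C * x_rand - x[i])
            else:                                   # spiral rise, Eq. (5)
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - x[i])
                x[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
            x[i] = np.clip(x[i], lo, hi)
        fit = np.apply_along_axis(fitness, 1, x)
        if fit.min() < best_fit:
            best, best_fit = x[fit.argmin()].copy(), fit.min()
        # Cauchy mutation of the best individual (assumed form and scale).
        trial = np.clip(best + rng.standard_cauchy(dim) * 0.01 * (hi - lo),
                        lo, hi)
        if fitness(trial) < best_fit:
            best, best_fit = trial, fitness(trial)
    return best, best_fit

# Demo on the sphere function F1 with the paper's settings
# (30 agents, 500 iterations, 30 dimensions).
sphere = lambda x: float(np.sum(x ** 2))
best, best_fit = imwoa(sphere, dim=30, bounds=(-100.0, 100.0))
print("best fitness:", best_fit)
```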
Performance Test of Improved WOA

In this study, eight test functions are selected to test the convergence ability of the IMWOA, as shown in Table 1 (Liu et al. 2021b). The IMWOA, the Multi-Verse Optimizer (MVO), the Ant Lion Optimizer (ALO), WOA, the Grasshopper Optimization Algorithm (GOA), particle swarm optimization (PSO) and the Seagull Optimization Algorithm (SOA) are tested and compared (Dhiman and Kumar 2019; Mirjalili 2015; Mirjalili, Mirjalili, and Hatamlou 2016b; Shahrzad et al. 2017; Zhang, Wang, and Lu 2022). Under the same conditions, the populations are set to 30, the iterations are set to 500, the dimensions are set to 30, and the other parameters take default values. Each model is tested 30 times, and the maximum, minimum and average values of each test are listed in Table 1, where the bound denotes the value range of x_i and x_j, and F_min is the minimum value to which the function converges (Mirjalili and Lewis 2016). The convergence values for the various test functions are shown in Table 2. In Table 2, the convergence results for F1 show that PSO has the largest value and IMWOA the smallest, whether for the maximum, minimum, or average. In F2, ALO has the largest maximum and average convergence values, while IMWOA reaches the smallest values. In F3, the convergence value of WOA is the largest and that of IMWOA is the smallest. In F4, the convergence values of WOA and IMWOA are much smaller than those of the other algorithms. In F5, the minimum value of WOA equals 0, but its maximum and average values are slightly higher than 0, and all other algorithms have values higher than 0; IMWOA, on the other hand, converges to zero for the maximum, average and minimum values. In F6, the maximum value of IMWOA is the smallest among the seven models. In F7 and F8, the maximum, average and minimum convergence values of IMWOA are the smallest among all algorithms. Each test function is applied to the seven models, and the convergence fitness values over the iterations are shown in Figure 1. From Figure 1, the IMWOA model is confirmed to reach the fastest convergence speed in the F1, F2, F3, F4 and F5 tests, and its convergence value is the smallest, being closest to 0. In F6, the convergence value of IMWOA is slightly higher than that of MVO, but its convergence speed is still the fastest. In F7 and F8, IMWOA has the fastest convergence speed and the smallest convergence value.
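A harness of the kind used to produce the 30-run statistics in Table 2 can be sketched as follows. The random-search baseline is only a placeholder with the same call signature as the imwoa sketch above, so any of the compared optimizers could be swapped in.

```python
import numpy as np

def random_search(fitness, dim, bounds, n_iter=15000, seed=0):
    # Trivial baseline with the same call shape as the imwoa sketch above.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    xs = rng.uniform(lo, hi, size=(n_iter, dim))
    fits = np.apply_along_axis(fitness, 1, xs)
    i = fits.argmin()
    return xs[i], float(fits[i])

def benchmark(optimizer, fitness, dim, bounds, runs=30):
    """Run `optimizer` `runs` times and report the max, min, and mean of
    the final convergence values, mirroring Table 2's statistics."""
    finals = np.array([optimizer(fitness, dim, bounds, seed=s)[1]
                       for s in range(runs)])
    return finals.max(), finals.min(), finals.mean()

sphere = lambda x: float(np.sum(x ** 2))     # test function F1
print("max/min/mean:", benchmark(random_search, sphere, 30, (-100.0, 100.0)))
```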
Principle of SVM Model

SVM is superior in structural risk minimization (SRM) and operation speed. The problems caused by small samples, nonlinearity and high dimensionality can be avoided using SVM, and it plays a key role in pattern recognition, classification and regression forecasting (Preda et al. 2018). For prediction and classification problems, SVM can be divided into support vector classification (SVC) and support vector regression (SVR). To forecast the PV output power, SVR is used in this study, and the theory is shown below. Let G = {(x_i, y_i), i = 1, ..., m} be a given dataset, where x_i is the input training sample and y_i is the output training sample. The general linear regression equation of SVR is constructed as

f(x) = w · x + b    (13)

where w represents the weight vector, i.e., the coefficient of x, and b represents a constant. x_i can be substituted into x in Equation (13) for calculation, and f(x) refers to the output sample value, which may produce an error compared with y_i. Both w and b are selected by the SRM principle. To enhance the generalization ability, the problem is formulated as

min  (1/2)‖w‖² + C Σ_{i=1}^{m} (ξ_i + ξ_i*)
s.t.  y_i − w · x_i − b ≤ ε + ξ_i,
      w · x_i + b − y_i ≤ ε + ξ_i*,
      ξ_i ≥ 0,  ξ_i* ≥ 0

where ε represents the insensitive-loss parameter, ξ_i and ξ_i* represent relaxation variables with different values, C represents the penalty coefficient, and m is the number of training samples. A Lagrangian function is established with multipliers α, α*, μ and μ*, which are greater than zero and take different values. Setting the partial derivatives of the Lagrangian with respect to w, b, ξ_i and ξ_i* to zero yields the dual problem. The PV output power is subject to many elements with multi-dimensional characteristics. To handle this situation, a kernel function is introduced, which maps the data into a high-dimensional space on the basis of the existing model. The nonlinear regression model can then be expressed as

f(x) = Σ_{i=1}^{m} (α_i − α_i*) k(x_i, x) + b

where k(x_i, x) represents the kernel function. The Gaussian kernel function is used in this study:

k(x_i, x) = exp(−‖x_i − x‖² / (2σ²))

where σ is the bandwidth of the Gaussian kernel and is the key parameter of the kernel function. The values of the penalty coefficient C and the kernel parameter σ determine the regression performance of SVM, and choosing appropriate parameters can enhance its forecast performance. Therefore, this study uses the hybrid improved whale optimization algorithm to select the SVM parameters.

Prediction of Photovoltaic Output Power by Optimizing the Support Vector Machine Model with the Improved Whale Algorithm

The kernel function parameter σ and penalty coefficient C in SVM are optimized by IMWOA. The mean square error (MSE) defined in Equation (24) is used as the fitness function:

MSE = (1/m) Σ_{i=1}^{m} (s_i − ŝ_i)²    (24)

where m is the total number of test samples, s_i is the true value, and ŝ_i is the forecast value. The flowchart of the prediction process is shown in Figure 2. The main steps are as follows: (1) PV power generation data are divided into test and training data sets. (2) Determine the model inputs and output. (3) Normalize the test and training data. (4) Initialize the IMWOA model: set the number of search agents, the iteration number M, the dimension, the search ranges of C and σ, etc. (5) Initialize the search agent positions. (6) Calculate the fitness value of each search agent and record the current best individual. (7) Update the population positions using the IMWOA algorithm, calculate the fitness value of each individual in each generation, and select the optimal value as the best individual of that generation. (8) When the iterations are over, select the global optimal individual from the best individuals of each generation. (9) Input the global optimal individual into the SVM model. (10) Perform the PV output power prediction using the optimized SVM model. (11) Denormalize the predicted data.

Prediction Results and Analysis of PV Power Generation

The data used in this study came from the Desert Knowledge Australia Solar Centre. The sunny data from August 8 to August 12, 2017, and the cloudy sample data from October 6 to October 10, 2017, were selected. The sample data of the first 4 days were used for training, and the sample data of the last day were used for testing, in both sunny and cloudy weather. The output power, weather temperature, relative humidity and light intensity between 9:00 am and 4:00 pm were recorded every 5 minutes.
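The overall pipeline can be sketched as follows. The data arrays are hypothetical stand-ins for the Desert Knowledge Australia Solar Centre records, and a coarse grid search stands in for the IMWOA-driven search over (C, σ); note that with the kernel definition above, sklearn's RBF kernel parameter is gamma = 1/(2σ²).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# Hypothetical rows of [light intensity, temperature, relative humidity];
# y is the PV output power.
rng = np.random.default_rng(0)
X = rng.random((400, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] - 0.05 * X[:, 2] + 0.01 * rng.random(400)
X_train, X_test, y_train, y_test = X[:320], X[320:], y[:320], y[320:]

# Steps (3)/(11): normalize inputs; a [0, 1] scaling as a stand-in.
scaler = MinMaxScaler().fit(X_train)

def fitness(params):
    """MSE fitness of Eq. (24) for a candidate (C, sigma) pair."""
    C, sigma = params
    model = SVR(kernel="rbf", C=C, gamma=1.0 / (2.0 * sigma ** 2))
    model.fit(scaler.transform(X_train), y_train)
    pred = model.predict(scaler.transform(X_test))
    return mean_squared_error(y_test, pred)

# Step (7): in the paper the search over (C, sigma) is driven by IMWOA;
# here a coarse grid stands in for the optimizer.
grid = [(C, s) for C in (1, 10, 100, 1000) for s in (0.01, 0.03, 0.1, 0.3)]
best = min(grid, key=fitness)
print("best (C, sigma):", best, "MSE:", fitness(best))
```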
Correlation between Meteorological Elements and Output Power of PV Power Generation

To explore the influence of meteorological elements on photovoltaic power generation, sunny weather and cloudy weather are investigated in this study (Liu et al. 2020). In sunny weather, the relationships between light intensity and output power, relative humidity and output power, and temperature and output power are shown in Figure 3a-c, respectively. In cloudy weather, the corresponding relationships are shown in Figure 4a-c. The curves in Figures 3 and 4 reveal that only light intensity has a positive correlation with the PV output power, whether on sunny or cloudy days. To further explore the correlation between light intensity, relative humidity, temperature and photovoltaic output power, the Pearson correlation coefficient method is used to calculate the correlation between light intensity and output power, relative humidity and output power, and temperature and output power (Biswas and Samanta 2021). The Pearson correlation coefficient is denoted by ρ and is calculated as

ρ = (N Σ XY − Σ X Σ Y) / (√(N Σ X² − (Σ X)²) · √(N Σ Y² − (Σ Y)²))    (25)

where N represents the number of calculated samples, and X and Y represent the two variables whose correlation strength is to be verified (light intensity and output power, relative humidity and output power, or weather temperature and output power). The value of ρ lies in [−1, 1]; the larger the absolute value of ρ, i.e., the closer it is to 1, the stronger the correlation between the two variables. The correlation degree corresponding to ρ is defined in Table 3 (Liu et al. 2020). The correlation values between light intensity and output power, relative humidity and output power, and temperature and output power calculated by Equation (25) are shown in Table 4. In Table 4, the correlation between light intensity and output power is close to 1, i.e., almost 100% correlation on both sunny and cloudy days. On sunny days, the correlation coefficient ρ between relative humidity and output power is −0.5490, and the ρ between weather temperature and output power is 0.5485; the results show a strong correlation between these factors. On cloudy days, the ρ between relative humidity and output power is −0.2219, and the ρ between weather temperature and output power is 0.2759; the results indicate a weak correlation between these factors.
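Equation (25) can be checked directly against a library implementation; the series below are hypothetical stand-ins for the measured records.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical 5-min records standing in for the measured data.
rng = np.random.default_rng(0)
light = rng.random(85)
power = 2.0 * light + 0.02 * rng.random(85)        # strongly light-driven
humidity = 1.0 - 0.5 * light + 0.3 * rng.random(85)

def pearson(x, y):
    # Eq. (25) written out explicitly.
    n = len(x)
    num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    den = np.sqrt(n * np.sum(x ** 2) - np.sum(x) ** 2) * \
          np.sqrt(n * np.sum(y ** 2) - np.sum(y) ** 2)
    return num / den

print("rho(light, power):    %.4f" % pearson(light, power))
print("rho(humidity, power): %.4f (scipy: %.4f)"
      % (pearson(humidity, power), pearsonr(humidity, power)[0]))
```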
Prediction of PV Output Power

To verify the prediction effect, this study compares the proposed IMWOASVM model with other methods: the traditional back propagation neural network model (BP), the extreme learning machine model (ELM), the support vector machine model (SVM), the particle swarm optimization algorithm with support vector machine model (PSOSVM), the genetic optimization algorithm with support vector machine model (GASVM) and the whale optimization algorithm with support vector machine model (WOASVM). The light intensity, weather temperature and relative humidity are taken as the inputs of the above prediction models, and the output power is taken as the output.

Prediction Results in Sunny Weather

In sunny weather, the PV output power predicted by the seven models is shown in Figure 5, and the detailed data are listed in Appendix Table A1. Generally, all models mostly fit the true values. However, in the range of samples 0-15, the results of the ELM and BP models are slightly higher than the true ones, and the result of the PSOSVM model is lower than the true one. In the range of samples 26-60, the prediction error of BP is significantly higher than the others. It can therefore be concluded that the PSOSVM, GASVM, SVM, WOASVM and IMWOASVM models are generally superior to the BP and ELM models. The parameters of the SVM model optimized by IMWOA are C = 547.7225 and σ = 0.03. To better present the forecast results, the relative error δ is defined as

δ = (Δs / s) × 100%    (26)

where Δs represents the absolute error (Δs = ŝ − s), ŝ denotes the predicted output power, and s is the true value. The relative error curves of the BP, ELM, SVM, PSOSVM, GASVM, WOASVM and IMWOASVM models in sunny weather are shown in Figure 6, and the detailed data are listed in Appendix Table A2. As can be seen, in the range of samples 0-20, the relative errors of WOASVM and IMWOASVM are confined to a small band of [−5%, 5%], while the errors of BP, SVM, PSOSVM, GASVM and ELM are larger, especially at the initial prediction stage. The maximum error of ELM exceeds 15%, that of BP exceeds 10%, and those of GASVM and SVM exceed 5%. In the range of samples 20-30, the prediction error of BP is higher than that of the other models. In the remaining range, the errors of the ELM, SVM, WOASVM and IMWOASVM models all lie within [−5%, 5%]. In conclusion, both the WOASVM and IMWOASVM models present better prediction performance than the others in sunny weather.

Prediction Results in Cloudy Weather

In cloudy weather, the PV output power forecast curves of the BP, ELM, SVM, PSOSVM, GASVM, WOASVM and IMWOASVM models are shown in Figure 7, and the detailed data are listed in Appendix Table A3. As shown in Figure 7, in samples 0-10 the forecast output power is not consistent with the true one, while the predicted values of ELM and IMWOASVM are closer to the true values. In the range of samples 10-40, the GASVM, WOASVM and IMWOASVM models gradually fit the true output power curve; however, the prediction curves of the BP and ELM models deviate far from the true one. In samples 40-70, the forecast values show good prediction performance. In the remaining range, the ELM, WOASVM and IMWOASVM models are closer to the true values than the BP, SVM, PSOSVM and GASVM models. It can thus be concluded that the prediction of IMWOASVM is more consistent with the true curve, indicating a better prediction effect. The parameters of the SVM model optimized by IMWOA are C = 214.7622 and σ = 0.0612. The relative errors δ of the prediction results are shown in Figure 8, and the detailed data are listed in Appendix Table A4. From Figure 8, it is found that all models have a significant relative error in the early and late stages of the prediction. However, the errors of the ELM and IMWOASVM models are smaller than those of the other models. In the range of samples 30-60, the error curves of all models gradually approach 0%. During this period, the relative errors of the BP, PSOSVM, GASVM, WOASVM, SVM and IMWOASVM models are small, while that of the ELM model is slightly larger.
In the range of samples 60-85, the errors of SVM, GASVM, PSOSVM and BP gradually increase from the 0% baseline, while the prediction error curves of ELM, WOASVM and IMWOASVM remain near the 0% baseline, only slightly increasing at the end. It can therefore be concluded that the maximum errors of WOASVM and IMWOASVM appear at the start of the prediction, but they are smaller than those of the other five models over the whole prediction period. Overall, the prediction error of IMWOASVM is the lowest among all models.

Prediction Evaluation

To further evaluate the forecast results, the mean absolute error (MAE), root-mean-square error (RMSE) and coefficient of determination (R²) are used in this study. MAE is a measure of the errors between the true values and the forecasted ones. RMSE is the square root of the mean of the squared errors. The determination coefficient R² is the proportion of the variance in the dependent variable that is predictable from the independent variable(s), and its value is confined between 0 and 1; the closer it is to 1, the higher the fitting degree, whereas the prediction has higher errors when it is closer to 0. The three metrics are defined as

MAE = (1/m) Σ_{i=1}^{m} |ŝ_i − s_i|

RMSE = √[(1/m) Σ_{i=1}^{m} (ŝ_i − s_i)²]

R² = 1 − Σ_{i=1}^{m} (s_i − ŝ_i)² / Σ_{i=1}^{m} (s_i − s̄)²

where m is the total number of test samples, s_i is the true value, ŝ_i is the forecast value, and s̄ is the mean of the true values. The values of MAE, RMSE and R² on sunny and cloudy days are shown in Table 5. In sunny weather, the MAE values of the WOASVM and IMWOASVM models are 0.0253 and 0.0251, respectively, which are better than those of the other five models. From the RMSE results, the percentages of the SVM, PSOSVM, GASVM, WOASVM and IMWOASVM models are relatively small, at 2.20%, 1.77%, 2.54%, 2.19% and 2.17%, respectively, showing better prediction accuracy. In R², all models except ELM remain above 99%.

Discussion

The PV output power is greatly affected by meteorological conditions, which may threaten the safety and stability of the power system when the PV generation system is connected to the grid. This study aims to accurately predict the PV output power and avoid the impact of PV power fluctuation on the power system. In this work, IMWOA was used to optimize the SVM model for accurately predicting the PV output power. The IMWOA and six other intelligent algorithms were tested using eight test functions under the same conditions, e.g. equal population size, dimensionality, and number of iterations. Through the analysis of the test results, the IMWOA was verified to converge faster than the other tested algorithms. In addition, the convergence accuracy of IMWOA was better than that of all other tested optimization algorithms except on F6. Generally, it can be concluded that the IMWOA presents the most comprehensive performance and the best search capability. The IMWOASVM prediction model was developed based on the combination of IMWOA with SVM. It and six other models were used to produce PV power output forecasts under sunny and cloudy weather conditions. The prediction results were evaluated using MAE, RMSE and R². In sunny weather, the MAE, RMSE and R² of IMWOASVM are 0.0251, 2.17% and 99.88% respectively, achieving the smallest prediction error and the best prediction result among all models. In cloudy weather, the MAE, RMSE and R² of IMWOASVM are 0.0705, 1.03% and 99.09% respectively, which are better than those of the other models. Clearly, the IMWOASVM prediction model is confirmed to be more suitable for predicting the PV output power under either weather condition. Consequently, it can provide data support for reasonably arranging power generation tasks. It is also conducive to PV power generation efficiency, maintaining the balance between clean energy production and demand.
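As a compact reference for the evaluation used above, the sketch below computes MAE, RMSE, R², and the relative error of Equation (26); the power series is a hypothetical stand-in for a test day.

```python
import numpy as np

def evaluate(s_true, s_pred):
    """MAE, RMSE, R^2, and the relative error of Eq. (26) for a forecast."""
    s_true = np.asarray(s_true, float)
    s_pred = np.asarray(s_pred, float)
    mae = np.mean(np.abs(s_pred - s_true))
    rmse = np.sqrt(np.mean((s_pred - s_true) ** 2))
    r2 = 1.0 - np.sum((s_true - s_pred) ** 2) / \
               np.sum((s_true - s_true.mean()) ** 2)
    delta = (s_pred - s_true) / s_true * 100.0   # relative error, %
    return mae, rmse, r2, delta

# Hypothetical output power (kW) for one test day.
true = np.array([1.0, 1.5, 2.2, 2.8, 2.5, 1.9, 1.2])
pred = np.array([1.1, 1.4, 2.3, 2.7, 2.5, 2.0, 1.1])
mae, rmse, r2, delta = evaluate(true, pred)
print("MAE=%.4f RMSE=%.4f R2=%.4f max|delta|=%.1f%%"
      % (mae, rmse, r2, np.max(np.abs(delta))))
```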
Conclusions

In this study, a forecast model based on IMWOA combined with SVM for short-term PV power prediction has been developed successfully. The results reveal that the proposed IMWOASVM model performs better than other models in PV power prediction on both sunny and cloudy days. The main contributions are as follows: (1) Based on the introduction of the mutation and adaptive factors into the WOA, the proposed IMWOA has been successfully developed to upgrade the search capability efficiently. (2) The test function experiments verify that the IMWOA model achieves the fastest convergence speed and the lowest convergence error among the tested algorithms MVO, ALO, WOA, GOA, SOA and PSO. (3) The IMWOA can effectively find the optimal combination of C and σ in SVM, so that the forecast ability for PV output power can be further enhanced. (4) Compared with the WOASVM, ELM, SVM, GASVM, PSOSVM and BP prediction models, the IMWOASVM model reaches the smallest MAE and RMSE values, and its R² remains beyond 99% under the two different weather conditions. Future research is suggested to consider more varied weather conditions in a real environment. The long-term PV forecast may be advanced to further maintain the operational safety and stability of the power grid network.
Mt-Hsp70 Homolog, Ssc2p, Required for Maturation of Yeast Frataxin and Mitochondrial Iron Homeostasis

Here we show that the yeast mitochondrial chaperone Ssc2p, a homolog of mt-Hsp70, plays a critical role in mitochondrial iron homeostasis. Yeast with ssc2-1 mutations were identified by a screen for altered iron-dependent gene regulation and mitochondrial dysfunction. These mutants exhibit increased cellular iron uptake, and the iron accumulates exclusively within mitochondria. Yfh1p is homologous to frataxin, the human protein implicated in the neurodegenerative disease Friedreich's ataxia. Like mutants of yfh1, ssc2-1 mutants accumulate vast quantities of iron in mitochondria. Furthermore, using import studies with isolated mitochondria, we demonstrate a specific role for Ssc2p in the maturation of Yfh1p within this organelle. This function for a mitochondrial Hsp70 chaperone is likely to be conserved, implying that a human homolog of Ssc2p may be involved in iron homeostasis and in neurodegenerative disease.

Iron is required as a cofactor for critical proteins within the mitochondria of eukaryotic cells. These proteins include heme and iron-sulfur proteins involved in diverse processes such as cellular respiration and the synthesis of metabolic intermediates. However, iron is also extremely toxic, capable of generating damaging free radicals (3). Therefore, homeostatic mechanisms exist that regulate iron levels and iron protein levels within mitochondria. Yfh1p is a mitochondrial protein of Saccharomyces cerevisiae that is involved in this homeostasis (4-8). Yeast with mutations in yfh1 accumulate iron within mitochondria (4,5) and yet are deficient in some mitochondrial iron proteins (5,6). Yfh1p is homologous to the human protein frataxin (7,8), and mutations in frataxin are associated with the neurodegenerative disease Friedreich's ataxia (9). At the cellular level, iron accumulation occurs in affected tissues in these patients, and iron proteins such as aconitase and cytochrome oxidase are deficient (6). The manner in which Yfh1p in yeast (or frataxin in humans) affects the iron homeostasis of mitochondria has not been defined. This work implicates a member of the class of Hsp70 proteins in this process. Two distinct Hsp70 proteins are found in the mitochondria of S. cerevisiae (1). One of these, Ssc1p, is essential for viability and is involved in the import and subsequent folding of nuclear-encoded proteins in mitochondria (10-13). The second, Ssc2p, is one thousand-fold less abundant, and its physiological role has not been previously determined (1). We show here that Ssc2p plays a role in mitochondrial iron usage and in the maturation of Yfh1p.

Assays - The assay for ferric reductase was a filter lift assay (16), modified by the addition of 50 μM copper sulfate and 10 μM ferric ammonium sulfate to YPD agar plates for growth of the colonies to be assayed. Measurement of the high affinity radioactive iron uptake rate has been described (15). To assess mitochondrial iron, the cells were grown for 16 h (6-8 doublings) in SD raffinose with different concentrations of radioactive 55Fe, and mitochondria were purified (17). The cells for microscopy were grown in SD raffinose with 5 μM ferric ammonium sulfate as above. The preparation of yeast for electron microscopy has been described (18). The electron microscope was a Jeol 100CX model fitted with an energy dispersive spectrophotometer (19). Mitochondrial import studies were described previously (20).
Briefly, import reactions containing 100 μg of mitochondria were initiated by adding urea-denatured preprotein (30-40 ng). Import reaction mixtures contained 4 mM ATP and 1 mM GTP. Following import at 20°C for 5 or 15 min, reaction mixtures were treated with trypsin (0.1 mg/ml) for 30 min at 0°C. The protease was inactivated, and the samples were analyzed by SDS-polyacrylamide gel electrophoresis and fluorography.

Plasmids and DNA Manipulations - Plasmid pSC30, isolated from a yeast genomic library (21), contained yeast genomic sequences from Chromosome XII, coordinates 858900-870300, and included the open reading frames SUR4, ROM2, ARC18, and SSC2 (YLR369W). Plasmid pSC30-3, containing SSC2, was created by subcloning the EcoRI-KpnI genomic fragment. The plasmid pSC30-3Δnot contained a frameshift mutation in the open reading frame at the unique NotI site. For meiotic mapping, the EcoRI fragment from within ROM2 inserted into YIp5 (prom2-YIp5) was integrated into CM3260 at its unique SacI site, and this strain was crossed with 35-5B (ssc2-1). The YFH1 open reading frame was inserted into the NdeI-XhoI sites of vector pET21b (Novagen).

RESULTS

The present investigation was an outgrowth of our general interest in iron trafficking in S. cerevisiae. Yeast cells were subjected to a selection procedure designed to detect mutants with abnormal iron metabolism. The iron-repressible promoter of the FRE1 gene was fused to the HIS3 coding region and integrated into the genome of a haploid yeast strain. Cells harboring this gene fusion were then cultured in a medium supplemented with iron but without histidine. Mutants were selected that were unable to repress FRE1-driven gene transcription, indicating a defect in iron uptake (15), iron sensing (22), or iron distribution. A subset of these mutants was identified that was unable to grow on medium containing ethanol as a carbon source, an indicator of mitochondrial dysfunction. One such mutant, 35-5B, was chosen for further study. The mutant retained ferric reductase activity under conditions (available iron and copper) that led to repressed activity in the wild-type. This assay, used to track the mutation in genetic analyses, indicated that the mutant phenotype was recessive. Sporulation of this diploid strain yielded 30 tetrads showing 2+:2− segregation of the mutant phenotype, indicating that the mutation was at a single locus. The mutant was then transformed with a genomic library, and a complementing plasmid, pSC30, was isolated (21). The complementing activity was retained by pSC30-3, which contained the single complete open reading frame of SSC2, and this activity was abrogated by the frameshift mutation introduced into pSC30-3Δnot. Rescue of the ssc2-1 allele and sequence analysis identified a single T to G point mutation at nucleotide 658 within the open reading frame, thereby generating a stop codon within the amino-terminal portion of the predicted protein. The correct identification of SSC2 as the wild-type allele of the mutation in the 35-5B strain was further verified by meiotic mapping.
A URA3-marked allele of the genomic fragment carried on pSC30 was integrated into the parental strain and crossed with the ura3-52 mutant strain 35-5B. Recombination between the mutant phenotype and the URA3 marker was not observed in the 12 tetrads analyzed. To evaluate mitochondrial function in the ssc2-1 strain, we investigated the growth of this strain on media with nonfermentable carbon sources. Heterogeneity arose because of loss or inactivation of mtDNA in some cells from the mutant population, as has previously been described for ssc2 mutants (1). The degree of mtDNA damage in the mutant population was ascertained by crossing haploid ssc2-1 mutant cells with rho⁰ tester cells of the opposite mating type. Diploid clones derived from these zygotes were evaluated for the ability to grow on ethanol-based medium, and 8 of 18 (44%) did not grow (Fig. 1A), indicating inactivation of mtDNA in those cells. Slow growth of ssc2 mutants was reflected in the small colony size (Fig. 1B), as has been described previously (1). The slow growth was exacerbated at lower (23°C) or higher (37°C) incubation temperatures, suggesting susceptibility of the mutants to environmental stresses (Fig. 1B). A link to iron metabolism was anticipated because of the way the mutants were selected. When the ssc2-1 mutant tetrad clones, 191-36A and 191-36B, were spotted on plates containing the iron chelator ferrozine, normal growth was observed (Fig. 1B). Conversely, growth of these mutants was inhibited in the presence of iron (Fig. 1B), suggesting a toxic effect of the iron on cell proliferation or cell viability. This iron-sensitive growth was correlated with a marked increase in the rate of high affinity iron uptake in the mutants (Fig. 1C). These observations show that the normal homeostatic regulation of cellular iron uptake was perturbed in the ssc2-1 mutant. To directly assess the iron content of the mitochondria, cells from the wild-type, a congenic rho⁰ strain, and the ssc2-1 mutant were cultured in media containing different concentrations of the iron-55 radionuclide. Mitochondria were purified (17), and the total radioactive iron content was evaluated. In the wild-type, the mitochondrial iron content varied little with the different iron concentrations of the growth medium (Fig. 1D, wild-type values 1.8 to 2.6 pmol/μg of protein). By contrast, the ssc2-1 mutant strain accumulated iron within mitochondria in proportion to the iron content of the growth medium. When grown in 0.9, 1.8, or 5 μM 55Fe-containing medium, the mutant accumulated 9.5, 29.4, or 107.2 pmol/μg of mitochondrial protein, respectively (Fig. 1D). The rho⁰ strain did not exhibit comparable mitochondrial iron accumulation, and so the effect could not be ascribed to the absence of mtDNA. We wondered if the increased mitochondrial iron content in the mutant represented a primary problem or a consequence of the increased cellular iron uptake "spilling over" into the mitochondria. When the iron content of cellular fractions was analyzed, iron accumulation in the mutant was observed exclusively within the mitochondrial fraction. The post-mitochondrial supernatant, in fact, appeared moderately depleted of iron in the mutant compared with the wild-type strain (5.5 compared with 9.2 pmol/μg of protein). These results suggest that the increase in mitochondrial iron in the mutant was not a secondary effect resulting from increased cytosolic iron but rather a primary defect.

Fig. 1. Phenotypes of the ssc2-1 mutants. A, mtDNA inactivation in strain 341-5B (ssc2-1).
Strains 81rho⁰ (1. rho⁰) and 341-5B (2. ssc2-1) were crossed and zygotes were manipulated. Other haploid controls were 191-33C (3. ssc2-1), 61 (4. WT), and 61rho⁰ (5. rho⁰). Haploids and diploid clones arising from the cross were transferred to YE (ethanol) or YPD (glucose). Failure of the diploid clones to grow on ethanol plates is diagnostic of mtDNA inactivation in the parental strain, 341-5B (ssc2-1). B, growth characteristics of the mutants: temperature and iron sensitivity. Diploid x191-36 was sporulated, and spore clones carrying the mutant allele ssc2-1 (A, B) or the wild-type allele SSC2 (C, D) were examined for growth on YPD agar at different temperatures (30°C, 23°C, 37°C) or for growth on SD medium containing 1 mM ferrozine (Chelator, no added iron; Iron, 250 μM ferric ammonium added). The relative concentrations of the inocula spotted onto the plates are indicated by 1 (10³ cells/10 μl) and 1:10. The wild-type clone C appeared pigmented because of a genetic trait unlinked to SSC2. C, high affinity cellular iron uptake increased in the ssc2-1 mutants. The spore clones were grown to logarithmic phase in YPD, and iron uptake was assayed using 1 μM 55Fe radionuclide in 50 mM sodium citrate buffer, pH 6.5, as described. Data are the mean ± S.D. of triplicate measurements. D, mitochondrial iron content increased in the ssc2-1 mutants. Strains 61 (WT), 61rho⁰ (rho⁰), and 191-33C (ssc2-1) were grown in media with 0.9, 1.8, or 5 μM iron, and the mitochondrial iron content was assayed as described.

The accumulation of iron in the ssc2-1 mutant mitochondria was so great that it was visible by electron microscopy. The mitochondria were packed with electron-dense material in over 50% of the cells (Fig. 2A). The fact that the deposits indeed contained iron was confirmed by energy dispersive X-ray spectroscopy. The wild-type yeast strain contained no such iron deposits, and a congenic rho⁰ strain showed only rare deposits in less than 5% of cells, indicating that this appearance was specific to the ssc2-1 mutant. Under higher magnification, the mitochondrial double membrane could be seen (arrow m, Fig. 2B), and the iron deposits were evident within the mitochondrial matrix. The deposits were granular and discontinuous in appearance, as if separated by intramitochondrial cristae (Fig. 2, B and D). In some cells, the deposit-laden mitochondria were arrayed around the nucleus (Fig. 2D). We conclude that a loss of homeostatic control in the ssc2-1 mutant leads to the accumulation of vast quantities of iron as electron-dense bodies within the mitochondria. Some of the features described here for the ssc2-1 mutant have been reported for yeast with mutations in yfh1. Therefore, we compared the two mutant strains directly. Both were slow growing and exhibited frequent destabilization or inactivation of the mitochondrial genome (4,7). Both retained ferric reductase activity under conditions that repress activity in the wild-type. Both exhibited elevated levels of high affinity iron uptake (354 pmol/10⁶ cells/h for the ssc2-1 mutant and 464 for the yfh1 mutant, compared with 17 for the wild-type). Most striking was that both mutants exhibited increased mitochondrial iron content (107 pmol of iron/μg of protein for the ssc2-1 mutant and 47 for the yfh1 mutant, compared with 2.6 for the wild-type).
The increased iron within mitochondria in both strains occurred without an increase in cytosolic iron (2.2 pmol of iron/μg of protein for the ssc2-1 mutant and 1.2 for the yfh1 mutant, compared with 3.0 for the wild-type). Thus, the ssc2-1 and yfh1 mutants strongly resemble each other with respect to their mutant phenotypes. The similar phenotypes of ssc2-1 and yfh1 mutants suggested that the corresponding proteins might function together. We therefore considered that Ssc2p might function specifically in the import or folding of Yfh1p, analogous to the known effects of Ssc1p on import and folding of other mitochondrial preproteins. To test this hypothesis, mitochondria were isolated from the wild-type (WT) and ssc2-1 mutant (M) strains, and the import of Yfh1 preprotein was allowed to proceed, after which the unimported precursor was removed by digestion with trypsin. Two new fragments (i and m) acquired trypsin resistance, suggesting that the import of Yfh1 preprotein was followed by two processing cleavages (Fig. 3A). The Yfh1 preprotein (p) migrated at a molecular mass of ~29 kDa, although the predicted size was only 19.5 kDa, perhaps because of the acidic nature of the protein. The initial processing cleavage removed ~2 kDa from the amino terminus of the preprotein and generated an intermediate-size polypeptide (i) migrating at ~27 kDa. A subsequent cleavage removed ~4 kDa from the amino terminus of the intermediate form, generating a mature product (m) of ~23 kDa that was also trypsin-resistant (Fig. 3A). In the ssc2-1 mutant, by contrast, import of Yfh1 preprotein was efficient as judged by the appearance of the protease-resistant intermediate polypeptide (i), but the conversion to the mature form was impaired. After 5 min of incubation, the level of the mature form was decreased compared with the wild-type (Fig. 3A, m in lanes 2-5), whereas the level of the intermediate form of Yfh1 was increased compared with the wild-type (Fig. 3A, i in lanes 2-5). Import studies of the preYfh1-Protein A fusion similarly generated two protease-resistant polypeptide forms, differing from the precursor by ~2 and ~6 kDa (Fig. 3B). This experiment also demonstrated that the proteolytic processing steps must occur at the amino terminus of the Yfh1 preprotein, because the Yfh1 and Yfh1-Protein A precursors were processed identically. The level of the mature Yfh1-Protein A fusion protein (m) was again decreased in the ssc2-1 strain (M) compared with the wild-type (WT) (Fig. 3B, m in lanes 2-5). A reciprocal increase in the intermediate form was noted at the early (5 min) time point in the mutant, consistent with an inefficient second processing step (Fig. 3B, i in lanes 2-5).
Fig. 3. Yfh1 preprotein processing is impaired in ssc2-1 mitochondria. Import of urea-denatured precursors of Yfh1 (A), Yfh1-Protein A (B), or Put2 (C) was evaluated in mitochondria purified from wild-type (WT) or ssc2-1 (M) strains. Import reactions were allowed to proceed for 5 min (5′) or 15 min (15′) at 20°C. Where indicated, unimported precursor was digested by trypsin. p, i, and m signify the precursor, the intermediate, and the mature form, respectively. Lane 1 in each panel (Std) indicates 35% of the precursor used per import assay.
We also studied the import of prePut2 (20), the precursor of a mitochondrial matrix protein involved in proline utilization, and in this case, no difference in the appearance of protease-protected forms was observed in the ssc2-1 mutant compared with the wild-type (Fig. 3C). Consistent with our prePut2 control, earlier studies failed to demonstrate alterations in the import or processing of several other preproteins by mitochondria isolated from ssc2 mutant strains (1). These data suggest that the defect in preprotein processing in the ssc2-1 strain is specific for the Yfh1 preprotein.
DISCUSSION
We present the following model to explain these findings (Fig. 4). i) The primary defect in the ssc2-1 mutant leads to impaired maturation of Yfh1p (yellow in Fig. 4). ii) In the ssc2-1 mutant, iron uptake into the mitochondria is greatly increased, reducing cytoplasmic iron concentrations. The iron sensor-regulator, Aft1p, which ordinarily does not affect mitochondrial iron levels, responds to the decreased cytoplasmic iron by activating the cellular iron uptake system (blue in Fig. 4) (22). Thus, iron is continually fed from the medium to the cytoplasm to the mitochondria (red in Fig. 4). The iron accumulates as dense bodies in the mitochondria that are visible by electron microscopy. iii) Despite the excess iron, the activities of a number of mitochondrial iron proteins are decreased (e.g., in yfh1 mutants, respiratory chain complexes I, II, III, IV, and the iron-sulfur protein aconitase (5, 6)). In this model, Ssc2p is required for the generation of mature Yfh1p, thereby regulating iron usage and assembly of iron proteins within the mitochondria. A direct role for Ssc2p in these processes is also possible. We have shown that Ssc2p participates in the second processing cleavage of Yfh1 following an initial cleavage of the extreme amino-terminal signal sequence. To do this, Ssc2p might itself act as the processing protease. The association of proteolytic and chaperone activities in a single complex has been described for mitochondrial proteins such as Lon (23) and Afg3p and Rca1p (24). Alternatively, Ssc2p could mediate maturation of the Yfh1 preprotein indirectly via effects on folding or complex formation. The iron-sulfur protein of the cytochrome bc1 complex provides an example of an association between preprotein assembly and two-step proteolytic processing. The iron-sulfur preprotein is imported into the matrix, and the signal sequence is cleaved by the matrix processing peptidase. A second processing cleavage by the mitochondrial intermediate peptidase then occurs upon assembly of the mature protein into the bc1 complex of the mitochondrial inner membrane (25). In analogous fashion, Ssc2p might mediate processing and insertion of Yfh1p into a complex. However, a physical interaction between Yfh1p and Ssc2p has not yet been demonstrated, and assembly partners for Yfh1p are not known. Ssc2p function is necessary for normal iron homeostasis, and defects of Ssc2p are correlated with iron accumulation within the mitochondria. This may result from increased activity of mitochondrial iron importers or decreased activity of exporters. Another possibility is that diversion of iron into an inactive or inaccessible form induces increased iron import into mitochondria, causing the massive accumulations that we have observed.
The iron, like intermediates in some storage diseases (26), may accumulate in a metabolic dead end, causing deficiencies of iron proteins and iron-protein complexes (5, 6). Ssc2p, through its effects on the maturation and assembly of Yfh1 and other proteins, might regulate this iron accumulation process. Yfh1p is homologous to the human protein frataxin, which is defective in most cases of the neurodegenerative disease Friedreich's ataxia (4-9). Specialized Hsp70 proteins within different cellular compartments are also conserved between yeast and humans (e.g., BiP in the endoplasmic reticulum and mt-Hsp70 in the mitochondria (10)). Therefore, in humans, a specialized mitochondrial form of Hsp70, analogous to Ssc2p, is likely to be involved in the maturation of human frataxin. Our inability to identify such a homolog in the human sequence databases at this time may relate to the incomplete nature of these databases and the low abundance of the transcript. The human homolog of Ssc2p might be defective in forms of Friedreich's ataxia that are not explained by frataxin mutations (27) or in other neurodegenerative diseases with a mitochondrial basis.
Hedgehog Signaling Components Are Expressed in Choroidal Neovascularization in Laser-induced Retinal Lesion

Choroidal neovascularization is one of the major pathological changes in age-related macular degeneration, which causes devastating blindness in the elderly population. The molecular mechanism of choroidal neovascularization has been under extensive investigation, but is still an open question. We focused on sonic hedgehog signaling, which is implicated in angiogenesis in various organs. Laser-induced injuries to the mouse retina were made to cause choroidal neovascularization. We examined gene expression of sonic hedgehog, its receptors (patched1, smoothened, cell adhesion molecule down-regulated by oncogenes (Cdon) and biregional Cdon-binding protein (Boc)) and downstream transcription factors (Gli1-3) using real-time RT-PCR. At seven days after injury, mRNAs for Patched1 and Gli1 were upregulated in injured retinas but not in control retinas. Immunohistochemistry revealed that Patched1 and Gli1 proteins were localized to CD31-positive endothelial cells that cluster between the wounded retina and the pigment epithelium layer. Treatment with the hedgehog signaling inhibitor cyclopamine did not significantly decrease the size of the neovascularization areas, but the hedgehog agonist purmorphamine made the areas significantly larger than those in untreated retina. These results suggest that the hedgehog-signaling cascade may be a therapeutic target for age-related macular degeneration.

I. Introduction

Age-related macular degeneration (AMD) is a devastating disease that causes blindness in the elderly population, especially in developed countries. AMD is a complex multifactorial disease that involves environmental factors such as cigarette smoking and lifetime sunlight exposure, and genetic components such as factors in the complement system [6, 22, 28]. The multifactorial nature of AMD makes the development of a complete therapy almost impossible [6]. Choroidal neovascularization (CNV) is one of the major pathological changes in AMD and has therefore become a target of therapeutic strategies [6, 10]. Inappropriate angiogenesis causes CNV in the AMD retina. Indeed, several angiogenic factors have been implicated in the pathogenesis of CNV. Vascular endothelial growth factor (VEGF)-A, VEGF-B, Angiopoietin1 and Angiopoietin2 are representative angiogenic factors, and their pathogenic roles in AMD have been explored [5, 26]. Among these, the angiogenic effect of VEGF-A in CNV is well established, and VEGF-A is currently the best target of anti-AMD therapy; anti-VEGF-A monoclonal antibodies (bevacizumab, ranibizumab) effectively reduce CNV development in AMD patients [23, 44]. Compatible with these studies, blocking of VEGF receptor function is also effective in treating CNV [18, 25]. Although the anti-VEGF and anti-VEGF receptor therapies are successful in reducing CNV, their effects are clinically modest, and other VEGF-related drugs are also under investigation [46]. Different kinds of anti-angiogenic factor therapies have also been sought to provide more effective treatments. Notch signaling [1] and the Wnt pathway [47] exemplify potential targets for such alternative therapeutic developments.

Correspondence to: Akio Wanaka, M.D., Ph.D., Department of Anatomy and Neuroscience, Nara Medical University Faculty of Medicine, 840 Shijo-cho, Kashihara City, Nara 634-8521, Japan. E-mail: akiow@naramed-u.ac.jp
Sonic hedgehog (Shh) is a powerful angiogenic factor during development [33, 34, 38] and in cancer tissues [37]. The Shh pathway is also implicated in CNV, and because its inhibition reduces CNV [42], the Shh pathway is a candidate target of anti-AMD therapy. Interestingly, a recent genome-wide association study revealed a potential, albeit not significant, association of the Shh pathway components Gli2 and Gli3 with AMD [17]. Given the genome association results, we first examined whether Shh signaling components are upregulated in the mouse CNV model. We also investigated whether pharmacological stimulation or inhibition of Shh signaling affects CNV.

II. Materials and Methods

Animals

Adult male C57BL/6 mice (8-10 weeks old, Japan Charles River Laboratory, Yokohama, Japan) were housed in plastic cages under standard laboratory conditions (23±1°C, 55±5% humidity in a room with a 12-hr light-dark cycle) and had access to tap water and food ad libitum. The Animal Care Committee of Nara Medical University approved the protocols for this study in accordance with the policies established in the NIH Guide for the Care and Use of Laboratory Animals.

Laser-induced retinal lesions and drug delivery

Adult mice were anesthetized with an intraperitoneal injection of chloral hydrate (Aldrich, TX, USA). The pupils of all animals were dilated using topical 0.5% tropicamide and 0.5% phenylephrine (Mydrin-P, Santen Pharmaceuticals, Osaka, Japan). We placed a cover glass on the cornea and delivered four laser burns to the retina of one eye at a distance of one to two disc diameters from the optic disc using a krypton laser system (530.9 nm wavelength, MC-7000, NIDEK, Aichi, Japan). Laser settings were 100 mW power and 800 ms duration. We reproducibly generated laser burns 100 μm in diameter without any major damage to retinal arteries or veins. In each mouse, one eye was laser-treated and the other (control) was sham-operated (a cover glass was placed on the cornea but no laser burns were delivered). For mRNA quantification, mice were kept for five or seven days after treatment and then sacrificed by decapitation under deep anesthesia. The eyeballs were enucleated and the retinas were dissected out for real-time PCR analyses (see below). For double-labeling immunohistochemistry, mice were perfused with 4% paraformaldehyde in phosphate buffer at five or seven days after treatment. The retinas were dissected out and 20 μm retinal sections were cut on a cryostat. The sections were subjected to double-labeling immunohistochemistry (see below). For Shh signaling modification by pharmacological agents, mice were divided into three groups (n=5 for each group) after the laser irradiation. One group received daily intraperitoneal injections of 10 mg/kg cyclopamine (a sonic hedgehog antagonist; Cosmo Bio, Tokyo, Japan) and a second group received daily intraperitoneal injections of 15 mg/kg purmorphamine (a sonic hedgehog agonist; Santa Cruz Biotechnology, TX, USA). The third (control) group received daily injections of vehicle (10% dimethyl sulfoxide in distilled water) alone. At seven days after laser irradiation, all the mice were perfused with 4% paraformaldehyde in phosphate buffer and the eyeballs were enucleated. The retinas were subjected to flat-mount immunohistochemistry for CD31 (see below).

Real-time reverse transcriptase-polymerase chain reaction

Total RNA of the retina was extracted using TRIzol (Invitrogen, CA, USA).
The extracts were reverse-transcribed using random primers and a QuantiTect Reverse Transcription kit (QIAGEN, Tokyo, Japan), according to the manufacturer's instructions. Real-time RT-PCR was performed using a LightCycler Quick System 350S (Roche Diagnostics, Tokyo, Japan) with SYBR Green Realtime PCR Master Mix Plus (Toyobo, Osaka, Japan). The specific primers used in the present study are listed in Table 1.

Quantification of CNV area using CD31 immunohistochemistry

The isolated retina was subjected to CD31 immunohistochemistry; briefly, the retina was incubated overnight with anti-CD31 antibody (see above) and then treated with secondary Alexa-488-conjugated anti-hamster IgG antibody (Jackson ImmunoResearch; dilution 1:400). The immunolabeled retina was cut radially at four angles (0, 90, 180 and 270°) and then flat-mounted on a glass slide. The retina was observed under a laser-scanning confocal microscope (Fluoview 1000, Olympus). Because CD31 is expressed in endothelial cells, CD31 immunohistochemistry clearly revealed the choroidal neovascularization with proliferated endothelial cells (Fig. 4). Using ImageJ software (NIH, USA), we measured CD31-positive areas and subjected them to statistical analyses.

Statistical analysis

Graphical data are presented as the mean±SEM. Statistical analyses of the results of real-time RT-PCR were performed using the unpaired Student's t-test. Morphometric data of choroidal neovascularization were subjected to Bonferroni-Holm adjustment for multiple comparisons (a brief numerical sketch of this adjustment is given below). Differences were considered significant when the p value was <0.05.

III. Results and Discussion

We first examined whether our laser delivery system efficiently and accurately induced CNV in the retina by checking the time-course of expression of injury- and angiogenesis-related factors using real-time RT-PCR. As injury-related factors, we chose glial fibrillary acidic protein (GFAP; Müller glial marker), Cxcr4 (chemokine) and Hif1a (tissue hypoxia-related factor). Figure 1A indicates that all three factors were significantly upregulated at day 7 after laser treatment. GFAP [11] and Cxcr4 [29, 40] have been reported to be upregulated in CNV lesions in animal models, and Hif1a [41, 45] in both an animal model and human AMD. The present results confirm that our laser-induced CNV at 7 days after treatment is a valid model of this aspect of AMD. Angiogenic factors such as VEGF and angiopoietins are expressed in the AMD retina, and anti-VEGF treatment has successfully reduced CNV formation [9, 15, 23]. Figure 1B shows the expression patterns of mRNAs for eight angiogenesis-related factors; at 7 days after laser delivery, five of these mRNAs (VEGF-A, Flt1, angiopoietin 1, Tie1 and Tie2) were upregulated. These results demonstrate that our laser-induced CNV method is comparable to those in previous studies and should therefore be applicable to assessing the contribution of the Shh signaling pathway. We next examined temporal expression patterns of Shh signaling components in retinas with laser-induced CNVs. Hedgehog ligands include Shh, Indian hedgehog (Ihh) and Desert hedgehog (Dhh), all of which share common signaling components (i.e., receptors and intracellular signaling molecules) [8, 27], and there are two canonical receptors, patched (Ptch) and smoothened (Smo). In addition to these, cell adhesion molecule down-regulated by oncogenes (Cdon) and biregional Cdon-binding protein (Boc) are recently identified receptors for hedgehog [36].
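As noted under Statistical analysis above, the morphometric comparisons rely on the Bonferroni-Holm step-down adjustment. The following minimal sketch (Python; the raw p-values are hypothetical placeholders, not data from this study) illustrates how such an adjustment can be computed:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down adjustment for multiple comparisons."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        # The k-th smallest raw p-value is scaled by (m - k); keeping a
        # running maximum makes the adjusted values monotone.
        running_max = max(running_max, (m - rank) * p_values[idx])
        adjusted[idx] = min(1.0, running_max)
    reject = [p <= alpha for p in adjusted]
    return adjusted, reject

# Hypothetical raw p-values from three pairwise CNV-area comparisons.
print(holm_bonferroni([0.012, 0.030, 0.410]))
```

The step-down scheme is less conservative than a plain Bonferroni correction while still controlling the family-wise error rate, which is presumably why it was chosen for the morphometric data.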
Growth arrest specific gene 1 (Gas1), a transmembrane protein, modulates hedgehog signaling [21, 24, 31]. We also examined the expression patterns of the Gli family of intracellular signaling molecules. The Gli family consists of three members, Gli1, 2 and 3, which translocate into the nucleus upon activation of cell-surface receptors. Gli family members are zinc-finger proteins that function as transcription factors [19]. Figure 2 shows the temporal expression patterns of the above-mentioned hedgehog signaling components. Consistent with the GFAP, Cxcr4 and Hif1a expression patterns, those of the hedgehog signaling components tended to increase at 7 days after laser treatment. Among the factors, Ptch1 and Boc mRNAs were significantly upregulated in treated compared to sham-operated retinas, and Gli1 and Gli2 were also induced at 7 days after laser treatment. It should be noted that the mRNA levels of the hedgehog ligands Shh, Ihh and Dhh were comparable to their sham-operated control levels at 7 days after laser treatment. These results indicate that the laser-treated retinas with CNV lesions are capable of responding to hedgehog ligands, although the ligands themselves are not upregulated in the lesions. In the context of the development of CNV lesions and their progression, the cellular localization of the upregulated components of the hedgehog-signaling pathway is of particular interest. We therefore next examined the localization of signaling components by immunohistochemistry, with special reference to the endothelial cells forming CNV lesions. We confirmed that immunohistochemistry with non-specific IgGs of either Armenian hamster or rabbit did not produce any signals, and that there were no cross-reactions between primary Armenian hamster IgG and secondary anti-rabbit IgG or between primary rabbit IgG and secondary anti-Armenian hamster IgG (data not shown). Figure 3 shows laser-photocoagulated retinal sections that were stained with hematoxylin and eosin (A) and that were double-labeled with anti-CD31 antibody (a marker for endothelial cells) and with antibodies for Shh (B), Ptch1 (C) and Gli1 (D). CD31-positive cells formed a cluster just beneath the neural retina in the laser-induced lesion. Ptch1 immunoreactivity co-localized with the CD31-positive cell cluster (Fig. 3C). Gli1 immunoreactivity also localized to the CD31-positive cell cluster (Fig. 3D). In the control sham-operated retina, we observed neither Ptch1 nor Gli1 immunoreactivities (data not shown). Consistent with the real-time RT-PCR results, we found only a few Shh-immunoreactive cells, and these did not co-localize with CD31 immunoreactivity (Fig. 3B). We speculate that the Shh-positive cells could be fibroblasts, but their nature awaits further investigation. Ihh and Dhh proteins were not detected in the retinal lesions (data not shown). In the control sham-operated retina, we observed none of the hedgehog family proteins (data not shown). These results suggest that the endothelial cells forming CNV are capable of receiving hedgehog signals because they have upregulated the signal-transducing components. Figure 4A shows a representative flat-mount retina stained with anti-CD31 antibody at day 7 after laser delivery. In the control group, CD31-immunoreactive lesions were reproducibly formed in the retina and their size was comparable to those reported previously [1, 14].
We found that the size of neovascularization in the cyclopamine (Shh antagonist)-treated retina was not significantly different from that in the control retina (Fig. 4B), whereas purmorphamine (Shh agonist) treatment significantly increased the size of neovascularization (Fig. 4B). Sonic hedgehog and its family members are implicated in angiogenesis in various tissues and in various situations, including physiological and pathological conditions [12, 39]. Promotion of angiogenesis by Shh has proven effective in wound healing [2] and in preservation of cardiomyocytes in myocardial infarction [30]. Development and maintenance of cancer depend on newly formed blood vessels, and anti-angiogenesis therapies are among the important anti-cancer strategies. Indeed, Shh-targeting therapies are already being applied clinically [35]. As the primary pathogenic phenomenon of AMD is CNV, anti-angiogenesis therapies have been actively developed and brought to clinical use [3, 4, 20]. Shh signaling is thus of interest, and its involvement in CNV has been demonstrated: inhibition of Shh signaling by cyclopamine reduced the sizes of laser-induced CNV lesions [42]. Cyclopamine had little effect on the sizes of CNVs in the present study. The discrepancy may derive from differences in experimental conditions; Surace et al. evaluated CNV at 14 days after laser-induced retinal injury [42], while we focused on earlier time points (five or seven days after laser irradiation). Based on this assumption, we reproduced the 14-day cyclopamine treatment after laser-induced injury and found that cyclopamine reduced CNV sizes to significantly below those in the vehicle control (p=0.0167, n=8). Taking these results together, we consider that Shh may not be present at high enough levels to stimulate endothelial cells at relatively early stages after laser-induced injury, because the agonist purmorphamine significantly increased the sizes of CNVs (Fig. 4). We suggest that a possible therapeutic strategy for AMD, especially in its early stages, is to prevent Shh upregulation rather than to block Shh signaling. Shh signaling is known to involve crosstalk with other signaling pathways such as TGF-beta [7] and FGF [13], and these growth factors upregulate Shh expression [7, 13]. In this regard, blocking these growth factors may be important for the development of new therapies. Another critical point is what kind of cells produce Shh in the injured retina. Although we found no apparent upregulation of Shh protein in the injured retina at the early phase, upregulation of Shh by astrocytes in hypoxic conditions has been reported in the brain [16]. Since Müller glia are functionally analogous to brain astrocytes, inhibition of Müller glial activation is of interest for preventing Shh upregulation.

IV. Acknowledgments

This work was supported in part by the Japan Society for the Promotion of Science (Grant Numbers 26293039 and 15K14354 to AW), and by a research grant from the Takeda Science Foundation to AW. We wish to thank Dr. Ian Smith (Elite Scientific Editing, UK) for assistance in manuscript editing.

V. References
Multiscale Structural Characterization of Biocompatible Poly(trimethylene carbonate)

Poly(trimethylene carbonate) (PTMC) polymeric networks are biocompatible materials with potential biomedical applications. By controlling the chemical synthesis, their functional macroscopic properties can be tailored. In this regard, this work presents the coupling of two experimental techniques: DMA and Solid State NMR.

Introduction

Poly(trimethylene carbonate) (PTMC) is a biodegradable, amorphous and flexible polymer with potential biomedical applications because of its biocompatibility and biodegradability. In this regard, Dynamic Mechanical Analysis (DMA) has been extensively used to study the thermomechanical behavior of polymers through a macroscopic approach, and can be considered a primary characterization technique in polymer science.22,23 DMA measurements were thus undertaken in this work in order to study the evolution of the molecular mobility, namely the main α relaxation temperature Tα, the elastic modulus E′, and the mechanical crosslink density of PTMC networks with varying macromer molecular weight. Such characterizations were completed by time-domain NMR analyses.29-35 In particular, Double-Quantum (DQ) ¹H sequences have been successfully used on elastomeric-like polymer networks (i.e., natural and synthetic rubbers, and PDMS36,37) and, through careful data treatment, have allowed a fine study of network structure and dynamics, namely the polymers' molecular mobility, crosslink density νC, and chain defects concentration wdef,36-47 as well as the evolution of such network properties with temperature,48,49 chemical modification50-52 and thermal ageing.53,54 The common basis of these approaches is their potential ability to discriminate dynamical and structural effects, which allows semi-local structural features of networks to be retrieved from local dynamical measurements. This manuscript aims to demonstrate that for functional polymeric networks such as PTMC, it is of main importance to fully characterize and comprehend their intrinsic structure. Such an investigation provides the insight needed to chemically tailor their structure, allowing better control of specific macroscopic properties. This study was carried out by combining a macroscopic method with a molecular-scale technique (i.e., DMA and MQ ¹H NMR, respectively). This robust scientific approach has seldom been described in the literature and, specifically, it has not been carried out for PTMC networks. By studying these polymers at different macromer molecular weights, the influence of the inner network structure can be thoroughly and precisely described.
Macromer synthesis

To obtain three-armed, hydroxyl-terminated PTMC oligomers, ring opening polymerization reactions of TMC were performed at 130 °C under nitrogen atmosphere using TMP as initiator and Sn(Oct)2 as catalyst. By adjusting the monomer to initiator molar ratio, oligomers with different molecular weights Mn could be prepared. The targeted Mn were 3, 10, 17.5, 25 and 40 kg/mol. The polymerization reaction was performed for 3 days. The oligomers were subsequently dissolved in dichloromethane (2 mL/g oligomer) and functionalized with methacrylic anhydride (7.5 mol/mol oligomer) in the presence of triethylamine (7.5 mol/mol oligomer) and hydroquinone (0.1 wt% relative to the monomer). After 5 days, the methacrylate-functionalized oligomers (macromers, PTMC-tMA) were precipitated in cold ethanol and dried at 40 °C under vacuum for 1 week. The Mn of the obtained oligomers, the monomer conversion and the degree of functionalization were determined by ¹H-NMR as described previously.17 Briefly, the molecular weight was determined by comparing the area of the CH3 initiator peak at δ = 0.92 ppm with the area of the PTMC methylene peak at δ = 4.24 ppm. The conversion was calculated by comparing the TMC monomer peak at δ = 4.45 ppm with the area of the PTMC methylene peak at δ = 4.24 ppm. The degree of functionalization was determined by comparing the −C=CH2 ¹H signals at δ = 5.58 ppm and δ = 6.13 ppm of the methacrylate groups with the CH3 initiator peak at δ = 0.92 ppm.

Network preparation

To obtain crosslinked networks, the macromers were dissolved in chloroform. The chloroform solutions contained 20-40 wt% PTMC macromers. To these solutions, 5 wt% (relative to the macromers) of TPO-L photoinitiator was added. The solutions were cast in Teflon molds (50 × 25 mm) and the solvent was allowed to evaporate overnight. The macromers were then crosslinked at room temperature under nitrogen in a home-made crosslink box for 1 h at 395-405 nm at an intensity of 1 mW/cm². The obtained networks were subsequently postcured under visible light for 40 minutes at room temperature. As these networks have a functionality f = 3, the theoretical chemical crosslink density of such networks can be calculated as νchem = 1/(3 × Mn). The reaction mechanism is shown in Figure 1.

Figure 1: Reaction mechanism and chemical structure of a three-armed PTMC macromer prepared by the ring opening polymerization of TMC using TMP as initiator.16

Experimental Methods

Swelling characterization

The volume degree of swelling at equilibrium q and the gel content wswell were determined in triplicate at room temperature by swelling rectangular specimens (5 × 5 × 0.5 mm) in chloroform for 24 hours, which was enough time to reach solvent sorption equilibrium. The q and wswell values were calculated from Equations 1 and 2, respectively:

q = 1 + (ρP/ρS) × (mswollen − mdry)/mdry   (1)

wswell = (mdry/minitial) × 100%   (2)

where mswollen is the mass of the swollen networks, mdry the mass of the insoluble part of the networks after drying, minitial the initial mass of the networks, and ρP and ρS the densities of PTMC (1.31 g/cm³)16 and chloroform (1.48 g/cm³), respectively.

DSC measurements

The thermal properties of the obtained macromers and networks were determined by Differential Scanning Calorimetry (DSC) using a TA Instruments Q2000. Samples weighing 5-10 mg were heated from -60 °C to 100 °C at 10 °C/min and subsequently cooled to -60 °C.

Elastomer networks, and more generally entangled polymer melts, are characterized by the presence of topological restrictions due to both crosslinks and entanglements, which restrict reorientational motions of chain segments and introduce local anisotropy along chains. At high temperatures relative to Tg, intrachain motions are very fast compared to NMR time scales, and the effect of the local anisotropy can be expressed as a separate factor contributing to the relaxation signal.30 In this regime the NMR relaxation function can be expressed according to Equation 3:30,33

M(t)/M(0) = exp(−t/T2) × ⟨cos(ΔRt)⟩   (3)
where T2 is the spin-spin or transverse relaxation time and ⟨cos(ΔRt)⟩ is a term that comprises the non-zero residual dipolar interaction due to local anisotropic chain segment motions. Brackets denote the ensemble average over all polymer chains.

It follows that in entangled or crosslinked polymers, the overall transverse relaxation function has a generally complex, non-exponential form, with a so-called "pseudo-solid" behavior.30 It is the residual dipolar interaction factor ⟨cos(ΔRt)⟩ which contains the structural information, as the local anisotropy of chain segment motions is related to the network structure. One difficulty is that both the exp(−t/T2) and ⟨cos(ΔRt)⟩ terms often have comparable relaxation rates. Special techniques have thus been developed to discriminate those terms, i.e., to isolate the structural information.

DQ ¹H measurements were performed using a Bruker Avance III 400 MHz NMR spectrometer equipped with a 5 mm ¹H static probe. Samples were finely cut to fit in the rotor and tested at different temperatures above their Tα measured by DMA. The DQ ¹H experiments were based on the Baum and Pines/Saalwächter pulse sequence, which yields two components as a function of the DQ evolution time 2τDQ: the DQ build-up IDQ and the reference decay Iref, exemplified in Figure 2.

In the vicinity of Tg, crosslinked and entangled polymers undergo complex dynamics with widely spread relaxation time distributions, corresponding to motions at different scales. In this regime, structural and dynamical contributions may not be expressed as two distinct factors, as illustrated in Equation 3, and the separation of both effects is not straightforward. In order to determine at which temperature the DQ NMR signals, and thus the structural effect, become independent of temperature, PTMC 3k networks were analyzed from Tα + 50 °C to Tα + 100 °C every 10 °C. This range of temperatures was chosen so as to have networks with theoretical elastomeric behavior (i.e., temperatures at least 50 °C above the glass transition temperature Tg (DSC) or Tα (DMA)55). This is the reason why the overall amplitude of the normalized signal decreases drastically as temperature decreases, as observed in Figure 3.

In the temperature-independent regime, the normalized DQ signal originating from the network structure alone must reach the theoretical relative value of 0.5 in the long τDQ limit.52,56 To achieve this, it is necessary to eliminate the contribution of "defects", i.e., non-elastic chains (pendant and free chains). Although the full magnetization contains contributions from both elastic chains and defects, a heuristic manner of identifying and subtracting the defects contribution Idef is to examine the long-time tail of the Iref − IDQ signal,43,52 which is shown in Figure 2. Idef was determined by fitting a double exponential on the long-time domain of the magnetization signal. The percentage of defects wdef was obtained by extrapolating this contribution to τDQ = 0, as also shown in Figure 2. The normalized DQ signal excluding the non-elastic chain contributions was then calculated from Equation 5:

InDQ(τDQ) = IDQ/(IDQ + Iref − Idef)   (5)

Hence, in this work all of the PTMC networks were studied at Tα + 90 °C, with the InDQ signals being computed using Equation 5. This ensured that the samples were tested at the same molecular mobility state. The residual dipolar coupling constant Dres is related to the crosslink density through the factors k and Dstat (Equation 6); in this work the values for k and Dstat were not obtained quantitatively. As these factors should be identical in all samples of the series, measuring Dres values allows a quantitative comparison between the different networks. To obtain the Dres value for each polymer characterized at Tα + 90 °C, the InDQ signals were fitted by the function detailed in Equation 7, up to InDQ = 0.48:

InDQ(τDQ) = 0.5 × (1 − exp[−(Dres × τDQ)^n])   (7)

where n is an exponent varying between 1 and 2. The closer n is to the value of 2, the more homogeneous the network is.
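To make the Equation 7 fitting procedure concrete, here is a minimal sketch in Python with NumPy/SciPy; the build-up data are synthetic placeholders, not measured PTMC signals:

```python
import numpy as np
from scipy.optimize import curve_fit

def indq_model(tau_dq, d_res, n):
    """Normalized DQ build-up of Equation 7: 0.5*(1 - exp(-(D_res*tau)^n))."""
    return 0.5 * (1.0 - np.exp(-(d_res * tau_dq) ** n))

# Synthetic build-up curve (tau_DQ in ms, I_nDQ dimensionless).
rng = np.random.default_rng(0)
tau = np.linspace(0.05, 8.0, 40)
i_ndq = indq_model(tau, 0.6, 1.6) + rng.normal(0.0, 0.005, tau.size)

# Fit only the rising part of the curve, up to I_nDQ = 0.48 as in the text,
# constraining n to the stated range of 1 to 2.
mask = i_ndq <= 0.48
popt, _ = curve_fit(indq_model, tau[mask], i_ndq[mask], p0=(0.5, 1.5),
                    bounds=((0.0, 1.0), (np.inf, 2.0)))
print(f"D_res = {popt[0]:.3f} ms^-1, n = {popt[1]:.2f}")
```

Restricting the fit to InDQ ≤ 0.48 mirrors the text: near the 0.5 plateau the model carries little information about Dres, so including that region would only add noise to the fit.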
Dynamic Mechanical Analysis

Dynamic Mechanical Analyses were performed on a TA Instruments Q800 DMA operating in tensile mode. PTMC films were cut into ISO 527-4b dogbone-shaped specimens with operational dimensions of 18 × 2 × 0.7 mm³. These samples were heated from -140 °C to 180 °C at a heating rate of 3 °C/min and analyzed with a frequency of 1 Hz, a pre-strain of 0.01%, and a strain of 0.1%. The main α relaxation temperature Tα was obtained from the half-height point of the E′ drop corresponding to this relaxation.57 Furthermore, the crosslink density νC-DMA for each PTMC network was obtained by conducting a DMA strain sweep measurement at Tα + 90 °C, so as to allow a precise comparison between these results and those obtained by NMR at the same molecular mobility state. The linear regime was determined by plotting the storage modulus E′ as a function of the strain, and the value of E′ was taken from the linear-regime plateau. Then, the crosslink density νC-DMA was calculated according to Equation 8:

νC-DMA = E′/(3 × φ × R × T)   (8)

where R is the ideal gas constant (8.314 J/(mol·K)), T = Tα + 90 °C, f is the network functionality (equal to 3 for the PTMC samples), and φ is a factor linked to the network model. For the affine model58 φ = 1, whereas for the phantom model59 φ = (f − 2)/f. In this work, two series of νC-DMA values were thus calculated, according to both the affine and phantom models.

Results and Discussion

Three-armed, methacrylate-functionalized PTMC oligomers (macromers) were prepared via the ring opening polymerization of TMC and subsequent functionalization with methacrylic anhydride. Figure 1 shows the chemical structure of a PTMC macromer. By adjusting the monomer to initiator ratio, oligomers with different molecular weights were obtained. Table 1 shows the obtained molecular weights as confirmed by high resolution ¹H-NMR in solution in DMSO. Subsequent functionalization yielded macromers with a degree of functionalization ≥ 86%. Characterization of the macromers by DSC indicated that the materials had a Tg between -21 °C and -14 °C, as shown in Figure SI.2 (Supporting Information). The macromer Tg slightly increases with increasing molecular weight, though within the experimental error range; the values are summarized in Table 1. The obtained PTMC networks were similarly characterized by DSC and by swelling in chloroform for 24 hours. Table 2 provides an overview of these properties. All networks exhibited a Tg of approximately -15 °C, as shown in Figure 5a. The gel contents wswell of the networks were found to be between 70 and 93%, with the lowest wswell for the network prepared from the PTMC 40k macromer, i.e., the one with the highest molecular weight. This result is similar to that previously reported by Schüller-Ravoo et al.16 It was also found that the degree of swelling q of the networks in chloroform (summarized in Table 2) increased with macromer molecular weight, as expected since networks with a lower crosslink density are able to swell more. It is seen in Figure 5b that the elastic modulus E′ below Tα diminishes when the macromer molecular weight increases. From Figure 5b, the α relaxation temperatures Tα (comparable to the Tg obtained by DSC) were obtained; they are listed in Table 2. Interestingly, contrary to the Tg values obtained by DSC, Tα diminishes with the macromer molecular weight, a trend that is in line with previously reported Tg values for PTMC networks.16
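As a brief numerical aside, the crosslink-density calculation of Equation 8 can be sketched as follows (Python; the plateau modulus and Tα values below are hypothetical placeholders, not the measured data of Table 2):

```python
# Minimal sketch of Equation 8, assuming the form nu_C = E' / (3 * phi * R * T).
R = 8.314          # ideal gas constant, J/(mol*K)
F = 3              # network functionality of the three-armed PTMC macromers

def crosslink_density(e_prime_pa, t_kelvin, model="affine"):
    """Return nu_C-DMA in mol/m^3; phi = 1 (affine) or (f-2)/f (phantom)."""
    phi = 1.0 if model == "affine" else (F - 2) / F
    return e_prime_pa / (3.0 * phi * R * t_kelvin)

e_prime = 1.2e6                  # rubbery-plateau storage modulus, Pa (hypothetical)
t = 273.15 + (-15.0 + 90.0)      # T_alpha + 90 C for a network with T_alpha = -15 C
for model in ("affine", "phantom"):
    nu = crosslink_density(e_prime, t, model)
    print(f"{model}: nu_C = {nu:.0f} mol/m^3")
```

Note that with f = 3 the phantom factor φ = 1/3, so the phantom-model densities come out three times larger than the affine ones, consistent with the comparison made in the text below.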
The difference between the results given by the two techniques can be attributed to the fact that DMA is more sensitive to the physical and chemical crosslink density, whereas DSC probes mostly the molecular mobility. The trend is thus expected: if the distance between crosslinking nodes increases, the polymer chains within the network are less constrained by the nodes, and their relaxation motions can be activated at lower temperatures. This would also lead to a more heterogeneous material. Indeed, Figure SI.3 (Supporting Information) shows that the α relaxation peak becomes broader as the PTMC molecular weight increases. In particular, the PTMC 40k network seems to possess either a very broad Tα or a shoulder towards higher temperatures corresponding to a second α relaxation; this network has an α relaxation temperature distribution ΔTα (taken at half-height) that is larger by ca. 5 °C than those of the other networks, as listed in Table 2. These phenomena could be due to a network structure in this PTMC sample that differs from that of the rest of the materials. This will be further discussed with the ¹H MQ NMR results, as they can provide additional insight into the network structure.

Furthermore, it is seen in Table 2 that the crosslink densities obtained from swelling and by DMA (νC-DMA) evolve similarly with the molar mass of the PTMC macromers. In detail, the νC-chem values are fairly similar to those of νC-DMA obtained with the affine model, with νC-DMA being slightly higher than νC-chem. This difference is a first indicator of the presence of not only chemical crosslinks but also chain entanglements behaving as physical nodes. Moreover, the νC-DMA values calculated from the phantom model are ca. three times larger than those of νC-chem, which might mean that the PTMC networks are better described by the affine model. Concerning the MQ ¹H NMR measurements, Figure 6a shows the InDQ signals after normalization with Equation 5, as a function of τDQ, for all studied PTMC networks at Tα + 90 °C. It is observed that the evolution of the InDQ build-up differs between samples, becoming steeper when the molar mass of the network decreases, i.e., when the crosslink density increases.

Furthermore, from the InDQ normalization calculations, the percentage of defects (i.e., chains not participating in the network) wdef was also obtained. These values are also listed in Table 2 and are similar to those obtained by swelling experiments (wswell, found in Table 2). This shows that the results given by NMR structural measurements are comparable to macroscopic physico-chemical tests and that they are quantitative.

The residual coupling constant Dres was subsequently obtained by fitting the InDQ signals with Equation 7. The obtained values are likewise found in Table 2. It must be recalled that Dres does not give the crosslink density directly but is proportional to it (see Equation 6). It can be seen that the Dres values decrease when the macromer molar mass increases, which means that the NMR crosslink density νC-NMR, which could be extracted from Dres according to Equation 6, would decrease accordingly with the molar mass of the PTMC macromers: when the length of the macromer increases, the amount of crosslinks in the material decreases. These results are again in good agreement with those observed by the macroscopic characterizations, i.e., the DMA analyses, and are coherent with expectation.

For the PTMC 40k network, the InDQ build-up was additionally fitted considering two chain relaxation distribution domains; all of these values are listed in Table 3 and are compared to those previously obtained for this network with a single chain relaxation distribution domain. It is seen in Figure 8 that a linear relationship exists between the crosslink density νC-DMA, for both the affine and phantom models, and Dres. This was expected from rubber elasticity theory60,61 and from the affine,58 phantom,59 and junction affine62-64 network models themselves, and is attributed to both NMR and DMA being capable of probing the chemical network as well as chain entanglements acting as physical crosslinks. According to rubber elasticity theory,60,61 if a pure chemically-crosslinked network with no entanglements is considered, the y-intercept of the Dres vs. νC plots should be zero, since the chemical crosslink density of a non-crosslinked polymer would be non-existent. In this study a non-zero y-intercept value was obtained. A similar result was found by Vieyres et al.52 for natural rubber, and was attributed to physical entanglements.

In the case of the studied PTMC materials, the existence of physical entanglements is highly probable. Indeed, the PTMC macromer molecular weight Mn is much larger than the PTMC repeating unit molecular weight M0 = 102 g/mol. For instance, for the PTMC 3k network the Mn/M0 ratio is about
30, and for the PTMC 40k network it is about 400. Thus, chain entanglements would be able to exist within the macromers before the formation of chemical crosslinks. More importantly, some trapped entanglements may be formed during the PTMC macromer photoreticulation, and the amount of such trapped entanglements may in fact increase as the chemical crosslink density increases, as reflected by the non-linear variation of both E′ and Dres at small νC-chem values in Figures SI.4 and SI.5 (Supporting Information). Moreover, Figure 8 shows that DQ NMR measurements are more sensitive to physical entanglements than DMA analyses. Altogether, these results show that the physical chain entanglements characterized herein by DQ ¹H NMR and DMA make an important contribution to the thermomechanical behavior of PTMC materials when compared to the chemical crosslink network. This is specifically true in the case of the PTMC 40k networks: their chemical crosslink density νC-chem is small, so the presence of physical entanglements reinforces the thermomechanical properties of this network.

Conclusion

This study has demonstrated the pertinence of a multiscale approach for characterizing a series of homogeneous crosslinked PTMC networks by DMA analyses and DQ ¹H Solid State NMR measurements. It is established herein that the results yielded by the two experimental methods complement each other well and allow a fine study of a polymer network structure and of its influence on the macroscopic thermomechanical properties. Specifically, it was confirmed that the studied PTMC networks, having different macromer molar masses, possess the same intrinsic chemical and physical network morphology; the only difference between them is their crosslink densities, which follow from the different macromer molar masses. Moreover, it was demonstrated by both DMA and NMR measurements that chain entanglements acting as physical crosslinks are also present in such PTMC networks, and that their concentration, as well as their effect on the thermomechanical behavior of these materials, is quantifiable. The results obtained in this work allow a better understanding of PTMC materials and the tailoring of their properties to enhance their potential use in biocompatible applications, achieved by combining DMA and solid state NMR through a robust scientific approach. This concept will be further extended to assess the influence of temperature, as well as of the chemical synthesis procedure, on the PTMC network structure and the functional macroscopic properties. Finally, this work has shown that such a study can be readily undertaken on similar elastomeric-like functional polymeric networks.

Figure 2: MQ ¹H NMR Iref, IDQ, Iref − IDQ, and Idef signals obtained for the PTMC 10k network at Tα + 90 °C. The contribution from defects Idef is emphasized in the Iref − IDQ signal as the fraction of the signal with a long relaxation time (dashed black curve). Extrapolation of the Idef signal to τDQ = 0 gives the fraction of defects wdef.

Figure 3: InDQ normalized signal obtained from Equation 4 as a function of τDQ for the PTMC 3k network at different temperatures.
The DQ signals for the PTMC 3k network characterized at temperatures equal to 70 °C, 80 °C, and 90 °C (Tα + 80 °C, Tα + 90 °C, and Tα + 100 °C, respectively) were then normalized using Equation 5. The computed InDQ signals were then plotted as a function of τDQ; these results are shown in Figure 4. It is seen in Figure 4 that the InDQ relative amplitude of 0.5 is obtained for the three considered temperatures. In detail, the InDQ signal at 70 °C does not exactly superpose with the InDQ signals obtained at 80 °C and 90 °C, which means that at 70 °C the InDQ normalization is still slightly dependent on the temperature. However, for temperatures equal to or higher than 80 °C for the PTMC 3k network (i.e., Tα + 90 °C and above), the InDQ normalized signals superpose well with each other (i.e., they are independent of temperature).

Figure 4: InDQ normalization signal obtained from Equation 5 as a function of τDQ for the PTMC 3k network at different temperatures.

Figure SI.1 (Supporting Information) shows an example of such a fit for the PTMC 10k network characterized at Tα + 90 °C.

Figure 5: (a) DSC thermograms highlighting the Tg and (b) storage modulus E′ obtained for all PTMC networks as a function of temperature.

Figure 5b displays the storage mechanical modulus E′ obtained by DMA for each PTMC network as a function of temperature. The loss modulus E″ is shown in Figure SI.3 (Supporting Information). Such plots allow a deeper understanding of the molecular mobility and thermomechanical properties of the PTMC networks at or below Tα.

Figure 6: (a) InDQ values as a function of τDQ for all PTMC networks obtained by MQ ¹H NMR measurements at Tα + 90 °C, and (b) the same plot with τDQ normalized by Dres.

Figure 7: Single and double relaxation distribution fits obtained from Equation 7 for PTMC 40k networks, superposed on the experimental InDQ vs. τDQ signal.
Figure 8: Dres values obtained by MQ ¹H NMR measurements as a function of the DMA νC-DMA crosslink densities, calculated from either the affine model or the phantom model, for all studied PTMC networks. The dashed lines are linear fits and serve as a guide for the eyes.

Figure SI.3: Loss modulus E″ obtained for all PTMC networks as a function of temperature.

Figure SI.4: Dres values obtained by MQ ¹H NMR measurements as a function of the chemical νC-chem crosslink densities for all studied PTMC networks. The dashed line is a linear fit and serves as a guide for the eyes.

Figure SI.5: E′ at Tα + 90 °C obtained by DMA as a function of the chemical νC-chem crosslink density. The dashed line is a linear fit and serves as a guide for the eyes.

Table 1: Physico-chemical properties of the obtained macromers.

Table 2: Physico-chemical, DSC, DMA, and MQ ¹H NMR results obtained for the studied PTMC networks. An analogous result was observed by Vieyres et al.52 for natural rubbers with different crosslink densities.

Table 3: Dres and n values obtained from Equation 7 for PTMC 40k networks considering a single or two chain relaxation distribution domains. It is observed in Table 3 that the Dres value for the first chain population domain (i.e., 0 ≤ InDQ ≤ 0.25) is slightly higher than that for the second domain (i.e., 0.25 ≤ InDQ ≤ 0.48); moreover, their average value is fairly equal to the Dres value previously obtained by fitting the whole PTMC 40k network InDQ vs. τDQ signal. The single and double relaxation distribution fits obtained from Equation 7 are shown in Figure 7, superposed on the experimental InDQ vs. τDQ plot for the PTMC 40k network.
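As a small numerical companion to the swelling characterization (Equations 1 and 2), the following sketch computes q and wswell (Python; the three masses are hypothetical placeholders, while the densities are those quoted in the text):

```python
# Minimal sketch of the swelling analysis of Equations 1 and 2.
RHO_P = 1.31  # PTMC density, g/cm^3
RHO_S = 1.48  # chloroform density, g/cm^3

def swelling_metrics(m_initial, m_swollen, m_dry):
    """Return the volume degree of swelling q and the gel content w_swell (%)."""
    q = 1.0 + (RHO_P / RHO_S) * (m_swollen - m_dry) / m_dry  # Equation 1
    w_swell = 100.0 * m_dry / m_initial                      # Equation 2
    return q, w_swell

# Hypothetical specimen masses in grams.
q, w = swelling_metrics(m_initial=0.100, m_swollen=0.520, m_dry=0.085)
print(f"q = {q:.2f}, gel content = {w:.0f}%")
```

Running the triplicate specimens through such a calculation and averaging reproduces the kind of q and wswell values reported in Table 2.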
Differential Regulation of Two Palmitoylation Sites in the Cytoplasmic Tail of the β1-Adrenergic Receptor*

S-Palmitoylation of G protein-coupled receptors (GPCRs) is a prevalent modification, contributing to the regulation of receptor function. Despite its importance, the palmitoylation status of the β1-adrenergic receptor, a GPCR critical for heart function, has never been determined. We report here that the β1-adrenergic receptor is palmitoylated on three cysteine residues at two sites in the C-terminal tail. One site (proximal) is adjacent to the seventh transmembrane domain and is a consensus site for GPCRs, and the other (distal) is downstream. These sites are modified in different cellular compartments, and the distal palmitoylation site contributes to efficient internalization of the receptor following agonist stimulation. Using a bioorthogonal palmitate reporter to quantify palmitoylation accurately, we found that the rates of palmitate turnover at each site are dramatically different. Although palmitoylation at the proximal site is remarkably stable, palmitoylation at the distal site is rapidly turned over. This is the first report documenting differential dynamics of palmitoylation sites in a GPCR. Our results have important implications for the function and regulation of the clinically important β1-adrenergic receptor.

The β1-adrenergic receptor (β1AR)3 is a G protein-coupled receptor (GPCR) critical to proper heart function and memory formation (1-3) and is the major cardiac target of β-blocker therapy for patients with chronic heart failure (4). Increasingly detailed molecular characterization of β1AR structure and signaling has led to novel treatment strategies, including rational drug design (5), targeting multiple components of the signaling complex (6, 7), and the potential to personalize treatment for patients based on genetic background (8). The covalent addition of palmitic acid to cytoplasmic cysteine residues via a thioester bond is a prevalent modification of GPCRs. Unlike other acyl modifications, S-palmitoylation is reversible, and many proteins have regulated cycles of palmitoylation and depalmitoylation (9). Unlike for soluble substrates, S-palmitoylation of integral membrane GPCRs is not required for membrane association; instead it changes their structure and contributes variably to receptor function (10, 11). The palmitoylation status of β1AR has not yet been determined. Recently the crystal structure of turkey β1AR was solved, providing insight into the organization of the transmembrane domains and the ligand binding pocket (12). This structure, however, lacked a large portion of the C-terminal tail and had a mutated putative palmitoylation site. Thus, unlike for the crystal structures of rhodopsin, which included palmitoylated cysteines (13, 14), no information on β1AR palmitoylation was gained. We recently discovered that efficient delivery of β1AR to the cell surface required expression of a Golgi-resident protein, golgin-160 (15). S-Palmitoylation is known to influence the trafficking and the specific subcellular localization of many substrates (16, 17). It has also been reported that golgin-160 interacts with GCP16 (18), which is a subunit of the Ras palmitoyltransferase (19). Thus, we reasoned that golgin-160 might influence the surface expression of β1AR by promoting proper palmitoylation at the Golgi. To test this hypothesis, we first investigated the palmitoylation of β1AR.
We report here that β1AR is S-palmitoylated on its C-terminal tail, proximal to the seventh transmembrane domain, at residues Cys392 and/or Cys393, which comprise a de facto consensus site for GPCR palmitoylation. Unexpectedly, we identified a second site of palmitoylation further downstream on the tail, at residue Cys414. These sites are modified in different subcellular compartments, and mutation of Cys414 but not Cys392 or Cys393 affects agonist-mediated internalization of β1AR. Interestingly, although the palmitate modification at the proximal site is quite stable, modification at the distal site is rapidly turned over. These results provide new information on β1AR modification and will inform future experiments that rely on an accurate structural understanding of this receptor.

Metabolic Labeling with Bioorthogonal Palmitate Reporter
Transiently transfected cells were labeled for 30 min with 0.5 ml of 50 μM alk-16 in DMEM with 10% FCS. To determine palmitate turnover, labeled cells were chased in normal growth medium for the indicated times. Cells were lysed in Brij lysis buffer (1% Brij-97, 150 mM NaCl, 50 mM triethanolamine, pH 7.4) with EDTA-free protease inhibitor mixture (Roche Applied Science) on ice. Cell lysates were collected following centrifugation at 16,000 × g for 20 min at 4°C to remove cell debris. Immunoprecipitations were performed using anti-FLAG-M2 affinity resin as above. The beads were resuspended in 40 μl of SDS buffer (4% SDS, 50 mM triethanolamine, pH 7.4, 150 mM NaCl) and 3 μl of freshly prepared click-chemistry reaction mixture (azide-rhodamine (100 μM, 10 mM stock solution in dimethyl sulfoxide), TCEP (1 mM, 50 mM freshly prepared stock solution in deionized water), TBTA (100 μM, 10 mM stock solution in dimethyl sulfoxide), and CuSO4·5H2O (1 mM, 50 mM freshly prepared stock solution in deionized water)). Reactions were incubated with shaking for 1 h at 30°C. The reactions were diluted with 5× SDS-PAGE buffer (250 mM Tris, pH 6.8, 10% SDS, 50% glycerol, 0.5% bromphenol blue) and 0.5% β-mercaptoethanol and incubated for 20 min at 37°C. Reactions were resolved by SDS-PAGE.

In-gel Fluorescence Imaging and Immunoblotting
After proteins were separated by SDS-PAGE, the gel was washed twice with deionized water for a total of 20 min. Palmitoylated β1AR was visualized by directly scanning the gel (excitation 532 nm, 580-nm filter, 30-nm band pass) on a Typhoon 9400 imager (GE Healthcare). No signal saturation was observed. Images were processed and analyzed using the ImageQuant TL software (GE Healthcare). Following in-gel fluorescence imaging, total β1AR was detected by either in-gel immunoblotting or traditional immunoblotting as described previously (15). For in-gel immunoblotting, gels were washed in PBS with 0.1% Tween 20 (PBST) for 10 min at room temperature. Gels were incubated with anti-FLAG-M2 antibody (Sigma) in PBST followed by IRDye800-conjugated anti-mouse IgG secondary antibody (Rockland, Gilbertsville, PA) in PBST. In all cases, immunoblot images were collected using the Odyssey infrared imaging system (Licor, Lincoln, NE). For data analysis, the alk-16 signal was normalized to the relative amount of total β1AR detected by immunoblotting. For Fig. 5B, the normalized signal for the wild-type protein was set to 100% for each experiment, and the normalized signals from all mutants were compared with this signal.
For Fig. 5C, the total signal for each β1AR construct in each experiment was set to 100%, and the contributions of mature and immature bands were calculated. For Fig. 6B, the 0 chase time point was set to 100%, and subsequent signals (normalized based on expression level) for each mutant from each experiment were compared. Variance was determined by one-way ANOVA, and p values were calculated with the Tukey test.

Measurement of β1AR Half-life-HEK293 cells grown in 35-mm dishes were transfected with 0.5 μg each of the indicated construct. 16 h later, the cells were starved for 15 min with DMEM lacking Met and Cys and labeled for 15 min in fresh Met/Cys-free DMEM with 0.2 mCi/ml Expre35S35S labeling mix (PerkinElmer Life Sciences). Medium was replaced with normal growth medium for the indicated times. Cells were lysed with detergent solution and immunoprecipitated as described above. SDS-polyacrylamide gels were dried, and radiolabeled proteins were detected by phosphorimaging (Molecular Imager FX, Bio-Rad). Bands were quantified using Quantity One software (Bio-Rad), and analysis was performed with Microsoft Excel.

Measurement of β1AR Surface Levels-HEK293 cells were grown on poly-L-lysine-coated wells in 12-well dishes and in 35-mm dishes for expression control. Each construct was transfected in triplicate in the 12-well dishes, plus one 35-mm dish, and one untransfected control to determine background binding. At 16 h after transfection, the cells in the 35-mm dishes were lysed as described (15) for analysis by Western blotting. Cells in the 12-well dishes were rinsed three times on ice with cold PBS and incubated with 10 nM 3H-labeled CGP-12177 (PerkinElmer Life Sciences) in KRH buffer (136 mM NaCl, 4.7 mM KCl, 1.25 mM MgSO4, 1.25 mM CaCl2, 20 mM HEPES, pH 7.4, 2 mg/ml BSA) for 3 h at 4°C. Cells were then rinsed three times on ice with cold PBS and lysed with detergent solution. Lysate was added to scintillation fluid and counted. For analysis, values were taken from samples where the maximum ligand binding was <30% of the input. Binding of ligand to nontransfected cells was <5% of that for cells expressing wild-type β1AR.

Internalization Assay-HEK293 cells grown on poly-L-lysine-coated glass coverslips transiently expressing the indicated FLAG-β1AR construct were fed anti-FLAG-M2 antibody at 1 μg/ml and treated with or without 10 μM isoproterenol (Iso; Sigma) at 37°C for the times indicated. Following treatment, cells were washed with PBS and were either left untreated, or surface antibody was removed by an acid wash (0.5 M NaCl, 0.5% HOAc, pH 1) for 1 min at room temperature. Cells were washed with PBS, fixed, and permeabilized as above. Fixed cells were probed with rabbit anti-β1AR antibody (Santa Cruz Biotechnology, Santa Cruz, CA) with 1% bovine serum albumin (BSA), followed by incubation with Alexa Fluor 488 anti-mouse and Texas Red anti-sheep IgGs. All fields were selected for similar expression levels and expression profile in the anti-β1AR field before viewing in the anti-FLAG field. For each experiment, images were taken on the same day at the same shutter speed, and all manipulations of image intensity were applied consistently to all images. Average pixel intensity of internalized antibody was determined using ImageJ software (National Institutes of Health). Variance was determined by one-way ANOVA, and p values were calculated with the Tukey test.
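The quantification steps described above reduce to simple array arithmetic plus standard statistics, so they are straightforward to script. The sketch below is a minimal illustration of that workflow under stated assumptions, not code or data from this study: all intensity and time-course values are invented placeholders, and the construct names are used only as labels. It normalizes the alk-16 fluorescence to the immunoblot signal, expresses each construct as a percentage of wild type, runs a one-way ANOVA followed by Tukey's test, and fits a first-order decay to pulse-chase data.

```python
# Minimal sketch of the normalization, statistics, and decay fitting
# described above. All values are invented placeholders, not measurements.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Per-replicate in-gel fluorescence (alk-16) and immunoblot (total receptor)
# intensities for each construct, from hypothetical independent experiments.
raw = {
    "WT":          {"alk16": [9.1, 8.7, 9.5, 8.9, 9.3], "blot": [1.00, 0.92, 1.08, 0.97, 1.03]},
    "C392S/C393S": {"alk16": [4.1, 3.6, 4.4, 3.9, 4.2], "blot": [1.01, 0.95, 1.05, 0.99, 1.02]},
    "C414S":       {"alk16": [6.2, 6.6, 5.5, 6.0, 6.4], "blot": [0.94, 1.06, 1.00, 0.98, 1.02]},
}

# Normalize palmitoylation signal to expression level, then express each
# replicate as a percentage of the mean normalized wild-type signal.
norm = {k: np.array(v["alk16"]) / np.array(v["blot"]) for k, v in raw.items()}
wt_mean = norm["WT"].mean()
percent = {k: 100.0 * v / wt_mean for k, v in norm.items()}

# One-way ANOVA across constructs, then Tukey's HSD for pairwise p values.
names = list(percent)
f_stat, p_anova = stats.f_oneway(*(percent[n] for n in names))
values = np.concatenate([percent[n] for n in names])
labels = np.repeat(names, [len(percent[n]) for n in names])
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
print(pairwise_tukeyhsd(values, labels).summary())

# Pulse-chase style analysis (hypothetical): percent signal remaining vs.
# chase time, fit to a single-exponential decay to estimate a half-life.
t = np.array([0.0, 15.0, 30.0, 60.0, 90.0])           # chase time, min
remaining = np.array([100.0, 62.0, 38.0, 15.0, 6.0])   # invented values
decay = lambda t, k: 100.0 * np.exp(-k * t)
(k,), _ = curve_fit(decay, t, remaining, p0=[0.02])
print(f"fitted palmitate half-life = {np.log(2) / k:.1f} min")
```

The same pattern applies to the internalization data (per-cell ImageJ pixel intensities in place of gel intensities) and, via the decay fit, to the 35S half-life measurements, provided the signal loss is dominated by a single first-order process.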
RESULTS

β1AR Is Palmitoylated on Cysteines 392, 393, and 414-Many GPCRs are S-palmitoylated on their C-terminal tails, downstream of the seventh transmembrane domain (21). We used two programs, NBA-Palm and CSS-Palm 2.0, to predict palmitoylation sites on β1AR (22, 23). Both programs predicted palmitoylation on cysteines 392 and 393, which reside at this position and are highly conserved across species (Fig. 1A). This position is also analogous to the palmitoylation site of the closely related β2AR, which has a single modified cysteine (24). To examine experimentally the palmitoylation state of β1AR, we incubated HEK293 cells transiently expressing FLAG-tagged β1AR with [3H]palmitic acid. Parallel dishes were incubated with [35S]methionine/cysteine to monitor protein expression levels. Lysates were immunoprecipitated with an anti-FLAG-M2 antibody and examined by fluorography. As described previously (15), two major bands were observed for β1AR: a faster migrating immature band (~56 kDa) and a slower migrating mature band (~64 kDa), representing the O-glycosylated mature form of β1AR (Fig. 1B, left) (25). Radiolabeled palmitic acid was incorporated into β1AR (Fig. 1B, right, lane 1), predominantly in the mature band. To confirm that the labeled palmitate was incorporated via a thioester bond, labeled β1AR was incubated with 1 M hydroxylamine. In-gel hydroxylamine treatment resulted in the loss of 3H signal, compared with a parallel gel treated with 1 M Tris (data not shown). These data indicate that, as expected, β1AR is modified with palmitic acid via a thioester bond. We next sought to identify the specific residues modified by palmitic acid. We expressed a construct with Cys392 and Cys393 mutated to serines (β1AR C392S/C393S) and found that although incorporation of [3H]palmitic acid was reduced, it was not eliminated (Fig. 1B, lane 2), suggesting additional sites of palmitoylation. We therefore introduced mutations at each of the other cytoplasmic-facing cysteine residues (cysteines 261, 378, 414, 451, and 467) in combination with C392S/C393S. Only when Cys414 was mutated together with residues Cys392 and Cys393 was incorporation of [3H]palmitic acid abolished (Fig. 1B, lane 4, and data not shown). This site is highly, although not universally, conserved across species (Fig. 1A). Because of their relative positions on the C-terminal tail, we refer to residues 392 and 393 as the proximal palmitoylation site and amino acid 414 as the distal palmitoylation site. The triple cysteine mutant (C392S/C393S/C414S) is referred to as palmitoylation-null. Interestingly, the immature form of β1AR was labeled only when the proximal site cysteines were present (Fig. 1B), indicating that the proximal site is modified earlier in the secretory pathway than the distal site.

Mutation of the Palmitoylation Sites Does Not Destabilize β1AR or Affect Its Steady-state Localization-Mutation of the palmitoylation sites of several GPCRs leads to destabilization of the receptors, most likely due to misfolding. We found no significant difference in the half-lives or extent of maturation through the medial Golgi for any of the mutant proteins (Fig. 2). Because several GPCRs with mutated palmitoylation sites are trafficked inefficiently (26-29), we examined the steady-state distribution of β1AR palmitoylation mutants by indirect immunofluorescence microscopy using an antibody recognizing the N-terminal FLAG epitope.
All mutant proteins were expressed at the plasma membrane, similar to the wild-type protein (Fig. 3A). The internal juxtanuclear staining co-localized with TGN46, a marker of the trans-Golgi network (Fig. 3A and supplemental Fig. S1), most likely representing β1AR en route to the plasma membrane. None of the mutant proteins accumulated in the endoplasmic reticulum, which, together with the similar half-lives of the proteins, suggests that the mutant proteins were not misfolded. To quantify the surface levels of β1AR, we assayed the binding of a radiolabeled ligand to the surfaces of cells expressing each of our constructs. We found no substantial difference in surface levels in cells expressing any of the mutants, compared with wild type (Fig. 3B). Nearly all binding was due to expression of the transfected β1AR constructs because untransfected controls bound less than 5% of ligand, relative to the wild-type β1AR-expressing samples (data not shown). Taken together, these data indicate that mutation of the palmitoylated cysteines of β1AR does not disrupt the stability or steady-state distribution of the receptor. Thus, preventing palmitoylation of β1AR did not mimic the phenotype of reduced delivery to the cell surface observed in cells lacking golgin-160 (15). This observation, along with the finding that overexpression of golgin-160 promotes surface expression of palmitoylation-null β1AR similar to wild-type β1AR (data not shown), suggests that golgin-160 does not have a role in palmitoylation of β1AR.

Agonist-stimulated Internalization Is Impaired in Mutants Lacking a Distal Palmitoylation Site-To evaluate the effect of β1AR palmitoylation on receptor internalization following agonist stimulation, we measured the surface levels of β1AR by binding radiolabeled ligand following stimulation with 10 μM Iso or vehicle control. However, we saw no significant change in surface levels (data not shown), consistent with previously published reports that β1AR has low levels of internalization following agonist stimulation in HEK293 cells (e.g. Ref. 30). To detect the low level of receptor internalized in these cells in a highly sensitive assay, we fed anti-FLAG-M2 antibody to live cells in the absence or presence of 10 μM Iso and visualized internalized antibody after removing surface antibody with an acid wash. Without an acid wash, the signal was primarily at the cell surface (Fig. 4A). When acid-washed, however, only weak, punctate staining representing internalized receptor was observed, indicating very low levels of basal internalization. This signal increased significantly for cells treated with Iso, although it still represented only ~6% of total fluorescent labeling (Fig. 4A and data not shown). Additionally, agonist stimulation did not lead to a significant loss of fluorescence signal in the absence of an acid wash, consistent with the results of the radioactive ligand binding experiment. We therefore examined the agonist-stimulated internalization of β1AR by measuring the signal from internalized wild-type FLAG-β1AR or the indicated FLAG-β1AR mutants. We consistently observed that cells expressing FLAG-β1AR lacking the distal palmitoylation site internalized less antibody than when the distal site was intact (Fig. 4B).
Quantification of the signal intensity revealed that mutation of the distal site alone, or in combination with the proximal site, reduced internalization of β1AR by approximately half, whereas mutation of the proximal site alone caused no defect (Fig. 4C).

Use of Novel Palmitoylation Reporter to Quantify Extent of Palmitoylation Accurately-To investigate the level of labeling at each site and to characterize the dynamics of palmitoylation, we used a recently developed labeling method that could be quantified easily and accurately. Proteins labeled with [3H]palmitate must be detected by fluorography, and it is difficult to obtain an accurate quantitative signal on x-ray film due to the nonlinear exposure of silver grains by photons (31). We thus used bioorthogonal labeling and in-gel fluorescence for quantification (32). Transiently transfected HEK293 cells were incubated with medium containing the bioorthogonal palmitic acid reporter (alk-16) for 30 min. After lysis and immunoprecipitation, samples were reacted with azide-rhodamine (on-bead copper-catalyzed azide-alkyne cycloaddition). This labeling method has been shown to be more specific, sensitive, and efficient than radioactive methods (32, 33). Following SDS-PAGE, in-gel fluorescence analysis provided a linear, quantitative signal. The labeling pattern was similar to what we observed after [3H]palmitic acid labeling (compare Figs. 1B and 5A). To quantify the labeling, the fluorescent signals from five independent experiments were normalized to the overall β1AR expression level as determined by Western blotting of the same gel (Fig. 5B). When Cys392 and Cys393 were mutated to serines, the label was 44% ± 11% of that obtained for the wild-type protein. When Cys414 was mutated to serine, the label was 64% ± 21% of that for the wild-type protein. When both sites were mutated, a low level (7% ± 2%) of labeling was observed. It is possible that when the normally palmitoylated cysteines are absent, additional cysteines can be S-palmitoylated to a minor extent. To examine the usage of each of the cysteines in the proximal site, we expressed β1AR with Cys392 and Cys414 mutated to serine, and β1AR with Cys393 and Cys414 mutated to serine. No significant difference (45% ± 15% and 50% ± 24% of wild type, respectively) was observed compared with the distal site mutant with both proximal cysteines available. This most likely indicates that most β1AR molecules expressed in HEK293 cells are palmitoylated on only one of the two proximal cysteines. We also observed that the immature form of β1AR was labeled in all cases where a proximal site cysteine was available, but not when both were mutated to serine (Fig. 5A). We quantified the contribution of signal from mature and immature bands for each construct (Fig. 5C). Although labeling of the immature form accounted for 23% ± 8% of the wild-type signal, it contributed only 9% ± 4% of the signal in the C392S/C393S mutant, an amount that is similar to the labeling of the palmitoylation-null mutant described above. This indicates that the proximal site can be palmitoylated before the protein is processed in the medial Golgi, but the distal site is primarily or exclusively modified later in the secretory pathway.

Turnover at Distal Site Is Highly Dynamic-The previous experiments measured the steady-state levels of palmitoylation at each site of β1AR. The intensity of labeling is determined both by the extent of incorporation of alk-16 and by the rate of turnover.
Palmitoylation sites with high rates of turnover will have increased signal, due to replacement of nonlabeled palmitic acid with the labeled analog. To study the dynamics of S-palmitoylation at each site, we examined the rates of turnover of palmitate at the proximal and distal sites using pulse-chase labeling. Cells expressing wild-type β1AR, C392S/C393S, or C414S were labeled with alk-16 for 30 min followed by chase in medium lacking alk-16 for various times. Although the signal from the proximal site showed no reduction after 90 min of chase, the label incorporated at the distal site was rapidly turned over, with very little palmitoylated protein left at 15 min of chase (Fig. 6). Because the half-lives of the proteins were all much longer than the loss of signal at the distal site (Fig. 2), the loss of signal is most likely due to palmitic acid turnover and not protein degradation. Taken together, these data indicate that the proximal site is modified early in the secretory pathway and turns over slowly, whereas the distal site is modified after trafficking through the medial Golgi and has a high rate of turnover. The surprising difference in turnover at the proximal and distal sites makes comparison of steady-state palmitoylation at each site difficult because only newly synthesized β1AR appears to be palmitoylated at the proximal site, whereas a larger pool of mature β1AR is likely available for modification at the distal site.

DISCUSSION

β1AR Is S-Palmitoylated at Two Sites in Its Cytoplasmic Tail-We report here that β1AR is palmitoylated at the two cysteines residing on the C-terminal tail proximal to the membrane (Cys392 and Cys393) and further downstream at Cys414. By primary sequence, β1AR is most closely related to β2AR (52% amino acid identity), which has a single palmitoylated cysteine, equivalent to the proximal site of β1AR (24). Based on this homology and the lack of a known sequence requirement for palmitoyltransferases, it has been assumed that β1AR is S-palmitoylated only at the cysteines residing at this proximal site (12, 34, 35). Our findings underscore the necessity of determining experimentally all of the residues that are modified on GPCRs to provide a complete understanding of receptor regulation and function. Making conservative mutations to a protein of interest is also the most direct way to examine the contribution of palmitoylation to the function of that protein, because treatment with an inhibitor (such as 2-bromopalmitate) globally prevents palmitoylation and may indirectly impact the function of a protein of interest. The finding that β1AR has an additional palmitoylation site relative to β2AR is surprising given the similarities in ligand binding and tissue distribution. However, these receptors have distinct activities. Although β2AR localizes to caveolae in unstimulated cardiomyocytes and relocalizes following ligand binding, β1AR is distributed throughout the plasma membrane and does not relocalize after ligand binding (36). Similarly, within cardiomyocytes co-cultured with sympathetic ganglion neurons, β1- and β2AR localize to contact sites, but only β2AR relocalizes away from the contact sites following sympathetic ganglion neuron stimulation, revealing differences in the spatiotemporal regulation of the receptors (37).
The two receptors have unique binding partners (38) and form distinct signaling complexes through varied interactions with cAMP phosphodiesterases, which are differently regulated in response to agonist signaling (39). β1AR and β2AR also promote distinct downstream signaling events. β1AR couples only to Gαs, and excessive stimulation leads to apoptosis of cardiomyocytes. On the other hand, β2AR can couple to either Gαs or Gαi, and stimulation was found to protect cardiomyocytes against apoptosis (40). S-Palmitoylation has been shown for other proteins to regulate behavior such as protein localization (particularly regarding cholesterol-rich domains) and protein-protein interactions (11). It is possible that associated proteins regulate the function of these receptors by modulating the palmitoylation state at each site, allowing for a "fine-tuned" response. Some GPCRs are not S-palmitoylated, and the majority of those that are S-palmitoylated are modified only at the consensus site. The presence of an additional distal palmitoylation site on the tail of β1AR places it in a third group of GPCRs. The 5-hydroxytryptamine (5-HT) 4(a) and 5-HT7(a) receptors, the TPβ isoform of the thromboxane A2 receptor (TPβ), and the follicle-stimulating hormone (FSH) receptor have all recently been reported to have distal palmitoylation sites, in addition to palmitoylation at the proximal site (41-44). Functionally, there is no obvious connection among these receptors, which have varied tissue distributions and signaling pathways and are coupled to different G proteins. However, they all likely adopt a conformation consisting of five intracellular loops when fully S-palmitoylated (Fig. 7), and the distal site may regulate receptor internalization similarly for all of these receptors (see below).

Distal Palmitoylation Site Contributes to Internalization following Agonist Stimulation-Following agonist binding and second messenger transduction, many GPCRs are desensitized, turning off signaling. The common route of desensitization involves phosphorylation by the GPCR kinase family and/or the second messenger-regulated kinases PKA or PKC. Many receptors are then internalized and sequestered within the cell, destined for recycling to the surface or down-regulation (for review, see Ref. 45). We observed a low level of β1AR internalization following agonist stimulation. This is consistent with several previous studies of β1AR internalization in HEK293 cells (e.g. Ref. 30), although not all (e.g. Ref. 46). In cell culture studies more closely resembling physiological conditions, Iso treatment of rat cardiac myocytes was found to cause internalization and down-regulation of β1AR. Interference with the endocytosis machinery caused surface retention of β1AR and deficient downstream signaling through Akt (47). In light of cell type differences, trafficking events other than internalization may be used to regulate β1AR function spatially (e.g. relocalization within the membrane; see above). We consider the internalization defect we observed for distal site mutants to be an intriguing preliminary observation, which may reflect a more specific paradigm of regulated relocalization that may only be observed in a relevant culture system and may differ between cell types. The involvement of a distal site in efficient internalization is analogous to observations made for the other GPCRs with acylation at distal sites.
Mutation of the distal site reduced the rate of internalization of the FSH receptor (44), and although all three palmitoylation sites of TPβ contributed to ligand-induced internalization, only the distal site mutants were deficient in tonic internalization. By contrast, the proximal site alone was found to promote maximal coupling to Gαq (42). Studies also show a contribution of palmitoylation to internalization of 5-HT4(a). Unlike the β1AR, FSH, or TPβ receptors, mutation of the 5-HT4(a) distal sites did not inhibit internalization. However, when the proximal site was mutated, there was a pronounced increase in ligand-stimulated internalization that was lost when the distal sites were mutated in combination (48), indicating the need for an intact distal site for the hyperinternalization phenotype. The consequence of mutation of the distal site for 5-HT7(a) internalization was not reported (43). It is interesting to note that such a diverse group of GPCRs appears to have a similar use for a distal palmitoylation site, although the significance and universality of this feature are currently unclear. Characterization of additional GPCRs, as well as a better mechanistic understanding of distal site usage, may reveal a common route.

Differences in Regulation at Two Palmitoylation Sites Suggest Unique Functions-To measure the relative levels of S-palmitoylation and the dynamics of turnover accurately, we needed a quantitative measure of palmitoylation. Fluorography is required to detect tritium on x-ray films, and densitometry of these films is imprecise due to the inherent nonlinearity of exposure of the silver grains by photons (31). To quantify S-palmitoylation accurately, we used a newly developed method of bioorthogonal labeling and in-gel fluorescence. This method allowed us to quantify levels of incorporation of the palmitic acid analog accurately and to normalize the signal to overall β1AR expression levels by immunoblotting of the same gel. This method provided a linear fluorescent signal that could be easily quantified and was more practical than the acyl-biotin switch assay or mass spectrometry (49). We observed that the sum of the signals obtained from the proximal site mutant and the distal site mutant (43 and 61%, respectively) nearly equaled the signal from the wild-type protein. This further suggests that the sites are independently modified and attests to the accuracy of the labeling method. Additionally, we were able to use the bioorthogonal palmitate reporter in pulse-chase experiments to compare the rates of turnover of palmitic acid at each site. Unlike other acyl modifications, S-palmitoylation is reversible, and many proteins have regulated cycles of palmitoylation and depalmitoylation (9). We found palmitoylation of β1AR at the proximal site to be relatively stable. By contrast, palmitate at the distal site was rapidly turned over, with nearly all signal chased within 15 min. Because of the stable modification at the proximal site, it was not possible to calculate a half-life for the palmitoylation accurately with the chase times we used. However, because there was no loss of signal by 90 min, it is possible that the proximal site is modified once and only once during the life of the protein. Thus, S-palmitoylation of the proximal site could cause a stable structural modification, as opposed to that at the distal site, which is more dynamic and therefore may function as a "switch" (Fig. 7).
This is the first report of differential S-palmitoylation dynamics in a GPCR and predicts important functional consequences.

Consequences of β1AR Palmitoylation-We observed that S-palmitoylation at the two sites in newly synthesized β1AR likely occurs in different cellular compartments. The immature form of β1AR was palmitoylated only when the proximal site was intact. This indicates that the proximal site can be palmitoylated prior to trafficking through the Golgi. Because of the relatively long labeling time (compared with the rate of trafficking), we cannot currently determine whether the proximal site can also be palmitoylated in a later compartment or whether the mature signal represents β1ARs that were palmitoylated pre-Golgi or in the early Golgi and chased into the mature fraction. Because of the long half-life of the proximal modification, it does not appear likely that this signal comes from dynamic turnover of palmitic acid. This suggests that the two sites in β1AR may be modified by different protein palmitoyltransferases, allowing for greater flexibility of receptor activity and additional pathways of regulation. Due to the early acquisition of palmitate at the proximal site of β1AR and its low rate of turnover, this modification may play a structural role contributing to the proper folding of β1AR. The proximal palmitoylation site is directly downstream of the eighth helix (H8), a structural component known to contribute to G protein coupling of many GPCRs, including rhodopsin (50, 51) and β1AR (52). We observed that the proximal site is palmitoylated early in the secretory pathway. It is possible that palmitoylation contributes to the structural stability of H8, strongly anchoring it to the membrane and thus contributing to G protein coupling. Although proteins mutated at this site do not have a shorter half-life in our cell culture system, it is possible that there is a long-term consequence for an animal expressing β1AR mutated at this site, particularly under stress conditions. For example, a recent report describes photoreceptor cell degeneration in mice expressing a palmitoylation-null rhodopsin, but only when the mice were exposed to bright light. Under normal laboratory lighting conditions, no defect was noted (53). It is possible that β1AR mutated at the proximal site would similarly show a dramatic defect in experiments testing long-term stress conditions in animal models. We found that palmitoylation at the distal site of β1AR was highly dynamic, and mutation of this site impaired agonist-stimulated internalization. It is therefore possible that regulation of this site contributes to desensitization following signaling. Despite their sequence similarities, β1AR does not behave like β2AR following ligand binding. β1AR internalizes at a higher rate in cardiomyocytes and does not relocalize away from contact sites with sympathetic ganglion neurons or out of caveolar fractions following ligand binding (36, 37). Therefore, regulation of S-palmitoylation at the distal site by a specific subset of palmitoyltransferases could contribute to discrimination between these receptors. This would provide additional control of the signaling response. β1AR could be desensitized by increased phosphorylation, arrestin binding, and/or decreased coupling to G proteins.
It is possible that modulation of distal site palmitoylation provides a rapid mechanism for control of these phenomena, contributing to desensitization, down-regulation, or recycling. Prediction of palmitoylation sites is inexact, and palmitoylation can only be demonstrated conclusively by experiment. We have identified three cysteines at two sites on β1AR that are palmitoylated. S-Palmitoylation at the two sites is apparently regulated independently, because modification occurs in different compartments and is turned over at different rates. This study provides the information necessary to investigate further the contribution of each palmitoylation site to the function of β1AR in specific contexts, cell culture, and animal systems.
Alternative lengthening of telomeres: remodeling the telomere architecture

To escape from the normal limits on proliferative potential, cancer cells must employ a means to counteract the gradual telomere attrition that accompanies semi-conservative DNA replication. While the majority of human cancers do this by up-regulating telomerase enzyme activity, most of the remainder use a homologous recombination-mediated mechanism of telomere elongation known as alternative lengthening of telomeres (ALT). Many molecular details of the ALT pathway are unknown, and even less is known regarding the mechanisms by which this pathway is activated. Here, we review current findings about telomere structure in ALT cells, including DNA sequence, shelterin content, and heterochromatic state. We speculate that remodeling of the telomere architecture may contribute to the emergence and maintenance of the ALT phenotype.

INTRODUCTION

The vast majority of human cancers utilize a telomere maintenance mechanism to compensate for the gradual telomere shortening that accompanies cellular proliferation, and thereby obtain an unlimited replicative capacity. This can be accomplished by up-regulation of the ribonucleoprotein telomerase, which adds telomeric repeats onto the ends of linear chromosomes by reverse transcription of an RNA template molecule (Morin, 1989), or by the alternative lengthening of telomeres (ALT) pathway (Bryan et al., 1995). Immortalized human cell lines that utilize ALT exhibit numerous phenotypic characteristics that are consistent with the hypothesis that ALT involves homologous recombination (HR)-mediated DNA copying of a telomeric DNA template (Dunham et al., 2000). These characteristics include telomere length heterogeneity (Bryan et al., 1995, 1997), abundant extrachromosomal linear and circular telomeric DNA (Ogino et al., 1998; Tokutake et al., 1998; Cesare and Griffith, 2004; Wang et al., 2004; Henson et al., 2009; Nabetani and Ishikawa, 2009), an elevated frequency of telomere-sister chromatid exchange (T-SCE) events (Bechter et al., 2004; Londono-Vallejo et al., 2004), and the presence of a specific subclass of promyelocytic leukemia (PML) nuclear bodies containing telomeric DNA, shelterin proteins, and HR factors including Mre11-Rad50-Nbs1 (MRN), termed ALT-associated PML bodies (APBs; Yeager et al., 1999). The template for synthesis of new telomeric DNA can be the telomere of a non-homologous chromosome (Dunham et al., 2000), telomeric sequences elsewhere in the same telomere, or the telomere of a sister chromatid (Muntoni et al., 2009), and we speculate that extrachromosomal telomeric DNA may also act as the copy template (Henson et al., 2002). Telomere length maintenance is a characteristic of almost all cancers. Consequently, there is considerable interest in the use of telomere maintenance inhibitors as a broad-spectrum cancer therapy, and telomerase inhibitors have entered clinical trials (Ruden and Puri, 2012). However, telomerase inhibitors are unlikely to be effective for ALT tumors, and there is a possibility that telomerase-positive tumors will become resistant by activating ALT. This is supported by recent studies showing that telomerase extinction in mouse lymphomas results in emergence of ALT activity and other adaptive responses (Hu et al., 2012). Therefore, successful therapeutic targeting of telomere maintenance in cancers will encompass the development of ALT inhibitors.
This will be facilitated by insights into the molecular details of ALT and how this mechanism is activated. Furthermore, the possibility remains that ALT activity may also exist under normal physiological conditions, with evidence for the mechanism seen in the mouse zygote during the early cleavage steps post-fertilization (Liu et al., 2007), and most recently in the somatic cells of mice (Neumann et al., 2013). These data suggest that while some form of ALT activity may constitute a natural aspect of telomere biology, the mechanism may become dysregulated during cancer development. Here, we review aspects of normal telomere function and the current understanding of ALT, with particular emphasis upon the structural modifications that occur to the telomere during the activation and maintenance of ALT.

TELOMERE CAPPING FUNCTION

Telomeres contain several kilobases of the repetitive sequence 5′-TTAGGG-3′, which are predominantly double-stranded but terminate in a single-stranded 3′ overhang of the G-rich strand (Moyzis et al., 1988). This terminus can invade upstream duplex telomeric DNA and anneal to the complementary C-rich strand, resulting in the formation of a lariat structure known as a telomere loop (t-loop; Griffith et al., 1999). The t-loop is thought to protect the chromosome by sequestering the free end, thereby preventing it from being recognized as a break by the DNA damage response (DDR) proteins (de Lange, 2004). Telomeres may also form other higher order structures such as G-quadruplexes (Williamson, 1994). The chromosome end is further protected by telomere-binding proteins, especially a six-subunit protein complex (consisting of the proteins TRF1, TRF2, TIN2, POT1, RAP1, and TPP1) known as shelterin (Palm and de Lange, 2008). The protection afforded to chromosome ends by the telomeric nucleoprotein complex is referred to as telomere capping. Telomeres become uncapped when they undergo excessive shortening, presumably because they are no longer able to form a protective higher order structure and/or bind sufficient shelterin and other telomere-associated proteins, or when telomere-binding proteins such as TRF2 or POT1 are depleted experimentally (Denchi and de Lange, 2007). Removal of the entire shelterin complex has demonstrated the complexity of the capping function, which inhibits processing by multiple pathways, including ataxia telangiectasia mutated (ATM), ATM and Rad3-related (ATR), non-homologous end-joining (NHEJ), HR, and resection (Sfeir and de Lange, 2012). Loss of capping function can be recognized by co-localization of the telomere with various markers of the DDR, such as phosphorylated histone H2AX (γ-H2AX) and tumor suppressor p53-binding protein 1 (TP53BP1), which is referred to as a telomere dysfunction-induced focus (TIF; Takai et al., 2003), or by NHEJ of chromosome ends.

TELOMERE CAPPING FUNCTION IN ALT CELLS

Most ALT cells lack functional p53 and contain remarkably large numbers of TIFs (Cesare et al., 2009). Although ALT is associated with a relatively high level of genetic instability (Lovejoy et al., 2012), this is compatible with continued cell cycling, so it seems most likely that the TIFs represent an intermediate or transient state rather than fully uncapped telomeres. The TIFs in ALT cells can be partly suppressed by expression of exogenous TRF2, in a manner consistent with its ability to inhibit the function of the DDR protein ATM.
Many of these TIFs occur on telomeres that are not short, and they are not suppressed by lengthening the shortest telomeres with exogenous telomerase (Cesare et al., 2009). These observations suggest that ALT cells contain telomeres with abnormal capping function, and raise the question of whether these abnormalities are actually required for ALT activity. ALT cells exhibit a very substantial increase in T-SCEs, although the rate of HR elsewhere in the genome is not increased compared to telomerase-positive cells (Bechter et al., 2003, 2004; Londono-Vallejo et al., 2004). Thus there appears to be a specific defect in the ability of the telomere cap in ALT cells to suppress telomeric HR, and given the proposed involvement of HR intermediates in ALT-mediated copying of telomeric template DNA, it is reasonable to speculate that this cap defect is essential for ALT. This defect does not result from mutations in KU70, TRF2, POT1, or RAP1, which are all wild-type and present at normal levels in ALT cells (Lovejoy et al., 2012), so other explanations must be sought. TRF2 is of particular interest in the context of telomere capping in ALT because, in addition to its involvement in suppression of telomeric HR described previously, it has a role in the formation of t-loops and four-strand DNA junctions, and in the protection of these structures against enzymatic cleavage. This suggests that TRF2 may regulate telomeric recombination by both promoting t-loop formation and preventing resolution of telomeric recombination intermediates (Stansel et al., 2002; Fouche et al., 2006; Poulet et al., 2009). In addition, through its interaction with the helicases BLM and WRN, TRF2 is also involved in the unwinding of duplex telomeric DNA (Opresko et al., 2002) and potentially in the resolution of aberrant telomeric structures. The total level of TRF2 in ALT cells is not significantly different from other cells (Lovejoy et al., 2012), but the total quantity of telomeric DNA is significantly increased, and overexpression of TRF2 is able to suppress the formation of TIFs (Cesare et al., 2009). These observations suggest that the amount of TRF2 (and possibly of other shelterin components) relative to telomeric DNA is decreased in ALT cells, resulting in a partial functional deficiency that may contribute to the prevalence of intermediate-state TIFs in these cells, and an HR-permissive telomeric state.

ABNORMAL DNA SEQUENCES IN ALT TELOMERES

The proximal regions of normal human telomeres are composed of variant repeats such as TGAGGG, TCAGGG, and TTGGGG (Allshire et al., 1989; Baird et al., 1995). These regions are hypervariable and reflect a high underlying mutation rate, predominantly involving base substitutions and simple intra-allelic expansions and contractions. Characterization of these events is possible due to linkage disequilibrium spanning these proximal regions, which has resulted in the evolution of a limited number of haploid lineages with related telomere sequence maps (Baird et al., 1995). Variant repeats are usually restricted to the proximal 2 kb of the telomere (Allshire et al., 1989); however, several studies have indicated that ALT telomeres may contain an abundance of abnormal DNA sequences. Firstly, the C-circle assay produced higher yields from some ALT cell lines following inclusion of deoxycytidine triphosphate (dCTP), indicating the presence of sequences other than TTAGGG in telomeric C-circles (Henson et al., 2009).
In addition to variant repeats, telomeres of ALT cells are able to accommodate large amounts of non-telomeric sequences such as SV40 DNA (Fasching et al., 2005; Marciniak et al., 2005). We recently used a sequencing approach to show directly that variant repeats are dispersed throughout ALT telomeres (Conomos et al., 2012). We propose that this results from the HR-mediated telomere replication that has previously been shown by telomere mapping experiments to occur in the variant repeat-dense proximal regions of the telomere (Varley et al., 2002). This can be predicted to cause a breakdown of linkage disequilibrium, and ultimately the spreading of variant sequences throughout the telomere (Figure 1), which may have profound implications for the structure and function of the telomeric nucleoprotein. One of the consequences of these changes may be to "lock in" a recombinogenic telomeric state. Telomere exchange events have been shown to occur at low frequency in normal telomere biology (Baird et al., 1995). We hypothesize that these may even more rarely involve the proximal telomere region, but that the frequency increases after genetic changes such as loss of p53 suppressor function. When variant repeats spread from the proximal telomere region in this way, they may destabilize the telomere in favor of recombination, resulting in the incorporation of more variant repeats and permitting further recombination, thereby creating a positive feedback loop that results in sustained ALT activity. This hypothesis is supported by telomere mapping analysis of clonal cell populations derived from an ALT cell line compared to pre-crisis cells, in which all clones contained a mutant telomere map, presumably as a result of a single early inter-telomeric recombination event during clonal expansion following crisis (Varley et al., 2002). The reason that a change in DNA content may result in increased telomeric recombinogenicity may lie in its effects on protein binding.

ALTERED PROTEIN BINDING AT ALT TELOMERES

The shelterin complex binds specifically to the TTAGGG repeat sequence by means of the Myb domains in TRF1 and TRF2, which bind duplex telomeric repeats (Court et al., 2005; Hanaoka et al., 2005), and by sequence-specific binding of POT1 to single-stranded telomeric DNA (Loayza et al., 2004). Telomeres present a challenge to the DNA replication machinery, giving rise to replication-dependent defects, and they consequently resemble fragile sites. It is unclear what aspect of telomere structure confers this fragile nature; however, TRF1 is required to prevent these replication problems (Sfeir et al., 2009). Moreover, TRF2 and POT1 function independently to repress DNA damage signaling and DNA repair pathways (Denchi and de Lange, 2007). The specificity of shelterin binding to TTAGGG repeats means that any sequence perturbations in the telomere are likely to have a profound impact on shelterin binding. Variant repeat interspersion not only disrupts shelterin binding, but can also be predicted to result in sequence-specific binding of other proteins (Figure 2). This is exemplified by the localization of a group of nuclear receptors to the telomeres of ALT cells (Dejardin and Kingston, 2009; Conomos et al., 2012) because of their high binding affinity for the TCAGGG variant repeat (Conomos et al., 2012).
It has been demonstrated experimentally that telomeric incorporation of TCAGGG repeats directly resulted in recruitment of nuclear receptors, an increased number of TIFs, and the induction of some ALT phenotypic characteristics. It remains to be determined whether other sequences within ALT telomeres are similarly responsible for altered protein binding.

EPIGENETIC STATE OF ALT TELOMERIC CHROMATIN

It is possible that aspects of telomere architecture other than DNA sequence and shelterin binding also contribute to a state that is permissive for telomeric recombination and ALT activity. Telomeric chromatin carries histone modifications characteristic of transcriptional repression (reviewed in Blasco, 2007; Grewal and Jia, 2007). These include the heterochromatic marks H3K9me3 and H4K20me3.

FIGURE 2 | Remodeling of the telomere architecture during activation of the ALT mechanism. Non-canonical repeat sequences existing in the proximal region are distributed throughout the telomere array during ALT activation (see Figure 1). Hence, there is an insufficient concentration of shelterin binding sites for telomere capping, causing the telomere to elicit a DDR, whilst still being able to suppress chromosomal end-to-end fusions caused by NHEJ. DNA-binding proteins capable of binding specifically to these non-canonical sequences are consequently spread throughout the telomere, increasing its recombinogenicity. These proteins may also be capable of recruiting various chromatin remodeling complexes which can alter the telomere architecture further, in favor of telomeric recombination.

The results of several studies, predominantly in mice, have suggested that alterations in telomeric chromatin may cause some phenotypic characteristics of ALT, and may ultimately result in ALT activity. Manipulation of mouse telomeric and subtelomeric heterochromatin resulted in a substantially increased number of T-SCEs and in telomere elongation (Gonzalo et al., 2006; Benetti et al., 2007a,b). Furthermore, a number of studies have shown that loss of telomeric heterochromatic marks in mice leads to an increase in the number of APBs per cell (Garcia-Cao et al., 2004; Gonzalo et al., 2006; Benetti et al., 2007a,b, 2008). It has been speculated that telomeric chromatin can adopt a more open configuration, thus facilitating HR, ALT-mediated telomere elongation, and APB formation, although increased telomerase activity due to greater access of telomerase to the telomere cannot be excluded as the cause of these alterations. It therefore remains an interesting possibility that a "closed" telomeric and subtelomeric chromatin state is involved in repressing the ALT mechanism (Gonzalo et al., 2006; Benetti et al., 2007a,b). Decreased subtelomeric DNA methylation, resulting from mutant DNA methyltransferases, was reported to be associated with increased telomeric recombination frequency and telomere lengthening in mice (Gonzalo et al., 2006). Human telomerase-positive cell lines showed a negative correlation of subtelomeric DNA methylation with telomere length and telomere recombination, and treatment of telomerase-positive cell lines with demethylating drugs caused hypomethylation of subtelomeric repeats and increased telomere recombination (Vera et al., 2008). In human ALT cells, however, the relationship between subtelomeric DNA methylation and ALT activity is currently unclear.
One study found that the level of subtelomeric DNA methylation was heterogeneous in human ALT cells, but that on average it was similar to the level in the non-immortalized cells from which they were derived, and much less than in telomerase-positive cell lines (Ng et al., 2009). A caveat to this and other studies of subtelomeric DNA methylation is that only a small number of subtelomeric DNA regions, at various distances from the telomeres, were sampled. It has also been observed that ALT cells have more TERRA than normal cell strains or telomerase-positive cell lines, even when adjusted for the greatly increased telomeric DNA content of ALT cells (Ng et al., 2009). Another study found that there is genome-wide hypomethylation of Alu repeats and pericentromeric Sat2 DNA sequences in ALT-positive human tumor cells, and that although subtelomeric DNA hypomethylation was frequently present in these cells, it was not required for HR manifested as T-SCEs (Tilman et al., 2009).

THE ROLE OF CHROMATIN REMODELING FACTORS IN ALT

Circumstantial evidence for an altered epigenetic state in ALT telomeres was obtained by mass spectrometric analysis of the protein composition of telomeric chromatin (Dejardin and Kingston, 2009). Numerous chromatin remodeling proteins were found to be present at the telomeres of an ALT cell line but were not detected at the telomeres of the telomerase-positive control. Most notably, a class of nuclear receptors, which bind to variant repeats and are capable of initiating gene expression changes via recruitment of chromatin remodelers (Cui et al., 2011), were identified at ALT telomeres. It is possible that recruitment of such proteins may alter the heterochromatic state of ALT telomeres, contributing to the derepression of telomeric recombination. Recent studies of ALT tumors and immortalized cell lines found a strong correlation between telomere maintenance by ALT and loss of activity of the switch/sucrose non-fermentable (SWI/SNF) family ATP-dependent helicase (ATRX) or its binding partner death-associated protein 6 (DAXX; Heaphy et al., 2011; Bower et al., 2012; Lovejoy et al., 2012; Schwartzentruber et al., 2012). ATRX and DAXX form a chromatin remodeling complex that localizes to PML nuclear bodies (Xue et al., 2003), although the precise mechanism of chromatin remodeling remains elusive. Nevertheless, it has been shown that ATRX and DAXX act in concert to deliver the histone variant H3.3 to telomeres in a replication-independent manner (Goldberg et al., 2010; Law et al., 2010; Lewis et al., 2010). While the purpose of this H3.3 deposition at telomeres is not understood, it has been postulated that inhibition of ATRX/DAXX function may result in the loss of heterochromatic marks thought to suppress the inherently recombinogenic nature of repetitive telomeric DNA. Some ALT tumors, however, have mutations in both H3.3 and a member of the ATRX/DAXX complex (Schwartzentruber et al., 2012), which indicates that the loss of some function of ATRX/DAXX other than H3.3 deposition is selected for in ALT tumors. ATRX also appears to have a function in the repression of TERRA (Goldberg et al., 2010), which is consistent with the observation that elevated levels of TERRA exist in many ALT tumors and cell lines compared to those which have activated telomerase (Ng et al., 2009; Lovejoy et al., 2012; Sampl et al., 2012).
ATRX depletion in mouse embryonic stem cells has also been shown to reduce HP1α recruitment to telomeres and to cause an increase in telomere dysfunction, as demonstrated by localization of γ-H2AX at chromosome ends (Wong et al., 2010). Alternatively, loss of ATRX/DAXX function may act elsewhere in the genome and lead to altered gene expression, e.g. by binding to DNA structures such as G-quadruplexes (Law et al., 2010), thus indirectly effecting changes that promote ALT activity. Nonetheless, depletion of either ATRX or DAXX failed to activate ALT in SV40-transformed fibroblasts (Bower et al., 2012; Lovejoy et al., 2012), suggesting that loss of ATRX/DAXX function alone is not sufficient for ALT to be initiated.

CONCLUDING REMARKS

In light of the evidence reviewed above, we propose that remodeling of the telomeric architecture plays a key role in permitting sufficient levels of ALT activity to prevent telomere shortening in ALT cell lines and tumors. Changes in DNA content, in which variant repeat sequences that occur in the proximal region of the telomere become spread throughout the telomeres, are common. This presumably occurs initially via a rare, stochastic event in which the proximal region is used as a copy template by a telomere, but the presence of these sequences in a telomere contributes to an ALT-permissive state that results in their spread to other telomeres. Consequences of this altered DNA content include binding of additional proteins as well as a decreased relative shelterin content that may lead to secondary changes in telomeric heterochromatin. Furthermore, other alterations in telomeric chromatin marks may also contribute to the ALT-permissive state, including changes that may result from loss of ATRX/DAXX function, which is a common characteristic of the ALT mechanism.

ACKNOWLEDGMENTS

This work was supported by an Australian Postgraduate Award (to Dimitri Conomos), a Cancer Institute NSW Research Scholar Award (to Dimitri Conomos), a Cancer Council NSW Program Grant (to Roger R. Reddel) and an NHMRC project grant (#1009231) (to Roger R. Reddel and Hilda A. Pickett).

REFERENCES

Allshire, R. C., Dempster, M., and Hastie, N. D. (1989). Human telomeres contain at least three types of G-rich repeat distributed non-randomly. Nucleic Acids Res. 17, 4611-4627.
Ecosystem services of the Southern Ocean: trade-offs in decision-making

Ecosystem services are the benefits that mankind obtains from natural ecosystems. Here we identify the key services provided by the Southern Ocean. These include provisioning of fishery products, nutrient cycling, climate regulation and the maintenance of biodiversity, with associated cultural and aesthetic benefits. Potential catch limits for Antarctic krill (Euphausia superba Dana) alone are equivalent to 11% of current global marine fisheries landings. We also examine the extent to which decision-making within the Antarctic Treaty System (ATS) considers trade-offs between ecosystem services, using the management of the Antarctic krill fishery as a case study. Management of this fishery considers a three-way trade-off between fisheries performance, the status of the krill stock and that of predator populations. However, there is a paucity of information on how well these components represent other ecosystem services that might be degraded as a result of fishing. There is also a lack of information on how beneficiaries value these ecosystem services. A formal ecosystem assessment would help to address these knowledge gaps. It could also help to harmonize decision-making across the ATS and promote global recognition of Southern Ocean ecosystem services by providing a standard inventory of the relevant ecosystem services and their value to beneficiaries.

Introduction

"Ecosystem services" are the benefits that mankind obtains from natural ecosystems (Millennium Ecosystem Assessment 2005, Daily et al. 2009), including food, fresh water and the maintenance of an equable climate. Human activities put pressure on natural systems, and obtaining one benefit (such as fish for food) from an ecosystem may impact its ability to provide other benefits (such as supporting biodiversity). Organizations charged with managing human activities that impact ecosystems must therefore make trade-offs between the different benefits that ecosystems provide (McLeod & Leslie 2009, Link 2010). Recent "ecosystem assessments" have attempted to collate information on the character, status, distribution and value of ecosystem services at global or regional scales (IPBES 2012). The objective of collating such information is to clarify how ecosystems, the achievement of social and economic goals and the intrinsic value of nature are interconnected (Ash et al. 2010). Such assessments attempt to translate the complexity of nature into functions that can be more readily understood by decision-makers and non-specialists. Their authors suggest that this increases the transparency of trade-offs associated with decisions that may impact ecosystems (Carpenter et al. 2006, Beaumont et al. 2007, Fisher et al. 2009, UK NEA 2011). The continent of Antarctica and the surrounding Southern Ocean have, to date, been under-represented in global ecosystem assessments (e.g. Millennium Ecosystem Assessment 2005, UNEP 2010, 2012) and have not been the subject of any detailed regional assessment. This continent and ocean (which we subsequently refer to as the Antarctic) cover 9.7% of the Earth's surface area and play significant roles in the functioning of the Earth system (Lumpkin & Speer 2007). Their under-representation in ecosystem assessments potentially limits the information available for decision-making about regional and global activities that impact Antarctic ecosystems.
It could also lead to underestimates of the consequences of change in Antarctic ecosystems and of the global significance of the services they provide. The governance system for the Antarctic comprises a set of international agreements known as the Antarctic Treaty System (ATS). These treaties imply that the management of activities that impact ecosystems should consider the associated trade-offs. For example, the Protocol on Environmental Protection (1991) recognized "the intrinsic value of Antarctica, including its wilderness and aesthetic values and its value as an area for the conduct of scientific research, in particular research essential to understanding the global environment" (http://www.ats.aq/documents/recatt/Att006_e.pdf, accessed April 2013). Decisions on the conduct of human activities, including scientific research, must therefore consider potential impacts on environmental, aesthetic and wilderness values. The Convention on the Conservation of Antarctic Marine Living Resources underpins the management of fishing activities in the Southern Ocean. The Convention entered into force in 1982, and established the Commission for the Conservation of Antarctic Marine Living Resources as its decision-making body. The acronym 'CCAMLR' is often used to refer to both the Convention and the Commission. In this paper, we use 'CCAMLR' to refer to the Commission and 'the Convention' to refer to the legal instrument. The Convention aims to ensure the "rational use" of marine living resources subject to "principles of conservation" (Fig. 1), including the maintenance of harvested stocks and of ecological relationships between harvested stocks and other species, the recovery of previously depleted stocks, and the prevention of irreversible change (http://www.ccamlr.org/en/document/publications/convention-conservation-antarctic-marine-living-resources, accessed April 2013). Decisions that comply with the Convention must therefore consider the trade-offs between the current benefit of catches, the benefit of future catches from a healthy stock, and the more general benefits of a healthy ecosystem.

Fig. 1. The three-way trade-off used in krill fishery management and its relationship with conservation principles and ecosystem services. The goals of ecosystem-based management (McLeod et al. 2009) map directly onto the principles of conservation set out in the Convention (two left-hand columns). The three-way trade-off (yellow boxes) is influenced primarily by the principles of conservation, and it explicitly considers maintenance of provisioning services (fishery catch) in the present (fishery performance) and in the future (status of the krill stock). It also considers the status of predator populations. Ideally krill fishery management should consider fishery impacts on all ecosystem services. The krill stock and predator populations are indicators of ecosystem health, but whether they are useful indicators of other ecosystem services (red lines) is unknown.
We use the management of the main Southern Ocean fishery, which harvests Antarctic krill, Euphausia superba Dana, as a case study to explore the extent to which regional decision-making currently uses the type of information that formal ecosystem assessments generate. A full assessment of the status, trends and value of Southern Ocean ecosystem services is beyond the scope of this study, but we discuss the further work required and the potential benefits of conducting a formal ecosystem assessment. While we acknowledge that these objectives are also relevant to the terrestrial Antarctic, we limit our consideration to the marine ecosystem services of the Southern Ocean. For the purposes of this study, we define the Southern Ocean as the area covered by the Convention (http://www.ccamlr.org/en/organisation/convention-area, accessed April 2013). The northern boundary of this area approximates to the position of the Antarctic Polar Front, which is an important ecological boundary between neighbouring oceans. This front is where cold polar surface waters sink beneath temperate surface waters. It is generally located between c. 50°S and 60°S (Moore et al. 1997); the higher latitude being the northern boundary of all other ATS agreements (http://www.ats.aq/imagenes/info/antarctica_e.pdf, accessed April 2013). The following two sections provide brief introductions to ecosystem assessment and direct human interactions with the Southern Ocean ecosystem. Tables I and II present key information about Southern Ocean ecosystem services, and the remaining sections consider the existing use of information on ecosystem services in the management of the Antarctic krill fishery in the Scotia Sea and southern Drake Passage. This forms the basis for our discussion of how an ecosystem assessment might aid CCAMLR's decision-making processes.

Ecosystem assessment

Ecosystem assessments aim to comprehensively characterize the status and trends of relevant ecosystems, the services they provide, the drivers of change, and the potential consequences of such change (Carpenter et al. 2006, Ash et al. 2010). This includes identifying how ecosystem services affect human well-being, who benefits, and where these beneficiaries are located. It can include identifying the specific value of ecosystem services to their beneficiaries (TEEB 2010). An ecosystem assessment adds value to existing information by clarifying how ecosystems, human well-being and the intrinsic value of nature are interconnected (UK NEA 2011). The practical purpose of these assessments is to provide information that can help decision-makers to better understand how their decisions might change specific ecosystem services. This theoretically equips decision-makers to choose policies that sustain the appropriate suite of services (Ash et al. 2010). The Millennium Ecosystem Assessment (MA) was a landmark example of a global ecosystem assessment (Millennium Ecosystem Assessment 2005). Its objective was to "assess the consequences of ecosystem change for human well-being", and it established a framework which has formed the basis for a number of subsequent global and regional ecosystem assessments (e.g. CAFF 2010, UK NEA 2011, UNEP 2012). The MA recognized four categories of ecosystem services: provisioning (e.g. food, freshwater); regulating (e.g. climate regulation, water purification); cultural (e.g. aesthetic benefits and recreation); and supporting (e.g. nutrient cycling and primary production).
These categories notably exclude the roles played by polar icecaps in storing water that would otherwise increase sea levels, and by sea ice in holding back continental ice and increasing the Earth's albedo. They also exclude some naturally occurring resources such as minerals and hydrocarbons. The MA definition of ecosystem services includes benefits that are directly perceived and used by people (such as food and water) and those that are not (such as storm regulation by wetlands) (Costanza 2008). Direct-use benefits of ecosystem services may be consumptive (e.g. the consumption of wild caught fish), or non-consumptive (e.g. the enjoyment of those fish by scuba divers) (Saunders et al. 2010). Non-use benefits may be derived, for example, from the knowledge that a resource or service exists or is being maintained (Ledoux & Turner 2002, Saunders et al. 2010). Benefits may be enjoyed at the location of a particular ecosystem service (e.g. local subsistence fishing) or at a great distance from it (e.g. large-scale commercial fishing by far seas fleets with global markets). By definition, ecosystem services have value to their beneficiaries. Ecosystem assessments aim to identify the relative value of each ecosystem service based on various measures. In the case of consumptive use, it might be possible to measure value in economic terms, but it is also important to consider other types of value (Costanza et al. 1997). Various authors have described non-use benefits in terms of existence or presence value, altruistic value (knowledge of benefits being used by the current generation), and bequest value (knowledge of benefits being used by future generations) (Gilpin 2000, Chee et al. 2004, Saunders et al. 2010). The preservation of a resource or service for future use, or the avoidance of irreversible decisions until further information is available (Millennium Ecosystem Assessment 2005), is sometimes considered as a use value in itself (Saunders et al. 2010). However, it may be categorised separately as an unknown use, including a 'quasi-option value' where future use assumes the availability of increased knowledge or technology (Ledoux & Turner 2002, Chee et al. 2004).

Table I. Ecosystem services provided by the Southern Ocean, the ecosystem components and processes that support them, their spatial distribution, current beneficiaries, and their recognition within the Convention.

Provisioning services
- Food: Toothfish (Hanchet et al. 2008), sold mainly in Japanese and US markets (Catarci 2004). Krill (Euphausia superba), used mainly in meal and krill oil production and as the basis for various biochemical products; the highest krill abundances and the majority of krill fishing occur in the Scotia Sea and southern Drake Passage (CCAMLR Area 48) (Atkinson et al. 2004, CCAMLR 2012a); catch limits are also in place for CCAMLR subareas 58.4.1 and 58.4.2 (East Antarctica), but there is no current harvesting in this region (CCAMLR 2012a); krill products are sold primarily in US, Asian and European markets. Demersal fish, including mackerel icefish, are harvested from shallow island shelves, while lithoid crabs and rays are harvested from deeper waters; there are Conservation Measures for these species in subareas 48.3 and 58.5; the reported catch of species other than krill or toothfish was 2109 t in 2010/11 (CCAMLR 2012a). Supporting components and processes: primary production, with algae associated with sea ice in winter and phytoplankton blooms in summer (Atkinson et al. 2004); ocean current systems, transporting krill in the ACC across the Scotia Sea, e.g. from spawning sites along the western Antarctic Peninsula to South Georgia (Murphy et al. 2004), and transporting larvae and juveniles; spawning and nursery areas in appropriate habitats. Additional economic importance for governments which generate revenue from fishing licences, and for port states and others involved in processing or related industries. Recognized through the principles of conservation: i) prevention of decrease in the size of populations, to ensure stable recruitment; ii) maintenance of ecological relationships (associated and dependent species); iii) prevention of changes to the ecosystem which are not reversible.
- Genetic resources: genetic diversity in all marine species, including harvested resources; supported by all ecosystem components supporting biodiversity; all Southern Ocean. Required for the maintenance of Southern Ocean biodiversity; beneficiaries unknown, but potentially global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.
- Biochemicals, medicines, pharmaceuticals: bioprospecting for biological resources (plants, animals, microorganisms) that can be used for e.g. pharmaceutical or industrial products (Jabour-Green & Nicol 2003); supported by all ecosystem components supporting biodiversity; potentially all Southern Ocean. Beneficiaries unknown, but potentially global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.
- Fresh water: fresh water stored in icebergs and ice shelves; formation of ice shelves and iceberg calving; coastal areas, ice shelves. Not currently used as a resource, but proposed as a future source of freshwater for other regions; beneficiaries unknown; no recognition.

Regulating services
- Air quality regulation: uptake of chemicals and pollutants from the atmosphere; waste treatment, nutrient cycling and sequestration of CO2 (see below); all Southern Ocean, with storage of pollutants in marine sediments. Uptake of CO2 and other pollutants contributes to global air quality; beneficiaries global; no recognition.
- Climate regulation: Antarctic Bottom Water as a driver of global ocean circulation (Rintoul et al. 2001); formation of Antarctic Bottom Water over the continental shelf and in polynyas, and its transport northwards in the abyssal ocean (Orsi et al. 2001, Rintoul et al. 2001); sequestration of CO2 by the Southern Ocean through the solution of CO2 in seawater and the sinking of dead organic matter (Sabine et al. 2004, Le Quéré et al. 2007); all Southern Ocean. The global ocean circulation system drives weather patterns and regulates temperature in all parts of the world, and the Southern Ocean is one of the major global sinks of atmospheric CO2; increasing absorption may result in CO2 saturation limiting further uptake, as well as ocean acidification (Le Quéré et al. 2007); beneficiaries global; no recognition.
- Regulation of global sea level: floating ice shelves may hold back further melting of ice sheets on land; coastal areas, ice shelves. Loss of ice from the West Antarctic ice sheet is likely to contribute tens of cm to global sea level by 2100, within a projected total sea level rise of up to 1.4 m by 2100; beneficiaries global; no recognition.
- Waste treatment: decomposition of organic wastes by bacteria and microorganisms; all Southern Ocean.

Supporting services
- Photosynthesis & primary production: photosynthesis by phytoplankton, producing oxygen and taking up CO2; highly variable, but regions of high productivity include the Polar Frontal Zone and the Marginal Ice Zone (Treguer & Jacques 1992); assimilation of energy and nutrients by phytoplankton as a food source for higher trophic levels; summer phytoplankton blooms and growth of winter sea ice algae; upwelling of nutrient-rich waters. Maintains Southern Ocean food webs, including harvested species; 1.7 × 10⁹ t C yr⁻¹ is produced by the Southern Ocean south of 50°S (Priddle et al. 1998), equivalent to 3.5% of total world ocean productivity (Field et al. 1998); beneficiaries global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.
- Nutrient cycling: cycling of nutrients required for plant production, such as nitrogen, phosphorus and silicon (Knox 2007). Required for the maintenance of Southern Ocean biodiversity; beneficiaries global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.

Cultural services
- Spiritual & religious value: spiritual and symbolic value of Antarctica as a wilderness; all ecosystem components; all Southern Ocean. Unknown, but of significant symbolic value to many people who have or have not visited the region; beneficiaries unknown, but potentially global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.
- Tourism & recreation: Antarctic wildlife, particularly marine mammals and birds, and areas of particular aesthetic value; all Southern Ocean, particularly wildlife and scenery in coastal regions; the majority of tourist landings currently occur in the Antarctic Peninsula region, with smaller numbers visiting sub-Antarctic islands and continental sites in e.g. the Ross Sea region. 33 824 tourists visited Antarctica in the 2010/11 season (www.iaato.org), in comparison to 87 × 10⁶ visiting Florida in 2011 (www.visitflorida.com); Antarctica is 80 times the size of Florida, but has only 0.04% of the number of Florida's visitors. The current cost of tourism limits potential beneficiaries to a very small minority of the global population. IAATO members include 102 companies from 15 countries in South America, North America, Europe, Japan, Australia and New Zealand (www.iaato.org); additional economic importance for governments charging landing fees and for "Antarctic gateway" ports. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.
- Aesthetic value: wilderness areas, wildlife, undisturbed spaces; all ecosystem components; all Southern Ocean, particularly wildlife and scenery in coastal regions. Beneficiaries unknown, but potentially global. No specific recognition, although the principles of conservation require the maintenance of harvested, associated and dependent populations.

The objective of providing a comparison between ecosystem services has led ecosystem assessments to attempt to express these different values in standardized, and often monetary, terms.
The monetary value of an ecosystem service is arguably equivalent to the cost of replacing that service or finding another means of gaining similar benefits (Ledoux & Turner 2002). In some cases, particularly for those services which constitute the Earth's life support systems (e.g. climate regulation), this value is unlimited, because the service would be irreplaceable if lost completely. The Total Economic Value (TEV) framework is increasingly used to assess the value of ecosystem services by combining both monetary and non-monetary aspects of overall value (Ledoux & Turner 2002). Figure 2 sets out a simple TEV framework adapted from previous studies (Ledoux & Turner 2002, Chee et al. 2004, Saunders et al. 2010). The loss of 'natural capital' such as forests or fish stocks is not included in traditional economic accounting models such as Gross Domestic Product (GDP) (Dasgupta 2010). In some cases, the exploitation of natural resources might result in a positive growth in GDP, when the degradation or unsustainable use of those resources has in fact reduced natural capital. Valuation of ecosystem services provides information that might help to inform policy decisions that reduce such loss or degradation of natural capital (Costanza et al. 1997, Ledoux & Turner 2002).

Table II. Comparative value of the current catch, catch limits, and standing stock estimates of Antarctic krill at two geographic scales. Values in bold are the results of our calculations, which include values based on market values of krill products and equivalent percentages of global marine capture fishery production (by mass). Other values are the assumptions on which these results are based and were obtained from the stated sources.
a. Information supplied December 2011 by Aker Biomarine, a major krill fishing company.
b. Free on board (FOB) value = market value minus freight costs.
c. First sale value for krill oil does not include production or freight costs.
d. The "trigger level" is the term used in Conservation Measure 51-01 (CCAMLR 2012b) to describe the currently operational catch limit. This limit is in place until a procedure for subdivision of the overall catch limit into smaller management units has been established. We have referred to this as the "interim catch limit" in the main text.
e. The "precautionary catch limit" is the term used in Conservation Measures (CCAMLR 2012b, 2012c) to describe the total catch that could be permitted once spatial subdivision has been agreed.
f. Although there are catch limits for areas outside the Scotia Sea and southern Drake Passage, there were no reported catches for these areas in 2010/11.

Human uses of the Southern Ocean

The Southern Ocean is the only ocean that does not border a permanently inhabited landmass and, consequently, it was unknown and unexploited until the late 1700s. The economic importance of its ecological resources grew rapidly following Captain Cook's discovery of abundant fur seals at South Georgia in 1775. The Southern Ocean became the world's main source of seal products in the 1800s and whale products in the 1900s (Bonner 1984, Headland 1992). Populations of fur seals were reduced almost to extinction by the early 19th century. Attention then shifted to elephant seals and southern right whales. By the first half of the 20th century, these stocks had also declined and improved technology allowed offshore hunting of other baleen whales and sperm whales to become established.
Whaling ceased in the 1960s when it was no longer economically viable. Finfish and then Antarctic krill became the major focus for exploitation, which continues until the present day. Historical harvesting operations and catch sizes are mainly well documented (e.g. Laws 1953, Kock 1992, CCAMLR 2012a, Hill 2013a), although illegal, unregulated and unreported (IUU) fishing has occurred, most recently for high-value toothfish (Österblom & Bodin 2012). The extent and scale of this living resource extraction, and the fact that some whale and finfish stocks remain depleted (Bonner 1984, Kock 1992), demonstrate that the Southern Ocean is far from being the pristine wilderness it is sometimes characterized as. The hostile and remote nature of the Southern Ocean, and the lack of a permanent human population, have constrained direct use of its ecosystem services. Nevertheless, marine harvesting, science and tourism all directly impact the Antarctic environment (Clarke & Harris 2003, Tin et al. 2009). Scientific research and its associated logistic and support requirements have been a major focus of human activities in Antarctica and the Southern Ocean since the early 20th century. Up to 6000 scientific and support personnel are stationed in and around Antarctica at the peak of the summer season (Clarke & Harris 2003), and the Antarctic Treaty aims to maintain a high level of protection for the Antarctic environment as a scientific resource. The iconic wildlife, unique seascapes and coastlines, and relative isolation are all important factors in attracting recreational visitors. Antarctic tourism did not become established until the 1970s, and although it has expanded and diversified significantly during the last 40 years the number of visitors remains relatively low (around 35 000 each year; http://iaato.org/tourism-statistics, accessed April 2013).

Ecosystem services provided by the Southern Ocean

Using the four categories identified by the MA, we have identified and described the ecosystem services provided by the Southern Ocean and the ecosystem components corresponding to the provision of these services (Table I). Of the 24 ecosystem services examined by the MA we suggest that 12 have direct relevance in the Southern Ocean. Others are relevant only to terrestrial habitats or where there is a resident human population. Table I also lists the current beneficiaries of each identified ecosystem service and the spatial distribution of these services where applicable. Species that are particularly important to the provision of ecosystem services include harvested species such as Antarctic krill, toothfish, and other fish species; iconic or flagship species (Zacharias & Roff 2001) such as penguins, whales, seals and albatrosses; and phytoplankton, zooplankton, and macro-zooplankton species which play key roles in primary production and nutrient cycling. There are potential benefits from services which are as yet unknown in the Southern Ocean. Endemism is high in many marine taxa (Arntz et al. 1997), suggesting the potential for products that cannot be sourced elsewhere. A few genetic and biochemical materials have been patented for use in pharmaceutical or industrial products but the potential of such resources has yet to be fulfilled (Jabour-Green & Nicol 2003). Other services such as the provision of freshwater may not be viable or utilized at present, but remain potentially important for the future if there are changes to global supply and demand.
Ecosystem services provided by the Southern Ocean have few direct, local beneficiaries. The provisioning services support consumption elsewhere. For example, markets for toothfish and Antarctic krill products are predominantly in northern hemisphere nations in East Asia, North America, and Europe (Catarci 2004). Regulating and supporting services such as climate regulation, ocean circulation and nutrient cycling provide benefits to human populations globally. Marine ecosystem services may occur within well-defined locations (e.g. the spawning grounds of a particular fish species which support a provisioning service), or across much larger and spatially less distinct areas (e.g. sequestration of CO2 across the entire Southern Ocean). There is some potential for spatially explicit mapping of ecosystem services in the Southern Ocean, for example to illustrate the spatial dimension of catch value (UK NEA 2011). Information is also available on tourist landing sites (http://iaato.org/tourism-statistics) and ship traffic (Lynch et al. 2010). Mapping of regulating and supporting services may be more difficult to achieve, although datasets such as sea surface chlorophyll concentrations (e.g. http://oceancolor.gsfc.nasa.gov) may serve as useful proxies. Table II presents some simple estimates of the comparative value of the Antarctic krill stock as an illustration of the value of Southern Ocean ecosystem services. The Antarctic krill stock in the Scotia Sea and southern Drake Passage is managed with an interim catch limit but there is also a higher potential limit, known as the "precautionary catch limit" (CCAMLR 2012b). These two catch limits are respectively equivalent to 0.8% and 7.1% of global marine capture fisheries production in 2011 (FAO 2012), with first sale values of about US$ 824 × 10⁶ yr⁻¹ and US$ 7.4 × 10⁹ yr⁻¹. The comparable first sale value of the global fish catch is c. US$ 85 × 10⁹ yr⁻¹ (Pikitch et al. 2012). The current market for krill oil alone is c. US$ 82 × 10⁶ yr⁻¹ (Hill 2013a). These economic values should be considered alongside the value of other ecosystem services provided by the Antarctic krill stock. Pikitch et al. (2012) estimated that the contribution to predator production made by Antarctic krill is higher than that of any comparable species in the world's oceans. Other types of value based on the components of TEV (Fig. 2) might include option, existence, or bequest value. Investment in research and conservation gives some indication of the importance society currently attaches to ecological resources. The coverage of closed or protected areas which limit fishery access, for example at the South Orkney Islands (CCAMLR 2012c) and South Georgia (http://www.sgisland.gs/download/MPA/MPA%20Plan%20v1-1.01%20Feb%2027_12.pdf), is a non-monetary indication of conservation investment. However, the cost of research and protection is likely to be much lower than the hypothetical replacement value.

Existing use of information about ecosystem services in the ATS

Ecosystem assessments aim to characterize ecosystem services in terms of their identity and status. This status might be assessed relative to reference points defining desirable states. Ecosystem assessments also attempt to identify the beneficiaries of ecosystem services and to evaluate potential drivers and consequences of future ecosystem change. This is intended to facilitate decision-making based on trade-offs between ecosystem services.
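Before turning to the case study, the comparative percentages quoted above for the two krill catch limits can be reproduced with elementary arithmetic. The sketch below is illustrative only: the 2011 global marine capture production figure is our assumption, inferred from the quoted percentages rather than read directly from FAO (2012).

```python
# Reproduce the comparative-value percentages for the Antarctic krill stock
# quoted in the text (0.8% and 7.1% of global marine capture production).

GLOBAL_MARINE_CAPTURE_2011_T = 78.9e6  # tonnes; assumed value for FAO (2012)
INTERIM_LIMIT_T = 620_000              # "trigger level", Scotia Sea + southern Drake Passage
PRECAUTIONARY_LIMIT_T = 5.61e6         # potential limit once spatially subdivided

for name, limit_t in [("interim", INTERIM_LIMIT_T),
                      ("precautionary", PRECAUTIONARY_LIMIT_T)]:
    share_pct = 100.0 * limit_t / GLOBAL_MARINE_CAPTURE_2011_T
    print(f"{name} catch limit: {share_pct:.1f}% of 2011 global marine capture")
# Expected output: ~0.8% and ~7.1%, matching the values in the text.
```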
This section uses the Antarctic krill fishery in the Scotia Sea and southern Drake Passage as a case study to identify the extent to which management processes consider trade-offs and use the types of information that are collated in ecosystem assessments.

Overview of decision-making within CCAMLR

The instruments of the ATS govern existing and potential human activities in the Southern Ocean, although these instruments are legally binding only on signatory nations. The Protocol on Environmental Protection prohibits mineral exploitation south of 60°S and specifies the conduct of scientific, logistic and tourist operations. CCAMLR manages fishing activities in the wider Southern Ocean ecosystem. A total of 8% of this area falls under the jurisdiction of national governments (including the marine areas around Heard Island and McDonald Islands, Iles Kerguelen and Iles Crozet, the Prince Edward Islands, South Georgia and the South Sandwich Islands), some of which apply CCAMLR management measures. CCAMLR manages fishing and related activities by implementing regulations known as Conservation Measures. Commissioners are representatives of national governments. CCAMLR is advised by a Scientific Committee which, in turn, is advised by a number of scientific working groups. Decision-making at each of these levels is by consensus (Hill 2013a, fig. 14.4). The Antarctic krill fishery in the Scotia Sea and southern Drake Passage accounted for 91% by mass of the total Southern Ocean catch in the 2010/11 fishing season (CCAMLR 2012a). There are a number of reviews that describe the development of CCAMLR's management approach for this fishery (Constable et al. 2000, Hill 2013a), which we also summarize here. The Convention's principles of conservation (CCAMLR 1982) were an early articulation of the goals of Ecosystem Based Management. Ecosystem Based Management takes account of trade-offs between ecosystem services, and has the goals of maintaining the ecosystem productivity, health and resilience that underpins the provision of ecosystem services (McLeod & Leslie 2009). Management of Antarctic krill fisheries has generally focused on the three-way trade-off between the performance of the fishery, the status of the krill stock, and the status of selected krill predators. In this trade-off, the status of krill predators is used as a proxy for the health and resilience of the wider ecosystem (Fig. 1), although CCAMLR has also considered other impacts of the fishery, such as larval fish bycatch. The Antarctic krill harvest from the Scotia Sea and southern Drake Passage has been capped at 620 000 t yr⁻¹ since CCAMLR first began to regulate the fishery in 1991. This interim catch limit is less than the "precautionary catch limit" (currently 5.61 × 10⁶ t yr⁻¹) which has been updated a number of times in response to revised estimates of Antarctic krill biomass (e.g. Trathan et al. 1995, Hewitt et al. 2004a, SC-CAMLR 2010). The "precautionary catch limit" defines the potential maximum harvest when the management approach is sufficiently developed to allow the interim limit to be removed. CCAMLR's scientific working groups have used the three-way trade-off to develop and evaluate management approaches that address two key questions: what is the appropriate overall catch limit, and how should this be spatially distributed to minimize local depletion of krill and its predators?
The first question led to a set of decision rules which CCAMLR established in the early 1990s to identify the "precautionary catch limit" (SC-CAMLR 1994). These decision rules were formulated for use with simulation models and an estimate of the initial biomass of Antarctic krill, which is assumed to represent the biomass prior to any impacts of fishing. One rule allows for the simulated Antarctic krill stock to be depleted to 75% of its initial biomass. This compares with the maximum sustainable yield reference point which is widely used in other fisheries and allows depletion to around 60% (Smith et al. 2011). Thus the decision rule reserves a proportion of Antarctic krill production for its predators. Smith et al. (2011) suggested that depletion to 75% of initial biomass represents a reasonable trade-off between the benefits of harvesting and ecosystem health. Another rule constrains the risk of the simulated krill population falling to low levels likely to impact productivity. Work is ongoing within CCAMLR's scientific working groups to address the second question. These groups have identified ecologically-based spatial subdivisions of the fishery (Hewitt et al. 2004b) and assessed the potential consequences of different spatial fishing patterns (Plagányi & Butterworth 2012, Hill 2013b). The krill biomass in any area varies naturally over time (Brierley et al. 2002, Atkinson et al. 2004). The patterns of variability are also likely to change in response to climate change and fishing (Everson et al. 1992). It might therefore be appropriate to vary area-specific catch limits, or other activities, such as monitoring, in response to information about the state of the krill stock or the wider ecosystem (Constable 2002, Trathan & Agnew 2010, SC-CAMLR 2011). CCAMLR's scientific working groups aim to develop a "feedback management procedure" (SC-CAMLR 2011) to address these issues. They have considered the use of data from the fishery, small-scale krill surveys (e.g. Brierley et al. 2002) and krill predators (Constable 2002, Hill et al. 2010) to indicate the state of the ecosystem. However, further work is required on all aspects of the proposed procedure, including definition of its specific objectives. CCAMLR has not, to date, agreed a management approach that will prevent excessive localized depletion of the krill stock, and consequent impacts on krill predators, if catches increase beyond the interim catch limit. It therefore retains the interim limit and has recently established additional caps within the fishery's four subareas (CCAMLR 2012d). The Antarctic krill catch increased from 126 000 t in 2001/02 to 181 000 t in 2010/11. This expansion coincided with new developments in harvesting and processing technology and new markets for krill products (CCAMLR 2012a). Catches remain below 0.4% of the estimated available biomass in the Scotia Sea and southern Drake Passage (60.3 × 10⁶ t), while the interim catch limit is around 1% of this estimate. These values are low compared with most established fisheries elsewhere in the world (FAO 2012) and compared to the standard reference points used to evaluate sustainability (Worm et al. 2009), but some authors have questioned whether any krill fishing is sustainable (Jacquet et al. 2010). The decision rules represent a practical solution to the need to balance effects on different ecosystem components, which did not require an economic valuation of the relevant ecosystem services.
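The structure of these decision rules can be illustrated with a toy simulation. The sketch below is emphatically not CCAMLR's operating model: the stochastic surplus-production dynamics, the 20% depletion floor and the 10% risk level are illustrative assumptions of ours; only the 75% median-escapement rule is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_biomass(gamma, b0=1.0, years=20, runs=2000, r=0.4, sigma=0.3):
    """Toy stochastic surplus-production model for a krill-like stock.
    Biomass is in units of the initial biomass b0; gamma is the annual
    catch expressed as a fraction of b0. Returns final biomasses and the
    minimum biomass reached in each simulated trajectory."""
    b = np.full(runs, b0)
    minima = np.full(runs, b0)
    for _ in range(years):
        growth = r * b * (1.0 - b / b0) * np.exp(rng.normal(0.0, sigma, runs))
        b = np.maximum(b + growth - gamma * b0, 0.0)
        minima = np.minimum(minima, b)
    return b, minima

def satisfies_decision_rules(gamma, depletion_floor=0.2, risk=0.1,
                             escapement=0.75):
    """Check the two rules described in the text: median escapement of at
    least 75% of initial biomass, and a bounded probability of the stock
    falling to low levels (the 0.2 / 0.1 values are illustrative)."""
    final, minima = simulate_biomass(gamma)
    rule_escapement = np.median(final) >= escapement
    rule_depletion = np.mean(minima < depletion_floor) <= risk
    return rule_escapement and rule_depletion

# "Precautionary yield": the largest catch fraction satisfying both rules.
candidates = np.arange(0.0, 0.2, 0.005)
feasible = [g for g in candidates if satisfies_decision_rules(g)]
print(f"precautionary yield fraction ~ {max(feasible):.3f}" if feasible
      else "no feasible yield")
```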
However, CCAMLR has not yet identified an approach which balances these effects at the appropriate ecological scale, and so relies on interim management measures. The current challenges facing the managers of the krill fishery include increasing demand for krill products, public interest in other ecosystem services that krill may support, and the pressure of climate change. CCAMLR is attempting to meet these challenges through developing a "feedback management procedure".

Consideration of the character and status of ecosystem services

Antarctic krill is an important species in much of the Southern Ocean, where it is a major prey item for a diverse community of predators including fish, seabirds, marine mammals and cephalopods (Atkinson et al. 2009). Ecosystem components of interest to CCAMLR therefore include the Antarctic krill stock and its predators. CCAMLR and the wider research community are actively addressing questions about the status and trends of these components. CCAMLR's ecosystem monitoring programme (CEMP) was established in 1987. It aims to detect and record significant changes in critical components of the marine ecosystem and to distinguish between changes due to harvesting of commercial species and changes due to environmental variability, both physical and biological (Croxall 2006). CEMP monitors Antarctic krill and nine predator species (penguins, albatrosses and fur seals) representing the 'dependent and related populations' referred to in the Convention's principles of conservation (Fig. 1). The monitored ecosystem components are consistent with the three-way trade-off. The choice of monitored components therefore reinforces the assumption that krill predators are suitable indicators of the wider state of the ecosystem. The spatial scales and species for which the state of predator populations should be evaluated to inform krill fishery management remain to be defined. In 2000, CCAMLR conducted a multi-national large-scale synoptic survey to estimate the biomass of Antarctic krill in 2 × 10⁶ km² of the Scotia Sea and southern Drake Passage (Hewitt et al. 2004a). Some CCAMLR Members also monitor krill biomass in smaller areas. For example, the UK has estimated biomass in an area of at least 8000 km² to the north of South Georgia since 1981 and on a regular basis since 1996 (Brierley et al. 2002). A series of studies that integrate data from national science programmes has, independently of CCAMLR, produced recent estimates of circumpolar krill biomass and production, and an assessment of trends in krill abundance (Atkinson et al. 2004, 2009). Other studies, mainly associated with CEMP data, have assessed the status and trends of various krill predator populations (e.g. Forcada et al. 2005, Forcada & Trathan 2009). Turner et al.'s (2009) review of Antarctic climate change and environment collated much of the relevant information from published scientific studies, while Flores et al. (2012) provided a more krill-focused review. Many national science programmes and several international science coordination and implementation bodies have a Southern Ocean focus, addressing questions about the status and trends of ecosystems (e.g. Murphy et al. 2012).
These programmes have sometimes identified a particular ecosystem service, or the need to manage activities that affect ecosystem services, as the motivation or benefit of their research, but none has aimed to provide a comprehensive assessment of ecosystem status and trends. Definitions of the desirable states of ecosystem components and of the fishery (and therefore undesirable states to avoid) remain elusive (Hill 2013b). Two prominent recent studies have suggested tentative reference points for "forage" species, such as krill, that support diverse predators. Cury et al. (2011) analysed the relationship between prey availability and seabird breeding success. They recommended maintaining forage species above a third of the maximum biomass observed in long-term studies. Smith et al. (2011) used ecosystem models to assess the propagation of fishery impacts through the foodweb. They suggested maintaining forage species above 75% of their unexploited biomass. Each of these reference points carries caveats which will need to be addressed before implementation. The Cury et al. (2011) analysis was based on aggregated data from a range of ecosystems, including the Scotia Sea. Simplistic application of its recommendations to the krill fishery suggests that krill should be maintained at levels which were only observed in six of the 21 years analysed. This highlights the difficulties in practical application of universal reference points. More detailed consideration of the scale of predator foraging, the response of different predators, and the current state of the ecosystem will be necessary to develop recommendations for the krill fishery. The 75% reference point has already been used to suggest overall krill catch limits, but CCAMLR recognizes that by itself this does not provide adequate protection against localized depletion of krill and consequent impacts on predators (Hewitt et al. 2004b).

Consideration of beneficiaries of ecosystem services

The Preamble to the Antarctic Treaty (1959) recognized that peaceful use of the Antarctic and scientific cooperation are in the interests of "all mankind" (http://www.ats.aq/documents/ats/treaty_original.pdf, accessed April 2013). The Convention states a commitment to "rational use", which is often interpreted by CCAMLR Members as meaning sustainable fishing. However, the Convention does not explicitly define the term, meaning that it can be applied to the use of other ecosystem services (Watters et al. in press). Questions about the ability of ecosystem services to supply local needs are inappropriate for the Southern Ocean due to the geographical separation between these ecosystem services and their beneficiaries. This fact might partly explain why there has been little direct consideration within CCAMLR of the relationships between ecosystem services and human well-being. The fishing industry and its employees, suppliers and customers are direct beneficiaries of the Antarctic krill fishery. The beneficiaries of other ecosystem services that the fishery could impact are less clearly defined, although these could include tourists, scientists, and others who might benefit from the maintenance of predator populations and the wider ecosystem (see Table I). The consensus decision-making in CCAMLR provides a mechanism for accommodating multiple opinions representing multiple ways of valuing different ecosystem services.
However, consensus decision-making also has recognized drawbacks, including the disproportionate influence of minority opinions and a tendency to default to the status quo. For many Members there will be pressure to ensure that decisions are defensible in terms of both the Convention and public opinion. Nonetheless, in order to have an influence, opinions must be represented at national government level, and there is no automatic requirement to represent all beneficiaries, or to consider the relative value of different ecosystem services to different beneficiaries. Several conservation-focused non-governmental organisations (NGOs) also take an interest in krill fishery issues. Some of these have observer status within CCAMLR under the umbrella of the Antarctic and Southern Ocean Coalition. However, few interest groups or direct beneficiaries have stated their specific objectives for krill fishery management. Hill (2013a) noted that most groups identify "sustainability" as a key requirement but that few have provided a tangible definition of this term. Furthermore, some uses of this term are mutually contradictory. Nonetheless, Österblom & Bodin (2012) reported that 117 diverse organizations responded to the crisis of IUU harvesting of toothfish in the Southern Ocean with shared purpose. Their actions resulted in a substantial reduction in IUU fishing. This suggests that effective cooperation between diverse interest groups is possible. CCAMLR faces the challenge of making operational decisions on the basis of its conservation principles that are acceptable to a diverse community of beneficiaries and interest groups. At present there is little information about the values that these groups place on ecosystem services, or their specific objectives for the ecosystem or the fishery. The types of question posed by ecosystem assessments might help to identify these values and objectives.

Consideration of future change

The MA examined how ecosystems and the services they provide might change under plausible future scenarios. This is a key question being asked by many Antarctic-focused national science programmes and international coordinating bodies, including the Scientific Committee on Antarctic Research and the Integrating Climate and Ecosystem Dynamics in the Southern Ocean programme, in conjunction with ATS bodies including CCAMLR. The Intergovernmental Panel on Climate Change intends to increase its coverage of the status and prognosis for Southern Ocean ecosystems with a dedicated chapter in the forthcoming Fifth Assessment Report. The impetus for such activity has come mainly from the scientific community but the strong interaction between scientists and decision makers within CCAMLR ensures shared purpose. The paucity of historical data presents a particular challenge for defining baseline status and relative reference points for living components of the Southern Ocean ecosystem (Hill et al. 2006). Clarke & Harris (2003) identified key influences on the current status of Antarctic ecosystems, and suggested potential ecosystem responses to further change. Climate forcing is a major influence on the Southern Ocean ecosystem (Everson et al. 1992). This apparently results from complex interactions between natural climate processes and the anthropogenic effects of the ozone hole and greenhouse gases (Turner & Overland 2009).
Although limited human activity in the Southern Ocean constrains the potential direct influences (Trathan & Agnew 2010), potentially important drivers of change include: fishing; the ongoing consequences of historical exploitation of seals, whales and fish; pollution; disease; and invasive species (Clarke & Harris 2003, Trathan & Reid 2009). The Convention identifies the importance of the effects of fishing and associated activities "on the marine ecosystem and of the effects of environmental changes". CCAMLR's 2009 resolution 30/XXVIII (http://www.ccamlr.org/en/resolution-30/xxviii-2009, accessed April 2013) also recognized the importance of climate change, urging "increased consideration of climate change impacts in the Southern Ocean to better inform CCAMLR management decisions" and encouraging "an effective global response to address the challenge of climate change". These statements require ongoing consideration of how to secure the delivery of a limited set of ecosystem services while minimizing the impact on others. Further work remains necessary to quantify and forecast environmental change, to understand levels of uncertainty, and to assess potential impacts on ecosystem services, including their social and economic implications.

Discussion

The previous sections have provided a preliminary characterization of the Southern Ocean's ecosystem services, demonstrating their global importance in terms of climate regulation, food supply and the maintenance of biodiversity. The high estimated value of the Antarctic krill stock relative to global fishery landings provides an illustration of this global significance. We have also discussed the extent to which the functions of ecosystem assessment are already integrated into the management of the Antarctic krill fishery. This demonstrates that trade-offs between the benefits obtained from harvesting and the potential impacts on other ecosystem services are a major component of CCAMLR's decision-making process. The governance system for the Southern Ocean offers unique opportunities for managing the trade-offs between ecosystem services because its influence covers a whole ocean ecosystem. In 2009, CCAMLR designated a Marine Protected Area located entirely within the High Seas (CCAMLR 2012c). This global first is an important milestone in protecting ecosystems that are beyond national jurisdiction. Furthermore the Convention's principles of conservation effectively require management that accounts for such trade-offs. The developing management of the Antarctic krill fishery acknowledges these trade-offs, but simplifies them to a three-way consideration of fishery performance and the status of krill and predator populations. It is appropriate to assess whether this three-way trade-off fully represents CCAMLR's responsibilities under the Convention and the wider ATS. CCAMLR faces further challenges in developing its management approach, and in ensuring that this approach is co-ordinated with organizations responsible for other human activities at both the global and regional scale. The ecosystem services of the Southern Ocean are a global resource from which all of mankind indirectly benefits. Most beneficiaries of these ecosystem services never have any direct contact with the ecosystem.
There is, however, a small and relatively privileged group of direct beneficiaries that includes fishing and tourism companies, affluent tourists and consumers of the premium products (such as krill oil and Antarctic toothfish) derived from Antarctic fisheries. These activities also create employment and therefore another category of beneficiary. In their consideration of growing demand for marine fisheries products, Garcia & Rosenberg (2010) identified krill as a resource that could perhaps support further exploitation. Thus, the composition of the group of direct beneficiaries could change over time. The spatial disconnect between the ecosystem services and the majority of beneficiaries means that the role of interest groups as intermediaries between beneficiaries and managers is particularly pronounced. There is an important distinction between beneficiaries and interest groups. Beneficiaries include the whole human race benefiting from a wide range of ecosystem services, while interest groups often focus on a narrow set of benefits and objectives. The specific requirements of beneficiaries are not currently well understood, with the consequence that CCAMLR is yet to define operational objectives for the state of the krill stock, its predators and the wider ecosystem (Hill 2013a, 2013b). The Southern Ocean ecosystem is strongly influenced by human activities elsewhere (Clarke & Harris 2003), and is particularly vulnerable to the effects of climate change. Ecosystem managers arguably have a duty to maintain the regulatory and supporting services required for healthy ecosystems, and therefore to ensure appropriate interaction with the wider global community on such issues. Identifying objectives that are consistent with its responsibility and influence is an additional challenge faced by CCAMLR. Ecosystem assessment could help CCAMLR to meet these various challenges by providing a comprehensive characterization of the status, trends, and drivers of change to ecosystems and the services they provide for human well-being. A regional ecosystem assessment for the Southern Ocean would address its under-representation in existing global assessments. Such an assessment would also have benefits for CCAMLR and the wider ATS. Firstly, it would increase knowledge about the connections between the broad suite of Southern Ocean ecosystem services and the social and economic goals of CCAMLR Members. Clearer information on the value of ecosystem services would address the existing need for information about the objectives for each component of the three-way trade-off. It would also promote consideration of ecosystem services that are not currently represented in decision-making. Secondly, an assessment which gives equal consideration to the full range of provisioning, supporting, regulating and cultural services would be a substantial undertaking involving a wide community. This, in itself, could help forge more substantial links between the different components of the ATS. The end product would provide a consistent basis for coordinating activities related to managing or understanding ecosystem impacts. The information presented here could provide a starting point for such an assessment. New research would be needed to fill some obvious gaps such as the spatial mapping (e.g. Naidoo et al. 2008, Maes et al. 2011) and economic valuation (e.g. Costanza et al. 1997) of ecosystem services, and the assessment would serve as a gap analysis to highlight other data needs.
Best-practice developed in many other regional assessments could be useful (Ash 2010). CCAMLR is a user of information on the status and trends of marine ecosystems but it does not fund or directly mandate the collection of such data. The reliance of CCAMLR on donated information is a significant challenge to both the achievement of an ecosystem assessment and the long-term management of ecosystem services in the Southern Ocean (Hill 2013a, 2013b). There are several potential solutions, including a new initiative by the fishing industry to support the scientific work of CCAMLR. We acknowledge that an ecosystem assessment would be a significant task in terms of resource requirements and coordination effort, but we believe it would deliver significant and long-term practical benefits.

Conclusion

The ecosystem services provided by the Southern Ocean are significant on a global scale, as illustrated by the potential of Antarctic krill to supply the equivalent of 11% of current world fishery landings. The terms "ecosystem services" and "ecosystem assessment" are not commonly used within the community concerned with managing human activities in the Southern Ocean. Nonetheless this community is actively gathering and applying much of the information that ecosystem assessments seek to collate. The Convention, in particular, articulates the requirement to consider trade-offs between ecosystem services. The management of the krill fishery represents a practical implementation of this requirement despite a lack of information about how beneficiaries value the relevant ecosystem services. A formal ecosystem assessment could provide necessary information on the wider suite of ecosystem services that fishing might interact with and how beneficiaries value these services. Such information is likely to aid the future development of krill fishery management and help remove the current reliance on interim measures. Formal and comprehensive ecosystem assessment would require considerable investment but could substantially improve coordination between management bodies focused on different human activities at both the regional and global scale.
Distorted plane waves in chaotic scattering

Distorted plane waves, sometimes called Eisenstein functions, are a family of eigenfunctions of a Schrödinger operator that are not square integrable. More precisely, they can be written as the sum of a plane wave and an outgoing wave. We shall study distorted plane waves in the semiclassical limit, in a general setting which includes manifolds that are Euclidean near infinity, under the hypothesis that the classical dynamics is hyperbolic close to the trapped set, and that some topological pressure is negative.

Introduction

In this paper, we will consider on $\mathbb{R}^d$ a semiclassical Hamiltonian of the form
$$P_h = -h^2\Delta + V(x),$$
where the potential $V$ is smooth and compactly supported. We will study the "distorted plane waves", or "scattering states", associated to $P_h$. They are a family of functions $E_h^\xi \in C^\infty(\mathbb{R}^d)$ with parameter $\xi \in S^{d-1}$ (the direction of propagation of the incoming wave) which are generalized eigenfunctions of $P_h$, that is to say, they satisfy the differential equation
$$P_h E_h^\xi = E_h^\xi, \tag{1}$$
but which are not in $L^2(\mathbb{R}^d)$ (since $P_h$ has no embedded eigenvalues in $\mathbb{R}^+$). These distorted plane waves resemble the actual plane waves, in the sense that we may write
$$E_h^\xi = e^{\frac{i}{h}x\cdot\xi} + E_{\mathrm{out}}, \tag{2}$$
where $E_{\mathrm{out}}$ is outgoing in the sense that it satisfies the Sommerfeld radiation condition:
$$\partial_r E_{\mathrm{out}} - \frac{i}{h}\,E_{\mathrm{out}} = o\big(|x|^{-\frac{d-1}{2}}\big) \quad \text{as } |x|\to\infty. \tag{3}$$
One can show (see for instance [Mel95, §2] or [DZ16, §4]) that for any $\xi \in S^{d-1}$ and $h>0$, there exists a unique function $E_h^\xi$ satisfying conditions (1), (2) and (3). Condition (3) may be equivalently stated by asking that $E_{\mathrm{out}}^\xi$ is the image of a function in $C_c^\infty(\mathbb{R}^d)$ by the outgoing resolvent $(P_h-(1+i0)^2)^{-1}$, or by asking that $E_{\mathrm{out}}^\xi$ may be put in the form
$$E_{\mathrm{out}}^\xi(x) = e^{i|x|/h}\,|x|^{-\frac{d-1}{2}}\Big(a_h^\xi(\omega) + O\big(\tfrac{1}{|x|}\big)\Big),$$
where $\omega = x/|x|$. The function $a_h(\xi,\omega) := a_h^\xi(\omega)$ is called the scattering amplitude, and is the integral kernel of the scattering matrix minus identity. The scattering amplitude, and hence the distorted plane waves, are central objects in scattering theory. The aim of this paper is to discuss the behaviour of distorted plane waves in the semiclassical limit $h\to 0$. Distorted plane waves can be seen as an analogue, on manifolds of infinite volume, of the eigenfunctions of a Schrödinger operator on a compact manifold. It is therefore natural to ask questions similar to those in the compact case: what can be said about the semiclassical measures of distorted plane waves? About the behaviour of their $L^p$ norms as $h\to 0$? About their nodal sets and nodal domains? The answer to these questions will depend in a drastic way on the properties of the underlying classical dynamics. Let us define the classical Hamiltonian by $p(x,\xi) = |\xi|^2 + V(x)$, and the layer of energy 1 as $\mathcal{E} = \{\rho \in T^*\mathbb{R}^d;\ p(\rho) = 1\}$. Note that this is a non-compact set, but its intersection with any fibre $T_x^* X$ is compact. We also denote, for each $t\in\mathbb{R}$, the Hamiltonian flow generated by $p$ by $\Phi^t : T^*\mathbb{R}^d \to T^*\mathbb{R}^d$. For $\rho\in\mathcal{E}$, we will say that $\rho\in\Gamma_\pm$ if $\{\Phi^t(\rho),\ \pm t\le 0\}$ is a bounded subset of $T^*\mathbb{R}^d$; that is to say, $\rho$ does not "go to infinity", respectively in the past or in the future. The sets $\Gamma_\pm$ are called respectively the outgoing and incoming tails (at energy 1). The trapped set is defined as
$$K := \Gamma_+ \cap \Gamma_-.$$
It is a flow invariant set, and it is compact, because $V$ is compactly supported. If the trapped set is empty, then we can easily describe the distorted plane waves in the semiclassical limit. Namely, one can show (cf. [DG14, §5.1]) that $E_h^\xi$ is a Lagrangian (WKB) state. Furthermore, for any $\chi\in C_c^\infty(\mathbb{R}^d)$, the norm $\|\chi E_h^\xi\|_{L^2}$ is bounded independently of $h$.
However, if the trapped set is non-empty, the distorted plane waves may not be bounded uniformly in $L^2_{loc}$ as $h\to 0$. Actually, $\|\chi E_h^\xi\|_{L^2}$ could grow exponentially fast as $h\to 0$. If we want this quantity to remain bounded uniformly in $h$, we must therefore make some additional assumptions on the classical dynamics. Let us now detail these assumptions.

Hypotheses on the classical dynamics

• Hyperbolicity assumption: In the sequel, we will suppose that the potential $V$ is such that the trapped set contains no fixed point, and is a hyperbolic set. We refer to section 2.1.2 for the definition of a hyperbolic set. The potential in figure 1 is an example of such a potential.

• Topological pressure assumption: For our result on distorted plane waves to hold, we must also make the assumption (Hypothesis 6) that the topological pressure associated to half the logarithm of the unstable Jacobian of the flow on $K$ is negative. The definition of the topological pressure will be recalled in section 3.4. Hypothesis 6 roughly says that the system is "very open". One should note that in dimension 2, this condition is equivalent to the fact that the Hausdorff dimension of $K$ is strictly smaller than 2. In the three-bumps potential of figure 1, this condition is satisfied if the three bumps are far enough from each other, but it is not satisfied if the bumps are close to each other.

• Transversality assumption: Our last assumption does not concern directly the classical dynamics, but the Lagrangian manifold
$$\Lambda_\xi := \{(x,\xi);\ x\in\mathbb{R}^d\}.$$
Note that the plane wave $e^{\frac{i}{h}x\cdot\xi}$ is a Lagrangian state associated with the Lagrangian manifold $\Lambda_\xi$. We need to make a transversality assumption on $\Lambda_\xi$. This assumption roughly says that the direction $\xi$ defining $\Lambda_\xi$ is such that the incoming tail $\Gamma_-$ and $\Lambda_\xi$ intersect transversally. We postpone the precise statement of this assumption to Hypothesis 4 in section 2.1.4. This assumption is probably generic in $\xi$, although we don't know how to prove it. In [Ing15], we will show that it is always satisfied for every $\xi$, when we consider geometric scattering on a manifold of non-positive curvature.

Statement of the results

In Theorem 5, we will give a precise description of $E_h^\xi$ as a sum of WKB states, under the assumptions above. Since the precise statement of the theorem is a bit technical, we postpone it to section 3.5, and only state two important consequences of this result. The first one is a bound analogous to what we would get in the non-trapping case.

Theorem 1. Suppose that Hypothesis 2 on hyperbolicity holds, that the topological pressure Hypothesis 6 is satisfied, and that $\xi\in S^{d-1}$ is such that $\Lambda_\xi$ satisfies Hypothesis 4 of transversality. Let $\chi\in C_c^\infty(X)$. Then there exists a constant $C_{\xi,\chi}$ independent of $h$ such that for any $h>0$, we have
$$\|\chi E_h^\xi\|_{L^2} \le C_{\xi,\chi}. \tag{6}$$

Remark 1. The bound (6) could not be obtained directly from resolvent estimates. Indeed, as we will see in section 3.3.2, the term $E_{\mathrm{out}}$ in (2) can be written as the outgoing resolvent $(P_h-(1+i0)^2)^{-1}$ applied to a term which is compactly supported, and whose $L^2$ norm is $O(h)$. Therefore, we have a priori that
$$\|\chi E_h^\xi\|_{L^2} \le O(h)\,\big\|\chi (P_h-(1+i0)^2)^{-1}\chi\big\|_{L^2\to L^2},$$
at least if the support of $\chi$ is large enough. But under Hypotheses 2 and 6, it is known since [NZ09] (see Theorem 4) that
$$\big\|\chi (P_h-(1+i0)^2)^{-1}\chi\big\|_{L^2\to L^2} \le C\,\frac{|\log h|}{h},$$
and such estimates are sharp in the presence of trapping (see [BBR10]). Such a priori estimates would therefore only give $\|\chi E_h^\xi\|_{L^2} \le C|\log h|$.
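Two small additions may help the reader here. First, as a complement to Remark 1, the fact that $E_{\mathrm{out}}$ solves an equation with compactly supported right-hand side can be checked directly; the computation below is our own elementary verification (the sharper $O(h)$ bound quoted in the remark relies on the refined cutoff construction of section 3.3.2, not on this naive formula):

```latex
% Elementary verification (our addition, not quoted from the paper).
% For |\xi| = 1 one has
\[
  -h^2\Delta\, e^{\frac{i}{h}x\cdot\xi}
   = |\xi|^2\, e^{\frac{i}{h}x\cdot\xi}
   = e^{\frac{i}{h}x\cdot\xi},
  \qquad\text{hence}\qquad
  (P_h - 1)\, e^{\frac{i}{h}x\cdot\xi} = V(x)\, e^{\frac{i}{h}x\cdot\xi}.
\]
% Inserting the decomposition (2) into the eigenfunction equation (1) gives
\[
  (P_h - 1)\, E_{\mathrm{out}} \;=\; -\,V(x)\, e^{\frac{i}{h}x\cdot\xi},
\]
% whose right-hand side is compactly supported because V is.
```

Second, since the definition of the topological pressure is only recalled in a section not reproduced here, the following standard variational characterization may help fix ideas; the sign convention (the pressure of minus one half of the logarithm of the unstable Jacobian, as in [NZ09]) is our reading of Hypothesis 6:

```latex
% Variational characterization of the topological pressure (standard; our
% addition). The supremum runs over \Phi^t-invariant probability measures
% \mu supported on the trapped set K, h_{KS} denotes the Kolmogorov--Sinai
% entropy, and J^u is the unstable Jacobian of the flow. Hypothesis 6 reads:
\[
  \mathcal{P}\Big(\!-\tfrac12\log J^u\Big)
  \;=\; \sup_{\mu}\Big(\, h_{KS}(\mu) \;-\; \tfrac12\int_K \log J^u\,\mathrm{d}\mu \Big)
  \;<\; 0 .
\]
```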
Our next result concerns the semiclassical measure of E ξ h . Consider on T * R d the measure µ ξ 0 given by dµ ξ 0 (x, v) = dx δ v=ξ . The measure µ ξ 0 is the semiclassical measure associated to e i h x·ξ , in the sense that for any ψ ∈ C ∞ c (T * R d ) and any χ ∈ C ∞ c (R d ), we have
\[
\big\langle \operatorname{Op}_h(\psi)\,\chi e^{\frac{i}{h}x\cdot\xi},\ \chi e^{\frac{i}{h}x\cdot\xi}\big\rangle_{L^2} \;\xrightarrow[h\to 0]{}\; \int_{T^*\mathbb{R}^d} \psi\,|\chi|^2\, d\mu_0^\xi \;=\; \int_{\mathbb{R}^d} \psi(x,\xi)\,|\chi(x)|^2\,dx.
\]
For the definition and properties of the Weyl quantization Op h , we refer the reader to section 3.1.1. We then define a measure µ ξ on T * R d by
\[
\mu^\xi(a) := \lim_{h\to 0}\ \big\langle \operatorname{Op}_h(a)\, E^\xi_h,\ E^\xi_h\big\rangle_{L^2}
\]
for any a ∈ C 0 c (T * R d ). We will show in section 6.3 that this limit exists under our above assumptions. Actually, the proof will not use Hypothesis 6, that the topological pressure of half the unstable Jacobian is negative, but only the much weaker assumption that the topological pressure of the unstable Jacobian is negative. The following theorem tells us that, under our hypotheses, µ ξ is the semiclassical measure associated to E ξ h , and it gives us a precise description of µ ξ close to the trapped set. Theorem 2. Suppose that Hypothesis 2 on hyperbolicity holds, that the topological pressure Hypothesis 6 is satisfied, and that ξ ∈ S d−1 is such that Λ ξ satisfies Hypothesis 4 of transversality. Then for any ψ ∈ C ∞ c (T * R d ) and any χ ∈ C ∞ c (R d ), we have
\[
\lim_{h\to 0}\ \big\langle \operatorname{Op}_h(\psi)\,\chi E^\xi_h,\ \chi E^\xi_h\big\rangle_{L^2} \;=\; \int_{T^*\mathbb{R}^d} \psi\,|\chi|^2\, d\mu^\xi.
\]
Furthermore, for any ρ ∈ K, there exists a small neighbourhood U ρ ⊂ T * R d of ρ, and a local change of symplectic coordinates κ ρ : U ρ → T * R d with κ ρ (ρ) = 0, such that the following holds. Remark 2. Theorem 2 tells us that the distorted plane waves E ξ h have a unique semiclassical measure. This result is therefore analogous to the Quantum Unique Ergodicity conjecture for eigenfunctions of the Laplace-Beltrami operator on manifolds of negative curvature. However, on compact manifolds of negative curvature, the semiclassical measure we expect is the Liouville measure. Here, the semiclassical measure given by Theorem 2 is very different from the Liouville measure, since, close to the trapped set, it is concentrated on a countable union of Lagrangian sub-manifolds of T * X. There is therefore a deep difference between compact and non-compact manifolds concerning the semiclassical measure of eigenfunctions, a fact which was already noted in [GN14]. Idea of proof Theorems 2 and 1 will be deduced from a precise description of the distorted plane waves E ξ h microlocally near the trapped set. In Theorem 5, we will show that, microlocally near the trapped set, E ξ h can be written as a convergent sum of WKB states. Let us now explain how this result is obtained. By definition, the distorted plane waves E ξ h are generalized eigenfunctions of the operator P h . Therefore, if we write U(t) = e^{−itP_h/h} for the Schrödinger propagator associated to P h , we would like to write formally that U(t)E ξ h = e^{−it/h} E ξ h . Of course, this expression can only be formal, since E ξ h ∉ L 2 , but we will give it a precise meaning by truncating it by some cut-off functions. By equation (2), E ξ h may be decomposed into two terms, which we will write E 0 h and E 1 h in the sequel. E 0 h is a Lagrangian state associated to the Lagrangian manifold Λ ξ , while E 1 h is the image of a smooth compactly supported function by the resolvent (P h − (1 + i0) 2 ) −1 . Using some resolvent estimates and hyperbolic dispersion estimates, we will show in the sequel that, for any compactly supported function χ, we have lim t→∞ χU(t)E 1 h = 0. Therefore, in order to describe E ξ h , we only have to study U(t)E 0 h for some very long times.
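The formal computation underlying this reduction is one line (rigorous only after insertion of the cut-offs used in section 3.5): if P h E ξ h = E ξ h , then
\[
U(t)\,E^\xi_h = e^{-\frac{it}{h}P_h}\,E^\xi_h = e^{-\frac{it}{h}}\,E^\xi_h,
\qquad\text{i.e.}\qquad
e^{\frac{it}{h}}\,U(t)\,E^\xi_h = E^\xi_h,
\]
so that, modulo the contribution of E 1 h , which vanishes in the long-time limit, everything reduces to the propagation of the Lagrangian state E 0 h .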
Since E 0 h is a Lagrangian state, its evolution can be described using the WKB method. To do this, we will have to understand the classical evolution of the Lagrangian manifold Λ ξ for large times. We will show that for any t > 0, the restriction of Φ t (Λ ξ ) to a region close to the trapped set consists of finitely many Lagrangian manifolds, most of which are very close to the "outgoing tail" of the trapped set (see Theorem 3 for more details). Relation to other works The study of the high frequency behaviour of eigenfunctions of Schrödinger operators, and of their semiclassical measures, in the case where the associated classical dynamics has a chaotic behaviour, has a long history. It goes back to the classical works [Shn74], [Zel87] and [CDV85] dealing with Quantum Ergodicity on compact manifolds. Analogous results on manifolds of infinite volume are much more recent. In [DG14], the authors studied the semiclassical measures associated to distorted plane waves in a very general framework, with very mild assumptions on the classical dynamics. The counterpart of this generality is that the authors have to average over directions ξ and over an energy interval of size h to be able to define the semiclassical measure of distorted plane waves. Their result can be seen as a form of Quantum Ergodicity on non-compact manifolds, although no "ergodicity" assumption is made. In [GN14], the authors considered the case where X = Γ\H d is a manifold of infinite volume, with sectional curvature constant equal to −1 (convex co-compact hyperbolic manifold), and with the assumption that the Hausdorff dimension of the limit set of Γ is smaller than (d − 1)/2. In this setting, distorted plane waves are often called Eisenstein series. The authors prove that there is a unique semiclassical measure for the Eisenstein series with a given incoming direction, and they give a very explicit formula for it. This result can hence be seen as a Quantum Unique Ergodicity result in infinite volume. Our result is a generalization of those of [GN14]. Indeed, we also obtain a unique semiclassical measure for the distorted plane waves with a given incoming direction. Our assumption on the topological pressure is a natural generalization of the assumption on the Hausdorff dimension of the limit set of Γ to the case of non-constant curvature. As in [GN14], the main ingredient of the proof is a decomposition of the distorted plane waves as a sum of WKB states. Although our description of the distorted plane waves and of their semiclassical measure is slightly less explicit than that of [GN14], our methods are much more versatile, since they rely on the properties of the Hamiltonian flow close to the trapped set, instead of relying on the global quotient structure. In [Dya11], the author was able to obtain semiclassical convergence of distorted plane waves on manifolds of finite volume (with cusps), by working at complex energies; see also [Bon14] for more precise results. The main argument of [Dya11], [Bon14] and [DG14], which is to describe the distorted plane waves as plane waves propagated during a long time by the Schrödinger flow, is the starting point of our proof. However, the reason for the convergence in the long-time limit is very different in [Dya11], in [Bon14] and [DG14], and in the present paper. Many of the tools used in this paper were inspired by [NZ09], and we will make extensive use of the notations and methods of that paper.
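To fix ideas before outlining the paper, here is the classical stationary-phase computation identifying µ ξ 0 as the semiclassical measure of the incoming plane wave (a standard fact, independent of the trapping assumptions). With the Weyl quantization of section 3.1.1,
\[
\big\langle \operatorname{Op}_h(\psi)\,\chi e^{\frac{i}{h}x\cdot\xi},\ \chi e^{\frac{i}{h}x\cdot\xi}\big\rangle
=(2\pi h)^{-d}\!\iiint e^{\frac{i}{h}(x-y)\cdot(\eta-\xi)}\,\psi\Big(\tfrac{x+y}{2},\eta\Big)\,\chi(y)\,\overline{\chi(x)}\,dy\,d\eta\,dx,
\]
and stationary phase in (y, η), whose critical point is y = x, η = ξ, yields ∫_{R^d} ψ(x, ξ)|χ(x)|² dx + O(h) = ∫ ψ |χ|² dµ ξ 0 + O(h).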
Let us notice that most of the results of the present paper can be made more precise if we suppose that we work on a manifold of non-positive sectional curvature, without a potential. This has been studied in [Ing15], where the author is able to show, by using the methods developed in the present paper, that distorted plane waves are bounded in L ∞ loc independently of h, and to give sharp bounds on the Hausdorff measure of nodal sets of the real part of distorted plane waves restricted to a compact set. Organisation of the paper In section 2, we will state and prove a result concerning the propagation by the Hamiltonian flow of Lagrangian manifolds similar to Λ ξ near the trapped set, under general assumptions. In part 3, we will state Theorem 5, which is our main theorem, giving a description of distorted plane waves as a sum of WKB states. We will deduce Theorem 1 as an easy corollary. In section 4, we will recall various tools which were introduced in [NZ09], and which will play a role in the proof of Theorem 5. We shall then prove Theorem 5 in section 5. Section 6 will be devoted to the proof of the Theorem 2. The main reason why we want to state Theorem 5 for generalized eigenfunctions that are more general than distorted plane waves on R d is that our results do also apply if the manifold is hyperbolic near infinity (which allows us to recover some of the results of [GN14]), as is shown in [Ing15,Appendix B]. Our results do probably also apply if the manifold is asymptotically hyperbolic; this shall be pursued elsewhere. The author would like to thank Stéphane Nonnenmacher for suggesting this project, as well as for his advice during the redaction of this paper. He also thanks the anonymous referee for suggesting several clarifications and improvements in the paper. 2 Propagation of Lagrangian manifolds 2.1 General assumptions for propagation of Lagrangian manifolds Let (X, g) be a noncompact complete Riemannian manifold of dimension d, and let V : X −→ R be a smooth compactly supported potential. We denote by p(x, ξ) = p(ρ) : For each t ∈ R, we denote by Φ t : T * X −→ T * X the Hamiltonian flow at time t for the Hamiltonian p. Given any smooth function f : X −→ R, it may be lifted to a function f : T * X −→ R, which we denote by the same letter. We may then defineḟ ,f ∈ C ∞ (T * X) to be the derivatives of f with respect to the Hamiltonian flow. Hypotheses near infinity We suppose the following conditions are fulfilled. Hypothesis 1 (Structure of X near infinity). We suppose that the manifold (X, g) is such that the following holds: (1) There exists a compactification X of X, that is, a compact manifold with boundaries X such that X is diffeomorphic to the interior of X. The boundary ∂X is called the boundary at infinity. (2) There exists a boundary defining function b on X, that is, a smooth function b : X −→ [0, ∞) such that b > 0 on X, and b vanishes to first order on ∂X. (3) There exists a constant 0 > 0 such that for any point Note that, although part (3) of the hypothesis makes reference to the Hamiltonian flow, it is only an assumption on the manifold (X, g) and not on the potential V , because V is assumed to be compactly supported. We will write X 0 := {x ∈ X; b(x) ≥ 0 /2} By possibly taking 0 smaller, we can assume that supp(V ) ⊂ {x ∈ X; b(x) > 0 }. We will call X 0 the interaction region. We will also write By possibly taking 0 even smaller, we may ask that Definition 1. 
If ρ = (x, ξ) ∈ E, we say that ρ escapes directly in the forward direction, denoted ρ ∈ DE + , if its whole forward trajectory remains outside the interaction region, that is, if π X (Φ t (ρ)) ∈ X\X 0 for all t ≥ 0; the set DE − of points escaping directly in the backward direction is defined analogously. Part (3) of Hypothesis 1 implies the following geodesic convexity result, which reflects the fact that once a trajectory has left the interaction region, it cannot come back to it. Hyperbolicity Recall that the trapped set was defined in (4). In the sequel, we will always suppose that the trapped set is a hyperbolic set, as follows. Hypothesis 2 (Hyperbolicity of the trapped set). We assume that K is a hyperbolic set for the flow Φ t |E . That is to say, there exists a metric g ad on a neighbourhood of K included in E, and λ > 0, such that the following holds. For each ρ ∈ K, there is a flow-invariant decomposition
\[
T_\rho\mathcal{E} = \mathbb{R}\,H_p(\rho) \oplus E^+_\rho \oplus E^-_\rho, \qquad \|d\Phi^{\mp t}(v)\|_{g_{ad}} \le e^{-\lambda t}\,\|v\|_{g_{ad}} \quad \text{for all } v \in E^\pm_\rho,\ t \ge 0.
\]
We will call E ± the unstable (resp. stable) subspaces at the point ρ.
Figure 2: A surface which has negative curvature close to the trapped set of the geodesic flow, and which is isometric to two copies of R 2 \B(0, R 0 ) outside of a compact set. It satisfies Hypothesis 2 near the trapped set and Hypothesis 1 at infinity.
We may extend g ad to a metric on the whole energy layer, so that outside of the interaction region, it coincides with the metric on T * X induced from the Riemannian metric on X. From now on, d will denote the Riemannian distance associated to this metric on E. Let us recall a few properties of hyperbolic dynamics (see [KH95, Chapter 6] for the proofs of the statements). i) The hyperbolic set is structurally stable, in the following sense. For E > 0, define the layer of energy E as E E := {ρ ∈ T * X; p(ρ) = E}, and the trapped set at energy E as K E := {ρ ∈ E E ; Φ t (ρ) remains in a compact set for all t ∈ R}. If K is a hyperbolic set for Φ t |E , then, for E close enough to 1, K E is again a hyperbolic set for Φ t |E E . Moreover, the map ρ → E ± ρ is Hölder-continuous. iv) Any ρ ∈ K admits local strongly (un)stable manifolds W ± loc (ρ) tangent to E ± ρ , defined by
\[
W^\pm_{loc}(\rho) = \big\{\rho'\,;\ d\big(\Phi^{\mp t}(\rho'), \Phi^{\mp t}(\rho)\big) \le \epsilon \ \text{for all } t \ge 0,\ \text{and}\ d\big(\Phi^{\mp t}(\rho'), \Phi^{\mp t}(\rho)\big) \xrightarrow[t\to+\infty]{} 0 \big\},
\]
where ε > 0 is some small number. We call E ±0 ρ := E ± ρ ⊕ R H p (ρ) the weak unstable and weak stable subspaces at the point ρ respectively. Adapted coordinates Let us now describe the construction of a local system of coordinates which is adapted to the stable and unstable directions near a point. In the sequel, these coordinates will be considered as fixed, and used to state Theorem 3. Lemma 2. Let ρ ∈ K. There exists an adapted system of symplectic coordinates (y ρ , η ρ ) on a neighbourhood of ρ in T * X such that the following holds: Proof. We may identify a neighbourhood of ρ ∈ T * X with a neighbourhood of (0, 0) ∈ T * R d . Let us take e ρ 1 = H p (ρ), and complete it into a basis (e ρ 1 , ..., e ρ d ) of E +0 ρ such that ⟨e ρ i , e ρ j ⟩ g ad (ρ) = δ ij for 2 ≤ i, j ≤ d. For any ε > 0, write D ε = {u ∈ R d−1 ; |u| < ε}. We define the following polydisk centred at ρ, where δ comes from (12). We also define unstable Lagrangian manifolds, which are needed in the statement of Theorem 3. Definition 2. Let Λ ⊂ E be an isoenergetic Lagrangian manifold (not necessarily connected) included in a small neighbourhood W of a point ρ ∈ K, and let γ > 0. We will say that Λ is a γ-unstable Lagrangian manifold (or that Λ is in the γ-unstable cone) in the coordinates (y ρ , η ρ ) if it can be written in the form Λ = {(y ρ ; 0, F (y ρ )); y ρ ∈ D}, where D ⊂ R d is an open subset with finitely many connected components and with piecewise smooth boundary, and F : R d → R d−1 is a smooth map with ‖dF‖ C 0 ≤ γ. Note that, since F is defined on R d , a γ-unstable manifold may always be seen as a submanifold of a connected γ-unstable Lagrangian manifold.
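As a linear caricature of Definition 2 (a model, not the dynamics of the text), consider on T * R the hyperbolic map κ(y, η) = (λy, λ^{−1}η) with λ > 1. A γ-unstable graph Λ = {η = f(y)}, with sup|f′| ≤ γ, is mapped to
\[
\kappa(\Lambda) = \big\{\eta = \lambda^{-1} f(\lambda^{-1} y)\big\},\qquad
\sup\Big|\tfrac{d}{dy}\,\lambda^{-1} f(\lambda^{-1} y)\Big| \le \lambda^{-2}\,\gamma,
\]
so a γ-unstable graph becomes λ^{−2}γ-unstable after one step: the dynamics contracts the unstable cones, which is the mechanism behind the Inclination Lemma used in the proofs below.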
Let us also note that, since Λ is isoenergetic and is Lagrangian, an immediate computation shows that F does not depend on y ρ 1 , so that Λ can actually be put in the form Λ = {(y ρ , 0, f (y ρ )); y ρ ∈ D}, Hypotheses on the incoming Lagrangian manifold Let us consider an isoenergetic Lagrangian manifold L 0 ⊂ E of the form where X 1 is a closed subset of X\X 0 with finitely many connected components and piecewise smooth boundary, and ϕ : X 2 x −→ ϕ(x) ∈ T * x X is a smooth co-vector field defined on some neighbourhood X 2 of X 1 . We make the following additional hypothesis on L 0 : Hypothesis 3 (Invariance hypothesis). We suppose that L 0 satisfies the following invariance hypotheses. We also make the following transversality assumption on the Lagrangian manifold L 0 . It roughly says that L 0 intersects the stable manifold transversally. Hypothesis 4 (Transversality hypothesis). We suppose that L 0 is such that, for any ρ ∈ K, for any ρ ∈ L 0 , for any t ≥ 0, we have Note that (16) is equivalent to . On X = R d , Hypothesis 4 is likely to hold for almost every ξ ∈ S d−1 , at least for a generic V . In [Ing15], the author shows that this hypothesis is satisfied for every ξ on manifolds of non-positive curvature which have several Euclidean ends (like the one in Figure 2), when there is no potential. Statement of the result Let us now state the main result of this section, which describes the "truncated evolution" of Lagrangian manifolds. Truncated Lagrangians Let (W a ) a∈A be a finite family of open sets in T * X. Let N ∈ N, and let α = α 0 , α 1 ...α N −1 ∈ A N . Let Λ be a Lagrangian manifold in T * X. We define the sequence of (possibly empty) Lagrangian manifolds (Φ k α (Λ)) 0≤k≤N by recurrence by: In the sequel, we will consider families with indices in A = A 1 A 2 {0}. For any α ∈ A N such that α N −1 = 0, we will define if there exists 1 ≤ i ≤ N − 1 with α i = 0, and τ (α) = 0 otherwise. Theorem 3. Suppose that the manifold X satisfies Hypothesis 1 at infinity, that the Hamiltonian flow (Φ t ) satisfies Hypothesis 2, and that the Lagrangian manifold L 0 satisfies the invariance Hypothesis 3 as well as the transversality Hypothesis 4. Fix γ uns > 0 small enough. There exists ε 0 > 0 such that the following holds. Let (W a ) a∈A1 be any open cover of K in T * X of diameter < ε 0 , such that there exist points ρ a ∈ W a ∩ K, and such that the adapted coordinates (y a , η a ) centred on ρ a are well defined on W a for every a ∈ A 1 . Then we may complete this cover into (W a ) a∈A an open cover of E in T * X where A = A 1 A 2 {0} (with W 0 defined as in (8)) such that the following holds. There exists N uns ∈ N such that for all N ∈ N, for all α ∈ A N and all a ∈ A 1 , then W a ∩Φ N α (L 0 ) is either empty, or is a Lagrangian manifold in some unstable cone in the coordinates (y a , η a ). is a γ uns -unstable Lagrangian manifold in the coordinates (y a , η a ). Remark 3. For a sequence α ∈ A N , N − τ (α) corresponds to the time spent in the interaction region. Our last statement therefore says that if a part of L 0 stays in the interaction region for long enough when propagated, then its tangents will form a small angle with the unstable direction at ρ a . Remark 4. The constant ε 0 and the sets (W a ) a∈A2 depend on the Lagrangian manifold L 0 . 
If we take a whole family of Lagrangian manifolds (L z ) z∈Z satisfying Hypothesis 3 and Hypothesis 4, then we will need some additional conditions on the whole family to be able to find a common choice of ε 0 and (W a ) a∈A2 independent of z ∈ Z. An example of such a condition will be provided by equations (36) and (37). Note that these equations are automatically satisfied if Z is finite. Proof of Theorem 3 Proof. From now on, we will fix a γ uns > 0. Let ρ 0 ∈ K, and consider the system of adapted coordinates in a neighbourhood of ρ 0 constructed in section 2.1.3. Recall that the set U ρ0 ( ) was defined in (14). We define a Poincaré section by Note that the spaces E ± ρ0 are tangent to Σ ρ0 , and that the coordinates (y ρ0 , η ρ0 ) introduced in (13) form a symplectic chart on Σ ρ0 . Actually, we will often need a non-symplectic system of coordinates built from the coordinates (y ρ , η ρ ). Before building this non-symplectic system of coordinates, let us explain why it is a crucial ingredient of our argument. The main tool in the proof of Theorem 3 is the so-called "Inclination lemma", which roughly says that a Lagrangian manifold which intersects the stable manifold transversally will get more and more unstable when propagated in the future. This is a very easy result in the case of linear hyperbolic diffeomorphisms, but we must add some quantifiers in the case of non-linear dynamics to make it rigorous. Namely, one can say, as in [NZ09, Proposition 5.1], that given a γ > 0, there However, we may not use this result directly for the following reason. The smaller we take , the longer the points of the Lagrangian manifold L 0 may spend in the part of the interaction region which is not affected by the hyperbolic dynamics before entering in some U ρ ( ) for some ρ ∈ K. Yet the longer they spend in this "intermediate" region, the more stable the Lagrangian manifold may a priori become. To avoid such a circular reasoning, we should introduce another system of coordinates, in which the description of the propagation of the Lagrangian manifolds in the intermediate region is easier. Alternative coordinates In this paragraph, we will describe a system of "alternative", or "twisted" coordinates built from the one we introduced in section 2.1.3, but which may differ slightly from them. Given a ρ ∈ K, we introduce a system of smooth coordinates (ỹ ρ ,η ρ ) as follows. On Σ ρ , these coordinates are such that and if we denote by L ρ the map defined in a neighbourhood of (0, 0), we have Now, ifρ has straight coordinates (y ρ (ρ), η ρ (ρ)), we letρ ∈ Σ ρ be the point with straight coordinates (0, y ρ (ρ), 0, η ρ (ρ)). We do then define the twisted coordinates ofρ bỹ Note that this system of coordinates doesn't have to be symplectic. We have ∂y ρ Given a ρ ∈ K, and , > 0, we definẽ where δ is an energy interval on which the dynamics remains uniformly hyperbolic. Finally, the Poincaré section in the alternative coordinates is represented as In the sequel, we will be working most of the time in a situation where << (that is, with sets much thinner in the unstable direction than in the stable direction). The main reason why we needed to introduce alternative coordinates is that they give a simpler expression for the Poincaré map (see Remark 5). Let us now define this map. The map κ ρ0 need not be symplectic, since it is defined in the twisted coordinates which need not be symplectic. However, if we had defined the Poincaré map in the straight coordinates, it would have been automatically symplectic. 
The linearisation of the two systems of coordinates are identical at ρ 0 by equation (19). Therefore, by using the hyperbolicity assumption, we see that the differential of κ at ρ 0 takes the form where · corresponds to the matrix norm. Hence, the Poincaré map κ ρ0 takes the form and the functionsα andβ satisfy: We therefore have for some constant C 0 , since κ is uniformly C 2 . Remark 5. Equation (25) is the main reason why we needed to introduce alternative coordinates, and will play a key role in the proof of Lemma 8. If we had defined the Poincaré map in the straight coordinates, we wouldn't have had α(0, η ρ0 ) = 0 or β(y ρ0 , 0) = 0 Remark 6. By compactness of the trapped set, the constants C 0 and ν may be chosen independent of the point ρ 0 . We may also find a C > 1 such that, independently of ρ 0 and ρ 1 in K, we have Finally, by possibly taking C 0 larger, we may assume that all the second derivatives of the map L ρ defined in (18) are bounded by C 0 independently on ρ ∈ K. Changes of coordinates and Lagrangian manifolds Let us describe how a Lagrangian manifold is affected when we go from twisted coordinates to straight coordinates centred at the same point. Points on Λ are parametrized by the coordinateỹ. We may hence see their straight coordinates u, s as functions ofỹ. By equations (19), (20) and Remark 6, we have Therefore, on Λ,ỹ → y is invertible. We may hence write η as a function of y, and we have . That η is actually independent of y 1 comes from the fact that Λ is an isoenergetic Lagrangian manifold, and that we are working in symplectic coordinates. Let us now describe the change between two systems of twisted coordinates. Let ρ, ρ ∈ K. If they are close enough to each other, the map L : (ỹ ρ ,η ρ ) → (ỹ ρ ,η ρ ) is well defined on a set containing both ρ and ρ , of diameter d(ρ, ρ ). Combining the fact that the (un)stable subspaces E ± ρ are Hölder continuous with respect to ρ ∈ K δ with some Hölder exponent p > 0, and point (v) of Lemma 2, we get: where and where L is of the form for some unitary matrix U y . Here, L η might not be unitary, but it is invertible, and by compactness of K, L η −1 may be bounded independently on ρ. Now, by compactness, the second derivatives of L may be bounded independently of ρ and ρ . Therefore, for any ρ in a neighbourhood of ρ, we have with R ρ ≤ C d(ρ, ρ ) and C independent of ρ . By possibly enlarging C 0 , we may assume that L η −1 ≤ C 0 . We may also assume that C 0 /2 is larger than the constants C and C appearing in the bounds on R ρ,ρ and R ρ . We will use the previous remarks in the form of the following lemma, which describes the effect of a change of twisted coordinates on a Lagrangian manifold. Lemma 4. Let ρ, ρ ∈ K be such that d(ρ, ρ ) < , and let Λ be a Lagrangian manifold which may be written in the twisted coordinates centred on ρ as Proof. Consider points on Λ. By assumption, theirη ρ coordinate is a function of theirỹ ρ coordinate. Therefore, using the map L, their coordinates (ỹ ρ ,η ρ ) may be seen as functions ofỹ ρ . Let us denote by L y and L η the two components of L. By definition, we havẽ ∂ỹ ρ ≤ γ. Therefore, we have: where U is unitary. By equations (28) and (30), we have R ≤ 2γC 0 p < 1 by assumption. Therefore,ỹ ρ →ỹ ρ is invertible, and we have ∂ỹ ρ We may seeη ρ as a function ofỹ ρ , and we have and the lemma follows. Propagation for bounded times Let us fix a ν 1 ∈ (ν, 1), where ν was defined in (22). Recall that p was defined in (29) as the Hölder exponent of the stable and unstable directions. 
From now on, we fix an > 0 small enough so that This is possible because 1+ν1 2 < 1. We also ask that C 0 p < 1/2. Note that, although condition (32) looks horrible, it is designed to work well with Lemma 4. Let us introduce a first decomposition of the energy layer. Recall that we defined W 0 in (8) as the external part of the energy layer. We define W 1 := {ρ ∈ E\W 0 ; d(ρ, K) < /2)} for the part of the energy layer close to the trapped set, and W 2 := {ρ ∈ E\W 0 ; d(ρ, K) ≥ /2)} for the intermediate region. See figure 3 for a representation of these different sets. Note that we will later introduce a finer open cover of the energy layer, using the sets W a appearing in the statement of the theorem. The following lemma tells us that the set W 2 is a transient set, that is to say, points spend only a finite time inside it. Lemma 5. There exists N ∈ N an integer which depends on such that ∀ρ ∈ W 2 , we have either Proof. This result comes from the uniform transversality of the stable and unstable manifolds (which is a direct consequence of the compactness of K). It gives us the existence of a d 1 ( ) > 0 such that, for all ρ ∈ W 2 ∪ W 1 , We may therefore write A point in the first set will leave the interaction region in finite time in the future, while a point in the second set will leave it in finite time in the past. By compactness, we can find a uniform N as the one in the statement of the lemma. The following lemma is a consequence of the transversality assumption we made. It tells us that when we propagate L 0 during a finite time N and restrict it to a small setŨ ρ ( , ) close to the trapped set, we obtain a finite union of Lagrangian manifolds in the alternative coordinates. Here, the size of the set in the unstable direction depends on N , but its size in the stable direction does not. can be written in the coordinates (ỹ ρ ,η ρ ) as the union of at most N N disjoint Lagrangian manifolds, which are allγ N -unstable : Proof. Let us consider a 1 ≤ t ≤ N . First of all, Φ t being a symplectomorphism, it sends Lagrangian manifolds to Lagrangian manifolds. The restriction of a Lagrangian manifold to a region of phase space is a union of Lagrangian manifolds. We now have to prove that, if we take small enough, these Lagrangian manifolds are allγ N unstable, for someγ N > 0 which is independent of ρ. Let ρ ∈ K. By hypothesis, W − loc (ρ) and Φ t (L 0 ) are transverse when they intersect. Therefore, in a small neighbourhood of the stable manifold {ỹ ρ = 0}, each connected component of Φ t (L 0 ) may be projected smoothly on the twisted unstable manifold {η ρ = 0}. That is to say, there exists a > 0 and a γ > 0 such that each connected component of Φ t (L 0 ) ∩Ũ ρ ( , ) is γ-unstable in the twisted coordinates around ρ, for some γ > 0. Now, since the changes of coordinates between twisted coordinates are continuous, we may use the compactness of K to find uniform constants > 0 and γ > 0 such that each connected By compactness ofŨ ρ ( , ), the number of Lagrangian manifolds making up Φ t (L 0 ) ∩Ũ ρ ( , ) is finite. This concludes the proof of the lemma. Applying this lemma to N = N + 2, we define the following constants, which we shall need later in the proof (recall that γ uns has been fixed). where C comes from Remark 6, and C 0 comes from equation (26). Remark 7. As explained in Lemma 5, N is the maximal time spent by a trajectory in the intermediate region W 2 . 
The time N 1 will be the time necessary to incline a γ 0 -unstable Lagrangian manifold to a γ uns -unstable Lagrangian manifold, as explained in Proposition 1. As for the constant Remark 8. The constant ε 0 in Theorem 3 will depend only on γ 0 and 0 . Therefore, the proof of Lemma 6 tells us that if we consider a whole family of Lagrangian manifolds (L z ) z∈Z satisfying Hypothesis 3 and Hypothesis 4, we will be able to find an ε 0 > 0 uniform in z ∈ Z provided we have the following uniform transversality condition: Lemma 7. There exists a neighbourhood W 3 of Γ − ∩ W 1 in E, a finite set of points (ρ i ) i∈I ⊂ K and 0 < 1 < 1 , such that the following holds. Proof. The sets Ũ ρ ( , 2 ) ρ∈K form an open cover of a neighbourhood of (Γ − ∩ W 1 ). Let us denote by W 3 such a neighbourhood. By compactness, we may extract from it a finite open cover Ũ i i∈I := Ũ ρi ( , 2 ) i∈I , which still satisfies (i). Since W 3 is a neighbourhood of Γ − ∩ W 1 , there exists a constant 2 > 0 such that the following holds: Therefore, there exists 0 which is (ii). Finally, since the setŨ i are open, we may shrink 1 so that (iii) is satisfied. Remark 9. The constant ε 0 appearing in Theorem 3 will be smaller that 1 (see Lemma 9), therefore each of the sets (W a ) a∈A1 will be contained in someŨ i . Furthermore, we will have W a ⊂ {ρ ∈ E; d(ρ, K) < ε 0 }. Hence, a point ρ ∈ W 1 \W 3 ∪ {ρ ∈ W 2 ; d(ρ , Γ − ) ≥ d 1 } will not be contained in any of the sets (W a ) a∈A1 when propagated in the future. Lemma 6 tells us that Φ N (L 0 ) ∩Ũ i consists of finitely many γ 0 -unstable Lagrangian manifolds. Our aim will now be to take a Lagrangian manifold included in aŨ i1 , to propagate it during some time N ≥ N 1 , then to restrict it to aŨ i2 , for i 1 , i 2 ∈ I. The remaining part of the Lagrangian, which is in W 1 \W 3 , will not meet the sets (W a ) a∈A1 when propagated in the future, as explained in Remark 9. Propagation in the setsŨ The propagation of Lagrangian manifolds in the setsŨ i is described in the following proposition, which is the cornerstone of the proof of Theorem 3. Recall that γ uns was chosen arbitrarily at the beginning of the proof, and that N 1 was defined in (34). Proposition 1. Let N ≥ N 1 , ι = (i 0 i 1 ...i N −1 ) ∈ I N and i ∈ I. Let Λ 0 ⊂Ũ i0 be an isoenergetic Lagrangian manifold which is γ 0 -unstable in the twisted coordinates centred on ρ i0 . ThenŨ i ∩Φ ι (Λ) is a Lagrangian manifold contained inŨ i , and it is γuns (1+2C0 p ) 2 -unstable in the twisted coordinates centred on ρ i . Proof. The first part of the proof consists in understanding how Φ n (Λ 0 ) behaves for n ≤ N 1 , in the twisted coordinates centred on ρ i0 . This is the content of the following lemma, which is an adaptation to our context of the "Inclination lemma" (See [KH95, Theorem 6.2.8]; see also [NZ09,Proposition 5.1] for a statement closer to our context and notations). Proof. By assumption, Λ 0 may be put in the form We will consider restrictions of the Lagrangian manifolds at intermediate times to the Poincaré sections centred at Φ k (ρ i0 ): is of the form (24). From the equation (24) and the definition of C, we see that the maximal rate of expansion in the unstable direction is bounded by (C + C 0 p ). Therefore, the definition of 2 implies that for any k ≤ N 1 , the projection of Λ k sec on the unstable direction is supported in B(0, 1 ). To lighten the notations, we will writeỹ k andη k instead ofỹ Φ k (ρi 0 ) andη Φ k (ρi 0 ) . 
Let k ≥ 0, and suppose we may write where D k ⊂ B(0, 1 ), and df k C 0 ≤ γ k for some 0 < γ k ≤ γ 0 . Note that the key point in the following computations is that, since we have chosen "alternative" coordinates, we have |∂ ηα k (ỹ k ,η k )| ≤ C 0ỹ k ≤ C 0 1 . The projection of Φ 1 |Λ k sec on the horizontal subspace reads where for each k, A k is a matrix as in (23). By differentiating, we obtain : where r k has entries bounded by C 0 1 γ 0 ≤ C 0 . Therefore, the map is invertible, andỹ k+1 →ỹ k is contracting. This implies that Λ k+1 sec can be represented as a graph . Differentiating with respect toỹ k+1 , we get where the last inequality comes from (31). First of all, the fact that this slope is bounded uniformly on Λ k+1 sec implies that Λ k+1 sec can indeed be written in the form where D k+1 ⊂ B(0, 1 ), and df k+1 C 0 ≤ γ k+1 , where γ k+1 ≤ γ k ν 1 + γuns(1−ν1) After times N > N 1 , the Lagrangian manifold may not be included inŨ Φ N (ρi 0 ) ( , 1 ). Therefore, we may have to change of coordinates. By Lemma 8, at time N 1 , our Lagrangian manifold Φ N1 (Λ 0 ) is included inŨ Φ N 1 (ρi 0 ) ( , 1 ) and is (1+ν1)γuns 4 -unstable. We want to studyŨ j ∩ Φ N1 (Λ 0 ) for j ∈ I, in the coordinates centred at ρ j , and to apply the computations made in the proof of Lemma 8 again. Let us see how all this works. We may continue this argument of changing coordinates and propagating to any time N ≥ N 1 : we always obtain a single Lagrangian manifold which is (1+ν1)γuns 4 -unstable. This concludes the proof of Proposition 1, because we assumed that C 0 p < 1/2. Remark 10. In [NZ09], Proposition 5.1, the authors prove using the chain rule that for each ∈ N, there exists a constant C large enough such that the following holds. If i 1 , i 2 ∈ I and if Λ ⊂Ũ i1 is a Lagrangian manifold in some unstable cone, generated by a function f in the coordinates (ỹ ρi 1 ,η ρi 1 ) with f C ≤ C , then Φ 1 (Λ) ∩Ũ i2 is a union of finitely many Lagrangian manifolds, all of which are in some unstable cone in the coordinates (ỹ ρi 2 ,η ρi 2 ), and are generated by functions with a C norm smaller than C . In particular, this shows that on the Lagrangian manifold Φ N ι (Λ) described in Proposition 1, the function s ρi (y ρi ) has a C norm smaller than C , where C is a constant independent on N . Properties of the sets (W a ) a∈A1 The following lemma is an adaptation of Lemma 6 to the "straight coordinates". Note that the main reason why we want to use these straight coordinates is because they are symplectic, which will play a crucial role in the proof of Theorem 5. Lemma 9. There exists ε 0 < 1 such that, if (W a ) a∈A1 is an adapted cover of K of diameter ε 0 such that for each a ∈ A 1 , W a ∩ W 0 = ∅, and there exists a point ρ a ∈ W a ∩ K = ∅. Then there exist N Nuns ∈ N and γ such that the following holds. For each a ∈ A 1 , for each 1 ≤ N ≤ N uns , the set Φ N (L 0 ) ∩ W a consists of at most N Nuns Lagrangian manifolds, all of which are γ -unstable in the straight coordinates centred on ρ a . Proof. Let us choose ε 0 > 0 small enough so that C 0 ε 0γNuns < 1 and such that each set of diameter smaller that ε 0 and which intersects K is contained in someŨ ρ ( , δ), with δ <δ Nuns . By applying Lemma 6, we know that there exists N Nuns ∈ N,δ Nuns > 0 andγ Nuns > 0 such that ∀0 < δ ≤δ Nuns , ∀ρ ∈ K, ∀1 ≤ N ≤ N uns , Φ N (L 0 ) ∩Ũ ρ ( , δ) can be written in the coordinates (ỹ ρ ,η ρ ) as the union of at most N Nuns Lagrangian manifolds, which are allγ Nuns -unstable. 
This gives us the statement in the twisted coordinates. To go to the straight coordinates, we may simply use Lemma 3, thanks to the assumption made on ε 0 . For any a ∈ A 1 and 1 ≤ k ≤ N uns , W a ∩ Φ k (L 0 ) consists of finitely many Lagrangian manifolds. Let us define d a,k as the minimal distance (with respect to the distance d) between the Lagrangian manifolds which make up W a ∩ Φ k (L 0 ), with the convention that this quantity is equal to +∞ if W a ∩ Φ k (L 0 ) consists of a single Lagrangian manifold or is empty. We then set
\[
d := \min\Big(\varepsilon_0,\ \min_{a\in A_1,\ 1\le k\le N_{uns}} d_{a,k}\Big) > 0.
\]
Remark 11. If we consider a whole family of Lagrangian manifolds (L z ) z∈Z satisfying Hypothesis 3 and Hypothesis 4, we will be able to apply Theorem 3 to them with sets (W a ) a∈A2 independent of z ∈ Z provided the constant d is well-defined, that is to say, provided we have
\[
\inf_{z\in Z}\ \min_{a\in A_1,\ 1\le k\le N_{uns}}\ d^z_{a,k} > 0,
\]
where d z a,k is the minimal distance between the Lagrangian manifolds which make up W a ∩ Φ k (L z ), with the convention that this quantity is equal to +∞ if W a ∩ Φ k (L z ) consists of a single Lagrangian manifold or is empty. The flow (Φ t ) is C 1 with respect to time, hence Lipschitz on [0, N uns ]. Therefore, there exists a constant C > 0 such that for all t ∈ [0, N uns ] and all ρ 1 , ρ 2 ∈ E, we have d(Φ t (ρ 1 ), Φ t (ρ 2 )) ≤ C d(ρ 1 , ρ 2 ). We take ε 2 := d/C. We now complete (W a ) a∈A1 to cover the whole energy layer. Construction and properties of the sets (W a ) a∈A2 Recall that W 0 = T * (X\X 0 ), and that b is the boundary defining function introduced in Hypothesis 1. We build the sets (W a ) a∈A2 so that, if we set A = A 1 ∪ A 2 ∪ {0}, the following holds: • Each of the sets (W a ) a∈A2 has a diameter smaller than ε 2 . • (W a ) a∈A is an open cover of E. Our next lemma is the first brick of the proof of the uniqueness of the Lagrangian manifold making up Φ N α (L 0 ). It relies on the fact that the sets (W a ) a∈A2 have been built small enough. Lemma 10. Let k ≤ N uns , α ∈ A k , and a ∈ A 1 . Then the set W a ∩ Φ k α (L 0 ) is empty or consists of a single Lagrangian manifold. Proof. Let us suppose that Φ k (L 0 ) ∩ W a is non-empty. We have seen in Lemma 9 that it consists of finitely many Lagrangian manifolds, with a distance between them larger than d. Therefore, for any 1 ≤ k' ≤ k, the sets Φ −k' (Φ k (L 0 ) ∩ W a ) consist of Lagrangian manifolds which are at a distance larger than ε 2 from each other. Because of the assumption (9) we made, we have α k' ∈ A 2 for some k' ≤ k. Since the sets (W a ) a∈A2 have a diameter smaller than ε 2 , they separate the Lagrangian manifolds which make up Φ −k' (Φ k (L 0 ) ∩ W a ). The lemma follows. Structure of the admissible sequences We will now state two lemmas which put some constraints on the sequences α ∈ A N , with α N ∈ A 1 , such that Φ N α (L 0 ) ≠ ∅. The first of these lemmas tells us that we may restrict ourselves to sequences such that α k ≠ 0 for k ≥ 1. Lemma 11. Let N ∈ N, let α ∈ A N , and let a ∈ A 1 . Suppose that α k = 0 for some 1 ≤ k ≤ N −1. Then the points of Φ k α1...α k (L 0 ) whose trajectory meets W 1 in the future all belong to L 0 . Proof. By hypothesis, Φ k α1...α k (L 0 ) ⊂ W 0 , and it intersects W 1 in the future. We have W 0 = DE − ∪ DE + , and a point in DE + cannot meet W 1 in the future. Therefore, the points in Φ k α1...α k (L 0 ) which meet W 1 in the future are all in DE − . But by Lemma 1, points in DE − can only have pre-images in W 0 . Therefore, these points belong to Φ k (L 0 ∩ W 0 ), which is contained in L 0 by Hypothesis 3. Let us now take advantage of Remark 9 to show that, from time k ≥ N̄ + 2 (N̄ being the constant from Lemma 5), all the interesting dynamics takes place in W 3 . Lemma 12.
Let N ≥ N̄ + 2, and let α ∈ A N with α i ≠ 0 for i ≥ 1. Let ρ ∈ Φ k α (L 0 ) for some N̄ + 2 ≤ k ≤ N be such that Φ N −k (ρ) ∈ W a for some a ∈ A 1 . Then ρ ∈ W 3 . Proof. If ρ ∈ W 1 , then the result follows from Remark 9. We must therefore check that we cannot have ρ ∈ W 2 ∪ W 0 . First of all, note that Lemma 1 implies that we cannot have ρ ∈ W 0 . This lemma also implies an inclusion (38) relating the sets W a , a ∈ A 1 ∪ A 2 , to W 0 . Suppose now that ρ ∈ W 2 . Since k ≥ N̄ + 2 and α i ≠ 0 for i ≥ 1, we have Φ −N̄ −1 (ρ) ∈ W a' for some a' ∈ A 1 ∪ A 2 . Therefore, by equation (38), we have Φ −N̄ (ρ) ∉ W 0 . By the proof of Lemma 5, this would imply that d(ρ, Γ − ) ≥ d 1 . By Remark 9, this implies that we cannot have Φ N −k (ρ) ∈ W a for some a ∈ A 1 , a contradiction. End of the proof of Theorem 3 Let N ≥ 0, α ∈ A N and a ∈ A 1 . If N ≤ N uns , the result of Theorem 3 is a consequence of Lemma 9 and Lemma 10. Consider now N ≥ N uns > N̄ + 2. We will assume that W a ∩ Φ N α (L 0 ) ≠ ∅. Thanks to Lemma 11 and to Hypothesis 3, we may assume that α i ≠ 0 for all i ≥ 1. From Lemma 12, we deduce that W a ∩ Φ N α (L 0 ) ⊂ Ũ iα , where i α ∈ I is such that W α N ⊂ Ũ iα . Let us define Λ k := {ρ ∈ Φ k α (L 0 ); ∀k' ≥ 0, Φ k' (ρ) ∈ W α k+k' }. By Lemma 12, for each k ≥ N̄ + 2, we have Λ k ⊂ W 3 ∩ W α k . Therefore, by Lemma 7 (iii), there exists i k ∈ I such that Λ k ⊂ Ũ i k , and we obtain the decomposition (39). We know from Lemma 6 and Lemma 10 that Φ N̄ +2 α1...α N̄ +2 (L 0 ) consists of a single Lagrangian manifold, which is γ 0 -unstable in the coordinates centred on any point of K. Applying Proposition 1, we know that the right hand side of (39) is a Lagrangian manifold which is γ uns /(1 + 2C 0 ε p ) 2 -unstable in the twisted coordinates centred on ρ iα . We first apply Lemma 4 to write this Lagrangian manifold in the twisted coordinates centred on ρ a . Thanks to equation (32), it is γ uns /(1 + 2C 0 ε p )-unstable. We then use Lemma 3 to write this Lagrangian manifold in the straight coordinates centred on ρ a , and we deduce that it is γ uns -unstable. This concludes the proof of Theorem 3. Remark 12. Therefore, in the coordinates (y a , η a ), W a ∩ Φ N α (L 0 ) may be put in the form
\[
W_a \cap \Phi^N_\alpha(L_0) = \{(y^a;\ 0, f_{N,\alpha,a}(y^a))\ ;\ y^a \in D_{N,\alpha,a}\}
\]
for some open set D N,α,a ⊂ R d . Remark 10 tells us that for any ℓ ∈ N, the functions f N,α,a have C ℓ norms which are bounded independently of N , α and a. Generalized eigenfunctions We shall state our results about generalized eigenfunctions under rather general assumptions. We shall then explain why these assumptions hold in the case of distorted plane waves on manifolds which are Euclidean near infinity. In the sequel, we will consider a Riemannian manifold (X, g) with a real-valued potential V ∈ C ∞ c (X), and define the Schrödinger operator P h := −h 2 ∆ g + V − c 0 h 2 . Here c 0 ≥ 0 is a constant, which will be 0 in the case of Euclidean near infinity manifolds (see 3.3 for the definition of such manifolds). Before stating our assumptions, let us recall a few definitions and facts from semiclassical analysis. Pseudodifferential calculus We shall use the class S comp (T * X) of symbols a ∈ C ∞ c (T * X), which may depend on h, but whose seminorms and supports are all bounded independently of h. We will sometimes write S comp (X) for the set of symbols in S comp (T * X) which depend only on the base variable. If U is an open subset of T * X, we will denote by S comp (U ) the set of functions in S comp (T * X) whose support is contained in U . Definition 3. Let a ∈ S comp (T * Y ). We will say that a is a classical symbol if there exists a sequence of symbols a k ∈ S comp (T * Y ) such that for any n ∈ N,
\[
a - \sum_{k=0}^{n-1} h^k\, a_k \ \in\ h^n\, S^{comp}(T^*Y).
\]
We will then write a 0 (x, ξ) := lim h→0 a(x, ξ; h) for the principal symbol of a.
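A concrete (if artificial) example of a classical symbol in the sense of Definition 3: for χ, g ∈ C ∞ c (T * Y ),
\[
a(x,\xi;h) := \chi(x,\xi)\,e^{ih\,g(x,\xi)} \;\sim\; \sum_{k\ge 0} h^k\,\frac{\big(i\,g(x,\xi)\big)^k}{k!}\,\chi(x,\xi),
\]
with principal symbol a 0 = χ: truncating the exponential series after n terms leaves an error of order h n in every seminorm, as required.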
We associate to S comp (T * X) the class of pseudodifferential operators Ψ comp h (X), through a surjective quantization map Op h : S comp (T * X) → Ψ comp h (X). This quantization map is defined using coordinate charts, and the standard Weyl quantization on R d . It is therefore not intrinsic. However, the principal symbol map σ h : Ψ comp h (X) → S comp (T * X)/hS comp (T * X) is intrinsic, and we have σ h ◦ Op h = π, where π : S comp (T * X) → S comp (T * X)/hS comp (T * X) is the natural projection map. For more details on all these maps and their construction, we refer the reader to [Zwo12, Chapter 14]. For a ∈ S comp (T * X), we say that its essential support is equal to a given compact K ⋐ T * X if and only if, for all χ ∈ S(T * X) whose support does not intersect K, we have χa ∈ h ∞ S(T * X). For A = Op h (a) ∈ Ψ comp h (X), we define the wave front set of A as W F h (A) := ess supp h (a), noting that this definition does not depend on the choice of the quantisation. When K is a compact subset of T * X and W F h (A) ⊂ K, we will sometimes say that A is microsupported inside K. Let us also recall that, as a consequence of the Egorov theorem [Zwo12, Theorem 11.1], conjugating a pseudodifferential operator by the propagator moves its wave front set along the Hamiltonian flow. For a tempered distribution u = (u(h)), we say that a point ρ ∈ T * X does not lie in the wave front set W F h (u) if there exists a neighbourhood V of ρ in T * X such that for any a ∈ S comp (V ), we have ‖Op h (a) u‖ L 2 = O(h ∞ ).
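For the reader's convenience, the standard Weyl quantization on R d referred to above acts on u by
\[
\operatorname{Op}_h(a)\,u(x) = \frac{1}{(2\pi h)^{d}}\iint_{\mathbb{R}^{2d}} e^{\frac{i}{h}(x-y)\cdot\eta}\ a\Big(\frac{x+y}{2},\,\eta\Big)\,u(y)\,dy\,d\eta .
\]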
We will recall in section 4.2 how this formalism may be applied to the study of the Schrödinger propagator. Without loss of generality, we can find linear Lagrangian subspaces, Γ j , Γ ⊥ j ⊂ T * R d , j = 0, 1, with the following properties: • if π j (resp. π ⊥ j ) is the projection T * R d → Γ j along Γ ⊥ j (resp. the projection T * R d → Γ ⊥ j along Γ j ), then, for some neighbourhood U of ρ 0 , the map is a local diffeomorphism from the graph of κ| U to a neighbourhood of the origin in Γ 1 × Γ ⊥ 0 . Let A j , j = 0, 1 be linear symplectic transformations with the properties and let M j be metaplectic quantizations of the A j 's as defined in [DS99, Appendix to chapter 7]. Then the rotated diffeomorphismκ is such that the projection from the graph ofκ is a diffeomorphism near the origin. It then follows that there exists a unique functionψ ∈ C ∞ (R d × R d ) such that for (x 1 , ξ 0 ) near (0, 0), The functionψ is said to generate the transformationκ near (0, 0). Note that ifT ∈ I comp (κ), then Thanks to assumption (41), a Fourier integral operatorT ∈ I comp (κ) may then be written in the formT u(x 1 ) : with α ∈ S comp (R 2d ). Now, let us state a lemma which was proven in [NZ09, Lemma 4.1], and which describes the effect of a Fourier integral operator of the form (43) on a Lagrangian distribution which projects on the base manifold without caustics. Then, for any symbol a ∈ S comp (Ω 0 ), the application of a Fourier integral operator T of the form (43) to the Lagrangian state a(x)e iφ0(x)/h associated with Λ 0 can be expanded, for any L > 0, into where b j ∈ S comp , and for any ∈ N, we have The constants C ,j depend only on κ, α and sup Ω0 |∂ β φ 0 | for 0 < |β| ≤ 2 + j. Assumptions on the generalized eigenfunctions We consider generalized eigenfunctions of P h at energy 1, that is to say, a family of smooth functions E h ∈ C ∞ (X) indexed by h ∈ (0, 1] which satisfy We will furthermore assume that these generalized eigenfunctions may be decomposed as follows. Hypothesis 5. We suppose that E h can be put in the form where E 0 h is a tempered distribution which is a Lagrangian state associated to a Lagrangian manifold which satisfies Hypothesis 3 of invariance, as well as Hypothesis 4 of transversality, and where E 1 h is a tempered distribution such that for each ρ ∈ W F h (E 1 h ), we have ρ ∈ E. Furthermore, we suppose that E 1 h is outgoing in the sense that there exists 2 > 0 such that for all χ, χ ∈ C ∞ c such that χ ≡ 1 on {x ∈ X; b(x) ≥ 2 }, there exists T χ > 0 such that for all t ≥ T χ , we have The most natural example of such generalized eigenfunctions is given by distorted plane waves, which we are now going to define. Note that they depend on a parameter ξ ∈ ∂X, so that they actually form a whole family of generalized eigenfunctions. It is also possible to define generalized eigenfunctions which satisfy Hypothesis 5 on manifolds which are hyperbolic near infinity. This is done in [Ing15,Appendix B]; the construction mainly follows [DG14, Section 6], but some work has to be done to check that E 1 h is a tempered distribution. Distorted plane waves on Euclidean near infinity manifolds Definition 5. We say that X is Euclidean near infinity if there exists a compact set X 0 ⊂ X and a R 0 > 0 such that X\X 0 has finitely many connected components, which we denote by X 1 , ..., X l , such that for each The surface in figure 2 is an example of a Euclidean near infinity manifold. Note that we may assume that supp V ⊂ X 0 . 
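The simplest instance of the Lagrangian distributions of section 3.1.2, with no oscillatory variables (L = 0), is also the relevant one below:
\[
\varphi(x) = x\cdot\xi,\qquad \Lambda_\varphi = \{(x,\, d\varphi(x))\} = \{(x,\xi)\},\qquad u(x;h) = a(x)\,e^{\frac{i}{h}x\cdot\xi}\ \in\ I^{\mathrm{comp}}(\Lambda_\varphi)
\]
for any a ∈ C ∞ c (R d ). The incoming wave E 0 h constructed in the next paragraphs is of this form with a = χ̃, except that χ̃ is not compactly supported, whence the extra care needed at infinity.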
Note also that any Euclidean near infinity manifold fulfils Hypothesis 1. Indeed, we may take a boundary defining function b such that b(x) = (1 + |x| 2 ) −1/2 if x ∈ X i , which we identify with R d \B(0, R 0 ). To define distorted plane waves, we will simply give a definition of each of the two terms which compose them as in (44). Definition of E 0 h By definition of a Euclidean near infinity manifold, we have X = X 0 ∪ X 1 ∪ · · · ∪ X l with X 0 compact, and for each 1 ≤ i ≤ l, there exists an isometric isomorphism ψ i : X i → R d \B(0, R 0 ), the latter being equipped with the Euclidean metric g 0 . The boundary of X may then be identified with a union of spheres ∂X ≃ S 1 ⊔ · · · ⊔ S l , each S i being a copy of S d−1 . Let ξ ∈ ∂X. We have ξ ∈ S i for some 1 ≤ i ≤ l. Take a smooth function χ̃ : X −→ [0, 1] which vanishes outside of X i , and which is equal to 1 in a neighbourhood of S i . We define the incoming wave E 0 h := χ̃ e i h x·ξ , using the identification of X i with R d \B(0, R 0 ). If we write L 0 for the Lagrangian submanifold (with boundaries) X i × {ξ} ⊂ T * X, then E 0 h is a Lagrangian distribution associated to L 0 , which satisfies Hypothesis 3 of invariance. Definition of the distorted plane waves Let us set
\[
F_h := -(P_h - 1)\,E^0_h = -\big(-h^2\Delta - 1\big)\big(\widetilde\chi\, e^{\frac{i}{h}x\cdot\xi}\big) = \big(h^2\,\Delta\widetilde\chi + 2ih\,\xi\cdot\nabla\widetilde\chi\big)\,e^{\frac{i}{h}x\cdot\xi},
\]
where we used that |ξ| 2 = 1 and that supp χ̃ ∩ supp V = ∅; in particular, F h is smooth, compactly supported, and ‖F h ‖ L 2 = O(h). Recall that the outgoing resolvent R h (1) is defined as R h (1) := lim ε→0 + (P h − (1 + iε) 2 ) −1 , the limit being taken in the topology of bounded operators from L 2 comp (X) to L 2 loc (X). We shall use the following resolvent estimate, which was proven in [NZ09]. Theorem 4. [Resolvent estimates for Euclidean near infinity manifolds] Let X be a Euclidean near infinity manifold such that Hypothesis 2 on hyperbolicity and Hypothesis 6 on topological pressure hold. Then for any χ ∈ C ∞ c (X), there exists C > 0 such that for all 0 < h < h 0 , we have
\[
\big\|\chi\,(P_h - (1+i0)^2)^{-1}\,\chi\big\|_{L^2\to L^2} \le C\,\frac{|\log h|}{h}.
\]
We define E 1 h := R h (1)F h , which is a tempered distribution thanks to Theorem 4. We then define the distorted plane wave as E ξ h := E 0 h + E 1 h . To check the outgoing assumption on E 1 h , we must explain why there exists ε 2 > 0 such that for all χ, χ' ∈ C ∞ c such that χ ≡ 1 on {x ∈ X; b(x) ≥ ε 2 }, there exists T χ > 0 such that (48) holds for all t > T χ . From [DG14, §6.2], we know that for any ρ ∈ W F h (E 1 h ), we have ρ ∈ E, and either ρ ∈ Γ + or there exists a t > 0 such that Φ −t (ρ) = (x, ξ) where x ∈ spt(dχ̃), with χ̃ as in Section 3.3.1. We may take ε 2 < ε 0 small enough so that spt(dχ̃) ⊂ {x ∈ X; b(x) > ε 2 }. Suppose that ρ = (x, ξ) is such that x ∈ spt(1 − χ) and π X (Φ t (ρ)) ∈ spt(dχ̃). Then, by geodesic convexity, (x, −ξ) ∈ DE + . Therefore, since spt(dχ̃) ⊂ {x ∈ X; b(x) > ε 2 } and spt(1 − χ) ⊂ {x ∈ X; b(x) < ε 2 }, and since b decreases in the future along the trajectory of (x, −ξ), it is impossible that there exists t > 0 such that π X (Φ t (ρ)) ∈ spt(dχ̃). On the other hand, if ρ ∈ DE + , then (48) is always satisfied as long as T χ is large enough so that Φ Tχ DE + ∩ T * (spt(1 − χ)) ∩ T * spt(χ) = ∅. This shows that E 1 h is outgoing. Finally, one readily checks that we have, in the sense of PDEs, (P h − 1)E ξ h = 0. We will sometimes simply write E h instead of E ξ h , to avoid cumbersome notations. The definition of E h seems to depend on the choices of the cut-off functions we made. Actually, the distorted plane waves can be defined in a much more intrinsic fashion, using the structure of the resolvent at infinity. We don't want to enter into the details here (see [DG14, Section 6], and the references therein, or [Mel95, Chapter 2]). Topological pressure We shall now give a definition of the topological pressure, so as to formulate Hypothesis 6. Recall that the distance d was defined in section 2.1.2, and that it was associated to the adapted metric. We say that a set S ⊂ K is (ε, t)-separated if for ρ 1 , ρ 2 ∈ S with ρ 1 ≠ ρ 2 , we have d(Φ t' (ρ 1 ), Φ t' (ρ 2 )) > ε for some 0 ≤ t' ≤ t.
(Such a set is necessarily finite.) The metric g ad induces a volume form Ω on any d-dimensional subspace of T (T * R d ). Using this volume form, we will define the unstable Jacobian on K. For any ρ ∈ K, the determinant map can be identified with the real number where (v 1 , ..., v n ) can be any basis of E +0 ρ . This number defines the unstable Jacobian: From there, we take where the supremum is taken over all ( , t)-separated sets. The pressure is then defined as This quantity is actually independent of the volume form Ω and of the metric chosen: after taking logarithms, a change in Ω or in the metric will produce a term O(1)/t, which is not relevant in the t → ∞ limit. Hypothesis 6. We assume the following inequality on the topological pressure associated with Φ t on K: We will give an equivalent definition of topological pressure in section 4.1, better suited to our purpose. Statement of the results concerning distorted plane waves We may now formulate our main result. Theorem 5. Suppose that the manifold X satisfies Hypothesis 1 at infinity, and that the Hamiltonian flow (Φ t ) satisfies Hypothesis 2 on hyperbolicity and Hypothesis 6 concerning the topological pressure. Let E h be a generalized eigenfunction of the form described in Hypothesis 5, where E 0 h is associated to a Lagrangian manifold L 0 which satisfies the invariance Hypothesis 3 as well as the transversality Hypothesis 4. Then there exists a finite set of points (ρ b ) b∈B1 ⊂ K and a family (Π b ) b∈B1 of operators in Ψ comp h (X) microsupported in a small neighbourhood of ρ b such that b∈B1 Π b = I microlocally on a neighbourhood of K in T * X such that the following holds. Let U b : L 2 (X) −→ L 2 (R d ) be a Fourier integral operator quantizing the symplectic change of local coordinates κ b : (x, ξ) → (y ρ b , η ρ b ), and which is microlocally unitary on the microsupport of Π b . For any r > 0, there exists M r > 0 such that we have where a n,β,b ∈ S comp (R d ) are classical symbols, and each φ n,β,b is a smooth function independent of h, and defined in a neighbourhood of the support of a n,β,b . The setB n will be defined in (85) . Its cardinal behaves like some exponential of n. We have the following estimate on the remainder For any ∈ N, > 0, there exists C , such that for all n ≥ 0, for all h ∈ (0, h 0 ], we have β∈Bn a n,β,b C ≤ C , e n(P(1/2)+ ) . Remark 13. This theorem can be considered as a quantum analogue of Theorem 3. Indeed, as we explained in section 1, we will prove it by describing the evolution of the Schrödinger flow of Lagrangian states, while Theorem 3 described the evolution by the Hamiltonian flow of associated Lagrangian manifolds. Actually, the sets containing the microsupports of the operators (Π b ) b∈B1 will be built from the sets (W a ) a∈A1 constructed in Theorem 3, as explained in section 4.1. Remark 14. The remainder R r is compactly microlocalised, since the other two terms in the decomposition (51) are compactly microlocalised. Therefore, for any ∈ N, by possibly taking M r larger, we may ask that R r C = O(h r ). Theorem 5 may be used to identify the semiclassical measures associated to our generalized eigenfunctions, as in Theorem 2. We shall do this only microlocally close to the trapped set, since the expression for the semiclassical measure on the whole manifold may become very complicated. Let us denote by π b the principal symbol of the operators Π b introduced in the statement of Theorem 5. 
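Before stating the corollary, here is a standard sanity check on Hypothesis 6 (a textbook computation, not specific to this paper): when the trapped set is a single closed hyperbolic orbit of period T 0 , with unstable Jacobian Λ > 1 over one period, (ε, t)-separated subsets of K have cardinality O(ε −1 ) uniformly in t, while every Birkhoff integral equals (t/T 0 ) log Λ + O(1). Hence
\[
\mathcal P\Big(\tfrac12\Big) = \lim_{t\to\infty}\ \frac1t\,\log\Big( O(\varepsilon^{-1})\ e^{-\frac{t}{2T_0}\log\Lambda}\Big) = -\frac{\log\Lambda}{2\,T_0} \;<\; 0,
\]
so the pressure condition always holds in this simplest trapping situation.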
The following corollary is a more precise version of (the second part of) Theorem 2. Corollary 1. There exists a constant 0 < c ≤ 1 and functions e n,β,b for n ∈ N, β ∈ B̃ n and b ∈ B 1 such that for any a ∈ C ∞ c (T * X) and for any χ ∈ C ∞ c (X), we have an explicit expression, in terms of the e n,β,b and of c, for the limit of ⟨Op h (a) χE h , χE h ⟩ as h → 0. The functions e n,β,b satisfy an exponential decay estimate as in (52). The functions e n,β,b will be closely related to a 0 n,β,b (y ρ b ), the principal symbol of a n,β,b (y ρ b ) appearing in (51). Actually, e n,β,b (y ρ b ) will either be the square of the modulus of a 0 n,β,b (y ρ b ), or the square of the modulus of the sum of a finite number of a 0 n,β,b (y ρ b ), for different values of n and β. These different terms will come from the fact that a point may belong to Φ n,t0 β (L 0 ) for several values of n, β. Strategy of proof To study the asymptotic behaviour of the distorted plane wave as h goes to zero, we would like to write that Ũ(t)E h = E h , where Ũ(t) := e it/h U (t). However, this equation can only be formal, because E h ∉ L 2 (X). Instead, we use the following lemma from [DG14] (Lemma 3.10): Lemma 15. Let χ ∈ C ∞ c (X). Take t ∈ R, and let a cut-off function χ t ∈ C ∞ c (X) be supported in the interior of a compact set K t and be equal to 1 on a sufficiently large d g -neighbourhood of supp χ, where d g denotes the Riemannian distance on X. Then, for any ξ ∈ S d−1 , we have
\[
\chi\,\widetilde U(t)\,\chi_t\, E^\xi_h = \chi\, E^\xi_h + O(h^\infty).
\]
Since E h is a tempered distribution by assumption, we have, for any t > 0 and χ ∈ C ∞ c (X), χE h = χŨ(t)χ t E h + O(h ∞ ), where χ t is as in Lemma 15. We may then iterate this equation as follows: we write that χ t = χ + χ t (1 − χ), and obtain
\[
\chi E_h = \chi\widetilde U(t)\,\chi\, E_h + \chi\widetilde U(t)\,\chi_t(1-\chi)\,E_h + O(h^\infty).
\]
We may iterate this method to times N t ≤ M t| log h| for any given M > 0. We obtain
\[
\chi E_h = \big(\chi\widetilde U(t)\big)^N \chi\, E_h \;+\; \sum_{k=0}^{N-1} \big(\chi\widetilde U(t)\big)^k\, \chi\widetilde U(t)\,\chi_t(1-\chi)\,E_h \;+\; O(h^\infty).
\]
Now, choose χ ∈ C ∞ c (X) as in Hypothesis 5, and take t > T χ . Lemma 16. Let t > T χ , M > 0, and χ ∈ C ∞ c (X) be such that χ ≡ 1 on {x ∈ X; b(x) > ε 2 }, where ε 2 < ε 0 is as in Hypothesis 5. For any k ≤ M | log h|, we have
\[
\big(\chi\widetilde U(t)\big)^k\, \chi_t(1-\chi)\,E_h = \big(\chi\widetilde U(t)\big)^k\, \chi_t(1-\chi)\,E^0_h + O(h^\infty).
\]
Proof. We only have to prove that (χŨ(t)) k χ t (1 − χ)E 1 h = O(h ∞ ). This is a consequence of (45) in Hypothesis 5. Therefore, for any χ ∈ C ∞ c (X) as in Lemma 16, we obtain a decomposition of χE h into a globally truncated term and a sum of terms involving only E 0 h . Let us now introduce tools from [NZ09] to analyse these terms in more detail. 4 Tools for the proofs of Theorem 5 Another definition of Topological Pressure Recall that E E and K E were defined in (10) and (11) respectively. For any δ > 0 small enough so that (12) holds, we let W = (W a ) a∈A 1 be a finite open cover of K δ/2 , such that the W a are all strictly included in E δ and of diameter < ε 0 , where ε 0 comes from Theorem 3. For any T ∈ N * , define W (T ) to be the refined cover made of the sets W α := ∩ T −1 k=0 Φ −k (W a k ), α ∈ A T 1 . The topological pressure may then be computed from such covers as
\[
\mathcal P(s) = \lim_{\operatorname{diam}\mathcal W \to 0}\ \limsup_{T\to\infty}\ \frac1T\,\log\,\inf\Big\{ \sum_{\alpha\in F}\ \sup_{\rho\in W_\alpha\cap K_\delta} \exp\Big(-s\!\int_0^T \log J^u(\Phi^{t}\rho)\,dt\Big)\ ;\ F \subset A_1^T,\ K_\delta \subset \bigcup_{\alpha\in F} W_\alpha \Big\}.
\]
Recall that we assumed that P(1/2) < 0. Let us fix ϵ 0 > 0 so that P(1/2) + 2ϵ 0 < 0. Then there exists t 0 > 0, and Ŵ an open cover of K δ with diam(Ŵ) < ε 0 , such that the corresponding weighted sum at time t 0 satisfies the bound (57). We can find A t0 so that {W α , α ∈ A t0 } is an open cover of K δ in E δ and such that this bound still holds. Therefore, if we take δ small enough, and if we rename the elements of this cover accordingly, the same bound holds on the whole energy layer E δ . By taking t 0 large enough, we can assume that log(1 + ϵ 0 ) + t 0 (P(1/2) + ϵ 0 ) < 0. A new open cover of E By hypothesis, the diameter of Ŵ in (57) is smaller than ε 0 , so that we may apply Theorem 3 to it. We complete it into an open cover (W a ) a∈A as in Theorem 3, and if α ∈ A N for some N ≥ 0, we define as previously W α := ∩ N −1 k=0 Φ −k (W a k ). Let us rewrite as (V b ) b∈B 1 the sets (W α ) α∈A t 0 1 , and as (V b ) b∈B 2 the sets W α where α ∈ A t 0 \A t 0 1 is such that α k ≠ 0 for some 0 ≤ k ≤ t 0 − 1. We will also write V 0 for the set W 0,0,...,0 . If we write B = B 1 ∪ B 2 ∪ {0}, the sets (V b ) b∈B form an open cover of E in T * X.
Actually, by compactness of the interaction region, we may find a $\delta' > \delta > 0$ small enough so that (12) holds and such that the construction above can be carried out with $\delta$ replaced by $\delta'$. If $\Lambda$ is a Lagrangian manifold, we will define for each $0 \leq k \leq N-1$ the set $\Phi^{k,t_0}_\beta(\Lambda)$ by the corresponding truncated evolution. By definition of the sets indexed by $b \in B$, we have $\Phi^{N,t_0}_\beta(\Lambda) = \Phi^{Nt_0}_{\alpha_\beta}(\Lambda)$, where $\alpha_\beta \in A^{Nt_0}$ is the concatenation of all the sequences which make up the $b_k$, $0 \leq k \leq N-1$. Therefore, once we have fixed a point $\rho_b \in K \cap V_b$ for each $b \in B_1$, we have the following analogue of Theorem 3.

Corollary 2. There exists $N_{uns} \in \mathbb{N}$ such that for all $N \in \mathbb{N}$, for all $\beta \in B^N$ and all $b \in B_1$, the set $V_b \cap \Phi^{N,t_0}_\beta(L_0)$ is either empty, or is a Lagrangian manifold in some unstable cone in the coordinates $(y_{\rho_b}, \eta_{\rho_b})$.

Remark 15 (New definition of the sets $(V_b)_{b\in B_1}$). The sets $(V_b)_{b\in B_1}$ form an open cover of $K$. By compactness, they form an open cover of $\{\rho \in \mathcal{E};\ d(\rho, K) \leq \epsilon_3\}$ for some $\epsilon_3 > 0$. Hence, if for each $b \in B_2$ we replace each $V_b$ by $V_b \cap \{\rho \in \mathcal{E};\ d(\rho, K) > \epsilon_3/2\}$, which we still denote by $V_b$, the sets $(V_b)_{b\in B}$ still form an open cover of $\mathcal{E}$, and the conclusions of Corollary 2 still apply. By adapting the proof of Lemma 5, we see that by possibly enlarging $N_{uns}$, we may suppose that the analogous statement holds for all $b \in B_2$. Note also that, thanks to Lemma 1, the corresponding inclusion holds as well.

Remark 16. In [NZ09], Proposition 5.2, the authors proved the following statement. There exists a $\gamma_1 > 0$ such that the following holds. Let $b, b' \in B_1$, and let $\Lambda$ be a Lagrangian manifold contained in a suitable unstable cone; then its image satisfies the corresponding estimate on its domain of definition, in terms of the unstable Jacobian of $\rho_b$, defined in (49). In the sequel, we will always suppose that $\gamma_{uns} < \gamma_1$. For each $b \in B_1$, we will denote by $U_b$ a Fourier integral operator quantizing the local change of symplectic coordinates $(x, \xi) \mapsto (y_{\rho_b}, \eta_{\rho_b})$.

The Schrödinger propagator as a Fourier integral operator

Let us explain how the formalism of section 3.1.3 may be used to describe the Schrödinger propagator $U(t)$ acting on $L^2(X)$. We shall state a lemma proven in [NZ09, Lemma 4.2]. Recall that for $0 < \delta < 1$, we defined $\mathcal{E}^\delta$ as $\bigcup_{|E-1|<\delta} \mathcal{E}_E$. The maps $F_j$ induce on $V_0$ and $V_1$ symplectic coordinates, where $\xi^{(j)} \in \mathbb{R}^d$ is fixed by the condition $F_j(\rho_j) = (0,0)$. Then the operator on $L^2(\mathbb{R}^d)$ is of the form (42) for some choice of the $A_j$'s, microlocally near $(0,0) \times (0,0)$.

Iterations of Fourier integral operators

We recall here the main results from [NZ09, §4] concerning the iterations of semiclassical Fourier integral operators in $T^*\mathbb{R}^d$. Let $V \subset T^*\mathbb{R}^d$ be an open neighbourhood of $0$, and take a sequence of symplectomorphisms $(\kappa_i)_{i=1,\dots,N}$ from $V$ to $T^*\mathbb{R}^d$ such that, for all $i \in \{1, \dots, N\}$, we have $\kappa_i(0) \in V$ and the natural projection attached to $\kappa_i$ is a diffeomorphism close to the origin. We consider Fourier integral operators $(T_i)$ which quantise $\kappa_i$ and which are microlocally unitary near an open set $U \times U$, where $U \Subset V$ contains the origin. Let $\Omega \subset \mathbb{R}^d$ be an open set such that $U \Subset T^*\Omega$ and, for all $i$, $\kappa_i(U) \Subset T^*\Omega$. For each $i$, we take a smooth cut-off function $\chi_i \in C^\infty_c(U; [0,1])$, and use it to define a truncated operator $S_i$. Let us consider a family of Lagrangian manifolds $\Lambda_k = \{(x, \partial\phi_k(x));\ x \in \Omega\} \subset T^*\mathbb{R}^d$, $k = 0, \dots, N$, satisfying suitable uniform bounds. We assume that there exists a sequence of integers $(i_k \in \{1, \dots, J\})_{k=1,\dots,N}$ such that $\kappa_{i_k}(\Lambda_{k-1}) = \Lambda_k$ on the relevant domains. We will say that a point $x \in \Omega$ is $N$-admissible if we can define recursively a sequence by $x_N = x$ and, for $k = N, \dots, 1$, $x_{k-1} = g_k(x_k)$. This procedure is possible if, for any $k$, $x_k$ is in the domain of definition of $g_k$.
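As a concrete illustration of the admissibility recursion just defined, here is a minimal sketch; the maps, their domains, and the sample values are hypothetical stand-ins, not objects from the paper:

    # Minimal sketch of the N-admissibility check: starting from x_N = x,
    # iterate x_{k-1} = g_k(x_k) backwards, failing if some x_k leaves the
    # domain of g_k. The maps and domains below are toy examples.

    def is_admissible(x, gs, domains):
        """gs[k] and domains[k] play the roles of g_k and its domain,
        listed for k = N, N-1, ..., 1. Returns the backward orbit
        (x_N, ..., x_0) if it exists, or None otherwise."""
        orbit = [x]
        for g, dom in zip(gs, domains):
            if not dom(x):
                return None          # x_k outside the domain of g_k
            x = g(x)                 # x_{k-1} = g_k(x_k)
            orbit.append(x)
        return orbit

    # Toy example: weakly contracting maps on the interval (-1, 1).
    gs = [lambda t: 0.5 * t, lambda t: 0.5 * t + 0.1]
    domains = [lambda t: abs(t) < 1.0] * 2

    print(is_admissible(0.8, gs, domains))   # admissible: full backward orbit
    print(is_admissible(2.0, gs, domains))   # not admissible: None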
Let us assume that, for any admissible sequence $(x_N, \dots, x_0)$, the Jacobian matrices of the maps $g_k$ along the sequence are uniformly bounded from above by a constant $C_D$ independent of $N$. This assumption roughly says that the maps $g_k$ are (weakly) contracting. We will also use the notation $D_k$ for the corresponding contraction factors, and assume that the $D_k$'s are uniformly bounded. The following result can be found in [NZ09, Proposition 4.1].

Proposition 2. We use the above definitions and assumptions, and take $N$ arbitrarily large, possibly varying with $h$. Take any $a \in S^{comp}$ and consider the Lagrangian state $u = a e^{i\phi_0/h}$ associated with the Lagrangian $\Lambda_0$. Then we may write
$$S_{i_N}\cdots S_{i_1} u \;=\; \sum_{j=0}^{L-1} h^j\, a^N_j\, e^{i\phi_N/h} \;+\; R^N_L,$$
where each $a^N_j \in C^\infty_c(\Omega)$ depends on $h$ only through $N$, and $R^N_L \in C^\infty((0,1]_h, \mathcal{S}(\mathbb{R}^d))$. If $x_N \in \Omega$ is $N$-admissible, and defines a sequence $(x_k)$, $k = N, \dots, 1$, then $a^N_j(x_N)$ is given by the corresponding transport formula along that sequence; otherwise $a^N_j(x_N) = 0$, $j = 0, \dots, L-1$. We also have the bounds (61), (62) and (63). The constants $C_{j,\ell}$, $C_0$ and $C_L$ depend on the constants in (60) and on the operators $\{S_j\}_{j=1}^J$.

We shall mainly be using this proposition in the case where, for all $k$, we have $D_k \leq \nu < 1$. In this case, the estimates (61), (62) and (63) imply that for any $\ell \in \mathbb{N}$, there exists $C_\ell$ independent of $N$ such that for any $N \in \mathbb{N}$, we have (64).

Microlocal partition

We take a partition of unity $\sum_{b\in B}\pi_b$ subordinate to the cover $(V_b)_{b\in B}$. We then set $\Pi_b := \mathrm{Op}_h(\pi_b)$ and $\widetilde{U}_b := \widetilde{U}(t_0)\Pi_b$. We can decompose the propagator at time $t_0$ into $\widetilde{U}(t_0) = \sum_{b\in B}\widetilde{U}_b$, microlocally on the region of interest. The propagator at time $Nt_0$ may then be decomposed as $\widetilde{U}(Nt_0) = \sum_{\beta\in B^N}\widetilde{U}_{\beta_N}\cdots\widetilde{U}_{\beta_1}$.

Hyperbolic dispersion estimates

We will use the following hyperbolic dispersion estimate, coming from [NZ09, Proposition 6.3]; the proof can be found in [NZ09, Section 7].

Lemma 18 (Hyperbolic dispersion estimate). Let $M > 0$ be fixed. There exist $h_0 > 0$ and $C > 0$ such that for any $0 < h < h_0$, for any $N < M\log(1/h)$ and for any $\beta \in B_1^N$, the product $\widetilde{U}_{\beta_N}\cdots\widetilde{U}_{\beta_1}$ satisfies the corresponding norm bound in terms of the unstable Jacobians along $\beta$.

5 Proof of Theorem 5

Proof. Having introduced these different tools, we may now come back to the proof of Theorem 5.

Decomposition of $\chi E_h$

Let $\chi \in C^\infty_c(X)$ be as in Lemma 16. We may suppose $T_\chi \leq t_0$. Then, by equation (55), we have a decomposition in which the cut-off function $\chi_{t_0} \in C^\infty_c(X)$ satisfies the support condition of Lemma 15, where $d_X$ denotes the Riemannian distance on $X$. We shall require the following lemma; the proof of (i) is the same as that of Lemma 5, while the proof of (ii) essentially follows from point (3) of Hypothesis 1. Equation (68) may then be rewritten accordingly. By summing over $k$ and reordering the terms, we get, for any $K > 2N_\chi + 3N_{uns} + 4$, the corresponding expansion. Let us note that from Lemma 14 and Hypothesis 3, for each $0 \leq l \leq N_\chi$, there exists $\chi_l \in S^{comp}(X)$ such that the identity used below for $\chi\widetilde{U}_0$ holds. Let us introduce the notation used in the sequel. Thanks to equation (71), we can study the different terms in equation (67). The first term on the right-hand side of (67) may be bounded by the following lemma.

Lemma 21. Let $r > 0$. We may find a constant $M_r \geq 0$ such that for any $M > M_r$ and any $M_r|\log h| \leq N \leq M|\log h|$, the first term is $O(h^r)$.

Proof. We use (70), Lemma 18 and the topological pressure assumption to obtain the required bound. By assumption, $E_h$ is a tempered distribution, so that $\|\chi_{t_0}E_h\|_{L^2} \leq C/h^{r_0}$ for some $r_0$. Therefore the bound holds up to some small loss, and the lemma follows by taking $M_r$ large enough.

Using Lemma 21 and equation (71), we may rewrite equation (67) accordingly. The second term may be bounded by $O(h^r)$ thanks to Lemma 21. By using equations (72) and (73), we get (74).

Construction of $\widetilde{B}_0$

From now on, we fix $b \in B_1$ and $r > 1$. We may write the first group of terms using the notation $U^\chi_\beta = \chi\widetilde{U}_{\beta_l}\chi\cdots\chi\widetilde{U}_{\beta_0}$. Note that each of the $U_b\,\Pi_b\,U^\chi_\beta$ is a Fourier integral operator from $L^2(X)$ to $L^2(\mathbb{R}^d)$.
Thanks to Corollary 2, we may use Lemma 14 to describe the action of each of these Fourier integral operators on the Lagrangian state $(1-\chi)\chi_{t_0}E^0_h$. If we denote by $\widetilde{B}_0$ the set $\bigcup_{l=0}^{N_\chi+3N_{uns}+3} B^l$, we may write the corresponding sum as $\sum_{\beta\in\widetilde{B}_0} e_{0,\beta,b}$, where $e_{0,\beta,b}(y_b) = e^{i\phi_{0,\beta,b}(y_{\rho_b})/h}\,a_{0,\beta,b}(y_{\rho_b}; h)$, with $a_{0,\beta,b}$ and $\phi_{0,\beta,b}$ as in the statement of Theorem 5.

Let us now consider the other terms on the right-hand side of equation (74), which will be indexed by $\widetilde{B}_n$, $n \geq 1$. Thanks to Remark 16, we may apply Proposition 2 to describe the action of the operators on these terms. We obtain that $U_{\beta_{n+1}}\Pi_{\beta_{n+1}}\widetilde{U}_{\beta_0\dots\beta_n}(\chi E^0_h) = e_{n,\beta}$, with the explicit WKB form. In the notations of Section 4.3, we have by Remark 16 that for any $N_{uns}+1 \leq k \leq n$, $D_k = S_T(V_{\beta_k})\,(1 + O(\epsilon^p)) < 1$. We therefore set the corresponding definitions. Thanks to equation (61) in Proposition 2 and equation (64), we obtain for any $\ell \in \mathbb{N}$ an exponential bound with some constants $C_\ell$, $C'_\ell$.

End of the propagation

Using equation (74) with the above decomposition, to finish the proof we have to apply $U_b\,\Pi_b\,(\chi\widetilde{U}(t_0))^{N_\chi+1}\,\widetilde{U}_{\beta_{n'}\dots\beta_n}\,U^*_{\beta_n}$ to $e_{n,\beta}$. To do this, one should once again decompose the propagator, and study the resulting terms with $U^\chi_\beta$ as in (76). To analyse each of the terms on the right-hand side of (82), we use once again Lemma 14 (the lemma may be applied thanks to Theorem 3 and to Lemma 17). The key point to obtain estimate (52) is to notice that, for any $N \geq N_{uns}+1$, a suitable bound holds thanks to (58). By applying (86) for $N = N_{n+2N_\chi+2,l}$ and combining it with (84), we get (52). Note that, although the statement of Theorem 5 describes the generalized eigenfunctions $E_h$ only very close to the trapped set, equation (81) can be used to describe $E_h$ in any compact set, though in a less explicit way.

Semiclassical measures

The main ingredient in the proof of Corollary 1 is non-stationary phase. Let us recall the estimate we will use, which can be proven by integrating by parts; we shall only give a sketch of proof here, and refer to [Hör, §7.7] for more details.

Sketch of proof. To prove this result, we simply integrate by parts against the oscillating factor $e^{i\phi/h}$. When we integrate by parts, the worst term in the integrand will involve second derivatives of $\phi$ times $h/|\partial\phi|^2$, and will therefore be a $O(h^{2\epsilon})$ by assumption. By integrating by parts more times, we will gain a factor $h^{2\epsilon}$ every time, so that $I_h(a, \phi)$ is actually a $O(h^\infty)$.

Note that the sketch of proof above tells us that, if we could say that when $\partial\phi(x, h)$ is small, then the higher derivatives of $\phi$ are small as well, i.e., if we had: for all $k \geq 2$ there exists $C_k$ such that $|\partial^k\phi(x, h)| \leq C_k|\partial\phi(x, h)|$, then we would have $I_h(a, \phi) = O(h^\infty)$ provided $|\partial\phi(x, h)| \geq Ch^{1-\epsilon}$. However, it is not clear that we can estimate the higher derivatives of the phase functions which appear in this section.

Distance between the Lagrangian manifolds

To take advantage of Proposition 3, we need a lower bound on the distance between the Lagrangian manifolds which make up $\Phi^{n,t_0}(L_0) \cap V_b$. To prove such a lower bound, let us first state an elementary topological lemma.

Proof. Suppose for contradiction that for any $\epsilon > 0$ there exist $\rho_\epsilon, \rho'_\epsilon$ such that $d(\rho_\epsilon, \rho'_\epsilon) < \epsilon$ and such that for all $b \in B$ with $\rho_\epsilon \in V_b$, we have $\rho'_\epsilon \notin V_b$. By compactness of $T^*X_0 \cap \mathcal{E}$, we may suppose that $\rho_\epsilon$ converges to some $\rho$. We then have $\rho'_\epsilon \to \rho$, and if $b \in B$ is such that $\rho \in V_b$, then $V_b$ is an open neighbourhood of $\rho$, so that $\rho_\epsilon, \rho'_\epsilon \in V_b$ for $\epsilon$ small enough, a contradiction.

We may now state our lower bound on the distance between the Lagrangian leaves which make up $\Phi^{n,t_0}(L_0) \cap V_b$.

Proof.
Since $T^*X_0 \cap \mathcal{E}$ is compact, we may find a constant $C > 0$ such that for any $\rho, \rho' \in \mathcal{E} \cap T^*X_0$, $d(\Phi^t(\rho), \Phi^t(\rho')) \leq e^{Ct}\, d(\rho, \rho')$, where $d$ is the distance on the energy layer which we introduced in section 2.1.2.

Using the definition of $\widetilde{B}_n$, we deduce the following result about the functions $\phi_{n,\beta,b}$ in the statement of Theorem 5.

Proof of Corollary 1

We shall now prove Corollary 1, which we recall.

Corollary 4. There exists a constant $0 < c \leq 1$ and functions $e_{n,\beta,b}$ for $n \in \mathbb{N}$, $\beta \in \widetilde{B}_n$ and $b \in B_1$ such that for any $a \in C^\infty_c(T^*X)$ and for any $\chi \in C^\infty_c(X)$, the stated asymptotics hold, where $C_2$ comes from Corollary 3.

Let $a \in C^\infty_c(T^*X)$, $\chi \in C^\infty_c(X)$ and $b \in B_1$. Using the fact that $\mathrm{Op}_h(ab) = \mathrm{Op}_h(a)\mathrm{Op}_h(b) + O_{L^2\to L^2}(h)$ for any $a, b \in S^{comp}(X)$, the self-adjointness of $\Pi_b$, and the unitarity of $U_b$ on the microsupport of $\Pi_b$, we may reduce to computations on $L^2(\mathbb{R}^d)$. Now, using Egorov's theorem ([Zwo12, Theorem 11.1]), we know that the conjugated operator is $\mathrm{Op}_h(a_b)$, where $a_b = a \circ \kappa_b + O_{L^2}(h)$. Using decomposition (51), we have
$$\Big\langle \mathrm{Op}_h(a_b)\sum_{n=0}^{M|\log h|}\sum_{\beta\in\widetilde{B}_n} e^{i\phi_{n,\beta,b}/h}\, a_{n,\beta,b}\,,\ \sum_{n'=0}^{M|\log h|}\sum_{\beta'\in\widetilde{B}_{n'}} e^{i\phi_{n',\beta',b}/h}\, a_{n',\beta',b} \Big\rangle + O(h^c).$$
We now want to fix an $n \leq M|\log h|$ and a $\beta \in \widetilde{B}_n$, and to analyse the behaviour of
$$\Big\langle \mathrm{Op}_h(a_b)\, e^{i\phi_{n,\beta,b}/h}\, a_{n,\beta,b}\,,\ \sum_{n'=0}^{M|\log h|}\sum_{\beta'\in\widetilde{B}_{n'}} e^{i\phi_{n',\beta',b}/h}\, a_{n',\beta',b} \Big\rangle.$$
Recall that the integrals are well defined, because the phase functions are well-defined in a neighbourhood of the supports of the functions $a_{n,\beta,b}$.

The remaining terms can be described using the methods of section 4.3 along with the results of section 2. In particular, we obtain, as in [NZ09, (7.12)], that we may find $C, \epsilon > 0$ such that for all $N \in \mathbb{N}$ and all $\beta \in B_1^N$, the corresponding estimate holds. We may deduce from this the following bound for the measure $\Phi_\beta\mu$. Note that this could also be deduced directly from the transport equations for measures, without using Schrödinger propagators and Egorov's theorem.

By possibly taking the sets $V_b$ smaller, we may ensure, just as in section 4.1, that $\sum_{b\in B_1}\exp\{S_{t_0}(V_b)\} \leq \exp\{t_0(\mathcal{P}(1) + \epsilon)\}$. Therefore, we obtain the corresponding bound. If we assume that the flow $(\Phi^t)$ is Axiom A, that is to say, that the periodic orbits are dense in $K$, then [Bow75, §4.C] guarantees that $\mathcal{P}(1) < 0$. Now, we have that $\Phi^{Nt_0}_*\mu_\xi = \sum_{\beta\in B^N}\Phi_\beta\mu_\xi$, and we may use (91) along with the assumption that $\mathcal{P}(1) < 0$ to show that, if $a$ is non-negative, $t \mapsto \int_{T^*\mathbb{R}^d} a\circ\Phi^t\, d\mu_\xi$ is bounded. Showing that $\mu_\xi$ is the semiclassical measure associated to $E_h$ follows from [DG14, §5.1] (which relies on Egorov's theorem), along with estimate (47).
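For reference, the statement of Egorov's theorem invoked above is presumably of the following standard semiclassical form (this is an assumed formulation; sign and time conventions vary between sources, and [Zwo12, Theorem 11.1] should be consulted for the precise statement):
$$U(t)^*\,\mathrm{Op}_h(a)\,U(t) \;=\; \mathrm{Op}_h\big(a \circ \Phi^t\big) + O_{L^2\to L^2}(h), \qquad a \in S^{comp},\ t \text{ fixed},$$
where $U(t)$ is the Schrödinger propagator and $\Phi^t$ the Hamiltonian flow.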
The value of cordocentesis in current management of the intrauterine patient

OBJECTIVES: To analyze a clinical group in order to evaluate current indications for cordocentesis, its complications, and the yield of the data obtained for further pregnancy management. METHODS: A retrospective analysis evaluated 92 cordocenteses (diagnostic and therapeutic) performed during the period 2007-2018. These were performed between 17 and 36 weeks of gestation under ultrasound guidance by a specialist at the 2nd Department of Gynecology and Obstetrics, Faculty of Medicine, Comenius University. RESULTS: Out of 92 procedures, 78 were diagnostic and 14 were therapeutic. Diagnostic cordocentesis was successful in 97.4 % and intrauterine therapy was successful in 85.7 %. There were 2 (2.56 %) diagnostic cordocenteses complicated by fetal demise and 2 (14 %) intrauterine demises in therapeutic cordocentesis. A pathological karyotype was detected in 14.5 %. Aneuploidy was present in 4 cases (44.4 %), mosaicism in 4 cases (44.4 %) and triploidy in one case (11.1 %). CONCLUSION: Despite novel molecular genetic techniques, cordocentesis still plays an irreplaceable role in current prenatal diagnosis and treatment. The risk of complications of cordocentesis increases depending on the severity of fetal pathology in pathologic pregnancies. In some situations it can be used as a useful tool for original fetal diagnosis and therapy (Tab. 3, Ref. 20).

Introduction

Cordocentesis (intrauterine puncture of the umbilical cord performed to obtain a sample of fetal blood or to administer a drug) is an invasive procedure in fetal medicine, implemented in the second and third trimesters of pregnancy. In terms of diagnosis, it is necessary in cases where the fetal genetic, metabolic, hematological, infectological and oxygenation status needs to be determined. After an exact diagnosis is obtained, it can also serve as a means of direct fetal treatment: performing fetal transfusions, and pharmacological and hormonal treatments. Despite the fact that cordocentesis is currently considered routine, it is necessary to carry it out only in strictly indicated cases and to follow the rate of its early and late complications. Cordocentesis should be performed by an experienced specialist. Strict audits of indications and complications are essential for the use of cordocentesis in prenatal diagnosis and fetal medicine. We analyzed 92 cordocenteses that were performed at our center. The aim of our study is to provide knowledge on the range of indications and complications of this invasive prenatal diagnostic technique and to evaluate the yield of the data obtained.

Methods

The study was carried out at the 2nd Department of Gynecology and Obstetrics, Faculty of Medicine, Comenius University. During the period 2007-2018, a total of 92 cordocentesis procedures were performed, of which 78 were diagnostic and 14 therapeutic. Women at 17 to 36 weeks of gestation who underwent appropriately indicated cordocentesis were consecutively recruited into the study. The procedure was performed after obtaining written consent. Data were retrospectively analyzed. The procedure was performed under ultrasound guidance using a freehand technique, with preprocedural antibacterial skin preparation and without administration of maternal sedation or prophylactic antibiotics. Local anesthesia was not applied. A 22 G needle was used.
Depending on the indication for the procedure, gestational age, maternal body habitus, and the distance from skin to target, an adequate needle length was chosen. Fetal cardiac activity was checked immediately after the procedure, and a further check was performed after 12 to 24 hours. After the procedure, reduced physical activity was recommended for at least 12 to 24 hours. Bed immobilization was not needed.

Results

A total of 92 cordocentesis procedures were performed, with a success rate of 97.4 % in diagnostic cordocentesis and 85.7 % in intrauterine therapy. The mean gestational age at diagnostic cordocentesis was 22 weeks (range, 17 to 36 weeks). The mean gestational age at intrauterine therapy was 28 weeks (range, 20 to 31 weeks). The cordocentesis indications and therapeutic indications are summarized in Tables 1 and 2. The numbers of diagnostic cordocenteses and therapies at the respective gestational ages are shown in Table 3. In this analysis we specifically evaluated the detection of pathological karyotypes (subgroups: unclear TAC result, late karyotyping and suspected ultrasonographic morphology). A pathological karyotype was present in 14.5 % of the cordocenteses performed. In the spectrum of genetic diagnoses, aneuploidy was present in four cases (44.4 %), mosaicism in four cases (44.4 %) and triploidy in one case (11.1 %). Two complications (2.56 %) were observed during diagnostic cordocentesis; both led to pregnancy loss. In the first case, the fetus was in the 18th week of gestation with fetal hydrops, and in the other case there was a fetus with severe intrauterine growth restriction (IUGR), minus 5/6 weeks of gestation. Two complications (14 %) were observed with therapeutic cordocentesis, followed by fetal demise. In the first case, this was an extremely early onset of fetal anemia with generalized fetal hydrops at the 19th gestational week. Despite the successful application of intrauterine transfusion, the fetus died in utero within 24 hours. In the latter case, intrauterine fetal demise was noticed within 12 hours after alcohol chemosclerosis of an acardiac twin.

Discussion

In spite of new non-invasive prenatal diagnostic methods, such as cell-free DNA determination in non-invasive prenatal testing (NIPT), invasive prenatal diagnosis, and cordocentesis in particular, still plays an important role in prenatal diagnosis and fetal medicine. As part of prenatal diagnosis, its role is to ascertain the definitive diagnosis necessary for further decision-making on pregnancy management. The overall success rate of cordocentesis is reported as 97-98 % (1, 4), which is also confirmed by our results (97.4 %). It is therefore an accepted and relatively safe method of prenatal diagnosis, but its risk of complications is higher compared to other methods. Taking this into account, cordocentesis should be carried out by an experienced specialist. Higher rates of procedure-related complications may occur in high-risk pregnancies, for example in association with intrauterine growth restriction, non-immune fetal hydrops, and chromosomal abnormalities; these are reported in 3.2-10.2 % (1, 2, 5, 6).
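As a quick arithmetic check, the complication and success rates reported in the Results above follow directly from the stated counts; a minimal sketch (the counts are taken from the text, the helper function is ours):

    # Reproduce the reported rates from the stated counts.
    def rate(events, total):
        """Percentage of events among total procedures, two decimal places."""
        return round(100 * events / total, 2)

    diagnostic_total, diagnostic_losses = 78, 2
    therapeutic_total, therapeutic_losses = 14, 2

    print(rate(diagnostic_losses, diagnostic_total))    # 2.56 -> the reported 2.56 %
    print(rate(therapeutic_losses, therapeutic_total))  # 14.29 -> the reported ~14 %
    print(rate(therapeutic_total - therapeutic_losses, therapeutic_total))  # 85.71 -> 85.7 % success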
Complications to consider include bleeding from the injection site (20-30 %), severe fetal bradycardia (5-10 %), in certain cases requiring emergency delivery (4), development of chorioamnionitis, paraumbilical hematoma, and vertical transmission of human immunodeficiency virus or hepatitis (1, 3, 4). Despite the wide spectrum of intrauterine pathology investigated, we achieved only a low percentage of complications, at the lower limit indicated by the international literature (2.56 %). Less frequently, cordocentesis is diagnostically useful for the determination of acidosis parameters in prenatally diagnosed chronic hypoxia of the fetus, predicted on the basis of abnormal umbilical artery Doppler waveforms (7, 8). A definitive diagnosis of fetal thyreopathy in prenatally detected fetal goiter can be established by determining thyroid hormone serum levels from a fetal blood specimen (9, 10, 11, 12). The most common procedure within intrauterine therapy based on cordocentesis is intrauterine transfusion (IUT). It is the only possibility of fetal treatment in fetal infectious conditions and Rh iso-immunization (13). The intrauterine hemotherapy of our patients achieved a success rate of 92.3 %, compared to data published in the literature, which report 86-95 % (13, 14, 15). Higher IUT complication rates leading to pregnancy loss are often due to high-risk pregnancies that require urgent invasive therapy (16, 17). This procedure may be complicated by premature rupture of the membranes and premature labor, with all the risks of prematurity (15). In our group, we observed one case of premature rupture of membranes with subsequent premature labor in a pregnancy with generalized fetal hydrops. Other types of intrauterine therapy, such as intraumbilical application of an ablation medium in an acardiac twin, remain very rare. In the case of fetal impasse, it is the only way to save the fetus (18, 19, 20). By analyzing our group, we have confirmed the persistence of a wide range of indications for cordocentesis. The analysis also confirmed a low complication rate despite the high severity of the intrauterine conditions involved. Cordocentesis represents an important diagnostic and therapeutic procedure without which prenatal diagnosis and fetal medicine cannot be performed nowadays.

Learning points

Cordocentesis is an important invasive prenatal diagnostic technique in fetal medicine. Cordocentesis is an invasive technique, but in skilled hands it is a safe method. It offers an accurate diagnosis which can only be achieved by fetal blood sampling. In life-threatening situations, intrauterine therapy performed by cordocentesis can be the only possibility for the fetus.
A Case Report of an Unusual Presentation of a Pedunculated Gastrointestinal Stromal Tumor: Acute Upper Gastrointestinal Bleeding in a 57-Year-Old Female

Gastrointestinal stromal tumors (GISTs) are rare tumors accounting for 0.1-3% of gastrointestinal (GI) neoplasms. In the past, GIST was classified as leiomyoma, leiomyosarcoma, and leiomyoblastoma. However, it is now evident that GIST is a separate tumor entity, and it is the most frequent sarcoma of the GI tract. We report a case of a 57-year-old female with a five-day history of black tarry stools, two episodes of vomiting of dark-colored blood, dizziness, abdominal pain, night sweats, and palpitations provoked by a change of position. After computerized tomography (CT) of the abdomen and pelvis, a GIST was suspected, which was confirmed by histopathology. Acute upper GI bleeding is a rare presentation of GIST. Clear guidelines should be developed for GIST. An early diagnosis is crucial for a better prognosis.

Introduction

Gastrointestinal stromal tumors (GISTs) are rare tumors. They are considered a subtype of the mesenchymal tumors of the gastrointestinal (GI) tract, accounting for 0.1-3% of GI neoplasms. GISTs were believed to arise from smooth muscle. However, evidence indicates that these tumors originate from stem cells that differentiate into the interstitial cells of Cajal, or directly from the interstitial cells of Cajal [1]. The interstitial cells of Cajal are considered a part of the myenteric plexus in the GI tract and are responsible for regulating peristalsis [2]. The most frequent locations of GIST are the stomach (50-70%), the small intestine (20-30%), the colon and rectum (5-15%), and, in less than 5%, the esophagus. Most patients present in the sixth decade with abdominal pain, nausea, vomiting, early satiety, and dyspepsia. However, in some patients, GISTs can be a source of intraperitoneal hemorrhage (61%) or can cause bleeding into the GI tract lumen, resulting in melena, hematemesis, or anemia. GIST can be classified histologically into spindle cell type (70%), epithelioid cell type (20%), and mixed type (10%). Surgery is the definitive therapy for GIST. In this article, we report a 57-year-old female who was diagnosed with mixed GIST [3].

Case Presentation

History

We report a case of a 57-year-old female who presented to a private hospital with a five-day history of acute black tarry stools, two episodes of vomiting of dark-colored blood, dizziness, abdominal pain, night sweats, and palpitations provoked by a change of position, as lying down improved these symptoms. According to the patient, the symptoms were relieved by lying down and drinking water. During the hospital admission, she also reported developing stabbing abdominal pain that was mildly relieved by water intake and associated with nausea and multiple episodes of extensive black bloody vomiting. The patient reported a history of weight loss (9 kg) since the onset of symptoms. She had a past medical history of hyperlipidemia. Her past surgical history was not clear. She denied any recent travel, contact with sick patients, smoking, or the use of alcohol or drugs. The patient developed uncontrollable foul-smelling bloody diarrhea. The patient denied any history of heartburn, pain radiating to the back, or pain aggravated or relieved by meals.
Examination

The patient was vitally and clinically stable. The patient was afebrile; pallor was not noted on the skin or the eyes. On examination, the patient had mild epigastric abdominal pain and tenderness. The patient had no signs of jaundice, encephalopathy, edema, or caput medusae, which excluded liver disease or portal hypertension. There was no abdominal distension or rebound tenderness. The bowel sounds were normal. The cardiac examination was unremarkable, with a normal S1 and S2.

Investigations

The patient was suspected to have a GIST. An endoscopy was performed to provide an accurate evaluation, which indicated an isolated fundal varix and a non-active esophageal varix.

A computerized tomography (CT) of the abdomen and pelvis was done, and the results showed a submucosal/endophytic, heterogeneously hyperdense structure in the gastric fundus (Figures 1-2). The CT scan shows a well-defined, rounded, submucosal, pedunculated structure arising from the gastric fundus, measuring 3.4 x 3.2 x 3 cm (GIST: gastrointestinal stromal tumor).

The diagnosis was confirmed through magnetic resonance imaging (MRI) of the abdomen, indicating GIST (Figure 3).

We received this specimen (Figure 4) in formalin, containing a 6.5 x 5.5 x 3.5 cm fundic mass. The tumor is solitary, well-circumscribed, and fleshy, with no necrosis identified. The gastric mucosa overlying the tumor is normal and no ulcer is seen.

FIGURE 4: Specimen. We received this specimen in formalin, containing a 6.5 x 5.5 x 3.5 cm fundic mass.

Immunobiological staining was used to differentiate between the types of GIST. In our patient, the caldesmon stain was positive, which is observed in only 10% of all cases (Figures 5, 6, 7, 8). The neoplasm is positive for DOG1, caldesmon, and CD117. It is negative for Ki-67.

Surgical intervention, outcome, and follow-up

Considering the radiological and histopathological findings, the patient underwent a laparoscopic partial fundus resection, as this was appropriate for this case. The tumor was resected with a negative margin of 1 cm distal to the mass. The patient tolerated the procedure well and was transferred to the post-anesthesia care unit (PACU) in stable condition. The patient was discharged in a stable condition and instructed to come to the ER in case of severe pain, fever, or vomiting. An outpatient appointment was scheduled after six months for clinical and radiological follow-up.
Discussion

GISTs, previously classified as leiomyosarcomas due to their resemblance to smooth muscle, were recognized, based on immunohistochemistry, as stromal tumors associated with the expression of neural crest cell antigens in 1984. The cellular origin of GISTs is thought to be mesenchymal stem cells programmed for differentiation into the interstitial cells of Cajal. These cells initiate coordinated GI motility and are the pacemakers of the GI system [4]. Population-based studies from European countries, in addition to SEER (Surveillance, Epidemiology, and End Results) data from the United States, indicate an annual incidence rate of 6.5% to 14.5% and an age-adjusted incidence rate of 0.68 to 0.8 per 10,000. Unfortunately, the global incidence of GISTs is not known, given the relative homogeneity of the previous population-based studies. GISTs are most frequently diagnosed later in life; however, they can occur at any age. Shenoy and Singh reported a neonatal boy diagnosed with GIST in the terminal ileum after presenting with intestinal obstruction and vomiting one day after his birth [5]. To our current knowledge, there is one report of a solitary gastric pseudo-variceal rupture caused by a pedunculated GIST in a 64-year-old male [6]. He was admitted to the emergency department due to episodes of hematemesis and melena, and presented with normal blood pressure (120/90 mmHg) and tachycardia (102 beats/min). His medical history was negative for peptic ulcers and liver diseases, which is similar to the presentation of our patient. An upper endoscopy was performed, which showed bluish, bloated gastric mucosal folds of the greater curvature with a string-of-beads aspect and a red spot. These findings mimicked the post-rupture status of solitary gastric varices (GVs). However, the endoscopy in our patient showed no active (non-bleeding) isolated fundal varices. The CT scan revealed an irregular pedunculated mass measuring approximately 60 mm in diameter on the serosal side of the stomach, with the enhanced mucosal side of the tumor suspected of being vascular-enriched gastric submucosa. In our case, the CT scan showed a well-defined rounded submucosal pedunculated structure arising from the gastric fundus measuring 3.4 x 3.2 x 3 cm.
Diagnosis

The location of a GIST impacts the diagnostic workup involved. In all patients, regardless of the presenting symptoms, a history and a physical examination should serve as the starting point of the diagnostic workup. Most frequently, patients with these tumors present with anemia and other signs and symptoms of chronic GI bleeding. Patients rarely present with acute GI bleeding manifesting as melena or hematochezia [7]. We believe that such an unusual presentation is associated with the development of isolated fundal varices in association with GIST, as in the case of our patient. This case is important because it illustrates the significance of keeping mixed GIST among the differentials for patients presenting with acute upper GI bleeding, especially since previous studies have demonstrated the consequences of delayed surgery in these patients. In addition to GI bleeding, GISTs may also present with signs and symptoms of a mass effect caused by the tumor, such as abdominal pain or discomfort, early satiety, abdominal distension, or a palpable mass. In an additional 15% to 30% of cases, GISTs are found incidentally during surgery, imaging, or autopsy [8]. Gastroscopy, endoscopic ultrasound, and abdominal and pelvic imaging support the diagnosis. The final diagnosis is based on pathological and immunohistochemical examination. Immunohistochemical staining for the CD117 tyrosine kinase receptor, which reveals the presence of interstitial cells of Cajal, is used to confirm the diagnosis of GIST. The expression of CD117 distinguishes GISTs from stomach schwannomas and genuine leiomyomas, which consistently test negative for CD117. In about two-thirds of GISTs, CD34 is expressed. A crucial confirming sign for the diagnosis of this malignancy is CD117. These tumors may histologically display a mixed subtype, an epithelioid pattern, or a spindle cell pattern [9]. In our case, the histopathology revealed a spindle cell pattern. Tumor size and mitotic rate are the most important tumor factors for local recurrence and metastasis. The risk of recurrence and metastasis increases with a tumor size of more than 5 cm and mitoses > 5 per 50 HPF. The histopathology report of our patient was low-risk benign GIST. A case report by Alkhaldi described a seven-year-old female presenting with a four-month duration of pallor accompanied by a low hemoglobin of 5 and severe hypochromic anemia. An upper endoscopy and a barium swallow were negative. The patient was managed with blood transfusion and iron supplements. After a recurrent episode of the same complaint, a CT was done that showed a 3 cm upper posterior fundal mass, and GIST was confirmed by immunohistochemistry. Recently, many GIST cases have been reported in pediatrics, and it is important to have clear guidelines regarding the diagnosis and management of pediatric-onset GIST. No current guideline exists. It is important to consider GIST as a differential diagnosis for pediatric patients presenting with signs and symptoms of chronic anemia.
Treatment

The treatment approach for GI stromal tumors depends largely on the size and stage of the tumor. When dealing with a localized tumor that can be surgically removed and is larger than 2 cm, the primary treatment method is surgical resection [10]. However, for patients with locally advanced disease where complete surgical removal is not feasible without causing functional impairment, the use of preoperative imatinib therapy can help reduce the tumor size prior to surgery. In cases where the disease is considered high-risk, adjuvant therapy is recommended, typically involving three years of tyrosine kinase inhibitors, preferably imatinib [11]. When the tumor is unresectable or has spread to other parts of the body, treatment with tyrosine kinase inhibitors is recommended [12]. In the case of our patient, with a tumor measuring 3.5 x 3.6 x 2.4 cm, the treatment plan involved complete tumor removal with clear margins, and postoperative therapy was deemed unnecessary based on the biopsy results. However, it is important to note a previous case where delayed therapy resulted in significant consequences. In that case, a 51-year-old female with a GIST measuring 102 x 56.3 x 46.6 mm experienced a delay of five months in undergoing surgery and receiving medical treatment due to COVID-19 regulations. As a result, the patient suffered from upper abdominal pain and black tarry stools, eventually leading to severe bleeding and symptoms such as fatigue and syncope. Her hemoglobin level dropped significantly from 11 mg/dL to 5.5 mg/dL over a period of four months [13].

Prognosis

The outlook for individuals diagnosed with GI stromal tumors depends on factors such as where the tumor is located, the rate of cell division (mitotic count), and the size of the tumor. Additional factors that influence the prognosis include whether the tumor is completely removed with no residual cancer cells at the surgical margins and whether there was a rupture of the tumor during surgical removal. Common complications associated with GI stromal tumors include GI bleeding and the physical pressure exerted by the tumor. Chronic GI bleeding caused by GIST is the most frequent complication and can lead to anemia. Furthermore, these tumors can also result in intestinal blockage, bleeding within the abdominal cavity, and rupture followed by inflammation of the abdominal lining (peritonitis) [13].

Conclusions

Mixed GISTs are rare, especially when presenting as acute GI bleeding. A GIST presenting acutely can initially be misdiagnosed as isolated gastric fundal varices. Radiological imaging plays a major role in diagnosing the condition. Different treatment options are considered, such as medical and surgical therapy. Although the mechanism behind the development of GVs is known, the association between GVs and GIST needs further study in terms of prevalence, management, and complications.

FIGURE 2: In the coronal view, a submucosal pedunculated structure originating from the gastric region was observed (arrow), indicating the presence of a GIST.

FIGURE 3: An abdominal MRI revealed the presence of a gastric fundus mass, indicative of a GIST (arrow).

FIGURE 5: Histology reveals the tumor was composed of cells with spindle (elongated nuclei and eosinophilic cytoplasm arranged in fascicles) and epithelioid (mildly pleomorphic epithelioid cells with abundant eosinophilic cytoplasm) morphology.
FIGURE 6: Immunobiological staining revealed a positive expression of CD117 in the neoplastic cells.

FIGURE 7: Immunobiological staining revealed a positive expression of caldesmon in the neoplastic cells.

FIGURE 8: Immunobiological staining revealed a positive expression of DOG-1 in the neoplastic cells.
Developing an instrument for assessing college students' perceptions of teachers' pedagogical content knowledge

Ongoing professional development for college teachers has been much emphasized. However, previous research has seldom addressed college students' perceptions of teachers' knowledge. The purpose of this study was to develop an instrument that could be employed to evaluate college students' perceptions of teachers' PCK. According to the pilot test results, the four categories finally constructed were Subject Matter Knowledge (SMK), Instructional Representation and Strategies (IRS), Instructional Objects and Context (IOC), and Knowledge of Students' Understanding (KSU). In the main study, seven items were generated under each category, making a total of 28 items in the questionnaire, and the instrument was administered to 172 education college students. The results of the analysis indicated that the instrument shows satisfactory validity and reliability. Suggestions for the application of the instrument in future research are also made.

Introduction

The notion of ongoing professional learning and development for college teachers has been much emphasized (Guskey, 1985; Fullan & Stiegelbauer, 1991; Johnson, 1993; Clarke & Hollingsworth, 2002; Garcia & Roblin, 2008). The central focus of current professional development efforts aligns most closely with the change-as-growth-in-learning perspective. Within this perspective, change is identified with learning, and it is regarded as a natural and expected component of the professional activity of teachers and schools (Clarke & Hollingsworth, 1994). Many novice college teachers with doctoral degrees and a certain level of subject matter knowledge were not able to be effective teachers (Clarke & Hollingsworth, 2002; Major & Palmer, 2006; Jang, 2008a). One of the reasons was that, unlike primary and secondary school teachers, they did not need to obtain a Teacher's Certificate (Jang, 2008a). Aside from subject matter knowledge, they needed more pedagogical knowledge (Leinhardt & Smith, 1985; Hasweh, 1987; Lenze & Dinham, 1994). There are many avenues for college teachers' professional growth, such as research, instructional observation and reflection, publication of reports and books, joining workshops to share experiences with others, seminars, attending professional activities, journal reading, writing, curriculum design, and peer activities (Lieberman, 1995; Ball, 1996; Cooney & Krainer, 1996; Sykes, 1996; Loucks-Horsley et al., 2003; Dalgarno & Colgan, 2007). Nevertheless, promoting college teachers' Pedagogical Content Knowledge (PCK) is the key to advancing the professional growth of teachers (Lenze & Dinham, 1994). It has also been reported that the success of college teaching depends not only on the teachers' subject-matter knowledge but also on their personal understanding of students' prior knowledge and learning difficulties (Grossman, 1990; Lederman, Gess-Newsome & Latz, 1994). In addition, other factors of success include their teaching methods and strategies, curriculum knowledge, educational context, and goals and values (Shulman, 1987). In particular, college teachers' pedagogical content knowledge is a main issue of the current college education reform (Shulman, 1986, 1987). Shulman's notion of PCK has attracted much attention and has been interpreted in different ways (Grossman, 1990; Geddis, Onslow, Beynon & Oesch, 1993).
The foundation of science PCK is thought to be the amalgam of a teacher's pedagogy and understanding of content, such that it influences their teaching in ways that will best engender students' learning for understanding. Initially, college teachers separate subject-matter knowledge from general pedagogical knowledge. These types of knowledge are, however, integrated as a result of teaching experience. By becoming acquainted with students' specific conceptions and ways of understanding, teachers may start to restructure their subject-matter knowledge into a form that enables productive communication with their students (Lederman, Gess-Newsome and Latz, 1994). According to Lederman, Gess-Newsome and Latz (1994), the development of PCK among science teachers is promoted by the constant use of subject-matter knowledge in different teaching situations. Many scholars suggest that PCK is developed through an integrative process rooted in classroom practice, and that PCK guides the teachers' actions when dealing with a specific subject matter in the classroom.

Purpose of the research

Greater emphasis has been placed on the development and research of elementary and secondary teachers' PCK (Grossman, 1990; Gess-Newsome & Lederman, 1993; Van Driel et al., 1998; Loughran, Mulhall & Berry, 2004; De Jong, Van Driel & Verloop, 2005; Dalgarno & Colgan, 2007). Some studies have even developed an instrument for examining students' perceptions of secondary teachers' knowledge (Tuan, Chang, Wang & Treagust, 2000). However, previous research on learning environments has seldom addressed college students' perceptions of teachers. The purpose of this research was to develop an instrument for evaluating college students' perceptions of teachers' PCK in order to help college teachers better understand how they teach.

Pedagogical content knowledge

The impact of constructivist epistemology seems to be important in PCK. As constructivism emphasizes the role of previous experience in knowledge construction processes, it is not surprising that, in research from this point of view, teachers' knowledge is studied in relation to their practice. Shulman (1987) regarded PCK as part of the knowledge base for teaching. This knowledge base comprises seven categories, three of which are content-related (content knowledge, PCK, and curriculum knowledge). The other four categories refer to general pedagogy, learners and their characteristics, educational contexts, and educational purposes. PCK is concerned with the representation of concepts, pedagogical techniques, knowledge of what makes concepts difficult or easy to learn, knowledge of students' prior knowledge, and theories of epistemology. It also involves knowledge of teaching strategies that incorporate appropriate conceptual representations in order to address learners' difficulties and misconceptions and to foster meaningful understanding (Mishra & Koehler, 2006). Reynolds (1992) summarizes the literature on PCK in the following aspects: (1) teaching students through different skills and methods; (2) thinking about the content scope and sequence that needs to be covered; (3) understanding students' previous conceptions, skills, knowledge and interests related to the particular topic; (4) using appropriate representations to introduce subject matter knowledge; (5) using different strategies to help students understand and become interested in a topic; and (6) using appropriate evaluation methods to assess students' understanding of subject matter knowledge.
Tuan (1996) further investigated the essence of PCK and suggested that the components of PCK include teachers' understanding of subject matter knowledge, teaching methods, teaching representations, curriculum knowledge, assessment knowledge, knowledge of students' understanding of the topics, and knowledge of the context of the learning environment (Tuan, Chang, Lee, Wang & Cheng, 2000). Grossman (1990) tried to remedy this situation by distinguishing four general areas of teacher knowledge that can be seen as the cornerstones of the emerging work on professional knowledge for teaching: general pedagogical knowledge, knowledge of context, subject matter knowledge, and PCK. Within Grossman's knowledge base for teaching, general pedagogical knowledge is defined as knowledge concerning general principles of instruction, learning and learners, knowledge related to classroom management, and knowledge about the aims and purposes of education. Knowledge of context includes knowledge of the school setting, for example its culture, and knowledge of individual students (Van Dijk & Kattmann, 2007). PCK is a unique domain that is informed by other knowledge areas. There seems to be a reciprocal relationship between PCK and the foundational knowledge domains of subject matter, pedagogy, and context. The foundational knowledge domains inform PCK, which in turn influences the teacher's knowledge of subject matter, pedagogy, and context (Gess-Newsome, 1999). Major and Palmer (2006) used a qualitative study of faculty members participating in a university campus-wide problem-based learning initiative to examine the process of transforming faculty pedagogical content knowledge. They found that faculty members' existing knowledge and institutional intervention influenced new knowledge of faculty roles, student roles, disciplinary structures, and pedagogy. Teachers' PCK is deeply personal, highly contextualized, and influenced by teaching interaction and experience (Van Driel, Beijaard & Verloop, 2001; De Jong, Van Driel & Verloop, 2005; Van Dijk & Kattmann, 2007). Mulholland and Wallace (2005) suggested that science teachers' pedagogical content knowledge requires the longitudinal development of experience as they develop from novices into experienced teachers. Certain studies also showed that a teacher well equipped with subject-matter knowledge might be able to transfer his/her knowledge in an efficient way, making it easier for students to acquire knowledge (Carter & Doyle, 1987; Tobin & Garnett, 1988). The subject-matter structures of biology teachers were investigated during a year of professional teacher education (Gess-Newsome & Lederman, 1993). Their knowledge structures appeared to be mainly derived from their college science courses. While these structures were often vague and fragmented on entering teacher education, they developed toward more coherent and integrated views of biology during teacher education.

Students' perceptions of teachers' knowledge

Jang (in press) designed a peer coaching-based model for in-service science teachers' PCK. Four science teachers and 123 secondary students took part in this study on the application of the developed PCK-RIER model (Research, Instruction, Evaluation, and Reflection). Students thought that the science teachers had rich subject matter knowledge and often assessed knowledge of their understanding. However, the science teachers had difficulty implementing a representational repertoire and instructional strategies.
Students' perceptions and teachers' reflection are the important factors of this model for developing science teachers' PCK. It is recommended that this model be adopted in teacher education to offer more opportunities for professional growth among science teachers. Knight and Waxman (1991) pointed out that although students' perceptions might not be consistent with the reality generated by outside observers, they could represent the range of reality for individual students and their peers in the classroom. Using students' perceptions can enable researchers and teachers to appreciate the perceived instructional and environmental influences on students' learning processes. According to Lloyd and Lloyd (1986), students expected teachers to provide a sense of how the constituent parts of a discipline fit together, to have rich and adequate subject matter knowledge, and to be able to teach this subject matter knowledge at their level of understanding. Olson and Moore (1984) revealed that, from the students' perspective, a good teacher knows the subject matter well, explains things clearly, makes the subject interesting, gives regular feedback, and gives extra help to students. Similarly, Turley (1994) found that students' perceptions of effective teaching were a combination of method, context, student effort, and teacher commitment. Students considered effective those teachers who knew their subject, showed evidence of thoughtful planning, used appropriate teaching strategies and instructional and representational repertoires, and gave adequate structure and direction (Tuan, Chang, Wang & Treagust, 2000). In brief, research on students' perceptions of teachers' teaching revealed that students expect teachers to have strong content knowledge, implying that they are able to perceive whether teachers' content knowledge is good or bad. Students also expected teachers to use effective instructional methods; in other words, they expected teachers to have good pedagogical content knowledge (Shulman, 1987).

Research Methodology

In order to explore college teachers' understanding of PCK and its actual state, we first designed a questionnaire on novice teachers' PCK. The categories of the questionnaire were constructed from Shulman's (1987) PCK. It comprised 15 questions across three categories (instructional representation, strategies, and assessment of students' prior knowledge), with five questions per category. We conducted a pilot test on 16 novice teachers attending PCK workshops at the beginning of the first semester of 2007 in order to examine their understanding of PCK. Moreover, we also selected 182 college students to join the test. After interviewing some teachers and reviewing the suggestions from the Advancing Teachers' Teaching Excellence Committee (ATTEC), we found many overlaps between instructional representation and strategies. On the other hand, most college teachers put emphasis on subject matter knowledge, and hence instructional context was neglected. The four categories finally constructed were Subject Matter Knowledge (SMK), Instructional Representation and Strategies (IRS), Instructional Objects and Context (IOC), and Knowledge of Students' Understanding (KSU) (see Appendix). Seven items were generated under each of the four categories as agreed by the researchers and team teachers; the instrument was also revised according to the suggestions from five experienced college teachers of the ATTEC.
Once the conceptual framework for the instrument was established, the items were written to be easy for college students in Taiwan to comprehend, and each category of items was meant to be meaningful from the students' perspective.

Subject Matter Knowledge (SMK) refers to students' perceptions of the extent to which the teacher demonstrates a comprehension of the subject matter and ideas within the discipline. The construction process of content knowledge and the entire structure and direction of subject knowledge are also included. Examples of items in this category are:
a1. My teacher knows the content he/she is teaching.
a7. My teacher knows the whole structure and direction of this subject matter knowledge.

Instructional Representation and Strategies (IRS) refers to students' perceptions of the extent to which the teacher uses a representational repertoire, including analogies, metaphors, examples, and explanations, and selects teaching strategies that benefit content learning, including information technology. Examples of items in this category are:
b1. My teacher uses familiar examples to explain concepts related to the subject matter.
b7. My teacher uses multimedia or technology (e.g. PowerPoint) to express the concepts of the subject.

Instructional Objects and Context (IOC) comprises knowledge about the aims and process of education. IOC also includes the interactive atmosphere in the curriculum, teachers' attitudes, knowledge related to classroom management, knowledge of the school setting, and instructional values. Examples of items in this category are:
c1. My teacher makes me understand clearly the objectives of this course.
c6. My teacher copes with our classroom context appropriately.
c7. My teacher's teaching belief or value is active and aggressive.

Knowledge of Students' Understanding (KSU) refers to college students' perceptions of the extent to which the teacher evaluates student understanding before and during interactive teaching, and at the end of lessons and units. Examples of items in this category are:
d1. My teacher realizes students' prior knowledge before the class.
d7. My teacher's tests help me realize the learning situation.

Analysis of data

The pilot study indicated that the survey yielded high reliability and validity results (Jang, 2008b), and some items from the original instrument were revised. The revised instrument representing college students' opinions about their teachers' PCK was divided into four main categories, namely SMK, IRS, IOC and KSU, consisting of seven items per category for a total of 28 items. The questionnaire survey employed in this research also included one open-ended question for students to comment on the course. Finally, the participants of this study were students in the College of Humanities and Education at the University. They took courses from the novice college teachers participating in PCK workshops at the beginning of the semester. A total of 172 valid responses were collected. This research used the quantitative research method, and statistical analyses of the survey data were carried out. The survey adopted a five-point Likert scale designed for students to express their opinions: "Never", "Seldom", "Sometimes", "Often", and "Always", corresponding respectively to 1-5 points. Moreover, we also tested the reliability and validity of the questionnaire. For reliability, Cronbach's alpha was adopted to evaluate internal consistency (a computational sketch is given below).
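To make the reliability computation concrete, here is a minimal sketch of Cronbach's alpha for a 28-item, five-point instrument of this shape; the data below are randomly generated stand-ins, not the study's actual responses:

    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_respondents, n_items) array of Likert responses (1-5).
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Stand-in data: 172 respondents x 28 items, mimicking the survey's shape.
    rng = np.random.default_rng(0)
    fake = rng.integers(1, 6, size=(172, 28))
    print(round(cronbach_alpha(fake), 3))  # random data gives alpha near 0

    # Category means (SMK, IRS, IOC, KSU = items 1-7, 8-14, 15-21, 22-28):
    for name, block in zip(["SMK", "IRS", "IOC", "KSU"], np.split(fake, 4, axis=1)):
        print(name, round(float(block.mean()), 2))

With real survey data in place of the stand-ins, the same function reproduces the kind of overall alpha (0.965 in the study) and per-category means reported in the Results.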
On the other hand, factor analysis was adopted to evaluate construct validity.

Results

Table 1 shows the descriptive statistics of students' responses to the four categories in the questionnaire, including mean scores and standard deviations. Results obtained by ANOVA show significant differences (p = .000 < .05, F = 46.757) among the four categories. The highest mean score is for SMK (M = 4.35, SD = 0.542), followed by IRS (M = 4.28, SD = 0.573) and IOC (M = 4.23, SD = 0.634), with the score of KSU (M = 3.99, SD = 0.828) being the lowest. All of the mean scores indicate that the behaviors these items investigate occur between 'often' and 'always'. As seen in the Table 1 results, students considered their teachers' subject matter knowledge (SMK) rich and positive, but indicated that knowledge of students' understanding (KSU) could be improved. Furthermore, we also analyzed each item. Items with means below 4 points included Q-b3 (My teacher's teaching methods keep me interested in this subject), Q-d1 (My teacher realizes students' prior knowledge before the class), Q-d2 (My teacher knows students' learning difficulties in this subject before the class), and Q-d4 (My teacher's tests evaluate my understanding of the subject). The study showed that college students felt that most of the teachers could not fully understand students' difficulties and prior knowledge (KSU). The critical way to promote KSU is for teachers to become aware of students' understanding and to pay attention to their learning difficulties. From students' comments, this may be caused by teachers' subjectivity, the fast pace of the course, and too many assignments, all of which affected the teachers' teaching results. Large numbers of students in a regular class were also a problem. We advised adopting e-learning methods to make up for the insufficient interaction between students and college teachers.

Reliabilities of the instrument

For the reliability of this instrument, we used Cronbach's alpha values to evaluate its internal consistency. After statistical analysis, the Cronbach's alpha value for the 28 items was 0.965, indicating that the questionnaire had good internal consistency. Moreover, in the "Item-Total Statistics" (Table 2), the "Corrected Item-Total Correlation" column revealed that each item not only presented a high correlation (all correlation values were greater than 0.400) but also had high psychological homogeneity. Furthermore, the "Cronbach's alpha if Item Deleted" column showed that if any one of the items were deleted, the Cronbach's alpha value would remain almost the same, or even become smaller. It follows that the instrument had high consistency and reliability, so it was unnecessary to delete any item from the scale. After exploring the item-total statistics, factor analysis of the 28 items was carried out next in order to test construct validity. Through factor analysis, we examined whether the structure of this questionnaire was consistent with the four perspectives of PCK we defined. Besides, we also checked whether the definitions and concepts of the items involved in the four categories were accurate. Finally, the questionnaire was finalized so that we could use it to conduct the research and explore further.

Validity

The validity of the new instrument was confirmed in terms of its content and construct validity.
Content validity refers to the extent to which the content of the items measures what is claimed to be measured (Anastasi, 1988; Zeller, 1994); in this case, content validity was ascertained by responses during the process of developing the instrument, from college students and experienced college teachers. Additionally, researchers in the area of teachers' pedagogical content knowledge provided comments and critical feedback. According to Zeller (1994), construct validity focuses on assessing whether a particular measure relates to other measures in a theoretically anticipated way, and this is usually done by factor analysis (Anastasi, 1988). Based on the factor loadings, items in each category were to be deleted if they had low item-scale correlations in the reliability section and factor loadings of less than 0.40. This study also met the conditions for factor analysis: 1. the pilot questionnaire had cited associated theories and references; 2. the items were clearly divided into a few categories (perspectives); 3. all of the items were clearly defined; and 4. all of the items were checked and revised by experts (Wu, 2007).

The factor analysis of four categories: SMK, IRS, IOC, and KSU
In Table 3, the factor loadings of the seven items in SMK were over 0.600 (ranging from 0.762 to 0.860), and the percentage of total variance explained was 64.514%. For IRS, the factor loadings of its seven items were over 0.600 (ranging from 0.625 to 0.819), and the percentage of total variance explained was 57.031%. For IOC, the factor loadings of its seven items were over 0.600 (ranging from 0.745 to 0.885), and the percentage of total variance explained was 67.659%. Finally, the factor loadings of the seven items in KSU were over 0.600 (ranging from 0.749 to 0.834), and the percentage of total variance explained was 64.159%. This revealed that each category could explain at least 36% of the variance of each of its items (since all loadings exceeded 0.600), and all of the items were retained. After factor analysis of the four categories, the items of each category were approximately confirmed. Since the number of distributed items was appropriate, we proceeded to test the internal consistency of the four categories further. After re-testing, we found that the Cronbach's alpha values of the four categories were still high (ranging from .871 to .918). In other words, the items of each category had high internal consistency.

Conclusions
The major contribution of this study was to develop an instrument that could be used to evaluate college students' perceptions of teachers' PCK. The findings differ from those of Major and Palmer (2006), who used qualitative methods, such as interviews and course portfolios, to describe faculty members' existing PCK and how it transforms. In our study, we adopted a quantitative approach (a survey) to analyze and understand the PCK that college faculty members (especially novice teachers) would have. The data analysis indicated that the questionnaire of this study had satisfactory validity and reliability. The uniqueness of the survey was that it was specifically related to college teachers' knowledge within the particular teaching and learning context. This was important because research had shown that college teachers' knowledge influenced students' perceptions of the learning environment. Tobin (1996) showed that teachers' metaphors and beliefs influenced how they taught and implemented the curriculum, and that their level of content knowledge influenced whether students were taught for factual retention or for understanding.
Tobin and Fraser (1990) also investigated the teaching characteristics of exemplary science teachers. These effective teachers used not only management strategies to sustain students' engagement but also teaching strategies such as problem-solving activities. Besides, they also provided concrete examples for abstract concepts, asked questions to increase students' understanding, helped students to engage in both large and small group activities, and maintained favorable classroom learning environments. In this study, college students expressed that most teachers were insensitive to students' learning difficulties and their prior knowledge, which became the weak point of college teachers' teaching. Concerning teaching strategy, college teachers adopted inefficient teaching approaches, which could not stimulate students' interest in learning certain subject matter. The causes, drawn from students' comments, were teachers' subjective attitudes, their purely expository lecturing in class, and the rapid pace of the curriculum, which did not match students' learning status. The current questionnaire functioned as an instrument to help college teachers actively understand students' reactions to the progress of courses, improve their comprehension of students' learning difficulties, further adopt proper and efficient teaching strategies, and achieve better performance in teaching. Through reflective research and participation in the PCK workshops, college teachers could amend their teaching approaches and strategies, and further improve their ability in course design (Sykes, 1996; Loucks-Horsely et al., 2003; Dalgarno & Colgan, 2007). A new perspective for further research concerns the implementation of the questionnaire as an instrument to investigate college teachers' teaching results based on PCK. Participants would be college teachers from different departments across the campus, not only from the College of Humanities and Education as previously. Through analysis of the surveys collected from university students' feedback in class, together with follow-up testing of the reliability and validity of the collected questionnaires, a self-examination mechanism for college teachers' teaching performance and PCK-related efficiency could be established. The administration of the questionnaire was planned for the middle and the end of the semester. Other data sources would be collected from personal interviews with teachers, as well as from students' feedback, to further examine individual teachers' improvement of PCK. Furthermore, a teaching or research model could also be designed for college science teachers to develop their PCK (Jang, in press). The above would help in comprehending the interaction between college teachers and students in class, constituting a substantial reference for college curriculum design and instruction in the future.

A. SMK (Subject Matter Knowledge)
1. My teacher knows the content he/she is teaching.
2. My teacher explains clearly the content of the subject.
3. My teacher knows how theories or principles of the subject have been developed.
4. My teacher selects the appropriate content for students.
5. My teacher knows the answers to questions that we ask about the subject.
6. My teacher explains the impact of subject matter on society.

C. IOC (Instructional Objective & Context)
1. My teacher makes me clearly understand objectives of this course.
2. My teacher provides an appropriate interaction or good atmosphere.
3. My teacher pays attention to students' reaction during class and adjusts his/her teaching attitude.
4. My teacher creates a classroom circumstance to promote my interest for learning.
5. My teacher prepares some additional teaching materials.
6. My teacher copes with our classroom context appropriately.

Comments: In this course, if you have any learning difficulty or opinion, please describe it as follows.
_____________________________________________________________________________________________
Thanks for filling in this questionnaire
Prevalence of Dirofilaria immitis in Dogs from Shelters in Vojvodina, Serbia

Background: Dirofilaria immitis is a vector-borne parasite of carnivores with zoonotic potential, endemic in many parts of the world, including Europe. The aim of this study was to determine the prevalence of Dirofilaria immitis infection in dogs from shelters, especially in relation to their lifestyle. Dogs living in shelters in Serbia may be at high risk of acquiring vector-borne pathogens, mainly because most of them live outside in pens and backyards, in contact with vectors. Also, dogs in shelters are not always regularly treated against ectoparasites, thus representing an easy feeding source for the vectors. The objective of this study was to determine the prevalence of Dirofilaria immitis infection in dogs from 5 shelters in the South Bačka and Central Banat districts, in the Autonomous Province of Vojvodina, in the northern part of Serbia. A further objective was to relate Dirofilaria immitis infection to age, sex, type of housing and preventive treatment in dogs.

Materials, Methods & Results: Between May 2017 and October 2019, blood samples were collected from 336 randomly selected dogs from 5 shelters in 2 districts, South Bačka and Central Banat, in the Autonomous Province of Vojvodina, in the northern part of Serbia. An epidemiological survey was conducted for all of the dogs involved in this research. The survey was designed to collect data about sex, age, lifestyle, food type, treatment against mosquitoes with insecticides and against filarioid worms with macrocyclic lactones, and regular testing for Dirofilaria infections. The presence of circulating microfilariae was examined using a modified Knott's test. Serum samples were tested for the presence of circulating adult female Dirofilaria immitis antigen by a commercially available enzyme-linked immunosorbent assay, which reacts to the antigen of female Dirofilaria. In total, 336 dogs were examined for the presence of Dirofilaria immitis antigen. For this dog population from 5 shelters, the total prevalence was 25.30%. Most of the positive findings were observed in a shelter where dogs lived exclusively outdoors in fenced yards in big groups and were only occasionally tested for heartworm infections. These dogs were not treated against mosquitoes with insecticides or against filarioid worms with macrocyclic lactones. The prevalence in this shelter was 56.36%. By contrast, the fewest positive findings were detected in the shelter where dogs were allowed to move freely between outside and indoors and were also provided with indoor accommodation. These dogs had been regularly tested for Dirofilaria infections and treated against mosquitoes with insecticides and against filarioid worms with macrocyclic lactones. In this shelter the seroprevalence was 7.69%. Microfilariae of Dirofilaria immitis were detected by modified Knott's test in all of the antigen-positive dog samples, except in 2 dogs from one shelter.

Discussion: This study shows the persistence of cardiopulmonary dirofilariosis in shelter dogs under different keeping conditions. Comparing the data over the last 17 years, it can be stated that there has been a constant increase in the prevalence of Dirofilaria immitis in dogs in the northern part of Serbia. The results gained in this study are important from the veterinary point of view, but also from the Public Health point of view.
INTRODUCTION
Cardiopulmonary dirofilariosis caused by Dirofilaria immitis is a vector-borne disease transmitted by mosquitoes, with zoonotic potential [1]. D. immitis resides in dogs and wild canids as definitive hosts [25]. Mosquito species that have been identified as vectors of dirofilariosis in Europe are Aedes vexans, Aedes albopictus, and the Culex pipiens complex [3,23]. D. immitis infections were first found in Mediterranean countries, but the disease later spread over the Balkans and Central Europe [7-9,24,34], due to climate change, the density of the vectors and the travel of infected dogs [17]. The first report of cardiopulmonary dirofilariosis in dogs in Serbia (Yugoslavia) was in 1989, during the necropsy of three dogs, and then in 1999, which led to its continuous diagnosis in dogs in Serbia [5,18,22,35,36]. During the last two decades dirofilariosis has spread over Serbia, and multiple studies on the determination and differentiation of Dirofilaria spp. were conducted in hunting dogs, pet dogs, military dogs of the Serbian army, stray dogs and dogs from shelters [26,28,40-42]. The objective of this study was to determine the prevalence of Dirofilaria immitis infection in dogs from five shelters in the South Bačka and Central Banat districts, in the Autonomous Province of Vojvodina, in the northern part of Serbia. A further objective was to relate Dirofilaria immitis infection to age, sex, type of housing and preventive treatment in dogs.

Study areas, animals and sampling
Between May 2017 and October 2019, blood samples were collected from 336 randomly selected dogs from 5 shelters in 2 districts, South Bačka and Central Banat, in the Autonomous Province of Vojvodina, in the northern part of Serbia. The distribution of samples according to shelter was 94 for A, 52 for B, 65 for C, 55 for D and 70 for E. The sample size was calculated using OpenEpi software (https://www.openepi.com/SampleSize/SSPropor.htm) [4]. Most of the dogs were medium-sized mixed breeds, aged between 6 months and 15 years. In shelters B and, partly, D, when dogs enter the shelter they are checked for D. immitis infection and, if positive, they are in most cases treated. Epidemiological data about the dogs from the shelters are shown in Table 1. With the consent of the persons responsible for managing the shelters, a 4 mL blood sample was withdrawn from the cephalic vein of each dog into 2 labelled tubes, one without anticoagulant and the other with heparin. After centrifugation, the serum samples were stored at -20°C until further processing.

Detection of microfilariae
The presence of circulating microfilariae was examined using a modified Knott's test. The test was performed according to Bazzochi et al. [2]. The differentiation of microfilariae was based on the morphological characteristics of the microfilariae of the two Dirofilaria species, namely the shape of the cephalic and caudal ends of D. immitis and D. repens [10].

Detection of adult female D. immitis antigen
Serum samples were tested for the presence of circulating adult female D. immitis antigen by the commercially available enzyme-linked immunosorbent assay VetLine Dirofilaria Antigen ELISA, in accordance with the manufacturer's instructions. The point prevalence was calculated within the cross-sectional design of the epidemiological study.

Statistical analysis
All data were analyzed using the statistical program PAST 4. The chi-square test and Fisher's test were used to estimate statistical differences between groups. The level of statistical significance was set at P < 0.05.
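The paper's statistics were run in PAST; as an illustrative cross-check only, a minimal Python sketch using SciPy can reproduce the per-shelter prevalences and an overall chi-square test from the counts reported in the Results section below:

```python
from scipy.stats import chi2_contingency

# Positive / tested counts per shelter, as reported in the Results section
counts = {"A": (11, 94), "B": (4, 52), "C": (27, 65), "D": (31, 55), "E": (12, 70)}

for shelter, (pos, n) in counts.items():
    print(f"Shelter {shelter}: prevalence = {100 * pos / n:.2f}%")

# 2 x 5 contingency table: rows = positive / negative, columns = shelters
table = [[pos for pos, n in counts.values()],
         [n - pos for pos, n in counts.values()]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value here would indicate that positivity is not independent of shelter, consistent with the differences in housing and preventive treatment described above.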
RESULTS
For the population of dogs from the 5 shelters in the South Bačka and Central Banat districts, the total D. immitis antigen prevalence found by the ELISA method was 25.30% (85/336). The highest number of positive findings was observed in shelter D, where 31 out of 55 tested blood sera were positive (56.36%). In shelter C, 27 antigen-reactive blood serum samples were identified out of 65 examined (41.54%). In shelter E, a total of 70 blood sera were examined, of which 12 gave a positive result, a prevalence for D. immitis antigen of 17.14%. A prevalence of 11.70% was obtained in shelter A, where a total of 94 samples were examined, of which 11 gave a positive finding. The lowest prevalence for D. immitis antigen was recorded in shelter B (7.69%), where a total of 52 blood sera were tested and a positive result was obtained in 4 samples. Microfilariae of D. immitis were detected, by modified Knott's test, in all of the antigen-positive dog samples, except in 2 dogs from shelter B. The prevalences of D. immitis in the 5 shelters are shown in Figure 1. The results were also analysed by the age and sex (male/female) of the dogs, and these data are shown in Table 2. There was no significant difference between males and females in any of the shelters. In all of the shelters there was a significant difference between the prevalence for D. immitis in dogs under 2 years of age and that in all the other age groups.

DISCUSSION
In this study a prevalence for D. immitis of 25.30% was confirmed for the northern part of Serbia. The results gained in this study are important from the veterinary point of view, but also from the Public Health point of view. This study shows the persistence of dirofilariosis in shelters under different maintenance conditions. Comparing the data over the last seventeen years, it can be stated that there has been a constant increase in the prevalence of D. immitis antigen in dogs in the northern part of Serbia. Retrospectively, in 2003-2004, the prevalence for D. immitis in dogs was 5.9-7% [39]; in 2006-2007, the prevalence in dogs with clinical symptoms was 80%, but in dogs with no clinical symptoms it was 10-11% [33]. During the period from 2009 to 2013, the prevalence for D. immitis in dogs with or without clinical symptoms was 27.6% [37]. Pajković et al. [26] reported that the prevalence for D. immitis in military dogs was 14% in the period from 2004 to 2010. Savić et al. [36], in a study from 2013-2014, showed that the prevalence for D. immitis in hunting and military dogs was 22.78%, and in pet dogs 22%. In the same study, the authors found a lower prevalence for D. immitis in dogs from an asylum, namely only 3.12%, but only one asylum was analysed. The total prevalence in pet, military and asylum dogs reported by the authors [36] for D. immitis was 15.29%, and in 92.3% of positive samples D. immitis was confirmed by PCR. During the period 2015-2017, Savić et al. [38] analysed 482 blood samples from dogs entering shelters and asylums, and concluded that the prevalence for D. immitis among stray dogs in the Vojvodina region is between 5% and 8%, and that D. immitis is more prevalent than D. repens. In the latest published research in Vojvodina, Potkonjak et al. [29] detected infection with Dirofilaria spp. in 27.1% of stray dogs. Dirofilaria spp. infection is also widespread in the countries surrounding Serbia.
The prevalence for D. immitis in dogs from Hungary was reported to be 8.1% [6], 3.3%-7.2% in Romania [13,21] and 7.4%-9.2% in Bulgaria [11,15], although Panayotova-Pencheva et al. [27] found adult D. immitis during necropsies of dogs and reported a prevalence of 33.3%. The prevalence for D. immitis in dogs from North Macedonia was reported to be 12.5% [16]. Rapti & Rehbein [30] reported the prevalence of heartworm in dogs from Albania as 13.5%. The prevalence for D. immitis in dogs from Bosnia and Herzegovina was reported to be 3.1% [43], and 0.6%-7.5% in Croatia [12,14]. The prevalence found in this study for the northern part of Serbia is rather high compared to the neighbouring countries. It is also higher than in the southern part of the country; one example is the wider area of the city of Leskovac, where the prevalence of D. immitis is 8.51% [19]. This could be because much more research has been done in dogs and wild canids in the northern part of Serbia, but it may also be a consequence of the different landscape: the south of the country is more hilly and not as humid as the northern part. It is also interesting that the South Bačka region has a total prevalence for D. immitis of 27.44%, while the Middle Banat region has a lower prevalence of 17.14%. This could be because fewer samples were analysed from the Middle Banat region, but it may also be a consequence of geographic factors. In 2 dogs from asylum B, microfilariae of D. immitis were not detected even though the dogs were positive by ELISA. After analysing the results, it was found that those dogs had been treated once with a larvicidal therapy before sampling. Thus, those two dogs were positive because they had not yet been cleared of the adult forms of Dirofilaria. These dogs were treated with a "slow kill" therapy, which is based on the elimination of larval stages while the adults remain alive for some time [20]. The possibility of occult infection, meaning infection with adult Dirofilaria immitis in the absence of circulating microfilariae, is about 20% in dogs. It can be detected in dogs with prepatent infection, unisexual heartworm infection, drug-induced sterility of adult heartworms, and immune-mediated infection [31]. In our case it was a consequence of drug treatment. In all the shelters there was a significant difference between the prevalence for D. immitis in dogs under 2 years of age and that in all the other age groups. This was expected, even though it is more likely that older dogs have adult forms of D. immitis. There are two reasons for this situation, which apply especially to shelters. The first is that there are many more young dogs in the shelters than old ones (over 60% were dogs under 2 years of age in all the shelters except shelter B), so more young dogs were available for sampling than older ones. The other reason is that in some shelters (B and, partly, D), when dogs enter the shelter they are checked for D. immitis infection and, if positive, are in most cases treated with larvicidal therapy for some time [20]. In relation to the sex of the dogs analysed, approximately half were males and half were females in every shelter. There was no significant difference between males and females in any of the shelters, meaning that the prevalence for D. immitis was similar for both sexes. There is no evidence in the literature that the sex of the dog influences infection with Dirofilaria immitis in any way.
CONCLUSIONS
The results of this study indicate the persistence of Dirofilaria immitis infection in dogs from 5 shelters. The shelters are located in the South Bačka and Central Banat districts, in the Autonomous Province of Vojvodina, in the northern part of Serbia. The study was conducted during the period from May 2017 to October 2019. These results indicate that dirofilariosis should always be considered in this region, which has been endemic for the disease in recent years. This also means that shelters can serve as sentinels of D. immitis infection. They can be reservoirs of the disease for dogs but also for humans. Dogs from shelters rarely have preventive measures applied against D. immitis infection, and in most cases they are without any protection against vector-borne diseases. Due to the public health threat, it is necessary to enable routine diagnostic testing of dogs in shelters and treatment of infected animals, as well as prophylactic treatment of healthy dogs, to reduce the spread of infection to other dogs and to humans.

Declaration of interest. The authors report no conflict of interest. The authors alone are responsible for the content and writing of the manuscript.
Occupational Health and Safety Status in the Management of Faecal Sludge in Ghana: A Case Study of the Lavender Hill Faecal Treatment Plant

Faecal sludge management in Ghana has been undertaken at different scales using different methods for years. Each of these methods involves human intervention in one form or another. Direct human-to-faecal-matter contact cannot be avoided completely in faecal sludge treatment; however, the degree of contact depends on the finesse of the technology employed. The emission of gaseous substances along the treatment value chain, both directly from the sludge and from chemicals employed in the treatment process, is another challenge. In whichever way the situation is looked at, all these processes will present occupational safety and health issues to workers and other stakeholders if not proactively regulated. Data collected at five different levels using different instruments at the Lavender Hill Faecal Treatment Plant, operated by Sewerage Systems Ghana Ltd., was analysed for this study. This was done to ascertain the status of health and safety practices and worker exposure to risks and hazards in typical faecal management in Ghana. It was realised that a very comprehensive safety management system has been instituted to ensure protection for all. Activities at the plant are regulated by an approved written health and safety policy, an environmental management policy and standard operating procedure documents. Physical structures have safety warning signs fixed on them where appropriate, and the plant's operations are supported with state-of-the-art technology: gas detectors, buoyancy devices around open tanks, supply of appropriate personal protective equipment, and provision of sanitary facilities, among others. Management of the plant considers the health and safety of every person admitted to the site an utmost priority. This is demonstrated by management's commitment to releasing funds and its direct participation in safety programs. Awareness creation in the form of orientations and trainings is effectively communicated to all site patrons. The mental wellbeing of workers is ensured through a welfare system and a physical activity program. It is therefore not surprising that, despite the extent of hazards associated with handling faecal matter, no serious incidents, accidents or health-related issues had been identified after more than a year of operation.

Introduction
Workers' health and safety at the workplace is provided for under the Labour Code of Ghana as the responsibility of employers [1]. This is in view of the high rate of accidents recorded in the workplace. According to the Labour Department of Ghana Annual Report for the year 2000 [2], a total of 8,692 work-related accidents were reported for compensation claims. Apart from accidents, the business of faecal sludge treatment presents its own challenges: uncontrolled contact with faecal matter or any faecal-matter-laden material exposes workers to a myriad of diseases [3]. Ensuring workers are safe from these hazards is compliant with international best practice. The International Labour Organisation (ILO) constitution [4] highlights that the protection of the worker against sickness, disease and injury arising out of employment is a fundamental element of social justice. Occupational safety and health is a human right, and decent work is ultimately safe work [5].
At the Lavender Hill Faecal Treatment Plant, which is managed by Sewerage Systems Ghana Limited (SSGL), worker health and safety is considered a topmost priority by management. This is demonstrated by its commitment in terms of the provision of resources and support for the activities of the Health, Safety and Environment (HSE) department. Worker engagement, education and training, and effective communication of safe work habits are a continuous effort at the treatment plant. In addition to the Lavender Hill Treatment Plant, the company has other treatment plants [3]. According to the work of Asumeng in 2015 [6], Ghana does not have a national policy on occupational health and safety management as ILO Convention No. 155 (1981) requires. There are, however, the Labour Act 2003 (Act 651) [7], the Factories, Offices and Shops Act 1970 (Act 328) [8] and the Mining Regulations 1970 (LI 665) [9], which contain some regulations on health and safety management in the work environment.

History of Faecal/Sewage Treatment in Ghana
The coverage of modern methods of treating faecal matter and sewage in Ghana is very low, as observed by UNICEF in 2016 [10]. The national average for sewerage coverage is as low as 4.5%. Tema is the only municipality with a comprehensive sewerage system. Accra has a sewerage system covering the following areas: the State House and ministries area, Dansoman and parts of the Central Business District, with low property connections. There are also a number of satellite sewerage systems for the Teshie-Nungua, Burma Camp, University of Ghana (Legon), Achimota School, 37 Military Hospital and Ridge areas. Most of these treatment facilities have broken down and are not in use. The Mudor treatment plant, serving the Dansoman/Korle Bu areas, the Ministries, Flag Staff House etc., broke down [10] but has been rehabilitated and is being managed by Sewerage Systems Ghana Limited as one of its 3 current plants since 2017 [3]. Until recently, before SSGL's Lavender Hill faecal treatment plant was commissioned, septage was discharged directly into the sea [11,12]. The consequences of this for the sea as a common property resource, and for the community in which the discharge point was located, in terms of the spread of diseases and odour, were enormous. Since the establishment of the Lavender Hill faecal treatment plant, an average of about 200 cesspit emptiers have discharged at the plant every day [3]. Health and safety at work are considered very important issues, as they are intrinsically linked with the overall well-being of working people [13]. The consequences of wastewater treatment hazards can be severe in terms of predisposition to sickness; the probability of death occurring, even though small, is a reality. The consequences manifest in the cost of taking care of employees and the attendant effect on machinery downtime and general productivity. Only a small percentage of people work in the wastewater treatment industry, and there do not appear to be any statistics on the phenomenon in the West African sub-region, since the treatment of faecal waste is still in its developmental stages. The risks involved are similar across some regions [14]. Due to the nature and characteristics of the faecal sludge being received at the Lavender Hill Faecal Treatment Plant (LHFTP), the implementation of health and safety at the plant was conceived as an inherent part of the plant right from the onset. Therefore, the roll-out of health and safety operations started from ground zero, as it were.
The establishment of structures and operating procedures started from scratch and has been developing gradually over the period. Even though the development is not yet at its optimum, giant strides are being made towards the achievement of this noble goal. SSGL's medium- to long-term objective of obtaining ISO certification for its operations adds impetus to its health and safety goal of achieving standards matching international best practice anywhere in the world.

Laws Governing Occupational Health and Safety in Ghana
Pieces of legislation introduced by government in the past have sought to protect the health, safety and welfare of all workers. These include the Factories, Offices and Shops Act 1970 (Act 328) [8] and the Labour Act 2003 (Act 651) [7]. The Labour Act, for example, makes it obligatory for the employer to "ensure that every worker employed in Ghana works under satisfactory, safe and healthy conditions" (Labour Act 2003, Act 651, Article 118:1). Embedded in these laws are the rights and obligations of both employers and employees. It is required, for example, that employees use the safety appliances, fire-fighting equipment and personal protective equipment provided by the employer. The employer's obligations under the Labour Act include setting standards to safeguard the wellbeing of employees, providing personal protective equipment, and providing the necessary information, supervision and training consistent with the level of literacy of the employees. Asumeng and co-workers [6] noted that the Labour Act 2003 (Act 651) [7] is not specific on how to implement safety provisions at the organizational level, or on whom accidents and occupational illnesses should be reported to. It does not specify what is to be considered an occupational illness, nor who is responsible for ensuring that industries in Ghana implement corrective actions as per recommendations. There is no national body, policy or process governing occupational health and safety management in Ghana. These gaping holes, or lacunae, in the law leave the implementation of health and safety in many organisations in the country at the mercy of the benevolence of business owners. As a result, workers who are involved in workplace accidents and incidents requiring the payment of compensation are either bullied into keeping quiet, threatened with dismissal, or do not get any compensation at all. Numerous injuries, illnesses, property damages and process losses take place at different workplaces, but due to under-reporting or misclassification, arising from a lack of thorough standards or unfamiliarity with the existing guidelines, people are not normally aware of such events or of their actual or potential consequences [6]. These loopholes notwithstanding, the operation of the Lavender Hill Faecal Treatment Plant is undertaken with human feeling and respect for the basic right of every human being to a decent job. Handling faecal matter in any shape or form ought to be considered beyond the ordinary, and SSGL management is very committed to this self-imposed obligation even when not being "watched", as it were.
Occupational safety and health have been repeatedly mentioned as a fundamental right of every worker, and are referenced in the Alma Ata Declaration on Primary Health Care (1978) [15], the WHO constitution, the UN's Global Strategy on Health for All (2000) [16], the ILO Constitution [4] and many other multilateral conventions and documents.

Occupational Hazards in the Faecal Treatment Industry
The industry relies on septage trucks that visit individual homesteads to siphon septage. Back at the plant there is a complex mix of activities, ranging from the movement of vehicles, maintenance of plant equipment and machinery, civil works and pedestrian activities to the offloading of septage, environmental cleaning, working around open liquid-holding tanks, flaring of biogas, and physico-chemical and microbiological laboratory analysis of the faecal sludge, among others. Each of these activities presents its own health-associated risks and unique hazards. These hazards are appreciated in their various forms, and measures have been instituted to alleviate them. According to Mackay [17], the aim of any harm-prevention strategy should be to keep exposure to risk factors below the level at which harm can occur. It is essential to note that hazards only represent the potential to cause harm. Whether the harm actually occurs depends on circumstances, such as the toxicity of the health hazard, the amount of exposure, the extent of the risk factors present, and the duration of exposure to the risk factors. The same work [17] indicates that preventive strategies have elements comprising both surveillance and control measures, and that the proper design of preventive strategies requires an understanding of the relationships between hazard, harm and risk. There is theoretical and empirical evidence linking hazards to harms through risk factors [18-20]. There is therefore a need to understand these basic concepts: hazards, risks, and harm. Hazards, according to the Ghana Ministry of Health (MOH, 2010) [21], refer to those features of the workplace, either physical or psychosocial or a combination of both, that have the potential to lead to harm or unwanted consequences. A hazard is an inherent property of a substance, agent, source of energy or situation having the potential to cause considerable consequences. The likelihood that exposure to a hazard will lead to harm is technically referred to as risk. The MOH (2010) [21] noted that risk represents "the probability that damage to life, health, and/or the environment will occur as a result of a given hazard (such as exposure to a toxic chemical)". At the Lavender Hill faecal treatment plant, the following key occupational safety and health hazards are identified: safety hazards, biological hazards, ergonomic hazards, chemical hazards and psychological hazards.

Safety Hazards
These are the most common and will be present in most workplaces at one time or another. Safety hazards include unsafe conditions that can cause injury, illness and death. They include: spills on floors or tripping hazards, such as blocked aisles or cords running across the floor; working from heights, including ladders, scaffolds, roofs, or any raised work area; unguarded machinery and moving machinery parts, including guards removed or moving parts that a worker can accidentally touch; electrical hazards like frayed cords, missing ground pins and improper wiring; confined spaces; and machinery-related hazards (lockout/tagout; forklifts).
Biological Hazards
This type of hazard is associated with biological agents. At Lavender Hill these include pathogenic viruses, bacteria and helminths. They arise as a result of exposure to blood and other body fluids, fungi/mould, bacteria and viruses, insect bites, and contact with faecal matter, and they have the potential to result in infections of various forms.

Ergonomic Hazards
These usually occur when the type of work, body positions and working conditions put strain on the worker's body. They are the hardest to spot, since it is not always possible to immediately notice the strain on the body or the harm that these hazards pose. Short-term exposure may result in sore muscles the next day or in the days following exposure, but long-term exposure can result in serious long-term illnesses. Ergonomic hazards include: improperly adjusted workstations and chairs, frequent lifting, poor posture, awkward movements (especially if they are repetitive), repeating the same movements over and over, having to use too much force (especially frequently), and vibration.

Chemical Hazards
These are present when a worker is exposed to any chemical preparation in the workplace in any form (solid, liquid or gas). Some are safer than others, but for workers who are more sensitive to chemicals, even common solutions can cause illness, skin irritation, or breathing problems. They include liquids such as cleaning products, paints, acids and solvents.

Psychological Hazards
Psychosocial hazards are defined to include the interactions among job content, work organisation and management, and other environmental and organisational conditions on the one hand, and the employees' competencies and needs on the other. Thus, psychological hazards refer to various forms of workplace interaction that have a hazardous influence on employees' health through their perceptions and experience (ILO, 1986) [4]. A psychological hazard is any hazard that affects the mental well-being or mental health of the worker, and may have physical effects by overwhelming the individual's coping mechanisms and impairing the worker's ability to work in a healthy and safe manner [22]. Cox and Griffiths (2005) [23] also consider psychosocial hazards to be those aspects of the design and management of work, and of the social and organisational contexts of work, that have the potential to cause psychological or physical harm.

Study Area
All cesspit trucks from Greater Accra and some parts of the Eastern and Central Regions, estimated at 200 daily, discharge at the LHFTP. For this study, the Lavender Hill Faecal Treatment Plant, where almost all the cesspit trucks discharge, was selected; it is located in James Town. This site was selected because it is the first of its kind in the country, and in parts of Africa, in terms of faecal sludge treatment technology applications.

Methodology
The Lavender Hill Faecal Treatment Plant employs a total of 102 staff, made up of 9 females and 93 males to date, with room for additional hands to be added in the near future. The workforce is made up of senior-level managers, middle-level technical personnel (engineers and other professionals), technicians and general plant attendants. Levels of exposure to faecal matter differ and occur at different times with different exposure durations. Faecal treatment is a specialised field which could potentially endanger life if not handled with care, and workers face various health-related hazards.
Management is therefore concerned and takes responsibility for every worker. Data was collected at five different levels using different instruments:
1. Enumeration of health and safety structures installed or implemented since the operation of the plant began in December 2016 and still in operation. These include the status of:
i. fire detection and suppression/extinguishing mechanisms;
ii. physical safety structures: fall arrests, buoyancy devices and warning signs installed;
iii. mechanisms for detecting tolerable levels of gases and other harmful substances/air quality analysis;
iv. safety committee meetings;
v. medical monitoring;
vi. the physical activity program;
vii. sanitary methods.
2. The number of personnel given safety induction and on-the-job training, and participation in the continuing training program of the company.
3. The types and quantities of personal protective equipment procured and supplied, with appropriate training on usage, since the beginning of operations.
4. The level of understanding and appreciation of health and safety consciousness among workers.
5. Liaison with external auditing bodies on the company's activity-reporting obligations.
A total of 32 workers, representing 31.4% of the entire workforce of SSGL, were interviewed. Workers were selected randomly. The selected workers were a mix of staff who had been around since the inception of plant activities, those who joined somewhere mid-way, and very recent staff. Major health hazards considered included vulnerability, administrative changes and the level of knowledge. Observation techniques were also adopted to understand the major installations, vulnerable zones and risk factors at various places. During the interviews, all data were recorded properly in a notebook. The collected data were manually coded according to the objective of the study, summarized and examined carefully, and then entered into an MS Excel sheet. The study also relied on secondary data for verification of facts and for comparison with previous works. Primary data are first-hand information collected through various methods such as observation, interviewing, key informant interviews and focus group discussion. The observation methods were participatory, structured and controlled. The interview method of collecting data involved the presentation of oral-verbal stimuli and replies in the form of oral-verbal personal responses. The study also completed one focus group discussion (FGD). Secondary sources of data and information included HSE departmental records, survey records, written documents, relevant books, articles, reports, journals and research papers. The study reflects the general health and safety status of the SSGL Lavender Hill site.

Structural and Operational Developments
As noted above, the establishment of structures and operating procedures started from scratch and has been developing gradually over the period. The development of physical health and safety structures and procedures, of orientation and training programs for stakeholders, of personal protective equipment types and of related matters has been evolving. Since this type of innovation is new to this environment, the challenge to innovate and think outside the box is a daily occurrence.
The experience of staff of the HSE department, coupled with that of other experts and colleagues from diverse backgrounds, especially in the engineering fields, has contributed immensely to the success story of the Lavender Hill Faecal Treatment Plant today.

Instituted Health and Safety Structures at Lavender Hill
Since the inception of the operation of the LHFTP in December 2016, resources, time, effective consultation and collaboration, and careful planning have resulted in the establishment of a range of structures. These physical and institutional structures, working in concert, have ensured the foundation is properly set for the effective implementation of health and safety.

Manpower Development in Health and Safety at LHFTP
At the LHFTP, different categories of stakeholders are given orientation and training. These categories include staff of SSGL, contractors of the company and their assignees, septage truck operators and visitors. The visitor category is made up of people ranging from students in academic institutions to researchers, administrators in local government agencies and politicians.

Safety Induction/Orientation
For each category, a visit to any part of the plant was preceded by a safety orientation or briefing and the provision of basic personal protective equipment, normally made up of head, hand and foot protection. The challenge was always with septage truck emptier drivers, who often ignored safety advice. This stemmed from a history of non-use of PPE which had become part of them; to them it was business as usual, as opposed to the innovative ways which were now the order of the day. In all, a total of one thousand six hundred and eighty-one (1,681) stakeholders received safety induction prior to a visit, to commencing an actual work schedule as staff, or to commencing a contract. The content of the materials taught to each of these categories differs. The number trained in 2017 was one thousand two hundred and fifty-eight. This is significant in that it was the first full year of plant operations.

Staff Training
In addition to the induction or safety orientations preceding the actual commencement of work, staff of SSGL undergo regular safety training. These trainings include training on emergency response, fire incidents and firefighting, first aid, and permits, among others. Within the period, selected general staff were trained to become safety assistants. Training and retraining of staff ensured staff safety obligations were fulfilled. Trainers were drawn both from internal expertise and externally; liaison with external experts also helped develop strong bonds with key institutions, which tended to benefit SSGL. Forty-one different safety items, applied in varied situations and circumstances, were issued out. The various types of nose and hand protection were the most consumed.

Safety Consciousness of Workers
The knowledge level of workers was tested with a simple random survey. A total of 32 workers, representing 31.4% of the entire LHFTP workforce, were interviewed. The selected workers were a mix of older personnel who had been at post since the inception of plant operation, those who joined somewhere mid-way, and very recent staff. The personnel were divided into three equally-sized knowledge groups based on their health and safety knowledge score, as sketched below.
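A minimal sketch of this score-based grouping, using the KLS bands defined in the next paragraph and assuming each of the six assessment areas is scored from 0 to 5 (the example scores are hypothetical):

```python
def kls_group(area_scores: list[int]) -> str:
    """Classify a worker by Knowledge Level Score (KLS).

    Six areas (hazard types, PPE donning/doffing, first aid, emergency
    response, basic hygiene, work around open liquid-holding tanks),
    each scored 0-5, give a total KLS of 0-30.
    """
    assert len(area_scores) == 6 and all(0 <= s <= 5 for s in area_scores)
    kls = sum(area_scores)
    # The source bands are "less than 10", "11-20" and "21-30";
    # a KLS of exactly 10 is unassigned there and treated here as low.
    if kls <= 10:
        return "low"
    elif kls <= 20:
        return "medium"
    return "high"

print(kls_group([4, 3, 5, 4, 4, 3]))  # KLS = 23 -> "high"
```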
Respondents were grouped by their Knowledge Level Score (KLS) into three groups, namely:
i. Low knowledge: KLS of less than 10;
ii. Medium knowledge: KLS of between 11 and 20;
iii. High knowledge: KLS of between 21 and 30.
Knowledge levels were tested in six main areas: types of hazards, donning and doffing of PPE, awareness of first aid, response to emergencies, basic hygiene, and work around open liquid-holding tanks. Each of these was allocated a maximum score of five (5), representing high knowledge, down to zero (0), representing little or no knowledge at all. The results showed that the majority of workers interviewed (53.1% with medium knowledge and 31.3% with high knowledge) had a very satisfactory level of knowledge and consciousness about issues of health and safety. However, 15.6% fell within the low-knowledge bracket. These definitions (listed above) will be used as a baseline to assess the knowledge level of future knowledge groups and determine any variations. For example, this analysis will be compared to next year's data to see whether knowledge levels within the categories change within the workforce. The level of understanding and appreciation of health and safety among workers is an important parameter in determining the sort of training program to adopt. It is also a measure of the performance of the HSE department.

Liaison with External Auditing Bodies
Over the period of operation of the LHFTP, important state regulatory agencies have been involved in ensuring adherence to regulations and laws. Some of these organisations include the Department of Factories Inspectorate, the Ghana National Fire Service, the Environmental Protection Agency, the Ghana National Ambulance Service and the Red Cross Society of Ghana. Collaboration has been in the areas of ensuring adherence to standards, advisement, training and the provision of other services. Beyond these, subscriptions to international journals on health and safety and membership of internationally recognised bodies on health and safety provide a platform for knowledge sharing and acquisition.

Conclusion and Recommendation
This study aimed to determine the current health and safety status of the Lavender Hill Faecal Treatment Plant in Accra, Ghana. The focus has been on the set-up of health and safety structures, both physical and institutional. The study has brought to the fore, in a systematic manner, the establishment, evolution and implementation of safety principles to ensure the operational safety of all kinds of patrons of the plant, chief among them the workers, who spend most of their productive time in the plant. Workers are safe from most of the hazards identified on site. This is due to the level of alertness and consciousness among workers. Malaria is the main disease that plagues workers, as is evident from the medical records at the human resources department. The collection of water in puddles on site is being corrected with engineering controls, and mosquito repellents are supplied to workers who come on night shifts. SSGL is building a culture of a very safe working environment for all patrons of its facilities, LHFTP being the headquarters. The setting up of various structures and the systematic approach to awareness creation, communicated in simple language, is the vehicle for achieving this objective.
The provision of appropriate personal protective equipment meeting international standards ensures that adequate protection is offered to workers in situations where PPE use is the last resort. An array of hazard control measures is employed depending on the situation at hand. The existing health and safety knowledge that most workers bring from their previous places of work, even though not applicable en bloc, is still useful, since the basic safety precautions needed for safe work to a large extent apply in most workplaces; much of it is the application of common sense. The LHFTP cherishes its relationships with external organisations, be they regulatory or for support and collaboration. The frontier is being pushed to network with more relevant organisations so as to reap the full benefits of synergy. An important contribution of this study is that it provides baseline data for future wastewater treatment plants of this magnitude in the West African sub-region. What is happening at Lavender Hill is an innovation, the first of its kind in the whole sub-region; subsequent plants may not have to reinvent the wheel, whereas the LHFTP did not have the advantage of such data.
Pre- and Post-Operative Nutrition Assessment in Patients with Colon Cancer Undergoing Ileostomy

Introduction: Patients undergoing ileostomy surgery often experience electrolyte disturbances and dehydration, especially during the first post-operative period. Recently, research has also begun on how the newly constructed ileostomy affects the patient's nutritional status.

Aim: The aim of the present pilot study was to assess the nutritional status of patients before and after the construction of the ileostomy, as well as nutrition-related factors.

Material and Method: This was a pilot study. The sample consisted of 13 adult patients diagnosed with colorectal or colon cancer who underwent scheduled ileostomy surgery. The evaluation tool used was the Original Full Mini Nutritional Assessment (MNA). Patients underwent nutritional assessment before the surgery (time 0), on the 7th post-operative day (time 1), and on the 20th post-operative day (time 2). The statistical significance level was set at p < 0.05.

Results: All patients had a drop in MNA score on the 7th and 20th post-operative days. Factors associated with the MNA were weight loss, mobility, body mass index (BMI), number of full meals consumed per day, portions of fruits and vegetables consumed per day, and mid-arm circumference (p < 0.05, respectively). Pre-operatively, 38.5% of patients had severe weight loss (>3 kg), 23% moderate weight loss and 38.5% minimal weight loss. Pre-operatively, 92.3% of participants were able to move on their own, compared with 69.2% on the 20th post-operative day. Furthermore, 84.6% of participants had a BMI >23 kg/m² pre-operatively, compared with 30.8% on the 20th post-operative day. In terms of portions of fruits and vegetables consumed per day, 30.8% of patients consumed at least 2 servings pre-operatively, and no one (0%) did on the 20th post-operative day. Moreover, pre-operatively all participants (100%) had a mid-arm circumference >22 cm, while on the 20th post-operative day only 38.5% did.

Conclusions: In the first 20 days after the construction of an ileostomy, the nutritional status of the patient is significantly affected. Decreased patient nutrition, in both quantity and ingredients, and reduced fluid intake appear to adversely affect the patient's nutritional status.

Introduction
During recent decades, the number of ileostomies created has been expanding enormously due to the surgical management of various intestinal disorders. Depending on indications, surgical technique and emergency demands, stomas may be either temporary or permanent [1]. An ileostomy is a surgically created opening of a piece of ileum on the abdomen, through which digested food passes into an external system. The most prevalent conditions leading to ileostomy construction are bowel cancer, trauma and acute abdomen [2]. An ileostomy is frequently associated with various stoma complications [3,4], which occur in up to 50% of cases [5] and are attributed to both operative and patient-related factors [5-7]. Moreover, an ileostomy is associated with increased morbidity and mortality and a staggering economic burden on patients and the healthcare system [7]. In terms of clinical characteristics, an ileostomy is associated with malnutrition, excessive output (defined as output ≥1500 mL for two consecutive days) and problems related to leaks or stoma appliances [8-10]. In more detail, the ileum is responsible for the absorption of lipids, carbohydrates, proteins and vitamin B12 [9].
Patients, being typically deprived of their terminal ileum, are at higher risk of dehydration, impaired nutritional status, electrolyte imbalances, and deficiencies in B12, iron, magnesium, fat, and folic acid [11,12]. Therefore, it is clinically meaningful to provide nutrition support (oral, enteral or parenteral), water and electrolytes in order to prevent malnutrition [9], along with proper dietary requirements [11]. Prompt evaluation of nutritional status allows the identification of patients at risk, thus contributing to recovery after surgery. This evaluation may also improve clinicians' ability to empower patients to manage their ileostomy more efficiently [6]. To the best of our knowledge, data exploring the nutritional status of patients with an ileostomy are limited. However, it is widely accepted that nutritional evaluation is important with respect to patient outcomes. Thus, the aim of this study was to explore the nutritional status of patients with an ileostomy at three time points: (a) before the surgery (time 0), (b) the 7th post-operative day (time 1), and (c) the 20th post-operative day (time 2), as well as to identify factors associated with nutritional status.

Study Population, Design, Setting, and Period of the Study
In the present pilot study, 13 adult patients diagnosed with colorectal cancer who underwent scheduled surgery for an ileostomy in a public hospital in Attica were enrolled. All participants had a standard (Brooke) end ileostomy. It was a convenience sample. The study included patients during the period August 2017-July 2018.

Sample: Inclusion and Exclusion Criteria
During the period in which the research was conducted, from a total of 20 patients initially identified as eligible for participation, only 13 were finally enrolled, because 7 refused to participate or had other comorbidities. Inclusion criteria were patients: (a) diagnosed with colorectal cancer, (b) hospitalized in a public hospital in Athens during the study period, and (c) able to write and read the Greek language fluently. Exclusion criteria were patients: (a) with a history of mental illness, (b) with other inflammatory bowel disease (ulcerative colitis and Crohn's disease), and (c) unable to communicate throughout the study period.

Data Collection and Procedure
Data were collected by the interview method, using a questionnaire developed by the researchers to fully serve the purposes of the study. Completion of each questionnaire lasted approximately 15 min and took place during the evening shift, when patients were free of other tasks or examinations. Data were also collected through the medical history, physical assessment, or in collaboration with other specialists. To measure height accurately, participants had to stand with feet flat, together, and against the wall; a metal tape was used to measure from the base on the floor to the marked measurement on the wall. To measure weight accurately, a digital scale placed on firm flooring was used, with participants standing with both feet in the center of the scale. All patients underwent weight measurements under the same circumstances (the same scale, the same clothing, the same hour). Body mass index (BMI) was calculated using the following formula: BMI = body weight / height² (kg/m²).
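As a minimal illustration of this calculation, together with the classification bands adopted in the next paragraph, the following Python sketch uses a hypothetical patient:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: body weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classification bands adopted in this study (obesity taken as >= 30.0)."""
    if value < 18.5:
        return "underweight"
    elif value < 25.0:
        return "normal body weight"
    elif value < 30.0:
        return "overweight"
    return "obesity"

# Hypothetical patient: 78 kg, 1.72 m
value = bmi(78, 1.72)
print(f"BMI = {value:.1f} kg/m^2 -> {bmi_category(value)}")  # 26.4 -> overweight
```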
The following BMI classification was adopted: <18.5 underweight, 18.5-24.9 normal body weight, 25.0-29.9 overweight, and ≥30.0 obesity. In the present study there was no intervention or control group, since this research merely recorded nutritional status in patients with an ileostomy before and after surgery. Nutritional Assessment (Study Instrument) To measure nutritional state, the Mini Nutritional Assessment (MNA) was used. This screening tool identifies persons who are malnourished or at risk of malnutrition. The MNA, which was developed almost 20 years ago, still remains the most widely used screening tool for malnutrition among adults or the elderly. Initially, this tool included 18 questions (Original Full MNA); later, the Short Form of MNA, consisting of 6 questions, was constructed to simplify the process [13][14][15][16]. In our pilot study, the Original Full MNA, which is available at https://www.mna-elderly.com/ [13], was used to assess the nutrition of patients with an ileostomy. The Original Full MNA is recommended for a more detailed assessment of patients' nutritional status and, apart from demographic data, it also includes clinical features such as: • decrease in food intake due to loss of appetite, digestive problems, or chewing or swallowing difficulties during the last 3 months. The final score attributed to the patient (malnutrition indicator score) ranges from 0 to 30. Patients with less than 17 points were characterized as "malnourished", those with 17-23.5 points as "at risk of malnutrition", and patients with 24-30 points as "at normal nutritional status" [13][14][15]. The MNA is a simple tool to measure nutritional status. It has been used in hundreds of studies and translated into more than 20 languages, with high sensitivity, specificity, and reliability. The MNA is recommended by many national and international clinical and scientific organizations and can be used by a variety of health professionals, including physicians, dietitians, nurses or research assistants [14]. The MNA provides several advantages in patients with an ileostomy. In more detail, this short and valid tool, which is easily applied in daily practice, may help clinicians to develop prompt strategies to improve the nutritional state of ileostomy patients. Furthermore, in patients with malnutrition, perioperative support may decrease the risk of post-operative leakage and infectious complications [16]. Last but not least, the MNA is widely used in patients with cancer of all ages, even though it was developed neither specifically for this disease nor for persons younger than 65 years [17]. Ethical Considerations The study was approved by the Thesis Review Committee of the Post-Graduate Program "Wound Care and Treatment" of the Department of Nursing of the University of West Attica (Approval Reg. Number 123 - 6/2/2018). Patients who met the entry criteria were informed by the researcher about the purposes of this research. All patients participated only after they had given their written consent. Data collection guaranteed anonymity and confidentiality. All subjects had been informed of their right to refuse or discontinue participation in the study, according to the ethical standards of the Declaration of Helsinki (1989) of the World Medical Association. Informed Consent: Informed consent was obtained from all individual participants included in the study. Statistical Analysis All statistical analyses were performed with the SPSS statistical package (IBM SPSS Statistics, version 21.0; IBM Corp., Armonk, NY, USA).
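Before turning to the tests themselves, the cut-offs described above (the BMI bands and MNA score bands) can be encoded directly. The following is a minimal illustrative Python sketch (the analysis itself was performed in SPSS; the example patient values are hypothetical):

def mna_category(score: float) -> str:
    # Classify an Original Full MNA malnutrition indicator score (range 0-30)
    if score < 17:
        return "malnourished"
    if score <= 23.5:
        return "at risk of malnutrition"
    return "normal nutritional status"

def bmi(weight_kg: float, height_m: float) -> float:
    # BMI = body weight / height^2 (kg/m^2)
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal body weight"
    if value < 30.0:
        return "overweight"
    return "obesity"

# Hypothetical patient: 70 kg, 1.72 m tall, MNA score of 19
b = bmi(70, 1.72)
print(round(b, 1), bmi_category(b), mna_category(19))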
The normality of the distributions of continuous quantitative variables was assessed with the Shapiro-Wilk test, as well as graphically for symmetry and kurtosis (P-P or Q-Q plots). Continuous variables were expressed as median values (25th-75th percentile) and qualitative-categorical variables as absolute numbers and relative frequencies (%). We checked the statistical significance of differences between groups with Mann-Whitney U and Kruskal-Wallis tests for variables that did not follow the normal distribution. All p-values emerged from two-tailed tests, with the statistical significance level set at 5% for all analyses. Results The study population consisted of 13 patients, 10 men (76.9%) and 3 women (23.1%), who had colorectal cancer and underwent ileostomy surgery. Patients' characteristics are shown in Table 1. In terms of MNA score, the median score at time 0 (before surgery) was 24, at time 1 (7th day post-surgery) 18.5, and at time 2 (20th day post-surgery) 19.0. According to the MNA scale, scores from 24 to 30 are characterized as "normal nutritional status". Therefore, at time 0 patients were at normal nutritional status, whereas at times 1 and 2 patients were at "risk of malnutrition", since scores in the range of 17-23.5 points are characterized as at risk. Mini Nutritional Assessment score ranges are shown in Table 2. Table 3 presents factors associated with MNA score. In more detail, the factors that were statistically significantly associated across times 0, 1 and 2 (pre-operative, 7th post-operative and 20th post-operative day) were as follows: (a) Weight loss. Pre-operatively, 38.5% of patients had severe weight loss (>3 kg), 23.1% had moderate weight loss and 38.5% had minimal weight loss. Respectively, the percentages on the 7th post-operative day were 46.2%, 15.4% and 38.5%, and on the 20th post-operative day 53.8%, 15.4% and 30.8%. The difference in the percentage of those who suffered total weight loss on the 7th and 20th days was statistically significant (p < 0.05). (b) Mobility. A statistically significant difference was observed between individuals who pre-operatively could move on their own and those on the 20th post-operative day (92.3% vs. 69.2%), p < 0.05. (d) The number of full meals consumed per day. Pre-operatively, 84.6% of participants had at least 2 meals per day, while on the 20th day 69.2% of participants had at least 2 meals per day, p < 0.05. (e) The portions of fruits and vegetables consumed per day. Pre-operatively, 30.8% of patients consumed at least 2 servings of vegetables and fruits per day, while on the 20th day no one (0%) consumed 2 servings of fruits and vegetables, p < 0.05. (f) The mid-arm circumference. Pre-operatively, all patients had an arm circumference of more than 22 cm, while on the 20th post-operative day only 38.5% had an arm circumference >22 cm, p < 0.05. Table 3. Factors associated with MNA (n = 13).
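As a rough illustration of the nonparametric workflow used for the comparisons above (the study itself used SPSS), a Python sketch using SciPy with hypothetical MNA scores for two groups might look as follows:

import numpy as np
from scipy import stats

# Hypothetical MNA scores for two groups (e.g., split by a binary factor)
group_a = np.array([24.0, 22.5, 25.0, 19.0, 23.0, 26.0, 21.5])
group_b = np.array([18.0, 17.5, 20.0, 16.5, 19.5, 18.5])

# Shapiro-Wilk test for normality of each group's distribution
for name, g in (("A", group_a), ("B", group_b)):
    w, p = stats.shapiro(g)
    print(f"Group {name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# Distributions that are not normal -> compare groups with Mann-Whitney U
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.4f}")  # significant if p < 0.05

# Medians with 25th-75th percentiles, as reported for continuous variables
q25, med, q75 = np.percentile(group_a, [25, 50, 75])
print(f"Group A median {med:.1f} ({q25:.1f}-{q75:.1f})")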
Discussion This pilot study explored the nutritional state of 13 patients who underwent an ileostomy due to colon cancer. In terms of demographic characteristics, participants' ages ranged from 52.5 to 71 years. The prevalent age for colorectal cancer was over 50 years; however, a rise in younger individuals has been noticed, thus supporting the need for colonoscopy screening at age 45 in order to detect those with early-onset disease [18][19][20]. Haleshappa et al. [19] showed that 27.8% of 89 patients were diagnosed with colon cancer at an age <40 years. More awareness of young-onset disease will be critical to improve outcomes in this patient population [20]. In terms of sex, men are at a slightly higher risk of developing colon cancer than women. Worldwide, colorectal cancer is the third most common cancer, while for women rectal cancer does not figure in the top 10 cancers, whereas colon cancer ranks 9th [19,20]. This pilot study showed weight loss and a reduction in BMI from the pre-operative measurement to the third measurement (20th post-operative day). In more detail, on the 20th post-operative day, 53.8% of patients had severe weight loss compared to 38.5% pre-operatively, while only 38.5% had BMI >23 kg/m² compared to 84.6% pre-operatively. Moraes et al. [12] showed weight loss in more than half of patients after ileostomy, the majority of whom were above 50 years old, female, married and of incomplete elementary school education. A relevant study conducted by Kim et al. [21] showed severe weight loss and BMI reduction post-operatively among 72.7% (n = 50) of patients who underwent a colostomy or prophylactic ileostomy. Moreover, a weight loss of 5.2 ± 2.3 kg was present in 28% of stoma patients readmitted to hospital [22]. A reduction in BMI may develop up to 40 days after hospital discharge [21], while a sharper BMI decrease is more prevalent in patients with a high-output stoma (HOS) [6,22,23]. Early HOS (within 3 weeks of stoma formation) occurred in 75 (16%) of ileostomies/jejunostomies [23]. However, in less than two years after surgery, patients present adequate BMI [12]. The result of the current study that patients reduced the number of full meals and the intake of fruits and vegetables post-operatively is in line with Oliviera et al. [24], who showed that ileostomy patients (20%) avoided foods for fear of appliance leakage when compared with colostomy ones (4.8%), and reported the intake of vegetables and fruits as the most problematic.
Interestingly, patients with an ileostomy tend to decrease total intake and restrict consumption of some foods due to repercussions on the volume and appearance of feces and other issues associated with aesthetics and well-being. Avoidance of certain foods may in turn increase the risk of nutritional deficiencies [12]. Notably, the quality and quantity of food are crucial for ileostomy patients, since a reduction in protein intake may affect tissue repair after surgical construction of a stoma. Post-operatively, it is important to provide a high-energy, high-protein diet for wound healing that is low in excess insoluble fiber, while pre-operatively fiber and lactose intolerances are common [25]. Nutritional prehabilitation before major surgery is a matter of vital importance, as it has been shown to reduce post-operative complications, increase recovery speed, and improve patients' quality of life. Notably, prehabilitation is defined as the process of expanding a patient's functional and psychological capacity to reduce the potential deleterious effects of a significant stressor, such as a surgical procedure, and it involves a multifactorial and interdisciplinary approach. Malnourished surgical patients have higher post-operative morbidity, mortality, length of hospital stay and readmission rates [26,27]. Messaris et al. [28] showed a 60-day readmission rate of 16.9% (n = 102) after colon or rectal resection with diverting loop ileostomy. Kulaylat et al. [29] showed creation of an ileostomy to be an independent predictor of readmission within 30 days after a colectomy. Taking into consideration these elevated rates, it is easy to understand that malnutrition is an additional risk for complications. Migdanis et al. [30] recommend an oral isotonic drink post-discharge, which can have a prophylactic effect in patients with a newly formed ileostomy, preventing readmissions. It should be stressed that after hospital discharge, nutritional requirements may vary greatly depending on the remaining bowel, fluid and electrolyte abnormalities, overall health and other diagnoses [25]. Equally important is the finding of reduced mobility and independence of living in the post-surgery period. Indeed, patients experience physical impairment, deranged body function and emotional trauma, which further minimize their ability for self-care and limit their social or sexual life [31][32][33][34]. Ang et al. [32] demonstrated that following ileostomy surgery, the most common stressors reported by patients during hospitalization included stoma formation, diagnosis of cancer, and preparation for self-care. After discharge, the stressors encompass adapting to body changes, altered sexuality, and the impact on social life and activities. Self-efficacy plays an important role in the likelihood of adopting health behaviour changes and is associated with heightened motivation, treatment adherence and improved clinical and social outcomes [31][32][33][34]. Pre- and post-operative education in clinical settings regarding the recovery process may be an essential step for patients and caregivers to cope with stoma stressors. Reinwalds et al. [33] indicated the following themes after an ileostomy: life being controlled by the altered bowel function, uncertainty regarding bowel function, and being limited in social life. Limitations of the Study This study has some limitations. Convenience sampling is one of them.
This sampling method is not representative of the entire population with an ileostomy living in Greece, thus limiting the generalizability of the results. Additionally, this was a pilot study whose purpose was to examine the feasibility of an approach that is intended to be used in a larger-scale study. Furthermore, there were no blood tests along with the nutritional assessment. Given that it was a pilot study, the sample size was small and there was no sample size calculation, despite many significant associations being observed. The strengths of the study include the widespread MNA instrument, which may permit comparison among populations with an ileostomy. Also, this pilot study involves 3 measurements with available pre-operative data, since many studies enroll patients only after the construction of the ileostomy. Conclusions This pilot study showed that by the 20th post-operative day, ileostomy patients had weight loss, reduced BMI, limited mobility, a decrease in the number of full meals and in fruit and vegetable intake, and reduced mid-arm circumference. Evaluation of the baseline nutritional status of patients with colon cancer should be a part of routine clinical practice. The understanding that nutritional deficit frequently accompanies an ileostomy underpins the value of periodic nutritional assessment along with dietary education. A multidisciplinary team of surgeons, nurses, gastroenterologists, nutritionists and hospital pharmacists needs to be established under the umbrella of a specially designed protocol for such cases. Nutritional assessment, as the most significant concern for people with an ileostomy, should arguably be among research priorities. It is anticipated that the present results will contribute to further research into this lifesaving procedure. Funding: This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Conflicts of Interest: The authors declare that they have no conflict of interest.
2020-08-27T09:08:40.733Z
2020-08-23T00:00:00.000
{ "year": 2020, "sha1": "f9b8ec2e92f7f1a3ae07a8cd7063ba62c23f4710", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/17/17/6124/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e3eea6c2807150aa3739082b3e13befb45daea3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
75136614
pes2o/s2orc
v3-fos-license
Cytotoxicity of Ag, Au and Ag-Au bimetallic nanoparticles prepared using golden rod (Solidago canadensis) plant extract Production and use of metallic nanoparticles have increased dramatically over the past few years, and the design of nanomaterials has been developed to minimize their toxic potencies. Traditional chemical methods of production are potentially harmful to the environment, and greener methods for synthesis are being developed in order to address this. Thus far, phytosynthesis has been found to yield nanomaterials of lesser toxicities compared to materials synthesized by use of chemical methods. In this study, nanoparticles were synthesized from an extract of leaves of golden rod (Solidago canadensis). Silver (Ag), gold (Au) and Ag-Au bimetallic nanoparticles (BNPs), synthesized by use of this "green" method, were evaluated for cytotoxic potency. Cytotoxicity of nanomaterials to H4IIE-luc (rat hepatoma) cells and HuTu-80 (human intestinal) cells was determined by use of the xCELLigence real-time cell analyzer. The greatest concentrations (50 µg/mL) of Ag and Ag-Au bimetallic NPs were toxic to both H4IIE-luc and HuTu-80 cells, but Au nanoparticles were not toxic. BNPs exhibited the greatest toxic potency to these two types of cells and, since AuNPs caused no toxicity, the Au functional portion of the bimetallic material could be assisting in uptake of particles across the cell membrane, thereby increasing the toxicity. Recently, nanotechnology has become an intensely researched area, and nanoproducts are gaining wide use, especially in electronics, health care, cosmetics and medicine. One question is, how safe are nanomaterials? In assessing cytotoxicity of nanomaterials, one aspect is to determine potencies under various conditions of the cell cultures, such as temperature, pH and nutrient concentrations 1. Gold (Au) and silver (Ag) nanoparticles are the most studied noble metals, and they are increasingly being applied in various biological treatments. Silver has been applied as an antimicrobial agent, whereas gold nanoparticles have shown promise in the diagnosis and therapy of cancer 2,3. Au-NPs absorb visible light and, within picoseconds, deliver wavelength-specific energies with targeted precision and efficiency. Thus, they can be applied in light-mediated clinical treatments (photodynamic therapies), for which bimetallic alloy NPs could exhibit better functionalities. Au nanoparticles, whose colour arises from absorption in the visible region of the spectrum, have the ability to bind with biological molecules or ligands, which aids in bioimaging and other biomedical applications. These noble nanometals exist in various structures, such as nanospheres, nanocages, nanorods, nanoflowers and nanopolygons, and their functions vary based on the structures produced 1,4. Various morphologies offer interesting possibilities of diffusion and surface interaction with target molecules or organisms, thereby actively directing their roles. Since sizes of nanomaterials are in the nanometre range, they are able to penetrate cells, a property that has been utilized in cell targeting. There are three major methods of synthesizing nanoparticles: physical, chemical and biological, but the most used and conventional method is the chemical approach. Chemical synthesis of NPs results in NPs being less toxic (e.g. Au) or equally toxic (e.g. Ag) relative to bulk chemicals 5,6.
When used in cellular applications, chemical-based syntheses have disadvantages due to the gradual release of the chemicals used during synthesis, which are toxic to cells. Effective alternatives, such as nanoparticles produced by use of plant extracts, might exhibit lesser toxic potencies. Plant materials contain active pharmacological ingredients, which not only serve as reducing agents but can also act as capping agents for NPs and, as a result, intensify their biomedical efficacies. Furthermore, the use of biomolecules as reductants offers significant advantages over other similar protecting agents 7. Au and Ag have a long history of antimicrobial and anti-infective properties that exceed those of their metal ions; as such, synergistic actions of NPs containing these two metals would inherently surpass previously existing materials of similar action while, due to lesser toxicity, exhibiting excellent biocompatibility. Also, to avoid or minimize toxicity to cells and the environment, costs of synthesis, and the dangers involved in handling chemical reducing agents, more eco-friendly methods for syntheses of metal NPs were preferred. Since extracts of the angiosperm Solidago canadensis have been used traditionally for several medicinal applications relating to antimicrobial and antioxidant effects, it was hypothesized that it could be used during phytosynthesis of nanomaterials 8. Application of biogenic phytosynthesis to produce NPs has been proposed as a more biocompatible alternative to chemical syntheses 9. Phytosynthesised AgNPs 10 and AuNPs 11 exhibited lesser toxic potencies than did NPs produced via chemical reactions. However, there were no data on comparative toxicities of monometallic and bimetallic phytosynthesized NPs, which hampered assessment of potential hazards of NPs. In this study, the cytotoxicity of Au, Ag, and Au-Ag bimetallic alloy NPs produced by use of extracts of leaves of S. canadensis was determined, and results were used to assess potential effects of these NPs on humans and wildlife 12. H4IIE-luc rat hepatoma cells were used as an indication of a detoxification response, and HuTu-80 cells (HTB-40™) isolated from human intestine were used to indicate uptake by this tissue. Rat liver and human intestine equivalents of normal cells were not included in this study due to limited availability. It was hypothesised that the advanced physicochemical properties exhibited by the novel monometallic and bimetallic NPs might influence applications in drug delivery, medical theranostics and in vivo imaging. Materials and Methods Characterization. Syntheses of nanomaterials using plant extract. Leaves from the plant S. canadensis (golden rod), whose identification was confirmed by a plant taxonomist, were collected from a botanical garden in Mafikeng, North West Province, South Africa. In preparation for processing, leaves were washed using double-distilled water to remove sand and debris and were dried at room temperature (22-26 °C) under air for three weeks before being ground using a pestle and mortar. An aqueous extract was prepared by heating approximately 2 g of the ground plant material in 100 mL of distilled water at 80-85 °C and filtering immediately through Whatman filter paper. The filtrate was allowed to cool to 25 °C and used for synthesis of NPs.
Silver nitrate (AgNO3) and gold(III) chloride hydrate (HAuCl4·xH2O) (Sigma Aldrich, Darmstadt) were used to synthesize gold and silver nanoparticles: the plant extract (50 mL) was added to 500 mL of aqueous 1 mM HAuCl4·xH2O and AgNO3 solutions, respectively. Samples were heated at 70-80 °C for a period of 1 hour. Solutions were sampled at different intervals as the reaction underwent a colour change, and samples were analysed for the appearance of plasmon bands monitored by use of a UV-Vis spectrophotometer (UV-1901, Agilent Technology, Cary series UV-Vis spectrometer, USA). A similar process was followed for Ag-Au bimetallic nanomaterials; however, 250 mL of each ionic salt solution was added in situ to the 50 mL of plant extract. Periodic changes in colour were seen due to the formation of plasmon bands, which were also confirmed by use of UV-Vis spectroscopy. Transmission electron microscopy. Transmission electron microscopy (TEM) was performed by applying one drop of the prepared nanomaterials (AuNPs, AgNPs and Ag-Au bimetallic NPs) onto a carbon-coated copper grid, where it was allowed to settle for three minutes. The grid was allowed to dry, and TEM was performed using a JEOL 2100 instrument fitted with a LaB6 electron gun. Images were captured using a Gatan Ultrascan digital camera. Characterization in exposure medium. Stock solutions (1 mg/mL) of powdered nanomaterials in MilliQ water were diluted in Dulbecco's Modified Eagle's Medium (DMEM) (Sigma, Darmstadt). Dynamic light scattering (Malvern Zetasizer Nano series, NanoZS) was used to measure the hydrodynamic size distribution and zeta potential of the nanomaterials in culture medium prior to exposure. Cytotoxicity using xCELLigence. Maintenance of cells. Immortalised cell lines were employed to measure the toxic potencies of NMs. Cell lines do not have all the constituents of primary cells and are genetically modified to never stop growing; nonetheless, they are a good model to assess toxic potency, especially in cases where NMs were developed as anti-cancer drugs for future use. H4IIE-luc rat hepatoma cells [13][14][15] were obtained from the University of Saskatchewan, Canada. HuTu-80 cells (HTB-40™) isolated from the human intestine were obtained from the American Type Culture Collection (Manassas, VA, USA). Both cell lines were cultured in DMEM supplemented with 10% foetal bovine serum (FBS) (Thermo Science, USA) in tissue culture dishes. Cells were maintained in a humidified incubator with 5% CO2 at 37 °C. Cells were handled in a sterile laminar flow hood, which was carefully cleaned with 70% ethanol. Cytotoxicity assay: exposure to nanoparticles. Cells were seeded at a density of 8.0 × 10⁴ cells/mL and left to adhere for a period of 12 h 16. Both cell lines were exposed to 5, 25 and 50 µg/mL of Ag, Au and Au-Ag in triplicate. Unexposed cells acted as a control. Interference from NPs with the gold-plated E-plate was monitored by adding NPs to wells containing medium, but no cells. Cytotoxicity in the two cell lines was measured independently using a real-time cell analyser: the xCELLigence system RTCA single plate (SP) instrument from ACEA Biosciences with RTCA software (version 1.2.1). The software measures electrical impedance across microelectrodes on the bottom of each well in the gold-plated E-plate. The ionic state, altered by growth of cells, is measured and translated into cell index (CI) values, which correlate in real time with the growth of cells.
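As a minimal sketch of how such CI traces can be normalized to the treatment time point (the approach described in the statistical analysis below; all values here are hypothetical, not actual study data):

import numpy as np

# Hypothetical cell index (CI) trace: one reading every 10 min for 105 h
time_h = np.arange(0, 105, 10 / 60)
rng = np.random.default_rng(0)
ci = 1.0 + 0.05 * time_h + rng.normal(0, 0.02, time_h.size)

# Normalization: the CI at the treatment time point is set to 1.0,
# and every other reading is expressed as a proportion of that value
t_treatment = 12.0  # h; cells were left to adhere for 12 h before exposure
ci_at_treatment = ci[np.argmin(np.abs(time_h - t_treatment))]
normalized_ci = ci / ci_at_treatment
print(normalized_ci[:5])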
Readings were taken every 10 min for 105 h. Statistical analysis. After exposure, data were normalized by use of the RTCA data analysis software. Normalisation refers to the manipulation of data at a specific time point (nanomaterial treatment), which is then set as 1.0 by the software. All other values are represented as a proportion of this value. Normality was investigated by use of the Kolmogorov-Smirnov test, and homogeneity of variance was assessed by use of Levene's test (IBM, SPSS). Sample size, unequal variance and data that were not normally distributed dictated that a non-parametric test (Mann-Whitney U) had to be performed. Significance of deviations of slopes from the control slope was defined as p < 0.05 17. Results and Discussion Characterization. Transmission electron microscopy of the NPs in MilliQ water revealed different shapes and sizes of particles. Aggregation occurred during synthesis of the Au-Ag BNPs. The primary shape was spherical; however, triangular and rod-like shapes were also formed. Most individual Ag-NPs and Au-NPs were more uniform and spherical, with a mean diameter of 15 nm, which suggested more homogenous electron densities within the volume of the particles. The Au-NPs were more aggregated when compared to the more dispersed Ag-NPs. Phytochemicals present in the leaf extract were efficient at capping and stabilizing NPs. Distribution of the synthesised NPs in the medium gave insight into their stability, solubility, motion kinetics and inherent performance in biological systems. Regarding the biological performance of nanomaterials, natural media usually contain mixed salts, which can lead to an increase in hydrodynamic size and sedimentation of aggregates. pH can affect dissolution of nanomaterials by altering surface charges. Cation/anion valence concentrations of reaction media also affect the stability of nanomaterials [18][19][20]. As a result, the mean size of Au-NPs was approximately 238.2 nm, which is greater than that of Ag-NPs and Ag-Au BNPs, at 180.6 and 186.3 nm respectively (Table 1). Due to agglomeration experienced by the nanoparticles in DMEM medium, sizes observed during this study were greater than sizes determined by use of TEM (Fig. 1). Both AgNPs and Ag-Au BNPs had 5.1-5.3% of the nanomaterials in the less than 10 nm range. The percentage intensity (percentage size ranges of the particle distribution) is indicated in Table 1. All nanomaterials tested exhibited negative zeta potentials. Ag-Au-NPs had a charge of −10.5 mV, while Ag had −6.84 mV and Au had −9.46 mV. Zeta potentials greater than +30 mV or less than −30 mV are indicative of stable dispersions of nanomaterials in solution 21. Zeta potentials observed during this study indicated that, during dispersion with a bath sonicator, NPs formed an unstable dispersion, which aggregated and eventually settled out 22. Bioactive components of the plant extract did not affect stabilization or zeta potential. Cytotoxicity of NPs. Behaviour of NPs in biological media, and hence their toxic potency, depends on material constitution and arrangement. Shapes of nanoparticles are important in determining toxicity. For instance, triangular-shaped silver nanomaterials exhibit greater toxic potency relative to spherical NPs 23. A large surface area, with a large ratio of surface atoms to bulk atoms, results in greater reactivity and toxic potency 24. Potential interference of NPs with electrical impedance was evaluated by monitoring the CI of blank wells containing only nanomaterials.
These wells received the two highest concentrations (25 and 50 µg/mL) of each material to determine if interference with the gold-plated wells occurred. The CI indicated no nanomaterial interference; however, HuTu-80 cells exhibited greater CI than did H4IIE-luc cells (Fig. 2). Viabilities of the two cell lines varied, with differences in vulnerability ascribed to genetic differences between the two cell lines 25. Cells were exposed to three concentrations (5, 25 or 50 µg/mL) of all the NPs prepared by use of plant extract (Au-NPs, Ag-NPs and Ag-Au-BMPs). Simplified graphs were used in figures, and raw output data were included as supplementary data (Supplementary material). When compared to the control, HuTu-80 cells exhibited no significant differences at the 50 µg/mL concentration of Au-NPs, although cell growth was stimulated (Fig. 3). In contrast, the greatest concentrations of Ag-NPs (Fig. 4) and Ag-Au-BMPs (Fig. 5) and the second greatest concentrations of Ag-NPs (Fig. 4) and Au-NPs (Fig. 3) caused a significant decrease in cell viability (p < 0.05). Au-NPs at 5 µg/mL (Fig. 3) also caused significant cytotoxicity, but this was not the case for Ag-NPs at the same concentration (5 µg/mL) (Fig. 4), which did not significantly affect growth of cells. The two least concentrations of Ag-Au-BMPs did not significantly affect viability (Fig. 5). H4IIE-luc cells exhibited statistically significant differences when exposed to the two greatest concentrations (25 and 50 µg/mL) of all three types of NPs (Figs 6-8). Au-NPs significantly stimulated growth of cells, while both Ag-NPs and Ag-Au-BMPs caused significant decreases in viability of cells (Figs 6-8). The least concentration (5 µg/mL) of Ag-NPs (Fig. 7) and Au-NPs (Fig. 8) caused non-significant stimulation of growth of cells, while Ag-Au-BMPs caused a significant decrease in cell viability of H4IIE-luc cells. While accumulations of Au-NPs, Ag-NPs and Au-Ag-BNPs by the HuTu-80 and H4IIE-luc cells were not evaluated during this study, several previous studies have investigated uptake of NPs by various types of cells 17,26. There are many factors, such as size, nature of the capping agent, zeta potential, vehicle and coating, that may influence uptake of NPs by cells 27. Due to their small size, Ag-NPs enter mammalian cells as aggregates through endocytosis and can also cross the blood-brain barrier. Upon entering cells, they are translocated to the cytoplasm and nucleus. Possible mechanisms that cause toxicity include a decrease of mitochondrial function, release of lactate dehydrogenase (LDH), cell cycle deregulation, production of reactive oxygen species (ROS) and induction of apoptotic genes, leading to formation of micronuclei, chromosome aberrations and DNA damage 28. AgNPs interact with the immune system and cause inflammation in treated cells 29. Au-NPs have, however, been shown to be readily taken up into cells 16. In contrast to the cytotoxic nature of AgNPs, AuNPs have anticancer properties. Through this mechanism, AuNPs target cancer cells, and the tumour suppressor genes and oncogenes, to induce expression of caspase-9, an initiator caspase involved in apoptosis 28. Although non-immortalized cells were not included for comparison in this study, the plant nature of the compounds tested suggests they should exhibit lesser toxic potency to non-cancerous cells than to those tested.
This is due to the antioxidant properties of plant-based molecules, which result in greater toxicity to cancerous cells and lesser toxicity to healthy cells via expression of apoptotic molecules 30. Targeted treatment using NMs for anticancer therapy has shown promise; however, once NMs are released and come into contact with normal cells, they will be further altered by interactions with biomolecules, changes in zeta potential and dissolution. All the changes that NMs undergo can therefore affect their toxicity to cells. Adaptation of BNPs, by altering the surface alloy, can increase their suitability for cancer therapy by decreasing toxic effects on normal cells 28. Since surface charges of the various NPs were all negative, uptake can be related to size, which was within a similar range, and to the surface-core makeup of the various NPs. Since aggregations of Au-Ag-BNPs were observed during this study, particles could have been taken up as clusters, a phenomenon that has also been observed previously 31. NPs can be accumulated by cells by various mechanisms, depending on the sizes of aggregations. As reported previously, aggregated NPs can be accumulated by a combination of macro-pinocytosis and caveolae-mediated endocytosis 32. Larger particles result in larger loads, which in turn, as observed for Au-Ag-BNPs, result in greater toxic potency. States of NPs in media might also affect accumulation into and clearance from cells 25. Au-Ag-BNPs exhibited greater toxic potency to both cell lines. Accumulation of NPs into cells could be aided by the Au surface coating via a Trojan horse effect. Once Au-Ag-BNPs enter cells, they are broken down and the consequent release of Ag occurs, which can result in greater toxicity, as seen in monometallic Ag exposure. In solutions containing Ag-NPs, zero-valent silver (Ag°) sometimes occurs with forms of ionic Ag, either from partial reduction of precursors or oxidation of silver NPs to release Ag+. Such situations were suggested by data from the powder X-ray diffraction 33. Cationic silver (Ag+) has been reported to have potentially greater toxic potency compared to Ag 34. Toxicological effects vary as a function of oxidation state and also the dissolution characteristics of NPs 35. Silver sulphide (AgS) is less bioavailable and less toxic to living organisms 36. The toxicological nature of Ag NPs in this research could be due to the coexistence of Ag2O in both Ag-NPs and Ag-Au NPs. It has also been reported that toxicological profiles or biological behaviours of NPs can vary depending on the substrate used 37. Conclusions Unique properties of NPs instil them with beneficial characteristics. The use of plant extracts for the synthesis of nanomaterials can confer interesting and useful properties. These "green" extracts that serve as substrates in syntheses of NPs can significantly affect the properties and behaviours of NPs. Nanoparticles obtained from plant extracts might be less expensive and more ecologically friendly than conventional, less natural, ones. Due to the increasing production and widespread usage of nanomaterials, especially in biological applications, assessments of potential effects of nanoparticles on cells were necessary. Zeta potentials revealed the unstable nature of the nanoparticles, which might have resulted from aggregation of particles. Not all biologically synthesized nanomaterials are necessarily safe.
AuNPs synthesized from golden rod extract exhibited lesser toxic potency than NPs synthesized without plant leaf extracts. Thus, NPs synthesized in the presence of plant extract might be useful as theranostic agents. Mechanisms of reaction of nanomaterials in media are complex, and more investigation is needed to establish baseline information before widespread application. The use of normal (non-cancerous) cell cultures is also recommended in further testing of the toxicity of the bimetallic nanoparticles prepared in this study, as they are closer to an in vivo situation and should be included in tests to validate these compounds for clinical use.
2019-03-13T13:53:53.053Z
2019-03-12T00:00:00.000
{ "year": 2019, "sha1": "1e47642256c7d8600f2e4391183334fc059d04b2", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-40816-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e47642256c7d8600f2e4391183334fc059d04b2", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
139106100
pes2o/s2orc
v3-fos-license
Current epidemiological status of Middle East respiratory syndrome coronavirus in the world from 1.1.2017 to 17.1.2018: a cross-sectional study. Background Middle East respiratory syndrome coronavirus (MERS-CoV) is considered to be responsible for a new viral epidemic and an emergent threat to global health security. This study describes the current epidemiological status of MERS-CoV in the world. Methods Epidemiological analysis was performed on data derived from all MERS-CoV cases recorded in the disease outbreak news on the WHO website between 1.1.2017 and 17.1.2018. Demographic and clinical information, as well as potential contacts and probable risk factors for mortality, were extracted based on laboratory-confirmed MERS-CoV cases. Results A total of 229 MERS-CoV cases, including 70 deaths (30.5%), were recorded in the disease outbreak news on the World Health Organization website over the study period. Based on available details in this study, the case fatality rate in both genders was 30.5% (70/229) [32.1% (55/171) for males and 25.8% (15/58) for females]. The disease occurrence was higher among men [171 cases (74.7%)] than women [58 cases (25.3%)]. Variables such as comorbidities and exposure to MERS-CoV cases were significantly associated with mortality in people affected with MERS-CoV infections, and adjusted odds ratio estimates were 2.2 (95% CI: 1.16, 7.03) and 2.3 (95% CI: 1.35, 8.20), respectively. All age groups had an equal chance of mortality. Conclusions In today's "global village", there is a probability of a MERS-CoV epidemic at any time and in any place without prior notice. Thus, health systems in all countries should implement better triage systems for potentially imported cases of MERS-CoV to prevent large epidemics. Background Middle East respiratory syndrome coronavirus (MERS-CoV) infection is considered to cause a new viral epidemic [1], and was first reported in a patient who died from a severe respiratory illness in a hospital in Jeddah, Saudi Arabia, in June 2012 [2,3]. From 1.1.2012 to 17.1.2018, the World Health Organization (WHO) was notified of a total of 2143 laboratory-confirmed cases of MERS-CoV, including at least 750 deaths related to this infection, from 27 countries around the world [4]. The origin of MERS-CoV has been widely discussed. Initially, a bat reservoir was posited based on the phylogenetic similarity of certain bat coronaviruses with MERS-CoV. However, there has been no clear bat source of infection or a consistent history of contact with bats in known cases of MERS-CoV to date [5,6]. Another source, the dromedary camel, was later introduced as a possible reservoir in some studies [7][8][9][10]. Some studies have declared that all cases of MERS-CoV were directly or indirectly linked to residence in or travel to 10 countries: Saudi Arabia, UAE, Jordan, Qatar, Kuwait, Oman, Yemen, Egypt, Iran, and Lebanon [6,11]. MERS-CoV infection has high mortality rates, especially in patients with comorbidities such as diabetes and renal failure, evoking global concern and intensive discussion in the media, along with its respiratory droplet route of transmission [12]. Laboratory-confirmed MERS-CoV cases have been reported during hospital-based cluster outbreaks between 1.1.2017 and 17.1.2018, and cases are still detected throughout the year [4].
The occurrence of a large number of MERS-CoV cases and their associated deaths in the world indicates that this disease must be considered a severe threat to public health [13], because millions of pilgrims from 184 countries converge in Saudi Arabia each year to perform the Hajj and Umrah ceremonies. Upon their return home, pilgrims hold a ceremony attended by family members and friends. The etiquette of sharing hospitality with others increases transmission from probable MERS-CoV cases to others [12,14]. Worldwide awareness of MERS-CoV is low, and the disease has high intensity and lethality, with an unknown mode of transmission and source of MERS-CoV infection (i.e. whether zoonotic or human disease) [15]. Therefore, it is necessary to design and implement research to identify some unknown epidemiological aspects and also determine the current epidemiological situation of MERS-CoV and its mortality risk factors in order to prevent, control and anticipate effective interventions. Methods Permission was obtained from WHO to conduct this analytical-descriptive epidemiological study. Using the census method, data related to laboratory-confirmed MERS-CoV cases between 1.1.2017 and 17.1.2018 were extracted from the disease outbreak news on MERS-CoV on the WHO website as follows. Demographic information such as age, gender, reporting country, city, and health care worker status; clinical data and exposure status of MERS-CoV cases, including comorbidities, exposure to camels, camel milk consumption, exposure to MERS-CoV cases, day/month of symptom onset, day/month of first hospitalization, day/month of laboratory confirmation; and the final outcome (dead or survived) of MERS-CoV cases were recorded. Statistical analysis All statistical analyses were conducted using SPSS, version 21 (IBM Inc., Armonk, NY, USA). Quantitative measurements were expressed as medians, and qualitative variables were presented as absolute frequency and percentage. Logistic regression was used to calculate the odds ratio (OR) with a 95% confidence interval in order to assess the probable relationship between risk factors and the final outcome (dead/survived) of laboratory-confirmed MERS-CoV cases. P values of less than 0.05 were regarded as statistically significant. Results A total of 229 MERS-CoV cases, including 70 deaths (30.5%), were recorded in the disease outbreak news on the WHO website from 1.1.2017 to 17.1.2018. The median age of subjects was 53.2 years (range: 10-89 years). To assess the effect of several potential risk factors on death in morbid cases related to MERS-CoV infection, we used the OR index in order to better understand the mechanism of this relationship, and we reported both crude and adjusted ORs. Based on this indicator, variables such as comorbidities and exposure to MERS-CoV cases were significantly associated with mortality in people affected with MERS-CoV infections (Table 1). Six countries were affected with MERS during the period of this study. The majority of cases (approximately 93.9%), with the highest mortality (98.6%), as well as 100% of female cases, were reported from Saudi Arabia (Table 2). The epidemic curve of laboratory-confirmed cases of MERS between 1.1.2017 and 17.1.2018 is shown in Fig. 1. Two peaks are evident in this period: the first at the beginning of April 2017 and the second at the beginning of July 2017. Our results indicate that the number of MERS-CoV cases remained constant from the beginning of September 2017 to the end of January 2018.
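As a rough illustration of how crude and adjusted odds ratios with 95% confidence intervals (as reported in Table 1) can be derived from case-level data, a minimal Python sketch using statsmodels (the study itself used SPSS; the variable names and the simulated data here are hypothetical, not the study's data):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical case-level data: outcome (1 = died), comorbidity, exposure to a MERS-CoV case
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "died": rng.integers(0, 2, 229),
    "comorbidity": rng.integers(0, 2, 229),
    "exposure": rng.integers(0, 2, 229),
})

# Adjusted model: both risk factors entered together
X = sm.add_constant(df[["comorbidity", "exposure"]])
model = sm.Logit(df["died"], X).fit(disp=0)

# Odds ratios and 95% CIs = exponentiated coefficients and CI bounds
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci.rename(columns={0: "2.5%", 1: "97.5%"}))

A crude OR for a single factor can be obtained the same way by fitting the model with that factor alone.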
Discussion The findings have important implications for infection control practice. Notably, we found evidence contrary to many studies declaring that high mortality rates from MERS infection are related to increasing age [16][17][18]. Our results on MERS-CoV cases at the global level showed that all age groups are somewhat at risk of death from this infection. The chance of mortality in MERS-CoV cases is fairly equal across all age groups. Therefore, in the care and treatment of MERS-CoV cases, our results suggest that health care staff should take this important point into consideration. In this study, we observed a higher disease occurrence and mortality among men (Table 1). A possible explanation for the higher disease occurrence and mortality of MERS-CoV among men is that men are likely to spend more time outdoors and hence have a higher risk of exposure to a source of infection. The evidence linking MERS-CoV transmission between camels and humans cannot be ignored. Several studies have shown that persons with direct and indirect contact with dromedary camels had a significantly higher risk of MERS-CoV infection. Our finding was inconsistent with these studies, as we did not observe such evidence (Table 1). Random error may be one of the reasons for obtaining this result, since there were no details of exposure to camels and camel milk consumption for laboratory-confirmed MERS-CoV cases. Our research is consistent with many studies that provided evidence of human-to-human transmission of MERS-CoV infection [15,19,20]. Figure 1 shows two peaks during June to September, which coincide with the largest mass gathering of Muslims from around the world in Saudi Arabia to perform the Hajj and Umrah ceremonies. This finding highlights the effect of congregation on the spread of MERS-CoV infection. Our findings in Table 2 and Fig. 2 show that most cases are reported from Saudi Arabia almost 6 years after the start of the MERS-CoV pandemic (June 2012 to January 17, 2018). So, it seems necessary that epidemiologic investigations be conducted by the Ministry of Health in Saudi Arabia and international partners to better understand the transmission patterns of MERS-CoV. This study had a number of limitations. Assessment of the relationship between mortality related to MERS-CoV infection and potential risk factors requires reliable sources of mortality data. We used the data recorded in the disease outbreak news on MERS-CoV from the WHO website. The quality and accuracy of these data depend primarily on the quality of the recorded data reported by national IHR focal points from different countries to WHO. In this study, the researcher was unable to verify the accuracy of the data, which potentially results in information bias. In addition, information for some of the variables was not available and the number of missing data points was high, which might introduce some selection bias into the results. Another limitation of this research was that possible misclassification of cases may occur due to respondents' declarations regarding exposure to camels, camel milk consumption, and exposure to MERS-CoV cases, potentially resulting in measurement bias. Despite the above limitations, the current analytical-descriptive epidemiological study may have a number of implications for health care policy by using the global data.
It also reminds us that effective national and international preparedness plans should be in place, as well as measures to prevent, control and predict such viral outbreaks, improve patient management, and ensure global health security. Conclusions The results of this analytical-descriptive epidemiological study revealed and confirmed some potential risk factors for mortality in MERS-CoV cases, which were reported as possible risk factors in previous research studies. In fact, it reminds us that in today's "global village" there is a probability of a MERS-CoV epidemic at any time and in any place without prior notice.
2019-04-27T18:04:18.398Z
2019-04-27T00:00:00.000
{ "year": 2019, "sha1": "5187f0f3f45b0a1882a44001d193a24c97b0bb1b", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-019-3987-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5187f0f3f45b0a1882a44001d193a24c97b0bb1b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
267122520
pes2o/s2orc
v3-fos-license
Semiochemical 2-Methyl-2-butenal Reduced Signs of Stress in Cats during Transport Simple Summary Sixteen cats were used in a model of behavioral and physiological transport stress. Cats were not accustomed to being transported. In an objective evaluation, cats wore a PetPace (PP) collar that recorded carotid pulse rate (PR) and general activity. Video cameras recorded cat behavior during the 70 min transport experience. Cats also wore a plastic collar containing either 2-methyl-2-butenal (2M2B) or a placebo collar. This randomized, placebo-controlled, blinded study found that cats with a 2M2B collar had a lower PR, slept more, sat less, and self-groomed more compared with cats wearing a placebo collar. Control cats hid near the back of the transport kennel, and some vomited or had excessive salivation, whereas cats with 2M2B collars did not hide, vomit, or salivate. This controlled study demonstrates behavioral and physiological benefits to transported cats from the use of 2M2B collars. Abstract Some cats experience stress when they have novel experiences, such as infrequent transport. This randomized, placebo-controlled, blinded study sought to objectively evaluate the effects of a 2M2B collar on transported cat physiology and behavior. The statistical model included effects of cat treatment (2M2B vs. control), period (70 min), sex, and interactions. Cats wearing 2M2B collars had an 8% lower PR (p < 0.01), and they slept more and did not hide at the back of the kennel. While control cats vomited or showed excess salivation, cats with 2M2B collars did not show these signs of stress. Male cats were less active during transport than females. Male cats slept more with 2M2B collars compared with male cats with a control collar, but females showed similar sleeping overall regardless of which collar they wore. Female cats increased activity during transport when they had a 2M2B collar, while male activity did not differ between control and 2M2B collars. These data support the concept that the semiochemical 2M2B can reduce stress in transported cats based on objective physiological and behavioral measures. Introduction The domestic house cat (Felis catus) is a domesticated, obligatory carnivore that lives in or near homes as a pet or companion around the world and is not much different genetically from its ancient ancestor, the wildcat [1]. While it is unusual for a predator to be kept as a pet, the domestic cat benefits from its dual role as companion and vermin controller. In some ways, the house cat is living in an environment that contrasts with its natural habitat, which would include hunting areas and environmental surveillance. One must appreciate the olfactory acuity of the domestic cat and how its olfactory environment can be less rich in a home than in the wild [2].
The cat has a well-developed olfactory system that includes the main olfactory epithelium (MOE) and the vomeronasal organ (VNO) [3]. Both systems are functional in the cat. The MOE perceives primarily aerosol and volatile molecules, while the VNO can be activated by liquid or less-volatile chemical signals. A semiochemical is a broad term used for olfactory signals that can change behavior and can include pheromones, interomones, attractants, and plant products that have an olfactory-behavioral effect. Some semiochemicals activate one or the other olfactory system (MOE or VNO). Catnip, for example, activates the MOE and not the VNO to induce its behavioral effects [4]. The use of semiochemicals to reduce stress is a recent idea that should be possible because the olfactory sensory system has neural links to areas of stress control in the amygdala and hypothalamus [3]. Most semiochemicals and pheromones can be found naturally in the environment. 2-Methyl-2-butenal (2M2B) is a natural molecule found in many plants, animals, and foods. The 2M2B molecule is a food-grade flavoring agent and is safe if consumed by humans [5]. 2M2B is found in berries, certain vegetables, chicken fat, butter fat, beer, and the mammary secretions of rabbits [6]. First identified as a rabbit pheromone, 2M2B is a volatile molecule that rabbit pups use to orient towards their mother's nipple, inducing nipple-searching behaviors [6,7]. While not described as a pheromone other than in the rabbit, 2M2B has effects on the brain and behavior of many species. Since semiochemicals are conserved across species, one compound may invoke different behavioral or biological changes in different species. Studies have reported that 2M2B can change the brain and/or behavior of dogs [8], cats [9], pigs [10], and humans [8]. Clearly, the metabolic production of 2M2B and its perception by olfaction are conserved across many species. Generally, 2M2B at low concentrations has a calming effect on animals (as an interomone), in that when animals can smell it, especially during stressful periods, they show behavioral and physiological signs of reduced stress [11]. While the interomone 2M2B has not been shown to be a pheromone in cats, this molecule can reduce aggression and measures of anxiety in paired cats [9]. The interomone effect refers to when a semiochemical has a pheromone effect on the physiology or behavior of one species and is not described as a pheromone in a second species, but the molecule(s) have effects on the behavior or physiology of the second species [12]. For example, 2M2B has not been reported in the secretions of pigs but serves as an interomone in that it increases feed intake and weight gain in healthy, weaned pigs [10]. Another interomone example is the pig pheromone androstenone, which has been reported to stop barking in dogs [13].
Transportation of companion animals is unavoidable in many situations (e.g., veterinary clinic visits). Transportation becomes a stressful event for both the owner and the animal. Cats are less often transported than dogs, so the novelty of transport may be stressful to the domestic house cat, as has been shown for other species [14]. Behaviors such as vocalizing, increased defecation and urination, and ears pressed back are indicators of stress that cats express during all stages of a veterinary visit, especially the transport phase [15]. The EU has regulations about the transport of cats, but a review of the literature found insufficient data to support evidence-based regulations [16], other than to recommend conditioning cats to transport crates and procedures. The use of semiochemicals in cats to minimize common stressors has not been widely studied. Models of transport stress, while common in livestock, have not often been described for domestic cats. The use of different feline semiochemicals, such as feline facial semiochemicals, has been reported to have calming effects [17] and may reduce urine marking [18]. A feline interdigital pheromone has also been reported to aid in correcting inappropriate behaviors, such as scratching [19]. No study to date has provided clear evidence that the molecules marketed as cat pheromones are actually pheromones in cats. Feline facial "pheromones" have been reported for cats [2]. While marketing calls these molecules "pheromones", it is important to point out that studies have not confirmed that they are pheromones according to thoughtful reviews and papers that set criteria for calling a molecule(s) a pheromone [7,11,20]. Here we refer to the cat facial molecules as semiochemicals because they can change behavior but have not been shown to be pheromones by accepted scientific definitions backed by appropriate studies. One recent study developed a transport model for cats in order to evaluate the effects of a synthetic semiochemical as a potential intervention for feline stress during transportation. Shu and Gu [21] conducted a pilot study to examine whether the Feline Facial Semiochemical (FFS) might provide a stress-reduction effect during short-distance transport of cats. Data were collected by cat owners, not validated observers. Cat owners were asked to spray either a placebo or a cat facial semiochemical in a cat carrier and then transport their cat(s). They evaluated 150 cats (75 per treatment group). At baseline (time zero), stress scores did not differ between the two groups. FFS-treated cats were less active during transport than control cats. Both control and FFS-treated cats meowed less over time, but the reduction was greater among FFS-treated cats. The authors used a visual analog scale (VAS) as a key measure of stress response. The effects of treatment on the overall VAS did not differ significantly between the treatments (p = 0.755). When baseline data were used as a covariate, the treatment effect was significant among cats with higher stress scores. Although the effects of FFS were small, one might expect it to work better for cats that were more stressed. Here, we used a similar transport model to assess the effects of an interomone on cat behavior and physiology. Most previous work did not include objective measures of physiology (such as heart rate). This study was designed to sample both the behavior and physiology of transported cats using objective measures in a controlled setting (not in homes).
This work had one primary objective: to determine if 2M2B can be used to change behavior and physiology to improve the welfare of cats during transport. We used measures of cat behavior and heart rate to evaluate cat experiences.

Materials and Methods
All research was conducted at Texas Tech University (TTU) with approval by the TTU Animal Care and Use Committee (IACUC # 19104-12). All procedures were consistent with the U.S. Animal Welfare Act.

Animals
Sixteen mixed-breed cats (8 males and 8 females), both intact and neutered, within the age range of 1.5 to 16 years (mean 5.0 ± 4.3) were used in this study (cat characteristics are given in Table S1 in the Supplementary Material). Cats weighed between 2.6 and 4.7 kg (3.91 ± 0.71). All cats were selected from a USDA-inspected and certified facility. Cats were fed once daily with ad libitum water, except during transport. Cats selected for the study had not experienced transportation or carriers within 5 months of the start of the study. Cats were individually penned in the research facility with concrete floors, a resting board, and a chain-link fence between adjacent cats. Each pen measured about 1 × 4 m. Cats received environmental enrichment and daily human contact, with minimal human handling. Cats infrequently left their home pen, and cats were not acclimated to transport.

Experimental Design
Sixteen cats were transported individually in four separate trips over 70 km within 70 min (Figure 1). For all trips, cats were transported in pet kennels with dimensions of 0.71 m × 0.52 m × 0.55 m. One trip consisted of four vehicles, each transporting one cat within a pet kennel. A trip was considered the distance to and from a park in Vernon, Texas, estimated to be 70 km away. The four vehicles were split into two groups to represent the two treatments. Two cars were the control (no pheromone), and the other two cars were the 2M2B (pheromone). The cars remained in their designated treatment groups for all four trips to avoid the exposure of control cats to the 2M2B pheromone. The U-turn of the round trip occurred 30-40 min into the travel time. Temperature was maintained at around 25 °C for the duration of the trip. No food was in the vehicles during transportation. Noise was mitigated as much as possible; no music was playing, and no researchers spoke during the transport. All noise capable of being controlled by the researchers was diminished in the vehicle. Each treatment group was balanced for sex, and trips were balanced for treatment. Each of the 4 trips had four vehicles, with two vehicles of control and two vehicles of 2M2B. Each treatment within the trip had one female and one male cat (i.e., for each trip, n = 4 cats, 1 male and 1 female/treatment group).
Treatments were delivered through collars produced by PeIQ (Omaha, NE & Boise, ID) that contained either the 2M2B treatment or the placebo control treatment, which contained nothing. Placebo collars appeared identical to the collars containing 2M2B. We have previously reported the release of 2M2B from these collars [22] over a 4-week period. New collars were used during this work. All cats wore identically shaped collars to blind researchers to each treatment group throughout the study, but researchers were able to see the two colors of collars (so as not to confuse the 2 treatments). Each treatment was designated a collar color, but researchers were blind to which color represented which treatment. This allowed researchers to verify that trips were balanced by sex and treatment. Thus, this work was a placebo-controlled, randomized study with investigators blind to treatment groups. Trained, validated observers recorded objective behavior data while also being blind to treatment groups.

Measurements
Measures of behavior and physiology were collected. Heart rate data are rarely reported for cats due to the difficulty of obtaining these measures. Physiological measures (pulse rate and activity) were monitored by a PetPace pulse rate monitor collar from PetPace LLC, Burlington, MA, USA. The collar senses carotid artery pulses, which correlate with heart rate; by definition, heart rate equals pulse rate. Cats tolerate a collar well, while more invasive heart rate sensors would limit cat mobility. The PetPace collars record both pulse rate and general activity. To obtain the PetPace data, the collar recordings must be uploaded to the internet and then downloaded for data analysis.

Behaviors were recorded by HDBV-301 video cameras (HausBell, USCLOUND TRADE LTD., Rosemead, CA, USA) placed inside the vehicles. The timeline of the study is shown in Figure 1. The PetPace monitors were acclimated to cats for 20 min before the cats were placed in the kennel and transported to the vehicles.

Parameters were recorded every 2 min throughout the trip. The starting and ending times were recorded to match the times of the PetPace monitors. The pulse rate was measured as beats per minute (bpm) as the device rested on the carotid artery.

Activity level was recorded by the PetPace collars as numbers (ranging from 0.4 to 27.7). Activity data were obtained by averaging values in 10 min intervals during the 70 min trip. Activity data were also collected from live video recordings.
Behavior measures (each defined in Table 1) were collected by trained, blinded, and validated observers from the videos collected during the study. Videos were watched using 1 min scan sampling, and observations within 10 min intervals were averaged to determine the percentage of time animals spent expressing each behavior. The location of the cat was also recorded. Location was determined by the position within the kennel the cat chose to rest or sit in, recorded as whether the cat's head was at the front or the back of the kennel. Location was recorded every minute to determine the percentage of time the cat spent in the two locations. The detailed timing of each behavior was used to calculate the time the cat spent adjusting positions and being active, based on video recordings. Behaviors are outlined in the ethogram portion of Table 1.

Table 1. Definitions of electronic measures and ethogram for cat behavioral measures (adapted from [22]).
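To make the scan-sampling aggregation described above concrete, here is a minimal Python sketch. The behavior labels, the data layout (one label per 1 min scan), and the 10 min interval width are illustrative assumptions rather than the study's actual coding scheme.

```python
from collections import Counter

def percent_time_by_interval(scans, interval_len=10):
    """Aggregate 1 min scan samples into per-interval behavior percentages.

    `scans` is a list of behavior labels, one per 1 min scan
    (labels are illustrative). Returns, for each 10 min interval,
    the percentage of scans on which each behavior was recorded.
    """
    results = []
    for start in range(0, len(scans), interval_len):
        window = scans[start:start + interval_len]
        counts = Counter(window)
        results.append({b: 100.0 * n / len(window) for b, n in counts.items()})
    return results

# Example: 20 one-minute scans -> two 10 min intervals
scans = ["sitting"] * 6 + ["lying"] * 4 + ["sleeping"] * 7 + ["lying"] * 3
print(percent_time_by_interval(scans))
# [{'sitting': 60.0, 'lying': 40.0}, {'sleeping': 70.0, 'lying': 30.0}]
```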
Statistical Analyses
Data were first evaluated for the assumptions of the analysis of variance (ANOVA). For data that met the assumptions for parametric analyses, the data were analyzed with a simple repeated-measures ANOVA with two treatments (control vs. 2M2B). Each cat served as an experimental unit. Physiological and behavioral measurements were analyzed using repeated measures in the Proc Mixed procedure of SAS 9.4. The statistical model included the fixed effects of treatment, cat within treatment, sex, period, the interaction of treatment and period, and the interaction of treatment and sex. Period refers to the 10 min intervals within the 70 min travel time. The predicted difference test within the SAS Proc Mixed procedure was used to separate least squares means when the parameter's overall F-value was significant (p < 0.05). All cats were adults; cat body weight and age did not interact with treatment effects.

When measures varied over time, multiple regression models were generated to describe and compare the response of PR over time during transport. If the response over time fit a linear, quadratic, or cubic model, a graph was generated to show both the data points and the regression line values. Generally, PR and HR increased during the trip and then declined, perhaps as cats acclimated to the conditions. Best-fit regression models were compared for cats in the control and 2M2B treatment groups. For other measures where the time-by-treatment effect was significant (p < 0.05) but no significant regression model described the data, a simple line graph was produced showing at which time periods the control differed from the 2M2B treatment group.

For non-parametric analyses, a chi-square test was used, for example, to determine if the location of the cat during transport differed between control and 2M2B-treated cats. Correlation coefficients were calculated to determine the relationship between activity recorded by the PetPace collar and video, and between pulse rate recorded with the PetPace collar and video, to assist future work in the validation of this methodology.
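As a rough illustration of the repeated-measures model just described, the sketch below uses Python's statsmodels in place of SAS Proc Mixed. The file and column names ("cat", "trt", "sex", "period", "pulse") are hypothetical, and a simple random intercept per cat stands in for the richer covariance structures Proc Mixed supports, so this is a sketch of the modeling idea, not the authors' exact analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per cat per 10 min period.
df = pd.read_csv("transport_pulse.csv")

# Fixed effects for treatment, period, sex, and the treatment-by-period
# and treatment-by-sex interactions; cat is the grouping factor so that
# repeated measures on the same animal share a random intercept.
model = smf.mixedlm(
    "pulse ~ trt * C(period) + trt * sex",
    data=df,
    groups=df["cat"],
)
fit = model.fit()
print(fit.summary())
```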
Results
Overall, during transport, cats wearing the 2M2B collar had decreased PR (p < 0.0001) and decreased sitting behavior (p = 0.04), but increased sleeping (p = 0.006) and self-grooming (p = 0.09) compared with cats wearing a placebo collar (Table 2). Cats that wore the 2M2B collars also tended to have a higher activity level as monitored by the PetPace collar (p = 0.07); the main difference in activity observed was more self-grooming among 2M2B-treated cats compared with placebo-control cats. Other behaviors, such as lying and adjusting positions, were not altered by the 2M2B treatment (p > 0.10). Self-grooming was not observed in cats wearing a control collar, while self-grooming was present among cats wearing a 2M2B collar.

* From video records. SE = standard error of the least squares mean. TRT = treatment effect (control vs. 2M2B). PER = period effect, the eight 10 min periods (see Figure 1). TRT*PER = the interaction between treatment and period. Sex = male vs. female. TRT*Sex = the interaction between treatment and sex.

Cats in the 2M2B-treated groups spent more time at the front of the kennel, while control cats spent more time at the back of the kennel (p < 0.01; Table 3). This significant effect would be clear to any casual observer. Hiding near the back of the kennel would indicate more fear or discomfort among control cats than among those exposed to 2M2B therapy. Presented in Figure 2 are PR data from control and 2M2B-treated cats over the course of the study. Note that for all cats, PR increased and then decreased as they continued their journey. The overall lower PR for 2M2B-treated cats compared with control cats was consistent over each time point (Figure 2). The regression equations for control and 2M2B-treated cats differed, primarily in the lower PR levels of cats wearing 2M2B collars compared to control cats. Cats in the 2 treatment groups started out with very similar PRs, but the cats with control collars had a uniformly higher PR than cats with 2M2B collars. The drop in PR associated with time zero occurred when the 2M2B collar was placed and before transport began. While one can see the immediate drop in PR once the 2M2B collar was placed, this did not happen among cats with a control collar (Figure 2). Cats with 2M2B collars had an average of 8% lower PR than control cats overall.

Period effects were observed: all cats had increased, then decreased pulse rates (p < 0.0001; Figure 3) and activity levels (p = 0.07; Figure 3), and spent more time apparently sleeping (p = 0.004; Figure 3), but showed decreased sitting behaviors (p = 0.02; Figure 3) over time. The interaction between treatment (2M2B vs. placebo) and period was significant for lying and sitting (Table 3; Figure 3). The treatment-by-period effect was not significant for sleeping, but the main effect of treatment (p < 0.01) showed that cats with 2M2B collars spent more time apparently sleeping during transport than cats wearing a placebo collar.

Cats treated with 2M2B spent more time lying at period "0" (start of the transportation) and "10" (10 min into the travel), but less time lying at periods "40" to "70" (40-70 min into the travel time) compared to the control group (Figure 3). These changes document how transported cats adapted to travel in general.

Some sex and treatment-by-sex effects were observed. Male cats were less (p < 0.01) active than female cats, and they spent less (p < 0.001) time sitting than females (Figure 4). The sex-by-treatment effects describe how male and female cats responded differently to the 2M2B collars. Both males and females had lower PR when they wore a 2M2B collar than when they wore a placebo collar; however, the decrease in PR was more pronounced among male cats than female cats (Figure 4). Male cats with 2M2B collars spent more time sleeping, less time lying, and less time adjusting body position. Overall, 2M2B collars increased the activity of females but had no effect on the overall activity of males (Figure 5). We had an insufficient sample size to evaluate the effects of intact vs. castrated animals. Because we did observe sex effects, an examination of the effects of spay/neuter on cat behavior in future studies might generate interesting effects.
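To illustrate the kind of quadratic time-course fit reported for pulse rate (Figure 2), a minimal sketch follows; the time and PR arrays are made-up placeholders, not the study's data.

```python
import numpy as np

# Illustrative only: fit a quadratic PR-vs-time curve for one treatment
# group, as described for Figure 2. Values below are invented placeholders.
time = np.array([0, 10, 20, 30, 40, 50, 60, 70])            # min into trip
pr   = np.array([150, 162, 168, 170, 167, 161, 155, 150])   # mean PR, BPM

coeffs = np.polyfit(time, pr, deg=2)   # quadratic: a*t^2 + b*t + c
pred = np.polyval(coeffs, time)

# R^2 of the fitted curve
ss_res = np.sum((pr - pred) ** 2)
ss_tot = np.sum((pr - pr.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(coeffs, r2)
```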
By way of validating the PetPace collars, we calculated correlation coefficients for measures of activity collected by the PetPace collars compared with video records (Figure 6). While the correlation coefficient (r = 0.56, R² = 0.322) differed significantly from zero, these two measures of activity are likely measuring different types of activity, given the moderate relationship between the two variables.

Three cats (37%) from the control group exhibited sickness behaviors (two vomited and one had excessive salivary secretion) during the transport. No cats in the 2M2B group exhibited sickness behaviors. All cats were adults; cat body weight and age did not interact with treatment effects.
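The collar-versus-video correlation reported above (r = 0.56, R² ≈ 0.32) can be illustrated with a short sketch; the paired observations below are invented for demonstration.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up paired activity observations from the collar and from video.
collar_activity = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 4.0, 1.9, 2.8])
video_activity  = np.array([10,  35,   5,  40,  30,  25,  12,  20])  # % of time

r, p = pearsonr(collar_activity, video_activity)
print(f"r = {r:.2f}, R^2 = {r**2:.2f}, p = {p:.3f}")
```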
Discussion
This study describes a viable model for assessing treatment effects on cat transport stress. The study was sensitive enough to detect significant differences within our sample size of eight cats per treatment (n = 16). This model is far less variable than when in-home data are collected, especially when conclusions rely on consumer measures of cat behavior. The PetPace system delivered objective PR data in a reliable fashion. Combining objective behavioral and physiological data gives us the most complete understanding of the effects of a semiochemical. This is one of the few studies that examined any semiochemical using an objective, randomized, blinded, placebo-controlled model that does not rely on consumer opinion. The placebo effect is very real in animal behavior studies. For example, in Shu and Gu ([21]; Figure 3), transported cats given a placebo had a reduced stress score, but the semiochemical treatment group had a still lower stress score (among only cats with high stress scores). This points to the importance of placebo controls, especially in consumer-reported data, in that any effect of an intervention must be greater than any placebo effect. In the work reported here, the effects of 2M2B were larger and more consistent across measures than the data thus far reported for FFS (one of the few semiochemical therapies available). We are not aware of a side-by-side evaluation of these two available semiochemicals in transported cats.

One can examine the measures indicating that cats with 2M2B collars were less stressed than transported cats with a placebo collar. To the people handling the animals, control-cat stress responses were readily observable. Control cats hid in the back of the kennel, and some control cats vomited or salivated excessively. The objective measures of PR confirmed that cats wearing placebo collars had elevated PR compared with cats wearing a 2M2B collar. Cats with 2M2B collars also self-groomed (control cats did not) and slept more than cats with placebo collars. These indicators provide evidence that 2M2B can provide relief to cats during transport or other stressful situations.

Few interventions are available to prevent or reduce stress-induced reactions among domestic cats, apart from drugs. Semiochemicals are not drugs; they are natural molecules found in animals and plants. They are considered clean, green, and ethical technologies that are favored over conventional drugs [11]. Semiochemical therapy shows promise for improving the lives of cats in a natural way.
In the current study, cats were exposed to a complex stressor, including confinement in carriers, exposure to an unfamiliar environment (the back seat of the vehicle), and the noise, movement, and vibration associated with the movement of the vehicle. While not well studied, a normal reaction of cats to stress is to remain still and inactive; indeed, our control cats spent much time apparently hiding in the back of the transport kennel. Ellis [23] showed that stressed cats prefer concealed areas (hiding), reduce activity levels, and decrease behavioral diversity. The following stressors have been reported to cause cats to reduce activity levels: inconsistent caretakers [24] and novel environments (entering an animal shelter [25], unfamiliar yards [26], and visits to a veterinary clinic [27]). In this study, 2M2B increased the activity level recorded by the PetPace collar and the video recordings. This finding is interesting because of the limited opportunity for movement in the kennel, as cats were confined in the carrier. The findings are also interesting because of the moderate-to-low correlation between the PetPace collar and video recordings of cat activity. We conclude that live video and the PetPace collar measure overlapping but not identical aspects of cat activity.

According to the videos, cats changed positions (alternating between lying and sitting) and/or repositioned to the back of the carrier when the vehicle started to move or when the road became bumpy (Figure 5). Hiding is a behavior exhibited by stressed cats [25,28], and the 2M2B-collared cats remained more toward the front of the carrier. With 2M2B collars, transported cats did not hide in the back portion of the carrier, where it was darker and more secluded. Cats that wore the 2M2B collars spent more time sleeping and less time sitting than control cats. These behavioral differences exhibited by the control and 2M2B-treated cats during transportation indicate that 2M2B reduced behavioral and physiological signs of stress in cats during transportation.

The 2M2B molecule, although an interomone (not yet able to be called a pheromone), has an effect within seconds to minutes. Note the heart rate data at the −10 and zero time points for cats with control or 2M2B collars. Control cats moved from their kennel to the transport vehicle for a 10 min acclimation period and had about a 12% increase in PR, while cats with 2M2B collars had about a 4% decline in PR during this acclimation period. This means that cats respond to 2M2B as a releaser semiochemical, in that its effects are observable in seconds to minutes. However, the effect clearly lasts much longer (at least the 70 min transport). A collar releasing 2M2B is unlikely to activate the VNO of the cat (unless the cat scratched the collar and then licked it, a behavior not observed). Thus, the data, while not conclusive, suggest to us that 2M2B activates the MOE rather than the VNO to cause the stress-reducing effects observed.
It is interesting that the effect of 2M2B in reducing stress during transport was apparently greater among male cats than female cats (though females benefited on most measures). These sex differences are hard to explain at present because of a lack of research. Existing studies do not show differences in behavioral or stress scores between the sexes; there is evidence of differences between breeds, but sex differences remain poorly understood [29]. Pheromones or interomones that produce different results depending on sex have not been widely studied and offer a new branch of research that should be investigated. The effects of semiochemicals on cats of each sex require further study, and sex-specific therapies may be needed.

Conclusions
We described here an objective model that examines the stress of transport for individual adult cats. The model is reliable and able to detect biological differences with a sample size of 16 cats (eight per treatment). The model requires multiple vehicles and people to control for effects over time and to avoid contamination of control vehicles with semiochemicals. We show here for the first time that 2M2B can be used to reduce stress-like behavioral and physiological responses among transported cats.

The overall effects of 2M2B on transported cats revealed a response that was consistent across multiple measures. All measures point to 2M2B providing relief from the behavioral and physiological effects of transport stress. The magnitude of the stress reduction observed here has not been reported for any other intervention to date. We conclude that collars containing 2M2B may serve as a useful tool to reduce stress during transport and could perhaps reduce stress in other stressful situations that cats experience.
Figure 1. Timeline of experimental procedures. The treatment or placebo collars were placed on the cats for a 10 min acclimation period before transportation began. Cats wore the PetPace monitor and collar throughout the transport period. During transport, drivers maintained a relatively consistent speed (~112 km/h) with minimal sudden turns or stops. The U-turn of the round trip occurred 30-40 min into the travel time. Temperature was maintained at around 25 °C for the duration of the trip. No food was in the vehicles during transportation. Noise was mitigated as much as possible; no music was playing, and no researchers spoke during the transport. Each treatment group was balanced for sex, and trips were balanced for treatment. Each of the 4 trips had four vehicles, with two vehicles of control and two vehicles of 2M2B; each treatment within the trip had one female and one male cat (i.e., for each trip, n = 4 cats, 1 male and 1 female/treatment group).

Figure 2. Treatment-by-time effects (control and 2-methyl-2-butenal (2M2B)-treated (n = 8 cats/treatment)) on pulse rate (LS means ± SE, BPM). Cats with control or 2M2B collars had a similar overall response, with elevated then declining PR during transport. Note that although cats in the 2 treatment groups had similar PR at time −10 (prior to collar placement), averaged over the other times, cats with 2M2B collars had 8% lower (p < 0.001) PR than cats with a control collar. The quadratic equations that best describe the response over time for each treatment were consistent, with high R² values. n = 16 cats; treatment effect, p < 0.0001; see Table 2 for statistical details.

Figure 3. Sitting, sleeping, and lying of cats during transport with a control or 2M2B collar. Data were collected via video recordings starting at time zero through 70 min of transport. Treatment-by-time effects (control (n = 8) and 2-methyl-2-butenal (2M2B)-treated (n = 8)) on sleeping (LS means ± SE, % of time). *, **: difference between the control and 2M2B-treated groups significant at p < 0.05 and p < 0.01, respectively. The treatment effect was significant (p < 0.01) (2M2B-treated cats slept more than control cats), but the treatment-by-time effect was not statistically significant; at the 40 and 50 min time points, more sleeping was found among cats with 2M2B collars than control collars (p < 0.05). See Figure 5 for the sex-by-treatment effect (p < 0.001) on sleeping.

Figure 4. Effects of cat sex during transport (control (n = 8) and 2-methyl-2-butenal (2M2B)-treated (n = 8)) on activity (left) and sitting (right) behaviors. Activity levels were recorded by the PetPace collars (LS means ± SE, % of time). Sitting behaviors were objectively quantified from video records. **: difference between the control and 2M2B-treated groups significant at p < 0.01. Note that males were less active than females and spent less time sitting because they were less active (sitting was considered active, not resting/sleeping). Male cats were less active, and they sat less than females overall.

Figure 5. Sex-by-treatment interaction for transported cats' physiology and behavior (LS means ± SE, BPM). #, **: difference between the control and 2M2B-treated groups significant at p < 0.10 and p < 0.01, respectively; * indicates a trend at p < 0.04. Overall, the sex-by-treatment effect was significant (p < 0.05) for most measures and showed a trend for adjusting the body during transport (p = 0.06). PR was lower among both males and females exposed to 2M2B collars compared with cats exposed to control collars, and male cats exposed to 2M2B had a greater PR-reduction effect than female cats. Male cats with 2M2B collars slept more but spent less time lying and tended to adjust position less during transport. Transported female cats were more active with 2M2B collars than control cats, while male cat activity did not differ between 2M2B- and control-collared cats.

Figure 6. Correlation between activity recorded by the PetPace collar and video (s, % of time). While there is general agreement over a range of activity levels, one should not expect PR and activity to be perfectly correlated, as some stressed animals have elevated PR and are inactive, especially when frightened.

Table 2. Effects of 2M2B on PetPace measures and behaviors of 16 cats during transport. n = 16 cats.

Table 3. Effects of 2M2B on the percentage of time spent at each location in the kennel during transport. n = 16 cats.
Incentives and Girl Child Education in Ghana: An Examination of CAMFED's Support Scheme on Enrollment, Retention and Progression in Garu-Tempane District

Promoting female learning, otherwise known as girl child education, continues to engage the attention of policy makers and practitioners in education and development generally, a situation greatly shaped by the myriad of obstacles militating against girl child education. In the current study, we probed the influence of CAMFED's girl child education support scheme on school enrollment, retention and progression. We applied a reflexive evaluation approach anchored on a concurrent mixed methods research design. Using a multistage sampling procedure, selected respondents were administered a combination of semi-structured questionnaires and in-depth interviews. We analyzed our data using multiple regression and descriptive statistics, supported by content analysis. The findings show that school enrollment and progression of the girl child increased substantially after the introduction of CAMFED's intervention. However, gender-based perceptions continue to stifle girl child education, in spite of the behavior change component of the intervention scheme. The paper concludes that policy interventions should address the sociocultural bottlenecks inhibiting the education of the girl child if the gains made so far are to be sustained. Going forward, studies could focus on the measurement of performance as a function of girl child education support schemes beyond the basic level of education.

Introduction
Girl child education the world over is crucial for the socioeconomic transformation and liberation of women from lives of squalor and subservience (Chitando, 2016). Evans-Solomon (2004) views girl child education as any formal education that the girl child receives to enable her to acquire knowledge, skills, good habits, values and attitudes relevant for effective and meaningful functioning in society. Values acquired by girls through education allow them to exhibit their talents. Investing in girls' education has been found to be cost-effective for developing countries aiming to improve their standard of living (Ananga, 2011; Glewwe & Muralidharan, 2016; Sperling & Winthrop, 2015). Studies have also shown that considerable social and welfare benefits accrue from the education of girls, including lower fertility and infant mortality rates (Amin et al., 2017; Owusu-Darko, 1996; Shabaya & Konadu-Agyemang, 2004; Sperling & Winthrop, 2015; Spreen & Kweri, 2013), increases in wage earnings (BBC, 2015; Sperling & Winthrop, 2015), and decreases in malnutrition (Sperling & Winthrop, 2015; Spreen & Kweri, 2013). By not educating the girl child to standards comparable to those of the boy child, low- and middle-income countries are estimated to lose around 92 billion dollars each year in gross domestic product (Diaw, 2008; Monkman & Hoffman, 2013). Meanwhile, children of educated women have increased chances of going to school, and this could produce multiplier benefits for both their families and society, since by educating girls, societies and nations benefit (Monkman & Hoffman, 2013; United Nations Children's Fund [UNICEF], 2004; World Bank, 2022).
In reality, fewer females receive formal education compared to males in the developing world (Monkman & Hoffman, 2013; Todaro & Smith, 2009), a situation that impedes economic development and reinforces social inequalities. Consequently, girl child education has become one of the most significant developmental challenges facing sub-Saharan Africa (Alhassan, 2013; Asigri, 2012). This is because the sub-region has very low enrollment and retention rates for girls relative to boys, while dropout and absenteeism rates remain higher, even as girls' achievements and performance continue to decline (BBC, 2015). Although enrollment rates for boys and girls are leveling at the global level, with two-thirds of all countries having reached gender parity in primary school enrollment (United Nations Children's Fund [UNICEF], 2022), completion rates for girls are lower in low-income countries, particularly in sub-Saharan Africa, where 63% of female primary school students complete primary school, compared to 67% of male primary school students (World Bank, 2022).

Like many sub-Saharan African countries, Ghana is confronted with the daunting challenge of promoting girl child education (Alhassan & Odame, 2015). Charged by a constitutional imperative to provide Free, Compulsory Universal Basic Education (FCUBE) of good quality for all children, girl child education is of essence in this drive. The Government is also committed to attaining goal four (4) of the Sustainable Development Goals (SDGs) on education by 2020 (Ministry of Education, 2018). Goal 10 of the Ministry of Education's Education Strategic Plan is specific to the promotion of girls' education. Earlier, Ghana was the first country to ratify the UN Convention on the Rights of the Child, and it also sanctioned the UN Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) (BBC, 2015). The country's financial resource commitment to the education sector exceeds the global average and is well beyond the UNESCO target of 6% of GDP (BBC, 2015).

Ghana has made considerable progress in expanding free basic education under the FCUBE program and has subsequently introduced initiatives including the Ghana School Feeding Program (GSFP), the capitation grant, free school uniforms, Science, Technology and Mathematics Clinics (STMCs) for girls, the recruitment of regional and district girl child education officers to work with partners and related agencies (National Development Planning Commission, 2005), the elimination of schools under trees, and, in the last four years, the introduction of the Free Senior High School Policy (Ghana Statistical Service [GSS], 2017). In spite of these interventions targeting a fast-growing school-age population, the education system is still plagued by daunting challenges as the country strives toward achieving the education-for-all targets envisaged in the Education Strategic Plan 2018-2030, particularly in the areas of girl child enrollment, retention and progression.

As a mercantilist colonial policy to maintain Northern Ghana as a labor reserve to service the economy of the Colony and Ashanti, education in the north of the country was not encouraged (Aziabah, 2019; Bening, 1990). For example, the first school in the north of the country was instituted by the White Fathers in Navrongo in 1907, whereas the first public (government) schools were established in Tamale and Gambaga in 1909 and 1912, respectively.
Later, others were established in Wa (1917), Lawra (1919), Yendi (1922), Salaga (1923) and Bolgatanga (1937) (Graham, 1976). However, a sharp contrast exists with the south, which witnessed the establishment of schools as early as the 1800s. For instance, Mfantsipim School, Wesley Girls' High School and the Presbyterian Training College at Akuapem-Akropong were established in 1876, 1884 and 1848, respectively (Graham, 1976).

To support girl child education in contexts of pervasive poverty (that is, living on a per capita income of less than two-thirds of the national average (GSS, 2017) and being unable to afford the basic necessities for school), amidst stereotypes against female participation in formal education, the Campaign for Female Education (CAMFED), a civil society organisation, was launched in Ghana in 1998. Its main objective is to assist girls at the Junior High School level who regularly drop out of school due to their families' inability to afford basic necessities like school uniforms and textbooks. CAMFED's intervention program began in the Garu-Tempane District in 2012. Households that are poor (the poverty rate in the district is about 42.04%) usually have difficulties financing their children's education. Inability to pay school fees or to procure school uniforms, books and other learning materials may constitute an opportunity cost of children to their households (Roithmayr, 2002). The support meets all the direct educational costs for girls, namely uniforms, shoes, school and examination fees, books, and stationery. Also, a social support system helps to keep girls in school through the recruitment of a trained female mentor who is assigned to every partner school at the expense of CAMFED. The program works with teachers, parents, traditional leaders, and education and health officials to identify and select beneficiary girls for the CAMFED scholarship (Campaign for Female Education [CAMFED], 2010). The key question undergirding our study is thus: how is CAMFED's intervention scheme in support of girl child education contributing to school enrollment, retention and progression?

Using the Garu-Tempane District, which is notorious for high dropout rates and low enrollment rates for girls in Ghana (Asigri, 2012; GSS, 2014), the study examines the effect of CAMFED's intervention package in support of girl child enrollment, retention and progression. The paper substantively argues that embedding girl child education support in the sociocultural setting of the people is critical to its success and sustenance. The Garu-Tempane District Education Directorate (2014) had earlier revealed that only 41% of students who were supposed to be at the Junior High School level were in school, and that a disproportionate number of those who were not in school were females. Significantly also, the proportion of females at the basic school level (67.6%) in the District drops rapidly as they transition to the secondary level (6.0%) (GSS, 2014). Generally, high absenteeism and low retention (50%) among the girl child characterize the state of girl child education in the District (Garu-Tempane District Education Directorate, 2017).
Limited studies (Adetunde & Akensina, 2008; Asigri, 2012) have examined the issue of girl child education in the study setting. In particular, few scientific studies have examined the effect of girl child support interventions on enrollment, retention and progression in northern Ghana, including the Garu-Tempane District. It is against this backdrop that this study aims to identify and examine, within the framework of empowerment theory, the extent to which girl child support interventions have contributed to enrollment, retention and progression among girls in basic schools in deprived educational settings, using CAMFED's intervention scheme as a case study.

Definition of Variables
We express our measurement variables of enrollment (gross enrollment), retention and progression as rates and define them, among others, as follows:

Basic education, according to the 2008 Education Act (Act 778), consists of "(a) two years of kindergarten education, (b) six years of primary school education and (c) three years of junior high school education" (Republic of Ghana, 2008, p. 3).

Gross enrollment ratio measures the "number of pupils or students enrolled in a given level of education, regardless of age, expressed as a percentage of the official school-age population corresponding to the same level of education. The ratio can exceed 100% due to over-aged and under-aged children who enter school late/early and/or repeat grades" (Ministry of Education, 2013, p. 106).

Retention rate "is a proxy measure for school completion, giving the percentage of a cohort who entered a level of education who are then in the final year of that level the appropriate number of years later. It does not account for repeaters" (Ministry of Education, 2013, p. 107). For the basic cycle, it is calculated as enrollment in JHS3 expressed as a percentage of enrollment in P1, based on all schools.

Progression rate measures the number of pupils (or students) admitted to a grade of a level of education in a given year, expressed as a percentage of the number of pupils (or students) enrolled in the previous grade of that level of education.

Deprived district refers to one that is poor, where such poverty is measured using multidimensional indicators, namely living conditions (electricity, housing, assets, overcrowding, cooking fuel, water, and toilet facility), education (attendance, attainment, and school lag), and health (insurance coverage and mortality).

In the sections that follow, we elaborate on our theory of application, specify our methodology and offer an analysis of the data obtained through our collection instruments.
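As a worked illustration of the rate definitions above, a minimal sketch follows; the figures used are hypothetical, not district data.

```python
def gross_enrollment_ratio(enrolled_any_age, official_age_population):
    """Enrollment at a level (any age) as a % of the official school-age
    population; can exceed 100% with over- and under-age pupils."""
    return 100.0 * enrolled_any_age / official_age_population

def retention_rate(jhs3_enrollment, cohort_p1_enrollment):
    """Proxy for completion: JHS3 enrollment as a % of the same cohort's
    P1 enrollment the appropriate number of years earlier."""
    return 100.0 * jhs3_enrollment / cohort_p1_enrollment

def progression_rate(admitted_to_grade, enrolled_previous_grade):
    """Pupils admitted to a grade as a % of enrollment in the previous grade."""
    return 100.0 * admitted_to_grade / enrolled_previous_grade

# Hypothetical figures, for illustration only
print(gross_enrollment_ratio(1150, 1000))  # 115.0
print(retention_rate(450, 900))            # 50.0
print(progression_rate(480, 520))          # ~92.3
```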
A Theory of Empowerment
The concept of empowerment is central to any effort at achieving transformation in social relations and cultural norms. Within the sphere of education, empowerment is critical in narrowing social inequality and enhancing social justice. Ledwith (2011, p. 2) contends that "empowerment is a form of critical education that encourages people to question their reality: this is the basis of collective action and is built on principles of participatory democracy." Kabeer (2005) examines empowerment in three interrelated dimensions: agency, resources, and achievement. Resources may involve those material and non-material things necessary for the upkeep and development of the person or the wellbeing of a group. Agency refers to a person's ability to make choices plus the capacity to give effect to those choices even in the face of opposition. For the effective exercise of agency, awareness of the immediate circumstances, desire for change and the resources to effect the change are necessary. Thus, a combination of resources and agency makes achievement possible. Achievement refers to the potential to live one's desired life. In relation to girl child education, achievement as the ultimate outcome requires enhancing the girl's abilities (agency) through increasing her access to resources (Kabeer, 2005). Within this framework, we analyze the girl child intervention support offered by CAMFED to determine how such support empowers the girl child with respect to developing her abilities as a result of increased access to resources.

The needs of women (girls) are grouped into two: practical interests/effective agency and strategic interests/transformative agency (Kabeer, 2005; Mosedale, 2005). Practical interests refer to roles ascribed to women due to their sex; they respond to women's immediate practical needs and are enacted by females themselves (Boyd, 2002; Moser & Moser, 2005; Walter, 2011). When girls' practical interests are met, they are only helped to exercise their gender-allotted roles more easily, activating their effective agency (Kabeer, 2005, p. 15; Mosedale, 2005, p. 248). Strategic interests/transformative agency deal with girls' subordination or restrictions in a given society and demand the shaping of women's or girls' struggles (Boyd, 2002; Moser & Moser, 2005; Walter, 2011). Strategic gender needs/transformative agency should thus constitute the focal point of any development initiative seeking to empower women and girls and to challenge institutional restrictions that impede their potential for self-realization.
The goals of girl child education are closely linked to empowerment, which is one of the key approaches to tackling challenges of human rights and development in society (Tembon & Fort, 2008; World Bank, 2001). The World Bank (2022) contends that girls' education goes beyond getting girls into school. It is also about ensuring that girls learn and feel safe while in school; have the opportunity to complete all levels of education, acquiring the knowledge and skills to compete in the labor market; gain the socio-emotional and life skills necessary to navigate and adapt to a changing world; make decisions about their own lives; and contribute to their communities and the world. The theory of change in this respect is that as more girls are empowered, more communities will be changed through them. That is, an educated mother has the potential to disrupt the undesirable cycle of poverty and ignorance in her social setting (Kanyoro, 2007; World Bank, 2022). Girl child education thus, in the light of empowerment, goes beyond the girl child per se to encompass the broader society. The skills acquired by girls in school create pathways to enhanced employment and health outcomes, as girls are socialized in competencies that enable them to communicate, engage and negotiate in a bureaucratic world (Sperling & Winthrop, 2015). Building the capacity to critically scrutinize oneself and one's community via formal education offers girls the ability to recognize inequality and strive toward social justice (Murphy-Graham, 2012). However, viewed from the African perspective, promoting girl child education in traditional Ghanaian society could create an empowered girl who feels powerless due to impediments that obstruct her reintegration into such social settings, the consequence being her inability to realize other social aspirations such as marriage, family life and social recognition. Thus, to enable communities to navigate the hurdles of social change, promoting female education should be approached progressively by embedding the processes of change in the culture and traditions of the communities concerned. In this regard, girl child education should not simply be construed as a western phenomenon, but as part of the transformational changes in values, norms and behavior that come with the evolving nature of society and development.

For the kind of education that empowers the girl child to be realized, it requires, among others: textbook and learning material policies that reflect gender equality (O'Neil et al., 2015; Sperling & Winthrop, 2015); the demonstration and teaching of gender equality by teachers; accessibility of the girl child to female mentors and role models; the strengthening of girls' decision-making and negotiation skills; and the creation of opportunities for developing their leadership skills (Sperling & Winthrop, 2015). Empowerment theory thus offers an appropriate framework to analyze CAMFED's school intervention programs aimed at effectuating positive outcomes for girl child education in the Garu-Tempane District.
Significance of Girl Child Education
A number of studies have found that investing in girls' education is a cost-effective strategy for developing countries aiming to improve their standard of living (Ananga, 2011; Glewwe & Muralidharan, 2016; Sperling & Winthrop, 2015). Considerable social and welfare benefits accrue from the education of girls, including lower fertility and infant mortality rates (Amin et al., 2017; Owusu-Darko, 1996; Shabaya & Konadu-Agyemang, 2004; Sperling & Winthrop, 2015; Spreen & Kweri, 2013). Investigating maternal education and child survival in Ghana, Owusu-Darko (1996) discovered that the higher the education level of the mother, the better the survival rate of her children. The mother's level of education has equally been established to directly affect economic output and the level of her daughters' education (Swainson, 1995). Also, girls' education has been found to reduce hunger. A cross-country analysis of 63 countries, for instance, revealed that improvements in female education led to a 43% decrease in malnutrition (Atta, 2015). A BBC (2015) study has also shown that every additional year of formal education of the girl child raises her wages by 20%, while the overall dividends on primary education were relatively higher for girls than for boys. Meanwhile, low- and middle-income countries lose around 92 billion dollars annually due to the non-education of girls to standards comparable to those for boys (Diaw, 2008).

The significance of girls' education in developing countries cannot be overemphasized. From a broader perspective, supporting girls' education has proven to bear positive implications for other measures of development (Shabaya & Konadu-Agyemang, 2004; United Nations Children's Fund [UNICEF], 2004). Educated women possess the skills and capabilities to raise their earning potential, which is vital for the wellbeing of the many female-headed households in developing countries. In countries where poverty levels are high, improving girls' education has a positive impact on economic growth (Dollar & Gatti, 1999). Educating girls and women has a positive impact on levels of agricultural and industrial productivity. Therefore, it is not surprising that countries that record higher levels of girls' enrollment in school also record higher levels of economic productivity, lower fertility, lower infant and maternal mortality, and longer life expectancy than countries that miss out on high enrollment levels for girls (United Nations Children's Fund [UNICEF], 2004). A United Nations Children's Fund (UNICEF) analysis of household data for 55 countries in 2004 disclosed that children of educated women had an increased chance of going to school, and the more schooling the women had obtained, the more likely it was that their children would benefit from education, thus distributing the multiplier benefits to both themselves and society (United Nations Children's Fund [UNICEF], 2004).
An additional benefit accruing from the promotion of the education of girls and women is seen in the changes occasioned in household behavior and practice (Ridley & Bista, 2004). For instance, the enhanced sustenance of children has been established to be more strongly associated with increased levels of education and earnings of the mother than of the father (Ridley & Bista, 2004). This is a crucial observation for women and girls, who possess fewer resources at the household level compared to men and boys due to their diminished influence over decision-making in the heterosexual household. By consciously increasing women's share of cash income in the household, therefore, an increase in their share of household resource allocation to health, education and general household consumption is assured. It must, however, be acknowledged that social and economic inequities in underdeveloped settings cannot be solved by education alone, but that education can play an important part. Building in girls the capacity to recognize inequity and strive toward social justice is critical, but this kind of personal agency needs to be acknowledged within the context of the broader social and economic structure in underdeveloped settings.

Study Area and Design

The study area is the Garu-Tempane District in the Upper East Region of Ghana. Prior to 2021, the District had a total population of 130,003, representing 12.4% of the region's population at the time of the study. This has now increased to a combined figure of 158,767 following the division of the district into two (Garu District: 71,774; Tempane District: 86,993), representing 12.2% of the region's total population (GSS, 2021, pp. 64-65). Females account for 52.3% of the District's mainly rural (95%) population. In respect of the population aged 11 years and above, 39.6% are literate whereas 60.4% are non-literate (GSS, 2013). The share of females at the basic school level is higher at 67.6% but decreases sharply to 6.0% as they ascend the educational ladder to the secondary level (GSS, 2014). The share of literate males is 50.1%. With respect to the population aged 3 years and above, 42.6% are currently attending school, 7.4% have attended school in the past and 50.0% have never attended school. The share of the population with basic education is 15.8%. Females (aged 20-24) marry early, and so the expectation is that school retention for girls will be lower compared to boys. The proportion of married females with no education is 88.7%. Cultural practices which relegate girls to practical-interest roles such as being in the kitchen, and perceptions of women as only good for marriage and therefore not worthy of investments because such investments will only yield benefits for their husbands and not the home of their parents, deprive girls of the right to formal education (Alhassan, 2013; GSS, 2014). The Garu-Tempane District thus provides a suitable context for examining the effectiveness of girl child education interventions. In this regard, an evaluative study, employing a quasi-experimental design through a combination of reflexive and constructed controls, and anchored on a concurrent mixed methods design, is applied in this paper.
Materials and Methods

The study employed a mixed methods design, specifically the concurrent mixed methods design, in which both qualitative and quantitative data were collected. The qualitative data provided deep insight into, and meaning for, the quantitative data. Female students in Junior High Schools (JHS 1-3), head teachers, CAMFED's program coordinators, teacher-mentors, parents, and District Education Office focal persons from the Garu-Tempane District formed the target population for this investigation. A multistage sampling procedure enabled the identification and selection of respondents at the school level. The first stage of the sampling procedure involved stratifying the schools into beneficiary and non-beneficiary schools. Beneficiary schools refer to schools whose pupils were supported by CAMFED's intervention program, while non-beneficiary schools are schools that did not have pupils under CAMFED's support. Simple random sampling was then applied in the second stage to select two beneficiary (out of 11 beneficiary schools) and two non-beneficiary (out of 34 non-beneficiary schools) schools. The 11 beneficiary schools had a beneficiary girl population of 314 in the District, while the overall girl child population (JHS 1-3) in the District was 2,802 (Garu-Tempane District Education Directorate, 2017). Miller and Brewer's (2003) formula for sample size determination, n = N / (1 + N(e)^2), where n = sample size, N = sample frame and e = error or significance level, was applied to compute the sample size for girls. Given that N = 2,802 and e = 5% = 0.05, then n = 2802 / (1 + 2802(0.05)^2) = 350. Through proportionate sampling, we distributed the sample size among the four selected schools. The details of the distribution are presented in Table 1. Using the list of girls in each of the schools included in the study, systematic sampling enabled the selection of every fourth girl child in each school. Head teachers and teacher-mentors of the sampled basic schools were purposively selected due to their special knowledge of the CAMFED program. Head teachers in basic schools perform both academic and administrative roles, and possess the knowledge of, and access to, information on pupils' retention and progression rates. Teacher-mentors also produce reports for CAMFED on their activity outcomes, coupled with their role as program committee members at the school level. Additionally, CAMFED program officials and District Education/Assembly focal persons, who are considered to have in-depth knowledge of CAMFED's program activities and their effects, were purposively selected.
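To make the sampling computation concrete, the following minimal sketch reproduces the Miller and Brewer (2003) calculation and a proportionate allocation step. The per-school girl populations used here are hypothetical placeholders (the paper's actual distribution is in its Table 1); only the total of 2,802 and the 5% error level come from the text.

```python
def miller_brewer_n(N: int, e: float) -> int:
    """Sample size n = N / (1 + N * e**2), per Miller and Brewer (2003)."""
    return round(N / (1 + N * e ** 2))

n = miller_brewer_n(2802, 0.05)  # -> 350, matching the paper

# Proportionate allocation of n across the four selected schools.
# These per-school populations are hypothetical; they merely sum to 2,802.
school_girls = {"School A": 900, "School B": 750, "School C": 640, "School D": 512}
total = sum(school_girls.values())
allocation = {s: round(n * g / total) for s, g in school_girls.items()}
print(n, allocation)
```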
Forty parents, comprising 20 parents of girl child beneficiaries and 20 parents of girl child non-beneficiaries, were also selected for the study. Systematic sampling was applied such that every ninth girl child in each school was used to select the corresponding parent for interview. Parents form part of the stakeholders in CAMFED support programs and hence their views are considered critical to understanding the impact of the interventions. In all, 175 beneficiary girls, 175 non-beneficiary girls, two head teachers, two teacher-mentors, 40 parents, two CAMFED program officials, one District Education Officer and an Assembly Member formed the total sample size of 398 for the investigation. Female students (both beneficiaries and non-beneficiaries) and their parents were administered semi-structured questionnaires, while the rest of the respondents were interviewed as key participants. Cross-tabulations and Chi-square tests were used to analyze the quantitative data, while thematic analysis was applied to the qualitative data.

Demographic Characteristics

The demographic information on beneficiary and non-beneficiary students is presented in Table 2. The data shows that students come from large families (about 71% had at least four siblings), and that the majority of parents (of both beneficiaries and non-beneficiaries) are unemployed and live on subsistence farming. A sizeable proportion (61%) of parents of beneficiary girl children were married, while a significant proportion (75%) of parents of non-beneficiary children were separated. About 74% of parents of non-beneficiary girl children were divorced. These revelations raise questions about the selection criteria for beneficiaries of CAMFED's support packages. This is hinged on the belief that female students coming from strained family backgrounds (separated or divorced) may be penurious compared to those coming from married family backgrounds. CAMFED employs beneficiary selection criteria that privilege brilliance (measured by scoring high marks in school) over need, and this may have sieved off students who are needy but may not be performing well because of the effect of a separated/divorced family background. A review of the selection criteria that prioritizes ''need'' could cure this disproportionality in access.

The educational level of parents and guardians was found to be generally low. Half (50%) of the parents (beneficiaries and non-beneficiaries) had no formal education. Dolan et al. (2014) found that parents' level of education had an influence on the education of the girl child. Our interviews with parents showed that in spite of their low level of education, they are striving to educate their children. A parent averred: ''I have not been to school; however, I can see the benefits of education. These days if you are not educated you will find it difficult to cope in all areas of life. As a result, most parents including me are making all the necessary efforts to educate our children including the girl child'' (Interview with parent respondent III, April 12, 2019). The foregoing indicates that the benefits of education for children of both sexes are multiple, and this is not lost on parents, as many are taking steps to ensure their children get educated.
Girl Child Enrollment, Retention and Progression

Examining the trend in girls' enrollment, retention and progression in basic schools in the District is a core objective of this study. A careful scrutiny of these access indicators is critical in determining the effect of the support interventions offered by CAMFED. Table 3 presents enrollment statistics for the four sampled schools from 2013 to 2018. Results from the table reveal increases in enrollment of the girl child at the JHS level for the selected beneficiary schools since the introduction of the CAMFED support scheme in 2016. The interest here is in female gross enrollment, which refers to the total number of female students registered in, and attending, a particular level of the education system regardless of age. Prior to the support scheme, girl child enrollment into JHS 1 for the selected schools showed a chequered trend, rising from 197 in 2013 to 213 (8.1%) in 2014 and then sharply declining to 111 (47.9%) in 2015. However, since the scheme's commencement, enrollment figures for JHS 1 have shot up rapidly, rising to 331 (198.2%) in 2016 from the preceding year's figure of 111. This further rose to 432 (30.5%) in 2017 and then to 502 (16.2%) in 2018. In fact, whereas prior to the intervention boys outnumbered girls in enrollment at the JHS level, the picture changed in 2017 with the females outnumbering their male counterparts, as can be seen from the enrollment data for JHS 1 in Table 3.

An analysis of students' retention rate shows not only that the females are staying in school but that there is a concomitant increase in their numbers as they move from one grade to the next higher grade. Using 2016 enrollment as the base year figures (the CAMFED school intervention program commenced in 2016), we compute the retention rate from Table 3 as the number of students enrolled in JHS 3 in 2018 expressed as a percentage of JHS 1 enrollment in 2016, which gives 153.8%. The retention rate signals that as females progress, their numbers increase. This abnormal increase is explained by an interview participant as follows: ''As a result of CAMFED's intervention, many parents and families caused their wards to be transferred to CAMFED supported schools thus swelling up their numbers'' (Interview with Head Teacher, 2019). Progression rates were also found to be above 100%; the sketch below reproduces these calculations.
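As a check on the arithmetic, the following minimal sketch reproduces the retention and progression rates cited here and in the next paragraph. The JHS 1 (2016) figure is reported in the text; the JHS 2 (2017) and JHS 3 (2018) counts are back-calculated from the reported rates and are therefore approximate, since Table 3 is not reproduced in full.

```python
# Enrollment counts: JHS 1 (2016) is reported in the text; the other two are
# inferred from the reported progression rates (hypothetical exact values).
jhs1_2016 = 331
jhs2_2017 = 389   # ~117.5% of JHS 1 (2016)
jhs3_2018 = 509   # ~130.8% of JHS 2 (2017)

retention = jhs3_2018 / jhs1_2016 * 100      # cohort retention, 2016-2018
prog_1_to_2 = jhs2_2017 / jhs1_2016 * 100    # progression JHS 1 -> JHS 2
prog_2_to_3 = jhs3_2018 / jhs2_2017 * 100    # progression JHS 2 -> JHS 3
print(f"retention {retention:.1f}%; progression {prog_1_to_2:.1f}%, {prog_2_to_3:.1f}%")
# -> retention 153.8%; progression 117.5%, 130.8%
```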
For instance, the progression rate from JHS 1 to JHS 2 in 2017 is 117.5%, and that from JHS 2 to JHS 3 in 2018 is 130.8%. The rising progression rates are supported by the views of two head teachers, namely that parents who have relations in cities and towns outside the district, and had sent their children there for better education, started to withdraw them from the city so they could come back home and benefit from CAMFED's girl child support package. Stakeholders generally attributed the increase in enrollment to CAMFED's intervention scheme in the District, a view consistent with that of Kwapong (2009) and Kanyoro (2007), who found a positive relationship between girl child education support and enrollment of the girl child. An interview participant observed as follows: ''The fact that the girl child is provided support with shoes, uniforms, school and exam fees, books, and stationery have encouraged parents to ensure that their girl children do not drop out of school. This has helped to increase the enrollment of the girl child in school compared to the situation before the introduction of the intervention in the District'' (Interview with Head Teacher, April 17, 2019). Another interview participant, commenting on the effect of the social support of the female mentor on girl child enrollment and progression, noted: ''CAMFED helps to keep girls in school by providing them with the social support of a trained female mentor in every partner school. The female mentor is available for the students to share their intimate problems with her for counseling and support. The female mentor also serves as an inspiration to the girl child'' (Interview with Teacher-mentor I, April 18, 2019). The CAMFED approach, which involves working with all stakeholders who matter in girl child education, has ensured the creation of the requisite environment at school, the home and the community for girl child education. The teacher-mentor explains further: ''CAMFED works with teachers, traditional leaders, parents and health and education officials with the aim of increasing retention and progression in schools. This approach has exposed all the actors involved to what is required of all at the various levels (school, home and community) to enhance girl child education'' (Interview with Teacher-mentor II, April 18, 2019).

A beneficiary girl child had this to say on the effect of the support on her education: ''We the beneficiaries are the envy in our schools and communities. Many of our colleagues are striving to be like us. As a result, we work hard to ensure that we are continuously on the program. Through the education and counseling provided by the program, we have also come to realize the benefits of education for our future. As a result, we now take education more seriously'' (Interview with beneficiary girl child I, April 15, 2019).
Another girl child commented on the effect of the program on their parents' attitudes toward education, which has helped to achieve the high enrollment and progression: ''Our parents are now eager that we don't fall out of the program. They no longer ask us to stay at home and help with chores; rather, they ensure that we go to school every day. Besides, their focus is gradually shifting away from giving us out for marriage'' (Interview with beneficiary girl child V, April 15, 2019). Reinforcing the motivation to go to school, an Assembly Man noted: ''They have no excuse not to go to school and to do well to move to the next class. They are provided with sandals, uniforms, books among others. They consider themselves privileged to get this support and will do well to make our schools and parents and communities proud'' (Interview with Assembly Member, May 7, 2019). A parent similarly recounted the benefits of school enrollment of the girl child in his household. Such support schemes have been found to improve girl child enrollment (Tumbo & Mutelo, 2010) while bringing about enormous changes to the lives of girls and their families, communities and regions (Mak et al., 2010).

Influence of CAMFED's Intervention on Sociocultural and Institutional Factors Affecting Girl Child Education

This section examines the influence of CAMFED's intervention on the institutional and sociocultural factors shaping girl child education in the Garu-Tempane District from the perspectives of all the actors involved in girl child education. Table 4 presents the perspectives of both beneficiary and non-beneficiary girl children on the influence of CAMFED's interventions on customs and traditions in relation to the formal education of the girl child. Significant positive changes in attitude are observed after the support kicked in: from two respondents who, prior to the intervention, held the view that both girls and boys should be given equal opportunity to formal education, to 79 respondents agreeing to the same after the intervention. Major changes are equally observed in the views of respondents who initially believed in limiting the girl child to kitchen and marriage responsibilities. A beneficiary girl child revealed: ''Before the coming of CAMFED, my parents got me convinced I was only useful for the kitchen and marriage, but after CAMFED intervened, their views have changed and so are mine, and now they always want me to be in school'' (Interview with beneficiary girl child X, April 17, 2019). Another girl said: ''My parents are very happy with the support. They now always want me to be in school like my male siblings'' (Interview with beneficiary girl child XV, April 17, 2019). However, with respect to financing girl child education at the JHS level, no changes are observed in the views of beneficiary girls before and after the intervention. In like manner, the view that vocational training is the preserve of girls still lingers on very strongly, even among beneficiary girls in the study area, with 20 girl child respondents holding on to this view both before and after the intervention. The influence of CAMFED's intervention on perceptions such as ''girls perform more household chores than boys'' and ''boys are seen as superior to girls'' was likewise insignificant. This brings into question the efficacy of CAMFED's intervention in addressing the strategic gender needs/transformative agency of women revolving around their subordination and restriction (Boyd, 2002; Moser & Moser, 2005; Walter, 2011), shaped by sociocultural and institutional structures.
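The Chi-square testing mentioned in the methods section can be illustrated on attitude data of this kind. The sketch below is a minimal example, not the authors' actual analysis: the ''agree'' counts (2 before, 79 after) come from the text, while the ''disagree'' counts are hypothetical values chosen to complete a 2x2 table for the 175 beneficiary girls.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation for one attitude item ("girls and boys
# should have equal opportunity to formal education") among 175 beneficiary
# girls. Agree counts are from the text; disagree counts are assumed.
observed = [
    [2, 173],   # before the intervention: agree, disagree
    [79, 96],   # after the intervention:  agree, disagree
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A very small p-value here would indicate the before/after shift in
# attitudes is statistically significant.
```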
An opinion leader commented on the influence of CAMFED's intervention on sociocultural factors affecting girl child education as follows: ''I will say the intervention has made some progress even though a lot still needs to be done. This is because it takes time to achieve changes in the mind-sets of people. I am convinced that the seed that has been sown will yield bountiful fruits in the future'' (Interview with Assembly Member, May 7, 2019). The influence of CAMFED's support initiatives on teachers' approach to gender-based teaching and learning, presented in Table 5, shows that the number of respondents who indicated that they were given equal opportunity to participate in classroom activities, such as asking and answering questions, rose from 78 to 156 after the introduction of the program. Institutional structures and school-related factors which exhibit biases against girls' education do affect the education of the girl child (Acquaye, 2021; Akinyi & Musani, 2015; Ocho, 2005). In the instant case, the decline in the exhibition of bias by teachers toward females in class is impacting positively on their learning experiences.

Conclusion and Recommendations

We set out to investigate the influence of CAMFED's empowerment approach on girl child enrollment, retention and progression in a deprived district in Ghana, one deficient in the basic requirements needed to enable teaching and learning to proceed uninterrupted. We also examined sociocultural and institutional impediments to girl child education in the district. CAMFED supports girls who routinely drop out of school, due mainly to poverty and deprivation, by absorbing all the direct educational costs for girls, including uniforms, sandals, school and examination fees, books and stationery. This is augmented by the provision of trained female mentors as social support for all partner schools in the intervention program.

Our results show that the intervention targets female children from predominantly deprived families, the majority of whom are engaged in subsistence farming and are largely illiterate. But we also note that children from separated/divorced family backgrounds are not prioritized in the intervention scheme. Nevertheless, we observed a rapid increase in gross enrollment of the girl child at the JHS level for the selected beneficiary schools beginning in 2016, and a steady rise thereafter. An analysis of students' retention rates indicates that females are not only staying in school but that there is a concomitant increase in their numbers as they move from one grade to the next higher grade. Our content analysis reveals that this concomitant increase alongside retention is due mainly to the transfer of students from other schools to CAMFED intervention schools at the request of students' parents, so as to enable the transferred students to benefit from CAMFED's support. Progression rates show similar trends of improvement. For instance, the progression rate from JHS 1 to JHS 2 in 2017 is 117.5%, and that from JHS 2 to JHS 3 in 2018 is 130.8%. Our results thus support the efficacy of incentive packages in improving girl child enrollment, retention and progression at school (Kanyoro, 2007; Kwapong, 2009; Mak et al., 2010; Jukes et al., 2008; Tumbo & Mutelo, 2010). But we also note that there may be social costs, such as failure to reintegrate, lack of family life, and loss of social recognition on the part of the girl child, if these interventions are not embedded in the values, traditions and practices of Ghanaian communities.
With respect to the influence of CAMFED's intervention on customs and traditions in relation to the formal education of the girl child, we observed positive changes in the attitudes of teachers in respect of equal treatment for both boys and girls after the support kicked in. We also observed major changes in the views of parents and girl child respondents who initially believed in limiting the girl child to kitchen and marriage responsibilities. Given that institutional structures and cultural practices are relatively stable and enduring, a rapid change is not expected in the short term. Sustained education, campaigns and advocacy may be required to eventually bring about the desired change. With respect to financing the education of the girl child, and restricting girls to vocational training, we observed ambivalence in the attitudes of respondents. This calls for intensified education and engagement of communities and families by schools and districts with the support of CAMFED.

Given that the lack of female teachers as role models (Lake et al., 2015) functions as an institutional context factor limiting girl child education, the use of female role models in CAMFED's intervention programs has greatly influenced teachers' adoption of gender-based approaches to teaching and learning. This has been bolstered by CAMFED's participatory approach to program implementation, which nests all relevant actors at the community (chiefs, religious leaders, assembly members) and institutional (teachers, school administrators and policy makers) levels in its actor engagement configuration. As noted by Pawson (2003), there is now consensus on the role of context in the outcomes of programs. That is, the outcomes of programs are not only dependent on the program theory or theory of change but largely on their context, which is the central argument of realist evaluation (Monkman, 2011). We conclude by noting that girl child education interventions must address the sociocultural bottlenecks inhibiting the education of the girl child if they are to produce sustainable gains. This could be achieved through active engagement with all relevant actors at the operational level to identify and capture the contextual factors shaping behavior and attitudes in the design and implementation of program interventions. CAMFED's beneficiary selection criteria should be reviewed to prioritize access for girl children coming from separated/divorced homes. Whereas the findings of this study reveal the effectiveness of CAMFED's intervention in creating an environment supportive of the enhanced enrollment and progression of the girl child at the basic level, its effect on final outcomes such as improved health and academic performance has not been explored. Thus, further research could focus on these areas to ascertain CAMFED's program impacts on quality and physical indicators of education in intervention districts.

Table 1. Sample Size Distribution.
Table 3. Enrollment for the Period 2013 to 2018.
Table 4. Influence of Customs and Traditions on Formal Education of the Girl Child.
Table 5. Influence on Teachers' Approach to Gender-Based Teaching and Learning.
An experimental study of two grave excavation methods: Arbitrary Level Excavation and Stratigraphic Excavation

The process of archaeological excavation is one of destruction. It normally provides archaeologists with a singular opportunity to recognise, define, extract and record archaeological evidence: the artefacts, features and deposits present in the archaeological record. It is expected that when archaeologists are excavating in a research, commercial or forensic setting, the methods that they utilise will ensure a high rate of evidence recognition and recovery. Methods need to be accepted amongst the archaeological and scientific community they are serving and be deemed reliable. For example, in forensic contexts, methods need to conform to scientific and legal criteria so that the evidence retrieved is admissible in a court of law. Two standard methods of grave excavation were examined in this study with the aim of identifying the better approach in terms of evidence recovery. Four archaeologists with a range of experience each excavated two similarly constructed experimental 'single graves' using two different excavation methods. Those tested were the arbitrary level excavation method and the stratigraphic excavation method. The results from the excavations were used to compare recovery rates for varying forms of evidence placed within the graves. The stratigraphic excavation method resulted in higher rates of recovery for all evidence types, with an average of 71% of evidence being recovered, whereas the arbitrary level excavation method recovered an average of 56%. Neither method recovered all of the evidence. These findings raise questions about the reliability, and so the suitability, of these established approaches to excavation.

Background

The process of digging a grave can be considered as a single event of rapid deposition or a 'time capsule' due to the relatively short period of time in which the process is undertaken (Greene 1997; Foxhall 2000). The process of backfilling the grave generally results in stability in the position of evidence and the human remains present within the grave structure (Hanson 2004). A grave can be defined as an excavation in the earth for the reception of a corpse (Oxford English Dictionary 2015). As a grave is dug, it 'cuts' the natural and/or man-made layers (strata), which are removed, and the stratigraphic sequence is disturbed. This process results in the formation of a new surface (walls and floor) beneath the ground onto which a body or bodies are placed (Hanson 2004). Subsequently, the removed natural/man-made layers are placed back into the grave structure as a 'fill' over the body. Typically, however, these layers become intermixed during their removal and replacement. Differences form in the colour, texture, chemistry, compactness, volume, water retention, odour, organic content and pH level between the disturbed area associated with the grave structure and the undisturbed natural/man-made layers through which it was dug (Wolf 1986; Killam 2004). These differences enable the archaeologist to define areas of disturbance, allowing for burial locations to be identified and excavated. In normal archaeological fieldwork, the process of excavating a grave is perceived as a simple one. During excavation the grave cut is defined and the grave fill is often found to be a single stratigraphic deposit, and is removed as such, whilst the body is viewed as an artefact (Browne 1975; Hunter 1994).
In practice, the stratigraphy of graves can be much more complex, for example in cemetery contexts where there are multiple interments over time. In forensic contexts, a grave is considered to potentially contain multiple recognisable layers, including those of organic decomposition and additives such as lime that may have been used to assist in concealing the grave (Hunter 1994; Congram 2008). If the original grave fill has later been disturbed, for example by the perpetrator or animal activity, the grave structure may then contain several different cuts and fills (Hochrein 2002). Normal archaeological excavation methods have had to be adapted in the light of the potentially complex nature of recent burials and their forensic investigation (Hunter 1994). This adaptation is largely characterised by processes to establish forensic relevance, limit contamination, and record stratigraphy using spits and sections across the grave, as well as the retention of grave fills for subsequent detailed analysis. However, as with field archaeology generally, the methods utilised and published by forensic archaeologists/anthropologists vary extensively. They have evolved to their current state according to the archaeological practices advocated by practitioners and professional bodies in their country of origin, and the inherited traditions present in each. Consequently, different excavation methods and recording systems are used by different archaeological practitioners in accordance with their individual preferences, which are largely formed by the site types from which a practitioner has gained their academic training and experience (Carver 2009; Carver 2011:107). Two principal methods, the arbitrary level excavation method and the stratigraphic excavation method, have developed through different traditions and archaeological needs.

Arbitrary level excavation

As part of their academic training, physical anthropologists and archaeologists may receive training in archaeological field schools, many of which emphasise excavation by arbitrary levels as a standard approach. This method is commonly utilised in test pitting in professional archaeological assessments, contributing to the wide-scale adoption of the arbitrary level excavation method in forensic casework. Practitioners using this method have published technical papers regarding the forensic application of archaeological techniques, and as a consequence, the arbitrary level excavation method has come to be regarded as a standard excavation method for forensic investigations (Ramey-Burns 1996; Crist 2001; Komar and Buikstra 2008). During the arbitrary level excavation of a grave, soil is removed in a succession of predetermined levels, usually 0.05 m, 0.10 m, or 0.20 m in depth (Hester 1997:88), over an arbitrary but carefully measured area, usually determined by the perceived size of the grave at surface level. As evidence is identified, the earth 'matrix' that surrounds it is removed, leaving each item upon a soil 'pedestal'. These items are measured in situ and only removed when they are deemed to be hindering the progress of the excavation (Joukowsky 1980; Brooks and Brooks 1984; Ramey-Burns 1996; Tuller and Đuric 2006; Connor 2007). During this process, soils that comprise the deposits backfilling the grave, as well as the surrounding natural/man-made strata through which the grave was originally dug, are removed in spits across the defined area of excavation.
In order to provide access to the burial, trenches are often dug around the remains, resulting in the removal of the grave walls (Joukowsky 1980; United Nations 1991; Godwin 2001:9). Some practitioners advocate against the removal of the grave walls, however, as these surfaces may be of assistance when interpreting the method by which the grave was constructed, and assist investigators in establishing links between the crime scene and the perpetrator(s) (Powell et al. 1997; Hochrein 2002; Dupras et al. 2006; Connor 2007). The arbitrary level excavation method has several perceived advantages, including: spatial and depth control of soil removal and artefact recovery; easier access to the remains and artefacts from different angles; dynamic photographs can be taken of both the human remains and artefacts; it assists with potential water drainage issues that can damage the integrity of the grave structure; and it limits the time spent standing on a grave structure of limited size, which could damage the human remains and artefacts (Spennemann and Franke 1995; Pickering and Bachman 1997; Godwin 2001; Hochrein 2002; Tuller and Đuric 2006). Notionally, less archaeological skill and experience are required to utilise this method, as spits can be easily measured and levelled to accurate standard depths. However, there are inherent problems with this method, including: the method destroys and ignores stratigraphic interfaces and layers present within the grave; it introduces artificial divisions of deposits and evidence, which can result in evidence retrieved during the process of an excavation having no known stratigraphic origin; it results in the mixing of strata and artefacts from the grave structure (fills and cuts) and the natural strata through which the grave was dug, potentially leading to contamination of soils and artefacts that may pre- or post-date the grave; the grave walls can only be recorded in plan at the interface of each arbitrary level (if distinguishable from the natural strata), which will not always allow for the accurate recording of the grave cut, including tool marks; and pedestalled artefacts may be moved during excavation (Harris 1979, 1989, 2002; Hanson 2004; Hunter and Cox 2005; Komar and Buikstra 2008). Despite these weaknesses, the arbitrary level excavation method continues to have advocates for its application; it has been argued that this is largely because graves normally lack complex stratigraphy and are usually comprised of a singular fill, and therefore the application of arbitrary units is justifiable. The primary emphasis when utilising this method is often upon the recovery of artefacts and human remains, rather than understanding the entirety of the grave formation process. Arbitrary level excavation provides the easiest and most efficient method for meeting this objective (Pickering and Bachman 1997; Haglund et al. 2001). However, it may be necessary to demonstrate that as complete a stratigraphic record as possible has been recognised and excavated, and that the evidence of that stratigraphic record has not been lost, but was recovered and documented. It is a normal requirement that excavation should be undertaken to a standard that allows re-interpretation from the documentation.
Archaeologists may therefore need to demonstrate that they have recorded the basis to accurately interpret the stratigraphic record, recorded the stratigraphic sequence, and can justify the reconstruction of the sequence of human and taphonomic events that occurred at the site under investigation (Harris 1989, 2002; Hanson 2004). It has been argued that the best way this can be achieved is through the use of the stratigraphic excavation method (Barker 1987; Harris 1989; Hochrein 2002; Hanson 2004).

Stratigraphic excavation

When using this method, separate archaeological stratigraphic contexts are identified, excavated individually in sequence, and recorded as individual stratigraphic phenomena. The entire grave is viewed as an archaeological feature. Thus the fills and interfaces are normally revealed and recorded in their entirety, and the grave walls may be exposed and maintained throughout the entire excavation process. This allows for the retention of tool marks and geotaphonomic evidence present on the surfaces of the grave walls and grave floor (Hochrein 2002). There are several perceived advantages to stratigraphic excavation, including: three-dimensional recognition, assessment and recording of each stratigraphic context; the revealing of interfaces between deposits; chronological recovery of evidence by context; spatial and depth control of soil removal and artefact recovery; prevention of contamination between stratigraphic contexts; dynamic photographs can be taken of both the human remains and artefacts reflecting their chronological deposition; and removal of deposits in a manner that records the sequence of deposition to aid in the reconstruction of events. The main problems with this method are: without tents and other precautions, water can collect in the grave; excavation in limited spaces and at depth can limit access to the human remains (Tuller and Đuric 2006); difficulties in recognising individual stratigraphic contexts, especially interfaces; the method is more complicated to perform than other methods; and the method may be perceived to slow down excavation. In normal archaeological excavation, differences in interpretation, or the implications of mistakes made during the excavation and interpretation of archaeological sites, are not seen as inherently problematical. However, differences in interpretation, misinterpretations, or the destruction or loss of evidence during the excavation process in forensic contexts have potentially greater ramifications. The results from such work have significant legal, political, social and media impact. Loss of evidence may impact investigations and prosecutions, and in some countries, for example Iraq, there are legal penalties (fines and imprisonment) for evidence loss (Crist 2001; Law on the Protection of Mass Graves 2006). It is therefore prudent that excavation methods are assessed and tested to determine their suitability. Establishing whether there may be error rates, variation in results and impacts on interpretation depending on the methods used is a sensible scientific aim. Given that each archaeological site is unique, the question of how excavation methods can be compared raises issues about how to approach experiments to assess this. An experiment was designed so that the arbitrary level excavation method and stratigraphic excavation method could be tested in a controlled environment.
This would compare evidence recognition, recording and recovery rates for typical evidence forms present within a grave site when excavated by participating archaeologists. The timeframes of the experiment (concerning the creation and excavation of the artificial features) matched those seen in forensic casework, where there is often a limited time between burial and recovery.

Experimental design

In order to allow for the objective comparison of the stratigraphic excavation method and the arbitrary level excavation method, it was decided that artificial features with similar properties to single graves would be utilised. They were designed to be as identical as possible to each other in regards to their location and properties: shape, size, archaeological contexts and evidence. The aim was to minimise the number of variables that could affect evidence recovery, and to standardise the structure and content to ensure that each method could be directly compared. During this experimental study, evidence was defined as: artefacts, tool marks, and stratigraphic contexts (deposits/fills, cuts/interfaces). The 'graves' were created using a mechanical digger. This was deemed justifiable as mechanical diggers are commonly used to dig graves (Hunter and Cox 2005). Through using a mechanical digger, the researchers were able to impose standard dimensions and also distinctive tool marks on the walls and base of the graves, which, if identified, would assist the archaeologists in their interpretation of how the grave was constructed. Each grave measured 1.20 m in length, 0.75 m in width, and 0.85 m in depth. Approximately 2.0 m was left between each experimental grave to ensure that an adequate working space was available for the excavations to be undertaken. The experimental graves did not contain any form of skeletal remains, as this experimental study was not concerned with the osteological recovery potential of the two excavation methods, something explored by Tuller and Đuric (2006). Morse et al. (1976a, 1976b) discuss how they created 'graves' with no skeletal remains for the purposes of training investigators in forensic archaeological excavation procedures. Therefore, the researchers classified these cut features as graves despite the absence of skeletal remains, but with the expectation among participants that remains were present. The artefacts (Figures 1.0 and 2.0) that were included in the graves were chosen to represent items typically found in clandestine burials. In addition, it was determined that these items would preserve during the short time between their burial and subsequent excavation (Janaway 1996; Janaway 2002). These items were also common, easily identifiable items, and thus would be recognisable to participants. They also varied in size, composition and shape, enabling the researchers to determine if excavation (by either method) had a tendency to recover artefacts of a certain size, composition or shape. Several soil fills were used to backfill each grave cut. A secondary cut was made into these fills, which was itself filled. Artefacts were placed within these fills and on interfaces (Figures 1.0 and 2.0). The depth and distribution of each stratigraphic context was matched to be the same in each grave. All artefacts were placed in the same location in each context in all graves, and the locations were recorded in three dimensions.
Moreover, according to scholars such as Hanson (2004) and Hunter and Cox (2005), the arbitrary level excavation method can result in the mixing of artefacts from a grave fill with those present within the natural undisturbed strata through which the grave was dug, thus resulting in the collection of evidence unrelated to the grave creation events. The stratigraphic excavation method can also lead to the over-excavation of contexts as the excavator seeks to define interfaces and the edges of deposits. In light of these observations, the researchers created incisions into the natural undisturbed strata 0.15 m beyond the edge of the grave cut, into which a key, a marble and a coin were placed. Such items are ones that could easily be lost at the site prior to or after the grave's creation. Through the inclusion of such evidence in the experiment, the researchers could assess whether excavation would result in extraneous evidence being retrieved. In all, eleven distinct horizontal deposits were added to each grave. Although the presence of multiple perfectly horizontal deposits is, as Praetzellis (1993:18) states, the "exception rather than the rule" in archaeological sites, following this procedure made the exact replication of each grave and the matched positioning of the contents achievable, accurate and efficient. One potential effect of horizontally placed deposits is that the excavated arbitrary 0.10 m levels could coincide with the horizontal deposit interfaces within the grave fills. This may favour recognition of evidence during arbitrary level excavation. The stratigraphic sequence was made more realistic and less uniform by varying the depth of deposits between 0.05 m and 0.10 m. Moreover, the inclusion of the internal feature and associated fill cutting the primary fills of the grave, and two additional cut features and associated fills in the floor of the graves, allowed both methods to be compared through the potential to reveal a number of vertical and horizontal interfaces. Additionally, all graves were left exposed to the elements for seven days. This was intended to produce the typical geotaphonomic phenomenon of surface cracking (Figure 4.0). In experiments conducted by Hochrein (2002:55), it was noted that such phenomena can be recovered during excavation and can be indicative of a grave feature having been prepared in advance of a homicide event, thus providing a sign of premeditation. To further this concept of a pre-prepared grave, leaf litter from the surrounding area was placed into the bottom of the grave. As Hunter and Cox (2005:109) note, the presence of vegetation in the bottom of graves can be indicative of a grave that has been left open for a time before infilling. The inclusion of this vegetation layer disguised the 'true' grave floor, providing a qualitative test for the archaeologists during the excavation experiment: to see if they excavated the grave until the floor of the grave or 'sterile' deposits were reached, as recommended in the forensic archaeological excavation literature (Hunter and Cox 2005). Each grave was covered with loose soil and turf so that visually the general outline of each grave was not visible at surface level. The graves were set into natural stratigraphy of leached grey and orange sand with iron panning, over gravel layers. The fills used in the graves were formed from the material removed during the machine excavation, except for the layer of leaf litter.
Other factors taken into consideration were that each archaeologist would be excavating two replica graves, each using a different method, and that multiple archaeologists would be excavating their graves at the same time, with the potential to overlook or communicate with neighbouring excavators. To prevent the former factor from being an issue, the graves were arranged in sets of two, which were 180° mirror images of one another. This was so excavators would not recognise the properties of the second grave they excavated compared to the first. In addition, at no point were the archaeologists informed that the graves were identical in terms of dimensions and content. Moreover, from the findings of previous researchers such as Harris (1979, 1989), Hanson (2004), Tuller and Đuric (2006), and Komar and Buikstra (2008), it was evident that the arbitrary level excavation method could be expected to intercut the different stratigraphic contexts contained within the graves and destroy certain forms of evidence, including the grave walls and tool marks. Therefore, each archaeologist was told to use the arbitrary level excavation method for their first grave excavation. Although this represents a clear bias in the organisation of the experiment, it was deemed justifiable as it would assist in reducing the overall impact of participants potentially recognising similarities between their graves. To combat the latter factor, forensic tents were placed over the graves to limit views whilst they were excavated, and tarpaulins were placed over the graves when the site was left. The participants also agreed not to talk with one another until the experiment had finished. The participants were self-selecting volunteers, but were required to have had varying experience in the excavation of grave features. Archaeologist 1 had gained seven days of archaeological excavation experience and had excavated one grave previously. Archaeologist 2 had gained three months of archaeological excavation experience and had excavated two graves previously. Archaeologist 3 had obtained two and a half years of archaeological excavation experience and had excavated five graves previously. Archaeologist 4 had six years of archaeological excavation experience and had excavated over 100 graves.

Excavation and recording equipment

Participants were able to select excavation and recording equipment from the following: mattock, shovel, digging spade, buckets, trowel, hand shovel, sieve, tape measures, ranging poles, scales, line level, plumb bob, string, photographic board, cameras, drawing board and permatrace. For the arbitrary level excavation method, the archaeologists were provided with a recording pack containing spit-level forms, unit-level forms, an artefact register, a photographic register, a drawing register, and a human remains recording form. For the stratigraphic excavation method, the archaeologists' recording pack contained context recording forms, an artefact register, a photographic register, a drawing register, and a human remains recording form. Observation sheets were provided to the excavators so they could describe the process they were undertaking.

Excavation procedure

Method guidance documents were provided for the arbitrary level excavation method and were adapted from the excavation guidelines outlined in Ramey-Burns (1996) and Connor (2007) (see Appendix 1).
The use of Ramey-Burns' method guidelines was deemed appropriate as she had also contributed to the formation of the United Nations excavation guidelines (1991), which have been used globally during international investigations of human rights violations. The participants were briefed on the method order to employ and that they were excavating graves. Following provision of the aforementioned guidance and the recording forms, the archaeologists defined the outline of the grave cut; they then delineated an area larger than the grave, 3.0 m in length by 2.0 m in width, using pegs and string. Each archaeologist proceeded to remove the overlying turf and the first 0.10 m spit using the available tools. Once the first 0.10 m spit was removed, the archaeologists continued to excavate in arbitrary 0.10 m levels. When an artefact was identified, its location was recorded in three dimensions and its spit-level noted; it was then left upon a soil pedestal. All evidence and associated pedestals were left in place until the individual excavator decided that they were hindering the progress of the excavation. The evidence was then removed and the pedestal excavated. All soil removed during the excavation of each spit was kept separate from that of other spits and was sieved. The final spit, 0.80 m to 0.90 m, took the archaeologists to the depth of sterile soil (see Figure 5.0). The method guidance documents provided for the stratigraphic excavation method were adapted from the excavation guidelines outlined by the Museum of London Archaeology Service (1994), Hanson (2004), and Hunter and Cox (2005) (see Appendix 2). Following provision of the aforementioned guidance and the recording forms, the archaeologists defined the outline of the grave cut. The archaeologists then excavated each fill/deposit they observed within the grave and maintained the boundaries of any interfaces identified. Each of the interfaces and fills/deposits recognised was treated as a unique context, and the fills/deposits were stored and sieved separately. When an artefact was identified, its three-dimensional location was recorded and its context noted. The grave walls were kept intact throughout the entire excavation process (see Figure 6.0). Throughout the experimental excavations, the archaeologists were observed and their actions documented using voice notes, written notes and photographs. The researchers ensured that they did not communicate with the archaeologists during experimental testing so as to minimise any potential biases.

Results and Discussion

The results presented in this paper focus on the recovery of archaeological evidence. Results relating to the recording and interpretation of archaeological evidence will be reported elsewhere.

Artefacts

Each of the four participants excavated one grave using the arbitrary level excavation method and then another using the stratigraphic excavation method. No participant recognised that the graves had identical properties, or that they were a 180° mirror image of each other. All participants used the tools and materials available. They did not communicate with each other. They provided feedback on their excavation, the methods employed and issues encountered by completing observation sheets as the excavations progressed. Using the arbitrary level excavation method, an average of 64% of artefacts was recovered (Table 1.0). The rate of retrieval varied between 55% and 77% amongst the archaeologists (Table 1.0).
Artefacts were found both in the locations in which they had been placed and out of situ (where items were moved during excavation). The proportion of artefacts found out of situ varied from 18% to 54% (Table 1.0). There was a distinct correlation between the time that an archaeologist spent excavating and the amount of artefacts found out of situ, with more time spent excavating leading to more artefacts being found in situ. Through observing the archaeologists whilst they were using the arbitrary level excavation method, it was apparent that the recovery of artefacts out of situ can be attributed, in part, to the method itself: when the archaeologists were trenching around the suspected grave cut area in order to create an access trench using a mattock, they inadvertently removed the edge of the grave fill, where the definition between the natural undisturbed strata and the grave fill was less distinct, resulting in some artefacts situated near the edge of the grave cut being knocked out of situ and recovered during sieving. Despite finding artefacts out of situ, the archaeologists were able to reassociate artefacts with the spit from which they had originated and determine their relative depositional sequence. However, all archaeologists failed to identify all of the contexts within the grave structure, and subsequently associated some of the recovered artefacts with the incorrect contexts. The extent to which they were incorrect varied in accordance with the number of contexts correctly identified, with the accuracy of the interpretation of the depositional sequence of artefacts placed into the grave averaging 51%, with a variance rate of 4% (Table 1.0).

An average of 72% of the placed artefacts was recovered using the stratigraphic excavation method, with the total artefact retrieval rate varying between 59% and 82% amongst the archaeologists (Table 1.0). Each of the archaeologists identified artefacts both in the locations in which they had been placed and out of situ (where items were moved during excavation). Artefacts identified out of situ were recovered by sieving individual contexts. The proportion of artefacts found out of situ varied from 0% to 46% (Table 1.0). As found with the arbitrary level method, there was a distinct correlation between the time that an archaeologist spent excavating and the amount of artefacts found out of situ. Despite finding artefacts out of situ, because the archaeologists were using the stratigraphic excavation method they were able to reassociate the artefacts recovered in the sieve with the context (deposit/fill/interface/cut) from which the artefacts had originated. Thus they were able to place these items within the stratigraphic sequence of the grave and determine their relative depositional chronology. However, all of the archaeologists failed to define all of the contexts within the grave structure. They subsequently associated some of the recovered artefacts with the incorrect contexts, making their reconstruction of the stratigraphic sequence and their overall interpretation of the artefact deposition sequence incorrect. However, the extent to which their reconstructions were incorrect varied in accordance with the number of contexts correctly identified, with the accuracy of the interpretation of the depositional sequence of the artefacts placed into the grave averaging 71%, with a variance rate of 38% (Table 1.0).
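The recovery figures above reduce to two simple ratios per excavator. The sketch below shows the calculation, assuming hypothetical raw counts (the paper reports only percentages); the example values happen to reproduce the 77% recovery rate and roughly the 18% out-of-situ share reported as extremes for the arbitrary level method.

```python
# Recovery rate and out-of-situ share from raw counts (counts are hypothetical;
# the paper reports only the resulting percentages).
def recovery_stats(placed: int, in_situ: int, out_of_situ: int) -> dict:
    recovered = in_situ + out_of_situ
    return {
        "recovery_rate_pct": round(100 * recovered / placed, 1),
        "out_of_situ_pct": round(100 * out_of_situ / recovered, 1) if recovered else 0.0,
    }

# e.g. 22 artefacts placed, 14 found in place and 3 more recovered in the sieve:
print(recovery_stats(placed=22, in_situ=14, out_of_situ=3))
# -> {'recovery_rate_pct': 77.3, 'out_of_situ_pct': 17.6}
```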
Extraneous artefacts

As stated earlier, the arbitrary level excavation method could result in the mixing of artefacts from the grave fill with those present in the natural undisturbed strata through which the grave was dug, leading to the inclusion of artefacts unrelated to the grave creation event. The inclusion of a marble, key and coin outside the grave boundary, within the natural undisturbed strata, tested this supposition. Whilst utilising the arbitrary level excavation method, two archaeologists recovered extraneous artefacts (marbles and coins) (Table 1.0). The close proximity of these items to the boundary of the grave cut, and the subsequent pedestalling of these items, resulted in these archaeologists being unable to distinguish them as unrelated to the grave structure; they therefore mistakenly categorised these items as artefacts related to the grave. The other two archaeologists did excavate the areas containing the extraneous artefacts, but failed to recognise or locate any of the items. Whilst utilising the stratigraphic excavation method, one archaeologist identified an extraneous artefact (Table 1.0). The recovery of the key occurred whilst this archaeologist was attempting to define the boundaries of the grave cut, and mistakenly overcut the grave edge, leading to the recovery of the key.

Key to Table 1.0: Archaeologist 1: 7 days of archaeological experience; Archaeologist 2: 3 months of archaeological experience; Archaeologist 3: 2.5 years of archaeological experience; Archaeologist 4: 6 years of archaeological experience. SE = Stratigraphic excavation method; ALE = Arbitrary level excavation method; AF = Artefacts; SC = Stratigraphic contexts; TM = Tool marks. *The total evidence recovery is the sum of the artefacts (in and out of situ), stratigraphic contexts and tool marks recovered, expressed as a percentage of the total of the three classes of evidence.

Stratigraphy

Through following the arbitrary level method of excavation, each archaeologist proceeded to remove a 2.0 m × 2.0 m area that included the grave structure and the surrounding natural strata in a series of 0.10 m spits. Excavating using this method, an average of 51% of the stratigraphic contexts was correctly identified (Table 1.0). There was little variance in the number of stratigraphic contexts correctly identified using this method, with the results ranging from 48% to 52% (Table 1.0). All of the archaeologists were able to identify the grave cut, as the grave fill was distinct from the natural undisturbed strata, and were able to measure its dimensions all the way to the base of the grave, as all of the archaeologists' spits coincided with the grave floor. The archaeologists could map the grave cut's dimensions in plan form only, as the method itself destroyed the grave structure as spits were removed. All of the archaeologists failed to identify and define the presence of secondary cuts within the grave structure. This is due to the method itself, as the approach did not require archaeologists to look for or maintain evident interfaces within the grave structure. By not maintaining the limits of interfaces, the archaeologists found it difficult to identify and define the stratigraphic contexts present. Ultimately, this resulted in the archaeologists being unable to define the chronology of activity within the grave structure, with the artefacts that had been placed into the secondary cuts becoming intermixed and grouped with the artefacts retrieved from the primary grave fills.
The failure of all of the archaeologists to identify all of the primary grave fills was the result of the method. Eight of these fills were 0.05 m in depth; as the archaeologists excavated using their 0.10 m spits, they inadvertently excavated two fills within one spit, resulting in the combining and intermixing of the fills and the artefacts contained within them.

Following the stratigraphic excavation method, each archaeologist removed each individual deposit/fill, defined by differences in texture (the size of the soil particles), composition (types of organic and inorganic matter), volume, compactness and colouration. They did so in the reverse order in which they were deposited, from the latest to the earliest. This approach enabled the archaeologists to define the interfaces/cuts present, meaning that any 'cuts' identified during the excavations were defined as a unique event (context), and any fills/deposits contained within them were excavated separately. This allowed the archaeologists to document the different phases of activity present within the grave structure and, in turn, to separate the artefacts recovered into the different stratigraphic phases of deposition present within the grave structure. An average of 71% of the stratigraphic contexts (deposits/fills/interfaces/cuts) were correctly identified whilst using the stratigraphic excavation method (Table 1.0). However, the number of stratigraphic contexts correctly identified varied significantly between archaeologists, from 52-90% (Table 1.0). One archaeologist failed to identify the secondary cut and associated fill at the top of the grave, and three archaeologists did not identify the secondary cuts found at the base of the grave and their associated fills. One archaeologist correctly identified all of the primary fills contained in the grave structure. As one archaeologist was able to identify all of the primary grave fills present and another was able to define all of the secondary cuts and associated fills within the grave structure, it was demonstrably possible to do so. This suggests that the failure by some of the archaeologists to identify and define all of the stratigraphic contexts present in the grave may not have been due to the method itself but to other factors, such as the excavation experience, ability and observation skills of the individual archaeologist.

Tool marks

The arbitrary level excavation method recovered an average of 12.5% of the tool marks present within the grave (Table 1.0). Only one archaeologist identified the presence of a machine bucket tool mark, because that archaeologist's final spit coincided with the grave floor, which maintained the imprint of the bucket teeth. As a result, this archaeologist was able to determine that the grave was created using a mechanical digger. All of the other archaeologists failed to identify the presence of any tool marks. This can be attributed to the method itself, as the arbitrary level excavation method destroyed the grave walls and tool marks while developing access to the grave, leaving three of the archaeologists unable to determine how the grave was constructed.

The stratigraphic excavation method recovered an average of 62.5% of the tool marks present within the grave (Table 1.0). All of the archaeologists were able to identify the presence of machine bucket tool marks, and they were therefore able to discern how the grave was constructed.
Only one archaeologist identified the mattock mark along the grave wall. The failure of the other three archaeologists to identify the mattock mark is not attributable to the method itself but to the observation skills of the individual excavator: by utilising this method, the grave walls were maintained, and therefore all tool marks were potentially recoverable.

Time

There was a significant difference in the number of hours it took to complete the excavation of the graves using the two methods. Using the stratigraphic excavation method, the archaeologists took an average of 11¼ hours to complete the excavation, although the time spent excavating varied between 8-17 hours amongst the archaeologists (Table 1.0). In comparison, using the arbitrary level excavation method, the archaeologists took an average of 19½ hours to complete the excavation, with the time spent excavating varying between 8-31 hours amongst the archaeologists (Table 1.0). The difference in the length of time it took the archaeologists to complete the excavation is largely due to the requirement of the arbitrary level excavation method to remove both the natural undisturbed strata and the stratigraphic contexts contained within the grave itself, resulting in over three times the volume of soil (and more compact soil) needing to be removed in order to complete the excavation: approximately 2.8 m³ of soil was extracted and sieved using the arbitrary level excavation method, compared with 0.8 m³ using the stratigraphic excavation method. This accounts for the greater length of time it took the archaeologists to complete the excavation of the grave using the arbitrary level excavation method. In addition, the need to remove three times the volume of material to excavate the same feature may also compromise recovery rates as a result of increased fatigue.

Experience

In regards to experience, the results indicate that higher levels of experience have a positive impact on overall performance and evidence recovery (Table 1.0). Only Archaeologist 1, who had the least experience, did not follow this trend. This result can be explained by the fact that this participant spent between 6-9 hours longer than the other participants excavating using the stratigraphic excavation method, and 8-23 hours longer than the other participants using the arbitrary level excavation method (Table 1.0). Through using this extra time, the participant was able to identify more evidence than might have been expected given their lack of experience. These findings highlight that time, as well as experience, is a key variable in improving overall performance and evidence recovery in archaeological investigations: the greater the length of time spent excavating and the more archaeological experience gained, the better the overall evidence recovery will be. This has important implications for forensic investigations, where pressure is placed on forensic archaeologists to finish their investigative work as quickly as possible. These results show that such time constraints could reduce the volume of evidence recovered and thus the reliability of the investigative team's findings.

Conclusion and Recommendations

The results gained from this comparative excavation experiment indicate that the stratigraphic excavation method was the most productive in terms of total evidence recovery, with all participants achieving consistently better recovery rates of relevant artefacts, stratigraphic contexts and tool marks.
While both methods recovered the majority of artefacts, participants using the stratigraphic method were consistently more successful at identifying the stratigraphic contexts, especially the interfaces and surfaces. Moreover, when using the arbitrary level method, the participants consistently destroyed both the vertical and horizontal interfaces present. The stratigraphic excavation method also proved to be a faster method of excavation, as the arbitrary level excavation method required a greater volume of soil, including consolidated undisturbed deposits, to be removed.

When using the stratigraphic excavation approach, the archaeologists were better able to determine the method by which the grave was created. Moreover, due to the retention of the grave walls during excavation, the archaeologists were able to identify the surface cracks between the grave walls and fills, as well as define the layer of vegetation at the bottom of the grave. They were therefore able to suggest that the grave may have been left open prior to backfilling. The arbitrary level excavation method also allowed for the recovery of the vegetation layer, but due to the destruction of the grave walls, the archaeologists were unable to identify the surface cracks. Consequently, they could also suggest that the graves had been left open prior to backfilling, but with less certainty than with the stratigraphic excavation method.

The arbitrary level excavation method also resulted in four items of extraneous evidence being recovered. This has implications for the dating of contexts and features. In forensic settings, if items such as these were recovered and thought to be related to the criminal events and the grave structure when they were not, it could result in a considerable waste of investigative time and resources, misdating of the grave feature, the incorrect identification of potential murder weapons, and false leads in identifying perpetrators.

On the basis of the results of this limited experimental study, the stratigraphic excavation method is more appropriate for the excavation of single graves, due to its ability to consistently recover a greater percentage of evidence types than the arbitrary level excavation method, regardless of experience or skill level. While the arbitrary level excavation method is often deemed easier to undertake, and the stratigraphic excavation method is perceived as more complex to employ, all of the archaeologists consistently achieved a better rate of success in recovering all evidence using the stratigraphic excavation method, despite variation in their experience levels.

This small-scale experiment was designed primarily to compare excavation methods applied to the same stratigraphic sequence, with the same tools and background information available to excavators. The experiment did not allow for variation in method on each grave. In this way, the flexibility of approach that archaeologists would normally apply to an excavation was limited; this was deliberate, as the aim was to test each method as a standard approach. The experiment did not have enough participants to assess in depth, or statistically, the impact of the experience and skill of excavators on the implementation of the methods and the rate of evidence recovery. However, the fact that neither method was able to recover all of the evidence contained within the graves in this experiment is of interest, considering the excavators were provided with the tools that would allow all evidence to be found.
Variation in how excavation methods reveal the archaeological record, and in how those methods are employed, should be of concern for all archaeologists. Given the use of excavation methods in criminal casework, it is important that researchers investigate why there is variation and how evidence recovery rates can be improved. Similar research is being undertaken in a range of scientific disciplines that are applied to legal work (NAS 2009). It is evident that there is a lack of standardisation in the application of traditional archaeological excavation methods, even in forensic archaeology (see for example Groen et al. 2015). This is largely a reflection of the lack of standardised practices in commercial archaeological and research-led fieldwork globally; a variety of favoured excavation methods are employed regionally around the world (see for example Carver et al. 2015). These methods have been directly adopted into forensic fieldwork. Where the stratigraphic excavation method and the arbitrary level excavation method are actively used, they are often used exclusively, rather than as part of a range of methods that best suit the nature of the site under investigation.

Any method used during the course of a forensic investigation may be required to be subjected to empirical testing in order to ensure that it is reliable and therefore admissible (Daubert Standards 1993; Rule 702 2000; Hunter and Cox 2005), and it should be presumed that this will be the case. Nevertheless, little research has been conducted to experimentally test archaeological methods and so establish such reliability. The assessment in this small study of these two common archaeological excavation methods should be viewed as a pilot study to test the applicability of this experimental approach, and it has provided useful results with which to develop further studies and stimulate discussion. While it is important for archaeology as a discipline to consider the assessment of excavation methods, and indeed there is an ethical impetus to undertake the best possible practice (see Harris 2006), it is in stringent legal contexts that a lack of empirical testing of methods can affect whether evidence is accepted in a court of law. In order for forensic archaeology to continue to develop as a discipline, it is recommended that researchers continue to experimentally test archaeological excavation methods, as well as recording systems, to ensure that they are suitable for use in forensic practice. There are clear consequences to not doing so.

Excavation procedure:
Step 2: Carefully remove the grave fill, ensuring that identifiable stratigraphic boundaries (grave cut(s), different fills, etc.) are maintained.
Step 3: Complete the removal of the grave fill, exposing the skeleton/body and grave surface for analysis.
Facile Preparation of a Glycopolymer Library by PET-RAFT Polymerization for Screening the Polymer Structures of GM1 Mimics

Commercialized oligosaccharides such as GM1 are useful for biological applications but generally expensive. Thus, facile access to an effective alternative is desired. Glycopolymers displaying both carbohydrate and hydrophobic units are promising materials as alternatives to oligosaccharides. Prediction of the appropriate polymer structure as an oligosaccharide mimic is difficult, and screening of the many candidates (a glycopolymer library) is required. However, repeating the polymerization manipulation for each polymer sample to prepare the glycopolymer library is time-consuming. Herein, we report a facile preparation of a glycopolymer library of GM1 mimics by photoinduced electron/energy transfer-reversible addition–fragmentation chain-transfer (PET-RAFT) polymerization. Glycopolymers displaying galactose units were synthesized with various ratios of hydrophobic acrylamide derivatives. The synthesized glycopolymers were immobilized on a gold surface, and the interactions with cholera toxin B subunits (CTB) were analyzed using surface plasmon resonance imaging (SPRI). The screening by SPRI revealed the correlation between the log P values of the hydrophobic monomers and the interactions of the glycopolymers with CTB, and the appropriate polymer structure as a GM1 mimic was determined. The combination of the one-time preparation and the fast screening of the glycopolymer library provides a new strategy to access synthetic materials for critical biomolecular recognition.

Setup of the equipment for PET-RAFT polymerization under open-air conditions

The equipment for PET-RAFT polymerization was assembled using a regulated power supply, a circuit board, and LEDs. Each LED bulb was fitted into a 96-well plate with a hole diameter of 4.5 mm for each well to serve as a light source. Two circuits, in each of which four LEDs were connected in series, were connected in parallel to the power supply. The voltage and current of the regulated power supply were set to 14 V and 0.05 A, respectively.

Synthesis of galactose acrylamide (GalAAm)

TBTA (265 mg, 0.5 mmol), galactose azide (1.02 g, 5.0 mmol), BtnAAm (615 mg, 5.0 mmol), and CuSO4 (80 mg, 0.5 mmol) were dissolved in a MeOH (25 mL) / H2O (25 mL) mixture. Oxygen was removed by bubbling nitrogen. L-Asc-Na (200 mg, 1.0 mmol) was added, and the mixture was stirred at 30 °C for 24 h under a nitrogen atmosphere. The solution was concentrated under reduced pressure, and the precipitate was filtered. The crude product was purified by reverse-phase chromatography (Biotage SNAP ULTRA C18, gradient from water to methanol). The fraction containing the product was concentrated under reduced pressure and stirred with a metal scavenger (2.5 g) at room temperature for 24 h. After removal of the SiliaMets metal scavenger by filtration, the product was obtained by freeze-drying (893 mg, 55%).

Synthesis of N-butylacrylamide (ButylAAm)

Butylamine (300 mg, 3.0 mmol) and N,N-diisopropylethylamine (0.63 mL, 3.6 mmol) were dissolved in dry dichloromethane (6 mL) and stirred in an ice bath. Acryloyl chloride (0.29 mL, 3.6 mmol) was slowly added dropwise to the solution, and the mixture was stirred for 10 h at room temperature. The progress of the reaction was confirmed by TLC (EtOAc : hexane = 2 : 1, UV). The reaction mixture was washed once with saturated brine. The organic phase was dried over MgSO4, filtered, and concentrated under reduced pressure.
The crude product was purified by silica column chromatography (EtOAc : hexane = 2 : 1) to give N-butylacrylamide as a white solid (203 mg, 53%).

Synthesis of N-cyclohexyl acrylamide (CyHexAAm)

Cyclohexylamine (292 mg, 2.9 mmol) and N,N-diisopropylethylamine (0.61 mL, 3.5 mmol) were dissolved in dry dichloromethane (5.2 mL) and stirred in an ice bath. Acryloyl chloride (0.26 mL, 3.2 mmol) was slowly added dropwise to the solution, and the mixture was stirred for 10 h at room temperature. The progress of the reaction was confirmed by TLC (EtOAc : hexane = 2 : 1, UV). The reaction mixture was washed once with saturated NaHCO3(aq). The organic phase was dried over MgSO4, filtered, and concentrated under reduced pressure. The crude product was purified by silica column chromatography (EtOAc : hexane = 2 : 1) to give N-cyclohexyl acrylamide (357 mg, 80%).

The running buffer (containing 137 mM NaCl and 2.68 mM KCl) was flowed through (0.1 mL/min), and the SPRI reflectivity change (defined as the "SPRI signal") was monitored until the SPRI signal was stable. Then, a protein solution of a given concentration was injected at a flow rate of 0.1 mL/min in all experiments, and the SPRI signal was monitored. In the measurements, the SPRI signal was regarded as the amount of protein adsorption. The binding constants of CTB were calculated from the SPRI signals with the Langmuir isotherm:

$$\Delta R = \frac{\Delta R_{\max} K_a c}{1 + K_a c} \quad (1)$$

where $\Delta R$, $\Delta R_{\max}$, $c$, and $K_a$ are the SPRI signal, the maximum SPRI signal, the protein concentration, and the binding constant, respectively. Based on eq 1, the plots of the SPRI signals were analyzed by nonlinear regression to derive the binding constants.
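As a minimal sketch of the nonlinear regression described above (not the authors' actual analysis script; the concentrations and signals below are invented placeholders, not the measured CTB data), eq 1 can be fitted with scipy:

```python
# Minimal sketch: fitting the Langmuir isotherm (eq 1) to SPRI signals by
# nonlinear regression. The data below are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, dR_max, Ka):
    """Eq 1: dR = dR_max * Ka * c / (1 + Ka * c)."""
    return dR_max * Ka * c / (1.0 + Ka * c)

c = np.array([1e-9, 5e-9, 1e-8, 5e-8, 1e-7, 5e-7])   # protein concentration (M)
dR = np.array([0.08, 0.31, 0.48, 0.82, 0.90, 0.97])  # SPRI signal (a.u.)

popt, pcov = curve_fit(langmuir, c, dR, p0=[1.0, 1e8])
dR_max_fit, Ka_fit = popt
print(f"dR_max = {dR_max_fit:.2f}, Ka = {Ka_fit:.2e} M^-1")
```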
THE D-WAVE OF THE ELECTRORETINOGRAM OF PERCH ORIGINATES IN THE CONE PATHWAY

Milena Milošević (1), A. Bajić (2), and Z. Gačić (1). (1) Center for Multidisciplinary Studies, University of Belgrade, 11000 Belgrade, Serbia; (2) Faculty of Biology, University of Belgrade, 11000 Belgrade, Serbia.

At light offset, a positive potential called the d-wave of the electroretinogram (ERG) is generated in cone-rich retinas (Brown, 1968). The d-wave is believed to be generated from cone-driven secondary retinal cells, such as the OFF bipolar cells (Stockton and Slaughter, 1989; Naarendorp and Williams, 1999). It has been suggested that in zebrafish, during the transition from light to dark adaptation, the b-wave represents a function of both rod and cone systems (Ren and Li, 2004). The positive d-wave, on the other hand, represents mainly, if not exclusively, cone functions (Andjus, 2001; Ren and Li, 2004).

In order to show that the d-wave could be an indicator of cone-dominated retinas, we performed experiments on perch (Perca fluviatilis). Animals were electrofished in the floodplain zone of the Danube River (kilometer 1136). The fish were kept in captivity for at least 15 days in order to acclimatize to the experimental conditions (darkness at a controlled room temperature of 15 °C). Perch were anesthetized (phenobarbital sodium) and curarized (tubocurarine) following the procedures recommended by Hamasaki et al. (1967), adjusting the dosage so as to induce respiratory arrest. Artificial respiration was provided continuously by forcing aerated and temperature-controlled water through the gills. The immobilized fish were positioned laterally on a plastic platform inside a light-proof Faraday cage. After removal of the cornea, lens and most of the vitreous, the in situ eyecup was filled with Ringer solution. Electroretinogram potentials were detected with non-polarizable silver chloride (Ag-AgCl) electrodes (World Precision Instruments, Inc., model EP2), the active one being introduced into the interior of the saline-filled eyecup. The reference electrode was in the retro-orbital space. The signal was conducted to a computer via a differential preamplifier and a PCI-20428W-1 AD converter (8-bit; 125 Hz sampling rate). Photic stimuli were delivered by a single-beam optical system using an 8 V 50 W tungsten-halogen lamp as the light source, providing independent control of the intensity (neutral density filters) and duration (electromagnetic shutter, UniBlitz model T132) of the test flashes. Light intensities were calibrated and checked by placing the active surface of a custom-made radiometer probe in the position usually occupied by the eyecup preparation. When comparing intensity/amplitude relations in different preparations, relative intensity (IR) scales were used, plotting ERG amplitude voltage against the extent of attenuation in log units.

After 1 h of dark adaptation, ERGs were recorded. Figure 1A shows responses obtained with a 1-s (ts) "white" flash ranging in intensity from 0.282 µW/cm² (-3 log intensity units) to 282 µW/cm² (0 log intensity units). In this series, the c-wave is masked by the d-wave and is not directly measurable from the ERG. In order to reconstruct the c-wave, we removed the samples in the interval [ts, ts+1 s] and fitted the resulting curve with Chebyshev rational functions of higher orders (ninth or tenth).
The criteria for the selection of the fitting function were the slope of the b-wave and its amplitude. The amplitude of the c-wave was then measured from the fitted curve. A series of isolated off-responses (Fig. 1B) was obtained by subtracting the fitted curve from the original ERG response (method shown in Fig. 1E).

The stimulus intensity-amplitude relation was checked by fitting the experimental data with the basic model of Naka and Rushton (1966):

$$V_0 = \frac{I^a}{I^a + I_0^a}$$

where $V_0$ is the normalized voltage (V/Vmax) of the ERG signal (Vb, Vc or Vd; the method of measurement is shown in Fig. 1D), $I_0$ is the stimulating light intensity corresponding to $V_0 = 1/2$, and the exponent $a$ is a constant (Fig. 1C). The slopes (parameter a values) of the normalized log profiles were 0.8057 for the b-wave, 0.5288 for the off-response, and 0.5603 for the c-wave. The saturation level for the b-wave was reached at a relatively low stimulus intensity of 7 µW/cm² (-1.6 log intensity units, Fig. 1C). The saturation level for the c-wave was reached with 40 times higher stimuli than in the case of the b-wave, at 282 µW/cm² (0 log intensity units, Fig. 1C). The saturation level of the d-wave was never reached, even when maximal intensity stimuli were applied, as in cone-driven horizontal cells of the eel retina (Byzov et al., 1998). The obtained results are in accordance with the previous finding that the d-wave represents cone functions (Andjus, 2001; Ren and Li, 2004).

Fig. 1. Relationship between the normalized amplitude of response (V/Vmax) and the log intensity of stimulation. A: A series of ERGs obtained with incremental stimulation of the perch eye; intensity of stimulation 282 µW/cm², duration 1 s. B: Isolated off-responses from the preceding series. C: Amplitude/intensity relations in the perch for the b-wave (solid circles), c-wave (solid squares) and off-response (solid triangles); fitting according to the basic model of Naka and Rushton (1966). D: Amplitudes of the measured ERG components. Va: a-wave, measured from zero to the a-wave minimum; Vb: b-wave, measured from the minimum of the a-wave to the maximum of the b-wave ("peak to peak"); Vc: c-wave, measured "peak to peak"; Vd: d-wave, measured from the breaking point of the c-wave to the maximum of the d-wave.
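A minimal sketch of how a Naka-Rushton fit of this kind can be reproduced (illustrative only; the intensities and normalized amplitudes below are placeholders, not the recorded perch data):

```python
# Minimal sketch: fitting the Naka-Rushton function V0 = I^a / (I^a + I0^a)
# to normalized ERG amplitudes. Values are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(I, I0, a):
    """Normalized response V/Vmax as a function of stimulus intensity I."""
    return I**a / (I**a + I0**a)

# Hypothetical stimulus intensities (µW/cm^2) and normalized b-wave amplitudes
I = np.array([0.282, 0.89, 2.82, 8.9, 28.2, 89.0, 282.0])
V0 = np.array([0.10, 0.25, 0.50, 0.78, 0.93, 0.98, 1.00])

(I0_fit, a_fit), _ = curve_fit(naka_rushton, I, V0, p0=[3.0, 0.8])
print(f"I0 = {I0_fit:.2f} µW/cm^2 (half-saturation), a = {a_fit:.3f}")
```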
rs929387 of GLI3 Is Involved in Tooth Agenesis in Chinese Han Population

Tooth agenesis is one of the most common anomalies of human dentition. Recent studies suggest that a number of genes are related to both syndromic and non-syndromic forms of hypodontia. In a previous study, we observed that polymorphism in rs929387 of GLI3 might be associated with hypodontia in the Chinese Han population, based on a limited population. To further confirm this observation, in this study we employed 89 individuals diagnosed with sporadic non-syndromic oligodontia (40 males and 49 females) to investigate the relationship between polymorphism in rs929387 of GLI3 and tooth agenesis. These individuals were analyzed together with 273 subjects (125 males and 148 females) diagnosed with non-syndromic hypodontia and 200 healthy control subjects (100 males and 100 females). DNA was obtained from whole blood or saliva samples, and genotyping was performed by a Matrix-Assisted Laser Desorption/Ionization Time of Flight Mass Spectrometry (MALDI-TOF MS) method. Significant differences were observed in the allele and genotype frequencies of rs929387 of GLI3. Distributions of the genotypes TT, TC and CC of the rs929387 polymorphism were significantly different between the case group and the control group (P = 0.013), and the C allelic frequency was higher in the case group [P = 0.002, OR = 1.690, 95% CI (1.200-2.379)]. Additionally, our analysis shows that this difference is more pronounced when compared between the male case group and the male control group. The functional study suggests that the variation in GLI3 caused by rs929387 leads to a decrease in its transcriptional activity. These data demonstrate an association between rs929387 of GLI3 and non-syndromic tooth agenesis in Chinese Han individuals. This information may provide further understanding of the molecular mechanisms of tooth agenesis. Furthermore, GLI3 can be regarded as a marker gene for the risk of tooth agenesis.

Introduction

Permanent tooth agenesis is one of the most common dental developmental anomalies in humans [1]. The prevalence of dental agenesis of permanent teeth ranges from 2.2 to 10.1% in the general population, excluding third molars [2]. The majority of affected persons are missing only one or two teeth, and hypodontia is often used as a collective term to describe the absence of one to six teeth, excluding third molars [3,4]. Oligodontia refers to the absence of more than six teeth, excluding third molars [5]. Tooth agenesis may present as part of a syndrome; however, the non-syndromic form is more common.

Tooth development is a very complicated process involving many genes and signaling pathways [17]. Certain alterations in one or more of these genes may cause tooth agenesis [1]. Several studies suggest that gene polymorphisms may underlie disease susceptibility [18,19]. Single nucleotide changes, which occur at a high frequency in the human genome, are the most common polymorphisms and may affect the function of genes. Thus, single nucleotide polymorphisms (SNPs) may be a risk factor for non-syndromic tooth agenesis. In a previous study, we observed that polymorphism in rs929387 of GLI3 might be associated with hypodontia in the Han population [20]. However, that finding was based on a limited population. In this study, we collected individuals diagnosed with non-syndromic oligodontia and studied the two types of populations (individuals with non-syndromic hypodontia and individuals with non-syndromic oligodontia) together.
Our results show that polymorphism in rs929387 of GLI3 is associated with tooth agenesis in the Chinese Han population, especially in males. We further employed a functional study to test whether the variation caused by rs929387 could affect the function of GLI3.

Subject selection and sampling

This study was approved by the Institutional Review Board of Peking University School and Hospital of Stomatology. A total of 562 individual subjects were analyzed in this study, including 89 subjects (40 males and 49 females) diagnosed with sporadic non-syndromic oligodontia, 273 subjects (125 males and 148 females) diagnosed with sporadic non-syndromic hypodontia (excluding the third molar), and 200 healthy control subjects (100 males and 100 females). All participating individuals were genetically unrelated ethnic Han Chinese from Beijing or the surrounding regions. No subjects had a history of tooth extraction or loss. Naturally missing teeth within the adult dentition were confirmed by X-ray examination, and no other dental anomalies were observed in any subjects. All participants provided their written informed consent to participate in this study. Blood samples and oral swabs were coded to maintain confidentiality. Genomic DNA of the participants with tooth agenesis was extracted from peripheral blood lymphocytes using the TIANamp Blood DNA kit (Tiangen, Beijing, China) according to the manufacturer's instructions. DNA samples of the normal volunteers were extracted from buccal epithelial cells using the TIANamp Swab DNA kit (Tiangen, Beijing, China) according to the manufacturer's instructions.

Polymorphism genotyping

Primers for polymerase chain reactions (PCR) and single base extensions were designed using the Assay Designer software package (Sequenom, Inc., San Diego, CA). The forward primer was 5'-ACGTTGGATGTCGCTGGCCCTCCTCAC-3' and the reverse primer was 5'-ACGTTGGATGATGCCCCGAGGAGGTG-3'. SNP genotyping was performed using the MassARRAY system.

DNA constructs

The expression vector pCMV6-GLI3 with the c-Myc epitope tags was purchased from OriGene Technologies, Inc. In vitro site-directed mutagenesis was performed to construct pCMV6-P998LGLI3 using the QuikChange Lightning Site-Directed Mutagenesis Kit (Stratagene Corp., La Jolla, CA, USA). The mutated constructs were verified by sequencing the whole vectors. Eight directly repeated copies of a GLI-binding site (GLI-BS: 5'-GAACACCCA-3') fragment were subcloned into the pGL3-Basic vector (Promega) upstream of the firefly luciferase reporter gene [21].

Cell culture, transient transfection and luciferase reporter assay

HeLa cells were seeded at 1×10^5 cells per well in a 6-well plate 24 hours prior to transfection. After overnight incubation, cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. GLI3 expression vectors (pCMV6-GLI3 or pCMV6-P998LGLI3) were cotransfected with the pGL3-GLI-BS Luc reporter plasmid; the phRL-TK plasmid (Promega) was used as the internal control. Cell extracts were prepared using the Cell Culture Lysis Reagent (Promega) 48 h after transfection, and the extracts were assayed using the Dual-Luciferase Reporter Assay System (Promega). Firefly luciferase activity was normalized based on Renilla luciferase activity. All reporter assays were repeated at least three times. Data shown are average values ±SD from one representative experiment.
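As a rough illustration of the normalization step described above (a minimal sketch; the raw luminescence readings are invented placeholders, not the reported measurements), the firefly/Renilla ratio and its triplicate summary can be computed as:

```python
# Minimal sketch: normalizing firefly luciferase activity to the Renilla
# internal control, then summarizing triplicates as mean ± SD.
# Raw readings are invented placeholders for illustration only.
import numpy as np

firefly = np.array([15200.0, 14100.0, 16050.0])   # firefly luminescence (RLU)
renilla = np.array([2100.0, 1950.0, 2230.0])      # Renilla luminescence (RLU)

normalized = firefly / renilla                    # per-well normalized activity
print(f"normalized activity = {normalized.mean():.2f} "
      f"± {normalized.std(ddof=1):.2f}")
```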
Protein preparation and Western blot analysis

Cells were washed twice with ice-cold phosphate-buffered saline (PBS) and lysed in RIPA lysis buffer (50 mM Tris-HCl, pH 7.4; 150 mM NaCl; 1% sodium deoxycholate; 1% NP-40; 0.1% sodium dodecyl sulfate, with freshly added protease inhibitor cocktail) for 30 min at 4 °C. Cell lysates were clarified by centrifugation at 16,000 × g for 20 min at 4 °C. Protein concentrations were determined using the BCA protein assay reagent (Pierce, USA). Equal amounts of protein were electrophoresed by SDS-PAGE and transferred onto a nitrocellulose membrane (Amersham Pharmacia, UK). Membranes were blocked in Tris-buffered saline containing 0.1% Tween-20 (TBST) and 5% nonfat milk and then incubated overnight at 4 °C with the appropriate primary antibody. After washing in TBST buffer, the membranes were incubated for 1 h with the corresponding IRDye™ 700-conjugated secondary antibody. The blots were scanned using an Odyssey Imaging System (LI-COR Bioscience). Primary antibodies were purchased from the following commercial sources: mouse monoclonal antibody against Myc (Sigma-Aldrich, St. Louis, MO); polyclonal antibody against β-actin (Cell Signaling, Beverly, MA, USA).

Statistical analysis

A chi-square test was used to assess whether the genotype distributions were in Hardy-Weinberg equilibrium. Clinical information and gender were compared across genotypes using chi-square tests. P-values lower than 0.05 were considered statistically significant. The associations between genotypes and the risk of tooth agenesis were estimated by computing the odds ratios (OR) and their 95% confidence intervals (95% CI) from logistic regression analyses. The results of the luciferase reporter assay are expressed as mean±SD of triplicate independent experiments, and these data were analyzed by Student's t-test. All statistical tests were performed using SPSS 13.0 software.

Polymorphism in rs929387 is associated with tooth agenesis in the Han population

According to the number of missing teeth, general tooth agenesis was divided into two groups: 1) the hypodontia group, referring to the absence of one to six teeth, and 2) the oligodontia group, referring to the absence of more than six teeth. Table 1 clearly shows that the distribution of genotypes and alleles was significantly different among the groups. The CC and TT genotype frequencies were 6.7% and 61.9% in the hypodontia group, 5.6% and 55.1% in the oligodontia group, and 2.9% and 72.6% in the control group, respectively. The distribution of genotypes exhibited significant differences in the tooth agenesis cases (hypodontia vs. control, P=0.039; oligodontia vs. control, P=0.016; hypodontia+oligodontia vs. control, P=0.016).

We then investigated the distribution of genotypes and alleles in different gender groups. For males, both allele and genotype frequencies exhibited significant differences among the groups (Table 2). CC and TC showed higher frequencies in the male hypodontia group and the male oligodontia group than in the male control group (hypodontia vs. control, P=0.033; oligodontia vs. control, P=0.004; hypodontia+oligodontia vs. control, P=0.007). Compared with the male control group, the combined CC and TC genotypes also showed significant differences in the male hypodontia group (P=0.020), the male oligodontia group (P=0.001) and the male case group (P=0.003).
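As a minimal sketch of the odds-ratio calculation described under Statistical analysis (illustrative only; the 2×2 allele counts below are placeholders, not the study's genotype data):

```python
# Minimal sketch: odds ratio and 95% CI for allele "C" from a 2x2 table of
# allele counts (cases vs. controls). Counts are placeholders, not study data.
import math

a, b = 120, 340   # cases:    C alleles, T alleles (hypothetical counts)
c, d = 60, 340    # controls: C alleles, T alleles (hypothetical counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf's method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI ({lo:.3f}-{hi:.3f})")
```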
Additionally, compared with an allele frequency of 14.9% in the male control group, the frequency of allele C was 26.1% in the male hypodontia group and 25.3% in the male oligodontia group (hypodontia vs. control, P=0.007; oligodontia vs. control, P=0.001; hypodontia+oligodontia vs. control, P=0.001). However, no statistically significant differences were observed among the female groups (Table 3).

Comparisons between the groups with different positions of missing teeth and the control group (all normal individuals) are shown in Table 4. Compared with the control group, the posterior teeth missing group showed more significant results than the anterior teeth missing group (genotype: P=0.021 for the anterior group, P=0.008 for the posterior group; allele: P=0.006 for the anterior group, P=0.001 for the posterior group), and the maxillary teeth missing group showed more significant results than the mandibular teeth missing group (genotype: P=0.003 for the maxillary group, P=0.035 for the mandibular group; allele: P=0.001 for the maxillary group, P=0.008 for the mandibular group). Additionally, compared with the control group, the premolar teeth missing group showed more significant results than the incisor, canine and molar teeth missing groups (genotype: P=0.005 for the premolar group; allele: P=0.001 for the premolar group).

Point mutation caused by rs929387 leads to reduced transcriptional activity of GLI3

To evaluate the transactivation activity of the mutant GLI3 protein caused by rs929387, we performed luciferase reporter assay experiments in HeLa cells, using the GLI-BS Luc plasmid as the reporter. The GLI-BS Luc plasmid contained eight directly repeated copies of a GLI-binding site [21]. The mutant GLI3 protein had a decreased capability to induce the luciferase signal, suggesting that the mutant GLI3 caused by rs929387 has decreased transcriptional activity (Figure 1a). Equal transfection and synthesis efficiencies of the wild-type and mutant GLI3 constructs were confirmed by Western blot analysis of the ratios of expressed GLI3 to β-actin (Figure 1b). This result suggests that the rs929387 polymorphism may affect the function of the GLI3 protein.

Discussion

Many factors, including environmental and genetic factors, multi-reagent chemotherapy and radiotherapy, may contribute to tooth agenesis [22]. Although the exact mechanism of tooth agenesis has not been fully elucidated, genetic factors are believed to play a major role. The incidence of tooth agenesis is very high (ranging from 2.2 to 10.1%) [2]; however, only a few cases can be linked to gene mutations [1]. This suggests that tooth agenesis may be a polygenic disease. Hundreds of genes have been associated with tooth development and can potentially contribute to tooth agenesis. These genes code for signaling molecules, transcription factors, and factors controlling cell proliferation [23]. Individuals with distinct polymorphic alleles may exhibit subtle and specific phenotypic variations in dental patterning. Consequently, it can be speculated that association studies between gene polymorphisms and hypodontia, as well as other mild malformations, will reflect qualitative defects of embryogenesis [24]. Therefore, we focused on the association between tooth agenesis and single nucleotide polymorphisms.
In a previous study, we found an association between rs929387 of GLI3 and non-syndromic hypodontia [20]. In this study, we investigated the potential role of rs929387 in oligodontia individuals (absence of more than six teeth). The results further confirm our previous conclusion that polymorphism at rs929387 of GLI3 may be a risk factor for tooth agenesis in the Chinese Han population. An oligodontia group was also investigated in this study. We found that the differences in genotype and allele frequencies were more significant in the oligodontia group than in the hypodontia group. The data demonstrate a strong relationship between the marker rs929387 of GLI3 and sporadic oligodontia in the Han population and implicate allele C as a risk factor.

Interestingly, following stratification of the case and control groups on the basis of gender, comparisons revealed more marked differences in rs929387 between the gender-stratified groups than between the overall case and control groups. We found that the difference was more significant in males than in females; in fact, no significant difference was observed in the female groups. This suggests that rs929387 may be a risk factor for the male Han population. The position of the missing teeth may also contribute to this difference. The frequency of allele C was higher in the posterior teeth missing group and the maxillary teeth missing group. When the results were compared among the different types of missing teeth (incisor, canine, premolar and molar), the frequency of allele C was highest in the premolar teeth missing group. Our results show that rs929387 of GLI3 may have a closer relationship with the absence of posterior and maxillary teeth. Tooth development is known to be a complex process in which different genes are involved in the development of each tooth [23], and the results of this study are in accordance with this point.

The GLI3 protein is a zinc finger transcription factor expressed in early development. This transcription factor regulates downstream genes by binding directly to specific sequences in the promoter regions of target genes [25]. The GLI3 protein is a downstream mediator of the sonic hedgehog (SHH) pathway, and this pathway includes several genes that cause abnormal phenotypes in humans when mutated (for example, SHH, PTC1, and CBP) [26]. The SHH pathway is involved in both lateral (epithelial-mesenchymal) and planar (epithelial-epithelial) signaling in early tooth development, and GLI3 is expressed in both the epithelial and mesenchymal layers. A recent study demonstrated the expression of SHH signaling in the developing human tooth and suggested a conserved function of the SHH signaling pathway during human odontogenesis [27]. Thus, we assume that variation in GLI3 may affect tooth development via the SHH signaling pathway.

The marker rs929387 (c.2993C→T) is located in exon 14 of GLI3 and comprises a C→T transition resulting in Pro998Leu. rs929387 is located in the transactivation and CBP-binding regions of GLI3 [28]. By luciferase reporter assay, we found that the variation in GLI3 caused by rs929387 could reduce the transcriptional activity of GLI3. This further confirmed that the polymorphism in rs929387 may affect the function of GLI3 and thereby the development of teeth. However, the specific mechanism is not clear, and more experiments need to be done to reveal the mechanism of this complex process.

Previous studies on polymorphisms and tooth agenesis are rare. One of our studies suggested a potential relationship between polymorphism in rs929387 of GLI3 and non-syndromic hypodontia.
Another recent report found a significant association between rs929387 and hypodontia in a Turkish family-based analysis [29]. However, a Brazilian case-control study did not find the same relationship. This may provide a clue that polymorphism in rs929387 is associated with tooth agenesis not only in the Chinese Han population but also in other ethnic groups, and that the relationship differs among groups.

The human dentition develops in a long process that starts during the second month of embryogenesis and is completed during adolescence, when the third molars (''wisdom teeth'') erupt. The process is regulated by tissue interactions and genetic networks. Genetic factors (mutations or polymorphisms) can explain some causes of tooth agenesis. In addition, epigenetics, an area of research studying how environmental factors produce lasting changes in gene expression without altering the DNA sequence, may provide new insights into this question.

In summary, in this study we demonstrate that polymorphism in rs929387 of GLI3 may contribute to sporadic non-syndromic tooth agenesis in Chinese Han people. Our gene functional study shows that polymorphism in rs929387 affects the transcriptional activity of GLI3. However, more experiments may be needed to elucidate the regulatory mechanism of GLI3-mediated tooth agenesis.
THE MEANING OF HIJRAH AMONG NIQOBERS IN SOCIAL MEDIA

INTRODUCTION

Hijrah, which has been interpreted as a migration from one place to another, has experienced an expansion of meaning as people's knowledge and perspectives have evolved. One of the characteristics of hijrah today is the use of the hijab by women (Amna 2019; Annisa 2018; Putri and Firdaus 2018). In line with that, hijrah has become a distinctive medium for Millennials in creating religious identities and religious symbols (Zahara, Wildan, and Komariah 2020). Today's young generation finds it easy to establish their integrity and identity through their idols and role models by joining groups.

Until today, studies related to religious studies have tended to address two fundamental things. The first is studies related to religious groups that divide society, and the second is the study of religious groups that unite society. Through the Islamic Community Organization (ORMAS) Communication Forum in Pamekasan, a commitment has been produced to actualize internal religious harmony. Several activities were also carried out and have contributed to unifying religious communities (Hasan 2015). However, what commonly becomes a popular subject in research articles is groups classified as radical and as dividers of society (Laisa 2014; Rokhmad 2012; Ruslan 2017; Thoyyib 2018).

Talking about radical religious groups, it turns out that many Muslim women who wear the niqob are still viewed negatively or associated with radicalism in Indonesia (Putri 2019). On the other hand, women who do not wear the niqob are seen as kafir or non-believers (Nasir 2008). Such a stigma is considered inappropriate because not all women with the niqob are radicals, and not all women without the hijab fail to perform prayers. Someone is said to be kafir when he or she associates God with something else and worships anything other than God, as the people of Mecca did during the Jahiliyyah era (Afandi 2017). Meanwhile, the meaning of radical is easy to find in social media, such as the radical da'wah that appeared along with the viral case of religious blasphemers (Fatmawati, Noorhayati, and Minangsih 2018).

The religious groups that unite the people can be seen in the Muhammadiyah organization, which has laid the foundation of its movement, amr ma'ruf nahi munkar, in society as a field for spreading da'wah (Suryanto 2016). Besides, the Nahdlatul Ulama (NU) organization has spread the unity of Muslims through the moderation of Islamic education in Indonesia, which conveys Islamic teachings not only on doctrinal matters but also in step with the dynamics of the times and people's ways of thinking (Abdurrahman 2018). These dynamics cannot be separated from the Qur'an and hadith, and still hold on to the maxim of "maintaining the old traditions and taking new, better ones" (Hakim 2018). Religious studies that unite Muslims in Indonesia are described not only by the existence of religious organizations but also by the presence of religious figures or preachers, as visualized in the trends in various social media in this era. This is reflected in research showing that preachers can hypnotize audiences through the rhetoric of their delivery, such as Ustad Abdul Somad, Khalid Basalamah, and Hanan Attaki (Rosyada 2018). Previous studies have yet to examine hijrah in association with the niqob, especially the Niqobers.
The purpose of this paper is to complement the lack of studies on the concept of hijrah according to Niqobers on social media, which have tended to ignore the Niqobers' perspective. This paper specifically presents the deconstruction of the concept of hijrah from the perspective of Niqobers who join a community on social media. In line with that, three questions are formulated: a) What is the conception of hijrah in the view of Niqobers? b) How do the backgrounds and experiences of Niqobers influence this conception of hijrah? c) How does the Niqobers' conception of hijrah deconstruct the meaning of religion in social media? The answers to these three questions form the main core of the discussion in this paper.

This study assumes that hijrah has experienced a shift of meaning in line with the development of human life patterns. The Niqobers' view of hijrah is reflected in three things: a person's circumstances, fashion, and attitude. The most striking aspect of the Niqobers' hijrah is the clothing (niqob) that they wear. It is motivated by influential factors such as self-encouragement, the influence of family/parents, or pressure from a husband/partner. For some Niqobers, hijrah is undertaken as a compulsion at first, but they gradually become self-aware.

LITERATURE REVIEW

Deconstruction

Deconstruction is an analysis method that uncovers the structure and codes of language, especially the design of oppositional pairs, to create a play of signs without an end or final meaning. According to Derrida, as cited by Ungkang (2013), there are three deconstructive readings: 1) ascertaining the part of a text's contradiction considered to be the most important or dominant; 2) showing how this hierarchy can be overturned in the text and how the revealed hierarchy is arbitrary or illusory; 3) bringing the contradictory elements to the reader's attention and rendering the text ambiguous. Under these three readings, the meaning of hijrah will not be limited, because all of its images arise from the doer, and these hold different conceptions from one individual to another. By deconstructing the meaning of hijrah, the concept commonly believed by niqob wearers can be found (Ungkang 2013).

The concept of deconstruction is widely used in studies related to literary works, such as a study conducted by Imron (2015), which found that the presence of Hindu and Balinese culture, which is very closed with regard to its caste system, has been deconstructed into a new discourse for readers of the short story The First Night of Candidate Pastor (Malam Pertama Calon Pendeta) (Imron 2015). Similarly, Belasunda states that film texts have been studied with the deconstructive approach, which expresses ideas, concepts, idioms, aesthetics, and parodies. The Javanese opera film can reflect issues of feminism and gender, as reflected in the domination of masculinity, power, class clashes, and capitalism (Belasunda, Saidi, and Sudjudi 2014). In line with that, Munarti emphasized that the concept of deconstruction can reveal the aesthetics and the meaning of a shift in the form and structure of tones in Gamat music in society. It shows that aesthetic diversity can be reflected through mutual acceptance between different cultures in Gamat music performances, which implies the meanings of renewal, creativity, expressiveness, innovation, and multiculturalism (Musik & Pertunjukan 2015).
Interpretation of Qur'anic verses has often been carried out with misused readings, becoming trapped in violence and radicalism (Abdillah 2017). A paradigm of anthropocentric interpretation has varied the deconstruction of the meaning of a text with the main view that the universe created by God is intended only for the welfare of mankind, which breeds human brutality in exploiting natural resources while ignoring the impacts that may occur (Abdillah 2014). Similarly, as noted by A. Nur, Tafsir Al-Misbah contains a description of the Isra'iliyyat, such as the account that Prophet Noah's boat was divided into different floors: the lower floor was a place for wild animals, the middle floor was for food storage, and the top floor was for Prophet Noah and his followers (Nur 2014). In his study, J. Abdillah stated that many Muslims have misused the perception of verses that contain violent meanings, such as the words jihad, qatala, and ma'rakah, in the name of God, which are often used to justify an offense (Abdillah 2017). A different case is the study conducted by Y. Fauziyah, which shows that throughout history female preachers (ulama) have not received attention due to the dominance of male preachers in various Islamic histories; thus, a deconstruction of roles that would balance patriarchy and matriarchy is needed (Fauziyah 2014).

Hijrah

From a psychological perspective, specific names or symbols inspire and even carry figurative meanings for a person. Therefore, the word "hijrah" conveys an impression of movement for Muslims, whose lives are always dynamic. Before the hijrah, while in Mecca, despite facing extraordinary and cruel challenges, the Prophet and his companions had settled into the social order; they were already established economically, in influence, and so on. Usually, a person in power does not want to step down from the arena, because he has never fully imagined, let alone prepared for, a life without power. From the beginning of his life, however, the Prophet was prepared to have nothing. When he and his companions received Allah's orders to migrate to Medina, they did not think about how to live and support their families in a new place. For the Prophet and his companions, there was nothing to lose. They were not worried about losing anything, because they had nothing to worry about: no money deposits, animals, land, or precious jewelry. They did not even own themselves, because they had been given to and bought by Allah (Ibrahim 2016).

Royyani said that hijrah is a personal right that has begun to shift into a movement conducted communally (Royyani 2020). The hijrah movement has become one of the popular da'wah movements, developing into a social trend (Royyani 2020). The massive hijrah movement is a phenomenon of a new Islamic social movement transforming into a dynamic social reality, occurring in the global and national community. The image of Islamic-based social activity has long been discussed. The religious movement is a religious transformation implemented in changing religious behavior through group activities. The millennial Muslim generation is an element of society that forms a deep pattern of hijrah. The meaning of hijrah for the millennial Muslim generation begins with a collective awareness of self-identity as a part of Islam, from which an awareness arises to contribute to the practice of their religion (Zahara et al. 2020).
The number of people wearing the niqob in Indonesia is increasing. This is strongly supported by promotion via several social media platforms, such as Instagram, which actively popularize the practice of wearing the veil (Husna 2019). The development of Islam through fashion is recognized as having a unique and widespread impact. This practice is undoubtedly a favorable field for the development of Islamic civilization in contemporary times. Even so, 'oblique' comments are still heard, for example toward Muslim women who decide to wear the niqob, such as the labels 'ninja,' 'terrorist,' and 'cult,' and other nicknames accompanying their seemingly obscure behavior (Nursalam and Syarifuddin 2017). The urgency of covering women's bodies is an obligation written in the Qur'an and the Sunnah of the Prophet, meant to protect women from the risk of adultery and other cruel acts (Nursalam and Syarifuddin 2017).

The phenomenon of hijrah also reaches a broader segment through many deliberation studies or tabligh akbar, attended by Indonesian public figures and popular ustadz who invite others to follow them. By engaging Indonesian public figures and popular religious leaders, the movement has become a strong magnet in introducing the phenomenon of hijrah through preaching in an interesting and contemporary way (Amna 2019). Hijrah has become a social phenomenon that marks a phase of crisis in humans, especially among young people. In this phase, a person needs an answer, which then transforms into a change, such as introspection and a shift in behavior. The most popular concept of hijrah is that of a spiritual journey toward righteousness.

The existing studies have shown that hijrah is interpreted as a movement from one place to another, from Mecca to Medina (Ibrahim 2016). That migration did not mean merely leaving the house: the migrants did not migrate for livelihood and wealth, but to serve God's faith and religion. Ummah (2019) reveals that the contextualization of the meaning of hijrah is a change in a person's life from a bad to a better one, from shirk to the straight path. Hijrah is a pattern and strategy of striving fi sabilillah towards futuh and falah (Abidin 2017; Giovany and Chatamallah 2018). The real manifestation of hijrah for a Muslim is being profound in fighting for Islamic ideals with honest and sincere faith (Suryana 2019). According to Prasanti, the meaning of hijrah for the "Ayo Hijrah" community is a joint commitment to make changes for the better, which must be enacted in verbal and nonverbal forms (Prasanti and Indriani 2019). In contrast, according to non-santri teenagers, hijrah means following recitations on a Youtube channel (Syahrin and Mustika 2020). This differs from Fitkon students, who interpret hijrah as a movement to seek religious knowledge through technology (Setiawan 2017). Royyani, in her study, shows that hijrah is interpreted as a high intention of following God's commands and humanizing humans (Royyani 2020).

Niqobers

The term niqob comes from the Persian word 'chador,' which means 'tent.' In the Iranian tradition, the veil is a garment covering the entire body of a woman from head to toe. Indians, Pakistanis, and Bangladeshis call it purdah, while Bedouin women in Egypt and the Gulf region call it burqo, which covers the face in particular (Umar 2018).
From the meaning of the word, niqob is a name for clothing that covers a woman's face from the nose, or from under the curve of the eye, downward. The niqob phenomenon has become a social issue debated among ulama: some hold that it is obligatory; some say it is sunnah; and some believe it is binding only on the women who have taken it on. There are also opinions that, considering the context of the asbab al-nuzul, it was intended only for the Prophet's wives and not for all Muslim women, as argued by Al-Mahlab, Ibn Batthal, and Ibn Juzayy al-Kalbi (Sudirman 2019). According to Fitria (2008), despite the perceptions and impacts mentioned above, many veiled women maintain the use of the niqob for several reasons: 1) they interpret the veil as an Islamic commandment with the status of sunnah, and wearing it makes them feel better in their religion; 2) they consider it a necessity and feel psychological comfort when wearing it; and 3) it functions as self-control against deviation from Islamic teachings. For the Muslim women who wear it, the veil is also considered a symbol of following one of the commands of Islam, reflecting a pious woman who maintains her honor (Karunia and Syafiq 2019). The niqab, or veil, is one variant of head covering worn by Muslim women in Indonesia, and it should be distinguished from the headscarf and the hijab; the differences among the three lie in how much of the body is covered and the cross-sectional area of the fabric. Indonesian historical records suggest that the first woman to wear the headscarf was a noblewoman in Makassar in the 17th century. At that time, the head covering was a cloth wrapped around the head that still showed some of the woman's hair (Andaya 2006). The niqob was part of women's clothing during the Jahiliyyah era, and this clothing model persisted into the Islamic period. Prophet Muhammad did not question the model, but neither did he oblige, urge, or mandate the niqob for women. Had the niqob been perceived as clothing that protects women's spirit and as a wasilah for maintaining their survival, as many parties claim, Muhammad would undoubtedly have obliged his wives to wear it. The niqab, or veil, was simply part of the dress of some Arab women both before Islam (as explained above) and after; there are no specific instructions establishing it as an obligation (Sudirman 2019).

Social Media

Social media, also known as social networking, is part of the new media, whose content is highly interactive. Social media is defined as online media in which users can easily participate, share, and create content, including blogs, social networks, wikis, forums, and virtual worlds; blogs, social networks, and wikis are the forms most commonly used worldwide (Zubair 2017). Online social media, or online social networking, is distinct from online mass media because it carries a social power that significantly influences the public opinion developing in society. Support and mass movements can be mobilized through online media's proven capacity to shape public or community opinion, attitudes, and behavior. This phenomenon can be seen in the case of Prita Mulyasari versus Omni International Hospital. This is why such media are called social media rather than mass media (Zubair 2017).
Social media invites anyone interested in "something" to participate by giving open feedback, commenting, and sharing information quickly and without time limits. Social media has a significant influence on a person's life: someone who starts with something small can make it big through social media, or vice versa. For the community, especially teenagers, social media has become addictive, making it hard for users to get through the day without opening it (Putri, Nurwati, and S. 2016). Social media is an effective and efficient means of conveying information to other parties. As a medium with very high social dynamics that allows open communication among parties with various backgrounds and interests, social media is the right means to generate citizen participation in building cities. As stated by Howard and Parks in Rahadi's article, social media consists of three parts: the information, the infrastructure, and the tools used to produce and distribute media content. Media content can be personal messages, news, ideas, and cultural products in digital format; those who produce and consume it are individuals, organizations, and industries (Rahadi 2017). Social media is commonly used by consumers to share text, images, audio, and video with other people and with companies, and vice versa (Kotler and Keller 2009). It allows users to interact with a broad audience, which raises the value of user-generated content and the perception of interaction with others. Social media is used productively by all spheres of society: business, politics, media, advertising, police, and emergency services. It has become key to provoking thought, dialogue, and action around social issues (Carr and Hayes 2015).

CONCEPTUAL FRAMEWORK

Hijrah is often associated with the image of a woman in a veil. The terms for this covering are diverse, including hijab, niqob, burqa, and purdah; in essence, it is a thin sheet of cloth covering a woman's face when she is outside the house. Veiled women are also identified with black clothing (Qolbi 2013). Most women who commit hijrah are associated with the veil because, as in Winda's study of the hijrah phenomenon at FISIP, University of Riau, they start wearing the syar'i hijab and even the veil (Putri and Firdaus 2018). Niqober is a term for women who wear the niqob or veil; they call themselves niqobers or niqobis and build communities on social media. The niqob is a current trend, following the earlier trend of the wide hijab without face covering (Dewi 2019). Fajriani states that hijrah is often marked by a change in dress style to become more Islamic (Fajriani 2019), as in the Niqab Squad Jogja (NSJ) community, a community for veiled women that serves as a place of da'wah for spreading complete Islamic teachings (Husna 2019). Communication has shifted from face-to-face interaction to social messaging-based communication over the internet (Budiyono 2016). Social media is created to facilitate two-way communication: an online medium makes it easier for users to play an active role and to exchange information, with distribution patterns of one-to-many and many-to-many (Budi, Arif, and Roem 2019). Information technology has developed extensively and very quickly, easing interaction because it is accessible to anyone, anywhere. Veiled women have so far been exclusive, rarely interacting with neighbors who are not veiled (Sari, Lilik, and Agustin 2014).
They interact mainly within their own groups, but they have communities on social media, such as Facebook, Instagram, and WhatsApp groups. The Bandung Niqab Squad Community Instagram account has 4,883 followers (Permatasari and Putra 2018). They are not afraid to remain active even while staying at home, where veiled women have traditionally carried out most of their activities. The number of veiled women's communities on Facebook is not small. Veiled women are active on social media with different motives: they use it to trade, to share literacy related to religious studies, and, for some, as a source of positive information about spiritual practice (Zulfa and Junaidi 2019).

RESEARCH METHOD

This research is a qualitative descriptive study employing a phenomenological approach, carried out by observing the current hijrah phenomenon. The study is based on research on the social medium Facebook, within three communities: the Indonesian Niqob Community (Komunitas Cadar Indonesia), Hijrah Album (Album Hijrah), and Indonesian Hijrah (Hijrah Indonesia). These groups include both men and women, but niqobers dominate; most of them are married and under 40 years old, and a few are unmarried. They built these communities to stay in touch and to discuss religious, social, and economic matters, including their understanding of hijrah. Facebook is considered a very effective discussion forum for them: it is easy to use and familiar to the community, especially young families, and it also serves as a place to confide and to develop the family economy. The data were collected using two techniques, namely observation and documentation. Observation consisted of following the Facebook statuses of niqobers discussing hijrah and collecting data in the form of screen captures of their statuses and conversations, which were then processed in the next stage. The documentation, in the form of the subjects' dialogues and conversations, was not limited by date; the only restriction was that every dialogue had to relate to the conception of hijrah in their perspective. This documentation is critical as a form of data that can be presented in this paper as original data, to be given an appropriate interpretation later. The data collected from observation and documentation were then analyzed through three stages of qualitative research: data reduction, data display, and conclusion drawing. Data reduction was done by mapping the collected data, selecting and sorting those relevant to the research topic while excluding irrelevant data, and then grouping the data by research sub-topic. Data display was done by presenting the data, classified by sub-discussion, in tables to facilitate understanding and description. Three tables are presented: the first on the conception of hijrah, the second on the background of wearing the niqob, and the third on how the niqobers' perspective deconstructs the meaning of hijrah.
Finally, conclusion drawing was done by first interpreting the data and elaborating them clearly so that the research conclusions could be correctly summarized.

RESULT AND DISCUSSION

The meaning of hijrah has been deconstructed along with developments in era, science, and socio-culture, and the shift in its definition can be seen from various perspectives. This paper, however, focuses on the view of niqobers. It describes the concept of hijrah from veiled women's viewpoint, the factors behind the emergence of this concept, and how veiled women's conception of hijrah influences the meaning of religion on social media.

The Concept of Hijrah According to the View of Niqobers

Hijrah has experienced a shift of meaning from the era of the Prophet up to now; at the time of the Prophet, it was defined as a physical migration. Table 1 shows that niqobers perceive three concepts of hijrah. First, in a concept based on circumstances, hijrah is a process of moving from one place to another and from one situation or condition to another. Hijrah is interpreted as a process of muhasabah, or self-evaluation, involving a change from one condition to a better one loved by God, such as covering the aurat or becoming a devout servant; moving from badness toward goodness. This journey of hijrah involves many changes: from a state of neglect and ignorance to a state of mindfulness, from carelessness to awareness, from instability to stability, and from ignorance to knowledge and enlightenment. Second, a concept based on changing attitudes and behavior describes a change from bad behavior to good. This is reflected in submission to God by carrying out His commands and staying away from His prohibitions, actualized by gradually covering the aurat to gain God's pleasure. Hijrah is a beautiful process that can be undertaken at various levels, from the lowest to the highest. People who commit hijrah become able to control their emotions and grow more patient, less emotional, and more motivated and diligent in performing ibadat, or worship. Third, a concept based on change in fashion involves a learning process toward a complete (kaffah) hijrah through changes in dress and appearance, with the use of the niqob as the peak of hijrah. The simpler changes include wearing long dresses and long hijabs, selecting friendships by staying away from friends assumed to act against the sharia and limiting contact with those considered un-Islamic, starting an Islamic life, engaging only in Islamic conversations, and behaving according to Islamic law. These three concepts represent the niqobers' perceptions and views of hijrah, whose meaning has shifted as eras and perspectives develop. One way of doing hijrah is reflected in the niqob, and these women feel they are at a level higher than those who wear the typical or standard hijab. They adopt the niqob in stages: some wear it when they leave the house and take it off at home or among relatives, while others wear it when they feel ready and take it off when they do not. Such a conception of hijrah is widely believed and adopted by those who intend to hijrah.
They believe in the meaning of hijrah as a shift in their status before God: becoming better servants of God and drawing closer to Him. As Taqwa emphasized, hijrah is interpreted as a sacrifice and a determination to change for the better (Taqwa 2011). Hijrah is depicted by covering the face from the eyes of men who are not one's mahram, because an uncovered face would lead men into the adultery of the eyes. Prasanti and Indriani (2019) also highlight that covering the face is one way of changing one's attitude in its outward aspect, from open to closed (Prasanti and Indriani 2019). Hijrah does not stop with wearing the niqob; it also involves joining sermons together with other niqob communities. In addition, hijrah is expressed through environmental interactions. Those who choose to hijrah prefer staying at home to socializing with society; they tend to spend their time in the domestic sphere instead of the public one, and when they leave home, they must go together with their mahram. Their social interaction takes place through social media, for example in online marketing or delivering customer orders, and they go out only for specific needs, such as taking their children to school or shopping at the market. These perspectives of niqobers run against Ibrahim's study, which interpreted hijrah as the construction of a pluralist, civilized society and the enrichment of a dynamic and creative ethos (Ibrahim 2016). The online market becomes a place for niqobers to interact with other people. They trade online, selling commodities suited to their everyday use, such as long dresses and niqobs; these commodities carry elements of da'wah, which is the focus of their sales. Their consumers also commonly come from their own community, with only a small number of outsiders, although there is no specific restriction. Niqobers are thus very close to their community and highly committed to enlarging their jama'ah (community).

How Niqobers Deconstruct the Conception of Hijrah

The concept of hijrah according to veiled women rests on two kinds of factors, internal and external, described in Table 2. Table 2 shows that women's commitment to hijrah, as reflected in the use of the niqob, is influenced by both. The internal factor means that a woman does hijrah out of self-encouragement; the desire to be better every day is a powerful driver, and the hijrah of this group is shown through the use of the niqob. Wearing the niqob is one way to protect oneself from slander when away from one's husband, provided the husband permits it. A feeling of guilt toward the husband can also motivate a woman to wear the niqob: at home, she tends to wear house dresses and no makeup, yet she puts on her best clothes and makeup when leaving the house, which encourages her to wear the niqob so as to reserve her beauty for her husband alone. They perceive wearing the niqob as a good sunnah; thus every woman who wears it intends to complete her religion. Three external factors influence a niqober's hijrah. First is the family factor: family members can influence someone to wear the niqob, and parents' teaching is a major reason why a woman decides to do so.
A beautiful woman who does not wear the niqob will trigger slander because men who are not her mahram can see her face, and when they do, she incurs sin; such is the teaching parents give their daughters to commit hijrah and start wearing the niqob. As emphasized by Cahyaningrum and Desiningrum (2017), the decision of Muslim women to wear the niqob is influenced by parents when they are still minors or not yet adults (Cahyaningrum and Desiningrum 2017). Second, the partner factor has become a reason for wearing the niqob: when a husband joins a sermon that encourages wives to wear it, an obedient wife will do so, wearing the niqob voluntarily out of faith and love for her husband. In line with that, Nursalam and Syarifuddin (2017) stated that many women wear the niqob because their husbands compel them, such as a woman in Tobi'ah village who first wore the niqob at her husband's insistence (Nursalam and Syarifuddin 2017). A wife seeks her husband's pleasure by obeying him, and although she may first wear the niqob at his order, the habit settles into her heart and becomes a practice. In other words, partners strongly influence women's hijrah decisions. Third is the environmental factor. Living among niqobers makes a Muslim woman feel comfortable, even if she first tried the niqob out of mere curiosity; she then feels comfortable because she can avoid slander. The environment in which a person makes friends has a strong influence: surrounded by niqobers every day, one feels uncomfortable not wearing the niqob too. The environment is thus very effective in leading women to wear it, particularly women aged 20 to 25, who are adults and brave enough to decide their own way of life (Fitriani and Astuti 2012). Putri and Firdaus added that the Muslim women most easily influenced by their environment are mostly those living in a university setting (Putri and Firdaus 2018). As stated by Nursalam and Syarifuddin (2017), some women start wearing the niqob under the influence of organizations and peers (Nursalam and Syarifuddin 2017).

Meaning of Religion in Social Media

Veiled women predominantly spend their time in the domestic sphere and minimize interaction in public space; almost all activities are completed at home unless there is a need to go out. Nevertheless, quite a few of them are very active in interacting with other veiled women through social media, as shown in Table 3. Table 3 shows that many people assume women who do hijrah with the niqob are radical. This is influenced by recent cases of bombs detonated by 'bomb brides': the wives of the bombers were niqobers, so women wearing the niqob are considered radical. The situation worsens when television news shows the wives of ISIS terrorists, who also wear the niqob. Radicalism is attached to those who wear the niqob, even though not all niqobers adhere to radical Islamic teaching; the sermons they attend never discuss bomb-assembly tutorials or other extreme teachings.
Accordingly, the niqob is often associated with the attributes of fanatical, fundamentalist, and hard-line Islamic organizations (Putri 2019; Zulfa and Junaidi 2019). This is because the majority of the wives and families of the suicide bombers and terrorists accused of terror attacks in Indonesia wear the niqob (Khoiroh and Chakim 1970). Niqobers on social media reject the radical label attached to them, just as a stereotyped label is attached to Muslim women who wear miniskirts. Veil wearers reject the radical label because they are not in fact radicals; they are merely performing the sunnah of the Prophet Muhammad.

Radical

Niqobers are often considered radical because of news about suicide bombings carried out in some areas of Indonesia, where the bombers' wives wore the niqob; the wives of ISIS members also wear it. This is what triggers the assumption that niqobers adhere to radical ideology. It is in line with Nursalam and Syarifuddin's research (2017), which found that niqobers are viewed negatively and not accepted in society because they are seen as terrorists (Nursalam and Syarifuddin 2017). Much negative stigma about niqobers arises in society, as shown in a study in which, of six respondents, only two held no negative assumptions (Apriani 2018; Puspasari 2013). The majority of respondents in another study also stated that they were not happy to meet niqobers, feeling scared and worried (Karunia and Syafiq 2019; Rahman and Syafiq 2017; Zulfa and Junaidi 2019). Wearing the niqob is identified with Arabic culture because its origin, as the niqobers believe, is a sunnah practiced by the Prophet Muhammad's wives. Long before the Prophet's era, the niqob was used as a head accessory in the Greek age. Some people assume that the niqob in Indonesia was brought from Arabia, while others believe it is not an Arab cultural artifact. For its users, the niqob is the Prophet's sunnah, good to perform, and some even consider it obligatory. There is, of course, a reason why the niqob is often associated with Arab culture: most Muslim women living in Arab countries wear it. The polemic over whether the niqob is an Arab cultural practice continues today. Discussions of the niqob are accompanied by the word hijrah. Seven students at the Faculty of Social and Political Sciences (FISIP), Riau University, did hijrah by modifying their appearance, changing their behavior, and wearing the niqob (Putri and Firdaus 2018). The view of the niqob as Arab culture circulates among religious observers. Wearers of the niqob often begin adopting Arab traditions in clothing and food: they eat Arabic foods such as kebabs, paratha, Kabuli rice, and dates; when sick, they take herbal medicines branded with Arabic words and turn to ruqya (exorcism), cupping, or Arab-style medicine; and their fashion changes to long dresses and the niqob. As a result, a perception arises that hijrah is an Arab cultural practice. Zahara et al. (2020) stated that women who change their appearance and attitudes to be more Islamic are in fact learning to hijrah (Zahara et al. 2020). Some people agree that the niqob is a symbol of hijrah, and the trend of hijrah with the niqob is also commonly found among Indonesian artists.
They change their lifestyle to an Islamic one, covering themselves with the niqob and associating with friends who also wear it. These artists wear the niqob because they have received guidance, not because they belong to a radical ideology. The niqob is thus not merely a symbol of hijrah currently trending among the youth. The conception of hijrah displayed through the niqob has produced varied views within society. Some assume that the niqob is one level better than the typical or standard hijab: by wearing it, a person is assumed to have already received guidance, so the niqob becomes a symbol of hijrah; in other words, those who have yet to wear the niqob are assumed not yet to have received guidance or done hijrah. Others assume that wearing the niqob is not a symbol of hijrah but a religious commodification. As hijrah became a youth trend, halal makeup products appeared, the term 'dating' was replaced by 'ta'aruf,' and the selfie was recast as the Muslimah selfie. Such developments turn the niqob, hijrah, and religion into commodities with many followers. Women who have done hijrah share their thoughts widely on social media, especially Facebook. They create communities for their hijrah peers, learn to hijrah with the niqob, and motivate each other (Utami 2019); these communities are growing on social media and have many followers. Outside these communities, other people doing hijrah share their opinions about it on social media, either as posts and statuses or as questions expecting answers from their mutuals. Their communication patterns can be analyzed to infer the concept of hijrah they have believed in all along.

CONCLUSION

Many aspects characterize the phenomenon of hijrah that is popular today, such as changes in clothing style and behavior. Many female artists do hijrah by wearing a long headscarf, while the male ones grow beards and change their clothing style to koko shirts or short pants. Some artists also begin to join Islamic study groups with people they consider ustadz. Beyond the world of artists, many people also claim to do hijrah and then ask their wives to wear a veil, take part in Islamic studies, and adopt a more Islamic style, for example by sending their children to boarding school. Hijrah has become a phenomenon associated with the veil and with radicalism; because terrorists' wives have been found wearing veils, careful classification and discussion are needed. Using a conceptual approach, this research found a shift from the past meaning of hijrah, which can now be analyzed through various concepts. For veiled women there are three concepts of hijrah: a change in conditions, in behavior, and in dressing style. Hijrah with the veil is influenced by people close to the women, by family, and by environment, and veiled women's concept of hijrah affects the meaning of religion. Data collection in this study was done through social media and WhatsApp; in-depth interviews about the concept of hijrah as perceived by the niqobers would have been preferable. Religion is a sensitive matter, and valid, in-depth data would help strengthen the conclusions of this research, since hijrah, in this case, was analyzed only with the conceptual
2020-12-31T09:02:39.509Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "285c23cd72b243fe43ce8c6aafe893350b260484", "oa_license": "CCBY", "oa_url": "https://blasemarang.kemenag.go.id/journal/index.php/analisa/article/download/1200/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "47892abd68cf3c591b69a1b957b9e4a4e0657a96", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
213530794
pes2o/s2orc
v3-fos-license
Life cycle assessment of biodiesel production from crude palm oil: A case study of three Indonesian biodiesel plants

This study assessed the life cycle of biodiesel production from crude palm oil at three Indonesian biodiesel plants. The LCA stages are based on ISO 14040, which determines the goal, scope, inventory, impact assessment, and interpretation for the areas under study. The main aim of the research was to compare the carbon footprints of the biodiesel production processes. The study utilized the Greenhouse Gas Protocol V1.02 as the life cycle impact assessment method, with CO2eq evaluated for fossil fuel, land transformation, and biogenic sources. The results showed that the palm cultivation through crude palm oil production stage was the largest pollution source in all categories, with fossil CO2eq the highest: at plants A, B, and C, this stage caused 3.2, 3.02, and 2.85 tons of fossil CO2eq, respectively, extracted from the transportation and crude palm oil production processes, which use diesel fuel. Finally, in the biodiesel production of plants A, B, and C, crude palm oil production was the largest source of impact in all categories.

Introduction

Biodiesel is an environmentally friendly renewable energy source, which emits less carbon than fossil diesel. Carbon emissions from the use of diesel, such as CO2, CO, and SO2, cause air pollution and can trigger global warming and climate change [1]. Even though biodiesel has low emissions in use, it produces CO2 during its production process. A Life Cycle Assessment (LCA) study is therefore needed to calculate the carbon footprint of biodiesel production and to determine the process causing the highest pollution. The results of the LCA are used to evaluate future biodiesel production, thereby making the process environmentally friendly [2,3,4,5]. In Indonesia, LCA calculation for biodiesel production is required in accordance with ISO 14040:2006 [2], which was ratified as SNI ISO 14040:2016 [6]. Biodiesel plants in Indonesia that have no LCA report therefore do not comply with the international ISO 14040 policies. Previous researchers have performed LCA analyses of Indonesia's biodiesel plants [8,5,3] with crude palm oil used as feedstock. According to those analyses, the biodiesel production process in plants A, B, and C has its highest pollution from the use of chemical fertilizers at the palm cultivation stage [5,7,8]. As previously described, the LCA stages are based on ISO 14040 and determine the goal, scope, inventory, impact assessment, and interpretation for the biodiesel plants. Previous LCA studies on crude palm oil used methods such as IPCC 2007 [5,7] and CML 2 baseline 2000 [8]. IPCC 2007 addresses the global warming potential (GWP) impact category [5,7], while CML 2 baseline 2000 covers eutrophication (E), acidification (A), global warming potential (GWP), human toxicity (HT), photochemical oxidation (PO), marine aquatic eco-toxicity (MAE), terrestrial eco-toxicity (TE), fresh water aquatic eco-toxicity (FWAE), ozone layer depletion (ODP), and abiotic depletion (AD) [8,9,10]. The current study utilizes the Greenhouse Gas Protocol V1.02 as the LCA method for biodiesel production and calculates impacts in the categories compared below.
In this research, the impact categories calculated were fossil CO2eq, biogenic CO2eq, CO2eq from land transformation, and CO2 uptake [11]. The objective of this research, therefore, was to compare the carbon footprints of biodiesel production at three Indonesian biodiesel plants. In addition, recommendations were provided to ensure that the stages involved in the biodiesel production processes of plants A, B, and C are environmentally friendly.

Figure 3: System boundary of biodiesel production from biodiesel plant C [3].

Inventory

This study uses the SimaPro 8.4.0.0 faculty version software and the Ecoinvent 3 database. Secondary data were obtained from the biodiesel production of plants A, B, and C, as shown in Tables 1, 2, and 3. The method distinguishes four carbon types: 1) carbon uptake; 2) carbon from land transformation; 3) biogenic carbon (carbon from biogenic sources such as trees and plants); and 4) fossil-based carbon (carbon originating from fossil fuels) [11].

Interpretation

Each biodiesel produced has a different carbon footprint, which depends on the mass and energy inputs.

Weighting of Carbon Footprint

Weighting represents the magnitude of the carbon-footprint impact of each production process at biodiesel plants A, B, and C, expressed in tons. Figures 4, 5, and 6 show the weighting of the carbon footprint from palm cultivation until crude palm oil production, with the largest pollution source found in fossil CO2eq. At plants A, B, and C, this stage emitted 3.2, 3.02, and 2.85 tons of fossil CO2eq, respectively. The fossil CO2eq came from the diesel fuel used in transportation and in the crude palm oil production process, in addition to pollution at the cultivation stage from the use of chemical fertilizers [5,7,8]. Biogenic CO2eq amounted to 2.37, 2.24, and 2.11 tons, respectively, and came largely from forest and land burning when clearing the palm plantations and from the use of biomass fuel in boiler engines [12,13,14,15,16,17,18]. The same stage at plants A, B, and C yielded a CO2 uptake of -5.97, -5.65, and -5.32 tons, respectively.

Figure 6. Weighting of the carbon footprint of palm cultivation until crude palm oil production at plant C.

Figures 7, 8, and 9 show the weighting of the carbon footprint of biodiesel production at plants A, B, and C, with the highest pollution in fossil CO2eq. In plant A, the use of methanol and electricity caused 0.386 and 0.1821 tons of pollution, respectively. Similarly, the use of methanol and sodium hydroxide caused the highest fossil CO2eq in plant B, at 0.163 and 0.107 tons respectively, while methanol and electricity contributed 0.09 and 0.182 tons in plant C. The biodiesel production process at plants A, B, and C would be more environmentally friendly if oil palm plantations were established without burning the forest. A system of selective logging of large trees on cleared plantation land is advised, as this tends to reduce CO2 pollution. Furthermore, diesel-fueled transportation should be avoided and diesel usage in the crude palm oil production process reduced.

Conclusion

Of the three biodiesel plants, plant C has the most environmentally friendly carbon footprint. However, the highest impact in all categories is caused by the palm cultivation through crude palm oil production stage.
It is therefore recommended that the palm cultivation through crude palm oil production stage at plants A, B, and C be made environmentally friendly by avoiding diesel-fueled transportation and by reducing diesel consumption during the production process.
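To make the category bookkeeping concrete, the short Python sketch below aggregates the per-plant figures quoted in this study (fossil CO2eq, biogenic CO2eq, and CO2 uptake for the cultivation-to-crude-palm-oil stage) into a simple net balance. The netting rule of summing the three categories is an illustrative assumption, not the Greenhouse Gas Protocol V1.02 procedure itself, and the names used are hypothetical.

# Per-plant carbon-footprint categories, in tons CO2eq, as quoted in the text
# for the palm-cultivation-to-crude-palm-oil stage (uptake is negative).
footprint = {
    "A": {"fossil": 3.20, "biogenic": 2.37, "uptake": -5.97},
    "B": {"fossil": 3.02, "biogenic": 2.24, "uptake": -5.65},
    "C": {"fossil": 2.85, "biogenic": 2.11, "uptake": -5.32},
}

def net_co2eq(categories):
    """Net CO2eq (tons): fossil + biogenic emissions plus (negative) uptake."""
    return sum(categories.values())

for plant, cats in footprint.items():
    print(f"Plant {plant}: net {net_co2eq(cats):+.2f} t CO2eq")
# Plant C shows the smallest fossil and biogenic terms, consistent with the
# conclusion that it is the most environmentally friendly of the three.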
2019-11-28T12:42:10.711Z
2019-11-21T00:00:00.000
{ "year": 2019, "sha1": "c3286e282c4026c508122e9b6a27a7f37424441f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/348/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "71d52911102a52255395469f17331306dee47606", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
272048713
pes2o/s2orc
v3-fos-license
Community composition, population structure and regeneration potential of tree species in Oak-dominated mixed forests of Rajouri district in Jammu and Kashmir, India.

The study was carried out to explore the diversity and regeneration potential of tree species in the mixed Oak forests of Rajouri district of Jammu and Kashmir (India). A total of 20 tree species were recorded from the area, dominated by various species of oak, particularly Quercus leucotrichophora, which shows the maximum values of density, basal cover and IVI. In different localities it has different groups of associates, such as Q. floribunda, Q. semecarpifolia, Q. glauca, Buxus wallichiana, Pinus roxburghii, Aesculus indica, Rhododendron arboreum, etc. The majority of the species show very poor regeneration, which is a matter of concern and demands proper conservation strategies.

INTRODUCTION

Trees provide the overall physical structure of habitat in a forest ecosystem (Singh et al., 2016; Malik et al., 2016). A forest crop, majorly represented by its tree species, continues its growth and rejuvenation through the addition of newer individuals. Every living organism tends to expand its population and thus continues its existence through its succeeding generations. Population refers to the number of individuals of a species in an area at a specific point of time. The ratio of various age groups in a population determines its reproductive status and indicates its future course (Odum, 1971). Plant species maintain and increase their populations through the process of regeneration, which is the key ecological phenomenon in any community. Natural regeneration is an essential process for the preservation of biodiversity and the good health of an ecosystem. Natural regeneration of tree species depends mainly on the production and germination of seeds and the establishment of new recruits, that is, seedlings and saplings (Rao, 2008). It is affected by environmental factors and anthropogenic pressures prevalent in a region. Demographic variables such as the recruitment, mortality and growth rates of individuals describe the population dynamics of a plant community (Watkinson, 1997) and determine its regeneration potential. The presence of a sufficient number of seedlings and saplings indicates good regeneration behaviour of a particular species; an inadequate number of young trees, saplings and seedlings depicts poor regeneration, whereas the complete absence of seedlings and saplings indicates no regeneration. The regeneration potential of various woody species decides the future composition of a forest in space and time (Henle et al., 2004). Reliable information on the regeneration trends of woody species in a plant community not only helps in predicting the future composition of a crop but also provides a basis for effective forest management and conservation. Oak-dominated forests form an important group of vegetation in the Himalayas. Besides their huge ecological significance, they are also closely associated with the socio-economy of the locals. However, under the influence of increased anthropogenic pressure and possible climate change-related stresses, these forests are fast shrinking in terms of area, density and diversity.
Although sufficient literature exists on the ecological attributes, including population dynamics, of various types of vegetation in other parts of the Himalayas, little or no such information is available for the Oak-dominated mixed forests of Jammu and Kashmir. Various species of oak and their associate tree species grow abundantly on the southern slope of the Pir Panjal Himalayan range in Jammu and Kashmir. They form an important group of vegetation and represent the temperate broadleaved forests in the state. Like most mountain forest ecosystems (Krauchii et al., 2000), these forests also have a major problem of poor regeneration. The present study was undertaken with an aim to explore and describe the population structure and regeneration potential of major tree species in the Oak-dominated plantations of Rajouri forest division (which forms part of the Pir Panjal range) in Jammu and Kashmir.

MATERIAL AND METHODS

Study area: Rajouri district of Jammu and Kashmir in India forms part of the mighty Pir Panjal Himalayan range. It lies between 30°50′N and 33°30′N latitude and 70°E and 74°10′E longitude, with an altitudinal range from 370 to 6000 m above sea level, spreading over an area of 2630 sq km. The topography of the district varies from plains or gentle slopes to hilly and very hilly (Fig. 1). The region is drained by numerous perennial rivers originating from the northern snow-capped mountains. The main soil types present in the area include Ultisols, Sub-Mountainous Soil (Alfisols) and Bhabar Soil (Entisols). The climate is generally mild, warmer in the lower plains and harsher and cold with heavy snowfall in the upper mountainous part. Average annual rainfall is 1150 mm and the average temperature varies from 7.42 to 37.4 degrees Celsius. Higher reaches support characteristic alpine vegetation, whereas lower slopes exhibit rich coniferous and broadleaved forests between 1000 m and 3000 m elevation. The district has 48.48% of its geographic area under forest cover (Anonymous, 2009), which supports a good deal of biodiversity, including several endemic plant and animal species. From the forest management point of view, district Rajouri falls under the Western Circle and is divided into two divisions, namely Nowshera Forest Division and Rajouri Forest Division. Rajouri Forest Division, where this study was undertaken, comprises three territorial ranges, including Kalakote Forest Range and Rajouri Forest Range, with Oak as the principal species (Anand, 2014).

Sampling: After a preliminary survey during 2018-19, four forest sites representing all three territorial ranges of Rajouri forest division were selected for data collection (Table 1). Plots of 10 ha (1000 m x 1000 m) in size, visually representative of the overall vegetation of the area, were delineated for detailed study. 20 quadrats of 20 m x 20 m size were laid randomly at each forest site for analysis of the vegetation, with nested quadrats of 1 m x 1 m and 5 m x 5 m for seedlings and saplings, respectively. Circumference at breast height (CBH = 1.37 m) was taken for the determination of tree basal area. Plants with circumference less than 10 cm, 10-30 cm and above 30 cm were considered seedlings, saplings and trees, respectively.
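Assuming the per-hectare densities reported below are scaled up from the 20 quadrats of 20 m x 20 m laid at each site (a total sampled area of 8000 m²), the conversion is a simple area proportion, sketched in Python below; the raw count used in the example is hypothetical.

QUADRATS = 20
QUADRAT_AREA_M2 = 20 * 20                      # each quadrat covers 400 m^2
SAMPLED_AREA_M2 = QUADRATS * QUADRAT_AREA_M2   # 8000 m^2 sampled per site

def density_per_ha(count: int) -> float:
    """Individuals per hectare from a raw count over the sampled area."""
    return count * 10_000 / SAMPLED_AREA_M2

print(density_per_ha(168))  # e.g., 168 trees counted -> 210.0 trees/ha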
Vegetation Analysis: The dominance of the plant species was determined by the Importance Value Index (IVI). The IVI was computed by summing the relative frequency, relative density and relative dominance (Curtis and McIntosh, 1950; Mishra, 1968). Basal cover is the portion of ground surface occupied by a species (Greig-Smith, 1983) and was calculated using the following formulas:

Tree basal area = G² / (4π), where G is the girth of the tree at 1.37 m and π ≈ 3.14;
Total basal cover = Density x Tree basal area.

Regeneration Status: The regeneration status of individual tree species was determined on the basis of their quantitative potential in different age classes in the following manner:
• No regeneration, if a species is present only in adult form.
• New regeneration, if the species has no adults but only seedlings or saplings.

RESULTS AND DISCUSSION

Community composition: A total of 20 tree species were recorded from the study area (Table 1). Quercus leucotrichophora was the main dominant species. Community structure, floral composition, diversity and other ecological attributes of the vegetation in a region are mainly determined by its geographic location, climate, soil conditions and other environmental factors. The floral composition of the studied area is similar to that reported by other workers for temperate moist forests of the western Himalayas (Singh et al., 2016; Malik et al., 2014, 2016). The forest area is dominated by various species of Oak, particularly Quercus leucotrichophora, which is believed to be the climax species of the mid altitudes in the western Himalayas (Singh and Rawat, 2012; Troup, 1921).

Population structure and regeneration potential: The population structure of the major tree species (those with higher IVI) is summarized in Table 2. At Site I, Quercus leucotrichophora showed almost equal numbers of trees (210) and seedlings (211) but a slightly higher number of saplings (230). Pyrus pashia showed higher numbers of seedlings (50) and saplings (63) than trees (47). Rhododendron arboreum and Quercus floribunda, however, had very low numbers of seedlings and saplings (Figure 1). At Site II, all tree species except Pyrus pashia had very low numbers of seedlings and saplings in comparison to adult trees (Figure 2). A comparatively better density of saplings (287) and seedlings (205) was observed for Quercus leucotrichophora against its adult trees (233) at Site III (Figure 3). Buxus wallichiana showed a density of 63 seedlings and 75 saplings against 40 trees, while Quercus floribunda had 32 seedlings, 59 saplings and 61 trees. At Site IV, all the species except Pyrus pashia and Pinus roxburghii showed poor seedling density; Quercus leucotrichophora had 121 seedlings, 298 saplings and 262 trees (Figure 4). Bhat (2012), Ballabha et al. (2013) and Singh et al. (2016) observed much better trends of regeneration for similar forests in some other parts of the Himalayas. The poor regeneration of the majority of the tree species, including Quercus leucotrichophora, can be attributed to various natural as well as anthropogenic factors. Among natural factors, species fecundity, site conditions, climate change, etc., are very important determinants that affect seed production and dispersal as well as the germination, growth and survival of seedlings. Biotic interference such as deforestation, grazing, lopping and forest fires
also affects forest regeneration. The locals in the entire Pir Panjal belt rely heavily on nearby forests and thus exert tremendous pressure on them. Oak, with its multiple uses as fodder, fuel wood, timber, etc., faces serious threats in the area. Species showing poor or no regeneration are at high risk of depletion even if they are dominant at present (Nowacki and Abrams, 2008; Malik and Bhatt, 2016). The situation thus demands an immediate and appropriate management and conservation strategy for the Oak-dominated forests of Jammu and Kashmir. In conclusion, the area is rich in woody vegetation, with Oak dominating the canopy. However, most of the tree species show very poor regeneration behaviour, which is a matter of great concern and demands immediate attention for the implementation of appropriate conservation and management strategies.

Figure 2. Proportion of seedlings, saplings and trees of various species at Site I.
Figure 4. Proportion of seedlings, saplings and trees of various species at Site III.

Quercus leucotrichophora showed the highest values of frequency (100%), density (420 to 560 trees/ha, excluding seedlings), basal cover (55.61 to 87.49 m²/ha) and IVI (148.410 to 167.248). Other species of Oak present in the region were Quercus floribunda, Quercus semecarpifolia and Quercus glauca. Buxus wallichiana Baill., Quercus floribunda and Quercus semecarpifolia followed in terms of IVI at the various sites. Total density ranged from 770 to 975 individuals/ha, whereas total basal area was between 88.84 and 133 m²/ha.

Table 2. Phytosociological characteristics of the study area.
Table 3. Seedlings, saplings and trees of major species.

Analysis of the population structure of the various species indicates poor or average regeneration behaviour for the majority of the tree species. Quercus leucotrichophora showed poor regeneration at all sites. Rhododendron arboreum and Quercus floribunda, which were present at three out of four sites, also showed very poor regeneration. Pyrus pashia and Pinus roxburghii, however, showed better densities of seedlings and saplings, indicating good regeneration potential. Buxus wallichiana, an endemic species with a very restricted distribution in the Pir Panjal region, present at one site (Site III), showed good regeneration behaviour.
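A minimal Python sketch of the quantitative steps described above: basal area from girth at breast height, the IVI as the sum of relative frequency, relative density and relative dominance, and a coarse regeneration classification. The first two formulas follow the methods section; the good/poor rule comparing recruits with adults is an added assumption for illustration, and the example values are hypothetical.

import math

def basal_area_m2(girth_cm: float) -> float:
    """Basal area from girth (circumference) at 1.37 m: A = G^2 / (4*pi)."""
    g = girth_cm / 100.0            # convert cm to m
    return g * g / (4.0 * math.pi)

def ivi(rel_frequency: float, rel_density: float, rel_dominance: float) -> float:
    """Importance Value Index (Curtis and McIntosh, 1950; Mishra, 1968)."""
    return rel_frequency + rel_density + rel_dominance

def regeneration_status(seedlings: int, saplings: int, trees: int) -> str:
    """Coarse classification: 'none' if only adults, 'new' if only recruits;
    otherwise compare recruits with adults (an illustrative proxy only)."""
    if seedlings == 0 and saplings == 0:
        return "no regeneration"
    if trees == 0:
        return "new regeneration"
    return "good" if (seedlings + saplings) > trees else "poor"

print(round(basal_area_m2(90.0), 4))   # ~0.0645 m^2 for a 90 cm girth
print(ivi(12.5, 20.0, 30.0))           # 62.5
print(regeneration_status(10, 8, 60))  # 'poor' under this proxy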
2024-08-29T16:18:00.457Z
2023-01-25T00:00:00.000
{ "year": 2023, "sha1": "828afbecf96bfb93e1101e23caa88db755753095", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.7770/safer-v11n1-art2378", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "706ec5987e68905467c3a006e3cde7c87991ea22", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
10534759
pes2o/s2orc
v3-fos-license
Association of plasma potassium with mortality and end-stage kidney disease in patients with chronic kidney disease under nephrologist care - The NephroTest study

Background: Low and high blood potassium levels are common, and both have been associated with poor outcomes in patients with chronic kidney disease (CKD). Whether such relationships may be altered in CKD patients receiving optimized nephrologist care is unknown.

Methods: NephroTest is a hospital-based prospective cohort study that enrolled 2078 nondialysis patients (mean age: 59 ± 15 years, 66% men) in CKD stages 1 to 5 who underwent repeated extensive renal tests, including plasma potassium (PK) and glomerular filtration rate measured (mGFR) by 51Cr-EDTA renal clearance. Test reports included a reminder of recommended targets for each abnormal value to guide treatment adjustment. Main outcomes were cardiovascular (CV) and all-cause mortality before end-stage kidney disease (ESKD), and ESKD.

Results: At baseline, median mGFR was 38.4 mL/min/1.73 m²; the prevalence of low PK (<4 mmol/L) was 26.5%, and of high PK (>5 mmol/L) 6.4%; 74.4% of patients used angiotensin-converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARB). After excluding 137 patients with baseline GFR < 10 mL/min/1.73 m² or lost to follow-up, 459 ESKD events and 236 deaths before ESKD (83 CV deaths) occurred during a median follow-up of 5 years. Compared to patients with PK within [4, 5] mmol/L at baseline, those with low PK had hazard ratios (HRs) [95% CI] for all-cause and CV mortality before ESKD, and for ESKD, of 0.82 [0.58–1.16], 1.01 [0.52–1.95], and 1.14 [0.89–1.47], respectively, with corresponding figures for those with high PK of 0.79 [0.48–1.32], 1.5 [0.69–3.3], and 0.92 [0.70–1.21]. Considering time-varying PK did not materially change these findings, except for the HR of ESKD associated with high PK, 1.39 [1.09–1.78]. Among 1190 patients with at least two visits, PK had normalized at the second visit in 39.9% and 54.1%, respectively, of those with baseline low and high PK. Among those with low PK that normalized, ARB or ACEi use increased between the visits (68.3% vs 81.8%, P < .0001), and among those with high PK that normalized, potassium-binding resin and bicarbonate use increased (13.0% vs 37.0%, P < .001, and 4.4% vs 17.4%, P = 0.01, respectively) without decreased ACEi or ARB use.

Conclusion: In these patients under nephrology care, neither low nor high PK was associated with excess mortality.

Background

The mainstays of nonspecific secondary prevention of chronic kidney disease (CKD) progression, irrespective of cause, include blood pressure control and proteinuria-directed strategies to preserve residual kidney function, with special emphasis on angiotensin-converting enzyme inhibitors (ACEi) or angiotensin-receptor blockers (ARB) [1-4]. However, fear of inducing hyperkalemia, an inherent risk associated with the mechanism of action of these drugs, may limit their initiation or dose increases, given the considerable attention paid to this risk, especially in patients with CKD, diabetes mellitus, and/or heart failure (HF) [5-8].
Although the exact serum (SK) or plasma potassium (PK) concentration associated with increased mortality remains controversial, growing evidence suggests that in patients with CKD, diabetes mellitus, or HF, especially the elderly, an SK > 5.0 mmol/L is associated with a higher risk of death [9,10]. Moreover, a post-hoc analysis of the Reduction of Endpoints in non-insulin-dependent diabetes mellitus with the Angiotensin II Antagonist Losartan (RENAAL) trial showed that increased SK concentrations (≥5.0 mmol/L) at 6 months were associated with an increased risk of doubling of serum creatinine or end-stage kidney disease (ESKD), independent of baseline renal function and other important predictors of renal outcomes [11]. Low SK (<4 mmol/L) has also been associated with excess mortality and hospitalization, especially for patients with CKD and HF [12], for whom the relation between SK and mortality is U-shaped [13]. The frequent concomitant use of non-potassium-sparing (thiazide and loop) diuretics may induce low SK in CKD patients, and again a U-shaped relation has been observed between SK and mortality, with mortality risk significantly greater at SK < 4.0 mmol/L than at 4.0 to 5.5 mmol/L. In that CKD cohort, only the composite of cardiovascular events or death was associated with elevated SK (>5.5 mmol/L) [14]. Risk for ESKD was also elevated at SK < 4 mmol/L. Hayes et al. reported a significant nonlinear association between SK and all-cause mortality in a retrospective CKD survey; regression splines showed that mortality increased in association with both high and low SK levels [15]. Other studies in CKD patients have also shown that low SK (<3.5 mmol/L) is associated with excess mortality [4] and ESKD risk [16]. Another study found low SK (<4 mmol/L) associated with mortality in patients with CKD but not with ESKD [17]. Higher SK (>5 mmol/L) was associated with excess ESKD in one study [16] but not in another [17]. Nevertheless, it appears that high SK (>5, 5.6, or 6 mmol/L) is associated with excess mortality [4,17]. Of note, all these studies reported having measured SK, which is known to overestimate potassium concentration by 0.4 mmol/L on average compared with plasma potassium (PK), whose measurement on anticoagulated blood avoids the potassium released during clotting [18,19]. In this study, we aimed to evaluate the association of PK with renal and cardiovascular outcomes, along with practice patterns in the use of drugs apt to modulate PK, in a cohort of patients with CKD under optimized nephrologist care characterized by repeated extensive laboratory work-ups.

Study population

NephroTest is a prospective hospital-based cohort study that enrolled 2084 adult patients with any diagnosis of CKD stages 1-5 referred by nephrologists to three departments of physiology for extensive work-ups between January 2000 and December 2012 [20]. The NephroTest work-up was designed to optimize CKD care by providing nephrologists with a large set of blood and urine tests to assess each patient's metabolic complications and cardiovascular risk at yearly intervals. Laboratory reports flagged any relevant abnormal values, such as PK lower than 3.5 or higher than 5.0 mmol/L, together with a reminder of current recommended targets, to guide treatment adjustment [20]. Eligible patients were ≥18 years of age, not pregnant, not on dialysis, and not living with a kidney transplant.
After exclusion of 6 patients with missing data for PK or treatment at baseline, this analysis included 2078 patients (Additional file 1: Figure S1).

Measurements

Clinical and laboratory data were recorded during a 5-h in-person visit at enrollment and during follow-up. They included demographics, renal diagnosis, medical history, height and weight, resting blood pressure, and medications. We collected blood and urine samples to determine levels of PK, venous CO2, HbA1c, and albumin, as well as urinary creatinine, albumin, and potassium. PK status was studied in three categories: <4 mmol/L (low PK), 4-5 mmol/L (normal PK), and >5 mmol/L (high PK). Diabetes was defined as fasting glycemia ≥7 mmol/L, HbA1c ≥6.5%, or antidiabetic treatment. At each visit, GFR was measured by 51Cr-EDTA renal clearance. Briefly, 1.8-3.5 MBq of 51Cr-EDTA (GE Healthcare, Velizy, France) was injected intravenously as a single bolus. An hour was allowed for distribution of the tracer in the extracellular fluid, and then the average renal 51Cr-EDTA clearance was determined over five to six consecutive 30-min clearance periods. Over the study period, patients underwent a total of 5523 laboratory visits (a median of 2 [IQR, 1-4] per patient); 1190 patients (57%) had at least two visits.

Outcomes

The primary endpoints were ESKD, defined by dialysis start or preemptive kidney transplantation, and pre-ESKD all-cause mortality. The secondary endpoints were pre-ESKD cardiovascular (CV) mortality and all-cause death regardless of ESKD. Events were identified either from patients' medical records or through record linkage with the national REIN (Renal Epidemiology and Information Network) registry of treated ESKD and the national death registry. All survival data were right-censored on December 31, 2013, or at the date of the last visit for patients not identified in the registries. Cardiovascular causes of death included ischemic heart disease, cerebrovascular disease, HF, dysrhythmia, peripheral arterial disease, sudden death, and valvular disease. Patients were followed up through December 31, 2013. These outcomes were studied in 1941 patients after exclusion of 137 with baseline GFR < 10 mL/min/1.73 m² or lost to follow-up from the initial sample (Additional file 1: Figure S1).

Statistical analyses

In the overall population, we first used analysis of variance (ANOVA), the Kruskal-Wallis test, or the chi-square test, as appropriate, to compare patients' baseline characteristics by PK status subgroup. We then used multinomial logistic regression models to estimate odds ratios (OR) and their 95% confidence intervals (95% CI) for low and high PK associated with baseline characteristics, with normokalemia as the reference category. Second, we performed Cox regression models to estimate crude and adjusted cause-specific hazard ratios (HR) and their 95% confidence intervals (95% CI) for ESKD and for pre-ESKD all-cause and CV mortality associated with PK status at baseline, with normokalemia [4-5 mmol/L] as the reference category. In each of these models, the competing events were treated as censored observations [21].
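As an illustration of the cause-specific approach just described, the following minimal sketch fits a stratified Cox model in which the competing event is treated as censoring and baseline mGFR enters as a stratum rather than a covariate. It uses the Python lifelines package on toy data; all column names and values are hypothetical and are not the NephroTest data or the authors' SAS/R code.

import pandas as pd
from lifelines import CoxPHFitter

# Toy data: follow-up time (years), an ESKD indicator in which deaths before
# ESKD are treated as censoring (event = 0), a high-PK indicator, and a
# baseline-mGFR stratum (hypothetical classes).
df = pd.DataFrame({
    "time":       [2.0, 3.0, 5.0, 4.0, 1.5, 2.5, 5.0, 3.5],
    "eskd":       [1,   1,   0,   0,   1,   1,   0,   0  ],
    "high_pk":    [1,   0,   1,   0,   0,   1,   0,   1  ],
    "mgfr_class": [1,   1,   1,   1,   2,   2,   2,   2  ],
})

cph = CoxPHFitter()
# Stratifying (rather than adjusting) for baseline mGFR mirrors the handling
# used when the proportional-hazards assumption fails for that covariate.
cph.fit(df, duration_col="time", event_col="eskd", strata=["mgfr_class"])
cph.print_summary()  # cause-specific HR for high PK with its 95% CI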
Adjustment covariates were similar in all analyses: age, center, sex, ethnicity, smoking status, body mass index (BMI), diabetes, baseline mGFR, albuminemia, urinary potassium, log albumin/creatinine ratio, medication that may decrease P_K (non-potassium-sparing diuretics, bicarbonate treatment, potassium-binding resins), and medication that may increase P_K (potassium-sparing diuretics, ACEi or ARBs, β-blockers). We tested the proportional-hazard assumption with Schoenfeld residuals against time for each covariate; because it was not satisfied for mGFR in the cause-specific Cox model for ESKD, we stratified rather than adjusted for baseline mGFR level, using six classes of mGFR (10-20, 20-30, 30-40, 40-50, 50-60, >60 mL/min per 1.73 m2). To account for changes in P_K over time, we used time-dependent Cox models to estimate crude and adjusted HRs for each outcome associated with P_K during follow-up. In the time-dependent analysis, medications were also updated at each visit. Finally, penalized splines were used in fully adjusted time-dependent Cox models to represent the functional relation between P_K measurements and the risk of each outcome. Third, we described changes in P_K status between the first and the second visit in the subpopulation of patients with at least two visits, as well as changes in medication between the visits for patients with low or high P_K at baseline that normalized at the second visit. Changes were tested with McNemar's test. Statistical analyses were performed with SAS version 9.4 (SAS Institute, Cary, NC) and R version 3.0.2.

Results

Patients with high P_K tended to be younger, more frequently men, with a history of cardiovascular disease, diabetes, lower mGFR, and higher albuminuria, and more frequent prescriptions for ACEi, ARB, bicarbonates, or potassium-binding resins (Table 1). Those with low P_K were younger, more often women, and had prescriptions for those medications less often. In multivariable analyses (Table 2), higher ORs of high P_K were significantly associated with diabetes, current smoking, lower mGFR, and prescriptions for P_K-increasing medication (i.e., ACEi or ARB or potassium-sparing diuretics), and lower ORs with older age and female gender. In contrast, higher ORs of low P_K were significantly associated with female gender and use of potassium-lowering medication, and lower ORs with lower mGFR, CVD history, and potassium-increasing medication.

Association of P_K status with ESKD and pre-ESKD mortality

Over a median follow-up of about 5 years, we found no significant association between P_K during follow-up and pre-ESKD overall or CV mortality, or with overall mortality regardless of ESKD (Fig. 1). HRs for ESKD were slightly but significantly higher at higher P_K levels (>5 mmol/L).

Changes in P_K status between visits

At the enrollment visit, 66.4% of patients were normokalemic, and at the second visit, 64.2% (Table 3). Overall, between the two visits, half of the patients remained in the normokalemic subgroup, while 39.9% of those with low P_K and 54.1% of those with high P_K at baseline had normal P_K at the second visit. In patients with low P_K that normalized, ACEi or ARB use increased between the visits (68.3% vs 81.8%, P < 0.0001) (Figure 2). In those with high P_K that normalized, use of potassium-binding resins and bicarbonates also rose between visits (13.0% vs 37.0%, P < 0.001 for potassium-binding resins, and 4.4% vs 17.4%, P = 0.01 for bicarbonates).
The use of ACEi or ARB did not change between the two visits (80.4% at visit 1 vs 84.8% at visit 2, P = 0.32). Nonetheless, ARB use increased between visits 1 and 2 (36.9% vs 50.0%, P = 0.03).

Discussion

In this cohort of CKD patients under nephrologist care, low P_K (<4 mmol/L) was relatively common, but hypokalemia (<3.5 mmol/L) and high P_K were uncommon. Neither high nor low P_K, at baseline or during follow-up, was associated with all-cause or CV mortality in this population. A major finding from this selected cohort of patients receiving optimized nephrologist care is that the lack of excess mortality with high P_K was apparently observed in the absence of any reduction in the use of ACEi or ARBs over time. Optimal care of patients with CKD stage 3 or higher should involve annual assessment of metabolic and cardiovascular complications and adaptation of medication to achieve recommended therapeutic targets [22]. The NephroTest work-up, implemented since 2000 in the three university hospitals in this study, sought to improve CKD care by providing comprehensive assessment of CKD complications at yearly intervals together with reminders of current recommended targets. It should be emphasized that the unique design of this study, with exclusive participation of patients with optimized nephrology care, makes it difficult to compare our results with those from other studies. Moreover, we measured P_K, which is likely to have resulted in a slight shift towards lower values compared with other studies using S_K. A U-shaped relation has previously been reported between S_K and mortality in several cohorts of HF [13], hypertension [23], and CKD patients [4,14,15,17], but we observed no such association with P_K in the NephroTest cohort. Although no causality could be ascertained in this observational setting, we note that 74.4% of the CKD patients in our cohort were treated with ACEi or ARB at baseline (a higher rate than in the above-mentioned CKD cohorts, where it was 58.0%, 59.0%, and 62.1% [14,15,17] and 29.0% [4]), that they had a low baseline prevalence of high P_K, and that their follow-up was reinforced, in that patients agreed to undergo, beyond their routine nephrology care, additional extensive laboratory testing. Strikingly, low P_K (common at baseline) and high P_K (uncommon at baseline) were corrected in a substantial number of patients between the first and second NephroTest work-ups: management was responsive to test results, as shown by the increased prescriptions for ARBs in patients with low baseline P_K and the increased prescriptions for potassium-binding resins and bicarbonate in those with high baseline P_K. Interestingly, the stable ACEi and increased ARB use in these patients suggests that the nephrologists were not reluctant to prescribe drugs that might promote still higher P_K. In contrast, a recent retrospective survey of US CKD patients reported a U-shaped association between S_K and discontinuation of these medications blocking the renin-angiotensin-aldosterone system (RAAS) [4]. It may be that the use of first-generation potassium-binding resins, either sodium-based (e.g., sodium polystyrene sulfonate, SPS) or calcium-based (e.g., calcium resonium), and bicarbonates made the RAAS inhibition sustainable (by taking care of the low-P_K part of the U-shaped curve) while avoiding life-threatening high P_K (by blunting the right-hand side of the U-shaped relation between P_K and outcomes). This interesting hypothesis warrants testing in randomized trials.
Only a few studies have observed a higher risk for ESKD associated with high S_K [11,16]. Our study found a slight but statistically significant excess risk of ESKD at higher P_K levels, observed only with time-dependent Cox models. Because both P_K and ESKD risk rise as GFR falls, it is difficult to determine whether this reflects a potential impact of P_K on CKD progression or residual confounding by mGFR level. Management of patients with chronic hyperkalemia is currently changing, and these findings are relevant to these changes [22]. Until recently, recommendations for these patients called for a low-potassium diet and the elimination of both potassium supplements and drugs, such as NSAIDs, that can compromise renal function. Instead, today, physicians are supposed to begin treatment with a non-potassium-sparing diuretic if indicated, or to increase the dose for patients already on a diuretic. Dose reduction or discontinuation of RAAS inhibitors, especially mineralocorticoid receptor antagonists, is also recommended. Patients with chronic hyperkalemia for whom continued use of these drugs is thought necessary, such as those with CKD and/or HF with reduced ejection fraction, can be treated with a potassium-lowering agent such as SPS, alone or with sorbitol, and the RAAS-inhibitor (RAASi) treatment continued [24]. Unfortunately, the poor tolerability of available P_K-lowering agents tends to induce poor compliance over the long run. SPS has been available to reduce potassium levels for several decades, but it is poorly tolerated, and its use, especially in combination with sorbitol, has been associated with bowel necrosis [25]. Because SPS exchanges P_K for Na+, it can increase sodium absorption and, therefore, plasma volume; it may thus be dangerous in patients with volume overload, such as those with chronic HF, CKD, and/or salt-sensitive hypertension. The recent availability, at least in the US, of the nonabsorbed potassium-lowering polymer Patiromer, and the likely availability within the year of the potassium-binding agent ZS-9, provide an opportunity to continue RAASi in patients with hypertension [25]. Although both Patiromer and ZS-9 have been shown to be effective in reducing P_K to normal levels in patients with hyperkalemia and to be relatively well tolerated, their long-term effectiveness on CV and renal outcomes with continued RAASi treatment must be evaluated and compared to outcomes in patients switching to another class of antihypertensive agent [26]. Whether the additional potassium and kidney function monitoring and reminders that were at the heart of the NephroTest intervention contributed to blunting the relation between P_K and the outcomes tested must also be considered. Observational data certainly suggest that implementation of potassium and GFR monitoring is inadequate, even though it is recommended by all guidelines for patients treated with ACEi or ARB [27], or mineralocorticoid receptor antagonists [28]. Major strengths of our study include its large sample size and duration of follow-up, together with a high level of accuracy in patient phenotyping, including the use of reference methods for measuring GFR, potassium (in plasma, which is preferable to serum), and several biomarkers of metabolic complications, both at baseline and follow-up visits.
Several limitations should also be noted, including the study's observational nature and the percentage (6.6%) of patients excluded from the analysis because of baseline GFR < 10 mL/min/1.73 m2 or loss to follow-up. Although this may have decreased the study power, particularly for extreme P_K values, it is unlikely to have biased our findings. As discussed above, the NephroTest cohort was highly selected compared with the overall CKD patient population, a selection that precludes any generalization of our findings. Nevertheless, it was this selected nature of our population that made it possible to identify clinical practice patterns, and it is these that may lead to improved clinical management of dyskalemia in other patients. Finally, because drug doses were not recorded, we cannot document whether or not ACEi or ARB dosage was reduced, when not withdrawn, in patients with high P_K.

Conclusions

In this cohort of patients under nephrology care, low P_K and high P_K appeared to be managed dynamically over time, that is, with careful attention and responsiveness to the patient's current metabolic status. In this context, neither low nor high P_K was associated with excess overall or cardiovascular mortality. Our study supports the concept, perceived in clinical practice, that a transient abnormality in potassium levels can be controlled by appropriate interventions, and thus may not necessarily indicate a worse outcome or imply the need for discontinuation of ACEi or ARB.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate

All patients provided written informed consent before inclusion. The NephroTest study complied with the Declaration of Helsinki and was approved by an ethics committee (CCTIRS MG/CP09.503).

Consent for publication

Not applicable.
Impact of Bio-fertilizer on Growth Parameters and Yield of Potato

Application of different bio-fertilizers, alone or in combination with others as seed, soil, and foliar spray, revealed that the bio-fertilizers have a stimulatory effect on germination, sprouting behaviour, and growth parameters of potato. The maximum germination and number of buds, with 5 per tuber, was recorded from the T7 treatment, in which the treatment was given as soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with T. viride + foliar spray with a bio-formulation of T. viride. It was also clear that bio-fertilizers have a stimulatory effect on the vigour of plants. The maximum plant height was recorded in treatment T7 (soil application of mustard cake + tuber treatment and foliar spray with T. viride), with a value of 11.16 cm at 30 days of plant age, followed by treatment T4 (soil application of mustard cake + tuber treatment and foliar spray with Azotobacter) and T1 (soil application of neem cake + tuber treatment with PSB), with values of 11.06 cm and 10.73 cm, respectively. Regarding the effect of seed treatment and foliar spray with bio-fertilizer on tuber size and yield, the maximum number of large-size tubers (5) and the highest yield (844.85 g) were found in the T7 treatment, where the treatment was given as soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with T. viride + foliar spray with a bio-formulation of T. viride.

Introduction

Potato (Solanum tuberosum L.), being a high-yielding, nutrient-exhaustive, and short-duration crop, needs higher quantities of fertilizers and pesticides compared with other crops. A normal potato crop yielding 30 t/ha removes about 100 kg N/ha from the soil (Pandey et al., 2006). Nitrogen and phosphorus are the major nutrients needed in potato cultivation, along with potassium. However, continuous and excessive use of chemical fertilizers is causing ecological and health hazards as well as deteriorating soil health, resulting in a decline in crop yields. Under these circumstances, organic sources play a vital role in improving soil fertility and crop productivity. The bio-fertilizers, viz. Azotobacter, Phosphobacteria, and Bacillus, have been recognized as the cheapest fertilizer inputs for improving soil health and fertility for optimum crop production. However, their effects depend on the type of crop, soil, and environmental conditions. Singh (2001) reported that the ability of Azotobacter and Phosphobacteria to proliferate in the rhizosphere of a crop suggests increased nutrient availability to the plants. Pfeiffer (1984) defined the biodynamic approach as working with the energy from cosmos, earth, cow, and plants, systematically and synergistically harnessed, which creates and maintains life. Pathak and Ram (2005) reported that the application of biodynamic compost or field sprays (BD) gave higher yield and better return in vegetables. Considering the above points, the present study was undertaken as "Impact of Bio-fertilizer on growth parameters and yield of potato".

Tuber seed treatment

Packets of Azotobacter containing 200 g of inoculum were obtained from the Department of Soil Science (Microbiology), Chandra Shekhar Azad University of Agriculture and Technology, Kanpur. Seed tubers of the potato variety Kufri Sindhuri were used to conduct the experiment. Seed tubers were treated with Azotobacter @ 2 g/10 g of tuber seed. 10 g of jaggery was also added to make a slurry, which was mixed with the seed tubers (Biswas et al., 2016). The tubers were then kept in the shade to dry. In addition, seed tubers were treated with formulations of neem cake and mustard cake @ 25%, and with bio-formulations of Trichoderma viride, Trichoderma harzianum, and phosphorus-solubilising bacteria (PSB) @ 2 g/10 g of tuber seed. The seed tubers were treated by dipping them in the prepared solutions separately. The treatments were given for 2 hours before the sowing of the tubers.

Germination and growth parameters of potato

The experiment was conducted in the glasshouse complex, Department of Plant Pathology, C.S.A. University of Agriculture and Technology, Kanpur. Earthen pots of 30 cm were used to conduct the experiment. The pots were previously filled with a mixture of sterilized sandy loam soil and farmyard manure in the ratio of 2:1. In each pot, one seed tuber was sown and watered as needed. The details of the treatments were as follows:

T1 = Soil application of FYM @ 150 g/pot + neem cake @ 150 g/pot + tuber treatment with T. harzianum + foliar spray with bio-formulation of T. harzianum
T2 = Soil application of FYM @ 150 g/pot + tuber treatment with PSB + foliar spray with bio-formulation of PSB
T3 = Soil application of FYM @ 150 g/pot + tuber treatment with Azotobacter + foliar spray with bio-formulation of Azotobacter
T4 = Soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with Azotobacter + foliar spray with bio-formulation of Azotobacter
T5 = Soil application of FYM @ 150 g/pot + neem cake @ 150 g/pot + tuber treatment with PSB + foliar spray with bio-formulation of PSB
T6 = Soil application of FYM @ 150 g/pot + tuber treatment with T. harzianum + foliar spray with bio-formulation of T. harzianum
T7 = Soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with T. viride + foliar spray with bio-formulation of T. viride
T8 = Soil application of FYM @ 150 g/pot + tuber treatment with Azotobacter + foliar spray with bio-formulation of Azotobacter
T9 = Soil application of FYM @ 300 g (control).
The experiment was laid out in a simple completely randomized design (CRD), with three replications per treatment. Three pots sown with untreated seed tubers served as the control. Observations on the effect of the different treatments were taken on germination pattern and plant height (cm) every 24 h up to 30 days of plant age. Tuber size and crop yield were also recorded after harvest.

Germination pattern

Treating the seed tubers with different bio-fertilizers might be responsible for early breaking of seed tuber dormancy and thereby increasing the germination percentage. The observation on the pattern of tuber germination was taken every 24 hours up to 3 days after sowing.

Plant height

For this purpose, three plants were selected randomly, and shoot height was measured (in cm) from the soil surface at the basal portion to the leaf tip with a meter scale, every 24 h up to 30 days of plant age. Three replications were kept for each treatment. The heights of the three plants were summed and divided by 3 to obtain the mean plant height.

Effect of bio-fertilizers on tuber size and yield

To explore the possible effect of bio-fertilizer on tuber size and yield, the potatoes were harvested and the tubers were graded as large, medium, and small; each grade was weighed separately on an electric balance for each treatment, and yield was calculated as the total tuber weight per treatment.

Effect of bio-fertilizer on growth parameters of potato plants

The effect of tuber seed treatment with bio-fertilizer on growth parameters and seed germination under glasshouse conditions shows that the bio-fertilizers were effective in increasing seed sprouting and plant vigour (Tables 1a, 1b, and 2).

Germination pattern

The stimulatory effect of the different bio-fertilizers on the germination pattern of potato might be responsible for early breaking of seed dormancy. Observations on the date of first germination and the number of sprouting branches were recorded; the data showed that the first sprouting of tubers occurred in the T1 and T7 treatments at 12 days after sowing (Table 1a). Among the treatments, late sprouting was found in the T3 treatment, which was also at par with the control plants. As for the number of sprouting buds, the maximum number, 5 per tuber, was found in the T7 treatment, where the treatment was given as soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with T. viride + foliar spray with bio-formulation of T. viride, followed by the T4 treatment (soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with Azotobacter + foliar spray with bio-formulation of Azotobacter) and the T1 treatment (soil application of FYM @ 150 g/pot + neem cake @ 150 g/pot + tuber treatment with T. harzianum + foliar spray with bio-formulation of T. harzianum), with 4 and 3 buds, respectively (Table 2). Among the treatments, the least number of buds was found for T3 and T6, with 1 bud per tuber each. Shanmugaiah et al. (2009) observed that cotton seeds treated with T. viride showed increased seed germination, root and shoot length, fresh and dry weight, and vigour index over the control.

Table 2. Effect of bio-fertilizer on sprouting of seed tuber.

Plant height

The effect of soil application of FYM and seed treatment with various bio-fertilizers on the plant height of potato was studied in a pot culture experiment in the glasshouse complex.
The observation on plant height was taken every 24 h up to 30 days after sowing (Tables 2 and 3). The data presented in Tables 1a and 1b show that bio-fertilizers have a stimulatory effect on plant vigour. The maximum plant height was recorded in treatment T7 (soil application of mustard cake + tuber treatment and foliar spray with T. viride), with a value of 11.16 cm at 30 days of plant age, followed by treatment T4 (soil application of mustard cake + tuber treatment and foliar spray with Azotobacter) and T1 (soil application of neem cake + tuber treatment with PSB), with values of 11.06 cm and 10.73 cm, respectively. From the data presented in Tables 1a and 1b, it is also clear that all the treatments increased plant growth over the control. Ravindra et al. (2015) found that the yield of tomato increased significantly with the combined application of seed treatment with T. harzianum + soil application of neem cake powder + foliar spray of carbendazim. Barik and Goswami (2003) examined the efficacy of bio-fertilizers with nitrogen levels on growth, productivity, and economy in wheat. Rasool et al. (2011) reported that Trichoderma isolates increased seedling growth and nutrient uptake in tomato.

Effect of bio-fertilizers as seed treatment and foliar spray on tuber size and yield of potato

The effect of seed treatment and foliar spray with bio-fertilizer on tuber size and yield was studied after harvest. Tubers were graded as large (more than 50 g), medium (25-49.5 g), and small (less than 25 g) (Table 3). The maximum number of large tubers was found in the T7 treatment, with a tuber weight of 261.27 g, where the treatment was given as soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with T. viride + foliar spray with bio-formulation of T. viride, followed by the T4 treatment (soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment with Azotobacter + foliar spray with bio-formulation of Azotobacter). Similar observations have also been reported for the medium and small tuber sizes, although the highest number of small tubers was found in the T4 treatment. The table also makes clear that for T6 and T9 there was no formation of large tubers, and the least number of tubers was found in the T2 treatment, with 1, 2, and 12 tubers in the large, medium, and small grades, respectively. As far as yield is concerned, the highest yield (844.85 g) was recorded from treatment T7 (soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment and foliar spray with T. viride), followed by treatment T4 (soil application of FYM @ 150 g/pot + mustard cake @ 150 g/pot + tuber treatment and foliar spray with Azotobacter) and the T1 treatment (soil application of FYM @ 150 g/pot + neem cake @ 150 g/pot + tuber treatment with T. harzianum + foliar spray with bio-formulation of T. harzianum), with values of 760.03 g and 516.60 g, respectively. Kachroo and Razdan (2006) reported that combined application of Azotobacter + Azospirillum with different levels of N fertilizer significantly increased the grain yield of wheat. Bhattari and Hess (1993) found an increased yield response in the cultivation of spring wheat (Triticum aestivum L.) with Azospirillum spp. of Nepalese origin. Yadav et al. (2000) also found that Azotobacter increases yield and nitrogen economy in wheat under field conditions.
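As a compact illustration of the grading and yield computation described above, the following Python sketch applies the stated weight classes (large > 50 g, medium 25-49.5 g, small < 25 g) to hypothetical tuber weights; it is not the authors' code, and the numbers below are invented for illustration only.

```python
# Minimal sketch of tuber grading and per-treatment yield. Tuber
# weights (grams) are hypothetical; thresholds follow the paper.
def grade(weight_g: float) -> str:
    if weight_g > 50:
        return "large"
    if weight_g >= 25:
        return "medium"
    return "small"

harvest = {  # treatment -> list of tuber weights per pot (hypothetical)
    "T7": [82.1, 64.3, 55.0, 51.2, 60.7, 38.4, 30.1, 22.5],
    "T9": [34.2, 28.9, 21.0, 18.7, 15.3],
}

for trt, weights in harvest.items():
    counts = {"large": 0, "medium": 0, "small": 0}
    for w in weights:
        counts[grade(w)] += 1
    # Yield is the total tuber weight per treatment, as in the paper.
    print(trt, counts, f"yield = {sum(weights):.2f} g")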
Characterizing Pulse Attenuation of Intra-Cloud and Cloud-to-Ground Lightning with E-Field Signals Measured at Multiple Stations

In this paper, we analyze the waveform data of nearly 200,000 intra-cloud (IC) and cloud-to-ground (CG) lightning discharges detected by the Jianghuai Area Sferic Array on 26-29 August 2019 to investigate the propagation features of lightning electromagnetic fields. Through the analysis of the variation in the electric field (E-field) signal of lightning during actual propagation, it was found that (1) the attenuation of the lightning E-field signal with distance can be fairly well described by the power-law relationship E = a·r^(−b), with attenuation index b = 1.02 (for IC) and b = 1.13 (for CG); (2) for the same propagation path, IC pulses experience less attenuation than CG pulses; and (3) comparison with simulations shows that the attenuation of the lightning E-field pulse is affected by the conductivity of the ground surface, and from the attenuation factor of the E-field strength it can be inferred that the conductivity in the Jianghuai area ranges between 0.005 S/m and 0.01 S/m, in good agreement with the measured conductivity in this area. Our results suggest that lightning radiation could provide a feasible means for remotely sensing the ground conductivity.

Introduction

The electromagnetic (EM) wave excited by lightning is subject to the influence of underlying terrain features and surface conductivity during its propagation, and the high-frequency components rapidly attenuate as the propagation distance increases, causing a weakening of the lightning electromagnetic fields and an increase in waveform rise time [1-8]. Understanding the propagation attenuation of the EM pulses generated by lightning discharges over a ground surface of finite electrical conductivity is not only important for remote sensing of the parameters of the lightning discharge current, but also has great application value for evaluating the coupling mechanism and destructive effects between lightning electromagnetic pulses (LEMPs) and the electronic and electrical equipment of various industrial facilities. At present, the propagation attenuation of low-frequency/very-low-frequency (LF/VLF) LEMPs is mainly examined via two methods. The first is to investigate the propagation of LF/VLF EM waves generated by lightning along a ground surface of finite conductivity through theoretical simulation or numerical modeling [5,9-11]. In these simulation studies, the ground surface is usually presumed to be a plane with uniform electrical conductivity, or to be composed of segments with varying conductivity. In recent years, by adopting the finite-difference time-domain (FDTD) method, it has become possible to investigate the influence of more complicated ground surface features, such as irregular undulating terrain (e.g., mountains) and ground stratification, on the propagation of lightning EM waves. These model assumptions unavoidably deviate somewhat from the actual propagation of EM waves. To avoid these issues, some researchers have examined propagation attenuation by measuring the variation in the peak value of EM waves from the same lightning discharge at different propagation distances. Uman et al.
1976 analyzed observation data of artificially triggered lightning in Florida; it was found that the peak value of discharge pulses for CG strokes typically attenuates by 10% while propagating more than 50 km over Florida soil, and by 20% while propagating over 200 km [12]. Orville 1991 and Idone et al. 1993 analyzed the measurements of calibrated sensors at different stations for the stroke peak value of rocket-triggered lightning, and fitted the relationship for the variation in the EM field peak value of CG strokes with propagation distance [13,14]. Orville 1991 examined the measurements from four lightning observation stations, at distances between 117.9 and 259.1 km, for seven rocket-triggered lightning flashes at the Kennedy Space Center (KSC) of the United States National Aeronautics and Space Administration (NASA); the fit between the peak value of the lightning EMP pulses and the propagation distance exhibited the power-law relationship E = a·r^(−b), where b varies between −0.96 and −1.20, with an average of −1.13 [13]. Similarly, Idone et al. 1993 examined the observations of 12 rocket-triggered lightning flashes at six NASA-KSC stations, at distances between 117.9 and 427 km, and determined that the fitting coefficient b ranges between −0.95 and −1.34, with an average of −1.09 [14]. De Mesquita et al. (2012) analyzed seven lightning flashes measured at the observation tower at Morro do Cachimbo station in Brazil [15]. To evaluate the attenuation of the lightning EM signal peak values, they used the EM field peak value and the distance from the tower to the sensors to determine a power regression curve (of the form E = a·r^(−b)), thereby fitting the data. The average fitting coefficient of the seven lightning events was −1.52, in contrast with the measurement of −1.13 by Orville 1991 in Florida [13]; that is, the attenuation of the lightning E-field with distance was faster in Brazil. Kolmašová et al. 2016 used the MÉTÉORAGE lightning location network of France [16]. By analyzing 15 lightning events observed on 11 October 2012, they obtained the regression curve E = (A/D)·10^(−αD), where α is in the range of 1.74-2.3, the coefficient A (in V/m) represents the electric field amplitude at a distance of 100 km, and D = d/100 km, where d is the lightning distance. This stronger attenuation is likely caused by the greater distances involved, as the lightning observation distances of the three stations were between 300 and 600 km. There is relatively large fluctuation in the index b fitted from different lightning observation datasets; for example, the maximum and minimum b fitted by Idone et al. (1993) differ by 0.39 [14]. On the one hand, this could arise from fitting error when the fitting sample is relatively small; on the other hand, it also likely indicates that the actual gain of the sensors at the measurement stations could fluctuate randomly relative to the calibrated gain, causing a relatively large deviation in the measured CG peak values. Apparently, the fitting of lightning E-field attenuation with propagation distance needs to be studied on a relatively large sample of observation data, in order to minimize the fluctuation of the fitting results caused by small sample sizes.
For a long time, studies of the propagation effects of lightning electromagnetic pulses have mainly focused on the return-stroke process of CG flashes near the ground surface. In fact, IC discharges can also produce bipolar pulses with strength comparable to CG strokes [17-20]. In early CG location systems, these IC discharge pulses were removed by the traditional identification algorithms, whereas in modern total lightning location systems they can be registered as IC discharge events and also located. For these lightning pulses occurring inside clouds and the CG stroke pulses that mainly occur near the ground surface, the propagation attenuation characteristics can be obviously different due to differences in the height of the lightning source and in the propagation path. Cooray et al. 2000 [21] applied the propagation attenuation function of air dipoles to study the attenuation of propagating IC pulses, and their results indicate that, in comparison with CG stroke pulses that mainly propagate along the ground surface, IC pulses undergo less propagation attenuation. This conclusion still merits validation with experimental data. In this paper, we use a set of observation systems that record the impulsive waveforms of lightning EM fields synchronously at multiple stations. For the large volume of E-field waveform data recorded for CG strokes and IC bipolar pulses, we propose a method that fits the attenuation of the lightning pulse peak with propagation distance. This method does not require field calibration of the antenna gain at each station. Meanwhile, it can fit a relatively large data sample over a wide range of distances. In particular, it can fit not only the pulses of CG strokes but also measure the propagation attenuation of bipolar IC pulses. In addition, a simulation was carried out on the influence of the ground conductivity on the propagation of IC/CG electromagnetic waves, and a comparison was made with the measurements.

Methodology

With respect to the actual observations and theoretical derivation, the sketch of the observation site and lightning event is shown in Figure 1.
Figure 1. Schematic diagram for the derivation of the E-field and B-field at point P on the ground surface. It is presumed that the ground plane is an ideal conductor, and r is the horizontal distance.

As shown in Figure 1, for a lightning channel reaching the ground, taken as a perfect conductor (i.e., with infinite conductivity), when the current pulse propagates along the channel, the E-field at point P is as follows [12]:

E_z(r, t) = (1 / (2π ε0)) · [ ∫0^H ((2z'^2 − r^2) / R^5(z')) (∫ i(z', τ) dτ) dz' + ∫0^H ((2z'^2 − r^2) / (c R^4(z'))) i(z', t') dz' − ∫0^H (r^2 / (c^2 R^3(z'))) (∂i(z', t') / ∂t) dz' ]    (1)

where R(z') = sqrt(z'^2 + r^2), t' = t − R(z')/c, and c is the speed of light in air. The three terms on the right-hand side are the electrostatic, induction, and radiation fields, respectively. When the observation distance is relatively far, the E-field is mainly the radiation component, the last term in Equation (1). That is, if r >> H and R ≈ r (R > 100 km), we have

E ≈ −(1 / (2π ε0 c^2 r)) ∫0^H (∂i(z', t') / ∂t) dz'    (2)

so that the peak field can be written as

E_p = a / r    (3)

where, for a specific discharge, the coefficient a is related only to the propagation of the lightning current along the channel and does not depend on the propagation distance. Thus, Equation (3) indicates that when the lightning current propagates in the vertical channel, without taking into account the influence of propagation attenuation, the E-field at different distances is inversely proportional to the propagation distance. When the electromagnetic wave of lightning propagates on a ground surface with finite conductivity, the variation in the peak E-field with distance can be described by

E_pa(r) = f(r) · a / r = a · f̃(r),   f̃(r) = f(r) / r    (4)

where f(r) is the influence of propagation attenuation on the amplitude of the E-field waveform, and E_pa is the attenuated E-field peak, which is a function of the propagation distance. When we use an antenna to measure the E-field strength, we also need to consider the influence of the antenna's site gain:

E_pag(r) = G · a · f̃(r)    (5)

where G is the site gain coefficient of the antenna, and E_pag is the attenuated E-field peak including the field-strength gain. Using a sensor with a calibrated site gain to measure the impulsive signal value E_pag of the same lightning discharge at different distances r, one can obtain the variation of f̃(r) with distance through the E_pag-r curve [13,14]. In this paper, we introduce a method that can fit f̃(r) without needing to calibrate the antenna gain in the field. According to Equation (5), if sensors placed at locations i and j measure the identical discharge event, the peak values of the measured signals are

E_pag,i = G_i · a · f̃(r_i)    (6)

E_pag,j = G_j · a · f̃(r_j)    (7)

We can further derive

E_pag,i / E_pag,j = k · f̃(r_i) / f̃(r_j)    (8)

where k = G_i / G_j is the ratio between the site gains of the two sensors, and is a constant independent of distance.
If we select lightning events at the same distance from station j in the actual measurements, namely f̃(r_j) = const, then

E_pag,i / E_pag,j ∝ f̃(r_i)    (9)

Equation (9) indicates that, by fixing the distance of one station relative to the lightning stroke, we can use the ratio between the lightning pulse amplitudes measured synchronously at two stations to determine the variation of the propagation attenuation function with distance, up to the constant coefficient k (the ratio between the site gains of the two sensors, which is independent of distance). Figure 2 shows the geometric layout necessary to achieve this measurement. When a lightning event is located on a circular ring at a constant distance from the JS station, its distance from the HF station varies. We can thus obtain the variation of the ratio between the pulse peak at the JS station and the pulse peak at the HF station as a function of the distance from the HF station, and further derive the variation of f̃(r) with distance (ignoring the constant coefficient). In the actual measurement, in order to ensure sufficient samples, we select the lightning discharge events within a circular ring of small but finite thickness, the thickness being small compared with the ring radius. In this way, all of the discharges within the ring can be considered to be at an equal distance from the JS station. Meanwhile, the radius of the ring cannot be too small, in order to ensure that the lightning events within the ring span a sufficiently large range of distances from the HF station.
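The ring method can be summarized in a few lines of code. The following numpy sketch uses synthetic events under an assumed power-law attenuation; it is illustrative only (not the authors' processing code), but it shows how fixing a thin ring around one station lets the amplitude ratio trace out f̃(r) for the other station.

```python
# Illustrative sketch of the ring method in Equation (9): fix the
# distance to the JS station within a thin ring, then study how the
# amplitude ratio varies with the distance to the HF station.
# All events below are synthetic stand-ins for located pulses.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
r_js = rng.uniform(20, 400, n)        # event-to-JS distance, km
r_hf = rng.uniform(20, 400, n)        # event-to-HF distance, km
a = rng.lognormal(6.0, 0.8, n)        # per-event source strength
b_true = 1.1                          # assumed attenuation index
e_js = 1.2 * a * r_js**(-b_true)      # peaks with distinct site gains
e_hf = 1.0 * a * r_hf**(-b_true)

ring = (r_js >= 200) & (r_js <= 210)  # thin ring: f~(r_js) ~ const
ratio = e_hf[ring] / e_js[ring]       # proportional to f~(r_hf)
slope, _ = np.polyfit(np.log10(r_hf[ring]), np.log10(ratio), 1)
print(f"fitted attenuation index b = {-slope:.2f}")  # recovers ~1.1
```

In practice, the located events replace the synthetic arrays, and the constant k drops out whenever only the shape of f̃(r) is of interest.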
Observations and Data

In this paper, the waveform data of lightning pulses recorded by a regional detection network, the Jianghuai Area Sferic Array (JASA), are used to characterize the attenuation of the lightning electromagnetic signal with distance [17-20]. The JASA is a regional network that achieves multi-station recording of the electromagnetic pulses generated by local thunderstorm activity, with the stations synchronized by a high-precision time-synchronization technique. The system has been deployed in the Jianghuai area of China since 2011, and its first phase included six detection stations. At present, the number of stations in this system is continuously expanding, as shown in Figure 3. Each station is equipped with a reception antenna for VLF/LF-band (bandwidth 0.8-400 kHz) lightning signals, along with a seamless acquisition system for lightning waveforms with high-precision time synchronization (40 ns). With the waveforms of lightning pulses recorded synchronously at multiple stations, we can manually determine the type of discharge and calculate the occurrence location with a time-of-arrival location algorithm [22]. The detection efficiency of the JASA is better than 95%, based on composite pattern recognition and machine recognition. The positioning accuracy of the JASA is better than 2 km in the Jianghuai region, as evaluated with methods such as Monte Carlo simulation. For more detailed information, refer to Liu et al. 2021 [20].
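As an illustration of the kind of TOA solution used here (a sketch, not the JASA implementation itself), the following code recovers a source position and emission time from synthetic arrival times at four hypothetical stations by nonlinear least squares.

```python
# Illustrative time-of-arrival (TOA) location sketch: solve for the
# source position (x, y) and emission time t0 from pulse arrival
# times at four stations. Coordinates and times are hypothetical.
import numpy as np
from scipy.optimize import least_squares

C = 299.792458  # km/ms, speed of light
stations = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0], [150.0, 140.0]])

true_src, true_t0 = np.array([80.0, 40.0]), 0.10
t_arr = true_t0 + np.hypot(*(stations - true_src).T) / C  # synthetic times

def residuals(p):
    x, y, t0 = p
    d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    return t_arr - (t0 + d / C)

sol = least_squares(residuals, x0=[50.0, 50.0, 0.0])
print("located source (km):", sol.x[:2], " emission time (ms):", sol.x[2])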
Figure 4 shows the waveform of a CG stroke recorded at multiple stations. The location of this CG stroke was determined from the times of arrival of the pulses recorded at four stations, HF, FN, HB, and JS, via the time-of-arrival (TOA) method. Its distance from each station is indicated in the figure. This stroke was approximately 148.5 km from the JS station, and the measured peak value of the CG pulse was 951 digital units (DU); its distance from the HF station was 283.4 km, where its pulse amplitude was 581 DU. According to Equation (9), the ratio of the pulse amplitudes was 1.64. Figure 5 shows the bipolar waveform recorded by the system for an impulsive event generated by an IC discharge. Its distance from the HF station was 290.6 km, where the pulse peak was 224 DU; its distance from the JS station was 157.0 km, where the pulse peak was 421 DU. The ratio of the pulse peaks at the two stations was therefore 1.88. These two lightning events were both about 290 km from the HF station and about 150 km from the JS station. With the propagation path and distance almost the same, the ratio of the IC pulse amplitudes was about 1.15 times the ratio of the CG pulse amplitudes, which indicates that the IC pulses might undergo less peak attenuation under the same propagation path conditions. Figure 6 shows the locations, obtained with the time-of-arrival technique, of the lightning events recorded by the system on 26-29 August 2019. The distances of these lightning events from the HF and JS stations vary over a relatively large range, which allows us to examine the characteristics of propagation attenuation. As addressed in the following discussion, for the lightning discharge events occurring at particular locations, we can identify the IC pulses and CG stroke pulses from the signal waveforms. The ratio of the pulse peaks observed at the HF and JS stations was then calculated as a function of the distance from the station.
Ratio of Field Gain

According to Equation (8), the ratio between the field antenna gains at two stations is a constant independent of distance. In order to validate this hypothesis, as shown in Figure 7, we selected lightning events at the same distance from the HF and FN stations and calculated the variation of the ratio between the pulse peaks with distance. The set of events was located on the normal line to the HF-FN station link in Figure 7. To increase the amount of data, lightning events within 5 km on either side of the normal line were taken into account. Because each selected discharge event was at a comparable distance from the two stations, we could ignore the influence of propagation attenuation, so the ratio is expected to be independent of propagation distance. In the practical calculation, in order to enlarge the sample size as much as possible, we chose lightning events within this 5 km range for the HF station paired with each of the FN, HB, and JS stations. We can see from Figure 8a that the relative ratio of field strength at the HF and HB stations fluctuates in the range of 1.006-1.91, with an average of 1.32, and that the ratio of the amplitudes between the two stations is not affected by the distance. This independence of the field-strength ratio from the propagation distance indicates that our assumption in the formula is appropriate. The average ratio of relative field strength at the HF and FN stations is 0.98, and at the HF and JS stations it is 1.18. However, we can also see from Figure 8a that this amplitude ratio exhibits relatively large fluctuations. We speculate that this could be related to random fluctuation of the site gain; specifically, when the electromagnetic wave arrives from different directions, there can be a considerable difference in the site gain.
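A minimal sketch of this site-gain check, again with synthetic events (not the authors' code): selecting events roughly equidistant from two stations cancels the propagation attenuation, so the amplitude ratio estimates the gain ratio k directly.

```python
# Illustrative check of Equation (8): for events roughly equidistant
# from two stations, the attenuation factors cancel and the amplitude
# ratio estimates the site-gain ratio k, independent of distance.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
r_hf = rng.uniform(50, 400, n)
r_js = rng.uniform(50, 400, n)
a = rng.lognormal(6.0, 0.8, n)
g_hf, g_js = 1.3, 1.0                       # assumed site gains
e_hf = g_hf * a * r_hf**(-1.1)
e_js = g_js * a * r_js**(-1.1)

near_bisector = np.abs(r_hf - r_js) <= 5.0  # equidistant within 5 km
k = e_hf[near_bisector] / e_js[near_bisector]
print(f"estimated gain ratio: mean {k.mean():.2f}")  # ~1.3, independent of r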
The distance from these discharging events to the HF station varies from up to 400 km; therefore, we can examine the variation in pulse peak value at the JS st (relative to the amplitude at the HF station) with propagation distance. We can see the figure that for both the IC and CG pulses, the pulse amplitude attenuates rapidly the propagation distance. Idone et al. 1993 pointed out that the peak value of CG st attenuates exponentially with distance (r); therefore, we can adopt the data fitting logarithmic coordinates [14]: Take the = We fit the logarithmic value of the data by applying the logarithm on both sid Equation (10). We then we get the following equation: Variation of Lightning Pulse Peak Value with Distance According to Equation (9), in order to explore the variation in lightning pulse peak value with distance, here we manually sort the lightning-discharging events at a range of 205 km (200-210 km) from the JS station, including a total of 1053 IC discharges and 726 CG strokes. The distance from these discharging events to the HF station varies from 0 km up to 400 km; therefore, we can examine the variation in pulse peak value at the JS station (relative to the amplitude at the HF station) with propagation distance. We can see from the figure that for both the IC and CG pulses, the pulse amplitude attenuates rapidly with the propagation distance. Idone et al. 1993 pointed out that the peak value of CG strokes attenuates exponentially with distance (r); therefore, we can adopt the data fitting in the logarithmic coordinates [14]: Take the We fit the logarithmic value of the data by applying the logarithm on both sides of Equation (10). We then we get the following equation: For the IC events and CG events at a range of 205 km (200-210 km) from the JS station, b = 1.0422 and b = 1.1385, respectively, as shown in Figure 9. Remote Sens. 2022, 14, x 11 of Figure 9. Logarithmic values of peak electric field value at the HF station vs. logarithmic values distance for lightning events located at 205 km from the JS station. Figure 10 shows the variation in pulse amplitude with the distance from the JS stati for 453 IC pulses and 334 CG pulses. Similarly, for the CG and IC events occurring a distance of 150 km (147.5-152.5 km) from the JS station, we obtained the variation in field pulse peak value (relative to the HF station) with distance, and the results of ind fitting were b = 1.0236 (for IC) and b = 1.116 (for CG), respectively. The results are show in Figure 10. Figure 11 shows the variation in the ratio between the E-field peak values of CG a IC events occurring at a distance of 250 km (245-255 km) from the JS station with distan where we obtained b = 1.0171 (for IC) and b = 1.1006 (for CG), and there were 192 IC puls and 177 CG pulses. The results are listed in Table 1. Figure 10 shows the variation in pulse amplitude with the distance from the JS station for 453 IC pulses and 334 CG pulses. Similarly, for the CG and IC events occurring at a distance of 150 km (147.5-152.5 km) from the JS station, we obtained the variation in E-field pulse peak value (relative to the HF station) with distance, and the results of index fitting were b = 1.0236 (for IC) and b = 1.116 (for CG), respectively. The results are shown in Figure 10. . Logarithmic values of peak electric field value at the HF station vs. logarithmic value distance for lightning events located at 205 km from the JS station. 
Figure 11 shows the variation in the ratio between the E-field peak values of CG and IC events occurring at a distance of 250 km (245-255 km) from the JS station with distance, where we obtained b = 1.0171 (for IC) and b = 1.1006 (for CG), and there were 192 IC pulses and 177 CG pulses. The results are listed in Table 1.

Table 1. Variation in the E-field peak values of CG and IC events with distance from the HF station.
No. | Distance Ring (km) | Width of Distance Ring (km) | Attenuation Index of IC Events, b | Attenuation Index of CG Events, b
1 | 147.5-152.5 | 5 | 1.0236 | 1.116
2 | 200-210 | 10 | 1.0422 | 1.1385
3 | 245-255 | 10 | 1.0171 | 1.1006

Figure 11. Observation results of EM pulse peak value at the HF station for lightning events located 250 km from the JS station.

According to Equation (3), when b = −1, it indicates that there is no peak value attenuation of the lightning pulse during the propagation process. Our result of b > 1 indicates that, for both IC and CG pulses, the propagation on the ground surface with finite conductivity caused the attenuation of the peak value. Under the same propagation conditions, the propagation attenuation index b of IC pulses was smaller than the attenuation index of CG pulses, which indicates that the IC pulses suffer less attenuation than CG pulses. As a matter of fact, Cooray 2012 used the propagation attenuation function of a dipole above the planar ground surface to simulate the propagation attenuation situation of IC discharges over the ground surface with finite conductivity, and revealed that the IC pulse mainly propagates in the air [10]. In comparison with the CG pulses that mainly propagate along the ground surface, IC pulses undergo less peak value attenuation. The results of field observations in this paper confirm this result. We can also see from Table 1 that, even though the waveform data observed at the HF and JS stations are used, when we select different concentric rings, the propagation attenuation coefficient obtained from fitting is slightly different. Moreover, there is relatively large fluctuation in the fitting value of the amplitude ratios calculated for individual lightning events. This fluctuation is very likely related to the fluctuation present in the field gain of EM waves coming from various incident directions.
As shown in the figure, even though we do not consider the influence of propagation attenuation, the ratio of the E-field gain coefficients in the field is not a fixed constant; it also fluctuates, which leads to fluctuation in the calculated ratio of E-field pulse peak values. Even though in this paper, according to the fitting results of numerous lightning events occurring in a broad range, we can eliminate the influence of this fluctuation in the field gain to some extent, our results indicate that the attenuation index obtained by the fitting of CG events is 1.12-1.14. This result is very close to the attenuation index of 1.13 obtained by Idone et al. 1993 according to the CG strokes of artificial rocket-triggered lightning [14]. It should be noted that the attenuation index obtained by Idone et al. 1993 for individual rocket-triggered lightning varied in the range of −0.95 to −1.34, which also confirms that the field gain coefficient of the antenna is not a constant, because of its random fluctuation. This probably explains why Idone et al. 1993 could not improve the correlation coefficient when they fitted the relationship between EM wave peak value and return stroke current using E-field strengths corrected for propagation attenuation.

Numerical Simulation
We further used COMSOL to simulate and characterize the propagation attenuation of IC and CG pulses over the ground surface with finite conductivity. COMSOL Multiphysics is based on the finite element method and simulates real physical phenomena by solving single-field or multifield partial differential equations. We took the vertical dipole model as the source of the lightning pulses. For the CG pulse, it was presumed that the height of occurrence was 200 m above the ground surface and the length of the simulated discharge was 199 m, while the IC pulse was assumed to be at a height of 8 km above the ground surface and the length of the simulated discharge was assumed to be 199 m. For the CG pulse, it was presumed that the frequency was mainly concentrated at 40 kHz, and the frequency of IC pulses was presumed to be concentrated at 120 kHz. According to the features of the actual observation dataset, the simulation studied an observation range of 300 km. The underlying surface was modeled with uniform conductivity, and the conductivity was set to be 0.01 S/m, 0.001 S/m, 0.0005 S/m, and 0.0001 S/m. Because the simulation range was within 300 km, this simulation did not consider the reflection of the ionosphere. As for the IC pulses, when the conductivity is 10⁻⁴ S/m, the IC pulses will also be subject to relatively considerable attenuation, and the attenuation index is b = 1.285; for CG pulses, it is b = 1.454. When the ground conductivity is relatively high (e.g., 0.01 S/m), both IC and CG pulses will experience relatively small attenuation, and the attenuation index is b = 1.0067 (for IC pulses) and b = 1.0152 (for CG pulses). From the results of actual observations and numerical simulations, it can be seen that the ground conductivity has a considerable impact on the attenuation of the lightning E-field. This influence provides a way to use the attenuation features of E-field strength to retrieve the ground conductivity.
Johler 1961 [23] discussed how to use the LF/VLF electromagnetic pulse of lightning to determine the ground conductivity; by comparing the actual observed waveform and the predicted waveform under the influence of ground conductivity, he determined the conductivity between Leoti (Kansas) and Brighton (Colorado). Schueler and Thomson 2006 [24] used the ground wave signal of lightning received by the lightning location system of Florida KSC in 1992; by analyzing the variation in dE/dt with ground conductivity, they estimated the effective ground conductivity, and the geometric mean ground conductivity was 0.0059 S/m. Aoki et al. 2015 [25] observed a decrease in Ez peak and an increase in Ez rise time due to propagation over 200 km of Florida soil, which were reasonably well reproduced by the FDTD simulation with ground conductivity of 0.001 S/m. We can see from Figure 12 that when the ground conductivity is 5 × 10⁻³ S/m, the attenuation index is b = 1.0675 for IC and b = 1.1699 for CG. The measurement results of this paper were in the range of 5 × 10⁻³ to 10 × 10⁻³ S/m. Although this simulation only considers the uniform conductivity of the ground surface, and it does not consider the influence of roughness and earth curvature, this result is in good agreement with the electrical conductivity of 5 × 10⁻³ to 10 × 10⁻³ S/m in the plains region of the Huaihe River basin as published by Shaanxi Observatory, according to the empirical method used to predict the propagation delay of the ground wave for the timing signal [26].
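Since the simulations tabulate b as a function of ground conductivity, a measured attenuation index can be inverted against that curve. The Python sketch below uses only the (conductivity, b) pairs quoted above; interpolating in log10 of conductivity is our own assumption, not a procedure described in the paper.

```python
import numpy as np

# (conductivity in S/m, attenuation index b) pairs quoted from the simulations
SIGMA = np.array([1e-4, 5e-3, 1e-2])
B_IC = np.array([1.285, 1.0675, 1.0067])
B_CG = np.array([1.454, 1.1699, 1.0152])

def retrieve_sigma(b_measured, b_sim):
    """Invert the simulated b(sigma) curve. b decreases as sigma grows,
    so both arrays are flipped to satisfy np.interp's ascending order."""
    log_sigma = np.interp(b_measured, b_sim[::-1], np.log10(SIGMA)[::-1])
    return 10.0 ** log_sigma

# The observed CG index b = 1.13 maps to roughly 6e-3 S/m, inside the
# 5e-3 to 1e-2 S/m range reported for the Huaihe River plains.
print(retrieve_sigma(1.13, B_CG))
```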
Conclusions
Based on the lightning pulse waveforms recorded by the modern lightning detection network, in this paper, we propose a method that applies the concentric ring technique to measure the propagation attenuation index of the lightning pulse waveform. This method does not need to precisely calibrate the field gain of sensors. Instead, it fixes the distance of lightning pulses with respect to one station, and uses the variation in the distance from another station to characterize the attenuation of a lightning pulse waveform with distance. For the large sample data recorded by the system, we found that for both IC and CG pulses, their peak values exhibit the relatively good power-law feature of E = a·r^(−b) with distance, while the attenuation index was b = 1.02 for IC discharges and b = 1.13 for CG strokes. On this basis, it can be concluded that under the condition of the same propagation path, the IC pulses experience less attenuation than CG pulses. Our results also indicate that even when the condition of propagation attenuation is identical, there is also a relatively large fluctuation in the ratio of E-field gain. This indicates that even if we conduct the calibration of gain for the antenna sensor in the field, the gain coefficient of actual individual measurements could exhibit a relatively large fluctuation with respect to the calibrated gain coefficient.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data that support this study are available from the author upon reasonable request.
2022-04-03T15:51:11.366Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "c562c81d849c04b0814cb3c58142377a4b303fa2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/14/7/1672/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "61084038db1b844ead6eb7487e793802ccbb80c8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
5136519
pes2o/s2orc
v3-fos-license
Exploring Regional Variation in Roost Selection by Bats: Evidence from a Meta-Analysis

Background and Aims
Tree diameter, tree height and canopy closure have been described by previous meta-analyses as being important characteristics in roost selection by cavity-roosting bats. However, size and direction of effects for these characteristics varied greatly among studies, also referred to as heterogeneity. Potential sources of heterogeneity have not been investigated in previous meta-analyses, which are explored by correlating additional covariates (moderator variables). We tested whether effect sizes from 34 studies were consistent enough to reject the null hypothesis that trees selected by bats did not significantly differ in their characteristics from randomly selected trees. We also examined whether heterogeneity in tree diameter effect sizes was correlated to moderator variables such as sex, bat species, habitat type, elevation and mean summer temperature.

Methods
We used Hedges' g standardized mean difference as the effect size for the most common characteristics that were encountered in the literature. We estimated heterogeneity indices, potential publication bias, and spatial autocorrelation of our meta-data. We relied upon meta-regression and multi-model inference approaches to evaluate the effects of moderator variables on heterogeneity in tree diameter effect sizes.

Results
Tree diameter, tree height, snag density, elevation, and canopy closure were significant characteristics of roost selection by cavity-roosting bats. Size and direction of effects varied greatly among studies with respect to distance to water, tree density, slope, and bark remaining on trunks. Inclusion of mean summer temperature and sex in meta-regressions further explained heterogeneity in tree diameter effect sizes.

Conclusions
Regional differences in roost selection for tree diameter were related to mean summer temperature. Large diameter trees play a central role in roost selection by bats, especially in colder regions, where they are likely to provide a warm and stable microclimate for reproductive females. Records of summer temperature fluctuations inside and outside tree cavities that are used by bats should be included in future research.

Roosts selected by bats
Descriptions of roosts that are used by insectivorous bats in North American forests were mostly anecdotal prior to the mid-1990s. Technical developments in telemetry have been instrumental for our current understanding of habitat-species interactions with small mammals, such as bats [1]. We now know that cavity- and bark-roosting bats rely upon living and standing dead trees (i.e., snags) in intermediate stages of decay [2,3] for roosting [4,5]. They have been reported roosting under exfoliating bark, inside trunk crevices, and within the cavities of both living and dead trees during the summer [2,5-7]. The occurrence of several snags in a given stand likely indicates available roosts to bats [8,9]. Bats are faithful to their roosting sites [10-12] and switch regularly from primary roosts (which are used more frequently) to alternate roosts [13,14]. They therefore rely upon networks of clustered roost trees [15] that share similar characteristics, such as a large diameter and an intermediate stage of decay [2,16], perhaps to minimize predation risk or to reduce commuting costs.
Furthermore, snags are an ephemeral resource [17-19], which may explain, in addition to the aforementioned reasons, why bats favour a high density of snags near roosts [2,6,16,20,21]. Like males and non-reproductive females, reproductive females may also use torpor (i.e., state of reduced body temperature and metabolic rate) to reduce energy expenditures [49], but this comes at the cost of reduced milk production [50], and delayed fetal development and juvenile growth [5,47]. To counteract these costs, lactating females may enter torpor for shorter bouts [48,51] or adopt other behavioural strategies, such as social thermoregulation [52]. Sticking together to stay warm requires large tree cavities [53,54], which underscores a central role that tree diameter plays in roost selection. Like many birds and other small mammals [55], bats probably use passive rewarming to reduce energy expenditure during arousal, which requires an external heat source in the afternoon [43]. Several studies [16,20,56-58] proposed that canopy emergents and tall trees that are located within canopy openings or within stands of low tree density are more accessible to bats and also benefit from greater heat transfer by solar radiation [59]. Slope, slope aspect and elevation have also been associated with longer periods of external heating provided by solar radiation [7,60,61]. As has been suggested by Lacki, Cox [7], bats might favour trees that are located at lower elevations to benefit from warmer microclimates relative to those located at higher elevations. However, without temperature measurements in the field, it is difficult to establish a causal link between microclimate and roost selection by bats. For example, preference in elevation could also be related to variation in tree species composition [60,62,63]. Lacki, Baker [8] suggested that stands at lower elevations provide better roosting characteristics to bats (i.e., taller canopies, higher snag densities) than higher elevation stands.

Previous narrative and quantitative reviews
The increasing number of published radio-telemetry studies has led to three systematic reviews [5,47,64] and three meta-analyses [1,7,65] that summarize habitat use by bats in both unmanaged and managed forests. In a previous systematic review, Miller, Arnett [64] suggested that most studies had small sample sizes and suffered from pseudo-replication, but the authors did not account for these caveats quantitatively. Unlike a systematic synthesis or a narrative review, a meta-analysis provides a statistical synthesis of literature by pooling effect sizes from several studies. Effect size reflects the strength of the difference between experimental and control group means [66]. Standardized effect sizes are commonly used in meta-analyses to compare results among studies independent of the scale of measurement [66]. By performing a meta-analysis, Lacki and Baker [65], and Kalcounis-Rueppell, Psyllakis [1] confirmed that tree diameter, tree height and canopy closure were important characteristics explaining roost selection by bats, despite notable differences in size and direction of effects (i.e., negative or positive effects) among studies. In a more recent meta-analysis of two bat species, Lacki, Cox [7] found that roosting requirements of Indiana bat (Myotis sodalis) and northern long-eared bat (M. septentrionalis) overlapped, except for tree diameter and variation in the type of roosts that were used.
The authors concluded that the northern long-eared bat showed greater plasticity than the Indiana bat in the choice of roosting sites. None of these meta-analyses (i.e., [1,7,65]) have tried to explain differences in effect sizes and in the direction of effects among studies, which is referred to as heterogeneity [66]. Heterogeneity is likely to be encountered in meta-analyses, since individual studies are conducted under various field conditions, use different methodologies and attempt to answer different questions [66,67]. Meta-regression approaches are increasingly employed in meta-analyses to explore whether the heterogeneity may be correlated with additional covariates, which are referred to as moderator variables [67,68]. Moderator variables may be included to test whether heterogeneity is associated with differences in study methods [66], or in the present case, with differences in roost selection that could be related to sex [36], bat species [7] or large-scale environmental factors [48].

Aims and hypotheses
The growing awareness of global environmental issues has encouraged researchers to focus upon large-scale patterns in ecology, which are often extrapolated from small-scale studies with a limited sample size [69]. Detecting regional variation in roost selection using a meta-analysis may reveal large-scale patterns that cannot be explored locally. For example, based on observations from Britzke, Harvey [70] and Britzke, Hicks [71], Lacki, Cox [7] suggested that Indiana bats avoided roosting in upland habitats in regions near the northern end of the species distribution, with cooler climate and shorter growing season, with the converse occurring in southern populations (sensu Lacki, Cox [7]). In the same vein, Boland, Hayes [72] suggested that in the northern range of Keen's myotis (M. keenii), reproductive females should select for trees with larger diameters, which likely provide warmer temperatures than smaller trees, due to relatively cold and short summers in Alaska compared to southern regions. Such large-scale hypotheses typically may be tested using meta-analysis coupled with a meta-regression approach [67]. A decade of research has passed since the last meta-analysis on North American bats was conducted [1] and the number of studies on roost selection by bats has doubled (S1 Table). There are now enough studies to investigate for regional differences in roost selection using meta-regression approaches and test large-scale hypotheses based on previous knowledge of bat roosting ecology during the summer. Our first aim was to test whether the results for the most common characteristics in the literature were consistent enough among studies to reject the null hypothesis that trees selected by bats are not significantly different in their characteristics from randomly selected trees. We predicted that the effect sizes would be significantly different from zero and that the direction of effects would be consistent enough among studies (i.e., homogeneous) to reject the null hypothesis (i.e., no significant difference in characteristics from random trees) for each characteristic that we intended to test. After having identified the most consistent characteristics of roost selection by bats (i.e., with the strongest effect size), our second aim was to explain heterogeneity in tree diameter effect sizes by incorporating moderator variables such as habitat type, bat species, mean summer temperature, and elevation into a set of alternative meta-regression models.
According to the microclimate hypothesis (sensu Boyles [46]), we predicted that reproductive females should select larger tree diameters (relative to random trees) in northern regions and at higher altitudes, because of lower mean summer temperatures, compared to southern regions and lower altitudes. We predicted that reproductive females and larger species of bats would require trees with larger diameters, compared to non-reproductive females and males [36,40] or smaller species of bats [6]. We also predicted that larger tree diameters would be found in unmanaged (i.e., national parks) and riparian areas, compared to managed areas (i.e., where logging activity still occurs).

Selection of studies
We searched for published bat-roost selection studies that were available online in Google Scholar and the Web of Science. Those included journal articles, government reports, Ph.D. and M.Sc. theses, book chapters, and symposia. We included most of the studies that were presented in Miller, Arnett [64], Barclay and Kurta [5], Kalcounis-Rueppell, Psyllakis [1], Lacki and Baker [65], and Lacki, Cox [7]. We retained only studies that reported comparisons between random and selected trees (i.e., case/control design). Because of distinct roosting ecologies [4,14], we did not include studies on foliage-roosting bats, but retained those that dealt with bark- and cavity-roosting bats.

Dataset extraction and preparation
Studies that compared different treatments or sites, or differences in roost selection among bat species, and between sexes, had more than one dataset. We regarded each dataset as a sample unit for our meta-analysis (expressed as n unless otherwise stated). We examined 20 candidate characteristics for explaining roosting preferences of cavity-roosting bats, but retained only nine for which we found a minimum of 10 studies (i.e., 19 datasets): tree diameter (cm; S1 Table), tree height (m; S2 Table), snag density (stems/0.1 ha; S3 Table), elevation (m; S4 Table), canopy closure (%; S5 Table), distance to water (m; S6 Table), tree density (stems/0.1 ha; S7 Table), slope (%; S8 Table), and bark remaining on trunks (%; S9 Table). We extracted means, standard errors, standard deviations, and sample sizes for each dataset. We converted standard errors to standard deviations by multiplying the standard error of the mean by the square-root of the sample size, i.e., the number of trees. We converted all measurements of size, density and distance to the same units. For each of the nine characteristics, we calculated Hedges' g Standardized Mean Difference (SMD) as an estimate of the effect size between trees that had been selected by bats (i.e., experimental group) and random trees (i.e., control group), as suggested by Borenstein, Hedges [68]. We excluded studies with an effect size greater than 4 times the mean group standard deviation to meet criteria of effect size normality and variance homogeneity [68]. We computed prediction intervals, fixed-effects and random-effects models (meta package, R Development Core Team 2015 [73]) for comparison purposes, but used only random-effects models in our meta-analysis (S1-S9 Tables). Random-effects models assume that heterogeneity not only depends upon sampling variance but also random population effect sizes [68], which is the case in our meta-analysis involving numerous bat species, together with potential variation between sexes and among habitat types.
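As a concrete illustration of the effect-size computation described above (the study itself used the R meta package), a minimal Python sketch of Hedges' g with its small-sample correction might look as follows. The function name and the example numbers are hypothetical.

```python
import numpy as np

def hedges_g(m_sel, sd_sel, n_sel, m_rand, sd_rand, n_rand):
    """Hedges' g standardized mean difference between trees selected by
    bats and random trees, plus an approximate sampling variance."""
    df = n_sel + n_rand - 2
    s_pooled = np.sqrt(((n_sel - 1) * sd_sel**2 + (n_rand - 1) * sd_rand**2) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample bias correction
    g = j * (m_sel - m_rand) / s_pooled
    var_g = (n_sel + n_rand) / (n_sel * n_rand) + g**2 / (2.0 * (n_sel + n_rand))
    return g, var_g

# Hypothetical dataset: roost DBH 45 +/- 12 cm (n = 30)
# vs. random trees 32 +/- 10 cm (n = 60)
g, var_g = hedges_g(45.0, 12.0, 30, 32.0, 10.0, 60)
```

The per-dataset (g, var_g) pairs are the inputs that both the fixed-effect and random-effects pooling steps consume.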
Publication bias and heterogeneity
Testing for publication bias supposes that there is a tendency for publishing studies with significant findings. If such bias is present, studies should be unbalanced towards positive results with only a few published studies supporting the null hypothesis. Publication bias is considered null when studies are well balanced (e.g., when roughly the same number of studies have reported significant findings versus those supporting the null hypothesis). We used funnel plots (i.e., effect size plotted against its standard error) to assess potential publication bias [74] for each of the nine characteristics. We tested for funnel plot asymmetry using the conventional weighted linear regression method [75], which is provided in the package meta [76]. We used l'Abbé plots to display meta-data visually and to investigate potential patterns of heterogeneity. In l'Abbé plots, the experimental group is plotted against the control group and the resulting regression line and its associated 95% CI is compared visually with the equality line (1:1), for which the mean difference is null [77]. We used the maximum likelihood approach (package meta, R Development Core Team 2015) to estimate heterogeneity (τ²) in the population effect sizes. We further quantified heterogeneity using Higgins' I² index [78] (expressed as a percentage) and used the classification scheme that was given by the authors to interpret the severity of heterogeneity (see Higgins, Thompson [78] for further details).

Moderator variables and meta-regressions
We geo-located study sampling sites by using GPS coordinates or the locations that were mentioned in the reviewed manuscripts. We registered these locations in ArcGIS (version 10.1, Environmental Systems Research Institute, Redlands, CA, USA), around which we drew 1 km-radius buffer zones to compensate for imprecision. We further integrated into ArcGIS raster maps of elevation (digital elevation model) and monthly mean temperature (from June to August) that were provided by WorldClim 1.4 [79]. We averaged the pixel values that overlapped the 1 km-radius buffer zones to generate summer mean temperature and elevation values. Monthly mean temperature and elevation raster maps that were provided by WorldClim 1.4 were generated through interpolation on a 30 arc-second resolution grid (i.e., 1 km² spatial resolution) [79]. Monthly mean temperatures are based on daily minimum and maximum temperature fluctuations from 1950 to 2000 [80]. We extracted additional moderator variables from the reviewed manuscripts, such as sex (male, female, and combined), habitat type (managed areas, riparian areas and protected areas such as national parks), and bat species. Given the limited number of datasets (n = 63), we grouped bat species with fewer than 5 datasets by genus, resulting in only six classes of bat species. To interpret our meta-regression results correctly, we verified a priori that random-tree diameter was not correlated with latitude (r² = 0.00; P < 0.9). We verified that mean summer temperature was negatively correlated with elevation (r² = 0.21; P < 0.001) and latitude (r² = 0.77; P < 0.001). We decided to exclude latitude from our set of moderator variables since it was strongly correlated (i.e., r ≥ 0.7; [81]) with mean summer temperature.
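The heterogeneity indices have simple closed forms. The sketch below uses the DerSimonian-Laird moment estimator for τ², whereas the study used a maximum likelihood estimator, so the two will differ slightly; it is an illustrative translation, not the authors' code.

```python
import numpy as np

def heterogeneity(g, var_g):
    """Cochran's Q, DerSimonian-Laird tau^2, and Higgins' I^2 (%) from
    per-dataset effect sizes g and their sampling variances var_g."""
    g = np.asarray(g, dtype=float)
    w = 1.0 / np.asarray(var_g, dtype=float)   # inverse-variance weights
    mean_fixed = np.sum(w * g) / np.sum(w)     # fixed-effect pooled SMD
    q = np.sum(w * (g - mean_fixed) ** 2)      # Cochran's Q
    df = g.size - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return q, tau2, i2
```

On Higgins' scheme, I² values above roughly 50% indicate the "considerable" heterogeneity reported for every characteristic in this study.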
Due to the apparent spatial proximity of several studies (Fig 1), we verified that our SMD estimates and our best meta-regression model residuals were not dependent upon the effect of spatial scale (i.e., they were not autocorrelated). We predicted, under the null hypothesis, that studies close to each other would not share more similar SMD (and model residual) values than distant studies. In other words, we tested the null hypothesis of spatial randomness, for which SMD (and model residual) values would not depend upon values at neighbouring locations [82]. We chose K = 4 nearest studies as distance-based neighbours among studies. Once our neighbourhood of studies was created, we assigned spatial weights for each pair of neighbours, which was the inverse Euclidean distance among studies [82]. We performed a global Moran's I test of spatial autocorrelation under randomization on the resulting Inverse Distance Weight (IDW) matrices [82]. We also used Moran's I test for residual spatial autocorrelation, which was provided in the package spdep [83]. We compared 17 candidate meta-regression models (package metafor, R Development Core Team 2015) to examine whether the heterogeneity in tree diameter effect sizes was explained by the aforementioned moderator variables. We constructed five subsets of candidate meta-regression models. The first set combined habitat type (i.e., management level), microclimate (i.e., mean summer temperature and elevation) and bat-related (i.e., bat species and sex) moderator variables. The second set combined both microclimate and bat-related moderator variables. The third, fourth and fifth sets included only microclimate, bat-related or habitat type as moderator variables, respectively. We ranked the candidate set of models using the second-order Akaike's Information Criterion for small samples (AICc). We calculated ΔAICc values (Δi) and Akaike weights (ωi) to determine the importance of the candidate set of models relative to the best explanatory model (Δi = 0). Models were considered equivalent when they had a ΔAICc ≤ 2 [84]. We also included the pseudo-R² statistic provided by the package metafor [85], which estimates the amount of heterogeneity (%) accounted for by each candidate meta-regression model.

Publication bias and heterogeneity
Funnel plots were well balanced (Fig 3); therefore, asymmetry tests did not reveal any significant publication bias (Table 1). Higgins' I² heterogeneity index indicated considerable levels of heterogeneity (i.e., I² indices ranging from 50% to 100%) for each characteristic of roost selection by bats (Table 1; Fig 4).

Spatial autocorrelation and meta-regressions
SMD values and squared residuals of our best regression model (i.e., tree diameter effect sizes vs. mean summer temperature) were not spatially autocorrelated. Moran's I test for spatial autocorrelation did not reject the null hypothesis of spatial randomness either for our SMD values (Moran I standard deviate = -0.29, P = 0.62) or for our best regression model residuals (Moran I standard deviate = -1.07, P = 0.28). The model that explained the most heterogeneity in tree diameter effect sizes (pseudo-R² = 29.19%) included mean summer temperature, elevation, bat species and sex as moderator variables. This model had a high ΔAICc (ΔAICc = 10; AICc ω = 0), compared to the two best AICc models (i.e., with ΔAICc = 0 and 1.58), which included only mean summer temperature and sex as moderator variables.
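Two of the computations above, global Moran's I on an inverse-distance K-nearest-neighbour weight matrix and Akaike weights from AICc scores, are compact enough to sketch directly. The study used the R packages spdep and metafor; this Python version is an illustrative translation only, with hypothetical function names.

```python
import numpy as np

def morans_i(coords, values, k=4):
    """Global Moran's I with inverse Euclidean distance weights restricted
    to each study's k nearest neighbouring studies."""
    coords = np.asarray(coords, dtype=float)
    x = np.asarray(values, dtype=float)
    x = x - x.mean()
    n = x.size
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)        # a study is not its own neighbour
    w = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d[i])[:k]      # the k nearest studies
        w[i, nn] = 1.0 / d[i, nn]      # inverse-distance weights
    return (n / w.sum()) * (x @ w @ x) / (x @ x)

def akaike_weights(aicc):
    """Delta-AICc values and Akaike weights for a candidate model set."""
    delta = np.asarray(aicc, dtype=float) - np.min(aicc)
    rel = np.exp(-0.5 * delta)
    return delta, rel / rel.sum()
```

Values of Moran's I near zero, as found here for both the SMD estimates and the model residuals, are consistent with spatial randomness.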
Meta-analysis and heterogeneity
Our meta-analysis included a larger number of characteristics, and increased the scope to a wider range of bat species and forest habitats throughout North America than previous quantitative reviews [1,7,65]. Despite an overall high level of heterogeneity among studies, five characteristics showed strong general trends in roost selection by bats. Cavity-roosting bats selected larger and taller roosts compared to random trees. They also roosted in stands with a larger number of surrounding snags, at lower elevations, and with less canopy closure compared to random stands. These results are consistent with those found by Lacki and Baker [65], and Kalcounis-Rueppell, Psyllakis [1]. Other characteristics, such as distance to water, slope, and bark remaining on trunks, did not significantly differ from random trees because of strong differences in size and direction of effects among studies.

Fig 4. L'Abbé plots of the tree characteristics selected by bats (experimental groups) against the random tree characteristics (control group) with the 95% CI (black dashed lines) for each dataset, and for each characteristic (tree diameter, tree height, snag density, bark remaining on trunks, distance to water, canopy closure, elevation, slope, and stand density). The size of the circle varies according to the assigned random weight (inverse variance of the standardized mean differences) of each dataset. The diagonal (x = y) grey dotted line is the equality line (1:1) between both means (i.e., the zero effect line, for which the mean difference = 0). Above the x = y line, the experimental group mean is higher than the control group mean. Below the x = y line, the experimental group mean is lower than the control group mean. Tau-squared (τ²) and Higgins' I² heterogeneity indices are shown in each plot. Higgins' I² index is expressed in percentage and is used to interpret the severity of heterogeneity. doi:10.1371/journal.pone.0139126.g004

With respect to distance to water, our results slightly differed from those of Kalcounis-Rueppell, Psyllakis [1], since we included a larger set of studies [59,72,86] with a positive effect size (i.e., random trees that were closer to water). Water is an important resource for bats [33-35], especially in arid regions [34,35]. Only two studies included in our meta-analysis were located in arid regions and reported distance to water [87,88]. It would be interesting to investigate whether studies located in arid areas show roosts at shorter distances to water than studies in regions where water availability and precipitation are substantial.

Characteristics likely related to temperature
We found considerable heterogeneity in slope effect sizes. Further, it was difficult to identify a general trend from the literature, since slope appears to be related to the topographical context of the study. Unlike slope, we found greater consistency between results from different studies for elevation. Heterogeneity in elevation was even the lowest compared to the other characteristics explaining roost selection that we tested. Studies were conducted at a specific elevation (i.e., where roosts and random trees are in the same elevation zone), and short distances between roosts and matching random trees are typically taken in the field [64], which likely minimized the effect size for this characteristic.
Despite the fact that studies are conducted at a specific elevation, we showed that elevation differences between selected and random trees are a consistent pattern among studies. Bats might select trees located at lower elevations to benefit from warmer microclimate and greater insect availability near roosts, relative to trees that are located at higher elevations [89]. Several studies found sexual segregation in bats with reproductive females less likely to occur in stands at higher elevation [90]. Russo [91] and Arnold [92] obtained similar results with Daubenton's bat (Myotis daubentonii) and the northern long-eared bat, respectively. Cryan, Bogan [93] showed an inverse relationship between habitat elevation and the presence of reproductive females in South Dakota.

Tree decay and bark remaining on trunks
Most bat species that we included in our meta-analysis seek shelter inside trunk cavities [7,72] and under the exfoliating bark of snags with an intermediate stage of decay [16,20,56,58]. Only 3 studies have reported the exclusive use of cavities within living trees [3,59,60] and two of these were associated with southeastern myotis [59,60]. Although bark remaining on trunks was the most heterogeneous characteristic among those that we studied, a clear preference was exhibited by bats towards snags with about 70% of bark remaining on trunks (Fig 4). An intermediate stage of decay should offer the best compromise between an appropriate tree height and enough bark remaining on the trunk to provide a roost [2,3]. Another interesting aspect of snags is that they offer less buffering capacity against external temperature variation, compared to living trees [42,94]. However, they likely provide more available cavities [95] compared to living trees. Thus, selection of roosts by bats might be driven by a trade-off between the availability of potential roost trees in a given stand [59], their related benefits in terms of warm microclimates, and their relatively short distances to feeding sites [96]. More studies are clearly needed to better understand the thermal capacity of trees and its implications in bat behaviour [41,48].

Moderator variables and tree diameter effect sizes
Tree diameter was the strongest characteristic explaining roost selection by cavity-roosting bats, since positive effect sizes (i.e., trees selected with a larger diameter than random trees) were a common finding in several studies [6,20,59,72]. The main hypothesis invoked by these studies was that trees with a larger diameter offered greater thermal inertia against external temperature variation [41,42,94,97], compared to trees with a smaller diameter. For reproductive female bats, the importance of stable and warm temperatures has been discussed in detail by Barclay and Kurta [5]. Reproductive females are thought to benefit from warm and stable microclimates that minimize thermoregulation costs and which maximize their fitness [5,47]. However, these assumptions have rarely been tested empirically in North American bat research [48]. Most studies that have measured temperature variation in roosts of bats and other mammals have been conducted in Europe [44,45,98] and New Zealand [41,99,100]. To our knowledge, only Park and Broders [40] have shown reductions in temperature fluctuations within roosts that were used by lactating northern long-eared bats in Newfoundland.
Lacki, Johnson [43] also showed reductions in temperature fluctuations within roosts that were used by long-legged myotis and which were located beneath the exfoliating bark of trees, in Idaho and Oregon. Surprisingly, moderator variables such as elevation, bat species, and habitat types were not included in our best model explaining heterogeneity in tree diameter effect sizes. When sex was combined with mean summer temperature, the two predictors explained further heterogeneity. Otherwise, sex alone performed poorly. Subsequent tests for subgroup differences indicated that intra-study heterogeneity for female bats was greater than inter-study heterogeneity, when considering all groups (i.e., males, females and combined). The variability that could be attributed to sex, although present [36,40], was masked by other moderator variables having a greater influence on tree diameter effect sizes. It was interesting to note that the model explaining the most heterogeneity included mean summer temperature, elevation, bat species, and sex as moderator variables. This model had a lower AICc ranking since it was less parsimonious (i.e., K = 11 parameters to estimate) than the two best models [101,102], which included only mean summer temperature (K = 3) and mean summer temperature + sex (K = 5) as moderator variables. Mean summer temperature and sex were the two moderator variables that best explained heterogeneity in tree diameter effect sizes. Our main finding was that, in the case of female bats, regional differences in selection for tree diameter were correlated to mean summer temperatures of the location where the studies were performed. In northern regions with lower mean summer temperatures, female bats showed greater selectivity towards large trees, compared to southern regions, which benefit from higher mean summer temperatures [72]. This study confirmed a relation between regional differences in roost selection by bats and differences in the climatic conditions (i.e., temperature) occurring across a broad spatial scale [7]. Most of the studies that we included in our analyses were from the Pacific Northwest of the US, the southeastern US, and southeastern Canada/northeastern US. Although the studies within these three regions appeared clustered (Fig 1), SMD estimates from these studies were not spatially dependent. In light of these results, the challenge of retaining trees with large diameters seems critical to ensuring the survival of bats, particularly in northern and mountainous regions with low mean summer temperatures and short growing seasons [2,72].

Limitations and research perspective
We expected a high degree of heterogeneity because the studies that we included in our meta-analysis were conducted in various habitats, had included numerous bat species, and attempted to answer different questions. Despite the inclusion of moderator variables, most heterogeneity in tree diameter effect sizes remained unexplained. We are aware that we have used a relatively coarse measure of daily summer temperature that likely obscured regional temperature fluctuations. More accurate moderator variables could likely capture more heterogeneity in tree diameter effect sizes. It is likely that the differences in results among studies were also influenced by measurement methods [64].
We agree with Miller, Arnett [64] that random sites that are located in close proximity to selected roosts by bats might increase the lack of independence, and therefore, minimize the true effect sizes for several distance-based characteristics, such as elevation and distance to water. We were not able to estimate this potential bias since the authors rarely mentioned distances between trees that were selected by bats and random trees. Including this information in future research should greatly improve the interpretation of the results. Ambient temperature, exposure to solar radiation, and thermal properties of trees appear to play a central role in roost selection by bats. These aspects of the roost microclimate hypothesis, as described by Boyles [46], have been rarely investigated and should be included in future research. Driven by a forest management perspective, the majority of studies have focused their research on tree and stand characteristics (e.g., tree diameter, tree height, density of trees and canopy closure) that provide indirect links to microclimate. Studies that we reviewed also rarely mentioned stand age, although it may be correlated with the most important covariates of roost selection, such as tree diameter and tree height [103,104], canopy closure [105], tree density [106], snag density [18,107], and the number of available cavities [108]. The lack of published studies and available reports in northern Canada, in the desert southwest and the Midwest-West prairies in the US, and in Mexico has also limited our analyses to the southeastern US, the US Pacific Northwest and southeastern Canada/northeastern US. It would be interesting to include studies on roost selection by bats that were performed in northern regions to challenge our hypothesis. Supporting Information S1 Table. Meta-analysis on diameter at breast height (cm). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S2 Table. Meta-analysis on tree height (m). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S3 Table. Meta-analysis on snag density (stems/0.1 ha). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S4 Table. Meta-analysis on elevation (m). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S5 Table. Meta-analysis on canopy closure (%). 
Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S6 Table. Meta-analysis on distance to water (m). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S7 Table. Meta-analysis on tree density (stems/0.1 ha). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S8 Table. Meta-analysis on slope (%). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX) S9 Table. Meta-analysis on bark remaining on trunks (%). Number of selected and random trees is provided for each dataset with corresponding mean, standard deviation (SD), standardized mean difference (SMD) with 95% CI, fixed weight (W), and random weight. Fixed effect and random effects SMD with 95% CI, and prediction intervals are provided at the end of the table. All values are rounded upward to two decimal places. (DOCX)
2016-06-02T01:15:34.832Z
2015-09-29T00:00:00.000
{ "year": 2015, "sha1": "84fc8ce35d3ad32d5da7e69e8c430698dd518cb0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0139126&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "84fc8ce35d3ad32d5da7e69e8c430698dd518cb0", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221140825
pes2o/s2orc
v3-fos-license
The Current State-of-the-Art of LRRK2-Based Biomarker Assay Development in Parkinson's Disease

Evidence is mounting that LRRK2 function, particularly its kinase activity, is elevated in multiple forms of Parkinson's disease, both idiopathic as well as familial forms linked to mutations in the LRRK2 gene. However, sensitive quantitative markers of LRRK2 activation in clinical samples remain at the early stages of development. There are several measures of LRRK2 activity that could potentially be used in longitudinal studies of disease progression, as inclusion/exclusion criteria for clinical trials, to predict response to therapy, or as markers of target engagement. Among these are levels of LRRK2, phosphorylation of LRRK2 itself, either by other kinases or via auto-phosphorylation, its in vitro kinase activity, or phosphorylation of downstream substrates. This is advantageous on many levels, in that multiple indices of elevated kinase activity clearly strengthen the rationale for targeting this kinase with novel therapeutic candidates, and provide alternate markers of activation in certain tissues or biofluids for which specific measures are not detectable. However, this can also complicate interpretation of findings from different studies using disparate measures. In this review we discuss the current state of LRRK2-focused biomarkers, the advantages and disadvantages of the current palette of outcome measures, the gaps that need to be addressed, and the priorities that the field has defined.

INTRODUCTION
Parkinson's disease (PD) is a debilitating neurodegenerative disorder, affecting millions of people worldwide. The current therapeutic options address symptoms only and there is no approved therapy that slows progression or modifies disease course. PD is a complex disorder influenced by both genetic and environmental factors. The first unequivocal genetic data supporting susceptibility to PD were mutations found in SNCA (encoding α-synuclein) and the subsequent identification of SNCA gene duplications (Polymeropoulos et al., 1997; Singleton et al., 2003). A few years later, mutations in the leucine-rich repeat kinase 2 (LRRK2) gene were found to exhibit significant impact across familial and sporadic PD (Paisan-Ruiz et al., 2004; Zimprich et al., 2004). Hundreds of nonsense or missense genetic variations have been described in the LRRK2 locus (Ross et al., 2011). However, only a few are considered pathogenic: p.Asn1437His, p.Arg1441Gly, p.Arg1441Cys, p.Arg1441His, p.Arg1441Ser, p.Tyr1699Cys, p.Gly2019Ser, and p.Ile2020Thr; with several other risk factors (e.g., p.Gly2385Arg) or variants of unclear pathogenicity (p.Arg1628Pro and p.Ser1761Arg). Their frequency varies markedly depending on the population; owing to founder effects, the G2019S-LRRK2 mutation reaches 30-42% of PD patients in North African Arabic populations as well as 6-30% in Ashkenazi Jewish populations, probably resulting from a mutation arising at least 5,000 years ago (for review of the genetics of LRRK2, please see Monfrini and Di Fonzo, 2017). The collective data strongly suggest that each of the different point mutations increases kinase activity (Liu et al., 2018). Genome-wide association study (GWAS) analyses also demonstrated that variants at the LRRK2 locus, such as single nucleotide polymorphisms, are among the most important genetic risk factors for PD (Monfrini and Di Fonzo, 2017).
Emerging data also suggest that intergenic LRRK2 variants may be associated with increases in LRRK2 gene expression and accelerated PD motor symptom development (Võsa et al., 2018; Iwaki et al., 2019). In Figure 1, we show a schematic of the LRRK2 domain architecture, highlighting both pathogenic as well as other risk factor or functional variants. LRRK2 plays an important role in vesicular trafficking. It impacts endosomal, lysosomal and autophagosomal pathways (Roosen and Cookson, 2016; Alessi and Sammler, 2018), which are also affected by other well-defined PD genes, such as SNCA and GBA1 (Blandini et al., 2019), strongly implicating these fundamental cellular processes in PD pathophysiology. Recent data from post-mortem PD brain and multiple in vivo models suggest a role for LRRK2 in idiopathic disease as well (Di Maio et al., 2018). Preclinical studies have shown that genetic knock-out of LRRK2, inhibition of LRRK2 with small molecules, or ASO-mediated knockdown reduce pathology and protect from α-synuclein-induced dopaminergic neuronal loss in rodents (Daher et al., 2014, 2015; Zhao et al., 2017), also supporting the hypothesis that, even in the absence of familial mutations, LRRK2 can be pathogenic under certain conditions. Collectively, human genetic studies and preclinical data have led to biopharma initiating drug discovery efforts that have resulted in 2 potential therapeutics progressing into clinical trials (clinicaltrials.gov/ct2/show/study/NCT03710707; clinicaltrials.gov/ct2/show/NCT03976349). There are three potential strategies for clinical development of these LRRK2 therapeutics. Firstly, trials may selectively include genetically defined LRRK2 mutation carriers that have been diagnosed with PD. This would be dependent on patients knowing their own genetic status or on focused screening efforts. However, limitations in enrolling appropriate numbers of suitable LRRK2 mutation carriers will likely provide a significant hurdle in Phase 2 and Phase 3 trials, as the prevalence of LRRK2 mutations, estimated at approximately 5% of all PD cases, varies significantly depending on the geographic location, as does the relative frequency of specific mutations (Monfrini and Di Fonzo, 2017). If there were to be additional stratification, for example, only including G2019S or R1441C/G, this would further reduce this limited patient pool. A second potential clinical design is a prodromal approach: identifying subjects with LRRK2 mutations and determining if disease onset could be prevented by pre-treatment with the potential therapeutic. The major limitations of this approach are the limited genetic penetrance of LRRK2 (Goldwurm et al., 2011; Lee et al., 2018) and, in the cases where the mutation carriers do progress to disease, the unpredictable age of onset, as well as the absence of safety data in subjects undergoing long-term chronic LRRK2 kinase inhibition; these present significant cost/length challenges and an uncharted regulatory path. Finally, strengthening the link between LRRK2 and idiopathic PD (iPD) could identify cohorts of patients where LRRK2, in the absence of known pathogenic mutations, is driving disease pathophysiology. In this case, the need for LRRK2 biomarkers, i.e., biological measures related to LRRK2 that can identify PD processes or therapeutic response, is absolutely critical given the heterogeneous nature of PD.
As introduced above, there is evidence highlighting a link between α-synuclein and LRRK2 (Daher et al., 2014, 2015;Zhao et al., 2017). Similarly, a link has been postulated between LRRK2 and GCase activity (Alcalay et al., 2015;Nguyen and Krainc, 2018), although some controversy still exists concerning the nature of this link. Given these links, as well as the prevalence of LRRK2 risk variants in the sporadic PD population, there is significant evidence supporting the therapeutic potential of LRRK2 inhibition in sporadic PD as well as in additional familial PD cohorts beyond LRRK2 mutation carriers. The clinical development of LRRK2 therapeutics will be strongly dependent on biomarkers, as target engagement and pharmacodynamic endpoints are critical for the successful progression of clinical candidates (Morgan et al., 2012). This is particularly vital in PD, where efficacy trials are long (the average length of current trials is ∼2 years), will require significant numbers of subjects (100+ per arm), and will be costly (in the hundreds of millions USD). There are several measures of LRRK2 function that could potentially be used in longitudinal studies of disease progression, as inclusion/exclusion criteria for clinical trials, as markers of target engagement, and as markers to predict response to therapy. Among these are levels of LRRK2, phosphorylation of LRRK2, either by other kinases or via auto-phosphorylation, in vitro LRRK2 kinase activity, and phosphorylation of downstream substrates or functional endpoints related to elevated (or therapeutic suppression of) LRRK2 function, which will be covered in this review.

LRRK2 OUTCOME MEASURES

In probing the function of LRRK2, with the goal of quantifying changes that coalesce around specific disease-stratifying variables (e.g., disease state, LRRK2 mutation status, etc.), a number of biochemical outcome measures are available. These include the quantification of: total LRRK2 levels; phosphorylated LRRK2 (at multiple residues, including S935 and S1292; see Figure 1 and below for more details); in vitro kinase activity using model peptide substrates; phosphorylation of endogenous LRRK2 substrates (e.g., Rab10); and others. The specific methodologies employed for each of these measures are discussed in more detail below (see section "Current Assays Being Employed"). However, to date, most of the early reports (with a few exceptions) assessing these targets have relied largely on Western immunoblotting, which, in comparison to ELISA-based approaches for example, is limited in both quantitative range and sensitivity. Each of the measures described reveals a distinct, yet equally important, feature of the activation "state" of LRRK2; importantly, this pattern may also manifest differently depending on the source of the biospecimen examined. Note that a summary overview of LRRK2-related measures and potential applications is given in Table 1.

Total LRRK2 Levels

Total expression levels of LRRK2, depending on the tissue/cell type, can vary in PD, and thus can potentially be a useful tool to assess activation during the different stages of the disease. For example, in the CNS, LRRK2 protein levels are elevated in the prefrontal cortex of PD patients (Cho et al., 2013), while CSF levels were only elevated in G2019S PD, but not in iPD or non-manifesting G2019S carriers (Mabrouk et al., 2020).
Outside the CNS, immune cells are an ideal source of LRRK2 since they are obtained non-invasively, and previous reports have shown elevated levels in iPD compared to healthy controls (Cook et al., 2017). In that study, levels were determined by a novel flow cytometric approach using a LRRK2 knockout-validated antibody (rabbit monoclonal; clone c41-2). Specifically, LRRK2 expression was increased in CD16+ monocytes, as well as B and T cells, and this expression was correlated with both intracellular and secreted levels of certain cytokines (Cook et al., 2017). The regulation of LRRK2 expression in specific immune cell sub-types is unclear; however, it is known that specific pro-inflammatory mediators, such as IFN-γ, can induce expression of LRRK2 (Gardet et al., 2010). Thus, the increased levels of LRRK2 in specific immune cells may be linked to elevated peripheral inflammation, which may or may not be associated with PD (e.g., Dzamko, 2017). In the earlier study of Dzamko (2017), assessing pS935-LRRK2 levels by Western immunoblotting in isolated PBMCs, no difference in total LRRK2 expression was detected between iPD and healthy control subjects in this mixed cell population. Thus, given that the bulk of LRRK2 expression in blood cells is concentrated in a few cellular sub-types (e.g., see Fan et al., 2018), including neutrophils (which were not specifically assessed in either study), it is possible that changes in LRRK2 levels, like phosphorylation of LRRK2 as discussed above, are heterogeneous across specific types of peripheral blood cells.

TABLE 1 | Summary of LRRK2-related measures and potential applications.

Total LRRK2
• Current understanding: Expression level of LRRK2 has been shown to be modified in disease-related states, after LRRK2 kinase inhibitor treatment, or after stimulation in immune cells. Essential for calculating LRRK2 phosphorylation rates.
• Potential use: Biomarker research and exploratory studies prior to potential use in a clinical setting.

pS935-LRRK2 (rate)
• Current understanding: A heterologous phosphorylation site of LRRK2. Modified in at least some disease-related conditions. Signal decreases in cells exposed to LRRK2 kinase inhibitors, which sensitize LRRK2 to dephosphorylation.
• Potential use: Pharmacodynamic marker in clinical trials with LRRK2 kinase inhibitors. Biomarker research and exploratory studies for assessment as a PD progression or diagnostic marker.

pS1292-LRRK2 (rate)
• Current understanding: Autophosphorylation site. Indicator of LRRK2 kinase activity in cells. Modified in at least some disease-related conditions. Signal decreases in cells treated with LRRK2 kinase inhibitors.
• Potential use: Pharmacodynamic marker in clinical trials with LRRK2 kinase inhibitors. Biomarker research and exploratory studies for assessment as a PD progression or diagnostic marker.

In vitro kinase assays (autophosphorylation or substrate phosphorylation)
• Current understanding: Indicator of intrinsic kinase activity, potentially affected by post-translational modifications.
• Potential use: Biomarker research and exploratory studies for assessment as a PD progression or diagnostic marker.

Genetic testing
• Current understanding: Pathogenic mutations and risk polymorphisms are indicators of varying degrees of increased risk for PD.

Heterologous LRRK2 Phosphorylation

Phosphorylation of LRRK2 at a cluster of serine residues located within the N-terminal region of LRRK2 (e.g., S910, S935, S955, and S973), immediately upstream of the namesake leucine-rich repeat domain, represents an additional biochemical readout of LRRK2. The apparent relative abundance of these post-translational modifications (PTMs) in comparison to other sites, such as S1292, has rendered phosphorylation at these sites more easily detected; however, functional interpretation of these findings is complicated by the fact that these are not auto-phosphorylation modifications, as is the case for pS1292. Multiple kinases have been implicated in the phosphorylation of these residues, including CK1-α1 (Chia et al., 2014), PKA (Muda et al., 2014), TBK-1 (Dzamko et al., 2012), and others. However, analogous to what is observed for autophosphorylation sites, phosphorylation at S935 is sensitive to pharmacological LRRK2 kinase inhibition, such that there is a rapid de-phosphorylation at this site (and the other N-terminal serine residues) following treatment of cells, or in vivo, with specific LRRK2 kinase inhibitors (e.g., Dzamko et al., 2010;Vancraenenbroeck et al., 2014). Interestingly, over-expressed kinase-inactive mutant LRRK2 [e.g., D1994A and K1906M/R; (Ito et al., 2014)] does not display S935 dephosphorylation relative to WT, indicating that acute (pharmacological) inhibition of LRRK2 alters this regulatory cycle, while chronic genetic ablation of LRRK2 kinase activity does not. This is explained by the fact that S935-LRRK2 phosphorylation levels do not correlate with kinase activity but rather with the sensitivity of LRRK2 to phosphatases. Indeed, LRRK2 is sensitized to dephosphorylation both by LRRK2 kinase inhibitors and in certain LRRK2 mutants with reduced basal S935-LRRK2 phosphorylation. LRRK2 dephosphorylation at the S935 cluster is mediated by the catalytic subunit of protein phosphatase 1, which is recruited to the LRRK2 complex under conditions of pharmacological inhibition of the LRRK2 kinase (Lobbestael et al., 2013). Conversely, overexpression of pathogenic mutant forms of LRRK2, such as G2019S or R1441C/G, which are known to enhance the kinase activity of LRRK2, does not lead to enhanced levels of pS935-LRRK2; in fact, decreased levels of phosphorylation at this site have been reported (Nichols et al., 2010;Li et al., 2011), including at endogenous levels in immortalized lymphoblasts from G2019S-LRRK2 mutation carriers. The earliest report of an assay designed to quantify pS935-LRRK2 at endogenous levels came from Delbroek et al. (2013). Using well-validated antibodies (i.e., validated in knock-out tissue), this group established a quantitative detection method for S935-LRRK2 phosphorylation that demonstrated loss of signal in kinase inhibitor-treated cells and animals. Additionally, as a proof of concept, phosphorylation at this site was detected in human PBMCs from healthy volunteers, and this signal was also sensitive to LRRK2 kinase inhibition (Delbroek et al., 2013). Apart from the ELISA-based detection of this PTM of LRRK2, several studies employing Western immunoblotting have also been reported. The same year as the report from Delbroek et al. (2013), Dzamko et al. (2013) examined pS910-LRRK2 and pS935-LRRK2 levels in PBMCs from healthy controls or iPD patients.
While a significant correlation between both phosphorylated sites and total LRRK2 was found in these cells, there was no significant change in pS935 or pS910 levels, when normalized to total LRRK2, in the iPD group (Dzamko et al., 2013). The authors of this study correctly pointed out that in a mixed cellular population of PBMCs, where LRRK2 expression is concentrated in a few cell subtypes (Fan et al., 2018), changes in phosphorylation of LRRK2 in distinct cellular types may not be uniform. Additional clinical studies assessing these changes in LRRK2 within specific purified cell types (e.g., monocytes, neutrophils, etc.) are necessary to determine if pS935-LRRK2 levels are detectable in selective cellular populations. In a study of a small cohort of iPD and G2019S mutation carriers, levels of pS935-LRRK2, normalized to total LRRK2 by Western immunoblotting, showed a non-significant decrease in comparison to healthy controls in isolated neutrophils (Fan et al., 2018).

LRRK2 Autophosphorylation

The phosphorylation status of LRRK2 is reflective of its activation in a number of distinct ways. First, and most directly, autophosphorylation of LRRK2, for example at the S1292 site, is indicative of its kinase activity in the cell of origin at the time of collection. A number of factors, not just the level of LRRK2 kinase activity alone, can come into play to determine the level of phosphorylation at this or other autophosphorylation site(s). For example, the presence and activity of relevant phosphatases, the sub-cellular localization of LRRK2, the activity of upstream regulators of LRRK2, the status of the ROC GTPase domain, and even the cell type can all influence the final degree of pS1292 observed. Notably, phosphorylation at S1292 has also been detected in EVs present in CSF (Wang S. et al., 2017), at significantly higher levels than pS1292-LRRK2 present in urinary EVs, with the signal saturated in many samples. This saturation effect in the Western immunoblot detection of pS1292-LRRK2 from CSF EVs prevented the stratification of LRRK2 G2019S carriers from non-carriers. This limitation would likely be overcome using an ELISA-based approach (with suitable antibodies for pS1292-LRRK2), in which the usable linear range of detection is typically much broader.

Intrinsic LRRK2 Kinase Activity

Finally, in addition to the cellular indices of LRRK2 kinase activity (LRRK2 phosphorylation, phosphorylation of endogenous substrates such as Rab GTPases), the intrinsic kinase activity of isolated LRRK2 can also be informative. In this approach, LRRK2 is purified from a specific biosample (e.g., PBMCs), and an in vitro kinase reaction is performed using model peptides, such as LRRKtide or the related NICtide, as substrate. There are several key differences between assessing kinase activity in this way (i.e., the in vitro activity of the purified enzyme) vs. assessing kinase activation by determining auto-phosphorylation (e.g., pS1292-LRRK2) or phosphorylation of endogenous cellular substrates (e.g., pT73-Rab10). First, performing an in vitro kinase reaction will allow the determination of any changes in the intrinsic activity of purified LRRK2. For example, it is possible that certain PTMs that are known to affect LRRK2 (e.g., phosphorylation and ubiquitination) can alter the intrinsic activity of the kinase domain. If such PTMs are more prevalent in the diseased state, compared to healthy control subjects, the functional consequence of these may be altered kinase activity.
This alteration can be detected in an in vitro assay, in an unencumbered way, without the potential influence of interacting proteins (depending on the stringency of the purification conditions). Secondly, "cellular" assays (measuring phosphorylation of LRRK2 or its substrates) provide a "snap-shot" of kinase activation that is the result of the coordinated action of myriad upstream and downstream regulatory factors, interacting proteins, and the general activation state of the cell. We have employed such an assay, initially in over-expression models (e.g., Leandrou et al., 2019), but more recently in a clinical study assessing LRRK2 in peripheral blood cells (Melachroinou et al., 2020). In this approach, LRRK2 is purified in an ELISA plate, capturing the protein with anti-LRRK2 (or epitope tag) antibodies, followed by an in-well kinase reaction in which the reaction mixture containing the peptide substrate is added directly to the well containing immobilized LRRK2. Possible evolutions of this in vitro kinase activity approach could be to include measures of autophosphorylation (such as the pS1292-LRRK2 measure, as described in Melachroinou et al., 2016) or of Rab substrate phosphorylation (by spiking in recombinant Rab substrate proteins rather than peptide substrates).

Substrates of LRRK2 Kinase

Phosphorylation of LRRK2 substrates represents another potentially informative outcome measure of LRRK2 kinase activation, and like several of the other markers discussed, can also be dependent upon cell or tissue source. In 2016, in a landmark study from the groups of Alessi and Mann, several members of the Rab GTPase family were identified as endogenous kinase substrates of LRRK2 (Steger et al., 2016). A conserved residue within the switch II domain of these GTPases was found to be robustly phosphorylated both in cellular systems as well as in vivo. Several phospho-specific antibodies to certain Rab proteins have since been developed and characterized, and are now being deployed in studies of LRRK2 activation in clinical samples and as potential markers of target engagement. In another study, using in-house developed phospho-specific antibodies to pT73-Rab10 and pS106-Rab12, Thirstrup et al. (2017) demonstrated that a novel inhibitor of LRRK2 kinase activity could reduce phospho-Rab levels in stimulated PBMCs, but only after 24 h of treatment (Thirstrup et al., 2017). Likewise, similar to other reports, kinase inhibition at 24 h also reduced LRRK2 levels in comparison to untreated cells. The goal in this study, as samples from PD cohorts were not examined, was to assess the utility of pRab10 and pRab12 rates (i.e., phosphorylated Rab as a proportion of total Rab expression) as markers of target engagement, and as a proof of concept study, this was indeed demonstrated. The principal caveat associated with this study is that LRRK2 levels were artificially induced in isolated PBMCs, following culture for 3 days in the presence of PMA and IFN-γ (Thirstrup et al., 2017). Later evidence demonstrated the translatability of both pS935-LRRK2 and pT73-Rab10 as pharmacodynamic readouts in the clinical setting in unstimulated PBMCs. In human subjects treated with the LRRK2 inhibitor DNL201 for 1-10 days, both readouts showed a robust exposure-dependent reduction in PBMCs (Denali Therapeutics Inc., MJFF PD Therapeutics Conference 2018). Two studies, thus far, have examined Rab10 phosphorylation (pT73) in peripheral blood cells of PD patients, both with and without the G2019S LRRK2 mutation.
A first study by Fan et al. (2018) demonstrated the feasibility of Rab10 phosphorylation measures, showing good detection of Rab10 and pT73-Rab10 in neutrophils and initial evidence of increased Rab10 phosphorylation in small samples of idiopathic PD or LRRK2-G2019S PD compared to healthy controls. In a larger study, comprising almost 50 subjects from control or iPD groups, Rab10 phosphorylation in isolated neutrophils or PBMCs was assessed. Consistent with the earlier report, LRRK2 inhibitor treatment significantly reduced pS935 levels as well as pT73-Rab10 in both neutrophils and mixed PBMC cellular populations (Atashrazm et al., 2018), with no difference in the degree of response between control and iPD subjects. Interestingly, similar to the report of increased LRRK2 expression in B or T cells, or CD16+ monocytes (Cook et al., 2017), levels of LRRK2 in purified neutrophils (but not PBMCs) are also elevated (Atashrazm et al., 2018). Additionally, neither cell type revealed differences in phosphorylation of Rab between iPD and control subjects. Taken together, while phosphorylation of Rab10, by Western immunoblotting, appears to be a suitable marker for target engagement in clinical studies of LRRK2 inhibition, it remains unclear whether this readout can reliably stratify subjects according to patient group, as thus far, differences between control and iPD or LRRK2 mutation carriers have not been observed. It should be noted, however, that for the LRRK2 mutation carrier study, the sample size and statistical power were low (intentionally, for a proof-of-concept study), precluding the possibility of reaching significant conclusions. Further analyses in larger cohorts, ideally with more quantitative approaches, are clearly warranted.

CURRENT ASSAYS BEING EMPLOYED

An important aspect of the evaluation of LRRK2 and related targets as potential biomarkers of PD is to have a good understanding of the assays used. We present here the assay methods that have been used in recent literature (e.g., ELISA), or are at earlier developmental stages (e.g., PET tracer ligands), to measure LRRK2 status.

Western Immunoblots to Measure LRRK2 Function

Western blots targeting pS935-LRRK2, pS1292-LRRK2, and pT73-Rab10 have been successfully used pre-clinically to detect and measure total levels of LRRK2, activation of LRRK2 kinase, and LRRK2 function. As pharmacodynamic endpoints reflecting LRRK2 inhibition, the phospho-specific LRRK2 and Rab10 targets are well established pre-clinically. Measurement of pS935-LRRK2 showed a rapid reduction in S935 phosphorylation following LRRK2 inhibition in cellular models and in vivo pharmacokinetic/pharmacodynamic studies (Delbroek et al., 2013;Fell et al., 2015;Fuji et al., 2015), enabling quantification of LRRK2 inhibitor potency in cells and tissues where LRRK2 is endogenously expressed. Similarly, both pS1292-LRRK2 (in HEK cells overexpressing a mutant form of LRRK2) and pT73-Rab10 (in mouse tissues and HEK cells overexpressing Rab10 and LRRK2) are dose-dependently reduced following LRRK2 inhibition as measured by Western blot (Sheng et al., 2012;Atashrazm et al., 2018;Fan et al., 2018;Lis et al., 2018). pS935-LRRK2 and pT73-Rab10 are also measurable by Western blot and reduced following LRRK2 inhibition ex vivo in PBMCs and neutrophils, demonstrating the potential translatability of these markers for clinical use (Perera et al., 2016;Atashrazm et al., 2018;Fan et al., 2018;Lis et al., 2018; and see below in section "Tissue/Biofluid Origin").
In addition to use as pharmacodynamic readouts, several studies have measured pS1292-LRRK2 and phosphorylated Rab proteins in PD patient samples to test the hypothesis that LRRK2 kinase activity is elevated in all or a subset of PD (Dzamko et al., 2013;Fraser et al., 2016a;Atashrazm et al., 2018;Fan et al., 2018;Lis et al., 2018). pS1292-LRRK2 has not been reproducibly detectable in accessible blood matrices, while the results with pT73-Rab10 have not conclusively demonstrated elevated LRRK2 kinase activity in PD patient samples. Therefore, at this point, pT73-Rab10 (as well as pS935-LRRK2) is more likely to be useful as a pharmacodynamic marker than as a patient selection marker, although additional studies using more sensitive and high-throughput assays in additional matrices are ongoing that may change the landscape on this point. Despite the successes of assessing LRRK2 function via Western blot in many preclinical studies, Western blot has strong disadvantages as a potential biomarker endpoint in the context of a clinical trial. In order to enable clear interpretation and quantitative analysis with rapid turnaround time, clinical assays for pharmacodynamic readouts or patient selection must be highly quantitative (ideally allowing for absolute measurement of the analyte of interest), robust to implement in different locations or over extended periods of time, and relatively high throughput. Western blots are semi-quantitative, differ greatly from user to user, and generally allow for analysis of <100 samples at a time. Therefore, new methods of LRRK2 and Rab measurement must be developed to maximize the utility of these biomarkers in the clinical setting.

ELISA to Measure LRRK2 Function

ELISAs offer a more sensitive and high-throughput method to interrogate LRRK2 kinase activity and the pharmacodynamics of LRRK2 kinase inhibitors. Thus far, three assays have been published (Delbroek et al., 2013;Henderson et al., 2015;Scott et al., 2017), each utilizing a sandwich-ELISA approach with capture by a total LRRK2 antibody, followed by detection with a specific pS935-LRRK2 antibody. The latter two studies have facilitated accurate IC50 measurements for LRRK2 kinase inhibitors from treated mouse tissues (brain and kidney) (Henderson et al., 2015), LRRK2 G2019S SH-SY5Y cell lysates (Scott et al., 2017), and human PBMC lysates (Padmanabhan et al., 2020). Notably, Meso Scale Discovery (MSD) has made a pS935-LRRK2 sandwich-ELISA-based assay commercially available, alongside a comparable assay that measures total LRRK2 protein levels. Use of both assays facilitates normalization of pS935-LRRK2 to total LRRK2 levels to account for compound effects on LRRK2 expression or half-life, in addition to standard normalization to tissue weight or total protein levels. These ELISAs offer enhanced sensitivity compared to Western blots, as low as 400 picomolar, as well as the option for high-throughput 384-well assay design. An emerging alternative is the SIMOA platform offered by Quanterix, which applies digital ELISA technology. The SIMOA platform utilizes a bead-based approach to enable single-molecule labeling detected by fluorescence. In addition, the SIMOA assay is able to use sample volumes often <5 µL and run up to 400 samples per shift. A recent MJFF-led study (Padmanabhan et al., 2020) utilizing the SIMOA platform reported levels as low as 19 pg/mL for total LRRK2 and 4.2 pg/mL for pS935-LRRK2 using full-length recombinant human LRRK2, and subsequently applied this approach to human PBMC lysates.
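As a concrete illustration of how such paired total and phospho assays are typically read out, the sketch below fits a four-parameter logistic (4PL) standard curve, back-calculates sample concentrations, and forms the pS935/total LRRK2 ratio. This is a minimal, generic workflow sketch, not the MSD or Quanterix vendor analysis; the standard-curve values and sample signals are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = upper asymptote, b = slope, c = inflection
    point (EC50-like), d = lower asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def back_calc(y, a, b, c, d):
    """Invert the 4PL to convert a measured signal into a concentration."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Invented calibrator series (pg/mL vs. assay signal), illustration only.
std_conc = np.array([4.7, 19.0, 75.0, 300.0, 1200.0, 4800.0])
std_sig = np.array([120.0, 410.0, 1500.0, 5200.0, 14800.0, 26500.0])

popt, _ = curve_fit(four_pl, std_conc, std_sig,
                    p0=[30000.0, 1.0, 500.0, 100.0])

# Hypothetical signals for one sample, one from each paired assay plate
# (in reality each plate would be fit against its own standard curve).
total_lrrk2 = back_calc(9800.0, *popt)  # total LRRK2 plate -> pg/mL
ps935_lrrk2 = back_calc(2100.0, *popt)  # pS935-LRRK2 plate -> pg/mL

# Normalizing phospho to total LRRK2 controls for differences in LRRK2
# expression or compound effects on LRRK2 half-life (see text).
print(f"pS935-LRRK2 / total LRRK2 ratio: {ps935_lrrk2 / total_lrrk2:.2f}")
```

In practice, each plate is fit against its own calibrators with replicates, QC acceptance criteria, and dilution-linearity checks; the point here is only the back-calculation and normalization step that makes the phospho signal comparable across samples with different LRRK2 expression.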
Altogether, ELISA-based assays offer a more sensitive, high-throughput alternative to Western blotting, with multiplex potential, for measurement of pS935-LRRK2 biomarker levels. It is crucial to note, however, that it will be vital to compare each approach, across platforms and in different centers, with parallel samples to determine if similar estimations of LRRK2 concentration and phosphorylation are obtained by the various assays. Quantification of reduced pS935-LRRK2 by conventional ELISA and SIMOA assays can accurately reflect the pharmacodynamic response following administration of LRRK2 kinase inhibitors and is therefore currently used as a surrogate biomarker, even though pS935-LRRK2 is not a direct measurement of kinase activity (see above). In fact, it has been reported that the ratio of pS935-LRRK2 to total LRRK2 is significantly reduced in human PBMC lysates from PD-manifesting LRRK2 G2019S carriers compared to iPD samples and healthy controls [with and without G2019S mutations (Padmanabhan et al., 2020)], although an alternative in-house developed ELISA detected a slight but significant increase in pS935-LRRK2 in PBMCs of iPD, compared to healthy controls (Melachroinou et al., 2020). Measurement of the auto-phosphorylation site pS1292-LRRK2 would be a more ideal marker of LRRK2 kinase activity, but reliance on this biomarker has been hindered by low physiological stoichiometry (Sheng et al., 2012) and limited phospho-specific antibodies. As newer antibody clones targeting this site, and pT73-Rab10 as well (see above), are validated for use in more quantitative and sensitive methods such as ELISA, these challenges will likely be overcome. Nonetheless, Di Maio et al. (2018) recently reported a method using proximity ligation to amplify pS1292-LRRK2 immunostaining in the substantia nigra of human iPD tissue, which was increased compared to healthy controls. These exciting data suggest LRRK2 kinase inhibitors may have broader therapeutic potential for the larger PD patient population, beyond those carrying mutations in the LRRK2 gene. There is potential for proximity ligation technology to be converted to more high-throughput qPCR-based platforms, though this has not yet been reported for pS1292-LRRK2. Additional improvements in the quality of reagents available for pS1292-LRRK2 detection will likely enable better utilization of this site as a biomarker. Similar quantification strategies for LRRK2-mediated phosphorylation of Rab substrates, particularly of pT73-Rab10, may also offer additional alternatives for more direct markers of LRRK2 kinase activity in the future.

Mass Spectrometry to Measure LRRK2 Levels and Function

Liquid chromatography-mass spectrometry (LC-MS) has wide-ranging applications from exploratory to regulated clinical use, yet quantitative measurement of very low abundance proteins remains challenging due to sensitivity limits and artifacts such as matrix-induced ion suppression. By and large, protein measurements by LC-MS utilize "bottom-up" proteomics techniques whereby proteins are digested by proteases into smaller peptides, which are then analyzed for their signature parent and fragment ion mass-to-charge (m/z) ratios. Peptides between 10 and 20 amino acids are in the ideal range for specificity (i.e., not likely to exist in different protein types) and sensitivity (i.e., they are more likely to perform well in electrospray ionization-MS).
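The protease-selection logic described in the next paragraph (why trypsin fails for S1292 while Glu-C succeeds) is simple enough to capture in a short in-silico digestion sketch. The cleavage rules below are the standard textbook ones, with Glu-C treated as glutamate-specific as in ammonium bicarbonate buffer, and the sequence used is the 14-residue Glu-C peptide quoted in the next paragraph.

```python
def digest(seq, cleave_after, no_cut_before="P"):
    """Naive in-silico digestion: cut after any residue in `cleave_after`,
    skipping the cut if the next residue is in `no_cut_before`
    (the classic 'no cleavage before proline' rule)."""
    peptides, start = [], 0
    for i in range(len(seq) - 1):
        if seq[i] in cleave_after and seq[i + 1] not in no_cut_before:
            peptides.append(seq[start:i + 1])
            start = i + 1
    peptides.append(seq[start:])
    return peptides

# The 14-residue Glu-C peptide spanning the LRRK2 S1292 site (see text);
# S1292 is the serine inside the ...KLSK... motif.
region = "MGKLSKIWDLPLDE"

print(digest(region, "KR"))  # trypsin-like: ['MGK', 'LSK', 'IWDLPLDE']
                             #   -> S1292 lands in the 3-mer 'LSK'
print(digest(region, "E"))   # Glu-C-like (E-specific, bicarbonate buffer):
                             #   ['MGKLSKIWDLPLDE'] -> intact 14-mer

# Only peptides in the 10-20 aa window are well suited for targeted LC-MS.
print([p for p in digest(region, "E") if 10 <= len(p) <= 20])
```

Real digestion prediction also handles missed cleavages and modified residues, but the length contrast, a 3-mer from trypsin versus a 14-mer from Glu-C, is exactly the specificity argument made below.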
Trypsin, which cleaves proteins at the C-terminus of arginine (R) and lysine (K), is the most commonly used protease in this context. In some instances, trypsin does not yield an appropriate peptide when a specific amino acid sequence is desired. For example, a recent article by Wang S. et al. (2017) showed detection of total LRRK2 and pS1292-LRRK2 by LC-MS using the Glu-C protease, since trypsin would not generate a viable peptide containing S1292: the S1292 site is flanked by K residues (KLSK), thus trypsin would generate a 3-amino-acid peptide (LSK). A peptide this short would not necessarily come only from LRRK2, and so assay specificity would be lost. The group instead chose to use the less common protease Glu-C, which cleaves at the C-terminus of glutamic and aspartic acid residues; this process generated a 14-amino-acid peptide between E1287 and E1301 (MGKLSKIWDLPLDE) containing S1292. The mass spectrometer can then distinguish and quantify the un-phosphorylated and phosphorylated peptide species. The group then showed that phosphorylated rLRRK2 (including pS1292) was reliably detected; however, the article stops short of quantifying pS1292-LRRK2 in biological samples. It is likely that an antibody enrichment step would still be required for success in biological samples, since no cleanup step was applied. Although there are a number of sample cleanup steps that can reduce sample complexity (including sample fractionation), these steps can be laborious and can introduce variability. Gaining momentum in the field of protein biomarker quantification is the so-called "hybrid ligand binding assay (LBA)-LC-MS" methodology, whereby proteins are isolated from samples using antibodies (similar to ELISA), followed by protease digestion and LC-MS analysis. This methodology has the advantage of greatly reducing sample complexity and improving MS analysis. When a high-resolution mass spectrometer such as an orbitrap or FT-ICR system is used, specificity of signal is encoded by unique peptides that only exist in the targeted protein. This is an advantage over traditional ELISAs, where detection specificity must be demonstrated experimentally by analyzing samples in various matrices and testing KO tissues, for example. To our knowledge there are no reported hybrid LC-MS assays in the literature being used for routine LRRK2/pLRRK2 quantitation in the context of a fit-for-purpose biomarker assay. As this approach becomes more common, LRRK2/pLRRK2 would be well positioned for this type of assay development because of the availability of several high-quality LRRK2 antibodies. Another variation of the hybrid approach is called SISCAPA (stable isotope standards and capture by anti-peptide antibodies). This technique goes even further in reducing sample complexity. In this approach, samples containing proteins of interest are digested using a protease, and then peptides (not proteins) are isolated using anti-peptide antibodies (Anderson et al., 2004). In principle, following elution from an antibody, samples are purified to a single peptide species. In comparison, anti-protein immunocapture eluent will contain peptides from the entire protein as well as peptides from the proteases used. A recent initiative by MJFF sought to develop SISCAPA-based assays against regions of LRRK2 that would serve as both total LRRK2 and kinase activity endpoints.
Specifically, the MJFF-SISCAPA collaboration developed mouse monoclonal antibodies against linear epitopes containing S935 (HSNSLGPIFDHEDLLK) and S1292 (MGKLSKIWDLPLD) capable of detecting both the native and phosphorylated forms of the peptides. Unfortunately, those results showed only nanogram-level sensitivity, which was attributed to the performance of the target peptides on the particular LC-MS platforms used as well as the need for a higher-affinity rabbit monoclonal antibody (data not published). As such, the existing assays would have limited sensitivity in the context of human CSF. Elsewhere in this issue (Mabrouk et al., 2020), we describe a novel SISCAPA assay using commercially available antibodies that function as anti-peptide antibodies to measure total LRRK2 with sensitivity sufficient for CSF detection.

Development of LRRK2 PET Ligands

Positron emission tomography (PET) is a non-invasive and highly sensitive molecular imaging technique that has multiple applications across the CNS drug discovery field. For example, PET imaging with a radiolabeled molecule can be used to assess that molecule's biodistribution properties, thus allowing for the assessment of brain penetration, which otherwise cannot be definitively determined in the clinical setting. PET imaging can also be used to quantify CNS target occupancy by a drug molecule and to confirm CNS target engagement. This is an incredibly powerful tool, as it can determine whether the hypothesis in question has been sufficiently tested in the clinic [i.e., a proof of concept (PoC) trial outcome was negative, but the CNS target was engaged sufficiently such that a role for that target in the disease/disease stage can be ruled out]. Finally, PET imaging has the potential to serve as a disease-state biomarker if the radiolabeled molecule is specific to a target that is associated with disease or a particular stage of disease. Given the applications of PET imaging to CNS drug discovery, the identification of a LRRK2 PET ligand could significantly enable the clinical development of LRRK2 kinase inhibitors and has been the subject of intense focus from both industry and academic groups alike. Despite the identification of numerous potent and selective LRRK2 kinase inhibitors from a variety of structural classes, there are limited reports detailing the successful development of radiolabeled LRRK2 kinase inhibitors. In 2013, Roche/Genentech published a patent in which they described the synthesis of 11C- or 18F-labeled LRRK2 inhibitors related to GNE-1023. Similarly, Wang M. et al. (2017) described the radiolabeling of [11C]-HG-10-102-01, but as with the Roche/Genentech probes, no in vitro or in vivo PET characterization of this molecule was described. Malik et al. (2017) reported that they had successfully radiolabeled [3H]-LRRK2-IN-1; however, its use as a CNS PET tracer is limited by poor off-target selectivity and limited brain penetration of the base molecule. Most recently, Chen et al. (2019) reported the development of [11C]-GNE-1023, describing excellent in vitro specific binding of [11C]-GNE-1023 to LRRK2 in rat and NHP brain sections (Chen et al., 2019). However, whole-body ex vivo biodistribution studies exhibited limited brain uptake of [11C]-GNE-1023 in mice, despite it not being a substrate of the brain efflux transporter Pgp. The authors reported that studies in higher species such as NHP and the development of tracers with improved brain penetration were ongoing.
Additionally, GNE-1023 has been labeled with [18F] rather than [11C]; however, minimal specific binding in caudate putamen homogenates from rat, rhesus monkey, and human was reported. This group also reported on studies with another radiolabeled LRRK2 kinase inhibitor (compound B) that is derived from the indazole class and is structurally similar to the highly potent and selective LRRK2 kinase inhibitor MLi-2. While [3H]-compound B showed high binding affinity to LRRK2 WT full-length enzyme (Kd = 57 pM), only modest displaceable and saturable binding of [3H]-compound B was observed in rhesus monkey brain CPu homogenates (Kd = 0.09 nM). Importantly, using either [3H]-compound B or [18F]-GNE-1023, they determined that the Bmax for LRRK2 in the NHP and human brain was very low (∼0.4 nM) and that the resulting tracer binding potentials (Bmax/Kd ratio) were far below the Bmax/Kd ratio of >10 that is typically required for the successful development of CNS PET tracers (Patel and Gibson, 2008); for example, with Bmax ≈ 0.4 nM and Kd ≈ 0.09 nM, the binding potential is only ≈4. In summary, a validated PET ligand for monitoring changes in LRRK2 is not currently available, and the probability of success for developing a LRRK2 PET tracer is low, based on the observed low Bmax (<1 nM) in the CNS regions of interest.

TISSUE/BIOFLUID ORIGIN

LRRK2 and LRRK2 pathway markers have been assessed across a range of tissue and biofluid sources (Figure 2). Here, we provide an overview of key findings for: blood, urinary exosomes, CSF exosomes, and gut/saliva.

FIGURE 2 | Expression of LRRK2 in multiple tissues/cell types. LRRK2 is widely expressed throughout the body in a variety of cell types and tissues, including high levels of expression in the kidney, lung, and cells of the peripheral immune system; but also in multiple brain regions, the intestine, as well as extracellularly via exosomal release.

Measurement of LRRK2 in Blood and Blood Derivatives

Despite LRRK2 being connected most closely with a disorder of the central nervous system, LRRK2 expression levels are highest in the periphery, in particular in white blood cells (Fuji et al., 2015). This enables measurement of LRRK2 markers in blood, or in cells derived from blood such as PBMCs, a practical and accessible matrix in the context of clinical applications where frequent sampling for pharmacokinetic/pharmacodynamic analysis will be required. Indeed, many groups have successfully measured LRRK2 inhibition in PBMCs ex vivo from human samples, and in some cases in vivo in cynomolgus monkeys treated with LRRK2 inhibitors, by quantifying pS935-LRRK2 reduction (Delbroek et al., 2013;Fuji et al., 2015;Perera et al., 2016). After the discovery that LRRK2 phosphorylates several Rab GTPases, phospho-specific antibodies targeting the LRRK2-dependent Rabs were developed and used to measure LRRK2 inhibition in PBMCs, in particular pT73-Rab10 (Steger et al., 2016;Fan et al., 2018;Lis et al., 2018). PBMCs are commonly isolated in labs and clinical sites for many applications, and are therefore clearly translatable for the purposes of measuring LRRK2 inhibition in human subjects. LRRK2 expression varies among the different cell types within PBMCs. It is most highly expressed in neutrophils and monocytes, with lower expression in T cells, B cells, dendritic cells, and natural killer cells (Fuji et al., 2015;Fan et al., 2018). This heterogeneity in LRRK2 expression, combined with heterogeneity of cell populations from person to person, may add to the variability of LRRK2 marker quantification in PBMCs.
It has therefore been proposed that isolation of neutrophils and/or monocytes for the purposes of measuring LRRK2 markers may reduce inter- and intra-subject variability; at least in the case of neutrophils, isolation from many donors and measurement of LRRK2 inhibition, either by pS935-LRRK2 or pRab10 measurement, has been successfully performed (Fan et al., 2018). In the clinical setting, it is likely that most centers will have more experience isolating PBMCs, compared to specific subtypes such as neutrophils or monocytes, so the practicalities of cell isolation must be balanced against the theoretical gains of isolating a pure and homogeneous cell population. With respect to practicality, the most ideal solution for clinical measurement of LRRK2 inhibition in the periphery would be to measure it in whole blood rather than in a population of cells isolated from whole blood. This would make the assay more broadly applicable and practical for clinical sites; however, the ability to track changes in LRRK2 activation within specific cell types would be sacrificed. For this reason, we developed an ELISA-based assay of pS935-LRRK2 and total LRRK2 with sufficient sensitivity for detection of these analytes in whole blood (Denali Therapeutics Inc., MJFF PD Therapeutics Conference 2018). This has indeed resulted in more practical and streamlined sample collection processes for clinical sites, compared to PBMC isolation, that are applicable to multi-center, international studies. Alternatively, at sites with such capabilities, immortalization of lymphocytes might be a useful strategy to identify new biomarkers from one type of cell. For instance, we were able to detect centrosomal cohesion deficits in PBMC-derived lymphoblastoid cell lines from LRRK2 G2019S Parkinson's disease patients, as well as in a subset of sporadic PD patients (Fernandez et al., 2019). This approach, however, is better suited for patient stratification purposes in clinical research studies than as a rapid and sensitive marker of target engagement required in a clinical trial. Thus far, we have only been considering measurement of LRRK2 in blood for the purposes of target engagement, but there has also been considerable effort put into measurement of LRRK2 levels and LRRK2 function in blood for the purpose of patient stratification, or for testing the hypothesis that PD patients without LRRK2 mutations have elevated LRRK2 function that contributes to PD pathogenesis. Total LRRK2, pS935-LRRK2, and pT73-Rab10 have all been measured in PBMCs and in neutrophils in sporadic PD patients, non-PD controls, and LRRK2 carriers with and without PD (Dzamko et al., 2013;Atashrazm et al., 2018;Fan et al., 2018;Melachroinou et al., 2020;Padmanabhan et al., 2020). LRRK2 S935 phosphorylation rates decrease in LRRK2 carriers with PD, while all other groups show no significant differences in levels of the tested analytes (Padmanabhan et al., 2020). In another study, however, pS935-LRRK2 levels were reported to be slightly elevated in iPD patients (Melachroinou et al., 2020). That said, for this purpose one must consider the most relevant cell type in which to measure LRRK2 markers. In particular, in at least one report, specific monocyte sub-types have elevated LRRK2 in PD patients and release inflammatory cytokines to a greater extent in PD patients than in healthy controls following stimulation (Bliederhaeuser et al., 2016;Cook et al., 2017).
Given this connection with disease, it is possible that in studies focused on patient stratification, purified monocytes may be the most relevant cell population to examine when developing blood-based markers of increased LRRK2 pathway activity in PD. Thus far, a broad characterization of LRRK2 markers or expression in monocytes in well-powered groups of PD, non-PD, and LRRK2 mutation carriers with or without PD has not been undertaken.

Urine-Derived Exosomes

LRRK2 is present in exosomes, i.e., cell-derived extracellular vesicles (EVs) of 30-100 nm in diameter, in several biofluids including urine (Fraser et al., 2013; and our own results, Mutez et al., 2016). Proteomics screens of exosomes isolated from urine first indicated the presence of LRRK2 in urinary exosomes (Gonzales et al., 2009). Subsequently, the development of sensitive and specific anti-LRRK2 antibodies allowed confirmation of the presence of phosphorylated LRRK2 in urinary exosomes. Semi-quantitative Western blot analyses of urinary exosomes have determined that LRRK2 is present in the high pg/ml to low ng/ml range (close to 1,000 pg/ml). Double immunofluorescence labeling of extracellular vesicles with anti-LRRK2 and the exosomal marker TSG101 confirmed the identity of the vesicles containing LRRK2 (Fraser et al., 2013). In light of the gain of toxic kinase function hypothesis in Parkinson's disease, measures of LRRK2 kinase function are of particular interest, for instance the measurement of LRRK2 autophosphorylation sites, including the S1292 site, which has been robustly confirmed on endogenous LRRK2 in model systems as well as in human samples. Testing of LRRK2-S1292 phosphorylation in urine has revealed significantly elevated pS1292 levels in subjects harboring the G2019S mutation (Fraser et al., 2016a). This study also reported that among subjects with the G2019S mutation, S1292 phosphorylation is elevated in those with PD symptoms compared to those without. In a separate study, the same group showed that S1292 phosphorylation is significantly increased in idiopathic PD compared with matched healthy controls (Fraser et al., 2016b). Interestingly, this study also revealed that the severity of cognitive impairment correlates with increased S1292 phosphorylation. Furthermore, a third study by the same lab examined LRRK2 in urinary exosomes compared to CSF exosomes of the same individuals and found that the S1292-LRRK2 phosphorylation increase observed in urinary exosomes of subjects harboring the G2019S mutation is reflected by a similar increase in CSF (Wang S. et al., 2017). Interestingly, this study also observed that S1292-LRRK2 phosphorylation is significantly higher in CSF compared to urine in all subjects, suggesting a higher activation level of LRRK2 in brain compared to urine, and highlighting the need for more quantitative measures of LRRK2 function. These observations are consistent with the hypothesis that LRRK2 in urinary exosomes is modulated in disease, warranting further study of LRRK2 as a biomarker in this biofluid. However, the published results show a partial overlap in the distribution of S1292 phosphorylation levels in urinary exosomes between control and mutant/disease groups, suggesting that it is not an absolute predictor of disease. Also, it remains to be elucidated whether some of the observed differences are specific to certain ethnic groups or are (co-)dependent on additional factors such as dietary habits or sleep patterns, or additional lifestyle factors such as occupation.
Weaknesses of this approach include the fact that it is impossible to know the cell type(s) or tissues of origin of the recovered EVs present in urine; however, given the high level of LRRK2 expression in the kidney, it is likely that much of the LRRK2 detected in these samples arises from these cells. Additionally, it is possible that more subtle changes in pS1292-LRRK2 levels may be overlooked due to the reduced sensitivity and quantitative limitations inherent to Western immunoblotting. LRRK2 or LRRK2 pathway proteins in urinary exosomes also offer the possibility of monitoring the pharmacodynamic response to potential LRRK2-targeting therapeutics. According to this hypothesis, pS1292-LRRK2, pS935-LRRK2, or phospho-Rabs would be reduced in urinary exosomes following LRRK2 inhibitor treatment. This hypothesis remains to be confirmed in biofluids. A caveat to the potential use of this biospecimen source in target engagement measures is that LRRK2 release in exosomes was initially shown to be sensitive to pharmacological kinase inhibition, specifically via its interaction with 14-3-3 (Fraser et al., 2013). Thus, in samples from subjects undergoing LRRK2 kinase inhibitor treatment, the detection of exosomal LRRK2 will likely be impaired. It should be noted that these studies also revealed sex differences in LRRK2 levels in urinary exosomes. Most notably, total LRRK2 levels were found to be higher in male compared to female subjects (Fraser et al., 2016b). In addition, pS1292-LRRK2 median levels were higher in men compared to women, while the relative elevation in pS1292-LRRK2 levels for PD versus healthy subjects was greater in women than in men. Interestingly, in a different sample set from a Norwegian patient cohort, sex differences displayed a different trend, with males harboring the G2019S mutation showing higher pS1292-LRRK2 levels while the opposite held true for females (Wang S. et al., 2017).

CSF Exosomes

LRRK2 is not thought to exist as a soluble protein in CSF, which presents a challenge when interrogating its function in the CNS. Despite this obstacle, a number of studies have demonstrated LRRK2 detection in CSF after isolating small extracellular vesicles through techniques such as differential ultracentrifugation (e.g., Fraser et al., 2013;Wang S. et al., 2017). For instance, Fraser et al. (2013) showed that LRRK2 is not detectable in neat CSF, nor in the supernatant of ultracentrifuged CSF, but only in the pellet, which contains small EVs (exosomes). Following exosome enrichment, this group successfully applied Western blotting techniques to detect total LRRK2 and pS1292-LRRK2 signals, and they continue to study the biological mechanism whereby LRRK2 is introduced into these vesicles. An interesting point is that CSF pLRRK2 does not appear to correlate with urinary pLRRK2 levels, and CSF levels did not correlate with disease severity while urinary levels did (Wang S. et al., 2017). It should be noted that the CSF pS1292-LRRK2 levels became saturated (within the semi-quantitative linear range of the Western immunoblot approach) compared to urinary exosomes, complicating the analyses of potential correlations with clinical features. In terms of having a reliable biomarker endpoint that can be used in a clinical trial, exosome enrichment poses several challenges. Differential ultracentrifugation may be difficult to perform in a reproducible manner across different labs, and volume requirements are quite high (∼1 ml).
In addition, Western blotting analysis techniques are not considered amenable to the throughput and robustness requirements of a clinical trial. Therefore, additional techniques that can isolate LRRK2 in CSF without exosome enrichment/isolation (see Mabrouk et al., 2020) would be beneficial going forward.

FUTURE DIRECTIONS AND APPROACHES

Nucleic Acid-Based Approaches

GWAS analyses have revealed that LRRK2 polymorphisms are associated not only with PD, but also with other disorders including Crohn's disease and leprosy, pointing to the importance of the immune functions of LRRK2. Thus, one may expect that LRRK2 genotype stratification might help to better classify patients at higher risk of developing prominent immune phenotypes, to orient clinical trials and pharmacogenomic studies.

Genome Wide Methylation

Assuming that environmental factors may have a larger impact on sporadic PD development compared to familial PD, it is surprising that no difference is found between sporadic PD patients and patients heterozygous for a LRRK2 mutation, either in the methylation status of islands of the LRRK2 promoter in patient-derived leucocytes (Fernandez-Santiago et al., 2015) or when investigating whole-genome methylation of dopaminergic neurons generated from patient-derived induced pluripotent stem cells (iPSCs). Several interpretations might be formulated to explain this result. The role of genetics and environmental factors is proposed to explain the reduced penetrance of LRRK2. It is thus possible that patients with or without LRRK2 mutations share a similar influence of environmental factors, or that these unknown environmental factors influence numerous low-risk alleles in genes converging on LRRK2 pathways. Moreover, this same study also revealed an important PD-associated hypermethylation occurring only upon differentiation of PD patient cells into dopaminergic neurons, but not in somatic cells (Fernandez-Santiago et al., 2015). This shows that the epigenetic control of differentiation into dopaminergic neurons plays a crucial role in the development of PD phenotypes. These findings also highlight the need to explore the transcriptome expression profiles of sporadic PD and LRRK2 patients to identify biomarkers and other pathways of interest to help better understand the pathogenesis of PD.

LRRK2 RNA Expression and Splicing

The LRRK2 gene on chromosome 12q12 is composed of 51 exons. Usually, large genes are more likely to give rise to several transcripts due to alternative splicing events. The Ensembl database shows only one transcript encoding the full-length protein of 2527 AA that is supported by strong biological evidence. Other transcripts encoding proteins of 1271 AA, 454 AA, 521 AA, 206 AA, or 78 AA, as well as 3 transcripts not encoding proteins, have been proposed based on computational mapping using gene identifiers from Ensembl, Ensembl Genomes, and model organism databases. With the development of new sequencing technologies, such as RNAseq, several groups have investigated the existence of LRRK2 RNA expression and/or splicing variants in the brain and other tissues. Of interest, a quantitative trait locus (QTL) involving exons 32-33 has been found in the brain and is associated with the presence of the polymorphism rs3761863 (p.M2397T, involved in Crohn's disease), together with two additional QTLs in liver and monocytes. Nevertheless, a study by Vlachakis et al.
(2018) recently confirmed the existence of several spliced transcripts in brain, occurring at different ratios according to the brain region studied. The development of long-read transcript sequencing technologies, such as PacBio, will enable a more robust mapping and reconstitution of each LRRK2 transcript structure. Such analyses have the potential to identify a specific transcript whose expression may be used as an early biomarker of PD and that might then be easily detectable in PBMCs.

Transcriptome Analyses of LRRK2 Patients

Because of the nature of such studies (assessing changes at the transcriptional level rather than the protein level), transcriptomic analyses, in the context of PD biomarker development, are restricted to patient stratification in studies of disease severity and/or progression. These kinds of studies are not applicable to clinical trials of investigational compounds in which markers of target engagement are required. The transcriptome of blood or neurons heterozygous for LRRK2 variants has revealed numerous pathways, similar to idiopathic PD, that differ from controls. In PBMCs and dopaminergic neurons, we found a prevalent common dysregulation of translation, immune system signaling, and vesicular trafficking and endocytosis (Mutez et al., 2014). These results are supported by other observations showing that LRRK2 controls several steps of these key mechanisms, such as the phosphorylation of several proteins of the translation machinery, the eukaryotic initiation factor 4E-BP and ribosomal protein S15 (in Drosophila models), thereby deregulating translation (Martin et al., 2014). However, the exact mechanism leading to the deregulation of translation remains poorly understood. Transcriptome analyses have also highlighted deregulation of intracellular vesicle trafficking and function within the endocytic pathways. Their biological relevance has been confirmed (see above), since we know that LRRK2 phosphorylates at least 10 Rab GTPases regulating such processes as vesicular trafficking and endocytosis. The recent study of Connor-Robson et al. (2019), using both transcriptome and proteome analyses, demonstrated that 25 of the 70 Rabs are deregulated, confirming a major role of LRRK2 in endocytosis. RNAseq and microarray analyses of both PBMCs and iPSC-derived dopaminergic neurons have also demonstrated strong deregulation of the "axon guidance pathway." Ensemble of Gene Set Enrichment Analyses (EGSEA) of the integrated dataset revealed endocytosis and axon guidance as the two most significantly perturbed pathways, both of which were predicted to be inhibited in the presence of the G2019S mutation. The LRRK2-G2019S mutation has previously been demonstrated to disrupt axon guidance in iPSC-derived dopaminergic neurons (Sanchez-Danes et al., 2012;Reinhardt et al., 2013;Su and Qi, 2013;Borgs et al., 2016). Numerous reports using animal models have confirmed deregulation of axonal guidance proteins [for review, see Civiero et al., 2018]. In addition, another analysis using single-cell transcriptional profiles of LRRK2 multipotent neural stem cells revealed neuronal lineages with a signature similar to PD. Of note, among these genes, two regulate neurite extension upon down-regulation (NRSN1) or overexpression (SRRM4) (Ohnishi et al., 2017). The authors suggest that this could explain the discrepancies in results obtained on neurite outgrowth when assaying the role of LRRK2 in neurite extension (Garcia-Miralles et al., 2015).
Interestingly, deregulation of transcripts linked to mitochondrial dysfunction is also observed in the above studies (Ohnishi et al., 2017). For instance, a significant upregulation of nine mitochondrial genes was noted, emphasizing the critical role of mitochondria in the disease process. Additionally, a role for LRRK2 mutations in mitochondrial dysfunction has also been reported in other PD patient-specific human neuroepithelial stem cells. Aberrations in mitochondrial morphology and functionality were evident in neurons bearing the LRRK2-G2019S mutation compared with isogenic controls (Walter et al., 2019). Since deregulation of these pathways was also observed in blood cells, further study needs to be performed to establish whether some of these changes might be useful as PD biomarkers, giving clues for the development of novel neuroprotective therapeutics. In this context, Infante et al. (2016) compared the transcriptomes of carriers of the LRRK2 G2019S mutation (symptomatic and asymptomatic) as well as PD patients without the G2019S mutation and controls. These comparisons highlighted six deregulated genes that had previously been associated with PD risk in genome-wide association studies (Do et al., 2011;Rhodes et al., 2011;Pankratz et al., 2012;Nalls et al., 2014). Among the 58 genes deregulated in both idiopathic PD and LRRK2 patients, those involved in oxygen transport function or iron metabolism were significantly enriched, as we previously noted (Mutez et al., 2011;Mutez et al., 2014). Cell adhesion molecule perturbations were also noted in these latter studies. Deregulation of the extracellular matrix (ECM) was also noted in transcriptome profiles of iPSC-derived midbrain-patterned astrocytes from PD patients harboring the LRRK2 G2019S missense mutation (Connor-Robson et al., 2019). These data put forward the involvement of transforming growth factor beta 1 (TGFB1), an inhibitor of microglial inflammatory processes in murine models of PD (Chen et al., 2017), and matrix metallopeptidase 2 (MMP2), known to degrade α-synuclein aggregates (Oh et al., 2017).

Lipidomics

Another potential alternative LRRK2-related biomarker is BMP [bis(monoacylglycero)phosphate], also known as lysobisphosphatidic acid (LBPA), an anionic phospholipid found exclusively on the intra-lumenal vesicles of late endosomes and lysosomes (Bissig and Gruenberg, 2013). BMP can also be secreted into biofluids, where it may be enriched on exosomes or apolipoprotein particles, like HDL (Grabner et al., 2019). BMP promotes electrostatic interactions between intralumenal vesicles and lysosomal lipases and their regulators (e.g., saposins) in order to facilitate glycosphingolipid degradation (Gallala and Sandhoff, 2011). BMP di22:6 levels increase dramatically in urine from patients with the lysosomal storage disorder Niemann-Pick type C, highlighting the translatability of this biomarker as an indicator of changes in lysosome function in vivo (Liu et al., 2014). Several reports have now firmly demonstrated that LRRK2 activity modulates BMP levels in urine, providing key foundational evidence linking LRRK2 to lysosome function. Cynomolgus monkeys treated with the LRRK2 inhibitors GNE-7915 and GNE-0877 for 7 and 29 days showed a dose-dependent decrease in urine BMP after 29 days of dosing. This effect was recapitulated in LRRK2 KO mouse urine, demonstrating that the effect is on-target (Fuji et al., 2015). Similarly, a recent study by the group of Alcalay et al.
(2019) showed that LRRK2 carriers had elevated urinary di-22:6-BMP levels, suggesting a link between LRRK2 and lysosomal function. While BMP reduction in urine represents on-target pharmacology of LRRK2 inhibitors, much work remains to fully understand the dynamics and biological significance of this biomarker. Studies of BMP reductions in urine following LRRK2 inhibitor treatment have focused on time points of maximal inhibition or on long-term recovery time points, so we do not have a good understanding of the time course or dose dependence of BMP reduction relative to other measures of LRRK2 inhibition such as pS935 or pRab10 (Fuji et al., 2015). Additionally, the mechanism by which LRRK2 loss of function leads to changes in BMP species at the cellular level is currently unknown, confounding our understanding of the biological effects of changes in BMP in biofluids. Measures such as this not only give investigators new directions in understanding LRRK2 biology but may also serve as potential biomarkers in clinical trials. Future lipidomic studies examining the relationship between LRRK2, GBA and lysosomal function will help define common mechanisms of genetic PD. In addition, other LRRK2 interactors have been discovered which may have value as biomarkers of LRRK2 function, such as 14-3-3 (Nichols et al., 2010).

BOX 1 | Outstanding issues.
1. In general, studies reporting differences in specific biomarkers in PD patient groups compared to healthy controls are still few in number. It therefore remains important to verify whether initial findings can be broadly replicated and extended to longitudinal studies.
2. Biomarker readouts have often been assessed individually; however, it is unclear whether a single biomarker will have sufficient predictive power. One potential path to resolve this issue is to develop a scoring system that would allow researchers to combine several biomarker readouts and thereby enhance predictive power.
3. Assays used to assess the biomarker potential of LRRK2 and LRRK2-related measures have often been low-throughput assays in research laboratories (e.g., Western immunoblotting). For the most promising biomarkers, there remains a need for higher-throughput robust assays that can be deployed broadly in clinical laboratories.
4. Our understanding of LRRK2 pathways has increased considerably in the last half decade. Besides the kinase substrates that have begun to be considered, several other partners in these pathways remain to be assessed as potential PD biomarkers.
5. Similarly, LRRK2 phosphorylation has been intensely studied for a limited number of phosphosites (particularly S935 and S1292); it remains to be assessed what added value other, less studied sites may have as PD biomarkers.
6. Besides potential LRRK2-related biomarkers that have emerged from proteomic and phosphoproteomic studies, other omics approaches, including lipidomics and transcriptomics, have begun to point to additional potential biomarkers that require further assessment.

Alternate Sample Types: Gut and Saliva

Besides improvements in detection or exploitation of additional markers in the LRRK2 pathway, additional avenues can be opened by studying alternate sample types. Besides urine, PBMCs or CSF, other types of human samples may be of interest for monitoring LRRK2 or LRRK2 pathway proteins as disease or pharmacodynamic biomarkers. Interestingly, the presence of LRRK2 has been confirmed in both the enteric nervous system (ENS) and epithelial gut cells. In the enteric nervous system, Maekawa et al.
(2017) report LRRK2 expression in the myenteric plexus of the small intestine. These observations may be of interest in relation to the gut-brain hypothesis of PD pathology, whereby the GI tract is considered a trigger site of PD pathological processes (e.g., Santos et al., 2019). In relation to this hypothesis, α-synuclein-positive structures can be found in neurons of the submucosal plexus of sporadic PD patients, and these structures are similar in LRRK2-G2019S PD subjects (Rouaud et al., 2017). Further work will be required to establish whether LRRK2 expression in the ENS is limited to the myenteric plexus or whether LRRK2 is also expressed in the submucosal plexus or in enteric glial cells (Derkinderen, 2017). It also remains to be determined whether LRRK2 may contribute to α-synuclein pathology in the ENS and/or to the transmission of pathological α-synuclein species from the ENS to the CNS. An additional link of LRRK2 with the gut is the expression of LRRK2 in epithelial gut cells, including Paneth cells (Zhang et al., 2015). This pattern of expression may be related to the finding that genetic association studies have identified LRRK2 as a risk factor for inflammatory bowel disease (Crohn's disease, CD) (e.g., Derkinderen and Neunlist, 2018; Hui et al., 2018; Ridler, 2018). Studies of LRRK2 KO mice have shown that LRRK2 in Paneth cells is involved in the lysosome sorting process that protects against enteric infection, pointing to a potential pathological mechanism for Crohn's disease involving LRRK2 in Paneth cells (Rocha et al., 2015). It is also possible that LRRK2 gut expression may affect digestive tract symptoms that are very common in PD, such as constipation. In the few studies focusing on non-motor symptoms, the frequency of such GI complications is similar between LRRK2-PD and iPD (e.g., Gaig et al., 2014). It remains to be elucidated whether levels of LRRK2, phospho-LRRK2 or LRRK2 pathway proteins are affected in CD or PD at the level of the GI tract. A practical consideration here is the invasiveness of collecting gut samples for diagnostic purposes. The procedure, performed via endoscopy, is considered moderately invasive and is used on a routine basis to diagnose digestive disorders such as colorectal cancer, inflammatory bowel disease and peptic ulcer; its application for Parkinson's disease is therefore feasible (Corbille et al., 2016). Saliva is also considered a valuable biofluid for biomarker analysis and has specifically been highlighted for its potential for PD biomarkers. Indeed, saliva is an attractive biofluid for diagnostics, especially in the elderly, as its collection is much less invasive than that of other sample types. To date, the primary use of saliva has been as a source of DNA for genetic testing. Despite the growing interest in saliva as a biomarker fluid, little has been done to analyze LRRK2 protein or LRRK2 pathway proteins in saliva. Recently, proteomic analyses have revealed that LRRK2 is detectable in saliva as one of more than 2,000 confidently identified proteins (Pappa et al., 2018). Further research should now be performed to develop robust and quantifiable detection methods for LRRK2 in saliva and to assess LRRK2 and phospho-LRRK2 levels in patient groups compared to controls.

CONCLUSION

As we have outlined in the sections above, there are a great many options already available for the interrogation of LRRK2 and LRRK2-related pathways as tools in the clinical setting.
We have summarized the current state of biomarker development and use in Table 1, and key outstanding issues are highlighted in Box 1. For example, as LC-MS instrumentation manufacturers continue to make gains in sensitivity, ease of use and robustness, more discoveries will be made, leading to novel biomarkers to advance clinical-stage programs. Mass spectrometry will continue to play an important role both in LRRK2 biomarker discovery and in LRRK2 clinical development. These techniques have also been used to identify novel phosphorylation sites on the LRRK2 protein (Greggio et al., 2009). From an exploratory perspective, the evolution and adoption of LC-MS techniques has proven to be extremely powerful, with the discovery of the Rab proteins as bona fide substrates of LRRK2 kinase activity (Steger et al., 2016), and in just a few short years Rab10/pRab10 measurements have been introduced as a clinical endpoint in a LRRK2 therapeutic trial. Finally, as should be clear from the literature reviewed here, while the field has made great advances in the use of LRRK2-targeted biomarkers as measures of target engagement (i.e., for small-molecule inhibitors of LRRK2 kinase), much work remains in optimizing the interpretation of these outcome measures for use in staging the disease, tracking progression, predicting pheno-conversion (in carriers of specific mutations), or as a tool to confirm the diagnosis of PD. For this aspect to be developed, larger multi-cohort longitudinal studies will be required, assessing multiple readouts for the presence of correlations with specific clinical features at various disease stages.
Potential of yellowfin tuna catch in East Java-Indian Ocean based on length frequency and age distribution

Indonesia's tuna catch is the largest in the world, contributing approximately 16 percent of the world's tuna supply. An investigation of the length frequency distribution of tuna caught in the Indian Ocean was conducted to assess Indonesia's fisheries potential, including yellowfin tuna (YFT). The results provide useful data for estimating fish growth as a fundamental input for YFT stock assessments and for managing exploitation. A total of 203 YFT samples were collected from the fish landing area at Sendang Biru, Malang, East Java, Indonesia, during April and May 2017. The study found that YFT in the East Java-Indian Ocean were dominated by the 151-180 cm length group (53.2%), followed by the 121-150 cm length group (33.5%), with estimated ages of 4.1-6.0 years and 3.1-4.0 years, respectively. Seventeen individual YFT of juvenile size, in the 31-60 cm length group, were found (8.4%), while the maximum size group (181-210 cm) was reported at 4.4%. The variation in YFT length and age indicates variation in trophic level in East Java-Indian Ocean waters, which is very important for sustainable fisheries.

Introduction

Indonesia is an archipelagic country dominated by sea area (76.94%, or 6,653,341,439 km2) [1], which opens great opportunities for the development of the fisheries sector, including tuna. Indonesia's tuna catch is the largest in the world, contributing about 16% of the world's total tuna supply [2]. As a top predator in the ocean, tuna plays an important role in marine ecosystems and helps meet worldwide protein requirements [3]. One of the most abundant and economically valuable tuna species caught in Indonesia is YFT (Thunnus albacares), whose catch reached 65,686 metric tons in 2014 [2] and [4]. The geographic position of Indonesia, including the East Java-Indian Ocean, is suitable for YFT breeding, and hence Indonesia supplies high-quality YFT to the global market [5]. Moreover, the YFT catch has increased in recent years. Due to this potential, regulations on yellowfin exploitation have been issued by the Ministry of Marine Affairs and Fisheries to manage capture, together with the Sustainable Fisheries Partnership (SFP) of Indian Ocean Longline Tuna [2]. Accordingly, a stock assessment for the management of YFT exploitation in the East Java-Indian Ocean is required. The study of length frequency and age distributions of the catch can directly provide useful information on the condition of the fish [6] and important data for estimating fish growth as a fundamental input for fish stock assessments and the management of exploited species [7] and [8], as well as for predicting future yields, the sustainability of biomass levels and the value of the catch [9]. Furthermore, this study provides fisheries data that contribute importantly to a sustainable YFT catch in Indonesia.

Time, Place and Research Sample

Sampling sites were determined based on current information from the forecast map of fishing areas for the Java, Bali and Nusa Tenggara Region produced by the Ministry of Marine Affairs and Fisheries (KKP) in 2017. The sampling covered the South Coast of East Java, with technical consideration of accessibility. In this study these criteria were represented by the Sendang Biru coastal area, Malang Regency, East Java, Indonesia (Figure 1).
Sendang Biru is a well-known fisheries area that produces the best handline tuna in Indonesia [10]. Samples of YFT were collected from the fish landing area at Sendang Biru during April and May 2017.

Analysis of YFT body length and age frequency distribution

Body length was measured using the morphometric method commonly applied in fisheries research [13]. Body length and weight of YFT were recorded on site at the Sendang Biru fish landing area. The correlation among biometric data was analyzed using a scatter plot in Excel. Incomplete YFT length/weight data were estimated using length-weight relation analysis according to Costa et al. [14], based on the usual power relation W = aL^b, where W is weight, L is fork length and a and b are fitted constants. The observed and estimated length-weight data were then analyzed using the FREQUENCY function in Microsoft Excel. Frequencies computed within fixed ranges were used to create a graph showing the frequency distribution of YFT body length. The frequency distribution of body length was then used for age estimation, calculated according to the YFT growth equation of Costa et al. [14].

Additional data on tuna catch fisheries

Monthly and annual tuna catch data were analyzed based on secondary fisheries data from a local institution, the Pondok Dadap Office of Maritime Affairs and Fisheries. Data on the YFT fishing equipment used in Sendang Biru, as well as tuna catch locations, were obtained through interviews with fishermen landing at Sendang Biru.

Indonesia has two seasons, wet and dry. In Java, the wet season occurs between September and March and the dry season from March to September. The average annual rainfall for Indonesia is approximately 3,175 mm; however, the eastern tip of Java tends to be dry, with rainfall decreasing to less than 1,000 mm [16]. These differences in trend might reflect disruption of the physical environment due to climate patterns, which then affects natural stock behaviors such as migration and egg spawning, decreasing the number of fish juveniles each year. For example, the annual fish stock of Pacific hake has differed by more than 100 times from year to year [17].

The correlation between body length and weight of YFT catch

The length-weight correlation, calculated using the complete length-weight data from 80 individual YFT samples, is presented in Figure 3. It demonstrates a strong positive correlation between body length and weight (R² = 0.92). According to ICCAT, there is no significant size-weight difference between the sexes, hence the length-weight relation was applied to both sexes. Length-weight relationships have been devised to obtain better estimates of catches in round weight from landed and processed catches [18]. Meanwhile, the 123 individual YFT samples that were not completely measured (with only one of length or weight recorded) were estimated using the length-weight correlation of Costa et al. [14] (110-170 cm). YFT with a fork length of 50 cm remain in coastal areas and show moderate migratory habits (30 miles). Pre-adult YFT migrate with patterns similar to those of juveniles, while adults make trophic migrations toward higher latitudes during the summer and migrations across the ocean [19]. The percentage body length and age frequency distributions of the 203 individual YFT samples are shown in Figure 4 and Table 1.
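To make the length-weight workflow above concrete, the following minimal Python sketch fits the power relation W = aL^b on the log-log scale, uses the fit to estimate the weight of samples with only a length record, and bins lengths into the 30 cm groups of Table 1. The data are synthetic and the fitted a and b stand in for the Costa et al. [14] coefficients, which are not reproduced in this excerpt; ages would likewise be read off their growth equation, so no age parameters are invented here.

```python
import numpy as np

# Synthetic stand-in for the 80 complete length-weight records.
rng = np.random.default_rng(1)
length_cm = rng.uniform(40, 190, 80)
weight_kg = 2.2e-5 * length_cm**2.9 * rng.lognormal(0, 0.05, 80)

# Fit W = a * L^b by ordinary least squares on the log-log scale.
b, log_a = np.polyfit(np.log(length_cm), np.log(weight_kg), 1)
a = np.exp(log_a)

def estimate_weight(L):
    """Estimate weight for samples where only length was recorded."""
    return a * L**b

# Bin lengths into 30 cm groups (31-60, ..., 181-210 cm) as in Table 1.
edges = np.arange(31, 212, 30)
counts, _ = np.histogram(length_cm, bins=edges)
percent = 100 * counts / counts.sum()
for lo, hi, p in zip(edges[:-1], edges[1:] - 1, percent):
    print(f"{lo}-{hi} cm: {p:.1f}%")
```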
The study found that YFT in the East Java-Indian Ocean were dominated by the 151-180 cm length group (53.2%), followed by the 121-150 cm group (33.5%), with estimated ages of 4.1-6.0 years and 3.1-4.0 years, respectively. Both groups were categorized as adult sized (Table 1). Seventeen individual YFT of juvenile size, in the 31-60 cm length group, were found (8.4%), while the adult maximum size group (181-210 cm) was found at 4.4%. As previously described, each age category makes trophic migrations in different, and possibly intersecting, areas. Interviews with local tuna fishermen indicated that they generally fish at approximately 30 miles offshore, where mostly adult YFT are available. The variation in YFT length and age indicates variation in trophic level in East Java-Indian Ocean waters, which is very important for sustainable fisheries. Length frequency data may be used to estimate catch-at-age, which is one of the important components of fisheries assessment [20]. Moreover, the age distribution of the fish is an important aspect of stock prediction, including sexual maturity and the numbers of parent and young fish, and it helps to estimate and manage the condition of fish populations in future years. Fish stock assessments provide the information necessary to make sound decisions and therefore support sustainable fisheries [16] and [21].

Conclusion

The YFT in the East Java-Indian Ocean were dominated by adult sizes (91.1%), comprising the adult size groups 151-180 cm (53.2%), 121-150 cm (33.5%) and 181-210 cm (4.4%). The rest of the YFT catch (8.9%) comprised juveniles and pre-adults of various sizes. The diversity of size and age, the high frequency of adult YFT and the low frequency of juvenile/pre-adult YFT reflect the sustainability of the YFT catch from both economic and ecological perspectives.
In silico prediction and interaction of resveratrol on methyl-CpG binding proteins by molecular docking and MD simulations study

Resveratrol enhances BRCA1 gene expression, and the MBD family of proteins bind to the promoter region of the BRCA1 gene; however, the molecular interaction has not yet been reported. Here we have analyzed the binding affinity of resveratrol with MBD proteins. Our results suggest that resveratrol binds to the MBD proteins with the highest binding affinity toward the MeCP2 protein (ΔG = −6.5), sharing four hydrogen bonds as predicted by molecular docking studies. Further, the molecular dynamics simulation outcomes showed that the backbones of all three protein-ligand complexes are stabilized after 75 ns, constantly fluctuating around deviations of 0.4 Å, 0.5 Å and 0.7 Å for MBD1, MBD2 and MeCP2, respectively. The intermolecular hydrogen bonding trajectory analysis for the protein-ligand complexes also supports the strong binding of the MeCP2-resveratrol complex. Further, binding free energy calculations showed binding energies of −94.764 kJ mol−1, −53.826 kJ mol−1 and −36.735 kJ mol−1 for the MeCP2-resveratrol, MBD2-resveratrol and MBD1-resveratrol complexes, respectively, which also supported our docking results. Our study also highlights that the MBD family of proteins forms binding interactions with other signaling proteins that are involved in various cancer initiation pathways.

Introduction

DNA methylation, histone modification, nucleosome remodeling and RNA-mediated targeting proteins regulate many biological processes that are not only essential for normal development and gene expression but also fundamental to the genesis of cancer. 1 Epigenetic modification plays an important role in the regulation of transcription, DNA repair and replication. 2 Alterations in chromatin regulation, expression patterns or the genome can lead to the induction and maintenance of various cancers. [2][3][4][5] The presence of an mCpG dinucleotide in a DNA sequence directly inhibits transcription, or it recruits proteins that specifically recognize methylated DNA and initiate the remodeling of euchromatin into a heterochromatin structure in the genome, forming a spatial obstacle that prevents transcription factors from binding to promoter sequences. The DNA methylation pattern is believed to be 'read' by a conserved MBD family of proteins. 6,7 These proteins share a common motif, the methyl-CpG binding domain (MBD). 8,9 Currently, the NCBI Conserved Domain Database lists 11 human proteins containing the methyl binding domain derived from methyl-CpG binding protein 2. 10,11 Based on the presence of other domains, these are further divided into 3 groups within the MBD superfamily according to the CDD, 30 the histone methyl transferases, the MeCP2_MBD proteins, and the histone acetyl transferases. The MBD protein family includes MeCP2, MBD1, MBD2, MBD3, MBD4, and the uncharacterized Kaiso complex, which binds to methylated DNA. MBD1 binds to symmetrically methylated CpG dinucleotides and inhibits gene expression by blocking the interaction of transcription factors with the promoter. 12,13 The MBD1 protein is the largest member of the family of proteins. It has a complex expression profile, as there are 13 isoforms of the gene, expressed on chromosome 18. The main difference between the isoforms is the presence of 2 or 3 CXXC-type zinc fingers in the protein.
14 The isoforms containing the first 2 CXXC domains preferentially repress methylated promoters, whereas those with the third CXXC domain are capable of DNA binding regardless of methylation status. 15,16 MBD2 may bind to methylated DNA and mediates the methylated-DNA binding functions of 2 different transcriptional repressor complexes, MeCP1 and Mi-2/NuRD. [16][17][18][19] Both of these complexes use MBD2 to direct HDACs and chromatin remodelers to methylated promoters, where they effect transcriptional repression (Fig. S1). Once again, this protein has been shown to silence genes in a variety of cancers: colorectal, lung, prostate, and renal cancer. [20][21][22][23][24][25][26] The structures of MBD motifs from three different MBD proteins have been solved, and their overall similarity indicates that all MBD-containing proteins are likely to adopt a similar folding of the protein chain. [27][28][29][30] The MBD forms a wedge-shaped structure composed of a β-sheet superimposed over an α-helix and loop. Amino acid side chains in two of the β-strands, along with residues immediately N-terminal to the α-helix, interact with the cytosine methyl groups within the major groove, providing the structural basis for selective recognition of the methylated CpG dinucleotide. 31,32 Resveratrol is the common term for 3,5,4′-trihydroxystilbene (Fig. S2A), which is produced naturally by several plants in response to injury or when the plant is under attack by pathogens such as bacteria or fungi. 33,34 Food sources of resveratrol include the skin of grapes, blueberries, raspberries and mulberries. 35 Resveratrol was first reported to exert anti-tumor activities in 1997. 33 Since then, the antioxidant, anti-inflammatory, anti-proliferative and anti-angiogenic effects of resveratrol have been widely studied. It has been shown to exhibit anti-oxidative and anti-inflammatory activity and to reverse the effects of aging in rats. 36 Resveratrol suppresses the proliferation of several types of cancers, such as colon, breast, pancreatic, prostate, ovarian and endometrial cancers, as well as lymphoma, and affects diverse molecular targets. 33 Resveratrol has been used in many studies not only for its preventive effects but also for its anti-tumor effects against various cancers and its ability to modulate cell proliferation, apoptosis, metastasis and invasion. 37 Resveratrol is found widely in nature, and a number of its natural and synthetic analogues and their isomer adducts, derivatives and conjugates are available. [38][39][40] It is an off-white powder (extracted with methanol) with a melting point of 253-255 °C and a molecular weight of 228.25. Resveratrol is insoluble in water but dissolves in ethanol and dimethylsulphoxide. 41 It was reported earlier that resveratrol enhances BRCA1 gene expression in breast cancer cells and that MBD proteins bind to the BRCA1 promoter region. 42,43 However, the molecular interaction or mechanism has not yet been reported. In this study, we have used MBD proteins as emerging biomarkers. We have analyzed the binding affinity of resveratrol for MBD proteins, obtaining high binding affinity scores of resveratrol against the MBD1, MBD2 and MeCP2 proteins by docking. The study was then followed by MD simulation and binding free energy calculations for these protein-ligand complexes.
Additionally, protein-protein interaction analysis of MBD proteins with their neighboring counterparts has been carried out to identify crucial interacting signaling proteins which are directly or indirectly involved in cancer initiation pathways.

The chemical structure of resveratrol (PubChem CID: 445154) was retrieved from the PubChem compound database. The biomolecule visualizer Chimera 3.1 was used to optimize the molecule and convert it into PDBQT file format. 44 Further, for the docking studies, the PDB coordinates of the MBD1, MBD2 and MeCP2 proteins and the resveratrol molecule were optimized using the protein visualization tool UCSF Chimera (adding missing residues, minimizing the energy level and removing unwanted molecules). The optimized three-dimensional coordinates of all proteins were saved at a minimum-energy, stable conformation. Chimera allows for the building of chemical structures, visualization, molecular analysis and structure optimization (Fig. S3). 45

Prediction of binding sites

COACH is a meta-server approach to protein-ligand binding site prediction. Starting from the given structure of the target proteins, COACH generates complementary ligand binding site predictions using two comparative methods, TM-SITE and S-SITE, which recognize ligand-binding templates from the BioLiP protein function database by binding-specific substructure and sequence profile comparisons. These predictions are combined with results from other methods (including COFACTOR, FINDSITE and ConCavity) to generate final ligand binding site predictions. The COACH algorithm was ranked as the best method in the weekly CAMEO ligand binding site prediction experiments. 46,47

Molecular docking between MBD proteins and resveratrol molecule

Molecular docking studies were performed to understand the binding affinity behavior of resveratrol with the MBD1, MBD2 and MeCP2 proteins. In the present study AutoDock Vina was used to perform the docking. The molecular visualization tool Chimera was used to visualize the detailed protein-ligand binding interactions and to produce the final images. We created an active binding grid by specifying the center position and size on the X, Y and Z axes in AutoDock Vina. For MBD1 the grid was centered at (−45.7294, 14.6662, −1.54214) with a grid size of 24.0191 × 33.5461 × 37.6976. For MBD2 the grid was centered at (−47.7262, 13.1876, −0.072281) with a grid size of 32.647 × 35.0381 × 37.7098. For MeCP2 the grid was centered at (6.51469, −5.95367, −19.5398) with a grid size of 30.7524 × 46.6142 × 34.3735. Further, we added charges and H-bonds on the receptor protein for a stable conformation and energy minimization. Finally, docking was run with AutoDock Vina to calculate high-affinity docking scores and optimize the final binding pose of the protein-ligand complexes. The energy of interaction of resveratrol with the MBD1, MBD2 and MeCP2 proteins is assigned at each "grid point", and at each step of the simulation the energy of interaction between protein and ligand was evaluated using atomic affinity potentials computed on a grid. The remaining parameters were set to their defaults. 48 Further, PyMOL software was used for visualizing protein-ligand binding interactions and for calculating their hydrogen bond lengths. We used the UCSF Chimera structure analysis tools to compare protein conformational change and sequence similarity after ligand binding on the MBD protein structures.
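As an illustration of the grid setup described above, the short Python sketch below writes an AutoDock Vina configuration file using the MeCP2 grid center and size quoted in the text. The receptor, ligand and output file names are hypothetical placeholders, and exhaustiveness and num_modes are shown at their common default values rather than values reported by this study.

```python
# Hypothetical file names; grid values are the MeCP2 settings quoted above.
config = """\
receptor = mecp2.pdbqt
ligand = resveratrol.pdbqt

center_x = 6.51469
center_y = -5.95367
center_z = -19.5398

size_x = 30.7524
size_y = 46.6142
size_z = 34.3735

exhaustiveness = 8
num_modes = 9
out = mecp2_resveratrol_out.pdbqt
"""

with open("vina_mecp2.conf", "w") as fh:
    fh.write(config)
# Then run on the command line with:  vina --config vina_mecp2.conf
```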
Molecular dynamics simulations and binding free energy calculations of protein-ligand complexes

Molecular dynamics (MD) simulations of all three protein-ligand complexes, MBD1-resveratrol, MBD2-resveratrol and MeCP2-resveratrol, were performed with the GROMACS 2019 package using the GROMOS96 43a1 force field. The topology parameters for resveratrol were determined using the PRODRG server (https://prodrg1.dyndns.org/). The topology parameters of the three proteins and resveratrol were merged to build the topologies of the MBD1-resveratrol, MBD2-resveratrol and MeCP2-resveratrol complexes to initialize the next stage of simulations. Each complex was centered in a dodecahedron box, maintaining a distance of 1.2 nm from the wall, and the boxes were solvated with explicit water using the TIP3P model. 49 The solvated systems were electrically neutralized by adding sodium ions (NaCl) at a concentration of 0.1 M to all systems. 50 Prior to the MD simulations, the solvated systems were minimized using 1000 steps of the steepest descent algorithm followed by a conjugate gradient algorithm, allowing the whole complex environment to relax by removing close contacts. 51 The LINCS algorithm was used to constrain bond lengths and bond angles, and the time step throughout the MD simulations was set to 2 femtoseconds (fs). 52 The Particle Mesh Ewald (PME) algorithm was used to calculate long-range electrostatic interactions, and the cut-off for non-bonded van der Waals interactions was set to 10 Å. 53 The heavy atoms were restrained during equilibration at a constant temperature of 300 K and 1 atmosphere of pressure for 1 nanosecond (ns), using the method of Parrinello and Rahman. 54 Before the production MD run, the systems were equilibrated in the NPT ensemble for 2 ns; all position restraints were then removed and the data were saved at intervals of every 10 picoseconds (ps). Finally, all minimized and equilibrated complex systems were subjected to a final MD simulation run of 100 ns. The trajectory measures, i.e., root mean square deviations (RMSD), root mean square fluctuations (RMSF) and others, were analyzed using GROMACS scripts and the VMD molecular visualization tool. 54,55 The RMSF profile curves were calculated on the basis of Cα-atom superimposition of the proteins. The graphical tool XMGRACE was used for plotting the trajectories. 56 Further, binding affinity analysis between the inhibitor and receptor molecules was performed for all three simulated complexes. In the present study we performed binding free energy calculations to analyze the binding affinity of our inhibitor with respect to the three receptor molecules, using the MM-PBSA method as implemented for GROMACS. The Python script MmPbSaStat.py and the graphical tool XMGRACE were used for the final statistical analysis of the binding energy calculations and the trajectory analysis, respectively. 57,58

Protein-protein interaction analysis

For the protein-protein interaction study, MBD1, MBD2 and MeCP2 were analyzed using the STRING 10.5 online database (https://string-db.org/). MBD proteins of Homo sapiens origin were selected as the input for the protein-protein network study.
Finally, the interacting protein partners of the MBD proteins were predicted for further protein-protein interaction analysis. 59

Sequential and structural analysis of MBD proteins

The three-dimensional structures of MBD1 (PDB ID: 6d1t), MBD2 (PDB ID: 6c1a) and MeCP2 (PDB ID: 5bt2) were retrieved from the Protein Data Bank (https://www.rcsb.org/pdb/home/home.do) (Fig. S3B). The 3D structures of the MBD proteins are composed of a β-sheet superimposed over an α-helix and loop. The MBD1 protein is a monomer comprising chain A. The MBD1 primary structure comprises 79 amino acids and a conserved MBD motif, along with CXXC-type 1, type 2 and type 3 domains. This protein is rich in proline and also comprises the TRD region. MBD2 is a homotetramer comprising four chains, A, B, E and F, each with a sequence length of 79 amino acid residues. This protein is rich in glycine and arginine, MBD motifs (CXXC) and TRD regions. Additionally, we observed that MeCP2 is also a monomer comprising chain A, with a sequence length of 97 amino acids. MeCP2 has one MBD motif, a Pro-rich region and a TRD region (Fig. S1). The TRD domain includes a nuclear localization signal. These proteins have a motif region which interacts with other proteins in the nucleus, forms a complex with them and then binds to a specific region of the DNA sequence, as well as interacting with histone proteins; through this binding, gene expression is repressed. The MBD1 protein has chain A and an MBD motif, CXXC-types 1, 2 and 3, Pro-rich and TRD sequences (Fig. S1). Its repressive activity is reported to be mediated by methylation of lysine 9 (K9) of histone H3 through SETDB1 histone methyl transferase (HMT) recruitment. 60 It interacts with Suv39h, another HMT that methylates K9 of histone H3. 61 Isoforms containing CXXC3 are able to bind unmethylated DNA, so these proteins repress the transcription not only of methylated sequences but also of unmethylated regions. 62 MBD1 has been shown to be significantly associated with lung cancer and human pancreatic carcinomas. Elevated expression of MBD1 showed an association with lymph node metastasis. 63 Loss of MBD1 function could affect the normal regulation of gene expression through a lack of gene suppression. 12,64 The role of MBD1 in gene regulation is confirmed by in situ hybridization and RNAi. 64,65 Knockdown of MBD1 inhibited cell proliferation and invasion, and induced apoptosis, in pancreatic cancer cells. 63 MBD2 has A, B, E and F chains which contain glycine-rich regions; the arginine-rich region also has one MBD and one TRD motif in its sequence (Fig. S1). The repressive activity of MBD2 is mediated by MeCP1, an ATP-dependent chromatin remodeling complex formed by MBD2 and the Mi-2/NuRD complex. 66 It is mainly associated with colorectal cancer, stomach cancer and breast cancer. 67,68 MBD2 deficiency also dramatically reduced tumorigenesis and extended life span in an in vivo model. 69,70 It has been shown that MBD2 can also bind to unmethylated DNA to cause changes in gene expression. 63 MeCP2 has chain A and an MBD motif, a Pro-rich region and a TRD region (Fig. S1). The TRD domain includes a nuclear localization signal. Repression by MeCP2 is mediated by the recruitment of chromatin remodeling complexes to methylated DNA sequences. The TRD domain interacts with Sin3A, a complex containing the histone deacetylase enzymes HDAC1 and HDAC2.
Histone deacetylation is not the only way in which MeCP2 represses transcription and establishes heterochromatin formation; it is also known that MeCP2 interacts with a complex containing histone H3 lysine 9 methyl transferase activity. 71 MeCP2 is involved in cancer by binding to the hypermethylated regions of the promoters of tumor suppressor genes, thereby causing their subsequent repression in breast, prostate, lung, liver and colorectal cancer, and elevated expression of this gene has been reported in different cancers. Loss of MeCP2 function has been reported to inhibit cell proliferation and increase apoptosis of prostate cancer cells in vitro. 72,73 In addition, treatment with several natural compounds has been shown to downregulate the elevated MeCP2 expression in prostate and breast cancer cells in vitro. 74,75

Analysis of binding cavity

Prediction of the consensus ligand-binding amino acids on the MBD protein sequences was performed with the COACH online meta-server, which predicts ligand-binding amino acids (Table 1) based on comparative methods. Amino acids present in the ligand-binding site of the protein have a higher affinity to bind the ligand than other amino acids (Fig. 1 and 2). In the MBD1 protein, 23 residues were predicted to have ligand-binding interactions; these amino acids are located on the MBD motif present in the early part of the protein structure, whereas the TRD domain is located later on the backbone of the protein. The active site lies between the terminal loop and the β-sheet of the protein. The MBD motifs interact with the methylated DNA, and the TRD domain attaches to histone proteins, which further regulate the transcription process. The MBD2 protein was predicted to contain 26 residues with ligand-binding interactions; the MBD motif and TRD domain are jointly present within the middle of the protein backbone. The active site is formed between the β-sheet, α-helix and terminal loop of the protein structure, and the predicted amino acids fall within the early and middle parts of the protein structure. For the MeCP2 protein, 36 residues were predicted to have ligand-binding interactions, mostly present in the later part of the protein sequence. The active site is formed between the β-sheet and the terminal loop of the protein structure (Fig. S1 and 2).

Molecular docking analysis of resveratrol on MBD proteins

Molecular docking analysis of resveratrol with the three proteins MBD1, MBD2 and MeCP2 was performed to understand the binding mode of resveratrol with the three target molecules. Docking analysis allows us to calculate the relative binding affinity of resveratrol with respect to the three target proteins, which should be directly proportional to the docking score. In this respect, we observed that the predicted binding affinity score of resveratrol with MBD1 is −5. Binding of resveratrol to these amino acids will inhibit methyl domain binding and the transcription of genes in chromosomes (Fig. 3) (Table 1). Further, using the structure analysis tool in Chimera, we found that there is no conformational change in the MBD1, MBD2 and MeCP2 proteins after resveratrol binding, showing an RMSD of 0.00, and there is 100% similarity of the protein sequence between the native and ligand-bound proteins (Table 2). The RMSD trajectory analysis showed that all three complexes are stabilized after a simulation time of 75 ns, suggesting that these complexes stabilized before the end of the 100 ns simulations (Fig.
4A). Further, the dynamic nature of hydrogen bond formation between the protein-ligand complexes was analyzed to understand the comparative strength of the intermolecular hydrogen bonding between the protein-ligand complexes during the 100 ns MD simulations. In this respect, we observed that the receptor molecule MBD1 forms up to four hydrogen bonds, MBD2 up to six hydrogen bonds and MeCP2 up to seven hydrogen bonds during the 100 ns simulations (Fig. 4B). The outcomes of the dynamic-state hydrogen bonding analysis suggest that the MeCP2-resveratrol complex forms more stable hydrogen bonds than the other two proteins. It is further suggested that, in line with the maximum number of hydrogen bonds formed, the complexes might follow a similar order in their binding affinity. Today, binding free energy calculation between a receptor and an inhibitor is a popular method to estimate the binding affinity of an inhibitor for a particular receptor. Therefore, in the present study we performed binding energy calculations between the receptor molecules (MeCP2, MBD2 and MBD1) and resveratrol. The RMSD trajectory analysis outcomes of the MD simulation study suggested that all three systems reached equilibrium after 75 ns and fluctuate around 0.4 Å, 0.5 Å and 0.7 Å for MBD1, MBD2 and MeCP2, respectively, with individually constant RMSD. Since all three systems achieved equilibrium after 75 ns, the binding energy calculations for all three systems were performed on the 10 ns trajectory between 80 and 90 ns of the simulations. After performing the binding energy calculations for the three individual complexes, we compared the final binding energies of the three cases (Table 3). Our binding energy analysis showed that the inhibitor molecule resveratrol binds with the lowest binding energy of −94.76 kJ mol−1 to the receptor molecule MeCP2, followed by −53.83 kJ mol−1 for MBD2 and −36.73 kJ mol−1 for MBD1 (Table 3). This comparative analysis showed that resveratrol has the lowest binding energy with MeCP2, suggesting the highest binding affinity of resveratrol for MeCP2, with the binding affinity then following the order MBD2 and MBD1. Thus, our binding energy results follow the same pattern seen in the docking and binding interaction analyses, where we observed stronger binding interactions of resveratrol in the case of MeCP2, followed by MBD2 and MBD1. These binding energy outcomes strengthen our earlier docking studies and binding interaction results. Therefore, the combined outcomes of our docking studies, binding interaction analysis and binding energy analysis suggest that resveratrol could be an effective and putative inhibitor of MeCP2, which could be taken forward for experimental validation studies. Our results clearly show that MeCP2 forms a greater number of intermolecular hydrogen bonds and that its backbone is more stable than those of MBD1 and MBD2 over the 100 ns simulation (Fig. 4). Our binding energy calculations for the protein-ligand complexes and the MD simulation studies also confirm that resveratrol has a stronger binding affinity for MeCP2 than for the MBD1 and MBD2 proteins (Table 3).
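The RMSD traces discussed above come from GROMACS scripts and VMD, but the underlying per-frame computation is compact. Below is a minimal numpy/scipy sketch of a Cα-superposition RMSD; the coordinate arrays are synthetic stand-ins for frames extracted from the trajectory files, and the Kabsch fit is delegated to scipy's Rotation.align_vectors (available in SciPy 1.4 or later).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def ca_rmsd(ref, mob):
    """RMSD between two (N, 3) Ca coordinate arrays after optimal superposition."""
    ref_c = ref - ref.mean(axis=0)   # remove translation
    mob_c = mob - mob.mean(axis=0)
    rot, _ = Rotation.align_vectors(ref_c, mob_c)   # Kabsch-type fit
    diff = rot.apply(mob_c) - ref_c
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Hypothetical trajectory: frames[t] holds Ca coordinates at time t,
# e.g. extracted from the GROMACS trajectory with a tool of choice.
rng = np.random.default_rng(0)
reference = rng.normal(size=(97, 3))   # the MeCP2 construct has 97 residues
frames = [reference + rng.normal(scale=0.05, size=reference.shape) for _ in range(5)]
rmsd_trace = [ca_rmsd(reference, f) for f in frames]
print(np.round(rmsd_trace, 3))
```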
Protein-protein interaction analyses of MBD proteins with their neighboring metabolic pathway counterparts

Protein-protein interaction analysis of the MBD proteins with their neighboring proteins was carried out to identify the interacting partners of MBD proteins that are involved in cancer initiation and apoptotic pathways. The present study aims to understand the comparative binding affinity of MBD1, MBD2 and MeCP2 with resveratrol; therefore, we identified the key interacting protein partners that directly and indirectly interact with these three proteins. Additionally, we assume that by inhibiting these three proteins with the inhibitor molecule resveratrol we would be able to block the corresponding metabolic pathways to which these three proteins belong. Accordingly, we observed that the MBD1 protein interacts with the ATF7IP, SETDB1, SUV39H1, CHAF1A, TENM1, CBX5, SUMO1, RARA, HDAC3 and CBX3 proteins. The MBD2 protein interacts with the HDAC2, RBBP7, GATAD2A, MTA2, DNMT1, HDAC1, SIN3A, RBBP4, CHD3 and PRMT5 proteins, and the MeCP2 protein interacts with the HDAC2, RBBP7, GATAD2A, MTA2, DNMT1, HDAC1, SIN3A, RBBP4, CHD3 and PRMT5 proteins (Fig. 5). These proteins are directly or indirectly involved in the regulation of gene expression, apoptosis, cell growth and proliferation, and signaling pathways in cancer cells. Our protein-protein interaction analysis suggests that MBD1, MBD2 and MeCP2 interact with many other proteins that are crucial for key metabolic pathways; therefore, inhibiting MBD1, MBD2 and MeCP2 with resveratrol could be a good approach to blocking the activity of the key pathways discussed above. MBD1 is a transcriptional repressor that binds to CpG islands in promoters where the DNA is methylated at position 5 of cytosine within CpG dinucleotides. MBD1 acts as a transcriptional repressor and plays a key role in gene silencing by recruiting ATF7IP, which in turn recruits factors such as the histone methyl transferase SETDB1. It probably forms a complex with SETDB1 and ATF7IP that represses transcription and couples DNA methylation with histone H3 'Lys-9' trimethylation (Fig. 5A). 76 MBD2 binds to hemimethylated DNA as well; it recruits histone deacetylases and DNA methyl transferases, acts as a transcriptional repressor and plays a key role in gene silencing. It functions as a scaffold protein, targeting GATAD2A and GATAD2B to chromatin to promote repression. It may enhance the activation of some unmethylated cAMP-responsive promoters (Fig. 5B). 77,78 The MeCP2 protein binds to methylated DNA and can bind specifically to a single methyl-CpG pair. It is not influenced by sequences flanking the methyl-CpG, mediating transcriptional repression through interaction with histone deacetylase and the co-repressor SIN3A; it binds both 5-methylcytosine (5mC)- and 5-hydroxymethylcytosine (5hmC)-containing DNA, with a preference for 5-methylcytosine (5mC) (Fig. 5C). 79 The above results highlight the importance of these MBD proteins in the regulation of gene expression and their involvement in cancer initiation pathways.

Conclusions

In conclusion, we have found that the MBD proteins are made up of an α-helix and a β-sheet; MBD1 and MeCP2 have a single chain, whereas MBD2 is made up of four polypeptide chains. These proteins have an MBD motif, a TRD and a CXXC region, which bind to histone proteins and the methylated DNA sequence and thereby regulate gene expression.
Binding analysis revealed that MeCP2 has the maximum number of amino acids interacting with the ligand compared with the MBD1 and MBD2 proteins. After docking with resveratrol and MD simulation of the protein-ligand complexes, it was confirmed that resveratrol has a higher binding affinity toward the MeCP2 protein than toward the MBD1 and MBD2 proteins. These MBD proteins interact with other signaling proteins which are directly or indirectly involved in cancer initiation pathways. The detailed mechanisms and pathways of these MBD proteins in cancer development are still unclear. Resveratrol could be used as an inhibitor of these MBD proteins, and further in vitro studies should be done to explore the effects of the MeCP2 protein in different cancer cells.

Abbreviations
CDD: Conserved Domain Database
HDAC: histone deacetylase
HMT: histone methyl transferase
MBD: methyl-CpG binding protein
MD: molecular dynamics
MeCP2: methyl-CpG binding protein 2
NuRD: nucleosome remodeling complex

Conflicts of interest

The authors declare that they have no conflicts or financial interests in the work reported in this paper.
Recurrent Florid Glandular Cystitis: Case Report (15 Years Follow Up)

Glandular cystitis is a differential diagnosis of malignant bladder tumors; only pathological examination can establish the diagnosis. Glandular cystitis has no specific clinical manifestation: the clinical signs are not very specific and suggest bladder carcinoma. The principles of treatment of glandular cystitis are to treat the causative factors, with transurethral resection of the bladder for pseudotumoral forms. The course of glandular cystitis is controversial and primarily concerns the risk of lesion degeneration; it requires regular monitoring, both radiological (CT) and biological (renal function, urinary cytology), and control cystoscopy with multiple biopsies. The lack of consensus on surveillance and the lack of perspective make it difficult to understand the evolution of this pathology. We report a case of florid glandular cystitis with 15 years of follow-up, highlighting the therapeutic and prognostic difficulties encountered during follow-up. The risk is primarily one of malignant degeneration; transformation into adenocarcinoma is exceptional and occurs when the favoring factor persists. Annual monitoring using cystoscopy with bladder biopsy is nevertheless necessary. The florid form is considerably rarer and more disabling and usually requires wide excision of the lesions. Here, we report a case of florid glandular cystitis with 15 years of follow-up, whose data are compared with those of the literature.

Case Report

We report the case of a 54-year-old patient with a history of chronic smoking, weaned one year ago. The patient has been followed since 2003 for florid glandular cystitis, with multiple transurethral resections of the bladder (TURB; 9 in total), the last dating to February 2021. In 2003, the patient had his first TURB for a bladder tumor, the pathological examination of which revealed glandular cystitis. The patient underwent annual follow-up cystoscopies; recurrences were marked by de novo onset of irritative bladder syndrome, pollakiuria, and repeated episodes of urinary tract infection. The diagnosis of recurrence was confirmed by follow-up cystoscopy, in which the recurrence was always in the trigonal region (Figure 1). The pathological examination in all TURBs was in favor of glandular cystitis (Figure 2).
Introduction

Described for the first time in 1761 by Morgagni [1], glandular cystitis is a benign metaplasia developed from von Brunn's nests (epithelial clusters included in the mucous chorion). Rare and usually asymptomatic, it is favored by chronic irritation and is sometimes associated with pelvic lipomatosis. With a reported incidence between 0.1% and 1.9%, it is a differential diagnosis of malignant bladder tumor [2]. The mechanisms of occurrence of glandular metaplasia of the bladder are still poorly understood. It generally develops in a context of chronic inflammation whose causes are variable: urinary stasis, bladder lithiasis, chronic catheterization, urinary tract infection, bladder tumor and neurogenic bladder. More rarely, other factors are noted, such as allergy, exposure to toxicants, hormonal imbalances and bladder diverticula [3]. Likewise, pelvic lipomatosis is associated with glandular cystitis in 75% to 80% of cases [4]. Uro-CT rules out or confirms the presence of pelvic lipomatosis; it can help address the obstruction earlier and therefore avoid the complications of chronic obstruction of the upper urinary tract. In general, the search for a causal factor does not always lead to its identification; this was the case with our patient, in whom no cause was found. Glandular cystitis has no specific clinical translation. The clinical signs are not very specific and suggest bladder carcinoma. Several modes of revelation are described: macroscopic hematuria; chronic irritative voiding disorders of the bladder; and, rarely, obstructive signs linked to invasion of the two ureteral meatus by a florid form, associated or not with envelopment of the pelvic ureters by associated pelvic lipomatosis [5]. Our patient presented with symptoms of bladder irritation without obstructive signs. The various explorations suggested a diagnosis of urothelial carcinoma; only the anatomopathological examination, which highlights the columnar glandular tissue in the mucosa and submucosa, provides diagnostic certainty. There are two histological types of glandular cystitis: the first, called the typical form, and the second, intestinal metaplasia of the bladder [6]. The exact proportions of these two subtypes are not known; however, studies agree that intestinal bladder metaplasia is much less common than the typical form. These two histological types are distinguished by different, even opposite, immunohistochemical profiles. The principles of treatment of glandular cystitis are to treat the causative factor, if identified, and transurethral resection of the bladder for pseudotumoral forms, which should be complete and as deep as possible; over a period of 15 years our patient had 9 TURBs, the last dating to February 2021. Finally, non-conservative surgery is reserved for complicated forms. The course of glandular cystitis is very controversial and primarily concerns the risk of lesion degeneration; it requires regular monitoring, both radiological (CT) and biological (renal function, urinary cytology), and control cystoscopies with multiple biopsies. The lack of consensus around surveillance and the lack of perspective make it difficult to know the evolution of this pathology.
Conclusion

Glandular cystitis is rare and has no specific clinical translation; it poses a problem of differential diagnosis with malignant bladder tumors, and only pathological examination allows a positive diagnosis to be made. Treatment is based on treating the causative factor, if identified, and otherwise on complete and deep resection of the lesion. Close follow-up is essential given the risk of recurrence and degeneration.

Human Subjects: consent was obtained or waived by all participants in this study. The ethics committee of the Military Hospital of Meknes issued approval (not applicable). Disclosure statement: no potential conflict of interest was reported by the author(s). Informed consent: informed consent was obtained. Ethical approval was obtained from the local ethics committee of the Military Hospital of Meknes, Morocco; the patient consented to participate in the case report and his information was handled anonymously according to ethical standards.

Figure 1: Endoscopic aspect of the bladder tumor: localization in the trigonal region.
Figure 2: Von Brunn's nests take on a cystic appearance with glandular metaplasia.
Bayesian influence diagnostics using normalizing functional Bregman divergence

Ideally, any statistical inference should be robust to local influences. Although there are simple ways to check for leverage points in independent and linear problems, more complex models require more sophisticated methods. Kullback-Leibler and Bregman divergences have already been applied in Bayesian inference to measure the isolated impact of each observation in a model. We extend these ideas to models for dependent data and with non-normal probability distributions, such as time series, spatial models and generalized linear models. We also propose a strategy to rescale the functional Bregman divergence to lie in the (0,1) interval, thus facilitating interpretation and comparison. This is accomplished with minimal computational effort while maintaining all theoretical properties. For computational efficiency, we take advantage of Hamiltonian Monte Carlo methods to draw samples from the posterior distribution of model parameters. The resulting Markov chains are then directly connected with the Bregman calculus, which results in fast computation. We check the propositions in both simulated and empirical studies.

Introduction

After fitting a statistical model we need to investigate whether the model assumptions are supported. In particular, inference about parameters would be weak if it were influenced by a few individual results. In this paper, we make use of a new diagnostic tool for detecting influential points. The idea is to adapt the functional Bregman divergence for comparing two or more likelihoods (Goh and Dey (2014)) to the context of measuring how influential each observation is in a given model. An influential point is an observation which strongly changes the estimation of the parameters. The classical example is a point which drastically alters the slope parameter in a linear regression. In Bayesian inference, our focus lies on the whole posterior distribution instead of a single parameter. Indeed, seeking leverage effects across the many parameters of a complex model seems unfeasible. Bayesian inference produces a posterior distribution based on Bayes' theorem. So, if there is a function which measures the distance between two probability densities, we can measure the distance between two posterior distributions, or between a Bayesian model and its perturbed version. The perturbed case may be the same sample without one element, if we have identically and independently distributed observations (Goh and Dey (2014)), or it may be a sample with an imputed element if we work with dependent models (Hong-Xia et al. (2016)). We can use a well-known function such as the Kullback-Leibler divergence to measure the divergence between two posterior distributions, as well as the functional Bregman divergence, which is a generalization of the former. In the applications we have in mind, the posterior distributions are not available in closed form, and we resort to Markov chain Monte Carlo (MCMC) methods to obtain approximations for the parameter estimates and for the detection of influential observations. All the necessary computations in this paper were implemented using the open-source statistical software R (R Core Team (2015)). In particular, the rstan package, which is an interface to the open-source Bayesian software Stan (Stan Development Team (2016)), was used to draw samples from the joint posterior distributions.
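Before introducing the functional Bregman divergence, it may help to see how its best-known special case, the Kullback-Leibler divergence, is estimated from MCMC output in the case-deletion setting. For i.i.d. models there is a well-known identity, KL(p(θ|y), p(θ|y_{-i})) = E[log f(y_i|θ)] + log E[1/f(y_i|θ)], with both expectations taken over the full-data posterior, so a single set of posterior draws suffices. The Python sketch below applies it to a toy normal-mean model; the draws are a stand-in for Stan/HMC output.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Posterior draws for a toy normal-mean model (known variance 1),
# standing in for MCMC output from Stan.
rng = np.random.default_rng(42)
y = np.append(rng.normal(0.0, 1.0, 99), 6.0)        # last point is an outlier
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 4000)

# log f(y_i | theta) for every posterior draw (rows) and observation (columns)
loglik = norm.logpdf(y[None, :], loc=mu_draws[:, None], scale=1.0)
S = loglik.shape[0]

# KL(p(theta|y) || p(theta|y_{-i})) = E[log f_i] + log E[1/f_i]
kl = loglik.mean(axis=0) + (logsumexp(-loglik, axis=0) - np.log(S))
print("most influential observation:", kl.argmax(), "KL =", kl.max().round(3))
```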
Stan is a computing environment for implementing Hamiltonian Monte Carlo methods (HMC, Neal (2011)) coupled with the no-U-turn sampler (NUTS), which are designed to improve speed, stability and scalability compared to standard MCMC schemes. Typically, HMC methods result in high acceptance rates and low serial correlations, thus leading to efficient posterior sampling. The remainder of this paper is structured as follows. In Section 2, the models used to illustrate the application of our propositions are briefly reviewed and the associated prior distributions are described. The Hamiltonian Monte Carlo sampling scheme is also described here. In Section 3 we introduce the functional Bregman divergence and describe its use to detect influential observations in models for both independent and dependent data. Section 4 consists of simulation studies where we perform sensitivity analysis and investigate how accurately we can detect influential observations. Section 5 summarizes empirical studies in which we illustrate our proposed methodology applied to real data. A discussion in Section 6 concludes the paper.

Models

This section describes the models which we use throughout the paper and the HMC sampling scheme adopted.

Generalized Linear Models

Generalized linear models (GLM, Nelder and Wedderburn (1972)) are used here to illustrate applications of our methods in models with non-normal distributions. Let y₁, . . . , yₙ be conditionally independent, where the distribution of each yᵢ belongs to the exponential family of distributions, i.e.

f(yᵢ | ηᵢ) = exp{ yᵢηᵢ − ψ(ηᵢ) + c(yᵢ) }.    (1)

The density in equation (1) is parameterized by the canonical parameter ηᵢ, and ψ(·) and c(·) are known functions. Also, η = (η₁, . . . , ηₙ) is related to the regression coefficients by a monotone differentiable link function such that g(µᵢ) = ηᵢ. The linear predictor is η = Xβ, where X is the design matrix and β = (β₁, . . . , βₖ) is a k-vector of regression coefficients. The likelihood function based on model (1) is given by

L(β; y) = ∏ᵢ₌₁ⁿ exp{ yᵢηᵢ − ψ(ηᵢ) + c(yᵢ) }.

This class of models includes several well-known distributions such as the Poisson, Binomial, Gamma, Normal and inverse Normal.

Spatial Regression Models

We chose to illustrate our methods using spatial regression models (SRM) as a kind of geostatistical data model (Gaetan and Guyon (2010)). The model can be represented as

zᵢ = β₀ + β₁xᵢ + β₂yᵢ + β₃xᵢyᵢ + β₄xᵢ² + β₅yᵢ² + εᵢ,    (2)

where zᵢ is the response of observation i, xᵢ and yᵢ are the coordinates of observation i, and εᵢ is an error term, usually assumed N(0, 1). Most commonly, x and y are latitude and longitude, although they could also be expressed as angles. If we assume normality of the errors, the likelihood function can be expressed as

L(β, Σ; z) ∝ |Σ|^(−1/2) exp{ −(z − Xβ)′ Σ⁻¹ (z − Xβ) / 2 },

where X is the design matrix with the following columns: ones, coordinate x, coordinate y, the interaction of x and y, squared x and squared y; the matrix Σ describes the covariance structure between the observations and z is the response vector. A suitable set of priors consists of assuming that β ∼ N(0, ηI₆) and that the variance-covariance matrix follows an inverse Wishart, Σ ∼ IW(V, k). A particular case of the SRM consists of assuming an independence variance structure, i.e. Σ = diag(σ₁², . . . , σₙ²). In this case, default prior distributions for the σᵢ² could be Inverse Gamma distributions, or Gamma distributions if we are not restricted to conjugate priors.
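To make the spatial model concrete, the following minimal sketch (our illustration, not code from the paper) builds the design matrix described above and evaluates the Gaussian log-likelihood for given β and Σ; the function name and the simulated inputs are hypothetical, with the true coefficients borrowed from Model 1 of the simulation study below.

# Minimal sketch of the SRM log-likelihood; x and y are coordinate vectors,
# z the response, beta a vector of length 6 and Sigma an n x n covariance matrix.
srm_loglik <- function(z, x, y, beta, Sigma) {
  X <- cbind(1, x, y, x * y, x^2, y^2)   # columns: ones, x, y, x*y, x^2, y^2
  r <- z - X %*% beta                    # residual vector
  ld <- as.numeric(determinant(Sigma, logarithm = TRUE)$modulus)
  -0.5 * (ld + as.numeric(t(r) %*% solve(Sigma, r)) + length(z) * log(2 * pi))
}

set.seed(1)
n <- 50; x <- rnorm(n); y <- rnorm(n)    # hypothetical coordinates
z <- 3 + 0.25 * x + 0.65 * y + 0.2 * x * y - 0.3 * x^2 - 0.2 * y^2 + rnorm(n)
srm_loglik(z, x, y, c(3, 0.25, 0.65, 0.2, -0.3, -0.2), diag(n))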
GARCH Model

The generalized autoregressive conditional heteroscedasticity (GARCH) model (Bollerslev (1986)) is the most used class of models to study volatility in financial markets. The GARCH(p, q) model is typically presented as the following sequence of equations,

yₜ = σₜ εₜ,
σₜ² = α₀ + ∑ᵢ₌₁ᵖ αᵢ yₜ₋ᵢ² + ∑ⱼ₌₁^q βⱼ σₜ₋ⱼ²,

where yₜ is the observed return at time t and αᵢ and βⱼ are unknown parameters. The εₜ are independent and identically distributed error terms with mean zero and variance one. Also, α₀ > 0, αᵢ ≥ 0, i = 1, . . . , p and βⱼ ≥ 0, j = 1, . . . , q define the positivity constraints, and ∑ᵢ₌₁ᵖ αᵢ + ∑ⱼ₌₁^q βⱼ < 1 ensures covariance stationarity of σₜ². Given an observed time series of returns y = {y₁, . . . , yₙ}, the conditional likelihood function is given by

L(θ; y) = ∏ₜ₌ₛ₊₁ⁿ (2πσₜ²)^(−1/2) exp{ −yₜ² / (2σₜ²) },

where s = max(p, q) and θ represents the set of all model parameters. In practice, to get this recursive definition of the volatility off the ground in Stan we need to impute non-negative initial values for σ. Prior distributions for the GARCH parameters were proposed by Deschamps (2006) and also used in Ardia and Hoogerheide (2010), who suggest a multivariate Normal distribution for α and β truncated to satisfy the associated constraints. However, to avoid truncation we propose a simpler approach and specify the following priors: α₀ ∼ Gamma(a₀, b₀), αᵢ ∼ Beta(cᵢ, dᵢ) and βⱼ ∼ Beta(eⱼ, fⱼ) for i ∈ {1, . . . , p} and j ∈ {1, . . . , q}, respectively.

Hamiltonian Monte Carlo

Our approach to detect influential observations relies on MCMC methods that should produce Markov chains which efficiently explore the parameter space. This motivates seeking sampling strategies that aim at reducing correlation within the chains, thus improving convergence to the posterior distribution. Hamiltonian Monte Carlo (HMC) is a recent and powerful simulation technique for when all the parameters of interest are continuous. HMC uses the gradient of the log posterior density to guide the proposed jumps in the parameter space and reduces the random walk effect of the traditional Metropolis-Hastings algorithm (Duane et al. (1987) and Neal (2011)). For θ ∈ Rᵈ a d-dimensional vector of parameters and π(θ) denoting the posterior density of θ, the idea is to augment the parameter space so that the invariant distribution is now a Hamiltonian density given by

π(θ, ϕ) = exp{ −H(θ, ϕ) } / c,

for a normalizing constant c. The Hamiltonian function is decomposed as H(θ, ϕ) = U(θ) + K(ϕ), where U(θ) is the potential energy, θ ∈ Rᵈ is the position vector, K(ϕ) = ϕ′V⁻¹ϕ is the kinetic energy and ϕ ∈ Rᵈ is the momentum vector in the physics literature. In a Bayesian setup we set U(θ) = −log π(θ). Trajectories between points (θ, ϕ) are defined theoretically by differential equations which in practice cannot be solved analytically. So, in terms of simulation, a method is required to approximately integrate the Hamiltonian dynamics. The leapfrog operator (Leimkuhler and Reich (2004)) is typically used to discretize the Hamiltonian dynamics, and it updates (θ, ϕ) at time t + ε through the following steps,

ϕ(t + ε/2) = ϕ(t) − (ε/2) ∇θU(θ(t)),
θ(t + ε) = θ(t) + ε V⁻¹ ϕ(t + ε/2),
ϕ(t + ε) = ϕ(t + ε/2) − (ε/2) ∇θU(θ(t + ε)),

where ε > 0 is a user-specified small step-size and ∇θU(θ) is the gradient of U(θ) with respect to θ. Then, after a given number L of time steps, this results in a proposal (θ*, ϕ*) and a Metropolis acceptance probability is employed to correct the bias introduced by the discretization and ensure convergence to the invariant posterior distribution.
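As an illustration, here is a minimal sketch of the leapfrog trajectory just described (ours, not the paper's code), assuming a unit mass matrix V = I so that the position update uses ϕ directly; grad_U is a hypothetical function returning ∇θU(θ).

# One leapfrog trajectory of L steps with step-size eps (unit mass matrix).
leapfrog <- function(theta, phi, eps, L, grad_U) {
  phi <- phi - 0.5 * eps * grad_U(theta)          # initial half step for momentum
  for (l in seq_len(L)) {
    theta <- theta + eps * phi                    # full step for position
    if (l < L) phi <- phi - eps * grad_U(theta)   # full momentum steps in between
  }
  phi <- phi - 0.5 * eps * grad_U(theta)          # final half step for momentum
  list(theta = theta, phi = -phi)                 # negate momentum for reversibility
}

A Metropolis step would then accept the proposal (θ*, ϕ*) with probability min{1, exp(H(θ, ϕ) − H(θ*, ϕ*))}, which corrects the discretization bias mentioned above.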
So, using Hamiltonian Monte Carlo involves specifying the number of leapfrog steps L per iteration, the step-size ε and the initial distribution of the auxiliary variable ϕ. The choice of an appropriate L which, combined with ε, does not produce a constant periodicity may be made using the No-U-Turn sampler (NUTS, Hoffman and Gelman (2014)), which aims at avoiding the need to hand-tune L and ε in practice. During the warmup the algorithm tests different values of the number of leapfrog steps and the step-size and automatically judges the best range to sample from. The basic strategy is to double L until increasing the number of leapfrog steps no longer enlarges the distance between an initial value θ and a proposed value θ*. The criterion is the derivative with respect to time of half the squared distance between θ and θ*. To define an efficient value of ε, NUTS constantly checks whether the acceptance rate is sufficiently high during the warmup. If it is not, the algorithm shortens the step-size at the next iteration (see Nesterov (2009) and Hoffman and Gelman (2014)). It is also worth noting that the Stan programming language provides a numerical gradient using reverse-mode algorithmic differentiation, so that obtaining the gradient analytically is not necessary. Finally, the distribution of ϕ is a multivariate normal with either a diagonal or a full variance-covariance matrix. The former is usually selected because the precision increase is almost irrelevant compared to the computational memory costs (Stan Development Team (2016)).

Functional Bregman divergence

The functional Bregman divergence aims at measuring dissimilarities between functions, and in particular we are interested in comparing posterior distributions. The method is briefly described here and adapted to our models for the detection of influential observations. We define (Ω, X, ν) as a finite measure space and f₁(x) and f₂(x) as two non-negative functions.

Definition 1. Let ψ : (0, ∞) → R be a strictly convex and differentiable function. Then the functional Bregman divergence D_ψ is defined under the measure ν(x) as

D_ψ(f₁, f₂) = ∫ [ ψ(f₁(x)) − ψ(f₂(x)) − ψ′(f₂(x)) (f₁(x) − f₂(x)) ] dν(x),

where ψ′ represents the derivative of ψ. This divergence has some well-known properties (see for example Goh and Dey (2014)), the proofs of which appear in Frigyik et al. (2008a,b). If ψ is the identity function, then ψ′(f(x)) = 1 for all f(x) and the functional Bregman divergence is zero for any f₁(x) and f₂(x). However, if we choose a strictly convex ψ then the Bregman divergence will always be greater than zero, except in the trivial case f₁(x) = f₂(x). Furthermore, ψ works as a tuning parameter, and by increasing its distance from the identity we can make D_ψ(f₁, f₂) as large as desired no matter the functions f₁(x) and f₂(x). In this paper, we follow the suggestion in Goh and Dey (2014) and restrict attention to the class of convex functions defined by Eguchi and Kano (2001),

ψ_α(z) = z^α / (α(α − 1)) for α ∉ {0, 1}, with the limiting cases ψ₁(z) = z log z and ψ₀(z) = −log z.

Three popular choices of α are: α = 0 (Itakura-Saito distance), α = 1 (Kullback-Leibler divergence), and α = 2 (squared Euclidean distance, or L₂/2).

Perturbation in dependent models

Here we extend the ideas in Goh and Dey (2014), where perturbation was defined in models for independent and identically distributed observations, to dependent models. A general perturbation is defined as the ratio of unnormalized posterior densities,

m_δ(θ) = f_δ(y | θ) π_δ(θ) / ( f(y | θ) π(θ) ),

where δ indicates that the likelihood and/or the prior suffers some perturbation. In particular, to assess the potential influence of any observation, the perturbation is restricted to the likelihood function while keeping the prior unaltered.
The associated perturbation is then given by

m_i(θ) = f(y₍ᵢ₎ | θ) / f(y | θ),

where y₍ᵢ₎ denotes the vector y without the ith case. In models for dependent data, however, we cannot exclude an observation without modifying the likelihood structure. In any case, the general rule to measure the local influence of the ith point is to compute the divergence between the posteriors based on f(y | θ, X) and f(y₍ᵢ₎ | θ, X),

D_ψ(p(θ | y), p_δ(θ | y)) = ∫ [ ψ(p(θ | y)) − ψ(p_δ(θ | y)) − ψ′(p_δ(θ | y)) (p(θ | y) − p_δ(θ | y)) ] dθ.

This integral, however, is analytically intractable in most practical situations and an approximation is needed. It is convenient to define the normalizing constant for p(θ | y) as

m(y) = ∫ f(y | θ) π(θ) dθ,

where m(y) is the marginal density of y, and to note that for any probability density function ω(·),

1 / m(y) = ∫ [ ω(θ) / ( f(y | θ) π(θ) ) ] p(θ | y) dθ.

So, given a sample {θₛ}ₛ₌₁^S from the posterior distribution (which could be generated by HMC), we can estimate the normalizing constant as

m̂_IW(y) = [ (1/S) ∑ₛ₌₁^S ω(θₛ) / ( f(y | θₛ) π(θₛ) ) ]⁻¹.

This is the so-called Importance-Weighted Marginal Density Estimate (IWMDE, Chen (1994)). Denoting the resulting posterior distribution as p̂_IW(θ | y) = f(y | θ) π(θ) / m̂_IW(y), the approximate perturbed posterior is given by p̂_IW,δ(θ | y) = f_δ(y | θ) π(θ) / m̂_IW,δ(y). Consequently, we can approximate the functional Bregman divergence between p(θ | y) and p_δ(θ | y) by a Monte Carlo average over the posterior sample, which for the convex functions in the Eguchi-Kano class simplifies considerably. In particular, for α = 1, which corresponds to the Kullback-Leibler divergence, we can simplify many terms of the expression and obtain

D̂_KL = (1/S) ∑ₛ₌₁^S log[ f(y | θₛ) / f_δ(y | θₛ) ] + log[ m̂_IW,δ(y) / m̂_IW(y) ].

Normalizing Bregman divergence

When using a functional Bregman divergence to evaluate influential points, each d_{ψ,i} ∈ R₊, and on this scale we might have doubts about whether one or more values are substantially higher than the others. To facilitate comparison, McCulloch (1989) proposed a calibration which compresses the scale between 0.5 and 1 by making an analogy with the comparison between two Bernoulli distributions, one of which has success probability equal to 1/2. However, extending this idea to any functional Bregman divergence and comparing an arbitrary probability distribution with a Bernoulli seems difficult to justify theoretically. Therefore, we propose a different route to compare the Bregman divergence between two densities, which we call a normalizing Bregman divergence.

Proposition 1. Given n + 1 probability functions f₀, . . . , fₙ, the sum of the n divergences ∑ᵢ₌₁ⁿ D_ψ(f₀, fᵢ) = k₀ is finite and non-negative.

Proof. There is a sequence of functions ψₘ : (f₁, f₂) → R₊, m ∈ N, which tunes the divergence intensity between any two density functions f₁ and f₂. Suppose we have a full probability density f₀ and we wish to compare it with each likelihood without the ith element, f₁, . . . , fₙ, to check local influence. We already know that each divergence is positive, so the sum of the n divergences belongs to the positive real domain, where k₀ = 0 if and only if ψ is the identity, but it could also be arbitrarily high as ψ becomes more and more convex. In particular, k₀ may be one.

Proposition 2. Given n + 1 probability functions f₀, . . . , fₙ, we have n divergences between f₀ and f₁, . . . , fₙ, which we write as D_ψ(f₀, fᵢ), i = 1, . . . , n. There is an operator B which transforms any Bregman divergence into a normalizing Bregman divergence,

B(D_ψ(f₀, fᵢ)) = D_ψ(f₀, fᵢ) / ∑_{q=1}^n D_ψ(f₀, f_q),

where B(·) is called a normalizing Bregman operator.

Proof. By the generalized Pythagorean inequality (Frigyik et al. (2008b)), it is natural to suppose that the ordering is maintained, i.e. D_{ψ*}(f₀, fᵢ) ≤ D_{ψ*}(f₀, fⱼ) if and only if D_{ψ**}(f₀, fᵢ) ≤ D_{ψ**}(f₀, fⱼ), for any ψ* and ψ** under the restriction of strict convexity. If the above order relation is maintained, then we can guarantee that every Bregman divergence, with any ψ, consists of the same divergence, just on a different scale. We gather these two arguments together as follows: a finite sum of divergences is finite, and ψ tunes only the scale, not the order, of the Bregman divergence.
So there is a special case of ψ, let us call it ‖ψ‖, for which the sum with respect to a set of n densities fᵢ results in one, and we call this divergence a normalizing Bregman divergence. This is so because all Bregman divergences preserve the same order. The attractiveness of our proposal is that 0 ≤ D_{‖ψ‖}(f₀, f_q) ≤ 1, ∀q ∈ {1, . . . , n}, and it is quite intuitive to work on this scale to compare divergences in the context of identifying influential observations. Also, one possible caveat is that a result being high or low depends on the sample size, so that any cut-off point should take n into account. In this paper we argue that, under the null hypothesis that there is no influential observation in the sample, a reasonable expected normalizing Bregman divergence would be 1/n, i.e. D_{‖ψ‖}(f₀, fᵢ) = 1/n for every i, so that we expect each observation to present the same divergence. This bound becomes our starting point to identify influential observations. If any observation returns D_{‖ψ‖}(f₀, fᵢ) > 1/n, then it is a natural candidate to be an influential point, which we must investigate. This should be seen as a useful practical device to seek influence rather than a definitive theoretical constant which can separate influential from non-influential cases. Finding a better cut-off point than 1/n remains an open problem for future research. Finally, we note that using the Kullback-Leibler divergence, approximated as described above, leads to faster computations.

Simulation Study

In this section, we assess the performance of the algorithms and methods proposed by conducting a simulation study. In particular, we verify whether reliable results are produced and which parameters are the most difficult to estimate. We also check sensitivity to the prior specification and the performance in detecting influential observations. We concentrate on the performance of posterior expectations as parameter estimators using Hamiltonian Monte Carlo methods and the Stan package. For all combinations of models and prior distributions we generate m = 1000 replications of the data, and the performances were evaluated considering the bias and the square root of the mean square error (SMSE), which are defined as

bias = (1/m) ∑ᵢ₌₁ᵐ (θ̂⁽ⁱ⁾ − θ)  and  SMSE = [ (1/m) ∑ᵢ₌₁ᵐ (θ̂⁽ⁱ⁾ − θ)² ]^(1/2),

where θ̂⁽ⁱ⁾ denotes the point estimate of a parameter θ in the ith replication, i = 1, . . . , m. Finally, for each data set we generated two chains of 4000 iterations using Stan and discarded the first 2000 iterations as burn-in.

Performance for Estimation and Sensitivity Analysis

We begin with a logistic regression with an intercept and two covariates, and simulate data using two parametric sets. The values of the two covariates x₁ and x₂ were generated independently from a standard normal distribution. The first model (Model 1) has true parameters given by β₀ = 1.3, β₁ = −0.7, β₂ = 0.3, while for the second one (Model 2) the true parameters were set to β₀ = −1.6, β₁ = 1.1, β₂ = −0.4, and each model was tested for two different sample sizes, n = 100 and n = 300. Finally, inspired by Gelman et al. (2008), we adopted three different prior distributions for the coefficients βⱼ, j = 0, 1, 2, as follows. Prior 1: βⱼ ∼ N(0, 10²); Prior 2: βⱼ ∼ Cauchy(0, 10); Prior 3: βⱼ ∼ Cauchy(0, 2.5). The main results of this exercise are summarized in Table 1. Model 1 with n = 100 and a Cauchy prior presents less bias for most estimates, except for β₂. This prior also leads to the lowest SMSE for all parameters.
When we observed the same setting but with n = 300, all estimates improved and again the Cauchy prior provided the least biased estimation. The mean SMSE falls from around 0.100 to approximately 0.025 when the sample size increases. For Model 2, the results are quite similar.

[ Table 1 around here ]

We now turn to the analysis of the spatial regression model given in (2). Data from two models with an intercept and four covariates were generated, where the true coefficients are given by β = (3, 0.25, 0.65, 0.2, −0.3, −0.2) (Model 1) and β = (3, −0.1, −0.4, 0.8, −0.3, 0.35) (Model 2), both with the same variance σ² = 1. Each one was tested for two different sample sizes, n = 50 and n = 200. Both latitude and longitude were generated from independent standard normal distributions without truncation, as a hypothetical surface without borders. In Model 1 with n = 50, the priors present bias of the same order for most parameters except for σ, where Prior 3 is the best, and β₃, where Prior 3 is the worst. The SMSEs are quite similar across all prior specifications. When n increases there are some changes in the bias ordering: β₀ has the best performance with Prior 3; however, β₂, β₄ and β₅ show a one-order decrease with Prior 2, making it the best prior. For Model 2 and n = 50, we see Prior 2 again with better bias results for β₀ and β₄, but Prior 3 is better for estimating σ. With n = 200 the bias results show an advantage of Prior 1 in estimating β₂ and β₄, while Prior 2 is better for estimating β₃ and β₅, and likewise Prior 3 for β₀. However, the SMSEs were very similar, so the differences between priors were not so relevant in this last setting. Overall, Prior 2 presents the best results.

[ Table 2 around here ]

Our last exercise concerns GARCH(1,1) models, where we generate artificial time series with Normal errors and two different sets of parameters: α₀,ₐ = 0.5, α₁,ₐ = 0.11, β₁,ₐ = 0.88 and α₀,b = 1, α₁,b = 0.77, β₁,b = 0.22. However, we propose to estimate both cases with Normal and Student t error terms, even though all series were built using Normal errors. Replacing a Normal by a Student t is a commonly used strategy to control for overdispersed data. The prior distributions were assigned as follows. Prior 1: α₀ ∼ Gamma(0.1, 0.1), α₁ ∼ Beta(2, 2), β₁ ∼ Beta(2, 2); Prior 2: α₀ ∼ Gamma(0.1, 0.1), α₁ ∼ Beta(2, 3), β₁ ∼ Beta(3, 2); Prior 3: α₀ ∼ Gamma(0.5, 0.5), α₁ ∼ Beta(2, 3), β₁ ∼ Beta(3, 2). The results for the GARCH(1,1) with Normal errors are presented in Table 3. We notice that Prior 3 attained the best results for parameter set 1, but Prior 1 was better in set 2. This outcome happens because Prior 1 is perhaps too informative about α₀, and even a value of T as large as 900 was not enough for the model to learn from the data. However, the different priors assigned to α₁ and β₁ do not imply any drastic change in the output. Table 4 summarizes the output from a GARCH(1,1) model estimated with Student t errors. From this table we notice that Prior 1 returned the best results for parameter set 2, but in set 1 the three priors share similar performances. We now look at both tables in tandem to compare Student t and Normal errors. The Normal GARCH presented better results than the Student t for parameter set 2, but they were similar in set 1, so that there is no need for a robust model in this case (the series were generated with normal errors).
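For concreteness, here is a minimal sketch (ours, not the paper's code) of the data-generating scheme used in this exercise, simulating a GARCH(1,1) series with Normal errors under parameter set 1; the function name and seed are hypothetical.

# Simulate a GARCH(1,1) path y_t = sigma_t * eps_t with Normal innovations.
sim_garch11 <- function(n, alpha0, alpha1, beta1) {
  y <- numeric(n)
  sigma2 <- numeric(n)
  sigma2[1] <- alpha0 / (1 - alpha1 - beta1)  # start at the unconditional variance
  y[1] <- sqrt(sigma2[1]) * rnorm(1)
  for (t in 2:n) {
    sigma2[t] <- alpha0 + alpha1 * y[t - 1]^2 + beta1 * sigma2[t - 1]
    y[t] <- sqrt(sigma2[t]) * rnorm(1)
  }
  y
}

set.seed(123)
y <- sim_garch11(900, alpha0 = 0.5, alpha1 = 0.11, beta1 = 0.88)  # parameter set 1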
[ Tables 3 and 4 around here ]

Influence Identification

To evaluate the normalizing Bregman divergence as a useful tool to identify point influence, we proceed with three simulation sets, each referring to a different model. We use the same models presented in the previous subsection. Within each model we created four scenarios: I, without any kind of perturbation; II, where there is one perturbed observation; III, where there are two influential points; and IV, with three influential points. The point contamination in the time series and spatial models follows the scheme proposed by Hong-Xia et al. (2016) and Cho et al. (2009), i.e., y*ₜ = yₜ + 5σ_y, where σ_y is the standard deviation of the observed sample y. For the logistic regression, however, we need a different approach to contaminate the data. In this case we simply exchange the output, i.e. if yₜ is to be contaminated and yₜ = 1 then we set y*ₜ = 0; otherwise, if yₜ = 0, we set y*ₜ = 1. The results of the influence diagnostics using the normalizing Bregman divergence in the logistic regression are shown in Table 5. For this table, the true parameter values are β₀ = −3, β₁ = −0.7, β₂ = 0.3 and the prior distributions are βⱼ ∼ Cauchy(0, 2.5), j ∈ {0, 1, 2}. Also, the perturbation schemes are: I, no perturbation; II, observation 64 has an additional noise; III, observations 44 and 64 present perturbation; and IV, observations 19, 44 and 64 have an extra noise. The table then shows the estimated (mean and standard deviation) divergences for the three observations, 19, 44 and 64. We first notice that in the no-perturbation scenario the estimated divergences are mostly as expected on average, i.e. 1/100 and 1/300 for n = 100 and n = 300, respectively. On the other hand, when the output is perturbed the average divergence is between 0.028 and 0.031 for n = 100 and between 0.008 and 0.009 for n = 300. Finally, there is a correlation between the mean and the standard deviation, in the sense that a small value of one corresponds to a low estimate of the other.

[ Table 5 around here ]

The results are even more emphatic in spatial regression models, as shown in Table 6. In this table, the true parameter values are φ = 0.75, σ² = 1, β₀ = 1.3, β₁ = −0.7, and we chose Prior 2. Also, the influence scenarios are: I, without perturbation; II, observation 19 has an additional noise; III, observations 15 and 19 present perturbation; and IV, where observations 3, 15 and 19 have an extra noise. For n = 50 and scenario I, the normalizing Bregman divergence has mean around 0.020 at the three observed points, which corresponds to the expected 1/50. In scenario II, the estimates for observations 3 and 15 fall to 0.011 and 0.013, because observation 19 was perturbed and its divergence estimate rises to 0.423. In scenario III the estimate for observation 3 falls even more, because both 15 and 19 were perturbed and both have a 0.283 estimate for the divergence. Finally, when the three observations were perturbed they share the impact, with estimates between 0.210 and 0.214. For n = 200 and scenario I, we again have results precisely as expected, i.e. 0.005 compared to 1/200. Furthermore, scenarios II, III and IV are quite similar; the mean values are slightly smaller, but this is expected for a larger sample.

[ Table 6 around here ]

The effect is still clear in time series with moderate sample sizes, as shown in Table 7, which reports the estimated divergences for the GARCH model. For T = 100 and scenario I, the normalizing Bregman divergence has mean equal to 0.009 at the three observed points, which is slightly below the expected 1/100.
In scenario II the estimated divergences for observations 19 and 44 fall to 0.007, because observation 64 was perturbed and its estimated divergence rises to 0.247, which might not seem a large value but is more than 20 times the expected value of 1/100. In scenario III the estimate for observation 19 falls even more, because both 44 and 64 were perturbed and have estimated divergences of 0.188 and 0.192. Finally, when all three observations were perturbed they share the impact with similar estimated divergences. For T = 500 and scenario I, we again have results around 0.002, i.e. 1/500. Furthermore, scenarios II, III and IV show quite similar results, the estimated values being slightly smaller, but this is expected for a larger sample.

Empirical Analysis

In this section, we investigate influential points in real data sets using the normalizing Bregman divergence. In all examples, convergence assessment of the Markov chains was based on visual inspection of trace plots, autocorrelation plots and the R̂ statistic, since we ran two chains for each case. All results indicated that the chains reached stationarity relatively fast.

Binary Regression for Alpine Birds

A study about an endemic coastal alpine bird was conducted on Vancouver Island (southwest coast of British Columbia, Canada) for more than a decade, and the results were published in Jackson et al. (2015). The presence or absence of birds in a spatial grid was registered over the years together with other environmental characteristics as covariates. The authors proposed an interpretation of the data through a Random Forest model. Here we extend their model to a Bayesian framework and consider a binary logistic regression with other covariates. For illustration, we selected the following covariates: elevation (1000 meters) and average temperature in the summer months (in degrees Celsius); the model also includes an intercept. We then ran HMC with two chains, each one with 4,000 iterations, where the first half was used as burn-in. This setup was used to fit models with probit and logit link functions. The normalizing Bregman divergences estimated for each observation are displayed in Figure 1. From this figure, it is hard to judge what a high value of divergence is, because there are more than one thousand observations in the sample. However, we can easily conclude that the model with the logit link performs better, because the highest values for the logit are lower than the highest values for the probit. This is to say that the most influential points in the logit model are not as influential as in the probit model.

Spatial models for rainfall in the South of Brazil

Here we illustrate a spatial regression approach to analyze data on precipitation levels in Paraná State, Brazil. This data set is freely available in the geoR package and was previously analyzed by, for example, Diggle and Ribeiro Jr (2002) and Gaetan and Guyon (2010). The data refer to average precipitation levels over 33 years of observation during the period May-June (dry season) at 143 recording stations throughout the state. The original variable was summarized in 100 millimeters of precipitation per station. We changed it to 10,000 millimeters of rain per station, which seems more intuitive since the average local precipitation is around 1,000 millimeters per year and the observation period was longer than 10 years. We then fitted three models for the average rainfall.
These are the full SRM presented in Equation (2), the same model but without the squared components x² and y², and the smallest one without the squared components or the interaction term x·y, which we refer to as the full, middle and small models, respectively. All the models were fitted with two HMC chains, each one with 20,000 iterations, where the first half was used as burn-in. We estimated the normalizing Bregman divergence for each recording station and compared the results in the same way as in the previous example. We conclude that the small model was the best one in the sense that it shows the smallest peaks. For example, the maximum values for each model were 0.072, 0.068 and 0.039, respectively, which are already quite high relative to the expected 1/143 ≈ 0.007. We chose to display only the results for this best model in Figure 2, from which we can see that the largest values of the normalizing Bregman divergence (largest circles) are scattered around the map, even though the rainy region is concentrated in the southwest.

GARCH for Bitcoin exchange to US Dollar

Cryptocurrencies were born at the dawn of the new millennium as an alternative to governments and banks. As such, they changed the rules of the financial market and appreciated very fast, although high fluctuation and sharp falls are common. In particular, Bitcoin is likely the most famous cryptocurrency and shows the largest volume of crypto transactions. We illustrate the statistical analysis with one year of daily data on the log-returns of the Bitcoin (BTC) exchange rate to the U.S. Dollar (USD), from August 5, 2017 to August 5, 2018. This data set was produced from the CoinDesk price page (see http://www.coindesk.com/price/). We then fitted GARCH(1,1) models with Normal and Student t errors for the log-return of the BTC to USD exchange rate. We ran the HMC with two chains, each one with 4,000 iterations, where the first half was used as burn-in. The estimates (posterior mean and standard deviation) of the main parameters of the Normal model are: α₀ with mean and SD close to zero, α₁ = 0.15 (0.06) and β₁ = 0.07 (0.08); for the Student t model, α₀ also has mean and SD close to zero, α₁ = 0.11 (0.05) and β₁ = 0.06 (0.07). We estimated the normalizing Bregman divergence for each day, and the results can be seen in Figure 3. Here it is not so trivial to choose between the Normal and the Student t model, because there is no clear dominance of one over the other: even though the Student t presents the highest value of divergence, both form a mixed cloud of very close values. However, the highest points almost surely correspond to very influential observations, because they represent more than 20 times the expected mean of 1/364. Consequently, it is not a surprise that the observed high divergences in January correspond to what the Consumer News and Business Channel (CNBC) called a Bitcoin nightmare: a time of new regulations in South Korea as well as a Facebook cryptocurrency policy change, which implied a devaluation.

Discussion

In this article we explored the possibilities of using the functional Bregman divergence as a useful generalization of the Kullback-Leibler divergence to identify influential observations in Bayesian models, for both dependent and independent data. The Kullback-Leibler divergence is easier to estimate, but on its own it is difficult to infer from it whether a point is influential or not. So we proposed normalizing the Bregman divergence based on the order maintenance of the functional.
This has two intuitive advantages: first, it lies in the range between zero and one, which is easier to interpret; second, we can evaluate its intensity according to the sample size. In particular, the normalizing Bregman divergence for the Kullback-Leibler case avoids the need for heavy computations. As we saw in the simulation study, the expected average of a normalizing Bregman divergence for any observation without perturbation is approximately 1/n. Of course, the number of influential points is a relevant issue when evaluating the value of a normalizing Bregman divergence. The simulation study embraced three different fields of statistics: GLM, spatial models and time series, with similar conclusions in all of them. Besides, the empirical analysis explored three scientific fields: Ecology, Climatology and Finance. Finally, in all cases Hamiltonian Monte Carlo was an efficient and fast way to obtain samples from the posterior distribution of the parameters.
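To make the proposed workflow concrete, the following minimal sketch (our illustration, not the authors' code) computes case-deletion divergences on a grid for the Eguchi-Kano family, normalizes them to sum to one, and flags candidates above the 1/n bound. We read the normalizing operator as division by the total, consistent with the order-preservation argument above; the toy posterior approximation and all function names are hypothetical.

# Functional Bregman divergence between two densities evaluated on an
# equispaced grid, for the Eguchi-Kano convex family (alpha = 1 gives
# Kullback-Leibler); densities are assumed strictly positive on the grid.
bregman_div <- function(f1, f2, grid, alpha = 1) {
  psi  <- function(z) switch(as.character(alpha),
                             "0" = -log(z),
                             "1" = z * log(z),
                             z^alpha / (alpha * (alpha - 1)))
  dpsi <- function(z) switch(as.character(alpha),
                             "0" = -1 / z,
                             "1" = log(z) + 1,
                             z^(alpha - 1) / (alpha - 1))
  h <- diff(grid)[1]  # grid spacing for the Riemann sum
  sum(psi(f1) - psi(f2) - dpsi(f2) * (f1 - f2)) * h
}

# Normalizing step: rescale n case-deletion divergences to sum to one and
# flag observations above the 1/n reference bound.
normalize_bregman <- function(d) {
  nd <- d / sum(d)
  data.frame(obs = seq_along(d), divergence = nd, flag = nd > 1 / length(d))
}

# Toy illustration: Gaussian approximation to the posterior of a normal mean,
# with one observation contaminated by the y* = y + 5*sd(y) scheme.
set.seed(1)
y <- rnorm(30); y[7] <- y[7] + 5 * sd(y)     # contaminate observation 7
grid <- seq(-3, 3, length.out = 4000)
post <- function(dat) dnorm(grid, mean(dat), sd(dat) / sqrt(length(dat)))
d <- sapply(seq_along(y), function(i) bregman_div(post(y), post(y[-i]), grid))
res <- normalize_bregman(d)
res[order(-res$divergence)[1:3], ]           # observation 7 should rank first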
The Alpha-1 Subunit of the Na+/K+-ATPase (ATP1A1) Is a Host Factor Involved in the Attachment of Porcine Epidemic Diarrhea Virus

Porcine epidemic diarrhea (PED) is an acute and severe atrophic enteritis caused by porcine epidemic diarrhea virus (PEDV), which infects pigs and causes huge economic losses to the global swine industry. Previously, researchers believed that porcine aminopeptidase-N (pAPN) was the primary receptor for PEDV, but it has been found that PEDV can infect pAPN knockout pigs. Currently, the functional receptor for PEDV remains unspecified. In the present study, we performed a virus overlay protein binding assay (VOPBA), found that ATP1A1 was the highest-scoring protein in the mass spectrometry results, and confirmed that the CT structural domain of ATP1A1 interacts with PEDV S1. First, we investigated the effect of ATP1A1 on PEDV replication. Inhibition of the host ATP1A1 protein expression using small interfering RNAs (siRNAs) significantly reduced the cells' susceptibility to PEDV. The ATP1A1-specific inhibitors Ouabain (a cardiac steroid) and PST2238 (a digitalis toxin derivative), which specifically bind ATP1A1, could block ATP1A1 protein internalization and degradation, and consequently significantly reduce the infection rate of host cells by PEDV. Additionally, as expected, overexpression of ATP1A1 notably enhanced PEDV infection. Next, we observed that PEDV infection of target cells resulted in upregulation of ATP1A1 at the mRNA and protein levels. Furthermore, we found that the host protein ATP1A1 is involved in PEDV attachment and co-localizes with the PEDV S1 protein in the early stage of infection. In addition, pretreatment of IPEC-J2 and Vero-E6 cells with an ATP1A1 mAb significantly reduced PEDV attachment. Our observations provide a perspective on identifying key factors in PEDV infection, and may provide valuable targets for studying PEDV infection, the PEDV functional receptor and the related pathogenesis, as well as for the development of new antiviral drugs.

Introduction

Porcine epidemic diarrhea virus is a single-stranded, positive-sense, enveloped RNA virus belonging to the genus Alphacoronavirus of the family Coronaviridae [1]; it is transmitted mainly by the fecal-oral route [2]. PEDV has two untranslated regions (UTRs) at the 5′ and 3′ ends and seven open reading frames (ORF1a, ORF1b, S, ORF3, E, M and N). Apart from the accessory protein ORF3, the other proteins, including S, E, M and N, constitute the viral structural proteins [3]. The S protein is a glycoprotein on the surface of the virus with a size of approximately 200 kDa, carrying neutralizing antibody epitopes and a receptor-binding site involved in recognition [4]. The S protein exists in two different conformations: the pre-fusion conformation is a clove-shaped trimer consisting of three individual S1 heads and a trimeric S2 stalk, while the post-fusion conformation is a trimeric S2 [5,6]. Understanding the basic mechanisms of PEDV-host interactions and clarifying the role of host factors in PEDV infection can play a vital part in the development of new antiviral drugs and effective broad-spectrum vaccines. Recognition of the virus by a cellular receptor is a crucial step in the virus life cycle; it enables the virus to enter and infect host cells. According to existing reports, transgenic mice expressing pAPN could be infected with PEDV [7], and PEDV S1 has been shown to interact biochemically with soluble pAPN in a dot-blot assay [8].
Hence, pAPN has been commonly described as a functional receptor for PEDV entry into cells [9]. However, it was shown that low expression of pAPN in IPEC-J2 cells did not affect PEDV infection and that PEDV was able to infect Vero-E6 cells that do not express pAPN [10]. pAPN knockout pigs also remained susceptible to PEDV [11]. These results suggest that pAPN's role as a functional receptor for PEDV needs further study. The virus overlay protein binding assay (VOPBA) is a common method for screening viral receptors; it uses the method and principle of protein immunoblotting to screen for host proteins that bind to viruses through the specific interaction of viral and host proteins [12]. Many virus-binding proteins have been identified using VOPBA. Norwalk virus (NV) interacts with the NV attachment (NORVA) protein to trigger viral attachment [13]. Respiratory syncytial virus (RSV) binding involves nucleolin [14]. Grouper heat-shock cognate protein 70 (GHSC70) interacts with the nervous necrosis virus (NNV) capsid protein to facilitate NNV attachment [15]. Na+/K+-ATPase (NKA) is a channel protein embedded in the phospholipid bilayer of the cell membrane, with physiological functions such as ATPase activity and maintenance of intracellular and extracellular osmotic pressure [16][17][18]. NKA consists of four α isoforms, three β isoforms and a γ isoform. The N and C termini of the α subunit are intracellular, and the subunit is anchored to the plasma membrane through 10 transmembrane helical regions to form ion channels [19]. The α1 isoform (ATP1A1) is widely expressed in eukaryotic cells. In addition, ATP1A1 has been found to be overexpressed in a variety of cancer cells, such as breast cancer, liver cancer and glioma [20][21][22][23][24]. ATP1A1 not only plays an important role in cancer cells, but also participates in different stages of viral infection, including viral attachment [25,26] and replication [27]. In this study, VOPBA was performed for the screening of ATP1A1 as a binding protein. Subsequently, we investigated the relationship between the host protein ATP1A1 and PEDV infection for the first time using IPEC-J2 and Vero-E6 cells. We found that PEDV infection of host cells upregulates the expression of ATP1A1 to facilitate PEDV infection. Further analysis showed that ATP1A1 is a host factor that facilitates PEDV attachment to host cells.

ATP1A1 CT Structural Domain Is Required for Interaction with PEDV S1

VOPBA is a common method of screening viral receptors [12,28,29]. We performed VOPBA and immunoprecipitation with an S1 monoclonal antibody (mAb) against PEDV and, after mass spectrometry analysis (data not shown), selected the ATP1A1 binding protein, which is widely distributed, highly abundant and present at the cell membrane, for subsequent experiments. To confirm that the PEDV S1 protein interacts with the host protein ATP1A1, IP analysis was performed on IPEC-J2 cells using PEDV S1 and ATP1A1 mAbs. The results showed that the S1 protein interacted with endogenous ATP1A1 (Figure 1A). To identify which structural domain of ATP1A1 is responsible for the interaction with the PEDV S1 protein, we predicted the structural domains from the ATP1A1 amino acid sequence using the HMMER website and constructed truncated plasmids for all structural domains of ATP1A1, as shown in Figure 1B. These were transfected separately into Vero-E6 cells, which were then infected with PEDV for IP analysis. We found that only the full-length ATP1A1 could interact with S1 (Figure 1C).
To clarify whether it is the CT structural domain or only the full-length ATP1A1 that can interact with S1, we constructed plasmids expressing only the CT structural domain for analysis. Further analysis showed that the ATP1A1 CT structural domain is required for its binding to PEDV S1 (Figure 1D). These findings demonstrated that the ATP1A1 CT structural domain is required for interaction with the PEDV S1 protein.

Figure 1. The ATP1A1 CT structural domain is required for the interaction with PEDV S1. (A) PEDV S1 interacts with the endogenous ATP1A1 protein. PEDV at 0.1 MOI infected IPEC-J2 cells for 36 h. Western blot of co-IP with mouse anti-S1 mAb and rabbit anti-ATP1A1 mAb. (B) Schematic diagram of the truncated structural domains of ATP1A1. (C) Full-length ATP1A1 interacted with the S1 protein. Full-length ATP1A1 and the truncated A1, A2, A3 and A4 structural domains were transfected separately into Vero-E6 cells for 24 h, and then the cells were infected with PEDV at 0.1 MOI for 24 h. Western blot of co-immunoprecipitations from lysates with mouse anti-S1 mAb and mouse anti-FLAG mAb. (D) The ATP1A1 CT domain interacted with the S1 protein. Vero-E6 cells were transfected with the plasmid expressing ATP1A1-CT for 24 h and were infected with PEDV at 0.1 MOI for 24 h. Western blot of co-immunoprecipitations from lysates with mouse anti-S1 mAb and mouse anti-FLAG mAb.

Knockdown of ATP1A1 Expression by siRNAs Transfection

To understand whether ATP1A1 is involved in PEDV infection, we synthesized two specific small interfering RNAs (siRNAs) of porcine and monkey origin, respectively, to test their functions.
When cells were transfected separately with ATP1A1-specific siRNAs (siRNA-ATP1A1-A/B or siRNA-mATP1A1-A/B) for 48 h (no toxicity to cells), it was found that siRNA-ATP1A1-A/B or siRNA-mATP1A1-A/B significantly downregulated ATP1A1 mRNA transcription and protein expression in both IPEC-J2 and Vero-E6 cells, while in the control groups (siRNA-NC or siRNA-mNC) high ATP1A1 expression could be detected. Considering their overall efficiency in the target cells, siRNA-ATP1A1-A and siRNA-mATP1A1-A were therefore selected for subsequent experiments (Figure 2A-F).

Figure 2. Knockdown of ATP1A1 expression by siRNA transfection. (A,B) Cell viability. IPEC-J2 and Vero-E6 cells were transfected with siRNAs against ATP1A1 for 48 h and analyzed using the CCK-8 kit. Data represent means ± SD from three independent experiments. ns, no significant difference. (C,D) Relative quantification of ATP1A1 mRNA. Cells were transfected with siRNAs against ATP1A1 for 48 h. Total cellular RNA was extracted, reverse transcribed, and quantified. The siRNA-NC or siRNA-mNC group was assigned the value of 1.0; data represent means ± SD from three independent experiments. *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001. (E,F) Western blot analysis of ATP1A1 protein expression. The cells were lysed after 48 h of interference and Western blot was performed using rabbit anti-ATP1A1 mAb and mouse anti-GAPDH mAb, with detection using the corresponding secondary antibodies.

Knockdown of Endogenous ATP1A1 Expression Suppresses PEDV Infection

First, we investigated the biological significance of knocking down the expression of ATP1A1 in PEDV-infected target cells. Twenty-four hours after siRNA transfection, IPEC-J2 and Vero-E6 cells were infected with PEDV, and samples were collected for PEDV viral load determination. PEDV N mRNA levels were significantly reduced after PEDV infection (Figure 3A,B). Western blot results indicated that the PEDV N protein decreased in IPEC-J2 and Vero-E6 cells compared with that of the NC groups (Figure 3C,D). In addition, TCID50 assays performed in IPEC-J2 and Vero-E6 cells after reduced expression of endogenous ATP1A1 showed decreased PEDV titers (Figure 3E,F). As shown in Figure 3G,H, a reduction in the amount of PEDV fluorescence was observed in IPEC-J2 and Vero-E6 cells transfected with ATP1A1-specific siRNAs compared with that of the NC groups. Collectively, these data suggested that knockdown of endogenous ATP1A1 results in reduced PEDV replication in target cells.
NKA Inhibitors Promote Degradation of ATP1A1 and Effectively Reduce PEDV Infection

Ouabain is one of the cardiotonic steroid (CTS) drugs; it was reported that the binding of Ouabain to NKA leads to a change in its protein conformation and the internalization of ATP1A1 into the cytoplasm to participate in lysosome-mediated degradation [30]. PST2238 (a digitalis toxin derivative) is used as a competitive inhibitor of Ouabain. Therefore, we tested the effect of Ouabain and PST2238 on PEDV infection by targeting NKA. The cytotoxicity of the Ouabain and PST2238 drugs was measured in IPEC-J2 cells after serial dilution, and we chose concentrations of Ouabain (1 nM) and PST2238 (1 µM) which had no effect on cell viability after 48 h of treatment (Figure 4A,B). First, we infected IPEC-J2 cells with PEDV after pretreatment with Ouabain or PST2238 at non-cytotoxic concentrations for 1 h, and collected infected cells and culture supernatants for the PEDV viral load assay. We found that drug pretreatment notably inhibited PEDV replication (Figure 4C). We then pretreated IPEC-J2 cells with a gradient dilution of Ouabain or PST2238 for 1 h before infection with PEDV, and found that Ouabain and PST2238 significantly reduced PEDV RNA in a dose-dependent manner (Figure 4D). We next examined the changes in ATP1A1 and PEDV N protein levels after drug treatment. ATP1A1 and PEDV N protein expression decreased gradually in a dose-dependent manner as the concentrations of Ouabain or PST2238 were increased (Figure 4E,F). The inhibitors Ouabain and PST2238 also inhibited PEDV replication in a dose-dependent manner according to the IFA data (Figure 4G). These results suggested that NKA inhibitors are effective in reducing PEDV infection.
Figure 4. NKA inhibitors promote degradation of ATP1A1 and reduce PEDV infection. (E,F) Cells were treated in the same way as in Figure 4D and then cultured for 48 h; cell lysates were prepared for Western blot analysis. (G) Treatment with Ouabain or PST2238 inhibits PEDV replication in a dose-dependent manner. Cells were treated in the same way as in Figure 4D and then cultured for 48 h. After fixation, permeabilization and blocking of the cells, staining was performed with anti-PEDV-N mAb (green) and cell nuclei were stained using DAPI (blue) staining solution. The processed samples were photographed and analyzed using fluorescence microscopy. Scale bars, 100 µm.

Overexpression of ATP1A1 Promotes PEDV Infection

To clarify what role the ATP1A1 protein plays in PEDV infection, we overexpressed ATP1A1 in PEDV-infected IPEC-J2 and Vero-E6 cells, and then assessed viral susceptibility. First, we found that overexpression of ATP1A1 significantly increased the PEDV N mRNA level compared with that of the empty vector transfection group (Figure 5A). We also examined the effect of overexpression of ATP1A1 on PEDV N protein levels by Western blot. In addition to using the PEDV G1 genotype strain CV777, we also performed tests using the G2 strain GDgh isolated in our laboratory. Consistent with the mRNA results, overexpression of ATP1A1 significantly increased the expression of the PEDV N protein (Figure 5C). Meanwhile, the ATP1A1 protein significantly promoted the expression of the PEDV N protein in a dose-dependent manner (Figure 5E). The viral titer in the supernatant of IPEC-J2 cells overexpressing ATP1A1 was higher than that of cells transfected with the empty vector (Figure 5G). In the immunofluorescence assay, transfection of exogenous ATP1A1 into IPEC-J2 cells resulted in an increased amount of PEDV fluorescence compared with the empty vector group (Figure 5I). We also performed experiments in which PEDV infected Vero-E6 cells overexpressing ATP1A1, and viral yield was measured by qPCR, Western blot, IFA and TCID50.
The results showed that overexpression of ATP1A1 in Vero-E6 cells facilitated the replication of PEDV (Figure 5B,D,H,J). In addition, we attempted to overexpress the ATP1A1 protein in PTR2 and DF-1 cells, which could not support PEDV infection (Figure 5F). These results indicated that overexpression of the ATP1A1 protein makes target cells more susceptible to PEDV infection, although it does not convert non-susceptible cell lines into susceptible ones, which also suggests that the ATP1A1 protein plays an important role in viral infection.

PEDV Infection Upregulates ATP1A1 Protein Expression in Target Cells

To understand the association between the host protein ATP1A1 and PEDV infection, we analyzed the changes in the mRNA and protein levels of ATP1A1 after PEDV infection. As determined by qPCR, PEDV infection resulted in a significant upregulation of ATP1A1 mRNA levels (Figure 6A,B). Similarly, the protein levels of ATP1A1 increased after PEDV infection, consistent with the mRNA results (Figure 6C,D). To further understand the association between ATP1A1 and PEDV infection, we used immunofluorescence to observe the expression of the ATP1A1 protein in target cells after PEDV infection. The immunofluorescence experiments revealed enhanced fluorescence of the ATP1A1 protein after PEDV infection (Figure 6E). To ensure that the phenomenon was not restricted to IPEC-J2 cells, we repeated the experiment in Vero-E6 cells as well. Consistent with the observations in IPEC-J2 cells, the ATP1A1 protein signal was also enhanced in PEDV-infected Vero-E6 cells compared with mock-treated Vero-E6 cells (Figure 6F). These data indicated that PEDV infection induces increased expression of the ATP1A1 protein in target cells, suggesting that the expression of the ATP1A1 protein may be related to PEDV infection.

Knockdown of ATP1A1 Affects the Attachment of PEDV

According to the proportion of amino acids homologous with other coronavirus S proteins, the PEDV S protein can be divided into S1 and S2 structural domains, and the S1 protein plays a crucial role in the recognition of viral particles by host proteins [31]. We screened the ATP1A1 protein from the mass spectrometry results as interacting with the PEDV S1 protein (Figure 1A), and the viral S1 protein is mainly involved in viral attachment. We first explored the effect of downregulation of ATP1A1 protein expression on PEDV attachment, for both the G1 genotype CV777 and the G2 genotype GDgh of PEDV. We found that downregulation of ATP1A1 expression significantly inhibited the attachment and internalization processes of both G1 and G2 type PEDV (Figure 7A). We speculated that the host protein ATP1A1 may be involved in the attachment of PEDV to target cells, and that it may be a host factor that promotes PEDV attachment. The downregulation of ATP1A1 expression also had an effect on the internalization phase; we speculated that ATP1A1 may also be involved in the internalization of PEDV, which we will investigate in the future.
Figure 7. Knockdown of ATP1A1 affects the attachment of PEDV. (A) In the attachment assay, PEDV at 1.0 MOI infected target cells at 4 °C for 2 h; cells were then washed with PBS and collected to detect viral RNA abundance. In the internalization assay, PEDV at 1.0 MOI infected target cells at 4 °C for 2 h, then the cells were washed with PBS and transferred to 37 °C for 1 h of incubation to complete virus internalization. After washing with PBS, the cells were treated sequentially with 0.05% trypsin and 0.5 mg/mL proteinase K to remove un-internalized virus particles, and the cells were collected for viral RNA detection. Data represent means ± SD from three independent experiments. *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001. (B,C) Number of cells positive for S1 bound to ATP1A1. IPEC-J2 and Vero-E6 cells were incubated on ice with PEDV at 1.0 MOI for 1 h to synchronize infection, then transferred to a 37 °C incubator for the indicated time. Cells were collected after washing with PBS at the appropriate time points. Cells were then stained using rabbit anti-ATP1A1 mAb and mouse anti-S1 mAb, followed by CoraLite488-conjugated goat anti-rabbit IgG and CoraLite594-conjugated goat anti-mouse IgG. Data were analyzed using flow cytometry.

In addition, we used flow cytometry to quantify the interaction between ATP1A1 and cell-surface PEDV particles. The number of positive cells with ATP1A1 bound to the viral S1 protein gradually increased at 60 min of infection compared with 0 min of infection; at 120 min of infection, the number of positive cells at the cell surface gradually decreased as the viral particles entered the cells, falling below the number of positive cells at the time of initial infection (Figure 7B,C). We speculated that ATP1A1 may be involved in the attachment stage of PEDV infection.
The Host Protein ATP1A1 Co-Localizes with the PEDV S1 Protein Early in PEDV Infection
To simulate the biological process of viral infection of cells, cells were infected with PEDV and subjected to immunofluorescence staining with ATP1A1 and PEDV S1 mAbs. We observed co-localization of the PEDV S1 and ATP1A1 proteins in the early stages of infection, with ATP1A1 co-localizing with PEDV in target cells (Figure 8A,B). At 1 h of infection, the co-localization of ATP1A1 and S1 was more obvious in Vero-E6 than in IPEC-J2 cells (Figure 8C,D).

Figure 8. The experimental processing steps were consistent with Figure 7B. Cell nuclei were stained using DAPI (blue) staining solution. The panel shows a three-dimensional rendering; the red arrows indicate co-localized signals.

Monoclonal Antibody Pretreatment of ATP1A1 Effectively Inhibits PEDV Attachment
To confirm the role of ATP1A1 in PEDV attachment, a monoclonal antibody (mAb) against ATP1A1 was incubated with IPEC-J2 and Vero-E6 cells to interfere with the interaction between ATP1A1 and PEDV. ATP1A1 mAb pretreatment decreased PEDV RNA abundance in target cells in a dose-dependent manner compared with the DMEM group (Figure 9A,B). IPEC-J2 and Vero-E6 cells were preincubated with serially diluted ATP1A1 mAb and then infected with PEDV; the cells were then collected for Western blotting analysis. PEDV N protein expression was significantly reduced in the ATP1A1 mAb pretreatment group, again in a dose-dependent manner (Figure 9C,D). Pre-incubation of IPEC-J2 and Vero-E6 cells with a 16,000-fold dilution of ATP1A1 mAb, followed by infection with PEDV, resulted in a significant reduction in progeny virus, as demonstrated by TCID50 assay (Figure 9E,F) and IFA data (Figure 9G,H). These results suggested that ATP1A1 mAb can significantly block PEDV attachment.
Figure 9. (E,F) The experimental processing steps were consistent with Figure 8C. At 48 h, virus yields were determined by TCID50 assay with Vero-E6 cells. *, p < 0.05; **, p < 0.01. (G,H) ATP1A1 mAb inhibits PEDV attachment in target cells. The experimental processing steps were consistent with Figure 8C. PEDV-infected cells were determined by immunofluorescence staining with anti-PEDV-N mAb (green). Cell nuclei were stained using DAPI (blue) staining solution. Scale bars, 100 µm.

Discussion
PEDV causes acute, infectious and severe atrophic enteritis of the small intestinal villi in infected pigs, leading to severe vomiting and diarrhea, dehydration, loss of appetite and depression [32]. Given the devastating impact on the global pig industry and the potential cross-species threat posed by PEDV, understanding the interaction between this virus and its host is urgent both for elucidating the infection mechanism and for developing antiviral strategies. The binding of the virus to receptors on the cell surface is the first step in viral infection of cells [33], and PEDV invades host cells through membrane fusion. Previous studies reported that PEDV utilizes heparan sulfate on the cell surface to facilitate attachment to host cells [34], that sialic acid is beneficial to PEDV binding and entry [35,36], that the presence of cholesterol in cell membranes is required for PEDV entry into cells [37], that transferrin receptor 1 on the cell surface can increase the susceptibility of piglets to PEDV [38] and that the tight junction protein Occludin facilitates PEDV entry [39]. Furthermore, integrin αvβ3 is involved in the cellular uptake of porcine intestinal α-coronavirus [40] and also enhances PEDV replication in Vero-E6 and IPEC-J2 cells [41]. Previously, porcine aminopeptidase-N (pAPN) was widely accepted as a functional receptor for PEDV [9,42]. Further studies have shown that changes in pAPN enzyme activity are factors affecting PEDV infection [10]. In addition, surface plasmon resonance results indicated that the pAPN extracellular structural domain did not interact with the PEDV S1 or S2 proteins [43]. These divergent experimental results suggested that pAPN is questionable as a true functional receptor for PEDV and that there may be other receptors that facilitate PEDV attachment to host cells.

Na+/K+-ATPase (NKA) is an energy-exchanging ion pump first described by Skou in 1957 [44]. NKA plays an important role in active transport, energy metabolism and signaling [36,45-47].
Recent studies have shown that NKA participates in cell signaling pathways that do not depend on its ion pump function [48]. This signal transduction role is mainly attributed to the α subunit [49]. Ouabain, a specific inhibitor of NKA, can trigger signal transduction without affecting sodium-potassium pump function or ion homeostasis [50] and has been reported to inhibit the replication of viruses including herpes simplex virus 1 [51,52] and adenovirus [53]. PST2238 inhibits Ouabain binding and signal transduction [54]. PST2238 also inhibits viral replication and prevents the entry of human respiratory syncytial virus (RSV) by inhibiting ATP1A1 activation [26]. In addition, ATP1A1 has been associated with various viral infections; for instance, SARS-CoV-2 viral RNA interacts with the host protein ATP1A1 during infection [55]. ATP1A1 plays an important role as a host factor in SARS-CoV-2 infection, and inhibition of ATP1A1 expression blocked fetal intestinal infection [56]. Moreover, the Ebola VP24 protein interacts with ATP1A1, and treatment with the ATP1A1 inhibitor Ouabain reduced viral infection [57].

In our study, we found that the CT domain of ATP1A1 interacts with the PEDV S1 protein (Figure 1). In addition, we showed that downregulating the expression of ATP1A1 with siRNAs reduced PEDV infection, although the viral suppression effect in the Vero-E6 cell line was not as pronounced as in IPEC-J2 cells; we speculate that this may reflect the substantial differences between the two cell lines, which originate from hosts of different genera (Figure 3). When we pretreated the target cells with drugs, a significant reduction in PEDV replication was observed (Figure 4). Moreover, we observed that overexpression of ATP1A1 could promote PEDV infection (Figure 5). We also demonstrated that PEDV infection induced substantial ATP1A1 expression, which may facilitate PEDV infection of host cells (Figure 6). These results suggested that the ATP1A1 protein plays an important role in PEDV infection.

The surface spike (S) protein of coronaviruses is a type I glycoprotein, consisting of the S1 receptor-binding domain and the S2 membrane fusion domain, and is a determinant of virus tropism [58,59]. The S protein has a crucial role in binding to host receptors during the attachment phase and in mediating membrane fusion during the invasion phase [8,60]. Therefore, we first investigated what role ATP1A1 plays in the PEDV attachment phase. The results showed that ATP1A1 was involved in the attachment phase of PEDV (Figure 7) and co-localized with the S1 protein (Figure 8). Pretreatment with anti-ATP1A1 mAb significantly inhibited PEDV attachment, showing that ATP1A1 contributes to PEDV attachment (Figure 9). Collectively, these results detail the mechanism by which ATP1A1 affects PEDV replication, which may act during the attachment phase of viral infection of host cells, providing a basis for the development of novel antiviral drugs. Based on the above experimental results, we present a model diagram to better describe the role of ATP1A1 in PEDV attachment (Figure 10): besides the currently unknown functional receptor, the PEDV S1 protein also binds to ATP1A1 at the cell membrane surface, which facilitates PEDV attachment to the host cells. In conclusion, our findings suggested that ATP1A1 may be a host factor that facilitates PEDV attachment, an insight that may also apply to the interactions between other coronaviruses and their hosts.
Additionally, ATP1A1 is widely distributed and abundantly present on the cell membrane surface, which helps to explain the extensive cytophagocytosis of PEDV.

Figure 10. A model describing the involvement of ATP1A1 in PEDV attachment. ATP1A1 plays an important role as a host factor that facilitates PEDV attachment. Overall, ATP1A1 is involved in the recognition of hosts together with as-yet-unknown functional receptors. ATP1A1 mAb interferes with the recognition between ATP1A1 and PEDV on the cell surface and inhibits viral infection. Treatment with ATP1A1-specific inhibitors leads to partial internalization and degradation of ATP1A1, reducing viral recognition.

Cells and Viruses
IPEC-J2, Vero-E6, and Vero cells were separately cultured at 37 °C in a humidified incubator with a 5% CO2 atmosphere in Dulbecco's minimum essential medium (DMEM; Procell, Wuhan, China) supplemented with 10% fetal bovine serum (FBS; Procell, China), penicillin (100 U/mL; NCM, Suzhou, China) and streptomycin (100 µg/mL; NCM, China). The PEDV strain CV777 of genotype 1 (GenBank accession no. LT906620) and the PEDV strain GDgh of genotype 2 (GenBank accession no. MG983755) were isolated and preserved at the South China Agricultural University, Guangzhou, China [61]. Unless a strain subtype is otherwise indicated for an experiment, the PEDV strain used was the genotype 2 strain GDgh. The viral infection dose is indicated in the figure legends.

Plasmid Constructs
The four truncated as well as full-length ATP1A1 genes were amplified from small intestinal epithelial cells using the primers in Table 1 and cloned into the eukaryotic expression vector pECMV-3×FLAG-N for mammalian cell expression. Nucleotide sequences of the constructed plasmids were compared to ensure that the correct clones were used in this study. The target genes were amplified by PCR and cloned into the KpnI and EcoRV sites of the pECMV-3×FLAG-N vector. Primers marked with superscript b in Table 1 were used for relative quantitative PCR.

Western Blot and IP
Cells were washed with PBS and incubated on ice in WB lysis buffer containing protease inhibitor (catalog no. GK10014; GLPBIO, Montclair, CA, USA) to inhibit protein degradation. Samples were separated and transferred to polyvinylidene fluoride (PVDF) membranes (catalog no. ISEQ00010; Merck Millipore, Darmstadt, Germany). PVDF membranes were blocked with 5% skim milk at room temperature for 1 h, followed by overnight incubation with primary antibody dilutions at 4 °C.
The PVDF membrane was washed five times with PBST (PBS with 0.05% Tween 20) and incubated with secondary antibody at room temperature for 1 h, and protein bands were detected using BeyoECL Plus (catalog no. P0018M; Beyotime, China). For immunoprecipitation, FLAG, FLAG-A1, FLAG-A2, FLAG-A3, FLAG-A4 or FLAG-ATP1A1 expression plasmids were transfected into Vero-E6 cells for 24 h, and the cells were collected 24 h after infection with PEDV at 0.1 MOI. Samples were incubated with anti-S1 mAb overnight at 4 °C and then with Pierce Protein A/G Magnetic Beads for 1 h at 4 °C [62]. Samples were washed three times with PBS, and protein samples were prepared and analyzed by Western blot using the indicated antibodies.

RNA Interference
The siRNAs against porcine-derived and monkey-derived ATP1A1 and the siRNA negative controls (siRNA-NC or siRNA-mNC) were designed and synthesized by Sangon Biotech (Shanghai, China). The indicated siRNAs were introduced into the cells with RNAiMAX (Invitrogen) reagent at a concentration of 50 nM according to the manufacturer's instructions. Forty-eight hours after transfection, the cells were scraped and assayed by quantitative RT-PCR and Western blotting to confirm specific gene silencing. In some experiments, cells were transfected for 24 h and then infected with PEDV at an MOI of 0.1 for subsequent experiments. The siRNAs used are listed in Table 2.

Cell Viability Detection
Cell viability was detected with a cell counting kit-8 (CCK-8). Cells were treated with the specified concentrations of inhibitors for 1 h at 37 °C or transfected with siRNAs for 24 h at 37 °C. After adding CCK-8 solution and incubating at 37 °C for 2 h, the absorbance at 450 nm was measured using a microplate reader (Gene, South San Francisco, CA, USA).

Inhibitor Treatments
IPEC-J2 cells were co-incubated for 1 h with PEDV mixed with a non-cytotoxic concentration of the specific inhibitor or with DMSO. The cells were then changed into 2% maintenance medium containing the same concentration of inhibitor or dimethyl sulfoxide (DMSO) and incubated for the indicated time before subsequent experiments [26].

ATP1A1 mAb Inhibition Assay
Based on previous studies, we examined the effect of ATP1A1 mAb on PEDV infection [63]. IPEC-J2 and Vero-E6 cells were incubated with ATP1A1 mAb at the required dilution in DMEM at 37 °C for 1 h. They were then incubated with DMEM containing the corresponding antibodies and the genotype 2 PEDV strain GDgh (0.1 MOI) at 4 °C for 1 h. After washing the cells three times with PBS, the cells were incubated again with the antibodies at the appropriate concentrations at 37 °C, and the cells were collected and assayed at the indicated times.

Quantitative Real-Time PCR (RT-qPCR)
Total RNA was extracted using the HiPure Total RNA Mini Kit (catalog no. R4111-03; Magen, Guangzhou, China), and cDNA was produced by reverse transcription using Evo M-MLV RT Premix (catalog no. AG11706; AG, Changsha, China). The cDNAs from different samples were amplified by RT-qPCR to measure the target genes. RT-qPCR was performed using Eastep qPCR Master Mix (catalog no. LS2062; Promega, Madison, WI, USA) on a QuantStudio 5 instrument (Thermo Fisher Scientific, Waltham, MA, USA) programmed as follows: 95 °C for 2 min (1 cycle), then 95 °C for 15 s and 60 °C for 60 s (40 cycles). The primers are listed in Table 2. Relative quantification was determined by the 2^(−ΔΔCT) method [64].
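To make the relative quantification step concrete, the following is a minimal sketch of the 2^(−ΔΔCT) calculation. The Ct values are hypothetical and purely illustrative; they are not measurements from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative quantification method.
# All Ct values below are hypothetical and for illustration only.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Expression of a target gene normalized to a reference gene and
    expressed relative to the control condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2 ** (-dd_ct)

# Example: a target gene (e.g., ATP1A1) vs a reference gene, infected vs mock
print(fold_change(ct_target_treated=22.0, ct_ref_treated=16.0,
                  ct_target_control=24.0, ct_ref_control=16.0))  # -> 4.0
```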
Immunofluorescence Assay and Confocal Microscopy
Cells were grown on culture plates or slides and processed as required for each experiment. Cells were fixed with 4% paraformaldehyde at 4 °C for 1 h, permeabilized with 0.5% Triton X-100 at 37 °C for 10 min, and blocked with 1% BSA blocking buffer to reduce non-specific binding. Samples were incubated with primary and secondary antibodies as specified. Cell nuclei were stained using DAPI staining solution (DAPI, catalog no. c1006; Beyotime, China), and samples were observed by fluorescence microscopy (DS-Qi2; Nikon, Japan) and confocal microscopy (AX; Nikon, Japan).

Flow Cytometry
To measure the number of cells positive for ATP1A1 bound to viral particles on the cell surface, cells were cultured under normal conditions to 90% confluence. IPEC-J2 or Vero-E6 cells were chilled on ice for 10 min to synchronize PEDV infection, and the growth medium was replaced with medium containing PEDV at an MOI of 1. The cells were incubated on ice for 1 h to synchronize infection and then transferred to a 37 °C incubator for the indicated time. The cells were washed three times with PBS, digested with trypsin and collected, washed twice with pre-cooled PBS, and pelleted by centrifugation; the supernatant was discarded and the cells were resuspended in PBS and counted. Then, 2 µL each of PEDV-S1 and ATP1A1 antibodies were added to each tube and incubated for 60 min at 4 °C protected from light, followed by two washes with PBS. Next, secondary antibody was added and incubated for 30 min at 4 °C, protected from light. The cells were washed twice with PBS and resuspended in PBS for flow cytometric detection (catalog no. FACS101; BD, Franklin Lakes, NJ, USA).

TCID50 Assay
PEDV was inoculated at an MOI of 0.1 onto cells treated according to the experimental requirements, incubated for 1 h and then washed with PBS. At 24 h, the titer of the progeny virus was determined according to the method of Reed and Muench [65]. Briefly, 0.1 mL of 10-fold serially diluted (10−7 to 10−1) sample was added to Vero-E6 cells in 96-well plates. After six days, the cytopathic effects (CPEs) were observed with an inverted microscope (ECLIPSE TS100; Nikon, Japan) and the number of wells with CPE was counted.

Statistical Analysis
All data are expressed as means ± standard deviations (SD) and were analyzed by Student's t test using GraphPad Prism software (version 8.0). Values of p < 0.05 were considered statistically significant and are indicated as follows: *, p < 0.05; **, p < 0.01; ***, p < 0.001.
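As an illustration of the Reed and Muench endpoint method cited above, the sketch below computes a TCID50 titer from a hypothetical pattern of CPE-positive wells across a 10-fold dilution series; the well counts are invented for illustration and are not data from this study.

```python
# Minimal sketch of the Reed-Muench TCID50 calculation.
# The CPE-positive well counts below are hypothetical, not study data.
dilution_exponents = [-1, -2, -3, -4, -5, -6, -7]   # 10-fold dilution series
positive = [8, 8, 6, 4, 1, 0, 0]                    # CPE-positive wells per dilution
wells = 8                                           # wells inoculated per dilution

negative = [wells - p for p in positive]
# Positives accumulate toward the lower dilutions, negatives toward the higher.
cum_pos = [sum(positive[i:]) for i in range(len(positive))]
cum_neg = [sum(negative[:i + 1]) for i in range(len(negative))]
pct = [100 * p / (p + n) for p, n in zip(cum_pos, cum_neg)]

# Locate the dilution pair bracketing 50% infection and interpolate.
i = max(k for k, v in enumerate(pct) if v >= 50)
prop_dist = (pct[i] - 50) / (pct[i] - pct[i + 1])

log10_tcid50 = dilution_exponents[i] - prop_dist    # endpoint per 0.1 mL inoculum
titer_per_ml = 10 ** (-log10_tcid50) / 0.1
print(f"Endpoint dilution: 10^{log10_tcid50:.2f}")
print(f"Titer: {titer_per_ml:.2e} TCID50/mL")       # ~7.7e4 with these counts
```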
Impact of pandemics and disruptions to vaccination on infectious diseases epidemiology past and present

ABSTRACT
Infectious diseases are a leading cause of morbidity and mortality worldwide, with vaccines playing a critical role in preventing deaths. To better understand the impact of low vaccination rates and previous epidemics on infectious disease rates, and how these may help to understand the potential impacts of the current coronavirus disease 2019 (COVID-19) pandemic, a targeted literature review was conducted. Globally, studies suggest past suboptimal vaccine coverage has contributed to infectious disease outbreaks in vulnerable populations. Disruptions caused by the COVID-19 pandemic have contributed to a decline in vaccination uptake and a reduced incidence of several infectious diseases; however, these rates have increased following the lifting of COVID-19 restrictions, with modeling studies suggesting a risk of increased morbidity and mortality from several vaccine-preventable diseases. This suggests a window of opportunity to review vaccination and infectious disease control measures before we see further disease resurgence in populations and age groups currently unaffected.

Introduction
Infectious diseases continue to be one of the leading causes of morbidity and mortality worldwide, accounting for 18.4% of deaths globally in 2019, with a higher proportion of deaths in low- and lower-to-middle-income countries.1,2 Across all age groups, infectious diseases, including infectious diarrhea and lower respiratory infections, were among the top 10 causes of disease burden and deaths globally in 2019.1,3 Vaccines are recognized as having a critical role in preventing deaths and hospitalizations due to infectious diseases; estimates suggest that vaccines could have prevented nearly one-quarter (21.7%) of the 5.3 million deaths among children under the age of 5 years in 2019.4 The role of vaccines in the global eradication of smallpox demonstrates the impact of successful global vaccination efforts, and other successes include the dramatic reduction and near elimination of polio in some regions of the world.5,6,9-13 However, vaccine-preventable deaths continue to pose a significant economic burden to society, particularly in resource-constrained communities. A 2018 analysis estimated that four major vaccine-preventable diseases - rotavirus, pneumococcal disease, measles, and rubella - collectively cost Africa US $13 billion annually, due to productivity losses resulting from premature death (US $10 billion) and prolonged sickness (US $2 billion), hospitalizations (US $260 million), and outpatient visits (US $73 million).14

Disruptions in access to healthcare services, including problems with access to national immunization programs (NIPs) and low vaccination uptake, can significantly impact the epidemiology of infectious diseases.16-18 Historically, disruptions in access to NIPs and low vaccination uptake have had major impacts on the epidemiology of infectious diseases. During the COVID-19 pandemic, a range of non-pharmaceutical interventions (NPIs) were implemented to control transmission.21 These included personal protection and hygiene measures (face masks, gloves and other personal protective equipment, hand hygiene, sanitizing contaminated surfaces) and social distancing (e.g., lockdowns, stay-at-home orders, bans/restrictions on travel and group gatherings/events).19,20,22
However, in addition to controlling the spread of COVID-19, such measures also impacted the epidemiology of non-COVID-19 infectious diseases and the uptake of NIPs. Subsequently, the roll-out of COVID-19 vaccination programs has led to the relaxation of NPIs, but the eventual long-term impact of the pandemic on vaccine-preventable diseases globally remains unclear.21 It is therefore important to understand the potential impact of the COVID-19 pandemic and associated NPIs on the epidemiology of non-COVID infectious diseases. This targeted literature review (TLR) seeks to 1) identify past pandemics and corresponding NPIs and describe their impact on the epidemiology of infectious diseases; 2) identify historical examples of disruptions to NIPs and low vaccine uptake and characterize their impact on infectious diseases; 3) identify the impact of COVID-19 disruptions on vaccine uptake; 4) understand the impact of COVID-19 restrictions on vaccine-preventable infectious disease epidemiology; and finally 5) apply these learnings to the COVID-19 pandemic and associated NPIs to build an understanding of their impact on the uptake of (non-COVID-19) vaccines and the current and future epidemiology of (non-COVID-19) infectious diseases.

Methods
A protocol-driven TLR was conducted to identify key evidence from real-world (observational) and mathematical modeling studies. Extensive literature searches of MEDLINE (OvidSP) and Embase (OvidSP) were conducted from inception to October 2021. Search strategies combined indexing terms (Medical Subject Headings terms in MEDLINE and Emtree terms in Embase) with free-text keywords to identify studies reporting on factors causing disruption to NIPs and on the epidemiology of vaccine-preventable infectious diseases in the general population. Separate search facets were developed using terms for infectious diseases, NPIs, outcomes, and study designs of interest, which were combined using Boolean operators and limited to studies in humans (see Supplemental File 1 for additional details). Gray literature searches were carried out to identify conference abstracts published from 2019 onward (indexed in Embase), and epidemiological/surveillance data reported by key public health websites (World Health Organization [WHO]; Gavi, the Vaccine Alliance; United Kingdom Health Security Agency [UKHSA], formerly Public Health England [PHE]; Centers for Disease Control and Prevention [CDC]; European Centre for Disease Prevention and Control) were also reviewed. Pre-print databases (medRxiv, bioRxiv, Lancet preprints) were also searched for articles posted from 2019 to October 2021 to capture more recent COVID-specific data. Targeted hand searches for updated surveillance data and more recent publications were carried out in June 2022.
Articles were systematically screened at the title/abstract and full-text stages using DistillerSR® software and selected for inclusion by one reviewer, with a random sample of 20% validated by a second, senior reviewer according to pre-defined population, interventions and comparisons, outcomes, and study design (PICOS) criteria (see Supplemental File 2 for additional details). Studies investigating the impact of NPIs, such as social distancing measures, general face mask use, and policy changes relevant to public health, were considered eligible for inclusion. Observational studies, epidemiological modeling studies (based on real-world data), disease surveillance, and public health reports were included regardless of geographical location if they reported on the dynamics of infectious disease epidemiology (with specific interest in vaccine-preventable diseases) resulting from any disruption to a vaccination program. Examples of disruptions included any type of NPI, policy changes, vaccine hesitancy, or previous disease outbreaks. From the studies meeting the PICOS inclusion criteria, 50 key articles were prioritized for data extraction. To ensure a representative global sample of key studies across the five research questions, articles were prioritized based on geographical location (i.e., COVID-related evidence from the UK, North America and Europe, and global evidence on past pandemics), the infectious disease investigated (i.e., infectious diseases relevant to the United Kingdom (UK)), and the time period of data collection (i.e., the last 10 years). In addition, priority was given to articles of most importance to public health within the UK, including articles on pneumococcal disease, influenza, human papillomavirus (HPV), pertussis, measles and shingles.23 Data were extracted into a specially designed Microsoft Excel® spreadsheet by one reviewer and validated by a second reviewer. Key findings of the TLR were summarized qualitatively.

Results
A total of 3,295 records from database searches were screened, and 251 records were selected for full-text review, of which 41 records were included. In addition, 102 records were identified from gray literature searches, including searches of websites and citation chasing (Figure 1). From the 143 included records, 50 studies were prioritized for data extraction, including studies conducted in the UK (n = 13 studies), US (n = 12), and Europe (n = 10), followed by Africa (n = 6), Asia (n = 4), and Australia (n = 1), with four studies reporting on multiple global regions (Tables 1-5). Thirty percent of the studies focused on measles (n = 15), 24% on pneumococcal pneumonia (n = 12), 10% on respiratory syncytial virus (RSV; n = 5), and 8% on polio (n = 4). Thirty-eight percent of the studies (n = 19) reported disease epidemiology in children, while the remainder reported data for the general population or other age-specific groups (e.g., adults, elderly). Of the 50 included studies, 56% were surveillance data studies (n = 28), 22% (n = 11) were modeling studies, 12% (n = 6) were retrospective cohort studies, 6% (n = 3) were literature reviews, and 4% (n = 2) were case-control investigations. One-third of the included articles were public health reports from UK PHE and the US CDC. Articles reporting on data gathered before the COVID-19 pandemic spanned a 20-year period from 1996 to 2019.
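As a simple transparency check, the study-type percentages quoted above can be re-derived from the stated counts. The short sketch below uses only the numbers reported in this section.

```python
# Re-deriving the study-type percentages from the counts quoted above
# (n = 50 prioritized studies); an arithmetic check, not new data.
total = 50
counts = {
    "surveillance data studies": 28,
    "modeling studies": 11,
    "retrospective cohort studies": 6,
    "literature reviews": 3,
    "case-control investigations": 2,
}
assert sum(counts.values()) == total
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
# -> 56%, 22%, 12%, 6%, 4%, matching the text
```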
Impact of NPIs to tackle pre-COVID disease outbreaks on disease epidemiology
The impact of historical disease outbreaks (pre-COVID) on vaccine-preventable disease epidemiology was reported in three studies, all of which focused on the impact of NPIs during the 2014 to 2015 Ebola outbreak across Africa (Table 1).24-26

During the Ebola outbreak, the affected countries implemented various NPIs including curfews, border closures, and restrictions on free movement.24-26 In Liberia, the mean coverage of the first dose of measles-containing vaccine (MCV1) during the outbreak in 2015 was 16% lower than in the 2 years preceding the outbreak. Correspondingly, the incidence of measles increased from zero cases in 2013 to 2014, to 108.5 cases per million in 2015.25 The incidence of measles in Sierra Leone increased from 6.9 per million in 2014 to 18.0 per million in 2015, during the Ebola outbreak, and remained high in 2016 and 2017.25 A nationwide measles vaccination effort was initiated in June 2015 to combat the rising case numbers, resulting in the vaccination of 1,205,865 children from 9 to 59 months of age (97.2% coverage).25 Following continued outbreaks of measles involving children over 5 years of age, an expanded measles immunization program was implemented in May 2016, which reached 2,795,686 children aged 6 months to 14 years with a coverage at the national level of 97.7% (95% confidence interval [CI]: 97.2% to 98%). A post-campaign survey revealed that 20.2% of the children received the vaccination for the first time.25 In Guinea, estimates of coverage for the third dose of diphtheria-tetanus-pertussis (DTP3) vaccine, single-dose yellow fever vaccine, and MCV1 showed declines as a result of the 2014 to 2015 Ebola outbreak. DTP3 coverage was on average 48.5% in 2012 and 2013, and 39.5% during the outbreak in 2014 and 2015.

Table 1. Impact of NPIs to tackle pre-COVID disease outbreaks on vaccine-preventable disease epidemiology (selected findings). Gray24 (general population, population-level data, Liberia; Ebola outbreak, 2014-2015): the number of presumptive tuberculosis cases dropped significantly, by nearly one-fifth, at the beginning of the Ebola outbreak, and there was a significant increase in the proportion of smear-positive to presumptive cases in the post-outbreak period, suggesting that the Ebola outbreak negatively affected tuberculosis care services.

Historical disruptions in disease epidemiology due to NPIs (pre-COVID)
A number of studies also reported low vaccination rates in nursing or care homes,38,39,42,44 and some studies reported suboptimal vaccine uptake, with no clear reason, despite the availability of NIPs (Figure 2).28,31,32,41 Successful NIPs have seen measles and polio effectively eliminated from several regions across the globe; however, these diseases have periodically re-emerged in recent years when vaccination rates have fallen below optimal levels. Local and regional outbreaks have resulted in increased disease-related morbidity and mortality in the Netherlands, Ireland, and several other countries across western Europe and Africa.28,30,32,34,36 In the Netherlands, a large measles outbreak resulted in 2,766 reported cases, of which 94% (n = 2,539) were reported in unvaccinated individuals, the majority unvaccinated for religious reasons (84%; n = 2,135).34
In response to this outbreak, early measles-mumps-rubella (MMR) vaccination was advised for infants too young to have received their first dose (MMR1), as they represent a highly vulnerable population due to the loss of maternal antibodies; a total of 5,800 infants received an early MMR1 vaccination. Another clear example of an outbreak linked to suboptimal uptake of measles vaccination occurred in Dublin from December 1999 to July 2000.32 During this time, 1,407 cases were reported in Ireland, and within a single hospital 111 severely ill children were admitted, with 13 needing treatment in intensive care, seven requiring mechanical ventilation, and three children dying as a result of measles. Of the 111 children, 49 (44%) were >15 months of age and therefore eligible for their first MMR immunization; however, only 18 (37%) had received this vaccination.

In 2011, measles outbreaks occurred in 36 out of 53 European countries. France reported the largest outbreak in the region, with 14,025 cases, predominantly among individuals who were not vaccinated or whose vaccination history was unknown.28 In each of these examples, NIPs were in place, but suboptimal uptake was observed, resulting in a resurgence of vaccine-preventable illness.

Between 2010 and 2011, measles vaccination rates in the Democratic Republic of Congo (DRC) were poor, with only three geographical areas achieving ≥89% coverage; subsequent epidemics resulted in 77,241 measles cases and 1,085 deaths.30 The DRC is prone to measles outbreaks, and supplementary immunization activities (SIAs) had previously been implemented with the aim of increasing measles vaccine coverage through catch-up programs targeting young children. Access to vaccination, as well as optimal uptake, is critical to reduce the risk of outbreaks. Despite being planned in 2010, the SIAs were not implemented.30

Polio outbreaks have also been observed due to low uptake of vaccination programs.36,37 Polio cases were observed in Ukraine following a significant decline in oral polio vaccine coverage, from 91% in 2008 to 15% in 2015; over the subsequent year, as vaccination rates increased and surveillance was strengthened, no further cases were reported.36 Several factors contributed to the decline in vaccination against polio in Ukraine, including misconceptions around vaccine safety, anti-vaccine sentiment, and insufficient funding. Elsewhere, in Afghanistan, following an ongoing ban on polio vaccination by anti-government elements, wild type 1 poliovirus (WPV1) cases increased from 13 cases in three provinces in 2019 to 26 cases in 12 provinces in 2020.37

In the US, several outbreaks of measles were reported between 2000 and 2015, and vaccine refusal due to nonmedical exemptions, such as religious belief, was a contributing factor in these outbreaks.47 A detailed review of vaccination data for 970 measles cases revealed that 574 cases occurred in unvaccinated individuals who were eligible for vaccination, with 405 (70.6%) of these individuals having nonmedical exemptions.47 During this same period, several pertussis outbreaks were observed in the US, including eight outbreaks in populations where 59% to 93% of pertussis cases occurred in children who were intentionally unvaccinated.47 Populations with higher vaccination exemption rates, including schools and communities/states, had correspondingly higher rates of pertussis, including among those who were fully vaccinated.47
Low vaccination rates have also been associated with several outbreaks of invasive pneumococcal disease (IPD) in nursing homes across the US.38,39,42,44 Among 361 long-term care facilities assessed in 2001, 8% failed to meet state regulations requiring pneumococcal polysaccharide vaccination (PPV) to be offered to all residents.38 In addition, a survey of 54 nursing homes found that only 22% of residents had been vaccinated and that the vaccination status was unknown for 66% of residents.44 The underuse of PPV in nursing homes can potentially be attributed to a lack of prioritization by doctors, skepticism regarding vaccine effectiveness, and challenges in obtaining residents' vaccination histories.38,42,44

Overall, previous evidence suggests that young children30,32 and older adults, including those in nursing homes and long-term care facilities,38,42,44 have been most affected by disease outbreaks due to low vaccination rates, implying the need to maximize efforts on vaccination coverage and uptake in these vulnerable groups.

Selected study-level findings reported in the accompanying tables include the following. Pneumococcal vaccine - CDC, MMWR38 (nursing home residents, n = 27, US; low vaccination rates in nursing or care homes, 2001): among 361 long-term care facilities during May 21-July 31, 28 (8%) did not meet the state regulation that requires offering PPV to every resident; among 52 patients whose medical records were reviewed, 34 (65%) had no history of having received PPV and no contraindication to the vaccine, and none of these patients had documentation of receipt of PPV while hospitalized. CDC, MMWR39 (patients at chronic care facilities, n = 267, US; low vaccination rates in nursing or care homes, 1996): the death rate among chronic-care facility residents with pneumonia ranged from 20% to 28%, and fewer than 5% of residents aged ≥65 years had vaccination records. Health protection report41 (general population, population-level data, UK; suboptimal vaccine coverage with no clear reason, 2020): PPV coverage among people aged ≥65 years remained roughly constant, at around 69% to 70%, between 2014 and 2020; many of those eligible for PPV vaccination did not receive the vaccine in the first year of eligibility but did in subsequent years, with additional uptake gradually decreasing with age. Pertussis: reported pertussis cases varied by month, from <100 in January 2010 to a peak of >1,000 in August 2010. Varicella vaccine - Glanz46 (children, n = 626, US; decision not to vaccinate,* 1998-2008): 133 cases were confirmed, of which 7 (5%) had parents who refused all varicella immunizations; the mean age of the cases was 3.9 years, and 55% were female.

Impact of COVID-19 disruptions on vaccine uptake
During the COVID-19 pandemic, mitigation measures such as lockdowns and school closures contributed to a sharp decline in the uptake of common childhood vaccinations (e.g., MMR, diphtheria-tetanus-pertussis [DTP], HPV), with the greatest impact felt in the countries with the strictest measures. In England, the operational delivery of all school-aged immunization programs was paused due to the COVID-19 pandemic, resulting in marked reductions in vaccination uptake. For example, a 29.7% reduction in meningococcal conjugate vaccine (MenACWY) uptake was reported in year 9 students in 2019 to 2020 compared with levels in 2018 to 2019.50 Similar reductions were reported for the priming dose of HPV vaccine in year 8 females (28.8%)51 and for Td/IPV in year 9 students (24%).52
Worldwide data indicated that 2020 vaccine coverage for DTP3 dropped to 83%, leaving 22.7 million children unprotected.57 MCV1 coverage decreased to 84%, whereas coverage with the second dose of measles-containing vaccine (MCV2) was relatively stable at 71% in 2019 and 70% in 2020.56

Vaccinations in older adults were also affected by the introduction of COVID-19 restrictions in England. The shingles vaccination program is open to adults aged between 70 and 79 years, and coverage at all ages was lower in the 2020 to 2021 financial year than in 2019 to 2020.54 For adults turning 70, those newly eligible for the shingles program, coverage dropped from 26.7% in June 2020 to 20.2% in June 2021.54 Similarly, coverage decreased by 5.4% in 70-year-olds and 7.1% in 78-year-olds in 2021 compared with 2018 to 2019.55

Impact of COVID-19 restrictions on infectious disease epidemiology
Several diseases (especially respiratory diseases) saw a rapid reduction in cases after the introduction of COVID-19 restrictions, due to reduced disease transmission as NPIs reduced social contact. However, research has suggested that although NPI measures had a beneficial effect in reducing disease incidence, they may also have led to an increase in disease susceptibility, potentially due to waning immunity against some non-COVID-19 infectious diseases.60,62,63 This immunity gap appears to have ultimately left populations at increased risk of subsequent vaccine-preventable disease outbreaks, with surges in cases and changes to the seasonality of diseases such as RSV, influenza, norovirus, and pneumococcal disease.60,62,63

Low rates of norovirus infection were reported in England during periods when COVID-19 lockdowns were enforced, but this was likely accompanied by an increase in population susceptibility, resulting in a rapid increase in infections (a 9% increase in symptomatic infections) as COVID-19 restrictions began to be lifted. The subsequent estimated annual incidence almost doubled compared with that predicted before the arrival of COVID-19.64 Although norovirus is not a vaccine-preventable infectious disease, the change in its incidence highlights the impact of the COVID-19 pandemic, and of the lifting of COVID-19 restrictions, on infectious disease dynamics. In Germany,59 where NPIs included quarantine after exposure, restrictions on large gatherings, mask wearing, workplace/retail closures and travel restrictions, correlations were reported between reductions in IPD cases and increased stringency of NPIs. IPD incidence dropped sharply in the second quarter of 2020 but rebounded to pre-COVID-19 pandemic levels by the beginning of the third quarter of 2021.59 In children ≤4 years of age, IPD levels began to return to pre-COVID-19 pandemic values in April 2021 and exceeded pre-COVID-19 pandemic levels by June 2021, showing a 9% increase over average monthly values for 2015 to 2019.59 Similarly, for the age groups 5 to 14 years, 15 to 24 years, and >80 years, increases in IPD cases began to be observed in spring 2021, crossing pre-COVID-19 pandemic levels in July 2021.59 In Switzerland, a drastic decline in IPD isolates was observed from February 2020 (n = 139) to April 2020 (n = 22), and numbers remained low until February 2021 (n = 19).58 COVID-19 measures were relaxed by the Swiss government on March 1, 2021, and by June 2021 the number of IPD isolates had returned to pre-pandemic levels.58

Reduced numbers of cases and shifts in seasonality for RSV have been reported globally.60,65,67
Australia was the first country to report an increase in RSV cases accompanied by a shift in the seasonality and epidemiology of disease.60 Similar effects have since been reported across North America and Europe.62,63,65 A decrease in RSV cases beyond mean seasonal levels was observed in the US following the introduction of COVID-related NPIs. Prediction models have suggested that longer periods of NPI enforcement would further reduce transmission rates, leading to an increase in susceptibility and ultimately resulting in larger RSV outbreaks. For example, modeling suggested that longer durations of NPIs (i.e., one year) would lead to larger RSV outbreaks and, more importantly, could also result in complex interactions affecting the normal seasonal pattern of disease.65

Selected study-level findings on the impact of COVID-19 restrictions, reported in the accompanying table, include the following. IPD59 (general population, population-level data, Germany): IPD incidence decreased sharply in the second quarter of 2020 and returned to baseline levels at the beginning of the third quarter of 2021. RSV - Foley60 (children, n = 917, Australia): COVID-19 public health measures contributed to a shift in the transmission of respiratory viruses, including a delay in the expected RSV season, with a summer peak of RSV-positive admissions 2.5 times the magnitude of the previous mid-winter peak. Halabi61 (children and adolescents, n = 143, US): the overall number of RSV cases decreased in 2020 to 2021 compared with both previous seasons, with an inter-seasonal resurgence; despite a lack of known risk factors, a higher proportion of children had severe disease in the 2020 to 2021 season. Hussain62 (children, n = 2,922, UK): in 2020 to 2021 there was a drop in bronchiolitis cases and no RSV cases were identified, most likely because NPIs reduced the transmission of viruses; in Wales, a reemergence of RSV bronchiolitis cases was observed at a rapid rate, out of sync with the usual seasonal pattern. van Summeren63 (general population, population-level data, Europe): RSV epidemics were observed in Europe during the 2020-2021 season only in France and Iceland, countries that had a policy of keeping their primary schools and daycare facilities open; in the Netherlands, the RSV epidemic started 19 weeks after schools were reopened, suggesting that school closures had an impact on RSV activity. Norovirus (general population, England): during the first lockdown until the school reopening stage, the rate of norovirus infection was sufficiently low that new infections were rare; the third lockdown period corresponded to a reduction in contacts (from 6.61 to 3.47) and the rate of infection again fell to low levels until schools were reopened; subsequently, model scenarios predicted a rise in the rate of infection and a resurgence of norovirus in the community, resulting in an annual incidence of cases up to 2 times higher than in simulations prior to 2020.
Multiple diseases - Baker65 (general population, population-level data, US): following NPIs due to COVID-19, a decline in RSV prevalence was observed beyond mean seasonal levels; the 2019-2020 influenza season was more severe than average, with a relative increase in prevalence prior to March 2020, followed by a decline to below-average levels across almost all US states; models identified that longer periods of NPIs, which further reduce transmission, lead to a greater increase in susceptibility and larger resulting outbreaks. Redlberger-Fritz66 (general population, n = 25,491, Austria): a rapid and statistically significant reduction in cumulative cases of influenza viruses, RSV, human metapneumovirus and rhinoviruses was seen within a short time after the lockdown in March 2020, compared with previous seasons; a reemergence of rhinovirus infections was observed after the lifting of lockdown measures. Wan67 (hospitalized patients, n = 42,558, Singapore): implementation of NPIs pre-lockdown was associated with a reduction in influenza and RSV, and a reduction in enterovirus/rhinovirus and adenovirus was only observed once lockdown was instated; during reopening, low levels of all viruses were sustained for approximately 13 weeks, but a reemergence of enterovirus/rhinovirus occurred in early September and a less pronounced rebound of adenovirus in mid-October.

The UK has also reported a rapid reemergence and increase in RSV bronchiolitis cases, likely to be out of sync with the usual seasonal pattern of infections.62 Conversely, in countries across Europe where a policy of keeping primary schools and daycare facilities open was implemented (including France and Iceland), typical pre-COVID-19 pandemic RSV seasonality was observed in 2020 to 2021.63 In the Netherlands, the RSV epidemic started 19 weeks after schools were reopened, also suggesting that school closures had an impact on RSV activity.63

Potential future impacts of vaccination disruptions due to COVID-19 restrictions on disease burden
Data from modeling studies suggested that the disruptions to vaccination programs experienced during the COVID-19 pandemic could lead to an increase in the number of cases of, and deaths from, other infectious diseases.68-73 For example, the number of excess deaths due to measles is expected to range from 0.24 to 1.16 per 100,000 persons during 2020 to 2030, based on data from Bangladesh, Chad, Ethiopia, Kenya, Nigeria, and South Sudan.73 For 2020 to 2023, modeling studies have also projected that the number of polio cases globally will increase from 4,657 to 5,557 despite any timely recovery in vaccination programs; further polio eradication activities are not expected to substantially impact the overall predicted trajectory.71
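To put the modeled rates above into absolute terms, the sketch below converts a rate per 100,000 persons into an expected count for a given population. The rates are the bounds quoted above for measles; the population size is purely illustrative and not taken from the cited studies.

```python
# Converting a modeled excess-death rate per 100,000 persons into an
# absolute count; the population size is hypothetical, the rates are the
# bounds quoted above (0.24-1.16 excess measles deaths per 100,000).
def excess_deaths(rate_per_100k: float, population: int) -> float:
    return rate_per_100k * population / 100_000

population = 50_000_000  # illustrative population, not from the cited studies
for rate in (0.24, 1.16):
    print(f"{rate}/100k -> {excess_deaths(rate, population):,.0f} excess deaths")
# -> 120 and 580 excess deaths for this illustrative population
```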
Selected findings from the modeling studies, reported in the accompanying table, include the following. Pneumonia69 (general population, population-level data, England and Wales): COVID-19 lockdowns were predicted to offset the increase in IPD cases resulting from a reduction in PCV13 coverage through a reduction in pneumococcal transmission, resulting in reduced pneumococcal carriage prevalence and IPD incidence for up to 5 years; the net reduction in cumulative IPD cases over the five epidemiological years from July 2019 was predicted to be 13,494. Pneumonia - Kitano70 (children, population-level data, Japan): a model analyzing scenarios for the next 10 years indicated a reduction in IPD incidence from 11.9 per 100,000 in 2019 to 6.3 per 100,000 in 2020, resulting from reduced transmission following COVID-19 mitigation measures; assuming a recovery in the transmission rate in 2022, the incidence of IPD was estimated to increase to maximal incidences of 12.1 and 13.1 per 100,000 children under 5 years in rapid and delayed vaccination scenarios, respectively, with no difference in incidence between the two scenarios after 2025. Polio: the COVID-19 pandemic led to disruptions in health services, including immunization campaigns against the transmission of WPV and cVDPV2, posing a challenge to the Global Polio Eradication Initiative; some resumption of activities in the fall of 2020 to respond to cVDPV2 outbreaks, and full resumption of all polio immunization activities to pre-COVID-19 levels on January 1, 2021, could mitigate the impact of COVID-19 delays in immunization campaigns. Third-dose diphtheria-tetanus-pertussis - Causey72 (children, population-level data, global, 94 countries): in 2020 there was a relative reduction of 7.7% for DTP3 and 7.9% for MCV1 compared with expected coverage in the absence of the COVID-19 pandemic; these estimates represent an additional 8.5 million children not routinely vaccinated with DTP3 and an additional 8.9 million children not routinely vaccinated with MCV1 attributable to the COVID-19 pandemic; reductions in vaccine coverage in March and April were identified for all Global Burden of Disease super-regions, with the most severe impacts in north Africa and the Middle East, south Asia, and Latin America and the Caribbean. Measles and yellow fever: reductions in vaccination coverage in 2020 may lead to an increase in measles and yellow fever cases; in Ethiopia and Nigeria, vaccination delays of one year may significantly increase the risk of measles outbreaks; for yellow fever, delays in vaccination lead to an increase of >1 death per 100,000 people per year until vaccination campaigns are resumed.

COVID-19 lockdowns have had a profound effect on infectious disease incidence in the UK, and modeling studies suggest significant impacts will be felt moving forward. Based on an existing model of pneumococcal transmission in England and Wales, simulating the impact of a 40% reduction in vaccination coverage and a 40% reduction in contact rates during the COVID-19 lockdowns introduced in spring 2020 and autumn/winter 2020 to 2021,69 a reduction in pneumococcal carriage prevalence and IPD incidence has been predicted to occur over a period of up to 5 years.69
The reduction in transmission due to social distancing is predicted to offset any increase in IPD cases due to any reduction in vaccine coverage. Vaccination coverage has been shown to be a more important driver of vaccine-preventable mortality than the timing of vaccination, as high vaccination coverage can override the effect of vaccination delay through herd immunity. Model scenarios for seven countries indicated that, irrespective of delays, the deaths averted by pneumococcal conjugate vaccine (PCV) were comparable when accounting for herd protection.68

Discussion
This TLR summarizes key data and learnings from past and current disruptions to human activity and vaccination programs that can impact the epidemiology of infectious diseases, to build a better understanding of the potential trends in infectious disease dynamics as we move out of the COVID-19 pandemic era.35,36,40,43,44,47 Low vaccine uptake and/or coverage can have many causes (e.g., lack of vaccination policy, political conflicts, parental vaccine refusal, vaccine procurement problems, anti-vaccination sentiment, vaccine safety concerns, and changes in vaccine schedules), but regardless of the cause, the evidence gathered in this TLR shows that low vaccination rates have contributed to an increased infectious disease burden, particularly in vulnerable population groups such as young children and older adults, in some cases leading to local and regional outbreaks of diseases previously brought under control.

The introduction of NPIs during the COVID-19 pandemic coincided with marked reductions in the incidence of many non-COVID infectious diseases.60-66 Although causality is difficult to prove, especially with regard to individual NPIs, further studies published since completion of the targeted literature search (in October 2021) have provided additional evidence that NPIs implemented during the COVID-19 pandemic coincided with reductions in vaccination rates and disease numbers, alongside, in some cases, changes in the seasonality of disease.74-78 Overall, the evidence suggests that the COVID-19 pandemic has significantly impacted the epidemiology of vaccine-preventable infectious diseases, at least in the short term and in many cases likely in the longer term. As the COVID-19 pandemic has eased and NPIs have been lifted, a small recovery in vaccination uptake has been observed,53,57 though vaccination coverage still remains generally lower than pre-COVID-19 levels in key populations such as young children.
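The rebound dynamics discussed here can be illustrated with a deliberately simple toy model. The sketch below is not any of the published models cited in this review: it is a minimal SIRS-type simulation with all parameters chosen for illustration, showing how a temporary cut in contacts suppresses incidence while susceptibles accumulate through waning immunity, producing a larger, delayed outbreak once contacts return; the herd-immunity threshold 1 - 1/R0 indicates the coverage needed to prevent such rebounds.

```python
# Toy SIRS model illustrating the "immunity gap" rebound described above.
# NOT a published model from this review; all parameters are illustrative.
R0, GAMMA, WANING = 2.0, 0.2, 0.01  # reproduction number, recovery, waning rates

print(f"Herd-immunity threshold for R0={R0}: {1 - 1 / R0:.0%}")

def simulate(npi_window=None, days=600):
    """Euler-stepped toy SIRS run; returns (peak infected fraction, peak day)."""
    s, i, r = 0.45, 0.001, 0.549
    peak, peak_day = 0.0, 0
    for day in range(days):
        beta = R0 * GAMMA
        if npi_window and npi_window[0] <= day < npi_window[1]:
            beta *= 0.6                      # 40% contact reduction during NPIs
        new_inf = beta * s * i
        s, i, r = (s + WANING * r - new_inf,
                   i + new_inf - GAMMA * i,
                   r + GAMMA * i - WANING * r)
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

print("no NPIs:          peak %.4f on day %3d" % simulate())
print("NPIs day 30-210:  peak %.4f on day %3d" % simulate(npi_window=(30, 210)))
```

In this toy run the NPI scenario produces a later and larger infection peak than the no-NPI scenario, qualitatively mirroring the post-restriction surges described for RSV, norovirus and IPD above.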
As time progresses, signs of recovery in vaccination rates are variable in extent and timing by geographical region.79,81,82,85,86,93,95,97,98,103 In addition, increases in social contact have contributed to the spread of disease, with, at least initially, a likely larger-than-normal pool of susceptible individuals. Using the example of pneumococcal disease, countries such as Germany and Switzerland initially reported reductions in cases during the pandemic, but cases have steadily increased since the relaxation of NPIs.58,59 Modeling of the impact of COVID-19 NPIs on IPD cases in England and Wales predicted reductions in cumulative IPD cases for up to 5 years.69 However, recent surveillance data from the UKHSA show that the number of IPD cases in 2020/2021 in England has already increased to a level similar to that reported in 2019/2020 in children less than 2 years of age (Figure 3), and to a higher level in children less than 15 years of age compared with the pre-pandemic years 2017-2019.104,106 Within the UK, evidence suggests a similar trend is occurring in meningococcal and other diseases.107 However, the relative contributions of low vaccination rates, increased population susceptibility, greater social contact, and the lifting of different NPIs are unclear, and it is difficult to directly attribute the resurgence of vaccine-preventable infectious diseases, like pneumococcal and meningococcal illness, to the lifting of NPIs and/or the reduced vaccination rates during the pandemic.

COVID-19 control measures continue to evolve as the pandemic and vaccination control measures change,21 making it difficult to predict future trends in infectious disease epidemiology. However, some data from England (e.g., pneumococcal disease in those aged <15 years) already suggest that a resurgence in disease to pre-pandemic levels is occurring,104,106 much earlier than predicted by modeling studies.69 Though the studies identified in this TLR were unable to elucidate definitive causality between low vaccination levels, disease rates, and COVID-19 control measures, some speculated on potential causes,48,50,52,72 with the suggestion of a trend toward larger declines in disease in areas with more stringent COVID-19 response measures and in lower-income countries.57 Consistent with this, multi-regional studies have shown that decreases in vaccination rates correlate with socioeconomic status,81,88,100,108 suggesting that changes in access to healthcare during the COVID-19 pandemic were a major contributor to decreases in vaccine uptake. Research from 170 countries has shown evidence of substantial disruptions to routine vaccination related to interrupted vaccination demand and supply, including reduced availability of healthcare staff.82 The decline in vaccine uptake during the pandemic may also have contributed to a change in attitude toward vaccinations in general and to concerns about the safety of vaccines,109,110 since rates remain low for some diseases despite the relaxation of NPIs.

Further hindering the interpretation of the data is the presence of several confounding factors that could contribute to the reported changes in the number of infections after COVID-19 mitigation measures were implemented, including reductions in the reporting capacity of surveillance systems.
Uncertainty also exists in the general understanding of which measures are the most influential in causing the observed changes in infectious disease incidence, and in whether study designs are adequate to control for confounding factors. The aim of this review was also to capture both historical examples of NPIs and the impact of recent COVID-19-related NPIs on disease epidemiology and national immunization programs. Although the electronic database searches from which the evidence base was generated were conducted in October 2021, pre-print sources were included, and additional manual ad-hoc searches for more recent evidence, including surveillance data, were conducted in June 2022. Given that the last of the national lockdown restrictions were lifted in July 2021, the key effects of the COVID-19 restrictions should have been captured; however, given the targeted nature of our review, our aim was not to capture all data during this period. Despite these issues, the burden of evidence suggests that the recent COVID-19 pandemic and the implementation of NPIs have led to significant impacts on non-COVID vaccine-preventable diseases, in a similar manner to past examples such as Ebola.

The recent pandemic may also have positively affected vaccination efforts by encouraging an increase in vaccine awareness, 109,111-117 which would explain increased vaccine uptake in selected groups (e.g., the elderly) 41,118 and in certain geographical regions. 80,86,87,103,116,119,120 These positive effects of the pandemic on the public perception of vaccines are ongoing areas of research, but they offer a public health opportunity to further improve rates of vaccine uptake and coverage and to prevent future outbreaks of vaccine-preventable diseases. As the pandemic progresses, it is important to continue to monitor vaccination uptake and disease rates closely to prevent disease outbreaks. Even though causality is uncertain, a return to normal social mixing suggests the need for vigilance to maintain high vaccination levels and prevent future disease outbreaks. The drop in case numbers also offers a unique opportunity to reset the endemic equilibrium for vaccine-preventable diseases to levels lower than in the pre-COVID era. Consequently, there is a window of opportunity to review vaccination and disease rates before we see further disease resurgence in populations and age groups currently unaffected. Actions to minimize interruptions in the delivery of immunization services and to plan and implement catch-up vaccinations are needed to mitigate the effects of the COVID-19 pandemic, as recommended by a recent WHO report. 121 These actions include improving access to vaccines, increasing the efficiency of vaccination schedules, and harnessing opportunities for the simultaneous administration of multiple vaccines.
7.6% of those unvaccinated had a nonmedical exemption to vaccination. Higher rates of vaccine exemption were associated with greater measles incidence. Pertussis: In at least 7 statewide pertussis epidemics, a substantial proportion of cases in certain age groups were unvaccinated or under-vaccinated. *Decisions not to vaccinate included philosophical exemptions, non-medical exemptions, and vaccine safety concerns. Abbreviations: CDC = Centers for Disease Control and Prevention; IPD = invasive pneumococcal disease; MCV1 = measles-containing vaccine, first dose; MMR = measles, mumps, and rubella; NR = not reported; OPV = oral polio vaccine; PCV = pneumococcal conjugate vaccine; PCV13 = 13-valent pneumococcal conjugate vaccine; PPV = pneumococcal vaccination; TAG = technical advisory group; UK = United Kingdom; US = United States; WPV1 = wild poliovirus type 1.

Table 2. Historical disruptions in disease epidemiology due to NPIs (pre-COVID).

In 1995, uptake of the measles, mumps and rubella (MMR) vaccine was over 90% in the UK. However, MMR vaccine coverage declined in the late 1990s due to controversy over the safety of the vaccine. Coverage with a first dose was 80% among 2-year-olds in England in 2003-2004. The effective reproductive number for measles rose from 0.47 in 1995-1998 to 0.82 in 1999-2000.

Measles outbreaks were reported between 2001 and 2019, with a yearly median of 6 outbreaks. A median of 36% of cases per year was due to international importation, and a median of 15.1% of US cases occurred in vaccinated people. Up to a median of 66.7% of vaccine-eligible cases declined to be vaccinated due to religious beliefs.

Vaccine coverage was over 89% in Likasi city, Lubumbashi city and Kipushi health zone, and below that in all the other health zones. Supplementary immunization activities coverage ranged from 70% to 89% across the health zones surveyed. 77,241 measles cases were reported during the 2010-2011 outbreaks in the Katanga province.

A measles outbreak started sometime between December 17 and 20, 2014 and led to rapid growth in cases across the US. This analysis estimated that MMR vaccination rates among the exposed population might be as low as 50% and likely no higher than 86%. MMR vaccination rates in many of the communities affected by this outbreak fell below the threshold necessary to sustain herd immunity, thus placing the greater population at risk as well.

A measles outbreak with 1,407 cases was reported between December 1999 and July 2000. Vaccination rates were suboptimal nationwide, varying from 60% to 88% at 2 years of age, with a mean uptake of 79% in 2000.

Vaccination coverage with the MMR1 and MMR2 vaccines decreased significantly during the period 2008-2016, from 96% to 45%, due to challenges in the procurement of vaccines in the country and antivaccination campaigns.

Measles cases were reported during the 2013-2014 outbreak; the first two cases were reported in unvaccinated children attending the same orthodox Protestant primary school. Vaccine coverage in these communities is around 60%, but varies widely between churches, with coverage reaching less than 30% among members of the most orthodox churches.

There were 0 polio cases between 2008 and 2014. Three cases were identified in 2015, when the vaccination rate decreased to 15%. After a response by the government to increase the vaccination rate and strengthen the surveillance system, 0 cases of polio were identified in 2016. WPV1 cases increased in 2020, as anticipated given the ongoing ban on polio vaccination by anti-government elements.
As of June 26, 2020, 26 cases had been reported from 12 provinces, compared with 13 cases from 3 provinces in 2019.

Table 2. (Continued).

During an outbreak of pneumococcal infection, 11 out of 84 residents were infected, resulting in an attack rate of 13%. Three patients died, resulting in a case fatality rate of 27%. Only 4% of the residents had received pneumococcal vaccine. From the survey of 54 nursing homes, 22% of residents were reported to have received the vaccine, and vaccination status was unknown for 66%. Two major barriers to vaccination were identified: low priority among physicians (43%) and difficulty in determining residents' vaccine history (37%). The exemption rate increased from 1.6% in the 2005-2006 school year to 2.4% in the 2009-2010 school year.

Table 3. Data on the impact of COVID-19 disruptions on vaccination uptake.

In the LAs (local authorities) where NHS providers delivered the Td/IPV vaccine to year 9 students, average vaccine coverage was 57.6%, compared with 87.6% in 2018 to 2019. Year 10 coverage for the Td/IPV vaccine was 86.4%, compared with 86.0% in 2018 to 2019. Coverage ranged from 35.3% in Bolton to 98.4% in Northamptonshire. Monthly prenatal pertussis vaccine coverage for the second quarter of 2021 decreased from 66.1% in April to 63.1% in May, and then increased to 64.4% in June 2021. Between April and June 2021, the difference between the highest and lowest prenatal pertussis vaccine coverage by STP for each month was around 50%, which was similar to the first quarter of 2021. 5% of those who turned 70 and 25.8% of those who turned 78 during 2019 and 2020 (from April 1, 2019 to March 31, 2020) were vaccinated by the end of June 2020; compared with 2018 to 2019, vaccine coverage decreased by 5.4% for 70-year-olds and 7.1% for 78-year-olds. In 2020, coverage dropped to 83% for DTP3, leaving 22.7 million children vulnerable. Regions with the strictest COVID-19 response measures experienced the largest increases in zero-dose children, because service provision and especially outreach activities were affected. Only 19 vaccine introductions were reported, less than half that of any year in the past two decades. Coverage dropped to 84% for MCV1, the lowest level since 2010, leaving 22.

Table 5. Potential future impacts of vaccination disruptions due to COVID-19 restrictions on disease burden.

Pneumonia: Overall vaccination coverage is a more important driver of vaccine mortality impact than vaccination timing. Irrespective of delays, deaths averted by PCV were comparable when accounting for herd protection. The greatest absolute difference in the number of deaths averted was observed for Nigeria, in which two to five weeks of delay and 26% vaccination coverage resulted in 600 additional deaths. Laos, with 7 to 28 weeks of delay and 78% coverage, had 21 additional deaths. The other five countries had an absolute difference of fewer than 200 deaths and less than a 3% relative difference in the numbers of deaths averted, with delays ranging from 0 to 11 weeks. Choi. 105

Figure 3. Cumulative weekly number of reports of IPD in England due to any of the 13 serotypes covered by the 13-valent pneumococcal conjugate vaccine (PCV13). Schools reopened on March 8, 2021 (Week 10); outdoor socializing was permitted on March 29, 2021 (Week 13); indoor socializing was permitted on April 12, 2021 (Week 15). Source: Adapted from UK Health Security Agency 104; Institute for Government. 105 Abbreviation: COVID-19 = coronavirus disease 2019.
2023-06-10T06:17:25.889Z
2023-06-08T00:00:00.000
{ "year": 2023, "sha1": "78dfe8148ab407db35c4accd504d6ae7e1e31377", "oa_license": "CCBYNCND", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21645515.2023.2219577?needAccess=true&role=button", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "cbd20d64406d845146065000082a8929abafb245", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
150998370
pes2o/s2orc
v3-fos-license
Emotionally Focused Family Therapy: Rebuilding Family Bonds

Relationships with parents, siblings, and other family members go through transitions as they move along the life cycle. Resilient families realign their relationships to respond to the changing demands and stressors within the family system. Those who are unable find themselves in repetitive patterns marked by conflict and distress, often resulting in their need to seek treatment. Based on attachment theory, Emotionally Focused Family Therapy (EFFT) is a pragmatic, short-term treatment approach designed to alleviate distress in family functioning. This chapter provides an overview of the EFFT process, its theoretical underpinnings and the strategies EFT family therapists employ to promote positive outcomes. The presentation of a case study provides a unique lens through which the therapist illustrates moment-to-moment interventions in an attempt to create new and more favorable family interactions, ones that enhance family members' feelings of attachment, empathy, communication and stability.

Introduction

In the last 20 years, research studies have demonstrated the effectiveness of emotionally focused couple therapy (EFT) in helping couples repair their distressed relationships. The natural extension and broader application of EFT couple's treatment can prove especially valuable and effective when working in a family system [1,2]. The foundational principles of Emotionally Focused Couple's Therapy are based on attachment and bonding theories that aim to help individuals gain greater awareness of their emotions and to provide them with strategies to effectively cope with, regulate, and transform their emotions [3]. It is a short-term, evidence-based approach that allows the therapist to set goals, target key processes, and chart a destination for couples to identify and remove the emotional blocks which derail the promotion of healthy functioning, while providing alternative approaches that serve to increase levels of attentiveness, empathy and feelings of attachment and belonging with one another. According to Johnson [4], EFFT is similar to emotionally focused therapy for couples, except that with families, the goal is "to modify family relationships in the direction of increased accessibility and responsiveness, thus helping the family create a secure base for children to grow and leave from." Working within a larger family system can be especially daunting as therapists attempt to navigate the vast landscape of family dynamics encompassing multiple, complex interpersonal processes between members, especially the powerful bonds that exist between parent and child, which, when weak or broken, are often the root of familial distress and dysfunction. The core of the human experience of a family lies within its ability to create supportive bonds that sustain it during turbulent and stressful times in its life cycle. The application of EFT to family treatment offers a practical, useful and expedient model from which to effectively bolster stronger and more empathic bonds between parents and their children. This chapter provides an overview of the EFFT process, its theoretical underpinnings and the strategies EFT family therapists employ to promote healthy family functioning.
Through a presentation of a case study, beginning therapists are provided a unique lens from which to view the interactions of both family and therapist as they attempt to create new family interactions, marked by increased parental accessibility and responsiveness to children, which ultimately leads to children's enhanced sense of attachment, communication, belongingness and security.

Theoretical framework

Emotionally Focused Family Therapy (EFFT) is an integration of humanistic [5] and systemic therapeutic approaches [6]. The focus of treatment is on the ongoing construction of a family's present experience and how patterns of interaction are organized and expressed between family members. Another significant aspect of EFFT is its detailed attention to emotions. Identifying emotions is viewed by the therapist as essential to how family members view themselves, others, and events. Emotions are hard-wired in our brain and are meant to inform us about our environment. They also contain physical impulses, which are designed by nature to be an immediate and adaptive call to action. In EFFT, emotions are categorized as primary and secondary. Primary emotions have been identified by researchers as universal emotions, such as joy, anger, fear, sadness, surprise, and shame. These emotions are frequently outside of people's awareness. Secondary emotions are defined as reactions, and they help people cope with their primary emotions. The word "emotion" comes from the Latin emovere, meaning "to move." Emotions are openly identified, shared and often reframed by the EFFT therapist as a vehicle to help family members navigate into new and more favorable patterns of interaction, ones that are more empathic and capable of building safe and healthier relationships. EFFT is grounded in attachment theory and based on the work of psychologist John Bowlby [7]. Bowlby maintains that human beings are biologically and fundamentally driven to pursue relationships that create security and belonging. He contends that the most critical attachment relationship is an infant's sense of protection created by the primary caregiver (typically the mother) through a series of reciprocal interactions which promote bonding and love. As Karen [8] in Becoming Attached says about love, "You don't need to be rich or smart or talented or funny; you just have to be there." A parent's emotionally engaged presence makes all the difference between disconnection and security. Throughout the life cycle, children and adolescents reach out to their primary attachment figures when they are in distress. If they experience parents as non-responsive or unavailable, it is natural for them to feel isolated, frightened and anxious. Feelings of insecurity in children are likely to heighten expressions that call for parental reassurance. Conversely, children may disengage and avoid expressing their distress, particularly in moments of need [9-11]. In either scenario, the resulting negative relational experiences foster instability and anxiety in the family system. In EFFT, one's sense of secure attachment is linked to positive mental health. Children who are securely attached are best able to turn to their attachment figures for comfort and support [12].
Mikulincer and Shaver [13] capture the distinction between these predictable patterns of attachment behavior, as shown in their research, when they describe secure vs. insecure scripts. The secure script is: "If I encounter an obstacle and/or become distressed, I can approach a significant other for help; he or she is likely to be available and supportive; I will experience relief and comfort as a result of proximity to this person; I can then return to other activities [13]." However, when the attachment system remains in an activated state, there are two different insecure coping responses. The avoidant (dismissive) approach "includes rapid self-protective responses to danger without examining one's emotions, consulting other people or seeking to receive help from them [13]." The implicit script is, "If I am in distress, I will carry on with other activities." In contrast, the anxious approach is described as always being on guard for threat and having difficulty receiving comfort. The implicit script is, "If I am in distress, I will reach for you and reach for you and reach for you, endlessly and to no avail." Attachment anxiety and avoidance are natural responses to a lack of confidence in the parents' emotional availability. Drawing from attachment theory, the EFFT therapist conceptualizes distress in terms of attachment dilemmas in which ineffective responses to attachment needs fuel miscommunication, creating parenting dysfunctions and exacerbating symptoms associated with individual psychopathology [14]. The therapist must obtain a clear understanding of the symptoms that generate distress in the family and, furthermore, evaluate the parents' availability and their children's confidence in that availability. These observations will provide the therapist with information about the attachment quality in the parent-child relationship. Insecure attachment is evident when the parent's capacity for empathy is blocked, giving precedence to feelings of anxiety and anger and leading the parent to view the child as difficult, antagonistic or uncooperative. In such instances, parents tend to blame the adolescent or child as solely the identified patient and remain oblivious to the underlying emotions of fear or sadness that are at play [15]. The EFFT therapist connects the child/adolescent's symptoms to their perception that the caregiver is unavailable and detached. This perception increases a child's anxiety, anger and defensiveness, which contributes to the presenting problem [9,16]. The goal of the EFFT therapist is to work through a series of interventions that reframe the family problem as one arising out of an attachment crisis, and subsequently to normalize family difficulties without blaming anyone [17]. Key to the EFFT process is understanding and integrating these core theoretical principles.

EFFT process: Steps and stages

The process of EFFT is categorized into three stages and nine treatment steps. In the initial four treatment steps, the therapist carefully focuses on assessing the interactive styles of the family and judiciously works to deescalate any conflicts as they emerge. In the middle phase of treatment (steps five, six, and seven), the therapist and family work in concert to find new ways to establish more secure familial relationships. In the final two steps of treatment, the therapist highlights and validates new patterns of positive interaction.
As importantly, the therapist reinforces family members' confidence to handle future conflicts and issues now that they are armed with greater empathy and understanding for one another. The stages and steps of EFFT are outlined and discussed below.

Stage one: Deescalating family distress

Step 1: Forming an alliance and family assessment.
Step 2: Identifying negative interactional patterns that maintain insecure attachment.
Step 3: Accessing underlying emotions informing interactional positions/relational blocks.
Step 4: Reframing the problem in light of relational blocks and negative interaction patterns.

The primary focus in stage one is for the therapist to identify and track behaviors and secondary emotions that fuel attachment insecurities. The therapist guides the family away from focusing on the content of their presenting conflicts toward developing a more attentive awareness of what underlies their expressed difficulties. The therapist accomplishes this task by tracking familial behaviors driven by intense emotion. As therapists understand, in times of distress, family members commonly deal with their feelings and interpersonal behaviors in unproductive ways. Some may withdraw, argue, submit, explain, or engage in other behaviors designed to minimize and distract from their emotional pain. In this stage, the therapist pays close attention to the interactive behaviors of the family and reframes maladaptive or secondary emotional responses in an effort to bring into awareness their negative cycle of interactions. A negative cycle is defined as a predictable interactional pattern that gets repeated and organizes the family around insecurity, rather than vulnerability. Negative cycles are fatiguing and destructive for family functioning. Tracking the cycle interrupts the behavior and reveals to the family, for the first time, their true underlying emotions and how their current behaviors serve as protective mechanisms to avoid discomfort and pain. Accessing primary emotions such as fear, hurt, and sadness creates empathy among family members, facilitates responsiveness, and helps the family deescalate [18]. During this phase of treatment, the therapist often returns to tracking interventions to reemphasize to the family the importance of understanding and dealing with the underlying issues of their discontent in order to enhance family stability and healthy functioning.

Stage two: Restructuring family interactions

Step 5: Accessing and deepening a child's disowned aspects of self and attachment needs.
Step 6: Fostering acceptance of the child's new experience and attachment-related needs.
Step 7: Restructuring family interactions, focusing on sharing attachment needs and supportive caregiving responses.

In stage two, the focus is on deepening and expanding primary emotions and unmet attachment needs in order to reshape attachment bonds between family members so that they are more secure and connected. The change event in stage two involves the therapist accessing the needs embedded in the newly expanded primary emotions that drive the negative family cycle, and helping family members learn to identify and request that previously unexpressed core attachment needs be addressed. The therapist intentionally structures interventions known as enactments that function to restructure attachment bonds between family members [14].
Typically, these requests are for direct care, contact, or comfort, and the shift is premised on the parents' ability to respond to their children's vulnerability. It is very common in this stage to observe parents having the desire to respond in a more emotionally connected way to their child while their empathy remains restricted. In such instances, the EFFT therapist will work with the parents to develop their capacity and ability to respond in a way that shifts family relationships toward more secure bonds, replacing negative and harmful cycles of interaction.

Stage three: Consolidation

Step 8: Exploring new solutions to past problems from more secure positions.
Step 9: Consolidating new positions and strengthening positive patterns.

Finally, in stage three of EFFT, positive cycles of bonding are consolidated and integrated into the life of the family. At the end of this stage, the family is best able to integrate new ways of engaging in discussions and investing in greater security [18]. Discussions are characterized by more openness, responsiveness, and engagement among family members. It is imperative for the family to learn how to repair failed attempts to connect outside of sessions. Before termination, the therapist affirms that the family is now able to handle its issues and conflicts by examining and resolving them in new and more effective ways. The therapist also focuses on amplifying the family's vision to include more mindfulness of positive affect, vulnerable reaching, and connection.

Core interventions

There are two primary sets of interventions utilized by EFFT clinicians to help families navigate through the various stages of the treatment process. These core interventions are designed to direct families toward developing relational bonds that enhance their security, communication and strength. The first set are interventions for accessing, expanding and reprocessing emotional experience. The second set are interventions for restructuring family interactions. The EFFT techniques used within these categories are described below, each followed by an example of a therapist's response to reinforce a more concrete understanding of the technique deployed. For a more detailed explanation the reader is referred to the EFT manual [3].

Empathic reflection

The therapist reflects (names, orders, or distills) emotional processing as it occurs. This slows down the process, directs and focuses attention inward, and helps the therapist attune to the client's experience, thus conveying understanding and helping to create alliance. Empathic reflections need to be specific and vivid in order to move the client into a deeper awareness of their emotional experiencing.

Therapist: "I think I hear you say that you become so anxious about his future that you find yourself wanting to control, wanting to know what he has in mind because not knowing or not having 'a say' is so overwhelming. Is that it? And then you become very critical with your son. Is that right?"

Validation

Validation conveys that the client is entitled to their experience. Such statements function to affirm and legitimize the client's experience as understandable, given the attachment relationship context. Validating statements start with, "It makes sense that you would feel this way, given (state specific context)."

Therapist: "That makes sense to me, that when you feel that things are about to escalate between you and your mom, you go away, and you avoid any conversation. Is that right?"
Evocative responding

Through the use of questions, evocative language, and metaphors, the therapist opens up the client's experience and encourages them to take another step toward it.

Therapist: "What's happening right now as you hear him say that?" "What's it like for you when she follows you around the house, pushing for your attention?"

Heightening

This intervention intensifies, clarifies, and deepens an emotion through persistent focus, reflection or enactments, thus allowing the client to identify and accept their emotional experience. The therapist's pacing, tone and timing are significant. The acronym RISSSC, implying emotional risk [3], represents how this intervention is done: with repetition, images, speaking simply, softly, slowly, and using the client's words. The soft tone heightens vulnerability and soothes the dysregulated brain, so the client can process clearly.

Therapist: "This sounds really important; can we stay here for a bit? I think I hear you say that deep down you really go to a bad place, a place where you get the message that you are nothing but a failure in their eyes. A real disappointment for a son, and that makes you feel so sad, so hurt inside."

Empathic conjecture

The therapist offers an interpretation of the client's experience, or a hunch seen through the attachment lens. This facilitates more intense experiencing, from which new meanings and an expanded awareness may arise. It is important to convey tentativeness when offering a conjecture and to check whether what is communicated matches the client's experience.

Therapist: "As I listen to you, I hear you saying that you are angry about her lack of concern for you, but I see the tears in your eyes and I wonder if you are also saying that you are hurt by her lack of concern. Does that seem to fit?"

Restructuring interventions

The following interventions are used in EFFT to address the restructuring task.

Tracking and reflecting interactions

Reflections that track family members' behaviors slow down and clarify the interactional process.

Therapist: "So, when Alex gets frustrated and walks away ignoring what you say, you get angry too and follow him. You need him to listen to you. And when your mom follows you around wanting your attention, it makes you shut down even more."

Reframing

Interactions are reframed in the context of the negative cycle and attachment needs. An attachment reframe functions to access a positive meaning or intention behind a seemingly negative response. It shifts the view of the member to a positive portrayal.

Therapist: "You don't experience that the louder she gets, the more desperately she is trying to find you. It sounds as if she is upset with you, but she is doing everything she can to get close to you."

Creating enactments

The therapist requests direct sharing of a clearly distilled message from one family member to another. Enactments, the most powerful intervention in EFFT, function to heighten emotional experience and reshape interactions among family members, leading to positive cycles of accessibility and responsiveness.

Therapist: "Can you tell her, 'I go away because I don't want things to get worse between the two of us.' Can you tell her this?"
Case illustration

To help illustrate EFFT treatment in action, a case study of a family recently seen by the author is provided below.

The Aldo Family: Presenting concern and relevant history

The family is composed of James and Penny (names and identifying information have been altered), a professional couple in their early 50s, married for 28 years. They have two children: Ellie (23) and Alex (19). The couple has been on and off in couple therapy for a year. The presenting problem described by the parents focused on their son Alex, who had told them at the end of his third semester in college that he wanted to drop out because "this kind of education" was not for him, and he did not see how it would help him get a job. Both parents were very upset and, after much discussion, hesitantly agreed to allow him to take a "gap year." It was their understanding that after the year break, Alex was to resume his studies. During that time Alex worked as a waiter, earning spending money while living at home. His work hours provided him with the flexibility to develop an online business that in the long run became a source of income. Alex enjoyed being independent and learning about the world through travel, reading and much YouTube video viewing. A year later, his goal was to be an entrepreneur and not re-enroll in university. Both parents were extremely upset with Alex and had tried to talk "some sense" into him, but to no avail. It was at this point that Penny, the mom, requested a family session. During the first two sessions the therapist met once with the entire family, in order to assess how they viewed the problem, and once individually with Alex, in order to develop an alliance and get to know him better. Alex was a slender young man with short blond hair and green eyes. He appeared younger than his years and was soft-spoken as he stated that he was eager to start the process. Alex perceived his mother as critical, with strong opinions about a college education, and persistent about him returning to school. This made him angry, and he said that he frequently avoided conversations with her because they always ended up on the topic of his future. Mom viewed her son as unreasonable and disrespectful because he ignored her questions and refused to engage with her. She experienced him as spoiled, entitled and selfish; this made her feel frustrated. James agreed with his wife and said that the tension between Alex and his mother stressed him, but he did not know what to do to resolve the issue. Right from the start, the EFFT therapist aims to understand the ways family members react to each other and tracks their interaction pattern. As family members discuss how they each perceive their concerns, reactive emotional responses are expressed or suppressed, thus allowing the therapist to witness the negative interaction pattern firsthand. The therapist tracks and reflects the behaviors that elicit the negative response and begins to identify the family pattern that is associated with the problem [3,4]. It was obvious that this family was caught in a reactive pattern of defensiveness, which escalated with increasing anger and frustration. The family's escalation included mom trying to advise Alex and Alex avoiding the conversation. The more mom insisted on engaging him, the more Alex ignored her, and she would get so upset that she would turn to her husband for help. James, not knowing what to do, would try to calm her by promising he would talk to Alex.
However, his approach was not successful either. The more they tried to talk to him or present him with consequences for his actions, the more Alex pulled away. The more he pulled away, the less valued they felt. It appeared to be a hopeless situation.

Stage one: Family de-escalation

What follows is an actual dialog from the initial sessions with the family. This excerpt highlights the goal of stage one treatment: to track the cycle between Alex and his mom and attempt to deescalate the tensions between family members.

ALEX: Well, yes… she is unbelievable. She asks me questions, a lot of questions about what I am going to do with my life, and I do answer her, but a few days later she is asking me the same questions!

THERAPIST: All these questions coming your way, regarding your future; you answer them, and then she asks again.

MOM: (in soft voice) Yes.

THERAPIST: Do you think Alex knows that? What would it be like to share a little bit of that with him? That underneath your anger you feel sad because you think that he does not value you? Can you tell him that?

Treatment focus and progress in stage one

In the above excerpt the therapist looks at the pattern as it unfolds in the room between Alex and his mom. Family de-escalation occurs as Alex and his mom begin to understand their part in the negative interaction pattern and how their attachment-driven behaviors trigger predictable responses in each other. In this case, every time mom needed to be assured that Alex was on the right path regarding his future, she asked questions, which in turn triggered Alex and made him feel that an argument was imminent and that he would disappoint his mother. He then pulled away to avoid the argument, leaving mom feeling sad, not valued, and fearful that she was failing as a mother. This triggered mom, and she then followed Alex around the house insisting that he engage with her. Alex would get more frustrated and eventually would leave the room, thus confirming mom's fear of not being valued. The therapist helps both uncover these deeper emotions and then invites them to do an enactment; in other words, to turn toward each other and engage in a different conversation. Until now, neither was aware of how they protected themselves in their relationship, nor had they been able to talk about their underlying feelings. The enactment is successful, and both Alex and mom have a new understanding of each other's behavior. He expresses that he values her and wants to be able to talk with her without arguing, because it does not feel good to either of them. They both share in the new experience of staying engaged. This awareness shifts the focus from blaming each other to owning their contribution to the negative cycle. In turn, this begins to alter their experience; they feel calmer and more open. A level of safety is created that will allow us to go deeper into vulnerabilities in the next stage.

Stage two: Restructuring family interactions

What follows below is an example of actual dialog used to illustrate the process of restructuring family dynamics:

THERAPIST: A few sessions ago you talked about feeling sad because you see yourself as a disappointment for your parents. Do you remember?

ALEX: Mhmm.

THERAPIST: I guess I am curious to know more about this place that you go to… when you feel that… you are a disappointment. Is it okay for us to go to that place?

THERAPIST: Sure, it makes sense. And… who sees you in that place? Who knows about that?

ALEX: Nobody knows. Nobody sees how much I try to make them proud of me.
Instead, I am told that everything I do is wrong. My whole approach is wrong, I am all wrong! (eyes closed)

THERAPIST: That's really painful; it's hard for you. (Long pause)

ALEX: Sometimes it feels that I might be running out of time… you know… my dad had problems with his heart last year. (At this point Alex, with his eyes closed and tears running down his cheeks, can hardly speak. After a long pause he continues.) I am afraid that I might not have the chance to prove myself and it will be too late. And that maybe I should give up on my ideas and listen to theirs because it will be faster, but then I get conflicted and I think that it's not right to do something that I do not believe in. And I really believe in this. I do not want to disappoint them, but I do not want to disappoint myself either.

THERAPIST: Wow! It feels like you are running against time and you have to choose: your parents or yourself. Neither is a good option; and so you go to that place and you struggle, and you are confused and scared and alone, trying to figure things out.

Alex is sobbing, and his dad reaches over and hugs him. His mom moves over and she, too, sits beside him and hugs him.

THERAPIST: Alex, your parents are right beside you. They want to understand. Can you let them in to that place where you are alone and sad?

ALEX: I am scared when I think that something suddenly might happen to dad or to you (mom), like last year, and then you would not have the chance to see what I accomplished and be proud of me. Then you will never know that I am capable and that it's ok to do it my way.

THERAPIST: That is scary, to think that something might happen to either of your parents while you are trying to prove yourself, trying to get it right and not disappoint while you still have time.

DAD: I am so sorry that you are so hurt. I am, we are not disappointed in you, and we do not want to "fix you" or "change you." We love you no matter what you do, and now that I know, I will do anything to be there for you. I am sorry that our pushing, our way of trying to help you, caused you so much pain. We love you and want to support you, in a way that is best for you.

ALEX: I could leave the house, but I really want to work on our relationship, because it is important that I have both of your "blessings" as I move on. It is important that I leave "the nest," as you say, knowing that you are proud of me and you love me, even if I failed. It's like the baby bird trying to fly out of the nest. The parents have to trust that he can do it, although they may not know for sure. If the baby bird falls, he needs his parents to lovingly encourage him to try again. Sometimes he flops around for a little bit before the parents rush in to help, and that is ok. The little bird is learning even if he falls, even if he breaks a wing. Keeping the bird in the nest or constantly giving him directions on how to fly is constraining; he will not find his way. I guess what I am asking is… do you think you can be there as I try to figure things out? I want to find my way, and can you trust that I will be okay, without flying in to help me or trying to change my path?

DAD: "I had no idea that you felt this way; that you have been trying to fly out of the nest. I didn't see all this as your attempt at figuring things out. What I thought I saw was a little bird taking advantage of the safety provided by our nest, and unless we pushed, you were not going to fly.
I see now how that hurt you and how it made you feel that we didn't trust you. I love you and want to support you, and it's pretty incredible to hear what has been going on for you."

At this point Alex is weeping in his father's arms. Mom joins in the hug and, after a small pause, with tears in her eyes says:

MOM: "I am so sorry I hurt you. I get scared and I rush in to help you, to save you, to show you, and that makes you feel that I don't believe in you. I want to be there for you. I don't want you to feel this way."

Treatment focus and progress in stage two

In the above excerpt, Alex begins to talk about how scary it is to feel that he disappoints his parents and how he wants to make them proud before he loses either of them. His parents remain open-hearted and open-minded as he engages with them from a vulnerable place. They see his pain, hurt and fear. Dad not only sees from afar this terrible place that his son struggles in but can stand side by side with him there. His presence is felt, and his apology makes a huge difference to Alex. For the first time, Alex feels seen and understood at a much deeper level, and this allows him to clearly articulate his attachment needs. Mom and dad worked together to respond to Alex. Often parents cannot empathize because they get caught up in their own secondary responses of fear. Staying present with Alex in his vulnerability allowed both parents to experience how Alex's problematic behavior was related to the family's negative cycle of interaction. In a later session, both parents were able to articulate their fear of failure, and Alex was able to hear this and understand much of their stress as parents. He then reassured them, "You have been great parents, given me so much. I hope to be able to offer my kids what you have offered me. I love you both and I don't want you to feel that you have failed as parents." Additionally, he expressed regret for his past behavior toward his mother. Alex began to ask for contact, and this continued in the following sessions, which helped to bring them closer together.

Stage three: Consolidation

What follows below is an example of actual dialog used to illustrate the process of consolidation:

MOM: Things are good. Alex initiated a conversation earlier this week where he confided in me and asked for my advice. He was telling me about an incident that happened at work and how he handled it, and then asked for my opinion, how I would have handled it.

ALEX: (smiling) That was nice, and different from times in the past. She did not do anything other than just listen. (Turning toward his mom) You did not try to fix or problem-solve with me the way you used to, with all the questions. You listened to me for a long time, and then I remember that I asked you for advice. You said that you agreed with how I handled the matter and you would have done the same. It really felt good to talk to you like an adult without running away or avoiding you. I want to say thank you for that, because I feel less tense and more relaxed.

THERAPIST: That's really great, Alex, that it felt good to approach your mom, discuss something that was important to you, and ask for her input. And it sounds like you both had this conversation in a different way than before. In a way that even feels different in your body.

ALEX: Yes. Growing up and doing things differently than the way your parents expect is hard and can be kind of scary. Knowing that they are open and that my mom is there without judging me feels great.
MOM: I am so glad that we turned a corner. I am always here for you, no matter what, and I want to be the mom you want me to be.

Treatment focus and progress in stage three

In the above excerpt, mom discovers during treatment that she could help her son through her attentive presence. She understands that she did not have to solve Alex's problems or go "undercover" to find out what he was doing and, as a result, this helped her stay more connected with him. The relationship became safer, closer, and more equal. Both were able to confide in and support each other, which is the desired outcome of stage three treatment.

Conclusions

Treating families in distress is extremely challenging for family therapists. Professionals working with families, especially neophytes, commonly feel uncertain and discouraged as they attempt to navigate the vast landscape of family dynamics encompassing multiple, complex interpersonal processes between members. As a result, family therapists find themselves negotiating or offering solutions to presenting problems, rather than focusing on the underlying issues that are at the root of the dysfunction. Unfortunately, they soon realize the techniques used are not effective, and before long the family members cycle back to where they started. This makes therapists feel inefficient and ineffective, and they may therefore shy away from doing family work. Having access to a practical, organized and effective model for working with families is pivotal if practitioners are to make meaningful differences in the lives of the people they serve. EFFT arose from the realization that the change principles used in EFT could be applied to family relationships, thus changing the cycles of interaction [3]. EFFT is a powerful and efficient way to assess and create positive change within the family system. At its core, EFFT views family distress as a result of attachment insecurity, where family members fail to get their attachment needs met. Such families do not possess the skills necessary to express their attachment needs and protect themselves by becoming defensive, beginning a negative cycle of interaction which prevents healthy family functioning and stability. Accessing underlying attachment-related emotions and the needs associated with these emotions opens the family to addressing those needs in new ways [3]. Corrective emotional experiences create safety that changes family relationships and most likely impacts future generations. Tapping into parents' unconditional love is powerful; it offers families great hope and holds tremendous promise in revitalizing the field of family therapy.

Author details: Katherine Stavrianopoulos, John Jay College of Criminal Justice, The City University of New York, New York, USA. Address all correspondence to: stavros@jjay.cuny.edu
2019-05-13T13:05:18.744Z
2019-02-16T00:00:00.000
{ "year": 2019, "sha1": "44114f72fa81a354f678efca33e5da272f5db1a2", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/65598", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "8dd51612d15016046b0b2abf32e6f8fc9cd3ddbe", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
28437163
pes2o/s2orc
v3-fos-license
Genetics and the Future of Human Longevity

old people in society. Much of this work has focused on genetics. It is perhaps noteworthy that the discovery by Watson and Crick1 of the double helical structure of DNA occurred in the same decade as the first genetic theories on the evolution of ageing were proposed by Medawar2 and Williams3, and when two mechanistic theories of ageing, the free radical and somatic mutation theories, were suggested by Harman4 and Szilard5, respectively. A union of evolutionary and mechanistic theories occurred in 1977, in the form of the disposable soma theory of ageing6,7. In recent years the evidence for genetic factors being involved in ageing has expanded at a great rate8-10. The major lines of empirical evidence for the role of genetic factors in ageing are as follows: first, life span in human populations shows significant, though low, heritability11,12 (in the order of 20-35%); second, different species have different intrinsic life spans which can reasonably be attributed to differences in their genomes; third, in human populations there exist inherited progeroid disorders such as Werner's syndrome13, in which affected individuals have a complex phenotype characterised by premature development of a variety of age-related diseases, including arteriosclerosis, type II diabetes, cataracts, osteoporosis and cancers; fourth, in invertebrate model systems such as the fruitfly, Drosophila melanogaster, and the nematode worm, Caenorhabditis elegans, clear evidence of genetic effects on life span has been discovered14,15. As the 20th century draws to its close, the amount of genetic information concerning human health and disease is expanding at an enormous rate, due to the efforts of the various human genome projects. What will be the impact of this research on human longevity in the 21st century and beyond? It is already clear that the science of human ageing will be perhaps the preeminent biomedical research challenge in this period.

Terminology

In human gerontology the words 'ageing' and 'senescence' are used more or less interchangeably, and this will be the practice here. This is not to deny the importance of development and maturation, which some also count as 'ageing', but my primary concern is with the declines in structure and function that unfold gradually and progressively during adulthood. The measure of senescence most commonly used is one based on the increase in age-specific death rates16,17. In 1825, Gompertz18 observed that adult human mortality rates show an approximately exponential rise with increasing chronological age, and similar patterns have been noted in other species17.
The Gompertz model has been generalised by adding a constant to represent age-independent mortality due to extrinsic causes19, and the resulting model for the mortality rate can be written as

\mu(x) = \alpha e^{\beta x} + \gamma

where α, β and γ are constants and x denotes age. The parameter β denotes the 'actuarial ageing rate' and determines how fast the age-dependent component of adult mortality increases with time. The parameter α denotes 'initial vulnerability' and acts as a scale parameter for the age-dependent component of adult mortality (note that the Gompertz model does not make any attempt to describe juvenile mortality). The parameter γ denotes the age-independent component of adult mortality. There is some evidence that human mortality increases more slowly than the Gompertz model predicts among centenarians20, but it is not yet clear whether this slowing at extreme old age reflects: (i) genetic heterogeneity within the population, (ii) particularly assiduous care of the oldest old, or (iii) intrinsic biological processes. Genetic heterogeneity is likely to be at least part of the explanation, as centenarians probably comprise a genetically robust subset of the population whose below-average ageing rate becomes apparent only when the frailer genotypes have already died. An exponential increase in mortality rates within human populations does not require that the underlying physiological processes follow an exponential course.
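To make the behaviour of this generalised (Gompertz-Makeham) model concrete, the short Python sketch below evaluates μ(x) across adult ages. The parameter values are illustrative assumptions chosen only to produce plausible human-like rates; they are not estimates fitted to any real mortality data:

```python
import numpy as np

def gompertz_makeham(x, alpha, beta, gamma):
    """Adult mortality rate mu(x) = alpha * exp(beta * x) + gamma."""
    return alpha * np.exp(beta * x) + gamma

# Illustrative parameter values only (not fitted to real data):
# alpha = initial vulnerability, beta = actuarial ageing rate,
# gamma = age-independent (extrinsic) mortality.
ages = np.arange(30, 101, 10)
rates = gompertz_makeham(ages, alpha=5e-5, beta=0.085, gamma=5e-4)

for age, mu in zip(ages, rates):
    print(f"age {age:3d}: mortality rate {mu:.4f} per year")

# The actuarial ageing rate beta fixes the doubling time of the
# age-dependent component of mortality:
print(f"doubling time: {np.log(2) / 0.085:.1f} years")
```

With β set near 0.085 per year, the age-dependent component of mortality doubles roughly every 8 years, the kind of steady exponential rise that Gompertz described.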
Theories on evolution of ageing

Theories on evolution of ageing seek to explain why ageing occurs and to identify what kinds of genes are responsible. The puzzle, of course, is to explain why ageing occurs in spite of its clearly deleterious impact on Darwinian fitness. Because ageing is so obviously deleterious for the individual, attempts have been made to explain its evolution in terms of an advantage to the population as a whole21. Is it a form of population control to prevent overcrowding? This theory is given little credence today: first, because there is no evidence that animal numbers in the wild are regulated to any significant extent by senescence, most deaths occurring at younger ages from extrinsic causes such as predation; and second, because it invokes the controversial concept of group selection, which is unlikely to be effective in this context. Nevertheless, these ideas periodically reappear, presumably because they appeal to the notion that ageing is programmed like development and will yield to the same kinds of genetic analysis that have proved so successful in developmental biology. Greatest weight is now attached to evolutionary theories which are 'non-adaptive', in the sense that they do not suggest ageing confers any fitness benefit of itself, and they recognise that it may indeed be harmful. The non-adaptive theories explain the evolution of ageing through the indirect action of natural selection. One such theory is the 'mutation accumulation' theory2. This is based on the observation that natural selection is relatively powerless to act on genes which express their effects late in the lifespan, at ages when, because of extrinsic mortality, survivorship has fallen to a low level. The assumption is that in the starting population there would be no age-related increase in intrinsic mortality, otherwise the theory would be circular. In such a context, late-acting deleterious mutations are predicted to accumulate over a large number of generations within the genome. The practical consequences of such an accumulation would be minimal in the wild environment but will have a serious effect upon the organism if it is moved to a protected environment. In the protected environment, the reduction in extrinsic mortality permits survival to ages when the intrinsic effects of the accumulated mutations are felt. In other words, ageing has evolved where beforehand it did not exist. A second concept invokes the idea that there may be pleiotropic genes whose expression involves trade-offs between early-life fitness benefits and late-life fitness disadvantages3. Like the mutation accumulation theory, this 'antagonistic pleiotropy' theory rests on the observation that the declining force of natural selection provides a differential weighting across the life span which will ensure that quite modest early-life fitness benefits outweigh major fitness disadvantages in later life. The trade-off principle is also at the heart of the 'disposable soma' theory6,7,22. This theory provides a direct connection between evolutionary and physiological aspects of ageing by recognising the importance of the allocation of metabolic resources between the activities of growth, somatic maintenance, and reproduction. Increasing maintenance promotes the survival and longevity of the organism, but only at the expense of significant metabolic investments that could otherwise be used to accelerate growth and reproduction. It has been demonstrated with formal models that the optimum allocation strategy results in a smaller investment in maintenance of the soma than would be required for indefinite lifespan23,24. In effect, the organism sacrifices the potential for indefinite survival in favour of earlier and more prolific fecundity. Three categories of genes are thus predicted by the evolutionary theories to affect ageing and longevity:

1. Genes that regulate levels of somatic maintenance and repair;
2. Pleiotropic genes involved in trade-offs that do not include somatic maintenance;
3. Purely deleterious late-acting mutations that have escaped elimination due to the decline in the force of natural selection at old ages.

Martin et al10 have suggested the terminology 'public' and 'private' to distinguish genes associated with ageing that are likely to be shared or individual. Genes involved in trade-offs, especially genes regulating fundamental aspects of somatic maintenance such as antioxidant systems, are expected to be public. Conversely, late-acting deleterious mutations are expected to be private, since the fate of these alleles will be strongly influenced by random genetic drift.

Implications of the evolutionary theories

A number of implications follow from the evolutionary theories. First, it is predicted that multiple kinds of genes contribute to senescence and that the total number of such genes may be large (Fig 1). This suggests that uncovering the genetic basis of senescence will be a complex task requiring a combination of approaches and methodologies. Second, the theories readily explain differences in the rate of ageing between different species, which are likely to be the result of different levels of extrinsic mortality. This is because extrinsic mortality determines the rate of decline in the force of natural selection. Extrinsic mortality also has a major effect on the optimal allocation of energy between maintenance, growth and reproduction, species at higher risk from extrinsic mortality being expected to invest relatively less in maintenance and more in reproduction. A toy numerical sketch of this allocation trade-off is given below.
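Here is the toy numerical sketch referred to above. It is a deliberately minimal caricature of the formal optimisation models cited earlier; the functional forms and parameter values are invented for illustration only, chosen so that an intermediate optimum emerges:

```python
import numpy as np

def fitness(m, extrinsic):
    """Lifetime reproductive output in a toy disposable-soma model.

    m is the fraction of resources allocated to somatic maintenance.
    In this toy model, full maintenance (m = 1) abolishes intrinsic
    mortality, i.e. it is the level needed for an indefinite lifespan;
    whatever is left over, (1 - m), funds reproduction.
    """
    intrinsic = 0.5 * (1.0 - m) ** 2   # damage accrues as maintenance falls
    hazard = extrinsic + intrinsic     # total death rate
    fecundity = 1.0 - m                # resources left for reproduction
    return fecundity / hazard          # fecundity x expected lifespan

m_grid = np.linspace(0.0, 1.0, 1001)
for extrinsic in (0.02, 0.10):         # safe vs risky environment
    m_opt = m_grid[np.argmax(fitness(m_grid, extrinsic))]
    print(f"extrinsic hazard {extrinsic:.2f}: optimal maintenance {m_opt:.2f}")
```

Running this prints an optimal maintenance allocation of about 0.80 under the low extrinsic hazard and about 0.55 under the high one. In both environments the optimum falls short of the full maintenance (m = 1) that indefinite survival would require, and it falls further as extrinsic mortality rises, which is the qualitative prediction of the disposable soma theory.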
The disposable soma theory thus predicts higher levels of maintenance in somatic cells of long-lived species, for which there is growing evidence. Third, in the case of the disposable soma theory, there is a clear prediction that the actual mechanisms of senescence will be stochastic, involving processes like the random accumulation of somatic mutations or oxidative damage to macromolecules. Biological gerontology has long been divided between the 'programme' and 'stochastic' views. The idea that ageing might be due to the accumulation of random damage, but that the rates of damage are programmed in a statistical sense through the evolved settings of the maintenance systems, offers some accommodation of these apparently opposite views. A fourth implication of the evolutionary theories is that senescence may be malleable. From the human perspective, the trade-off principle is one that needs to be borne in mind when considering possible interventions in the ageing process. Interventions that would increase longevity or postpone a late-age disease may turn out to have side effects due to the existence of trade-offs.

Genetics of human longevity

Two strategies have been delineated to identify genes associated with human longevity11. The major interest is in genes that may confer above-average or extreme longevity, since there is potentially a large number of alleles that shorten life span through mechanisms that are unconnected or only indirectly connected with ageing. One strategy is the 'candidate gene' approach using case-control methodology. The aim is to identify extremely long-lived individuals and compare their allele frequencies at the candidate gene locus to the allele frequencies of a control population, who will be less long-lived individuals from the same genetic background. This assumes that the controls are unlikely themselves to reach the extreme old age of the 'cases', which is not unreasonable if the age criteria are appropriately defined. Candidate gene studies have identified significant differences in allele frequencies between centenarians and controls at the HLA25,26, apolipoprotein E27, and angiotensin converting enzyme loci27, but have not so far been applied to their full potential. The second approach is the sib-pair method designed to detect loci that segregate within kin groups with traits of interest, such as inherited diseases. In the case of ageing, the trait of interest is extreme longevity. This method requires the recruitment of a sufficiently large sample of extremely long-lived sib pairs and its application has not yet been reported. In the case of progeroid diseases, 1996 saw the identification by positional cloning of the gene responsible for Werner's syndrome, which appears to code for a DNA helicase28. This finding is highly significant in that it supports the idea that accumulation of DNA damage may be a contributing factor to ageing, especially in dividing cells. In patients with Werner's syndrome post-mitotic tissue is relatively spared, which is consistent with the discovery that the gene defect is one that will principally affect DNA replication.

The future of human longevity

Our present understanding of the genetics of human ageing permits some consideration of how human life spans might conceivably change in the future, although a great deal more research will be needed. Human longevity may be altered as a result of (i) natural selection, (ii) artificial selection, (iii) genetic engineering, (iv) drug interventions, (v) genetic risk assessment coupled with prophylactic measures, (vi) behavioural and lifestyle modifications.

Fig 1. Diagram illustrating how polygenic control of longevity is effected, as predicted by the disposable soma theory of ageing. Natural selection acts in a similar way on the different genes regulating individual somatic maintenance functions. The precise setting of each function in an individual determines the period of 'longevity assured', as indicated by the lengths of the arrows. At the level of the population, the average period of longevity assured by each maintenance function is expected to be similar. However, some variance within the population is expected, so that within and between individuals the relative lengths of the arrows may vary. (Reproduced from reference 7 by permission of the Annals of the New York Academy of Sciences.)

Natural selection

Even though human populations now live in circumstances that many regard as 'unnatural', the process of Darwinian natural selection continues. The fact that so many humans now live to experience old age will, in principle, expose the genetic factors involved in ageing to new selection forces tending to increase life span. On the other hand, selection against inherited weaknesses has been diminished through medical interventions and the generally more comfortable circumstances of life, and this may lead to the accumulation of minor gene defects that will eventually have deleterious effects on long-term survival. Patterns of reproduction have also altered profoundly through the development of reliable contraception, resulting in extensive family planning governed mostly by social and economic circumstances. The net effect of these changes on the genetics of the future human life history is hard to predict but merits consideration.

Artificial selection

Artificial selection has produced significant effects on the life histories of fruitflies29,30, but such procedures are neither ethical nor feasible in human populations. The fruitfly experiments are interesting for the information that they provide on genetic variance in populations and on the rate and extent of the response to selection. However, the genetic variance within a population reflects the evolutionary history of that population, and there are likely to be major differences between fruitflies and humans with regard to the genetic variance in factors affecting life span.

Genetic engineering

In the popular mind, advances in genetic research are often linked to the idea of genetic engineering. Genetic engineering is a conceivable route to modification of human longevity, although this presupposes major advances in the technology of gene therapy and in the detailed dissection of the genetic factors influencing life span. At present, effective gene therapy is still unavailable even for monogenic inherited diseases like cystic fibrosis which are, rightly, the primary targets of research. Whether genetic modification of a 'normal' process like ageing will ever be ethically acceptable or practically feasible is far from clear, and meaningful discussion must await the further identification of possible genetic targets. Nevertheless, the broad issues can and should be addressed as part of the wider debate on application of the 'new genetics'.

Drug interventions

Drug interventions based on understanding of genetic mechanisms involved in late life diseases such as Alzheimer's disease are the most likely immediate benefits to emerge from genetic advances in ageing research.
Whether these will, in time, have the cumulative effect of altering underlying life spans remains to be seen, but in any case the more urgent and attainable goal is to improve the quality of the later years of life.

Genetic risk assessment

One of the major successes of genome research to date has been the identification of risk alleles for conditions such as Alzheimer's disease and breast cancer. The discovery of alleles linked to late life diseases is likely to continue at an accelerating pace. If such discoveries are coupled with the development of effective drug treatments or prophylaxis, they are likely to result in further extension of average life expectancy through reducing the negative impact of risk alleles on survivorship. It is less likely, however, that this approach will alter maximum life span, since the longest lived at present are probably those who are at lowest genetic risk.

Behavioural and lifestyle modifications

Advances in genetic understanding of ageing will not necessarily require genetic or drug-based interventions to produce enhancement in the quality of later life, or even life extension. Knowledge of genetic mechanisms is also likely to help to identify non-genetic factors (nutrition, exercise, etc) which may be beneficial. It is already clear that genes are only a part of what influences duration of life. The identification and exploitation of gene-environment and gene-lifestyle interactions will be of great importance too.
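As a quantitative footnote to the Gompertz-Makeham model reconstructed earlier, the sketch below computes the mortality rate mu(x) = alpha*exp(beta*x) + gamma and the doubling time ln(2)/beta of its age-dependent component. The parameter values are purely illustrative assumptions, not figures taken from this review.

    import math

    def mortality_rate(x, alpha, beta, gamma):
        # Gompertz-Makeham mortality rate at age x (per year)
        return alpha * math.exp(beta * x) + gamma

    # Illustrative (assumed) parameters, broadly Gompertz-like for humans
    alpha, beta, gamma = 5e-5, 0.09, 1e-4

    for age in (40, 60, 80):
        print(f"age {age}: mu = {mortality_rate(age, alpha, beta, gamma):.4f} per year")

    # The age-dependent component doubles every ln(2)/beta years
    print(f"mortality rate doubling time ~ {math.log(2) / beta:.1f} years")

With beta = 0.09 per year, the age-dependent component doubles roughly every 7-8 years; this doubling time is what the 'actuarial ageing rate' beta encodes.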
2018-04-03T01:12:36.032Z
1997-11-01T00:00:00.000
{ "year": 1997, "sha1": "c1aa69200dae1137d42d20429ce5753c22bb76b3", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "76abe74a027b2448874071d5d6a3a295f6a34234", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268438619
pes2o/s2orc
v3-fos-license
Isohexide-Based Tunable Chiral Platforms as Amide- and Thiourea-Chiral Solvating Agents for the NMR Enantiodiscrimination of Derivatized Amino Acids

New arylamide- and arylthiourea-based chiral solvating agents (CSAs) were synthesized starting from commercially available isomannide and isosorbide. The two natural isohexides were transformed into the three amino derivatives, having isomannide, isosorbide, and isoidide stereochemistry, then the amino groups were derivatized with 3,5-dimethoxybenzoyl chloride or 3,5-bis(trifluoromethyl)phenyl isothiocyanate to obtain the CSAs. The bis-thiourea derivative containing the 3,5-bis(trifluoromethyl)phenyl moiety with exo-exo stereochemistry was remarkably efficient in the differentiation of NMR signals (NH and acetyl) of enantiomers of N-acetyl (N-Ac) amino acids in the presence of 1,4-diazabicyclo[2,2,2]octane (DABCO). Nonequivalences in the ranges of 0.104-0.343 ppm and 0.042-0.107 ppm for NH and acetyl groups, respectively, allowed for very accurate enantiomeric excess determination, and a reliable correlation was found between the relative positions of signals of enantiomers and their absolute configuration. Therefore, a complete stereochemical characterization could be performed. Dipolar interactions detected in the ternary mixture CSA/N-Ac-valine/DABCO led to the identification of a different interaction model for the two enantiomers, involving the formation of a one-to-one substrate/CSA complex for (S)-N-Ac-valine and a one-to-two complex for (R)-N-Ac-valine, as suggested by the complexation stoichiometry.

Introduction

The growing awareness of the impact of chirality on human life has prompted the research towards accurate, reproducible, and, if possible, direct methods for the determination of the enantiomeric purity of chiral compounds. NMR methods of enantiodifferentiation are based on the use of chiral auxiliaries to convert the enantiomers into intrinsically anisochronous diastereomers. Chiral auxiliaries for NMR spectroscopy can be grouped in three classes, based on the nature of the interactions stabilizing the diastereomers: chiral solvating agents (CSAs) [11,12], chiral derivatizing agents (CDAs) [12], and chiral lanthanide shift reagents (CLSRs) [12]. The first class emerged for ease of use. CSAs commonly interact with the enantiomeric substrates by means of noncovalent interactions, such as dipole-dipole, π-π and hydrogen bond formation, giving rise to couples of transient diastereomers. A notable exception is represented by chiral liquid crystals (CLCs), which offer unprecedented versatility due to the fact that the substrate does not require any specific, directed interaction with the liquid crystal. Consequently, CLCs have the potential to differentiate a wide range of chiral molecules, encompassing organic and inorganic compounds, as well as metal complexes. One peculiarity of CLCs is their capability to differentiate nuclei that are remote from the chiral center, which presents a challenge for the majority of CSAs [13,14].
For the enantiodifferentiation of chiral substrates by any other kind of CSAs, hydrogen bond formation is mandatory for the achievement of the transient diastereomers. Therefore, CSAs endowed with amide and especially thiourea functions should be privileged due to the enhanced acidity and hence hydrogen bond-donating capabilities of these moieties. These functionalities can be easily embedded in chiral platforms by using simple synthetic procedures, which entail the reaction of a chiral amine or diamine with acyl chlorides or isothiocyanates [15,16]. Chiral diamines obtained from precursors belonging to the chiral pool can represent an attractive choice for the synthesis of bis-amide and bis-thiourea CSAs, as their natural precursors are readily available in enantiomerically pure forms.

Isohexides, namely isomannide (1, Figure 1) and isosorbide (2, Figure 1), which are chiral compounds coming from the chiral pool and endowed with interesting properties (cheap, non-toxic and obtained from renewable resources) [17-19], and also isoidide (3, Figure 1), which can be obtained from isomannide, possess structural features making them good candidates for use as chiral scaffolds for the preparation of chiral auxiliaries for enantioselective processes [20-23]. Their vaulted structure, endowed with two hydroxyl groups characterized by different relative stereochemistry in different isohexides, allows the creation of U-, N- or W-shaped derivatives (Figure 1) showing interesting enantiorecognition features dependent not only on the nature of the derivatizing moieties but also on the stereochemistry of the final compounds.

Ionic liquids derived from U-shaped isomannide behaved as chiral tweezers [23], whereas N-shaped aryl carbamates of isosorbide were successfully used as CSAs for the determination of the enantiomeric composition of amino acid derivatives [22] (Scheme 1, left panel). Those results clearly showed that tuning of the structural features allowed well performing CSAs to be obtained and that the presence of two aryl moieties on the isohexide scaffold was mandatory to achieve good enantiodiscrimination. Starting from these previously reported results, as a part of our ongoing research into the preparation of new, simple, biobased, and efficient CSAs for NMR applications, we turned our attention to the use of amino-derived compounds, such as arylamides or arylthioureas (Scheme 1, right panel). This choice was made while also considering that U-, N-, or W-shaped amino derivatives of isohexides can be easily obtained when starting from commercially available isomannide and isosorbide through stereo-controlled interconversion of the hydroxyl groups to amino groups, followed by the introduction of aryl moieties by amide or thiourea bond formation.

Scheme 1. Use of isohexide derivatives as CSAs.

Herein, we report the synthesis of U-, N- and W-shaped isohexide derived arylamides (Scheme 1) and preliminary screening aimed at finding the most suitable structures for NMR enantiodiscrimination, with the synthesis of arylthioureas having the selected structures, followed by additional fast screening to identify the best performing CSA. The W-shaped arylthiourea molecule emerged as the best performing CSA, and this was finally tested in the enantiodiscrimination of amides of different amino acids under optimized analysis conditions. A complete characterization of the best performing CSA together with the study of the enantiodiscrimination mechanism is also presented.

Results and Discussion

The synthesis of the amino derivatives of the three isohexides was carried out as described in Scheme 2, starting from commercially available isosorbide and isomannide and from isoidide, which was obtained by Mitsunobu reaction on isomannide with benzoic acid, followed by hydrolysis of the benzoate groups [39]. As reported in Scheme 2, the first step of the synthetic route was the conversion of isohexides 1-3 into the corresponding ditriflates. The derivatization was performed by reacting the starting diols 1-3 with trifluoromethanesulfonic anhydride in the presence of pyridine at 0 °C.
The chemically pure products 4 were obtained in good yields (around 90%) after simple aqueous work-up. Compounds 4 were then converted into diazides 5 via nucleophilic displacement of the triflate groups by sodium azide. As has been well established, the reaction follows an SN2 mechanism, thus leading to a complete inversion of configuration at the stereogenic centers bearing the functional groups. As a result, the isomannide-like azide was obtained from the isoidide triflate, and the isomannide triflate was converted into the isoidide-like azide, whereas the stereochemistry of the isosorbide core remained unaltered. The reaction was carried out with a high excess of sodium azide in DMF as the solvent at RT, to boost the process toward nucleophilic displacement while minimizing as much as possible the competitive elimination reaction. However, even using these reaction conditions, the formation of the elimination side product was observed, as already reported [39,40]. The extent of the formation of this product strongly depended on the stereochemistry of the triflate. It was observed that when the nucleophile approached from the less hindered convex side of the isohexide, as in the case of the endo-triflate, the elimination side reaction did not take place at all, but when the nucleophile approached from the more hindered concave side, as for the exo-triflate, the formation of at least a 30% proportion of elimination product was observed. For this reason, only diazide 5a, having two exo-azido groups, was obtained as a chemically pure product after aqueous work-up, whereas the other diazides required chromatographic purification. The reaction was totally stereospecific with all substrates, as confirmed by the analysis of the NMR spectra of the three diazides, which showed the presence of only one set of signals for each nucleus in 5b or for each couple of equivalent nuclei in 5a and 5c, so confirming that no epimerization occurred at the stereogenic centers. The diazides were eventually reduced to the corresponding diamines 6, in good yields, by catalytic hydrogenation.

The amino derivatives of isohexides were then converted to arylamides and arylthioureas by reacting diamines 6 with 3,5-dimethoxybenzoyl chloride and 3,5-bis(trifluoromethyl)phenyl isothiocyanate, respectively, under standard reaction conditions (Scheme 3).

Scheme 3. Synthesis of bis-amides 7 and bis-thioureas 8.

The enantiodiscrimination properties of the different amide derivatives 7a-c were assayed in 1H NMR experiments towards both racemic ester 9c and acid 9b (Figure 2), and the results are presented in Table 1. These substrates were selected because of the presence of the 3,5-dinitrobenzoyl (DNB) group that can not only establish π-π interactions with the electronically complementary aromatic moieties of the isohexide derivatives, which is useful for the enantiodiscrimination, but that also presents some diagnostic signals in a spectral region devoid of proton resonances of the CSAs.
Enantiodiscrimination tests were performed by adding one equivalent of CSA to a 30 mM solution of 9c in CDCl3 as the solvent. In the case of a substrate having underivatized carboxyl functions, 1,4-diazabicyclo[2,2,2]octane (DABCO) was added as solubilizing agent. The enantiodiscrimination efficiency was evaluated by measuring the chemical shift nonequivalence (∆∆δ = ∆δR − ∆δS, ppm, with ∆δR = δR − δf and ∆δS = δS − δf, where δR and δS are the chemical shifts of the (R)- and (S)-enantiomer of the substrate measured in the mixture and δf is the chemical shift of the (R)- and (S)-enantiomer of the free substrate).

All the diamides were able to induce to some extent nonequivalences of selected protons of 9c and 9b, which were slightly higher for the NH proton of both substrates in the presence of 7a and also for the CH in the case of the mixture containing 7c and 9b (Table 1). In that way, some influence of the stereochemistry on the enantiodifferentiating properties of these CSAs was demonstrated. On the basis of this first screening, we concluded that amino derivatives of isohexides can be used to prepare CSAs and that diamide 7a, having the exo-exo stereochemistry of the arylamide moieties, showed the best enantiodiscrimination properties, mainly towards rac-9b, having a non-derivatized carboxylic function. Endo-endo stereochemistry (7c) appeared as the least promising.

The first problem we encountered when using 8a-b as CSAs was the high insolubility of 8a under the initial conditions, which obliged us not only to work with more dilute samples (5 mM), but also to search for a solvent capable of giving homogeneous samples. After accurately screening (Table S1 in Supplementary Materials) for the best conditions for obtaining complete solubility of the chiral solvating agent, conditions were selected as a 5 mM solution of CSA/rac-9b/DABCO (1:1:1) in a 3:7 mixture of CD2Cl2/C6D6. Although 8b was highly soluble, the same conditions were used for the sake of comparison. The results reported in Table 2 clearly suggest a higher enantiodiscriminating ability of 8a-b with respect to amide derivatives 7a-c. In spite of the more dilute conditions (5 mM), in comparison with those conditions selected in the case of CSAs 7a-c (30 mM), higher nonequivalences were measured, which was particularly remarkable in the presence of CSA 8a. Regarding amino acid derivatives, enantiomers of N-Ac ones were those that were most efficiently differentiated, with remarkably high nonequivalences of 0.303 ppm and 0.083 ppm for the NH and acetyl groups, respectively (Table 2).
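The two quantities used throughout this screening, the nonequivalence ∆∆δ and the enantiomeric excess obtained from integrals, reduce to simple arithmetic. A minimal sketch follows; all numerical values are illustrative, not data from this work.

    # Minimal sketch of the two quantities used in the screening.
    # All numbers below are illustrative, not data from this work.

    def nonequivalence(delta_R, delta_S, delta_free):
        # Chemical shift nonequivalence (ppm): difference of complexation shifts
        return (delta_R - delta_free) - (delta_S - delta_free)

    def enantiomeric_excess(integral_R, integral_S):
        # ee (%) from the integrals of well separated enantiomer resonances
        return 100.0 * (integral_R - integral_S) / (integral_R + integral_S)

    print(nonequivalence(8.45, 8.15, 7.90))   # 0.30 ppm, an NH-sized effect
    print(enantiomeric_excess(52.0, 48.0))    # 4.0 % ee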
1H and 19F Enantiodiscrimination Experiments

To deal with the enantiodiscriminating versatility of 8a (Figure 2), we extended our analysis to its 5 mM equimolar mixtures with N-DNB (9d-e, Figure 2), N-TFA (10b-e, Figure 2) and N-Ac derivatives (11b-g, Figure 2) in the presence of one equivalent of DABCO. A CD2Cl2/C6D6 (3:7) solvent mixture was once again selected (see Supplementary Materials).

Very high nonequivalences were detected for the NH and acetyl protons of the N-Ac derivatives 11a-g (Tables 2 and 3), with a maximum value of 0.343 ppm for the N-acetylphenylglycine derivative 11c. Lower nonequivalences were obtained for the N-TFA and N-DNB derivatives (Tables 2 and 3). Figure 3 shows the relevant spectral regions of the ternary mixtures containing the N-acetylamino acids 11b-g. The spectral regions of the ternary mixtures with 9d-e and 10b-e are reported in the Supplementary Materials (Figure S1). Integration of the well separated NH resonances of enantiomeric mixtures of 11a-g allowed us to accurately determine their relative amounts, leading to values of enantiomeric excesses (ee) in very good agreement with gravimetric data (Figure 4, Table S1 in Supplementary Materials), also in samples with very low (6.4%) and very high (−94.7%) ee, as reported in the case of 11g.

In the development of new chiral solvating agents based on the selected chiral platform, the guiding idea mainly revolves around expanding enantiodiscriminating capabilities by means of the introduction of functional groups with enhanced potential in terms of establishing a tight network of hydrogen bonds, thereby promoting the thermodynamic stabilization of diastereomeric solvates. Therefore, it was imperative to compare the capabilities of the most efficient thiourea-based CSA, i.e., 8a, with those of the corresponding urea system 12a. Consequently, we synthesized urea 12a by reacting diamine 6a with 3,5-bis(trifluoromethyl)phenyl isocyanate and we assessed its ability to differentiate the proton nuclei of selected enantiomeric substrates, particularly 9a, 9b, 10a, and 11a (Table 4). For the chiral urea auxiliary, exacerbated solubility issues were found compared to the thiourea counterpart, making it challenging to achieve complete solubilization at a concentration of 5 mM. Therefore, comparative measurements between the two systems were conducted at a concentration of 2 mM, at which 12a proved to be completely solubilized in the mixtures. In all instances, the urea-based CSA produced significantly lower nonequivalence data compared to the thiourea CSA (Table 4).

Table 4. Nonequivalences (∆∆δ, ppm) of selected protons of racemic 9a-b, 10a, and 11a (2 mM) in equimolar mixtures with 8a or 12a and DABCO in CD2Cl2/C6D6 3:7.

1H NMR Configurational Assignment

Only a few CSAs have been so far reported for configurational assignments [31,41-43], as CDAs have been preferred in this field since the earliest reports of Mosher in 1963 [44]. As a matter of fact, CDAs may produce higher nonequivalences and originate more conformationally restricted diastereomeric derivatives compared to CSAs. This last feature favors the building of molecular models suited for the rationalization of the correlation between the relative positions of diastereomeric derivatives signals and their absolute configuration. The fixed skeleton of 8a suggested that it would be worthwhile to explore its use for the configurational assignment of acetyl derivatives of amino acids, in consideration of the remarkable enantiodifferentiation detected in the mixtures 8a/N-Ac-derivative/DABCO. Availability of only one stereoisomer of the CSA required the analysis of enantiomerically enriched samples in order to evaluate the sense of nonequivalence, i.e., the relative positions of enantiomeric signals in enantiomerically enriched samples having known absolute configurations.
As shown in Table S2 in Supplementary Materials, for every derivative 11a-c,f,g, the proton resonances of the NH and Ac groups of the (R)-enantiomer were shifted at a higher frequency with respect to those due to the (S)-enantiomer in the presence of the CSA (Figure 5). Therefore, a reproducible correlation between the sense of nonequivalence and the absolute configuration has been obtained, and this correlation can be reliably extended to samples having an unknown absolute configuration, provided that at least a minimum amount of the defect enantiomer is present in the mixture.

Investigation of Chiral Recognition Processes

The chemical bases of the chiral recognition process were investigated in the mixtures containing 8a and one of each enantiomer of 11g in the presence of DABCO by 1D or 2D ROESY in order to detect intra- and intermolecular dipolar interactions (Figures S3-S7, S9, and S10 in Supplementary Materials). The complexation stoichiometries were preliminarily defined using Job's method in samples having different substrate to CSA ratios while keeping constant the total concentration (Tables S2 and S3 in Supplementary Materials). Graphs reporting the normalized complexation shift of the acetyl protons of the substrate as a function of the molar fraction of 8a showed a symmetrical bell curve with a well-defined maximum at a 0.5 molar fraction for the diastereomeric complex containing (S)-11g, according to a 1-to-1 complexation stoichiometry (Figure S2 in Supplementary Materials). Otherwise, a well-defined maximum at the 0.3 molar fraction was obtained in the case of (R)-11g, demonstrating a 1-to-2 CSA/substrate complexation stoichiometry (Figure S2 in Supplementary Materials). Very low solubility of the CSA did not allow us to obtain reliable values of the association constants of the two diastereomeric complexes.
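Job's continuous-variation analysis used here reduces to locating the maximum of the complexation shift as a function of mole fraction: for an m:n host/guest complex the maximum falls at x(host) = m/(m + n). A minimal sketch of that bookkeeping, on synthetic and purely illustrative data:

    # Job plot bookkeeping: for an m:n CSA/substrate complex, the maximum of the
    # Job curve falls at x_CSA = m / (m + n). Synthetic, illustrative data only.

    def expected_maximum(m_csa, n_substrate):
        return m_csa / (m_csa + n_substrate)

    print(expected_maximum(1, 1))  # 0.5  -> 1-to-1 complex, as seen for (S)-11g
    print(expected_maximum(1, 2))  # 0.33 -> 1-to-2 CSA/substrate, as for (R)-11g

    # Locating the maximum from measured points (mole fraction, x * observed shift):
    job_points = [(0.2, 0.010), (0.3, 0.016), (0.4, 0.014), (0.5, 0.011)]
    x_max = max(job_points, key=lambda p: p[1])[0]
    print(f"Job curve maximum near x = {x_max}")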
Investigation of the stereochemical features of the diastereomeric complexes started from the definition of the conformations of 8a and 11g in both ternary mixtures. Identical intramolecular ROE interactions were detected for 8a in the two mixtures (Figure S4, Supplementary Materials), indicating that 8a's conformation was the same in both solvates. In particular, regarding protons NH-7 and NH-11 (Figure S4 in Supplementary Materials), the intensity of ROE effects at the frequencies of methine protons H1 and H4 were higher than ROE effects at methylene protons H3a/H6a. Lower dipolar interactions were detected at H2 and H5. On this basis, a transoid arrangement of the thiourea protons NH-7/NH-11 and H2/H5 can be assessed. Intense intra-ROE effects between the amide protons (Figure S5 in Supplementary Materials) supported a cisoid arrangement of these protons in the thiourea fragment (Figure 6) that was further corroborated by the absence of ROE effect between NH-7/NH-11 and protons of the 3,5-bis(trifluoromethyl)phenyl groups (Figure S4 in Supplementary Materials).

The conformation of 11g was also the same in the two mixtures. A transoid arrangement of methine proton HA and amide proton NH-B was suggested by the very low-intensity HA/NH-B ROE effect compared to the very high-intensity HA/HD ROE (Figures S6 and S7 in Supplementary Materials). The dihedral angle defined by the fragment HA-C-N-HB was in accordance with coupling data, as a JHAHB of 8.0 Hz and 8.5 Hz was measured for (R)- and (S)-11g, respectively, corresponding [45] to very similar values of the dihedral angles of 147.1° and 150.5° for (R)- and (S)-11g, respectively. The conformation of 11g in the mixture is shown in Figure S8 in Supplementary Materials.

The analysis of intermolecular ROE effects allowed us to impose proximity constraints that were significantly differentiated in the two mixtures. In particular, very different intermolecular dipolar interactions were originated by H1 and H4 of 8a, which were the sole protons of the CSA producing ROE effects with the protons of the enantiomeric substrates (Figure S9 in Supplementary Materials). Therefore, both (R)-11g and (S)-11g interacted with the convex surface of the CSA, including protons H1 and H4 of the bicycle and the NH protons of the thiourea moieties. However, H1 and H4 showed a more intense ROE at the HA proton of (S)-11g and a less intense effect at its acetyl group (Figure S9 in Supplementary Materials). The reverse was found for the mixture containing (R)-11g, with the major ROE effect observed at the acetyl protons (Figure S9 in Supplementary Materials).

Therefore, the (S)-enantiomer interacted with the chiral auxiliary, placing the carbonyl functions (acetyl and carboxylate) close to the NH-7 and NH-11 groups in a spatial arrangement allowing the most sterically demanding group (isopropyl) to be placed away from the interaction site, as shown in Figure 7. Accordingly, the acetyl group of (S)-11g was in spatial proximity to the ortho-protons of the phenyl moiety of 8a (Figure S10 in Supplementary Materials). This interaction model is in accordance with the 1-to-1 complexation stoichiometry. Due to the 1-to-2 CSA to substrate complexation stoichiometry in the mixture containing (R)-11g and the spatial proximity of the methyl group of the acetyl of (R)-11g and the H1 and H4 protons of 8a (Figure S10 in Supplementary Materials), placing its carbonyl functional group far away from the CSA surface, a different interaction model can be hypothesized in which the main stabilizing interaction must necessarily involve the carboxylate group of two amino acid units, with each one interacting with one bis-thiourea moiety, as depicted in Figure 7.

Interestingly, DABCO protons produced intense ROE contacts with protons of the CSA and protons of each amino acid derivative enantiomer (Figure S4 in Supplementary Materials), supporting its role in the stabilization of the two diastereomeric solvates, which goes beyond its solubilizing effect. DABCO acts as a bridge between the polar groups of the two components. The mediating role of DABCO became evident through a comparison of its diffusion coefficient (D, m2 s−1) measured by diffusion-ordered spectroscopy experiments in the pure state and in its binary mixture containing the chiral auxiliary, along with ternary systems that also included the two enantiomers of the acetyl derivative of valine 11g. Starting from the initially high value of pure DABCO (D = 21.2 × 10−10 m2 s−1), as expected for its small molecular size, the chiral auxiliary itself initiated attractive interactions, resulting in a reduction of the diffusion coefficient of DABCO by 5 units (D = 16.3 × 10−10 m2 s−1). An even more pronounced decrease in the diffusion coefficient of DABCO was observed in the mixtures (R)-11g/CSA/DABCO and (S)-11g/CSA/DABCO, reaching values of 6.7 × 10−10 m2 s−1 and 7.4 × 10−10 m2 s−1, respectively, which highlights the cooperative function of DABCO in the formation of the two diastereomeric solvates. Therefore, according to the interaction models described above, strongly differentiated chemical environments are felt by the acetyl and amide protons of the two enantiomers, leading to their highly differentiated chemical shifts.

Materials

All syntheses of sensitive compounds were realized under dry argon in flame-dried lab glassware. Reactants and reagents, if not specified, were commercially available and used as received. For the synthesis of sensitive compounds, dry CH2Cl2 and THF obtained using an MB-SPS solvent purification system were used. Pyridine was distilled under inert atmosphere over CaH2, and 3,5-bis(trifluoromethyl)phenyl isothiocyanate was distilled under vacuum.
Methods

Analytical thin layer chromatography (TLC) was performed on ALUGRAM Xtra G/UV254 plates (Macherey-Nagel GmbH & Co. KG, Duren, Germany) and detection of compounds was performed with a UV lamp and permanganate or sulfuric vanillin solution. Melting points were measured on a Buchi Melting Point B-545 instrument (BUCHI Italia s.r.l., Cornaredo, Italy), and optical rotations were measured in a 1 dm cell at the sodium D line on an Anton Paar MCP 300 polarimeter or on a Jasco polarimeter. NMR characterization of the bis-amides 7a-c and corresponding intermediates and the enantiodiscrimination experiments with 7a-c were carried out on a spectrometer operating at 500 or 400 MHz for 1H and 126 or 101 MHz for 13C nuclei, respectively; the NMR characterization of bis-thioureas 8a-b and N-acetylamino acids and the enantiodiscrimination experiments with 8a-b were carried out using a spectrometer operating at 600 MHz, 150 MHz, and 564 MHz for 1H, 13C, and 19F nuclei, respectively. The samples were analyzed in CDCl3, DMSO-d6, or methanol-d4 solution. 1H and 13C chemical shifts were referenced to tetramethylsilane (TMS) as the secondary reference standard; 19F chemical shifts were referenced against CFCl3 and trifluorotoluene (8a-8b) as the external standard with temperature control (25 ± 0.1 and 21 ± 0.1 °C for the spectrometers operating at 600 and at 400 or 500 MHz, respectively). A 1 s relaxation delay and 200 increments of 4 transients, each with 2K points, were employed for gCOSY (gradient Correlation SpectroscopY) and TOCSY (TOtal Correlation SpectroscopY) experiments. The mixing time for TOCSY maps was 80 ms. The 2D-ROESY (Rotating-frame Overhauser Enhancement SpectroscopY) experiments were carried out with a relaxation time of 1 s, a mixing time of 0.3 s, and 128 increments of 8 transients, with 2K points. For 1D-ROESY spectra, a selective inversion pulse was employed with transients ranging from 512 to 1024, a relaxation delay of 1 s, and a mixing time of 0.5 s. The gHSQC (gradient Heteronuclear Single Quantum Coherence) and gHMBC (gradient Heteronuclear Multiple Bond Correlation) spectra were recorded with a relaxation time of 1.2 s and 128-200 increments with 16-32 transients, each with 2K points. The gHMBC experiments were optimized for a long-range coupling constant of 8 Hz. For DOSY (Diffusion-Ordered SpectroscopY) experiments, a stimulated echo sequence with self-compensating gradient schemes and 64K data points was used. Typically, g was varied in 20 steps (2-32 transients each), and ∆ and δ were optimized to obtain an approximately 90-95% decrease in resonance intensity at the largest gradient amplitude. Baselines of arrayed spectra were corrected prior to processing the data. After data acquisition, each FID was apodized with 1.0 Hz line broadening and then Fourier transformed. The DOSY macro was used for data processing. The assignments of 1H NMR and 13C NMR chemical shifts for 8a-b, reported below, are shown in Figures SI32, SI33, SI34, and SI35 in Supplementary Materials; the NMR data were described using the following abbreviations: s-singlet, d-doublet, dd-double doublet, ddd-double double doublet, t-triplet, q-quartet, and m-multiplet.
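The DOSY processing described above ultimately reduces each gradient-dependent decay to a single diffusion coefficient. A minimal sketch of that fit, using the standard Stejskal-Tanner expression for a stimulated-echo experiment, is shown below; the numerical values and the use of scipy are illustrative assumptions, not the vendor macro actually used in this work.

    # Illustrative Stejskal-Tanner fit for extracting D from a DOSY decay:
    # I(g) = I0 * exp(-D * (gamma*delta*g)**2 * (Delta - delta/3))
    # All values are synthetic; this is not the processing macro used here.
    import numpy as np
    from scipy.optimize import curve_fit

    gamma = 2.675e8           # 1H gyromagnetic ratio, rad s^-1 T^-1
    delta, Delta = 2e-3, 0.1  # gradient pulse length and diffusion delay, s

    def stejskal_tanner(g, I0, D):
        return I0 * np.exp(-D * (gamma * delta * g) ** 2 * (Delta - delta / 3))

    g = np.linspace(0.01, 0.5, 20)         # gradient amplitudes, T m^-1
    I = stejskal_tanner(g, 1.0, 16.3e-10)  # synthetic decay, DABCO/CSA-sized D
    (I0_fit, D_fit), _ = curve_fit(stejskal_tanner, g, I, p0=(1.0, 1e-9))
    print(f"D = {D_fit:.3e} m^2 s^-1")     # recovers ~1.63e-09 m^2 s^-1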
Conclusions

New U-, N-, and W-shaped arylamide and arylthiourea CSAs 7a-c and 8a-b were easily synthesized starting from commercially available isomannide and isosorbide. The stereospecific interconversion of the hydroxyls of the two natural isohexides allowed us to obtain the three amino derivatives having isomannide, isosorbide, and isoidide stereochemistry, which were easily transformed into the corresponding arylamides and arylthioureas. In view of the biological relevance of amino acids, their N-DNB, N-TFA, and N-Ac derivatives were probed as chiral substrates in the enantiodiscrimination experiments, highlighting the finding that bis-thiourea CSA 8a with 3,5-bis(trifluoromethyl)phenyl moieties in an exo-exo stereochemistry was endowed with remarkable enantiodiscriminating power towards N-acetylamino acids, surpassing that of the corresponding urea derivative 12a. The nature of the probe signals (doublets for NH and singlets for Ac) of the chiral substrates allowed us to also attain accurate determinations of enantiomeric compositions in diluted mixtures with very high or very low enantiomeric excesses. Importantly, 8a constitutes one of the few cases of the use of CSAs for configurational assessments. It is likely that the flexible lateral arms of the CSA in an open conformation can act in synergy to grab a single substrate unit. Alternatively, they can act independently while grabbing two of them in response to steric-repulsive effects. In that way, an efficient enantiomeric differentiation can be attained. As a matter of fact, the alternative exo-endo stereochemistry, in which thiourea moieties cannot cooperate in the interaction with the chiral substrates, produces lower enantiomer differentiations in the NMR spectra. Therefore, an isohexide skeleton constitutes a versatile chiral platform for the design of new chiral auxiliaries bearing different kinds of functional groups to be applied to the enantiodiscrimination of different classes of chiral substrates.

Figure 1. Structures of isohexides and their derivatives.

Figure 6. Conformation of 8a in the ternary mixtures (R)- and (S)-11g/8a/DABCO in CD2Cl2/C6D6 (3:7) according to NMR data.
2024-03-17T15:21:26.129Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "aa3884b2e8c9a422fbcf0b6b0a9db7633c5dbaf0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/29/6/1307/pdf?version=1710492563", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dc3eb34a11190d26b1863f929d24e9547b156007", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
22601524
pes2o/s2orc
v3-fos-license
Superconductivity in charge Kondo systems

We present a theory of superconductivity in charge Kondo systems, materials with resonant quantum valence fluctuations, in the regime where the transition temperature is comparable to the charge Kondo resonance. We find superconductivity induced by charge Kondo impurities, study how pairing of a superconducting host is enhanced by charge Kondo centers, and investigate the interplay between Kondo scattering and inter-impurity Josephson coupling. We discuss the implications of our theory for Tl-doped PbTe, which has recently been identified as a candidate charge Kondo system.

The role of impurities in superconductors is a classic problem in condensed matter physics [1,2]. A reciprocal problem concerns impurities which can cause superconductivity in a host that, on its own, has no intention to superconduct. One version is of course an impurity-induced increase in the carrier concentration and density of states at the Fermi level. Much more exotic and interesting, however, is the prospect of impurities supplying the actual pairing mechanism. Candidates are so-called negative-U centers [3], which can, as we will show, induce pairing in a non-superconducting host even in a regime of strong quantum (charge Kondo) fluctuations. The latter is crucial to understand superconductivity in Pb_{1-x}Tl_xTe [4], where recent experiments by Matsushita et al. [5] found strong evidence for charge Kondo fluctuations close to T_c. It promises a number of new unconventional properties [6] for this very exciting material.

PbTe is a narrow-gap IV-VI semiconductor [7] where Tl, for small x, is known to act as an acceptor, adding one hole per atom to the valence band. This is consistent with the valence electron configurations of Pb (6s^2 6p^2) and Tl (6s^2 6p^1). The surprise is that Pb_{1-x}Tl_xTe becomes superconducting with T_c as large as 1.4 K [4], comparable to metallic systems, but at a hole concentration orders of magnitude smaller (n_0 ≈ 10^20 cm^-3). Equally puzzling is that T_c rises with the Tl concentration x for x-values where n_0 becomes independent of x [8,9]. A special aspect of Tl is that it likes to skip an intermediate valence state in a polarizable host [10,11]. In PbTe, Tl^+, which acts as an acceptor, and Tl^{3+}, where an electron is donated instead, are by several eV more stable than Tl^{2+} [10]. This effect can be described in terms of a negative-U Hubbard interaction between holes in the Tl 6s-shell. If δE = E_{Tl^{3+}} - E_{Tl^+} is the smallest scale of the problem, the two valence states become essentially degenerate. Then the hybridization of the impurities with valence holes causes a quantum charge dynamics, similar in nature to the Kondo effect of dilute paramagnetic impurities in metals [12,13]. An isospin can be introduced [13] where the "up" and "down" configurations correspond to Tl^{3+} and Tl^+, respectively. A nonzero δE plays the role of the magnetic field, and an isospin flip corresponds to a coherent motion of an electron pair into or out of the impurity. This motion of pairs suggests a connection between the charge Kondo dynamics, with Kondo temperature T_K, and superconductivity. (Figure 1 caption, lower panel: a pinning of the chemical potential at µ = µ* for x > x* gives rise to a degeneracy between the Tl^{1+} and Tl^{3+} states and to n_0(x) = const.) Numerical simulations [14] indeed demonstrate that negative-U centers increase T_c of a superconducting host if δE is small.
For δE = 0, pairing in a non-superconducting host was discussed under the assumption T_c ≫ T_K [15]. Two important open questions arise: (i) Why is it possible to assume almost perfect degeneracy (δE < T_c), given that Tl is known to act as an acceptor (requiring E_{Tl^{1+}} < E_{Tl^{3+}}) even at room temperature? (ii) Are charge Kondo impurities able to cause superconductivity with T_c ≈ T_K, as required by recent experiments [5]? In that regime the scattering rate of the centers is highly singular and the pseudo-spin moment is about to be quenched. In this paper we answer both questions. We show that beyond a characteristic Tl concentration, Pb_{1-x}Tl_xTe tunes itself, without adjustment of parameters, into a resonant state with δE = 0. We further present a theory for the superconducting transition temperature of dilute negative-U, charge Kondo impurities to address the behavior in the intermediate regime T_c ≈ T_K, where the superconducting and charge Kondo dynamics fluctuate on the same time scale. We argue that our theory can explain the concentration dependence and magnitude of n_0 and T_c for Pb_{1-x}Tl_xTe. In addition, we predict re-entrant normal-state behavior at low temperature and impurity concentration as a unique fingerprint of the charge Kondo mechanism for superconductivity, determine the electromagnetic response close to the transition, and show that a low concentration of negative-U centers will always increase weak-coupling host superconductivity. All this demonstrates the rich and highly nontrivial behavior of this very special class of impurities.

An isolated valence skipper can be described in terms of a negative-U Hubbard model, where n_{s,σ} = s†_σ s_σ is the occupation of a spin-σ hole in the Tl 6s-shell, i.e. δE = 2(ε_0 - µ) + U. Here µ is the chemical potential of the system and U < 0. The valence band is characterized by the dispersion ε_k. The concentration of holes in the valence band, donated via Tl doping, is n_0 = x(1 - n_s) with n_s = Σ_σ n_{s,σ}, i.e. n_0 > 0 in the case of an acceptor, Tl^+, and n_0 < 0 (corresponding to electrons in the conduction band) for the donor, Tl^{3+}. This enables us to determine µ, and thus δE, as a function of the Tl concentration. We first assume that the chemical potential µ is below the value µ* = ε_0 + U/2, at which δE = 0. Then δE > 0 and Tl^+ is more stable. There are no holes in the Tl 6s levels; all holes are in the valence band, n_0 = x, as seen in experiment for small x [8,9]. Increasing the Tl concentration increases µ until it reaches µ* at some x*. If we further add Tl impurities and if they continued acting as acceptors, the chemical potential would rise above µ*. However, then δE < 0 and Tl^{3+} becomes more stable, acting as a donor, in contradiction to our assumption. Thus, instead of increasing µ, additional impurities will split equally into Tl^+ and Tl^{3+} valence states such that no new charge carriers are added to the valence band and µ remains equal to µ*. Tl^+ and Tl^{3+} are degenerate and coexist with concentrations (x + x*)/2 and (x - x*)/2, respectively. No fine tuning is needed to reach a state with perfect degeneracy, except for the fact that µ* must be reachable. This phenomenon is related, but not identical, to the pinning of the Fermi level in amorphous semiconductors discussed in Ref. [3]. In Fig. 1 we show experimental results of Ref. [8] for n_0(x), in good agreement with this scenario. The comparison with experiment gives an estimate of x* ≈ 0.5% (see Fig. 1).
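For concreteness, the pinning argument above can be summarized in one display; this only restates the relations already given in the text:

\[
\delta E = 2(\varepsilon_0 - \mu) + U, \qquad U < 0, \qquad
\mu^* = \varepsilon_0 + \tfrac{U}{2} \;\Rightarrow\; \delta E\big|_{\mu = \mu^*} = 0,
\]
\[
n_0(x) =
\begin{cases}
x,   & x \le x^* \quad (\mu < \mu^*,\ \text{all Tl are Tl}^{+}),\\[2pt]
x^*, & x > x^*   \quad \big(\mu \text{ pinned at } \mu^*,\ 
c_{\mathrm{Tl}^{+}} = \tfrac{x + x^*}{2},\ c_{\mathrm{Tl}^{3+}} = \tfrac{x - x^*}{2}\big),
\end{cases}
\]
% the donors exactly compensate the extra acceptors: (x+x^*)/2 - (x-x^*)/2 = x^*.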
Using the band structure of PbTe [16], this yields µ* ≈ 175 ± 20 meV and µ*ρ_0 ≈ 0.07, where ρ_0 is the density of states at the Fermi level. This value for µ* agrees very well with the tunneling data of Ref. [8], which finds µ* ≈ 200 meV.

Next we include an additional hybridization of the impurity with the band electrons, causing transitions between the degenerate valence states. For large |U|/V, the problem can be simplified by projecting out states with n_{isσ} = 1 [17]. The close relation to the spin Kondo problem becomes evident if one introduces the Nambu spinor [13] c_i = (c_{i↓}, c†_{i↑}) as well as the isospin t_i = (1/2) c†_i τ c_i and, similarly, s_i and T_i = (1/2) s†_i τ s_i. Here τ is the vector of Pauli matrices. For δE = 0 one obtains an exchange coupling between T_i and t_i with J = 8V^2/|U|. The isospins T_i and t_i obey the usual spin commutation relations. Ordering in the x-y plane in isospin space is related to superconductivity (T_i^+ = s†_{i↓} s†_{i↑}), whereas ordering in the z-direction corresponds to charge ordering (T_i^z = (1/2)(Σ_σ n_{isσ} - 1)). The model undergoes a Kondo effect where the low-temperature bound state is a resonance of a pair of charges tunneling between the impurity and the conduction electron states at an exponentially small rate T_K. This causes an anisotropy of the analog of the RKKY interaction between isospins, mediated by either particle-particle excitations, I^{+-}(R) = (J^2 ρ_F / 8π) R^{-3}, or particle-hole excitations, I^{zz}(R) = I^{+-}(R) cos(2k_F R), respectively. The in-plane coupling in isospin space, I^{+-}, is the Josephson or proximity coupling between distinct impurities, whereas I^{zz} determines charge ordering. The absence of Friedel oscillations in the particle-particle channel causes the different behavior of I^{+-} and I^{zz}.

Using this pseudospin analogy, one can easily conclude that superconductivity is possible if T_c turns out to be large compared to T_K and quantum fluctuations of T_i can be neglected. The pseudospin moment is then unscreened, corresponding to preformed pairs. The interaction I^{+-} between these pairs in the isospin x-y plane is unfrustrated, supporting superconducting rather than charge ordering for randomly placed impurities. A mean-field calculation in this regime gives T_{c,mf} ≈ xJ^2 ρ_F log[D/(xJ^2 ρ_F)] [15]. The origin of superconductivity is then similar to Josephson coupling between small superconducting grains located at the impurity sites. For T_c comparable to T_K the behavior is considerably more subtle. The time it takes to create a Cooper pair in the host equals the time for a valence change causing the pairing, i.e. the moments which are supposed to order are being quenched and a description in terms of preformed pairs is inapplicable. In addition, Kondo flip-scattering is expected to be pair breaking. Theoretically, the Kondo effect manifests itself in the appearance of a logarithmic divergence of the perturbation theory in J for T ≈ T_K. A partial summation of the divergent perturbation series, which is quantitatively correct even for T ≈ T_K and only fails to recover the low-T Fermi liquid behavior, was proposed in Ref. [19]. The approach is based on a non-linear integral equation for the t-matrix for non-spin-flip scattering, which determines the one-particle Green's function; here G_0(p; ω_n) = 1/(iω_n - ε_k + µ) is the bare valence hole Green's function and x_r = x - x* is the concentration of the degenerate impurities. Müller-Hartmann and Zittartz [20] solved the non-linear integral equation for t(ω) exactly.
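For reference, the pseudospin quantities of this passage collected in one display; the exchange Hamiltonian and the exponential form of T_K are standard forms assumed from context, since they are lost in the extraction:

\[
H_{\mathrm{exch}} \simeq J \sum_i \mathbf{T}_i \cdot \mathbf{t}_i,
\qquad J = \frac{8V^2}{|U|},
\qquad T_K \simeq D\, e^{-1/(\rho_F J)} \quad (\text{assumed standard form}),
\]
\[
T_i^{+} = s^{\dagger}_{i\downarrow} s^{\dagger}_{i\uparrow} \;\;(\text{pairing}),
\qquad
T_i^{z} = \tfrac{1}{2}\Big(\sum_{\sigma} n_{is\sigma} - 1\Big) \;\;(\text{charge order}),
\]
\[
I^{+-}(R) = \frac{J^{2}\rho_F}{8\pi\, R^{3}},
\qquad
I^{zz}(R) = I^{+-}(R)\cos(2 k_F R).
\]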
The approach was applied to study spin Kondo impurities in a superconducting host, and a rich behavior for T_c(x) was obtained which was shown to agree well with experiments [21]. In what follows we use and generalize this approach to investigate superconductivity in the charge Kondo problem. This scattering-matrix approach is unique in that it allows one to investigate the subtle crossover close to T_K and, as we will see, naturally includes effects related to the coupling between impurities, I^{±}(R), effects which are very hard to include in other, more modern approaches to the Kondo problem [18]. In the normal state, the t(ω) of the charge and spin Kondo problems turn out to be identical, and we can simply use the results of Ref. [20]. In the superconducting state an anomalous scattering matrix, t_Δ(ω), occurs. Superconductivity and charge Kondo dynamics are much more closely intertwined than in the magnetic problem, and determining t_Δ(ω) becomes a considerably more complex task. However, for the linearized gap equation which determines T_c, t_Δ(ω) is small and progress can be made analytically. For a small superconducting gap Δ, we obtain a contribution determined solely by the local Kondo dynamics and a nonlocal, "proximity" contribution t_{Δ,prox}(ω_n), which is proportional to T^+, reflecting the broken symmetry at the impurity in the superconducting state. We allow for a finite attractive BCS interaction, V_0 < 0, of the host. Here t(iω_n) is the normal-state t-matrix of Ref. [20] and X_n = ρ_F J [ψ(1/2 + n) - ψ(1/2) - log(T_K/T)] with digamma function ψ(x). Performing the usual disorder average [2], we finally obtain a linearized gap equation with iω̃_n = iω_n + x_r ρ_F J t(iω̃_n) and Δ̃(ω̃_n) = Δ[1 + x_r ρ_F J t(ω̃_n)/|ω̃_n|] + x_r ρ_F J t_Δ(ω̃_n). T^+ is determined by the ability to polarize a static pairing state at the impurity site, just like in the proximity effect in superconductors or the RKKY interaction in the magnetic case. Close to T_c we find T^+ = -(J/2V_0) χ(T_c) Δ, with the local susceptibility of the Kondo problem χ(T) ∝ (T + T_K)^{-1}.

We first consider the limit V_0 = 0, i.e. the host material is not superconducting on its own, like PbTe. Only the t_Δ contributions which are proportional to V_0^{-1} contribute to Δ̃(ω̃_n). At high temperatures, T_c ≫ T_K, one easily finds that only t_{Δ,prox} contributes to T_c, and we recover the mean-field result of Ref. [22]. The behavior changes as T approaches T_K. Now χ(T) ~ T_K^{-1} and t_{Δ,prox} stops being the sole, dominant pairing source. The pairing interaction becomes strongly frequency dependent. t_loc(ω) and t_{Δ,prox}(ω) become comparable to each other, as well as to the pair-breaking scattering rate τ^{-1}, which is directly related to the existence of a finite width, ~ T_K, of the Kondo resonance. Just as in the case of spin Kondo systems, pair-breaking effects are largest for T_c ≈ T_K. However, unlike for the magnetic counterparts, the pairing interaction itself strongly depends on T_c/T_K and increases with concentration. Our results for the concentration dependence of T_c are shown in Fig. 2. Charge Kondo impurities do indeed cause a superconducting state with T_c ≈ T_K. At higher concentration we find that T_c rises almost linearly with x, whereas a rich behavior occurs in the low-temperature limit. The competition between pair breaking and the pairing interaction causes re-entrant normal-state behavior, which might serve as a unique fingerprint of a charge Kondo origin of superconductivity.
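The renormalizations entering the linearized gap equation, as stated piecewise above, collected in one display (the groupings are reconstructions of the garbled inline formulas):

\[
i\tilde{\omega}_n = i\omega_n + x_r \rho_F J\, t(i\tilde{\omega}_n),
\qquad
\tilde{\Delta}(\tilde{\omega}_n)
 = \Delta\!\left[1 + x_r \rho_F J\,\frac{t(\tilde{\omega}_n)}{|\tilde{\omega}_n|}\right]
 + x_r \rho_F J\, t_{\Delta}(\tilde{\omega}_n),
\]
\[
T^{+} = -\frac{J}{2V_0}\,\chi(T_c)\,\Delta,
\qquad
\chi(T) \propto \frac{1}{T + T_K},
\qquad
X_n = \rho_F J\left[\psi\!\left(\tfrac12 + n\right) - \psi\!\left(\tfrac12\right)
 - \log\frac{T_K}{T}\right].
\]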
Due to the uncertainty in the value of ρ_F J for Tl-doped PbTe, it is unclear whether this effect is observable in this material. In Fig. 2 we compare our results, for several values of ρ_F J chosen such that T_K ≈ T_c, with experiment [5]. To obtain T_c ≈ 1 K we used D = µ*/4.5 and ρ_F D ≈ 0.08. Given the values of ρ_F µ* and µ* listed above, these are perfectly reasonable parameters, chiefly demonstrating that a T_c of several Kelvin is possible within the charge Kondo theory for x ≈ 1%. These numbers further allow us to estimate the temperature, ≈ 30 mK, below which the normal state reappears.

Unlike in ordinary superconductivity, pairing in charge Kondo systems is caused by dilute impurities which are coupled by host carriers at low concentration, so the stability of the superconducting state with respect to fluctuations becomes an important issue. In order to quantify this, we determine the superfluid density ρ_s/n_0 = πT Σ_{ω_n} Δ̃^2(ω̃_n)/ω̃_n^3 close to T_c, where ρ_s ∝ Δ^2. In Fig. 3 we show our results for the dimensionless ratio α ≡ (ρ_s/n_0)(ρ_F Δ)^{-2} as a function of T_K/T_c. α has a local minimum for T_c ≈ T_K, caused by the strong scattering rate of a charge Kondo impurity, which reduces ρ_s. From α we can estimate the temperature at which phase fluctuations affect the transition significantly, and find that for T_c ≈ T_K superconductivity is robust, whereas for T_c ≪ T_K the phase stiffness rapidly becomes small. In Ref. [22] charge Kondo superconductivity was analyzed for T_c ≪ T_K, with the result T_c ≈ T_K exp(-λ_eff^{-1}) and λ_eff ~ x/(ρ_F T_K). Our results strongly suggest that this state is unstable against phase fluctuations. Within our theory we can also discuss the impact of charge Kondo impurities in a system which is superconducting for x = 0. We find, in agreement with quantum Monte Carlo simulations [14], that T_c increases: independent of J and x, pair stabilization due to negative-U centers is always more efficient than pair breaking.

In summary, we have developed a theory for superconductivity in charge Kondo systems, valid in the crossover region T ≈ T_K, which can explain the comparatively large transition temperature in Tl-doped PbTe. We showed that Tl is a very special impurity: it first supplies a certain amount of charge carriers to the PbTe valence band and then puts itself into a self-tuned resonant state to supply a new mechanism for superconductivity of these carriers. The subtle interplay of pair formation and pair breaking by the same impurities can cause a rich behavior, including an enhancement of the host transition temperature by impurities, a re-entrant normal-state transition, and large phase fluctuations of weakly coupled local pairs for T_c ≪ T_K. Our results agree in order of magnitude and in the generic concentration dependence of T_c and n_0 with the experiments [5,8,9] for Pb_{1-x}Tl_xTe, strongly suggesting a charge Kondo origin for superconductivity in this material.
2018-04-03T06:07:16.384Z
2004-09-07T00:00:00.000
{ "year": 2004, "sha1": "0d0b47851b92922f1125f7417811f419d897ebf1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0409171", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9beeba86253ce36f90884991da0263fe746c0710", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
235240179
pes2o/s2orc
v3-fos-license
Combined Central and Peripheral Demyelinating Disease With Good Response to B-Cell Depleting Therapy

Combined central and peripheral demyelination (CCPD) is a rare disorder characterized by demyelinating lesions in both the central and peripheral nervous systems. The following case report describes a 29-year-old man who presented with a three-month history of progressive lower and upper limb weakness associated with facial and arm tremor, as well as urinary hesitancy. Brain and spine magnetic resonance imaging showed multiple demyelinating plaques. Nerve conduction studies revealed evidence of demyelination with severe prolongation of distal motor latencies and reduced conduction velocities. The patient received plasmapheresis and high-dose corticosteroids, which led to clinical improvement. A rituximab infusion protocol was subsequently started, and the patient received two cycles, with significant functional improvement upon the use of rituximab. This study reports a rare neurological disease entity and highlights the necessity of conducting larger studies to optimally demonstrate the efficacy of rituximab in CCPD.

Introduction

Combined central and peripheral demyelination (CCPD) is a rare disorder, and our current knowledge is based on data derived from case reports or small case series [1]. Multiple sclerosis (MS) is a chronic autoimmune disease that is confined to the central nervous system (CNS) [2]. The disease course is characterized by inflammation, demyelination of the CNS, gliosis, and, eventually, neuronal loss. Furthermore, degeneration of distal oligodendrocyte processes and oligodendrocyte apoptosis are part of MS pathophysiology. MS can cause a vast range of neurological symptoms depending on the site of the lesion. It has a multifactorial etiology and affects approximately 2.5 million individuals worldwide. On the other hand, chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is an acquired immune-mediated disorder that typically targets the peripheral nervous system (PNS) [3]. Myelin components of the PNS, which are generated by Schwann cells, are attacked by T-cell-mediated actions and humoral immune mechanisms, which underlie the etiology of this disease. CIDP usually develops over more than eight weeks with a diverse range of clinical presentations. In general, CCPD involves disparate conditions with acute, relapsing, and chronic subtypes [4]. This raises the question of whether CCPD should be regarded as a separate disease entity with a shared immunopathogenic mechanism, or as a coincidence of two unrelated demyelinating disorders, MS and CIDP. In this article, we present a case of CCPD with a good response to B-cell depleting therapy.

Case Presentation

A 29-year-old man presented with a three-month history of progressive lower-limb weakness and numbness that extended to involve the upper limbs one month before presentation. Weakness was more prominent distally, and more on the left than the right. It was associated with facial and arm tremors and urinary hesitancy. The patient's symptoms had a fluctuating course with no particular triggers. There was no history of viral infection before symptom onset, and the patient had no history of other neurological disorders. The patient had no history of fever, weight loss, or night sweats, and the systemic review was unremarkable. On physical examination, the patient was vitally stable. There was a relative afferent pupillary defect in the left eye with a reduced visual acuity of 20/25 in the same eye.
Otherwise, visual field, fundoscopy, and the remaining cranial nerve examinations were normal. The patient had reduced muscle strength, graded as 4 on hand grip, 4 on wrist flexion, 4 on wrist extension, 4 on elbow flexion, 3 on elbow extension, 4 on hip flexion and extension, and 4 on knee flexion and extension, with bilateral foot drop. Deep tendon reflexes were all absent. There was a stocking distribution of reduced sensation, particularly to vibration and proprioception. The patient had an Expanded Disability Status Scale (EDSS) score of 6. Brain magnetic resonance imaging (MRI) showed multiple demyelinating plaques involving the corpus callosum, right frontal lobe, deep periventricular white matter, and external capsule (Figures 1A-E). Cervical and thoracic spine MRI showed multilevel eccentric signal alterations without any enhancement (Figures 1F-G). Nerve conduction studies detected demyelinating polyneuropathy with severe prolongation of distal motor latencies and reduced conduction velocities over the bilateral median, left ulnar, and bilateral tibial nerves (Table 1). Electromyography showed neurogenic motor units with minimal denervation potentials. Visual evoked potentials revealed borderline prolongation of p100 wave latency, left more than right.

Upon hospital admission, the patient showed clinical improvement in the weakness of his upper and lower limbs after receiving plasmapheresis. This was followed by a course of prednisolone for four months. However, the patient's symptoms deteriorated with the steroid taper, as he developed a new spinal cord relapse with worsening lower limb weakness and a new sensory level at T10. He was admitted to the hospital and treated with intravenous methylprednisolone 1 g for five days. Subsequently, he was placed on intravenous methylprednisolone 1 g weekly for six weeks, followed by high-dose maintenance with 60 mg of oral prednisolone. The patient's weakness eventually improved; however, frequent flare-ups were observed with steroid taper before the initiation of steroid-sparing agents. Intravenous immunoglobulin (IVIG) was tried during the course but elicited no clinical response. The patient was started on azathioprine but developed abnormal liver function tests and was therefore weaned off the medication. A rituximab (RTX) infusion protocol was then initiated. The patient received two doses of 1 g, given two weeks apart, followed by repeated cycles every six months. The patient showed significant clinical improvement after receiving RTX, and we were able to taper the dose of prednisone down to 10 mg. After one year of follow-up from starting RTX, his motor examination showed mild persistent weakness in left ankle dorsiflexion, graded as 4 out of 5, but otherwise normal strength throughout. His current EDSS score is 2.5.

Discussion

CCPD is a rare entity with no well-established diagnostic criteria. To our knowledge, no case of a patient with CCPD has previously been reported in Saudi Arabia. A retrospective cohort study of 31 patients was conducted at two centers in Italy [1]. The study aimed to identify the clinical features, diagnostic findings, and possible treatments of CCPD. The majority of patients were men, with disease onset at 57 years of age. A total of 20 participants had an infection or received a vaccination before their presentation. Spinal cord lesions were anticipated, as many of the enrolled patients suffered from lower-limb sensory-motor impairment and sphincter dysfunction.
Nevertheless, altered mental status and cranial nerve involvement can be the presenting symptoms of CCPD patients. The disease course in the study's participants was monophasic in one-third of the cases, indicating a single isolated episode of demyelination. A total of 21 patients showed progression of the disease, either by relapse with subacute onset of new symptoms or by constant chronic progression. Interestingly, six cases with distal paresthesia presented with a progressive disease course from the onset of symptoms. Most patients in the acute phase were treated with steroids, IVIG, or plasma exchange therapy. A total of 19 out of 26 patients had an improvement in the modified Rankin Scale (mRS) score by at least one point. Twenty-four patients with relapses or chronic progression had a poorer response rate to steroids or IVIG when these were used later during the disease course.

Neurofascin is a member of the L1 subgroup of adhesion molecules expressed at the nodes of Ranvier and the paranodes in both the CNS and PNS [5]. In a Japanese study, Kawamura et al. discovered that anti-neurofascin (anti-NF) antibody was present in 86% of CCPD patients [4]. The Japanese study also showed a better response to IVIG or plasma exchange treatment in patients with positive anti-NF antibodies, underscoring the significance of its detection. However, our patient tested negative for anti-NF antibody. This is consistent with a previous study done on Caucasians to assess the presence of the same antibody among CCPD patients [6]. In that study, none of the patients with CCPD were found to be positive for the anti-NF antibody; nevertheless, the study speculated that these findings were partially attributable to the participants' ethnicity and the heterogeneity of their clinical presentation.

In our patient, RTX was successfully used to taper down the dose of steroids. The use of RTX has been reported in several studies. In a placebo-controlled randomized trial of 104 relapsing-remitting MS (RRMS) patients, RTX was found to drastically decrease the counts of contrast-enhancing lesions as well as the volumes of T2 lesions. Furthermore, it reduced the proportion of patients with relapse at 48 weeks (RTX: 20.3% vs. placebo: 40.0%) [7]. A population-based Swedish study of 494 patients with newly diagnosed RRMS concluded that the use of RTX decreases the annual relapse rate as well as neuroradiologic disease activity in comparison with all other disease-modifying therapies [8]. RTX was also used in a case series of 11 patients with CIDP that showed evidence of improvement: the Medical Research Council sum score changed by 0 to 60 points post-treatment, and the change in the Inflammatory Neuropathy Cause and Treatment disability score ranged from 0 to 8 after starting the treatment, with a mean of 4.54 [9]. A case report of a 14-year-old patient with CCPD who was treated with RTX revealed both clinical improvement in sensory and motor deficits and a reduction in the number of lesions on brain and spine MRI [10].

Conclusions

RTX yielded excellent clinical improvement in our case, signifying a potential efficacy of B-cell depleting therapy in CCPD. This study reports a rare neurological disease entity and highlights the necessity of conducting further studies to optimally demonstrate the efficacy of RTX in CCPD.
2021-05-30T05:07:24.631Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "5a58ca4d3aedbfc3fa31fc9d441f06352bd61e86", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/57541-combined-central-and-peripheral-demyelinating-disease-with-good-response-to-b-cell-depleting-therapy.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a58ca4d3aedbfc3fa31fc9d441f06352bd61e86", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236605579
pes2o/s2orc
v3-fos-license
Analysis of micro cracks and die erosion in die casting: Die soldering is a challenging issue for die life and casting quality in the High Pressure Die Casting (HPDC) industry. It increases die downtime, which raises the production cost per piece. Die soldering can be mitigated by surface heat treatment operations such as gas nitriding and by PVD coatings. A used and scrapped die was selected from the die casting industry for investigation of the die soldering issue. The chemical distribution of elements and the surface condition of the affected soldering zone were investigated. The study reveals abundant micro cracks, micro holes and micro cavities in the soldered portion. The radius of the micro holes is about 0.25 µm and the radius of the macro holes is about 8 µm. The die inserts are made of H13 die steel, and LM24 aluminum alloy is used for the casting operation. At the die soldering cross-section region, the distribution of aluminum, the die soldering mechanism and its causes were studied. The die soldering mechanism is classified as chemical, physical, mechanical or mixed soldering. The soldering phenomenon has been studied as a function of die temperature and chemistry, metal temperature and chemistry, injection pressure and velocity, and die surface roughness. The spread and formation of die soldering on the used and scrapped die is also discussed in this article.

The production cost per piece depends on the die life. Studies show that normal die life varies from 25,000 to 250,000 cycles, depending upon the design and complexity of the die (1)(2)(3)(4)(5). The production cost per piece can be reduced by increasing the die life, taken here as the total number of castings produced before failure. A die may fail due to any one or a combination of the following: heat checks, chemical attack, soldering, mechanical erosion, thermal fatigue and mechanical stresses. Heat checking and die soldering are the two major factors resulting in die failure (6)(7)(8)(9). Numerous past works have focused principally on heat checks (10)(11)(12)(13)(14). However, with the development of the die casting industry, more and more consideration is being given to die soldering, which causes a notable decrease in the efficiency and productivity of the die casting process. Recently some researchers have begun to study the soldering mechanism through experimental work (15)(16)(17)(18)(19). However, die casting experiments matching actual production facilities can be very time-consuming and costly, so several efficient experimental methods for investigating soldering and related phenomena have been implemented in the laboratory (20)(21)(22)(23)(24). These methods can be classified into three types: accelerated tests, friction welding and hot-dip aluminizing. From the outcomes of this research, it can be concluded that intermetallic compounds are produced within the die steel. Different aluminum alloys present different soldering tendencies towards the steel die, and the soldering tendency of the casting to a given steel die increases with the number of casting cycles (25)(26)(27)(28). Soldering increases readily with rises in metal temperature, die temperature, injection velocity and injection pressure, and with failure of die surface coatings (29)(30)(31)(32)(33). Die surface treatments and PVD coatings can effectively reduce the incidence of soldering (34)(35)(36)(37). However, knowledge of die soldering remains largely empirical.
No theoretical approach has been finalized to investigate the effects of process parameters on the soldering problem. In this article, the surface condition of the die and the distribution of chemical elements in the soldering zone have been studied to analyze the soldering mechanism. A theoretical analysis of the effects of process parameters on soldering is carried out using related theory. Finally, the generation and spread of soldering in a given die portion during the die casting process have also been investigated.

Experimental and Results: A used and scrapped die from a die casting company was taken to investigate the characteristics of the die soldering region. The die was employed to cast an aluminum filter cover. The process parameters of the die are as follows: … The die portion with soldered aluminum was cut off from the die to study the surface condition of the soldered die, and the soldered part was dissolved in a 15% caustic soda solution for 20 hours. It is observed from Figure 1(a) that various micro holes and micro cavities exist on the die surface. The radius of the tiny micro holes is about 0.25 µm and the radius of the large macro holes is about 8 µm; the micro holes reflect the separation of intermetallic layers from the die surface, while the micro cavities reflect small-scale erosion. Die erosion near the gate location is high compared with a general position in the die, and some cracks exist near the gate location, as shown in Figure 1(b). Scanning electron microscopy (SEM) reveals the microstructure of the soldering region, as shown in Figures 2 and 3. Figure 2 depicts a straight interface between the die and the soldered aluminum, indicating a chemical reaction between the casting and the steel die; here the soldering is predominantly physico-chemical bonding. When the liquid metal enters and solidifies in curved cracks present in the die, strong mechanical interlocking takes place and results in mechanical soldering, as depicted in Figure 3. Where the cracks are tiny and flared, however, the mechanical interaction caused by micro cavities and cracks is not strong enough on its own, so soldering occurs by both chemical and mechanical action. Therefore, soldering is classified as mechanical, physico-chemical or mixed soldering, based on the mechanism involved. It is also noticed that most die surface parts carry aluminum-rich transition layers. The transition layer consists of intermetallic compounds resulting from a diffusion mechanism, and it plays a vital role in soldering occurrence. During the casting operation, molten metal is injected into the die cavity under high pressure and velocity. This activates atoms present on the die surface and leads to breakage of atomic bonds, so that aluminum atoms diffuse into the die surface and form atomic bonds with die atoms. It is observed that most metallic bonds formed at high temperature still exist after solidification, and a joint area is formed between the casting and the die after a casting cycle. Metallic bonds are produced when atoms of molten aluminum interact with atoms of the die steel; the atoms which gain a certain activation energy participate in the bonding process. According to the Maxwell-Boltzmann law, the fraction of activated atoms among all interfacial atoms can be written as Eq. (1) (a reconstruction of Eqs. (1)-(8) is sketched below), where ΔU is the activation energy of the interaction and f is the fraction of activated atoms.
It follows that the fraction of activated atoms f is directly proportional to the ratio of the real contact area to the apparent contact area between the casting and the die surface, that is, f ∝ A_r/A_a, which turns Eq. (1) into Eq. (2). Setting A_r/A_a = 1 gives the prefactor A_0 as Eq. (3), where T_0 is the critical temperature at which the casting bonds to the die; Eq. (2) then becomes Eq. (4). The temperature of the liquid metal rises when it is injected onto the die surface at high velocity. Assume the molten metal strikes the die surface at an angle β with velocity u. The kinetic energy of the normal velocity component of a given mass m of metal is converted into heat, and the resulting temperature rise of the liquid metal is given by Eq. (5), where C_m is the specific heat of liquid aluminum. The gate area is constant for a given die casting condition, and the relation between injection pressure and filling velocity of the liquid aluminum is expressed by the Darcy equation, Eq. (6), where p is the injection pressure, ρ_M is the density of aluminum and C_d = 0.8 is the coefficient for cold chamber die casting. The casting-die interface temperature is expressed by Eq. (7), where T_M is the melt temperature, T_m the die temperature, and b_M and b_m the heat accumulation coefficients; this also gives the rise in interface temperature, ΔT_i. Considering Eqs. (5)-(7), Eq. (3) can be written as Eq. (8).

Soldering of aluminum to the die surface can be considered as adhesion of two different metals under pressure. Metallic bonds are formed by the interatomic forces between two metallic atoms brought together by adhesion. The energies γ_α and γ_β are required to break the metallic bonds existing between two solid metals α and β in contact over unit area, whereupon the interfacial energy is recovered; the energy needed to create die soldering can be expressed accordingly (38). By this expression, the ratio between the real contact area and the apparent contact area between casting and die is the decisive factor influencing soldering formation.

Figure 4 depicts the effects of activation energy and temperature on A_r/A_a, where ΔU_1 < ΔU_2. The activation energy is a vital factor influencing the value of A_r/A_a, which governs die soldering according to the energy criterion of soldering. At the same interface temperature, a rise in activation energy reduces the number of atoms in the activated state and sharply reduces the value of A_r/A_a. Therefore, a rise in activation energy decreases the soldering tendency of the casting to the steel die. This explains the effects of alloy composition and die surface chemistry on soldering: the activation energy of the Al-Fe interaction is larger than that of Al-Al, so increasing the aluminum concentration on the die surface increases the number of atoms participating in soldering. Adding iron (Fe) up to 1.3 wt% to the aluminum alloy reduces the soldering tendency during casting, because the activation energy of Al-Fe is lower than that of Fe-Fe: iron atoms segregate at the liquid aluminum surface, producing a high local iron concentration there, and this high iron concentration in the alloy itself suppresses the diffusion of iron atoms from the die steel surface into the molten aluminum. Similarly, the activation energy of the Al-Fe interaction is less than that of Al-Mo.
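The numbered equations (1)-(8) referenced above are lost in the extraction. The following display is a hedged reconstruction from the surrounding definitions, using the standard Boltzmann activation factor, a Darcy-type gate-velocity relation and the two-body contact temperature; the exact constants and forms are assumptions:

\[
f = e^{-\Delta U / kT} \;\;(1), \qquad
\frac{A_r}{A_a} = A_0\, e^{-\Delta U / kT} \;\;(2), \qquad
A_0 = e^{\Delta U / kT_0} \;\;(3),
\]
\[
\frac{A_r}{A_a} = \exp\!\left[\frac{\Delta U}{k}\Big(\frac{1}{T_0} - \frac{1}{T}\Big)\right] \;\;(4),
\qquad
\Delta T = \frac{(u \sin\beta)^2}{2 C_m} \;\;(5),
\qquad
u = C_d \sqrt{\frac{2p}{\rho_M}} \;\;(6),
\]
\[
T_i = \frac{b_M T_M + b_m T_m}{b_M + b_m} \;\;(7),
\qquad
\frac{A_r}{A_a} = \exp\!\left[\frac{\Delta U}{k}\Big(\frac{1}{T_0} - \frac{1}{T_i + \Delta T}\Big)\right] \;\;(8).
\]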
Since the critical temperature T_0 is then much greater than the die temperature, soldering resistance can be obtained with a laser-melted molybdenum coating on the die surface. It is also observed from Figure 4 that the interfacial temperature is a vital factor influencing the value of A_r/A_a. At low temperature T the value of A_r/A_a is small; however, when the temperature increases toward the critical temperature T_0, the value of A_r/A_a rises suddenly. The temperatures of both the die surface and the interface depend on the pouring temperature of the liquid metal and on the heat transfer conditions. Soldering occurs rarely when the die temperature is low, but easily at hot spots, where the die temperature is very high; this has been verified during the casting process.

Injection pressure is another factor that aggravates soldering formation. Its effect on soldering comprises two parts, chemical action and mechanical action. The high pressure and velocity of the injected molten metal on the die surface partially wash out the coating material, resulting in direct attack of the molten metal on the die steel surface. The mechanical action of the injection pressure increases the value of A_r/A_a, and so does the chemical action: high injection pressure increases the alloy energy and the number of active atoms that cause soldering formation. Figure 5 demonstrates the effect of injection pressure on A_r/A_a, where P_2 is twice P_1. At the same temperature, the value of A_r/A_a is larger at the higher pressure, so die soldering occurs very easily under high injection pressure.

When the liquid metal is injected into the die cavity, a wetting process takes place on the die surface due to surface tension. Wettability is characterized by the contact angle θ: wetting is perfect when the contact angle is zero, the solid surface is completely dry when the contact angle is 180°, and the surface is partially wetted for angles between 0° and 180°. The smaller the contact angle, the better the wetting. The value of the contact angle between the die steel surface and the liquid aluminum therefore represents the soldering tendency of the aluminum on the steel die. In aluminum die casting there is a contact angle hysteresis between the die surface and the molten aluminum; the major factors affecting it are solute deposition on the die surface, surface roughness and the molten metal. A contamination point still exists even if the die lubricant is only partially washed off the die surface, and it has been shown experimentally (39)(40) that the influence of surface contamination on the contact angle hysteresis is equivalent to that of a rough surface. We therefore discuss the contact angle hysteresis of a randomly rough die surface wetted by molten aluminum. The roughness coefficient r of a rough surface is defined as the ratio of the real contact area to the apparent projected area. The relation between the apparent contact angle θ_a and the real contact angle θ_e is given by Wenzel as cos θ_a = r cos θ_e (10). A rough surface consisting of small patches of different materials is considered a compound surface.
Assume that the die compound surface consists of two kinds of flat patches whose fractions of the apparent projected area are f1 and f2, respectively. The apparent contact angle is then expressed as cos θ_C = f1 cos θ_1 + f2 cos θ_2, where θ_1 and θ_2 are the real contact angles between the two kinds of patches and the liquid. Gas is always trapped by the liquid aluminum in the large holes on the die steel surface, whereas this does not occur in the smaller holes, due to surface tension. If f2 is the fraction that cannot contact the molten aluminum, then θ_2 = 180° and the expression becomes cos θ_C = f1 cos θ_1 - f2; taking f2 = 1 - f1 and accounting for the roughness of the wetted fraction, one obtains cos θ_a = r f1 cos θ_1 + f1 - 1 (11).

When f1 = 1, that is, when the die surface contacts the liquid aluminum perfectly without trapped gas, the effect of surface roughness on the apparent contact angle between iron and liquid aluminum, or between WC-Co and molten aluminum, is depicted in Figure 6(a). If f1 = 0.8, that is, when the liquid aluminum traps gas in the large valleys, the relation between the surface roughness of the two contact systems and the apparent contact angle is depicted in Figure 6(b). The apparent contact angle between the die steel and the liquid aluminum is less than 90° and decreases with increasing surface roughness coefficient. The apparent contact angle between WC-Co and liquid aluminum is greater than 90° and increases with increasing surface roughness coefficient; accordingly, the soldering tendency of a WC-Co coated die is lower than that of a bare steel die. Comparing Figures 6(a) and 6(b), the apparent contact angle with trapped gas is higher than that for full contact, even at the same roughness coefficient. This demonstrates that the contact angle hysteresis increases with an increasing number of micro cavities and micro holes on the die steel surface; a numeric illustration of Eqs. (10) and (11) is sketched below.

An experiment was conducted in the die casting plant with a high gate velocity of 150 m/s and a high pouring metal temperature of 750 °C. In this experiment, soldering took place after only one or two shots on an H13 steel die without any coating or die lubricant. This reveals that the interfacial temperature was very close to the critical temperature T_0, causing a strong chemical reaction between the die steel surface and the molten metal. Soldering occurred after the second shot because the oxide films on the die surface had been washed out by the first shot. In actual practice, however, the gate velocity, pouring temperature and die temperature are lower, and the die is protected by a lubricant which forms a thin layer on the die surface. This thin layer acts as a barrier between the die surface and the molten metal and thus prevents die soldering. On the other hand, if there is a hot spot in the die, the lubricant layer is detached or washed off, causing direct contact of the molten metal with the die surface and resulting in soldering formation. This is demonstrated in Figure 7. At the beginning, no soldering occurs at the die surface. As the number of casting cycles increases, aluminum atoms from the liquid metal diffuse into the die surface and iron atoms from the die dissolve in the melt. The aluminum concentration on the die slowly increases; when it reaches a threshold value, a chemical reaction occurs between the die and the molten metal and leads to soldering.
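A minimal Python sketch illustrating Eqs. (10) and (11) numerically; the intrinsic contact angles used are placeholders, not measured values from the paper:

import math

def wenzel(theta_e_deg, r):
    """Apparent contact angle for full contact, Eq. (10): cos(theta_a) = r*cos(theta_e)."""
    c = r * math.cos(math.radians(theta_e_deg))
    c = max(-1.0, min(1.0, c))  # clamp: beyond this range wetting is complete or zero
    return math.degrees(math.acos(c))

def trapped_gas(theta_1_deg, r, f1):
    """Apparent angle with gas trapped on fraction 1-f1, Eq. (11):
    cos(theta_a) = r*f1*cos(theta_1) + f1 - 1."""
    c = r * f1 * math.cos(math.radians(theta_1_deg)) + f1 - 1.0
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

# Placeholder intrinsic angles: ~70 deg for steel/Al (wetting, <90),
# ~110 deg for WC-Co/Al (non-wetting, >90), consistent with the trends in Figure 6.
for r in (1.0, 1.2, 1.5):
    print(f"r={r}: steel {wenzel(70, r):5.1f} deg, WC-Co {wenzel(110, r):5.1f} deg, "
          f"steel+gas(f1=0.8) {trapped_gas(70, r, 0.8):5.1f} deg")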
Intermetallic compounds are formed and grow, and are washed out by the high-pressure flow of molten metal. Micro cavities and micro holes are formed and grow through erosion by the liquid metal. As the die casting process continues, heat checks form on the die and result in mixed soldering. These heat checks grow and lead to crack propagation, which causes die failure; the die is finally withdrawn from production service. The formation and spread of soldering in a die with a protective coating is different from that in a die without one. Figure 8 demonstrates the condition of a die with a protective coating. The coating is strong and cannot easily be washed out by the injection pressure of the metal. According to the energy analysis [18], the chemical reaction between the molten metal and a coated die is very weak, so the tendency toward soldering formation is very low or nil. However, with an increasing number of casting cycles, micro cracks are produced in the coating; once these cracks propagate into the die, the coating detaches from the die, the die can no longer be used for production, and it is finally withdrawn from service.

Conclusions:
 Large numbers of micro cavities, micro holes and micro cracks are observed in the soldered portion of the die steel surface. These surface imperfections are responsible for the mechanical interaction between the molten metal and the die and promote chemical reaction over the given apparent contact area.
 Soldering is grouped into three types, namely mechanical, physico-chemical and mixed soldering, based on the mechanism of soldering.
 The activation energy of the interaction between the die and the casting and the interfacial temperature strongly influence the value of A_r/A_a.
 Dies with special protective coatings have less affinity for soldering formation.
 The roughness coefficient of the die surface increases with the number of casting cycles. This results in a decrease in the apparent contact angle between the die and the liquid metal.

Declarations:
2021-08-02T00:05:59.893Z
2021-05-10T00:00:00.000
{ "year": 2021, "sha1": "eb4c8b6e21bc0830b1a93f5d94191bb7200a74ad", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-495892/v1.pdf?c=1631880645000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "11a832706d933926f14ce9dde8a552c1d65de064", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Geology" ] }
210839089
pes2o/s2orc
v3-fos-license
Tighter Theory for Local SGD on Identical and Heterogeneous Data

We provide a new analysis of local SGD, removing unnecessary assumptions and elaborating on the difference between two data regimes: identical and heterogeneous. In both cases, we improve the existing theory and provide values of the optimal stepsize and optimal number of local iterations. Our bounds are based on a new notion of variance that is specific to local SGD methods with different data. The tightness of our results is guaranteed by recovering known statements when we plug in $H=1$, where $H$ is the number of local steps. The empirical evidence further validates the severe impact of data heterogeneity on the performance of local SGD.

Introduction

Modern hardware increasingly relies on the power of uniting many parallel units into one system. This approach requires optimization methods that target specific issues arising in distributed environments, such as decentralized data storage. Not having data in one place implies that computing nodes have to communicate back and forth to keep moving toward the solution of the overall problem. A number of efficient first-order, second-order and dual methods capable of reducing the communication overhead have existed in the literature for a long time, some of which are in a certain sense optimal. Yet, when Federated Learning (FL) showed up, it turned out that the problem of balancing communication and computation had not been solved. On the one hand, Minibatch Stochastic Gradient Descent (SGD), which averages the result of stochastic gradient steps computed in parallel on several machines, again demonstrated its computational efficiency. Seeking communication efficiency, Konečný et al. (2016) and McMahan et al. (2017) proposed to use a natural variant of Minibatch SGD, Local SGD (Algorithm 1), which does a few SGD iterations locally on each involved node and only then computes the average. This approach saves a lot of time on communication, but, unfortunately, the theory was not as strong as the practice, and there are still gaps in our understanding of Local SGD.

The idea of local SGD is in fact not recent; it traces back to the work of Mangasarian (1995) and has since been popular among practitioners from different communities. An asymptotic analysis can be found in Mangasarian (1995), and quite a few recent papers proved new convergence results, making the bounds tighter with every work. The theory has been developing in two important regimes: identical and heterogeneous data. The identical data regime is more of interest if the data are actually stored in one place. In that case, we can access them on each computing device at no extra cost and get a fast, scalable method. Although not very general, this framework is already of interest to a wide audience due to its efficiency in training large-scale machine learning models (Lin et al., 2020). The first contribution of our work is to provide the fastest known rate of convergence for this regime under weaker assumptions than in prior work. Federated learning, however, is done on a very large number of mobile devices and operates in a highly non-i.i.d. regime. To address this, we present the first analysis of Local SGD that applies to arbitrarily heterogeneous data, while all previous works assumed a certain type of similarity between the data or the local gradients. To explain the challenge of heterogeneity better, let us introduce the problem we are trying to solve.
Given that there are M devices and corresponding local losses f_1, …, f_M : R^d → R, we aim to solve

min_{x ∈ R^d} f(x) := (1/M) Σ_{m=1}^M f_m(x).   (1)

Algorithm 1 (Local SGD)
Input: stepsize γ > 0, initial vector x_0 = x_0^m for all m ∈ [M], synchronization timesteps t_1, t_2, ….
for t = 0, 1, …:
    for each device m ∈ [M] in parallel:
        if data is identical: compute g_t^m = g(f, x_t^m, z_m) such that E[g_t^m | x_t^m] = ∇f(x_t^m);
        else: compute g_t^m = g(f_m, x_t^m, z_m) such that E[g_t^m | x_t^m] = ∇f_m(x_t^m).
        x_{t+1}^m = (1/M) Σ_{j=1}^M (x_t^j - γ g_t^j) if t = t_p for some p ∈ N, and x_{t+1}^m = x_t^m - γ g_t^m otherwise.

In the case of identical data, we are able to obtain on each node an unbiased estimate of the gradient ∇f. In the case of heterogeneous data, the m-th node can only obtain an unbiased estimate of the gradient ∇f_m. Data similarity can then be formulated in terms of the differences between the functions f_1, …, f_M. If the underlying data giving rise to the loss functions are i.i.d., the functions share optima and one could even minimize them separately, averaging the results at the end. We will demonstrate this rigorously later in the paper. If the data are dissimilar, however, we need to be much more careful, since running SGD locally will yield solutions of the local problems. Clearly, their average might not minimize the true objective (1), and this poses significant issues for the convergence of Local SGD. To properly discuss the efficiency of local SGD, we also need a practical way of quantifying it. Normally, a method's efficiency is measured by the total number of times each function f_m is touched and the cost of the touches. On the other hand, in distributed learning we also care about how much information each computing node needs to communicate. In fact, when communication is as expensive as is the case in FL, we predominantly care about communication. The question we address in this paper can thus be posed as follows: how many times does each node need to communicate if we want to solve (1) up to accuracy ε? Equivalently, we can ask for the optimal synchronization interval length between communications, H, i.e., how many computation steps per communication we can allow.

Related Work

While local SGD has been used among practitioners for a long time, see e.g. (Coppola, 2015; McDonald et al., 2010), its theoretical analysis has been limited until recently. Early theoretical work on the convergence of local methods exists, as in (Mangasarian, 1995), but no convergence rate was given there. The previous work can mainly be divided into two groups: those assuming identical data (that all nodes have access to the same dataset) and those that allow each node to hold its own dataset. As might be expected, the analysis in the latter case is more challenging, more limited, and usually shows worse rates. We note that in recent work more sophisticated local stochastic gradient methods have been considered, for example with momentum. Our work is complementary to these approaches, and provides improved rates and analysis for the vanilla method.

Local SGD with Identical Data

The analysis of local SGD in this setting shows that a reduction in communication is possible without affecting the asymptotic convergence rate of Minibatch SGD with M nodes (albeit usually with worse dependence on constants). An overview of related work on local SGD for convex objectives is given in Table 1.
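To make the communication pattern of Algorithm 1 concrete, here is a minimal runnable sketch on a toy heterogeneous quadratic problem; the objective, data split, noise level and parameter values are illustrative assumptions, not the paper's setup:

import numpy as np

def local_sgd(grad_fns, x0, gamma, T, H, rng):
    """Run Local SGD (Algorithm 1): each of the M workers takes stochastic
    gradient steps on its own f_m; every H steps the iterates are averaged."""
    M = len(grad_fns)
    x = np.tile(x0, (M, 1)).astype(float)  # one iterate per worker
    for t in range(T):
        for m in range(M):
            x[m] -= gamma * grad_fns[m](x[m], rng)  # local stochastic step
        if (t + 1) % H == 0:                        # synchronization timestep
            x[:] = x.mean(axis=0)                   # communicate and average
    return x.mean(axis=0)

# Toy heterogeneous quadratics f_m(x) = ||x - b_m||^2 / 2 with noisy gradients;
# the minimizer of f = (1/M) sum_m f_m is the average of the b_m.
rng = np.random.default_rng(0)
targets = [np.array([1.0, 0.0]), np.array([-1.0, 2.0]), np.array([0.0, -2.0])]
grads = [lambda x, r, b=b: (x - b) + 0.1 * r.standard_normal(2) for b in targets]

x_hat = local_sgd(grads, x0=np.zeros(2), gamma=0.1, T=500, H=10, rng=rng)
print("approx minimizer:", x_hat, "true average of b_m:", np.mean(targets, axis=0))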
We note that analyses for nonconvex objectives have been carried out in a few recent works (Zhou and Cong, 2018; Wang and Joshi, 2018; Jiang and Agrawal, 2018), but our focus in this work is on convex objectives, and hence they were not included in Table 1. The comparison shows that we attain rates superior to previous work in the strongly convex setting, with the exception of the concurrent work of Stich and Karimireddy (2019), and we attain these rates under less restrictive assumptions on the optimization process than theirs. We further provide a novel analysis in the convex case, which has not been previously explored in the literature, with the exception of (Stich and Karimireddy, 2019). Their analysis attains the same communication complexity but is much more pessimistic about possible values of H. In particular, it does not recover the convergence of one-shot averaging: substituting H = T or even H = T/M gives noninformative bounds, unlike our Theorem 1. In addition to the works listed in the table, Dieuleveut and Patel (2019) also analyze local SGD for identical data under a Hessian smoothness assumption in addition to gradient smoothness, strong convexity, and uniformly bounded variance. However, we believe that there are issues in their proof, which we explain in Section 12 of the supplementary material. As a result, that work is excluded from the table.

Local SGD with Heterogeneous Data

An overview of related work on local SGD in this setting is given in Table 2. Local SGD is at the core of the Federated Averaging algorithm, which is popular in federated learning applications (Konečný et al., 2016). Essentially, Federated Averaging is a variant of Local SGD with participating devices sampled randomly. This algorithm has been used in several machine learning applications, such as mobile keyboard prediction (Hard et al., 2018), and strategies for improving its communication efficiency were explored in (Konečný et al., 2016). Despite its empirical success, little is known about the convergence properties of this method, and it has been observed to diverge when too many local steps are performed (McMahan et al., 2017). This is not so surprising, as the majority of common assumptions are not satisfied; in particular, the data are typically very non-i.i.d. (McMahan et al., 2017), so the local gradients can point in different directions. This property of the data can be written for any vector x and indices i, j as ‖∇f_i(x) - ∇f_j(x)‖ ≫ 1. Unfortunately, it is very hard to analyze local methods without assuming a bound on the dissimilarity of ∇f_i(x) and ∇f_j(x). For this reason, almost all prior work assumed some regularity notion on the functions, such as bounded dissimilarity (Yu et al., 2019a; Li et al., 2020; Yu et al., 2019b; Wang et al., 2018) or bounded gradient diversity (Haddadpour and Mahdavi, 2019), and addressed other, less challenging aspects of federated learning such as decentralized communication, nonconvexity of the objective, or unbalanced data partitioning. In fact, a common way to make the analysis simple is to assume Lipschitzness of the local functions, ‖∇f_i(x)‖ ≤ G for any x and i. We argue that this assumption is pathological and should be avoided when seeking a meaningful convergence bound: in other words, we lose control over the difference between the functions. Since G bounds not just the dissimilarity, but also the gradients themselves, it makes the statements less insightful or even vacuous. For instance, it is not going to be tight if the data are actually i.i.d., since G in that case will remain a positive constant. In contrast, we will show that the rate should depend on the much more meaningful quantity σ_dif² := (1/M) Σ_{m=1}^M E_{z_m ∼ D_m} ‖∇f_m(x*, z_m)‖², where x* is a fixed minimizer of f and f_m(·, z_m) for z_m ∼ D_m are stochastic realizations of f_m (see the next section for the setting). Obviously, for all nondegenerate sampling distributions D_m the quantity σ_dif is finite and serves as a natural measure of variance in local methods. We note that an attempt at a more general convergence statement was made by (Li et al., 2018), but unfortunately their guarantee is strictly worse than that of minibatch Stochastic Gradient Descent (SGD). In the overparameterized regime where σ_dif = 0, Zhang and Li (2019) prove the convergence of Local SGD with arbitrary H.
For instance, it is not going to be tight if the data are actually i.i.d., since G in that case will remain a positive constant. In contrast, we will show that the rate should depend on a much more meaningful quantity,

$$\sigma_{\mathrm{dif}}^2 := \frac{1}{M}\sum_{m=1}^{M} \mathbb{E}_{z_m \sim \mathcal{D}_m} \left\| \nabla f_m(x^\ast, z_m) \right\|^2,$$

where x* is a fixed minimizer of f and f_m(·, z_m) for z_m ∼ D_m are stochastic realizations of f_m (see the next section for the setting). Obviously, for all nondegenerate sampling distributions D_m the quantity σ_dif is finite and serves as a natural measure of variance in local methods. We note that an attempt to obtain a more general convergence statement was made by (Li et al., 2018), but unfortunately their guarantee is strictly worse than that of minibatch Stochastic Gradient Descent (SGD). In the overparameterized regime where σ_dif = 0, Zhang and Li (2019) prove the convergence of Local SGD with arbitrary H.

Settings and Contributions
Assumption 1. Assume that the set of minimizers of (1) is nonempty. Each f_m is µ-strongly convex for µ ≥ 0 and L-smooth; that is, for all x, y ∈ R^d,

$$f_m(y) \ge f_m(x) + \langle \nabla f_m(x), y - x \rangle + \frac{\mu}{2}\|y - x\|^2, \qquad \|\nabla f_m(x) - \nabla f_m(y)\| \le L\|x - y\|.$$

When µ = 0, we say that each f_m is just convex. When µ ≠ 0, we define the condition number κ := L/µ.

Assumption 1 formulates our requirements on the overall objective. Next, we have two different sets of assumptions on the stochastic gradients that model different scenarios and lead to different convergence rates.

Assumption 2. Given a function h, a point x ∈ R^d, and a sample z ∼ D drawn i.i.d. according to a distribution D, the stochastic gradients g = g(h, x, z) satisfy E_{z∼D}[g(h, x, z)] = ∇h(x), with variance uniformly bounded by σ².

Assumption 2 holds, for example, when g(x, z) = ∇h(x) + ξ_z for a random variable ξ_z of bounded expected squared norm, E‖ξ_z‖² ≤ σ². Assumption 2, however, typically does not hold for finite-sum problems, where g(x, z) is the gradient of one of the functions in the finite sum. To capture this setting, we consider the following assumption:

Assumption 3. Given an L-smooth and µ-strongly convex (possibly with µ = 0) function h: R^d → R written as an expectation h(x) = E_{z∼D}[h(x, z)], we assume that a stochastic gradient g = g(h, x, z) is computed as g(h, x, z) = ∇h(x, z). We assume that h(·, z): R^d → R is almost surely L-smooth and µ-strongly convex (with the same L and µ as h).

When Assumption 3 is assumed in the identical data setting, we assume it is satisfied on each node m ∈ [M] with h = f and distribution D_m, and we define as a measure of variance at the optimum

$$\sigma_{\mathrm{opt}}^2 := \frac{1}{M}\sum_{m=1}^{M} \mathbb{E}_{z_m \sim \mathcal{D}_m} \left\| \nabla f(x^\ast, z_m) \right\|^2,$$

whereas in the heterogeneous data setting we assume it is satisfied on each node m ∈ [M] with h = f_m and distribution D_m, and we analogously define σ²_dif as above. Assumption 3 holds, for example, for finite-sum optimization problems with uniform sampling, and permits direct extensions to more general settings such as expected smoothness (Gower et al., 2019).

Our contributions are as follows:
1. In the identical data setting under Assumptions 1 and 2 with µ > 0, we prove that the iteration complexity of Local SGD to achieve ε-accuracy in squared distance from the optimum is O(σ²/(µ²Mε)), provided that T = Ω(κ(H − 1)). This improves the communication complexity in prior work (see Table 1), with tighter results compared to concurrent work (recovering convergence for H = 1 and H = T). When µ = 0, we show that Local SGD attains an ε-accurate solution in functional suboptimality with an iteration complexity matching that of Minibatch SGD. We further show that the same ε-dependence holds in both the µ > 0 and µ = 0 cases under Assumption 3.
This has not been explored in the literature on Local SGD before, and hence we obtain the first results that apply to arbitrary convex and smooth finite-sum problems.
2. When the data on each node is different and Assumptions 1 and 3 hold with µ = 0, Local SGD achieves an ε-accurate solution in functional suboptimality provided that T = Ω(M³H⁴). This improves upon previous work by not requiring any restrictive assumptions on the gradients, and is the first analysis to capture true data heterogeneity between different nodes.
3. We verify our results by experimenting with logistic regression on multiple datasets, and investigate the effect of heterogeneity on the convergence speed.

Convergence Theory
The following quantity is crucial to the analysis of both variants of local SGD; it measures the deviation of the iterates from their average x̄_t over an epoch:

$$V_t := \frac{1}{M}\sum_{m=1}^{M} \left\| x_t^m - \bar{x}_t \right\|^2.$$

To prove our results, we follow the line of work started by Stich (2019): we first show a recurrence similar to that of SGD up to an error term proportional to V_t, and then bound each V_t term individually, or the sum of the V_t's over an epoch. All proofs are relegated to the supplementary material.

Identical Data
Our first lemma presents a bound on the sequence of the V_t in terms of the synchronization interval H.
Lemma 1. Choose a stepsize γ > 0 such that γ ≤ 1/(2L). Under Assumptions 1 and 2, for Algorithm 1 with max_p |t_p − t_{p+1}| ≤ H and identical data, we have for all t ≥ 1

$$\mathbb{E}[V_t] \le (H - 1)\,\gamma^2 \sigma^2.$$

Combining Lemma 1 with perturbed iterate analysis as in (Stich, 2019), we can recover the convergence of local SGD for strongly convex functions:
Theorem 1. Suppose that Assumptions 1 and 2 hold with µ > 0. Then for Algorithm 1 run with identical data, a constant stepsize γ > 0 such that γ ≤ 1/(4L), and H ≥ 1 such that max_p |t_p − t_{p+1}| ≤ H, the bound (2) holds.

By (2) we see that the convergence of local SGD is the same as that of Minibatch SGD plus an additive error term, which can be controlled through the size of H, as the next corollary and the subsequent discussion show.
Corollary 1. Choose γ = 1/(µa) with a = 4κ + t for t > 0, and take T = 2a log a steps. Substituting in (2), using 1 − x ≤ exp(−x), and some algebraic manipulation, we obtain the corresponding rate for r_t = x̄_t − x*, where Õ(·) ignores polylogarithmic and constant numerical factors. When H = 1 the error term vanishes and we directly obtain the ordinary rate of Minibatch SGD.

Linear speedup in the number of nodes M. Choosing H = O(T/M) leads to an asymptotic convergence rate of Õ(σ²κ/(µ²MT)), which shows the same linear speedup as Minibatch SGD but with a worse dependence on κ. The number of communications in this case is C(T) = T/H = Ω̃(M).

Local SGD vs. Minibatch SGD. We assume that the statistical σ²/T dependence dominates the dependence on the initial distance ‖x_0 − x*‖²/T². From Corollary 1, in order to achieve the same convergence guarantees as Minibatch SGD we must have H = O(T/(κM)), achieving a communication complexity of O(κM). This is only possible when T > κM. It follows that, given a number of steps T, the optimal H is H = 1 + ⌊T/(κM)⌋, achieving a communication complexity of Ω̃(min(T, κM)).

One-shot averaging. Putting H = T + 1 yields a convergence rate of Õ(σ²κ/(µ²T)), showing no linear speedup but showing convergence, which improves upon all previous work.
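The optimal-H arithmetic in the strongly convex discussion above is easy to sanity-check numerically; in the sketch below, the values of T, κ and M are arbitrary placeholders, not taken from the paper.

```python
# Illustrative check of H = 1 + floor(T / (kappa * M)) and the implied
# communication count C(T) = T / H; all numbers are placeholders.
def optimal_H(T: int, kappa: float, M: int) -> int:
    return 1 + int(T // (kappa * M))

T, kappa, M = 100_000, 50.0, 20
H = optimal_H(T, kappa, M)        # 101 local steps per communication round
print(H, T // H)                  # ~1000 rounds, i.e. about min(T, kappa * M)

H_oneshot = T + 1                 # one-shot averaging: a single round at the end
```

The one-shot extreme H = T + 1 collapses communication to a single round, at the Õ(σ²κ/(µ²T)) rate quoted above, with the caveat discussed next.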
However, we admit that simply using Jensen's inequality to bound the distance of the average iterate, E[‖x̄_T − x*‖²], would yield a better asymptotic convergence rate of Õ(σ²/(µ²T)). Under a Lipschitz Hessian assumption, Zhang et al. (2013) show that one-shot averaging can attain a linear speedup in the number of nodes, so one may analyze local SGD under this additional assumption to try to remove this gap; this, however, is beyond the scope of our work.

Similar results can be obtained for weakly convex functions, as the next theorem shows.
Theorem 2. Suppose that Assumptions 1 and 2 hold with µ = 0, that a constant stepsize γ with 0 ≤ γ ≤ 1/(4L) is chosen, and that Algorithm 1 is run for identical data with H ≥ 1 such that sup_p |t_p − t_{p+1}| ≤ H. Then for the averaged iterate x̂_T = (1/T) Σ_{t=0}^{T−1} x̄_t, the bound (3) holds.

Theorem 2 essentially tells the same story as Theorem 1: the convergence of local SGD is the same as that of Minibatch SGD, up to an additive term whose size can be controlled through H.
Corollary 2. Choosing γ proportional to 1/√T and substituting in (3) yields a rate whose leading statistical term decays as 1/√(MT).

Linear speedup and optimal H. From Corollary 2, if we choose H = O(√T M^{−3/2}) then we obtain a linear speedup, the number of communication steps is C = T/H = Ω(M^{3/2} T^{1/2}), and the optimal H is then of this order.

The previous results were obtained under Assumption 2. Unfortunately, this assumption does not easily capture the finite-sum minimization scenario, where f(x) = (1/n) Σ_{i=1}^n f_i(x) and each stochastic gradient g_t is sampled uniformly at random from the sum. Using smaller stepsizes and more involved proof techniques, we can show that our results still hold in the finite-sum setting. For strongly convex functions, the next theorem shows that the same convergence guarantee as in Theorem 1 can be attained.

Theorem 3. Suppose that Assumptions 1 and 3 hold with µ > 0. Suppose that Algorithm 1 is run for identical data with max_p |t_p − t_{p+1}| ≤ H for some H ≥ 1 and with a stepsize γ > 0 chosen sufficiently small (the precise condition is a minimum over terms depending on L, µ and H). Then the bound (4) holds at any timestep t at which synchronization occurs.

As a corollary, we can obtain an asymptotic convergence rate by choosing specific stepsizes γ and H.
Corollary 3. Let a = 18κt for some t > 0, let H ≤ t, and choose γ = 1/(µa) ≤ 1/(9LH). Substituting in (4) and taking T = 18a log a steps gives the corresponding bound for r_t := x̄_t − x*.

Substituting H = 1 + ⌊t/M⌋ = 1 + ⌊T/(18κM)⌋ in Corollary 3, we get the same asymptotic convergence rate as under Assumption 2, up to problem-independent constants and polylogarithmic factors, but with possibly fewer communication steps.

[Figure 1: left, the 'mushrooms' dataset; right, the 'w8a' dataset. We use uniform sampling of data points, so σ²_opt is the same as σ²_dif with M = 1, while for higher values of M the value of σ²_dif might be drastically larger than σ²_opt.]

Theorem 4. Suppose that Assumptions 1 and 3 hold with µ = 0, that a stepsize γ ≤ 1/(10LH) is chosen, and that Algorithm 1 is run on M ≥ 2 nodes with identical data and sup_p |t_p − t_{p+1}| ≤ H. Then the bound (5) holds at any timestep T at which synchronization occurs. Choosing γ proportional to 1/√T (so that γ ≤ 1/(10LH)) and plugging it into (5) yields the same result as Corollary 2, and hence a linear speedup in the number of nodes M.

Heterogeneous Data
We next show that for arbitrarily heterogeneous convex objectives, the convergence of Local SGD is the same as that of Minibatch SGD plus an error that depends on H.
Theorem 5. Suppose that Assumptions 1 and 3 hold with µ = 0 for heterogeneous data. Then for Algorithm 1 run with different data, M ≥ 2, max_p |t_p − t_{p+1}| ≤ H, and a suitably small stepsize γ > 0, a convergence bound holds whose properties we discuss next.

Dependence on σ_dif.
We see that the convergence guarantee given by Theorem 5 shows a dependence on σ_dif, which measures the heterogeneity of the data distribution. In typical (non-federated) distributed learning settings, where data is distributed before starting training, this term can vary quite significantly depending on how the data is distributed.

Dependence on H. We further note that the dependence on H in Theorem 5 is quadratic rather than linear. This translates to a worse upper bound on the synchronization interval H that still allows convergence, as the next corollary shows.

Optimal H. By Corollary 5 we see that the optimal value of H is H = 1 + ⌊T^{1/4} M^{−3/4}⌋, which gives an O(1/√(MT)) convergence rate. Thus, the same convergence rate is attained provided that communication is more frequent compared to the identical data regime.

Experiments
All experiments described below were run on a logistic regression problem with ℓ2 regularization of order 1/n. The datasets were taken from the LIBSVM library (Chang and Lin, 2011). The code was written in Python using MPI (Dalcin et al., 2011) and run on Intel(R) Xeon(R) Gold 6146 CPU @ 3.20 GHz cores in parallel.

Variance measures
We provide values of σ²_dif and σ²_opt in Figure 1 for different datasets, minibatch sizes and M. The datasets were split evenly, without any data reshuffling and with no overlaps. For any M > 1, the value of σ²_dif is lower bounded by (1/M) Σ_{m=1}^M ‖∇f_m(x*)‖², which explains the difference between identical and heterogeneous data.

Identical Data
For identical data we used M = 20 nodes and the 'a9a' dataset. We estimated L numerically and ran two experiments, with stepsizes 1/L and 0.05/L and a minibatch size of 1. In both cases we observe convergence to a neighborhood, although of a different radius. Since we ran the experiments on a single machine, communication is very cheap and there is little gain in the time required for convergence. However, the advantage in terms of required communication rounds is self-evident and can lead to significant time improvement under slow communication networks. The results are provided in Figure 2 and in the supplementary material in Figure 5.

Heterogeneous Data
Since our architecture leads to a very specific trade-off between computation and communication, we provide plots for the cases where the communication time relative to the gradient computation time is higher or lower. To see the impact of σ_dif, in all experiments we use full gradients ∇f_m and a constant stepsize 1/L. The data partitioning is not i.i.d. and is done based on the index in the original dataset. The results are provided in Figure 3 and in the supplementary material in Figure 6. In cases where communication is significantly more expensive than gradient computation, local methods are much faster for imprecise convergence.

We thank Li Yipeng for spotting multiple typos in the paper.

Notation
We use notation similar to that of Stich (2019) and denote the sequence of timestamps at which synchronization happens by (t_p)_{p=1}^∞. Given stochastic gradients g_t^1, g_t^2, …, g_t^M at time t ≥ 0, we define ḡ_t := (1/M) Σ_{m=1}^M g_t^m. We define an epoch to be the sequence of timesteps between two synchronizations: for p ∈ N, an epoch is the sequence t_p, t_p + 1, …, t_{p+1} − 1. We summarize some of the notation used in Table 3:
- ḡ_t: average of the stochastic gradients across nodes at time t (see Algorithm 1);
- x̄_t: the average of all local iterates at time t;
- r_t: the deviation of the average iterate from the optimum, x̄_t − x*, at time t;
- σ²: uniform bound on the variance of the stochastic gradients for identical data (see Assumption 2);
- σ²_opt: the variance of the stochastic gradients at the optimum for identical data (see Assumption 3);
- σ²_dif: the variance of the stochastic gradients at the optimum for heterogeneous data (see Assumption 3);
- t_1, t_2, …, t_p: timesteps at which synchronization happens in Algorithm 1;
- H: upper bound on the maximum number of local computations between synchronizations, i.e. max_p |t_p − t_{p+1}| ≤ H.

Throughout the proofs, we will use the variance decomposition that holds for any random vector X with finite second moment:

$$\mathbb{E}\|X\|^2 = \|\mathbb{E}X\|^2 + \mathbb{E}\|X - \mathbb{E}X\|^2. \tag{6}$$

In particular, its version for vectors x_1, …, x_M taking finitely many values gives

$$\frac{1}{M}\sum_{m=1}^{M}\|x_m\|^2 = \Bigl\|\frac{1}{M}\sum_{m=1}^{M} x_m\Bigr\|^2 + \frac{1}{M}\sum_{m=1}^{M}\Bigl\|x_m - \frac{1}{M}\sum_{j=1}^{M} x_j\Bigr\|^2.$$

As a consequence of (6), for convex f we have (1/M) Σ_m f(x_m) ≥ f((1/M) Σ_m x_m); as a special case with f(x) = ‖x‖², we obtain ‖(1/M) Σ_m x_m‖² ≤ (1/M) Σ_m ‖x_m‖².

We denote the Bregman divergence associated with a function f and arbitrary x, y as

$$D_f(x, y) := f(x) - f(y) - \langle \nabla f(y), x - y \rangle.$$

Proposition 2. If f is L-smooth and convex, then for any x and y it holds that ‖∇f(x) − ∇f(y)‖² ≤ 2L D_f(x, y). If f satisfies Assumption 1, then in addition D_f(x, y) ≥ (µ/2)‖x − y‖².

We will also use the following facts from linear algebra:

$$\|x + y\|^2 \le 2\|x\|^2 + 2\|y\|^2, \qquad 2\langle a, b\rangle \le \zeta\|a\|^2 + \zeta^{-1}\|b\|^2 \ \text{ for all } a, b \in \mathbb{R}^d \text{ and } \zeta > 0.$$

8 Proofs for Identical Data under Assumption 2
Proof of Lemma 1
Proof. Let t ∈ N be such that t_p ≤ t ≤ t_{p+1} − 1. Recall that for a time t with t_p ≤ t < t_{p+1} we have x_{t+1}^m = x_t^m − γ g_t^m and x̄_{t+1} = x̄_t − γ ḡ_t. We start from the expectation conditional on x_t^1, x_t^2, …, x_t^M. Averaging both sides and letting V_t = (1/M) Σ_m ‖x_t^m − x̄_t‖², we obtain (15). Expanding the square, and decomposing the first term in the last equality by expanding the square once more, we plug the result into (16). Averaging over m, and using that by definition (1/M) Σ_m ḡ_t^m = ḡ_t, yields (17). The first term in (17) is bounded via Assumption 2, giving (18); for the second term in (17), we again average over m, using that (1/M) Σ_m ḡ_t^m = ḡ_t by the linearity of expectation, bound ‖ḡ_t^m − ∇f(x̄_t)‖² by smoothness, and apply Jensen's inequality, giving (19). Plugging (19) and (18) into (17), and then (20) into (15), we obtain the claimed bound. By Jensen's inequality, D_f(x̂_T, x*) ≤ (1/T) Σ_{t=0}^{T−1} D_f(x̄_t, x*); using this in (24) and dividing both sides by γ/2 yields the theorem's claim.

9 Proofs for Identical Data under Assumption 3
Preliminary Lemmas
Lemma 4 (individual gradient variance bound). Assume that Assumption 3 holds with identical data. Then for all t ≥ 0 and m ∈ [M], the second moment of g_t^m is bounded in terms of D_f(x_t^m, x*) and σ_m², where σ_m² := E_{z_m∼D_m} ‖∇f(x*, z_m)‖² is the noise at the optimum on the m-th node.
Proof. Using that g_t^m = ∇f(x_t^m, z_m) for some z_m ∼ D_m,

$$\|g_t^m\|^2 \le 2\|\nabla f(x_t^m, z_m) - \nabla f(x^\ast, z_m)\|^2 + 2\|\nabla f(x^\ast, z_m)\|^2 \le 4L\bigl(f(x_t^m, z_m) - f(x^\ast, z_m) - \langle \nabla f(x^\ast, z_m), x_t^m - x^\ast \rangle\bigr) + 2\|\nabla f(x^\ast, z_m)\|^2,$$

where the first inequality uses ‖x + y‖² ≤ 2‖x‖² + 2‖y‖² and the second uses the smoothness and convexity of f(·, z_m).

Proof (of the iterate recursion). This is a modification of Lemma 3.1 in Stich (2019). Taking the expectation conditional on (x_t^m)_{m=1}^M and using Lemma 6 gives a bound on E‖x̄_{t+1} − x*‖²; we then take full expectations and use Lemma 4.

Lemma 10 (epoch iterate deviation bound). Suppose that Assumptions 1 and 3 hold with identical data, and that Algorithm 1 is run with stepsize γ > 0. Let p ∈ N be such that t_p is a synchronization point; then the sum of the deviations E[V_t] over the epoch is bounded as stated.
Proof. We start with Lemma 9:

$$\mathbb{E}[V_t] \le (1 - \gamma\mu)\,\mathbb{E}[V_{t-1}] + 2\gamma\,\mathbb{E}[D_f(\bar{x}_{t-1}, x^\ast)] + 2\gamma^2\sigma_{\mathrm{opt}}^2 = \alpha\,\mathbb{E}[V_{t-1}] + 2\gamma\,\mathbb{E}[D_f(\bar{x}_{t-1}, x^\ast)] + 2\gamma^2\sigma_{\mathrm{opt}}^2,$$

where α := 1 − γµ. Using (42) in (41), and then summing up (43) with weights α^{t−j}, we used that at t = t_p the sum Σ_{j=t_p}^{t−1} α^{t−j} E[D_f(x̄_j, x*)] is zero.
Then, by adding more Bregman divergence terms (which are positive) to the inner sum over t = t_p + 1, …, v, we obtain (45). Combining (45) and (44) and, finally, renaming the variable j gives the claim of this lemma.

Proof of Theorem 3
Proof. Let (t_p)_{p=1}^∞ index all the times t at which communication and averaging happen. Taking expectations in Lemma 7 and letting r_t = x̄_t − x*, we obtain a contraction with factor (1 − γµ). Let T = t_p − 1 for some p ∈ N; then expanding out E‖r_t‖² in (46) gives (48). It remains to bound the last term in (48). We decompose it over all the communication intervals, counting each index i once. Fix k ∈ N and let v_k = t_k − 1. Then Lemma 10 gives (50), where α = 1 − γµ. Using (50) in (49), where in the third line we used that our choice of γ guarantees 1 − 8γLH/(1 − γµ) ≥ 0, gives (51). Using (51) in (49) gives (52), and using (52) in (48) yields the claim of the theorem.

Proof of Theorem 4
Proof. Start from Lemma 7 with µ = 0; the conditional expectation then satisfies a descent inequality with an additional term −(γ/2) D_f(x̄_t, x*). Since by assumption T is a synchronization point, there is some k ∈ N such that T = t_k. To estimate the sum of deviations in (53), we use double counting to decompose it over each epoch, apply (54), and then use double counting again, arriving at

$$\sum_{t=0}^{T-1} \Bigl( 2\gamma (H-1)\, \mathbb{E}\bigl[D_f(\bar{x}_t, x^\ast)\bigr] + 2\gamma^2 \sigma_{\mathrm{opt}}^2 (H-1) \Bigr).$$

Dividing both sides by γ/10 and using Jensen's inequality yields the theorem's claim.

10 Proofs for Heterogeneous Data
Preliminary Lemmas
Lemma 11. Suppose that Assumptions 1 and 3 hold with µ ≥ 0 for heterogeneous data. Then, for the expectation conditional on x_t^1, x_t^2, …, x_t^M and for M ≥ 2, the stated bound holds. Finally, using Jensen's inequality and the convexity of f, we get the required claim.

11 Extra Experiments
Figure 5 shows experiments done with identical data and Figure 6 shows experiments done with heterogeneous data, in the same setting as described in the main text but with different datasets.

12 Discussion of Dieuleveut and Patel (2019)
An analysis of Local SGD for identical data under strong convexity, Lipschitzness of ∇f, uniformly bounded variance, and Lipschitzness of ∇²f is given in (Dieuleveut and Patel, 2019), where they obtain a communication complexity similar to that of (Stich, 2019) without bounded gradients. However, in the proof of their result for general non-quadratic functions (Proposition S20), they make an assumption which, rewritten in our notation, bounds a quantity G involving L_H, the Lipschitz constant of the Hessian of f (assumed thrice differentiable). Their discussion of G speculates on the behaviour of the iterate distances, e.g. saying that if they are bounded, then the guarantee is good. Unfortunately, assuming this quantity bounded implies that the gradients are bounded as well, making the improvement over (Stich, 2019) unclear to us. Furthermore, as G depends on the algorithm's convergence (it is the distance from the optimum evaluated at various points), assuming it is bounded in order to prove convergence to a compact set results in a possibly circular argument. Since G is also used as an upper bound on H in their analysis, it is not possible to calculate the communication complexity.

[Figure 6: Same experiment as in Figure 3, performed on the 'mushrooms' dataset. The legend compares time to a fixed accuracy for 1, 2, 4, 8, 16 and 32 local steps.]
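Since the variance measures σ²_dif and σ²_opt recur throughout the experiments above, here is a hedged sketch of how such quantities can be estimated for a finite-sum problem with uniform sampling. The names node_samples, grad_sample and x_star are illustrative, and x_star is assumed to be an (approximate) minimizer obtained separately.

```python
# Hedged sketch: estimating sigma^2_dif = (1/M) * sum_m E ||grad f_m(x*, z)||^2
# for a finite-sum problem with uniform sampling on each node.
import numpy as np

def sigma2_dif(node_samples, grad_sample, x_star):
    """node_samples: list of M sample arrays; grad_sample(x, z): per-sample gradient."""
    total = 0.0
    for samples in node_samples:
        sq = [np.linalg.norm(grad_sample(x_star, z)) ** 2 for z in samples]
        total += np.mean(sq)          # estimates E_{z ~ D_m} ||grad f_m(x*, z)||^2
    return total / len(node_samples)

# Toy check with f(x; z) = 0.5 * (x - z)^2: the global minimizer is the mean of all z.
node_samples = [np.array([-1.0, 1.0]), np.array([2.0, 4.0])]
print(sigma2_dif(node_samples, lambda x, z: x - z, x_star=1.5))  # 3.25
```

With M = 1 (all data on one node) the same computation returns σ²_opt under uniform sampling, matching the remark in the Figure 1 caption.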
Rapid Photodegradation of Methyl Orange (MO) Assisted with Cu(II) and Tartaric Acid
Cu(II) and organic carboxylic acids, existing extensively in soil and aquatic environments, can form complexes that may play an important role in the photodegradation of organic contaminants. In this paper, the catalytic role of Cu(II) in the removal of methyl orange (MO) in the presence of tartaric acid under light was investigated through batch experiments. The results demonstrate that the introduction of Cu(II) could markedly enhance the photodegradation of MO. In addition, high initial concentrations of Cu(II) and tartaric acid benefited the decomposition of MO. The most rapid removal of MO assisted by Cu(II) was achieved at pH 3. The formation of Cu(II)-tartaric acid complexes was assumed to be the key factor: under irradiation they generate hydroxyl radicals (•OH) and other oxidizing free radicals through a ligand-to-metal charge-transfer pathway, which is responsible for the efficient degradation of MO. Some intermediates in the reaction system were also detected to support this reaction mechanism.

Introduction
Advanced oxidation processes (AOPs), which have superseded biological procedures proven to be ineffective for the treatment of some contaminated effluents under certain conditions, have been successfully demonstrated as efficient methods for the degradation of organic pollutants [1−3]. In AOPs, hydroxyl radicals (•OH) and other oxidizing free radicals generated in the reaction system can effectively oxidize organic pollutants into carbon dioxide, water and inorganic acids. The Fenton process is an advanced oxidation process that is widely applied to treat a variety of organic pollutants owing to its high efficiency, simple operation and low cost [4,5]. Hydroxyl (•OH) radicals are produced as hydrogen peroxide (H2O2) is decomposed in the presence of ferrous ions, and UV-vis irradiation improves the efficiency of the process. Recently, alternative techniques, such as photocatalysis with novel iron sources and with complexes of Fe(III) and carboxylate anions, for the degradation of organic contaminants have also received considerable attention [6−11]. Zuo and Hoigne [6] noted that photolysis of Fe(III)-oxalato complexes could lead to the formation of hydrogen peroxide (H2O2), which could react with Fe(II) to further yield Fe(III) and a hydroxyl radical (•OH). Hydroxyl radicals can then non-selectively mineralize azo dyes to carbon dioxide and water owing to their high oxidation potential [12].

Cu(II) exists in natural environments and in some waste soils and waters from the electroplating and smelting industries. Like Fe(III), Cu(II) can form complexes with organic carboxylic acids and has a lower oxidation state, Cu(I). Thus, it is hypothesized that, in the presence of organic carboxylic acids, Cu(II) can also set up a photo-Fenton-like reaction with H2O2 produced in situ, generating Cu(I) and some active free radicals through a metal-ligand electron-transfer pathway under irradiation, just as Fe(III)/oxalate does. Garcia-Segura et al. [13] investigated the combination of Cu(II) and Fe(III) to improve the mineralization of phthalic acid by a solar photoelectro-Fenton (SPEF) process. They reported that Cu(II)-carboxylate complexes were easily removed by the •OH that resulted from the photo-Fenton-like reaction of Fe(III)-carboxylate species, accelerating the degradation of organic acids [13]. However, they did not consider Cu(II)-carboxylate complexes as a •OH source.
Our previous study on Cu(II)-carboxylate complexes mainly focused on the catalytic role of Cu(II) in the reduction of Cr(VI) by tartaric acid [14]. In this study, the photodegradation of methyl orange (MO) catalyzed by Cu(II) and tartaric acid was investigated at different initial pH values and different concentrations of Cu(II), MO and tartaric acid. MO was selected as the model organic pollutant in this paper because it is a typical azo dye. Azo dyes, which contribute ~70% of all dyes used in industries such as textiles, foodstuffs and leather, are of particular concern because they are known to be mutagenic and carcinogenic [7,9,10,15]. Cu(I) and •OH in the reaction system were also examined to reveal the potential degradation pathway of MO. The role of Cu(II) as a catalyst for the degradation of azo dyes with light in the presence of organic acids has not been reported before.

Materials
Methyl orange was obtained from Beijing Chemical Reagents Company (Beijing, China), and its stock solution (1000 mg/L) was prepared in deionized water. A Cu(II) stock solution (50 mmol/L) was prepared by dissolving CuSO4·5H2O(s) (analytical grade, Shanghai Zhenxing Chemical Reagent Factory, Shanghai, China) in deionized water. The stock solution of tartaric acid (analytical grade, Shanghai Chemical Reagent Co., Ltd, Shanghai, China), with a concentration of 50 mmol/L, was prepared in deionized water. 2,2'-Biquinoline, a characteristic reagent for Cu(I), was obtained from Sigma-Aldrich (Saint Louis, MO, USA). Tertiary butyl alcohol (TBA, Chengdu Kelong Chemical Reagent Factory, Chengdu, China) and L-histidine (L-H, Sinopharm Chemical Reagent Co., Ltd, Shanghai, China) were analytical grade and served as radical scavengers to determine the production of •OH and other oxidative free radicals in the reaction systems. The other chemicals used in this study were at least analytical grade and were used without further purification. All of the stock solutions were stored in a refrigerator at 4°C in the dark prior to use. All of the glassware used in the experiments was cleaned by soaking in 1 mol/L HCl for more than 12 h and thoroughly rinsed, first with tap water and then with deionized water.

Photochemical experiments
The photodegradation of MO was conducted in an XPA-7 photochemical reactor (Xujiang Electromechanical Plant, Nanjing, China) equipped with a magnetic stirrer, a temperature-control device, and light sources including 100, 300 and 500 W medium-pressure Hg lamps and a 500 W xenon lamp. The light source was positioned inside a cylindrical Pyrex vessel surrounded by a circulating Pyrex water jacket to cool the lamp. A schematic diagram of the photochemical reactor is given in our previous paper [16]. The light intensities at the position of the quartz tubes (reaction system) for the 100, 300 and 500 W medium-pressure Hg lamps and the 500 W xenon lamp were 12.7, 16.8 and 20.1 mW/cm² (measured by a UV-A irradiation meter, Beijing Normal University, China) and 26,500 lux (measured by an ST-80C illumination meter, Beijing Normal University, China), respectively. For typical photocatalytic reactions, the required amounts of the stock solutions of MO, Cu(II) and tartaric acid were introduced into a 50 mL quartz tube, and the mixed solution was diluted with deionized water. NaOH (0.1 mol/L) and H2SO4 (0.1 mol/L) were adopted to adjust the initial pH to the desired values (2, 3, 4, 5, 6, 7 and 8), and the final volume of the solution was adjusted to 40 mL.
Then, the reaction tubes with 40 mL of solution were placed into the photochemical reactor and stirred with a magnetic bar at 500 rpm during irradiation. The temperature was maintained by a thermostatic bath. Control experiments were also performed under the same conditions. All of the experiments in this section were performed in triplicate.

Analytical methods
At given irradiation time intervals, a 1 mL aliquot of sample was removed with a pipette and diluted to 10 mL with HAc-NaAc buffer (pH = 5). The MO concentration was immediately determined using a UV-vis spectrometer (Beijing Ruili Corp., UV-9100) at the characteristic λmax of 464 nm. Cu(I), an intermediate, was detected using 2,2'-biquinoline as the chromogenic agent and extracted with isoamyl alcohol [17]; the absorbance was measured at 545 nm (see Fig A in S1 Text for the absorption curve). A CyberScan pH2100 bench meter (Eutech Instruments) was used to measure the pH of the reaction solution after three-point calibration.

Catalytic role of Cu(II) in the photodegradation of MO in the presence of tartaric acid
The photodegradation of MO was conducted under different conditions. The results presented in Fig 1 show no noticeable change in the MO concentration in the single-component system of MO or the two-component system of MO and Cu(II) under UV irradiation with the full light of a 300 W medium-pressure Hg lamp for 120 min, indicating that direct UV irradiation was insufficient to decompose MO, even in the presence of Cu(II). A small increase in the MO degradation efficiency (~9%) was observed in the two-component system of MO and tartaric acid, which was attributed to possible oxidants (e.g., H2O2 and some free radicals) produced through the photolysis of tartaric acid. A similar result was reported by Guo et al. [10], who investigated the photodegradation mechanism and kinetics of MO catalyzed by Fe(III) and citric acid. Possible reactions resulting in the removal of MO are described in Eqs (1)-(5). However, 0.15 mmol/L MO was almost completely decolorized (the photodegradation efficiency reached 92%) within 120 min in the presence of 1 mmol/L Cu(II) and 10 mmol/L tartaric acid under UV irradiation, demonstrating that Cu(II) could significantly improve the photochemical degradation of MO in the presence of tartaric acid. The UV-vis spectra of MO versus time during the reaction are illustrated in Fig B in S1 Text, which clearly shows that the absorbance of the MO sample at the characteristic absorption peak (464 nm) rapidly decreased with increasing reaction time.

It has been reported that Fe(III) can strongly catalyze the degradation of MO by citric acid under weakly acidic conditions because of the formation of Fe(III)-citrate complexes of high photocatalytic activity, which produce hydroxyl radicals through a photo-Fenton-like reaction system [8−10]. Similar to Fe(III), Cu(II) reacts with tartaric acid to form a complex and also has a lower oxidation state, Cu(I). Under irradiation, the Cu(II)-tartaric acid complex produces Cu(I) and tartaric acid radicals through a metal-ligand electron-transfer pathway (Eqs 6−8):

Cu(II) + Tar → Cu^II(Tar)  (6)
Cu^II(Tar) + hν → Cu^I(Tar)•  (7)
Cu^I(Tar)• → Cu(I) + Tar•  (8)

To test this hypothesis, a full-wavelength scan experiment was performed, demonstrating the formation of Cu(II)-tartaric acid complexes in the reaction system (data not shown).
Furthermore, we introduced 2,2'-biquinoline into the reaction system, and a red complex in an isoamyl alcohol solution used as the extracting agent was observed. The absorbance at λmax = 545 nm was 0.093 (Fig A in S1 Text), which verified the generation of Cu(I) during the reaction [17]. Moreover, tertiary butyl alcohol (a •OH-specific radical scavenger) and L-histidine (a universal radical scavenger) [18] were introduced into the reaction systems. These scavengers were expected to determine the production of •OH and other oxidative free radicals in the reaction systems, so that the contributions of different radicals to the decomposition of MO could be discerned. It was noted from Fig 2 that the Cu(II)-catalyzed degradation of MO in the presence of tartaric acid was clearly suppressed by the introduction of excess tertiary butyl alcohol (~250 mmol/L), especially in the initial 65 minutes (almost no MO degradation), confirming that •OH was produced in the reaction system and contributed to MO decomposition to some degree. However, some MO (~64%) was still degraded in the system containing excess tertiary butyl alcohol after 65 min, indicating that there were likely some other types of active species responsible for the destruction of MO. L-histidine, which served as a scavenger of all of the oxidative free radicals in the reaction systems, was adopted in the Cu(II)-catalyzed degradation of MO in the presence of tartaric acid. The results (Fig 2) demonstrate that the photodegradation efficiency of MO decreased by 38% with the introduction of 1 mmol/L L-histidine within 120 min, compared with that in the absence of scavengers, and the photodegradation was almost completely inhibited when the concentration of L-histidine was increased to 5 or 10 mmol/L. These results indicate that the radical-driven degradation of MO was almost completely quenched after the introduction of enough L-histidine. Based on the results described above, it was concluded that, in addition to •OH, some other types of free radicals played an important role in MO decomposition. Moreover, these results corroborate the radical-based mechanism responsible for the destruction of MO under light in the presence of both Cu(II) and tartaric acid. The possible reactions, apart from Eq (5), involved in the photodegradation of MO in the MO/Cu(II)/tartaric acid system are summarized as follows. During the generation of these free radicals, Cu(I) and Cu^I(Tar)•, which were produced from the Cu(II)-tartaric acid complexes through the metal-ligand electron-transfer pathway (Eqs 6−8), were oxidized by the oxidizing free radicals and dissolved oxygen in the reaction system, accompanied by the regeneration of Cu(II) (Eqs 10 and 11). Cu(II)-tartaric acid complexes then formed again. Consequently, a cyclic process converting Cu(II) to Cu(I) and back was established in the reaction system. The main reactions discussed above are summarized in a possible mechanism scheme (Fig 3) for Cu(II)/Cu(I) cycling in the Cu(II)-tartaric acid system.

Effect of the initial concentrations of tartaric acid and Cu(II) on the photodegradation of MO
The effect of the initial concentrations of tartaric acid and Cu(II) on the photodegradation of MO under irradiation by a 300 W medium-pressure Hg lamp at pH 4 and 25°C was further investigated, and the results are depicted in Fig 4.
As shown in Fig 4A, increasing tartaric acid in the ternary MO/Cu(II)/tartaric acid system at a given Cu(II) concentration greatly enhanced the photodegradation efficiency of MO. Similar results were reported by Balmer and Sulzberger [19] in an iron oxalate system, where a higher oxalate concentration likewise led to greater degradation of atrazine. The enhancement of MO degradation with increasing tartaric acid concentration was due to the increased production of •OH radicals and other active free radicals, resulting from more of the photochemically active Cu(II)-tartaric acid complexes being formed, as described in Eqs 6−11. Similarly, in the presence of 10 mmol/L tartaric acid, the degradation of MO significantly increased as the initial concentration of Cu(II) rose from 0 to 1 mmol/L (Fig 4B). This result suggested that the formation of Cu(II)-tartaric acid complexes played a crucial role in generating •OH radicals to accelerate the degradation of MO. However, as the initial concentration of Cu(II) increased from 1 to 15 mmol/L, there was no apparent enhancement in the MO degradation efficiency by the end of the reaction (120 min), although the removal rate of MO in the beginning stage of the reaction did increase. This suggested that 1 mmol/L Cu(II) may be enough to form sufficient Cu(II)-tartaric acid complexes with 10 mmol/L tartaric acid to catalyze the photodegradation of 0.15 mmol/L MO.

Effect of pH on the photodegradation of MO
The effect of initial pH on the photodegradation of MO was examined in the ternary system of MO, Cu(II) and tartaric acid. It was noted that pH played an important role in the degradation of MO, and the most efficient degradation of MO was realized at pH 3; in this case, MO was almost completely degraded within 70 min. Except for pH 3 and 4, the degradation rates of MO at pH 2 and 5−8 were similar and low, and less than 20% of the initial MO was removed within 120 min.

The dependence of the MO photodegradation on pH was considered to result mainly from the following factors. First, high pH (pH 5−8) gave rise to increased Cu(II) hydrolysis, which had a negative effect on the formation of Cu(II)-tartaric acid complexes and did not benefit the photodegradation of MO. Second, low pH was supposed to benefit •OH generation (Eqs 2 and 11) and thus aid MO photodegradation. However, this explanation did not hold at pH 2, possibly owing to the speciation of tartaric acid and the quinoid structure of MO. Li et al. [18] reported that tartaric acid exists mainly as a molecular species at low pH, resulting in fewer Cu(II)-tartaric acid complexes. On the other hand, the molecular species of tartaric acid is prone to competing with MO for •OH and itself degrades under UV light [20], which inhibits the photodegradation of MO. In addition, it is known that the structure of MO is quinoid at low pH (pKa = 3.4), and the quinoid structure is more stable than the azo form [21]. Therefore, the degradation of MO was blocked at pH 2.

Effect of light intensity on the photodegradation of MO
The photodegradation of MO under simulated solar light from a 500 W xenon lamp and under full ultraviolet light from 100 to 500 W medium-pressure Hg lamps, with initial concentrations of 0.15 mmol/L MO, 1 mmol/L Cu(II) and 10 mmol/L tartaric acid, was investigated at pH 4 and 25°C.
The results in Fig 6 show that MO degradation under irradiation with simulated solar light was negligible, indicating that solar light could not efficiently activate Cu(II)-tartaric acid complexes or tartaric acid to generate free radicals under these experimental conditions. However, intensive ultraviolet irradiation significantly enhanced the MO degradation rate. Under irradiation by the 100, 300 and 500 W medium-pressure Hg lamps, the MO degradation efficiency was 20% in 120 min, 92% in 120 min, and ~94% in 35 min, respectively. It was obvious that the photodegradation of MO strongly depended on light intensity in this system.

3.5 Improvement of the catalysis of Cu(II) and tartaric acid
It was noted that the degradation rate was slow at the initial stage of the reaction and then accelerated after some reaction time in the ternary MO/Cu(II)/tartaric acid system. It was assumed that the formation of Cu(II)-tartaric acid complexes required time. Thus, a series of experiments was carried out to test this assumption. Cu(II) and tartaric acid were mixed for 30 min or 180 min with a magnetic stirrer in the photochemical reactor in the dark, and then MO was introduced into the reaction system to begin the experiment (the following steps were performed as in the routine procedure). The results of MO degradation with different pre-treatments of Cu(II) and tartaric acid are shown in Fig 7. In comparison with the routine ternary MO/Cu(II)/tartaric acid system without pre-treatment, when Cu(II) and tartaric acid were mixed in advance, the slow initial degradation stage was shortened after 30 min of pre-treatment and almost disappeared after 180 min of pre-treatment. These results again illustrate that the formation of Cu(II)-tartaric acid complexes was the crucial step in the reaction system, and that the photodegradation of MO could be accelerated by mixing Cu(II) and tartaric acid in advance.

Conclusions
Cu(II) markedly catalyzed the photodegradation of methyl orange in the presence of tartaric acid under weakly acidic conditions. The degradation rate and efficiency improved with increasing initial concentrations of Cu(II) and tartaric acid. The optimal degradation of methyl orange catalyzed by Cu(II) was achieved at pH 3. The formation of Cu(II)-tartaric acid complexes in the reaction system was the crucial step, from which the strong oxidant •OH and other oxidizing free radicals were generated under irradiation by a medium-pressure Hg lamp, accompanied by the cyclic conversion of Cu(II) to Cu(I). It can be inferred from this study that in natural environments, or in contaminated effluents exposed to sunlight (which includes ~5% ultraviolet light), the transformation of Cu(II) to Cu(I) occurs when Cu(II) and organic carboxylic acids coexist, accompanied by the degradation of organic pollutants.

Supporting Information
S1 Text. Supporting information for "Rapid photodegradation of methyl orange (MO) assisted by Cu(II) and tartaric acid". (DOCX)
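The degradation curves above are reported as removal efficiencies at fixed times. Photocatalytic decolorization of azo dyes is often summarized with a pseudo-first-order fit, ln(C0/Ct) = k·t; the sketch below shows such a fit, with the concentration-time pairs being illustrative placeholders rather than data measured in this study.

```python
# Hedged sketch: pseudo-first-order fit of MO decay, ln(C0 / C_t) = k * t.
# The (t, C) pairs below are illustrative placeholders, not measured data.
import numpy as np

t = np.array([0, 20, 40, 60, 80, 100, 120])                   # irradiation time, min
C = np.array([0.15, 0.11, 0.08, 0.055, 0.04, 0.025, 0.018])   # MO concentration, mmol/L

y = np.log(C[0] / C)                      # linearized decay: ln(C0 / C_t)
k, intercept = np.polyfit(t, y, 1)        # slope k = apparent rate constant, 1/min
half_life = np.log(2) / k                 # time for 50% decolorization
print(f"k = {k:.4f} 1/min, t_1/2 = {half_life:.1f} min")
```

A fit of this kind makes the effects of pH, light intensity or pre-mixing directly comparable through a single rate constant instead of endpoint efficiencies.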
Safety evaluation of a second extension of use of the food enzyme α-amylase from the non-genetically modified Cellulosimicrobium funkei strain AE-AMT

Abstract
The food enzyme α-amylase (4-α-D-glucan glucanohydrolase; EC 3.2.1.1) is produced with the non-genetically modified Cellulosimicrobium funkei strain AE-AMT by Amano Enzyme Inc. A safety evaluation of this food enzyme was made previously, in which EFSA concluded that the food enzyme did not give rise to safety concerns when used in seven food manufacturing processes. Subsequently, the applicant has requested to extend its use to include three additional processes. In this assessment, EFSA updated the safety evaluation of this food enzyme when used in a total of ten food manufacturing processes. As the food enzyme-total organic solids (TOS) are removed from the final foods in one food manufacturing process, the dietary exposure to the food enzyme-TOS was estimated only for the remaining nine processes. The dietary exposure was calculated to be up to 0.049 mg TOS/kg body weight (bw) per day in European populations. When combined with the no observed adverse effect level previously reported (230 mg TOS/kg bw per day, the highest dose tested), the Panel derived a margin of exposure of at least 4694. Based on the data provided for the previous evaluation and the revised margin of exposure in the present evaluation, the Panel concluded that this food enzyme does not give rise to safety concerns under the intended conditions of use.

'Food enzyme' means a product obtained from plants, animals or microorganisms or products thereof, including a product obtained by a fermentation process using microorganisms: (i) containing one or more enzymes capable of catalysing a specific biochemical reaction; and (ii) added to food for a technological purpose at any stage of the manufacturing, processing, preparation, treatment, packaging, transport or storage of foods. 'Food enzyme preparation' means a formulation consisting of one or more food enzymes in which substances such as food additives and/or other food ingredients are incorporated to facilitate their storage, sale, standardisation, dilution or dissolution.

Before January 2009, food enzymes other than those used as food additives were not regulated or were regulated as processing aids under the legislation of the Member States. On 20 January 2009, Regulation (EC) No 1332/2008 on food enzymes came into force. This Regulation applies to enzymes that are added to food to perform a technological function in the manufacture, processing, preparation, treatment, packaging, transport or storage of such food, including enzymes used as processing aids. Regulation (EC) No 1331/2008² established the European Union (EU) procedures for the safety assessment and the authorisation procedure of food additives, food enzymes and food flavourings. The use of a food enzyme shall be authorised only if it is demonstrated that:
• it does not pose a safety concern to the health of the consumer at the level of use proposed;
• there is a reasonable technological need;
• its use does not mislead the consumer.

All food enzymes currently on the European Union market and intended to remain on that market, as well as all new food enzymes, shall be subjected to a safety evaluation by the European Food Safety Authority (EFSA) and approval via an EU Community list.
| Background as provided by the European Commission
Only food enzymes included in the Union list may be placed on the market as such and used in foods, in accordance with the specifications and conditions of use provided for in Article 7(2) of Regulation (EC) No 1332/2008 on food enzymes. α-Amylase from a non-genetically modified strain of Cellulosimicrobium funkei (strain AE-AMT) is a food enzyme included in the Register of food enzymes³ to be considered for inclusion in the Union list and is thus subject to a risk assessment by the European Food Safety Authority (EFSA). On 31 August 2023, a new application was introduced by the applicant Amano Enzyme Inc. for an extension of the conditions of use of the food enzyme α-amylase from a non-genetically modified strain of Cellulosimicrobium funkei (strain AE-AMT).

| Terms of Reference
The European Commission requests the European Food Safety Authority to carry out the safety assessment and the assessment of possible confidentiality requests of an extension of the conditions of use for the following food enzyme: α-amylase from a non-genetically modified strain of Cellulosimicrobium funkei (strain AE-AMT), in accordance with Regulation (EC) No 1331/2008 establishing a common authorisation procedure for food additives, food enzymes and food flavourings.

| Data
The applicant has submitted a dossier in support of the application for the authorisation of the extension of use of the food enzyme α-amylase from a non-genetically modified Cellulosimicrobium funkei strain AE-AMT.

| Methodologies
The assessment was conducted in line with the principles described in the EFSA 'Guidance on transparency in the scientific aspects of risk assessment' (EFSA, 2009) and following the relevant existing guidance documents of the EFSA Scientific Committee. The 'Scientific Guidance for the submission of dossiers on food enzymes' (EFSA CEP Panel, 2021) and the 'Food manufacturing processes and technical data used in the exposure assessment of food enzymes' (EFSA CEP Panel, 2023a) have been followed for the evaluation of the application.

| Public consultation
According to Article 32c(2) of Regulation (EC) No 178/2002⁵ and to the Decision of EFSA's Executive Director laying down the practical arrangements on the pre-submission phase and public consultations, EFSA carried out a public consultation on the non-confidential version of the technical dossier from 26 January to 16 February 2024.⁶ No comments were received.

α-Amylases catalyse the hydrolysis of 1,4-α-glucosidic linkages in starch (amylose and amylopectin), glycogen and related polysaccharides and oligosaccharides, resulting in the generation of soluble dextrins and other malto-oligosaccharides.

| ASSESSMENT
All aspects concerning the safety of this food enzyme regarding its source, production, characteristics and toxicology were evaluated in July 2022 (EFSA CEP Panel, 2022), then updated when its use was extended to include six additional food manufacturing processes in June 2023 (EFSA CEP Panel, 2023b). Following a request to update the intended uses (adding three additional food manufacturing processes), EFSA revises the exposure assessment and updates the safety evaluation of this food enzyme when used in ten food manufacturing processes.

| Dietary exposure
The current dietary exposure assessment supersedes Section 3.5 in the previous evaluation (EFSA CEP Panel, 2023b).

⁵ Regulation (EC) No 178/2002 of the European Parliament and of the Council of 28 January 2002 laying down the general principles and requirements of food law, establishing the European Food Safety Authority and laying down procedures in matters of food safety. OJ L 31, 1.2.2002, pp. 1-24.
⁶ https://connect.efsa.europa.eu/RM/s/consultations/publicconsultation2/a0lTk0000006HSn/pc0801
| Revised intended use of the food enzyme
The food enzyme is intended to be used in ten food manufacturing processes at the use levels summarised in Table 1.⁷

[Table 1: Updated intended uses and use levels of the food enzyme, listing the ten food manufacturing processes (including the production of plant-based analogues of milk and milk products from cereals, legumes, oilseeds, nuts, etc.) together with the maximum recommended use levels per raw material. Table notes: (a) the process names have been harmonised by EFSA according to the 'Food manufacturing processes and technical data used in the exposure assessment of food enzymes' (EFSA CEP Panel, 2023a); (b) the numbers in bold were used for calculation; (c) the previous evaluation was made for the food enzyme application EFSA-Q-2022-00532.]

The three additional uses of the food enzyme are described below. In the production of juices, the food enzyme is added during the mash treatment and the depectinisation steps.⁸ The α-amylase degrades starch in the pressed juices, improving the filtration rate, preventing haze and increasing sweetness. The food enzyme-TOS remain in the juices.

In the production of fruit and vegetable products other than juices, the food enzyme is added during the mash treatment and prior to straining.⁹ The hydrolysis of starch reduces viscosity, and the hydrolysates have higher solubility and sweetness in the final products (e.g. jam, puree, paste and sauce). The food enzyme-TOS remain in the final products.

In the production of a coffee substitute, the food enzyme is added to milled barley prior to extraction.¹⁰ The α-amylase hydrolyses the starch, which increases sweetness and decreases viscosity.¹¹ The food enzyme-TOS remain in the final coffee substitutes.

Based on the thermostability evaluated previously (EFSA CEP Panel, 2023b) and the thermal treatments applied during food processing, it is expected that the enzyme is inactivated in the food manufacturing processes listed in Table 1, with the exception of juices, in which it may remain active depending on the pasteurisation conditions.

| Dietary exposure estimation
In accordance with the guidance document (EFSA CEP Panel, 2021), dietary exposure was calculated for the nine food manufacturing processes in which the food enzyme-TOS remain in the final foods. Chronic exposure to the food enzyme-TOS was calculated by combining the maximum recommended use level with individual consumption data (EFSA CEP Panel, 2021). The estimation involved the selection of relevant food categories and the application of technical conversion factors (EFSA CEP Panel, 2023a). Exposure from all FoodEx categories was subsequently summed up, averaged over the total survey period (days) and normalised for body weight. This was done for all individuals across all surveys, resulting in distributions of individual average exposure. Based on these distributions, the mean and 95th percentile exposures were calculated per survey for the total population and per age class. Surveys with only one day per subject were excluded, and high-level exposure/intake was calculated only for those population groups in which the sample size was sufficiently large to allow calculation of the 95th percentile (EFSA, 2011).

Table 2 provides an overview of the derived exposure estimates across all surveys. Detailed mean and 95th percentile exposures to the food enzyme-TOS per age class, country and survey, as well as the contribution of each FoodEx category to the total dietary exposure, are reported in Appendix A (Tables 1 and 2). For the present assessment, food consumption data were available from 48 dietary surveys (covering infants, toddlers, children, adolescents, adults and the elderly).

⁷ Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/Proposed conditions of use/p. 6.
⁸ Technical dossier/Intended use(s) in food and use level(s)/Annex 'Flow chart of each application'/p. 9.
⁹ Technical dossier/Intended use(s) in food and use level(s)/Annex 'Flow chart of each application'/p. 9.
¹⁰ Technical dossier/Intended use(s) in food and use level(s)/Annex 'Flow chart of each application'/p. 10.
¹¹ Technical dossier/Intended use(s) in food and use level(s)/4-11 Proposed conditions of use/p. 5.
For the present assessment, food consumption data were available from 48 dietary surveys (covering infants, toddlers, children, adolescents, adults and the elderly), carried out 8 Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/Annex 'Flow chart of each application'/p.9. 9 Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/Annex 'Flow chart of each application'/p.9.The numbers in bold were used for calculation. c The previous evaluation is made for the food enzyme application EFSA-Q-2022-00532. 7 Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/Proposed conditions of use/p.6. in 26 European countries (Appendix B).The highest dietary exposure was estimated to be 0.049 mg TOS/kg bw per day in toddlers and children at the 95th percentile. | Uncertainty analysis In accordance with the guidance provided in the EFSA opinion related to uncertainties in dietary exposure assessment (EFSA, 2006), the following sources of uncertainties have been considered and are summarised in Table 3. The conservative approach applied to estimate the exposure to the food enzyme-TOS, in particular assumptions made on the occurrence and use levels of this specific food enzyme, is likely to have led to an overestimation of the exposure. The exclusion of one food manufacturing process from the exposure estimation was based on > 99% of TOS removal.This is not expected to impact the overall estimate derived. | Margin of exposure In the previous evaluation, the Panel identified a no observed adverse effect level (NOAEL) of 230 mg TOS/kg body weight (bw) per day, the highest dose tested, resulting in a margin of exposure of at least 19,167 (EFSA CEP Panel, 2023b). A comparison of the NOAEL with the newly derived exposure estimates of 0.002-0.035mg TOS/kg bw per day at the mean and from 0.003 to 0.049 at the 95th percentile resulted in a revised margin of exposure of at least 4694. Food•a 10 Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/Annex 'Flow chart of each application'/p.10. 11 Technical dossier/Intended use(s) in food and use level(s) (Proposed normal and maximum use levels)/4-11 Proposed conditions of use/p.5.T A B L E 1 Updated intended uses and use levels of the food enzyme. 7Production of plant-based analogues of milk and milk products Cereals, legumes, oilseeds, nuts etc.The name has been harmonised by EFSA according to the 'Food manufacturing processes and technical data used in the exposure assessment of food enzymes' (EFSA CEP Panel, 2023a).b Abbreviations: +, uncertainty with potential to cause overestimation of exposure; -, uncertainty with potential to cause underestimation of exposure.
Model for the static friction coefficient in a full stick elastic-plastic coated spherical contact
Finite element analysis is used to investigate an elastic-plastic coated spherical contact under the full stick contact condition and combined normal and tangential loading. Sliding inception is associated with a loss of tangential stiffness. The effect of coating thickness on the static friction coefficient is intensively investigated for the case of hard coatings. For this case, with increasing coating thickness, the static friction coefficient first increases to its maximum value at a certain coating thickness, thereafter decreases, and eventually levels off. The effect of the normal load and material properties on this behavior is discussed. Finally, a model for the static friction coefficient as a function of the coating thickness is provided for a wide range of material properties and normal loading.

Introduction
Friction is ubiquitous in our lives. On the one hand, it can be desirable in some cases, such as braking systems [1] and power transmission systems [2,3]. On the other hand, it may not be favored, as high friction can lead to severe material wear [4] and undesirable energy consumption [5]. Thus, proper control of friction is an essential goal for the surface engineering community. Coating technology has been widely used in industry and has proven to be one of the most effective surface treatments for controlling the surface friction property [6−8]. However, the selection of some important parameters in coating applications, such as coating thickness and coating material, still mainly relies on empiricism [9]. A generic scientific theory is thus required, so that surface properties can be tailored precisely with less trial and error.

In numerous theoretical studies on surface contact models, a nominally flat surface is envisaged as a cluster of asperities whose heights have some statistical distribution (e.g., the classic GW model [10]). The contact between surfaces is determined by statistically incorporating the individual asperity contact behavior (a numerical sketch of this averaging idea is given below). If we assume that the coating inherits the topography of the substrate and the asperities on the substrate surface have a spherical shape, the contact between coated surfaces is localized to many individual coated spherical tips. With this assumption, understanding the behavior of a single coated sphere under combined normal and tangential loading is the first step toward establishing a theory for the friction properties of a coated surface.

The behavior of a coated sphere under pure normal loading has been intensively investigated in both the elastic [11−18] and elastic-plastic [19−25] regimes. Keer et al. [11] obtained the stress distribution at the contact interface of two identical coated spheres; for the same contact radius, a stiffer coating produces higher interfacial normal stress. Garjonis et al. [12] determined an equivalent modulus of elasticity for identical contacting coated spheres that is load dependent. However, in practice it may be inconvenient to use such an effective modulus owing to its load dependency. This inconvenience was resolved by Goltsberg and Etsion [13], who derived a load-independent effective elasticity modulus based on their extensive numerical results. They also introduced a special normalization of the contact parameters that enabled universal dimensionless relations for load-displacement [13] and displacement-contact area [14] in the elastic regime.
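As a hedged illustration of the asperity-averaging idea mentioned above (not a model developed in this paper), the sketch below integrates a Hertzian per-asperity load over a Gaussian height distribution, the basic structure of GW-type rough-surface models; all parameter values are arbitrary placeholders.

```python
# GW-style statistical summation sketch: expected contact load per unit nominal
# area = eta * integral over asperity heights of the per-asperity Hertz load.
# All numbers are placeholders; this is not the model developed in the paper.
import numpy as np

def gw_total_load(d, eta, R, E_star, sigma_s, n=20000):
    """d: surface separation; eta: areal asperity density; R: asperity tip radius;
    E_star: effective elastic modulus; sigma_s: std of asperity heights."""
    z = np.linspace(d, d + 6 * sigma_s, n)                  # only asperities with z > d touch
    phi = np.exp(-z**2 / (2 * sigma_s**2)) / (np.sqrt(2 * np.pi) * sigma_s)
    hertz_load = (4.0 / 3.0) * E_star * np.sqrt(R) * (z - d) ** 1.5  # elastic Hertz contact
    return eta * np.trapz(hertz_load * phi, z)              # load per unit nominal area

print(gw_total_load(d=1e-6, eta=1e10, R=1e-5, E_star=1e11, sigma_s=1e-6))
```

Replacing the Hertzian per-asperity law with a coated-sphere contact law is precisely where single-asperity results of the kind reviewed here would enter such a model.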
Another important aspect of the elastic deformation in a coated sphere owing to normal loading is its elasticity terminus. Goltsberg et al. [15] showed various possible locations of yield inception, which can be in the coating, in the substrate, or at their interface depending on the coating thickness. Chen et al. [16] extended this analysis by presenting the location of yield inception in a yield map as a function of both coating thickness and material properties of the coating and substrate. In addition, Goltsberg and Etsion [17] pointed out a weakening effect associated with very thin hard coatings, where the resistance of the coated sphere to plastic yield is even lower than that of a homogeneous sphere made of the weaker substrate material. This weakening effect was experimentally validated by Huang et al. [18]. Compared with the elastic deformation in a coated sphere, the elastic-plastic deformation is inevitable and may be beneficial in practice; for instance, it can generate sufficient contact area to establish a reliable electrical contact [19−22]. Both numerical [19,20] and experimental studies [21,22] were conducted to obtain the load-displacement and load-contact area relationships. However, these results were obtained only for a few specific material properties and geometric dimensions. To overcome this limitation, a wide range of material properties and coating thicknesses were considered in [23−25]. Chen et al. [23] showed the plasticity evolution in a coated sphere loaded by a rigid flat under the slip contact condition, where the coating thickness was selected to avoid the weakening effect [17]. After the first yield inception, a second yield inception occurred on the substrate side of the interface. Subsequently, they normalized the contact parameters by critical values at the second yield inception and obtained universal dimensionless relations for load-displacement and contact area-displacement in the elastic-plastic regime [24]. Ronen et al. [25] compared the plasticity evolution and contact parameters under slip and stick conditions and observed that the effect of the contact condition was negligible except for a phenomenon occurring close to the contact area. Despite the considerable number of studies on the behavior of a coated sphere under pure normal loading, studies concerning combined normal and tangential loading are relatively scant and are limited to the elastic regime. Keer et al. [11] presented the shear stress distribution at the contact interface of two identical coated spheres elastically deformed under the combined loading. They assumed that an increase in tangential loading would trigger the occurrence of local slip at the originally bonded interface once the Coulomb friction law is satisfied. Such a local slip is initiated from the contact edge, and thereafter, it progresses radially inward until a global interfacial sliding. Based on this assumption, they further observed that the friction force is independent of coating thickness. The studies concerning combined normal and tangential loading in the elastic-plastic regime were mostly conducted for a homogeneous sphere. To avoid the empiricism-based Coulomb friction law, many researchers sought to associate the friction property with various material properties such as yield strength and toughness [26−35]. Partial slip was allowed at the interface in Refs. [26−31], whereas the full stick condition was maintained during tangential loading in Refs. [32−35].
Although allowing partial slip is realistic under low to moderate normal loading, it is still difficult to achieve a satisfying law governing partial slip owing to the complex interfacial bonding contributed by multiple intertwined physical and chemical sources [31]. The much-simplified extreme case of the full stick condition is an appropriate description of interfacial bonding [36]. Under high normal loading, the full stick condition was experimentally demonstrated to be maintained throughout the loading process [37]. Under low to moderate normal loading, the full stick condition also correlates well with the slip occurrence at the contact interface, as the plastic volume resulting from the combined loading is confined to close proximity below this interface, indicating that the slip occurs at the interface or very close to it. Thus, the simpler full stick condition [32−35] will be adopted in the present study. Brizmer et al. [32] observed that an increase in the tangential load leads to a decrease in the tangential stiffness. They assumed that this stiffness finally disappears at sliding inception. By defining the tangential force at sliding inception as the static friction force, they obtained the static friction coefficient as a decreasing function of normal loading. Additionally, Zolotarevskiy et al. [33] showed the evolution of the tangential force from the initiation of tangential loading to sliding inception. In both Refs. [32] and [33], the material is bilinear elastic-plastic with a 2% tangent modulus of isotropic linear hardening. Bhagwat et al. [34] reported that a larger tangent modulus results in a higher friction coefficient. Zhao et al. [35] performed a similar study but on materials with power-law hardening, as it is a more realistic description of the behavior of many materials. From the above literature review, it can be observed that the studies on coated spherical contact have so far been conducted with pure normal loading. Studies on combined normal and tangential loading are limited to the elastic regime only and rely on the Coulomb friction law (e.g., [11]). Therefore, the aim of the present study is to investigate the elastic-plastic coated spherical contact under combined normal and tangential loading with the full stick contact condition and a power-law hardening.

Static friction coefficient of a homogeneous spherical contact [32]

Figure 1 schematically shows a deformable coated sphere in contact with a rigid flat under combined normal and tangential loading. Notably, indentation is often used for characterizing the mechanical properties of coatings but may be detrimental in sliding systems. This is because the indenting asperities of a rough surface may result in severe abrasive friction and wear owing to plowing. Hence, for a good tribological design, indentation of asperities should be avoided and flattening of asperities, associated with mild adhesive friction and wear, should be attempted. Loading of the coated sphere is achieved by first applying a normal load P, and subsequently a tangential displacement u_x. The special case where the coating and substrate materials are identical is a homogeneous spherical contact. This contact problem under the full stick condition has been intensively investigated in Ref. [32]. Upon the completion of normal loading, a contact area is established at the contact interface. Subsequently, a tangential displacement (u_x)_i, increased in a stepwise manner, is applied, where i is the number of consecutive tangential displacement steps.
The corresponding tangential force (Q)_i can be obtained as the x component of the reaction force at the sphere bottom. Accordingly, the tangential stiffness of the junction (K_T)_i can be approximated as

(K_T)_i = [(Q)_i − (Q)_{i−1}] / [(u_x)_i − (u_x)_{i−1}]   (1)

It decreases with the tangential loading owing to the accumulated plasticity in the junction. It was assumed in Ref. [32] that, once (K_T)_i becomes equal to or less than 10% of the initial tangential stiffness (K_T)_1, i.e., (K_T)_i ≤ 0.1(K_T)_1, the junction cannot support significant additional tangential force and sliding inception occurs. Hence, sliding inception is treated as the junction plastic failure. The corresponding tangential force is the maximum tangential force Q_max that can be supported by the junction and is defined as the static friction force, yielding the static friction coefficient

μ = Q_max/P = 0.27 coth[0.27 (P/L_c)^0.46]   (2)

where L_c is the critical normal load at yield inception of a homogeneous sphere under the stick condition in the form

L_c = [8.88ν − 10.13(ν^2 + 0.089)] (π^3 C_v^3 / 6) Y R^2 [(1 − ν^2) Y/E]^2   (3)

where ν, E, and Y are the Poisson's ratio, Young's modulus, and yield strength of the material, respectively, and C_v = 1.234 + 1.256ν. From Eq. (2), the static friction coefficient μ decreases with the increase in normal load P. This is physically reasonable, as a higher normal load P results in a more plastic and compliant junction that can support a lower dimensionless tangential load Q_max/P [32].
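As an illustration only, the homogeneous-sphere relations of Eqs. (2) and (3) as reconstructed above can be evaluated numerically. The sketch below is not the authors' code; the function names and the example inputs are assumptions.

import math

def critical_load_stick(E, Y, nu, R):
    # Critical normal load at yield inception under full stick, per Eq. (3).
    Cv = 1.234 + 1.256 * nu
    L_slip = (math.pi ** 3 * Cv ** 3 / 6.0) * Y * R ** 2 * ((1 - nu ** 2) * Y / E) ** 2
    return (8.88 * nu - 10.13 * (nu ** 2 + 0.089)) * L_slip  # stick correction

def mu_homogeneous(P, E, Y, nu, R):
    # Static friction coefficient of a homogeneous sphere, per Eq. (2);
    # coth(x) = 1/tanh(x), so mu decreases toward 0.27 at high loads.
    Lc = critical_load_stick(E, Y, nu, R)
    return 0.27 / math.tanh(0.27 * (P / Lc) ** 0.46)

# Example (assumed values): E = 200 GPa, E/Y = 1000, nu = 0.3, R = 10 mm, P = 100 N.
print(mu_homogeneous(100.0, 200e9, 200e6, 0.3, 0.01))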
Finite element model

A coated spherical contact under combined normal and tangential loading is schematically presented in Fig. 1, where the coated sphere is composed of a substrate of radius R and a coating of thickness t. To solve this complex contact problem, the finite element method, implemented using the commercial software ANSYS 18.1, was used. Owing to the symmetry, the contact problem can be simulated by only a quarter of the coated sphere in contact with a rigid flat, as presented in Fig. 2. The thick solid curve inside the coated sphere indicates the coating/substrate interface. The 3D mesh was generated by rotating the 2D mesh (i.e., the meshed quarter of a circle on the x-z plane) by 180° about the z-axis with 10 volumes [38]. The 2D mesh was divided into four different but fixed mesh density zones, where Zones I, II, and III were within t+0.01R, t+0.025R, and t+0.1R, respectively, from the sphere tip and Zone IV is outside the distance t+0.1R. The four zones had a gradually coarser mesh with the increase in their distance from the sphere tip. In general, the element size in Zones I to IV was 0.001R, 0.0025R, 0.01R, and 0.1R, respectively, whereas for extremely thin coatings, the element size in Zones I and II was adjusted to ensure at least 10 elements in the coating along the coating thickness. Consequently, the 3D mesh of the coated sphere contained 6,842 to 14,560 20-node brick-shaped elements (SOLID 186) depending on the coating thickness. 3D 8-node contact elements (CONTA 174) and 3D target elements (TARGE 170) were used to model the outer surface of the coated sphere and the rigid flat, respectively. The rectangular-shaped rigid flat had the dimensions of 0.4(R+t)×0.2(R+t) in the x and y directions, respectively, which were sufficient to cover the maximum possible contact area encountered in the present study. The following main assumptions were adopted in the present study: 1) The stick contact condition exists between the outer surface of the coated sphere and the rigid flat. 2) The coating is perfectly bonded to the substrate. 3) The coating and substrate materials are homogeneous and isotropic. 4) The coating and substrate are free of residual stresses.

The coating and substrate materials are elastic-plastic, and the transition from elastic to plastic deformation was determined using the von Mises yielding criterion. The stress-strain relations in the elastic and plastic zones were governed by Hooke's law and the Prandtl-Reuss law with isotropic power-law hardening, respectively. The power-law hardening is adopted owing to its capability for better modeling the material behavior [35]. In uni-axial tension, the relation between the strain ε and the stress σ is

σ = Eε, for ε ≤ Y/E
σ = Y(Eε/Y)^n, for ε > Y/E

where n is the strain hardening exponent. A larger n indicates a stronger hardening effect. n=0 and n=1 represent the elastic-perfectly plastic case and the purely elastic case, respectively. In the present study, a small value, i.e., n=0.01, was selected for both coating and substrate materials. The nodes on the x-z plane were constrained in the y direction owing to the symmetry. The nodes on the x-y plane were constrained in all directions. Notably, the full constraint of the bottom nodes will affect the results negligibly, as the bottom is very far from the contact zone [23]. The point at the location (0, 0, R + t) on the rigid flat was selected as the pilot node [39], whose motion can govern that of the entire rigid flat. A constant normal load P was first applied to the pilot node, and the corresponding normal interference ω_0 was obtained as the displacement of the summit point of the coated sphere. Subsequently, the tangential displacement (u_x)_i applied to the pilot node was increased in a stepwise manner. The corresponding tangential force Q_i was obtained as the sum of the x components of the reaction forces of nodes at the sphere bottom. Accordingly, the tangential stiffness (K_T)_i can be calculated using Eq. (1). As in Ref. [32] (see Section 2), the sliding inception criterion for the coated spherical contact is also selected here as (K_T)_i ≤ 0.1(K_T)_1. This criterion was justified by attempting lower values (e.g., (K_T)_i ≤ 0.05(K_T)_1) that increased the resulting static friction coefficient only negligibly but at a much higher cost of computing time. From the sliding inception criterion, it is crucial to capture the initial tangential stiffness (K_T)_1 accurately. As K_T decreases with u_x, it is apparent that (K_T)_1 obtained at a smaller (u_x)_1 can better approximate the real initial tangential stiffness. Consequently, (u_x)_1 was set as 0.001ω_0, which is a very small displacement step compared with the following steps (u_x)_i − (u_x)_{i−1} = 0.05ω_0 for i ≥ 2. The use of a (u_x)_1 smaller than 0.001ω_0 yielded a negligible difference in (K_T)_1. To verify the accuracy of the present 3D finite element model, two groups of comparisons were made with the results obtained in the literature for simpler cases. 1) The use of identical materials for coating and substrate in the current model of the coated sphere facilitates the comparison with the results in a homogeneous spherical contact. Under pure normal loading in the slip contact condition, the present load-interference and interference-contact area relations differ from those in the Hertz solution by less than 1% and 5%, respectively. Under combined normal and tangential loading in the stick condition, the present tangential displacement-tangential load relations under a wide range of normal loads differ from those in Ref. [33] by less than 7%. 2) The use of different materials for coating and substrate facilitates the comparison with the results in a coated spherical contact under pure normal loading. The load-interference and interference-contact area relations under the slip contact condition differ from those in Ref. [24] by less than 0.5% and 2%, respectively, and those under the stick contact condition [25] by less than 4% and 5%, respectively. Finally, a convergence test was performed for the current model of the coated sphere under combined normal and tangential loading in the full stick condition by increasing the mesh density until a further increase affected the final results negligibly.
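The constitutive law and the sliding-inception bookkeeping described in this section can be condensed into a short sketch. This is an illustration under the stated assumptions, not the ANSYS procedure used in the study, and all names are illustrative.

def stress(eps, E, Y, n):
    # Uni-axial power-law hardening: elastic up to the yield strain Y/E,
    # then sigma = Y*(E*eps/Y)**n (n=0: elastic-perfectly plastic; n=1: elastic).
    return E * eps if eps <= Y / E else Y * (E * eps / Y) ** n

def sliding_inception(u_x, Q, frac=0.1):
    # u_x, Q: tangential displacement/force per step, starting from (0, 0)
    # at the end of normal loading. Returns (step index, Q_max) when the
    # stiffness of Eq. (1) drops to 'frac' of the initial stiffness.
    k1 = (Q[1] - Q[0]) / (u_x[1] - u_x[0])  # (K_T)_1 from the tiny first step
    for i in range(2, len(Q)):
        k_i = (Q[i] - Q[i - 1]) / (u_x[i] - u_x[i - 1])  # Eq. (1)
        if k_i <= frac * k1:
            return i, Q[i]  # sliding inception; Q[i] is Q_max
    return None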
Results and discussion

With fixed E_su and R, the material and geometric properties of a coated sphere can be determined by four dimensionless parameters, E_co/E_su, E_co/Y_co, E_su/Y_su, and t/R, where the subscripts 'co' and 'su' indicate the coating and substrate material, respectively. In previous studies on an elastic-plastic coated spherical contact [23−25], relations between these four dimensionless parameters and various dimensionless tribological parameters under fixed E_su and R were investigated. It was observed that these relations remain the same even for different E_su and R as long as the above four dimensionless parameters characterizing the coated sphere remain fixed. Hence, these relations depend only on the dimensionless parameters. Similarly, the present study was conducted with fixed E_su = 200 GPa and R = 10 mm, and the results were also verified to be independent of E_su and R as long as E_co/E_su, E_co/Y_co, E_su/Y_su, and t/R and the dimensionless normal load (see Section 4.1) were fixed. Thus, the dimensionless results discussed in Sections 4.1 to 4.3 also depend only on the dimensionless parameters.

Effect of geometric parameter t/R

Figure 3 presents the static friction coefficient μ as a function of t/R when E_co/E_su = 4 and E_co/Y_co = E_su/Y_su = 1,000 under the dimensionless normal load P* = 50. P* is defined as P/(E_su R^2 × 10^−7). As E_su and R are fixed, the dimensional normal load P is proportional to P*. Thus, the results in Fig. 3 are obtained under the same dimensional normal load P. Therefore, only the effect of the geometric parameter t/R on the static friction coefficient is revealed in Fig. 3.

Fig. 3 Static friction coefficient μ as a function of t/R for E_co/E_su = 4, E_co/Y_co = E_su/Y_su = 1,000, and P* = 50.

It can be observed that the static friction coefficient μ first increases linearly with t/R from μ_su at t/R = 0 till reaching a maximum μ_m at (t/R)_m. A further increase in t/R above (t/R)_m leads to a decrease in μ, which eventually approaches μ_co. The static friction coefficients μ_su and μ_co of a homogeneous spherical contact can be calculated by first obtaining the corresponding dimensional load P = P*(E_su R^2 × 10^−7) and subsequently substituting P into Eq. (2). Finally, the expressions for μ_su and μ_co are

μ_su = 0.27 coth[0.27 (P/L_c_su)^0.46]   (5)
μ_co = 0.27 coth[0.27 (P/L_c_co)^0.46]   (6)

where L_c_su and L_c_co are the critical loads of a homogeneous sphere made of the substrate and coating material, respectively, under the stick condition (see Eq. (3)).
To explain the behavior of the static friction coefficient μ in Fig. 3, the relative contributions of the substrate and coating to the total tangential displacement u_x of the coated sphere should be investigated. A similar approach was used in Ref. [13] to explain a transition interference from the substrate to the coating under normal loading. It was observed in Ref. [13] that the substrate is the dominant contributor to the interference at small t/R, whereas the coating is the dominant contributor at large t/R. Consequently, the yield inception is more likely to occur in the substrate for small t/R and in the coating for large t/R [15,16]. The tangential displacement of the highest point in the substrate, u_su, describes the contribution of the substrate to u_x. The contribution of the coating, u_co, is thus u_x − u_su. Figure 4 presents u_su/u_x at sliding inception as a function of t/R for the case in Fig. 3. As shown in Fig. 4, the variation of u_su/u_x with t/R has three distinct stages (see the two vertical solid lines). In the first stage, from 0 to (t/R)_m, u_su/u_x is almost 1, and hence, u_su is the sole contributor to u_x. In the second stage, where t/R is from (t/R)_m to approximately t/R=0.03, u_su/u_x decreases sharply with t/R. It drops to 0.5 at approximately t/R=0.12, where the coating and substrate contribute equally to u_x. For t/R above 0.12, the contribution of the coating to u_x is greater. In the last stage, u_su/u_x finally levels off at approximately 0.1, and the coating becomes the sole contributor to u_x. The variation of u_su/u_x with t/R indicates that the behavior of a coated sphere with very small or very large t/R approaches that of a homogeneous sphere made of the substrate or coating material, respectively, as expected. This correlates well with the behavior of the static friction coefficient at very small or very large t/R in Fig. 3. To explain the transitional behavior of the static friction coefficient in the intermediate range of t/R, the substrate plasticity level and contact area as a function of t/R should be investigated. In Ref. [24], it was observed that a thicker hard coating, i.e., a higher t/R, better protects the substrate from plastic deformation and also results in a smaller contact area at the outer surface of the coating. As indicated in Section 2, based on Ref. [32], the sliding inception is a junction plastic failure. A more plastic junction can support a lower dimensionless tangential load Q_max/P. Similarly, the von Mises equivalent stress level in a junction at sliding inception cannot exceed the yield strength of the material. A junction of smaller contact area with a higher normal stress level under the same normal load can support less additional shear stress, and hence, less friction force. Figures 5 and 6 present the dimensionless plastic volume V/V_0 in the substrate and the dimensionless contact area A/A_0 upon the completion of normal loading, respectively, as a function of t/R for the case in Fig. 3. Here, V_0 and A_0 are the plastic volume and the contact area, respectively, in a homogeneous sphere made of the substrate material. The transitional behavior of the static friction coefficient in the intermediate range of t/R can be attributed to the competition between two mechanisms. The first is that a decrease in the substrate plasticity level tends to increase μ, and the second is that a decrease in the contact area tends to decrease μ. For t/R from 0 to (t/R)_m, where the substrate is the dominant contributor to the tangential displacement u_x, it is reasonable to assume that the first mechanism prevails. Hence, an increase in t/R will increase the static friction coefficient. For t/R values above (t/R)_m, where the coating becomes the dominant contributor, it is reasonable to assume that the second mechanism prevails, and hence, an increase in t/R will decrease the static friction coefficient.
In the small range just above (t/R)_m, where the contribution of the substrate to u_x dramatically drops but u_su/u_x still remains above 0.5 (see Fig. 4), it appears that the second mechanism, involving a decrease in the contact area, overcomes the first one, involving a decrease in the substrate plasticity level. Hence, in this range, an increase in t/R will decrease the static friction coefficient. An accurate quantitative validation of these assumptions is out of the scope of the present paper but can be addressed in future work.

Effect of normal loading P* and material properties E_co/E_su, E_co/Y_co, E_su/Y_su

To reveal the effect of each of these parameters on the static friction coefficient, a parametric study was performed based on the reference case of Fig. 3. This was performed by varying one parameter each time while maintaining the others the same as those in the reference case. The value of t/R was selected from 0.001 to 0.05, as in Fig. 3. The dimensionless material parameters were selected as 2 ≤ E_co/E_su ≤ 8, 500 ≤ E_su/Y_su ≤ 2000, and 500 ≤ E_co/Y_co ≤ 2000. As the present study was limited to hard coatings only, it was required that the ratio Y_co/Y_su = (E_su/Y_su)(E_co/E_su)/(E_co/Y_co) be larger than 1. This eliminated the combinations of low E_co/E_su and E_su/Y_su along with high E_co/Y_co (see Ref. [16]). P* was selected from 20 to 100 so that the deformation of a coated sphere caused by P* is elastic-plastic for most combinations of E_co/E_su, E_co/Y_co, E_su/Y_su, and t/R values in their ranges. Exceptional combinations are those with very large t/R, where even P* = 100 can only elastically deform the coated sphere. Nonetheless, the present study focuses on an elastic-plastic contact. Figure 7 shows the effect of the normal loading. The static friction coefficient at small or large t/R decreases with the increase in normal load (e.g., μ as a function of P* at t/R denoted by the vertical dashed lines I and III). This is expected because the coated sphere with such extreme t/R behaves as a homogeneous sphere [32]. On the contrary, the static friction coefficient at t/R between the extreme t/R ranges increases with the normal load (e.g., μ as a function of P* at t/R denoted by the vertical dashed line II). In this t/R range, the coated sphere exhibits a feature unique to the coated spherical contact, i.e., the stress level in the coating is relieved by the plastically deformed substrate [23]. As the normal load increases, the substrate becomes more plastic and compliant and relieves the stress level in the coating. This results in a more elastic coating, and hence, a junction capable of supporting additional stress and a larger friction force. Figure 8 shows the effect of E_su/Y_su, where an increase in E_su/Y_su increases (t/R)_m and μ_m. A higher E_su/Y_su for a fixed E_su indicates a weaker substrate; hence, under a fixed normal load, the substrate plasticity level is higher. This provides a greater scope for the first mechanism indicated in Section 4.1 to be the dominant one, explaining the increase in (t/R)_m and μ_m with the increase in E_su/Y_su. The difference between the three cases of E_su/Y_su diminishes with the increase in t/R, as μ approaches the common μ_co because of the same coating material used in these cases. Figure 9 shows the effect of E_co/E_su, where the static friction coefficient increases with E_co/E_su. A higher E_co/E_su for a fixed E_su indicates a stiffer coating.
This enables the junction to support additional stress, and hence, a larger friction force. In addition, (t/R)_m is independent of E_co/E_su. From the results of the parametric analysis, it was observed that the effect of a decrease in E_co/Y_co has the same trend as that shown in Fig. 9 for an increase in E_co/E_su. Similarly, (t/R)_m is independent of E_co/Y_co.

Fig. 9 Effect of E_co/E_su on the static friction coefficient.

A model for the static friction coefficient

Extensive numerical simulations must be performed in order to obtain the expression of the static friction coefficient in terms of the material and geometrical properties and the normal load covering the ranges indicated in Section 4.2. Table 1 presents the values of E_co/E_su, E_co/Y_co, E_su/Y_su, P*, and t/R used for the numerical simulations. Four material property combinations that result in Y_co/Y_su < 1 (see Section 4.2 and Ref. [16]) are eliminated. Thus, we have (3×3×3−4)×5 = 115 combinations of material properties and normal load, for each of which we obtained 12 values of μ corresponding to the 12 values of t/R. Consequently, the total number of data points used for obtaining the following empirical expressions (Eqs. (7)−(13)) is 115×12 = 1,380. It was observed that, for all the combinations of material properties and normal load, the behavior of μ with the increase in t/R was similar to that shown in Fig. 3. Hence, the expression of μ should contain two parts for the two branches of t/R divided by (t/R)_m and also ensure that μ = μ_su at t/R = 0 and that μ approaches μ_co at large t/R. An admissible form for the expression of μ is as follows:

μ = μ_su + k(t/R), for t/R ≤ (t/R)_m   (7)
μ = μ_co coth[a(t/R) + b], for t/R > (t/R)_m   (8)

where k is the slope of the linear function that fits the increasing μ, and a and b are the fitting parameters inside the cotangent hyperbolic function that fits the decreasing μ. Using Eqs. (7) and (8) to fit each of the 115 sets of numerical μ-t/R data (see Fig. 3 as an example), we observed that the R^2 goodness of fit is better than 0.9 for more than 80% of the sets (maximum error 15%) and better than 0.8 for the rest (maximum error 30%). Hence, such a form for the expression of μ is justified. As expected, (t/R)_m is a function of only P* and E_su/Y_su, whereas μ_m depends on all the dimensionless parameters (see Section 4.2). As ((t/R)_m, μ_m) must satisfy both Eqs. (7) and (8), the slope k in Eq. (7) can be obtained from the points (0, μ_su) and ((t/R)_m, μ_m) as follows:

k = (μ_m − μ_su)/(t/R)_m

and a relation between the fitting parameters a and b can also be obtained by substituting ((t/R)_m, μ_m) into Eq. (8) as follows:

a(t/R)_m + b = coth^−1(μ_m/μ_co)

The parameter b can be handily obtained from the fitting curves for each set of numerical μ-t/R data, and its expression is given by Eq. (13). Therefore, using Eqs. (7)−(13) along with Eqs. (2), (3), (5), and (6), which calculate μ_co and μ_su, a model can be obtained for the static friction coefficient in a full stick elastic-plastic coated spherical contact. Notably, as the elastic-plastic model produces permanent distortion of the surface, the current model would only be valid for sliding of a new surface. Relaxing this limitation is a task for future work.
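For illustration, the two-branch form of Eqs. (7) and (8) can be fitted to one set of numerical μ-t/R data as sketched below. This is not the authors' fitting code; the coth parameterization mirrors the reconstruction above, and all names and starting values are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def mu_piecewise(tR, k, a, b, mu_su, mu_co, tR_m):
    # Linear rise (Eq. (7)) up to (t/R)_m, coth decay (Eq. (8)) beyond it.
    rise = mu_su + k * tR
    decay = mu_co / np.tanh(a * tR + b)  # coth(x) = 1/tanh(x)
    return np.where(tR <= tR_m, rise, decay)

def fit_decay(tR_data, mu_data, mu_co, tR_m):
    # Fit a and b on the decreasing branch, with mu_co from Eq. (6) and
    # (t/R)_m taken from the location of the data maximum.
    mask = tR_data > tR_m
    f = lambda tR, a, b: mu_co / np.tanh(a * tR + b)
    (a, b), _ = curve_fit(f, tR_data[mask], mu_data[mask], p0=(50.0, 0.5))
    return a, b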
It was revealed in Ref. [40] that, with regard to critical loads at yield inception vs. dimensionless coating thickness, the behavior of soft coatings resembles a mirror image of the behavior of hard coatings. We thus speculate that soft coatings may provide a similar mirror image behavior with regard to the static friction coefficient vs. t/R shown in Fig. 3. We used the finite element model to obtain μ for a single case of a soft coating with E_co/E_su = 0.25, E_co/Y_co = E_su/Y_su = 1,000 (resulting in Y_co/Y_su = (E_su/Y_su)(E_co/E_su)/(E_co/Y_co) = 0.25) and P* = 50 with t/R from 0.001 to 0.05. The results showed that, as t/R increased, μ decreased from μ_su to a minimum and thereafter increased and approached μ_co. This single case resembles a mirror image of the behavior of μ with hard coatings. An intensive study over a large range of mechanical properties for soft coatings and the minimization of friction is very interesting but out of the scope of the present paper. This shall be covered in future work.

Conclusion

A finite element model was developed to investigate an elastic-plastic coated spherical contact under combined normal and tangential loading in the full stick contact condition. In this model, sliding inception was assumed to occur once the tangential stiffness became equal to or less than 10% of the initial tangential stiffness. It was observed that, for fixed coating and substrate material properties and normal loading, the static friction coefficient μ is a function of the dimensionless coating thickness t/R. An intensive study was performed on hard coatings, for which Y_co/Y_su > 1. As a typical behavior of such a case, μ first increases linearly with t/R till reaching a maximum μ_m at t/R = (t/R)_m, and thereafter decreases as a cotangent hyperbolic function when t/R increases above (t/R)_m. When t/R is 0 or very large, μ approaches the static friction coefficient μ_su or μ_co of a homogeneous sphere made of the substrate or coating material, respectively. A parametric study was performed over a wide range of the dimensionless material properties E_co/E_su, E_co/Y_co, and E_su/Y_su and the dimensionless normal load P* to reveal the effects of these parameters on the static friction coefficient as a function of t/R. An increase in E_co/E_su or a decrease in E_co/Y_co increases μ for the entire range of t/R. The effect of E_su/Y_su and P* is more complex, as an increase in P* or E_su/Y_su can either increase or decrease μ depending on t/R. A model for the static friction coefficient μ in the case of hard coatings was proposed based on the typical behavior of μ as a function of t/R. The potential for mirror image behavior in the case of soft coatings, which can minimize μ, was also demonstrated.
2023-01-31T15:04:29.965Z
2018-11-22T00:00:00.000
{ "year": 2018, "sha1": "b3b1e81eb2052f734f9f71ed6c6d5739570c1409", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-018-0251-5.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "b3b1e81eb2052f734f9f71ed6c6d5739570c1409", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
9637071
pes2o/s2orc
v3-fos-license
Differentially methylated regions in maternal and paternal uniparental disomy for chromosome 7

DNA methylation is a hallmark of genomic imprinting, and differentially methylated regions (DMRs) are found near and in imprinted genes. Imprinted genes are expressed only from the maternal or paternal allele, and their normal balance can be disrupted by uniparental disomy (UPD), the inheritance of both chromosomes of a chromosome pair exclusively from only either the mother or the father. Maternal UPD for chromosome 7 (matUPD7) results in Silver-Russell syndrome (SRS) with typical features and growth retardation, but no gene has been conclusively implicated in SRS. In order to identify novel DMRs and putative imprinted genes on chromosome 7, we analyzed eight matUPD7 patients, a segmental matUPD7q31-qter, a rare patUPD7 case, and ten controls on the Infinium HumanMethylation450K BeadChip with 30 017 CpG methylation probes for chromosome 7. Genome-scale analysis showed highly significant clustering of DMRs only on chromosome 7, including the known imprinted loci GRB10, SGCE/PEG10, and PEG/MEST. We found ten novel DMRs on chromosome 7, two DMRs for the predicted imprinted genes HOXA4 and GLI3, and one for the disputed imprinted gene PON1. Quantitative RT-PCR on blood RNA samples comparing matUPD7, patUPD7, and controls showed differential expression for three genes with novel DMRs, HOXA4, GLI3, and SVOPL. Allele-specific expression analysis confirmed maternal-only expression of SVOPL, and imprinting of HOXA4 was supported by monoallelic expression. These results present the first comprehensive map of parent-of-origin specific DMRs on human chromosome 7, suggesting many new imprinted sites.

Introduction

Methylation of the 5′-cytosines in CpG dinucleotides is an epigenetic mark that can affect gene expression. Clusters (approximately 1 kb) of multiple CpGs form CpG islands (CGIs), which are usually unmethylated when located at gene transcription start sites (TSS), but are also found within coding regions, at 3′ ends, as well as in intra- and inter-genic regions. 1 Methylation near TSSs blocks transcription initiation, but methylation within gene bodies may increase expression and influence splicing. 1 Methylation of promoter CGIs is rare and found in genes where expression is permanently repressed, as for imprinted genes, genes on the inactive X chromosome, and genes exclusively expressed in germ cells. 1 Moreover, CpG methylation is a hallmark of imprinted genes, which are expressed exclusively from the maternal or paternal allele and most of which play crucial roles in growth and development. 2 Imprinted genes generally reside in clusters where an imprinting control region (ICR) and other additional differentially methylated regions (DMRs) control the expression of the imprinted genes. 3 In human, approximately 32 imprinting clusters and over 70 imprinted genes have been described (see Catalogue of Imprinted Genes and Parent-of-origin Effects in Humans and Animals, http://igc.otago.ac.nz). 4 The normal balance of imprinted genes can be disrupted by uniparental disomy (UPD), the inheritance of both chromosomes of a chromosome pair exclusively either from the mother (maternally, matUPD) or the father (paternally, patUPD). 5
UPD can lead to imprinting syndromes where loss or gain of methylation at a specific DMR/ICR leads to the syndrome phenotype; for example, patUPD for chromosome 6 (patUPD6) leads to transient neonatal diabetes mellitus (TNDM). [5][6][7][8][9][10] However, aberrant methylation is not always confined to only one locus, and multilocus loss of methylation (LOM) at multiple imprinted loci has been reported for TNDM, PWS, and a matUPD7q patient who also had hypomethylation at the paternally imprinted DLK1/MEG3 locus at 14q32 (MIM 176290). 11,12 MatUPD7 is found in approximately 10% of SRS patients, and hypomethylation of the ICR1 of IGF2-H19 (MIM 147470, MIM 103280) on chromosome 11p15 in 20-60%. 13,14 SRS is characterized by severe pre- and postnatal growth restriction, macrocephaly, skeletal asymmetry, a triangular face, and other variable dysmorphic features. 15 Conversely, patUPD7 does not affect growth and development. [16][17][18] An imprinted gene on chromosome 7 has been suggested to cause SRS, but none of the known imprinted genes at the three suggested domains (7p12: GRB10; 7q21: SGCE/PEG10; 7q32: PEG/MEST) has been conclusively implicated. [19][20][21][22][23] Segmental maternal duplications spanning the imprinted gene GRB10 on 7p12.2 and rare cases of segmental matUPD7q31-qter, matUPD7q, and a mosaic matUPD7q21-qter in SRS patients have however narrowed down the candidate SRS regions. [24][25][26][27][28] Imprinted genes can be identified through parent-of-origin specific differences in methylation and expression. 29 High-throughput genome-wide methylation profiling has enabled systematic identification of multiple new loci. 30 Whole-genome methylation has already been used to identify new imprinted genes by profiling between matUPD15 and patUPD15 cases and in rare reciprocal genome-wide UPD samples. 31,32 Global methylation arrays with 1505 and 27 500 CpG sites in SRS patients with/without ICR1 hypomethylation at IGF2-H19 did not reveal any significant common associations outside the H19 promoter. 33,34 However, individual ICR1 hypomethylated patients showed increased methylation changes at separate loci. 34 No matUPD7 patients were included in these studies, and to our knowledge high-throughput methylation profiling of matUPD7 has not been reported. To identify novel DMRs on chromosome 7, we compared the DNA methylation status of matUPD7 cases to controls and a rare patUPD7 case with the Illumina Infinium HumanMethylation450K BeadChip methylation assay. 35 We found 17 DMRs on chromosome 7, of which 14 are novel and three are the known DMR/ICRs of GRB10 and the SGCE/PEG10 and PEG/MEST clusters. Imprinted expression was suggestive for three genes with novel DMRs, HOXA4 (MIM 142953), GLI3 (MIM 165240), and SVOPL (MIM 611700), by qRT-PCR. Allele-specific expression of SVOPL confirmed it as a novel imprinted gene, and monoallelic expression of HOXA4 supported imprinting. These results present, to our knowledge, the first comprehensive map of parent-of-origin specific DMRs on human chromosome 7, suggesting novel imprinted domains.

Genome-wide methylation in UPD7

To identify DMRs between the maternal and paternal chromosomes 7, we performed genome-wide comparisons of the methylation of individual CpG sites between nine matUPD7 cases, including one segmental matUPD7q31-qter, ten controls, and one patUPD7 case. We used the Infinium HumanMethylation450K BeadChip (Illumina), which measures methylation of 485 512 individual CpG sites, of which 30 017 are located on chromosome 7. Comparison between matUPD7 and controls showed that the differentially methylated CpGs mapped exclusively to chromosome 7 (Fig. 1), as expected for matUPD7. None of the CpGs outside chromosome 7 exceeded the genome-wide significance level (P = 5 × 10−8), but single CpGs exceeding the suggestive significance level (P = 1 × 10−5) were observed on chromosomes 2, 4, 5, 6, 14, 16, 17, and 20. None of these CpGs coincide with known imprinted gene clusters or genes previously reported to show LOM in SRS patients with H19 hypomethylation or other syndromes. 11,33,36,37 Comparing the single patUPD7 sample with controls, the strong clustering of differentially methylated CpGs was also observed on chromosome 7. Because only a single patUPD7 sample was available, the statistical analysis was disturbed by noise, and scattered CpGs above the significance threshold emerged genome-wide (data not shown). Although the information provided by the patUPD7 had limitations in statistical analyses, the sample was highly informative when analyzing imprinted loci in greater detail.
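A minimal sketch of the kind of per-CpG screen described above, assuming array beta values are converted to M-values (the log2 methylated/unmethylated ratio commonly used for array statistics) and tested group-wise. The two-sample t test here is a stand-in, as this passage does not name the exact test; all names are illustrative, not the authors' pipeline.

import numpy as np
from scipy.stats import ttest_ind

def beta_to_m(beta, eps=1e-6):
    # Logit transform of Infinium beta values; eps guards against log(0).
    beta = np.clip(beta, eps, 1 - eps)
    return np.log2(beta / (1 - beta))

def per_cpg_pvalues(m_cases, m_controls):
    # m_cases, m_controls: arrays of shape (n_cpgs, n_samples).
    return np.array([ttest_ind(c, k).pvalue for c, k in zip(m_cases, m_controls)])

# Thresholds quoted above: genome-wide P < 5e-8, suggestive P < 1e-5.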
Differentially methylated regions on chromosome 7

We screened chromosome 7 for DMRs by using a step-by-step filtering process for all 30 017 CpG sites on chromosome 7 (Fig. 2). The process included two separate tracks, one screening for maternally hypermethylated CpGs and the other for maternally hypomethylated CpGs. The first step of the filtering process was based on identifying a pattern where the median M-value of the different groups (matUPD7, controls, and patUPD7) would differ in the way that would be expected for an imprinted locus, i.e., methylation in controls is midway and matUPD7s and patUPD7 diverge in opposite directions. For maternally hypermethylated CpGs, we set the filter to include all CpGs where the median M-value of the matUPD7s was larger than that of the controls, and the median M-value of the controls was larger than that of the patUPD7. For the maternally hypomethylated genes, we set the filter to include all CpGs where the median M-value of the patUPD7 was larger than that of the controls, and the median M-value of the controls was larger than that of the matUPD7s. Step two excluded all CpGs where the methylation difference between matUPD7s and controls did not reach nominal significance (P value < 0.05). A nominal threshold was applied in order to avoid false negative findings. The significance level of differential methylation between controls and patUPD7 was disregarded because only a single patUPD7 sample was available. The third step was set to include only those CpGs where a consecutive row of three or more CpGs had passed steps one and two in the filtering process. We excluded single and only two adjacent significant CpGs because these were judged more likely to be false positives. We excluded CpGs with poor signals and chose only adjacent CpGs with a good signal across the entire locus to increase the likelihood of obtaining true positive results. Our strategy may have left some loci undetected, but should have a low false positive rate. We found significant differential methylation that passed our three-step filtering process for 204 CpGs at 17 DMRs/CGIs spread along chromosome 7, localizing to 14 genes and two intragenic regions at long non-coding RNAs (lncRNAs) (Table 1; Tables S4 and S5; Fig. 3). The majority of the DMRs were maternally hypomethylated (11/17, 65%) and six were maternally hypermethylated (Table 1; Tables S4 and S5; Fig. 3). As expected, the three known DMRs of the imprinted domains of GRB10 on 7p12.2, SGCE/PEG10 on 7q21.3, and MEST/MESTIT1 on 7q32 passed our filter (Table 1). 23,38,39
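The three-step filter can be sketched as follows for the maternally hypermethylated track; CpGs are assumed to be ordered by genomic position, and the rank test, array layouts, and names are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.stats import mannwhitneyu

def three_step_filter(m_mat, m_ctrl, m_pat, min_run=3, alpha=0.05):
    # m_mat, m_ctrl: (n_cpgs, n_samples) M-values; m_pat: (n_cpgs,) single sample.
    med_mat = np.median(m_mat, axis=1)
    med_ctrl = np.median(m_ctrl, axis=1)
    # Step 1: expected ordering for a maternally hypermethylated imprinted CpG.
    step1 = (med_mat > med_ctrl) & (med_ctrl > m_pat)
    # Step 2: nominal significance between matUPD7s and controls.
    pvals = np.array([mannwhitneyu(a, b).pvalue for a, b in zip(m_mat, m_ctrl)])
    passed = step1 & (pvals < alpha)
    # Step 3: keep only runs of >= min_run consecutive passing CpGs.
    keep = np.zeros(len(passed), dtype=bool)
    i = 0
    while i < len(passed):
        if passed[i]:
            j = i
            while j < len(passed) and passed[j]:
                j += 1
            if j - i >= min_run:
                keep[i:j] = True
            i = j
        else:
            i += 1
    return keep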
The well-conserved CGI2 upstream of GRB10 was entirely differentially methylated, with four CpGs encompassing the CGI2 and two CpGs in the North shore, covering 113 bp (Table 1). The DMR in the promoter of MEST has been proposed to act as an ICR for the whole imprinted cluster spanning from CPA4 to KLF14. 22,40 We identified 63 maternally hypermethylated CpGs spanning the entire MEST ICR, with 50 over the CGI and 13 CpGs located in the North shore, covering 793 bp (Table 1). We found 55 maternally hypermethylated CpGs in the SGCE/PEG10 ICR, with 41 located in the CGI and 14 in the South shore, covering 715 bp (Tables 1 and 2). The SGCE/PEG10 CGI was not completely differentially methylated, as 662 bp in the 5′ end lacked a DMR. CGI shores are defined as the region 0-2 kb upstream (North) or downstream (South) from the CGIs, and shelves as the region 2-4 kb from the CGIs (Illumina, based on UCSC predictions). Altogether, ten DMRs located to genes with no prior imprinting status, two to previously predicted imprinted genes (HOXA4 and GLI3), 41 and one to PON1 (MIM 168820), with a disputed imprinted status (http://igc.otago.ac.nz) (Table 1). The DMR of PON1 was the only new DMR found in a known imprinting cluster (SGCE/PEG10). Previously, only one DMR, in an ICR situated between PEG10 and SGCE, has been found in this cluster out of all the CGIs ranging from CALCR (MIM 114131) to PPP1R9A (MIM 602468). 23,42 The intergenic DMR at 7p21.1 partially overlapped the promoter and 5′ region of a possible long non-coding RNA (lncRNA), defined as a transcript of uncertain coding potential (TUCP) by the UCSC (University of California, Santa Cruz) genome browser (http://genome.ucsc.edu/) (TCONS_l2_00026389, 710 bp at 16 625 596-16 626 306 bp), and the second intergenic DMR at 7q36.2 was located 659 bp upstream of the TSS of another TUCP (TCONS_I2_00027011, 135 475 bp at 156 264 552-156 400 027 bp). Several imprinting clusters contain at least one lncRNA located in the proximity of or partially overlapping a protein-coding gene, e.g., MESTIT1 overlapping MEST at the 7q32 imprinted cluster. 43,44 Imprinted lncRNAs, e.g., Air (antisense Igf2r RNA, [MIM 604893]) and Kcnq1ot1 (Kcnq1 overlapping transcript 1, [MIM 604115]), have also been shown to play a major role in silencing multiple genes in the imprinted clusters. The ICR can lie within the lncRNA, a few kb upstream of the promoter, or at the lncRNA promoter. 43 Interestingly, RPS2P32 at 7p15.3 is itself a 1025 bp lncRNA, with the maternally hypermethylated DMR located to the TSS and gene body (Table 1). The lengths of the DMRs ranged from 196 bp (GLI3) to 3164 bp (MEST/MESTIT1) (Table 1). All significant CpGs defining DMRs located primarily to CGIs and their shores (Table 1). For MAD1L1 and GLI3 all significant CpGs located only to shores, and the DMR for CARD11 (MIM 607210) did not hit a predicted CGI nor the shores or shelves of a CGI. SH2B2 (MIM 605300) had two separate DMRs in two independent CGIs. Otherwise, all genes had a DMR located to a single CGI and its shores, but none extended into the shelves.
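Using the shore/shelf definitions quoted above (shores 0-2 kb and shelves 2-4 kb from an island), a CpG position can be assigned to its CGI context as in this illustrative sketch; the coordinates in the example are hypothetical.

def cgi_context(pos, cgi_start, cgi_end):
    # Classify a CpG position relative to one CGI, per the Illumina/UCSC
    # definitions above: island, N/S shore (0-2 kb), N/S shelf (2-4 kb).
    if cgi_start <= pos <= cgi_end:
        return "island"
    if pos < cgi_start:
        dist, side = cgi_start - pos, "N"  # North = upstream of the island
    else:
        dist, side = pos - cgi_end, "S"    # South = downstream
    if dist <= 2000:
        return side + "_shore"
    if dist <= 4000:
        return side + "_shelf"
    return "open_sea"

# Example: cgi_context(27_170_892, 27_169_000, 27_170_500) -> "S_shore"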
Differentially methylated regions and gene regulatory elements

The location of a DMR in regard to, e.g., promoters, enhancers, TSSs, and CTCF binding sites may affect gene expression. 1,45 The identified DMRs were predominantly in the 5′ UTRs, TSSs, and first exons of the genes (Table 1). Only for two genes (MAD1L1, SH2B2) did the DMRs locate to the body of the gene or the 3′ UTR (Table 1). For HOXA4 and PON1 the DMRs were spread from the TSS to the gene body, and for PRR15 from the 5′UTR to the 3′UTR. However, PRR15 is a small gene of 1717 bp (NM_175887) with only two exons. Methylation at the promoter and TSS is usually associated with decreased gene expression, while gene body or 3′ UTR methylation increases expression and can affect splicing. 1 Altered methylation of CTCF binding sites is known for some imprinted genes, e.g., H19, where one of seven CTCF binding sites shows parent-of-origin specific methylation. 1 Methylation prevents the binding of CTCF, which acts as an insulator, and the unblocked enhancers are able to drive the promoters to upregulate transcription of the target gene. We observed a strong signal for CTCF-binding sites in 4/6 (67%) of our maternally hypermethylated genes, including the CGI2 of GRB10, and in 3/10 of the maternally hypomethylated domains (Table 2). Methylation of CpG sites in the recognition sequences of transcription factors can strongly influence transcription factor binding by complex mechanisms. 1,46 Conserved transcription factor binding sites were observed for 7/17 (41%) of the DMRs (Table 2).

Validation of methylation status with pyrosequencing

We chose three DMRs (HTR5A, RPS2P32, and HOXA4) for validation with pyrosequencing. There was a high correlation of methylation levels between the Infinium HumanMethylation450K BeadChip and pyrosequencing: 94% for the HTR5A CpG located at 154 862 969 (hg19 build) and 95% for the HOXA4 CpG located at 27 170 892 (hg19 build) (Fig. 4), suggesting that the Infinium microarray results were highly accurate. Pyrosequencing for RPS2P32 failed with two different primer sets.

Data visualization

The methylation level of each CpG in each sample in the chromosome 7 DMRs and surrounding regions was visualized in detail with the Integrated Genome Browser (IGB, http://bioviz.org/igb/). 47 The full data set is available at http://publications.scilifelab.se/kere_j/imprinting7 as raw data files and IGB-compatible files. As expected, GRB10, SGCE/PEG10, and MEST/MESTIT1 showed a clear parent-of-origin methylation pattern, with the methylation level of matUPD7s and patUPD7 deviating in opposite directions from the controls (Fig. 5). HTR5A, PON1, and RPS2P32 also showed a similar parent-of-origin specific methylation. For SVOPL, one matUPD7 case deviated from the others, suggesting individual variation (Fig. 5). HOXA4 showed clear hypomethylation for the matUPD7s, but the patUPD7 was indistinguishable from the controls (Fig. 5). For the remaining DMRs, clustering of the different groups was less apparent despite statistically significant group differences, suggesting more subtle methylation differences (data not shown).

Expression study of genes with DMRs

To define possible imprinted expression of the genes displaying DMRs, we performed quantitative reverse transcriptase-PCR (qRT-PCR) on freshly drawn blood cells. Differential expression between matUPD7s, controls, and patUPD7, compatible with imprinting, was seen for 7/11 (64%) of the genes with novel DMRs studied (Table 2). PEG10 was studied as a positive control for imprinting and displayed the expected paternal expression, with a significant difference between matUPD7s and controls, t test P value 0.0017 (Fig. 6D and H). A significant difference in expression between matUPD7s and controls was found for three genes: HOXA4 at 7p15.2, GLI3 at 7p13, and SVOPL at 7q34 (Table 2; Fig. 6).
HOXA4 (Fig. 6A) and SVOPL (Fig. 6C) showed significantly increased expression in matUPD7s compared with controls (t test P values 0.008 and 0.017, respectively), and the patUPD7 showed markedly lower expression compared with the controls and matUPD7s (Fig. 6). GLI3 showed significantly decreased expression in matUPD7s compared with controls (t test P value 0.049), and the one patUPD7 showed higher expression compared with the controls and the matUPD7s (Fig. 6B). Significant expression differences were also confirmed by performing a linear regression analysis, including the single patUPD7 case (Fig. 6E-G). Maternal hypomethylation of the HOXA4 promoter region (Fig. S1A) appears to result in increased expression in matUPD7s, and hypermethylation in decreased expression in the patUPD7. However, for GLI3, maternal hypomethylation at the TSS (Fig. S1B) resulted in decreased expression in matUPD7s compared with controls and patUPD7. For SVOPL, maternal hypermethylation at the TSS and 5′UTR (Fig. S1C) results in increased maternal expression. Overall, HOXA4 and SVOPL showed maternal expression, while GLI3 was paternally expressed. In addition, CARD11, PRR15, and SH2B2 showed a clear pattern of increased maternal expression compared with controls and decreased expression in the patUPD7, suggestive of maternal expression, and RPS2P32 of decreased maternal expression, indicative of paternal expression, but the difference between matUPD7s and the controls was not statistically significant for any of these genes (data not shown). For MAD1L1 and RARRES2 (MIM 601973) we did not identify differential expression indicative of imprinting, and for HTR5A and PON1, expression was undetectable in lymphocytes. We did not study the lncRNAs or genes close to the two intergenic DMRs (Table 1). To confirm the imprinting of HOXA4, GLI3, and SVOPL, we studied parent-of-origin specific expression by sequencing selected exonic SNPs in nine parent-child trios. Four SNPs with high expected heterozygosity were chosen for HOXA4 (rs17471888, rs4722660, rs1801085, rs2158218), five for GLI3 (rs3735361, rs3823720, rs2051935, rs929387, rs846266), and three for SVOPL (rs1614641, rs3734944, rs2305816) (Table S6). Genomic DNA from the children and parents was first sequenced for all SNPs to find children heterozygous for a SNP and parents either both homozygous for different alleles or at least one parent homozygous for one allele. The child's cDNA was then sequenced from the trios with informative SNP alleles to see whether the maternally or paternally inherited allele was expressed. For SVOPL, two trios were informative, and both children showed expression of only the maternal allele for rs2305816, confirming the maternal expression seen in the qPCR (Fig. S2A). Only one child showed two heterozygous SNPs for HOXA4, but both parents were also heterozygous. Sequencing of the child's cDNA for rs2158218 showed monoallelic expression, thus supporting imprinting (Fig. S2C). Because both parents share the same allele, it is not possible to discern which parental allele is expressed. The other heterozygous SNP, rs4722660, failed to give a readable cDNA sequence. For GLI3 we failed to obtain conclusive data due to a high PCR failure rate. Further analyses in larger sample sets are warranted to confirm the parent-specific expression.
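A minimal sketch of the kind of group comparison reported above, assuming relative expression per sample is derived from qRT-PCR Ct values against a reference gene; the 2^-dCt form and all names are assumptions, not the paper's stated protocol.

import numpy as np
from scipy.stats import ttest_ind

def relative_expression(ct_gene, ct_reference):
    # 2^-dCt relative quantification of a target gene against a reference gene.
    return 2.0 ** -(np.asarray(ct_gene) - np.asarray(ct_reference))

def compare_groups(expr_matupd7, expr_controls):
    # Two-sample t test between matUPD7 cases and controls, as quoted for
    # HOXA4 (P = 0.008), SVOPL (P = 0.017), and GLI3 (P = 0.049).
    return ttest_ind(expr_matupd7, expr_controls).pvalue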
Discussion

We performed a genome-wide methylation study for 450 000 CpG sites to identify DMRs on chromosome 7 in matUPD7 and patUPD7 cases in comparison to biparental controls. We used a three-step filtering approach to identify the most significant CpGs in terms of parent-of-origin specific methylation. We identified 17 distinct DMRs spread along chromosome 7, localizing to 14 genes and two intragenic regions at lncRNAs, including the known DMRs at the SGCE/PEG10, GRB10, and MEST/MESTIT1 imprinted domains, and also two previously predicted imprinted genes, HOXA4 on 7p15.2 and GLI3 on 7p13. 41 Novel DMRs were located near eight protein-coding genes and two lncRNAs, none previously implicated in imprinting, and SVOPL was confirmed as imprinted by parent-specific expression. The majority of the DMRs were maternally hypomethylated (65%), and they were predominantly located in the promoter regions of the genes. This contradicts the current knowledge that the majority of imprinted genes are maternally hypermethylated. Maternally methylated DMRs and ICRs of imprinted genes are typically located at the promoters. 45 DMRs and ICRs of imprinted genes may also be situated within the genes, and gene body methylation is known to enhance expression and possibly affect splicing. 1 For MAD1L1 and SH2B2 the maternally hypomethylated DMRs were found only in the gene bodies. However, SH2B2 showed increased maternal expression, although non-significant, and MAD1L1 biparental expression, whereas gene body maternal hypomethylation would have been expected to result in decreased maternal expression in both genes. We did not study differential splicing for these genes. Paternally methylated DMRs are preferentially located in intergenic regions, 45 and consistently, both of the DMRs we found in intergenic regions were maternally hypomethylated. The CpGs in the DMRs located to CpG islands and shores but not shelves. CTCF-binding sites, which act as insulators between promoters and enhancers, are commonly found at imprinting clusters. 45,48 All of the DMRs found here, except for MAD1L1, SGCE/PEG10, and MEST/MESTIT1, co-localized with CTCF-binding sites. Differential methylation at CTCF-binding sites can affect the imprinted expression of the genes regulated by the insulators, e.g., H19. 1 Several imprinting clusters also contain at least one antisense lncRNA located in the proximity of or partially overlapping a protein-coding gene. 43,44 Imprinted lncRNAs are necessary for the imprinted expression of the genes in the cluster, but can also regulate small clusters of autosomal genes in cis. 44 XIST (Inactive X-specific transcript, [MIM 314670]) and TSIX (Inactive X-specific transcript, antisense [MIM 300181]) are the best-known functional lncRNAs required for the epigenetic X chromosome inactivation in female mammals. 44 Both of the two intergenic DMRs identified here were in close proximity to possible lncRNAs defined as TUCPs, suggesting that these TUCPs may have imprinted regulation by the DMRs, and potentially the DMRs and lncRNAs may be part of larger imprinted domains with the surrounding genes. In addition, we observed antisense lncRNAs overlapping a few genes with DMRs, namely HTR5A and MAD1L1, and an antisense hypothetical protein overlapping HOXA4. Interestingly, RPS2P32 at 7p15.3 is itself a lncRNA. More than 80% of imprinted genes reside in clusters of several imprinted genes. 3 It remains to be seen whether the DMRs identified here act as ICRs by regulating the imprinted expression of other genes nearby, in addition to the genes that the DMRs lie closest to and that have been studied here.
In-depth methylation and expression analysis of the neighboring genes and lncRNAs is thus warranted, but is beyond the scope of this study. Our results give further insight into the extent of the DMRs at the SGCE/PEG10, GRB10, and MEST/MESTIT1 imprinted domains. The entire CGIs and parts of their North shores of GRB10 and MEST/MESTIT1 proved to be DMRs, but only 61% of the SGCE/PEG10 CGI turned out to be a DMR that extended well into the South shore. We also identified a novel DMR in the CGI in the promoter and first exon of PON1 in the SGCE/PEG10 cluster. Parent-of-origin specific expression of PON1 has not been shown, although paternal expression of PON1 has been reported in mouse hybrids containing a single maternal or paternal human chromosome 7. 49,50 We did not detect PON1 expression in lymphocytes and could thus not study its possible parent-of-origin specific expression. PON1 is located between PPP1R9A, imprinted in both humans and mice, and PON2 (MIM 602447) and PON3 (MIM 602720), both imprinted in mouse, but biallelically expressed in humans. 21,23 The ICR for PPP1R9A is unknown, as a CGI in the first exon of PPP1R9A did not show differential methylation between matUPD7 and patUPD7 lymphoblastoid cell lines or fetal placenta, liver, or muscle. 21 DDC (MIM 107930) and COBL (MIM 610317) adjacent to GRB10 have been shown to be imprinted in mouse, but the DMR of GRB10 appears to control the imprinting of all three genes in mouse. 51 However, in human, DDC and COBL were biparentally expressed in multiple fetal tissues. 52 DLX5 (MIM 600028) at 7q21.3 was initially shown to be maternally only expressed in human lymphoblasts, 49 but subsequently biallelic expression has been reported. 23,53 We did not find DMRs in or near DLX5 which would indicate imprinting. Several predicted imprinted loci at 7q11.21, 7q11.23, 7q21.11, and 7q36.1 failed to show differential methylation, suggesting either tissue- or developmental stage-specific imprinting beyond the blood cells that we studied, or imprecision of the predictions. 41 As all CpGs are not covered by the Infinium HumanMethylation450K BeadChip, some DMRs might have been missed by the study. We set our filtering process to exclude single and two adjacent CpGs, assuming that these CpGs were more likely to be false positives, and therefore some DMRs might have been ignored. We excluded the possibility of blood lineage-specific alternative methylation as a potential source of bias by verifying all loci for the absence of such differences. 46 Among the detected DMRs, only GRB10, MEST, and PON1 have been reported to show age-dependent methylation level changes using the same microarray method. 54 Thus, the methylation results for the remaining DMRs should not be affected by the age of the subjects studied. Only loci on chromosome 7 showed genome-wide significant differences in methylation. Consistent with our findings, there have been no reports of multilocus LOM associated with UPDs for other chromosomes. Thus, our data support the finding that matUPD7 is not associated with multilocus LOM. 55 However, up to 73% of SRS patients with H19 hypomethylation have been reported to show multilocus LOM, suggesting a generalized defect in establishing imprints in these patients. 34,48 We found suggestive loci on several other chromosomes, but these loci did not hit known imprinted domains. A small proportion of them might be cross-reactive probes that have co-hybridized to both chromosome 7 and other chromosomes. We have not explored these single CpG sites further.
Our data supported the imprinted status of HOXA4 at 7p15.2, GLI3 at 7p13, and SVOPL at 7q34 by showing differential expression between matUPD7s and patUPD7 and a significant expression difference between matUPD7s and controls. Concordant with the qRT-PCR results, allele-specific expression studies revealed maternal-only expression of SVOPL, thus confirming it as an imprinted gene. HOXA4 showed monoallelic expression supporting imprinting, but because the child and both parents were heterozygous for the SNP, we could not discern which parental allele was expressed. We failed to confirm imprinting of GLI3 because of a high PCR failure rate; further studies are needed to confirm its imprinting.

Hypomethylation of the promoter of HOXA4 in matUPD7s was associated with increased expression of HOXA4 compared with controls, while hypermethylation in patUPD7 resulted in decreased expression. HOXA4 belongs to the cluster of HOXA genes on 7p15.2, a family of homeodomain-containing transcription factors with key roles during embryonic development. 56 Several other genes in the HOXA cluster are also predicted to be imprinted (Fig. 3). 41 The HOXA cluster harbors two lncRNAs known to regulate HOXA gene expression; lncRNAs are found in many imprinted gene clusters and have been found to be crucial for the imprinted expression of genes in these clusters. 44 Further studies on the other HOXA genes and lncRNAs in the HOXA cluster are needed to clarify whether this is a new imprinting cluster and whether the DMR at the promoter of HOXA4 also acts as an ICR in this region. For GLI3, maternal hypomethylation of the DMR at the TSS resulted in decreased maternal expression. GLI3 encodes a zinc finger transcription factor that functions as a transcriptional activator and a repressor of the sonic hedgehog signaling pathway, and plays a role in early development. 59 Defects in GLI3 are found in disorders affecting limb development. For SVOPL, maternal hypermethylation at the TSS and 5′UTR resulted in increased maternal expression. Several imprinted genes have methylated DMRs on the active allele, where it is proposed that the methylation inactivates silencing factors. 2 SVOPL was found through a sequence similarity search for SLC22 anion transporters, and it shows sequence similarity with the synaptic vesicle protein SVOP, but otherwise little is known about SVOPL. 60

CARD11, PRR15, and SH2B2 showed increased, and RPS2P32 decreased, maternal expression, implying imprinted expression, but we did not see a statistically significant difference between matUPD7s and the controls for these genes. The lack of statistical significance may be due to the small sample sets available for this study and to the interindividual variation within them. CARD11 acts as a critical signal transducer for NF-kappaB activation in both B and T lymphocytes and plays a crucial role in the antigen-specific immune response in humans. The function of PRR15 is poorly understood in humans, but in the mouse, the expression pattern of Prr15 closely resembles that of several important negative cell cycle regulators, and it has been suggested that Prr15 could be involved in controlling cellular proliferation and/or differentiation. 61 SH2B2 encodes the adaptor protein SH2B2, also known as APS, which belongs to the SH2B protein family that regulates several signaling pathways and participates in various physiological responses and developmental processes.
62 SH2B2 interacts with insulin receptor substrate 1 (IRS1), IRS2, or Janus kinase 2 (JAK2) to regulate insulin, leptin, and growth hormone signaling. MAD1L1 and RARRES2 appeared to be biparentally expressed, as expression was similar in matUPD7s, controls, and patUPD7. Imprinting is not completely excluded for these genes, as other tissues, specific isoforms, or developmental time points might still reveal parent-of-origin specific differences. 45 For PON1 and HTR5A we did not detect any expression in the blood samples available and therefore could not draw any conclusions about the effects of the DMRs on their expression.

In conclusion, we have identified 14 novel parent-of-origin specific DMRs on human chromosome 7 using a genome-wide methylation microarray. Expression studies identified a novel imprinted gene, SVOPL, which was confirmed by parent-specific expression, and supported the previously suggested imprinting of HOXA4 and GLI3. Imprinted genes on human chromosome 7 are implicated in the etiology of the matUPD7 phenotype of SRS. Interestingly, HOXA4 and GLI3 and many of the genes close to the novel DMRs have known functions in cellular growth and development, which makes them appealing as putative SRS genes in future studies.

Patients
We obtained fresh blood samples from eight SRS patients with matUPD7, of whom six have been reported before 14,63-65 and two are first reported in this study, as well as an SRS patient with segmental matUPD7q31-qter 26 and one individual with patUPD7. 16 SRS patients were recruited from the Hospital for Children and Adolescents, Helsinki University Central Hospital, Finland. Additionally, three patients were referred from the Oulu University Central Hospital and one from the Päijät-Häme Central Hospital, Finland. All patients were seen by a pediatric endocrinologist; the diagnosis of SRS was based on clinical features, and matUPD7 was verified by microsatellite markers as described before. 63,64 Control samples from ten unrelated normal-height adults were obtained. For parent-of-origin allele-specific expression analysis we obtained fresh blood samples from nine parent-child trios. All children were from our growth retardation study cohort and had previously been excluded from having matUPD7, as described before. 64 Seven children had H19 hypomethylation and SRS, 14 and one was a normally growing sister of a patient.

Genome-wide methylation analysis with the Infinium HumanMethylation450K BeadChip
Genomic DNA was extracted from fresh EDTA-blood samples with the FlexiGene DNA Kit (Qiagen) according to the manufacturer's instructions. 500 ng of DNA from each subject was bisulfite converted with the EZ-96 DNA Methylation Kit (Zymo Research Corporation) according to the manufacturer's instructions. Array-based DNA methylation analysis was performed with the Infinium HumanMethylation450K BeadChip technology (Illumina). All samples were analyzed for more than 450,000 CpG sites at single-nucleotide resolution, with 99% coverage of RefSeq genes and 96% coverage of CGIs. The CpGs were distributed in CGI shelves, CGI shores, CGIs, promoter regions, 5′UTRs, first exons, gene bodies, and 3′UTRs. Bisulfite-treated genomic DNA was whole-genome amplified and hybridized to the HumanMethylation450 BeadChips (Illumina) and scanned using the Illumina iScan at the Bioinformatic and Expression Analysis (BEA) Core Facility of the Karolinska Institutet. The intensity of the images was extracted with the GenomeStudio Methylation Software Module (v 1.9.0, Illumina).
Quality Control analysis and data validation
Quality control was conducted in GenomeStudio software (v2011.1) using the methylation module (v1.9.0) according to the manufacturer's recommendations (Illumina). Briefly, the controls included assessment of DNP and Biotin staining, hybridization, target removal, extension, bisulfite conversion, G/T mismatch, and negative and non-polymorphic controls. The various controls indicated overall good quality of DNA preparations and chip performance.

Bioinformatics analysis of Infinium HumanMethylation450K BeadChip data
The analysis of the HumanMethylation450 BeadChips was performed as previously described. 46 Briefly, the raw data were analyzed with the BioConductor package lumi in R v. 2.13. 66 The data were adjusted for color channel imbalance and background noise, and normalized according to the quantile method. Probe-wise differential methylation was assayed by a linear model followed by pair-wise comparisons with an empirical Bayes t test on the normalized M-values. 67 The M-value is calculated as the log2 ratio of the intensities of the methylated vs. the unmethylated probe and measures how much more methylated than unmethylated a CpG site is. 68 A value close to 0 indicates a similar intensity between the methylated and unmethylated probes, which means the CpG site is about half-methylated. 68 Positive M-values mean that more molecules are methylated than unmethylated, while negative M-values mean that more molecules are unmethylated. The M-values give higher resolution than the β-values for extreme hyper- and hypomethylation levels, whereas at middle-range methylation levels they are collinear.

Process of filtering for differentially methylated regions
All 30,017 CpGs on chromosome 7 were filtered according to a three-step process. For maternally hypermethylated CpGs, step 1 of the filtering process was passed if the median M-value of all eight matUPD7s was larger than that of the ten controls, and the median M-value of the controls was larger than the M-value of the patUPD7 sample. No specific threshold was set, and the filter was passed even with a minimal difference between the groups. For maternally hypomethylated CpGs, step 1 of the filtering process was passed if the M-value of the patUPD7 sample was larger than the median M-value of the controls, and the median M-value of the controls was larger than that of the matUPD7 samples. Step 2 of the filtering process excluded all CpGs for which the differential methylation between matUPD7s and controls did not reach empirical Bayes significance (nominal P value < 0.05). Step 3 of the filtering process retained only those CpGs that were part of a consecutive run of at least three adjacent CpGs, all of which had also passed steps one and two of the filtering process. Adjacent CpGs were defined as CpGs consecutively mapped in a given chromosomal region, one after another (i.e., with no other probes in between them); genomic distance was not used to define adjacency.

CpGs in regard to CGIs, genes, TSSs, promoters, and CTCF-binding sites
The loci of the CpGs in regard to CGIs, TSSs, promoters, and CTCF-binding sites were obtained from the Infinium HumanMethylation450K BeadChip annotation files and from the UCSC genome browser. Localization of DMRs to lncRNAs was determined according to the UCSC genome browser.

Data visualization
Genome-wide visualization of the global methylation data was performed with IGB version 6.5.1/6.5.1_5. 47
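To make the M-value arithmetic and the three-step filter above concrete, here is a minimal Python sketch. It is an illustration, not the authors' R/lumi pipeline: the intensity offset alpha, the array shapes, and the M-to-β conversion β = 2^M/(2^M + 1) (the standard approximation, consistent with the method of ref. 68 used below) are assumptions of this sketch.

```python
import numpy as np

def m_value(meth, unmeth, alpha=1.0):
    # Log2 ratio of methylated vs. unmethylated probe intensities;
    # alpha is a small stabilizing offset (an assumption of this sketch).
    return np.log2((meth + alpha) / (unmeth + alpha))

def m_to_beta(m):
    # Standard approximate M-to-beta conversion: beta = 2^M / (2^M + 1).
    return 2.0 ** m / (2.0 ** m + 1.0)

def dmr_filter(mat_m, ctrl_m, pat_m, pvals, alpha=0.05, min_run=3):
    # mat_m: (n_matUPD7, n_cpg) M-values; ctrl_m: (n_ctrl, n_cpg);
    # pat_m: (n_cpg,) M-values of the single patUPD7 sample; pvals:
    # per-CpG empirical Bayes p-values for matUPD7 vs. controls.
    # CpGs must be ordered by chromosomal position, so that adjacency
    # means "consecutive probes with nothing in between".
    mat_med = np.median(mat_m, axis=0)
    ctrl_med = np.median(ctrl_m, axis=0)
    # Step 1: parent-of-origin ordering of group medians, no magnitude threshold.
    hyper = (mat_med > ctrl_med) & (ctrl_med > pat_m)  # maternally hypermethylated
    hypo = (pat_m > ctrl_med) & (ctrl_med > mat_med)   # maternally hypomethylated
    passed = hyper | hypo
    # Step 2: nominal significance of the matUPD7 vs. control difference.
    passed &= (pvals < alpha)
    # Step 3: keep only runs of at least min_run consecutive passing probes.
    keep = np.zeros(passed.size, dtype=bool)
    run_start = None
    for i, ok in enumerate(np.append(passed, False)):  # sentinel flushes the last run
        if ok and run_start is None:
            run_start = i
        elif not ok and run_start is not None:
            if i - run_start >= min_run:
                keep[run_start:i] = True
            run_start = None
    return keep
```

Probes flagged by the returned mask can then be grouped into candidate DMRs and intersected with the CGI, TSS, promoter, and CTCF-binding-site annotation described above.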
Pyrosequencing
Two micrograms of DNA were bisulfite-converted with the EpiTect Bisulfite Kit (Qiagen) from all matUPD7, patUPD7, and control samples. An amount of 10-20 ng of bisulfite-modified DNA was used for PCR amplification. Primers were designed with PyroMark Assay Design SW 2.0 to encompass the DMR regions identified by the Infinium 450K BeadChip for HTR5A, RPS2P32, and HOXA4. Specific primer sequences will be given upon request. PCR was performed in standard conditions with 10 µM primers, and successful PCR was verified by 1% agarose gel electrophoresis. Pyrosequencing was performed on the PyroMark Q24 system (Qiagen), according to the manufacturer's instructions. The methylation level of the CpGs was analyzed with the PyroMark Q24 software (Qiagen). Correlation between the pyrosequencing and the Infinium HumanMethylation450K BeadChip results was assessed by converting the M-values of the representative CpG sites obtained from the Infinium 450K BeadChip to beta-values, using an approximation method. 68 Beta-values of the CpG sites were then compared with the methylation percentages derived from the pyrosequencing experiments.

Quantitative PCR
Fresh whole blood samples were collected from all matUPD7, patUPD7, and controls into PAXgene Blood RNA tubes (PreAnalytiX, GmbH). RNA was extracted with the PAXgene Blood miRNA Kit (PreAnalytiX, GmbH), according to the manufacturer's instructions. RNA quality was checked with a Bioanalyzer (Agilent Technologies), and all samples had a RIN (RNA integrity number) value above eight. Student's t test was used to calculate significance between matUPD7s and controls. ANOVA was used to calculate significance for regression plots. The partial matUPD7q31-qter sample was included in the matUPD7 group only for genes in the 7q31-qter region, SVOPL and RARRES2; otherwise the sample was excluded from the analyses.

Parent-of-origin allele-specific expression analyses
The HapMart data mining tool was used to select exonic SNPs within HOXA4, GLI3, and SVOPL that had a minor allele frequency greater than zero reported for the CEPH (CEU) population. Primers were designed with PrimerZ and Primer-BLAST. Specific primer sequences are available upon request. Genomic DNA from the nine parent-child trios was extracted from fresh EDTA-blood samples with the FlexiGene DNA Kit (Qiagen), according to the manufacturer's instructions. Fresh whole blood samples from the children were obtained in PAXgene Blood RNA tubes (PreAnalytiX, GmbH). RNA was extracted with the PAXgene Blood miRNA Kit (PreAnalytiX, GmbH), according to the manufacturer's instructions. Reverse transcription was performed with the High Capacity Reverse Transcription Kit (Applied Biosystems), according to the manufacturer's protocol. Genomic DNA and cDNA were amplified by PCR using Phusion High Fidelity DNA polymerase (Thermo Scientific) by standard protocols. Products were cleaned with ExoSAP-IT (USB) and sequenced. Genotypes were viewed using FinchTV (1.5.0, Geospiza).

Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Meson mass modification in strange hadronic matter

We investigate in stable strange hadronic matter (SHM) the modification of the masses of the scalar ($\sigma$, $\sigma^*$) and the vector ($\omega$, $\phi$) mesons. The baryon ground state is treated in the relativistic Hartree approximation in the nonlinear $\sigma$-$\omega$ and linear $\sigma^*$-$\phi$ model. In stable SHM, the masses of all the mesons reveal a considerable reduction, due to a large vacuum polarization contribution from the hyperons and small density-dependent effects caused by the larger binding.

The study of the properties of hadrons, in particular the light vector mesons ($\omega$, $\rho$, $\phi$), has recently attracted wide interest both experimentally and theoretically. Recent experiments from the HELIOS-3 [1] and the CERES [2] collaborations indicate a significant amount of strength below the $\rho$-meson peak. This has been interpreted [3] as a decrease of the $\rho$-meson mass in the medium. In an effective chiral model based on the symmetries of QCD, Brown and Rho [4] predicted an approximate scaling law for the in-medium decrease of the masses of the mesons and nucleons. On the other hand, in the quantum hadrodynamic (QHD) model [5,6] based on structureless baryons, the meson masses are found to increase when particle-hole excitations from the nucleon Fermi sea are considered. By also including the particle-antiparticle excitations from the Dirac sea, the meson masses have, however, been shown to decrease [7-9]. So far, all investigations of in-medium meson mass modification have been done in the nuclear matter environment. However, there is presently a growing interest in the possibility of bound strange matter. In analogy to stable strange quark matter [10], strange hadronic matter (SHM) composed of nucleons and hyperons has been speculated [11] to be absolutely stable, or at least metastable with respect to weak hadronic decays. It is expected that such strange matter may possibly be created at RHIC and LHC [12]. It is therefore worth investigating the properties of meson masses in the strange baryonic matter environment. This study is not only of interest in itself but could also serve as a signal of the formation of SHM in heavy-ion collisions. In this letter, in a complete self-consistent calculation within the framework of the QHD model, we examine the mass modifications of both the scalar and the vector mesons in SHM.

Let us consider the composition of SHM which is stable with respect to particle emission. The analysis of level shifts and widths of $\Sigma^-$ atomic levels suggests [13] a well-depth of the $\Sigma$ in nuclear matter of $U_\Sigma^{(N)} \approx 20-30$ MeV, while for the strong processes $\Sigma N \to \Lambda N$ and $\Sigma\Lambda \to \Xi N$ the energy released is $Q \approx 78$ and 52 MeV, respectively. Consequently, systems involving $\Sigma$'s are unstable with respect to these strong decays. Analysis of $\Lambda$ binding energies and emulsion experiments with $K^-$ beams [13] yield well-depths of the $\Lambda$ and $\Xi$ in nuclear matter of $U_{\Lambda,\Xi}^{(N)} \approx 28-30$ MeV. However, against the strong reaction $\Lambda\Lambda \rightleftharpoons N\Xi$ the system can be stable, since $Q \approx 25$ MeV. Therefore, we consider here only the set of baryon species $B \equiv \{N, \Lambda, \Xi\}$, which can constitute stable SHM.

Up to now, all investigations of meson mass modification have been performed in the simple linear Walecka model with nucleons only. Considering here nonlinear self-interactions of the scalar field $\sigma$ in the QHD model, the total Lagrangian is

$$\mathcal{L} = \sum_B \bar\psi_B \left[ i\gamma_\mu \partial^\mu - m_B + g_{\sigma B}\,\sigma - g_{\omega B}\,\gamma_\mu \omega^\mu \right] \psi_B + \frac{1}{2}\left(\partial_\mu\sigma\,\partial^\mu\sigma - m_\sigma^2 \sigma^2\right) - U(\sigma) - \frac{1}{4}\,\omega_{\mu\nu}\omega^{\mu\nu} + \frac{1}{2}\,m_\omega^2\,\omega_\mu\omega^\mu + \delta\mathcal{L}, \qquad (1)$$

where the summation is over all baryon species $B \equiv \{N, \Lambda, \Xi\}$.
The scalar self-interaction $U(\sigma) = g_2\sigma^3/3 + g_3\sigma^4/4$ yields sufficient flexibility to the effective Lagrangian of the model, and it is also necessary for a good reproduction of nuclear ground state properties, in particular the compressibility. The term $\delta\mathcal{L}$ contains renormalization counterterms. This model is not able to reproduce the observed strongly attractive $\Lambda\Lambda$ interaction. The situation can be remedied by introducing two additional meson fields, the scalar meson $f_0(975)$ (denoted $\sigma^*$ hereafter) and the vector meson $\phi(1020)$ [11], which couple only to the hyperons ($Y$). The corresponding (linear) Lagrangian is

$$\mathcal{L}^{YY} = \sum_Y \bar\psi_Y \left[ g_{\sigma^* Y}\,\sigma^* - g_{\phi Y}\,\gamma_\mu \phi^\mu \right] \psi_Y + \frac{1}{2}\left(\partial_\mu\sigma^*\,\partial^\mu\sigma^* - m_{\sigma^*}^2 \sigma^{*2}\right) - \frac{1}{4}\,\phi_{\mu\nu}\phi^{\mu\nu} + \frac{1}{2}\,m_\phi^2\,\phi_\mu\phi^\mu. \qquad (2)$$

The self-consistent propagator in the medium for baryon species $B$ can be written as the sum of the Feynman [$G_B^F(k)$] and density-dependent [$G_B^D(k)$] parts:

$$G_B(k) = G_B^F(k) + G_B^D(k). \qquad (3)$$

The momentum and energy of a baryon in the medium are shifted by the mean fields, with the in-medium energy $E_B^*(k) = \sqrt{\mathbf{k}^2 + m_B^{*2}}$ and the effective mass

$$m_B^* = m_B - g_{\sigma B}\,\sigma - g_{\sigma^* B}\,\sigma^*. \qquad (4)$$

In mean field theory (MFT), the total energy of the system is generated by the presence of all baryons in the occupied Fermi seas. In contrast, in the relativistic Hartree approximation (RHA), the effect of the infinite Dirac sea is also included. The renormalized total energy density in RHA is given by

$$\mathcal{E}_{\rm RHA} = \mathcal{E}_{\rm MFT} + \Delta\mathcal{E}^{VF} + \Delta\mathcal{E}_{\rm NL}, \qquad (5)$$

where $\mathcal{E}_{\rm MFT}$ is the usual MFT energy density [11]. The contribution from the vacuum fluctuation of all the baryons is given by

$$\Delta\mathcal{E}^{VF} = -\sum_B \frac{I_B}{8\pi^2}\left[ m_B^{*4}\,\ln\!\left(\frac{m_B^*}{m_B}\right) + m_B^3\,(m_B - m_B^*) - \frac{7}{2}\,m_B^2\,(m_B - m_B^*)^2 + \frac{13}{3}\,m_B\,(m_B - m_B^*)^3 - \frac{25}{12}\,(m_B - m_B^*)^4 \right], \qquad (6)$$

where $I_B = 2I + 1$ is the isospin degeneracy of the baryon $B$. The mass shift of the Dirac sea from $m_B$ to $m_B^*$ produces the vacuum fluctuation contribution. The renormalized nonlinear $\sigma$-meson contribution $\Delta\mathcal{E}_{\rm NL}$ [6] is expressed in terms of $\lambda_1 = 2g_2\sigma/m_\sigma^2$ and $\lambda_2 = 3g_3\sigma^2/m_\sigma^2$.

The meson propagator in the baryonic medium can be computed by summing over bubble diagrams, consisting of repeated insertions of the lowest-order one-loop proper polarization part. This is equivalent to the relativistic random phase approximation (RPA). Since both scalar ($\sigma$, $\sigma^*$) and vector ($\omega$, $\phi$) mesons are present in our model, it is essential to include scalar-vector mixing, which is a purely density-dependent effect. It is therefore convenient to define a full scalar-vector meson propagator $D_{ab}$ in the form of a $5 \times 5$ matrix with indices $a, b$ ranging from 0 to 4, where 4 corresponds to the scalar meson and 0-3 to the components of the vector meson. Moreover, the (strangeness-violating) scalar-vector couplings between the strange and nonstrange mesons are prohibited by invoking the OZI rule. We therefore consider the couplings between $\sigma$-$\omega$ and $\sigma^*$-$\phi$ separately. We shall present explicitly the calculations only for the $\sigma$-$\omega$ propagator; the expressions for the $\sigma^*$-$\phi$ propagator follow similarly. Dyson's equation for the full $\sigma$-$\omega$ propagator $D$ can be written in matrix form as

$$D = D^0 + D^0\,\Pi\,D, \qquad (8)$$

where $D^0$ is the lowest-order $\sigma$-$\omega$ meson propagator, expressed in terms of the noninteracting $\sigma$- and $\omega$-meson propagators (Eq. (10)), and $q_\mu^2 \equiv q_0^2 - \mathbf{q}^2$ is the squared four-momentum carried by the meson. Note that the effect of the nonlinear $\sigma$ interaction (the boson loops) is to replace the bare meson mass $m_\sigma^2$ in Eq. (10) by $\bar m_\sigma^2 = m_\sigma^2 + \partial^2 U(\sigma)/\partial\sigma^2 = m_\sigma^2 + 2g_2\sigma + 3g_3\sigma^2$ (see Ref. [14]). The polarization insertion $\Pi$ of Eq. (8) is also a $5 \times 5$ matrix (Eq. (13)), in which each entry is summed over all the baryons, except for the pion loop $\Pi^\pi_\sigma(q)$. The effect of the relatively large $\sigma$ width has been accounted for by including the contribution of the pion loop to $\Pi_\sigma$ [9]; the in-medium modification of the pion loop is neglected. The pion propagator $\Delta_\pi$ is obtained from Eq. (10) with $m_\sigma^2$ replaced by $m_\pi^2$.
The pion-loop polarization contribution to the $\sigma$-meson, and, in terms of the baryon propagator, the lowest-order $\sigma$, $\omega$, and $\sigma$-$\omega$ (mixed) polarizations for a baryon loop $B$, are given by the standard one-loop expressions. As in the case of the baryon propagator (Eq. (3)), the above polarization insertions (except $\Pi^\pi_\sigma$) can be expressed as the sum of a Feynman (F) part and a density-dependent (D) part, i.e., $\Pi = \Pi^F + \Pi^D$. The finite D-part has the form $G^D \times G^D + G^D \times G^F + G^F \times G^D$, which describes particle-hole excitations and also includes the Pauli blocking of $B\bar{B}$ excitations. Since the polarizations of the D-part are defined in Ref. [15], we do not repeat them here. The divergent F-part of the polarization insertions can be rendered finite by adding appropriate counterterms to the Lagrangian of Eq. (1). For the $\sigma$, each of the baryon loops and the $\pi$-loop have to be renormalized separately. For any baryon-loop contribution to the $\sigma$, the usual counterterm Lagrangian [5,6] is used; its coefficients $\alpha_2$ and $\zeta_\sigma$ can be obtained by imposing the condition that the propagator in vacuum ($m_B^* = m_B$) reproduces the "physical" properties of the $\sigma$-meson [15] (Eq. (19)). The renormalized $\sigma$-meson self-energy for a baryon loop then follows (Eq. (20)). For the $\pi$-loop we employ the renormalization condition in free space, $\Pi^{\pi(RF)}_\sigma(q_\mu^2) = 0$ at $q_\mu^2 = m_\sigma^2$ [9], which finally fixes the renormalized pion-loop contribution. For the $\omega$, only a wavefunction counterterm $\mathcal{L}_\omega = \zeta_\omega\,\omega_{\mu\nu}\omega^{\mu\nu}/4$ is required to make the polarization finite, employing the renormalization condition in vacuum (Eq. (22)). As mentioned before, the mixed part $\Pi^{(M)B}_\mu$ of Eq. (17) does not contribute to the vacuum polarization.

Upon solution of Dyson's equation, Eq. (8), the poles of the $\sigma$-$\omega$ propagator, which define the respective meson masses in the medium, are contained in the dielectric function $\varepsilon$ and occur when $\varepsilon = 0$ (Eq. (23)). By taking $\mathbf{q} = (0, 0, q)$, where $q = |\mathbf{q}|$, we obtain from Eq. (23) the transverse and longitudinal dielectric functions, defined in terms of the polarization insertions of Eqs. (13) and (16). The eigencondition for determining the collective excitation spectrum (i.e., finding the effective meson masses) is equivalent to searching for the zeros of the dielectric function. In particular, for a given three-momentum transfer $q \equiv |\mathbf{q}|$, the "invariant mass" of a meson ($\sigma$ or $\omega$) is $m_m^* = \sqrt{q_0^2 - q^2}$, where $q_0$ is obtained from the condition $\varepsilon = 0$. In the present study of meson mass modifications in the medium, we restrict ourselves to the meson branch in the time-like region ($q_\mu^2 > 0$).

Since the propagation of the strange $\sigma^*$-$\phi$ mesons is decoupled from that of the nonstrange $\sigma$-$\omega$ mesons, we may follow the same procedure as given in Eqs. (8)-(25) to obtain the effective masses of the strange mesons. In particular, a dielectric function (Eq. (23)) is obtained, but with the masses of the mesons and their couplings to the baryons corresponding to the $\sigma^*$ and $\phi$. Moreover, since a linear $\sigma^*$ interaction is used in this case, $m_\sigma^2$ in Eq. (10) should be replaced by the bare meson mass $m_{\sigma^*}^2$. The renormalization condition in RHA (Eq. (6)) is imposed at $q_\mu^2 = 0$, while those used in Eq. (19) for the $\sigma$- and $\sigma^*$-mesons are at $q_\mu^2 = m_\sigma^2$ and $q_\mu^2 = m_{\sigma^*}^2$, respectively. This difference yields an additional term in the vacuum fluctuation energy [15], so the total energy density for SHM is $\mathcal{E} = \mathcal{E}_{\rm RHA} + \sum_B (a_{\sigma B} + a_{\sigma^* B})$, where $\mathcal{E}_{\rm RHA}$ is the RHA energy of Eq. (5) and $a_{\sigma B}$ is the additional vacuum term for the $\sigma$ (a similar expression $a_{\sigma^* B}$ holds for the $\sigma^*$); for nucleons $a_{\sigma^* B} = 0$. The field equations are obtained by minimizing the energy density $\mathcal{E}$ with respect to each field.
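Before turning to the numerical results, here is a brief Python illustration of the pole search described above: locating the zero of a dielectric function at fixed three-momentum and converting the root $q_0$ into an invariant mass. The dielectric function used is a deliberately simple toy (a free meson propagator dressed with a constant polarization Pi0), not the full renormalized RPA polarization of the text, and all numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def dielectric(q0, q, m, pi0):
    # Toy dielectric function: eps = 1 - d0 * Pi0, with d0 the free meson
    # propagator 1/(q0^2 - q^2 - m^2) and Pi0 a constant polarization (toy).
    return 1.0 - pi0 / (q0**2 - q**2 - m**2)

def invariant_mass(q, m, pi0):
    # Solve eps(q0, q) = 0 for q0 in the time-like region, then return
    # m* = sqrt(q0^2 - q^2). Assumes |pi0| < m^2 so a time-like root exists.
    if pi0 == 0.0:
        return m
    pole = np.sqrt(q**2 + m**2)          # free-meson pole at fixed q
    if pi0 < 0.0:                        # attractive: root sits below the free pole
        lo, hi = q + 1e-6, pole - 1e-6
    else:                                # repulsive: root sits above the free pole
        lo, hi = pole + 1e-6, np.sqrt(pole**2 + pi0) + 100.0
    q0 = brentq(lambda x: dielectric(x, q, m, pi0), lo, hi)
    return np.sqrt(q0**2 - q**2)

if __name__ == "__main__":
    m_omega = 783.0                       # MeV, free omega mass
    for pi0 in (-1.0e5, 1.0e5):           # MeV^2, toy polarizations
        m_star = invariant_mass(1.0, m_omega, pi0)
        print(f"Pi0 = {pi0:+.0e} MeV^2 -> m* = {m_star:.1f} MeV")
```

In the actual calculation, the constant Pi0 would be replaced by the renormalized, momentum-dependent RPA polarization (including the scalar-vector mixing), and the root search repeated over density and strangeness fraction.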
At a given baryon density $n_B$ and strangeness fraction $f_S = (n_\Lambda + 2n_\Xi)/n_B$, the set of field equations is solved self-consistently in conjunction with the chemical equilibrium condition $2\mu_\Lambda = \mu_N + \mu_\Xi$ due to the reaction $\Lambda\Lambda \rightleftharpoons N\Xi$. The chemical potential of a baryon species $B$ is $\mu_B = [k_{F_B}^2 + m_B^{*2}]^{1/2} + g_{\omega B}\,\omega_0 + g_{\phi B}\,\phi_0$. The four saturation properties of nuclear matter (NM), namely the density $n_0 = 0.16\ {\rm fm}^{-3}$, binding energy $E/B = -16$ MeV, effective nucleon mass $m_N^*/m_N = 0.78$, and compression modulus $K = 300$ MeV, are used to fix the nucleon coupling constants $g_{\sigma N}$, $g_{\omega N}$ and the parameters $g_2$ and $g_3$ of the $\sigma$ self-interaction. The coupling constants for pure NM without hyperons are shown in Table I. When hyperons are included, i.e., for SHM, they contribute to $\mathcal{E}$ through their vacuum fluctuations even if their Fermi states are empty. This entails a redetermination of the coupling constants for the nucleons. The vacuum fluctuation contribution depends on the effective baryon mass, which in turn depends on the scalar-baryon coupling constants (see Eqs. (4) and (6)). Therefore, the $\sigma$ and $\sigma^*$ couplings to the hyperons ($Y$) should be predetermined. For this purpose, we adopt the SU(6) model, i.e., $g_{\sigma\Lambda}/g_{\sigma N} = 2/3$, $g_{\sigma\Xi}/g_{\sigma N} = 1/3$ for the $\sigma$-$Y$ couplings, and $g_{\sigma^*\Lambda}/g_{\sigma N} = \sqrt{2}/3$, $g_{\sigma^*\Xi}/g_{\sigma N} = 2\sqrt{2}/3$ for the $\sigma^*$-$Y$ couplings. Note that the nucleons do not couple to the strange mesons, i.e., $g_{\sigma^* N} = g_{\phi N} = 0$. The coupling constants of the nucleons for SHM are given in Table I; the remaining hyperon couplings are constrained by well-depths of $U_{\Lambda,\Xi} \approx 40$ MeV for a $\Lambda$ or $\Xi$ in a $\Xi$ "bath" with $n_\Xi \simeq n_0$ [11]. To determine the couplings of the pion to the scalar mesons, $g_{\sigma\pi}$ and $g_{\sigma^*\pi}$, we adjust them to reproduce the widths of the $\sigma$ and $\sigma^*$ in free space, i.e., $\Gamma^0_s = -\Im\,\Pi^{\pi(RF)}_s/m_s$ at $q_\mu^2 = m_s^2$. For a conservative estimate we consider $\Gamma^0_\sigma = 300$ MeV and $\Gamma^0_{\sigma^*} = 70$ MeV.

The stability of strange hadronic matter may be explored by considering its binding energy, defined as $E/B = \mathcal{E}/n_B - \sum_i Y_i m_i$, where the abundance is $Y_i = n_i/n_B$. In Fig. 1, we present the binding energy $E/B$ as a function of baryon density $n_B$ at various strangeness fractions $f_S$. With increasing $f_S$, the binding energy of SHM is found to increase, and the saturation point is shifted to higher density. This is a consequence of the opening of new degrees of freedom; the maximum binding is obtained for a large $f_S \approx 1.35$ at a relatively high density $n_B \approx 3n_0$. A further increase of $f_S$ forces the Fermi energy of the hyperons to increase, resulting in a decrease of binding. This finding is quite similar to that obtained in the quark-meson coupling model [16].

The variation of the baryon effective masses $m_B^*$ is shown in Fig. 2 as a function of density $n_B$ for varying strangeness $f_S$. It is observed that $m_B^*$ decreases with increasing $n_B$, with $m_N^*$ having the largest decrease rate and $m_\Xi^*$ the smallest at each $f_S$ value. Furthermore, with increasing $f_S$ the effective nucleon mass increases while the effective masses of the hyperons decrease at any density. This effect stems from the decrease of the nonstrange meson fields $\sigma$ and $\omega$ and the increase of the strange meson fields $\sigma^*$ and $\phi$ with increasing $f_S$. Since the nucleons couple only to $\sigma$ and $\omega$, $m_N^*$ is increased. The hyperons, however, couple to all the meson fields, resulting in a decrease of their effective masses with increasing $f_S$, especially for the $\Xi$, which has the strongest coupling to the strange mesons.
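To make the equilibrium bookkeeping above concrete, the following Python sketch evaluates the baryon chemical potentials and the residual of the condition $2\mu_\Lambda = \mu_N + \mu_\Xi$. All Fermi momenta, effective masses, field values, and couplings below are placeholder numbers, not the paper's fitted Table I values; in a full calculation they would come from the self-consistent solution of the field equations.

```python
import math

def chemical_potential(kF, m_star, g_omega, omega0, g_phi=0.0, phi0=0.0):
    # mu_B = sqrt(kF^2 + m*^2) + g_omegaB * omega0 + g_phiB * phi0, all in MeV.
    return math.sqrt(kF**2 + m_star**2) + g_omega * omega0 + g_phi * phi0

# Placeholder inputs (MeV); nucleons do not couple to the strange phi field.
omega0, phi0 = 20.0, 8.0
baryons = {            # kF,    m*,     g_omega, g_phi
    "N":      (260.0,  730.0,  9.0,  0.0),
    "Lambda": (210.0,  950.0,  6.0,  3.0),
    "Xi":     (160.0, 1150.0,  3.0,  6.0),
}

mu = {name: chemical_potential(kF, ms, gw, omega0, gp, phi0)
      for name, (kF, ms, gw, gp) in baryons.items()}
residual = 2.0 * mu["Lambda"] - (mu["N"] + mu["Xi"])
print({k: round(v, 1) for k, v in mu.items()})
print("equilibrium residual (MeV):", round(residual, 1))
# A root finder would adjust the composition (n_Lambda, n_Xi at fixed n_B and
# f_S) until this residual vanishes, alongside the meson field equations.
```
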
In Fig. 3, we display the "invariant mass" $m_m^* = \sqrt{q_0^2 - q^2}$ of the nonstrange $\sigma$-meson, $m_\sigma^*$ (left panels), and $\omega$-meson, $m_\omega^*$ (right panels), as a function of baryon density $n_B$ for different strangeness fractions $f_S$. We consider first the features observed for $m_\sigma^*$ at small three-momentum transfer $q = |\mathbf{q}| = 1$ MeV (top-left panel). For $f_S = 0$, $m_\sigma^*$ is found to decrease with increasing $n_B$ for small values of $n_B \leq n_0$. This reduction is caused by two competing effects: the vacuum polarization, which leads to a reduction of $m_\sigma^*$, dominates over the density-dependent dressing of the meson propagator, which causes an increase in $m_\sigma^*$. In fact, this reduction can be traced back to the corresponding reduction of $m_N^* < m_N$ in the medium (see Eq. (20)). However, the decrease in the scalar polarization depends on $m_B^{*2}$, $m_B^2$, and $m_\sigma^2$ in a complicated fashion. For densities above $n_0$, the density-dependent (D) part becomes increasingly dominant, resulting in an increase of $m_\sigma^*$. In SHM with $f_S \neq 0$, the vacuum polarization contribution from the hyperons and the nucleons causes a considerable suppression of $m_\sigma^*$ at low densities. At higher densities $m_\sigma^*$ increases; however, the rate of increase above $\sim n_0$ becomes smaller with increasing $f_S$. An explanation of this effect is as follows. The D-part of the $\sigma$ propagator is primarily determined by the nucleons, to which it has the strongest coupling. The increase of $f_S$ causes an increase in the Fermi momenta of the hyperons, while those of the nucleons decrease. Consequently, with increasing $f_S$ the decreasing D-part of the nucleons results in a slower rate of increase of $m_\sigma^*$ with $n_B$. At the small value of $q = 1$ MeV, the (density-dependent) effect of scalar-vector mixing is negligible (see Eq. (25)). On the other hand, for large values of $q = 500$ MeV (central-left panel) and $q = 1$ GeV (bottom-left panel), the mixing is more effective. (For the latter value, the present model may have been stretched to the extremes of its applicability.) It gives rise to a repulsion, which again leads to a decrease of $m_\sigma^*$ at high density. With increasing $f_S$, the onset of this decrease, i.e., the peak positions in the figure, are found to shift to higher densities. This is a manifestation of the shift of the saturation point of $E/B$ to higher $n_B$ as $f_S$ increases (see Fig. 1).

For the small value $q = 1$ MeV, the transverse and longitudinal invariant $\omega$-meson masses, $m_{\omega T}^*$ and $m_{\omega L}^*$, are practically identical, as evident in Fig. 3 (top-right panel). In contrast to the $\sigma$ mass, the vacuum polarization effect is much stronger (see Eq. (22)) and the density-dependent effect much weaker for the heavier $\omega$ mass. This causes, for $f_S = 0$, a substantial decrease of $m_\omega^*$ up to $n_B \approx 2n_0$, and a subsequent small increase with density. The reduction is more pronounced for SHM, where the total vacuum polarization effect is stronger and the D-part (mainly controlled by $N$) is weaker. In fact, for $f_S = 1.5$, $m_\omega^*$ decreases considerably even up to large densities. As a consequence, the crossing between $m_\sigma^*$ and $m_\omega^*$ at $q = 1$ MeV is shifted to lower $n_B$ for the SHM. At large values of $q = 500$ MeV and 1 GeV, the longitudinal mass $m_{\omega L}^*$ (solid line) and transverse mass $m_{\omega T}^*$ (dashed line) become well separated. It is found that $m_{\omega L}^*$ is reduced near nuclear matter density and finally increases with density, attaining values higher than $m_{\omega T}^*$. As in the $q = 1$ MeV case, the reduction in mass is stronger for SHM.
Because of the strong repulsion from the mixing at these high three-momentum values, $m_\sigma^*$ and $m_{\omega L}^*$ never cross each other. In Fig. 4, the "invariant mass" of the strange $\sigma^*$-meson, $m_{\sigma^*}^*$ (left panels), and $\phi$-meson, $m_\phi^*$ (right panels), are shown as a function of baryon density $n_B$ for different strangeness fractions. In this case the masses are determined by the hyperons, since the nucleons do not couple to the strange mesons. As for the nonstrange mesons, the masses $m_{\sigma^*}^*$ and $m_\phi^*$ in general decrease with increasing $n_B$. However, the decrease is more pronounced over the large density range explored here. This arises from the large vacuum fluctuation contribution of the hyperons and, in particular, from the small density-dependent part of the hyperons stemming from the large binding in SHM. At high densities $n_B \geq 4n_0$, in contrast to the nonstrange meson masses, the strange meson masses have higher values for larger $f_S$. This reversal in behavior for $m_{\sigma^*}^*$ and $m_\phi^*$ results from the increase of the Fermi momenta, and hence of the D-part of the hyperons, for large $f_S$. The scalar mass $m_{\sigma^*}^*$, however, becomes sensitive to high values of $q$ and drops at $f_S = 1.5$ below that for $f_S = 0.5$.

We have investigated the meson mass modification in strange hadronic matter as a function of baryon density for a fixed strangeness fraction. The results correspond to a metastable SHM. For a given $n_B$, an absolutely stable SHM could be obtained by determining the absolute minimum of the binding energy $E/B$ as a function of $f_S$. In this situation, we have found that the masses of the mesons in SHM undergo a drastic reduction over a wide range of density. Besides the large vacuum part, the minimum in the energy per baryon $E/B$ at each density enforces the smallest possible Fermi momenta, and hence the smallest density-dependent parts, for all the baryons.

It is worth mentioning here that in the linear Walecka model it has been demonstrated [5,6,17] that a self-consistent inclusion of the exchange (Fock) term makes an insignificant contribution to the binding energy when the two parameters of this model, $g_{\sigma N}$ and $g_{\omega N}$, are renormalized to reproduce the same NM saturation properties of baryon density and binding energy. Even the predicted values of the effective nucleon mass, $m_N^*$, and incompressibility, $K$, in this Hartree-Fock calculation are almost identical to the Hartree results [5]. However, the applicability of the linear Walecka model to moderate- and high-density phenomena can be misleading if the two parameters $m_N^*$ and $K$ at $n_0$ are not under control. In fact, it was shown [18] that in the nonlinear Walecka model, when the four parameters $g_{\sigma N}$, $g_{\omega N}$, $g_2$, and $g_3$ (the latter two from the $\sigma$ self-interaction) are fitted to the same NM saturation properties of $n_0$, $E/B$, $m_N^*$, and $K$ (as in our calculation), nearly all the properties obtained in the relativistic Hartree calculation differ from the mean field results by only $\approx 3\%$ even at $n = 10n_0$. (This is in contrast to the linear Walecka model results.) It is therefore expected that by further inclusion of exchange terms (in all the baryonic sectors) and pseudoscalar mesons in the present nonlinear Walecka model, and by performing a self-consistent calculation (with the parameters renormalized to the same NM saturation properties), the results for $E/B$ and $m_B^*$ will be practically unaltered from the Hartree calculation.
Consequently, exchange corrections from the nucleonic and strangeness sectors should also make an insignificant contribution, and they are therefore neglected in the present study of the modification of meson masses in the medium.

In summary, we have investigated the masses of the baryons ($N$, $\Lambda$, $\Xi$) and, in particular, of the nonstrange ($\sigma$, $\omega$) and strange ($\sigma^*$, $\phi$) mesons in stable strange hadronic matter. The ground state properties of SHM in the relativistic Hartree approximation are obtained by using nonlinear $\sigma$-$\omega$ and linear $\sigma^*$-$\phi$ Lagrangians. With increasing strangeness fraction, the effective mass of the nucleons increases while that of the hyperons decreases. The masses of all the mesons reveal a considerable reduction over a wide density range with increasing strangeness. This may be attributed to a large contribution from the vacuum polarization of the hyperons, which causes the decrease of the meson masses at small densities. The larger binding of the SHM, and therefore a smaller density-dependent part, helps to reduce the meson masses at high densities.

S.P. and S.G. acknowledge support from the Alexander von Humboldt Foundation.
ATR and H2AX Cooperate in Maintaining Genome Stability under Replication Stress

Chromosomal abnormalities are frequently caused by problems encountered during DNA replication. Although the ATR-Chk1 pathway has previously been implicated in preventing the collapse of stalled replication forks into double-strand breaks (DSBs), the importance of the response to fork collapse in ATR-deficient cells has not been well characterized. Herein, we demonstrate that, upon stalled replication, ATR deficiency leads to the phosphorylation of H2AX by ATM and DNA-PKcs and to the focal accumulation of Rad51, a marker of homologous recombination and fork restart. Because H2AX has been shown to play a facilitative role in homologous recombination, we hypothesized that H2AX participates in Rad51-mediated suppression of DSBs generated in the absence of ATR. Consistent with this model, increased Rad51 focal accumulation in ATR-deficient cells is largely dependent on H2AX, and dual deficiencies in ATR and H2AX lead to synergistic increases in chromatid breaks and translocations. Importantly, the ATM and DNA-PK phosphorylation site on H2AX (Ser139) is required for genome stabilization in the absence of ATR; therefore, phosphorylation of H2AX by ATM and DNA-PKcs plays a pivotal role in suppressing DSBs during DNA synthesis in instances of ATR pathway failure. These results imply that ATR-dependent fork stabilization and H2AX/ATM/DNA-PKcs-dependent restart pathways cooperatively suppress double-strand breaks as a layered response network when replication stalls.

Genome maintenance prevents mutations that lead to cancer and age-related diseases. A major challenge in preserving genome integrity occurs in the simple act of DNA replication, in which failures at numerous levels can occur. Besides the misincorporation of nucleotides, it is during this phase of the cell cycle that the relatively stable double-stranded nature of DNA is temporarily suspended at the replication fork, a structure that is susceptible to collapse into DSBs. Replication fork stability is maintained by a variety of mechanisms, including activation of the ATR-dependent checkpoint pathway. The ATR pathway is activated upon the generation and recognition of extended stretches of single-stranded DNA at stalled replication forks (1-4). Genome maintenance functions for ATR and its orthologs in yeast were first indicated by increased chromatid breaks in ATR−/− cultured cells (5) and by the "cut" phenotype observed in Mec1 (Saccharomyces cerevisiae) and Rad3 (Schizosaccharomyces pombe) mutants (6-9). Importantly, subsequent studies in S. cerevisiae demonstrated that mutation of Mec1 or the downstream checkpoint kinase Rad53 led to increased chromosome breaks at regions of the genome that are inherently difficult to replicate (10), and to a decreased ability to reinitiate replication fork progression following DNA damage or deoxyribonucleotide depletion (11-14). Consistent with the role of the ATR-dependent checkpoint in replication fork stability, common fragile sites, located in late-replicating regions of the genome, are significantly more unstable (5-10-fold) in the absence of ATR or Chk1 (19,20). Because these sites are favored regions of instability in oncogene-transformed cells and preneoplastic lesions (30,31), it is possible that the increased tumor incidence observed in ATR-haploinsufficient mice (5,32) may be related to subtle increases in genomic instability.
Together, these studies indicate that maintenance of replication fork stability may contribute to tumor suppression. It is important to note that prevention of fork collapse represents an early response to problems occurring during DNA replication. In the event of fork collapse into DSBs, homologous recombination (HR) has also been demonstrated to play a key role in genome stability during S phase by catalyzing recombination between sister chromatids as a means to re-establish replication forks (33). Importantly, a facilitator of homologous recombination, H2AX, has been shown to be phosphorylated under conditions that cause replication fork collapse (18,34). If ATR prevents the collapse of stalled replication forks into DSBs, and H2AX facilitates HR-mediated restart, the combined deficiency in ATR and H2AX would be expected to dramatically enhance the accumulation of DSBs upon replication fork stalling. Herein, we utilize both partial and complete elimination of ATR and H2AX to demonstrate that these genes work cooperatively in non-redundant pathways to suppress DSBs during S phase. As discussed, these studies imply that the various components of replication fork protection and regeneration cooperate to maintain replication fork stability. Given the large number of genes involved in each of these processes, it is possible that combined deficiencies in these pathways may be relatively frequent in humans and may synergistically influence the onset of age-related diseases and cancer.

EXPERIMENTAL PROCEDURES

MEF Isolation, Lentivirus Infection, and ATR Deletion-Murine embryonic fibroblasts (MEFs) were harvested from day 14.5 postcoitus embryos and grown in Dulbecco's modified Eagle's medium (DMEM, Cellgro) supplemented with 10% FBS (Hyclone) in a 3% oxygen incubator and frozen within 3 doublings after isolation. For shRNA lentivirus infections, cells were plated at 3 × 10⁶ cells per 10-cm plate in 0.5% FBS/DMEM in the presence of virus (as described below) and distributed into additional plates by split/plating 24 h later (1 × 10⁶ cells per 10-cm plate in 0.5% FBS/DMEM). The next day, media was changed to 0.1% FBS/DMEM; 24 h later, cells were stimulated with 20% FBS/DMEM for 16-19 h to enrich and normalize for S phase (supplemental Fig. S1). Plates were then harvested or treated as described ("Results"). Lentivirus constructs expressing short hairpin RNAs (shRNAs) from the H1 promoter that target ATR (5′-gaattgttattgtggtaaattcaagagatttgccacagtaacaattc) or a control sequence (5′-gtactagttcatggttattttcaagagagataaccatggactagtac) were generated using the H1UG1 vector, which co-expresses enhanced green fluorescent protein from the human ubiquitin C promoter. Lentiviruses were produced as described (52), titered by enhanced green fluorescent protein expression, and delivered to MEFs at a multiplicity of infection of 5-10, which consistently yielded 95-98% infection rates. To delete ATR in ATR flox/− Cre-ERT2+ MEFs (18,53), cells were enriched in G0 as described above and treated with 0.2 µM 4-hydroxytamoxifen (4-OHT, Calbiochem) for 48 h prior to serum stimulation.

Aphidicolin Sensitivity Assay-MEFs were infected with lentiviral constructs on the first day of synchronization (above) and replated 1 day later at 2 × 10⁵ cells per 10-cm plate. Upon serum stimulation, cells were left untreated or treated with 0.2 or 0.4 µM aphidicolin. Cells were grown for a total of 4 days, with an intervening replating at day 2. Media and aphidicolin were replenished every 24 h.
At replating and final harvest, cells were counted and the total population doublings were determined.

Chemical Inhibitors-Aphidicolin (Calbiochem), a DNA polymerase inhibitor, was used at a final concentration of 5 µM unless otherwise indicated. The ATM inhibitor KU-55933 (Sigma) and the DNA-PK inhibitor NU7026 (Calbiochem) were added to media at a concentration of 10 µM 1 h prior to cell collection or aphidicolin treatment.

Quantitative Reverse Transcriptase-PCR-RNA was extracted from 1 × 10⁶ cells with TRIzol reagent (Invitrogen) according to the manufacturer's instructions, and cDNA was produced with the cDNA Archive Kit (Applied Biosystems). Quantitative PCR was performed using TaqMan Universal PCR Master Mix (Applied Biosystems). Primers against β-actin and ATR (Applied Biosystems, Mm00607939_s1, Mm01223656_m1) were used for ΔΔCT analysis. Analysis was performed using an Applied Biosystems 7900HT Sequence Detection System, with amplification quantified by MGB probes.

Flow Cytometric Quantification of BrdUrd Incorporation, Phospho-H2AX, and Phospho-histone H3-G0-enriched cells were harvested 16-19 h after serum stimulation and fixed in 70% EtOH prior to staining. Cells were analyzed for phospho-H2AX and DNA content as previously described (54). For quantification of S phase, cells were incubated with 10 µM BrdUrd (Roche) for 30 min, then harvested and fixed in 70% EtOH, acid denatured (3 N HCl containing 0.5% Tween 20), and neutralized with 0.1 M sodium borate, pH 8.5 (Sigma). Following staining with anti-BrdUrd (BD Pharmingen) and fluorescein isothiocyanate-conjugated secondary antibodies (Jackson), cells were stained with propidium iodide (50 µg/ml propidium iodide, 0.1% Triton X-100, 50 µg/ml RNase, 50 mM EDTA) for DNA content. To determine the percentage of phospho-H2AX-positive cells in mitosis, cells were first permeabilized and stained for phospho-H2AX (54) with an allophycocyanin-conjugated secondary antibody, followed by staining for phospho-histone H3 (Upstate) at a 1:200 dilution and a fluorescein isothiocyanate-conjugated secondary antibody at a 1:500 dilution. For each procedure, cells were analyzed by FACS using a FACSCalibur (BD Biosciences) and CellQuest software.

Immunocytochemical Detection of BrdUrd, Rad51, and Phospho-H2AX-MEFs were plated on round coverslips and synchronized as described above. After stimulation, cells were treated with 10 µM BrdUrd for 30 min, fixed with 3% paraformaldehyde, 2% sucrose in phosphate-buffered saline for 10 min at room temperature, then permeabilized in 0.5% Triton X-100 in phosphate-buffered saline for 10 min on ice. Cells were stained using anti-Rad51 (Santa Cruz) or anti-phospho-H2AX antibody (Upstate), followed by Alexa Fluor 594 secondary antibody (Invitrogen) detection. Cells/antibodies were then fixed (3% paraformaldehyde, 2% sucrose, phosphate-buffered saline) for 10 min at room temperature, denatured with 2 N HCl for 5 min at room temperature, washed, and stained with anti-BrdUrd (BD Pharmingen), Alexa Fluor 488 (Invitrogen) secondary antibody, and 4′,6-diamidino-2-phenylindole. Cells were visualized with a Nikon Eclipse 80i fluorescence microscope with a ×100 objective lens. Rad51 foci and phospho-H2AX staining were quantified from images by double-blind methods.

Metaphase Spreads and Spectral Karyotyping-To arrest cells in M phase, 0.5 µM nocodazole (Calbiochem) was added for 4 h, and mitotic spreads were prepared as described (18). SYTOX Green (Invitrogen) nucleic acid stain was used (1:50,000 in phosphate-buffered saline, pH 7.9).
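As a brief aside before the imaging details continue: the ΔΔCT quantification used in the qRT-PCR paragraph above reduces to simple arithmetic. The Python sketch below, with made-up CT values and ATR/β-actin only as the target/reference pairing named above, illustrates the standard 2^(−ΔΔCT) relative-expression calculation.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # Standard 2^(-ddCT) relative quantification:
    # dCT = CT(target) - CT(reference), computed for sample and calibrator;
    # ddCT = dCT(sample) - dCT(calibrator).
    d_ct_sample = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical CT values: an ATR-knockdown sample vs. a wild-type calibrator,
# each normalized to beta-actin.
fold = relative_expression(ct_target=26.5, ct_ref=18.0,
                           ct_target_cal=25.0, ct_ref_cal=18.0)
print(f"ATR mRNA relative to control: {fold:.2f}")  # ~0.35, i.e., ~65% knockdown
```
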
Metaphases were visualized using a Nikon Eclipse 80i fluorescence microscope with a ×100 objective lens. Spectral karyotyping was performed following the DNA spectral karyotyping hybridization and detection protocol from Applied Spectral Imaging. Images were captured on an ASI CCD camera and interferometer, using an Olympus BX61 microscope with a ×60 objective lens. Data were managed and analyzed using Case Data Manager software (version 5.0) from ASI.

H2AX Wild-type and H2AX S139A Add-back Constructs-Full-length murine H2AX and H2AX S139A were PCR amplified (PfuTurbo polymerase, Stratagene) from cDNA (Open Biosystems, MMM1013-64604) using the following primers: a common forward primer (5′-gcattctagagccgccaccatgtccggacgcggcaagaccggcggcaag) and one of two differing reverse primers (5′-caggggatccttagtactcctgagaggcctgcgaggccttctt for H2AX wild-type and 5′-caggggatccttagtactcctgagcggcctgcgaggccttctt for H2AX S139A). Amplified products were subsequently cloned into the HFUW lentiviral vector for expression from the human ubiquitin C promoter. Protein expression and virus titering were assayed by Western blot and FACS analysis for H2AX using a total H2AX antibody (Bethyl) at the manufacturer's recommended concentrations (supplemental Fig. S3).

RESULTS

Intermediate Suppression of ATR Leads to Increased H2AX Phosphorylation in S Phase upon Replication Stress-We have previously shown that ATR deletion leads to increased H2AX phosphorylation upon replication stalling (18). However, through overexpression of a kinase-inactive mutant, ATR has also been reported to be responsible for H2AX phosphorylation in response to replication stress (40). We reasoned that these contrasting results may be due to differences in the levels of ATR pathway inhibition, as dominant-negative overexpression leads only to hypomorphic suppression (55). To examine the possibility that the level of ATR suppression strongly influences its effects on H2AX phosphorylation, wild-type and ATR+/− MEFs were transduced with lentivirus expressing an shRNA targeting either ATR (shATR) or a non-targeting control shRNA (shCtrl). Quantitative reverse transcriptase-PCR analysis demonstrated that ATR mRNA was reduced 61% upon shATR expression in wild-type cells (Fig. 1A). As previously reported (5), ATR+/− cells maintained only 55% of the wild-type ATR mRNA level; however, this level decreased to 16% of wild-type when ATR+/− cells expressed shATR (ATRhypo, Fig. 1A). ATR protein levels decreased in a manner consistent with the decrease in mRNA (Fig. 1B).

FIGURE 1. Partial and complete suppression of ATR causes increased H2AX phosphorylation and increased reliance on H2AX for cellular viability upon replication fork stalling. A, quantification of ATR mRNA levels in ATR+/+ and ATR+/− MEFs following shRNA-mediated knockdown. G0-enriched MEFs were infected with shRNA-expressing lentivirus ("Experimental Procedures") and stimulated to enter S phase. RNA was then isolated, and ATR mRNA was quantified by quantitative PCR, normalized to the β-actin control, and shown relative to wild-type ATR levels. Standard errors are represented by bars at the top of each column. B, ATR protein level as detected by immunoblot following shRNA-mediated knockdown, as described in A. mTOR was used as a loading control. C, detection of H2AX Ser139 phosphorylation following replication stress and varying degrees of ATR knockdown. MEFs with ATR knockdown were untreated, or treated for 1 or 2 h with aphidicolin prior to collection.
ATR Δ/− cells were generated from ATR flox/− Cre-ERT2+ cells treated with 4-OHT as described ("Experimental Procedures"). Immunoblots were probed for phospho-H2AX, with total (non-phospho) H2AX as a protein-level control. As positive and negative controls, H2AX−/− and wild-type MEFs were treated with 10 gray of ionizing radiation (IR) and harvested 45 min later.

To determine whether partial suppression of ATR in combination with replication stress leads to H2AX phosphorylation, wild-type, ATR-suppressed (e.g., ATRhypo), and ATR-deleted (ATRΔ/−) cells were serum-starved at the time of ATR depletion and subsequently stimulated to enter the cell cycle. At peak S phase, cells were treated with aphidicolin (5 µM). For each condition, S phase levels were similar at the time of aphidicolin treatment (supplemental Fig. S1). Although only a small increase in H2AX phosphorylation was observed in wild-type cells following aphidicolin treatment (Fig. 1C, lanes 1-3), increasing levels of H2AX phosphorylation were observed upon aphidicolin treatment as ATR abundance was reduced, culminating with the highest levels observed in ATRΔ/− cells.

The stimulatory effect of ATR suppression on H2AX phosphorylation implied that H2AX may perform a salvage-pathway role in response to replication stress. If so, then dual suppression of ATR and H2AX in the presence of replication stress would be expected to diminish viability further than suppression of either gene alone. To test this hypothesis, ATR+/− H2AX−/− MEFs were generated, and the ATR transcript was further suppressed by shATR expression, as described above. The end-point proliferation of these cells was compared with that of control MEFs in the presence or absence of low doses of aphidicolin (0.2 and 0.4 µM). These concentrations of aphidicolin only partially inhibit DNA polymerase processivity and have been shown to cause increased fragile site expression in ATR-deficient cells (19). The expansion of wild-type and H2AX−/− MEFs expressing shCtrl, and of ATRhypo MEFs, was only marginally inhibited by low doses of aphidicolin (20-30%). However, proliferation of ATR+/− H2AX−/− MEFs expressing the shATR hairpin (ATRhypo H2AX−/−) was significantly suppressed by low doses of aphidicolin, culminating in a greater than 60% reduction in the presence of 0.4 µM aphidicolin (Fig. 1D). These results indicate that H2AX plays an important function in the biological response to replicative stress specifically under conditions of ATR dysfunction.

H2AX Phosphorylation Occurs in S Phase and Is Not the Result of Premature M Phase Entry-It has been suggested previously that double-strand breaks resulting from aphidicolin treatment of ATR-deficient cells are attributable to premature mitotic entry (19). By flow cytometry, aphidicolin treatment led to an increased frequency of ATRΔ/− cells that exhibited detectable H2AX phosphorylation (Fig. 2A). However, co-detection of phosphorylated H2AX with phospho-histone H3, a mitotic marker, indicated that nearly all aphidicolin-treated ATRΔ/− cells with detectable levels of H2AX phosphorylation were phospho-histone H3 negative (Fig. 2A). These results indicate that H2AX phosphorylation is not the result of premature mitotic entry, consistent with previous findings that ATRΔ/− MEFs resist premature mitotic entry under replication stress (18).
Co-staining for phospho-H2AX and DNA content (propidium iodide) indicated that the increased H2AX phosphorylation in ATRΔ/− cells occurred in S phase, as it was observed predominantly in cells with intermediate DNA content, between 2N and 4N (Fig. 2B). Furthermore, a 30-min incorporation of BrdUrd prior to aphidicolin treatment revealed that 99 ± 0.01% (S.E.) of phospho-H2AX-positive, aphidicolin-treated ATRΔ/− cells were also positive for BrdUrd, as determined by immunocytochemical detection (data not shown). These results further substantiate that phosphorylation of H2AX in ATR-deficient cells occurs primarily during DNA replication.

ATM and DNA-PKcs Collaborate in Phosphorylating H2AX upon ATR Suppression-H2AX phosphorylation is correlated with DSB formation (34-38). Our data are consistent with a model in which ATR is required to preserve replication fork stability and prevent collapse into DSBs, which lead to H2AX phosphorylation. However, the kinases responsible for this phosphorylation in this context have not been previously defined. Given their role in phosphorylating H2AX in response to ionizing radiation-induced DSBs (37-39), we sought to determine the relative contributions of ATM and DNA-PKcs to the H2AX phosphorylation observed upon ATR suppression. To begin to address this issue, ATRhypo MEFs were treated with pharmacological inhibitors of ATM and DNA-PKcs just prior to aphidicolin treatment. The concentrations of inhibitors used in these experiments do not affect ATR activity (56,57). Under these conditions, both ATM and DNA-PKcs inhibitors only partially suppressed H2AX phosphorylation upon replication stalling in ATR-depleted cells (Fig. 3A). However, it was conceivable that the efficiency of kinase inhibition may have been partial (supplemental Fig. S2) and not equivalent between these two compounds. To further confirm the roles of ATM and DNA-PKcs, ATRhypo ATM−/− and ATRhypo DNA-PKcs−/− MEFs were generated, synchronized, and treated with aphidicolin as described above (Fig. 1). Again, S phase levels were similar between all cell lines at the time of aphidicolin treatment (supplemental Fig. S1). As shown in Fig. 3B, the absence of ATM potently suppressed aphidicolin-induced H2AX phosphorylation (ATRhypo ATM−/− cells) in comparison to ATRhypo cells. ATRhypo DNA-PKcs−/− cells also exhibited suppressed levels of H2AX phosphorylation (Fig. 3C). In either cell line, ATRhypo ATM−/− or ATRhypo DNA-PKcs−/−, residual H2AX phosphorylation was largely ablated by chemical inhibition of the other kinase (Fig. 3, B, lanes 10-12, and C, lanes 10-12). These data indicate that the H2AX phosphorylation stimulated by replication stress in ATR-deficient cells is co-dependent on ATM and DNA-PKcs.

Increased Rad51 Foci Formation in ATR-deficient Cells Requires H2AX-Collapsed replication forks utilize homologous recombination to catalyze invasion of the DSB ends into the unbroken complementary DNA on the sister chromatid, ultimately restoring active replication forks. Because H2AX has been implicated in modulating the efficiency of homologous recombination (43,44) and Rad51 accumulation (37,45-51), we asked whether ATR deletion led to an increased frequency of homologous recombination intermediates and whether H2AX played a facilitative role in this process. To do so, we quantified Rad51 focal accumulation in wild-type and ATR-deleted cells in the presence and absence of H2AX.
ATR Δ/− and ATR Δ/− H2AX −/− cells were generated by treating G0-enriched ATR flox/− Cre-ERT2+ and ATR flox/− H2AX −/− Cre-ERT2+ MEFs with 4-OHT to acutely activate Cre recombinase (18, 53). Subsequently, 4-OHT was removed, and cells were stimulated to re-enter the cell cycle. Consistent with ATR absence leading to elevated DSB formation and an increased reliance on HR, ATR Δ/− cells exhibited a 2-fold or greater increase in Rad51 foci compared with wild-type or H2AX −/− cells, both in the presence and absence of aphidicolin treatment (Fig. 4). Importantly, the elevated levels of Rad51 foci in ATR Δ/− cells were significantly suppressed in the absence of H2AX (ATR Δ/− H2AX −/− cells), both in aphidicolin-treated and untreated cells (Fig. 4B). These results indicate that the focal accumulation of Rad51 that occurs upon ATR deletion is largely dependent on H2AX. ATR and H2AX Cooperate in Maintaining Genome Stability under Replication Stress-Increased accumulation of Rad51 foci in ATR-deficient cells and suppression of these foci in the absence of H2AX are consistent with a co-dependence on ATR and H2AX for maintaining genome stability in S phase. That is, failure of the ATR pathway to maintain stalled replication forks leads to an increased dependence on H2AX to facilitate replication fork restart through homologous recombination. If this model is correct, then DSBs resulting from replication fork stalling in ATR-deficient cells should be more persistent in the absence of H2AX, leading to a synergistic increase in genomic instability. However, the decreased abundance of Rad51 foci in the absence of H2AX could also be an indication of accelerated repair and attenuated DSB persistence. To discriminate between these possibilities, we quantified the effect of combined ATR and H2AX suppression on chromatid breaks in the presence and absence of aphidicolin (Fig. 5). As described previously (18), an increase in chromatid breaks was observed upon ATR deletion alone (ATR Δ/− versus wild-type MEFs) (Fig. 5B). However, this level of breakage was further elevated when combined with H2AX deficiency, confirming that ATR deletion led to an increased dependence on H2AX to suppress chromatid breaks (ATR Δ/− versus ATR Δ/− H2AX −/−, Fig. 5B). The frequency of chromatid breaks in ATR Δ/− H2AX −/− cells was significantly greater than the combined frequency observed in cells deficient for ATR or H2AX alone (p value = 0.045), indicating that the dual loss of these genes led to a synergistic increase in genomic instability. To confirm these results and investigate the interdependence of ATR and H2AX during replication stress, metaphase spreads were analyzed from controls (wild-type and H2AX −/− MEFs expressing the shCtrl hairpin) and from ATR hypo and ATR hypo H2AX −/− MEFs that were left untreated or pulse-treated with aphidicolin for 2 h and then released into M phase. A significant increase in chromatid breaks was observed in ATR hypo cells compared with either wild-type or H2AX −/− control cells, and this instability was further elevated following aphidicolin treatment (p ≤ 0.05). Moreover, similar to ATR Δ/− cells, the genomic instability observed in untreated ATR hypo cells was synergistically increased by H2AX deletion in comparison to suppression of ATR or H2AX alone (p value = 0.001).
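The p values quoted above come from Student's t test; a minimal sketch of that comparison on per-metaphase break counts is shown below. The counts are hypothetical placeholders, not the data behind Fig. 5.

```python
# Two-sample Student's t test on chromatid breaks per metaphase.
# Hypothetical counts; scipy.stats.ttest_ind implements the test named in the text.
from scipy import stats

breaks_atr_hypo           = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
breaks_atr_hypo_h2ax_null = [6, 8, 5, 9, 7, 6, 10, 8, 7, 9]

t_stat, p_value = stats.ttest_ind(breaks_atr_hypo, breaks_atr_hypo_h2ax_null)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```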
Interestingly, because ATR Δ/− cells had significantly fewer chromatid breaks than ATR hypo H2AX −/− cells (p value = 0.05), these data indicate that partial loss of ATR in an H2AX-null background is more destabilizing than the complete deletion of ATR alone. As noted above, the level of genomic instability generated from ATR deficiency was enhanced by aphidicolin treatment (Fig. 5B); however, this instability was significantly exacerbated by the absence of H2AX (p value = 0.045). The level of instability quantified from ATR hypo H2AX −/− cells treated with aphidicolin is likely underestimated, as spreads with greater than 20 chromatid breaks were frequently observed (Fig. 5A, lower right image) and were classified by the lowest reliable estimate (20 breaks). The increased genomic instability generated by dual ATR and H2AX deficiency demonstrates co-dependent roles for these genes in maintaining genome integrity under replication stress. To investigate whether phosphorylation of H2AX was critical for genome stabilization in response to ATR suppression, lentiviruses encoding wild-type H2AX and a serine 139 to alanine mutant (H2AX S139A) were used to complement ATR +/− H2AX −/− cells 2 days prior to shRNA-mediated ATR suppression. Cells were then left untreated or pulse-treated with aphidicolin, and chromosome spreads were analyzed as described above. Consistent with an important genome-stabilizing function for H2AX-Ser139 phosphorylation, complementation of H2AX-null cells with the H2AX S139A mutant failed to suppress the genomic instability observed in ATR hypo H2AX −/− cells in the presence of aphidicolin (Fig. 5B). In contrast, ectopic expression of wild-type H2AX decreased the genomic instability of ATR hypo H2AX −/− cells to a level similar to that observed in ATR hypo cells. Together, these data indicate that ATR and H2AX cooperate in maintaining genome integrity during DNA synthesis and that phosphorylation of H2AX by ATM and DNA-PKcs (Fig. 3) plays a pivotal role in genome maintenance in instances of ATR pathway failure (Fig. 5B). Loss of ATR and H2AX Leads to an Increase in Translocation Events-Our results are consistent with a model in which dual deficiency of ATR and H2AX leads to the accumulation of DSBs. According to this model, the absence of H2AX leads to an increased persistence of DSBs due to defective homologous recombination-mediated restart. One expectation of this model is that alternative, potentially error-prone DSB repair mechanisms may have a greater opportunity to generate mutagenic recombination events, such as chromatid translocations, in ATR/H2AX dual-deficient cells. Indeed, the combined loss of ATR and H2AX in ATR Δ/− H2AX −/− cells led to a dramatic increase in the frequency of chromatid translocations, corresponding to 0.1 translocations per metaphase. This frequency was 4- and 7.5-fold greater than that observed in ATR Δ/− (p = 0.030) and H2AX −/− cells (p = 0.004), respectively. The observed increase in translocation events in ATR Δ/− H2AX −/− cells is consistent with a decrease in Rad51-mediated HR repair in the absence of H2AX (Fig. 4) and an increased reliance on error-prone DSB repair mechanisms (Fig. 6). DISCUSSION Our studies indicate that ATR and H2AX provide cooperative safeguards to ensure genome stability during DNA replication.
According to this model, activation of the ATR pathway serves as a primary mechanism to stabilize stalled forks and prevent their collapse into DSBs, a function that is well supported by studies in yeast, Xenopus extracts, and mammalian cells in which ATR or ATR orthologs have been mutated or depleted (10-13, 17-19, 21, 58). However, in instances of ATR pathway failure, DSBs are generated, stimulating the ATM- and DNA-PKcs-dependent phosphorylation of H2AX (Fig. 3). The co-dependence of H2AX phosphorylation on ATM and DNA-PKcs (Fig. 3) indicates that H2AX represents an important nexus at which these two DSB-responsive kinases converge. Consistent with H2AX playing a facilitative role in replication fork stability by promoting homologous recombination-mediated fork restart, the increased focal accumulation of Rad51 in ATR-deleted cells was largely dependent on H2AX (Fig. 4). Furthermore, synergistic increases in chromatid breaks and translocations were observed in ATR-deleted cells that lacked H2AX, suggesting that the absence of H2AX enhanced the persistence of DSBs and their accessibility to error-prone DNA repair mechanisms (Figs. 5 and 6). Thus, whereas ATR is an important player in preventing replication fork instability, our data are consistent with ATM, DNA-PKcs, and H2AX providing supportive roles by acting in concert as a salvage pathway to assist in repairing collapsed replication forks and restoring DNA replication (Fig. 7). FIGURE 5. Combined loss of ATR and H2AX leads to increased DSBs. A, representative images of metaphase spreads from wild-type, ATR hypo, ATR hypo H2AX −/−, and aphidicolin-treated ATR hypo H2AX −/− cells. shRNA-mediated suppression of ATR was conducted as described in the legends to Figs. 1 and 3. Cells were collected 4 h after nocodazole treatment, and metaphase spreads were prepared. B, average number of chromatid breaks/metaphase upon suppression of ATR and H2AX. MEFs of the indicated genotypes were G0-enriched, treated to suppress ATR levels, and stimulated to enter the cell cycle as described under "Experimental Procedures" and in the legend to Fig. 4. Mitotic spreads were isolated without prior treatment or following pulse treatment with 5 μM aphidicolin for 2 h, followed by a 2-h recovery period and subsequent collection in M phase (4 h nocodazole treatment). Identical procedures were performed on ATR hypo H2AX −/− MEFs complemented with wild-type H2AX or a serine 139 to alanine H2AX mutant (H2AX S139A), generated as described under "Experimental Procedures." Average values from three independent experiments are depicted. Standard errors are represented by bars at the top of each column, and p values were calculated by Student's t test. The mechanism by which the ATR pathway preserves replication fork stability is unclear (59); however, several studies have indicated functions both in preserving the replication fork structure and in stimulating restart. Studies on ATR orthologs in yeast and ATR-depleted Xenopus extracts indicate that ATR regulates the continued association of replicative DNA polymerases at the fork, both upon stalling and during restart following camptothecin-mediated collapse (17, 21, 58). In addition, it has been shown that Chk1, a conventional protein kinase regulated by ATR, is required for the efficient formation of Rad51 foci following ionizing radiation-induced damage (27). In light of these findings, it is conceivable that the increased level of Rad51 accumulation in ATR-deleted cells (Fig. 4) is underrepresented, due to deficiencies in Chk1 activation.
Our results in no way contradict the possibility that ATR plays important roles both in the prevention of fork collapse and in the reinitiation of replication. However, because Rad51 foci in ATR Δ/− cells increase significantly over those observed in wild-type cells (Fig. 4B), our data further support a role for ATR in preventing the emergence of DSBs following replication fork stalling. In this context, H2AX assumes an ATR-independent support function that assists in Rad51 accumulation and the suppression of persistent DSBs. Although several lines of evidence demonstrate that H2AX plays a facilitative, but non-essential, role in homologous recombination (43, 44), the exact mechanism of its involvement remains unclear. Rad51 accumulation at DSBs is not fully dependent on H2AX (43, 48); however, several studies have indicated a partial requirement (43, 45-51). For example, focal accumulation of BRCA1, which is required for efficient Rad51 foci formation, is reduced in H2AX-deficient murine cells (43, 51). Similarly, general ubiquitination of chromatin and the focal accumulation of ubiquitinated FANCD2 following DNA damage are strongly impaired by the absence of H2AX, and these events appear to be required for accumulation of Rad51 at damage sites (45-47, 49, 50). On the other hand, in DT40 cells, the involvement of H2AX phosphorylation in Rad51 recruitment is predominantly early in the DSB response and is strongly compensated for by an H2AX-independent mechanism catalyzed by XRCC3 (48). An H2AX-dependent mechanism for Rad51 accumulation that is distinct from the relatively well-defined core pathway (BRCA1, BRCA2, XRCC3, etc.) is additionally supported by the further suppression of homologous recombination achieved by combining H2AX deficiency with hypomorphic levels of BRCA1 and Rad51 (44). Regardless of the specific mechanism of involvement, the accumulation of Rad51 resulting from ATR deletion appears to rely largely on H2AX (Fig. 4). Consistent with decreased Rad51-mediated HR and elevated usage of error-prone repair, combined deletion of ATR and H2AX led to a significant increase in translocation events. It is not clear at the present time whether increased translocations represent 1) default repair mechanisms acting under conditions of decreased HR, or 2) the products of misdirected ATM and DNA-PKcs activity in the absence of H2AX. Together, our data are consistent with a layered checkpoint response governing genome stability upon replication stress. According to the model described herein, this process is regulated by two main functional groups of proteins: the ATR fork-stabilizing pathway and the ATM-, DNA-PKcs-, and H2AX-mediated DSB salvage pathway. Given the large number of genes associated with each of these networks (the ATR- and ATM-dependent checkpoint pathways, the Rad52 epistasis group, ubiquitin transfer complexes, etc.), combinations of synthetic interactions between these two groups may occur frequently in human populations. Both ATR and H2AX have been implicated in suppressing cancer (18, 60-62). Because combined reductions in fork-stabilizing and restart pathways can lead to dramatic increases in DSBs during normal DNA synthesis, such combined deficiencies would be expected to strongly influence the onset of age-related diseases (via tissue degeneration) and cancer (via tumor suppressor gene loss of heterozygosity, translocation events, etc.).
Thus, our studies are consistent with the genetic etiology of cancer and other age-related diseases residing both in specific mutations and their context within deficiencies in networks that govern upstream stabilization or salvage pathways.
Homicide-Suicide Partners: A Simulation of Injuries Death by homicide-suicide, or dyadic death, is rare, with the nature of the death varying from case to case. The perpetrators are usually males and most often use weapons available in their vicinity to commit the crime. This case presents an instance of dyadic death in which the perpetrator used multiple methods to kill his intimate partner, mirrored similar injuries on himself, and finally committed suicide by hanging. It depicts a rare murder-suicide in which the victim and the perpetrator died by different methods, but a mirroring pattern of fatal injuries was observed on each intimate partner: the non-fatal injury on one was a facsimile of a fatal injury on the other. Introduction The term homicide-suicide refers to cases in which the perpetrator of a homicide takes his or her own life after killing the victim [1,2]. Several terms have been used to describe it, including murder-suicide, dyadic death, and homicide followed by suicide, with special mention of such instances in the literature of the Chinese Ming dynasty and in Greek tragedies [3,4]. Dyadic deaths are relatively unusual, with global mortality rates ranging between 0.02 and 0.46 per 1,000,000 and significant national and regional variations [5]. Marzuk et al. were the first to propose a classification based on the relationship between the victim and the perpetrator and labeled the killing of a spouse/intimate partner as uxoricide-suicide/homicide-suicide in a consortial relationship [6,7]. In intimate partner homicide-suicides, perpetrators are usually male [8]. Usually, the perpetrator commits suicide immediately or within a week following the event [7]. This study describes a rare form of dyadic death in which multiple methods were used to kill the intimate partner, after which the perpetrator killed himself by hanging, having sustained non-fatal injuries mimicking the victim's fatal injuries. Case Presentation A couple in a consortial relationship for eight years had an argument and fight early one morning over suspected infidelity of the female partner. During the argument, the boyfriend assaulted her on the head with a hammer, and she rushed out of the house seeking help. The neighbors witnessed the heated argument and saw the boyfriend dragging her inside the house. After some time, the voice of the woman started to fade, and the police were called by the neighbors. On arrival, the police forcibly broke the door and found the dead bodies of the couple. Crime scene findings are depicted in Figure 1. The male was found hanging, while the female partner was lying on the floor in a pool of blood. It was established through the police inquest and statements from friends that the male partner suspected infidelity, resulting in frequent quarrels, and, hence, he might have killed his female partner. Victim The autopsy of the 29-year-old female with a body mass index (BMI) of 23 kg/m2 showed blood-soaked clothes, with the face and limbs covered in blood. Incised wounds were present over both forearms. An incised wound over the lower one-third of the right forearm, with a slashing injury of the flexor carpi radialis tendon and complete severance of the adjoining radial artery, resulted in death (Figure 2). Internal examination showed pale organs without any signs of asphyxia. The multiplicity of injury indicated aggression, the magnitude of violence, and the determination of the attacker to kill the victim.
The cause of death was ascertained as hemorrhagic shock following a deep wrist injury. Perpetrator The autopsy of the male perpetrator showed a 28-year-old, averagely built and moderately nourished male with a BMI of 24.8 kg/m2 and a height of 1.67 m. The clothes were stained with splashes of blood in places. Two parallel incised wounds were present over the left forearm. The wounds were superficial, involving the skin and subcutaneous tissue, and the rest of the underlying structures were intact (Figure 5). Dried blood clots were present on the fingertips and the index finger of the right hand. Further examination revealed that the transverse cuts along the longitudinal axis of the fingers were subcutaneously deep, suggesting unintentional self-injury while attempting to attack the victim's right forearm with a blade (Figure 6). FIGURE 6: Unintentional self-injury on the perpetrator's fingers, suggestive of the use of a sharp object with bare hands. A blood-stained blade was found in the perpetrator's jeans pocket. A hanging mark in the form of a pressure abrasion completely encircled the neck, directed obliquely upward and backward, with a knot impression over the right occipital region (Figure 7). FIGURE 7: Ligature mark around the neck of the perpetrator. No infiltration of blood was noted on dissection of the neck. Internal examination showed congested organs with evidence of cyanosis and petechial hemorrhages in the lungs. After careful assessment of the history and circumstances of death, exclusion of other causes, and cautious evaluation of the signs described above, the cause of death was ascertained as death due to hanging. The weapons used by the perpetrator to kill his partner were a blade and a hammer, two common household tools found in most Indian homes. Here, we present a case in which multiple methods were used to commit homicide-suicide, producing a mirroring of simulating injuries in both the victim and the perpetrator. Discussion Suicide committed by perpetrators of homicide is relatively uncommon and varies from region to region [9]. Stack et al., in their review of homicide-suicide, argued persuasively that the closer the ties between the offender and the victim, the more likely the offender is to commit suicide [10]. A male usually commits homicide-suicide after experiencing interpersonal conflict in an intimate relationship, and it usually involves the killing of the female victim [11,12]. A higher than anticipated probability exists that the perpetrator committed the murder-suicide because of an interpersonal conflict arising from either a lack of communication or a loss of trust. Accordingly, such perpetrators are also more likely to be angry, hostile, and violent, and to have behaved erratically in the period leading to death; a similar scenario is reflected in our case report [13][14][15]. Marzuk et al. classified this type of homicide-suicide as amorous jealousy and found that, in addition to depression, these perpetrators also had histories of abusive relationships with partners [16]. A similar study conducted by Logan et al. in 27 states of America found that perpetrators had a history of domestic violence, were jealous over real or imagined infidelity, and were in the process of breaking up [17]. A study conducted by Santos-Hermoso et al.
among perpetrators involved in intimate partner femicide in the prison populations of Sweden and Spain found that, as opposed to other criminals, intimate partner aggressors are more specifically perpetrators of femicide and do not exhibit antisocial behavior. Rather, femicide may be committed by an irresponsible partner who lacks impulse control and reacts violently to conflict [18]. Because firearm possession and use are strictly regulated in India, household sharp and hard blunt tools are commonly used for killings in violent conflicts and for suicides [19,20]. Conclusions Two aspects were involved in this case: the perpetrator inflicted non-fatal injuries on his own body that resembled the fatal injuries he had inflicted on the victim, but the cause of death differed between the intimate partners of the homicide-suicide. The manner of death was determined based on the meticulous autopsy, crime scene investigation, and other corroborative evidence. In this case report on a homicide-suicide, to confirm the death of the victim, the perpetrator attempted to strangulate the victim with a soft ligature material that created a ligature mark on the victim's neck, and he also inflicted an incised wound over the ligature to reconfirm it. The perpetrator attempted suicide by inflicting cuts over his forearm and then hanged himself. Committing the homicide and suicide created a pattern of multiple non-fatal injuries in the perpetrator that simulated the fatal injuries in the victim. Additional Information Disclosures Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
3D Printing On-Water Sports Boards with Bio-Inspired Core Designs Modeling and analyzing sports equipment for injury prevention, cost reduction, and performance enhancement have gained considerable attention in the sports engineering community. In this regard, studying the structure of on-water sports boards (surfboard, kiteboard, and skimboard) is vital due to its close relation with environmental and human health as well as the performance and safety of the board. The aim of this paper is to advance the on-water sports board through various bio-inspired core structure designs such as honeycomb, spiderweb, pinecone, and carbon atom configurations fabricated by three-dimensional (3D) printing technology. Fused deposition modeling was employed to fabricate complex structures from polylactic acid (PLA) materials. A 3D-printed sample board with a uniform honeycomb structure was designed, 3D printed, and tested under three-point bending conditions. A geometrically linear analytical method was developed for the honeycomb core structure using the energy method and considering an equivalent section for the honeycombs. A geometrically non-linear finite element method based on the ABAQUS software was also employed to simulate the boards with various core designs. Experiments were conducted to verify the analytical and numerical results. After validation, various patterns were simulated, and it was found that the bio-inspired functionally graded honeycomb structure had the best bending performance. Due to the absence of similar designs and results in the literature, this paper is expected to advance the state of the art of on-water sports boards and provide designers with structures that could enhance the performance of sports equipment. Introduction Mechanical design and modeling are the most recent methods for tackling technical concerns in the field of sports science and can have various benefits such as injury prevention, reduction of manufacturing cost, minimization of weight, and enhancement of the performance of the equipment. For example, Caravaggi et al. [1] developed and tested a novel cervical spine protection device to keep the athlete's neck in its safe physiological range. Shimoyama et al. [2] employed a finite element method (FEM) to optimize the design of a sports shoe sole, followed by lightening the sole weight. Furthermore, Sakellariou et al. [3] applied coupled algorithms with the FLUENT® solver to optimize a surfboard fin shape, resulting in a maximum lift-per-drag ratio. In this paper, a 3D-printed sample board is studied with a uniform honeycomb structure under a three-point bending test. After the validation of the numerical tool with the experimental results, the FEM tool was employed to simulate the board with different nature-inspired core structures such as a pinecone-inspired pattern, a spiderweb-inspired pattern, carbon crystal lattices, and a gradient honeycomb, all tested by the three-point bending test, while the total volume of the board was kept constant. Board Design (Uniform Honeycomb Sandwich Structure) Modern boards are primarily composed of an inner foam core covered by a thin outer shell, generally called a sandwich structure [6,9]. The foam core enables reduced weight, increased buoyancy, and better stability for the rider, whilst the sandwich structure provides improved bending resistance. Such structures are mainly composed of three main parts: a top shell, a lightweight core, and a bottom shell [24].
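As rough context for why honeycomb cores are so light, the sketch below applies the standard thin-wall estimate for the relative density of a regular hexagonal honeycomb (a Gibson-Ashby-type result, not taken from this paper); the wall thickness and wall length are assumptions for illustration, and thick-walled cells will deviate from the estimate.

```python
# Thin-wall relative density of a regular hexagonal honeycomb:
# rho*/rho_s ~ (2/sqrt(3)) * (t/l). Values of t and l are illustrative
# assumptions, not the authors' exact cell geometry.
import math

t = 1.0  # mm, wall thickness (assumed)
l = 3.0  # mm, hexagon wall length (assumed)

relative_density = (2.0 / math.sqrt(3.0)) * (t / l)
print(f"relative density ~ {relative_density:.2f} "
      f"(~{100 * (1 - relative_density):.0f}% lighter than a solid core)")
```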
The board in this study, however, was comprised of two primary parts: a top shell, and a merged bottom shell and lightweight core, which can be designed and 3D printed with different patterns. Furthermore, most boards are manufactured with a bottom curvature that aims at better edging and upwind ability, which is significant for beginners, providing more grip and stability when compared to flat boards [25]. In this study, inspired by nature, a primary structure for the board core was designed. As illustrated in Figure 1a, a beehive is made up of a regular pattern of hexagonal honeycomb cells, which was used to design the core of the board using CATIA V5 software [26]. The designed honeycomb core structure and the exploded 3D model are shown in Figure 1b,c, respectively. A smaller-scale version of a real on-water sports board was designed. The dimensions of the designed merged bottom shell and honeycomb core board are presented in Figure 2. The board had a 48 mm width and 144 mm length, with a 357 mm radius of curvature at the two sides. A bottom curvature of 600 mm was considered, resulting in a model closer to the real one. The hexagonal honeycomb structure formed the core of the board and was repeated across the specimen. As can be seen in the detailed view, the 3 mm wide honeycombs were patterned with 1 mm thick walls. Moreover, the bottom and top shells of the board had thicknesses of 5 and 1.5 mm, respectively. Materials and 3D Printing This section aims to experimentally determine the mechanical properties of PLA fabricated by an FDM 3D printing apparatus. The guidelines of ASTM D638 [27], Standard Test Method for Tensile Properties of Plastics, were followed. Five different specimens are introduced in the standard, each having the same geometry but different dimensions as a function of thickness. The TYPE 1 specimen was chosen to design the tensile test dog-bone specimens. The sketch of the specimen and the dimensions are given in Figure 3 and Table 1, respectively. The designed specimen with the square cross-section was fabricated using the XYZ da Vinci 1.0 Pro 3D printer, which works on the basis of FDM technology. In this technique, raw cylindrical thermoplastic filament is mechanically dragged into the melting nozzle, and the molten polymer is extruded on a heated platform known as the bed. After the first layer of the object is deposited, the nozzle moves upward to extrude the second layer on the previously printed layer; this process continues until the object is completely printed. The layer height is defined as the distance between any two sequential layers and is one of the most important FDM printing parameters with respect to the mechanical properties of the printed object [28]. For this study, PLA was used as the raw material for 3D printing. Consequently, a nozzle temperature of 230 °C, bed temperature of 40 °C, layer thickness of 0.2 mm, internal fill density of 100%, and printing speed of 20 mm/s were set. All the layers were combined with raster angles of +45° and −45°. In order to determine the Young's modulus of the FDM 3D-printed PLA material, a uniaxial tensile test was conducted using a Hounsfield-H25KS testing machine. The temperature was kept constant at 23 °C and the strain rate was set at 0.001/s to ensure that the test condition was quasi-static loading. The tensile mechanical bench machine and a specimen under test are shown in Figure 4.
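Before turning to the measured curve, a minimal sketch of how a modulus is extracted from such data: fit the slope of the initial linear region of the stress-strain record. The sample points below are hypothetical; the text reports a slope of about 1.8 GPa.

```python
# Young's modulus as the least-squares slope of the linear elastic region.
# (strain, stress) points are hypothetical placeholders.
import numpy as np

strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020])  # dimensionless
stress = np.array([0.0, 9.0, 18.1, 27.0, 35.9])         # MPa

E_MPa = np.polyfit(strain, stress, 1)[0]  # slope of the fitted line
print(f"Young's modulus ~ {E_MPa / 1000:.2f} GPa")
```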
The stress-strain (σ-ε) curve for the tensile dog-bone sample is shown in Figure 5. The specimen exhibited linear elastic behavior, followed by yielding and plastic deformation. The approximate values of the Young's modulus (the slope of the linear region) and yield stress were determined to be 1.8 GPa and 60 MPa, respectively, followed by softening and irrecoverable plastic deformation. 3D Printing and Assembling of the Board with the Honeycomb Core Structure In the next step, the two parts of the board (bottom and top shells) were separately 3D printed at a smaller scale than a real on-water sports board using PLA filament. The nozzle temperature, bed temperature, layer thickness, internal fill density, and printing speed were set to 230 °C, 40 °C, 0.2 mm, 100%, and 20 mm/s, respectively. Both 3D-printed board parts are shown separately in Figure 6a, while Figure 6b depicts the two parts glued together with a strong adhesive after the 3D printing process. Experimental Three-Point Bending Test of the Uniform Honeycomb One of the most common types of surfboard fracture takes place in the middle section of the board, between the feet of the surfer. These breaks occur in two main circumstances: the most frequent breakage takes place when the lip of the wave impacts the middle of the board, ripping it apart into two separate parts just after the surfer falls in the water (Figure 7b); the second type of breakage occurs when the feet of the surfer get close together, concentrating the pressure of the body in the middle of the board (see Figure 7c) [29]. In both of these circumstances, an immense force acts upon the middle portion of the board, causing large bending stress that may result in breakage. Figure 7. A schematic of the most common situations in which boards break: (a) standard boarding with a fine distance between the rider's feet; (b) when the wave hits the board while the rider falls in the water; (c) when the rider's feet get so close together that the weight of their body is concentrated in the middle of the board. As both of these breakages are caused by bending stresses, a mechanical three-point bending test could be employed to determine the strength of the board under such loading. Stier et al. [6] also applied a similar test to their board with a novel shape design in order to find its bending strength. The 3D-printed board with a uniform honeycomb structure in the core was tested under three-point loading. In order to do this, the grippers of the tensile test machine had to be changed: the lower grip was replaced with two supports under the specimen, and the upper grip was replaced by a loading nose in the middle of the sample in order to apply force. Figure 8 shows the sample board under the three-point bending test. The test, with a strain rate of 0.001/s, was carried out at room temperature with an 80 mm distance between the two supports. A displacement-controlled test was conducted to reach a maximum deflection of 4 mm in the elastic range. Analytical Solution In this section, a simple, geometrically linear approach for the analytical solution is provided to validate the experimental bending results of the 3D-printed board with a uniform honeycomb core structure. For this purpose, an equivalent I-shaped section, whose geometric stiffness varies along the x-direction, was considered to simulate the board structure, as illustrated in Figure 9. In order to determine the deflection of the structure, strain energy methods are implemented.
The density of strain energy, u, is expressed as:
u = σ²/(2E) (1)
The effect of bending stress is considered and formulated as:
σ = My/I (2)
In this equation, I denotes the moment of inertia of the cross-section, which varies along the x-direction, and M denotes the variable moment in each section. By substituting Equation (2) into Equation (1), considering linear elastic behavior, and integrating over the volume, the total strain energy U can be rewritten as:
U = ∫ M²(x)/(2E I(x)) dx (3)
Next, according to Castigliano's second theorem, the displacements of a linear-elastic system can be determined from the partial derivatives of the energy. Equation (4) shows Castigliano's method [30]:
δ_D = ∂U/∂F_D (4)
where δ_D and F_D are the displacement and the virtual or actual force at point D, respectively. Finite Element Method and Experimental Validation In this study, the geometrically non-linear FEM software package ABAQUS™ (Dassault Systèmes, 6.14, Vélizy-Villacoublay, France) was employed to numerically simulate the boards with various core structures under a three-point bending test. The computer-aided designs for the top shell and the merged bottom shell and core were first imported into ABAQUS. For complete 3D models, especially relatively thin objects with a complex geometrical shape, mesh generation is challenging. As each board in this study not only had complex core geometry but also very high curvatures along the edges, tetrahedral elements were implemented to successfully cover the whole geometry with good accuracy. Since the top shell and the bottom shell with the structural core must be meshed independently in the full solid model, it was crucial to use a tie constraint at their interfaces to simulate impeccable bonding between both parts. The boundary conditions were chosen to be similar to the real three-point bending conditions, as the numerical findings were planned to be validated experimentally. To prevent plastic deformations, boards must be maintained in the elastic regime; therefore, in this study, all of the simulations were conducted in the elastic regime. As illustrated in Figure 10, two cylindrical supports with a radius of 2 mm, set at an 80 mm distance, were completely fixed underneath the board using the encastre boundary condition, while a z-displacement of −4 mm was applied to the loading nose. Next, in order to validate the FEM model, the designed board with a uniform honeycomb core structure under a three-point bending test was simulated. In this regard, element type C3D10M was exploited for both solid parts, with approximately 37,000 elements and 2719 faces for the bottom part and 4000 elements for the top shell. To ensure the accuracy of the numerical results, a mesh sensitivity analysis was performed. In Figure 11, the reaction force of the loading nose for a maximum deflection of 4 mm is plotted versus the number of elements. It can be seen that after increasing the number of elements to >35,000, the reaction force values converge to an almost constant value. Figure 11. FEM mesh convergence test of the board with a uniform honeycomb core structure. Next, the three-point bending test was applied to the board with the uniform honeycomb core to reach a maximum deflection of 4 mm. The stress contour illustrated in Figure 12 shows that the maximum stress, predictably, occurs in the middle of the board. This maximum stress is low enough (∼40 MPa) to keep the board in the desired elastic region, as the previously tested PLA material showed a yield stress level of 60 MPa.
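To make the analytical route above concrete, the sketch below numerically evaluates the Castigliano deflection of Equation (4) for a simply supported span under a central load, allowing the second moment of area I(x) to vary as in the equivalent-section model. The constant-I placeholder is only for checking against the closed-form FL³/48EI; it is not the paper's actual I(x) profile.

```python
# Central deflection via Castigliano's theorem: delta = dU/dF with
# U = integral of M^2 / (2*E*I) dx. For three-point bending, M(x) = F*x/2 on
# 0 <= x <= L/2 and the two halves are symmetric. Dimensions are illustrative.
from scipy.integrate import quad

E = 1.8e9   # Pa, PLA modulus reported above
L = 0.080   # m, support span used in the test
F = 500.0   # N, central load

def I(x):
    # Placeholder: constant second moment of area; substitute the equivalent
    # I-section's I(x) for the real board.
    return 2.0e-9  # m^4 (assumed)

def integrand(x):
    # M(x) * dM/dF / (E * I) with M = F*x/2 and dM/dF = x/2.
    return (F * x / 2.0) * (x / 2.0) / (E * I(x))

half, _ = quad(integrand, 0.0, L / 2.0)
delta = 2.0 * half  # symmetry doubles the half-span integral
print(f"central deflection = {delta * 1000:.3f} mm")
print(f"closed form check  = {F * L**3 / (48 * E * I(0.0)) * 1000:.3f} mm")
```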
Results and Discussion The force-deflection curves for the experimental, geometrically non-linear numerical, and geometrically linear analytical results are plotted and compared to each other in Figure 13. The preliminary conclusion drawn from this figure is that the PLA board shows linear elastic deformation up to a 300 N force, beyond which the material yields, followed by plastic deformation that is manifested as a plateau after 500 N. From a design point of view, it is desired that the board exhibits small elastic deformations up to loads as large as 500 N. Regarding the modeling, it can be seen that, at the beginning of the deflection, the numerical and analytical results showed an excellent fit to the experimental results, but as the deflection increased, the difference between the numerical/analytical and experimental results increased. The geometrically non-linear FEM model predicts the non-linear experimental curve better, while the geometrically linear analytical method is unable to do so. This is particularly pronounced in the large-deformation regime, revealing the importance of the geometrically non-linear assumption in the design of the board. However, the geometrically linear analytical method can be used as a reliable tool for predicting the behavior of the board in smaller strain ranges, avoiding an increase in computational cost. At the 4 mm deflection, where the material reaches the end of the elastic region, the ABAQUS FEM predicts the experimental results very well, with an approximately 3.1% error. In order to statistically investigate the data dispersion between the experimental results and the analytical and FEM results, an error function was defined in terms of Y_i, the analytical or FEM results, and Y_exp, the experimental results. In Figure 14, the results of the error function are plotted. The conclusion drawn from this plot is that the error between the experimental and FEM results was relatively low when compared to the error between the experimental and analytical results; the finite element results also showed less dispersion. Testing Different Core Patterns Having validated the geometrically non-linear FEM model for the 3D-printed board with the honeycomb core structure, different designs for the core of the bottom shell were introduced to determine the structure that gives the maximal bending resistance for this particular application. All of the designed boards had the same outer frame, but different patterns were applied to the cores, while the total volume of the board and the upper shell geometry were kept constant. Structures inspired by natural shapes and patterns like spiderwebs, sunflowers, pinecones, and carbon crystal lattices were developed. Some of them have recently attracted researchers' attention, such as the triangular honeycomb (TH) [12,31] and the hexagonal-rhombic (HR) [11]. For all of the structures, a mesh convergence study was conducted and the appropriate number of elements for the FEM model was selected. Furthermore, the maximum stresses of all boards with the various core structures were checked and found to be lower than the yield stress of the PLA material. Hexagonal-Rhombic Structure The HR structure, which is comprised of intermeshed rows of hexagonal and rhombic patterns periodically repeated across the core of the board, has recently been presented by Platec et al.
[11] as a cellular structure fabricated by FDM technology. Having been tested under quasi-static compression, this structure demonstrated a compression resistance superior to that of a uniform honeycomb. Consequently, we were motivated to investigate the bending performance of this structure for our particular application. Figure 15 presents the designed bottom shell of the board with the HR structure applied to its core. Triangular Honeycomb Structure Recently, a composite triangular honeycomb structure was designed and manufactured by Compton et al. [12] using a 3D printing technology in order to be tested under compression. Furthermore, the shape-recovery ability of the mentioned structure was studied [27], and Bodaghi et al. [13] investigated the large deformations of the TH structure by taking advantage of 3D printing technology to manufacture the samples. As the application of the TH structure is very common, we applied this pattern to the board to determine its bending resistance. The pattern is composed of repeatedly arranged hexagonal unit-cells, each comprised of equilateral triangles; Figure 16 shows a detailed view of the geometry and dimensions of the TH unit-cell. Hexagonal Carbon Lattice There are several allotropes of carbon, such as graphite, diamond, graphene, and carbon nanotubes. As illustrated in Figure 17a, in allotropes such as graphite, graphene, and carbon nanotubes, the carbon atoms (marked by black circles) are placed at the vertices of regularly patterned hexagons. Inspired by this broad usage in nature, the arrangement of carbon atoms at the vertices of a hexagon was applied to the design of the core of the board. Figure 17b demonstrates the designed board, where the small circles are the carbon atoms linked together by the sides of the hexagons. Pinecone and Sunflower-Inspired Patterns Fibonacci numbers are a mathematical sequence starting with 1 and 1, where each number of the sequence is the summation of the two previous numbers. Intriguingly, this sequence can commonly be found in nature. For instance, in a pinecone, there are a number of spirals starting from the cone center and following a spiral path to the outside of the cone. These spirals run in two opposite directions, clockwise and counter-clockwise, where the numbers of these two opposite-directional spirals are consecutive Fibonacci numbers. The 8 clockwise and 13 counter-clockwise spirals are illustrated in red and blue, respectively, in Figure 18a. This phenomenon can also be found in the sunflower, with Fibonacci spirals that can easily be constructed using squares attached to each other, where the squares have consecutive Fibonacci numbers as their dimensions (Figure 18b). Only by placing the seeds at the intersections of the spirals can the maximum number of seeds be packed into a sunflower or a pinecone. These optimizing numbers and their frequent occurrence in nature motivated us to design a pinecone-inspired core structure using spirals with Fibonacci numbers, as illustrated in Figure 18c. Spiderweb-Inspired Pattern Spider silk is known to have a very high tensile strength, exceeding 1 GPa. The effectiveness of the spider web is attributable to the strength of the spider silk as well as the pattern of the web. As illustrated in Figure 19a, this pattern is composed of a sequence of periodic polygons that shrink as they get closer to the center of the web. The designed hexagonal pattern for the core of the board, as shown in Figure 19b, is inspired by the spiderweb.
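For the pinecone- and sunflower-inspired designs described above, a generic way to generate such spiral seed positions is the golden-angle phyllotaxis model (r proportional to sqrt(n), theta = n × 137.5°), from which the Fibonacci spiral counts emerge. The sketch below is a general illustration, not the authors' CAD procedure for Figure 18c.

```python
# Golden-angle phyllotaxis: place seed n at radius ~ sqrt(n) and angle
# n * 137.508 degrees; counting the clockwise and counter-clockwise spirals
# through the resulting points yields consecutive Fibonacci numbers.
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad (~137.508 deg)

def phyllotaxis_points(n_seeds, scale=1.0):
    """Yield (x, y) seed positions on Fibonacci spirals."""
    for n in range(n_seeds):
        r = scale * math.sqrt(n)
        theta = n * GOLDEN_ANGLE
        yield (r * math.cos(theta), r * math.sin(theta))

for x, y in phyllotaxis_points(8):
    print(f"({x:6.3f}, {y:6.3f})")
```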
Functionally Graded Honeycomb Structure Bamboo, a group of perennial grasses comprised mostly of cellulose fibers and parenchyma tissue, is a natural fiber-reinforced composite that can resist harmful tropical winds (Figure 20a). Functionally graded (FG) structures are one of the most optimal choices for fabricating lightweight structures that are still able to tolerate damaging stresses. In these structures, in general, the dimension of the unit-cells changes with a constant gradient across the object. Near the vascular bundles of the bamboo there are parenchyma cells, and in the cross-section illustrated in Figure 20b, the ratio of vascular bundles to parenchyma matrices decreases from the outside surface to the inside. Bamboo's FG structure has exceptional stiffness, tensile strength, and fracture resistance [32]. These excellent properties have allowed bamboo to be used successfully in construction, such as buildings and bridges, and it proved its worth when, in 1991, approximately 20 bamboo houses survived the magnitude 7.5 earthquake in Costa Rica [33]. In this section, considering the maximum stress acting in the middle of the board (see Figure 12) and driven by the excellent mechanical properties of the bio-inspired bamboo structure, we designed an FG honeycomb structure, as shown in Figure 20c, in which the dimension of the hexagonal unit-cells increases from the middle of the board across the x-direction (with a constant coefficient of 1.05), while the dimension of the hexagons is kept constant in the y-direction. Results of the Different Patterns After designing the different patterns, every board with the previously mentioned core structures was subjected to three-point bending loading using the geometrically non-linear FEM software package ABAQUS. A constant displacement of 4 mm in the z-direction was applied by means of the loading nose, and the reaction forces for each designed board were determined. Figure 21 compares the reaction force-displacement curves of the different core structures. The preliminary conclusion drawn from this figure is that the FG honeycomb structure and the fully filled board can tolerate the maximum and minimum forces, respectively, while the rest of the patterns experienced intermediate forces. As the maximum stress occurs in the middle of the board, right below the load application area, the FG honeycomb pattern showed the best bending resistance when compared to the other non-uniform patterns. For instance, the force-versus-deflection curve for the FG board reached a plateau after 500 N and experienced large deflections beyond this value, while the other patterned boards reached a plateau beyond 400 N. Comparing the FG and uniform honeycombs at a 500 N force revealed that the functionally graded pattern could reduce the central deflection by as much as 31%. It was also seen that the FG honeycomb board experienced a 595 N force at 4 mm deflection, a ∼12% enhancement in force compared to the uniform honeycomb structure. From the results presented in this figure, it can also be found that for a constant applied force of 400 N, the central deflection of the FG board was almost 97% lower than that of a board with a fully filled core, which is a significant improvement.
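The grading rule quoted above (cell size growing with a constant coefficient of 1.05 away from the midline) is just a geometric progression; the sketch below tabulates it, with the base cell size being an assumption for illustration.

```python
# Cell-size schedule of the FG honeycomb: geometric growth from the midline
# outward with the ratio 1.05 given in the text. Base size is an assumption.
base_cell = 3.0  # mm, assumed central hexagon size
ratio = 1.05     # growth coefficient from the text

sizes = [base_cell * ratio**i for i in range(10)]  # ten rows, midline to edge
print(", ".join(f"{s:.2f}" for s in sizes))
# Finer (denser) cells sit where the bending stress peaks, coarser cells outward.
```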
Furthermore, it can be concluded that the board with a uniform honeycomb pattern showed slightly better bending performance compared to the carbon atom lattice structure, meaning that implementing circular holes did not improve the bending resistance. Conclusions An on-water sports board with different nature-inspired core patterns was studied in this paper under the three-point bending test. A 3D-printed sample board with a uniform honeycomb core structure was designed and fabricated from PLA using FDM 3D printing technology, and then experimentally tested under the three-point bending condition. Considering an equivalent section for the honeycomb structure, a geometrically linear analytical solution was developed for the board with a honeycomb core structure using the energy method. Furthermore, the geometrically non-linear FEM was used to simulate the sample boards under the three-point bending test. The results demonstrated fine agreement between the experimental, numerical, and analytical results at small deformations and coarser agreement at large deformations. Then, different structures inspired by natural patterns and shapes like the spiderweb, pinecone, and carbon lattice configuration were developed and applied to the core of the board in order to determine which gave the highest bending resistance. Experimental and numerical results revealed a 31% better bending resistance of the board with the FG honeycomb pattern when compared to the board with a uniform honeycomb structure at a 500 N force. Furthermore, comparing the FG with the solid-core board at a fixed force of 400 N revealed a significant, 97% reduction in the central deflection of the board.
Coronavirus Disease 2019 (COVID-19) in Kenya: Preparedness, response and transmissibility The world and Kenya face a potential pandemic as the respiratory virus Coronavirus Disease 2019 (COVID-19) affects world populations. Nations have been forced to intervene and issue directions under executive orders to ensure the pandemic is contained. Kenya has reported 110 confirmed COVID-19 cases (as of 2nd April 2020); three persons have succumbed and two people have fully recovered. Most of the affected people had entered or returned to Kenya from different parts of the world. Most of the people who have contracted COVID-19 are between 16 and 74 years of age. As a result, since February 2020, Kenya has put in place several precautionary measures to mitigate the pandemic in its early stages. However, given the economic status of the country's population, COVID-19 will not be simple to control unless the government integrates realistic, feasible, and timely plans. This article highlights the preparedness, response, and transmissibility of COVID-19 and proposes insights for managing COVID-19 in Kenya. Currently, it is clear that from the first confirmation to date, the transmission of COVID-19 has been increasing exponentially in Kenya. Introduction A novel coronavirus (SARS-CoV-2) that emerged out of the city of Wuhan, China in December 2019 has already demonstrated its potential to generate explosive outbreaks in confined settings and to cross borders following human mobility patterns. The number of cases rapidly increased in Africa, resulting in 6555 cases, including 244 deaths and 456 recoveries, as of 1st April 2020. The first case was reported in Egypt on February 15th 2020. Most African countries had confirmed cases of COVID-19, except São Tomé and Príncipe and the Kingdom of Lesotho, as of 2nd April 2020. Counts of confirmed COVID-19 cases are subject to rough estimation. The WHO estimated the pandemic as of 2nd April 2020 at 896,450 confirmed cases globally and 45,526 deaths. The World Health Organisation assessment indicates high risk (WHO Report-73, 2020). Preparedness and response (MOH, Kenya, 2020) On 2nd February 2020, the Ministry of Health advised Kenyans to remain vigilant, to maintain hygiene, to avoid contact with persons with respiratory symptoms, and to go to the nearest health facility for assessment and prompt management in case of symptoms of respiratory infection or recent travel to China, especially Wuhan. Further, on 13th February 2020, Kenyans were advised against non-essential travel to affected countries. On 19th February 2020, the Kenyan Government through the Ministry of Health put several measures in place to safeguard public health safety, including but not limited to a multi-agency approach to deal with the threat of COVID-19. On 28th February 2020, the National Emergency Response Committee was established through executive order No. 2 of 2020. At its meeting on 20th March 2020, the Committee resolved and directed Kenyans to take the following additional precautionary measures: i. All entertainment venues, bars and other social spaces were to close their doors to the public by 7.30 pm every day until further notice, effective Monday, 23rd March 2020, with social distancing of 1.5 m to be observed during allowed periods. ii. All supermarkets were required to limit the number of shoppers inside the premises at any given time, in a manner that conforms to the social distance requirement of at least 1.5 m apart. iii.
The management of local markets was directed to ensure that the premises are disinfected regularly to maintain high standards of hygiene. iv. The County Governments were required to prioritize garbage collection and cleanliness of all markets, as well as ensure provision of soap and clean water in all market centers. v. Corporations and businesses were encouraged to allow, where possible, employees to work from home. vi. To ensure business continuity for manufacturers and industries, factories were required to operate using a minimum workforce on a 24-hour shift rotation system. vii. To reduce the risk of transmission in the public transport system, persons were encouraged, as much as possible, to stay at home unless on essential business. Public service vehicle operators were asked to observe high levels of hygiene during this period, and vehicles were directed to maintain a maximum of 60% of seating capacity. viii. The management of all public and private hospitals was to restrict patient visitation to family and relatives of patients who have been expressly contacted by the hospital. ix. Entry into the country was restricted to Kenyans and foreigners with valid residence permits, all of whom must self-quarantine for a period of 14 days. On 25th March 2020, the president announced a welcome stimulus package to address the impact of coronavirus on the economy. It included: 100 per cent tax relief for individuals with a gross income of up to Sh24,000; a reduction of income tax from 30 per cent to 25 per cent; a reduction of Value Added Tax from 16 per cent to 14 per cent; and a Sh10 billion cash transfer for orphans, the elderly and other vulnerable members of society, among other measures. From 27th March 2020, the Kenyan government, through a public order notice, called for a 7 pm to 5 am curfew on all persons not providing essential services. The curfew would be indefinite (MOH, Kenya, 2020). Discussion On 12th March 2020, the Ministry of Health confirmed the first case of Coronavirus disease (COVID-19) in Nairobi. The suspected case was tested and confirmed at the National Influenza Centre Laboratory at the National Public Health Laboratories. The patient had returned to Nairobi from the USA on March 5th 2020 via London, UK. The Ministry of Health warned that the cases are going to rise exponentially in the coming days, asking Kenyans to remain calm and follow the set guidelines. This trend is evident in the plotted graph shown in Fig. 1. The confirmed cases will continue to increase exponentially if no drastic measures are put in place. The curve shows that the confirmed cases will increase according to equation (1), where y is the number of confirmed cases and x is the day in question. After implementing social distancing and the curfew, the government should provide proper protective equipment (hand wash, sanitizers, masks, etc.). Regular handwashing with running water and soap is an essential precaution against COVID-19; hence, the county and national governments, through the relevant departments, should enhance the availability of 24-hour clean water and soap in low-income homes, especially in informal settlements such as Kibera, which is among the largest in Africa. Containing COVID-19 in such informal settlements, where population density is very high, would be extremely difficult. The Kenyan government should also plan how to distribute sufficient, quality food to its citizens in the event that a total lockdown is required due to increasing cases.
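As a minimal illustration of the exponential growth implied by equation (1), the sketch below fits y = a * exp(b * x) to a series of daily case counts; the counts used here are hypothetical placeholders, not the Kenyan data behind Fig. 1.

```python
# Exponential growth fit y = a * exp(b * x) for confirmed-case counts.
# Hypothetical daily counts; curve_fit estimates a and b.
import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, b):
    return a * np.exp(b * x)

days = np.arange(1, 11)                               # day index x
cases = np.array([1, 1, 3, 4, 7, 9, 15, 21, 28, 38])  # hypothetical counts y

(a, b), _ = curve_fit(exponential, days, cases, p0=(1.0, 0.3))
print(f"y = {a:.2f} * exp({b:.3f} * x); doubling time ~ {np.log(2) / b:.1f} days")
```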
Due to reduced activity by the judiciary, there may be increased cases of lawlessness. The security agencies should therefore be vigilant and enhance intelligence gathering, especially in densely populated areas. Lastly, it is recommended that Kenya adopt the WHO strategies, including but not limited to: interrupting human-to-human transmission, which entails reducing secondary infections among close contacts and health care workers, inhibiting transmission amplification events and preventing further international spread; and abating the social and economic impact through multisectoral partnerships, among others.

Ethical approval

Not required.

Declaration of Conflict of Interest

The author declares no conflicts of interest.

Acknowledgement

Copperbelt University Africa Centre of Excellence for Sustainable Mining (CBU ACESM) is acknowledged for the financial support that made this publication possible.
2020-04-20T13:03:18.582Z
2020-04-20T00:00:00.000
{ "year": 2020, "sha1": "616343c0b2c30e7a9ccbd3c4dec8fd746ae30bc9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jmii.2020.04.011", "oa_status": "GOLD", "pdf_src": "ElsevierCorona", "pdf_hash": "616343c0b2c30e7a9ccbd3c4dec8fd746ae30bc9", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
181443905
pes2o/s2orc
v3-fos-license
Regional pH in Five Agricultural Soil-Types – Associations with Temperature and Groundwater Si in Continental Finland

Abbreviations: Clay: Clays (several subtypes); Coms: Coarse mineral soils (incl. mor + sand soils); Gw: Groundwater; Incl: Including; Miner: Mineral soils (incl. coms + silt + clays); Mor: Moraines (several subtypes); Mull: A single, main, organic soil-type; Prp: Proportion; RC: Rural Center (earlier Agricultural Advisory Center); Sand: Sands (several subtypes); Silt: Silt (no subtypes); Si: Silicon; Soil: Agricultural soil

Introduction

Dissolved silica (SiO2) is known as a product of the carbonate silicate cycle [1]. Weathering is associated with temperature (Temp) [2], soil-type [3] and soil ageing [3,4]. Silicate slags [5] and granite powder [6] can be used for pH elevation of acid soils. Inter-regional relative pH stability during three decades (1961–1990), together with the association of pH with Temp and Si.gw, suggests an association of pH with soil weathering [7]. The association of pH.tot with Si.gw is, interestingly, mirror-like when the regions are arranged by Temp [8]. The inter-relations between soil-type pH's, Temp and Si.gw reported in [7,8] need some clarification. The aim of this survey is to clarify whether, and where, soil pH regulation (and Si availability) is associated with Si.gw [15].

Prp variation (SD/mean, %), with coms and miner excluded, was lowest in sands and mull (39%) and highest in clays (143%).

Associations with pH.tot

Pearson and Spearman correlations of pH.tot with the pH's of other soil-types were significantly positive (Tables 2-4; see also Table 5 and Figure 1).

Associations with Si.gw

Discussion

Because the main aim of this article was to assess pH variation inside the mor, sand, silt, clay and mull soil-types, Figures 1 & 6 are given without the proportions of the peat, gyttja and mud soil-types. Practically, the part between 100 and the Prp.mull columns is peat (Figures 1 & 6). The pH's of mineral soils were 0.34-0.49 units higher than pH.mull (Table 2) (N.B. pH.peat was lower than pH.mull, which is why the difference between pH.org and the pH of mineral soil-types is greater). All analyzed soil-type pH's were significantly associated with pH.tot (Table 3), with pH.clays showing the weakest association. It is remarkable that pH.tot is not a gold standard, because it is a sample-weighted pH mean over different soil-types, in which the proportion of mineral soil-types increased (and that of acid organic soil-types decreased) with increasing Temp (Figure 1).

Temp and pH: Temp was significantly positively associated with the pH's, Si.gw and Prp.miner (Figure 1). Associations with pH.mull were borderline significant (p < 0.06) and with pH.mor non-significant (Table 6). pH.mor was the most "resistant" to Si.gw variation; pH.mull, pH.silt and pH.sands were progressively more sensitive; and pH.clays was the most sensitive (Figures 3-5). Because of the scanty number of clay soil samples and possible sampling errors, the interpretation selected for the conclusion was: "the association with Si.gw seemed to increase towards finer mineral soils". Soil-type pH's (e.g. pH.mor, Figure 2) deviate mirror-like from the trend-line with Si.gw when the regions are arranged by Temp (similarly to pH.tot in [8]). A similar pattern appears in Fig. 3, consistent with dissolving silicates increasing pH [5,6]. Dissolved silica, SiO2 or Si(OH)4 [19], can condense and form dimers and oligomers [20]. Oligomers of Si(OH)4 can have buffering abilities around pH 6.8 [20] (possibly in the micro-milieu, too).
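The buffering behaviour invoked here, for Si(OH)4 oligomers around pH 6.8 and for the carbonic acid system discussed next, follows the Henderson-Hasselbalch relation pH = pKa + log10([base]/[acid]). A minimal sketch (Python), with illustrative base/acid ratios:

import math

def buffered_ph(pka: float, base_to_acid: float) -> float:
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
    return pka + math.log10(base_to_acid)

# Carbonic acid system, pKa1 = 6.3 as cited in the text [21].
for ratio in (0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"HCO3-/CO2(aq) ratio {ratio:>4}: pH = {buffered_ph(6.3, ratio):.2f}")

A tenfold change in the ratio shifts pH by only one unit, which is why such systems hold pH near their pKa.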
pKa1 for carbonic acid (H2CO3), including CO2(aq), is 6.3, so macroscopic pH regulation above 6.3 could be governed mainly by carbonic acid [21]. Regional pH's in the different soil-types were associated with different measures of soil juvenility (e.g. Prp.miner, an inverse measure of humus formation [4]) and with pH as such [4], as well as with regional Temp. The carbonate silicate cycle [1] could be exploited for the fertilizing and liming of agricultural soils with juvenile silicates [3,5,6,22], much as the mother earth has benefited from it during her long life.

Conclusion

Regional pH.mor, pH.sands, pH.silt and pH.clays were significantly associated with pH.tot and Temp, as well as, obviously, with soil weathering status. pH associations with Si.gw seemed to increase towards finer mineral soils. This phenomenon is possibly related to the carbonate silicate cycle.
2019-06-07T22:36:24.947Z
2019-02-06T00:00:00.000
{ "year": 2019, "sha1": "00680c7df3c56fb6dd74f43ad993d621e072484f", "oa_license": "CCBY", "oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.002505.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6bf20d695beb938a15f9cec2276678694e538884", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
268854209
pes2o/s2orc
v3-fos-license
Optimizing random skin biopsies: a review of techniques and indications for intravascular large B-cell lymphoma

Intravascular large B-cell lymphoma (IVLBCL), a rare subtype of malignant lymphoma, is diagnosed by observation of intravascular proliferation of tumor cells in samples taken from affected organs. However, diagnosis of IVLBCL is usually difficult due to the lack of mass formation. IVLBCL may be fatal when the diagnosis is delayed, so an accurate early diagnosis is the key to successful treatment. Random skin biopsy (RSB), in which specimens are sampled from normal-appearing skin, has been reported as useful. However, the specific method of RSB remains controversial, with individual institutions using either the punch method or the incisional method. Research has shown that the incisional method has higher sensitivity than the punch method. We discuss whether this difference might stem from the collection of punch specimens at an insufficient depth, and whether the punch method might therefore result in false negatives. For RSB, we recommend taking specimens not only from normal-appearing skin, but also from any lesional skin, because such lesions may reflect microscopic IVLBCL involvement. To ensure accurate diagnosis, both dermatologists and hematologists should know the proper method of RSB. This review summarizes the appropriate biopsy method and sites for RSB.

Introduction

Intravascular large B-cell lymphoma (IVLBCL) is a subtype of malignant lymphoma that is characterized by the proliferation of tumor cells within vessels [1]. Because of its unspecific symptoms, IVLBCL is always challenging to diagnose. Therefore, many cases have been diagnosed by autopsy [2-5]. As the tumor cells can invade any organ, IVLBCL has been diagnosed by taking samples from affected organs, such as the kidneys, lungs, and brain [2,6-10]. However, these biopsies are usually difficult because of the deteriorated condition of the patient and the rapid progression of the disease. In addition, due to the lack of lymphadenopathy and mass formation, it is difficult to determine an adequate biopsy site. A skin biopsy is easier and less invasive. Sampling from normal-appearing skin, called random skin biopsy (RSB), has been reported to be useful for diagnosing IVLBCL [2,11-14]. Patients with IVLBCL are mostly diagnosed from bone marrow and/or skin biopsy [15]. The share of diagnoses from bone marrow (20.5%) is less than that from skin (74.4%) [16]. Furthermore, the number of patients diagnosed with IVLBCL by skin biopsy has been increasing recently. Only 7.4% of patients were diagnosed with IVLBCL from the skin in 2007 [17]; however, more than half of patients with IVLBCL were diagnosed from the skin around 2020 [14,18]. Hence, the skin is an important diagnostic site for IVLBCL. Although IVLBCL has been a fatal disease, a 2008 study found that patients treated with rituximab showed improved outcomes [19,20]. Moreover, the progression-free survival and overall survival at 2 years were reported in 2020 to be 76% and 92%, respectively, owing to the use of rituximab [18]. Therefore, diagnosing IVLBCL early promises to increase the likelihood of successful treatment. In this review, we provide recommendations on the optimal method of RSB. Matsue et al., who conducted a large case series of RSB, used "RSB" to refer to sampling not only from normal-appearing skin but also from visible skin lesions. Our review follows this terminology [2,14,16].
IVLBCL subtypes

There are three subtypes of IVLBCL: a classical subtype, a hemophagocytic subtype, and a cutaneous subtype [1,12]. Patients with the hemophagocytic subtype show a typical clinical hemophagocytic syndrome. The cutaneous subtype presents as single or multiple skin lesions with negative systemic staging [1,21]. Ferreri et al. reviewed 38 patients with IVLBCL and found that cutaneous lesions were the dominant presenting features in 15 patients. Ten of those 15 patients had recognizable lesions restricted to the skin, a condition that was named the cutaneous variant of IVLBCL [21]. In most cases of IVLBCL, however, the disease is not limited to the skin, and visible skin lesions are usually absent [12,21]. The incidence rate of the cutaneous subtype is much lower in Asia than in the West (3% vs. 24%) [22]. The cutaneous subtype is mainly observed in younger women. Furthermore, almost all patients with the cutaneous subtype showed an excellent performance status and rarely had B symptoms, a set of symptoms including fever above 38 °C, drenching night sweats, and weight loss of more than 10% of body mass. Ferreri et al. hypothesized that the cutaneous subtype shows a better prognosis due to the easier diagnosis of the cutaneous lesions or to biological differences from other subtypes [21]. IVLBCL usually lacks both visible skin lesions and specific symptoms, so except for the cutaneous subtype, IVLBCL is prone to late diagnosis or misdiagnosis, which results in a worse prognosis than for the cutaneous subtype [3,5]. Therefore, the early and accurate diagnosis of IVLBCL other than the cutaneous subtype is especially important.

History of RSB

IVLBCL lesions have been observed in almost every organ, including the skin, in autopsy cases [2,23]. Before the establishment of RSB, no study had reported IVLBCL diagnosed from normal-appearing skin, although there had been IVLBCL cases diagnosed from skin rashes [24-27]. Demirer et al. were the first to report a case of IVLBCL diagnosed by lip biopsy without skin lesions, in 1994 [23]. A case of IVLBCL diagnosed by RSB was reported in 2003 [28]. Subsequently, the usefulness of RSB was reported [11,13,29-31]. Asada et al. reported six cases of IVLBCL diagnosed by RSB [11]. They examined 26 specimens obtained from six patients with IVLBCL and found that 23 of the specimens (88.5%) included IVLBCL lesions. As skin biopsy was easier than biopsy from other organs, they concluded that if IVLBCL is suspected, RSB should be considered. Although bone marrow aspiration and biopsy were performed in all cases in that study, no tumor cells were found within vessels in those specimens [11]. Bone marrow biopsy continues to be widely performed to screen for IVLBCL; however, its sensitivity has been low [16,32]. The marrow pattern of IVLBCL is categorized into three patterns: an intrasinusoidal pattern with or without minimal extravasation, an intrasinusoidal pattern with substantial scattered/interstitial extravasation, and a nodular/diffuse pattern. Of these patterns, the intrasinusoidal marrow infiltration pattern with or without minimal extravasation is diagnostic for IVLBCL [32].
Matsue et al. reported that 18 patients (60%) showed bone marrow infiltration among 30 patients with IVLBCL diagnosed by RSB. Nevertheless, only five patients showed an intrasinusoidal marrow infiltration pattern in the bone marrow. Hence, the sensitivity of bone marrow biopsy was only 16.7% [32]. Accordingly, even if IVLBCL is diagnosed by RSB, the bone marrow infiltration may be incongruous with the skin pathology. Thus, some patients suspected of having IVLBCL who receive only bone marrow biopsy without RSB may be misdiagnosed as negative for IVLBCL. In clinical practice, patients suspected of having IVLBCL are evaluated by RSB and bone marrow biopsy in the initial workup. In cases where RSB is positive with negative bone marrow biopsy, the diagnosis of IVLBCL is made [11,16]. Owing to its convenience, RSB has gradually become widespread. However, no unified RSB method has been established, and the procedure varies among institutions. Furthermore, indicators of which patients can most benefit from RSB have not been established.

The appropriate RSB method

RSB has been performed by two methods: a punch method and an incisional method. The punch method is easier and less invasive than the incisional method. However, the incisional method can sample specimens deeper and wider than the punch method. No unified method for RSB has been established [11,13,29,30,33,34]. Asada et al. reported that the affected vessels in specimens are distributed predominantly in the subcutaneous fat tissue [11]. We previously examined the depth of the affected vessels in 82 specimens from 25 patients with IVLBCL diagnosed by incisional RSB [35]. IVLBCL lesions were significantly more numerous in subcutaneous fat tissue than in the dermis. Furthermore, among the 25 patients with IVLBCL, 19 (76%) showed dermal and subcutaneous invasion, and the remaining 6 (24%) showed only subcutaneous invasion. All 77 specimens with IVLBCL lesions among the 82 investigated specimens exhibited subcutaneous invasion. We also found that 14 of 38 (37%) specimens in which the affected vessels presented only in subcutaneous fat tissue showed a minimum depth exceeding 5 mm from the skin surface to the lesion [35]. Moreover, we conducted a study that compared specimen depth for punch RSB versus incisional RSB. The median depth of the punch specimens was found to be less than that of the incision specimens. In addition, approximately 40% of the specimens obtained by the punch method measured less than 5 mm [36]. Hence, a punch biopsy may result in false negatives. Two cases of IVLBCL diagnosed by incisional RSB after the failure of punch RSB due to an insufficient amount of subcutaneous fat tissue have been reported [34,37]. From the above, it is clear that the affected vessels are predominantly distributed in the subcutaneous fat tissue [11,35]. Thus, the punch method could be insufficient for detecting tumor cells in IVLBCL (Fig. 1). Supporting this idea, the sensitivity of punch RSB was found to be low, at 0-50% [38,39], whereas Matsue et al. reported the sensitivity of incisional RSB to be high (77.8%) [14]. Since the punch method is the predominant method in Western countries, the sensitivity of RSB in Western countries might be lower [34,38-41].
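The sensitivity figures quoted above follow the usual definition, sensitivity = true positives / (true positives + false negatives). A small illustrative calculation (Python); the 5-of-30 split comes from the bone marrow data cited above, while the incisional split is a hypothetical one consistent with the reported 77.8%:

def sensitivity(true_pos: int, false_neg: int) -> float:
    # Proportion of true disease cases that the test detects.
    return true_pos / (true_pos + false_neg)

print(f"bone marrow biopsy: {sensitivity(5, 25):.1%}")   # 5 of 30 -> 16.7%
print(f"incisional RSB:     {sensitivity(7, 2):.1%}")    # hypothetical 7 of 9 -> 77.8%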
Maekawa et al. reported a case series of RSB that used either an incisional or a punch method. All nine patients with RSB-positive IVLBCL underwent incisional RSB in the Maekawa report [33]. In summary, the sensitivity of punch RSB was found to be lower than that of incisional RSB (Table 1). Consequently, sufficient sampling depth is important for detecting IVLBCL lesions.

To determine how deep the biopsy specimen should be, Maekawa et al. examined the pathology of patients with incisional RSB-positive IVLBCL. Of samples from nine patients with incisional RSB-positive IVLBCL, samples from eight patients included a whole layer of subcutaneous fat tissue with the deep fascia. The Maekawa group found that there were no specimens in which the affected vessels were limited to the lower layer of subcutaneous fat tissue [33]. Although it seems more diagnostically effective to sample specimens that include the deep fascia, which could contain many more vessels, it is difficult to perform such sampling in patients who have thrombocytopenia and a deteriorating condition. Furthermore, it is difficult to ensure hemostasis in the fascia, which has a rich blood flow. Moreover, the Maekawa group reported that there were significantly more atypical cells in the upper layer than in the lower layer of subcutaneous fat tissue [33]. Accordingly, sampling specimens that include the superficial fascia can be regarded as reasonable. For deeper sampling, Winge et al. proposed a telescoping method in which a small punch biopsy is telescoped into a larger punch biopsy defect [42]. They concluded that this method could obtain adequate subcutaneous fat tissue. With regard to hemostasis, however, a narrow operative field is associated with a higher risk of bleeding, especially in IVLBCL patients with thrombocytopenia. Therefore, further studies are warranted. An incisional biopsy can sample specimens not only to a much greater depth, but also to a greater width than the punch method. Although the difference between these widths may also contribute to the high sensitivity of incisional RSB, no study has yet addressed this question.

The greater the number of specimens, the higher the sensitivity of the RSB. However, complications from bleeding are more likely in IVLBCL patients with thrombocytopenia. Moreover, obtaining numerous specimens is often painful for the patient. We previously reported high sensitivity and specificity for incisional RSB that obtained specimens from at least three separate sites [14]. Moreover, it is desirable to take an adequate volume of specimens from fat-rich areas. Consequently, it is reasonable to obtain specimens from at least three separate fat-rich areas of the skin, such as the thigh or abdomen, for maximum sensitivity and minimum risk of complications.

Appropriate sites for RSB: normal-appearing skin vs. visible skin lesions

Although only 14.3% of patients with IVLBCL were found to have skin lesions, visible skin lesions associated with IVLBCL are known to present diversely as nodules, plaques, erythema, and telangiectasia [16,43]. There is the question of whether to look for visible skin lesions as targets for sampling before RSB. Arai et al. compared the positivity rates for skin lesions to those for normal-appearing skin in IVLBCL. They concluded that neoplastic cells may be present more frequently in skin lesions than in normal-appearing skin [44]. Several reports have recommended biopsies from visible skin lesions, especially cherry angiomas [45-48].
Ishida et al. reported an IVLBCL patient who had no affected vessels in the specimens taken from normal-appearing skin and who was finally diagnosed from a cherry angioma [46]. We also encountered an IVLBCL case with tumor cells within the vessels in a cherry angioma (Fig. 2).

In general, cherry angiomas arise in everyone, and their numbers increase with age. Because of their abundant capillaries, tumor cells become trapped in those vessels. Thus, a cherry angioma may show higher rates of positivity than normal-appearing skin in RSB [47]. On the other hand, Saurel et al. described the case of a cutaneous variant of IVLBCL with nodules and spider angiomas that mostly disappeared after the completion of therapy. They evaluated the expression of vascular endothelial growth factor and secreted phosphoprotein 1, which are angiogenic factors, in tumor cells. As the tumor cells expressed these angiogenic factors, they hypothesized that these factors might play a role in the formation of pseudohemangiomoformative lesions [49]. Similarly, Weingarten et al. reported a case of IVLBCL with cherry angiomas that disappeared and subsequently reappeared [50]. Thus, the unusual progress of cherry angiomas in patients with IVLBCL might be due to angiogenic factors from the tumor cells. However, due to the small number of cases, the cause of cherry angiomas in IVLBCL needs to be further investigated.

As mentioned above, the eruptions in IVLBCL patients can be varied, so thrombophlebitis, vasculitis, and livedo racemosa might be included as differential diagnoses for IVLBCL. When those symptoms are seen in IVLBCL patients, they are initially caused by the occlusion of vessels by tumor cells, which consequently activates the coagulation cascade, and thrombosis develops within the lumina of the vessels [43]. If superficial vessels are involved, the clinical pattern may be that of livedo racemosa; if vessels in the deep dermis or subcutaneous fat tissue are involved, the clinical lesions mimic those of erythema nodosum or nodular vasculitis [43]. Essentially, skin lesions in IVLBCL may reflect microscopic IVLBCL involvement. Hence, we recommend taking specimens not only from normal-appearing skin, but also from any clinically recognized skin lesions. Several patients with IVLBCL have happened to be diagnosed from skin lesions, such as peau d'orange [26,27], indurated dermal plaques with overlying telangiectasia [25,27], subcutaneous nodules [24,27], and red macules with nodules [24]. In addition, a case of IVLBCL diagnosed from skin lesions detected by dermoscopy was recently reported [51]. Dermoscopy enabled the identification of appropriate biopsy sites by showing telangiectasia that was too faint to be recognized by the naked eye. Hence, to identify appropriate biopsy sites, normal-appearing skin should be re-examined using dermoscopy to find subtle telangiectasia. In the absence of skin lesions, positron emission tomography (PET)/computed tomography (CT) may help determine biopsy sites before RSB. In fact, Matsukura et al. reported a case of IVLBCL that was eventually diagnosed by the rebiopsy of an abnormal uptake site on PET/CT after an initially negative RSB [52].

The selection of patients for whom RSB is appropriate

The prevalence of patients evaluated by RSB who are actually diagnosed with IVLBCL varies widely from study to study.
Matsue et al. examined patients who underwent RSB for suspected IVLBCL. Among the 111 patients who underwent RSB, 33 were finally diagnosed with IVLBCL [14]. In contrast, Rozenbaum et al. reported that 12% of patients receiving RSB were eventually found to have the disease [39]. Similarly, we reported that patients with IVLBCL comprised 11% of patients who had undergone RSB previously [36]. Given these facts, appropriate selection criteria for RSB should be established while avoiding missed diagnoses. Matsue et al. proposed six predictors for positive RSB: (1) unexplained fever (≥ 38 °C), (2) altered consciousness, (3) hypoxemia (≤ 95%), (4) thrombocytopenia (< 120 × 10³/μL), (5) high serum lactate dehydrogenase (LDH) (> 800 U/L), and (6) high soluble interleukin-2 receptor (sIL-2R) (> 5000 U/mL). The more of these predictors that were met, the higher the rate of positive RSB [14]. Among these items, Sumi-Mizuno et al. focused on LDH and sIL-2R. If both parameters are normal, the Sumi-Mizuno group considers that RSB should not be performed, to avoid unnecessary biopsies [53]. These predictors would be useful in selecting patients for whom RSB is appropriate.

Complications of RSB

Despite RSB being a less-invasive method, patients can still have complications, such as bleeding. It was reported that a patient with IVLBCL experienced hemorrhagic shock after an incisional RSB, and it was proposed that a punch biopsy would be preferable as an initial step to prevent accidental bleeding in IVLBCL patients with thrombocytopenia. It was also proposed that if the initial RSB is negative, a second RSB may be considered at an alternative site [33]. However, repeated biopsies are difficult because of the patient's deteriorating condition and rapid disease progression. To minimize hemorrhagic risk, it is important to correct any coagulopathy and transfuse platelets before RSB [35]. No severe complications other than bleeding have been reported for RSB. With adequate preparation before RSB, this complication can be avoided.

Conclusion

RSB plays an important role in diagnosing IVLBCL. Even patients in a deteriorated condition are able to receive RSB because of its lower invasiveness. Moreover, unlike biopsies from other organs, RSB can be performed at the bedside. Even so, we must recognize the risk of this method. Furthermore, because an inappropriate RSB can result in a false negative, it is important to perform incisional RSB appropriately. For an accurate diagnosis of IVLBCL, we should plan to obtain at least three specimens from fat-rich areas. Also, we should examine the whole body for skin manifestations such as cherry angiomas, detect faint lesions including telangiectasia by dermoscopy, and consider PET/CT before performing RSB. If there are any skin lesions that might contain IVLBCL lesions, we should consider RSB not only from normal-appearing skin, but also from the lesional skin. RSB has become a popular method; however, unnecessary RSBs might still be performed. Thus, we recommend using the predictive criteria for positive RSB mentioned above. IVLBCL can be cured if it is diagnosed early and accurately. We strongly expect that an appropriate method of performing RSB will gain acceptance as a common diagnostic technique worldwide.

Funding

Open Access funding provided by Akita University.
Fig. 1 A specimen of intravascular large B-cell lymphoma sampled by incisional random skin biopsy. Tumor cells stained by CD20 immunostaining are predominantly distributed within the vessels in the subcutaneous fat tissue. Black arrows indicate the affected vessels. The blue-framed area shows the depth and width of the specimen that can be sampled by the punch method; the specimen that can be taken by the incisional method is indicated by the green-framed area (×40; CD20 immunostaining).

Table 1 Summary of a case series on RSB for suspected IVLBCL
2024-04-03T06:18:19.651Z
2024-04-02T00:00:00.000
{ "year": 2024, "sha1": "3ef0dd67e9dcbd4456c1fca3d4fd0fecb4a0431f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12185-024-03757-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7711f27920092d0f670931c89803aa528f6959b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
131932749
pes2o/s2orc
v3-fos-license
Aerosol Distribution in the Planetary Boundary Layer Aloft a Residential Area

Atmospheric aerosol is an omnipresent component of the Earth's atmosphere. Aerosol particles with diameters < 100 nm or > 1 μm are defined as ultrafine or coarse aerosol particles, respectively. Aerosol particle concentrations within the planetary boundary layer (PBL) are measured at ground level, while their vertical profiles in the PBL are usually estimated by modelling. The aim of this study was to construct vertical concentration profiles of ultrafine and coarse aerosol particles from airborne and ground measurements conducted in an urban airshed. Airborne measurements were done by an unmanned airship, remotely controlled with 10 Hz GPS position tracking and electrically powered with propulsion vectoring, which allows an average cruising speed of 6 m s⁻¹. The airship carried three aerosol monitors and a temperature sensor. The monitors acquired 1 Hz data on the mass concentration of coarse and the number concentration of ultrafine particles. Four flight sequences were conducted on the 2nd of March 2014 above Plesna village, an upwind suburb of Ostrava in the Moravian-Silesian region of the Czech Republic. The region is a European air pollution hot-spot. Repeated flights were carried out at several height levels up to 570 m above ground level (a.g.l.). The early morning flight revealed a temperature inversion in the PBL up to 70 m a.g.l. This led to coarse particle concentrations of 50 μg m⁻³ below the inversion layer and 10 μg m⁻³ above it. Concurrently, air masses at 90-120 m a.g.l. were enriched with ultrafine particles up to 2.5×10⁴ cm⁻³, which may indicate a fanning plume from a distant emission source with a high emission height. During the course of the day, concentrations of ultrafine and coarse particles gradually decreased. Nevertheless, a sudden increase of ultrafine particle concentrations up to 3.7×10⁴ cm⁻³ was registered at 400 m a.g.l. at noon, and also, after a lag of 20 min, at the ground. This may indicate the formation of new aerosol particles at higher altitudes, which are then transported downward by evolved convective mixing. The detailed information acquired by the airship measurements allows us to better understand the processes resulting in increases of aerosol particle concentrations at ground level in urban air.

Introduction

The planetary boundary layer (PBL), the lowest part of the troposphere, is influenced by the exchange of heat, water vapour, trace gases and aerosol particles with the Earth's surface [1]. Surface heating produces a turbulent, well-mixed PBL during the day, while surface cooling after sunset may lead to a temperature inversion, which causes the PBL to stratify. A key constituent of the PBL is atmospheric aerosol, a colloid originating from natural and anthropogenic sources. While natural aerosol sources prevail over anthropogenic ones at remote locations or in the open troposphere [2], anthropogenic sources dominate in the urban environment. Additionally, a temperature inversion in the PBL prevents mixing of the anthropogenic aerosol with the free troposphere. Therefore, knowing the atmospheric aerosol distribution within the PBL at the urban microscale is important for human exposure assessment. Unmanned aerial systems offer advantages as research platforms because of the possibility of investigating atmospheric parameters at small scales and low altitudes.
Airships can fly at low cruising speeds at constant heights, with minimal logistic requirements and lower costs compared to aircraft [3,4,5,6]. Also, compared to drones, airships can carry a heavier payload [7]. This work presents and discusses airborne and ground-based measurements of aerosol particle concentrations, performed in March 2014 in a suburb of Ostrava, Czech Republic. This city is known as a European air pollution hot-spot [8,9,10]. The aim of this study was to construct vertical concentration profiles of size-segregated aerosol particles from airborne and ground measurements conducted in this urban air pollution hot-spot.

Methods

The measurements were performed in Plesna, a suburb of Ostrava composed mainly of family houses (Figure 1). The suburb is relatively far from any direct industrial and/or traffic pollution. The airborne measurements were realized with an unmanned airship, remotely controlled with 10 Hz GPS position tracking and electrically powered with propulsion vectoring, which allows an average cruising speed of 6 m s⁻¹. The precision of the airship position tracking was 5-10 m vertically and 5-8 m horizontally. The scientific payload was composed of a laser nephelometer (DustTrak DRX-8533, TSI Inc.), two condensation nuclei counters (P-Trak 8525, TSI Inc.), and a temperature sensor (111DL, Voltcraft). The nephelometer measured the mass concentration of coarse aerosol particles. Each of the counters was equipped with a Particle Size Selector (PSS, model 376060, TSI Inc.) at the aerosol inlet, but with different adjustments. The first PSS housing held 7 screens, which raises the smallest detectable particle size limit to about 100 nm, while the second PSS housing held no screens. Therefore, the first counter detects particles within the size range of 100-1000 nm, while the second detects particles within the size range of 20-1000 nm. The particle number concentration (PNC) within the ultrafine size range of 20-100 nm is obtained as the difference between the two counters (a minimal sketch of this subtraction is given below). The temperature sensor and the aerosol monitors acquired 1 Hz data. Concurrent ground measurements were conducted at a fixed site (49°51'57.31"N, 18°7'55.52"E, 290 m altitude), approximately 300 m from the airship launch site (Figure 1). Five-minute integrals of particle number concentrations and size distributions in the size range 14-10,000 nm were measured by a Scanning Mobility Particle Sizer (SMPS model 3936L75, TSI Inc.) and an Aerodynamic Particle Sizer (APS-3321, TSI Inc.). Meteorological parameters (wind speed, wind direction, relative humidity and temperature) were also recorded.

Results and discussion

Four flights were conducted: the 1st at 06:40-07:50, the 2nd at 08:13-09:40, the 3rd at 09:55-11:11 and the 4th at 11:25-12:19. During the first flight, in the early morning, two temperature inversion layers were observed (Figure 2, left). The first was formed up to 70 m a.g.l., while the second reached heights of 180-230 m a.g.l. Coarse aerosol mass concentrations of 20-50 μg m⁻³ below the first inversion layer reflected coarse aerosol sources on the ground, also indicated by the elevated coarse particle concentration recorded at the fixed site on the ground (Figure 5, top). Above the first inversion layer, the concentration dropped to less than 10 μg m⁻³ (Figure 2, right).
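Before turning to the ultrafine results, the two-counter subtraction described in the Methods can be sketched as follows (Python); the 1 Hz readings below are illustrative placeholders:

import numpy as np

# Counter 2 (no screens): ~20-1000 nm; counter 1 (7 screens): ~100-1000 nm.
n_20_1000 = np.array([24500.0, 25100.0, 24800.0])    # cm^-3
n_100_1000 = np.array([4200.0, 4350.0, 4100.0])      # cm^-3

# Ultrafine PNC (20-100 nm) = difference of the two synchronized series;
# small negative values from counting noise are clipped to zero.
n_ultrafine = np.clip(n_20_1000 - n_100_1000, 0.0, None)
print("ultrafine PNC (20-100 nm), cm^-3:", n_ultrafine)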
In contrast to coarse particles, ultrafine particle number concentrations were very high, up to 2.5×10⁴ cm⁻³, at heights of 90-120 m a.g.l., which may indicate a fanning plume from a distant emission source with a high emission height (Figure 3, right). During the second flight, a temperature inversion layer was observed only at heights of 150-280 m a.g.l., as a consequence of surface heating. Coarse particle mass concentrations up to 39 μg m⁻³ and ultrafine particle number concentrations up to 20×10⁴ cm⁻³ were recorded in this layer. During the third and fourth flights, the inversion disappeared due to the heating of the Earth's surface. Air masses could then mix vertically, and the concentrations of both coarse and ultrafine particles in the PBL decreased. Nevertheless, a sharp increase in the number concentration of ultrafine particles, up to 3.7×10⁴ cm⁻³, was recorded at heights of 380-400 m a.g.l. (Figure 4, right) during the descending flight of the airship at 12:00:23-12:00:47. After a delay of 20 minutes, at 12:20-12:50, a sudden increase of ultrafine particle concentration to 1.5-2×10⁴ cm⁻³ was also observed at the fixed site on the ground (Figure 5, bottom). This may indicate a process of new particle formation occurring at higher altitude; the particles are subsequently transported downward by enhanced convective mixing and registered on the ground. Similar spatial/temporal dynamics of the PBL were observed in Melpitz, Germany [11].

Figure 5. Contour graphs of the temporal variation of the aerosol size distribution for mass (top, size range 0.5-10 μm) and for number (bottom, size range 14-732 nm), and the ultrafine particle number concentration registered at the fixed site.

Conclusions

The dynamics of the vertical profiles of temperature and of coarse and ultrafine aerosol particle concentrations in the PBL, in the microenvironment of an urban airshed, were revealed by measurements with an unmanned airship. In the early morning, temperature stratification of the PBL caused coarse particles to accumulate below the inversion layer, while ultrafine particles, emitted from a distant source with a high emission height, were trapped at heights of 90-120 m a.g.l. During the course of the day, the PBL stratification ceased, and the gradually evolving turbulent mixing led to the downward transport of ultrafine particles newly formed at higher elevations. This detailed information allows us to understand the processes and apportion the sources of aerosol particles at ground level in the urban microenvironment, and an unmanned airship seems to be an optimal platform for such airborne measurements.
2019-04-26T14:24:02.878Z
2016-10-01T00:00:00.000
{ "year": 2016, "sha1": "c22a0c154491a9017a541e823669bcaa247e7eaf", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/44/5/052017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "7671e661e51a2df29a4ed487e5eb65777cecd5c3", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
52289627
pes2o/s2orc
v3-fos-license
An mHealth App for Self-Management of Chronic Lower Back Pain (Limbr): Pilot Study

Background: Although mobile health (mHealth) interventions can help improve outcomes among patients with chronic lower back pain (CLBP), many available mHealth apps offer content that is not evidence based. Limbr was designed to enhance self-management of CLBP by packaging self-directed rehabilitation tutorial videos, visual self-report tools, remote health coach support, and activity tracking into a suite of mobile phone apps, including Your Activities of Daily Living, an image-based tool for quantifying pain-related disability.

Objective: The aim is to (1) describe patient engagement with the Limbr program, (2) describe patient-perceived utility of the Limbr program, and (3) assess the validity of the Your Activities of Daily Living module for quantifying functional status among patients with CLBP.

Methods: This was a single-arm trial utilizing a convenience sample of 93 adult patients with discogenic back pain who visited a single physiatrist from January 2016 to February 2017. Eligible patients were enrolled in a 3-month physical therapy program and received the Limbr mobile phone app suite for iOS or Android. The program included three daily visual self-reports to assess pain, activity level, and medication/coping mechanisms; rehabilitation video tutorials; passive activity-level measurement; and chat-based health coaching. Patient characteristics,

Introduction

Management of chronic conditions places a considerable burden on patients, communities, and health care systems worldwide [1], but evidence indicates that symptom management in chronic disease can be significantly improved through self-management interventions [2,3]. Given that mobile phone usage in the United States has become widespread in recent years and is still on the rise [4], advancements in mobile technology can be leveraged to deliver mobile health (mHealth) apps that support patients in the effective self-management of chronic conditions. In a recent US study of mHealth use among primary care patients, 55% of respondents reported having a mobile phone, and 70% of these had used mHealth apps for the management of health conditions [5].

Conditions for which exercise therapy has been shown to be effective, such as chronic lower back pain (CLBP) [6], stand to benefit greatly from mHealth integration because sustained adherence to exercise-based rehabilitation is vital for recovery [7-9]. The effectiveness of mobile phone-based interventions for measuring and influencing physical activity has been explored in a number of studies, and there is increasing evidence that mHealth interventions that are adaptive to user preference, while supplementing standard care with disease monitoring, self-reporting, education, and the promotion of physical therapy adherence, have the potential to improve health outcomes among those living with chronic diseases [1,10,11]. Support from a health coach has also been shown to help drive mHealth app use [10,12], and remote health coaching in the form of text messages can effectively improve self-management of symptoms and promote long-term behavior change retention [13,14], including increased compliance with physical therapy [15]. Ecological momentary assessment, or "experience sampling" (including self-report surveys and sensor-assisted reminders), provides effective tools for collecting in situ user data [16,17] and can be used to enhance mHealth interventions for the self-management of CLBP.
Limbr is a compliance enhancement intervention that was developed to incorporate many of these elements by packaging self-directed rehabilitation tutorial videos, personalizable visual self-report tools, health coach support, and sensor-assisted, passive activity-level tracking into a suite of mobile phone-based apps for patients with CLBP. The Limbr program aims to promote adherence to the Back Rx exercise rehabilitation regimen [18], increase engagement in self-directed management of pain (including pain, medication, and exercise tracking), and improve self-reported outcomes for pain. One novel aspect of Limbr is its use of Your Activities of Daily Living, an image-based tool for characterizing functional status [19]. Although a recent preliminary evaluation of Your Activities of Daily Living conducted among a small number of patients with arthritis suggested promise of its utility [19], it has not yet been evaluated among a larger patient group.

Despite the existence of numerous patient-targeted mobile phone apps for pain tracking, self-management, and exercise training, the implementation of mHealth technology for chronic conditions remains an important area for further research. In particular, many currently available mobile phone apps targeted at low back pain (LBP) management are low in quality, offering content that is not based on current research and has not been reviewed or tested by health care providers [20,21]. This study was performed to (1) describe patient engagement with the Limbr program, (2) describe the patient-perceived utility of the Limbr program, and (3) assess the validity of the Your Activities of Daily Living module as a quantifier of pain and disability level among patients with CLBP.

Study Design

This was a single-arm trial utilizing a convenience sample of 93 adult patients who visited physiatrist Vijay Vad, MD (New York, NY, USA), from January 2016 through February 2017 and were diagnosed with discogenic back pain. Included patients were required to be English speaking and to have a diagnosis of LBP with predominantly axial symptoms, persistence of symptoms for at least 3 months, lumbar intervertebral disk pathology evident on magnetic resonance imaging, and possession of a mobile phone device (iPhone models 5S and later or Android models 2.3 and later). Patients were excluded if they had a history of trauma, a history of lumbar spine surgery or severe lumbar disk degeneration prior to the beginning of the study, or concurrent pathology that could have contributed to axial low back symptoms (eg, spondylolysis, spondylolisthesis, facet arthropathy), or if their case involved a legal claim. Informed consent was obtained from patients at the initial doctor visit (onboarding) after the nature of the study had been explained. The study is registered on ClinicalTrials.gov (identifier NCT03040310), was approved by the institutional review board of the Hospital for Special Surgery (New York, NY), and was conducted in accordance with all applicable regulations.
Intervention

Eligible patients were enrolled in an mHealth-based 3-month physical therapy program (Limbr) and received a mobile phone app suite free of charge to monitor and manage their CLBP. The program included three daily visual self-reports to assess pain, medication/coping mechanisms, and affect; self-directed rehabilitation via Back Rx video tutorials personalized for patients with discogenic back pain; and passive measurement of activity levels. At onboarding, patients underwent baseline assessments (including a Your Activities of Daily Living full assessment and an Oswestry Disability Index [ODI] [22] assessment) and received assistance with the installation and setup of the mobile phone app suite. For the duration of the program, patients received remote support from a health coach available in real time, and several patient engagement methods were utilized to improve user compliance. The elements of Limbr are described in further detail subsequently.

Self-Reports

The daily visual self-reports (Your Activities of Daily Living [19], Medications of Daily Living, and the Photographic Affect Meter [PAM] [23]) used an experience sampling approach to collect in situ user data for the purpose of providing tailored content to users. Rather than relying on words or numbers, these questionnaires offer a variety of photographs from which the user selects those that best describe their mood or condition. Your Activities of Daily Living (Figure 1) [19] is an image-based survey inspired by the PAM [23] that characterizes a patient's functional status using images representing activities of daily living (ADL) from the Western Ontario and McMaster Universities Arthritis Index [24] and the Boston Activity Measure for Post-Acute Care [25], which are validated clinical measures. To complete the Your Activities of Daily Living assessment, patients used the app to select images of activities during which they had recently experienced LBP-induced difficulty. The full assessment conducted at onboarding included 47 images and was intended as a substitute for a conventional, clinician-administered long-form ADL questionnaire (eg, the ODI). The Your Activities of Daily Living daily assessment was intended to provide interim reports between time points at which a long-form assessment would typically be administered, and included only the images selected by the patient during the baseline full assessment.

Medications of Daily Living (Figure 2) is an app-based medication log with a visual interface. Similar to Your Activities of Daily Living, patients completed the daily Medications of Daily Living assessment by choosing images that characterized any LBP medication or coping strategies used over the past 24 hours. All medications and coping strategies included in Medications of Daily Living had been confirmed by the study physician as relating to LBP. PAM (Figure 3), used to assess patients' daily affect, is a rigorously validated tool for measuring emotion through a series of images [23]. Photos in the PAM are arranged in a grid from low arousal and negative valence in the bottom left to high arousal and positive valence in the top right. To complete the daily PAM assessment, patients used the app to choose the image that best represented their emotion at the time of assessment.
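An illustrative sketch (Python) of how a selected PAM grid cell could be turned into valence/arousal scores. The linear mapping and the 4×4 grid size here are assumptions for illustration only; the published PAM scoring [23] should be consulted for the actual scheme.

def pam_scores(row: int, col: int, n_rows: int = 4, n_cols: int = 4):
    # Map a grid cell to (valence, arousal) in [-1, 1]; row 0 = bottom, col 0 = left.
    valence = 2.0 * col / (n_cols - 1) - 1.0   # left (negative) -> right (positive)
    arousal = 2.0 * row / (n_rows - 1) - 1.0   # bottom (calm)  -> top (aroused)
    return valence, arousal

print(pam_scores(0, 0))   # bottom-left -> (-1.0, -1.0): negative valence, low arousal
print(pam_scores(3, 3))   # top-right   -> ( 1.0,  1.0): positive valence, high arousal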
Self-Directed Rehabilitation

The self-directed rehabilitation component of Limbr was administered via Force Therapeutics [26], an app providing a series of exercise videos tailored to patients with LBP. Patients were requested to watch the videos three times per week. Patients used the app to view the videos and to indicate whether they had watched them.

Activity-Level Measurement

Activity levels, including personal location and activity classification information (eg, minutes active per day, hours out of the house), were monitored via Moves [27], an app that utilizes mobile device sensors for the passive collection of activity-level data.

Health Coach Support

Data-informed health coaching with a certified health coach was made available via Limbr Chat, a text-messaging app. (Standard iOS and Android messaging apps could not be used, in order to maintain compliance with the Health Insurance Portability and Accountability Act of 1996 [28].) The coach in this study was familiar with the Force Therapeutics exercises and advised patients about exercise, technical issues, and personal support. The coach monitored participant data from the Limbr suite, including daily self-reports, indicators of participant compliance, and activity levels; identified trends in participants' progress to provide personalized care; and used the Limbr Chat app to send responses and other messages, including support messages and reminders to interact with the program. The coach used casual language and abbreviations typical of text messaging (eg, "u" instead of "you") to promote an informal relationship and reduce intimidation on the part of participants, and any patient messages were responded to within 24 hours.

Patient Engagement

During the study, Limbr participants were categorized according to their level of engagement with the program for the purpose of tracking and improving compliance. Patient categorization was updated three times a week on the basis of the frequency and quality of engagement with the interactive components of the system (watching videos or completing the visual self-reports). Patients were categorized as "frequently interacting" (>2 interactive components/week), "infrequently interacting" (<1 interactive component/week), and "unproductive-active" (1-2 interactive components/week).
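The categorization rule just described is simple enough to state directly in code; a minimal sketch (Python) applying it to a week's count of interactive components (videos watched plus visual self-reports completed):

def categorize(interactions_per_week: int) -> str:
    # Thresholds as defined in the study protocol above.
    if interactions_per_week > 2:
        return "frequently interacting"
    if interactions_per_week < 1:
        return "infrequently interacting"
    return "unproductive-active"   # 1-2 interactive components/week

for n in (0, 1, 2, 5):
    print(n, "->", categorize(n))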
To promote sustained engagement, all frequently interacting and unproductive-active participants were sent weekly summary emails (Figure 4) consisting of visual feedback regarding their interactions across the Your Activities of Daily Living and Medications of Daily Living visual self-reports, self-directed rehabilitation (Force Therapeutics), and passively collected mobility data. To encourage engagement among infrequently interacting participants, an email was sent reminding them to engage with the Limbr components and log their activities. Unproductive-active patients were sent personalized messages from the Limbr health coach, who worked with the patients to construct a new care plan according to individual patient needs. For example, the coach might suggest that a patient reduce interaction with the daily assessments from daily to three times per week. Finally, the Limbr health coach would check in weekly with all frequently interacting, unproductive-active, and infrequently interacting patients, inquire about their progress, and send personalized motivational messages. Examples of messages that might be sent to encourage patient engagement and/or check in regarding a patient's progress are:

Patient Engagement

Patient engagement was assessed using three outcome variables: (1) the frequency of interactions across the visual self-reports, (2) a binary outcome representing at least one viewing of the physical therapy videos versus none watched, and (3) the frequency of messages to the health coach. For outcome analysis, an interaction was defined as an instance of using Your Activities of Daily Living, Medications of Daily Living, PAM, or the Limbr Chat app during the study period. The percentage of physical therapy videos watched was automatically collected by the Force Therapeutics app. The frequency of messages was computed as the number of times a participant sent a message to the coach using the Limbr Chat app during the study period.

Patient-Perceived Utility of Limbr

The overall utility of the Limbr program was assessed using a Web-based feedback survey administered to all participants at the completion of the study. The feedback survey consisted of 13 questions, presented on a 5-point Likert scale, and divided into three sections that assessed the perceived helpfulness of (1) the patient engagement features (the Limbr Chat app and weekly summary emails), (2) the app notifications reminding users to complete the daily self-reports and Back Rx exercises, and (3) the visual self-reports for Your Activities of Daily Living, Medications of Daily Living, and PAM. The response options for each section ranged from "strongly disagree/not useful" (5 points) to "strongly agree/very useful" (1 point).

Association of Your Activities of Daily Living with Conventional Pain Assessment

To determine whether the Your Activities of Daily Living visual self-report could serve as a proxy for a more traditional pain index, outcomes from the Your Activities of Daily Living assessment at baseline were compared with those from the ODI, a questionnaire that measures levels of disability in ADLs among patients rehabilitating from LBP [22]. Participants were directed to complete the ODI at onboarding (baseline) and at 2 weeks, 6 weeks, and 3 months after enrollment. The ODI was completed via Ohmage [29], a mobile survey app utilized for recording, analyzing, and visualizing participant data and administering clinical surveys.
Statistical Analysis

Baseline patient characteristics, patient engagement, and patient-perceived utility of the Limbr program were analyzed descriptively; means and standard deviations were provided for continuous variables, and numbers and percentages were provided for discrete variables. Associations between participant characteristics and level of interaction with Limbr (total interactions and interactions per week) were analyzed using multiple linear regression to determine whether characteristics (either collectively or individually) had any effect on use of the Limbr system.

To analyze the association between Your Activities of Daily Living and ODI assessment results at baseline, the Pearson correlation coefficient was calculated after confirming normality using the Shapiro-Wilk test. In addition, hierarchical linear modeling (HLM) was used to analyze the ability of the Your Activities of Daily Living daily self-reports to predict ODI scores among participants who both entered multiple ODI scores and completed the full Limbr program. For the HLM analysis, the outcome was the ODI score reported on a particular day and the predictor variable was the Your Activities of Daily Living score reported closest in time to that ODI score (a minimal sketch of this nearest-in-time pairing is given after the engagement results below).

Patient Characteristics

A total of 93 participants were enrolled from January 2016 through February 2017, of which 13 dropped out after completing the onboarding session and before interacting with the Limbr components. Of the remaining 80 participants, an additional 45 dropped out before completing the 3-month program.

Patient Engagement

The 35 participants who completed the program averaged 96 total interactions with the three daily self-reports over the 12-week study, roughly evenly distributed among Your Activities of Daily Living, Medications of Daily Living, and PAM. The number of interactions per week ranged from 1 to 29, with a mean of 8 (SD 7). Participants who interacted with the daily assessments between daily and once every 3 days comprised slightly over 50% of the study group. Median participant interaction frequency across assessments is depicted in Figure 5. Participants were instructed to watch the Force Therapeutics instructional videos only 1 to 3 times a week, as opposed to daily. Interaction data show that 70% (19/27) of participants interacted with Force Therapeutics a median of at least once a week (Figure 6). A total of 147 messages were sent from participants to the coach using the Limbr Chat app. The largest share of these (73/147, 49.7%) were tech support messages (eg, "I cannot seem to log in to force. Can you help?"), with the next most common type (47/147, 32.0%) comprising messages about the Force Therapeutics exercises (eg, "Yes my knee reaches. Not the hip flexor. The hip flexor is tight and I feel pain. So was wondering if that's normal and if not then I shouldn't stretch that much."). A smaller percentage (23/147, 15.6%) were medical messages (eg, "Thank you. I have had extreme back pain for the last 2 days. I hurts to sit, sleep, and bend at the moment"), and 2.7% of messages (4/147) recorded participant criticisms of the system.
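A minimal sketch (Python) of the nearest-in-time pairing described in the Statistical Analysis section: each ODI score is matched with the Your Activities of Daily Living (YADL) daily score reported closest in time, producing the predictor/outcome pairs for the mixed-effects regression. Timestamps are day offsets and all values are illustrative placeholders.

yadl = [(0, 21), (3, 19), (7, 18), (14, 15), (42, 12)]   # (day, YADL score)
odi = [(0, 30), (14, 26), (42, 22)]                      # (day, ODI score)

pairs = []
for odi_day, odi_score in odi:
    # Closest YADL report in time to this ODI assessment.
    yadl_day, yadl_score = min(yadl, key=lambda r: abs(r[0] - odi_day))
    pairs.append((yadl_score, odi_score))

print(pairs)   # [(21, 30), (15, 26), (12, 22)]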
Patient-Perceived Utility of Limbr

Feedback surveys were returned by 21 participants; a question-by-question breakdown of participant responses is presented in Figure 7. Among respondents, 11 of 21 (52%) found the daily self-reports to be helpful in tracking pain-related ADL functionality, medication use, and affect. In particular, 13 of 21 (62%) found that the Your Activities of Daily Living daily assessment helped them track the activities of daily living affected by their back pain; 16 of 21 (76%) and 15 of 21 (71%) agreed that the daily notifications were helpful in reminding them to complete the Force Therapeutics exercises and daily surveys, respectively; 17 of 21 (81%) found the Limbr system easy to use; and 13 of 21 (62%) rated their overall experience as either good or excellent.

Association of Your Activities of Daily Living With Conventional Pain Assessment

Baseline Your Activities of Daily Living and ODI scores were found to be significantly associated (Pearson correlation coefficient=.551, P<.001). Linear regression modeling further revealed that the baseline Your Activities of Daily Living score was a significant predictor of the baseline ODI score, with ODI increasing by 0.30 units for every 1-unit increase in Your Activities of Daily Living (P<.001). Similarly, HLM analysis (among the 14 patients with multiple ODI scores who completed the full Limbr program) indicated that Your Activities of Daily Living daily assessment scores were significant predictors of ODI scores recorded over the course of the study, with ODI increasing by 0.33 units for every 1-unit increase in Your Activities of Daily Living (P=.01).

Discussion

In this pilot study conducted among patients with CLBP, engagement with the Limbr compliance enhancement intervention was high among those who finished the program; the majority of completers interacted with the daily self-reports multiple times per week, and 70% used the self-directed rehabilitation component as directed. Approximately half of the feedback survey respondents found the daily self-report components of Limbr to be helpful, and more than 70% indicated that the daily notifications helped them remember to perform their rehabilitation exercises. Moreover, the Your Activities of Daily Living assessment was found to be significantly associated with conventional pain assessment scores, thereby validating its utility as a novel quantifier of pain and disability level. These findings suggest that Limbr has substantial potential as an approach to promoting patient engagement and self-directed rehabilitation adherence for CLBP management.
Although US data regarding mHealth interventions for patients with CLBP are limited, evidence exists that Web- or mobile-based strategies can be effective for reducing pain and improving self-management in this population [30-32]. In this study, participants who completed the Limbr program exhibited a high level of engagement throughout the trial, frequently interacting with the self-report modules as well as with Force Therapeutics. It is particularly encouraging that most respondents found Limbr helpful in remembering to engage with self-directed rehabilitation, as rehabilitation adherence is critical for maximum improvement in physical function among patients with CLBP [7-9]. In addition, the sustained use of Your Activities of Daily Living, Medications of Daily Living, and PAM observed throughout the trial suggests that patients may find these visual assessments, which provide a simple and intuitive interface, can be completed quickly, and are readily adapted to mobile devices, to be easier to use than conventional reporting methods such as text-based surveys [19,23].

The significant association of the Your Activities of Daily Living assessment with the ODI scores is a key finding of this study. As a visual survey, Your Activities of Daily Living leverages the inherent ambiguities of images to mitigate some of the limitations of conventional pain assessments [19]. For example, although it is not possible for a standardized survey to contain a comprehensive list of every ADL that could be relevant for every patient, the images in Your Activities of Daily Living are open to individual interpretation and, therefore, can be used by different patients to express a wider range of ADL experiences [19]. Furthermore, standard ADL assessments are typically performed in an office setting in association with a clinical encounter, limiting their ability to reflect day-to-day variability in ADL performance. In contrast, Your Activities of Daily Living is mobile-friendly and can be completed by patients at any time without assistance, greatly increasing the scope and granularity of the ADL information that can be captured. Validation of Your Activities of Daily Living opens the door to the creation of mHealth apps that are capable of reliably measuring patient-reported pain outcomes on a day-to-day basis. The relationship between Your Activities of Daily Living daily assessments and pain-related disability as assessed by the ODI should be explored in further studies.
Participant attrition in this study was high; of 93 patients enrolled, only 35 (38%) completed the 3-month program. Although this level of attrition is substantial, it is not unusual among mHealth interventions, for which the challenge of many participants discontinuing the intervention and/or being lost to follow-up is widely acknowledged [33]. For example, only 32 of 180 (18%) study enrollees were retained after 12 weeks in one recent analysis of a multidisciplinary LBP treatment app, despite the fact that those who completed the program experienced significant reductions in pain [34]. Similarly, only 25% of participants in a 1-year internet-mediated exercise intervention for patients with CLBP maintained at least 80% compliance with required data uploads for the duration of the study [32]. Excellent participant retention was observed in the 4-month FitBack randomized controlled trial, which demonstrated greater pain reduction among program users versus those in comparison groups. Of 597 initial enrollees, 580 (97%) submitted assessments at all three designated time points [30]. It is noteworthy, however, that FitBack participants received cash rewards for submitting assessments, and the degree to which those who submitted all three assessments engaged with the intervention program on a daily or weekly basis (eg, tracking pain and pain-management activities or watching instructional videos) is unknown [30]. Furthermore, FitBack is a comparatively simple Web-based program in contrast to Limbr, which required installation, maintenance, and utilization of seven separate component apps. The complexity of the combined interactions and maintenance tasks required of Limbr participants may have been a factor in the high dropout rate. As nearly half of the chat messages sent by participants in this study were categorized as tech support-related, the possibility that technical difficulties contributed to attrition also cannot be discounted.

Considerable effort was made to improve patient engagement and reduce attrition during the course of this study. First, patient input from feedback surveys and calls was used to improve usability and enhance the user experience of the daily self-reports; for instance, two of the reports (Your Activities of Daily Living and Medications of Daily Living) were updated and moved from beta testing to the app store. Before this change was made, beta testing expirations frequently required patients to manually update their apps, and there was a significant difference in device usage between iOS and Android phones. After the update, however, there was no difference in usage between operating systems (data not shown). This example highlights the importance of making the patient experience as seamless as possible to promote engagement with the intervention.
The other major effort to enhance patient interaction comprised personalized Limbr Chat messages from the health coach, which were tailored according to participant data collected via the Limbr suite. A post hoc analysis revealed that 408 health coach messages were sent, with the largest proportion categorized as engagement messages (169/408, 41.4%), followed by technical support (113/408, 27.6%) and medical/exercise-related messages (60/408, 14.7%). Findings from previous studies suggest that interactions with a health coach, including two-way messaging systems similar to that used in Limbr [35], tend to increase patient engagement with the intervention [35] and promote improved self-management of chronic pain conditions [36]. However, of the messages sent from patients to the coach, only 74 of 147 were unrelated to technical support. Although we expected more patient messaging, Limbr did little to encourage patients to engage in a two-way exchange or spontaneously send messages to the coach. Messages from the coach rarely prompted the patient to respond, as the coach was not instructed to do so, and the system itself did not actively guide the patient to the chat app unless there was a new message waiting. The low number of patient-sent messages is not necessarily an indicator of poor engagement, because patients who were highly engaged in the exercising and reporting may have felt little need for communication with the coach. Nevertheless, the effectiveness of various types of health coach messaging is an important area for future research.

Despite the relatively high attrition rate, the overall utility of the Limbr system was scored positively by the majority of respondents, and some individual components were widely found to be useful; for example, the weekly summary emails drew especially positive feedback. On the other hand, reception of other Limbr components varied considerably across participants. Daily notifications and recurrent coach messages were particularly polarizing, seen as vexatious by some participants but highly motivating by others. This finding underscores the need for mHealth interventions to take a personalized approach to engagement rather than relying on a single method of promoting compliance. Additional studies to characterize which engagement efforts are most effective, both overall and for particular patient populations, could help enable the design of highly personalizable interventions in the future. Other changes that could reduce attrition and enhance engagement in a future iteration of the Limbr system include combining the disparate components into a single app designed with user-first principles and fewer technical barriers; conducting more formal user testing and employing analytical techniques such as conversion funnel optimization before considering a larger trial; and emphasizing two-way messaging between patient and coach.
Our results should be considered in light of several limitations. Because the trial was conducted among a small convenience sample of patients with CLBP, the applicability of the study findings to other patient populations is unknown and should be assessed in future studies. In addition, because there was no comparison group, the study outcomes cannot be definitively attributed to use of the Limbr suite. Finally, although the reasons underlying the high dropout rate of the trial warrant further exploration, this study was not designed to analyze the causes of patient attrition or the types of engagement efforts that were most helpful in promoting compliance with the intervention.

The findings of this pilot study suggest that the Limbr program shows promise as an approach to enhancing patient self-management and adherence to self-directed rehabilitation for CLBP. Engagement among participants who completed the program was high, and the utility of the program was rated positively by the majority of respondents. Our results also support the validity of the Your Activities of Daily Living visual self-assessment for quantifying pain and disability level. Future studies should assess the effect of Limbr on clinical outcomes, evaluate its use among a wider patient sample, and explore strategies for reducing attrition.

Figure 1. Screenshots from the Your Activities of Daily Living app: (A) daily assessment, (B) daily assessment reminders, and (C) when there was nothing to report, patients selected "Today was a good day!".

Figure 2. Screenshots from the Medications of Daily Living app: (A) daily assessment, (B) daily assessment reminders, and (C) when there was nothing to report, patients selected "Today was a good day!".

Figure 3. The Photographic Affect Meter (PAM) app: (A) screenshot of the PAM visual interface, and (B) how images are arranged from low arousal and negative valence in the bottom left to high arousal and positive valence in the top right.

Figure 4. Example of a weekly summary email.

Note: an interaction was defined as an instance of using Your Activities of Daily Living, Medications of Daily Living, PAM, or the Limbr Chat app during the study period.

Figure 5. Median interaction frequency across daily self-reports for Your Activities of Daily Living (YADL), Medications of Daily Living (MEDL), and the Photographic Affect Meter (PAM).

Figure 6. Median interaction frequency for daily self-reports for Force Therapeutics. Note that there were different frequencies of data reported from Force Therapeutics versus the other assessments.

Figure 7. Feedback survey results. Surveys were scored on a 5-point Likert scale with response options ranging from "strongly disagree/not useful" (5 points) to "strongly agree/very useful" (1 point). Percentages may not sum to 100% because of rounding. YADL: Your Activities of Daily Living; MEDL: Medications of Daily Living; PAM: Photographic Affect Meter.
Example coach messages:

"Good morning, I know it can seem like we are asking u to track a lot of things and data. Honestly, nothing is more important to your healthy outcome than the FORCE exercises. I do them myself. Please don't forget to do them - 1 set, 3 times per week AND mark them all done in the FORCE app. Thank you and good health."

"I am glad the nights are getting better. Sometimes the lack of movement can stiffen the body. Series B is best thought of as the next level up, shall we say, on the exercises. You should keep doing the exercises you have been doing until they seem too easy, then contact Dr Vad to get his permission to move to Series B."

Patients were categorized as inactive if they had had no interaction with the interactive components for 4 successive weeks. Participants were said to have completed the program if they remained active for 12 weeks.

Table 2. Associations between participant characteristics and program interaction measures (N=35).
Sorption treatment of water from chromium using biochar material

Abstract. Sorption treatment of wastewater from chromium compounds is a current and actively developing area of research. However, there is a lack of sorbents combining high efficiency and economic affordability with the possibility of application in industrial conditions. In this work, the sorption properties of the biochar material "EcoChar," obtained as a result of fast pyrolysis of agricultural poultry waste, were studied in relation to the removal of trivalent chromium ions, and the basic thermodynamic parameters of the process, which is of a strong chemisorption type, were determined. Based on the IR spectra of the sorbent material, the formation of chelate complexes by hydrolyzed chromium ions with the involvement of carboxylic groups and simple ether bonds is suggested. Using the Langmuir model, the sorption capacity of the sorbent was found to be 68.8 ± 0.2 mg Cr(III)/g and 87.5 ± 0.3 mg Cr(III)/g at 298 and 318 K, respectively. These values, which are much higher than those of comparable waste-derived materials, make the use of this biochar as an adsorbent advantageous for the removal of Cr(III)-containing compounds from acidic wastewater.

In wastewater, chromium is generally present in two forms: Cr(III) and Cr(VI) (Rajapaksha et al., 2022). The trivalent form of chromium, Cr(III), exhibits relatively less toxicity than the hexavalent Cr(VI) (Mikhaylov, Maslennikova, Krivoshapkina, Tropnikov, & Krivoshapkin, 2018; Uddin, Jeong, & Lee, 2021). A large amount of literature on the sorption properties of substances of various nature in relation to hexavalent chromium compounds is available to date (Fenti, Chianese, Iovino, Musmarra, & Salvestrini, 2020; Thangagiri et al., 2022; GracePavithra, Jaikumar, Kumar, & SundarRajan, 2019). Many of these data concern composite materials. For example, a material based on carboxymethyl cellulose with embedded metallic iron nanoparticles has a relatively high specific sorption capacity of 1.648 mmol/g. The maximum value of this parameter, 12 mmol/g, was recorded for a polypyrrole nanocomposite based on graphene oxide (Setshedi, Bhaumik, Onyango, & Maity, 2015). Despite their high efficiency, these composite sorbents are difficult to manufacture and cannot be produced industrially. On the contrary, substances represented by waste products (Cheremisina, Ponomareva, & Bolotov, 2019; Cheremisina, Cheremisina, Ponomareva, Bolotov, & Fedorov, 2021; Chukaeva, Matveeva, & Sverchkov, 2022) have a number of significant advantages: affordability, lack of processing costs, and high efficiency in relation to certain pollutants. However, the majority of such materials have low sorption capacities with respect to hexavalent chromium contaminants, of the order of (1-10) × 10⁻³ mmol/g (Fenti et al., 2020). In addition, the aggressive oxidative nature of chromate and dichromate ions may cause destruction of the organic matrix of the sorbent (Pashayan, Shetinskaya, & Zerkalenkova, 2018) at high concentrations, which makes such sorbents unsuitable for industrial use in the treatment of wastewater contaminated with large amounts of Cr(VI). Cr(III) compounds do not exhibit aggressive oxidative properties, they do not destroy the sorbent matrix, they are stable in acidic and weakly acidic media, and they are much better sorbed on various surfaces. Natural diatomite, which has no economic value, was also used as a sorbent for the removal of trivalent chromium from solution (Gürü, Venedik, &
Murathan, 2008). The specific sorption capacity of diatomite was found to be 26.5 mg Cr(III) per 1 g of diatomite (or 0.51 mmol per 1 g of diatomite), which is twice as high as for similar sorbents used in wastewater treatment of Cr(VI). It was found that removal of 97% of Cr(III) was achieved in 80 min at 30 °C, a particle size of 1.29 mm and a stirring speed of 45 rpm. The adsorption capacities of a number of natural adsorbents were found in the range from 1.55 up to 68.12 mg/g in some previous studies (Caglar, Afsin, Tabak, & Eren, 2009; García-Reyes, Rangel-Mendez, & Alfaro, 2009; Rajapaksha et al., 2022). The highest value of 68.12 mg/g was achieved in the presence of seaweed biomass, which also indicates the easier removal of Cr(III) than Cr(VI) from solution.

The analysis of literature sources relevant to the treatment of wastewater containing chromium compounds shows the scarcity of adsorbents combining high efficiency and economic availability with the possibility of their application in industrial conditions.

Thus, a wastewater treatment scheme in which reduction of Cr(VI) to Cr(III) is carried out first, using cheap reducing agents, followed by sorption from the resulting acid solutions using more convenient adsorbents with high sorption capacity, has been suggested as an alternative (Ahmed et al., 2022; Kudinova, Poltoratckaya, Gabdulkhakov, Litvinova, & Rudko, 2022; Wan et al., 2021). In recent years, there has been increased interest in the use of biochar as a sorbent (Chen, Meng, Han, Lan, & Zhang, 2019; Qiu et al., 2022; Liang et al., 2021).

Research methods

This study is aimed at evaluating the efficiency of the application of the biocarbon material "EcoChar," obtained by rapid pyrolysis (5.5-6.1 MW, 1000 °C) of waste products of the agricultural poultry farm "Ptitsefabrica Roskar," for removing chromium compounds from wastewater through adsorption.

Determination of the characteristics of the sorption material. The chemical composition of the starting material was determined by X-ray fluorescence analysis using an energy-dispersive spectrometer Epsilon 3 by PANalytical (The Netherlands), designed for the precise and reproducible analysis of chemical composition from Na to U and determination of element concentrations from fractions of ppm to 100%. Phase composition was determined with an X-ray diffractometer Shimadzu XRD-7000 of Shimadzu Corporation (Japan). The specific surface area and porosity of the adsorbent material were determined by gas adsorption (nitrogen) at 77 K using the Quantachrome 1000e instrument and "NOVA Win-2.1" software. Microphotographs of the sorbent were obtained using a scanning electron microscope Tescan VEGA3 manufactured by Thermo Fisher Scientific (USA). IR spectra of the samples were obtained using a Nicolet 6700 Fourier infrared spectrometer operating in the range 25000-20 cm⁻¹, at a resolution of 0.09 cm⁻¹ and a scanning speed of 75 scans/sec.

Study of sorption under static conditions. The study of Cr(III) sorption from an acidic solution of chromium (III) sulfate was carried out using model solutions. The variable concentration method was used to determine the constants of the sorption process at constant pH = 2.67 and phase ratio L:S = 125 (25 ml of solution and 0.2 g of sorbent). The sorption was carried out at temperatures of 298 and 318 K at Cr(III) concentrations of 0.002-0.0202 mol/kg.
Determination of Cr(III) content of solution. For the quantitative determination of the Cr(III) ion content of the solution by the X-ray fluorescence method of analysis, model solutions of Cr2(SO4)3·6H2O (analytical grade) with 0.1 mol/l concentration converted to chromium (III) sulphate were prepared. These solutions were diluted using distilled water, and calibration solutions with the following concentrations were prepared: 0.1; 0.05; 0.01; 0.005; 0.001; 0.0005 mol/l. The software of the Epsilon 3 device was used to build the calibration dependence. The correlation coefficient for the linear calibration curve was determined as 0.9999. The content of metal ions was then determined before and after sorption.

The sorption value (q, mol/kg) was determined by the method of variable concentrations at a constant S:L ratio according to formula (1):

q = (C0 − C1)·V/m,    (1)

where C0 and C1 represent the concentrations of chromium (III) in the initial and equilibrium solutions, mol/l, respectively; V is the volume of solution, ml; m is the mass of sorbent, g. Samples of the "EcoChar" sorption material were provided by the representative office of the agricultural poultry farm "Ptitsefabrica Roskar."

Determination of the thermodynamic parameters of the sorption process. To calculate the basic parameters of the sorption process, it is necessary to use the thermodynamic model of sorption equilibrium that most adequately describes the system under study. The basic models used for calculating the parameters of the adsorption process are listed in Table 1. Linear forms of the sorption isotherms are constructed from the equations given in Table 1. Linear equations with two variables have the form y = kx + b; using the fitted values of k and b, the basic parameters of sorption equilibrium can be calculated for the Langmuir and Temkin models. The linear equation for which the linear regression coefficient R² is closest to 1 most adequately describes the system under study.

The basic thermodynamic state functions of the system are then calculated using formulas (2)-(4) given below:

ΔrG°T = −RT·ln K,    (2)

ΔH°(T2−T1) = R·ln(K2/K1)·T1·T2/(T2 − T1),    (3)

ΔS°T = (ΔH° − ΔrG°T)/T,    (4)

where ΔrG°T is the Gibbs free energy of the sorption process, kJ/mol; ΔH° is the enthalpy change of the sorption process, kJ/mol; ΔS°T is the entropy change of the sorption process, J/(mol·K); K is the sorption equilibrium constant, determined using one of the models; T is the absolute temperature, K; and indexes 1 and 2 correspond to the processes carried out at 298 K and 318 K, respectively.

Characteristics of the sorbent material

The values for specific surface area and pore volume are shown in Table 2. On the basis of these data, it can be concluded that the adsorbent used here is not a highly porous material such as zeolites or activated carbons, which have specific surface areas of 400-500 m²/g. The specific surface area of the adsorbent material corresponds to conventional carbon-bearing materials, such as charcoal, hard coal or petroleum coke, with specific surface areas of 20-70 m²/g.

Chemical and phase composition

The chemical composition of the organic material is given in Table 3; the amounts of total carbon and organic carbon were found to be 26.50% and 22.96%, respectively, and the rest of the mass belongs to the mineral constituents.

The surface image of the sorbent is shown in Figure 1.
The surface image of the sorbent exhibits a mixture of mineral grains and oxidised carbon inclusions (Figure 1). Some of the grains have a flaky appearance, and some contain elongated mesopores. The main crystalline phases determined by XRD analysis are calcium carbonate (CaCO3), potassium chloride (KCl) and calcium phosphate (Ca3(PO4)2).

Determination of the optimum phase ratio and the effect of fraction size on sorption performance

An initial solution of Cr2(SO4)3 with a concentration of 0.01 M gives rise to an acidic reaction at a pH value of 2.67. Biocarbon, in turn, gives an alkaline reaction with the medium. Thus, when 0.5 g of the −0.125 mm sorbent fraction is vigorously stirred with 50 ml of distilled water, the pH of the aqueous solution reaches a value of 9.56. The biochar coming into contact with an acid solution of chromium (III) sulfate will therefore cause solution alkalinization, which may have a positive effect on the sorption properties of the sorbent with respect to chromium.

The experimental data obtained are presented in Table 4. The highest capacity with a sufficient degree of extraction is observed when the −0.125 mm sorbent fraction and a phase ratio of V/m = 100 are used.

Table 4. Chromium (III) sorption results at different phase ratios and using fractions of different grain sizes (columns: fraction, mm; V/m, ml/g; pH after sorption; q, mol/g; recovery rate, %).

Determination of the thermodynamic characteristics of Cr(III) sorption

For obtaining the adsorption isotherm under static conditions, the solution of chromium (III) sulphate was contacted with the sorbent at a phase ratio of V/m = 100, at 298 and 318 K, respectively. The q = f(C1) dependence is shown in Figure 2. Linear forms of the adsorption isotherms were obtained based on the equations shown in Table 1 (Figure 3).

Table 1. Thermodynamic models of sorption equilibria (Foo & Hameed, 2010) (columns: thermodynamic model; non-linear form; linear form; coordinates; equation parameters).

Based on the R² values, the Langmuir equation was chosen for further calculations. The results of the calculations of the basic thermodynamic functions, as well as the values of the basic parameters of the adsorption process, are shown in Table 5.

Table 5. Thermodynamic characteristics of Cr(III) sorption from aqueous solution.

IR spectra of the adsorbent surface

Figure 4 shows the infrared spectrum of the adsorbent sample. The carbon component of the adsorbent is in an oxidised state, with a large number of oxygen-containing groups of different structures. The characteristic peaks in the range 550-600 cm⁻¹ correspond to phosphate groups, which is consistent with the X-ray phase analysis data. The single peak at wave number 875 cm⁻¹ is typical for aliphatic ethers and cyclic ethers of epoxy compounds, while cyclic ethers exhibit a multiplet of 4-5 bands in this region. Presumably, it cannot be said that the compound is free of polyether bonds, as the remaining bands of the multiplet are located in a shorter-wavelength region and can be overlapped by the intense peak in the range of 1000 to 1200 cm⁻¹ of the spectrum. This peak corresponds to the vibrations of the ester complex bond as well as the hydroxyl group, which is also indicated by the broad peak lying in the higher-wavenumber range of the spectrum. A further peak may also characterize the valence vibrations of the carboxylate group. The peak at 1430 cm⁻¹ may be attributed to the -CH2-CO- group.
After treatment of the surface with chromium (III) sulphate solution, some changes in the shape and intensity of the characteristic peaks may be seen in the spectrum. The peak at 875 cm⁻¹ disappears completely, which may indicate the participation of the -C-O-C- group in the formation of coordination bonds with Cr(III) ions, since this group is sensitive to structural changes. Also, changes in the intensity of the doublet extending from 1430 cm⁻¹ to 1610 cm⁻¹ are noticeable. The peak at 1430 cm⁻¹ is smoothed and loses intensity significantly, which may point to the interaction of the carboxyl group with Cr(III) ions and the emergence of a newly formed structure through an intermolecular mechanism involving nearby protonic centers (Koksal, Afsin, Tabak, & Caglar, 2020).

Discussion

The negative value of the calculated Gibbs energy change indicates the occurrence of a spontaneous process of Cr3+ ion adsorption onto EcoChar. The calculated enthalpy shows the endothermic nature of the process, which can explain the higher sorption capacity of the sorbent for chromium with increasing temperature. The value ΔrH°T = 87.86 ± 0.6 kJ/mol indicates a strong chemisorption interaction resulting in the formation of a solid sorption complex. The specific sorption capacities of the sorbent at 298 and 318 K were determined as 68.8 ± 0.2 mg/g and 87.5 ± 0.3 mg/g, respectively. These values are much higher than those of many analogous adsorbents produced from wastes which cannot otherwise provide economic value (García-Reyes et al., 2009; Hu et al., 2020; Saravanan & Senthil Kumar, 2022) (Figure 5).

The existence forms of Cr(III) ions as a function of pH are shown in Figure 6, and they point to the involvement of hydrolyzed chromium ions in the formation of the sorption complex primarily.

It was suggested in a study on Cr(III) adsorption by biocarbons (Hu et al., 2020) that chromium (III) ions bind to the surface groups of the carbon sorbent, forming chelate complexes (Figure 7). The ligands bind to a central atom using a free electron pair, which can be transferred to the central atom to form more than one coordination bond. Ligands that are directly attached to the central atom form an internal coordination region (Nasiri, Jamshidi-Zanjani, & Darban, 2020; Carbonaro, Gray, Whitehead, & Stone, 2008). The coordination number for the Cr3+ ligand is 6. After the hydrogen atom is split off, the free chromium orbital is capable of accepting an unbound electron pair from oxygen. A hydrogen atom may be split off a hydroxyl group or a carboxyl group. The free electron pairs obtained from the hydroxyl or carboxyl groups can chelate the central Cr(III) ion (Fernandes, Romão, Abreu, Quina, & Gando-Ferreira, 2012). The infrared spectral data obtained in the present study point to the involvement of the carboxyl group in the formation of the sorption complex with chromium ions, as well as the participation of simple ether bonds in the formation of coordination compounds.

Since in the case of pH > 3 hydrolyzed forms of Cr(III) predominate in the solution, replacement of a part of the oxygen atoms by hydroxyl groups may be suggested.
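To make the calculation route behind these thermodynamic values concrete, here is a minimal sketch of how the Langmuir fit and the functions from formulas (2)-(4) could be computed. The isotherm arrays are placeholders rather than the study's data, and the C/q-versus-C linearization used here is one common linear form of the Langmuir equation, assumed (not confirmed) to match the form listed in Table 1.

```python
# Sketch: Langmuir linear fit and thermodynamic functions (2)-(4).
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)

def langmuir_fit(c1, q):
    """Fit C1/q = C1/q_max + 1/(K*q_max); returns (q_max, K)."""
    k, b = np.polyfit(c1, c1 / q, 1)  # slope k = 1/q_max, intercept b = 1/(K*q_max)
    return 1.0 / k, k / b

def thermo_functions(K1, K2, T1=298.0, T2=318.0):
    dG1 = -R * T1 * np.log(K1)                       # formula (2) at T1
    dG2 = -R * T2 * np.log(K2)                       # formula (2) at T2
    dH = R * np.log(K2 / K1) * T1 * T2 / (T2 - T1)   # formula (3), van't Hoff form
    dS = (dH - dG1) / T1 * 1e3                       # formula (4), in J/(mol*K)
    return dG1, dG2, dH, dS

# Placeholder equilibrium data (mol/l and mol/kg), NOT the study's values.
c1_298 = np.array([0.001, 0.003, 0.006, 0.010, 0.015])
q_298 = np.array([0.40, 0.75, 1.00, 1.15, 1.25])
q_max, K_298 = langmuir_fit(c1_298, q_298)
```

With the fitted equilibrium constants at both temperatures passed to thermo_functions, this routine would reproduce the sign pattern reported above: a negative ΔG (spontaneous sorption) and a positive ΔH (endothermic chemisorption).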
Conclusions

In the present work, the sorption properties of the biochar material "EcoChar" in relation to trivalent chromium ions were investigated. The values of the thermodynamic functions, ΔrG°298 = −19.2 ± 0.4 kJ/mol, ΔrG°318 = −26.4 ± 0.5 kJ/mol, ΔrH°T = 87.86 ± 0.6 kJ/mol and ΔrS°T = 359.2 ± 0.8 J/(mol·K), demonstrate the occurrence of strong chemisorption. Based on the vibrational spectrum of the sorption material, the formation of chelate complexes by hydrolyzed chromium ions with the involvement of carboxylic groups and simple ether bonds was assumed. Using the Langmuir model, the values of the static sorption capacity of the adsorbent were determined as 68.8 ± 0.2 mg Cr(III)/g and 87.5 ± 0.3 mg Cr(III)/g at 298 K and 318 K, respectively. These values, which are much higher than those of the closest analogues produced from wastes, prove EcoChar to be a promising material for use as an efficient adsorbent for the removal of Cr(III) compounds from acidic wastewater.

Figure 2. Adsorption isotherm of sorption of Cr(III) from sulphate solution.

Figure 4. IR spectrum of the sorbent surface. Red colour corresponds to the spectrum of the original sorbent surface, blue to the spectrum after contact with chromium (III) solution.

Figure 5. Comparison of the sorption capacity of EcoChar in relation to trivalent chromium with the closest analogues at 298 K. Data are taken from García-Reyes et al. (2009).

Figure 6. Forms of existence of Cr(III) ions in aqueous solution as a function of pH (Cheremisina et al., 2021).

Table 2. Porosity and specific surface area of the organic material.

Table 3. Chemical composition of the organic material.
Probing a Stochastic Epidemic Hepatitis C Virus Model with a Chronically Infected Treated Population

The hepatitis C virus is hitherto a tremendous threat to human beings, but many researchers have analyzed mathematical models for hepatitis C virus transmission dynamics only in the deterministic case. Stochasticity plays an immense role in pathology and epidemiology. Hence, the main theme of this article is to investigate a stochastic epidemic hepatitis C virus model with five states of epidemiological classification: susceptible, acutely infected, chronically infected, recovered or removed, and chronically infected and treated. The stochastic hepatitis C virus model in epidemiology is established based on the environmental influence on individuals, manifested by stochastic perturbations proportional to each state. We assert that the stochastic HCV model has a unique global positive solution and attain sufficient conditions for the extinction of the hepatotropic RNA virus. Furthermore, by constructing a suitable Lyapunov function, we obtain sufficient conditions for the existence of an ergodic stationary distribution of the solutions to the stochastic HCV model. Moreover, this article confirms, using numerical simulations, that six parameters of the stochastic HCV model have a high impact on the disease transmission dynamics, specifically the disease transmission rate, the rate of the chronically infected population, the rate of progression to chronic infection, the treatment failure rate of the chronically infected population, the recovery rate from chronic infection and the treatment rate of the chronically infected population. Eventually, numerical simulations validate the effectiveness of our theoretical conclusions.

Pretreatment analyses required for anti-HCV treatment with interferon (IFN) can consist of determination of the HCV genotype, determining the present status of the disease (acute, chronic), assessing the stage of liver disease (fibrosis, cirrhosis), considerations of alcohol addiction, evaluation of the immune system, and assessment for HIV/HBV co-infection and co-morbidities [9,13]. These days, combinations of direct-acting antivirals (DAAs) have widely replaced IFN-based therapy [13]. In 1998 and 2001, IFN plus ribavirin (RBV) and pegylated (Peg)-IFNs plus an RBV antiviral agent were the standard of care, but these were unable to wipe out chronic HCV infection. On the other hand, sustained virologic response (SVR, defined as no discernible HCV RNA in the blood circulation 12-24 weeks after cessation of antiviral therapy) achieved average cure rates of 42-45%, 65-85%, and 70-80% of infected patients with HCV GT1; GT4, GT5 or GT7; and GT2 or GT3, respectively [1,7,9,13]. In November 2013, the NS3/4A protease inhibitor simeprevir was endorsed by the Food and Drug Administration (FDA), and this was followed in December 2013 by the NS5B polymerase inhibitor sofosbuvir (SOF). Combining two (simeprevir and SOF) or three DAAs provides excellent tolerability and safety for patients with HCV GT1, and cure rates of 90-100% are attained [13]. Combinations of SOF/ledipasvir (an NS5A replication complex inhibitor) and ombitasvir (NS5A)/paritaprevir (an NS3/4A protease inhibitor)/r + dasabuvir (a non-nucleoside inhibitor of NS5B) were endorsed by the FDA in October and December 2014, respectively [9,13]. In July 2015, SOF + daclatasvir (DCV; an NS5A replication inhibitor) + RBV achieved very high SVR rates (95%) in patients with HCV GT1 with excellent tolerability and safety.
In 2016, the FDA approved elbasvir/grazoprevir to treat chronic HCV patients in the USA and Europe, and SOF/velpatasvir (VEL; an NS5A replication inhibitor) ± RBV regimens reached SVR rates of 50-100% in chronic HCV patients with good tolerability [1,9,13]. Finally, SOF + RBV reached high SVR rates in chronic HCV patients with GT1 and, furthermore, was revealed to be effective in individuals with GT4, GT5 or GT7 infection [9].

This segment focuses on HCV as a stealth virus, one that in the course of the infection attacks the command and control point of the immune system, the CD4 helper T cells, by eliminating epitopes that down-regulate antiviral Type 1 cytokines like interleukin (IL)-2 and interferon-γ (IFN-γ) and up-regulate Type 2 cytokines such as MHC class II molecules and chemokines, which nourish host tolerance to HCV. Infection with HCV paves the way for chronic infection in 85% of patients without evidence of active, antiviral immunological responses [2,14]. Knowing the means by which HCV establishes and maintains infection is also key to examining modes of human immunoregulation. The part played by CD4+ and CD8+ T cells in HCV clearance or disease pathogenesis is, at best, ambiguously understood. Certainly, the fact that infection consistently persists despite the existence of virus-specific CD4+ and CD8+ T cells in the liver and the peripheral blood implies that these responses are ineffective in many patients. Both innate and adaptive immune responses are vital for HCV viral eradication. For the innate immune response, natural killer (NK) cells appear to be useful in eliminating HCV infection, and it seems that some kinds of NK cell receptor genes (KIR2DL3 and HLA-C1) are related to viral eradication [1,2,14,15]. CD4 T cells can be partitioned into at least two types, T helper 1 (Th1) and T helper 2 (Th2), which have different roles in the immune response. The role of CD4 T cells, which are prominent in viral escape, is problematic; it is easier to handle viral escape methodologies from the point of view of antibody or killer T-cell identification of viral epitopes. A failure to recognize the causes of viral "escape" would bring about a failure in antibody-facilitated clearance or neutralization of infected cells. HCV has developed several techniques for eluding or evading the immune response. For instance, the HCV NS3/4A protein can split and neutralize two host signaling processes that react to HCV pathogen-associated molecular patterns to instigate the IFN process. IFN-stimulated genes are activated during acute HCV infection, but this is not very effective at doing away with the virus [1,14]. Rapidly increasing evidence asserts that CD4 T cells can also have a direct impact on virus-infected target cells, varying from cytotoxicity to secretion of antiviral cytokines like IFN-γ and tumour necrosis factor-α (TNF-α). On one level, CD8+ killer T cells are somehow required for the ultimate extinction of HCV, and killer cell differentiation rests upon Type 1 cytokines. The fact that most patients are unable to recover from the disease highlights that CD8 T-cell responses are neutralized. The origins of these cytokines in the liver environment may not fall in line with conventional paradigms. These somewhat paradoxical observations culminate with the observation that there do exist responses to HCV antigens that are able to extinguish the virus from the blood and liver of at least a minor segment of patients.
Here it is suggested that suitably intensive immune responses to HCV are possibly controlled by CD4+ regulatory T cells [14,15]. Based on HCV viral pathology, reinfection plays as significant a role as primary infection with respect to the infection rate, progression rate, treatment rate and recovery rate (partial or lost immunity). The treatment model for HCV transmission dynamics [6,17] is given by a deterministic nonlinear differential system of equations, referred to below as system (1.1). The biological meanings of all positive parameters and variables in the deterministic HCV model (1.1) are listed in Table 1. The basic reproduction number R0 of the deterministic HCV system (1.1) is derived in [6]. Consequently, the deterministic HCV model (1.1) has the following properties:

• if R0 ≤ 1, the deterministic HCV system (1.1) has an infection-free equilibrium E0 = (S0, I0, P0, R0, T0) = (Λ/μ, 0, 0, 0, 0), which is globally asymptotically stable on Γ;

• if R0 > 1, the deterministic HCV system (1.1) has an endemic equilibrium E1 = (S1, I1, P1, R1, T1), which is globally asymptotically stable on Γ.

The biological processes captured and expressed by mathematical models for disease transmission can be valuable in real-life scenarios, but deterministic models can be influenced by environmental white noise or by the presence of uncertainty. The treatment model for the HCV system (1.1) is perpetually subject to stochastic effects, which occur at all levels, from the susceptible to the chronically treated population. Phenomena are inevitably modeled as stochastically perturbed by environmental white noise, and understanding this is essential for a better understanding of many biological phenomena. Stochasticity impacts upon various biological [22-30, 39, 40, 43, 44] and other models [31-36, 38]. Inspired by the above factors, we put forth a stochastic epidemic HCV model with a chronically treated population. The stochastic epidemic HCV model is based on the influence of the environment on individuals, manifested by stochastic perturbations proportional to each state [22, 25, 41, 42]. In this paper, the theoretical findings extend the analysis of the corresponding deterministic system. We construct the stochastic epidemic HCV model, referred to below as system (1.2), by perturbing each equation of (1.1) with white noise proportional to the corresponding state; here Φ is defined in (1.1).

The rest of this article is organised as follows: in Section 2 we address the existence of global and unique positive solutions to the stochastic HCV model (1.2). In Section 3, sufficient conditions for the extinction of the hepatotropic RNA virus are attained. Section 4 establishes that there is a unique ergodic stationary distribution of the positive solutions of the stochastic HCV model (1.2) under some conditions. In Section 5, the five-dimensional stochastic model of the hepatitis C virus is validated by extensive numerical simulations, and the dynamics of the stochastic HCV system (1.2) are analyzed.

Existence of a Unique Global Positive Solution

It is first pivotal to discover whether or not the solution has global existence. Here, S(t), I(t), P(t), R(t) and T(t) signify the portions of the population that are susceptible, acutely infected, chronically infected, recovered, and chronically infected and treated, respectively, at time t. The main consideration is that the solution to the stochastic HCV system (1.2) is global and positive.

Remark 2.2. In May 2016, the WHO unveiled the 'Global Health Sector Strategy on Viral Hepatitis, 2016-2021'.
The strategy laid out a vision of eradicating viral hepatitis as a public health problem, the global targets being to reduce new viral hepatitis infections by 90% and to reduce deaths due to viral hepatitis by 65% by 2030 [20]. Hence, it is essential to study the eradication of the hepatotropic RNA virus among chronically infected HCV individuals. This is the subject of the next section.

Extinction of the HCV

The spread of the disease, the natural mortality rate of the population, and the intensities of the white noise in the stochastic system (1.2) are the factors that need to be considered to wipe out HCV. First, we present a lemma which will be used in our analysis. The proof of Lemma 3.1 is similar to those of Lemma 2.1 and Lemma 2.2 in [37], so we omit it.

Theorem 3.2. Let (S(t), I(t), P(t), R(t), T(t)) be the solution of stochastic HCV system (1.2) with any positive initial values (S(0), I(0), P(0), R(0), T(0)). If μ > (σ1² ∨ σ2² ∨ σ3² ∨ σ4² ∨ σ5²)/2 and R̃0 < 1, then the HCV in stochastic system (1.2) is eradicated exponentially with probability one.

Proof (sketch). Applying Itô's formula to a suitable function of the infected compartments yields inequality (3.3), and the stochastic system (1.2) yields (3.4). Integrating (3.4) from 0 to t and combining with (3.1) and (3.2) gives (3.5); integrating (3.3) from 0 to t and combining with (3.5) and R̃0 < 1 yields the exponential extinction estimate. Otherwise, (3.4) together with (3.1), (3.2) and (3.7) yields lim_{t→∞} ⟨S⟩_t ≤ Λ/μ a.s.

By computation, we obtain μ = 0.09 > (σ1² ∨ σ2² ∨ σ3² ∨ σ4² ∨ σ5²)/2 = 0.08 and R̃0 = 0.0074/0.0081 = 0.9096 < 1 (Figure 1). The HCV in stochastic epidemic model (1.2) can also be eliminated depending on the disease transmission rate or the recruitment rate of the population. In particular, R̃0 contains the recruitment rate of the population, whereas R0 does not. Susceptible individuals must be substantially immunized and given proper treatment when there is a high recruitment rate.

Remark 3.4. After six months of persistence of the hepatitis C RNA virus within the blood, acute HCV individuals will progress to chronic HCV infection. At this stage, they have low levels of immunity. These are the factors regarding the persistence of HCV amongst the population. HCV persistence is addressed in the next section.

Ergodic Stationary Distribution

When considering epidemiological dynamical systems, we are interested in when the disease will persist and prevail in a population. In this section, we present some theory about stationary distributions (see Has'minskii [19]) and show that there exists an ergodic stationary distribution, which reveals when a disease will persist.

Let X(t) be a homogeneous Markov process in R^d described by the stochastic differential equation

dX(t) = b(X) dt + Σ_{r=1}^{k} σ_r(X) dB_r(t).

The diffusion matrix is defined as

A(x) = (a_{ij}(x)),  a_{ij}(x) = Σ_{r=1}^{k} σ_r^{(i)}(x) σ_r^{(j)}(x).

Lemma 4.1 (see [19]). The Markov process X(t) has a unique ergodic stationary distribution π(·) if there exists a bounded domain D ⊂ R^d with a regular boundary Γ having the following properties: (A1) the smallest eigenvalue of the diffusion matrix A(x) is bounded away from zero in D; and (A2) the mean time at which a path starting from x ∈ R^d \ D reaches D is finite, uniformly on compact subsets. In that case,

P{ lim_{T→∞} (1/T) ∫_0^T f(X(t)) dt = ∫_{R^d} f(x) π(dx) } = 1,

where f(·) is a function integrable with respect to the measure π.

Next, we focus on proving condition (A2). We define a Lyapunov function with positive constants c_i (i = 1, 2, ..., 9) to be resolved later; by virtue of Itô's formula applied to U_1, one obtains bounds involving further positive constants c_10, c_11, c_12 and c_13 to be determined later. In Case 3, if (S, I, P, R, T) ∈ D_3, the required estimate is a consequence of (4.10) and follows from (4.13).

Numerical Simulations and Discussions

In this article, we have explored a stochastic epidemic hepatitis C virus model with a chronically infected treated population.
We showed that the stochastic HCV system (1.2) has a unique global positive solution. We attained sufficient conditions for the extinction of the hepatotropic RNA virus. Furthermore, we obtained sufficient conditions for the existence of a unique ergodic stationary distribution of the positive solutions to the stochastic HCV model (1.2) by constructing a suitable Lyapunov function. For a better understanding, the effect of varied environmental noise on the model's dynamic behavior, based on real-life parameters, is examined through numerical simulations. We apply Milstein's higher-order method [45] to solve (1.2) with initial values S(0) = 0.8, I(0) = 0.18, P(0) = 0.12, R(0) = 0.1, T(0) = 0.1, and suppose that the unit of time is one day. The stochastic HCV model can be written in terms of discretization equations in which χ_{i,j} (i = 1, ..., 5; j = 1, 2, ..., n) are independent Gaussian random variables following the distribution N(0, 1), and σi² > 0 reflects the intensities of the white noise. Consider the time step Δt = 0.02. In any case, mathematical models pertaining to pathology and epidemiology must always keep environmental noise in mind.
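As an illustration of the Milstein discretization described above, the sketch below implements one step for a five-state SDE system in which each equation carries white noise proportional to its own state (σ_i X_i dB_i(t)), as stated in the model construction. The drift function and noise intensities used here are placeholders, not the paper's system (1.1) or its fitted parameters.

```python
# Milstein discretization sketch for an SDE system of the form
# dX_i = f_i(X) dt + sigma_i * X_i dB_i(t), i = 1..5 (S, I, P, R, T).
import numpy as np

def milstein_step(x, f, sigma, dt, rng):
    """One Milstein step for linear (proportional) noise terms."""
    chi = rng.standard_normal(x.size)               # chi_{i,j} ~ N(0, 1)
    drift = f(x) * dt
    diffusion = sigma * x * np.sqrt(dt) * chi
    # Milstein correction: 0.5 * sigma_i^2 * x_i * (chi^2 - 1) * dt
    correction = 0.5 * sigma**2 * x * (chi**2 - 1.0) * dt
    return x + drift + diffusion + correction

def simulate(f, x0, sigma, dt=0.02, n_steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    path = np.empty((n_steps + 1, x0.size))
    path[0] = x0
    for j in range(n_steps):
        # clip at zero to keep population fractions nonnegative (pragmatic choice)
        path[j + 1] = np.maximum(milstein_step(path[j], f, sigma, dt, rng), 0.0)
    return path

# Example usage with the paper's initial values and a dummy drift.
x0 = np.array([0.8, 0.18, 0.12, 0.1, 0.1])          # (S, I, P, R, T) at t = 0
sigma = np.full(5, 0.1)                             # placeholder noise intensities
f = lambda x: -0.01 * x                             # placeholder drift, NOT system (1.1)
path = simulate(f, x0, sigma)
```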
A curious case of DBS radiofrequency programmer interference

Deep brain stimulation (DBS) systems frequently rely on radiofrequency (RF) transmission for patient programming. The potential exists for other devices to interfere with communication between the internal pulse generator (IPG) and the programming device. In this paper, we report a case of programming interference between the IPG and a WaveID device.

INTRODUCTION

Deep brain stimulation (DBS) is an effective and safe treatment for movement disorders, such as Parkinson's disease (PD), dystonia, and essential tremor. 1 It functions by delivering an electrical stimulus to targeted neuronal structures, such as the basal ganglia and thalamic nuclei; the electrical stimulus is delivered through multi-contact intracranial electrodes. These electrodes are anchored to the skull and connected to a subcutaneous pulse generator in the chest or the abdomen by extension wires. Disruption of the circuit can lead to hardware failure; one of the most common locations for circuit disruption is the extracranial portion of the DBS electrode and the extension wires. 2,3 Although radiofrequency interference with DBS devices while approaching security, anti-theft, or radiofrequency identification (RFID) devices at the airport has been described briefly, 4 it has never been reported in the literature before. In this paper, we report a case of programming interference between the internal pulse generator (IPG) and a WaveID device.

CASE PRESENTATION

A 67-year-old gentleman with essential tremor underwent implantation of a Boston Scientific DBS system into the left VIM nucleus of the thalamus. At 3 months postoperatively, he returned for modification of his programming due to increased tremor in his right hand. At this visit, there was an error in establishing communication between his IPG and the programming device. This patient had already been programmed at a different location, and previously there had been no difficulties in connecting the programmer with his IPG. Troubleshooting of this issue began by replacing each component, including the programming device, the connectors, and the computer, one at a time; however, the patient's device still did not connect. The computer would initiate data download from the device, but an error message would appear at varying points during this process: "action unsuccessful: communication link (25035)" (Fig. 1). This error message was noted to be a sign of radiofrequency interference. The following steps were then performed in an attempt to eliminate any interference: (1) ensure the IPG is fully charged (Fig. 2), and (2) remove any power sources near the patient. RF readings were taken next to the patient and at every corner of the room. The patient's location had an RF reading of 176 (Fig. 3a), compared to 87, 48, 67, and 78 for each of the corners. Within the patient remote control (RC) there is an option that measures RF. The RF meter is a standard RSSI (Received Signal Strength Indicator); an ADC (analog-to-digital converter) samples the signal as received by the receiver, and the value of the ADC reading is then presented on the RC. The RC is held in the spot of interest for the measurement to be taken. The patient was then moved to a new location, where the RF reading was 45 (Fig. 3b); subsequently, the computer connected to the IPG and programming proceeded normally.
After completion, the patient was moved back to the original location, and error message 25035 reappeared. The patient provided written informed consent.

DISCUSSION

Radiofrequency interference was initially described as electromagnetic interference (EMI), an electrical or magnetic field that prevents the neurostimulator from operating correctly. 5 Currently, three companies manufacture DBS systems (Medtronic, St. Jude, and Boston Scientific), and each DBS system transmits on a unique radiofrequency. Medtronic's Activa DBS system transmits at a frequency of 175 kHz. 6 St. Jude's Infinity DBS system transmits between 2.402 and 2.48 GHz. 7 An FCC search on the Boston Scientific Vercise remote indicates that it transmits at a frequency of 125 kHz. 8 In this light, Boston Scientific has not altered their current-generation product in any way. The best solution is to keep the patient and card reader 4 feet apart; if this is not feasible, one might consider shielding the card reader. The FCC website was cross-referenced for devices that operate in a similar frequency range. It was discovered that WaveID devices (manufactured by RF IDeas, Inc.) transmit at this frequency, and several RF card readers manufactured by RF IDeas were found to transmit at exactly 125 kHz (Fig. 4). 9 WaveID is a badge-based authentication and identification solution powered by RF IDeas readers that enables employees in the medical and manufacturing industries to wave their badge for identification, Single Sign-On, computer login, executing print jobs, registering at meetings, tracking time and attendance, and paying for food. 10 Additional research into the RF readers found that these systems are considered passive and transmit constantly and continuously at 125 kHz. This case highlights the possible effects of radiofrequency interference on DBS systems. Electronic devices, such as cell phones and WaveID readers, that are present in clinic rooms and hospitals might cause additional interference; clinicians should be mindful of potential sources of interference.

CONCLUSION

The WaveID reader was a source of RF interference and prevented a connection between the IPG and programmer. RF interference is a potential source of difficulty in patient programming; as more and more devices are introduced at the clinic site, physicians must remain cognizant of these as potential sources of interference.

DATA AVAILABILITY

Data are available on request from the authors.

AUTHOR CONTRIBUTIONS

R.E.W. and R.J.U. were involved in the conception and design. S.S.G., K.R., and A.L.G. collected data. S.S.G. and K.R. wrote the manuscript. R.E.W. and R.J.U. gave final approval of the manuscript.

ADDITIONAL INFORMATION

Competing interests: The authors declare no competing interests.
Associations of exposure to volatile organic compounds with sleep health and potential mediators: analysis of NHANES data

Objective: The effect of environmental pollution on sleep has been widely studied, yet the relationship between exposure to volatile organic compounds (VOCs) and sleep health requires further exploration. We aimed to investigate the single and mixed effects of urinary VOC metabolites on sleep health and identify potential mediators.

Methods: Data for this cross-sectional study were collected from the National Health and Nutrition Examination Surveys (NHANES) (2005-2006, 2011-2014). A weighted multivariate logistic regression was established to explore the associations of 16 VOCs with four sleep outcomes. Following the selection of important VOCs through least absolute shrinkage and selection operator (LASSO) regression, principal component analysis (PCA), weighted quantile sum (WQS), and Bayesian kernel machine regression (BKMR) analyses were conducted to explore the associations between exposure to single and mixed VOCs and sleep outcomes, as well as to identify the most contributing components. A mediation analysis was performed to explore the potential effect of the depression score.

Results: Of the 3,473 participants included in the study, a total of 618 were diagnosed with poor sleep patterns. In logistic regression analyses, 7, 10, 1, and 5 VOCs were significantly positively correlated with poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders, respectively. The PCA analysis showed that PC1 was substantially linked to a higher risk of poor sleep patterns and its components. The WQS model revealed a positive association between increased concentrations of the VOC mixture and poor sleep patterns [OR (95% CI): 1.285 (1.107, 1.493)], abnormal sleep duration [OR (95% CI): 1.154 (1.030, 1.295)], trouble sleeping [OR (95% CI): 1.236 (1.090, 1.403)] and sleep disorders [OR (95% CI): 1.378 (1.118, 1.705)]. The BKMR model found positive associations of overall VOC exposure with poor sleep patterns, trouble sleeping, and sleep disorders. The PCA, WQS, and BKMR models all confirmed the significant role of N-acetyl-S-(N-methylcarbamoyl)-L-cysteine (AMCC) in poor sleep patterns and its components. The depression score was a mediator of the associations between the VOC mixture index and the four sleep outcomes.

Conclusion: Exposure to single and mixed VOCs negatively affected the sleep health of the American population, with AMCC playing a significant role. The depression score was shown to mediate the associations of VOC mixtures with poor sleep patterns and its components.

Introduction

Sleep is a series of physiological processes regulated by neurobiology, which accounts for one-third of the human life span (1,2). Sleep plays a crucial role in promoting health by affecting many physiological processes, such as the endocrine and neurological systems (3,4). Based on an assessment by the World Health Organization, around one-third of people worldwide suffer from sleep disturbances (5). Poor sleep health is manifested by sleep disorders as well as insufficient, delayed or fragmented sleep, and it is inherently sensitive to external environments such as ambient sounds, light, air quality, and environmental features around the sleep space (6). Many studies have observed an overall negative association between environmental exposures and sleep health, including heavy metals, secondhand smoke, and air pollutants
(7). A cross-sectional study based on the National Health and Nutrition Examination Surveys (NHANES) found that exposure to polycyclic aromatic hydrocarbons (PAHs) might be associated with poor sleep patterns (8). Previous meta-analyses have shown that self-reported exposure to secondhand smoke is positively correlated with short sleep length, poor sleep quality, and daytime sleepiness (9). Liu et al. (10) found that continuous exposure to air pollutants (including PM10, PM2.5, PM1, and NO2) increased the occurrence of sleep disorders and decreased the sleep duration of the Chinese population. Taken together, these results suggest a possible connection between environmental exposures and sleep issues.

Volatile organic compounds (VOCs) are a combination of low-molecular-weight substances (11), including a variety of organic chemicals such as benzene and toluene (12). Both natural sources and human activities are important sources of VOCs. These compounds primarily enter the human body through inhalation or skin contact, and they may affect a variety of physiological and metabolic functions of the body, such as serum lipids (13), sex hormones (14), and liver function (15). One recent study found a positive correlation of co-exposure to VOCs with short sleep duration and trouble sleeping among the United States general population (16). A previous study on sewage treatment workers in the United States found that workers exposed to benzene, toluene, and other organic solvents had increased sleep requirements consistent with solvent exposure (17). A study on rats suggested that toluene exposure disrupted the sleep-wake cycle by affecting monoaminergic responses in sleep-related brain regions (18). Nowadays, accumulated evidence has indicated a close association between depression and sleep disorders, with various neurotransmitters in the central nervous system (CNS) jointly involved in emotional and sleep regulation (19). Furthermore, air pollutants are believed to affect the onset and progression of depression through inflammation and oxidative-stress-related pathways (20). Epidemiological studies have confirmed an increased risk of depression associated with VOCs (21). All these studies suggest a possible link between VOCs and sleep health, with depression potentially playing a significant role in such connections.

However, current research has mainly focused on specific occupational exposure groups limited to a few types of VOCs, with the assessment of sleep health confined to a single dimension and a lack of exploration of its underlying mechanisms. Considering the complex interactions among VOCs and the importance of incorporating various sleep components in the analyses (22), further investigation into the combined effects of VOCs on sleep and their potential mechanisms appears essential.
Therefore, we conducted a cross-sectional study based on 3 cycles of the NHANES database using logistic regression, least absolute shrinkage and selection operator (LASSO) regression, principal component analysis (PCA), weighted quantile sum (WQS), Bayesian kernel machine regression (BKMR) and mediated-effects analyses to fully explore the associations between single and mixed exposures to VOCs and sleep health (including poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders) among the American general population, as well as to explore the mediating effect of the depression score.

Study population

This cross-sectional study utilized data from the NHANES conducted during 2005–2006, 2011–2012, and 2013–2014. NHANES, an ongoing project led by the National Center for Health Statistics (NCHS), is a nationally representative survey conducted in the U.S., which involves continuous data collection through interviews, physical examinations, and laboratory tests on the general population of the United States. For more detailed information, please visit the NHANES homepage (cdc.gov).

In this study, a total of 30,279 participants were recruited from the 3 NHANES cycles, of whom 16,308 were aged 20 years or above. After excluding 11,212 participants with missing urinary VOC data, 5,096 remained. Of these participants, 19 with missing sleep data were excluded. To obtain more reliable results, we further excluded 1,604 participants with missing covariates [ratio of family income to poverty (PIR), diabetes, hypertension, marital status, education level, body mass index (BMI), drinking status, serum cotinine]. Ultimately, 3,473 study participants were included in the analyses (Supplementary Figure S1).

Measurement of urinary VOCs

The quantification of urinary metabolites of VOCs was performed utilizing ultra-performance liquid chromatography-electrospray tandem mass spectrometry (UPLC-ESI/MSMS) (23). An Acquity UPLC HSS T3 column (Part no. 186003540, 1.8 μm × 2.1 mm × 150 mm, Waters Inc.) was utilized for chromatographic separation. More detailed methods and information can be accessed on the NHANES website. In cases where analytes yielded results below the lower limit of detection (LLOD), a fill value, calculated as LLOD/√2, was inserted in the analyte result field.

Assessment of poor sleep patterns and their components

The participants' nighttime sleep length was determined by asking them, "How much sleep do you usually get at night on weekdays or workdays?", and was categorized as normal (7–9 h/night) or abnormal (<7 h/night or >9 h/night). Based on answers to the question "Have you ever told a doctor or other health professionals that you have trouble sleeping?", the presence of self-reported trouble sleeping was evaluated. To determine whether a sleep disorder was present, the question "Have you ever been told by a doctor or other health professionals that you have a sleep disorder?" was asked. When two or more of the following occur, it is considered a "poor sleep pattern": an abnormal sleep duration (<7 h or >9 h), trouble sleeping, and sleep disorders (8, 24).

Measurement of the depression score

The depression score was obtained through the Patient Health Questionnaire-9 (PHQ-9), a face-to-face, interview-based depression screening tool. The PHQ-9 consists of nine questions, each of which is assigned a score of 0–3, and all item scores are summed to obtain a depression score ranging from 0 to 27 (25). The depression score reflects the frequency of participants' depressive symptoms in the past 2 weeks and is positively correlated with the severity of their depression symptoms. The sensitivity and specificity of diagnosing major depression with a PHQ-9 score ≥ 10 were both 88% (26).
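To make the outcome definitions above concrete, here is a minimal sketch of how the four sleep variables and the PHQ-9 score could be derived. It assumes a pandas DataFrame with hypothetical column names (sleep_hours, trouble_sleeping, sleep_disorder, phq1–phq9), not the actual NHANES variable codes.

```python
# Sketch of deriving the paper's sleep outcomes and depression score from
# raw survey responses. Column names here are hypothetical placeholders,
# not the real NHANES variable codes.
import pandas as pd

def derive_sleep_and_depression(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Abnormal sleep duration: <7 h or >9 h per night (normal is 7-9 h).
    out["abnormal_duration"] = (~out["sleep_hours"].between(7, 9)).astype(int)
    # "Poor sleep pattern": two or more of the three components present.
    components = out[["abnormal_duration", "trouble_sleeping", "sleep_disorder"]]
    out["poor_sleep_pattern"] = (components.sum(axis=1) >= 2).astype(int)
    # PHQ-9 depression score: nine items scored 0-3, summed to 0-27;
    # a score >= 10 is the usual screening cut-off for major depression.
    phq_items = [f"phq{i}" for i in range(1, 10)]
    out["phq9_score"] = out[phq_items].sum(axis=1)
    out["phq9_positive"] = (out["phq9_score"] >= 10).astype(int)
    return out
```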
Covariates

Based on previous studies, potential covariates that might influence the association between VOCs and sleep health were included in this study (8, 27). Categorical covariates included gender (female and male), race (Mexican American, other Hispanic, non-Hispanic White, non-Hispanic Black, others), education level (less than grade 9, grade 9–11, high school graduate/GED or equivalent, some college or AA degree, college graduate or above), marital status (married, widowed, divorced, separated, never married, living with partner), PIR (<5, ≥5), BMI (<18.5, 18.5–24.9, 25.0–29.9, and ≥30), drinking status (no, moderate, and heavy), serum cotinine (low and high), diabetes (no, borderline, yes) and hypertension (no and yes). The only continuous covariate was age. Non-drinkers were defined as individuals who had not consumed alcohol in the past year. In addition, women who drank an average of <4 drinks per day and men who drank an average of <5 drinks per day in the past year were defined as moderate drinkers, and the rest were defined as heavy drinkers. Environmental tobacco exposure was assessed using serum cotinine concentrations, categorized as low (≤0.015 ng/mL) or high (>0.015 ng/mL). Hypertension and diabetes mellitus were diagnosed through index measurements, medication use, and self-reports.

Statistical analysis

Given the complexity of the NHANES design, and following the NHANES survey reporting guidelines, we used the 2-year VOC subsample weights divided by 3 as 6-year subsample weights for a better generalization of the results to the entire American population. We first conducted normality tests on continuous variables in the baseline characteristics analysis. Continuous variables following a normal distribution were expressed as weighted means (standard deviation, SD), with t-tests for between-group comparisons; categorical variables were expressed as unweighted counts (weighted proportions), with χ2 tests for between-group comparisons. We standardized the urinary VOC concentrations for urinary creatinine, and further performed a logarithmic transformation of the creatinine-corrected VOCs to conform to a normal distribution, considering the right-skewed distribution of urinary VOCs. The correlation between the natural logarithm (ln)-transformed VOC concentrations under creatinine adjustment was computed using Pearson's correlation coefficient.

We used survey-weighted multivariate logistic regression models to explore the relationship between single exposure to 16 VOCs and sleep health. Pearson correlation analyses suggested high correlations and collinearity among multiple VOCs, so we used multivariate-adjusted LASSO regression to screen out key variates associated with sleep health outcomes and construct optimal models (28). A 10-fold cross-validation was used to select the optimal lambda. Significant VOCs screened through LASSO regression were included in the subsequent PCA, WQS, BKMR, and mediation analyses.
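A minimal sketch of the exposure preprocessing and single-pollutant model described above follows: creatinine correction, ln-transformation, and a weighted logistic fit. Function and column names are illustrative assumptions; a faithful NHANES analysis would also incorporate the strata and PSU design variables, which are omitted here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def creatinine_corrected_log(voc: pd.Series, creatinine: pd.Series) -> pd.Series:
    # Standardize a urinary metabolite for urinary creatinine, then
    # ln-transform to tame the right-skewed distribution.
    return np.log(voc / creatinine)

def weighted_logistic(df, outcome, exposure, covariates, weight_col):
    # Survey-weighted logistic regression approximated by a weighted
    # binomial GLM. (Weights alone are a simplification of the full
    # NHANES design with strata and PSUs.)
    X = sm.add_constant(df[[exposure] + covariates])
    model = sm.GLM(df[outcome], X,
                   family=sm.families.Binomial(),
                   freq_weights=df[weight_col])
    res = model.fit()
    or_ci = np.exp(res.conf_int().loc[exposure])
    return np.exp(res.params[exposure]), tuple(or_ci)  # OR and 95% CI
```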
We used PCA to transform our original correlated VOC variables into a series of uncorrelated principal components that captured important sources of variation; these components explain most of the variation in the original variables, realizing the downscaling of the screened important VOCs. Principal components with eigenvalues exceeding 1 were chosen (29) and integrated into the logistic regression model as continuous independent variables to investigate the associations between principal component scores and poor sleep patterns, along with their components. To explore the mixed effect of VOCs on multiple sleep health outcomes, we fitted the screened important VOCs into WQS and BKMR models. Based on the characteristics of the WQS model, we assumed positive and negative directions, respectively, to explore the associations of the WQS index of VOCs with poor sleep patterns and its components, as well as the contribution of each VOC. The samples were pre-randomized into training and validation sets at a ratio of 4:6, and bootstrap sampling with N = 1,000 was employed to generate robust estimates. In the BKMR model, we performed 20,000 iterations for all analyses using a Markov chain Monte Carlo method. First of all, we calculated the posterior inclusion probabilities (PIPs) of the selected VOCs to identify those important for poor sleep patterns and its components, using a threshold of 0.5. Secondly, the joint effect of VOCs was assessed by comparing VOC mixtures at different percentiles with the median mixture of VOCs. In addition, we fixed the remaining VOCs at the median to explore the dose-response relationship of a single VOC with poor sleep patterns and its components.

Finally, we conducted mediation analyses using the R package "mediation" to test the mediating role of depression scores between the VOC mixture index and poor sleep patterns and its components. The bootstrap method was used and simulations were repeated 5,000 times to estimate the mediation effect and confidence intervals (CIs), with all covariates corrected.

Participant characteristics

Of the 3,473 participants, a total of 618 were diagnosed with poor sleep patterns. Participants' survey-weighted baseline characteristics according to sleep patterns are shown in Table 1. Participants with poor sleep patterns were more likely to be older and widowed/divorced and to have higher levels of BMI and serum cotinine, whereas those without poor sleep patterns were more likely to be in a moderate drinking status; those with diabetes and hypertension were also more likely to have poor sleep patterns. To better illustrate the characteristics of the study population, we further compared the baseline characteristics of included and excluded subjects (Supplementary Table S2).

Relationship of single VOCs with poor sleep patterns and its components

Table 2 demonstrates the correlation of single VOCs with poor sleep patterns and its components under survey-weighted logistic regression analysis after adjusting for all covariates. Poor sleep patterns were positively connected with AAMA, AMCC, CEMA, DHBMA, 3HPMA, MHBMA3, and PGA. Additionally, a positive correlation was found between AAMA, AMCC, CEMA, CYMA, DHBMA, 3HPMA, MA, MHBMA3, PGA, HPMMA and abnormal sleep duration, while a negative correlation was discovered between BPMA and abnormal sleep duration. Only CEMA was positively correlated with trouble sleeping. Significant positive correlations were found between AAMA, CEMA, DHBMA, MA, MHBMA3 and sleep disorders (all P-FDR < 0.05).
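The LASSO screening step described in the statistical-analysis section could look like the following sketch, where 10-fold cross-validation selects the penalty and metabolites with non-zero coefficients are carried forward. This mirrors the procedure the paper describes, not the authors' exact code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

def lasso_screen(X, y, feature_names, cv=10, seed=0):
    # L1-penalized logistic regression; 10-fold cross-validation picks
    # the penalty strength (the analogue of the optimal lambda above).
    Xs = StandardScaler().fit_transform(X)
    model = LogisticRegressionCV(
        penalty="l1", solver="saga", cv=cv, Cs=50,
        scoring="neg_log_loss", max_iter=5000, random_state=seed,
    ).fit(Xs, y)
    coefs = model.coef_.ravel()
    # Metabolites with non-zero coefficients survive the screening step.
    kept = [f for f, c in zip(feature_names, coefs) if abs(c) > 1e-8]
    return kept, model.C_[0]
```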
LASSO regression to identify VOCs associated with poor sleep patterns and its components

Considering the high correlation among multiple VOCs, we used LASSO regression to screen out the VOCs that were more important for poor sleep patterns and its components. Based on the logarithm of λ, we plotted partial likelihood deviance (binomial deviance) curves and determined the optimal λ values for poor sleep patterns, abnormal sleep duration, trouble sleeping and sleep disorders to be 0.004236 [log(λ) = −5.464], 0.005043 [log(λ) = −5.290], 0.003715 [log(λ) = −5.595] and 0.004153 [log(λ) = −5.484], respectively (Supplementary Figure S3). The contraction coefficient curves were further plotted to select the VOCs more correlated with the dependent variables. For poor sleep patterns, a total of 10 VOCs (AAMA, AMCC, BMA, BPMA, CEMA, CYMA, 2HPMA, MHBMA3, PGA, HPMMA) were included in the analyses; 3MHA + 4MHA, AMCC, ATCA, BPMA, CEMA, CYMA, 2HPMA, PGA, and HPMMA were more associated with abnormal sleep duration; for trouble sleeping, AMCC, BMA, BPMA, CEMA, CYMA, DHBMA, and 2HPMA were included; while AMCC, ATCA, DHBMA, MA, and PGA were considered more relevant to sleep disorders (Supplementary Figure S4).

Principal component analysis (PCA) on VOC mixtures

Using the eigenvalue rule in the PCA analyses, we identified 2, 2, 2, and 1 PCs for poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders, which, respectively, explained 59.59, 57.37, 59.34, and 49.49% of the total variance in the VOC exposure (Supplementary Table S3). Supplementary Figure S5 depicts the loadings of each selected VOC on the principal components. For poor sleep patterns, the first PC exhibited similar moderate variable loadings for AAMA, AMCC, CEMA, CYMA, MHBMA3, and HPMMA in the same direction. PC2 showed a high positive loading for BPMA, with negative loadings for CYMA, AMCC, MHBMA3, AAMA, and HPMMA. Notably, AMCC had high positive loadings on PC1 for all four sleep outcomes.

In the principal component analysis adjusting for all covariates, PC1 was significantly positively associated with poor sleep patterns and its components (Table 3).

WQS analysis of single and mixed VOCs with poor sleep patterns and its components

We constructed WQS regression models to explore the relationship between the important VOCs screened by LASSO and sleep health. As shown in Supplementary Table S4, the WQS index of the VOC mixture in the positive direction was significantly associated with poor sleep patterns [OR (95% CI): 1.285 (1.107, 1.493)], abnormal sleep duration [OR (95% CI): 1.154 (1.030, 1.295)], trouble sleeping [OR (95% CI): 1.236 (1.090, 1.403)] and sleep disorders [OR (95% CI): 1.378 (1.118, 1.705)]. Figure 1 shows that AMCC, CEMA, MHBMA3, and AAMA are major contributors to poor sleep patterns; PGA, AMCC, and CEMA are significant for abnormal sleep duration; AMCC and CYMA are key for trouble sleeping; while AMCC and DHBMA are primary contributors to sleep disorders. In addition, we also investigated negative associations of the mixture of VOCs with sleep health but did not find a substantial negative association between the combined VOC WQS index and poor sleep patterns or its components (Supplementary Table S5 and Supplementary Figure S6).
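For intuition, a stripped-down version of the weighted quantile sum idea is sketched below: exposures are scored into quantiles and a constrained logistic fit estimates non-negative weights summing to one. The published WQS procedure additionally uses the 4:6 training/validation split and 1,000 bootstrap resamples, which this sketch omits.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

def wqs_index(X: pd.DataFrame, y: np.ndarray, n_quantiles: int = 4):
    # Minimal weighted-quantile-sum sketch: score each exposure into
    # quantiles, then estimate non-negative weights (summing to 1) and a
    # common slope by maximizing a logistic likelihood.
    Q = X.apply(lambda c: pd.qcut(c, n_quantiles, labels=False, duplicates="drop"))
    Q = Q.to_numpy(dtype=float)
    p = Q.shape[1]

    def negloglik(theta):
        b0, b1, w = theta[0], theta[1], theta[2:]
        z = b0 + b1 * (Q @ w)
        # Negative log-likelihood of the logistic model.
        return np.sum(np.log1p(np.exp(z)) - y * z)

    x0 = np.concatenate([[0.0, 0.1], np.full(p, 1.0 / p)])
    res = minimize(
        negloglik, x0, method="SLSQP",
        bounds=[(None, None), (None, None)] + [(0.0, 1.0)] * p,
        constraints={"type": "eq", "fun": lambda t: t[2:].sum() - 1.0},
    )
    return res.x[1], dict(zip(X.columns, res.x[2:]))  # slope, weights
```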
BKMR analysis of single and mixed VOCs with poor sleep patterns and its components

The screened key VOCs were incorporated into the BKMR model to further validate the mixture effect of VOCs on sleep health. Supplementary Table S6 shows the PIP values of the selected VOCs, among which AMCC had the highest PIP value for poor sleep patterns, trouble sleeping, and sleep disorders. AMCC, BPMA, CYMA, 2HPMA, and PGA made great contributions to abnormal sleep duration. Figure 2 shows the overall effect of VOC mixtures and the estimated changes in the risk of poor sleep patterns as well as its components, compared to when all VOCs are fixed at the median. We found that the overall effect of urinary VOCs was significantly and positively associated with poor sleep patterns, trouble sleeping, and sleep disorders when all VOCs were above the 55th percentile. Further, exposure-response relationships between single VOCs and sleep health indicators were analyzed while fixing the remaining VOCs at the 50th percentile. We found that AMCC was significantly, nonlinearly and positively correlated with poor sleep patterns, trouble sleeping, and sleep disorders. BPMA and 2HPMA were nonlinearly and negatively correlated, while CYMA and PGA were nonlinearly and positively correlated, with abnormal sleep duration. In addition, we also observed a nonlinear association between AMCC and abnormal sleep duration (Supplementary Figure S7).

Mediating role of the depression score

Next, we explored whether VOCs indirectly affected sleep health through depression scores. Depression scores mediated 21.4, 24.0, 30.1, and 16.4% of the associations of the VOC mixture index with poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders, respectively (all p < 0.05) (Figure 3).
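The mediation step can be approximated with a product-of-coefficients bootstrap, sketched below under strong simplifying assumptions (a linear mediator model and an indirect effect on the linear-predictor scale). The R "mediation" package the authors used estimates causal mediation quantities more rigorously than this approximation.

```python
import numpy as np
import statsmodels.api as sm

def mediation_bootstrap(X, M, Y, C, n_boot=5000, seed=0):
    # Exposure index X -> depression score M -> binary sleep outcome Y,
    # with covariate matrix C. Bootstraps the indirect effect a*b,
    # echoing the 5,000 replications reported in the paper.
    rng = np.random.default_rng(seed)
    n = len(Y)
    indirect = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)
        Xm = sm.add_constant(np.column_stack([X[i], C[i]]))
        a = sm.OLS(M[i], Xm).fit().params[1]              # X -> M path
        Xy = sm.add_constant(np.column_stack([X[i], M[i], C[i]]))
        bcoef = sm.Logit(Y[i], Xy).fit(disp=0).params[2]  # M -> Y given X
        indirect[b] = a * bcoef
    lo, hi = np.percentile(indirect, [2.5, 97.5])
    return indirect.mean(), (lo, hi)
```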
Discussion

For the first time, our study provides systematic and comprehensive confirmation of the relationship between VOCs and various sleep outcomes. According to the single and mixed models, AMCC was consistently positively correlated with poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders. In the mixed analyses, the PCA, WQS, and BKMR models supported that co-exposure to VOCs was significantly and positively associated with poor sleep patterns, trouble sleeping, and sleep disorders. In addition, depression scores mediated the associations of co-exposure to VOCs with poor sleep patterns and its components.

Limited studies have investigated the effect of exposure to VOCs on sleep health. A survey on the general population of the United States revealed that with increasing co-exposure to VOCs, the risks of short sleep duration and trouble sleeping significantly elevated (16). Thetkathuek et al. (30) found that workers exposed to xylene and toluene were more likely to experience drowsiness compared to those not exposed to solvents, with the lack of personal protective equipment being a major factor affecting sleep disorders. Several studies on rats suggested that central monoaminergic mechanisms were associated with toluene-induced partial insomnia and sleep-wake cycle disruption (18, 31). A cross-sectional study showed a higher prevalence of sleep disturbances among tunnel workers previously exposed to acrylamide and N-methylolacrylamide (32). In addition, considering that tobacco smoke is a major source of VOC exposure, the relationship between secondhand smoke exposure and sleep health is partly suggestive of the impact of VOCs on sleep. A large number of cross-sectional and cohort studies have shown that exposure to secondhand smoke is significantly associated with poor sleep health, such as poor sleep quality, sleep maintenance disorders, and short sleep duration (33, 34), which supports our speculation. However, current relevant studies are limited to certain single VOC exposures and lack a comprehensive assessment of sleep outcomes, making it difficult to generalize the results.

In real life, people are commonly exposed to a variety of mixed VOCs, making a comprehensive assessment of the impact of VOCs on sleep of significant public health importance. In this study, to address the collinearity and correlation issues among multiple VOCs, we conducted PCA, WQS, and BKMR analyses based on LASSO regression to better capture the combined toxic effects of VOC exposure on sleep health outcomes. These models also supplemented and corroborated the findings of logistic regression on individual VOCs and health outcomes, aiming to identify the exposure components that contribute most significantly to the outcomes. In our study, all analytical models pointed to an elevated incidence of poor sleep patterns, as well as its components, with increasing concentrations of AMCC. AMCC was considered a key triggering factor for poor sleep outcomes. As the parent compound of the metabolite AMCC, dimethylformamide (DMF) is a widely used solvent; however, the mechanism through which DMF induces poor sleep outcomes is unclear. Notably, in terms of the BKMR exposure-response function, we found that when the metabolites of other VOCs were fixed at the median level, AMCC showed a nonlinear relationship with poor sleep patterns and its components. Specifically, as the concentration of AMCC increased, the prevalence of poor sleep patterns and its components initially decreased and then increased, with this nonlinear association being more pronounced for abnormal sleep duration. A plausible explanation for this is that DMF has a wide-ranging CNS depressant effect, which enhances the pentobarbitone-induced increase in sleep duration (35, 36). A moderate increase in sleep duration promotes normal metabolism and homeostasis, while a continued accumulation of AMCC leading to excessive sleep duration results in poor sleep outcomes and harms the body (37).
Currently, it is unclear how exposure to VOCs influences sleep disturbances. Chronic exposure may affect sleep outcomes through CNS regulation and physiological changes in the respiratory system. First of all, air pollutants may lead to an altered and dysregulated expression of neurochemicals in the CNS. Specifically, it has been demonstrated that air pollution lowers the serotonin level in the brain (38). Serotonin is one of the most important brain chemicals regulating the sleep-wake cycle; a decrease in the serotonin level can result in drowsiness and lead to sleep disturbances (39). Several in vitro studies have shown that exposure to high levels of VOCs promotes oxidative stress in human lung epithelial cells, and the reactive oxygen species (ROS)-induced activation of pro-inflammatory genes and transcription factors triggers the production of inflammatory mediators (40), which leads to respiratory-related sleep disturbances and reduces sleep quality (7). Furthermore, inflammatory signals reach the CNS through active mechanisms and cellular pathways involving direct neural innervation, the effect of humoral mediators, and blood-brain barrier transport, thereby influencing alterations in sleep patterns (41). Given the complex additive and synergistic effects among various VOCs, more experimental and epidemiological studies will be needed in the future to elucidate the underlying mechanisms.

Previous studies have shown that urinary VOC concentrations are significantly positively correlated with depressive symptoms (21). A cross-sectional study based on NHANES 2007–2014 found a positive dose-response connection between clinically relevant depression and sleep patterns (42). About 90% of depressed patients complain of sleep quality problems such as insomnia (43), although sleep quality problems and depression do not necessarily occur at the same time, their emergence depending on many other factors (44). Based on the above studies, we hypothesized that depression could be a potential mechanism between VOC exposure and sleep health. Further mediation analyses also validated this speculation: depression scores mediated 21.4, 24.0, 30.1, and 16.4% of the associations of VOCs with poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders, respectively. At the mechanistic level, VOCs promote the generation of oxidative-stress-mediated inflammatory mediators in the body, leading to a state of systemic chronic inflammatory stress and an increased depression risk (21). Similar changes in neurotransmitter receptor systems and neuroendocrine responses between depression and sleep disturbances may play a significant role in their relationship. One hypothesis suggests that depression is caused by an imbalance between cholinergic and monoaminergic neurotransmitter production, which is closely related to the regulation of rapid eye movement (REM) sleep (19). Other studies have suggested that dysfunctions of the orexin system may be related to the pathophysiology of mood regulation and the sleep-wake cycle (45). In addition, depression inhibits melatonin secretion, interferes with circadian rhythms, and disrupts sleep (46). In conclusion, our findings provide evidence that improving depressive symptoms might reduce the negative effects of VOC exposure on sleep health. Based on these results, we advocate for the government to strengthen the management of VOC concentrations by enacting stricter regulations. Public awareness of VOCs should be raised in the future, with attention to personal protection. Furthermore, the role of mental and emotional well-being in the impact of environmental pollution on physical health should be emphasized to achieve better health management.
There are several strengths of our study. Firstly, based on the NHANES database, we were able to explore our questions in a large sample population. Secondly, we utilized LASSO regression to select the most relevant VOCs associated with the four sleep outcomes, addressing multicollinearity issues arising from highly correlated variables. Additionally, we employed various mixed-effect models to complement each other, confirming the mixed negative effects of VOCs on poor sleep outcomes. Of note, we identified AMCC as a potential compound closely related to poor sleep patterns and its components, providing insights for further research into the mechanisms underlying adverse sleep outcomes.

Our study also has certain limitations. Firstly, since self-reporting was the basis for the sleep health outcomes in this study, recall biases or inaccurate reporting could potentially skew the findings. It will be necessary in the future to classify sleep issues through objective tests such as actigraphy for a more refined assessment. Secondly, urinary VOC metabolites were assessed based on a single measurement without exposure timing information in NHANES. This measurement method can only represent current VOC levels and may lead to measurement errors. Thirdly, our study had inherent limitations as a cross-sectional study, which prevented us from establishing a causal relationship between VOC exposure and sleep outcomes. More cohort studies and randomized controlled trials are needed in the future to better reveal this association.

Conclusion

Single and combined VOC exposure increased the risk of poor sleep patterns, abnormal sleep duration, trouble sleeping, and sleep disorders, with AMCC being a significant contributor. Depression scores mediated the associations between VOC mixtures and sleep outcomes. Our study emphasizes the potential of controlling VOC exposure for improving sleep health, and advocates for the future formulation of health policies regarding VOC regulation to normalize VOC concentration management. In the future, additional prospective research will be required to validate and expand upon our findings.
Figure 1: Positive weights of the WQS index of screened urinary VOCs for poor sleep pattern (A), abnormal sleep duration (B), trouble sleeping (C), and sleep disorder (D). The dashed grey lines represent the cutoff discriminating which elements have a significant weight. Models were adjusted for age, sex, race, body mass index, serum cotinine, drinking status, marital status, education level, the ratio of family income to poverty, diabetes, and hypertension.

Figure 2: Overall relationship between the mixture of VOCs and poor sleep pattern (A), abnormal sleep duration (B), trouble sleeping (C), and sleep disorder (D), estimated by the Bayesian kernel machine regression (BKMR) model. Models were adjusted for the same covariates as in Figure 1.

Figure 3: Estimated proportion of the association between the VOC mixture index and poor sleep pattern (A), abnormal sleep duration (B), trouble sleeping (C), and sleep disorder (D) mediated by the depression score. Models were adjusted for the same covariates as in Figure 1. *p < 0.05; **p < 0.01; ***p < 0.001. VOCs, volatile organic compounds.

Table 2: Multivariable logistic regression analysis between single urinary VOC metabolites, poor sleep patterns, and its components. Models were adjusted for the same covariates as in Figure 1. Bold numbers indicate FDR < 0.05. FDR, false discovery rate; OR, odds ratio; CI, confidence interval.

Table 3: Association between VOCs, poor sleep patterns, and its components: principal component analysis results.
International competence and knowledge studies and attitudes of the Brazilian management accountant: analyses and reflections

Authors: Ph.D.s in Accounting from the University of São Paulo; Associate Professors at Mackenzie Presbyterian University, Rua da Consolação, n. 896 – Prédio 60 – São Paulo SP, CEP 01302907 (ricardo.cardoso@mackenzie.br, octavio.mendonca@mackenzie.br, oyadomari@mackenzie.br).

INTRODUCTION

The accountant's profession, as well as its specialties such as the management accountant, undergoes alterations as changes in the business world increasingly require certain competences from this professional. The so-called competences of a professional were studied initially in the psychology area, with the articles of McClelland (1973), later with Boyatzis (1982) and then with Spencer & Spencer (1993), the latter having prepared the so-called competence dictionary of several professions. More recently, these studies have sought to relate the competences of professionals to their intellectual and cognitive skills and emotional intelligence, as in the latest articles of McClelland (1998). Competence studies and their relations also draw on the approach of the longitudinal studies made by Boyatzis, Stubbs and Taylor (2002) and Goleman, Boyatzis and McKee (2002), who worked segregating competences into self-management, relationship management and cognitive. Despite all the existing studies on the issue of competence, it is a consensus that it cannot be considered a settled issue yet. Competence is still a construct in formation.
Allied to studies that seek the formation of a consensus around a construct called competence and its application in the scope of positions in business organizations, it is also possible to find researchers from specific knowledge fields who started using this theoretical basis to develop applied research on competences in certain professions: physicians, with Epstein & Hundert (2002); business managers, with Erondu & Sharland (2002); buyers, with Giunipero and Pearcy (2000); future managers, with Godoy, Antonello, Bido and Silva (2009); and accountants, with IFAC and IMA. The accountant profession has many specialties, the management accountant being one of the most important and relevant in the profession, which is the reason for this study. However, this approach does not discourage studies of the general accountant or other specialties such as taxes, audit, and financial, among others. In the accounting area, studies about competences are somewhat confused with the professional's functions and activities, according to reports in the studies of AICPA (1999), IFAC (2003), Abdolmohammadi, Searson and Shanteau (2004) and Palmer, Ziegenfuss and Pinsker (2004). Considering the stage of the research about the accountant's competences, this study assumes that the demands being placed on the accountant are liable to recording in the literature about the profession. From these studies and the assembly of the competence dictionary, it was also tried to use a specialist panel to evaluate required competences. After this analysis, it will be assessed whether the competences obtained from Brazilian accountants are aligned with the competences listed in the international studies quoted by Palmer, Ziegenfuss and Pinsker (2004). The article represents a view of the management accountant profession along a behavioral line, trying to better understand the impacts of the stresses that currently involve this professional, emphasizing the issue of harmonization of accounting standards, the overall curriculum and even issues connected to the information technology area. It is believed that, by making a greater effort to understand the management accountant's competences, it will be possible to reinforce studies about accounting teaching issues and professional training, as well as about behavioral aspects of this profession. It is within this context that this paper is inserted.

Competence Studies

The competence study goes through a long line of interpretation that can be partly understood in the placement of Dutra, Hipólito and Silva (2000), who report that when trying to answer "What is competence?" one enters a minefield, such is the diversity of interpretations of the term over the last thirty years. According to the authors, however, the risk is worthwhile because it deals with a concept whose purpose is to clarify cloudy aspects of people management. Considering the divergences recorded among several authors, such as Woodruffe (1991), Le Boterf (1994) and Parry (1996), the term "competence" has as its origin the Latin word competentia, meaning the quality of who is capable of appreciating and settling a certain subject, doing a certain thing, with capability, skill, aptness and good repute.
Another important aspect to be analyzed as far as competence is concerned is the association of competence with the value-adding ideal and delivery to a certain context in a manner independent from the concerned position, which was later discussed by authors such as Zarifian (2001), Le Boterf (1994 and 2001) and Fleury & Fleury (2001). By value adding, Dutra (1999) understands something that the person delivers to the organization in an effective manner, in other words, something that remains even when the person leaves the organization. Over the years, a set of authors began to assess both positionings jointly: the delivery and the characteristics of the person, which may define it more properly (Parry, 1996). Segregating competences into inputs — the "CHA" (Conhecimentos, Habilidades e Atitudes in Portuguese): knowledge, skills and attitudes — and outputs — value adding — is perhaps not the best solution because, as can be seen, there is synergy between the two concepts and, at the same time, interdependence. Another concept to be understood is that of models of competences, which represent the set of competences required for higher performance in a certain position, trying to identify the behaviors required to perform a certain function successfully, according to Lucia & Lepsinger (1999). One of the most developed generic models of competences was that of Boyatzis (1982), to whom the human organism is a complete system, as are organizations, and these systems cannot be observed separately; therefore, the competences of a person must be understood by evaluating the context surrounding them. The construction of competence models for positions is discussed in several manners, but they can be synthesized in the view of Spencer & Spencer (1993), who worked with 03 basic methods: the so-called classical method, which uses employees with higher performance; the specialist panel, with meetings and discussion about the position to be modeled; and studies of single incumbents or future jobs, the latter being the most complicated since there are no parameters to prepare the model. The logic of competence model construction based on a specialist panel, through exploratory research, is the basis used in this research paper.

International studies of the management accountant competences

The concern with the competence study in the accounting profession has appeared more strongly in trade entities or associations since 1950, but the use and adoption of the term competence only occurs in the 1990s, with the studies of the Big 8 Firms (1989), IMA (1994, 1996 and 1999), AECC/AAA (1996), IFAC (1998 and 2003) and IIA (1999). For the purposes of this research, the main competences quoted in the above-mentioned studies were tabulated and are presented next. In addition to the studies performed by the entities, we must consider that several accounting area researchers discuss the theme. The first is Kester (1928), going through Bower (1957), Heckert and Willson (1963) and, later, Henning and Moseley (1970).
Past all this evolution of the 1960s and 1970s, we have, in the 1980s and 1990s, a discussion about the strategic role of accounting and the more proactive role of the professional, allied to structural changes in teamwork and the systemic view in organizations, reported in Hardern (1995), Laurie (1995) and Morgan (1997). The consolidation of these concepts and their implication for the profession can be seen in the studies of Sakagami, Yoshimi and Okano (1999) for the Japanese case. Specifically discussing the issue of the management accountant's competences, we have semi-structured studies, with emphasis on Malcarney (1964), Vatter (1986), Pierce (2001) and Boritz and Carnaghan (2003), where business view, management techniques and the capability to generate managerial information are discussed as important factors in this process.

As a consequence of the literature review about the accountant's competences, one of the theoretical justifications for this study is that we have: a shortage of studies about the accountant's competence, and mainly of studies focused on the management accountant's competences, because the literature mostly deals with the professional's functions and not the competence factor; and almost all studies neither perform empirical surveys nor deal with the competence aspect through the psychology and human resources approach. Research development in this area based on more solid theoretical grounds and appropriate methodology is reported in the studies of Abdolmohammadi, Searson and Shanteau (2004), IFAC (2003), Pierce (2001), in the inventory of studies connected to behavioral accounting of Meyer and Rigsby (2001), Boritz and Carnaghan (2003), Cardoso and Riccio (2005) and Cardoso, Riccio and Alburquerque (2009). It is relevant to report that recent studies of international regulatory entities start to demonstrate greater concern with conceptual structuring.

This research issue, therefore, becomes relevant to the extent that it uses a conceptual approach broader than the regulatory view of accountants' competences and seeks empirical evidence, with suitable statistical treatment, to structure the management accountant's competences. In order to facilitate viewing the competences required from the accountant, a summary of the found articles was prepared, specifically about the professional of this area (source: prepared by the authors).

Problem and Work Hypothesis

This study has an exploratory nature, seeking the development of a structure for the management accountant's competences, for which it has a set of characteristics required for the development of this professional, meeting the precepts pointed out by Spencer and Spencer (1993), Lucia & Lepsinger (1999) and Boyatzis, Stubbs and Taylor (2002). The purpose of this study is to investigate the competences required of the Brazilian management accountant and to check whether these competences are aligned with the competences cited in studies of international accounting entities such as IFAC, IMA and AICPA, among others.

Based on this context, the following questions are placed to be discussed in this study:

Question 1: What would the competences required for the management accountant be?
Question 2: Are the most relevant competences cited in international accounting entities' studies in line with the study performed in Brazil?

Question 3: Among the presented competences, are there competences that must be prioritized in the management accountant's development?

For the development of the study, the following conceptual assumptions were raised from the studies: these new competences will be important to this professional because they provide him with a mapping of essential competences that may result in superior performance; because of resource and cultural diversities among organizations, regions and countries, and even accounting companies, the competence structure must be considered a temporary model; and additional studies are necessary for its full utilization.

Sample

The sample is intentional and not probabilistic, collected during the months of June and October 2007, with graduate students at the specialization level in accounting and controllership of two important Brazilian universities, the Pontifical Catholic University of Campinas and Mackenzie Presbyterian University. The students are selected to enter graduate school by considering their prior academic background and professional experience in the course concentration area and, in order to answer the questionnaire, had to have acted professionally in the management accounting or controllership area for at least 3 years. Questionnaires were handed to students and answered personally or by e-mail. Out of the total of 285 sent, 200 were answered and considered methodologically valid.

Composition of instrument

Instrument construction followed 02 stages, the first one divided into the following steps: a) analysis of the theoretical grounding about competences in the scope of the behavioral area; b) analysis of competences mentioned by studies focused on the accounting area for preparation of the variables to be measured, including international studies; c) addition of other identified competences considered relevant; d) construction of the meaning of each competence in the instrument, aimed at reducing interpretation errors on the part of respondents; e) holding 03 rounds of discussion with 04 professionals with wide experience in the management accounting area to check adherence to the proposal and raise other variables. The second stage consisted of applying the questionnaire to 08 professionals for pre-testing purposes; after the application, comments and remarks about the instrument were collected, assessed and incorporated, to the extent necessary, into the final instrument.

The variables incorporated into the questionnaire are presented in table 02, considering as research support the authors who describe the competences in greater detail, both among authors connected to the behavioral area and among those of the accounting area. The described competences were divided into capabilities, skills, knowledge and other personal characteristics. The questions were prepared on a 10-point Likert scale, with 01 for no importance and 10 for extreme importance.

[Table 02 (excerpt): e.g., V18 Outside Relationship — Henning and Moseley (1970) and Morgan (1997). Source: prepared by the authors.]

For internal consistency, Cronbach's Alpha, originally developed by Cronbach (1951), was used; considering the test premises, the data presented a result of 0.884, which represents a good degree of instrument reliability. It was noticed that none of the 18 variables placed for evaluation has a great effect on the alpha composition when the alpha results are analyzed by removing the effects of each variable in SPSS.
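Cronbach's alpha and the "alpha if item deleted" check reported above are straightforward to reproduce. A minimal sketch follows, assuming the 18 item responses sit in a pandas DataFrame (one column per competence variable).

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Classic internal-consistency coefficient over the k items:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total).
    # The paper reports 0.884 for the 18-variable instrument.
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    # SPSS-style "alpha if item deleted" check used to confirm that no
    # single variable dominates the coefficient.
    return pd.Series(
        {c: cronbach_alpha(items.drop(columns=c)) for c in items.columns}
    )
```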
Results of research

For presentation purposes, we first have the respondents' data.

Exploratory factorial analysis

Dealing with an exploratory study, the statistical instrument used was factorial analysis, whose basic purpose is to summarize data through linear combinations (factors) among variables and to explain the relations between these variables. The factorial analysis operationalization followed the steps given by Hair (1998) and Tabachnick & Fidell (2001).

By using the principal component method, where the concern falls upon the common and specific variance, and performing the significance test of the obtained factors, we obtain the following results: a) Bartlett's sphericity test, whose purpose is to know whether the correlation existing between variables is significant, to the point of just a few factors representing a large part of the data variability — in this test p < 0.000 was obtained for a significance level of 0.05; b) the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, whose purpose is to know whether the correlation between each pair of variables can be explained by the other variables included in the study — this study obtained an absolute value of 0.830; c) the anti-image matrix, which analyzes the correlation of a variable against the others with the effects of the others controlled — in this test all combinations reached a correlation over 0.50 for each pair of variables. All these tests, together, demonstrate that the null hypothesis that the matrix variables are not sufficiently correlated can be rejected, thus accepting the factorial analysis assumption placed by Hair (1998) and Tabachnick & Fidell (2001). Another criterion considered and accepted in the factorial analysis was communality; in other words, the common variance of each variable in relation to the others had values over 60% for practically all variables, the only ones a little below, close to 50%, being the capability to solve problems and management techniques variables.

The choice of the number of factors was done by using eigenvalues over 1, explained variance over 60% in the accumulated total and, at last, eigenvalue (scree) diagram analysis. Considering the analysis assumptions of each of these criteria placed by Tabachnick & Fidell (2001), and considering a study connected to Social Science as placed by Hair (1998), a number of 04 factors was defined as the one that best meets the placed assumptions, demonstrating an explanatory power of 62.8% of the variance.

Considering the choice of four (04) factors and the difficulty to analyze the principal component matrix of some variables, Varimax orthogonal rotation was done, which is more adequate for cases where there are independence assumptions of components (Hair, 1998), which is the case of this kind of study, as performed by Giunipero and Pearcy (2000). From the rotation, the rotated component matrix presented in table 04 was obtained. By considering the good model adjustment criterion placed by Hair (1998), it is seen that the Varimax rotation met the allocation process of variables to factors, as only two variables, planning and management techniques, present two values over 0.40.
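The adequacy tests and rotated extraction described above can be reproduced with the factor_analyzer package. The sketch below assumes the item responses are in a pandas DataFrame; it is illustrative rather than the authors' SPSS procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo,
)

def run_efa(items: pd.DataFrame, n_factors: int = 4):
    # Adequacy tests reported in the paper: Bartlett's sphericity
    # (expect p < 0.05) and the Kaiser-Meyer-Olkin measure (0.830 here).
    chi2, p_value = calculate_bartlett_sphericity(items)
    _, kmo_model = calculate_kmo(items)
    # Principal-component extraction with Varimax rotation; factors are
    # retained by the eigenvalue > 1 rule plus accumulated explained
    # variance, as in the paper.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                        method="principal")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    communalities = pd.Series(fa.get_communalities(), index=items.columns)
    return {"bartlett_p": p_value, "kmo": kmo_model,
            "eigenvalues": eigenvalues, "loadings": loadings,
            "communalities": communalities}
```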
Scale refinement

In this phase, scale refinement and the denomination of each factor must be sought. Scale refinement refers to the definition of the number of variables that must compose the accountants' competence model. To that end, Dutra's (2001) recommendation was used, which indicates something around seven to twelve competences to form the model. Other authors foresee the use of reduced models with something around 8 to 14 competences, such as Horton (2000) and Zhong and Kan (2004).

To Dutra (2001), this interval minimizes the subjectivity bias in the evaluation of people and reduces the possibility of overlap among competences. This fact has a direct relation to the practical use of the generic model. In order to perform this reduction, the parameter used was the variables that presented the lowest factorial loads, demonstrating a lower relation to the common construct, a model proposed by Horton (2000). Following this criterion, in the case of this research, all variables that presented factorial loads below 0.600 were removed. The following variables fit this criterion: capacity to solve problems (0.437), management techniques (0.495), interpersonal relationship (0.578), integrity and confidence (0.599) and interpersonal communication (0.570). With these adjustments, the model passes from 18 variables to 13 variables.

After making these variable reductions, a new factorial analysis and a new analysis of all the assumptions of this technique were made, which were fully met by the new data, as described below. The reliability coefficient for the new scale, measured by Cronbach's Alpha, was 0.847, even higher than the minimum necessary standards. The extraction method was principal axis factoring and the factor rotation method was Varimax, the same used up to this moment in the research. When running the model again, the following results were reached: a) Bartlett's sphericity test obtained p < 0.000 for a significance level of 0.05; b) for the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, an absolute value of 0.840 was obtained; c) with the anti-image matrix, combinations that reached a correlation over 0.50 for each pair of variables were obtained, the lowest index being the leadership/teamwork variable with 0.730. All these tests, together, demonstrate that the null hypothesis, according to which the matrix variables are not sufficiently correlated, can be rejected, thus accepting the factorial analysis assumption placed by Hair (1998) and Tabachnick & Fidell (2001). Another criterion considered and accepted in the factorial analysis was communality: the common variance of each variable in relation to the others was over 50% for all variables except information technology (36%), which was therefore removed from the 13 variables, meeting the assumption referred to by Hair (1998). The number of factors was reduced to three (3) and the model went from 13 to 12 variables, for which eigenvalues over 1, explained variance over 67.4% in the accumulated total and, at last, eigenvalue diagram analysis were used. These data reached a new factor matrix after Varimax rotation, as presented below.

Management accountant's competence factors

Considering the factor rotation matrix results, 03 factors are indicated, which were conceptually ranked as follows:

Factor 01: Technical Competences: it congregates competences aimed at specific accounting and control area knowledge, especially those related to technical aspects, such as accounting, budget, planning, costs and internal controls. Additionally, it includes the accountant's analytical view and strong knowledge of legal issues. It is clearly seen that the professional's basic characteristic is his technical competence. The competences referred to here are: accounting and finances, legal, control tools, planning and analytical capacity.
Factor 02: Behavioral Competences: these are related to behavioral aspects of this professional with members internal and external to the organization, as well as to the ability to communicate, analyze and solve business activity problems. Allied to these aspects, there are issues related to information technology that form part of this set of abilities related to the management accountant. The competences related to these data are: self-control, listening effectively, leadership/teamwork, information management and outside relationship.

Factor 03: Posture Competences: among the competences, perhaps this is the group that can most differentiate the general accountant from the management accountant, because in the literature these collocations are made with a lot of emphasis, even if from semi-structured studies. It involves developing and demonstrating the ability to endeavor, i.e., developing creative solutions to the problems of organizations, innovating in the manner of work, as well as having a close relation with the strategic aspects of the organization, demonstrating a broad business view. The listed competences were: attitude/endeavor and general/strategic view.

Responses to the Questions of this Research

Question 1: What would the competences required from the Brazilian management accountant be? Item three presents the questions of this research, of which the first one is discussed here. Seeking clues to this question, it can be considered that the 03 factors described above can answer it. The order of factors seems to demonstrate a certain hierarchy or perhaps areas of competence to be considered in the training of these professionals. To that end, however, the research results need to be validated by management accounting professionals. The setting of this structure type was not identified in the specific literature referring to the management accountant, which reinforces the need for and the importance of this proposal.

For future research, the impact of new management technologies and techniques on the professional's competences must especially be followed up, as well as evaluating the generic model competences against the competences required of management accounting professionals of specific sectors, such as audit and accounting companies and companies from sundry sectors, whether financial, retail, industry or others. Based on the theoretical grounding and the empirical research results, a generic competence structure can be built as described by Boyatzis (1982) and, more specifically, by Spencer & Spencer (1993). Despite the wide use of generic structures or models in the people management area, there are criticisms, such as those made by Blackmore (1999), who warns about the assumption that there is only one competent or effective professional type: the generic model one. Another critique refers to the organizational culture dependence, where the model may be refuted or even different interpretations of a competence may occur in different situations. In order to reduce errors when using generic structures, some authors indicate cautions in implementation, highlighting the following: Boyatzis (1982), to whom the model must be aligned to the objectives, culture and values of the organization; McClelland (1998), who deals with the need for senior management support and focus on performance improvement; and Lucia & Lepsinger (1999), who remind of the need to try to identify potential problems and their possible causes, develop alternate plans and establish communication channels.

Question 2: Are the most relevant competences mentioned in international accounting entities' studies in line with the study performed in Brazil? Considering the competences mentioned in Charts 1 and 2, coming from studies originating from accounting area entities and associations, mainly from the observations of Palmer, Ziegenfuss and Pinsker (2004), the variables listed below may be the first ones to be compared.

Question 3: Among the presented competences, are there competences that must be prioritized in the management accountant's development? Based on the research and discussion data about the existence or not of competences to be prioritized in the development of the professional, the H1 hypothesis of the study was supported on average tests.

Hypothesis 1: There is a significant difference in the importance assigned to a certain competence in relation to the others.
For the average test, the Kruskal-Wallis test was used, considered quite efficient for this kind of study, where two or more samples are tested as to whether they come from the same population or from different populations. The test basis is the difference between the importance evaluation averages for the competence variables among the male and female accountants who answered the question. The use of men and women is related to the references made in the studies of Loft (1992) and Anderson, Johnson and Reckers (1994) about the female management accountant's perception of the profession's aspects, where different views are identified, which might also be identified in this paper. Based on this test, the null hypothesis considered is that there is no statistically significant difference with respect to the assignment of the variable among the groups formed by men and women. At a 0.05 level of significance, it is seen that there is no difference between respondents coming from the 02 groups, men and women, except regarding the attitude/endeavor variable. Therefore, the null hypothesis of equality of averages cannot be rejected, concluding that the sub-samples had no influence on the general study results. Considering the data, it is seen that not even the variables held in the literature as prioritized — by the study conducted by Palmer, Ziegenfuss and Pinsker (2004) and by the accounting entities' researches White Paper (1999), AECC/AAA (1990), IMA (1996 and 1999) and IFAC (2003) — were in general considered relevant (see table 8). Among these variables, the only one considered liable to priority is attitude and endeavor, which on its own could not be considered relevant. This fact, despite still being initial, can be analyzed as an indication that the management accountants' competences form part of a common structure and are not isolated competences that must be individually improved. This kind of analysis is starting to be done, reaching still embryonic results, such as Cardoso and Riccio's (2005) study.
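The average test above can be sketched with scipy's Kruskal-Wallis implementation, applied per competence variable across the two gender sub-samples; column names are hypothetical.

```python
from scipy import stats

def compare_importance_by_gender(df, competence_cols, group_col="gender"):
    # Kruskal-Wallis H test per competence: does the importance assigned
    # to a variable differ between the male and female sub-samples?
    results = {}
    for col in competence_cols:
        groups = [g[col].dropna() for _, g in df.groupby(group_col)]
        h, p = stats.kruskal(*groups)
        results[col] = {"H": h, "p": p, "differs_at_5pct": p < 0.05}
    return results
```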
CONCLUSIONS

Understanding which competences are required of the management accounting professional may help the growth and development of this function in organizations, always having as an assumption the importance of people in value generation for the institutions. The study had as its main result the definition of the Brazilian management accountant's competences: from the 18 variables collected in the literature, which were submitted to the evaluation of 200 respondents, 12 variables were reached, organized into 03 factors: technical competences, behavioral competences and posture competences. After this analysis, the study compared the 12 found variables to the competences held as prioritized by the main studies in the area, leaving out the communication, interpersonal relationship, capacity to solve problems and information technology competences, demonstrating that in the behavioral aspect there may be posture differences between the Brazilian professional and the others, which must be better understood and discussed. At last, it was found from the average test that, of the 18 mentioned competences, only the attitude competence can be considered liable to be prioritized, which actually may not have a priority meaning, perhaps indicating that the competences are more important as a whole than in the individual scope. It must be considered that some findings reported in this study are limited to the current research development stage of the accountant's and management accountant's competencies. However, the study offers some contributions both in the methodological field and in the theoretical implications of the management accountant competence study.

The limitations of this study are related to the intentional and not probabilistic sample, which does not allow any kind of generalization. Another question refers to the limitations of a sample composed of graduate students from only two Brazilian universities.
Boyatzis (1982), for whom the model must be aligned to the objectives, culture and values of the organization; McClelland (1998), who deals with the need for senior management support and a focus on performance improvement; and Lucia & Lepsinger (1999), who point out the need to try to identify potential problems and their possible causes, develop alternate plans and establish communication channels. Question 2: Are the most relevant competences mentioned in international accounting entities' studies in line with the study performed in Brazil? Table 1: Summary of the literature about management accountant competences. The research supports the authors who describe the competences with greater wealth of detail, both among authors connected to the behavioral area and those connected to the accounting area. The described competences were divided into capabilities, skills, knowledge and other personal characteristics. The questions were prepared on a 10-point Likert scale, with 01 for no importance and 10 for extreme importance. The variables incorporated into the questionnaire are presented in the table, considering as
2018-08-05T09:32:32.278Z
2010-09-01T00:00:00.000
{ "year": 2010, "sha1": "31ca4b806a2f7af52b7c0a09d27cc784f24d7e12", "oa_license": "CCBY", "oa_url": "https://bbronline.com.br/index.php/bbr/article/download/331/501", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3fae47ed69abe19fc2dfcbb97c0a2ddf6c4513a4", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Psychology" ] }
5519892
pes2o/s2orc
v3-fos-license
A finiteness property of torsion points
Let k be a number field, let E/k be an elliptic curve, and let S be a finite set of places of k containing the archimedean places. Let F be an algebraic closure of k. We prove that if a point P in E(F) is nontorsion, then there are only finitely many torsion points x in E(F) which are S-integral with respect to P. We also prove an analogue of this for the multiplicative group, and formulate conjectural generalizations for abelian varieties and dynamical systems.

Introduction
Let k be a number field, with ring of integers O_k and algebraic closure k̄, and let E/Spec(O_k) be a model of an elliptic curve E/k, and let S be a finite set of places of k containing the archimedean places. In this paper we will prove:

Theorem 1.1. If α ∈ E(k̄) is nontorsion (that is, has canonical height ĥ(α) > 0), then there are only finitely many torsion points ξ ∈ E(k̄)_tors which are S-integral with respect to α.

By S-integrality we mean that the Zariski closures of ξ and α in the model E/Spec(O_k) do not meet outside fibres above S. Since any two models are isomorphic outside a finite set of places, the finiteness property is independent of the set S and the model E. We will also prove an analogue of Theorem 1.1 for the multiplicative group (Theorem 2.1 below). Theorems 1.1 and 2.1 are analogues, for non-compact varieties where it is most natural to look at integral points, of the Manin-Mumford conjecture (first proved by Raynaud [Ra83]).

The ingredients of the proof of Theorem 1.1 are a strong form of equidistribution for torsion points at all places v, properties of local height functions, and David/Hirata-Kohno's theorem on linear forms in elliptic logarithms. In outline, the proof is as follows. By base change, one reduces to the case where α ∈ E(k). Given a place v of k, let k̄_v be the algebraic closure of the completion k_v, and let λ_v : E(k̄_v) → ℝ be an appropriately normalized Néron-Tate canonical local height. On the one hand, elementary properties of heights show that for any torsion point ξ_n, one has

(1)  ĥ(α) = (1/[k(ξ_n):k]) · Σ_v Σ_{σ: k(ξ_n)/k ↪ k̄_v} λ_v(α − σ(ξ_n)).

On the other hand, given an infinite sequence of distinct torsion points ξ_n that are S-integral with respect to α, the integrality hypothesis makes the terms with v ∉ S vanish, so the outer sum in (1) can be restricted to v ∈ S; equidistribution of the conjugates σ(ξ_n) then shows that each remaining inner average tends to 0 as n → ∞, allowing the limit and the (finite) sum to be interchanged. This gives ĥ(α) = 0, contradicting the assumption that α is nontorsion.

Examples show that the conclusion of Theorem 1.1 is false if α is a torsion point, and that it can fail if {ξ_n} is merely a sequence of small points (that is, a sequence of points with ĥ(ξ_n) → 0). In particular, Theorem 1.1 cannot be strengthened to a theorem of Bogomolov type.

Theorem 1.1 is the first known case of general conjectures by the second author (as refined by J. Silverman and S. Zhang) concerning abelian varieties and dynamical systems. Assume as before that k is a number field, and let S be a finite set of places of k containing the archimedean places. Let O_{k,S} be the ring of S-integers of k.

Conjecture 1.2. (Ih) Let A/k be an abelian variety, and let A_S/Spec(O_{k,S}) be a model of A. Let D be an effective divisor on A, defined over k, at least one of whose irreducible components is not the translate of an abelian subvariety by a torsion point, and let D̄ be its Zariski closure in A_S. Then the set A_{D,S}(Z̄)_tors, consisting of all torsion points of A(k̄) whose closure in A_S is disjoint from D̄, is not Zariski dense in A.

Conjecture 1.3.
(Ih) Let R(x) ∈ k(x) be a rational function of degree at least 2, and consider the dynamical system associated to the rational map R* : P¹ → P¹. Let α ∈ P¹(k̄) be non-preperiodic for R*. Then there are only finitely many preperiodic points ξ ∈ P¹(k̄) which are S-integral with respect to α, i.e., whose Zariski closures in P¹/Spec(O_{k,S}) do not meet the Zariski closure of α.

Theorem 1.1, in addition to being the one-dimensional case of Conjecture 1.2, is equivalent to Conjecture 1.3 for Lattès maps. That is, if E/k is an elliptic curve, let R ∈ k(x) be the degree 4 map on the x-coordinate corresponding to the doubling map on E, so that x ∘ [2] = R ∘ x, i.e., the following diagram commutes:

  E  --[2]-->  E
  |x           |x
  v            v
  P¹ ---R--->  P¹

The motivation for Conjecture 1.2 is the following analogy between diophantine theorems over k and k̄, and over O_k and Z̄ (the ring of all algebraic integers). Let A/k be an abelian variety, and let X be a non-torsion subvariety of A (that is, X is not the translate of an abelian subvariety by a torsion point). Recall that the Mordell-Lang Conjecture (proved by Faltings) says that A(k) ∩ X is not Zariski dense in X, while the Manin-Mumford Conjecture (first proved by Raynaud) says that A(k̄)_tors ∩ X is not Zariski dense in X. Likewise, Lang's conjecture (also proved by Faltings) says that if D is an effective ample divisor on A, then the set A_D(O_k) of O_k-integral points of A not meeting supp(D) is finite. Note that A is compact, whereas A_D = A \ supp(D) is noncompact. Conjecture 1.3 is motivated by Conjecture 1.2 and the familiar analogy between torsion points of abelian varieties and preperiodic points of rational maps.

The paper is divided into two sections. In the first, we prove Conjecture 1.3 for the dynamical system R(x) = x². In the second, we prove Conjecture 1.2 for elliptic curves.

Throughout the paper, we will use the following notation. For each place v of k, let k_v be the completion of k at v and let |x|_v be the normalized absolute value which coincides with the modulus of additive Haar measure on k_v. If v is archimedean and k_v ≅ ℝ, then |x|_v = |x| is the usual absolute value (and if k_v ≅ ℂ, then |x|_v = |x|² is its square). If v is nonarchimedean and lies over the rational prime p, then |p|_v = p^(−[k_v:Q_p]). For 0 ≠ α ∈ k, the product formula reads

  ∏_v |α|_v = 1.

If k̄_v is an algebraic closure of k_v, there is a unique extension of |x|_v to k̄_v, also denoted |x|_v. Given a finite extension L/k, for each place w of L we have the normalized absolute value |x|_w on L_w; if we embed L_w in k̄_v, then |x|_w = |x|_v^([L_w:k_v]) for each x ∈ L_w. Write log(x) for the natural logarithm of x. Given β ∈ L and a place v of k, as σ ranges over all embeddings of L into k̄_v fixing k we have

(3)  ∏_σ |σ(β)|_v = ∏_{w|v} |β|_w.

The absolute Weil height of α ∈ k (also called the naive height) is defined to be

  h(α) = (1/[k:Q]) Σ_v log max(1, |α|_v).

It is well known that for α ∈ Q̄, h(α) is independent of the field k containing Q(α) used to compute it, so h extends to a function on Q̄. Furthermore h(α) ≥ 0, with h(α) = 0 if and only if α = 0 or α is a root of unity.

2.1. The finiteness theorem. Let S be a finite set of places of k containing the archimedean places. Given α, β ∈ k̄, view them as points in P¹(k̄) and let cl(α), cl(β) be their Zariski closures in P¹/Spec(O_k). By definition, β is S-integral relative to α if cl(β) does not meet cl(α) outside S. Thus, β is S-integral relative to α if and only if for each place v of k not in S, and each pair of embeddings σ : k(β) ↪ k̄_v, τ : k(α) ↪ k̄_v, we have ‖σ(β), τ(α)‖_v = 1 under the spherical metric on P¹(k̄_v). Equivalently, for all σ, τ,

  |σ(β) − τ(α)|_v = max(1, |σ(β)|_v) · max(1, |τ(α)|_v).

Theorem 2.1.
Let k be a number field, and let S be a finite set of places of k containing all the archimedean places. Fix α ∈ k with h(α) > 0; that is, α is not 0 or a root of unity. Then there are only finitely many roots of unity in k which are S-integral with respect to α. Before giving the proof, we note some examples which limit possible strengthenings of the theorem. A) The hypothesis h(α) > 0 is necessary: If α = 0, take k = Q. Then each root of unity ζ n is integral with respect to α at all finite places. If α = 1, then each root of unity of composite order is integral with respect to α at all finite places. If α = ζ N is an N th root of unity with N > 1, take k = Q(ζ N ). If ζ m is a primitive m th root of unity with (m, N) = 1 and m > 1, then ζ −1 N ζ m is a primitive mN th root of unity whose order divisible by at least two primes. This means 1 − ζ −1 N ζ m is a unit, so ζ N − ζ m is also a unit. Hence ζ m is integral with respect to α at all finite places. B) When h(α) > 0, one can ask if the theorem could be strengthened to a result of Bogomolov type: is there a number B = B(α) > 0 such that there are only finitely many points β ∈ k with h(β) < B which are S-integral with respect to α? That is, could finiteness for roots of unity be strengthened to finiteness for small points? The following example shows this is not possible. Take k = Q, α = 2, and S = {∞}. For each n, let β n be a root of the polynomial Here f n (x + 1) is Eisenstein with respect to the prime p = 2, so f n (x) is irreducible over Q. Note that each β n is a unit. By Rouché's theorem, β n has one conjugate very near 2 and the rest of its conjugates very close to the unit circle; this can be used to show that lim n→∞ h(β n ) = 0. Finally, β n − 2 is also a unit, so β n is integral with respect to 2 at all finite places. By replacing k with k(α), and S with the set of places S k(α) lying over S, we are reduced to proving the theorem when α ∈ k. Indeed, if ζ is a root of unity which is S-integral with respect to α over k, then each k-conjugate of ζ is S k(α) -integral with respect to α over k(α). Suppose α ∈ k, and that there are infinitely many distinct roots of unity {ζ n } which are S-integral with respect to α. For each n, we will evaluate the sum in two different ways. On the one hand, an application of the product formula will show that each A n = 0. On the other hand, by applying the integrality hypothesis, Baker's theorem on linear forms in logarithms, and a strong form of equidistribution for roots of unity, we will show that lim n→∞ A n = h(α) > 0. This contradiction will give the desired result. The details are as follows. First, using (3), formula (4) can be rewritten as Since α is not a root of unity, the product formula gives A n = 0. Next, take v / ∈ S. If |α| v > 1 then by the ultrametric inequality, for each σ : so that Now let n → ∞ in (6). Since S is finite, we can interchange the limit and the sum over v ∈ S, obtaining We will now show that for each v ∈ S, Inserting this in (7) gives h(α) = 0, a contradiction. Assuming v is nonarchimedean and |α| v = 1, let M(α) be as in the Lemma. Fix 0 < r < 1, and let N(r) be the number of roots of unity in k v with |ζ − α| v < r. For each ζ n and each σ : log(r) . Since r < 1 is arbitrary, the limit in (8) is 0, verifying (8) in this case. Now suppose v is archimedean. To simplify notation, view k as a subfield of C and identify k v with C. (Thus, the way k is embedded depends on the choice of v). Here |x| can be replaced by |x| v . 
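As a purely numerical aside (not part of the paper), the archimedean limit just discussed is easy to probe: since ∏_{ζ^N=1}(α − ζ) = α^N − 1, the average of log|α − ζ| over all N-th roots of unity equals (1/N)·log|α^N − 1|, which tends to log max(1, |α|) as N grows. The short Python script below checks this for the arbitrary choice α = 2; the convergence it exhibits is the equidistribution phenomenon underlying the limit (8).

```python
# Numerical illustration of the root-of-unity averaging identity: the mean of
# log|alpha - zeta| over all N-th roots of unity equals (1/N) log|alpha^N - 1|
# and converges to log max(1, |alpha|). alpha = 2 is an arbitrary example.
import cmath, math

alpha = 2.0

for N in (10, 100, 1000, 10000):
    roots = [cmath.exp(2j * cmath.pi * j / N) for j in range(N)]
    avg = sum(math.log(abs(alpha - z)) for z in roots) / N
    print(f"N={N:6d}  average={avg:.6f}  log|alpha|={math.log(abs(alpha)):.6f}")
```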
The Gal(k/k)-conjugates of roots of unity equidistribute in the unit circle. We will give a direct proof of this below, but we note that it also follows from generalizations of Bilu's theorem, for example the equidistribution theorem for polynomial dynamical systems given in Baker-Hsia ( [BHpp]). The Baker-Hsia theorem implies that if µ n is the discrete measure where δ P (x) is the Dirac measure with mass 1 at P , then the µ n converge weakly to the Haar measure µ = (1/2π)dθ on the unit circle. The first problem is solved by A. Baker's theorem on lower bounds for linear forms in logarithms (see Baker [Ba75], Theorem 3.1, p.22). We are assuming that |α| v = 1, and α is not a root of unity. Fix a branch of log with log(z) = log(|z|) + iθ, −π < θ ≤ π, and write log(α) = iθ 0 . For another branch, log(1) = 2πi. The following is a special case of Baker's theorem. (In his statement of the theorem, Baker uses an exponential height having bounded ratio with H(β) = e h(β) .) where h(β) = log(max(|a|, |N|)) is the absolute height of β. The second problem is settled by a strong form of equidistribution for roots of unity, proved in §2.2 below. It says that for any 0 < γ < 1, the conjugates of the ζ n are asymptotically equidistributed in arcs of length [k(ζ n ) : k] −γ . Note that weak convergence is equivalent to equidistribution in arcs of fixed length. Proposition 2.4. (Strong Equidistribution) Let k ⊂ C be a number field. Then the Gal(k/k)-conjugates of the roots of unity in k (viewed as embedded in C) are strongly equidistributed in the unit circle, in the following sense. Fix 0 < γ < 1. Then for all roots of unity ζ and all I, We remark that a strong equidistribution theorem for points of small height with respect to an arbitrary dynamical systems on P 1 has recently been proved by C. Favre and J. Rivera-Letelier ( [FRLpp], Théorème 6). Assuming Proposition 2.4, we will now complete the proof of Theorem 2.1 by showing that (8) holds for archimedean v when |α| v = 1. Let µ = (1/2π)dθ be the normalized Haar measure on the unit circle, and for each n, put Then µ n is supported on the unit circle and the µ n converge weakly to µ. We must show that The idea is to break the sum into three parts: the terms nearest α, which can be treated by Baker's theorem; the other terms in a small neighborhood of α, which can be dealt with by strong equidistribution; and the rest, which can be handled by weak convergence. In the course of writing this paper, the authors learned of several results related to Theorem 2.1, some of which imply it in special cases. A. Bang's theorem [B1886] (1886) says that if α = ±1 is a nonzero rational number, then for all sufficiently large integers n there is a prime p such that the order of α modulo p is exactly n. This can be rephrased as saying that for all sufficiently large n, there exists a primitive n-th root of unity ζ n and a nonzero prime ideal p of Z[ζ n ] such that α ≡ ζ n (mod p). Since all primitive n-th roots are conjugate over Q, this implies Theorem 2.1 in the case α ∈ Q. A. Schinzel [Sc74] gave an effective generalization of Bang's theorem to arbitrary number fields; Schinzel's theorem implies Theorem 2.1 for number fields k which are linearly disjoint from the maximal cyclotomic field Q ab , and α ∈ k. J. Silverman [Si95] has shown that if α ∈ Q is an algebraic unit which is not a root of unity, there are only finitely many m for which Φ m (α) is a unit, where Φ m (x) is the m-th cyclotomic polynomial. 
In fact, if d = [Q(α) : Q] he shows there is an absolute, effectively computable constant C such that the number of such m's is at most In the case when α is a unit, this yields Theorem 2.1 in the same situations as Schinzel's theorem. G. Everest and T. Ward is the logarithm of the Mahler measure of F (x). When k = Q, and α = α 1 is an algebraic integer, the product formula tells us that v of Q |∆ n (F ))| v = 1, so for all large n there must be some nonarchimedean v and some α i such that |α n i − 1| v = 1, and this in turn means there is some n-th root of unity ζ with |α i − ζ| v < 1. However, this is not strong enough to give Theorem 2.1 because (a) ζ might not be primitive, and (b) the primitive n-th roots of unity might not all be conjugate to one another over Q(α). 2.2. Strong equidistribution for roots of unity. We will now prove Proposition 2.4, the strong equidistribution theorem for roots of unity. At least when k = Q, the result is well known to analytic number theorists, but we do not know a reference in the literature. The proof rests on the following lemma, for which we thank Carl Pomerance. Let ϕ(N) denote Euler's function and let d(N) = m|N 1 be the divisor function. We write λ(m) for the number of primes dividing m, and use θ(x) to denote a quantity satisfying −x ≤ θ(x) ≤ x. Lemma 2.5. (Pomerance) Fix an integer Q > 1 and an integer b coprime to Q. Then for each integer N ≥ 1 divisible by Q and each interval (c, d] ⊂ R, Remark 16. The main content of the lemma is that the error depends only on N, and not on Q or (c, d]. Proof. Let p 1 , . . . , p r be the primes dividing N but not Q. (If there are no such primes, take p 1 · · · p r = 1 in the argument below). Carrying out inclusion/exclusion relative to the primes p 1 , . . . , p r we have Proof. of Proposition 2.4. Let ζ N denote a primitive N th root of unity. There are only finitely many subfields of k, so there are only finitely subfields of the form k N = k ∩ Q(ζ N ) for some N. For each N there is a minimal Q for which k N = k Q , and then Q(ζ Q ) ⊂ Q(ζ N ) so Q|N. We will call Q = Q N the cyclotomic conductor of ζ N relative to k, and write Recall that for any δ > 0, if N is sufficiently large then d(N) ≤ N δ and ϕ(N) ≥ N 1−δ (see Hardy and Wright [HW71], Theorem 315, p.260, and Theorem 327, p.267). Take δ such adjoining or removing endpoints of I will not affect the form of the estimate, so (10) applies to all intervals. 3.1. The finiteness theorem. Let k be a number field, and let E/k be an elliptic curve. We can assume E is defined by a Weierstrass equation with coefficients in O k . More precisely, E is the hypersurface in P 2 /Spec(k) defined by the homogenization of (19). Let ∆ be its discriminant. Given a nonarchimedean place v of k and points α, β ∈ E(k), we will say that β is integral with respect to α at v if the Zariski closures cl(β) and cl(α) do not meet in the model E v /Spec(O v ) defined by the homogenization of (19). Equivalently, if z, w v is the restriction of the spherical metric on P 2 (k v ) to E(k v ) (see [Ru89], §1.1), then for each pair of embeddings σ, τ : k/k ֒→ k v , If S is a set of places of k containing all the archimedean places, we say β is S-integral with respect to α if β is integral with respect to α at each v / ∈ S. Write h(α) for the canonical height on E(k), defined by where h P 1 (resp. h P 2 ) is the naive height on P 1 (k) (resp. P 2 (k)), and [m] is multiplication by m on E(k There is also a decomposition of h(α) as a sum of local terms. 
For each place v of k, let λ v (P ) be the local Néron-Tate height function on E(k v ). For compatibility with our absolute values we normalize λ v (P ) so that λ v (P ) = [k v : Q p ] · λ v,Sil (P ), where λ v,Sil (P ) is the local Néron-Tate height defined in Silverman ([Si86], p.365). For each 0 = α ∈ E(k) (see [Si86], Theorem 18.2, p.365). Note that only finitely many terms in the sum are nonzero. If L/k is a finite extension, for each place w of L there is a normalized local Néron-Tate height λ w (P ) on E(L w ). If we fix an isomorphism L w ∼ = k v , then for all P ∈ E(k v ), It follows that if β ∈ E(L), then for each place v of k, as σ runs over all embeddings of L into k v fixing k, We will use the following explicit formulas. Furthermore, if µ v (z) is the additive Haar measure on E(k v ) which gives E(k v ) ∼ = C/Λ total mass 1, then 6 be the second Bernoulli polynomial, and put Furthermore, if µ v is the Haar measure dx/ord v (q) giving the loop R/(Z · ord v (q)) total mass 1, then If v is nonarchimedean and E has good reduction at v, let z, w v be the spherical metric on E(k v ) induced by a projective embedding E ֒→ P 2 corresponding to a minimal Weierstrass model for E at v. Then for each P ∈ E v (k v ) Proof. This is a summary of results in ( [Si99], §VI); see in particular Theorem 1.1, p. 455; Theorem 3.2, p.466; Theorem 3.3, p.468; and Theorem 4.1, p.470. We now come to Ih's conjecture for elliptic curves. The following is a restatement of Theorem 1.1 in the Introduction. Theorem 3.2. Let E/k be an elliptic curve, and let S be a finite set of places of k, containing all the archimedean places. Let α ∈ E(k) be a nontorsion point, i.e., a point with h(α) > 0. Then there are only finitely many ξ ∈ E(k) tors which are S-integral with respect to α. Again there are limitations to possible strengthenings of the theorem: A) As noted by Silverman, it is necessary that α be nontorsion. If α = 0 and S is the set of archimedean places, then by Cassels' generalization of the Lutz-Nagell theorem (Proposition 3.5 below), each torsion point whose order is divisible by at least two primes is S-integral with respect to α. Similarly, if α is a torsion point of order N > 1, let S contain all places of bad reduction for E. Then for each q coprime to N, all q-torsion points are S-integral with respect to α. B) When h(α) > 0, Zhang has pointed out that Theorem 3.2 cannot be strengthened to a result of Bogomolov type. A result of E. Ullmo ([U95], Theorem 2.4) shows that for each ε > 0, there are infinitely many points β ∈ E(k) with h(β) < ε which are integral with respect to α. Proof. The argument is similar to the proof of Theorem 2.1, but requires more machinery. We begin with some reductions. First, after replacing k by k(α), and S by the set S k(α) of places lying over S, we can assume that α ∈ k. Second, after replacing k by a finite extension K/k, and replacing S with the set S K of places of K lying above places in S, we can assume that E has semi-stable reduction. Thus we can assume without loss of generality that for nonarchimedean v, either E has good reduction, or E is k v -isomorphic to a Tate curve. Third, after enlarging S if necessary, we can assume that S contains all v for which |∆| v = 1. In particular, we can assume that the model of E defined by (19) has good reduction for all v / ∈ S. We claim that if ξ n ∈ E(k) tors is any torsion point, then To see this, let L be the Galois closure of k(ξ n )/k. 
By (20) and (21), for each conjugate σ(ξ n ), Averaging over all embeddings σ : L ֒→ k, fixing an embedding k ֒→ k v for each place v of K, using (22), and noting that there are only finitely many nonzero terms in each sum, we Since each conjugate σ(ξ n ) occurs [L : k(ξ n )] times in the final inner sum, this is equivalent to (24). Suppose there were an infinite sequence of torsion points {ξ n } which were S-integral with respect to α. If v / ∈ S, our initial reductions assure that E has good reduction at v. By Proposition 3.1.C and the integrality hypothesis, λ v (α − σ(ξ n )) = 0 for each n and σ. It follows that In the following two subsections, we will show that for each v ∈ S, This will complete the proof of Theorem 3.2 for then, combining (25) and (26) and letting n → ∞ in (25), we would have h(α) = 0, contradicting the assumption that α is nontorsion. 3.1.1. The Archimedean Case: Let v be an archimedean place of k. To simplify notation we view k as embedded in C and fix an isomorphism of k v with C. Thus, the way k is embedded depends on the choice of v. To prove (26) we will need two facts: David/Hirata-Kohno's theorem on linear forms in elliptic logarithms, and a strong form of equidistribution for torsion points. Proposition 3.3. (David/Hirata-Kohno) Let E/k be an elliptic curve defined over a number field k ⊂ C. Fix an isomorphism θ : C/Λ ∼ = E(C) for an appropriate lattice Λ ⊂ C. Let ω 1 , ω 2 be generators for Λ. Fix a non-torsion point α ∈ E(k) and let a ∈ C be such that θ(a mod Λ) = α. By Ullmo's theorem ([U98]) , the Galois conjugates of the ξ n are equidistributed in E(C). As we will see, they are in fact strongly equidistributed, in a sense analogous to that in Proposition 2.4. Let Λ ⊂ C be a lattice such that E(C) ∼ = C/Λ. Let r 0 = r 0 (S, Λ) > 0 be the largest number such that S(a, r) injects into C/Λ ∼ = E(C) under the natural projection for all a ∈ C and all 0 ≤ r < r 0 . Write S E (a, r) for the image of S(a, r) in E(C). Proposition 3.4. (Strong Equidistribution) Let k ⊂ C be a number field, and let E/k be an elliptic curve. Then the Gal(k/k)-conjugates of the torsion points in E(k) are strongly equidistributed in E(C) in the following sense: Let µ be the additive Haar measure on E(C) with total mass 1. Fix γ with 0 < γ < 1/2, and fix a bounded, convex, centrally symmetric set S with 0 in its interior. Then for each r such that S(a, r) injects into E(C), and for all ξ ∈ E(k) tors , where the implied constant depends only on S, E, and γ. The proof will be given in §3.2 below. We can now complete the proof of (26) in the archimedean case. The argument is similar to the one in the proof of Theorem 2.1. By Ullmo's theorem ( [U98]), or by Proposition 3.4 when S has the shape of a period parallelogram (so E can be tiled with sets S E (a, r)), one knows that as n → ∞ the discrete measures Choose a lattice Λ ⊂ C such that E(C) ∼ = C/Λ, and let F be the area of a fundamental domain for Λ. After scaling Λ, if necessary, we can assume that F = 1. After this normalization, µ coincides with Lebesgue measure. Let θ : C/Λ ∼ = E(C) be an isomorphism as in the David/Hirata-Kohno theorem, and let a ∈ C be a point with θ(a mod Λ) = α. Using (45) and (46) below, one sees that [k(ξ n ) : k] ≥ N 1/2 n for all sufficently large n. Thus for all sufficiently large n. Since A ℓ (n) is the difference of two sets to which Proposition 3.4 applies, we find as above that for sufficiently large n, N(ξ n , A ℓ (n))/[k(ξ n ) : k] ≤ 2µ(A ℓ (n)) . 3.1.2. 
The Nonarchimedean Case: In the nonarchimedean case, the proof of (26) depends on a well-known result of Cassels on the denominators of torsion points (see [Si86], Theorem 3.4, p.177). Write O v for the ring of integers of k v . Proposition 3.5. (Cassels) Let k v be a local field of characteristic 0 and residue characteristic p > 0, and let E/k v be an elliptic curve defined by a Weierstrass equation y 2 + a 1 xy + a 3 y = x 3 + a 2 x 2 + a 4 x + a 6 whose coefficients belong to ord v (D) = ord v (p) p n − p n−1 . Since the Weierstrass equation for E need not be minimal, we can replace k v by an arbitrary finite extension L w /k v , and if e w/v is the ramification index of L w /k v , then for P ∈ E(L w ) tors and a, b, D ∈ L w , (32) becomes This yields the result for all P ∈ E(k v ) tors . Corollary 3.6. Let E/k v be an elliptic curve defined over a nonarchimedean local field. Then for each nontorsion point α ∈ E(k v ): (A) There is a number M such that for all ξ ∈ E(k v ) tors , (B) If E has good reduction, then for each ε > 0, there are only finitely many ξ ∈ E(k v ) tors with λ v (α − ξ) > ε. If E is a Tate curve, then for each ε > 0, there are only finitely many ξ ∈ E(k v ) tors with λ v (α − ξ) > ε + 1 12 (− log(|∆(E)| v )). Proof. After a finite base extension, we can assume that E either has good reduction or is a Tate curve. Since (B) implies (A), it suffices to prove (B). Fix ε > 0. First suppose E has good reduction. Then λ v (x − y) = − log( x, y v ), where x, y v is the spherical distance on the minimal Weierstrass model for Nv is the order of the residue field of O v . By the the ultrametric inequality for the spherical distance ([Ru89], §1.1), By the definition of the spherical distance, if x, y are the coordinate functions in the minimal Weierstrass model, − log( ξ, 0 v ) = min(ord v (x(ξ)), ord v (y(ξ))) · log(Nv) . We can now prove (26) when E has good reduction at v. Fix ε > 0. Let M be the upper bound in Corollary 3.6.A, and let N be the number of points ξ ∈ E(k v ) tors with λ v (α − ξ) > ε given by Corollary 3.6.B. For all sufficiently large n, To prove (26) when E is a Tate curve at v, we will need the following equidistribution theorem of Chambert-Loir ( [CLpp], Corollaire 5.5). Fix a Tate isomorphism E(k v ) ∼ = k v /q Z , put L = Z · ord v (q) ⊂ R, and define a "reduction map" r : E(k) → R/L by setting r(P ) = ord v (a) (mod L) if P ∈ E(k v ) corresponds to a ∈ k × v . For each global point P ∈ E(k), define a measure µ P,v on R/L by and let µ v be the Haar measure on R/L with total mass 1. Proposition 3.7. (Chambert-Loir) For each sequence of points {P n } in E(k) with h(P n ) → 0, the sequence of measures {µ Pn,v } converges weakly to µ v . We can now prove (26) when E is a Tate curve. Let {ξ n } be a sequence of torsion points which are S-integral with respect to α. For sufficiently large n the right side is at most 3ε. Hence This completes the proof of Theorem 3.2. Several results in the literature use methods similar to ours, though none of them yields Theorem 3.2: J. Cheon and S. Hahn [CH99] proved an elliptic curve analogue of Schinzel's theorem [Sc74]. Likewise, Everest and B. Ní Flathúin [EF96] evaluate 'elliptic Mahler measures' in terms of limits involving division polynomials, obtaining results similar to (15). They use David/Hirata-Kohno's theorem on elliptic logarithms in place of Baker's theorem, much as we do. More recently, L. Szpiro and T. 
Tucker [STpp] proved that local canonical heights for a dynamical system can be evaluated by taking limits over 'division polynomials' for the dynamical system. (These polynomials have periodic points as their roots). Their work uses Roth's theorem rather than Baker's or David/Hirata-Kohno's theorem. It would be interesting to see if this could be brought to bear on Conjecture 1.3. 3.2. Strong equidistribution for torsion points on elliptic curves. We will now prove Proposition 3.4, the strong equidistribution theorem for torsion points on elliptic curves which was used in the proof of Theorem 3.2. The proof breaks into two cases, depending on whether E has complex multiplication or not. First suppose E does not have complex multiplication. As usual, the action of Gal(k/k) on E(k) tors gives a homomorphism By Serre's theorem ( [Se72], Théorème 3), the image of Gal(k/k) in p GL 2 (Z p ) is open. Thus there is a number Q such that Im(η) contains the subgroup Let G Q ⊂ Gal(k/k) be the pre-image of this subgroup. Let ξ ∈ E(k) tors have order N, and put Q N = gcd(Q, N). For suitable right coset representatives σ 1 , . . . , σ T of G Q in Gal(k/k), the Galois orbit Gal(k/k) · ξ decomposes as a disjoint union of G Q -orbits: Since G Q is normal in Gal(k/k), the orbits G Q · σ i (ξ) = σ i (G Q · ξ) all have the same size. Thus [k(ξ) : k] = T · #(G Q · ξ). By considering the action of G Q on the p-parts of ξ, one sees that Indeed, let ξ p be the p-component of ξ in E[N] ∼ = p|N (Z/p ordp(N ) Z) 2 . Identify ξ p with an element of (Z/p ord p (N ) Z) 2 , and note that it is a generator for that group. If p|Q N , the image of G Q in GL 2 (Z/p ordp(N ) Z) is I + p ordp(Q N ) M 2 (Z/p ordp(N ) Z), and On the other hand, if p | Q N , the image of G Q in GL 2 (Z/p ordp(N ) Z) is the full group, so G Q · ξ p = (Z/p ord p (N ) Z) 2 \p · (Z/p ord p (N ) Z) 2 . Write Λ N = 1 N Λ, fix σ i , and let x ∈ Λ N correspond to σ i (ξ). Since E[N] ∼ = Λ N /Λ, the considerations above show there is a one-to-one correspondence between elements of G Q · σ i (ξ), and cosets y + Λ for y ∈ Λ N such that y − x ∈ Q N Λ N and y + Λ has exact order N in Λ N /Λ. Equivalently, y − x ∈ Q N Λ N and y / ∈ pΛ N for each prime p dividing N but not Q. Let p 1 , . . . , p R be the primes dividing N but not Q; if there are no such primes, take p 1 · · · p R = 1. Since Q N and p 1 , · · · , p R are pairwise coprime, there is an x 0 ∈ Λ N such that x 0 ≡ x (mod Q N Λ N ) and x 0 ≡ 0 (mod p 1 · · · p R Λ N ). Then y − x ∈ Q N Λ N if and only if y ∈ x 0 + Q N Λ N , and y ∈ p i Λ N if and only if y ∈ x 0 + p i Λ N . Note that if D|p 1 · · · p R then Q N Λ N ∩ DΛ N = Q N DΛ N . Take a ∈ C and 0 < r ≤ r 0 . Using the fact that S(a, r) injects into C/Λ and applying inclusion-exclusion, we obtain where λ(D) is number of primes dividing D. Let F be a fundamental domain for Λ; we can assume F is bounded and contains 0. Let C be such that F ⊂ S(0, C). Note that since S is convex, if z 1 ∈ S(a 1 , r 1 ) and z 2 ∈ S(a 2 , r 2 ), then z 1 + z 2 ∈ S(a 1 + a 2 , r 1 + r 2 ). Put F = area(F ), S = area(S); then area(tF ) = t 2 F and area(S(a, r)) = r 2 S. This completes the proof of Proposition 3.4 when E does not have complex multiplication. Now suppose E has complex multiplication. Let K be the CM field, and let O ⊂ O K be the order corresponding to E. After enlarging k if necessary, we can assume that K ⊂ k. Let Λ ⊂ C be a lattice such that E ∼ = C/Λ. Without loss of generality, we can assume that Λ ⊂ K. Fix an analytic isomorphism ϑ : C/Λ ∼ = E(C). 
By the theory of complex multiplication (see [Sh71], [L73], or [Si99], Chapter II), E(k) tors is rational over k ab , the maximal abelian extension of k. Let k × A be the idele ring of k, and for s ∈ k × A let [s, k] be the Artin map acting on k ab . Given σ ∈ Gal(k/k), take s ∈ k × A with σ| k ab = [s, k], and put w = N k/K (s) ∈ K × A . There is an action of K × A on lattices, defined semi-locally, which associates to w and Λ a new lattice w −1 Λ. This action extends to a map w −1 : K/Λ → K/w −1 Λ. There is also a homomorphism ψ : k × A → K × , the 'grössencharacter' of E, which has the property that ψ(s)N k/K (s) −1 Λ = Λ. Put κ = ψ(s) ∈ K × . With this notation, there is a commutative diagram: in which the vertical arrows on the left are multiplication by w −1 and κ respectively, and those on the right are the Galois action (see [Sh71], Proposition 7.40, p.211, or [L73], Theorem 8, p.137). Note that the same analytic isomorphism ϑ appears in the top and bottom rows. Thus, if ξ ∈ E(k) tors corresponds to x ∈ K/Λ, and σ| k ab = [s, k], then This gives an explicit description of the Galois action on torsion points in terms of adelic "multiplication". The action of K × A in the diagram is as follows. Let L ⊂ K be a lattice. For each rational prime p of Q, write L p = L ⊗ Z Z p and K p = K ⊗ Q Q p ; if w ∈ K × A , let w p be its p-component. There is a unique lattice M ⊂ K such that M p = w −1 p L p for each p ([L73], Theorem 8, p.97), and w −1 L is defined to be M. Likewise, if x ∈ K/L, lift it to an element of K ⊂ K A and write x p ∈ K p for its p-component; there is a y ∈ K such that w −1 p x p (mod w −1 L p ) = y (mod M p ) for each p, and w −1 (x (mod L)) is defined to be y (mod M). The where p runs over the primes of K lying over p, and O K,p is the completion of O K at p. The kernel U of the grössencharacter ψ : A is open. Thus there is an integer Q ≥ 1 such that for each p|Q, the subgroup 1 + QO K,p ⊂ O × K,p is contained in W p and for each p | Q, O × K,p ⊂ W p . If w ∈ W , then w −1 Λ = Λ, so w p ∈ O × p . Hence c|Q. and let U Q be its preimage in k × A under the norm map. Put G Q = {σ ∈ Gal(k/k) : σ|k ab = [s, k] for some s ∈ U Q } . Then G Q is open and normal in Gal(k/k). Let ξ correspond to x + Λ ∈ K/Λ. Write Λ(x) for the O-lattice Ox + Λ; since ξ has order N, [Λ(x) : Λ] ≥ N. More generally, for any integer m, put Λ(mx) = O · mx + Λ = mOx + Λ. Note that If p|Q, then G Q acts on ξ p through the subgroup 1 + p ord p (Q) O p ⊂ O × p . Noting that ord p (Q N ) = min(ord p (Q), ord p (N)) and that p ordp(Q) x ∈ Λ p if ord p (Q) ≥ ord p (N), we have If L is any O-lattice, and F (L) is the area of a fundamental domain for C/L, then by Minkowski's theorem there is a point 0 = ℓ ∈ L with |ℓ| ≤ (4/π) 1/2 F (L) 1/2 . Here, L is a proper O ′ -lattice for some order O ′ with conductor c ′ |c. There are only finitely many such orders O ′ , and for each O ′ there are only finitely many homothety classes of proper O ′ -lattices, so there are only a finitely many homothety classes of O-lattices. Hence there is a constant C 1 , independent of L, such that L has a fundamental domain F (L) contained in the ball B(0, C 1 · F (L) 1/2 ). In turn, there is a constant C, independent of L, such that F (L) ⊂ S(0, C · F (L) 1/2 ). This fact is the crux of the proof. Again, if L is an O-lattice, then for each ideal ̟ of O K coprime to c, there is a unique lattice ̟L defined by the property that (̟L) q = (̟O K L) q for all primes q|N̟, and (̟L) q = L q for all primes q | N̟. 
This lattice has index [L : ̟L] = N̟. Now consider a set S(a, r), where a ∈ C and r ≤ r 0 . For each σ i (ξ), we will compute #((G Q · σ i (ξ)) ∩ S E (a, r)). Fix σ i , and replace ξ by σ i (ξ) in the discussion above. Let x ∈ K/Λ correspond to σ i (ξ), and let p 1 , . . . , p R be the primes of O K dividing N but not Q, for which ord p (Λ(x)) = ord p (Λ). (Note that the p j are independent of σ i , since K ⊂ k and for p | Q, σ i acts on ξ through O × p .) Then there is a one-to-one correspondence between elements of G Q · σ i (ξ), and cosets y + Λ for y ∈ K such that y ∈ x + Λ(Q N x) and y / ∈ p j Λ(x) for j = 1, . . . , R. Since Λ(Q N x) ⊂ Λ(x), such y necessarily belong to Λ(x). Before closing, we note for purposes of reference that the arguments above provide lower bounds for the degree [k(ξ) : k] in terms of the order N of ξ. When E does not have complex multiplication, then since T is fixed, Q N ≤ Q, and p (1 − 1/p 2 ) converges to a nonzero limit, (39) shows there is a constant C 1 depending only on E such that (45) [k(ξ) : k] ≥ C 1 N 2 . When E has complex multiplication, then since T and Q are fixed, (44) shows that there is a constant C 2 depending only on E such that (46) [k(ξ) : k] ≥ C 2 N/(log log(N)) 2 .
2014-10-01T00:00:00.000Z
2005-09-21T00:00:00.000
{ "year": 2005, "sha1": "4806ba9ace239b25500d0e203b06e042981875cf", "oa_license": null, "oa_url": "http://msp.org/ant/2008/2-2/ant-v2-n2-p06-s.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "2d966601bb6f9ed0a3bb8e518b2d75f6010f8c37", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
252510690
pes2o/s2orc
v3-fos-license
An Automatic Drift-Measurement-Data-Processing Method with Digital Ionosondes : Drift detection is one of the important detection modes in a digital ionosonde system. In this paper, a new data processing method is presented for enabling automatic, high-quality drift measurement, which is helpful for long-term ionospheric observation and has been successfully applied to the Chinese Academy of Sciences Digital Ionosonde (CAS-DIS). Based on the Doppler interferometry principle, this method can be divided into four successive constraint steps: extracting the stable echo data; restricting the ionospheric detection region; extracting the reliable reflection cluster, including Doppler filtering and coarse clustering analysis; and calculating the drift velocity. Ordinary wave (O-wave) data extraction, complementary code pulse compression and other data preprocessing techniques are used to improve the signal-to-noise ratio (SNR) of the echo data. To eliminate multiple echoes, the ionospheric region is determined by combining the optimal height range and the detection frequencies obtained from the ionogram. Successively, Doppler filtering and coarse clustering analysis extract reliable reflection clusters. Finally, a weighting factor is introduced, and weighted least-squares (WLS) is used to fit the drift velocity. The entire data processing chain can be run automatically, without constantly changing parameter settings as external conditions change. This is the first time coarse clustering analysis has been used to extract the paracentral reflection cluster in order to eliminate scattered reflection points and outer reflection clusters, which further reduces the impact of external conditions on parameter settings and improves the capability of automatic drift measurement. Compared with the previous method used by the Digisonde Portable Sounder 4D (DPS4D), the new method achieves comparable drift detection precision and results even with fewer reflection points. In 2021–2022, several experiments on F-region drift detection were carried out in Hainan, China. The results indicate that the drift velocities fitted by the new method show diurnal variation and change more gently; that the trends of the drift velocities fitted by the new method and the previous method are similar; and that the new method can be widely applied to digital ionosondes.

Introduction
Ionospheric information is vital to human activities. Reliable long-distance shortwave communication depends heavily on ionospheric conditions [1], because shortwave signals mainly bounce off the ionosphere before reaching the receiving equipment [2]. Ionospheric plasma drift is an important research topic within the study of ionospheric variations. Typically, changes in ionospheric plasma drift that trigger ionospheric scintillation affect global positioning systems [3]. The digital ionosonde is a conventional ground-based platform used to study the ionosphere, and its drift detection mode can measure the plasma drift velocity [4][5][6][7][8]. The drift velocity mentioned in this paper refers to the plasma drift velocity. Early drift measurement methods estimated the drift velocity by using similar-fading or correlation analyses that compared the amplitude fluctuations of the echoes received by the antenna array elements [9,10]. Reinisch et al. first presented Doppler interferometry for studying drift velocity in 1998 [11].
The VFIT, which is essentially a deformation of Doppler interferometry, scales the phase measurements into a radial velocity [12]. Prominently, Doppler interferometry has the advantages of improving precision and simplifying calculations. It has also been applied to the American Digisonde family of ionospheric sounders [13], Canadian Advanced Digital Ionosonde (CADI) [14], Dynasonde [12] and so on. Compared with incoherent scatter radar observations, the reliability of ionosonde drift measurements at high, middle and low latitudes has been verified [15][16][17]. Comparison of the drift measurements by Dynasonde and EISCAT is consistent in the polar region [18]. A fair consistency about F-region drift measurements between ionosonde and incoherent scatter radar at magnetic equator has been reported [19]. However, using the raw echo data directly for drift measurements can reduce the accuracy of results. That is to say, clutter and multiple echoes mixed into the echoes can blur the ionospheric electron density profile. Some researchers used pulse compression, digital filtering, frequency domain correlation superposition, interference elimination algorithms, another WLS and other techniques to eliminate or suppress interference, so as to select reliable reflection points [4,[20][21][22][23]. The above shows that a reliable data processing method is conducive to improving the accuracy of drift measurements, but the serious problem that data processing results are extremely sensitive to external conditions still exists. Considering that unmanned ionosonde stations demand real-time and long-term ionospheric monitoring, while guaranteeing accuracy, automating drift measurement is also essential. Automatic drift measurement here does not mean that the program runs automatically, but rather that the parameter settings that have been set are little affected by changes in external conditions. In other words, when the external conditions change significantly, the previous methods must have their parameter settings modified in time to ensure the quality of the measurement. The method that DPS4D employs can be used as a typical example. The "ARTIST" function of this method enables autonomous selection of the detection frequencies, but not of the ionospheric region to be measured; this method has limited rejection of strong interference at high Doppler frequencies during Doppler filtering. The new method is an optimization of the previous method. The proposed drift-measurement-data-processing method affects automatic drift measurements and enhances the quality of drift detection by successively implementing four steps: extracting the stable echo data, restricting the ionospheric detection region, extracting the reliable reflection cluster and calculating the drift velocity. Thus, parameter settings are not constantly changed as external conditions change. It is worth mentioning that clustering analysis is used for facilitating automatic drift measurement for the first time. The drift measurement results obtained by this constrained method agree well with those obtained by the method DPS4D uses, even with fewer reflection points. Experimental results show that the CAS-DIS employing this method can automatically control the drift measurement process and then get quality measurement results. This constrained data processing method is also applicable to other modern digital ionosondes. Compared with existing drift-measurement-data-processing methods, the main innovations of this paper are as follows: 1. 
The method realizes automatic and high-quality drift measurement and is dedicated to ionospheric drift detection, which can provide real-time and long-term information on the ionospheric plasma drift state. 2. The method can be widely applied to digital ionosondes. 3. The ionospheric detection region is constrained by detection frequencies and virtual heights provided by the ionogram, which effectively filters out echoes from other regions. 4. After Doppler filtering and coarse clustering analysis, the reliability of the reflection points is enhanced, which reduces the impact of external conditions on parameter settings and further ensures the accuracy of the drift velocity.

Principle of the Drift Measurement
For drift detection, the ionosonde first transmits pulse-modulated high-frequency radio waves vertically from the ground. A radio wave is reflected where its frequency equals the local plasma frequency, and the echo is later received by the antenna array. Since the actual ionosphere is not smoothly horizontally stratified, both vertical and oblique echoes are received, which makes it possible to use Doppler interferometry to calculate drift velocities. The information in one echo record includes amplitudes, phases, virtual heights, echo directions and Doppler frequency shifts. The premise of Doppler interferometry is that all the plasma in the designated ionospheric region moves uniformly over a short time. Array-receiving interferometry can determine the directions of the echoes, and Doppler frequency shifts determine the radial drift velocities. Consequently, the drift velocity can be obtained from the correspondence between radial velocities and echo directions.

In terms of determining the echo direction, the dimensions of the receiving antenna array are much smaller than the distance between the array and the detected ionospheric region; hence, each echo can be regarded as a plane wave. When the i-th echo reaches the a-th receiving antenna, its phase φ_a is:

(1)  φ_a = k_i · τ_a,

where k_i is the wave vector of the i-th echo and τ_a is the vector from the first antenna to the a-th receiving antenna. Therefore, the phase difference φ_ab of the i-th echo between the a-th receiving antenna and the b-th receiving antenna is:

(2)  φ_ab = k_i · (τ_b − τ_a).

According to Equation (2), k_i can be determined. Then, the unit vector n_i can be obtained from k_i = (2πf/c)·n_i, where f is the drift detection frequency and c is the speed of light. Finally, the echo direction (azimuth angle φ_i and zenith angle θ_i) of the i-th reflection point is calculated from n_i = (cos φ_i sin θ_i, sin φ_i sin θ_i, cos θ_i).

In the process of determining the radial drift velocity, the relative motion between the plasma and the ionosonde necessarily relates each echo to a Doppler frequency shift. The phase difference Δφ_i between the i-th echo and its transmitted signal is:

(3)  Δφ_i = (2πf/c)·ΔL_i,

where the double (two-way) distance ΔL_i can be represented by

(4)  ΔL_i = 2(n_i · V)·Δt,

where Δt is the time difference and V is the three-dimensional drift velocity. Accordingly, the Doppler frequency shift f_di = (1/2π)·(Δφ_i/Δt) is further derived as

(5)  f_di = (2f/c)·(n_i · V).

After solving the simultaneous equations, the three-dimensional drift velocity in the geomagnetic coordinate system satisfies

(6)  f_di = (2f/c)·(V_N cos φ_i sin θ_i + V_E sin φ_i sin θ_i + V_Z cos θ_i),

where V_N, V_E and V_Z are the drift velocity components.
V_N is the north-south drift velocity, which is positive when the drift is northward; V_E is the east-west drift velocity, which is positive when the drift is eastward; V_Z is the vertical drift velocity, which is positive when the drift is downward. The zenith angle is 0° in the vertically downward drift direction and is positive away from this direction; the azimuth angle is 0° in the due-north drift direction and is positive when it turns counterclockwise. Equation (6) shows intuitively that the extraction quality of the reflection points and the regression quality of the drift velocities directly affect the quality of the drift detection. The drift-measurement-data-processing method aims at obtaining the echo information of the first reflection points in the designated ionospheric region and at obtaining a reliable drift velocity.

Method of the Drift Measurement Data Processing
This drift-measurement-data-processing method can be divided into four steps: extracting the stable echo data, restricting the ionospheric detection region, extracting the reliable reflection cluster and calculating the drift velocity.

Extracting the Stable Echo Data
When extracting the stable echo data, we first extract the high-resolution echo data with the specified polarization and then obtain the echo direction information. Extracting O-wave data, complementary code pulse compression and determining echo directions are the main parts of this step.

Extracting O-Wave Data
Considering the effect of the geomagnetic field, a radio wave splits into two characteristic waves with different refractive indexes and polarization states during ionospheric propagation. One is the O-wave, corresponding to a left-handed elliptically polarized wave, and the other is the extraordinary wave (X-wave), corresponding to a right-handed elliptically polarized wave. When the wave vector is parallel to the geomagnetic field, the O-wave is a left-handed circularly polarized wave and the X-wave is a right-handed circularly polarized wave; when the wave vector is perpendicular to the geomagnetic field, the O-wave is a linearly polarized north-south wave whose refractive index is unaffected by the geomagnetic field, and the X-wave is a linearly polarized east-west wave. Considering that the refractive index of the O-wave is relatively stable and that the horizontal component of the geomagnetic field grows larger closer to low latitudes, extracting O-wave data is the better choice for drift measurement. A method for separating O-wave and X-wave data in the CAS-DIS system is realized by digitally synthesizing circularly polarized waves in the receiving circuit [24]. O-wave data are extracted by combining the quadrature data of the two orthogonal antennas with a 90° relative phase shift (Equation (7)), where I_0° and Q_0°, respectively, represent the real and imaginary data of the east-west receiving antenna, and I_−90° and Q_−90°, respectively, represent the real and imaginary data of the north-south receiving antenna. The echo data mentioned in the following all refer to O-wave data.

Complementary Code Pulse Compression
The CAS-DIS system uses complementary codes to phase-modulate the carrier signal at the transmitter and performs complementary code pulse compression on the baseband signal at the receiver to improve the SNR of the echo data. The selected set of 16-bit complementary codes contains an A-code and a B-code. To reduce the calculation load, pulse compression can be implemented in the frequency domain.
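Since the explicit form of Equation (7) did not survive extraction here, the following is only a plausible sketch of the circular-polarization synthesis it describes, under the assumption that the north-south channel is shifted by +90° relative to the east-west channel; the actual CAS-DIS sign convention may differ.

```python
import numpy as np

def extract_o_wave(i_ew, q_ew, i_ns, q_ns):
    """Synthesize O-wave samples from two orthogonal linear antennas.

    i_ew, q_ew: I/Q samples of the east-west antenna (I_0, Q_0 in the text).
    i_ns, q_ns: I/Q samples of the north-south antenna (I_-90, Q_-90).
    The 90-degree shift is applied by multiplying the NS channel by 1j;
    the sign of this shift is an assumption, not taken from the paper.
    """
    ew = np.asarray(i_ew) + 1j * np.asarray(q_ew)
    ns = np.asarray(i_ns) + 1j * np.asarray(q_ns)
    return ew + 1j * ns  # left-hand circular combination (assumed convention)
```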
First, the Fast Fourier Transform (FFT) is performed separately on the A-code and on the echo data modulated by the A-code; the two results are then multiplied, and the product is Inverse Fast Fourier Transformed (IFFT). Analogously, the FFT is performed on the B-code and on the echo data modulated by the B-code, the two results are multiplied, and the IFFT is applied to the product. Lastly, the two compressed pulses are added to complete the complementary code pulse compression. Figure 1 shows the autocorrelation results of the complementary codes and the sum of the two autocorrelation functions, which reveals that the autocorrelation functions of the A-code and B-code are opposite at the pseudo-correlation peaks. By superposing the two compressed pulses, the correlation peak doubles with the same phase, and the pseudo-correlation peaks cancel, so the SNR is naturally enhanced. After that, the echo data pass through a median filter to reduce echo fluctuation and suppress noise; the optimal window size of the median filter is 3-5 continuous echo data lengths.

Determining Echo Directions
The principle for determining echo directions was explained in Section 2. Equation (2) indicates that at least three array elements are needed to solve for k_i. CAS-DIS adopts a triangular array of four elements to receive echo data, and k_i is fitted by least-squares (LS) over the phase differences of all antenna pairs, where M is the number of elements. Then, the echo directions of all reflection points can be obtained. The skymap on the left of Figure 2 plots the distribution of reflection points; the distance between a reflection point and the center of the skymap is proportional to the zenith angle.

Restricting the Ionospheric Detection Region
Since drift measurement is intended to measure the drift characteristics of a specified ionospheric region, it is necessary to distinguish the echoes from different ionospheric regions and to eliminate secondary echoes, multiple echoes and other clutter interference as far as possible. The ionogram (Figure 3), which charts the relationship between detection frequency and virtual height, shows that the strongest echo may lie in the E region or the F region; may be a primary, secondary or multiple reflection echo; or may simply be a false strongest echo. For this reason, selecting the detection frequency and restricting the height range are warranted. Height and height range in this article refer to virtual height and virtual height range. Note that the ionogram is the result of vertical-incidence sounding, and its broadband characteristics are not appropriate for drift detection: drift measurement must meet the needs of large pulse accumulation and real-time detection, which limits the number of detection frequency points. Since the ionosphere changes little in the short term (less than about 5 min), CAS-DIS runs vertical detection and drift detection alternately, with vertical detection always about 3 min ahead of drift detection. In this way, the automatic scaling results of the ionogram, which are used for restricting the ionospheric detection region, can provide reliable detection frequency and detection height parameters for drift measurement.
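To make the frequency-domain complementary-code pulse compression described above concrete, here is a minimal sketch. The 16-bit pair is generated with the standard Golay recursion and is only assumed to resemble the codes CAS-DIS uses; conjugating the code spectrum turns the multiplication into a matched-filter correlation.

```python
import numpy as np

def golay_pair(n_bits):
    # Standard Golay recursion: (a, b) -> (a|b, a|-b) doubles the length and
    # preserves the complementary property. Starts from the trivial pair.
    a, b = np.array([1.0]), np.array([1.0])
    while len(a) < n_bits:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def compress(echo, code):
    # Frequency-domain matched filtering: FFT of the echo, multiplied by the
    # conjugated (zero-padded) code spectrum, then IFFT = circular correlation.
    n = len(echo)
    return np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code, n)))

a_code, b_code = golay_pair(16)

# Simulate one target at delay 40 in a 128-sample receive window.
n, delay = 128, 40
echo_a = np.roll(np.concatenate([a_code, np.zeros(n - 16)]), delay)
echo_b = np.roll(np.concatenate([b_code, np.zeros(n - 16)]), delay)

pc = compress(echo_a, a_code) + compress(echo_b, b_code)
print(int(np.argmax(np.abs(pc))), round(float(np.abs(pc).max()), 1))  # -> 40 32.0
# The sidelobes of the two individual compressions cancel exactly, leaving a
# single peak of height 2 * 16 = 32 at the target delay.
```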
The actual detection frequency is determined according to the transmitted detection frequency parameters, and the actual detection height range is slightly wider than the range defined by the transmitted detection height parameters. During drift measurement, the height corresponding to the strongest echo within the detection height range is selected as the height of the primary reflection echo. The strongest echo is selected by accumulating multiple coherent pulses and adding the data of the four receiving channels at the same detection frequency. Given that the raw echo data are sampled at equal spacing and that the ionosphere fluctuates during detection, a narrow height range including the height of the primary reflection echo is selected as the optimal height range for drift measurement. Figure 3 illustrates detection frequency selection and optimal height range restriction. Figure 3a shows an invalid drift measurement without reasonably selected detection frequencies. Figure 3b shows an invalid drift measurement with reasonable detection frequencies but without a reasonable optimal height range: its false optimal height range is 457-662 km, and thus the skymap reflects a mixed distribution of primary and secondary reflection points. Figure 3c describes a drift measurement with reasonable detection frequencies and a reasonable optimal height range of 325-342 km, so the skymap reflects the distribution of primary reflection points. Figure 3d is an ionogram containing the black trace curve of the automatic scaling obtained 3 min before the drift measurement of Figure 3a; its wrong scaling result led to wrong parameters being passed to drift detection. Figure 3e is an ionogram whose detection and scaling results were obtained 3 min before the drift measurements of Figure 3b,c. Several teams worldwide have long worked on automatic ionogram scaling [25][26][27][28].

Extracting the Reliable Reflection Cluster
On the premise of having extracted reliable echo data, Doppler filtering and coarse clustering analysis provide good conditions for the further implementation of automatic and stable drift measurement. Plainly, the reflection point information obtained after multiple constraints (filtering) is the cornerstone of calculating the drift velocity.

Doppler Filtering
The higher the Doppler frequency resolution, the better Doppler filtering performs. CAS-DIS obtains high Doppler frequency resolution by adopting large pulse accumulation, which refers to a large number of single-frequency pulse signals. When the data sampling rate is set to 20 Hz and the pulse accumulation is set to 500, the Doppler frequency resolution can reach 0.04 Hz. Since the FFT possesses a filtering characteristic, Doppler filtering is implemented by FFT for the echo data of each height within the optimal height range. Directly performing the FFT on echo data can result in energy leakage and Doppler sidelobe interference, which can further reduce the frequency resolution. Time-domain windowing can improve spectrum quality. To select a suitable complex windowing function for spectrum optimization, the commonly used rectangular, Hanning and Hamming windows were first multiplied with the echo data before the FFT was carried out; the Hamming window was found to perform best. In addition, discretely sampling the FFT spectrum can cause fence (picket-fence) effects.
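Returning to the optimal-height-range selection described at the start of this passage, a minimal sketch follows, assuming a 2-D array of coherently accumulated echo power indexed by height gate and frequency; the ±3-gate window is an illustrative choice, not a value from the paper.

```python
import numpy as np

def optimal_height_range(power, heights, h_min, h_max, half_width=3):
    """Pick the primary-echo height and a narrow range around it.

    power:        2-D accumulated echo power, shape (n_heights, n_freqs).
    heights:      1-D virtual heights (km), one per height gate.
    h_min, h_max: detection height limits taken from the scaled ionogram.
    half_width:   gates kept on each side of the strongest echo (illustrative).
    """
    heights = np.asarray(heights, dtype=float)
    in_range = (heights >= h_min) & (heights <= h_max)
    profile = np.asarray(power).sum(axis=1).astype(float)  # combine frequencies
    profile[~in_range] = -np.inf                # exclude gates outside the region
    peak = int(np.argmax(profile))              # gate of the strongest echo
    lo = max(peak - half_width, 0)
    hi = min(peak + half_width, len(heights) - 1)
    return heights[peak], (heights[lo], heights[hi])
```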
Extracting the Reliable Reflection Cluster

With reliable echo data extracted, Doppler filtering and coarse clustering analysis provide favorable conditions for the further implementation of automatic and stable drift measurement. Plainly, the reflection point information retained after these multiple constraints (filters) is the cornerstone of calculating the drift velocity.

Doppler Filtering

The higher the Doppler frequency resolution, the better Doppler filtering performs. CAS-DIS obtains high Doppler frequency resolution by adopting large pulse accumulation, that is, a large number of single-frequency pulse signals. When the data sampling rate is set to 20 Hz and the pulse accumulation to 500, the Doppler frequency resolution reaches 0.04 Hz. Because the FFT possesses a filtering characteristic, Doppler filtering is implemented by FFT for the echo data of each height within the optimal height range. However, directly performing an FFT on the echo data causes energy leakage and Doppler sidelobe interference, which in turn reduce the frequency resolution. Time-domain windowing can improve the spectrum quality. To select a suitable windowing function for spectrum optimization, the commonly used rectangular, Hanning and Hamming windows were first multiplied with the echo data before the FFT; the Hamming window was found to give the best result. In addition, the discrete sampling of the FFT spectrum causes the fence effect: the sharper the spectral peak, the larger the spectral amplitude error, which is unfavorable for subsequent Doppler filtering. Let N denote the number of accumulated pulses. When the window spectrum is a rectangle of width 4π/N, the resolved spectral lines of the FFT can accurately restore the spectral amplitudes, and the corresponding windowing function is the Sinc function. Since the Sinc function is infinitely long in the time domain, its main part is intercepted as the Sinc window; that is, w(n) = sin(2πn/N)/(2πn/N), −N/2 ≤ n ≤ N/2.

The central Doppler line obtained after the FFT is shifted to zero frequency, so that zero frequency corresponds directly to zero Doppler shift. Figure 4 shows the Doppler spectra: the amplitude spectra are on the left and the phase spectra on the right, with the spectrum information of the four receiving channels arranged from top to bottom. The echoes of the short pulse train signal concentrate and form a peak near zero Doppler frequency shift. Accordingly, after the frequency-domain data of the four receiving channels are added together, an amplitude threshold and a Doppler frequency shift threshold are applied in turn to carry out Doppler filtering, which extracts the Doppler frequency shift information of the ionospheric primary-echo reflection points. The amplitude threshold removes white noise on account of its uniform distribution in the frequency domain, while the Doppler shift threshold filters radio frequency interference, which forms a narrow peak in the frequency domain. Figure 5 successively shows the distribution of reflection points after ionospheric detection region restriction, amplitude threshold filtering and Doppler frequency shift threshold filtering, which intuitively reflects how the quality of the reflection points is gradually improved.
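A condensed sketch of this filtering chain follows, for a single height bin. The threshold values, noise level and tone frequency are illustrative assumptions; the operational thresholds are set empirically and are not reproduced here.

```python
import numpy as np

def doppler_filter(iq, fs, amp_thresh_db=10.0, fd_thresh=2.0):
    """Windowed FFT Doppler filtering for one height bin (sketch).

    iq : (n_channels, n_pulses) complex echo samples at a fixed height
    fs : pulse (sampling) rate in Hz; resolution is fs / n_pulses
    Both thresholds are illustrative values, not operational settings.
    """
    n = iq.shape[1]
    win = np.hamming(n)                       # the Hamming window gave the best spectra
    spec = np.fft.fftshift(np.fft.fft(iq * win, axis=1), axes=1)
    fd = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))   # centred Doppler axis
    mag = np.abs(spec).sum(axis=0)            # add the four receiving channels
    # Amplitude threshold: reject lines close to the white-noise floor.
    floor = np.median(mag)
    keep = mag > floor * 10 ** (amp_thresh_db / 20.0)
    # Doppler shift threshold: reject implausibly large shifts (narrow RFI peaks).
    keep &= np.abs(fd) < fd_thresh
    return fd[keep], mag[keep]

rng = np.random.default_rng(2)
n, fs = 500, 20.0                             # 500 pulses at 20 Hz -> 0.04 Hz bins
t = np.arange(n) / fs
iq = 0.1 * (rng.standard_normal((4, n)) + 1j * rng.standard_normal((4, n)))
iq += np.exp(2j * np.pi * 0.2 * t)            # echo line at +0.2 Hz Doppler shift
print(doppler_filter(iq, fs))                 # retains the lines near +0.2 Hz
```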
Coarse Clustering Analysis

Reflection points should be concentrated around the center of the skymap when the ionospheric drift above the receiving antenna is to be detected. In accordance with the distribution characteristics of the Doppler spectrum, the dominant concentrated reflection points (the reflection cluster) should form a bipolar pattern comprising positive and negative Doppler frequency shifts, with near-zero Doppler frequency shifts close to the center of the skymap; the ideal situation is to retain only this bipolar reflection cluster (Figure 6a). In practice, reflection clusters containing other types of reflection points also occur (Figure 6b,c). A concentrated monopolar reflection cluster is mainly generated under disturbed geomagnetic conditions [29]. Scattered, irregularly distributed and time-invariant reflection points are caused by strong mutation interference generated by electromagnetic interference. Many F-region drift measurements, such as those in Figure 6, show bipolar reflection clusters near the vertical whose maximum zenith angle is around 20°. The quality of reflection points can be further improved by limiting the maximum zenith angle, at the expense of decreasing the calculation precision of the horizontal velocity [30]. Experiments show that the quality of reflection points improves somewhat when the maximum zenith angle is limited to 20° (Figure 6d-f), but this is not fully effective for cases such as Figure 6b,c. Furthermore, the fitted drift velocity is more accurate and robust when the reflection points of a skymap are distributed over a wider area.

To improve this situation, ten clustering methods were used to cluster six representative skymaps. The clustering results in Figure 7 reveal that the MeanShift, DBSCAN and OPTICS algorithms cluster best. These three density-based methods are suitable for clustering effective reflection points with a high-density (concentrated) distribution. The MeanShift algorithm finds and adjusts centroids according to the sample density in the dataset; each centroid location is the mean of the samples within its neighborhood [31]. The DBSCAN and OPTICS algorithms look for high-density sample areas in the dataset and expand the surrounding areas into clusters. They differ in that the maximum distance between two samples is a fixed value in DBSCAN but a value range in OPTICS [32][33][34]. The three clustering methods perform clustering analysis on the basis of different criteria, and each has its own advantages and disadvantages. MeanShift and DBSCAN separate multiple clusters more easily, but some scattered samples may be lost when there is only a single cluster; conversely, OPTICS more easily retains all samples of a single cluster while still being able to roughly separate multiple clusters. Combining the three methods for coarse clustering analysis therefore makes extracting the bipolar reflection cluster easier and more effective. The data processing proceeds as follows; a code sketch of steps (2) and (3) is given at the end of this subsection:

(1) Set up the X × Y dataset. The number of samples X is the number of reflection points after Doppler filtering. The number of features Y equals two, referring to the projections of the reflection points in the north-south and east-west directions.

(2) The MeanShift, DBSCAN and OPTICS algorithms are each used for clustering analysis on the dataset. For a single clustering method, clustering is performed first; the cluster labels of all samples are then extracted, and the centroid of each cluster is computed after removing the noise. The cluster whose centroid lies nearest to the center of the skymap is the bipolar reflection cluster extracted by that method.

(3) Vote out the reliable bipolar reflection cluster. The reflection clusters extracted by the three clustering methods are first spliced into a new dataset; then, by voting, the reflection points receiving the most votes form the required bipolar reflection cluster.

Figure 6g-i show the distribution of reflection points after coarse clustering analysis. Coarse clustering extracts a higher-quality bipolar reflection cluster and ensures the accuracy of the drift velocity, albeit while reducing the calculation precision of the horizontal velocity. Generally speaking, coarse clustering analysis avoids uncontrollable measurement errors caused by ionospheric fluctuation and system error, which provides strong backing for realizing automatic, high-quality drift measurement. In addition, the numbers of reflection points and the drift velocity components for the cases in Figure 6 are listed in Tables 1-3.
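The following scikit-learn sketch implements steps (2) and (3), with the vote counted per reflection point (equivalent to splicing the extracted clusters and voting). The bandwidth, eps and min_samples values are illustrative assumptions rather than the operational parameters.

```python
import numpy as np
from sklearn.cluster import MeanShift, DBSCAN, OPTICS

def select_bipolar_cluster(points):
    """Vote-based coarse clustering (sketch of steps (2) and (3)).

    points : (X, 2) reflection-point projections (north-south, east-west).
    The bandwidth/eps/min_samples values are illustrative assumptions.
    """
    estimators = [MeanShift(bandwidth=30.0),
                  DBSCAN(eps=25.0, min_samples=5),
                  OPTICS(max_eps=50.0, min_samples=5)]
    votes = np.zeros(len(points), dtype=int)
    for est in estimators:
        labels = est.fit_predict(points)
        best_label, best_dist = None, np.inf
        for lab in set(labels) - {-1}:        # -1 marks noise in DBSCAN/OPTICS
            centroid = points[labels == lab].mean(axis=0)
            dist = np.hypot(centroid[0], centroid[1])   # distance to skymap centre
            if dist < best_dist:
                best_label, best_dist = lab, dist
        if best_label is not None:
            votes[labels == best_label] += 1  # this method's vote for its cluster
    return points[votes >= 2]                 # keep points chosen by >= 2 methods

rng = np.random.default_rng(3)
cluster = rng.normal(0.0, 15.0, (200, 2))     # concentrated cluster near the centre
clutter = rng.uniform(-150.0, 150.0, (40, 2)) # scattered interference points
print(len(select_bipolar_cluster(np.vstack([cluster, clutter]))))
```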
Calculating the Drift Velocity

Theoretically, all primary-echo reflection points satisfy Equation (6); in practice, however, not all points do. Consequently, the weighted least squares (WLS) method is adopted to fit the sub-drift velocities, where I_M is the number of reflection points within the bipolar reflection cluster ultimately used to fit the drift velocity, I is the size of one of its subsets, and V_NI, V_EI and V_ZI are the components of the sub-drift velocity V_{I−2}. The larger the Doppler frequency shift value of a point, the lower its reliability; a weighting factor is therefore introduced into the error index ε²_I. For the i-th point, the weighting factor w_i is negatively correlated with its Doppler frequency shift value and is normalized by f_dmax, the maximum Doppler frequency shift value.

When the WLS method is used to fit the drift velocity, each nested subset of reflection points in the bipolar reflection cluster is used to fit one sub-drift velocity. Starting from a subset of three reflection points (the minimum needed), the sub-drift velocity V_1 is calculated; another reflection point is then added to the previous subset to form a new subset, from which the sub-drift velocity V_2 is calculated; this step is repeated until the sub-drift velocity V_{I_M−2} has been calculated using all reflection points. The drift velocity V is the mean of the (I_M − 2) sub-drift velocities, and the standard deviation of the (I_M − 2) sub-drift velocities evaluates the uncertainty of the drift velocity measurement. If the plasma moves uniformly during the sampling period and the filtering of the reflection points is effective, all sub-drift velocities should be similar; the sub-drift velocity distribution is then narrow and the uncertainty small. Otherwise, the sub-drift velocity distribution becomes wider and the uncertainty larger.
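A minimal sketch of the progressive WLS fit follows. It assumes the standard Doppler interferometry relation f_di = (2 f0/c) k̂_i · V and an illustrative weight w_i = 1 − |f_di|/f_dmax (with a small floor); the paper's exact forms of Equation (6) and w_i are not reproduced here.

```python
import numpy as np

def fit_drift_velocity(khat, fd, f0):
    """Progressive WLS drift-velocity fit (sketch).

    khat : (I_M, 3) unit vectors toward the reflection points
    fd   : (I_M,) Doppler frequency shifts in Hz
    f0   : detection frequency in Hz
    Assumes fd_i = (2*f0/c) khat_i . V; the weight below is illustrative.
    """
    c = 3.0e8
    A = (2.0 * f0 / c) * khat
    w = np.clip(1.0 - np.abs(fd) / np.abs(fd).max(), 0.05, None)  # floor avoids zero rows
    sub_velocities = []
    for i in range(3, len(fd) + 1):           # grow the subset one point at a time
        sw = np.sqrt(w[:i])                   # WLS via sqrt-weight row scaling
        v, *_ = np.linalg.lstsq(A[:i] * sw[:, None], fd[:i] * sw, rcond=None)
        sub_velocities.append(v)
    sub_velocities = np.array(sub_velocities) # the (I_M - 2) sub-drift velocities
    return sub_velocities.mean(axis=0), sub_velocities.std(axis=0)

rng = np.random.default_rng(4)
v_true = np.array([40.0, -25.0, 10.0])        # m/s, (N, E, Z) for illustration
khat = rng.normal(size=(30, 3))
khat /= np.linalg.norm(khat, axis=1, keepdims=True)
fd = (2.0 * 5.0e6 / 3.0e8) * khat @ v_true + 0.01 * rng.standard_normal(30)
print(fit_drift_velocity(khat, fd, 5.0e6))    # mean near v_true, small std
```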
Experimental Results

Four experiments were conducted at Hainan Station (geographic coordinates 19.4°N, 109.0°E) to verify the automatic drift-measurement-data-processing method; nearly ten thousand drift measurements were processed in 2021-2022. To better compare experimental results, a DPS4D ionosonde is positioned 60 m southeast of the CAS-DIS ionosonde, and both devices have the same antenna layout. The two ionosonde systems perform drift detection every 5 min, with the detections interlaced so that mutual interference is reduced. The reliability and wide applicability of the new method can be verified by using the two methods to process the drift echo data received by the two systems.

The first experiment adopted only the drift echo data of the DPS4D system measured from 11 June 2021 to 23 June 2021. The automatic drift-measurement-data-processing method proposed in this paper (the new method) was used to process the drift echo data, and the obtained drift velocities were compared with those calculated by the previous method encapsulated inside the DPS4D system. The comparison shows high similarity, and the newly obtained drift velocities vary more gently and have smaller standard deviations. This indicates that unreliable reflection points are eliminated or reduced, rapid fluctuations of the drift velocities are appropriately damped and the quality of the drift measurements is improved. Figure 8 shows a comparison of the drift velocities from 11 June 2021 to 12 June 2021. This experiment proves the feasibility of the proposed drift data processing method and shows that the new method is suitable for the DPS4D system.

In the second and third experiments, the CAS-DIS system received its own drift echo data and ran the automatic drift-measurement-data-processing method (the new method), while the DPS4D system received its own drift echo data and ran its own data processing method (the previous method). Because the detection objects are drift velocities in the same ionospheric region, similar drift velocity variation trends are to be expected. A point-by-point comparison of the drift measurements of the two systems is not reliable, but the similar variation trends demonstrate that the new method is also applicable to the CAS-DIS system, and the obtained drift measurements can be used to calibrate and analyze each other.

The second experiment (Figure 9) was conducted from 27 March 2022 to 28 March 2022. For the CAS-DIS system, every vertical detection is 3 min earlier than the corresponding drift detection. Its working parameters in drift detection mode are as follows: pulse train signals of four frequencies are transmitted, a single-frequency pulse signal is transmitted 500 times, and the pulse repetition interval (PRI) is 50 ms. Figure 10a shows the transmitting sequence of this drift detection. As the receiving sampling sequence follows the transmitting sequence, the data sampling period is 50 ms, the sampling rate is 20 Hz and the Doppler resolution is 0.04 Hz. Since the previous method of the DPS4D system does not dynamically restrict the optimal height range based on the adjacent ionogram when restricting the ionospheric detection region, from 13:02:02 to 15:37:02 on 27 March 2022, reflection points from both primary and secondary echoes were used for fitting the drift velocity; these drift velocities are distorted because the reflection points from the secondary echoes are distributed in the east-west direction. From 00:00:00 to 01:56:52 on 28 March 2022, the previous method of the DPS4D system extracted an unreliable reflection cluster, so the reflection points included both the reliable reflection cluster and time-invariant reflection clusters. The standard deviations of these fitted drift velocities are larger than those of the drift velocities obtained by the new method running on the CAS-DIS system. The components of the drift velocities show highly similar variation trends in the drift measurement results of the CAS-DIS and DPS4D systems, which verifies the validity of the proposed new method.

The third experiment was conducted from 28 March 2022 to 10 April 2022. In drift detection, it differed from the second experiment in that the CAS-DIS system adopted frequency multiplexing (FM), transmitting pulse train signals of five frequencies and transmitting a single-frequency pulse signal 400 times; the PRI remained 50 ms. Figure 10b shows the transmitting sequence of this drift detection, which is essentially another drift detection mode. Hence, the single-frequency repetition interval was 250 ms, the per-frequency sampling rate was 4 Hz and the Doppler resolution was improved to 0.01 Hz. By improving the Doppler resolution, FM avoids strong interference at high Doppler frequencies.
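The Doppler resolution figures quoted above follow directly from the coherent integration time, as this two-line check shows:

```python
# Doppler resolution = 1 / (pulse count x single-frequency repetition interval)
print(1.0 / (500 * 0.050))   # second experiment: 500 pulses at 50 ms  -> 0.04 Hz
print(1.0 / (400 * 0.250))   # third experiment:  400 pulses at 250 ms -> 0.01 Hz
```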
Since drift velocities are affected by solar activity, they follow a diurnal variation law. In the last experiment, to better observe this diurnal variation, we statistically analyzed the drift velocities obtained by CAS-DIS in the third experiment. Figure 12 shows the statistics of the drift velocities from 7 April 2022 to 10 April 2022, which reveal a clear diurnal variation. What should be clear from these experiments is that the parameter settings do not need to change constantly with external conditions when the new method processes the drift echo data. Note that some outliers in the drift velocity statistics were caused by a small number of reflection points or by wrong automatic calibration results of the ionogram.

Discussion

In this paper, an automatic drift-measurement-data-processing method was presented that can effectively improve the drift measurement capability of a digital ionosonde. Based on Doppler interferometry, this method filters clutter step by step and optimizes the drift measurement quality through multiple constraints. Consequently, this constrained data processing method provides strong support for realizing automatic drift measurement and obtaining reliable drift measurement results. Its data processing cycle can be divided into four stages: extracting the stable echo data, restricting the ionospheric detection region, extracting the reliable reflection cluster and calculating the drift velocity.

In the first stage, extracting the stable echo data is essentially data preprocessing. We extract reliable O-wave data as the data processing objects (echo data), improve the SNR by complementary-code pulse compression, and reduce echo fluctuation and suppress interference by median filtering and other preprocessing techniques. The extracted echo data are essential for obtaining accurate and stable drift measurement results. Ultimately, the information carried by the preprocessed echo data comprises amplitudes, phases, virtual heights, echo directions and Doppler frequency shifts.

In the second stage, we select the optimal height range to restrict the ionospheric detection region. It is especially necessary to use a recently acquired ionogram to determine the detection frequencies and the optimal height range. Compared with correct drift measurement results, if the ionosphere has distinct E and F layers, a false optimal height range can produce clearly different reflection clusters, seriously distorting the drift measurement results; and if the detection frequency is higher than the ionospheric plasma frequency, no effective reflection points can be extracted from the false echo information, likewise seriously distorting the results (Figure 3).

In the third stage, Doppler filtering and coarse clustering analysis aim to obtain a reliable reflection cluster. It is essential to carry out Doppler filtering according to the spectral characteristics of the echoes: Doppler filtering not only removes white noise but also effectively suppresses strong random external interference. For example, in the last three experiments, the CAS-DIS system could receive part of the transmitted signals of the DPS4D system; after Doppler filtering, the strong reflection points belonging to DPS4D were filtered out, and the distribution of these strong reflection points is similar to the distribution of reflection points obtained by the DPS4D system when it conducts drift detection in near real time.
In this paper, coarse clustering analysis was introduced into drift measurement data processing for the first time. It extracts the reflection cluster with a high-density distribution near the center of the skymap, further enhancing the capability of automatic drift measurement. As can be seen in Figure 6, reflection points with small Doppler frequency shift values far from the center of the skymap can bias the estimated drift velocity downward and weaken the drift measurement capability. This constrained stage resolves the problem of such points, which occur during certain periods of the daytime. The last stage uses the WLS method to calculate the drift velocity; the introduction of a weighting factor further reduces the influence of reflection points with high Doppler frequency shifts on the drift velocity. Figures 8-12 indicate that the drift velocities show diurnal variation and have similar variation trends in the CAS-DIS and DPS4D systems, which further verifies the applicability of this method to more than one digital ionosonde. However, drift measurements based on only a few reflection points remain of degraded quality, a limitation that future research could aim to overcome.

Conclusions

Providing real-time, long-term information on the ionospheric plasma drift state is highly desirable, and summarizing this constrained method is valuable for future research on the data processing of drift measurements. The automatic drift-measurement-data-processing method can complete high-quality drift measurements automatically, without constantly changing parameter settings as external conditions change, and it is suitable for the CAS-DIS system, the DPS4D system and other digital ionosonde systems. The method has been successfully applied to the CAS-DIS system and will be deployed at more observation stations to perform long-term, automatic and high-quality drift measurements in the future. We hope this method will motivate research on ionospheric plasma motion.

Conflicts of Interest: The authors declare no conflict of interest.