COMPARISON OF METHODS TO MAP SELECTED TRAFFIC MARKINGS ON FIRST CLASS ROADS IN THE CZECH REPUBLIC

The article presents the conclusions of a comprehensive analysis of a pilot data collection using four mapping methods. To validate the mapping methods and procedures, we selected three roughly ten-kilometre sections of first class roads with different geomorphological, vegetation and transportation properties. All sections were measured by aerial photogrammetry with GSD = 4 cm, by mobile laser scanning equipment linked with cameras, and by geodetic surveying methods; one section was also measured by UAV. The tested methods mapped selected features of the vertical and horizontal traffic markings on first class roads. The traffic marking measuring sets were analysed from the personnel, time, data, cost, technological and organizational perspectives. All the mapping methods were verified from the work preparation phase, through its realization in the terrain, captured data processing and detailed analysis, concluding with the advantages and disadvantages of each mapping method. One of the analysis outputs was a set of proposals to change and refine the road administrator's regulations. The mapping methods were compared against geodetic measurements. Analyses were also carried out in the context of creating 3D digital data for BIM (Building Information Modeling), in connection with Directive 2014/24/EU of the European Parliament and of the Council on public procurement, Czech Government Decree 682 of 25 September 2017 on the Concept of Implementation of the BIM Method in the Czech Republic, and Decree 958 of the Government of the Czech Republic of 2 November 2016 on the importance of BIM for construction engineering and the proposal of further steps to introduce it in the Czech Republic.

Objective

The objective of the research team, composed of experts from four Czech companies (PRIMIS Ltd., VARS a.s., GB-geodézie Ltd. and Upvision Ltd.) and consultants from the VSB-Technical University of Ostrava, Department of Geodesy and Mine Surveying, was to analyse a pilot data collection and records verification of selected traffic markings on first class roads. The study describes the mapping methods and the related personnel, time, data, financial, technological, coordination and organizational requirements. The selected mapping methods were verified from the mapping preparation phase, through terrain mapping and processing of the obtained data, to a detailed analysis. We also compared the mapping methods with one another, stating their advantages and disadvantages. The work builds on Decree 958 of the Government of the Czech Republic of 2 November 2016 on the importance of BIM for construction engineering and the proposal of further steps to introduce it in the Czech Republic.

Mapping methods

Roads are usually mapped using geodetic survey, photogrammetry, laser scanning and mobile mapping methods. The required accuracy of the mapping methods was not prescribed for the purposes of this comparison; instead, each method was expected to render results of the highest accuracy achievable while maintaining a reasonable cost/accuracy ratio.

Geodetic survey methods in road mapping: The geodetic survey mapping methods are listed below. Considering the capacities and accuracy of contemporary electronic total stations, the first five mapping methods reduce to the polar method in practice. The second method used in testing the accuracy and road mapping procedures was measurement using GNSS.
- Polar method
- Orthogonal method
- Forward intersection
- Intersection from distances
- Combined intersection
- Methods using GNSS

1.2.2 Methods of above-ground mobile mapping using photos and laser scanning: The measurement procedures based on vehicle-borne laser and photogrammetric instruments are very convenient for mapping and taking inventory of roads. However, to measure and process surveying results for mapping purposes, only such mobile laser scanning units, processing or graphic programmes, procedures and measurement processing may be used as ensure that the final accuracy complies with the expected RMSE_xyz.

1.2.3 Photogrammetric mapping methods: For the purposes of the test, the photogrammetric mapping methods were divided into manned aerial photogrammetric methods and remotely piloted aircraft systems (RPAS). RPAS was used only in the locality "Plateau".

1.2.4 Principles of mapping method verification: When mapping, the accuracy is checked on statistical principles, respecting the fact that the verification must be carried out using different measurements (or different methods, in the case of comparing photogrammetric and laser scanning methods). It is advisable that the check be performed by other workers who are not biased by the initial mapping. The mapping accuracy is evaluated by comparison with geodetic survey methods in line with the following principles:

- The maximum coordinate error u_xy is determined as twice the mean coordinate error m_xy. The maximum deviation in distance is determined as twice the mean distance error m_d.
- The maximum positional error u_p is given by the relation u_p = √2 · u_xy.
- When assessing the coordinate accuracy of a newly measured detailed survey point of planimetry, calculated as in the second point, the accuracy is considered satisfactory when the selected mean coordinate error calculated by the least squares method is smaller than the maximum coordinate error u_xy from the first point. For a set of more than 20 newly measured detailed survey points of planimetry, at least 40 % of the selected mean coordinate errors must be smaller than the basic mean coordinate error m_xy.
- The number of points to be checked is usually 5 % of the mapped detailed survey points.

Calibration of instrumentation

An important requirement for mapping procedures is the use of calibrated instruments, which is also required by the valid laws of the Czech Republic. The total station was calibrated, including the determination of the additive and multiplicative constants and the related measurement uncertainty. Angle calibration procedures covered the calibration of the horizontal and vertical circles of the total stations. The calibration of the GNSS systems was carried out using the state positional measurement standard of the Czech Republic; the trigonometric test network is owned by the Research Institute of Geodesy, Topography and Cartography, v.v.i. The aerial cameras used in the manned aircraft were calibrated by the manufacturer, Vexcel. The non-surveying cameras used in the RPAS test were calibrated in the non-surveying photo processing programmes.

Locality "Upland"

The locality "Upland" is a section of the first class road I/19 from the town of Žďár nad Sázavou, km 170, to the town of Nové Město na Moravě, km 180, with a total length of 10 km. The road goes through hilly terrain. A small section near the crossroads in the direction of Veselíčko is an access road to a complicated crossroads.
The section includes the urban zone of Radvaňovice, several crossroads and numerous road markings. In general, it is a road of medium to heavy traffic with many blind horizons. The traffic does not exceed 4000 vehicles a day. It is a locality of medium complexity in terms of the number and complexity of Selected Road Infrastructure Equipment (SRIE) and traffic, as well as in terms of the surveying methods used.

Locality "Forest"

The locality "Forest" is a part of the first class road I/19 from the municipality of Štěpánov nad Svratkou, km 201, to the village of Hodonín, km 208, with a total length of 9 km. 90 % of the road section goes through forests. A part of the road falls within the urban zone of the municipality of Štěpánov nad Svratkou. The road is not very busy, and the mapped features are few. Forested sections are difficult to survey using all the tested mapping methods. As for the number and complexity of SRIE and traffic, it is a simple locality. As for the survey methods used, it is a very complicated locality, namely for photogrammetry and mobile mapping, as the road goes through a forest. The daily traffic is about 2500 vehicles.

Locality "Plateau"

The locality "Plateau" is a section of the first class road I/50 from the town of Slavkov u Brna, km 11, to the town of Bučovice, km 20, with a total length of 9.5 km. The road goes through open country. It is a road in a rural area with numerous crossroads, bus stops, frequent road markings and extremely heavy traffic. As for the number and complexity of SRIE and traffic, it is a very complicated locality. As for the geodetic survey methods used, it is also very complicated, with daily traffic amounting to 16 000 vehicles. For photogrammetry and mobile mapping methods, it is a simple locality.

DESCRIPTION OF MAPPING PROCEDURES

This section describes the instrumentation and mapping procedures applied in the tested localities.

Technological description of the geodetic survey method

Between 1 August 2017 and 15 September 2017 we measured the features (detailed survey points) of the horizontal and vertical traffic markings in all three tested localities. We used a Trimble S6 One-man total station and a Trimble R4 GNSS receiver. The survey was executed using RTK positioning with the GNSS apparatus (in sections without trees). In sections with trees (without a quality GNSS signal), measurements were made with the total station. A geodetic point field was determined using traverses oriented at both ends. From the traverse points, detailed survey points were measured using the polar method. Sketches were not made, as the measured points were surveyed with codes for automatic drawing or 3D modelling. Each feature of the traffic marking was photographed for the road inventory. Each detailed survey point is documented by its calculation method and determination accuracy. Based on the measurements and the list of coordinates with codes defined for the selected types of horizontal and vertical traffic markings, drawings from the supplied geodetic survey TXT files were loaded into the prepared database. In its structure, the database corresponded to the required data model of Ředitelství silnic a dálnic České republiky (Directorate of Roads and Motorways of the Czech Republic). The data may be visualized using QGIS software. In the final map output, the vertical traffic markings have 37 attributes in the geodatabase, including photos of the vertical road signs.
For the horizontal traffic markings, each feature has 27 attributes, including the colour used, its reflectivity, the date of the last coating, etc. The road surface area is described using 25 attributes. The road safety features, such as crash barriers and handrails, also have 25 attributes.

Technological description of mobile laser-scanning mapping

The mobile mapping data were collected on 1 September 2017 (locality "Plateau") and on 8 September 2017 (localities "Upland" and "Forest"). The data were collected using the MOMAS (MObile MApping System) mounted on a ramp carried by a car (Škoda Yeti). The system comprises a control unit, to which a monitor and keyboard are connected to control the whole system and monitor the functionality of the individual devices. There is also a measuring unit equipped with a GNSS/INS system to determine position and orientation, two laser scanners, two digital cameras and one spherical camera. The control unit is connected to the car battery and a backup battery. An external odometer is also connected to the system. A detailed description of the MOMAS system follows:

- GNSS/INS system to determine the position and orientation of the mobile mapping system (dual-frequency GNSS receiver, GPS+GLONASS L1/L2, frequency 10 Hz)
- Inertial measurement unit (IMU), frequency 200 Hz
- 2 laser scanners (VQ-250) for 3D documentation of the area of interest, maximum scanning frequency 600 kHz (2x300 kHz), 200 lines/s (2x100 lines/s), FOV 360°, accuracy 5 mm
- 2 digital cameras for detailed photo documentation of the area of interest, resolution 5 Mpx (2452x2056), max. 8 photos/s, FOV 80°x65°
- 1 spherical camera (Ladybug 5), 6 partial cameras, resolution 6x5 Mpx (30 Mpx sphere), FOV 360°
- 1 external odometer

Based on the required accuracy of the output mobile mapping data, it was necessary to target and subsequently survey the initial points (IP). These points were measured twice in open country at one-hour intervals using GNSS RTK with a radio link (localities "Upland" and "Plateau"), with the base station placed on a densification point of the Czech State Triangulation Network and verified against at least one neighbouring point. In the forested area (locality "Forest") the initial point coordinates were determined using a combination of GNSS measurements and a traverse. The data collection itself using the mobile mapping system was preceded by planning an optimal time for the collection. Among other things, we considered suitable climatic conditions, the number of satellites above the locality and the traffic density. In each locality we selected a suitable site to begin and terminate the data collection (open country with as many visible satellites as possible), where we carried out a static observation of at least 5 min (at the beginning and end of the data collection) and the IMU calibration. In each locality, the data were obtained in two passes (one in each direction) at a car speed of 40-45 km/h. The laser frequency was set to 300 kHz. Images were taken every 4 m.

Trajectory calculation: The first step, performed in the POSPac programme, was the calculation of the trajectory along which the mobile mapping system moved. The calculation is based on differential GNSS processing. For this purpose, we used a virtual reference station of the CZEPOS network and a reference station of the TopNET network (Žďár nad Sázavou). The coordinates of the reference station are given in the ETRS89 coordinate system, and the output MOMAS data are also defined in this system.
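Since the trajectory (and hence the raw point cloud) is produced in ETRS89, while the final inventory is delivered in S-JTSK/Bpv (see the point cloud processing below), a coordinate transformation is required at some stage of the pipeline. The following is a minimal sketch of such a planimetric conversion using the open-source pyproj library and the standard EPSG codes for ETRS89 and S-JTSK (Krovak East North); it is an illustration only: the production workflow described here performs the transformation inside its own processing software, and a rigorous Bpv height conversion additionally requires a quasigeoid model.

```python
from pyproj import Transformer

# ETRS89 geographic coordinates (EPSG:4258) -> S-JTSK / Krovak East North (EPSG:5514).
# Note: EPSG:5514 easting/northing values are negative in the Czech Republic.
transformer = Transformer.from_crs("EPSG:4258", "EPSG:5514", always_xy=True)

# Example point near Zdar nad Sazavou (longitude, latitude in degrees);
# illustrative values only, not surveyed coordinates.
lon, lat = 15.9391, 49.5643
x, y = transformer.transform(lon, lat)
print(f"S-JTSK (Krovak EN): x = {x:.2f} m, y = {y:.2f} m")
```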
In the trajectory calculation itself, the data from the GNSS receiver, the inertial measurement unit and the odometer are combined, thanks to which the trajectory can be reconstructed even in places with GNSS signal dropouts, e.g. when passing through the heavily forested locality "Forest".

Laser point cloud generation: The calculated trajectory was imported into a project in the RiPROCESS software. The raw data acquired in the field were used to generate and georeference a cloud of laser points. For further processing, the cloud was converted into the standard LAS 1.2 format.

Generation of images: Based on the calculated trajectory and the exposure times of the individual spherical images and higher-resolution photos, the images were georeferenced in the POSPac and RiPROCESS programmes.

Point cloud processing: The outcome of the above processes is a set of laser point clouds in LAS 1.2 format and trajectory records. The data contain errors inevitably introduced by the systems used. If the measurement deviations exceed the permissible error, the spatial and height accuracy must be improved by fitting (smoothing) the laser data onto the initial points. Prior to smoothing, the laser point clouds had to be divided into logical portions of adequate size. This was done in the MicroStation V8i programme with the TerraScan add-on. The point cloud, divided into irregular tiles (due to uneven data density), was compiled into a project for the TerraScan and TerraMatch MDL applications. In these applications, semi-automatic processes searched for discrepancies in the laser cloud, corrections for the individual passes were determined, and the corrections were then applied. In the point cloud, the geodetically surveyed ground control points were identified automatically and partly manually, absolute corrections were determined, and the corrections were applied along the whole point cloud. This way, we obtained the correct position and height of the point model representing the geometric conditions of the ground and engineering structures in the localities of interest.

Evaluation of inventory data from the laser point cloud: The smoothed laser point clouds were transformed into the state coordinate system (S-JTSK, Bpv) and the required road inventory features were evaluated in the MicroStation V8i programme. The coordinates of the evaluated feature points were exported as TXT files divided into vertical and horizontal road markings and safety barriers. These points were imported into the PanoramaGIS programme (by GB-geodézie Ltd.), where the inventory data were filled in. The data were compiled into the final inventory geodatabase in ArcGIS.

Technological description of manned aerial photogrammetric mapping

3.3.1 Targeting of initial and check points: Prior to imaging, the ground control points (GCP) were targeted and monumented in the locality. The signal size of the ground control points and check points was sufficient and clearly interpretable at an average size of 2.5 times the GSD, i.e. circa 10 cm. Figure 2 shows the targeting of initial point No. 8000 in the locality "Upland". It is advisable to place the initial point within a wider flat surrounding (at least 20 pixels) so that the points can be correctly interpreted, pointed, identified by computational procedures, and correlated in all images where they appear. An important requirement limiting the quality of the final mapping of traffic marking detail points is the correct distribution of ground control points along the linear locality of the tested roads.
The distribution and position of the targeted initial points in the locality must be planned with respect to the extents of the final mapping, orthophotos and digital surface model, or of its traffic marking features. The position of the points is planned so that each point is comparable and measurable on at least 4 aerial photos. In the course of the targeting work, road video records were made from a car travelling at a speed of 50 km/h in order to fill in the geodatabase and road inventory attributes.

Aerial triangulation: Having developed the aerial survey photos, a locality project is compiled, followed by the definition and setting of the basic parameters to calculate the aerial triangulation (AT) and determine the parameters of exterior photo orientation. Next, all the initial points are checked manually in all the photos where they occur. Then follows the correlation process of automatic tie point search and the calculation of the exterior orientation parameters of all the photos in the locality. The final calculation of the AT was executed in the Photo-T programme.

Mapping the horizontal and vertical traffic marking features: Based on the calculated elements of exterior orientation and the requirement to evaluate the horizontal and vertical marking features with all their attributes, the traffic infrastructure features were evaluated in stereoscopic mode on digital stereoscopic stations in MicroStation V8. During the evaluation, information on the mapped infrastructure feature parameters was added gradually. It was recorded in *.xls tables, which were subsequently used to build the final *.dbf database and produce *.shp drawings.

Conversion of DGN into SHP and completion of the DBF: The geometry of all features was exported from the auxiliary *.xlsx files and from the *.dgn files in MicroStation V8 format, obtained by merging all drawings of the stereophotogrammetric evaluation in the given tested section (from node to node). The built-in "Export to Shapefile" function of the ArcGIS software was used for the export into the SHP format. For the vertical traffic markings, the geometry was exported together with the sign name and the sequence of the sign boards at the given stationing. The individual vertical traffic marking features were divided by type (prohibitory, warning, etc.) during the stereophotogrammetric evaluation, based on different cell types. The name and order of the individual sign boards along the stationing were attributed manually, after merging all drawings of the tested locality in the given section, according to the information from the video records in the MicroStation software. For the horizontal traffic markings, the geometry was exported together with the names of the individual horizontal traffic markings and the broken-line cadences. In the given tested sections, the markings were divided manually into separate layers in MicroStation based on the information from the video records. The geometry of the safety markings in the given tested section was exported and labelled in the same way; the features were then divided manually by safety marking type into separate layers in MicroStation according to the information from the video records. After the data export into the SHP format, the required data model was created for each layer using ArcGIS ModelBuilder (automatic creation of an empty attribute table adhering to the data types and names of the individual data items).
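For readers without the ArcGIS toolchain, the same export step, point geometry plus a typed attribute table written to a shapefile, can be sketched in a few lines with the open-source geopandas library. This is an illustration only: the attribute names below are invented stand-ins, not the 37-attribute data model of the Directorate of Roads and Motorways.

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical vertical traffic sign records: sign code, board order at the
# stationing, and S-JTSK-like planar coordinates (values are illustrative).
records = [
    {"sign_code": "B20a", "board_seq": 1, "x": -615432.1, "y": -1135210.4},
    {"sign_code": "P2",   "board_seq": 1, "x": -615390.7, "y": -1135185.9},
]

gdf = gpd.GeoDataFrame(
    records,
    geometry=[Point(r["x"], r["y"]) for r in records],
    crs="EPSG:5514",  # S-JTSK / Krovak East North
)

# Write the layer with its attribute table, analogous to "Export to Shapefile".
gdf.drop(columns=["x", "y"]).to_file("vertical_signs.shp")
```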
Technological description of the mapping work using remotely piloted aircraft systems

After completing the premarking prepared for the manned aircraft, with 10 premarked initial points, on 3 August 2017 the locality "Plateau" was imaged between 07:35 and 09:20 CEST using a MaVinci Sirius unmanned aircraft (registration OK-X003N) in three overlapping blocks. The aerial work permit is registered with the Civil Aviation Authority of the Czech Republic (number 0003/LPUA). The camera used was a Panasonic Lumix GX1 with a 14 mm focal length, a Live MOS (CMOS) chip and a resolution of 4592x3448 pixels. During imaging the sky was clear, the temperature was 20 °C, and the wind was 4 m/s with gusts of up to 6 m/s. Each of the three flights lasted a maximum of 25 minutes (75 min in total). A total of 1175 photos were taken with a longitudinal overlap of p = 80 % and a lateral overlap of 70 %. The mean altitude above ground was 216 m. The nominal resolution was 5 cm/px.

3.4.1 Preparation and planning of aerial imaging: The aerial survey was planned as a series of 3 flights covering the areas of interest, so that no flight exceeded the maximum flight time of the unmanned aircraft used, with a time reserve of 5 min in case of unexpected complications, and so as to comply with the VLOS limit subject to the specifications of the aerial work permit. The flight plan was prepared automatically in the MaVinci Desktop software supplied with the UAV.

Airspace restrictions: Before the aerial imaging, the area of interest was checked for potential airspace restrictions (with regard to the flight altitude, we checked the following airspace: CTR/MCTR, ATZ, LKP, LKR, LKD). Before the flights we also enquired about potential NOTAM notices for the given locality.

Imaging: Imaging was carried out early in the morning, while the weather forecast for the localities in question was checked continuously (particularly the Aladin numerical model used by the Czech Hydrometeorological Institute and the server Windguru.com). The time of imaging was decided the evening before the flight. In case of an unfavourable forecast the evening before imaging (more than 20 % probability of rain, limited visibility, wind over 8 m/s), the aerial work for the forthcoming day was cancelled.

Take-off and landing sites: The take-off and landing sites were selected during the planning stage, as the workers had a clear overview of the local topography based on field reconnaissance and knew about possible restrictions. The sites were selected especially with regard to the requirement of the pilot's permanent visual contact with the aircraft during the flight and with respect to site safety, considering obstacles on the ground.

Data checks and back-ups: After each flight, the photos were checked on the camera display for quality (exposure, sharpness, content) and quantity (the expected number of photos for the flight length). About 400 photos were taken during each flight. After the 3 consecutive flights, the photos were copied from the memory card to two hard disks and the memory card was formatted.

Preparation of images for photogrammetric processing: After the flights, the so-called matching was carried out in the MaVinci Desktop software, which attached the data acquired during the aerial survey to the photos. This permitted a visual check directly in the field of whether the whole area was photographed and whether any problematic sites occurred, namely due to imperfect overlaps. If needed, another flight was carried out after such a check in the field.
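The flight parameters above allow a quick plausibility check of the stated nominal resolution. A minimal sketch, assuming the Micro Four Thirds sensor width of the Lumix GX1 (17.3 mm, a published camera specification, not a value from the paper):

```python
# Ground sample distance (GSD) from camera geometry: GSD = pixel_pitch * H / f
sensor_width_m = 17.3e-3   # Lumix GX1 sensor width (assumed MFT specification)
image_width_px = 4592      # from the paper
focal_length_m = 14e-3     # from the paper
altitude_m = 216.0         # mean altitude above ground, from the paper

pixel_pitch_m = sensor_width_m / image_width_px           # ~3.8 um
gsd_m = pixel_pitch_m * altitude_m / focal_length_m
print(f"GSD ~ {gsd_m * 100:.1f} cm/px")                   # ~5.8 cm/px

# Exposure base implied by the stated 80 % longitudinal overlap:
footprint_along_m = gsd_m * 3448                          # image height on ground
base_m = footprint_along_m * (1 - 0.80)
print(f"Exposure spacing ~ {base_m:.0f} m along track")
```

The result, about 5.8 cm/px, is of the same order as the stated nominal 5 cm/px; the exact value depends on the terrain height relative to the mean flying height.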
Next, we filed all the photos and exported them, including their exterior orientation, into the special photogrammetric software Agisoft PhotoScan for processing. There, the whole process of automatic correlation took place (alignment, point cloud, mesh), together with the identification of the determined ground control points for the surveyed area in the S-JTSK coordinate system and the calculation, all the way to the export of the required data: orthophotos and a digital surface model in the form of a point cloud.

COMPARISON OF THE TESTED MAPPING METHODS

With regard to the imaging GSD (4 cm), the requirements for measuring the coordinates of the targeted and monumented initial points in all the tested localities corresponded to the accuracy of control points for the densification of the S-JTSK network. All the monumented and targeted points entering the photogrammetric processing and the mobile laser scanning were surveyed using GNSS methods, namely the static method with 5-minute observations at each point and a repeated measurement of each point after at least 90 minutes, to allow the GNSS satellite constellation to change. The requirements for the RMSE of the points were set at half the value of the imaging GSD, i.e. 20 mm in position and 25 mm in height.

Comparison of detailed point mapping accuracy

To compare the accuracy of the individual mapping methods in the individual localities, we selected several types of features. The first entities to compare were discrete, unambiguously pointable, interpretable and determinable points acquired by all mapping methods. In horizontal road markings, a typical comparison point is the end of a continuous traffic line, the corner of a marked lane, the corner of a road marking, or the start of a direction arrow. An analogous comparison was also executed with vertical road markings, whose position is indicated by the intersection of the sign pillar with the ground (soil or concrete). The second type of entity was the evaluation of the positional, height and spatial accuracy, or of the distances, of the horizontal road markings mapped by the individual methods. To evaluate the positional and height accuracy of the surveying methods on the tested first class road sections, the following procedures were used and discrete points of the horizontal, vertical and safety road markings were defined:

- An empty DGN file in the S-JTSK coordinate system was loaded with points exported as a text file from the primary DGN files of each mapping method (photogrammetric (PHTGM), mobile mapping (MM) and geodetic surveying (GS)). Three DGN files were made this way, containing all the points from the DGN drawings output by each mapping method. The DGN files were colour-coded (PHTGM in blue, MM in green, and GS in brown);
- We opened the PHTGM DGN file while referencing the MM and GS mapping results. The other referenced raster data were orthophotos with GSD 5 cm;
- We opened an XLSX file. Observing the DGN files over the orthophoto, we selected ordered triads of points and recorded them in the XLSX sheet, progressing from the west to the east of each locality;
- To compare the discrete points of the horizontal road markings, we selected points easily interpretable in the field and in the point clouds for all the mapping methods;
- The procedure was analogous for the vertical road markings;
- Having ordered the triads from the whole locality, we selected the points to be compared from the DGN-exported text file and put them into an XLSX table;
- The data exported into the XLSX table were evaluated as discrete elements: as differences of the geodetic surveying method minus the photogrammetric mapping method (GS-PHTGM) and as differences of the geodetic surveying method minus mobile mapping (GS-MM). We calculated the coordinate deviations and the related RMSE (a short numerical sketch of this computation is given below). Next, we calculated the differences, deviations and RMSE between the photogrammetric mapping method and mobile laser mapping (PHTGM-MM);
- The acquired data were also plotted into charts.

Table 1 gives the values of the root mean square errors (RMSE) for the comparison of the GS-PHTGM and GS-MM method pairs, where the geodetically surveyed discrete point coordinates are taken as the reference values, unburdened by internal errors. For the sake of completeness, the right section of the table gives a mutual comparison of the two contactless mapping methods, i.e. photogrammetric and mobile mapping. The comparison included over 1120 points across all three localities.

Comparison of the mapped feature lengths using different mapping methods

Besides comparing the positional and height accuracy of the individual mapped features and the final geodatabase, we also compared the completeness of the data sets acquired via the individual mapping methods. The differences in the number of features and in the lengths of the horizontal road marking lines are caused by the different lengths of the mapped sections. For the sake of an objective evaluation of the potential of the individual mapping methods, the starts and ends of the sections in the individual localities were deliberately not set uniformly; the individual teams were left to interpret the work load freely from the submitter's task. In particular, there were pronounced differences in how the individual mapping technologies approached the collection road network (the "sucking area", i.e. access roads to the main road subject to mapping), whose starts and ends were not determined exactly. The comparison of the number of evaluated features using the different mapping methods yields the following statements:

- With regard to the uncertainty of the starts and ends of the sections, the teams did not measure identical lengths when using different mapping methods;
- The results differ because the teams using different mapping methods began measuring the individual features at unclearly defined points, defined on the road either by a coordinate or by stationing;
- Having chosen identical mapped surface areas and mutually consolidated the areas from the different mapping methods to an identical area and distance, it may be stated that the road marking line lengths measured by the different mapping methods differed by up to 0.05 % in length;
- Minor differences in the lengths of the crash barriers and handrails were caused by different evaluations of their end points in the point clouds and aerial images.
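The deviation and RMSE computation described in the accuracy-comparison procedure above reduces to simple column arithmetic once the point triads are matched. A minimal numpy sketch, with invented sample coordinates standing in for the matched GS/PHTGM/MM triads:

```python
import numpy as np

# Matched point triads: rows are points, columns are x, y coordinates (metres).
# The values are illustrative placeholders, not measured data.
gs    = np.array([[100.000, 200.000], [150.020, 240.010], [210.050, 305.000]])
phtgm = np.array([[100.031, 200.018], [150.052, 239.985], [210.013, 305.041]])
mm    = np.array([[100.062, 200.070], [150.101, 240.095], [210.139, 304.911]])

def rmse(ref, test):
    """Per-axis RMSE and positional RMSE of test coordinates against a reference."""
    d = test - ref                                       # coordinate deviations
    per_axis = np.sqrt(np.mean(d ** 2, axis=0))          # RMSE_x, RMSE_y
    positional = np.sqrt(np.mean(np.sum(d ** 2, axis=1)))
    return per_axis, positional

pairs = [("GS-PHTGM", gs, phtgm), ("GS-MM", gs, mm), ("PHTGM-MM", phtgm, mm)]
for name, ref, test in pairs:
    (rx, ry), rp = rmse(ref, test)
    print(f"{name}: RMSE_x = {rx*1000:.0f} mm, RMSE_y = {ry*1000:.0f} mm, "
          f"RMSE_pos = {rp*1000:.0f} mm")
```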
Comparison of the number of evaluated mapped features using different mapping methods

After correcting the mapping to identical starts and ends and excluding the measurements in the collection road networks ("sucking areas") of diverse sizes in the locality "Upland", the mapping methods rendered the results below:

- geodetic surveying determined 98 stationings,
- mobile mapping measured 100 stationings,
- photogrammetry measured 91 stationings (without using the video records).

Comparison of the mapping methods from the organizational-technological point of view

The tested mapping methods have their advantages and disadvantages. From the organizational-technological point of view, the methods may be compared as follows.

4.4.1 Duration of preparation works: When using geodetic surveying methods to map common first class roads, the work may start from one day to the next if no transport engineering precaution (TEP) is needed. Mobile mapping technologies come second in this respect, as only the ground control points need to be targeted and surveyed on the road (in the case of more complicated conditions), and the data may be collected subsequently. Using mobile laser mapping, a ten-kilometre section may be surveyed two days after activation, i.e. after the order. Aerial photogrammetry comes last: first it is necessary to carry out the premarking, coordinate with the air traffic control authorities and wait for suitable imaging weather. From activation to implementation, the imaging of a ten-kilometre section takes about six days.

The number of people (professions) needed for a successful task execution: Using the geodetic surveying method, three experts must be involved: a surveyor, a database producer rendering the data into DBF for SHP, and an unbiased expert to check the final data processing. In mobile mapping, at least four professionals need to be involved: a surveyor to measure the ground control points, who is usually also the mobile mapping car driver; a raw database producer (calculation of trajectories and cloud adjustment); an expert to process the point clouds into DBF for SHP; and an unbiased person to check the final processing. Photogrammetric methods need at least eight professionals: a surveyor to survey the initial points, an aircraft flight planner and navigator (in one person), a pilot, an expert to develop the digital images, an expert for aerotriangulation, an aerial image analyst, an expert to process the images into DBF for SHP, and an unbiased person to check the final processing.

The instrumentation (owned or rented): For the geodetic surveying methods, it is sufficient to have a car, quality GNSS instrumentation, a total station and a computer for data processing (the costs, including software, fall below €25,000 without the car). For mobile laser mapping, the instrumentation is identical to the geodetic mapping methods; on top of that, it is also necessary to acquire a laser-scanning apparatus, a high-performance computer and special-purpose software to evaluate the point clouds. The costs are about €400,000. For photogrammetric mapping, the instrumentation needed by a geodetic surveyor is topped up with an aircraft, a camera with accessories, special-purpose software for data processing, and large data storage for filing and storing the data. Such technologies start at €1,200,000.
Comparison of the mapping methods by processing time

Regarding the time needed to process and deliver the final data, outputs and reports using the individual mapping technologies, we must point out that only the geodetic surveying method scales proportionally with the length in kilometres (or hours). The time intensity of both contactless mapping methods (PHTGM and MM) depends greatly on the overall size of the job rather than scaling linearly with the mapped length. For illustration, in aerial imaging the number of images is not relevant for the development from RAW format to the processing format, as a hundred images take roughly the same time as one thousand images. Once on site, there is not much difference between imaging ten kilometres of road or one hundred kilometres. What does matter is the cost of one image in the locality, or the cost of labour per kilometre of road. Table 2 may be used to compare the costs and capacity limits. The financial intensity of data acquisition in MM is influenced by the requirement for ground control point surveying. If the required spatial accuracy of the features entering the geodatabase were relaxed to 0.14 m, it would be possible to reduce the costs greatly by skipping the ground control point targeting before surveying, or by stipulating ground control point measurements only in localities where the required accuracy cannot otherwise be obtained using MM. This way, cost reductions of as much as 40 % per kilometre of road may be achieved. Using PHTGM, the costs may also be reduced by skipping the ground control points in the technology. Smaller reductions may be achieved by changing the image resolution from 5 cm to 7.5 cm or 10 cm, but at the expense of not being able to see or process certain road markings; on the other hand, more geodetic surveying would then be needed and the costs would rise there. In general, the difference between the image resolutions (5 cm or 7.5 cm) is minimal, as the aircraft travels an identical distance along the road (a shorter flight at a higher GSD may mean a reduction of 10 % in time and a corresponding financial saving).

Table 2. Comparison of the mapping methods in terms of costs.

CONCLUSIONS

When measuring the distances between a point measured geodetically and the lines produced by the contactless mapping methods, the biggest distances are 8.5 cm (with 90 % of the distances falling between 4.5 cm and 8.5 cm in mobile mapping). In the locality Slavkov-Bučovice, however, 80 % of the distances are up to 4.5 cm (in photogrammetric mapping it is almost 90 %). The differences in the evaluation of horizontal line road markings are caused by the character of the contactless methods. In photogrammetry, the pointing to the surveyed point is direct, i.e. the operator places the measurement mark on a real object in the stereoscopic model. The final pointing error is thus the error of the mark setting in the stereoscopic model, which is 1/3 to 1/2 of the GSD with subpixel observation; in this case, with GSD = 4 cm imaging, it is 1.3 to 2 cm. The remaining deviations from the geodetic surveying, or from the real position in the field, are caused by the residual errors of the aerotriangulation calculation. In mobile mapping, the points and lines are pointed and interpreted from point clouds. Each point of the cloud is measured by a laser fixed to the car. If the car travels at a speed of 45 km/h, as in this test, one rotation (one profile measured by the laser) lies 12.5 cm from the next at a laser mirror rotation speed of 100 revolutions per second.
Given that the instrumentation described in 3.2 has two lasers, the point profiles in the real terrain follow each other with a step of 6.25 cm. In the direction of the laser rotation, the spacing between points is only 14 mm. Such data form continuous profiles along the direction of the car's passage. With regard to the pseudo-random position of the measured points relative to the real objects, the pointing and interpretation of line points is burdened with an interpretation error from the point cloud at the level of 1/3 to 1/2 of the profile step, i.e. 2 to 3 cm perpendicular to the direction of travel and 3 to 4 cm along it. These fairly constant deviations caused by the car's travel may easily be reduced either by a slower passage (the reduction in speed increases the number of measured profiles) or by increasing the laser measuring frequency (the market currently offers MM systems with lasers of almost triple the mirror rotation speed and three times the number of pulses, i.e. two to three times "denser" data at an identical car speed).

The research outputs were used during the implementation of the project "Integration of IoT data sensory platforms into GIS systems in the framework of Smart City e-services" (PV 10437), supported from the State budget through the Ministry of Industry and Trade.

The major findings and recommendations are summarized below and in Table 3.

- When compared to the geodetic surveying, the average distances of the horizontal road markings are 43 mm in PHTGM and 86 mm in MM when using lines mapped from point clouds.
- When compared to the geodetic surveying of the horizontal road markings, the RMSE is 58 mm in PHTGM (with 58 mm in height) and 111 mm in MM (with 40 mm in height).
- The contactless mapping methods (PHTGM with GSD = 5 cm, and MM) are practically identical (95 mm) when measuring the horizontal road markings and comparing the spatial deviations with the geodetic surveying. For the vertical road markings, MM is more accurate.
- The geodetic mapping method ensures data collection even in relatively difficult conditions, such as an unclean road surface. However, when comparing the line between the reinforced and the unpaved edge of the road, the line interpretation (grass vs. asphalt) differs between the measurements by up to 45 mm.
- Geodetic surveying may be used on relatively short sections without a prominent increase in unit price (from about 1 km of surveyed length, a standard kilometre price may be used).
- The major disadvantage of geodetic mapping methods on first class roads is staff safety. This may be mitigated using TEP, which, however, makes the data collection longer and more expensive. The most troublesome is the surveying of the centre line, which may be partly replaced by automated generation of centre lines from the side guide lines. The mean differences between the partially measured centre lines and the generated road axes were: Štěpánov nad Svratkou-Hodonín: 1.96 cm; Žďár nad Sázavou-Nové Město na Moravě: 1.56 cm; Slavkov u Brna-Bučovice: 1.8 cm.
- At traffic cuts, or wherever the centre line is painted erroneously, entering the road to survey the direct lines or arrows cannot be avoided.
- It is recommended to carry out a detailed evaluation of the horizontal road markings (Road and Motorway Directorate of the Czech Republic, 2017) and determine the procedures leading to an accurate definition of their surveying, which in some cases is not defined precisely.
- It is advisable to comply with the accuracy set by the regulations (Road and Motorway Directorate of the Czech Republic, 2017) for surveying features on consolidated surfaces for the purposes of SRIE road inventory taking, and to consider whether this accuracy (0.03 m) is actually needed.
- When taking the inventory of first class roads and requiring an accuracy of 0.03 m (position and height), this accuracy cannot be guaranteed by a single RTK measurement without an initial point field; geodetic surveying would have to be implemented (Decree No. 383/2015 Coll.).
- The major disadvantage of the contactless mapping methods, particularly PHTGM, is their inadequate accuracy in determining the SRIE position and height in forested and vegetation-shadowed areas. In such cases, MM must use ground control points to adjust the laser point clouds, while the photogrammetric method fails to identify or measure the vertical and horizontal road markings covered by vegetation, or measures the horizontal road markings incompletely. This disadvantage may be partially eliminated by acquiring a georeferenced video record to add the missing SRIE. This, however, means that PHTGM and MM must be combined.
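As a numerical recap of the error arithmetic used in the conclusions above, the following short sketch reproduces the photogrammetric pointing error bounds and the mobile-mapping profile spacing from the values stated in the text (all inputs are taken from the paper):

```python
# Photogrammetric pointing error: 1/3 to 1/2 of GSD with subpixel observation.
gsd_m = 0.04  # imaging GSD of 4 cm
print(f"PHTGM pointing error: {gsd_m/3*100:.1f} to {gsd_m/2*100:.1f} cm")  # 1.3-2.0

# Mobile mapping profile spacing: car speed / profile rate.
speed_ms = 45 / 3.6          # 45 km/h in m/s
profiles_per_s = 2 * 100     # two scanners, 100 mirror revolutions/s each
step_m = speed_ms / profiles_per_s
print(f"MM profile step: {step_m*100:.2f} cm")  # 6.25 cm

# Cloud interpretation error: 1/3 to 1/2 of the profile step (across track).
print(f"MM interpretation error: {step_m/3*100:.1f} to {step_m/2*100:.1f} cm")
```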
Solvable potentials, non-linear algebras, and associated coherent states

Using the Darboux method and its relation to supersymmetric quantum mechanics, we construct all SUSY partners of the harmonic oscillator. With the help of the SUSY transformation we introduce ladder operators for these partner Hamiltonians and show that they close a quadratic algebra. The associated coherent states are constructed and discussed in some detail.

INTRODUCTION

Since the early days of quantum mechanics there has been enormous interest in exactly solvable quantum systems. In fact, Schrödinger himself initiated a program [1] which resulted in the famous Schrödinger-Infeld-Hull factorization method [2]. In the last 10-15 years this program has been revived in connection with supersymmetric (SUSY) quantum mechanics [3]. To be a little more precise, it has been found [4] that the so-called shape-invariance property of a given Schrödinger potential, which is in fact equivalent to the factorization condition, is sufficient for the exact solvability of the eigenvalue problem of the associated Schrödinger Hamiltonian. However, SUSY quantum mechanics has also been shown to be an effective tool in finding new exactly solvable systems. Here in essence one utilizes the fact that SUSY quantum mechanics consists of a pair of essentially isospectral Hamiltonians whose eigenstates are related by SUSY transformations. This is the basic idea of a recent construction method for so-called conditionally exactly solvable potentials [5]. There one constructs a SUSY quantum system for which, under certain conditions imposed on its parameters, one of the SUSY partner Hamiltonians reduces to that of an exactly solvable (shape-invariant) one. Other approaches, which are also based on the presence of pairs of essentially isospectral Hamiltonians, go back to an idea formulated by Darboux [6], are based on the inverse scattering method [7], or on the factorization method [8]. Clearly, these approaches are closely connected to each other and to the SUSY approach.

In this paper we will construct, with the help of the Darboux method, all possible SUSY partners of the harmonic oscillator Hamiltonian on the real line and discuss their algebraic properties in some detail. In doing so we review in the next section the Darboux method and explicitly show its equivalence to the supersymmetric approach. Section 3 then briefly presents the basic idea for the construction of conditionally exactly solvable (CES) potentials. Section 4 is devoted to a detailed discussion of the harmonic oscillator case. Here we first present all possible SUSY partners of the harmonic oscillator and give explicit expressions for the corresponding eigenstates. Secondly, with the help of the standard ladder operators of the harmonic oscillator we introduce similar ladder operators for the SUSY partners and show that they close a quadratic algebra, which is also briefly discussed. Finally, we introduce so-called non-linear coherent states which are associated with this non-linear algebra. The properties of these coherent states are discussed in some detail.

THE DARBOUX METHOD

In this section we briefly review the Darboux method [6] and show its connection to supersymmetric quantum mechanics [3]. We start by considering a pair of standard Schrödinger Hamiltonians acting on $L^2(\mathbb{R})$,

$$H_\pm = -\frac{\hbar^2}{2m}\,\frac{d^2}{dx^2} + V_\pm(x), \qquad (1)$$

and a linear first-order operator

$$A = \frac{\hbar}{\sqrt{2m}}\,\frac{d}{dx} + \Phi(x) \qquad (2)$$

obeying the intertwining relation

$$A\,H_- = H_+\,A. \qquad (3)$$

It is obvious that this intertwining relation cannot be obeyed for arbitrary functions $V_\pm$ and $\Phi$.
In fact, the relation (3) explicitly reads

$$\left[\frac{\hbar}{\sqrt{2m}}\,(V_- - V_+) + \frac{\hbar^2}{m}\,\Phi'\right]\frac{d}{dx} + \frac{\hbar}{\sqrt{2m}}\,V_-' + \frac{\hbar^2}{2m}\,\Phi'' + \Phi\,(V_- - V_+) = 0. \qquad (4)$$

As the unit operator 1 and the momentum operator (i.e. $d/dx$) are linearly independent, their coefficients have to vanish. In other words, we are left with two conditions between the three functions $V_\pm$ and $\Phi$:

$$V_- - V_+ = -\sqrt{\frac{2}{m}}\,\hbar\,\Phi', \qquad (5)$$

$$\frac{\hbar}{\sqrt{2m}}\,V_-' + \frac{\hbar^2}{2m}\,\Phi'' + \Phi\,(V_- - V_+) = 0. \qquad (6)$$

Inserting the first into the second and integrating once, we find

$$\Phi^2(x) + \frac{\hbar}{\sqrt{2m}}\,\Phi'(x) = V_+(x) - \varepsilon, \qquad (7)$$

where $\varepsilon$ is an arbitrary real integration constant, sometimes called the factorization energy [3]. With this relation and with (5) we can express the two potentials under consideration in terms of the function $\Phi$:

$$V_\pm(x) = \Phi^2(x) \pm \frac{\hbar}{\sqrt{2m}}\,\Phi'(x) + \varepsilon. \qquad (8)$$

At this point one realizes that these are the so-called SUSY partner potentials [3]. In fact, using relations (8) we note that

$$H_+ = A A^\dagger + \varepsilon, \qquad H_- = A^\dagger A + \varepsilon. \qquad (9)$$

Due to the intertwining relation (3), these supersymmetric partner Hamiltonians are essentially isospectral, that is,

$$\mathrm{spec}(H_+) \setminus \{\varepsilon\} = \mathrm{spec}(H_-) \setminus \{\varepsilon\}. \qquad (10)$$

Their eigenstates are related via SUSY transformations. To make this more explicit, let us denote by $|\phi_n^\pm\rangle$ the eigenstates of $H_\pm$ for eigenvalues $E_n > \varepsilon$,

$$H_\pm\,|\phi_n^\pm\rangle = E_n\,|\phi_n^\pm\rangle, \qquad E_n > \varepsilon. \qquad (11)$$

Then these states are related by the SUSY transformations [3]

$$|\phi_n^-\rangle = \frac{A^\dagger\,|\phi_n^+\rangle}{\sqrt{E_n - \varepsilon}}, \qquad |\phi_n^+\rangle = \frac{A\,|\phi_n^-\rangle}{\sqrt{E_n - \varepsilon}}. \qquad (12)$$

In addition to the states in (11), one of the two Hamiltonians $H_\pm$ may have an additional eigenstate $|\phi_\varepsilon^\pm\rangle$ with eigenvalue $\varepsilon$, obeying the first-order differential equation $A|\phi_\varepsilon^-\rangle = 0$ or $A^\dagger|\phi_\varepsilon^+\rangle = 0$, respectively. In terms of the function $\Phi$ these states explicitly read

$$\phi_\varepsilon^\pm(x) = N_\pm \exp\left\{\pm\frac{\sqrt{2m}}{\hbar}\int^x dx'\,\Phi(x')\right\}, \qquad (13)$$

where $N_\pm$ stands for a normalization constant. Clearly, at most one of the two solutions (13) can be square integrable. This situation corresponds to unbroken SUSY. If neither of them is square integrable, SUSY is said to be broken [3].

The Darboux method reviewed in this section can now be used to find, for a given potential, say $V_+$, all its possible SUSY partners $V_-$. First, one has to solve equation (7), that is, to find all possible SUSY potentials $\Phi$. This in fact corresponds to finding all possible factorizations of the corresponding Hamiltonian $H_+$. Then the corresponding SUSY partner $V_-$ can be obtained via (5). In this way one can construct new exactly solvable potentials. The parameters involved in the SUSY potential turn out to obey certain conditions, and therefore these new potentials are more precisely called conditionally exactly solvable (CES) potentials. Let us note that the Darboux method may be generalized to intertwining operators containing higher orders of the momentum operator [9].

MODELLING OF CES POTENTIALS

In this section we give some more details on the construction of CES potentials using the Darboux method. As mentioned above, we start with a given potential $V_+$ and try to find all its associated SUSY potentials. That is, we have to find the most general solution of the generalized Riccati equation (7). In doing so, we first linearize this non-linear differential equation via the substitution

$$\Phi(x) = \frac{\hbar}{\sqrt{2m}}\,\frac{u'(x)}{u(x)},$$

which turns (7) into

$$-\frac{\hbar^2}{2m}\,u''(x) + V_+(x)\,u(x) = \varepsilon\,u(x), \qquad (14)$$

which is actually a Schrödinger-like equation for $V_+$. Note, however, that we are not restricted to normalizable solutions of (14). In other words, the energy-like parameter $\varepsilon$ is up to now still arbitrary. In terms of $u$ the linear operator $A$ reads

$$A = \frac{\hbar}{\sqrt{2m}}\left(\frac{d}{dx} + \frac{u'(x)}{u(x)}\right) \qquad (15)$$

and thus is a well-defined operator on $L^2(\mathbb{R})$ only if $u$ does not have any zeros on the real line. As a consequence, we may admit only those solutions of (14) which have no zeros. From Sturmian theory we know that this is only possible if $\varepsilon$ lies below the ground-state energy of $H_+$, which we will denote by $E_0$. Hence, we obtain a first condition on the parameter $\varepsilon$, namely $\varepsilon < E_0$. This also implies that $\varepsilon$ does not belong to the spectrum of $H_+$. In fact, the associated eigenfunction (13) would read $\phi_\varepsilon^+(x) = N_+\,u(x)$, which is not normalizable due to the condition imposed on $\varepsilon$.
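The linearization step can be verified symbolically. A minimal sympy sketch in the dimensionless units of Section 4 (ħ = m = ω = 1), checking that the substitution Φ = u'/(√2 u) turns the Riccati equation (7) into the Schrödinger-like equation (14):

```python
import sympy as sp

x = sp.symbols("x")
eps = sp.symbols("varepsilon")
u = sp.Function("u")(x)
V = sp.Function("V")(x)

# Dimensionless SUSY potential: Phi = u' / (sqrt(2) u)
Phi = u.diff(x) / (sp.sqrt(2) * u)

# Riccati equation (7): Phi^2 + Phi'/sqrt(2) - (V - eps) = 0
riccati = Phi**2 + Phi.diff(x) / sp.sqrt(2) - (V - eps)

# Replace V by its value from the Schrodinger-like equation (14):
# -u''/2 + V u = eps u  =>  V = eps + u''/(2 u)
residual = riccati.subs(V, eps + u.diff(x, 2) / (2 * u))
print(sp.simplify(residual))  # -> 0, confirming the linearization
```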
The above condition on $\varepsilon$ is still not sufficient to guarantee a nodeless solution. Being a second-order linear differential equation, (14) has two linearly independent fundamental solutions, denoted by $u_1$ and $u_2$. Hence, the most general solution for $\varepsilon < E_0$ is given by a linear combination of the fundamental ones:

$$u(x) = \alpha\,u_1(x) + \beta\,u_2(x). \qquad (16)$$

Therefore, the condition that $u$ does not vanish also imposes conditions on the parameters $\alpha$ and $\beta$, which have to be studied case by case [5].

Let us now assume that $H_+$ is an exactly solvable Hamiltonian, which means that its eigenvalues $E_n$ and eigenstates $|\phi_n^+\rangle$ are known exactly in closed form. For simplicity we assume that $H_+$ has a purely discrete spectrum enumerated by $n = 0, 1, 2, \dots$, such that $\varepsilon < E_0 < E_1 < \dots$. Then, via the method outlined above, one can construct all its SUSY partners $H_-$, which are conditionally exactly solvable due to the conditions that have to be imposed on the parameters $\alpha$, $\beta$ and $\varepsilon$. By construction, the eigenvalues of $H_+$ are also eigenvalues of $H_-$, and the corresponding eigenfunctions are obtained via the SUSY transformation (12). In the case of unbroken SUSY, $H_-$ has one additional eigenvalue $\varepsilon$, which belongs to its ground state given by $\phi_\varepsilon^-(x) = N_-/u(x)$. Finally, we note that in terms of $u$ the partner potentials read

$$V_-(x) = V_+(x) - \frac{\hbar^2}{m}\,\frac{d^2}{dx^2}\,\ln u(x) \qquad (17)$$

and form a two-parameter family labelled by $\varepsilon$ and $\beta/\alpha$. Note that only the quotient $\beta/\alpha$, or its inverse, is relevant in (17). For various examples of CES potentials found by this method see [5]. Here we limit our discussion to those related to the harmonic oscillator.

THE HARMONIC OSCILLATOR

In this section we construct all possible SUSY partner potentials of the harmonic oscillator $V_+(x) = (m/2)\,\omega^2 x^2$, $\omega > 0$, via the Darboux method. From now on we use dimensionless quantities, that is, $x$ is given in units of $\sqrt{\hbar/m\omega}$ and all energy-like quantities are given in units of $\hbar\omega$. The corresponding Schrödinger-like equation (14) then reads

$$-\tfrac{1}{2}\,u''(x) + \tfrac{1}{2}\,x^2\,u(x) = \varepsilon\,u(x) \qquad (18)$$

and has as its general solution (19) a linear combination of confluent hypergeometric functions. The condition that $u$ does not have a real zero implies that $\alpha$ must not vanish and can thus be set equal to unity without loss of generality. Furthermore, $\beta$ has to obey the inequality [5,10]

$$|\beta| < \beta_c(\varepsilon), \qquad (20)$$

where the explicit form of the critical value $\beta_c(\varepsilon)$ is given in [5,10]. The corresponding partner potentials of the harmonic oscillator then read, according to (17),

$$V_-(x) = \tfrac{1}{2}\,x^2 - \frac{d^2}{dx^2}\,\ln u(x). \qquad (21)$$

We note that for the above $u$ SUSY remains unbroken, and therefore the spectral properties of $H_-$ are given by

$$\mathrm{spec}(H_-) = \{\varepsilon\} \cup \{E_n = n + \tfrac{1}{2},\ n = 0, 1, 2, \dots\}, \qquad (22)$$

$$\phi_\varepsilon^-(x) = \frac{N_-}{u(x)}, \qquad \phi_n^-(x) = \frac{A^\dagger\,\phi_n^+(x)}{\sqrt{E_n - \varepsilon}}, \qquad \phi_n^+(x) = \frac{H_n(x)\,e^{-x^2/2}}{\sqrt{\sqrt{\pi}\,2^n\,n!}}, \qquad (23)$$

where $H_n$ denotes the Hermite polynomial of degree $n$. Figures of the potential family (21) for various values of $\varepsilon$ and $\beta$ can be found in [5]. Let us stress that one can even allow for complex-valued $\beta \in \mathbb{C}\setminus[-\beta_c(\varepsilon), \beta_c(\varepsilon)]$, which in turn gives rise to complex potentials generating the same real spectrum [10]. We also note that the present CES potential (21) contains as special cases those previously obtained by Abraham and Moses [7] and by Mielnik [8]; see [5] for a detailed discussion.
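Equations (18)-(21) are easy to explore numerically. The sketch below, using scipy, integrates (18) outward from x = 0 for a chosen ε < 1/2 to obtain a candidate u = u1 + β u2, checks it for nodes, and evaluates the partner potential (21) by finite differences. This is an illustration only, not the paper's closed-form construction via confluent hypergeometric functions; β is chosen small here so that u stays nodeless.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, beta = 0.0, 0.3     # factorization energy (< 1/2) and mixing parameter

def rhs(x, y):
    # y = (u, u'); equation (18) rewritten as u'' = (x**2 - 2*eps) * u
    return [y[1], (x**2 - 2.0 * eps) * y[0]]

def integrate(y0, xs):
    sol = solve_ivp(rhs, (xs[0], xs[-1]), y0, t_eval=xs, rtol=1e-10, atol=1e-12)
    return sol.y[0]

x_pos = np.linspace(0.0, 4.0, 801)
# Even solution u1: u(0)=1, u'(0)=0; odd solution u2: u(0)=0, u'(0)=1.
u1 = integrate([1.0, 0.0], x_pos)
u2 = integrate([0.0, 1.0], x_pos)

# Extend to x < 0 by parity (u1 even, u2 odd).
x = np.concatenate([-x_pos[:0:-1], x_pos])
u = np.concatenate([(u1 - beta * u2)[:0:-1], u1 + beta * u2])
assert np.all(u > 0), "u has a node; decrease |beta| or epsilon"

# Partner potential (21): V_-(x) = x**2/2 - (ln u)''  (finite differences)
V_minus = 0.5 * x**2 - np.gradient(np.gradient(np.log(u), x), x)
print("V_-(0) =", V_minus[len(x) // 2])
```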
Algebraic Structure

We now analyse the algebraic structure of the partner Hamiltonians of the harmonic oscillator. Using the standard raising and lowering operators $a^\dagger$, $a$ of the harmonic oscillator, for which $H_+ = AA^\dagger + \varepsilon = a^\dagger a + 1/2$ and which close the linear algebra

$$[a, a^\dagger] = 1, \qquad [H_+, a] = -a, \qquad [H_+, a^\dagger] = a^\dagger, \qquad (24)$$

one may introduce, via the SUSY transformation (12), similar ladder operators for the SUSY partners [11],

$$B = A^\dagger\,a\,A, \qquad B^\dagger = A^\dagger\,a^\dagger\,A, \qquad (25)$$

which act on the eigenstates of $H_-$ in the following way:

$$B\,|\phi_n^-\rangle = \sqrt{n\,(E_n - \varepsilon)(E_{n-1} - \varepsilon)}\;|\phi_{n-1}^-\rangle, \qquad B^\dagger\,|\phi_n^-\rangle = \sqrt{(n+1)\,(E_n - \varepsilon)(E_{n+1} - \varepsilon)}\;|\phi_{n+1}^-\rangle, \qquad B\,|\phi_\varepsilon^-\rangle = 0 = B^\dagger\,|\phi_\varepsilon^-\rangle. \qquad (26)$$

The last two relations make explicit that the ground state $|\phi_\varepsilon^-\rangle$ of $H_-$ is isolated, in the sense that it cannot be reached via $B$ from any of the excited states and, vice versa, the excited states cannot be constructed with $B^\dagger$ from $|\phi_\varepsilon^-\rangle$. Together with the Hamiltonian $H_-$, these ladder operators close the quadratic, hence non-linear, algebra

$$[H_-, B] = -B, \qquad [H_-, B^\dagger] = B^\dagger, \qquad [B, B^\dagger] = \Psi(H_-) - \Psi(H_- - 1), \qquad (27)$$

where $\Psi$ is the cubic polynomial given explicitly below. This quadratic algebra belongs to the class of so-called $W_2$ algebras and may be viewed as a polynomial deformation of the $su(1,1)$ Lie algebra. Such deformations have been discussed by Rocek [12] and, within a more general context, by Karassiov [13] and Katriel and Quesne [14]. The quadratic Casimir operator associated with the algebra (27) reads

$$C = B\,B^\dagger - \Psi(H_-) = B^\dagger B - \Psi(H_- - 1). \qquad (28)$$

In the Fock space representation (26) we have the explicit expression

$$\Psi(H_-) = \left(H_- + \tfrac{1}{2}\right)\left(H_- - \varepsilon\right)\left(H_- + 1 - \varepsilon\right) \qquad (29)$$

and the relations $BB^\dagger = \Psi(H_-)$ and $B^\dagger B = \Psi(H_- - 1)$. Hence the Casimir (28) vanishes within this representation, as expected [13,14].

Non-linear coherent states

Let us now construct the non-linear coherent states [15] associated with the quadratic algebra (27). There are several ways to define such states [16]. Here we define them as eigenstates of the "non-linear" annihilation operator $B$, leading essentially to so-called Barut-Girardello coherent states [17]. We also note that the construction procedure presented below is very similar to that of coherent states associated with quantum groups [18]. Recall that the ground state $|\phi_\varepsilon^-\rangle$ of $H_-$ is isolated; we may therefore construct the coherent states over the excited states $\{|\phi_n^-\rangle\}_{n \in \mathbb{N}_0}$ only. For this reason we make the ansatz

$$|\mu\rangle = \sum_{n=0}^{\infty} c_n\,\mu^n\,|\phi_n^-\rangle, \qquad (30)$$

where $\mu$ is an arbitrary complex number and the real coefficients $c_n$ are to be determined from the defining relation

$$B\,|\mu\rangle = \mu\,|\mu\rangle. \qquad (31)$$

Using relations (26), we obtain the following recurrence relation for the $c_n$:

$$c_n\,\sqrt{n\,(E_n - \varepsilon)(E_{n-1} - \varepsilon)} = c_{n-1}. \qquad (32)$$

That is, the coefficients $c_n$ for $n \geq 1$ can be expressed in terms of $c_0$,

$$c_n = c_0\left[n!\,\left(\tfrac{1}{2} - \varepsilon\right)_n\left(\tfrac{3}{2} - \varepsilon\right)_n\right]^{-1/2}, \qquad (33)$$

where $(z)_n = \Gamma(z+n)/\Gamma(z)$ denotes Pochhammer's symbol. The remaining coefficient $c_0 = c_0(\mu)$ is determined via the normalization $\langle\mu|\mu\rangle = 1$ of the coherent states. Thus, we can express $c_0$ in terms of a generalized hypergeometric function [19],

$$c_0^{-2}(\mu) = {}_0F_2\!\left(\tfrac{1}{2} - \varepsilon,\ \tfrac{3}{2} - \varepsilon;\ |\mu|^2\right). \qquad (35)$$

Let us now discuss some properties of these non-linear coherent states. First we note that, as expected, these states are not orthogonal for $\mu \neq \nu$:

$$\langle\nu|\mu\rangle = c_0(\nu)\,c_0(\mu)\;{}_0F_2\!\left(\tfrac{1}{2} - \varepsilon,\ \tfrac{3}{2} - \varepsilon;\ \nu^*\mu\right). \qquad (36)$$

Secondly, let us investigate whether these states form an overcomplete set. In other words, we consider the question: can these states generate a resolution of the unit operator? For this we have to recall that the non-linear coherent states have been constructed over the excited states of $H_-$ only. Therefore, we start by postulating a positive measure $\rho$ on the complex $\mu$-plane obeying the resolution of unity

$$\int_{\mathbb{C}} d\rho(\mu^*, \mu)\;|\mu\rangle\langle\mu| = \mathbf{1} - |\phi_\varepsilon^-\rangle\langle\phi_\varepsilon^-|. \qquad (37)$$

Within the polar decomposition $\mu = \sqrt{x}\,e^{i\varphi}$ we make the ansatz

$$d\rho(\mu^*, \mu) = \frac{d\varphi}{2\pi}\,dx\,\frac{\sigma(x)}{c_0^2(\mu)} \qquad (38)$$

with a yet unknown positive density $\sigma$ on the positive half-line. Inserting this ansatz into (37), we obtain the following conditions on $\sigma$:

$$\int_0^\infty dx\,x^n\,\sigma(x) = n!\,\left(\tfrac{1}{2} - \varepsilon\right)_n\left(\tfrac{3}{2} - \varepsilon\right)_n, \qquad n = 0, 1, 2, \dots \qquad (39)$$

Hence, $\sigma$ is a probability density on the positive half-line defined by its moments, given on the right-hand side of (39). Let us note that the integral in (39) may be viewed as a Mellin transform [20] of $\sigma$, and in turn the latter is given by the inverse Mellin transform of the moments.
This inverse Mellin transformation turns out to lead to the integral representation of Meijer's G-function [19]; in other words, σ takes the explicit form of a Meijer G-function (40). In Figure 1, a plot of the radial density f(|µ|²) = 2π dρ(µ*, µ)/(dϕ d|µ|²) is given, showing that it leads to a well-behaved positive measure on the complex µ-plane. Finally, let us point out that similar non-linear coherent states, associated with the CES potentials of the radial harmonic oscillator, have been constructed in [15]. In that case broken as well as unbroken SUSY can be considered, and the corresponding symmetry algebra is a cubic one. In analogy to the discussion in [15], one can show that the coherent states discussed here are also minimum-uncertainty states.
Dynamical landscape of transitional pipe flow

The transition to turbulence in pipes is characterized by a coexistence of laminar and turbulent states. At the lower end of the transition, localized turbulent pulses, called puffs, can be excited. Puffs can decay when rare fluctuations drive them close to an edge state lying at the phase-space boundary with laminar flow. At higher Reynolds numbers, homogeneous turbulence can be sustained and dominates over laminar flow. Here we complete this landscape of localized states, placing it within a unified bifurcation picture. We demonstrate our claims within the Barkley model and motivate them generally. Specifically, we suggest the existence of an antipuff and a gap edge -- states which mirror the puff and the related edge state. Previously observed laminar gaps forming within homogeneous turbulence are then naturally identified as antipuffs nucleating and decaying through the gap edge.

In pipe flow, turbulence first appears intermittently in space, interspersed with laminar flow, rather than homogeneously in the entire pipe [1][2][3]. This is characteristic of the subcritical transition to turbulence in wall-bounded flows, where turbulence coexists with the linearly stable laminar flow (the Hagen-Poiseuille profile for pipes) [4]. Thus, turbulence can be excited only through a large enough perturbation of the base flow. At the low end of the transitional regime, controlled by the Reynolds number Re, such excitations generically develop into a localized turbulent patch, called a puff for pipe flow. Initially, puffs have short lifetimes and tend to rapidly decay. As Re increases, puffs become increasingly stable against decay, but puff splitting, a single puff turning into two, becomes increasingly likely, allowing the proliferation of turbulence [5]. Then, at high enough Re (termed Re_slug here), puffs are replaced by expanding turbulent structures, called slugs, with laminar flashes randomly opening and closing within their turbulent cores. This is the regime of intermittent turbulence [6]: a homogeneous state where turbulence production matches turbulence dissipation can occupy the entire pipe, but coexists with random laminar pockets. Upon further increasing the Reynolds number, such flashes give way to a homogeneous turbulent core within the slug, ending the transitional regime.

There are three key states around which the coarse-grained dynamics are known to be organized below Re_slug: the laminar base flow, the (chaotic) puff state, and a state called the edge state, here termed the decay edge, which controls puff excitations and decays. Even above Re_slug, it is known that the decay edge remains surprisingly unchanged [7,8]. In this paper we expand this phase space of states, proposing novel states together with their bifurcations with Re. These novel states, the gap edge and antipuff, mirror the decay edge and puff, playing an analogous role for the intermittent turbulence above Re_slug. In addition, the suggested bifurcation diagram clarifies how the puff state can disappear while the decay edge remains. Thus, a unified picture of the transitional regime emerges, demonstrating how this regime can be fruitfully interpreted in a dynamical systems framework. We argue for the proposed picture on general grounds and verify its validity using the Barkley model [9].

I. BACKGROUND

Here we provide further details about the puff and decay edge and the corresponding phase space structure.
We also introduce the coarse-grained dynamical point of view taken in the following [9], and motivate our use of the Barkley model. A puff is a localized chaotic traveling wave which, while having a long lifetime, is only of a transient nature, forming a chaotic saddle in phase space [10]. Considering localized structures, phase space can be roughly separated into initial conditions which directly laminarize and those which decay after a long transient, visiting the puff state first [11,12]. Separating these two sets is the so-called edge of chaos, small perturbations around which end up either in the laminar or the puff state. Furthermore, the edge of chaos corresponds to the stable manifold of the decay edge state [7,11], an attracting state for trajectories on the edge which has a single transverse unstable direction. It leads to a puff state on one side of the edge and to the laminar state on the other. The decay edge and the puff share a similar spatial structure, and there is evidence that they originate in a saddle-node bifurcation at a lower Re [7,13].

The point of view taken here is to treat the puff, the decay edge and homogeneous turbulence as well-defined dynamical states, characterized by an average structure. This is a coarse-grained view [14], wherein the detailed chaotic dynamics are treated as noise around the average state. Thus, while the chaotic dynamics themselves have a rich dynamical structure, organized around unstable solutions of the governing equations [15][16][17][18], as evidenced both for the puff and for the decay edge [13,19-22], we focus on a coarser dynamical description. Following [9,23,24], we focus on two variables meant to capture the state of the flow at a cross-section of the pipe, which can vary along the pipe direction x: the mean shear u(x, t) and the turbulent velocity fluctuations q(x, t). Turbulent fluctuations could be captured through the transverse velocity root-mean-square, averaged over the pipe cross-section [24], being zero in the laminar state. A proxy for the mean shear is the local centerline velocity: it is smallest in a turbulent flow, where the mean profile is almost flat and equal to the mean flow rate, u ≈ Ū (Ū is also called the bulk velocity), and largest for the base laminar Hagen-Poiseuille flow, with u = U_0 = 2Ū. The mean flow shear and the turbulence level are the minimum ingredients required to capture the dynamical processes behind turbulence generation and its sustainment [25]. Moreover, based on these two variables, the Barkley model successfully reproduces both qualitative and quantitative features of pipe as well as duct flow [23]. The stochastic version of the Barkley model further displays the phenomenology of puff splitting and decay in pipe flows, as well as the intermittent turbulence regime [9]. The key insight at the heart of the Barkley model is that the transition from puffs to slugs is a transition from an excitable to a bistable system: turbulence can be excited but not sustained below Re_slug, whereas homogeneous turbulence, with spatially uniform turbulence level and mean shear (q_t, u_t), coexists with laminar flow (0, U_0) as a stable state above Re_slug. An important feature, which the model reproduces, is a continuous transition from slugs to puffs [8], interpreted as a "masked transition": the homogeneous turbulent state actually first appears at a Re below Re_slug, denoted here by Re_turb, but is masked by the presence of puffs [23].
This completes the known part of the bifurcation diagram for the transitional regime, which we expand in the following (see Fig. 1).

II. A UNIFIED BIFURCATION DIAGRAM

We propose two novel states which complete the set of basic states in the transitional region (Fig. 1): the gap edge and the antipuff. These are traveling-wave states, consisting of localized laminar flow embedded within homogeneous turbulence. In the region Re_turb < Re < Re_slug, the gap edge is an unstable state lying at the edge between sustained homogeneous turbulence and localized turbulence in the form of a puff, analogously to the decay edge separating the base laminar flow and the puff. Above Re_slug, puffs disappear but the gap edge remains, separating homogeneous turbulence from a stable laminar-pocket state we call an antipuff, which is the mirror image of a puff. Note that at Re_slug, slugs neither expand nor contract, corresponding to multiple solutions with sections of arbitrary length at the turbulent and laminar fixed points, which can be interpreted either as puffs or antipuffs, represented by a vertical line in Fig. 1. Finally, the gap edge and the antipuff disappear together at Re_gap. We propose that the intermittent turbulence regime observed in pipe flow corresponds to the random excitations and decays of antipuffs through the gap edge, and thus Re_gap marks the end of this regime. A connection of the observed laminar pockets, here interpreted as antipuffs, to the laminar tails of slugs has been previously recognized [6,9], though their existence as distinct stable structures was not explicitly stated. We now substantiate this picture and flesh out the conditions for its validity.

A. General considerations

A key characteristic of puffs is their fronts: spatial locations where, while u remains roughly constant, the turbulence level q either sharply rises from zero to a finite value (the upstream front, with u = U_0) or sharply decreases to zero from a finite value (the downstream front, with u < U_0). The front speeds determine the speed of puffs and the Re range for their existence. Analogously, front speeds play a key role in establishing the existence of antipuffs. We denote by c_+(u, Re) (c_-(u, Re)) the front speed at mean velocity u where the turbulence level increases (decreases) in the downstream direction. Turbulence has been shown to be advected with speed u - ζ in pipe flow [24], where ζ is a constant offset velocity from the centerline value. Writing c_-(u, Re) = u - ζ + S(u, Re), the relative speed S(u, Re) thus determines the relative stability of laminar flow (q = 0) compared with turbulent flow (q ≠ 0) at a common velocity u. Indeed, if S(u, Re) < 0, the downstream laminar flow overtakes the upstream turbulent flow, which is thus less stable at this u [26]. As c_+(u, Re) represents the same physics but with turbulence downstream of laminar flow, c_+(u, Re) = u - ζ - S(u, Re) [27]. Puffs exist as long as front speeds match: there exists u_p such that c_-(u_p, Re) = c_+(U_0, Re). At Re > Re_slug, u_p < u_t, where u_t is the mean flow of homogeneous turbulence. Puffs are replaced by weak slugs, which have a downstream front at the turbulent velocity u_t. Since c_-(u_t, Re) > c_+(U_0, Re), slugs expand. Generally, S(u, Re) is an increasing function of u and Re: the higher the shear, the higher the production of turbulence; the higher the Re, the lower the dissipation of turbulence, both making turbulence more sustainable.
The condition for the existence of antipuffs is a region where S(u_t, Re) < 0 for Re > Re_slug, satisfied in pipe flows for Re ∈ (2250, 3000) [24]. Indeed, starting from a fully turbulent pipe flow, q = q_t, u = u_t, imagine a local decrease of the level of turbulence to zero in a small interval of the pipe, while keeping u = u_t. This forms two fronts back to back, with relative speed

c_+(u_t, Re) - c_-(u_t, Re) = -2 S(u_t, Re) > 0,

producing an initially expanding laminar region. The flat turbulent profile, however, cannot be sustained at q = 0, and u will relax towards U_0. If u were to reach U_0, forming an upstream front of a slug, then the gap would tend to close, since c_+(U_0, Re) - c_-(u_t, Re) < 0. Thus, there exists a velocity u_t < u_ap < U_0, the antipuff speed, giving matching front speeds c_-(u_t, Re) = c_+(u_ap, Re), which define the antipuff. At Re = Re_slug, puff fronts satisfy c_-(u_t, Re_slug) = c_+(U_0, Re_slug), so that u_ap = U_0 is a solution for antipuff fronts. Assuming it is unique, Re < Re_slug gives u_ap > U_0 and antipuffs disappear. At the other end, antipuffs merge with the gap edge and disappear once S(u_t, Re) = 0, occurring at Re = Re_gap.

To motivate the existence and structure of the gap edge, consider reducing q locally in a turbulent pipe while keeping u = u_t: the level of turbulence will return to q_t if reduced by a minuscule amount, homogeneous turbulence being stable, while setting q = 0 will open a laminar pocket which will expand into an antipuff (or puff, depending on Re). Thus, there exists an intermediate value of turbulence 0 < q_g < q_t right at the boundary, allowing for a traveling-wave solution with an upstream q_t → q_g front and a downstream q_g → q_t front at almost constant u ≈ u_t (due to the slow adjustment of u to q), traveling at a speed close to u_t - ζ.

III. THE BARKLEY MODEL

We now turn to the Barkley model, describing the numerical results we have obtained in support of the picture described above, as well as some asymptotic analytical results. The dynamics of the Barkley model read

∂_t q + (u - ζ) ∂_x q = f(q, u) + D ∂_x² q + σ q η,
∂_t u + u ∂_x u = ε g(q, u),

with reaction terms f(q, u) = q (r + u - U_0 - (r + δ)(q - 1)²) and g(q, u) = U_0 - u + κ (Ū - u) q. Velocities are normalized such that Ū = 1 and U_0 = 2. The parameter r plays the role of Re, and η is a spatiotemporal white noise with strength σ, modeling chaotic fluctuations. While the real turbulent states are chaotic and spatially intricate, the essential dynamical and physical features in the transitional region are very well captured within the Barkley model [9,23].

A. Results for the Barkley model

We first present the new states, obtained numerically for the model, and then provide the details of the numerical methodology. The spatial profile of an antipuff, as well as that of the gap edge, the latter obtained by edge tracking, are shown in Fig. 2 for a representative value of r. Note that while the turbulence drops to zero inside the antipuff, the centerline velocity u does not reach the laminar value of 2, consistent with observations in pipe flow [6]. The full bifurcation diagram is shown in Fig. 3, where states are ordered by their turbulent mass. The measured bifurcations for the Barkley model are exactly those sketched in Fig. 1. Note the gap in turbulent mass formed between the turbulent state and the gap edge with increasing r, and the eventual merging of the gap edge and antipuff, as expected.

B. Methodology for numerical experiments

For the numerical experiments using the Barkley model (1), we use the same parameters as in Ref. [9], with x ∈ [0, L] periodic and L = 100.
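As an illustration of how the homogeneous turbulent fixed point follows from these reaction terms, the sketch below locates (q_t, u_t) by root finding; the parameter values are hypothetical stand-ins for those of Ref. [9], and the expressions transcribe the nullclines quoted in the asymptotics section below.

```python
import numpy as np
from scipy.optimize import brentq

U0, Ubar, kappa, delta = 2.0, 1.0, 2.0, 0.1       # hypothetical parameter values

def q_plus(u, r):
    # Upper branch of the q-nullcline: q_+ = 1 + sqrt((r + u - U0)/(r + delta)).
    return 1.0 + np.sqrt(max(r + u - U0, 0.0) / (r + delta))

def g_on_branch(u, r):
    # u-nullcline evaluated on the upper q-branch.
    return U0 - u + kappa * (Ubar - u) * q_plus(u, r)

def turbulent_fixed_point(r):
    # (q_t, u_t) exists for r > r_turb = kappa*(U0 - Ubar)/(1 + kappa).
    u_t = brentq(g_on_branch, U0 - r, U0, args=(r,))
    return q_plus(u_t, r), u_t

print(turbulent_fixed_point(0.7))
```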
Space is discretized with N_x = 128 or N_x = 256 grid points, and spatial derivatives are computed via fast Fourier transforms. Temporal integration is performed by a first-order exponential time differencing (ETD) scheme [28], with time steps between ∆t = 10⁻² and ∆t = 10⁻³. In simulations including stochastic noise, we use a noise strength σ = 0.2 and include the stochastic term by generalizing ETD to the stochastic integral, similar to [29,30].

Projection onto a non-moving reference frame

All spatially non-trivial attracting states we focus on for the deterministic Barkley model are so-called relative fixed points: they are traveling-wave solutions which move with a constant speed along the pipe. In the reference frame moving with this velocity they turn into fixed points, and in a periodic domain such as ours they are limit cycles in the lab reference frame. In order to find these solutions with classical algorithms designed to obtain temporally constant configurations, we project the equations adaptively in time onto the corresponding moving reference frame; the idea is similar to that developed in [31,32]. In particular, in order to adaptively eliminate the object's translation along the pipe, we project the deterministic drift of the equation onto its part perpendicular to translation. This can be done by realizing that ∂_x is the generator of translations, so that n = ∂_x(q, u) is the direction in configuration space, at the point (q, u), that points in the direction of spatial translation. Denoting the right-hand side of the deterministic part of equation (1) by F, we can project it onto the subspace orthogonal to n,

b = F - (⟨F, n⟩ / |n|²) n,

where |·| and ⟨·,·⟩ are the L² norm and inner product, so that the b-dynamics have no translational component. This allows us to obtain dynamics that only model the deformation of objects but not their movement. Note additionally that the prefactor of this projection yields the movement speed of the object,

c = ⟨F, n⟩ / |n|².

In these projected dynamics, all states we are interested in (puff, antipuff, decay edge, gap edge) are fixed points of the b-dynamics, with b = 0. For example, the decay edge, which is a limit cycle of the full dynamics, is now a fixed point with b = 0 and has a single unstable direction, corresponding to either decaying into the laminar state or being the minimal seed to form a puff. Not only does this procedure allow us to treat the configurations of interest as proper fixed points, but it also eliminates any CFL condition from the advective term. In combination with the usage of ETD, this means that the reaction terms (f(q, u) and g(q, u)) are the only terms restricting the time step. Note that we use this projection only for our deterministic computations, as the interaction of the (spatially very rough) random noise with the spatial derivative needed to compute the translational component makes the projection inaccurate. For stochastic simulations, we instead apply a spatial translation at each iteration so that the center of turbulent mass, x_q = ∫₀ᴸ x q(x) dx / ∫₀ᴸ q(x) dx, remains at the domain center.
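To illustrate the time stepping, here is a minimal sketch of a first-order ETD step for the q-equation, with the diffusion handled exactly in Fourier space and the reaction and advection terms treated explicitly; the grid and parameter values are placeholders of ours, the reaction term f is passed in as a callable, and the noise and projection discussed above are omitted.

```python
import numpy as np

L_dom, Nx, D, zeta, dt = 100.0, 256, 0.1, 0.8, 1e-2   # placeholder values
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L_dom / Nx)       # spectral wavenumbers
Lk = -D * k**2                                         # linear (diffusion) operator
E = np.exp(Lk * dt)
# phi1 = (e^{L dt} - 1)/L, with the k = 0 limit equal to dt.
phi1 = np.where(Lk == 0, dt, (E - 1) / np.where(Lk == 0, 1.0, Lk))

def etd1_step(q, u, f):
    # One first-order ETD step for dq/dt = L q + N(q, u), where
    # N(q, u) = f(q, u) - (u - zeta) dq/dx is treated explicitly.
    dqdx = np.fft.ifft(1j * k * np.fft.fft(q)).real
    N = f(q, u) - (u - zeta) * dqdx
    return np.fft.ifft(E * np.fft.fft(q) + phi1 * np.fft.fft(N)).real
```

Edge tracking algorithm

In order to find the stable deterministic fixed points of the Barkley model, it is enough to run numerical simulations until convergence, starting from an appropriate initial condition. For example, in order to generate the stable puff state, we initialize with a localized region of turbulence, which turns out to be a configuration within the basin of attraction of the puff state for properly chosen r.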
For finding the unstable fixed points, in particular the relevant edge states between puff and laminar flow (the decay edge) and between turbulent flow and puff or antipuff (the gap edge), we employ edge tracking. The algorithm is implemented as follows. Denote by B(q, u) the map from a configuration (q, u) to its basin of attraction, B ∈ {laminar, puff, turbulent, antipuff, two puffs, ...}. Numerically, we implement this function by integrating the deterministic dynamics until they are stationary and comparing their turbulent mass, q̄ = ∫₀ᴸ q(x) dx, with that of the known fixed points. While in general this comparison would be inconclusive (for example, a slug might have the same turbulent mass as two puffs), it is sufficient to identify the fixed points once the configuration is fully converged and no longer changes. To obtain the deterministic edge state, we then integrate two separate configurations of the system, z_0 = (q_0, u_0) and z_1 = (q_1, u_1), initialized to the two fixed points between which we want to find the edge state, for example B(z_0) = laminar and B(z_1) = puff. Via bisection, we iteratively approach the basin boundary until the distance d between z_0 and z_1 is below some threshold, d(z_0, z_1) < ∆_min, making sure that we retain B(z_0) = laminar and B(z_1) = puff. Since the basin boundary is generally repulsive, z_0 and z_1 will separate over time. Whenever they have separated too much, d(z_0, z_1) > ∆_max, we perform another bisection until they are again close together. This procedure is repeated until the states z_0 and z_1 converge. Effectively, the algorithm integrates the dynamics restricted to the separating sub-manifold, by restricting the dynamics in the unstable direction (the separation between z_0 and z_1) while not interfering with all other directions. The end result is a state which is stable when restricted to the separating manifold, corresponding to a fixed point of the dynamics with a single unstable direction: precisely the "saddle points" or edge states we are interested in.

Bifurcation diagram for the Barkley model

With the edge-tracking algorithm outlined above, the schematic bifurcation diagram shown in Fig. 1 can be computed explicitly for the Barkley model by computing the relevant fixed points and edge states for each value of r. In order to compute edge states efficiently, in particular the gap edge in the puff regime, we employed two additional techniques. First, we used continuation to get a good first guess for the gap edge at a given r, using the previous result of the edge computation at a close-by r. Second, close to the edge we can use a local-in-time heuristic to decide on which side of the basin boundary a configuration is located: if its turbulent mass q̄ is increasing in time, the configuration lies towards the turbulent fixed point, while if q̄ is decreasing in time, the configuration lies towards the puff (or antipuff). While this criterion is only valid close to the edge, it allows us to compute the unstable branch much more efficiently.
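The bisection loop described above can be summarized in a few lines; the sketch below is schematic, with `evolve` and `basin` standing in for the time integrator and the basin classifier, and with the two trial states assumed to always fall into one of the two bracketing basins.

```python
import numpy as np

def edge_track(z0, z1, evolve, basin, d_min=1e-8, d_max=1e-4, T=50.0):
    # z0 and z1 start in different basins, e.g. "laminar" and "puff".
    dist = lambda a, b: np.linalg.norm(a - b)
    b0 = basin(z0)
    while True:
        # Bisect until the pair straddles the boundary within d_min.
        while dist(z0, z1) > d_min:
            zm = 0.5 * (z0 + z1)
            if basin(zm) == b0:
                z0 = zm
            else:
                z1 = zm
        # Integrate along the edge until the pair separates beyond d_max.
        while dist(z0, z1) < d_max:
            z0, z1 = evolve(z0, T), evolve(z1, T)
        yield 0.5 * (z0 + z1)   # current approximation of the edge state
```

C. Asymptotic results for the Barkley model

Here we demonstrate how the general arguments made above manifest themselves in the deterministic Barkley model, using analytical arguments. We will focus on leading-order results in ε ≪ 1, the parameter controlling the slow relaxation of the mean shear u in the model. Above we have denoted front speeds by c_±(u, r) = u - ζ ∓ S(u, r), while in the notation of [9], S(u, r) = √D s(u, r).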
Using standard techniques [33,34], the front speeds can be computed in closed form at leading order in ε. One can then solve explicitly for the velocity u_p at the downstream front of a puff, though this gives a lengthy expression which we omit here. The turbulent fixed point (q_t, u_t) corresponds to the intersection of the u-nullcline with the upper branch of the q-nullcline, defined by

q_+ = 1 + sqrt((r + u - U_0)/(r + δ)),

with u_t the solution of the equation U_0 - u + κ(Ū - u) q_+(u, r) = 0. The turbulent fixed point first appears at r_turb, at the intersection of the u-nullcline with the nose of the q-nullcline, which is at q_t = 1. This gives r_turb = κ(U_0 - Ū)/(1 + κ), with r_turb = 2/3 for our parameters.

The gap edge

We now discuss the gap edge in the limit ε → 0. We note that many characteristics we describe below are identical to those of the decay edge in this limit. We build on the analysis presented in [9] to make our arguments for the properties of the gap edge. To solve for the structure of the gap edge in the limit ε → 0, we may consider u = u_t fixed and solve for q at this fixed u (this is also true for fronts of puffs and antipuffs). Then, assuming a traveling-wave solution at speed c_g and moving into its reference frame, the dynamical equation for q becomes a spatial ODE,

D q'' + (c_g - u_t + ζ) q' + f(q, u_t) = 0.

This is equivalent to a particle with position q moving in a force field -f(q, u), with a linear friction of coefficient c_g - u_t + ζ acting on it. The system being one-dimensional, the force can be written as the derivative of an (inverted) potential V_r(q), with f(q, u) = ∂_q V_r(q), which has maxima at q = 0 and q = q_+(r, u_t). This is an inverted potential compared to the one governing the local dynamics of q, i.e., ∂_t q keeping u fixed and considering a spatially homogeneous solution. The gap edge solution corresponds to a homoclinic trajectory of the one-particle system (11), going from q_t and back, i.e., q(x → ±∞) = q_t with zero "velocity" q'(x → ±∞) = 0. Such a trajectory is possible as long as q = q_t is the lower maximum of the potential compared to q = 0, which in terms of the local dynamics of q corresponds to turbulent flow being a local minimum of the potential while laminar flow is the global minimum. From conservation of energy in the one-particle system (or time-reversal symmetry, where x plays the role of time), such a trajectory requires zero friction (meaning conservative one-particle dynamics), giving c_g = u_t - ζ. For r < r_gap, this situation is depicted in Fig. 4(a). As r increases, the turbulent fixed point becomes more stable: it rises in relative height in the inverted potential, making the homoclinic trajectory approach closer to q = 0 as the (potential) energy of the initial condition increases, until the laminar and turbulent maxima have identical height. At this r = r_gap, the trajectory goes all the way to q = 0 and the homoclinic orbit is made of two heteroclinic orbits connecting the two fixed points. This is the point where the gap edge and the antipuff merge, the fronts of the gap edge becoming fronts of antipuffs which go all the way to/from q = 0, as depicted in Fig. 4(b). The corresponding mathematical details are discussed more thoroughly in a general context in Appendix A of [9].

[Fig. 4. (a) The spatial structure of the gap edge q(x) corresponds to a homoclinic trajectory of a particle moving in the potential V_r(q), with ∂_q V_r(q) = f(q, u) and x playing the role of time; the particle starts at q_t, reaches q_g and returns. (b) When r = r_gap, this homoclinic trajectory goes all the way to q_g = 0.]
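The merging point r_gap can also be located numerically from the equal-height condition on the two maxima of V_r described above; in this sketch the reaction term and parameter values are assumptions consistent with the nullclines quoted earlier, and the root bracket is a guess informed by the estimate r_gap ≈ 0.83 given below.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

U0, Ubar, kappa, delta = 2.0, 1.0, 2.0, 0.1     # assumed parameter values

def f(q, u, r):
    # Assumed reaction term, consistent with the quoted q-nullcline.
    return q * (r + u - U0 - (r + delta) * (q - 1)**2)

def turbulent_point(r):
    qp = lambda u: 1 + np.sqrt(max(r + u - U0, 0.0) / (r + delta))
    u_t = brentq(lambda u: U0 - u + kappa * (Ubar - u) * qp(u), U0 - r, U0)
    return qp(u_t), u_t

def height_difference(r):
    # V_r(q_t) - V_r(0) with V_r(q) = int_0^q f ds; zero at equal maxima.
    q_t, u_t = turbulent_point(r)
    return quad(f, 0.0, q_t, args=(u_t, r))[0]

r_gap = brentq(height_difference, 0.70, 0.95)   # bracket is a guess
print(r_gap)
```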
The antipuff regime

The transition from puffs to slugs happens when u_t(r) = u_p(r), which for our parameters gives r_slug ≈ 0.76, though O(ε) corrections are significant here, since the range r_slug - r_turb is itself of this order. At this r = r_slug, S(u_t, r) = -0.13, i.e., negative, as required for the existence of an antipuff. Furthermore, solving numerically for u_t we obtain S(u_t(r), r) = 0 at r ≈ 0.83, so that r_gap ≈ 0.83 (again, corrections are significant here). Note that r = r_gap is not necessarily the point where weak fronts of the slug [35] stop existing, which instead requires -ζ + S(u_t, r) > 0 [9,24]. Although above we have focused on the case of a unique solution for u_ap, here in the limit ε → 0 there are in fact two possible solutions. A match between front speeds of the antipuff first becomes possible at r_ap ≈ 0.756 < r_slug ≈ 0.76, giving u_ap ≈ 1.8 (for this r, u_t ≈ 1.29). In particular, at r_ap ≈ 0.756 the minimum of the curve c_+(u, r) = u - ζ - S(u, r), located at u = U_0 - r - 9D/8 ≈ 1.8, touches the line c_-(u_t, r); see Fig. 5(a,b). This corresponds to the appearance of one stable and one unstable antipuff in a saddle-node bifurcation, as discussed below. Indeed, at higher 0.756 < r < r_slug there are two intersection points between c_+(u, r) = u - ζ - S(u, r) and the line c_-(u_t(r), r) inside the segment u_t(r) < u < U_0, giving two solutions for u_ap, as in Fig. 5(c). At r = r_slug ≈ 0.76, the larger of the two velocities satisfies u_ap = U_0 = 2, so that its downstream front is identical to that of a puff; see Fig. 5(d). We will discuss how this two-antipuff scenario manifests itself in the bifurcation diagram in the next section. However, while it is probably realized in the Barkley model for very small but finite ε, its region of existence in r is minuscule, 0.756 < r < 0.76, making it in practice indistinguishable from a single antipuff appearing at r_slug. Thus, we could not satisfactorily verify it in numerical simulations.

IV. ALTERNATIVE SCENARIOS FOR THE BIFURCATION DIAGRAM

Here we consider two alternative scenarios to the bifurcation diagram presented in Fig. 1. Remarkably, in these scenarios there is a range of Re for which puffs and antipuffs coexist. Both scenarios appear to be inconsistent with observations for pipe flow, though the differences are subtle and thus could be relevant to other wall-bounded flows where puff-like and slug-like structures occur. The three main assumptions we have made so far are: (i) a continuous transition from puffs to slugs, implying Re_turb < Re_slug; (ii) at Re_slug homogeneous turbulence is metastable compared to laminar flow, corresponding to S(u_t, Re_slug) < 0, as can be measured at the downstream front of a slug; and (iii) there is a unique solution for the antipuff speed u_ap which gives fronts of matching speed. While the first two assumptions can be directly measured, the third is more subtle but could still be checked: it implies that a puff continuously turns into an antipuff when viewed in the q-u plane. That indeed appears to be the case for pipe flow [6], though this issue has not been the focus of a dedicated study. In the following we will assume (i) is satisfied throughout, though we are not aware of a general argument precluding a discontinuous transition from puffs to slugs. We begin by exploring the consequences of breaking assumption (iii) while keeping (i) and (ii).
Indeed, the equation determining the speed of the downstream front of an antipuff does not necessarily have a unique solution: u_ap - S(u_ap, Re) = u_t + S(u_t, Re) can have more than one solution, but at most two, since S(u_ap, Re) is an increasing function of u_ap; thus the u_ap-dependent side of the equation is not necessarily monotonic, but has at most one extremum. If there are indeed two solutions for u_ap, they correspond to the presence of a stable and an unstable antipuff, and we will denote by Re_ap the Reynolds number where they first appear together. Note that Re_ap > Re_turb, since an antipuff is a localized state within homogeneous turbulence. For Re_ap < Re < Re_gap, creating a laminar pocket within homogeneous turbulence will lead to the formation of an antipuff. Thus, the gap edge lies at the boundary between turbulence and the stable antipuff state, and the bifurcation diagram is unchanged for Re_slug < Re < Re_gap. As before, Re_gap corresponds to the point where the gap edge merges with an antipuff. The stable antipuff appears at Re_ap and disappears at Re_gap. Thus, the unstable antipuff must disappear at Re_slug. Indeed, at Re_slug, u_ap = U_0 is a solution, since a slug has matching upstream and downstream front speeds at this Re. Thus, as previously, a puff turns into an antipuff at Re_slug, but here it is the unstable antipuff. Note that slugs, which connect the laminar base flow with homogeneous turbulence, are still contracting (since u_p > u_t) for all Re_turb < Re < Re_slug. However, even though slugs are contracting, if one were to sufficiently decrease the mean flow in the laminar region, the laminar region would contract to a finite length, forming the stable antipuff. The corresponding bifurcation diagram is presented in Fig. 6(a).

As a second alternative, let us briefly discuss the case where assumption (ii) is broken while keeping assumption (i). This corresponds to assuming S(u_t, Re_slug) > 0, but that puffs still continuously turn into slugs at Re_slug. In particular, the condition S(u_t, Re_slug) < ζ for the existence of a weak slug front is assumed to still be satisfied [9]. In this case, Re_gap < Re_slug, so that stable antipuffs disappear before Re_slug. It follows that this is also a regime with two antipuffs, breaking also assumption (iii), with the unstable antipuff disappearing at Re_slug as before. No intermittent turbulent regime can exist in this case. This scenario is sketched in Fig. 6(b).

V. INTERMITTENT TURBULENCE REGIME

As stated above, we propose that the intermittent turbulence regime corresponds to the range Re_slug < Re < Re_gap, so that laminar pockets within homogeneous turbulence observed in simulations of pipes [6] are in fact antipuffs which are excited and subsequently decay. Both excitations and decays are expected to occur through the gap edge. These laminar pockets set the fraction of laminar flow within homogeneous turbulence, and thus play a role similar to that of puffs for the reverse transition from turbulence to laminar flow. Antipuffs, however, do not completely mirror puffs: they can be spontaneously excited from the turbulent state, as it is not absorbing, while on the other hand they cannot split. The fraction of laminar flow in the homogeneous turbulent state is thus controlled by the probabilities of antipuff excitations and decays.
These vary smoothly with Re, excitations becoming rarer and lifetimes becoming shorter as the gap edge grows deeper, as indeed observed in pipe flow [6] and in the Barkley model [9]. Thus, this is not a phase transition, and in particular there is no critical point corresponding to it. We now wish to demonstrate that the laminar pockets within homogeneous turbulence observed in the Barkley model indeed correspond to the excitations and decays of antipuffs. We therefore consider the stochastic Barkley model. The stochastic model had previously been explored at the noise level σ = 0.5 in [9], but this level of noise is so high that laminar flashes are frequent. Thus, observing a single creation and decay event is hard, the pockets' lifetimes are short, and multiple laminar pockets regularly coexist. In order to isolate the creation and decay of a single stochastic laminar pocket, we perform numerical simulations at a lower noise level, σ = 0.22, and in Fig. 7 present a stochastic creation event (left panel) and a stochastic decay event (right panel), both at r = 0.748, which is lower than r_gap for this noise level. In addition, we show the profile of the stochastic laminar pocket in Fig. 8, where we present both the spatial q and u profiles of an average pocket (right) and a q-u plot (left), which includes both the average and the density of individual realizations. The averaging is performed by aligning the structures in space according to the downstream front, which is therefore sharp. Note that this smears the upstream front, making the average less representative there, as is evident in the q-u plot, since the spatial extent of the pocket tends to vary significantly, as seen in Fig. 7. The resemblance of the average structure to the deterministic antipuff is striking. It is also evident, for the mean as well as for individual realizations, that the mean shear u never reaches the laminar value U_0 = 2, a characteristic feature of antipuffs.

VI. CONCLUSION

We have motivated the existence of two novel states, the gap edge and the antipuff, and have discussed how they fit within a bifurcation diagram involving previously known states. Our work motivates the study of antipuffs as well-defined separate states, as well as a search for the gap edge. It further suggests the existence of invariant solutions which have a localized laminar region (e.g., where streamwise vorticity is depleted) embedded in a turbulent (vortical) flow, as those could underlie the gap edge and the antipuff state. Taken together, a unified dynamical picture of the transitional regime emerges: laminar gaps forming within homogeneous turbulence are the mirror images of turbulent patches embedded within laminar flow. Still, the transition from laminar flow to turbulence with increasing Re is not the mirror image of the transition from homogeneous turbulence to laminar flow with decreasing Re. This is a consequence of the absorbing nature of the laminar base flow, which the homogeneous turbulent state does not share. Thus, while the former transition is a proper phase transition, the latter is not. Finally, while we believe the bifurcation diagram we presented is relevant for pipe flow, other alternatives are also possible; we have presented two such alternatives here. In future work, it will be interesting to explore their possible relevance to other wall-bounded flows and the ensuing consequences for the transition to and from turbulence.
Self-Supervised Learning from Non-Object-Centric Images with a Geometric Transformation Sensitive Architecture

Most invariance-based self-supervised methods rely on single object-centric images (e.g., ImageNet images) for pretraining, learning features that are invariant to geometric transformations. However, when images are not object-centric, the semantics of the image can be significantly altered by cropping. Furthermore, as the model becomes insensitive to geometric transformations, it may struggle to capture location information. For this reason, we propose a Geometric Transformation Sensitive Architecture designed to be sensitive to geometric transformations, specifically focusing on four-fold rotation, random crop, and multi-crop. Our method encourages the student to be sensitive by predicting rotation and by using targets that vary with those transformations, obtained by pooling and rotating the teacher feature map. Additionally, we use a patch correspondence loss to encourage correspondence between patches with similar features. This approach allows us to capture long-term dependencies in a more appropriate way than encouraging local-to-global correspondence, which occurs when learning to be insensitive to multi-crop. Our approach demonstrates improved performance when using non-object-centric images as pretraining data compared to other methods that train the model to be insensitive to geometric transformations. We surpass the DINO [Caron et al. [2021b]] baseline in tasks including image classification, semantic segmentation, detection, and instance segmentation, with improvements of 4.9 Top-1 Acc, 3.3 mIoU, 3.4 AP^b, and 2.7 AP^m. Code and pretrained models are publicly available at: https://github.com/bok3948/GTSA

Introduction

Invariance-based methods are one of the primary self-supervised learning approaches for computer vision. These methods learn to be insensitive to various transformations, such as rotations, flips, crops, color jittering, blurring, and random grayscale, which provides an inductive bias that helps with representation learning [Chen and He [2020], Bardes et al. [2022a], Zbontar et al. [2021]]. Augmentations employed in self-supervised learning methods can be divided into two categories: photometric transformations and geometric transformations. Photometric transformations, such as color jittering, Gaussian blurring, and grayscale conversion, involve changes to the appearance of an image, like color, brightness, or contrast. Geometric transformations, including random crop, multi-crop, flip and rotation, deal with changes to the spatial configuration of the image. In the case of pretraining with non-object-centric images, learning invariant features from crop-related geometric transformations can be problematic, because cropped views may not always depict the same object [Purushwalkam and Gupta [2020], Zhang et al. [2022]]. In contrast, object-centric images are less prone to such issues due to their inherent focus on specific objects, which remain semantically consistent across various augmentations. This explains the significant performance drop observed when applying invariant methods to non-object-centric images [Purushwalkam and Gupta [2020], El-Nouby et al. [2021]]. It can also be one of the reasons that, to obtain results comparable with those on curated datasets, a considerably larger amount of uncurated data is required [Goyal et al. [2021]].
Furthermore, when learning to be insensitive to geometric transformations, there is a risk of not capturing location information, and dense prediction models need to be sensitive to these transformations rather than insensitive. Therefore, being insensitive to geometric transformations may not be appropriate. As mentioned earlier, training a model to be insensitive to geometric transformations may introduce noise into learning. However, these transformations can still be beneficial for representation learning, since they prevent pathological training behavior and provide diversity in the inputs. Therefore, we propose a method that focuses on training a model to be sensitive to those transformations instead of insensitive to them. To achieve this, we must provide a target that varies according to the input's geometric transformation during training. To create a target that varies with cropping, we pool the overlapping region from the teacher's feature map, which can be seen as cropping, and provide it to the student as a target. Additionally, to make the model sensitive to four-fold rotations, we rotate the target feature map to align it appropriately with the student input, and we include a prediction task to predict the degree of rotation of the student's input. Furthermore, we use a patch correspondence loss in our approach. When learning invariant features through multi-crop inputs, invariance encourages global-to-local correspondence [Caron et al. [2021b], Caron et al. [2021a]], resulting in the capture of long-term dependencies. Our model instead uses an additional loss that encourages correspondence between patch representations through cosine similarity, allowing us to capture long-term dependencies [Bardes et al. [2022b]]. Unlike encouraging correspondence between randomly selected crops, our approach induces correspondence between patches that are similar in feature space, leading to more accurate correspondence. Our experiments demonstrate that when using non-object-centric images as pretraining data, it is more advantageous to train a model to be sensitive to geometric transformations rather than insensitive. We significantly outperform prominent invariance-based methods on various tasks, including image classification, semantic segmentation, detection, and instance segmentation.

Related Work

Non-Contrastive Learning

Non-contrastive learning methods aim to learn an invariance bias towards transformations by training on different views of the same image, without explicit negative samples [Garrido et al. [2022]]. In the absence of negative samples, non-contrastive learning methods employ various alternative approaches to prevent representation collapse. These include non-contrastive losses that minimize redundancy across embeddings [Bardes et al. [2022a], Zbontar et al. [2021]], clustering-based techniques that maximize the entropy of the average embedding [Caron et al. [2019], Caron et al. [2021a], Goyal et al. [2021], Assran et al. [2022]], centering and sharpening of output features [Caron et al. [2021b]], and heuristic strategies utilizing asymmetric architectural designs with stop-gradient, additional predictors, and momentum encoders [Richemond et al. [2020], Chen and He [2020], Tian et al. [2021b]]. Our method belongs to the non-contrastive learning category and adopts an asymmetric architectural design to prevent representation collapse.
Self-Supervised Learning with Uncurated Datasets

Several self-supervised pretraining methods have been proposed for uncurated datasets, such as the clustering-based methods presented in [Caron et al. [2021a], Tian et al. [2021a]]. These methods have shown good performance even when using uncurated datasets, and [Goyal et al. [2021]] demonstrated the scalability of this approach to larger datasets for increased performance. Additionally, [El-Nouby et al. [2021]] showed that, given sufficient iterations, even a small non-object-centric dataset can yield results comparable to those obtained using a larger, highly curated dataset. However, our approach differs from other methods that aim to adapt to uncurated datasets: while clustering-based techniques are used in these methods, they still learn representations invariant to augmentations, whereas our approach learns features that are sensitive to geometric transformations. On the other hand, [Tian et al. [2021a]] aims to address the shift in the distribution of image classes rather than object-centric bias.

[Figure 1: The Geometric Transformation Sensitive Architecture (GTSA). The teacher receives only global views, while the student receives both global and local views. The learning process is designed to increase the similarity of overlapping regions and to predict four-fold rotation. Additionally, to capture long-term dependencies, GTSA encourages similarity between the teacher's and the student's patch representations by matching these patch representations using cosine similarity.]

Self-Supervised Methods that Learn to be Sensitive to Transformations

Early self-supervised learning methods, such as [Noroozi and Favaro [2017], Yamaguchi et al. [2021]], train the model to be sensitive to transformations by predicting the permutation or rotation applied to the input. As contrastive learning has gained prominence in representation learning, the importance of learning transformation-invariant representations has become increasingly evident [Misra and van der Maaten [2019]]. More recent work has utilized a hybrid approach that is sensitive to some transformations and insensitive to others [Dangovski et al. [2022]]; performance improvements have been achieved by training the model to be sensitive to four-fold rotation while insensitive to other transformations. Similarly, our model also learns to be sensitive to four-fold rotations and insensitive to other, photometric transformations. However, our method additionally becomes sensitive to crop-related transformations.

Methods

In this section, we describe the training procedure for our proposed GTSA method, as illustrated in Figure 1. We adopt an asymmetric Teacher-Student architecture, similar to those in [Richemond et al. [2020], Chen and He [2020]]. The Teacher comprises an encoder and a projector, while the Student includes an encoder, projector, and predictor. Following the multi-crop strategy used in [Caron et al. [2021a], Caron et al. [2021b]], we feed only the global views to the Teacher and both global and local views to the Student. Our objectives are to maximize the similarity between overlapping-region representations and between similar patch representations, and to predict rotation.

Inputs. Similar to [Ziegler and Asano [2022]], we utilized various augmentation techniques, including color jitter, random grayscale, random Gaussian noise, Gaussian blur, random resize crop, and multi-crop. Additionally, we employed four-fold rotation.
For each image, we apply random augmentations and generate G global views and L local views. The inputs are [x_1^g, x_2^g, ..., x_1^l, x_2^l, ...], where g and l indicate global and local views, respectively. Here x represents batchified images, with x ∈ R^{B×C×H×W}, where B is the batch size, C the number of image channels, and H and W the image size.

Teacher and Student. Apart from the additional predictor attached to the Student, the Teacher and Student share the same structural design. We utilized ViT [Dosovitskiy et al. [2021]] as the encoder and employed stacked CNN blocks for the projector, each block comprising a convolution layer, layer normalization [Ba et al. [2016]], GELU activation [Hendrycks and Gimpel [2020]] and a residual connection [He et al. [2015]]. The predictor has a similar architecture to the projector but uses fewer CNN blocks. We denote the predictor and projector as H and U, respectively. Note that neither the projector nor the predictor reduces the spatial resolution of the encoder's output. The Teacher is not updated through gradient descent; instead, its weights follow the Student's weights via an exponential moving average [Tarvainen and Valpola [2018], He et al. [2019]].

Correspondence Region Pooling Operator. We introduce a correspondence region pooling operator, denoted Φ(·). To be sensitive to crop-related augmentations, the student must receive a target that reflects the crop augmentation. The Φ(·) operator serves this purpose by cropping specific locations in feature space: it extracts the overlapping portions between the teacher view and the student view in feature space. To accomplish this, we first calculate the overlap-region bounding boxes in the input space and scale them to match the feature spatial resolution. We then apply the Φ(·) operator to both the student and teacher feature maps. We implement this operator using RoI-Align [He et al. [2018]].

Rotation Operator. To be sensitive to rotation, we propose a rotation operator, denoted R(·). This operator rotates the teacher output feature map according to the input rotation. By using this operator, the student receives a target that reflects the input rotation.

Rotation Predictor. As we employ the rotation prediction pretext task, we extract a vector from the student encoder output using a global average pooling layer [Lin et al. [2014]]. This vector is then input to the rotation predictor, which generates logits for the rotation prediction pretext task. The architecture of this module comprises a linear layer, GELU activation, and a normalization layer. We denote the rotation predictor as P.

Loss Function. We denote the output feature map of the student predictor as z and the output feature map of the teacher's projection layer as z̄, with z ∈ R^{B×D×h_s×w_s} and z̄ ∈ R^{B×D×h_t×w_t}, where B is the batch size, D the feature dimension, h_s and w_s the spatial size of the student feature map, and h_t and w_t the spatial size of the teacher feature map. We apply Φ to both z and z̄, and additionally apply R to Φ(z̄). Both Φ(z) and R(Φ(z̄)) then have the same dimensions: Φ(z), R(Φ(z̄)) ∈ R^{B×D×h_o×w_o}. We then compute the cosine similarity between them along the feature dimension, averaged over the batch index i and over the spatial locations t = 1, ..., T of the operator output, where T = h_o × w_o.
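To make the operators and losses above concrete, here is a minimal PyTorch sketch; the function names, the detaching of the teacher branch, and the exact top-K selection are illustrative choices of ours rather than the authors' released implementation (see the repository linked in the abstract for the latter).

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def pool_overlap(feat, boxes, out_size, spatial_scale):
    # Phi: pool the overlap region from a feature map with RoI-Align;
    # `boxes` holds one (x1, y1, x2, y2) tensor per image, in input coords,
    # mapped to feature coords through `spatial_scale`.
    return roi_align(feat, boxes, output_size=out_size,
                     spatial_scale=spatial_scale, aligned=True)

def rotate_target(feat, n_quarter_turns):
    # R: rotate the teacher feature map to match the student's rotated input.
    return torch.rot90(feat, n_quarter_turns, dims=(-2, -1))

def overlap_loss(z_s, z_t):
    # Negative mean cosine similarity over the T = h_o x w_o pooled locations.
    return -F.cosine_similarity(z_s, z_t.detach(), dim=1).mean()

def rotation_loss(logits, rot_labels):
    # l_rp: four-way classification of the applied rotation (0/90/180/270).
    return F.cross_entropy(logits, rot_labels)

def patch_correspondence_loss(z_s, z_t, top_k):
    # l_pc: match each student patch to its most similar teacher patch and
    # pull only the top_k most confident pairs per sample closer together.
    s = F.normalize(z_s.flatten(2), dim=1)            # (B, D, Ns)
    t = F.normalize(z_t.detach().flatten(2), dim=1)   # (B, D, Nt)
    sim = torch.einsum('bdn,bdm->bnm', s, t)          # pairwise cosine similarities
    best, _ = sim.max(dim=-1)                         # best teacher match per patch
    confident, _ = best.topk(top_k, dim=-1)           # keep confident pairs only
    return -confident.mean()
```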
Here the cosine similarity is calculated as the dot product of the two vectors divided by the product of their magnitudes. For the rotation prediction pretext task, we use a rotation prediction loss, denoted l_rp and designed as a cross-entropy loss, where y_i is the target indicating the rotation angle of the i-th sample and ŷ_i is the rotation predictor's output probability distribution for the i-th sample. Since we use four-fold rotation, the possible rotation angles are [0, 90, 180, 270] degrees. Additionally, we use a patch correspondence loss, denoted l_pc. We pair semantically similar patches from the teacher and student patch representations based on their cosine similarity and make them more alike using the l_pc loss. However, as noise may exist in this process, we do not encourage correspondence between all patches; instead, we only encourage the similarity of the top-K most similar features among all patches, the same as [Bardes et al. [2022b]]. This helps to filter out noisy matches. Here, z_{i,p} refers to the p-th patch representation of the i-th sample from the student, and z̄_{i,p} denotes the patch representation closest to z_{i,p} among the patch representations of the teacher's i-th sample. K represents the total number of matched pairs after filtering. To cover multi-crop scenarios, we use a total loss function in which α and β are hyperparameters that control the impact of l_pc and l_rp, respectively, and G and L denote the total numbers of global and local views.

Experiments

Our experiments comprise three distinct subsections for a comprehensive explanation of our approach. In Section 4.1, we demonstrate the effectiveness of GTSA in learning high-quality representations from non-object-centric images. We pretrained our model on the COCO train2017 dataset [Lin et al. [2015]], a collection composed of non-object-centric images. Then, we evaluated the performance of our model by fine-tuning it on various downstream tasks, specifically classification, detection, instance segmentation, and semantic segmentation. In addition, we pretrained our model on the ADE20K train dataset [Zhou et al. [2018]], which also consists of non-object-centric images. However, due to the larger size of other downstream task datasets compared to the pretraining dataset, we restricted our evaluation to semantic segmentation on the ADE20K dataset. In Section 4.2, we demonstrate that our model operates as intended, showcasing its sensitivity to rotation and crop-related transformations. In Section 4.3, we compile the results of our ablation study: we illustrate the effects of the rotation prediction loss and the patch correspondence loss, and through visualization of the encouraged patch pairs we verify that our method encourages correspondence even between patches that are far apart.

Pretrain Setup. We used the same hyperparameters as DINO as far as possible. Specifically, we set the batch size to 512, the global view size to 224×224, and the local view size to 96×96. We also used a scheduler to start the momentum at 0.996, as in DINO, and gradually increased it to 1. For the optimizer, we employed AdamW [Loshchilov and Hutter [2019]], and we set both α and β to 0.5. When pretraining with the ADE20K train dataset, we changed the jitter strength (a hyperparameter used for color jittering) from 1.0 to 0.2, did not normalize the encoder's output, and set β to 0.25. Apart from these differences, all other settings were the same.
Our default model is ViT-S/16, and we pretrained it for 100 epochs on 8 NVIDIA GeForce RTX 3090 GPUs.

Fine-tuning Baseline. We chose DINO and MoCo v3 as our baseline methods. These methods are highly relevant to our research as they, like us, employ the Vision Transformer (ViT) as an encoder and focus on training the model in a self-supervised manner to be insensitive to transformations. Thus, they provide a compelling counterpoint to our approach, which trains the model to be sensitive to geometric transformations. The official code from these baseline methods was used to produce our results. To ensure a fair and balanced comparison, all methods underwent pretraining under the same conditions: 100 pretraining epochs and the ViT-S/16 encoder. Moreover, the fine-tuning process was executed in exactly the same manner across all methods.

Image Classification. We compared our method with other self-supervised methods in terms of image classification performance when fine-tuning on the iNaturalist 2019 dataset [Horn et al. [2018]]. Table 1 shows that our method outperforms other methods that learn only invariant features. We achieve 4.9 and 10.9 higher accuracy compared to DINO and MoCo v3, respectively, and a 19.1 accuracy improvement compared to random initialization.

Detection and Instance Segmentation. Table 2 shows the performance of our method on the COCO detection and instance segmentation tasks. GTSA outperforms DINO and MoCo v3 by 3.4 and 6.2 AP^b in detection, and by 2.7 and 5.2 AP^m in instance segmentation, respectively. All models were fine-tuned using Mask R-CNN [He et al. [2018]] and FPN [Lin et al. [2017]] under the standard 1x schedule.

Semantic Segmentation. Table 3 reports the performance on ADE20K semantic segmentation using the Acc, mIoU, and mAcc metrics, with all methods pretrained on the COCO train2017 dataset. While DINO achieves 27.3 mIoU, GTSA attains a higher performance of 30.6 mIoU, a 3.3 mIoU improvement. Moreover, our method outperforms MoCo v3 by 7.1 mIoU. All models were fine-tuned using Semantic FPN [Kirillov et al. [2019]] under the standard 40k-iteration schedule, following the same approach as in [Yun et al. [2022]]. Table 4 also reports performance on ADE20K semantic segmentation, but differs in that it uses the ADE20K train dataset as pretraining data. GTSA outperforms DINO and MoCo v3 with an improvement of 2.6 mIoU. The fine-tuning settings are identical to those of Table 3.

Proving sensitivity to geometric transformations

In this section, we present Figure 2 to showcase our model's sensitivity to various transformations. We designed this experiment by measuring the variance of the output under input transformations. Specifically, we generated ten views with a single type of augmentation and fed these views into a model pretrained using the DINO method and another pretrained using our method. We then measured the variance of the encoder output obtained by global average pooling. Here, we utilized the COCO val2017 dataset to compute the mean of the variance, which is denoted on the y-axis. As shown in Figure 2, both DINO and GTSA learned to be invariant to color jittering, resulting in very low variance. However, for four-fold rotation and crop-related transformations, GTSA exhibited a substantially higher variance than DINO. The crop-related transformations were implemented using a random resize crop, creating two global views and eight local views. It is important to note that exactly the same inputs were fed into both DINO and GTSA.
Ablation study. In this section, we demonstrate the performance enhancement achieved through $l_{pc}$ and $l_{rp}$, and we present a figure that visualizes matched pairs. It illustrates that even distant patches are matched, confirming that correspondence is encouraged over long distances. As displayed in Table 5, we set $l$ as the baseline and show the performance improvement brought by the addition of $l_{pc}$ and $l_{rp}$. For simplicity, we pretrain on the ADE20K train dataset for 100 epochs and report the results for Semantic Segmentation on the ADE20K dataset. We observed a 0.4 mIoU increase upon adding $l_{pc}$, and an additional 0.4 mIoU increase when $l_{rp}$ was incorporated.

[Figure 3 caption: Visualization of the pairs matched when computing $l_{pc}$, generated by feeding images from the COCO val2017 dataset into a GTSA model previously trained on the COCO train2017 dataset.]

Figure 3 visualizes matched pairs. We used GTSA, pretrained for 100 epochs on the COCO train2017 dataset, and input images from COCO val2017 to obtain the matched pairs. From the left image, we can see that matching occurs between parts that depict columns, even if they are not precisely the same column. Similarly, in the right image, we see matching between two parts, both depicting a wall, despite being located at a distance from each other. This demonstrates that our method encourages the capture of long-range dependencies, as intended. Conclusion We propose the Geometric Transformation Sensitive Architecture (GTSA), a self-supervised method designed for non-object-centric images. Our approach trains the model to be sensitive to geometric transformations, specifically rotation and crop-related transformations, by utilizing targets that reflect those transformations. Experimental results demonstrate that our method outperforms other transformation-invariant methods when pretrained on non-object-centric images. Limitations and Future Works. Our method does not learn to be sensitive to all types of geometric transformations; specifically, it is trained to be sensitive to four-fold rotations and crop-related transformations. In the future, we aim to explore its effectiveness when made sensitive to a broader range of geometric transformations. Moreover, we will conduct research to achieve superior performance on curated datasets.
2023-04-18T01:16:30.899Z
2023-04-17T00:00:00.000
{ "year": 2023, "sha1": "7316e760e22adc7ed7b65e2c647f4a905883e796", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7316e760e22adc7ed7b65e2c647f4a905883e796", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
118682032
pes2o/s2orc
v3-fos-license
Quaternion Octonion Reformulation of Quantum Chromodynamics We have made an attempt to develop the quaternionic formulation of the Yang-Mills field equations and the octonion reformulation of quantum chromodynamics (QCD). Starting with the Lagrangian density, we have discussed the field equations of SU(2) and SU(3) gauge fields for both cases of global and local gauge symmetries. It has been shown that the three quaternion units explain the structure of the Yang-Mills field while the seven octonion units provide the consistent structure of the $SU(3)_C$ gauge symmetry of quantum chromodynamics. Introduction The role of number systems (hypercomplex numbers) is an important factor in understanding the various theories of physics from the macroscopic to the microscopic level. In elementary particle physics, electromagnetism and the strong and weak nuclear forces are described by a combination of relativity and quantum mechanics called relativistic quantum field theory. The electroweak and strong interactions are described by the Standard Model (SM). The Standard Model unifies the Glashow-Salam-Weinberg (GSW) electroweak theory and the quantum chromodynamics (QCD) theory of strong interactions. According to the celebrated Hurwitz theorem [1], there exist exactly four normed division algebras: R (real numbers), C (complex numbers), H (quaternions) [2,3] and O (octonions) [4,5,6]. All four algebras are alternative, with antisymmetric associators. Real numbers explain classical Newtonian mechanics well, and complex numbers play an important role in explanations beyond that framework, in quantum theory and relativity. Quaternions, being related to the Pauli matrices, explain non-Abelian gauge theory. Quaternions were the very first example of hypercomplex numbers and have had significant impacts on mathematics and physics. Because of their beautiful and unique properties, quaternions have attracted many to study the laws of nature over the field of these numbers. Yet another hypercomplex system, the octonions, may play an important role [6,7,8,9] in understanding the physics of the strong interaction among the color degrees of freedom of quarks. Quaternions naturally unify [10] electromagnetism and the weak force, producing the electroweak SU(2) × U(1) sector of the Standard Model. Octonions have been used in the unification programme for the strong interaction within a successful gauge theory of the fundamental interactions; i.e., octonions naturally extend this unification [11] to produce $SU(3)_c \times SU(2)_w \times U(1)_Y$. In this paper, we have made an attempt to develop the quaternionic formulation of the Yang-Mills field equations and the octonion reformulation of quantum chromodynamics (QCD). Starting with the Lagrangian density, we have discussed the field equations of SU(2) and SU(3) gauge symmetries in terms of quaternions and octonions. It has been shown that the three quaternion units explain the structure of the Yang-Mills field while the seven octonion units provide the consistent structure of the $SU(3)_C$ gauge symmetry of quantum chromodynamics (QCD), as they are connected with the well-known SU(3) Gell-Mann λ matrices. In this case the gauge fields describe the potentials and currents associated with the generalized fields of dyons, particles carrying simultaneously electric and magnetic charges. Let us consider two spin-1/2 fields, $\psi_a$ and $\psi_b$.
The Lagrangian without any interaction is thus defined [12] as

$$\mathcal{L} = \bar{\psi}_a\,(i\gamma^\mu \partial_\mu - m)\,\psi_a + \bar{\psi}_b\,(i\gamma^\mu \partial_\mu - m)\,\psi_b, \qquad (1)$$

where m is the mass of the particle, $\bar{\psi}_a$ and $\bar{\psi}_b$ are respectively the adjoint representations of $\psi_a$ and $\psi_b$, and the γ matrices are defined as

$$\gamma^0 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \gamma^j = \begin{pmatrix} 0 & \sigma_j \\ -\sigma_j & 0 \end{pmatrix}.$$

Here the $\sigma_j$ are the well-known 2 × 2 Pauli spin matrices. The Lagrangian density (1) is thus the sum of two Lagrangians for particles a and b. We can write the above equation more compactly by combining $\psi_a$ and $\psi_b$ into a two-component column vector,

$$\psi = \begin{pmatrix} \psi_a \\ \psi_b \end{pmatrix}, \qquad (3)$$

and accordingly there is the adjoint spinor

$$\bar{\psi} = \left(\bar{\psi}_a \;\; \bar{\psi}_b\right), \qquad (4)$$

where the spinor field ψ is described [13] as a quaternion $\psi = \psi_0 + e_k \psi_k$ (∀ k = 1, 2, 3), with the multiplication rule $e_j e_k = -\delta_{jk} + \epsilon_{jkl}\, e_l$ (∀ j, k, l = 1, 2, 3). Here $\delta_{jk}$ and $\epsilon_{jkl}$ are respectively the Kronecker delta symbol and the three-index Levi-Civita symbol, with their usual definitions. The quaternion conjugates of the quaternion basis elements are $\bar{e}_0 = e_0$ and $\bar{e}_k = -e_k$. Accordingly, the adjoint spinor is $\bar{\psi} = \psi^\dagger \gamma^0$ ($\psi^\dagger$ denotes the Hermitian conjugate spinor), whereas the spinor (3) is described as a quaternion which can be decomposed as

$$\psi = \psi_a + \psi_b\, e_2 = (\psi_0 + e_1 \psi_1) + (\psi_2 - e_1 \psi_3)\, e_2. \qquad (10)$$

This is the symplectic representation of a quaternion in terms of complex-number representations. In equation (10) we have written $\psi_a = (\psi_0 + e_1\psi_1)$ and $\psi_b = (\psi_2 - e_1\psi_3)$, described in terms of the field of real-number representations. Accordingly, we may write the quaternionic form of the Lagrangian in terms of ψ as

$$\mathcal{L} = \bar{\psi}\,(i\gamma^\mu \partial_\mu - m)\,\psi, \qquad (12)$$

where $m = \begin{pmatrix} m_1 & 0 \\ 0 & m_2 \end{pmatrix}$ is the mass matrix, with $m_1$ the mass of the field $\psi_1$ and $m_2$ that of the field $\psi_2$.

Quaternionic Dirac Equation Substituting the values of ψ and $\bar{\psi}$ from equations (3) and (4) into equation (12), we recover equation (1). Defining the Euler-Lagrange equation as

$$\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \psi)} - \frac{\partial \mathcal{L}}{\partial \psi} = 0$$

and taking the variations with respect to $\bar{\psi}_a$ and $\bar{\psi}_b$, we get

$$(i\gamma^\mu \partial_\mu - m_1)\,\psi_a = 0, \qquad (15)$$

$$(i\gamma^\mu \partial_\mu - m_2)\,\psi_b = 0. \qquad (16)$$

Equations (15) and (16) are respectively recalled as the Dirac equations [13] for the spinors $\psi_a$ and $\psi_b$. Similarly, if we take the variations with respect to $\psi_a$ and $\psi_b$, we get

$$i\,\partial_\mu \bar{\psi}_a\, \gamma^\mu + m_1 \bar{\psi}_a = 0, \qquad (17)$$

$$i\,\partial_\mu \bar{\psi}_b\, \gamma^\mu + m_2 \bar{\psi}_b = 0, \qquad (18)$$

which are respectively recalled as the Dirac equations for the adjoint spinors $\bar{\psi}_a$ and $\bar{\psi}_b$. In equations (15-18) the γ matrices are quaternion valued [13]; they satisfy the usual anticommutation relations, $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$. Let us write the Dirac equation in terms of a quaternion-valued spinor ψ. Multiplying equation (16) by the quaternion basis element $e_2$, adding the result to equation (15) and using equation (10), we get the Dirac equation as

$$(i\gamma^\mu \partial_\mu - m)\,\psi = 0. \qquad (22)$$

Similarly, we may write the quaternion conjugate Dirac equation as

$$i\,\partial_\mu \bar{\psi}\, \gamma^\mu + \bar{\psi}\, m = 0. \qquad (23)$$

The Dirac equations (22-23) provide the four-current

$$j^\mu = \bar{\psi}\, \gamma^\mu \psi, \qquad (24)$$

which satisfies the continuity equation $\partial_\mu j^\mu = 0$.

Quaternionic SU(2) Global Gauge Symmetry In a global gauge symmetry, the unitary transformations are independent of space and time. Accordingly, under SU(2) global gauge symmetry, the quaternion spinor ψ transforms as $\psi \mapsto \psi' = U\psi$, where U is a 2 × 2 unitary matrix satisfying $U^\dagger U = U U^\dagger = 1$. On the other hand, the quaternion conjugate spinor transforms as $\bar{\psi} \mapsto \bar{\psi}' = \bar{\psi}\, U^\dagger$, and hence the combination $\bar{\psi}'\psi' = \bar{\psi}\, U^\dagger U\, \psi = \bar{\psi}\psi$ is an invariant quantity. We may thus write any unitary matrix as $U = \exp(iH)$, where H is Hermitian, $H^\dagger = H$. Thus, we may express the Hermitian 2 × 2 matrix in terms of four real numbers, $a_1$, $a_2$, $a_3$, and θ, as

$$H = \theta\,\mathbf{1} + \sigma_j a_j,$$

where $\mathbf{1}$ is the 2 × 2 unit matrix, the $\sigma_j$ are the well-known 2 × 2 Pauli spin matrices, and $e_1, e_2, e_3$ are the quaternion units, which are connected with the Pauli spin matrices as $e_j = -i\sigma_j$. Hence, we write the Hermitian matrix H as $H = \theta\,\mathbf{1} + i\, e_j a_j$, and equation (28) may now be reduced to

$$U = \exp(i\theta)\, \exp(-e_j a_j).$$

For SU(2) global gauge transformations, both θ and $\vec{a}$ are independent of space-time. Here $\exp(i\theta)$ describes the U(1) gauge transformation, while the term $\exp(-e_j a_j)$ represents the non-Abelian SU(2) gauge transformations.
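The quaternion algebra used above is easy to check numerically. Below is a small sketch of ours, encoding a quaternion as a scalar part plus a 3-vector and implementing the Hamilton product, which is equivalent to the rule $e_j e_k = -\delta_{jk} + \epsilon_{jkl}\,e_l$.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as length-4 arrays
    (scalar part first); encodes e_j e_k = -delta_jk + eps_jkl e_l."""
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    out = np.empty(4)
    out[0] = p0 * q0 - np.dot(pv, qv)               # scalar part
    out[1:] = p0 * qv + q0 * pv + np.cross(pv, qv)  # vector part
    return out

e1 = np.array([0., 1., 0., 0.])
e2 = np.array([0., 0., 1., 0.])
print(qmul(e1, e2))                  # e3:  [ 0. 0. 0. 1.]
print(qmul(e1, e1))                  # -1:  [-1. 0. 0. 0.]
print(qmul(e1, e2) + qmul(e2, e1))   # distinct units anticommute: all zeros
```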
Thus, under global SU(2) gauge transformations, the Dirac spinor ψ transforms as $\psi \mapsto \psi' = \exp(-e_j a_j)\,\psi$. The generators $e_i$ of this group obey the commutation relation $[e_i, e_j] = 2\,\epsilon_{ijk}\, e_k$, which implies $e_i e_j \neq e_j e_i$, showing that the elements of the group do not commute, giving rise to the non-Abelian gauge structure. The partial derivative of the spinor ψ accordingly transforms in the same way, $\partial_\mu \psi \mapsto \exp(-e_j a_j)\,\partial_\mu \psi$, since the parameters are constant. As such, the Lagrangian density is invariant under SU(2) global gauge transformations, i.e. δL = 0. Taking the variations and using the Euler-Lagrange equations, the Lagrangian density thus yields the continuity equation

$$\partial_\mu J^\mu_j = 0,$$

where the SU(2) gauge current is defined as $J^\mu_j = \bar{\psi}\,\gamma^\mu e_j\,\psi$, which is the global current of the fermion field.

Quaternionic SU(2) Local Gauge Symmetry For an SU(2) local gauge transformation, the unitary gauge transformation becomes space-time dependent. Replacing U by S in equation (25), we get $\psi \mapsto \psi' = S\psi$, in which $S = \exp(-q\, e_j \zeta_j(x))$, where the parameter $\zeta_j(x)$ is an infinitesimal quantity depending on space and time and q is described as the coupling constant. Consequently, the Lagrangian density (13) is no longer invariant under SU(2) local gauge symmetry, as the partial derivative picks up an extra term. Invariance is restored by the covariant derivative $D_\mu$, defined in terms of two quaternion (Q-) gauge fields,

$$D_\mu \psi = \partial_\mu \psi + e\, e_j A^j_\mu\, \psi + g\, e_j B^j_\mu\, \psi.$$

The two gauge fields $A_\mu$ and $B_\mu$ are respectively associated with the electric and magnetic charges of dyons (i.e. particles carrying simultaneously electric and magnetic charges). Thus the gauge field $\{A_\mu\}$ is coupled with the electric charge, while the gauge field $\{B_\mu\}$ is coupled with the magnetic charge (i.e. the magnetic monopole). These two gauge fields are subject to the corresponding gauge transformations. For the limiting case of infinitesimal transformations of ζ, we may expand S keeping only first-order terms, $S \simeq 1 - q\, e_j \zeta_j(x)$. So, on replacing the partial derivative of the global gauge symmetry by the covariant derivative of the local gauge symmetry, we may write the invariant Lagrangian density for the quaternion SU(2) gauge fields in the form

$$\mathcal{L} = \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi,$$

which yields the current densities of the electric and magnetic charges of dyons, where e is the electric charge and g is the magnetic charge.

We now turn to the octonion algebra O, spanned by $e_0 = 1$ and the seven imaginary units $e_A$ (A = 1, 2, ..., 7), whose multiplication rule (equation (45)) may be written as

$$e_A e_B = -\delta_{AB}\, e_0 + f_{ABC}\, e_C; \qquad [e_A, e_B] = 2 f_{ABC}\, e_C; \qquad \{e_A, e_B\} = -2\,\delta_{AB}\, e_0,$$

where the brackets [ , ] and { , } are used respectively for the commutation and the anticommutation relations, $\delta_{AB}$ is the usual Kronecker delta symbol, and $f_{ABC}$ are the totally antisymmetric octonion structure constants. The octonion conjugate is defined as $\bar{x} = x_0 e_0 - x_A e_A$, where we have used the conjugates of the basis elements, $\bar{e}_0 = e_0$ and $\bar{e}_A = -e_A$. Hence an octonion can be decomposed in terms of its scalar (Sc(x)) and vector (Vec(x)) parts as

$$Sc(x) = \tfrac{1}{2}\,(x + \bar{x}), \qquad Vec(x) = \tfrac{1}{2}\,(x - \bar{x}).$$

The conjugate of a product of two octonions, and of a conjugate itself, is described by $\overline{xy} = \bar{y}\,\bar{x}$ and $\bar{\bar{x}} = x$, while the scalar product of two octonions is defined as

$$\langle x, y \rangle = \tfrac{1}{2}\left(x\bar{y} + y\bar{x}\right) = \sum_{\alpha=0}^{7} x_\alpha y_\alpha.$$

The norm N(x) and inverse $x^{-1}$ (for a nonzero x) of an octonion are respectively defined as

$$N(x) = \bar{x}x = x\bar{x} = \sum_{\alpha=0}^{7} x_\alpha^2\; e_0; \qquad x^{-1} = \frac{\bar{x}}{N(x)}.$$

The norm N(x) of an octonion x is zero if x = 0, and is always positive otherwise. It also satisfies the following property of a normed algebra:

$$N(xy) = N(x)\,N(y).$$

Octonions, however, are not associative in nature and thus do not form a group in their usual form. The non-associativity of the octonion algebra O is captured by the associator $(x, y, z) = (xy)z - x(yz)$, ∀ x, y, z ∈ O, defined for any three octonions. If the associator is totally antisymmetric under exchange of any two of its three variables, i.e. $(x, y, z) = -(z, y, x) = -(y, x, z) = -(x, z, y)$, then the algebra is called alternative. Hence, the octonion algebra is neither commutative nor associative but is alternative.
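The octonion algebra just summarized can likewise be verified numerically. The sketch below (ours) builds the multiplication table from one standard choice of Fano-plane triples (the Cayley-Dickson convention; other sign conventions exist), then checks non-associativity, the antisymmetry of the associator, and the normed-algebra property N(xy) = N(x)N(y).

```python
import numpy as np

# e0 is the identity; each cyclic triple (i, j, k) below encodes e_i e_j = e_k.
TRIPLES = [(1, 2, 3), (1, 4, 5), (2, 4, 6), (3, 4, 7),
           (2, 5, 7), (3, 6, 5), (1, 7, 6)]

M = np.zeros((8, 8, 8))          # M[a, b] = e_a * e_b as an 8-component vector
M[0, :, :] = np.eye(8)           # e0 e_b = e_b
M[:, 0, :] = np.eye(8)           # e_a e0 = e_a
for a in range(1, 8):
    M[a, a, 0] = -1.0            # e_A e_A = -e0
for i, j, k in TRIPLES:
    for a, b, c in [(i, j, k), (j, k, i), (k, i, j)]:
        M[a, b, c] = 1.0         # cyclic order:      e_a e_b =  e_c
        M[b, a, c] = -1.0        # anticyclic order:  e_b e_a = -e_c

def omul(x, y):
    """Octonion product of two 8-component arrays."""
    return np.einsum('a,b,abc->c', x, y, M)

def assoc(x, y, z):
    """Associator (x, y, z) = (xy)z - x(yz)."""
    return omul(omul(x, y), z) - omul(x, omul(y, z))

e = np.eye(8)
print(assoc(e[1], e[2], e[4]))                            # nonzero: non-associative
print(assoc(e[1], e[2], e[4]) + assoc(e[4], e[2], e[1]))  # antisymmetric: zeros
x, y = np.random.randn(8), np.random.randn(8)
print(np.allclose(omul(x, y) @ omul(x, y), (x @ x) * (y @ y)))  # N(xy) = N(x)N(y)
```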
Relation between Octonion and Gell-Mann Matrices Let us establish the relationship between the octonion basis elements $e_A$ and the Gell-Mann λ matrices. Comparing equations (50) and (58), we obtain the correspondence (64). Hence we may describe a one-to-one mapping (interrelationship) between the octonion basis elements and the Gell-Mann λ matrices by using equations (62-63). As such, we get the relationship between the Gell-Mann λ matrices and the octonion units $e_A$ (∀ A = 1, 2, 3, 4, 5, 6, 7).

Octonionic Reformulation of QCD The local gauge theory of the colour SU(3) group gives the theory of QCD. QCD (quantum chromodynamics) is very close to a Yang-Mills (non-Abelian) gauge theory. The SU(2) gauge symmetry discussed above describes the symmetry of the weak interactions. On the other hand, the theory of strong interactions, quantum chromodynamics (QCD), is based on the colour SU(3) group (namely $SU(3)_c$). This group acts on the colour indices of the quark flavours, described in the form of a basic triplet,

$$\psi = \begin{pmatrix} \psi_R \\ \psi_B \\ \psi_G \end{pmatrix},$$

where the indices R, B, and G are the three colours of the quark flavours. Under $SU(3)_c$ symmetry, the spinor ψ transforms as

$$\psi \mapsto \psi' = \exp\!\left(\frac{i}{2}\,\lambda_a\, \alpha_a(x)\right)\psi,$$

where the $\lambda_a$ are the Gell-Mann matrices, a = 1, 2, ..., 8, and the parameters $\alpha_a(x)$ are space-time dependent. We may accordingly develop the octonionic reformulation of quantum chromodynamics (QCD) by replacing the Gell-Mann λ matrices with the octonion basis elements given by equations (65) and (66). Calculating the value of $\lambda_a \alpha_a(x) = \sum_{a=1}^{8} \lambda_a \alpha_a(x)$ and using the relations between the Gell-Mann λ matrices and octonion units given by equations (68), we find the corresponding octonion-valued expression; applying the transformations above, it may be written in a generalized compact form, i.e. in terms of a traceless Hermitian matrix, and (69) then takes its octonionic form. So we may write the locally gauge invariant $SU(3)_c$ Lagrangian density in the following form,

$$\mathcal{L} = \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi,$$

where $D_\mu \psi = \partial_\mu \psi + e\, e_a A^a_\mu\, \psi + g\, e_a B^a_\mu\, \psi$. The two gauge fields $\{A_\mu\}$ and $\{B_\mu\}$ are present in the theory due to the occurrence of, respectively, the electric and magnetic charges on dyons. As such, in the present theory we have two kinds of colour gauge groups, respectively associated with the two gauge fields of the electric and magnetic charges on dyons. Hence the locally gauge covariant Lagrangian density leads to the following expression for the gauge covariant current density of coloured dyons,

$$J^a_\mu = e\,\bar{\psi}\gamma_\mu \psi\; e_a + g\,\bar{\psi}\gamma_\mu \psi\; e_a,$$

which leads to the conservation of the Noetherian current in the octonion formulation of the $SU(3)_c$ gauge theory of quantum chromodynamics (QCD), i.e.

$$\partial^\mu J^a_\mu = 0.$$
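The explicit forms of the λ-e_A mapping are those of equations (62)-(68) of the original; we do not attempt to reproduce them here. As general background for the construction above, the following sketch (ours) builds the standard Gell-Mann matrices and verifies that they are traceless, Hermitian, and satisfy a sample SU(3) commutation relation.

```python
import numpy as np

# The eight standard Gell-Mann matrices of SU(3); l[0] is unused padding.
l = np.zeros((9, 3, 3), dtype=complex)
l[1][0, 1] = l[1][1, 0] = 1
l[2][0, 1], l[2][1, 0] = -1j, 1j
l[3][0, 0], l[3][1, 1] = 1, -1
l[4][0, 2] = l[4][2, 0] = 1
l[5][0, 2], l[5][2, 0] = -1j, 1j
l[6][1, 2] = l[6][2, 1] = 1
l[7][1, 2], l[7][2, 1] = -1j, 1j
l[8] = np.diag([1, 1, -2]) / np.sqrt(3)

for a in range(1, 9):
    assert np.allclose(l[a], l[a].conj().T)   # Hermitian
    assert abs(np.trace(l[a])) < 1e-12        # traceless
# Sample commutation relation: [l1, l2] = 2i l3 (structure constant f_123 = 1)
print(np.allclose(l[1] @ l[2] - l[2] @ l[1], 2j * l[3]))
```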
2010-06-29T09:33:32.000Z
2010-06-29T00:00:00.000
{ "year": 2010, "sha1": "64e21492d1325ae39d2d4b8902eecce9a799e139", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1006.5552", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "64e21492d1325ae39d2d4b8902eecce9a799e139", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1394016
pes2o/s2orc
v3-fos-license
Clustering of HIV-1 Subtypes Based on gp120 V3 Loop electrostatic properties Background The V3 loop of the glycoprotein gp120 of HIV-1 plays an important role in viral entry into cells by utilizing as coreceptor CCR5 or CXCR4, and is implicated in the phenotypic tropisms of HIV viruses. It has been hypothesized that the interaction between the V3 loop and CCR5 or CXCR4 is mediated by electrostatics. We have performed hierarchical clustering analysis of the spatial distributions of electrostatic potentials and charges of V3 loop structures containing consensus sequences of HIV-1 subtypes. Results Although the majority of consensus sequences have a net charge of +3, the spatial distribution of their electrostatic potentials and charges may be a discriminating factor for binding and infectivity. This is demonstrated by the formation of several small subclusters, within major clusters, which indicates common origin but distinct spatial details of electrostatic properties. Some of this information may be present, in a coarse manner, in clustering of sequences, but the spatial details are largely lost. We show the effect of ionic strength on clustering of electrostatic potentials, information that is not present in clustering of charges or sequences. We also make correlations between clustering of electrostatic potentials and net charge, coreceptor selectivity, global prevalence, and geographic distribution. Finally, we interpret coreceptor selectivity based on the N6X7T8|S8X9 sequence glycosylation motif, the specific positive charge location according to the 11/24/25 rule, and the overall charge and electrostatic potential distribution. Conclusions We propose that in addition to the sequence and the net charge of the V3 loop of each subtype, the spatial distributions of electrostatic potentials and charges may also be important factors for receptor recognition and binding and subsequent viral entry into cells. This implies that the overall electrostatic potential is responsible for long-range recognition of the V3 loop with coreceptors CCR5/CXCR4, whereas the charge distribution contributes to the specific short-range interactions responsible for the formation of the bound complex. We also propose a scheme for coreceptor selectivity based on the sequence glycosylation motif, the 11/24/25 rule, and net charge. Background HIV-1 entry into the host cell is mediated by the viral envelope glycoprotein gp120, associated with gp41, and involves, on the host cell surface, the CD4 molecule together with the CCR5 or CXCR4 receptor [1,2]. Upon CD4 binding, a conformational change is induced in gp120, exposing a region that can interact with CCR5 or CXCR4 [2]. CCR5 and CXCR4 belong to the chemokine receptor family, which is part of the G-protein coupled receptor (GPCR) superfamily, a large group of membrane proteins characterized by seven transmembrane α-helices and four extracellular and four intracellular domains. CD4 binding can also induce further conformational changes in the envelope glycoprotein, exposing a glycine-rich region of gp41 which is involved in membrane fusion [3,4]. The envelope glycoprotein gp120 is composed of 400-410 amino acids including 5 variable regions (V1-V5) [2,5,6]. The third variable region of gp120 forms a loop, called the V3 loop, and is composed of 31-39 amino acids. The V3 loop is closed by a disulfide bridge formed by two cysteines and is positively charged.
It consists of three distinct regions: the base (closer to the core of the protein), the tip at the opposite end, and the stem between the base and the tip. The V3 loop is implicated in the phenotypic tropisms of HIV viruses, playing an important role in viral entry by utilizing as coreceptor CCR5 or CXCR4. Viruses utilizing CCR5 are referred to as R5 and are preferentially transmitted, whereas those utilizing CXCR4 are associated with disease progression and are referred to as X4. Considering that HIV viruses undergo mutations at very high rates, it is not unusual for several variants to exist in a given patient sample [7,8]. It has been suggested that when the amino acids at positions 11 and/or 25 of the V3 loop are positively charged, the virus shows preference for selecting CXCR4 as coreceptor, and when the amino acid at position 11 is uncharged or negatively charged and that at position 25 is negatively charged, the virus shows preference for the CCR5 coreceptor [8][9][10][11][12]. This means that a charge switch to positive at position 11 or 25 suggests a switch of coreceptor selection to CXCR4. It has also been suggested that, besides amino acids 11 and 25, amino acid 24 is also involved in coreceptor selection, with the proposition of the so-called "11/24/25" rule [12]. This rule states that positively charged amino acids at one or more of positions 11, 24 or 25 suggest an X4 virus. The V3 loop is solvent exposed, highly charged, and highly dynamic. Its dynamic character is indicated by the fact that the V3 loop is absent in many crystallographic structures because of a lack of resolved electron density. In two available crystallographic structures in which gp120 is stabilized because of multicomponent complex formation, the V3 loop is structurally resolved but with different secondary structure content ([3,6]; Figure 1). Several studies have demonstrated that the V3 loop interacts with the N-terminal extracellular domain of CCR5 (CCR5-Nt) and the extracellular loop 2 (ECL2) [6]. Post-translational modifications by the addition of sulfate groups to two or three of the tyrosines of CCR5-Nt have been shown to be essential in the interaction with gp120 [13][14][15]. The physicochemical mechanism of the gp120:CCR5 interaction is not well understood. Earlier studies have proposed that the interaction between CCR5-Nt and the V3 loop is driven by electrostatics, between a highly positive V3 loop and a highly negative CCR5-Nt [16][17][18][19]. We have previously proposed a correlation of the strength of the electrostatic potential with binding affinities and inhibitory activities for several V3 loop-derived peptides [18]. Another study of V3 loop chimeras has shown that their ability to bind CCR5 is affected by the amino acid composition and charge [11]. The diversity of HIV-1 presents a major challenge in the development of effective treatments. Currently, HIV-1 strains are divided into three distinct genetic groups: M (major), N (non-major, non-outlier), and O (outlier), with variants within group M being responsible for the majority of the infected population. This group is further divided based on the sequence variability of its env and gag genes [7] into ten subtypes or clades, named A through K, and circulating recombinant forms (CRFs). Differences in coreceptor usage, geographical distribution and global prevalence have been demonstrated for several of the identified subtypes [19][20][21][22].
In this study we have modeled the V3 loop of several HIV-1 subtypes using the two available crystal structures with intact V3 loop as templates [3,6] and consensus sequences, which were obtained from the HIV Databases of the Los Alamos National Laboratory [23]. We have performed computational studies to cluster the various subtypes according to similarities of the spatial distributions of their electrostatic potentials and the spatial distributions of their charges. The spatial distributions of individual charges are responsible for generating the spatial distributions of electrostatic potentials, while taking into account dielectric and ionic screening. We have analyzed the resulting clusters to determine correlations of the electrostatic potential distributions and charge distributions with net charge, epidemiological data such as global prevalence and geographical distribution, and coreceptor selection. We have also generated sequence alignment and sequence similarity clusters for all the V3 loop subtypes. Our goal was to perform a clustering analysis of the gp120 V3 loop of HIV-1 at various levels of refinement, based on sequence, net charge, and spatial distribution of electrostatic potential and charge. The electrostatic clustering analysis may be useful in much-needed vaccine, vaccine adjuvant, or inhibitor design against HIV-1 infection [24][25][26]. Methods Our computational framework AESOP (Analysis of Electrostatic Potentials Of Proteins) [27][28][29][30][31] was used to generate theoretical structures of several V3 loop subtypes, to calculate electrostatic potentials, and to cluster their respective spatial distributions of electrostatic potentials. We have also performed clustering analysis of V3 loop subtypes according to their charge distributions and sequence similarities. V3 loop structural templates We used the coordinates of two Protein Data Bank (PDB [32]) files in which the V3 loop was intact as structural templates. The PDB codes are 2B4C [5] and 2QAD [6], both from subtype B. In 2B4C, the gp120 core with V3 of isolate JR-FL was complexed to CD4 (N-terminal two-domain fragment) and the antigen-binding fragment (Fab) of the X5 antibody. In 2QAD, gp120 was in complex with CD4 and a functionally sulfated antibody, 412d. From both structures, we have retained only the coordinates of the V3 loop for our study. The V3 loop in both structures starts at position 296 and ends at position 331. In the case of 2B4C, four amino acids have double conformations, of which conformation A was retained. In both structures, amino acids 310-311 are missing while two amino acids occupy position 322. We have renumbered the atoms and amino acids starting from position 1 and ending at position 35, using Swiss-PDB Viewer (SPDBV, [33]). V3 loop subtype consensus sequences HIV-1 sequences are deposited in the HIV Databases of the Los Alamos National Laboratory [[23]; http://www.hiv.lanl.gov]. Using tools within the database we extracted consensus sequences for the V3 loop of HIV-1. For our study, we isolated the amino acid sequences between and including the first and last cysteines of the V3 loop. The Sequence Search Interface Tool was first used to obtain nucleotide sequences for HIV-1 subtypes. Within this search tool, the parameters selected were: subtype (for example, subtype A), virus (HIV-1), and genomic region (V3). The search result file is the input file for the ElimDupes tool, which compares all the sequences and eliminates any duplicates.
[Figure 1 caption: Molecular models of the V3 loop. (A) Stick representation of backbone and side chains using the gp120 structure with PDB Code 2QAD. (B) Ribbon representation of the backbone using the structure with PDB Code 2QAD. (C) Stick representation of backbone and side chains using the gp120 structure with PDB Code 2B4C. (D) Ribbon representation of the backbone using the gp120 structure with PDB Code 2B4C. The color code for (A) and (C) is: blue, positively charged; red, negatively charged; green, polar; gray, nonpolar. The color code for (B) and (D) is according to secondary structure. Images were generated using VMD [78].]

A cutoff of 93% DNA sequence identity of the env gene was used. The unique sequences file was used as the input file for the HIValign tool, which aligns the sequences based on curated alignments within the database using the Hidden Markov Model (HMM) method. Several options were selected for this tool: align the sequences by HMM, codon-align the sequences, and translate to amino acid. The Simple Consensus Maker tool was then used to obtain a consensus sequence, with the resulting file from HIValign used as the input file. The default parameters were kept, resulting in an alignment with the first sequence identified as the consensus. This procedure was carried out for each subtype and for groups N and O, and the results of the consensus sequence alignment are shown in Table 1. Subtype A includes sub-subtypes A1 and A2, subtype F includes sub-subtypes F1 and F2, and subtype CPX includes the 11 cpx subtypes available in the database. The consensus for subtype D resulted in a 33-amino-acid sequence, because of gaps at positions 24-25. To equalize the length of the D subtype with the 35-amino-acid length of the rest of the subtypes, we calculated amino acid frequencies at positions 24-25 of the D subtype and chose the amino acids with the second highest frequency in the alignments (gaps being the highest frequency). These amino acids were lysine at position 24 and asparagine at position 25 (Table 1). Subtype J and group O each contained two amino acids with exactly the same frequency at a particular location. In the case of subtype J, the amino acids were isoleucine and leucine; for group O, glutamic acid and a gap (introduced by the alignment). For subtype J, isoleucine was selected, and for group O, glutamic acid was selected. Subtypes B and K have the same consensus sequences, and subtypes CPX and H also share identical consensus sequences. Subtypes AB, AE, AG, and CPX are circulating recombinant forms (CRFs). The program Modeller [9v6, 34] was used to create homology models of all subtypes, using the two crystal structures as templates, with the modifications described above. The default optimization and refinement protocol of Modeller was used to generate single models, optimized with conjugate gradients and molecular dynamics-based simulated annealing. Clustering of electrostatic potentials The use of similarity measures for clustering of electrostatic (and other physicochemical) properties is a topic of chemistry and drug design research [35][36][37][38]. Clustering of electrostatic potentials of protein families was introduced by Wade and coworkers [39][40][41][42][43][44][45], including software tools under the name PIPSA [39,40,43,44], and subsequently used or extended by others, including our group [27][28][29][30][31][46][47][48][49][50][51].
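The consensus step just described can be illustrated with a toy function. This is our sketch, assuming pre-aligned, equal-length sequences; it also mimics the tie-breaking used for gap-dominated columns (as done for subtype D positions 24-25) by falling back to the second most frequent residue.

```python
from collections import Counter

def consensus(aligned_seqs):
    """Column-wise consensus of pre-aligned, equal-length sequences.
    If a gap ('-') wins a column, fall back to the runner-up residue,
    as done for the gap-dominated positions of subtype D."""
    out = []
    for col in zip(*aligned_seqs):
        ranked = Counter(col).most_common()
        best = ranked[0][0]
        if best == '-' and len(ranked) > 1:
            best = ranked[1][0]          # second most frequent residue
        out.append(best)
    return ''.join(out)

print(consensus(['CTRPNN', 'CTRPGN', 'CIRP-N']))  # -> 'CTRPNN'
```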
This type of analysis depicts electrostatic similarities of proteins, which can be correlated to biological properties and functions. For our analysis we used the AESOP computational framework [27][28][29][30][31], which provides a platform for elucidating the role of electrostatics, and more specifically the role of ionizable amino acids, in protein association. This is accomplished using theoretical alanine scans or other mutagenesis, in which electrostatic properties are perturbed by systematically removing ionizable amino acids [27][28][29][30][31][48][49][51]. The effects of these perturbations are then quantified through the use of electrostatic similarity clustering and electrostatic free energies of association, to give insights into the contributions of ionizable amino acids to both recognition and binding [27,28,30,31,48,49,51]. Since electrostatics is also known to be an important aspect of protein dynamics and evolution, AESOP also has utilities for analyzing the electrostatics of molecular dynamics trajectories [28] and homologous proteins/protein domains [31,47,50]. Poisson-Boltzmann electrostatic calculations and hierarchical clustering analysis were performed as described elsewhere [27][28][29][30]. The program PDB2PQR [52] was used to prepare the V3 loop coordinates for the electrostatic calculations. The electrostatic similarity distance (ESD) between two proteins a and b was calculated as

$$ESD = \frac{1}{N} \sum_{i,j,k} \frac{\left|\Phi_a(i,j,k) - \Phi_b(i,j,k)\right|}{\max\!\left(\left|\Phi_a(i,j,k)\right|,\, \left|\Phi_b(i,j,k)\right|\right)},$$

where $\Phi_a$ and $\Phi_b$ are the electrostatic potentials of proteins a and b at grid point (i, j, k) and N is the total number of grid points. This error-type relation compares the spatial distributions of electrostatic potentials of pairs of proteins. A matrix of 18 × 18 ESDs was created corresponding to the HIV-1 subtype structures. The normalization factor of the denominator assures small values in the vicinity of the 0-2 range, with 0 corresponding to identical spatial distributions of electrostatic potentials and 2 to totally different ones. Four matrices were constructed for the two sets of structures (from the two templates), with electrostatic potentials calculated at two ionic strength values. Each matrix was analyzed separately. Visualization of the spatial distributions of electrostatic potentials, as isopotential contour surfaces, was accomplished using the program Chimera [55]. The ESD shown above was also applied to cluster subtype sequences based on charge distribution maps calculated using APBS. Hierarchical clustering analysis was performed using the hclust function of R. The clustered data were plotted as dendrograms using the language and statistical computing environment R (Foundation for Statistical Computing: Vienna, Austria, 2009; http://www.R-project.org). Clustering of subtype sequences Alignment of all HIV-1 subtype sequences of Table 1 was performed using ClustalW2 [56]. The score matrix generated by ClustalW2 was used as the input distance file to create a clustering dendrogram using the linkage function of MatLab (The MathWorks Inc., Natick, MA). Importance of V3 loop variability and charges for viral infection HIV is characterized by its ability to frequently mutate, as evidenced by the large number of different isolates and by sequence diversity. A variability "hotspot" is the V3 loop, which is implicated in a number of important functions including coreceptor usage during cell entry. Despite its hypervariable nature, V3 retains a basic function, that of interacting with and modulating its preferential usage of CCR5 and CXCR4, a crucial step in the process of infection and indeed for the survival of the virus [57,58].
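As a concrete illustration of the ESD and clustering machinery defined above, here is our sketch in Python (the study used APBS-computed potential grids and R's hclust; the average-linkage choice below is our assumption, as the linkage method is not stated).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

def esd(phi_a, phi_b):
    """Electrostatic similarity distance between two potential grids:
    0 for identical distributions, approaching 2 for opposite ones."""
    num = np.abs(phi_a - phi_b)
    den = np.maximum(np.abs(phi_a), np.abs(phi_b))
    safe = np.where(den > 0, den, 1.0)           # avoid 0/0 where both vanish
    return np.mean(np.where(den > 0, num / safe, 0.0))

def cluster_subtypes(grids, labels):
    """Pairwise ESD matrix -> average-linkage dendrogram (hclust analogue)."""
    n = len(grids)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = esd(grids[i], grids[j])
    Z = linkage(squareform(d, checks=False), method='average')
    dendrogram(Z, labels=labels, orientation='left')
    plt.xlabel('electrostatic similarity distance')
    plt.show()
```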
With this in mind, we attempted in the present investigation to address the contrasting functions of V3: the frequent mutations necessary to evade host immune responses and, at the same time, the retention of the required interaction with coreceptors on the host cell. In this respect, we explored the combined electrostatic potentials of the amino acids in the V3 loop and their distribution in all HIV-1 subtypes for which the tropism and V3 amino acid sequence are known, in order to exploit canonical rules that might exist. We have performed electrostatic potential calculations of the gp120 V3 loops, using the Poisson-Boltzmann method [59], and clustering analysis [60] of the spatial distributions of electrostatic potentials for several HIV-1 subtypes. The clustering analysis allows the classification of similarities/dissimilarities of the subtypes based on the common property of electrostatic potentials. Electrostatic interaction is expected because, typically, the V3 loop has an excess of positive charge and the putative interacting N-terminal domain of the coreceptor CCR5, and to a lesser extent CXCR4, has an excess of negative charge. We have performed similar clustering analyses for the spatial distributions of charges and for the sequence similarities of HIV-1 subtypes. It is actually the property of charge that many researchers have investigated to shed light on the V3 loop-CCR5/CXCR4 interaction. For example, a recent study has proposed that positively charged amino acids at positions 11, 24 and 25 are involved in coreceptor selection and binding (the "11/24/25" rule [12]). In our study we present an analysis that includes the sequence specificities and charges of V3 loops from various subtypes, but also incorporates the more detailed information that is hidden within the spatial distributions of electrostatic potentials. It is actually the electrostatic potential that is responsible for the recognition of two proteins if they have an excess of opposite net charges. Recognition, which in our protein-protein interaction model refers to the formation of a weak and nonspecific encounter complex, is followed by binding, which is the formation of the specific final complex [27][28][29][30][61][62][63][64][65][66][67][68][69]. Although the origin of the electrostatic potential is unit and partial charges located on the protein surface and interior, the protein net charge does not capture the effect of charge distribution on protein-protein interactions. It is the spatial distributions of electrostatic potentials of two proteins that mediate long-range electrostatic interactions and protein-protein recognition. It is also the spatial distributions of charges of the two proteins that participate in mediating short-range charge-charge (salt bridging or weak Coulombic effects) and charge-dipole or dipole-dipole (hydrogen bonding) interactions and the formation of the final protein complex. The underlying hypothesis is described by the following transitive argument: if the electrostatic potentials and charges mediate protein-protein association, and if association mediates viral entry, we can deduce correlations to virulence by studying the specific properties of electrostatic potentials and charges, such as type (positive/negative), strength, and spatial distributions. These types of correlations are indications of where to look for causalities and may be helpful in predicting viral attributes.
Clustering of electrostatic potentials, charges, and sequences Figure 2 shows the dendrogram that clusters the calculated spatial distributions of V3 loop electrostatic potentials. These calculations were performed using 0 mM ionic strength, depicting the largest magnitude of Coulombic interactions within each structure, unscreened by solvent ions. The calculations were performed using homology model structures derived from the crystallographic structure of gp120 with PDB Code 2QAD and the HIV-1 subtype consensus sequences available in the year 2009 at the HIV Databases of the Los Alamos National Laboratory (Table 1). Clustering has been performed by pairwise comparison of the electrostatic potentials of all subtypes listed in Table 1, as described in Methods. V3 loop subtypes with similar spatial distributions of electrostatic potential cluster together. The V3 loops studied have positive net charge, with the exception of group O, which has -1 net charge (Figure 2). The predominant net charge is +3, appearing in 9 subtypes (A, AE, AG, B, C, D35, G, F, K) and in the sequences of the two crystal structures, 2QAD and 2B4C, which belong to subtype B (Figure 2). Of the remaining subtypes, group N has a net charge of +1, and AB, D, H, J, and CPX have a net charge of +2 (Figure 2). Although subtypes with the same net charge cluster together, there are finer subclusters that discriminate subtypes according to the spatial distribution of electrostatic potentials. For example, among the +2 subtypes: AB and J cluster together; H and CPX cluster together (they are identical); and D clusters on its own. Overall, the +2 subtypes form their own cluster, with the subclusters noted above (Figure 2). The +2/+3 subtypes form a supercluster together. The +1 group N clusters on its own and forms a larger supercluster with the +2/+3 subtypes, whereas the -1 group O clusters entirely on its own (Figure 2). In a dendrogram generated with the more realistic electrostatic potential calculations using 150 mM ionic strength (corresponding to physiological ionic strength in serum), we observe similar overall clustering with local variations (Figure 3): the +2/+3 subtypes again form a supercluster, the +1 group N clusters on its own and forms a larger supercluster with the +2/+3 subtypes, whereas the -1 group O clusters entirely on its own (Figure 3). Coulombic interactions within the V3 loops are screened by solvent ions, which results in less obvious differences in the spatial distributions of electrostatic potentials when inspected visually (e.g., compare the isopotential contours of Figure 3 to Figure 2). Nevertheless, we observe persistent electrostatic clustering patterns for the various subtypes, despite differences in their V3 loop sequences. The clustering of the distribution of charges in space for each subtype is shown in Figure 4. Some clusters within this dendrogram can be found in Figures 2 and 3 (e.g., H and CPX). However, the subtypes are mostly mixed within the +1/+2/+3 supercluster. In general, charge distribution does not depict subtle differences between the subtypes. This is because charges are localized in the structure and are independent of each other. However, electrostatic potentials, generated by these charges, have additional features. First, electrostatic potentials account for dielectric and ionic screening. Because of the latter, we observe differences in the magnitudes and shapes of electrostatic potentials in Figures 2 and 3.
Second, electrostatic potentials account for spatial enhancements (additive effects of potentials with the same sign) or cancellations that are not apparent from the individual charges.

[Figure 2 caption: Electrostatic clustering analysis of the HIV-1 subtypes, using the year 2009 consensus sequences and a structural template derived from the gp120 structure with PDB Code 2QAD. The horizontal axis of the dendrogram represents electrostatic similarity distance. Electrostatic potentials were calculated using ionic strength corresponding to 0 mM salt concentration. Isopotential contours are presented in four different orientations, corresponding to rotations about the vertical axis (indicated in the figure). Isopotential contours are plotted at ± 1 k_B T/e, with blue and red corresponding to positive and negative electrostatic potentials, respectively. The net charge, global prevalence, geographic distribution, and coreceptor selectivity are indicated in the figure for each subtype. N/A denotes that information was not available. The orange boxes highlight clusters with HIV-1 subtypes that have similar electrostatic potential and the same charge. Green circles in the branches of the dendrogram denote intersection points between net charges or infected population. The symbol # refers to the global prevalence of subtype D, which includes D and D35 combined.]

[Figure 4 caption: Charge distribution clustering analysis of the HIV-1 subtypes, from the year 2009 consensus sequences and a structural template derived from the gp120 structure with PDB Code 2QAD. The horizontal axis of the dendrogram represents charge similarity distance. The net charge, global prevalence, coreceptor selectivity and geographies of each subtype are indicated in the figure. N/A denotes that information was not available. Green circles in the branches of the dendrogram denote intersection points between net charges or infected population. The symbol * refers to the global prevalence of subtype B, which includes the two crystal structural templates (from 2QAD and 2B4C). The symbol # refers to the global prevalence of subtype D, which includes D and D35 combined.]

In the sequence-based dendrogram, by contrast, there is mixing of sequences with +2 and +3 net charges. These observations suggest that electrostatic clustering is more detailed, containing more refined charge-related information, than sequence clustering. We also make correlations between the clustering, global prevalence ([21]), and coreceptor selectivity (see below). Subtype C is responsible for almost 50% of the infected population [21]. In the 0 mM data, subtype C forms a cluster together with subtypes A, G, AG, K and B, accounting together for ~85% of the infected population (Figure 2). In the 150 mM data, subtype C forms a cluster together with subtypes G, AG, K, and B, accounting together for ~73% of the infected population (subtype A, corresponding to ~12.3% of the infected population, moved to a neighboring cluster; Figure 3). Geographic distributions [21] are also quoted in Figures 2 and 4.

[Figure 5 caption: Sequence clustering analysis of the HIV-1 subtypes, from the 2009 consensus sequences, based on sequence similarity. The horizontal axis of the dendrogram represents sequence similarity distance. Global prevalence, coreceptor selectivity and geographic distribution of each subtype are indicated in the figure. N/A denotes that information was not available. The green box highlights sequences that belong to subtype D, while the orange box highlights the two crystal structural templates (from 2QAD and 2B4C), which belong to subtype B. The * refers to the global prevalence of subtype B, which includes the two crystal structure templates.]
Clustering and structural variability For many years the intact structure of the V3 loop in gp120 was elusive, presumably because of its dynamic character. This was alleviated in the crystal structures 2QAD and 2B4C, which contain multi-protein complexes that stabilize gp120 and the V3 loop. (In both crystal structures, the V3 loop is stabilized by contacting the antibody components of the multi-protein complex.) The dynamic character of the V3 loop can be deduced by observing that its conformation is significantly different in the two crystal structures, 2QAD and 2B4C (Figure 1), despite the fact that they differ only in two conservative mutations (Q/N and F/L, Table 1). To assess the degree to which V3 loop dynamics affect its electrostatic properties, at least using the two extreme conformations of the crystal structures, we performed similar clustering analyses for electrostatic potentials and charges using the 2B4C structure (Additional Files 1, 2 and 3). Electrostatic potential clustering at 0 mM ionic strength (Additional File 1) is similar to the corresponding data of the 2QAD structure (Figure 2). However, there are differences in the 150 mM data (Additional File 2 and Figure 3), i.e. +2 subtypes are scrambled within the +3 subtype clusters. The difference between the 150 mM clustering data from the two crystal structures originates from their conformational variability, which results in different charge distributions and different enhancements or cancellations of positive/negative electrostatic potential distributions. Such differences are not observed in the 0 mM data, because of the lack of ionic screening, resulting in a more uniform distribution of the dominant electrostatic potential (here positive, with the exception of group O). As in the case of 2QAD, in 2B4C clustering of the spatial distributions of charges does not depict the fine clustering of electrostatic potential similarities/dissimilarities (compare Additional Files 1 and 2). Also, as in the case of 2QAD, in 2B4C electrostatic clustering is more detailed, containing refined charge-related information not present in sequence clustering (compare Additional Files 1, 2 and 3, and Figure 5).
Additionally, the analysis based on each crystallographic template was also performed twice, using ionic strengths corresponding to counterion concentrations of 0 and 150 mM, resulting in a total of 4 electrostatic similarity analyses (Figures 2 and 3, and Additional Files 1 and 2). Calculations at 0 mM ionic strength produce electrostatic potentials which are more dispersed and smoother, not as affected by the underlying structure as the 150 mM potentials, whereas calculations at 150 mM potentials, in addition to representing physiological conditions, are more dependent on the underlying structural details. As a test to assess the effects of local flexibility on the reliability of our electrostatic potential similarity analysis, we produced 5 homology models for each of the two V3 loop sequences corresponding to those of the crystallographic structures. This was made possible with Modeller, by back-predicting structures using the crystallographic template structures from 2B4C and 2QAD. When comparing the 5 homology models to their actual crystallographic template we observe that there is only slight variation, occurring mainly because of different side chain rotamers. We performed electrostatic potential calculations for each set of models at both 0 and 150 mM ionic strength, and computed electrostatic similarities between the electrostatic potentials of each of the 5 homology models and the electrostatic potential of the corresponding template structure. The means and standard deviations of the calculated electrostatic similarities for the models of each template structure at both ionic strengths, are shown in Table 2. It is observed that the electrostatic potentials calculated for the homology models at 0 mM ionic strength were quite similar to those of the template structure, since the mean ESD is 0.1 for both template structures (Table 2). When looking at the dendrogram of Figure 2, which was calculated at 0 mM ionic strength, we notice that an ESD value of 0.1 is lower than the branches of most clusters, suggesting that such variation is unlikely to significantly affect the overall clustering. When looking at the 150 mM data we observe that the mean ESDs are a little higher at a value of~0.4, as anticipated given the less smooth and more detailed electrostatic potentials compared to those at 0 mM. However, by analyzing the 150 mM dendrogram in Figure 3, we observe that it is unlikely that these variations would have a dramatic effect on clustering either, since once again the 0.4 value is near the ESD of most pairings. These tests show that the homology modeling procedure does not exactly reproduce the parent potential, but the variations observed are acceptable given the local flexibility of the small V3 loop peptides. A previous study of the effect of homology modeling on electrostatic similarity calculations has concluded that the variation of electrostatic potentials in homology models and deviations from electrostatic potentials corresponding to experimental structures is comparable to electrostatic potential variations within NMR ensembles of structures or within molecular dynamics trajectories [39]. In our case, the consensus electrostatic potentials resulting from homology modeling based on two structural templates and at two ionic strengths provide electrostatic fingerprints that account for sequence variability and structural flexibility. These fingerprints can be used to understand the binding properties of each subtype and to predict the classification of new sequences. 
Sequence, glycosylation, and charge rules for coreceptor selectivity Because there are no X4-tropic consensus sequences in the 2009 data, with the exception of the non-consensus sequence of crystal structure 2B4C (Figure 2), we resorted to sequence, glycosylation, and charge rules to present a predictive scheme for coreceptor selectivity. Coreceptor selection by HIV-1 is known to be influenced by the charge of the V3 loop, the amino acid types at specific locations, and the presence of glycosylation sites. Differences in coreceptor selection by HIV-1 subtypes have been shown by experimental studies [12,20,70,71], and computationally predicted [72][73][74][75][76], although the effectiveness of the predictions is not conclusive. Based on previous studies and renewed thinking with respect to net charge, we used several criteria for coreceptor selection, shown in Figure 6. If the glycosylation motif ($N_6X_7T_8|S_8X_9$, where X ≠ Pro and N is the glycosylation site) is absent from the V3 loop sequence, the virus will show preference toward CXCR4 as coreceptor. Experimental studies have demonstrated that loss of glycosylation sites in the V3 loop is associated with selection of CXCR4 [70,71]. If the $N_6X_7T_8|S_8X_9$ motif is present, the coreceptor selection will be influenced by the amino acids at positions 11, 24, and 25 (of the "11/24/25" rule); if none of these amino acids is positively charged, the virus will show preference toward CCR5 [12]. We propose that if the $N_6X_7T_8|S_8X_9$ glycosylation motif is present and any of the amino acids at positions 11, 24 and 25 are positively charged, coreceptor preference will be governed by the net charge of the V3 loop sequence. If the net charge of the V3 loop is > 5, the virus will show preference toward CXCR4. Experimental studies have suggested that a high charge in V3 is associated with loss of the glycosylation site and utilization of CXCR4 [71]; however, if the net charge of the V3 loop is ≤ 5, the virus will show preference for CCR5. Coreceptor selection will also be affected by the presence and number of acidic chemical groups, like sialic acids, in the glycans. Typically, the glycans can have up to four sialic acids, each adding one negative charge to the loop [77]. Thus, the presence of glycans may reduce the net charge of sequences with an amino acid net charge of > 5 to ≤ 5. This means that a sequence classified as X4-tropic based on amino acid net charge can be reclassified as R5-tropic using a net charge based on amino acids and glycans. Because the number of sialic acids is not known, sequences falling in this category are classified as X4-, R5- or dual-tropic (Figure 6). It should be noted that at lower V3 loop net charges (+3, +4), no effect was seen upon alteration of N-glycosylation [71]. In our interpretation, if glycosylation takes place, it lowers the positive net charge even more, and thus the sequence remains within the R5-tropic definition according to the scheme of Figure 6. We have tested the flow chart of Figure 6 with experimental data for a series of R5- and X4-tropic sequences [70,71] and found consistency between the predicted and experimentally derived tropisms. All consensus sequences studied here, and the sequence of the 2QAD crystal structure, are R5-tropic according to the scheme of Figure 6, perhaps because CCR5 is the first viral preference for asymptomatic cell infection prior to switching to CXCR4, and an insufficient number of X4-tropic sequences is available for consensus.
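The decision scheme of Figure 6 transcribes directly into code. Below is our sketch of that flow chart; the exact residue sets (we count K and R as positive and D and E as acidic) and the requirement that both X positions of the motif differ from proline are our reading of the rules above.

```python
import re

POSITIVE, ACIDIC = set('KR'), set('DE')

def net_charge(seq):
    """Amino-acid net charge: basic residues minus acidic residues."""
    return sum(aa in POSITIVE for aa in seq) - sum(aa in ACIDIC for aa in seq)

def predict_tropism(v3):
    """Coreceptor preference for a 35-residue V3 loop (position 1 = index 0),
    following the flow chart of Figure 6."""
    # Step 1: N6-X7-T8/S8-X9 glycosylation motif, with X != Pro
    if not re.fullmatch(r'N[^P][TS][^P]', v3[5:9]):
        return 'X4'
    # Step 2: the 11/24/25 rule -- no positive residue at any position -> R5
    if not any(v3[p - 1] in POSITIVE for p in (11, 24, 25)):
        return 'R5'
    # Step 3: net charge; sialylated glycans may pull charges > 5 back down,
    # so such sequences remain ambiguous (X4, R5, or dual tropic)
    return 'X4 / R5 / dual' if net_charge(v3) > 5 else 'R5'
```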
However, individual patients infected with X4-tropic viruses in the aforementioned data of Refs. [70,71] have V3 loop sequences which are classified as X4-tropic using the scheme of Figure 6. It is likely that as CCR5 receptors are being depleted, the virus evolves through mutational pressure toward increasing the positive charge of the V3 loop for more efficient recognition of cells with CXCR4 receptors. This may be because the N-terminal domain of CXCR4 has a smaller negative net charge (and electrostatic potential) than that of CCR5.

[Figure 6 caption: Flow chart for prediction of HIV-1 coreceptor selectivity based on V3 loop sequence and charge properties. This scheme is based on the presence of the $N_6X_7T_8|S_8X_9$ sequence/glycosylation motif [71], the presence of a positive amino acid at sequence positions 11, 24, and 25 (the 11/24/25 rule) [12], and the net charge. The presence of acidic chemical groups in the glycosylation patterns (e.g., sialic acids) could affect the charge of the V3 loop, thus affecting the coreceptor selection. Therefore, the virus can use CXCR4, CCR5 or both receptors for cell entry (dual tropic).]

Conclusions In overview, we have performed clustering analysis to distinguish the electrostatic contributions to recognition and binding for the 2009 consensus sequences of the V3 loop of HIV-1 gp120. Our analysis is based on a two-step association model, which distinguishes recognition (formation of a weak nonspecific encounter complex) from binding (formation of a strong specific final complex). Clustering of the spatial distributions of electrostatic potentials (in the protein exterior and interior) depicts the significance of long-range electrostatic interactions for the recognition of the V3 loop by the extracellular loops of CCR5/CXCR4. Clustering of the spatial distributions of charges (on the protein surface and interior) provides information on the significance of individual charges in short-range electrostatic interactions for the binding of the V3 loop to CCR5/CXCR4. This analysis clusters the V3 loop consensus sequences according to the similarities/dissimilarities of their electrostatic potentials and charges. Although clustering of charges and clustering of electrostatic potentials share similarities, they are in general different, with the former emphasizing local effects and the latter emphasizing macroscopic effects. In addition, electrostatic potentials are sensitive to ionic strength effects, which is not the case for charges. This type of clustering, at the level of a specific physicochemical property, is not depicted in the widely used clustering of sequences, although conceptually sequences are closer to charges, as they contain alignments of amino acids with specific physicochemical properties, including charge. The major advantage of charges and electrostatic potentials is that they contain information on spatial physicochemical details, which is not present in sequences. Clustering of charges and electrostatic potentials provides a refined analysis, compared to clustering of sequences, for proteins in which electrostatics is the driving force for association, as is the case for the gp120 V3 loop. The clustering of electrostatic potentials is of particular importance for inhibitor design and eventually for anti-HIV drug design. As we have shown previously for the case of short peptides derived from the V3 loop of gp120, scrambling of charges within the sequence does not affect binding to an N-terminal peptide of CCR5 or inhibition in infectivity assays [18,19].
The magnitude of the electrostatic potential was in general proportional to net charge for highly positively charged V3 loop-derived peptides (with additive electrostatic potential property), and correlated well with binding and inhibition data. In the case of the flexible and variable V3 loop, targeting the recognition process, and specifically targeting the bulk physicochemical property of the electrostatic potential, may be an efficient avenue for drug design. This may be possible as long as the spatial distribution of the electrostatic potential remains largely invariable despite the dynamic character of the V3 loop. In the present study, we provide a database of electrostatic property classification for consensus sequences of gp120, at the V3 loop level, for the time point of year 2009. We also provide correlations with prevalence, geographic distribution, and coreceptor selectivity. Coreceptor selectivity depends on the specific N₆X₇T₈|S₈X₉ sequence motif, the specific positive charge location according to the 11/24/25 rule, and the overall charge and electrostatic potential distribution, mediated not only by charged amino acid side chains but also by glycosylation patterns. For this reason, an elaborate scheme for determining coreceptor selectivity is presented.

Additional material

Additional File 1: Electrostatic potential clustering analysis of the HIV-1 subtypes, from the 2009 consensus, using the structure with PDB code 2B4C as template. The horizontal axis of the dendrogram represents electrostatic similarity distance. Electrostatic potentials were calculated using ionic strength corresponding to 0 mM salt concentration. Isopotential contours are presented in 4 different orientations, corresponding to rotations about the vertical axis. Isopotential contours are plotted at ±1 kBT/e, with blue and red corresponding to positive and negative electrostatic potentials, respectively. The net charge, global prevalence, coreceptor selectivity, and geographic distribution are indicated in the figure for each subtype. N/A denotes that information was not available. The orange boxes highlight clusters with HIV-1 subtypes that have similar electrostatic potential. Green circles in the branches of the dendrogram denote intersection points between net charges or infected population. The * refers to the global prevalence of subtype B, which includes the two structural templates (2B4C and 2QAD). The # refers to the global prevalence of subtype D, which includes D and D35.

Additional File 2: Electrostatic potential clustering analysis of the HIV-1 subtypes, from the 2009 consensus, using the structure with PDB code 2B4C as template. The horizontal axis of the dendrogram represents electrostatic similarity distance. Electrostatic potentials were calculated using ionic strength corresponding to 150 mM salt concentration. Isopotential contours are presented in 4 different orientations, corresponding to rotations about the vertical axis. Isopotential contours are plotted at ±1 kBT/e, with blue and red corresponding to positive and negative electrostatic potentials, respectively. The orange box highlights clusters with HIV-1 subtypes that have similar electrostatic potential and the same charge. Green circles in the branches of the dendrogram denote intersection points between net charges or infected population.

Additional File 3: Charge distribution clustering analysis of the HIV-1 subtypes, from the 2009 consensus, using the structure with PDB code 2B4C as template.
The horizontal axis of the dendrogram represents charge similarity distance. The net charge, global prevalence, coreceptor selectivity, and geographic distribution are indicated in the figure for each subtype. N/A denotes that information was not available. Green circles in the branches of the dendrogram denote intersection points between net charges or infected population. The * refers to the global prevalence of subtype B, which includes the two structural templates (2B4C and 2QAD). The # refers to the global prevalence of subtype D, which includes D and D35.
Intracellular Context Affects Levels of a Chemically Dependent Destabilizing Domain

The ability to regulate protein levels in live cells is crucial to understanding protein function. In the interest of advancing the tool set for protein perturbation, we developed a protein destabilizing domain (DD) that can confer its instability to a fused protein of interest. This destabilization and consequent degradation can be rescued in a reversible and dose-dependent manner with the addition of a small molecule that is specific for the DD, Shield-1. Proteins encounter different local protein quality control (QC) machinery when targeted to cellular compartments such as the mitochondrial matrix or endoplasmic reticulum (ER). These varied environments could have profound effects on the levels and regulation of the cytoplasmically derived DD. Here we show that DD fusions in the cytoplasm or nucleus can be efficiently degraded in mammalian cells; however, targeting fusions to the mitochondrial matrix or ER lumen leads to accumulation even in the absence of Shield-1. Additionally, we characterize the behavior of the DD with perturbants that modulate protein production, degradation, and local protein QC machinery. Chemical induction of the unfolded protein response in the ER results in decreased levels of an ER-targeted DD, indicating the sensitivity of the DD to the degradation environment. These data reinforce that the DD is an effective tool for protein perturbation, show that the local QC machinery affects levels of the DD, and suggest that the DD may be a useful probe for monitoring protein quality control machinery.

Introduction

Proteins are important for almost every cellular process. Accordingly, a significant portion of modern biology is devoted to studying the production and interactions of proteins. As biologists gain a quantitative understanding of the timing, concentration, and spatial localization important for protein function, molecular tools allowing for precise cellular perturbations are vital [1]. Consequently, we developed a small, inherently unstable protein domain based on the FK506- and rapamycin-binding protein (FKBP12), termed a destabilizing domain (DD) [2]. This instability can be conferred to a genetically fused protein of interest, and the resulting fusion protein is rapidly degraded in the absence of stabilizing ligand. The addition of a specific small molecule ligand, Shield-1, can rescue the fusion protein from degradation in a rapid, dose-dependent, and reversible manner. This system has been widely applied in a variety of cell types and organisms [3,4,5,6,7,8,9,10,11]. The definitive mechanism of DD regulation has not been fully elucidated, although it is known that cytoplasmic DD degradation is mediated by the ubiquitin-proteasome system [12]. By targeting DD fusions to the endoplasmic reticulum (ER) we found that Shield-1 could regulate extracellular, secreted proteins over 1-2 orders of magnitude [3]. However, we also noticed elevated levels of DD fusions that co-localized with the ER in the absence of Shield-1. These observations precipitated the idea that the local degradation and quality control machinery specific to each subcellular locale may significantly affect DD levels and ligand-dependent regulation, thus warranting further investigation of the technology. In the last 30 years considerable progress has been made toward determining the machinery of protein homeostasis in the cell.
Most notably, the ubiquitin-proteasome system (UPS) is a general mechanism for protein degradation in the cytosol and degrades most cytoplasmic substrates [13,14]. The UPS functions via a series of protein interactions that modify substrates with ubiquitin and target them to the proteasome for degradation. Recently the focus has increased on compartmental degradation such as ER-associated degradation (ERAD). This work has led to the discovery of two important sets of proteins that are integral to ER compartment homeostasis and which function in concert with ER chaperones and folding enzymes, such as BiP, calnexin, calreticulin, and EDEM. The first set is uniquely devoted to ERAD and the biochemical interactions that remove misfolded substrates from the ER [15]. The second set of proteins controls the ER unfolded protein response (UPR) and allows the cell to adapt to misfolded substrates in the ER [16]. Similarly, the mitochondria have their own molecular chaperones, proteases, and mechanisms of dynamic response to misfolded protein stress [17]. As the degradation of the DD appears to be proteasome dependent, and the UPS functions within the cytoplasm, we sought to test the behavior of the DD in various cellular compartments in conjunction with perturbants that modulate protein production, degradation, and local protein QC machinery. Our results reinforce our previous work showing that the DD effectively regulates protein levels in the cytoplasm, nucleus, and through the ER. We show for the first time that the ER and mitochondria have limited ability to recognize and/or degrade the DD in the absence of Shield-1, based on fluorescence microscopy, flow cytometry, and immunoblot. The induction of protein quality control machinery in the ER significantly reduces the basal levels of the DD protein in the ER in the absence of Shield-1, suggesting that the ER, unlike the cytoplasm, is tolerant of elevated levels of the DD. To further explore whether the DD could initiate the ER UPR upon Shield-1 washout (i.e., switching from secretion to degradation of the DD), we show that the DD proteins in the ER were not capable of inducing the UPR as measured by XBP1 splicing. These studies provide insights into how efficiently the DD functions as a tool for protein perturbation in diverse cellular environments and how it can be affected by changes in the local degradation machinery.

Results

We made several genetic constructs encoding fluorescent and luminescent proteins fused to the DD to test how each cellular compartment would respond to the DD and perturbation with Shield-1 (Table 1). We genetically fused the DD to the N-terminus of YFP to create the cytoplasmic, cDD, cell line. To generate a nuclear-localized DD, nDD, we added the nuclear localization sequence from the SV40 large T-antigen to the N-terminus of the DD-YFP construct. In the mitochondria we tested both the N- and C-terminal orientation of a DD relative to Venus fluorescent protein, mDDn and mDDc. The mitochondrial DD reporter constructs contain the mitochondrial matrix targeting sequence from aldehyde dehydrogenase 2 (ALDH2, [18]). The ER DD fluorescent reporter, eDD, was made by fusing the secretion signal from Gaussia princeps secreted luciferase (GLuc, [19]) to the N-terminus of DD-GFP. To create an optical, secreted extracellular reporter protein, eDDs, a functional GLuc was cloned in the place of the ER-targeted GFP.
Targeted DD constructs were introduced into HEK293 cells by retroviral transduction, and drug selection produced stable populations containing the DD (Figure S1). The chemical dependence of the destabilizing domain allows the quantitative comparison of protein levels in each cell line after treatment with Shield-1 or vehicle control, and with small molecule perturbants of the translation, degradation, secretion, and local quality control machinery. Only qualitative comparisons may be made between the raw fluorescence intensity values across the cell lines, since targeting sequences affect the expression levels of the fusions, there are differences in retroviral transduction, and we used different variants of a fluorescent reporter protein. For example, we used Venus fluorescent protein for the mitochondrially targeted mDDn and mDDc as it is more tolerant of acidic environments [20]. An additional caveat of these experiments is that we cannot be certain of the relative penetrance of Shield-1 into each compartment. Despite these limitations, our results can be generalized and used as a benchmark for future studies using the DD technology in subcellular compartments, and they suggest the importance of the local QC machinery for functional regulation.

Cytoplasmic and nuclear destabilizing domains

We first tested the change in fluorescence in the cytoplasm and nucleus using cDD and nDD cells after treatment with vehicle or Shield-1 (1 μM). Both cDD and nDD cell lines displayed Shield-1-dependent fluorescence after overnight treatment using fluorescence microscopy (Figure 1A and B). cDD cells show diffuse cytoplasmic fluorescence, while the nDD fluorescence colocalizes with Hoechst nuclear stain, indicating localization of the fusion protein to the nucleus. Quantitative fluorescence levels were assayed using flow cytometry in cDD and nDD cells that were treated with vehicle or Shield-1 (2 μM) for 6 hours, a shorter time course allowing the concurrent treatment with other small molecule perturbants. DD fusions in the cytoplasm had an 11.3-fold induction of signal while fusions in the nucleus had a 3.7-fold induction (Figure 1C and D). Chemical inhibitors of protein translation, cycloheximide (CHX), and the proteasome, MG132, were used to assess production and degradation of the DD fusions in the cytoplasm and nucleus. Each sample was treated with vehicle or Shield-1 and simultaneously treated with CHX (5 μg/mL), MG132 (5 μM), or co-treated with both for 6 hours. Fluorescence levels were quantified using flow cytometry (Figure 1C and D). CHX decreased the background fluorescence levels without Shield-1 treatment in cDD cells (p < 0.005). MG132 increased background fluorescence levels (p < 0.005), indicating decreased DD fusion degradation after proteasome blockade and supporting our previous data suggesting that destabilized fluorescent proteins are degraded via the UPS [2,12]. In the presence of Shield-1 and MG132, fluorescence levels were lower than in cells treated with Shield-1 alone. When cDD and nDD cells were treated with CHX, Shield-1 did not cause as drastic an induction of fluorescence, 3.0-fold and 1.7-fold respectively (Figure 1C and D). As expected, treating with both CHX and MG132 led to little Shield-1-dependent regulation of fluorescence. Neither untransduced cells nor cells constitutively expressing DD-free fluorescent protein showed significant changes in fluorescence when treated with any of the small molecules as described above (Figure S2).
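The fold inductions quoted above (e.g., 11.3-fold for cDD) are ratios of mean fluorescence intensities between Shield-1- and vehicle-treated populations. A minimal sketch of that bookkeeping follows; the replicate values are placeholders for illustration, not data from the paper.

```python
import numpy as np

# Placeholder replicate MFI values (arbitrary units); illustrative only.
cdd_vehicle = np.array([101.0, 98.0, 102.0])      # vehicle-treated replicates
cdd_shield = np.array([1130.0, 1095.0, 1160.0])   # Shield-1-treated replicates

def fold_induction(treated: np.ndarray, control: np.ndarray) -> float:
    """Fold induction: ratio of mean fluorescence, treated over control."""
    return float(np.mean(treated) / np.mean(control))

print(f"cDD fold induction: {fold_induction(cdd_shield, cdd_vehicle):.1f}x")
```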
Mitochondrial destabilizing domains

Mitochondrial DD cell lines, mDDn and mDDc, had a high fluorescence background in the absence of Shield-1 (Figure 2A and B). Colocalization of the DD fusion with a MitoTracker orange stain indicated proper targeting in both mitochondrial DD cell lines, mDDn and mDDc, via the ALDH2 matrix targeting sequence. While mDDn fluorescence was solely targeted to mitochondria in the presence and absence of Shield-1, based on colocalization of the fluorescent protein with the MitoTracker dye, the addition of Shield-1 caused fluorescence signal localized to both the mitochondria and cytoplasm in mDDc cells (Figure 2B). Flow cytometry indicated that Shield-1 does not significantly affect the levels of mDDn (Figure 2C), contrasting with the Shield-1-dependent regulation in the cytoplasm and nucleus. We used CHX and MG132 to probe whether this observation was related to production or degradation. Neither CHX nor MG132 treatment significantly affected the levels of DD fusions in mDDn cells in the absence of Shield-1 (p = 0.22 and p = 0.12 respectively, Figure 2C). In mDDc cells there was a 1.8-fold increase in fluorescence after Shield-1 treatment and small fluorescence changes after treatment with CHX and MG132, consistent with the cytoplasmic fusion pool we observed with fluorescence microscopy (Figure 2D).

Table 1. Targeted DD Reporters. (Columns: Compartment, Construct, Cell Line Name.)

Endoplasmic reticulum destabilizing domains

While Shield-1 largely did not affect DD levels in the mitochondria, we have previously demonstrated robust Shield-1-dependent regulation of secreted ER-targeted proteins such as GLuc, IL-2, and TNF-α [3]. As with the mitochondrial DDs, microscopy revealed the presence of fluorescence in both vehicle and Shield-1 treatment groups in eDD cells (Figure 3A). Clear colocalization of eDD fluorescence with an ER stain occurred in the absence of Shield-1, and small puncta were evident, suggesting protein aggregation. In the presence of Shield-1, colocalization with the ER was reduced (Figure 3A) and there was increased colocalization with the Golgi apparatus (Figure S3). Additionally, there was higher total intracellular eDD fluorescence in the absence rather than the presence of Shield-1 (Figure 3B). To address this observation we treated cells with brefeldin A (BFA) to inhibit protein transport from the ER to the Golgi. BFA treatment caused intracellular fluorescence levels to rise 1.7-fold when treated with Shield-1, as analyzed by flow cytometry (Figure 3B). These data fit a model of Shield-1-induced stabilization and translocation of the eDD through the Golgi network and eventual secretion. In the absence of Shield-1, treatment of eDD cells with CHX and MG132 did not cause a statistically significant reduction or increase in mean fluorescence intensity, respectively (p = 0.18, p = 0.48, Figure 3B). This suggested that, as in the mitochondria, there was little constitutive turnover of ER-targeted DD fusions in the absence of Shield-1. However, treatment with Shield-1 or vehicle resulted in statistically significant differences in fold change (0.6, p < 0.05) after co-treatment with MG132 and MG132/CHX (Figure 3B). This fold change with co-treatment of MG132 was likely a result of higher initial fluorescence levels in the presence of MG132 prior to Shield-1 administration.
To support the above findings, and since it is difficult to quantify fluorescent proteins extracellularly, we investigated the effects of Shield-1, CHX, and MG132 on the flux of an ER-targeted DD fused to a luminescent reporter protein. Gaussia luciferase is a secreted, ATP-independent luciferase that yields quantitative measures of protein levels in the extracellular space [21]. Intracellular and extracellular luciferase activity was monitored using bioluminescence after Shield-1 (1 μM) or vehicle treatment. As predicted by microscopy, intracellular levels of Gaussia luciferase were not greatly affected by Shield-1, while extracellular levels varied over 10-fold (Figure S4). eDDs cells were treated at various time points with Shield-1 (1 μM) and/or a low dose of CHX (1 μg/mL). Co-treatment with CHX and Shield-1 attenuated extracellular levels of luciferase approximately 10-fold relative to Shield-1 treatment alone (Figure 3C). This indicated that it is primarily nascent proteins that are stabilized by Shield-1, and it supported similar comparisons in cDD and nDD cells. Treatment with MG132 (1 μM) led to eventual extracellular accumulation of GLuc after 12 hours (Figure 3D), fitting the model that degradation inhibitors such as MG132 can facilitate correct folding and localization of misfolded substrates [22,23,24]. Our data suggested that compartment-specific folding and QC machinery were important to the functionality and degradation of the DD. Thus, we tested whether the folding environment in the ER could affect the intracellular levels of the eDD and vice versa (i.e., whether the DD could affect the folding environment by stimulating a stress response). Specifically, we were interested in whether the ER UPR could be induced by the removal of Shield-1, which has the effect of switching a cell from secreting to degrading the DD. When high levels of unfolded proteins are detected in the ER, mammalian cells can activate the UPR through three response pathways mediated by the proteins IRE1α/β, PERK, and ATF6α/β/CREB-H [16]. IRE1 splicing of XBP-1 mRNA provides a time-dependent readout of ER stress [25]. The protein product of spliced XBP-1 mRNA, XBP(S), rises 4-8 hours after the addition of a stress agent such as tunicamycin, thapsigargin, or DTT in HEK293 cells [26]. We monitored the appearance of XBP(S) to determine whether the removal of Shield-1 would activate the ER UPR in eDD and control cDD cells. Cells were incubated first with Shield-1 for 96 hours to equilibrate the cells to the folded and secreted DD state, followed by a time course of Shield-1 washout. No induction of the ~50 kDa protein XBP(S) is seen 4 hours after Shield-1 washout in either cDD or eDD cells (Figure 4A, vehicle). Addition of the ER stress agent tunicamycin induced a robust splicing response at 4 and 8 hours. These data suggest that the removal of Shield-1 was not a large enough insult to trigger the UPR as monitored by XBP-1 splicing. The reverse question, whether the ER folding environment affects the levels of the eDD, was probed in the same experiment. Inducing the UPR with tunicamycin reduced intracellular levels of DD fusions in the ER, as monitored using an αXFP immunoblot, suggesting that UPR-related increases in ER quality control machinery had significant effects on the levels of the mis/unfolded DD substrates present in the ER (Figure 4A and B). As expected, cDD levels are highly sensitive to the washout of Shield-1, as indicated by the decreasing amounts of DD-XFP present over time.
eDD levels showed little Shield-1 sensitivity, supporting our earlier microscopy and flow cytometry data (Figure 3A and B). Taken together, these data suggested that the ER harbored significant levels of mis/unfolded protein in the absence of Shield-1 and that the DD was sensitive to upregulation of the ER UPR.

Discussion

Destabilizing domains have been fused to cytoplasmic, nuclear, and secreted proteins in many experimental systems; however, their characteristics in the endoplasmic reticulum and mitochondria were previously unknown [3,4,5,6,7,8,9,10]. As the mechanism of regulation is intrinsically related to the access of protein folding and degradation machinery, we reasoned that the cytoplasmically derived FKBP DD might exhibit variable levels based on subcellular localization. In this report we provide a baseline for future studies using this DD in subcellular compartments, show that the local protein quality control affects DD levels, and show that the DD does not induce an IRE1-mediated stress response in the ER. The DD functions in a chemically dependent manner in the cytoplasm, nucleus, and through the secretory pathway. In these contexts small molecule inhibitors of translation, degradation, and secretion act on DD levels predictably, illustrating several dynamics of posttranslational regulation. Inhibiting translation with cycloheximide decreases the Shield-1-dependent dynamic range, and blocking degradation with MG132 increases the basal fluorescence levels in the cells. After treatment with both Shield-1 and MG132, fluorescence levels are lower than in cells treated with Shield-1 alone, suggesting a decreased rate of protein translation and/or upregulated protein quality control machinery after MG132 treatment [27]. The mitochondria and ER, however, appear to be tolerant of elevated levels of the DD even in the absence of Shield-1. The colocalization of fusion protein fluorescence with a mitochondrial stain shows proper targeting of both mDDn and mDDc. The pool of DD that is colocalized with mitochondria appears to be Shield-1 insensitive. We speculate that this may stem from the lack of protein QC machinery in the mitochondria that can recognize and degrade the DD in the absence of Shield-1; however, further biochemical studies such as gradient centrifugation will be necessary to prove that the mDD is fully translocated and intact in the mitochondrial matrix. After culture of mDDc cells for several weeks, inclusion bodies develop in a small population of cells, suggesting cellular stress that we have not observed in any other DD-containing cell lines (Figure S5). Since protein homeostasis in the mitochondria is a balance between nonselective degradation by processes such as autophagy and selective degradation by peptidases and ATP-dependent proteases, the development of an orthogonal mitochondria-specific DD may be challenging, but also highly valuable given the importance of mitochondrial proteins in pathologic processes such as aging and neurodegenerative diseases [28]. A less obvious application of a mitochondria-specific DD would be to function as a biosensor for compartmental protein QC activity as cells age, face pathogens, or are subjected to other stresses. High fluorescence levels of both mitochondrial DD cell lines in the absence of Shield-1 suggest that cytoplasmic QC machinery cannot degrade the DD fusions fully before mitochondrial localization.
Whether proteins are cotranslationally inserted into the mitochondria or nascent polypeptides are released from ribosomes in the cytosol for posttranslational import (or are imported via a combination of both) remains an open question [29]. If the mDD was exposed to cytoplasmic degradation machinery before mitochondrial import, we might expect to see little signal in the absence of Shield-1, depending on the relative rates of synthesis, degradation, and import. Thus both cotranslational insertion into the mitochondria and chaperone-protected transport to the mitochondrial outer membrane channels are possible explanations for the accumulation of mDDn and mDDc in the mitochondria. Microscopy of mDDc cells, in contrast with mDDn cells, indicates the presence of fluorescent proteins in both the cytoplasm and mitochondria when treated with Shield-1. Cytoplasmic localization of the mDDc fusions could be experimentally supported by an immunoblot that shows the ratio of cleaved fusions (mitochondrial) to uncleaved fusions (cytoplasmic); in the presence of Shield-1, uncleaved levels alone would rise. One potential reason for this dual localization is that the placement of the rapidly folding Venus fluorescent protein N-terminally with respect to the DD reduces the efficiency of mitochondrial import, creating a cytoplasmic pool. In the absence of Shield-1 the defective importation is not observed because the cytoplasmic population of mDDc could be degraded. Co-treatment with MG132 in the absence of Shield-1 increases fluorescence levels of mDDc, suggesting that cytoplasmic proteasomal degradation of the protein is occurring. A second explanation is that Shield-1 binding of the DD, when located on the C-terminus of the fusion protein, causes a percentage of the proteins to be "unfolding incompetent," and thus import incompetent. In this case, Shield-1 would stabilize the protein such that the mitochondrial importation machinery cannot unfold the protein. Matouschek and coworkers have observed a similar phenomenon in yeast mitochondrial suspensions, where treatment with a stabilizing ligand, methotrexate, can cause defective mitochondrial import of dihydrofolate reductase [30]. Our observations of the eDD provide valuable insights for future use of destabilizing domains in the ER. Immunoblot and microscopy show that a reservoir of eDD exists in the absence of Shield-1 at high intracellular levels that are comparable to protein levels in Shield-1-stabilized cDD cells. The addition of Shield-1 allows the secretion of DD fusions through the canonical secretion pathway, as evidenced by treatment with brefeldin A, and CHX treatment significantly reduces luminescent protein secretion. Destabilizing domains in the ER may aggregate, as suggested by puncta formation (Figure 3A), in a similar manner to another FKBP mutant that was used for conditional ER aggregation [31]. Though the DD does not have a large dynamic range of regulation within the ER itself, additional insights into ER regulation may be gained by determining the relative "age" of the DD fusions trapped in the ER with a photoactivatable fluorescent protein or pulse-chase experiment [32]. Ongoing projects in our lab are investigating whether there are any cellular adaptations that occur when cells are expressing the DDs. Here we show that the removal of Shield-1 did not cause XBP-1 splicing in eDD cells, suggesting that the IRE1α/β pathway of the UPR is not induced acutely when the cell is challenged with unstable ER-localized protein.
One intriguing difference between the cDD and eDD is the elevated expression of XBP1(U) (Figure 4A). XBP1(U) is a negative feedback regulator of XBP1(S), complexing with XBP1(S) and shuttling it out of the nucleus for degradation via a nuclear export sequence and degradation motif [26]. Thus, eDD cells may have adapted to high levels of mis/unfolded protein in the ER during their generation, allowing eDD cells to tolerate and degrade accumulated mis/unfolded substrates within the bandwidth of the ER quality control machinery and without activating the UPR. Treatment with tunicamycin reduced the levels of DD fusions in the ER (Figure 4A and B). This indicates that UPR-related increases in degradation and/or decreases in translation have significant effects on the levels of mis/unfolded DD substrates present in the ER. Decreased translation may be mediated by another unfolded protein response pathway such as PERK-dependent translational attenuation, which would be consistent with decreased levels of eDD-GFP in CHX-treated cells shown in Figure 3B [33]. Future experiments monitoring intracellular and extracellular luciferase activity, or a pulse-chase analysis after tunicamycin treatment, may demonstrate the predominant mechanism leading to decreased DD fusion levels. Regardless of mechanism, the DD is quite sensitive to local, compartment-specific protein quality control and greatly affected by the ER unfolded protein response. The destabilizing domain technology has proven utility in many different experimental settings to predictably and conditionally tune protein levels in cells. These results may guide the use of the destabilizing domains in new experimental systems and provide a comprehensive baseline of expected regulation in the cytoplasm, nucleus, extracellular space, ER, and mitochondria. We find that the local protein QC environment in the ER affects the basal levels of the DD in the absence of Shield-1. This information may direct the future development of new DD-ligand pairs that can orthogonally regulate proteins in different cellular compartments. In addition to providing the ability to perturb cellular processes and pathways through direct fusion to proteins of interest, the destabilizing domains may eventually be used to facilitate insights into the endogenous machinery of protein homeostasis and degradation.

Retroviral Gene Expression

Various fluorescent proteins (YFP, GFP, and Venus [20]), a secreted luminescent protein (Gaussia luciferase, GLuc [34]), and subcellular targeting sequences (the SV40 nuclear localization sequence, NLS [35]; the Gaussia ER localization sequence, LS [19]; and the ALDH2 mitochondrial targeting sequence, MTS [18]) were genetically fused to a destabilizing domain (Table 1). All DDs were the F36V L106P mutant of human FKBP12, except the mDDc cell line, which contained the F36V, E31G, R71G, K105E variant, the most robust C-terminal DD. These fusion genes were cloned into pBMN retroviral expression vectors containing blasticidin or puromycin drug resistance genes. Amphotropic Phoenix cell lines were plated at 2×10⁶ cells in a 10-cm dish 12 hours before transfection with pBMN vectors. Cells were transfected with Lipofectamine 2000 in Opti-MEM at a 2:5 ratio (μg DNA : μL cationic lipid). HEK293 human embryonic kidney cells (ATCC) were plated at 1×10⁶ cells per plate in a 10-cm dish and incubated with complete media (10% FBS, 10 units Pen/Strep) containing retrovirus and polybrene (4 μg/mL, Sigma) at 37°C overnight.
At this time the retroviral media was removed and the cells were incubated with complete media. Blasticidin (10 μg/mL, Invitrogen) or puromycin (2 μg/mL, Invitrogen) was added to the media 48 hrs after transduction. Drug selection continued for 10 days.

Microscopy

Stably transduced HEK293 cells with localized DD-XFPs were incubated overnight on chamber slides (Lab-Tek II) in complete media with Shield-1 (1 μM) or ethanol vehicle control. The next day the cells were incubated with ER-Tracker red, MitoTracker orange, or BODIPY TR ceramide (red fluorescent Golgi label) following company instructions (Molecular Probes, Invitrogen). All cells were incubated with Hoechst stain (1 μg/mL) for 5 minutes, washed with PBS containing calcium and magnesium, and imaged on an epifluorescent Axioskop 2 (Zeiss) microscope and photographed using a C-mount camera (QImaging).

Statistical Analysis

P values were calculated using a paired two-tailed t test. P values < 0.05 were considered significant.

Luciferase Assays

Cells containing DD-regulated secreted luciferase, eDDs cells, were plated in triplicate in a 96-well plate. Cells were treated at various times with non-toxic doses of vehicle (ethanol control), Shield-1 (1 μM), CHX (1 μg/mL), or MG132 (1 μM), or co-treated with Shield-1 and CHX or MG132. The media from the eDDs cells was transferred to a new 96-well plate, coelenterazine (100 ng/mL, Nanolight) was added, and the luminescence was quantified using an In Vivo Imaging System (IVIS, Caliper Life Sciences).

Immunoblotting

cDD and eDD cell lines were cultured with Shield-1 (1 μM) for 96 hours before being split to a 24-well plate. Shield-1 media was replaced at 0, 2, 4, 8, 12, and 24 hours with recombinant FKBP-containing media (5 μM) prior to collecting cells for western blot. In a duplicate group of wells, the media was replaced with recombinant FKBP and co-treated with tunicamycin (5 μM, Sigma). A gradient (4-20%) SDS-PAGE gel (Biorad) was run and protein was transferred to a PVDF membrane (Millipore). Membranes were blocked in 10% dry milk for 1 hour and exposed to rabbit polyclonal anti-XBP1 antibody (1 μg/mL, Abcam) overnight at 4°C. The membranes were then washed in TBST buffer and exposed to anti-rabbit HRP-conjugated secondary antibody (0.2 μg/mL, Molecular Probes). Chemiluminescence was performed using the Immobilon Western Kit (Millipore). The antibodies were dissociated from the membrane with Restore Western Blot Stripping Buffer (Thermo Scientific) for 15 minutes, and the membrane was exposed to anti-XFP antibody (0.2 μg/mL, Clontech) following a procedure similar to the above. Densitometry of DD-XFP fusion levels was assessed using ImageJ software (NIH).

Figure S1. Flow cytometry of DD fusion cell lines. Each DD-containing cell line and a mitochondria-targeted Venus fluorescent protein cell line (no DD) was exposed to Shield-1 (2 μM) and assessed by flow cytometry for viral transduction efficiency post-antibiotic selection. Transduction efficiency was measured by the percentage of cells that were more fluorescent (FL-1) than 98% of untransduced HEK293 cells (the black bar represents the gated population).
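The paired two-tailed t test named under Statistical Analysis maps directly onto scipy.stats.ttest_rel; a minimal sketch with placeholder paired measurements is shown below.

```python
import numpy as np
from scipy.stats import ttest_rel

# Placeholder paired measurements (e.g., the same cell line assayed with
# vehicle vs. a perturbant on matched days); values are illustrative only.
vehicle = np.array([100.0, 97.0, 104.0, 99.0])
perturbant = np.array([128.0, 121.0, 135.0, 126.0])

# ttest_rel performs the paired t test; it is two-tailed by default.
t_stat, p_value = ttest_rel(perturbant, vehicle)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {bool(p_value < 0.05)}")
```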
Look-ahead Attention for Generation in Neural Machine Translation

The attention model has become a standard component in neural machine translation (NMT), and it guides the translation process by selectively focusing on parts of the source sentence when predicting each target word. However, we find that the generation of a target word does not only depend on the source sentence, but also relies heavily on the previously generated target words, especially the distant words, which are difficult to model using recurrent neural networks. To solve this problem, we propose in this paper a novel look-ahead attention mechanism for generation in NMT, which aims at directly capturing the dependency relationship between target words. We further design three patterns to integrate our look-ahead attention into the conventional attention model. Experiments on NIST Chinese-to-English and WMT English-to-German translation tasks show that our proposed look-ahead attention mechanism achieves substantial improvements over state-of-the-art baselines.

Introduction

Neural machine translation (NMT) has significantly improved the quality of machine translation in recent years [10,26,1,9], in which the attention model increasingly plays an important role. Unlike traditional statistical machine translation (SMT) [13,4,32], which contains multiple separately tuned components, NMT builds upon a single, large neural network to directly map a source sentence to the associated target sentence. Typically, NMT adopts the encoder-decoder architecture, which consists of two recurrent neural networks. The encoder network models the semantics of the source sentence and transforms the source sentence into a context vector representation, from which the decoder network generates the target translation word by word. The attention mechanism has become an indispensable component in NMT; it enables the model to dynamically compose a source representation for each timestep during decoding, instead of using a single, static representation. Specifically, the attention model shows which source words the model should focus on in order to predict the next target word. However, previous attention models are mainly designed to predict the alignment of a target word with respect to source words, which takes no account of the fact that the generation of a target word may have a stronger correlation with the previously generated target words.

Figure 1. The English sentence is analyzed using the Stanford online parser. Although the predicate "are pushing" is close to the word "France", it has a stronger dependency on the word "countries" instead of "France".

Recurrent neural networks, such as gated recurrent units (GRU) [5] and long short-term memory (LSTM) [8], still suffer from long-distance dependency problems; pioneering studies [1,12] have shown that the performance of NMT gets worse as source sentences get longer. Figure 1 illustrates an example of Chinese-English translation. The dependency relationship of the target sentence determines whether the predicate of the sentence should be singular (is) or plural (are). The conventional attention model, however, does not have a specific mechanism to learn the dependency relationship between target words. To address this problem, we propose in this paper a novel look-ahead attention mechanism for generation in NMT, which can directly model the long-distance dependency relationship between target words.
The look-ahead attention model not only aligns to source words, but also refers to the previously generated target words when generating a target word. Furthermore, we present and investigate three patterns for the look-ahead attention, which can be integrated into any attention-based NMT. To show the effectiveness of our look-ahead attention, we have conducted experiments on NIST Chinese-to-English translation tasks and WMT14 English-to-German translation tasks. Experiments show that our proposed model obtains significant BLEU score improvements over strong SMT baselines and a state-of-the-art NMT baseline.

Neural Machine Translation

Our framework integrating the look-ahead attention mechanism into NMT can be applied to any conventional attention model. Without loss of generality, we use the improved attention-based NMT proposed by Luong et al. [16], which utilizes stacked LSTM layers for both encoder and decoder, as illustrated in Figure 2. The NMT first encodes the source sentence $X = (x_1, x_2, \ldots, x_m)$ into a sequence of context vector representations $C = (h_1, h_2, \ldots, h_m)$, whose size varies with respect to the source sentence length. Then, the NMT decodes from the context vector representation $C$ and generates the target translation $Y = (y_1, y_2, \ldots, y_n)$ one word at a time by maximizing the probability of $p(y_j \mid y_{<j}, C)$. Next, we briefly review the encoder, introducing how to obtain $C$, and the decoder, addressing how to calculate $p(y_j \mid y_{<j}, C)$.

Encoder: The context vector representation $C = (h^l_1, h^l_2, \ldots, h^l_m)$ is generated by the encoder using $l$ stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and $h^1_i$ is a concatenation of a left-to-right state and a right-to-left state:

$$h^1_i = \big[\overrightarrow{h}^1_i;\ \overleftarrow{h}^1_i\big] \qquad (1)$$

All other encoder layers are unidirectional, and $h^k_i$ is calculated as follows:

$$h^k_i = \mathrm{LSTM}\big(h^{k-1}_i, h^k_{i-1}\big) \qquad (2)$$

Decoder: The conditional probability $p(y_j \mid y_{<j}, C)$ is formulated as

$$p(y_j \mid y_{<j}, C) = \mathrm{softmax}(W_s t_j) \qquad (3)$$

Specifically, we employ a simple concatenation layer to produce an attentional hidden state $t_j$:

$$t_j = \tanh\big(W_c [c_j; s^l_j]\big) \qquad (4)$$

where $s^l_j$ denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates $c_j$ as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure 2:

$$c_j = \sum_{i=1}^{m} \alpha_{ji}\, h^l_i \qquad (5)$$

where $\alpha_{ji}$ is a normalized item calculated as follows:

$$\alpha_{ji} = \frac{\exp(s^l_j \cdot h^l_i)}{\sum_{i'=1}^{m} \exp(s^l_j \cdot h^l_{i'})} \qquad (6)$$

$s^k_j$ is computed by using the following formula:

$$s^k_j = \mathrm{LSTM}\big(s^{k-1}_j, s^k_{j-1}\big) \qquad (7)$$

If $k = 1$, $s^1_j$ will be calculated by combining $t_{j-1}$ as feed input [16]:

$$s^1_j = \mathrm{LSTM}\big([y_{j-1}; t_{j-1}], s^1_{j-1}\big) \qquad (8)$$

Given the bilingual training data $D = \{(X^{(z)}, Y^{(z)})\}_{z=1}^{Z}$, all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood:

$$L(\theta) = \sum_{z=1}^{Z} \log p\big(Y^{(z)} \mid X^{(z)}; \theta\big) \qquad (9)$$

Model Description

Learning long-distance dependencies is a key challenge in machine translation. Although the attention model introduced above has shown its effectiveness in NMT, it takes no account of the dependency relationship between target words. Hence, in order to relieve the burden on the LSTM or GRU of carrying the target-side long-distance dependencies, we design a novel look-ahead attention mechanism, which directly establishes a connection between the current target word and the previously generated target words. In this section, we elaborate on the three proposed approaches to integrating the look-ahead attention into the generation of attention-based NMT.

Concatenation Pattern

Figure 3(b) illustrates the concatenation pattern of the look-ahead attention mechanism.
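For concreteness, one step of the global attention just reviewed (score the top-layer decoder state against every encoder state, normalize, mix, and project) can be written in a few lines of NumPy. This is a sketch of the general mechanism only; the dot-product score, the dimensions, and the weight name W_c are illustrative rather than the paper's released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(s_j, H, W_c):
    """One Luong-style attention step.

    s_j : (d,)    top-layer decoder hidden state at step j
    H   : (m, d)  top-layer encoder states h^l_1 .. h^l_m
    W_c : (d, 2d) projection producing the attentional state t_j
    """
    scores = H @ s_j                    # dot-product score per source position
    alpha = softmax(scores)             # normalized attention weights alpha_ji
    c_j = alpha @ H                     # context vector: weighted sum of H
    t_j = np.tanh(W_c @ np.concatenate([c_j, s_j]))
    return t_j, alpha

# Toy usage with random states.
rng = np.random.default_rng(0)
d, m = 8, 5
t_j, alpha = global_attention(rng.normal(size=d), rng.normal(size=(m, d)),
                              rng.normal(size=(d, 2 * d)))
```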
We not only compute the attention between the current target hidden state and the source hidden states, but also calculate the attention between the current target hidden state and the previous target hidden states. The look-ahead attention output at timestep $j$ is computed as

$$c^d_j = \sum_{i=1}^{j-1} \mathrm{ATT}\big(s^l_j, s^l_i\big)\, s^l_i \qquad (10)$$

where $\mathrm{ATT}(s^l_j, s^l_i)$ is a normalized item. Specifically, given the target hidden state $s^l_j$, the source-side context vector representation $c_j$, and the target-side context vector representation $c^d_j$, we employ a concatenation layer to combine the information and produce an attentional hidden state as follows:

$$t^{final}_j = \tanh\big(W_c [c_j; c^d_j; s^l_j]\big) \qquad (11)$$

After getting the attentional hidden state $t^{final}_j$, we can calculate the conditional probability $p(y_j \mid y_{<j}, C)$ as formulated in Eq. 3.

Enc-Dec Pattern

The concatenation pattern is a simple method to achieve look-ahead attention, which treats the source-side and target-side context vector representations as having the same importance. Different from the concatenation pattern, the Enc-Dec pattern utilizes a hierarchical architecture to integrate look-ahead attention, as shown in Figure 3(c). Once we get the attentional hidden state of the conventional attention-based NMT, we can employ the look-ahead attention mechanism to update the previous attentional hidden state. In detail, the model first computes the attentional hidden state $t^e_j$ of the conventional attention-based NMT as in Eq. 4. Second, the model calculates the attention between the attentional hidden state $t^e_j$ and the previous target hidden states:

$$c^d_j = \sum_{i=1}^{j-1} \mathrm{ATT}\big(t^e_j, s^l_i\big)\, s^l_i \qquad (12)$$

Then, the final attentional hidden state is calculated as follows:

$$t^{final}_j = \tanh\big(W_d [c^d_j; t^e_j]\big) \qquad (13)$$

Dec-Enc Pattern

The Dec-Enc pattern is the opposite of the Enc-Dec pattern, and it uses the look-ahead attention mechanism to help the model align to source words. Figure 3(d) shows this pattern. We first compute the look-ahead attention output $c^d_j$ as in Eq. 10, and the attentional hidden state is computed by:

$$t^d_j = \tanh\big(W_d [c^d_j; s^l_j]\big) \qquad (14)$$

Finally, we can calculate the attention between the attentional hidden state $t^d_j$ and the source hidden states to get the final attentional hidden state:

$$t^{final}_j = \tanh\Big(W_c \Big[\sum_{i=1}^{m} \mathrm{ATT}\big(t^d_j, h^l_i\big)\, h^l_i;\ t^d_j\Big]\Big) \qquad (15)$$

where $h^l_i$ is the source-side hidden state at the top layer.

Dataset

We perform our experiments on the NIST Chinese-English translation tasks and the WMT14 English-German translation tasks. The evaluation metric is BLEU [21], as calculated by the multi-bleu.perl script.

Training Details

We build the described models by modifying the Zoph_RNN toolkit, which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyperparameter choices are similar to those used by Luong et al. [16]. In the NMT architecture as illustrated in Figure 2, the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer. In more detail, we limit the source and target vocabularies to the most frequent 30K words for Chinese-English and 50K words for English-German. The word embedding dimension and the size of the hidden layers are all set to 1000. Parameter optimization is performed using stochastic gradient descent (SGD); we set the learning rate to 0.1 at the beginning and halve it when the perplexity goes up on the development set. Each SGD step uses a mini-batch of 128 examples. Dropout is applied on each layer to avoid over-fitting, with the dropout rate set to 0.2. At test time, we employ beam search with beam size b = 12.

Results on Chinese-English Translation

We list the BLEU scores of our proposed model in Table 1.
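The three patterns differ only in where the target-side context enters the computation. A compact NumPy sketch with illustrative weight shapes (W is d x 3d; W_c and W_d are d x 2d) follows; H holds the top-layer source states, S_prev the previous top-layer target states (assumed non-empty), and the equation numbers in the comments refer to the formulas above.

```python
import numpy as np

def att_mix(query, keys):
    """Normalized attention of `query` over the rows of `keys`; returns the mix."""
    scores = keys @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ keys

def concat_pattern(s_j, H, S_prev, W):
    # Eq. 10-11: source and target contexts enter one layer on equal footing.
    c_j, c_d = att_mix(s_j, H), att_mix(s_j, S_prev)
    return np.tanh(W @ np.concatenate([c_j, c_d, s_j]))

def enc_dec_pattern(s_j, H, S_prev, W_c, W_d):
    # Eq. 4 then Eq. 12-13: conventional attention first, look-ahead refines it.
    t_e = np.tanh(W_c @ np.concatenate([att_mix(s_j, H), s_j]))
    return np.tanh(W_d @ np.concatenate([att_mix(t_e, S_prev), t_e]))

def dec_enc_pattern(s_j, H, S_prev, W_c, W_d):
    # Eq. 10 then Eq. 14-15: look-ahead first, its output queries the source.
    t_d = np.tanh(W_d @ np.concatenate([att_mix(s_j, S_prev), s_j]))
    return np.tanh(W_c @ np.concatenate([att_mix(t_d, H), t_d]))
```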
Moses-1 [11] is the state-of-the-art phrase-based SMT system with the default configuration and a 4-gram language model trained on the target portion of the training data. Moses-2 is the same as Moses-1 except that the language model is trained using the target data plus the 10M-sentence Xinhua portion of the Gigaword corpus. The BLEU score of our NMT baseline, which is the attention-based NMT introduced in Section 2, is about 4.5 points higher than that of the state-of-the-art SMT system Moses-2.

Table 1. Translation results (BLEU score) for Chinese-to-English translation. "†": significantly better than the NMT baseline (p < 0.05). "‡": significantly better than the NMT baseline (p < 0.01).

As the last three lines in Table 1 show, the Enc-Dec pattern outperforms the concatenation pattern and even the Dec-Enc pattern, which shows that the Enc-Dec pattern is the best approach to taking advantage of look-ahead attention. Moreover, our Enc-Dec pattern obtains an improvement of +0.93 BLEU points over the state-of-the-art NMT baseline, which demonstrates that the look-ahead attention mechanism is effective for generation in conventional attention-based NMT.

Effects of Translating Long Sentences

A well-known flaw of the NMT model is the inability to properly translate long sentences. One of the goals of integrating the look-ahead attention into the generation of the NMT decoder is boosting the performance in translating long sentences. We follow Bahdanau et al. [1] in grouping sentences of similar lengths together and computing a BLEU score per group, as demonstrated in Figure 4. Although the performance of both the NMT baseline and our proposed model drops rapidly when the length of the source sentence increases, our Enc-Dec model is more effective than the NMT baseline in handling long sentences. Specifically, our proposed model obtains an improvement of 1.88 BLEU points over the baseline for source sentences of 50 to 60 words. Furthermore, when the length of the input sentence is greater than 60 words, our model still outperforms the baseline by 1.04 BLEU points. These experiments show that the look-ahead attention can relieve the burden on the LSTM of carrying the target-side long-distance dependencies.

Target Alignment of Look-ahead Attention

The conventional attention models always refer to some source words when generating a target word. We propose a look-ahead attention for generation in NMT, which also focuses on previously generated words in order to predict the next target word. We provide two real translation examples to show the target alignment of look-ahead attention in Figure 5. The first line is blank because there is no look-ahead attention when generating the first word. Every line represents the weight distribution over previously generated words when predicting the current target word. More specifically, we observe some interesting phenomena. First, target words often refer to a verb or predicate which has been generated previously, such as the word "was" in Figure 5(a). Second, the heat map shows that the word "we" and the word "looking" have a stronger correlation when translating the Chinese sentence, as demonstrated in Figure 5(b). Intuitively, the look-ahead attention mechanism establishes a bridge to capture the dependency relationship between target words. Third, most target words mainly focus on the word immediately before the current target word, which may be due to the fact that the last generated word contains more information in recurrent neural networks. We could control the influence of the look-ahead attention as in Tu et al.
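The length-bucketed evaluation of Figure 4 is easy to reproduce: partition the test set by source length and score each bucket separately. The sketch below uses sacrebleu as a stand-in scorer (the paper itself uses multi-bleu.perl); the bucket edges mirror the 10-word groups of the figure.

```python
import sacrebleu

def bleu_by_length(sources, hypotheses, references, edges=(10, 20, 30, 40, 50, 60)):
    """BLEU per source-length bucket; bucket `e` covers lengths above the
    previous edge and up to `e`, with a final bucket for longer sentences."""
    buckets = {}
    for src, hyp, ref in zip(sources, hypotheses, references):
        n = len(src.split())
        key = next((e for e in edges if n <= e), f">{edges[-1]}")
        hyps, refs = buckets.setdefault(key, ([], []))
        hyps.append(hyp)
        refs.append(ref)
    return {k: sacrebleu.corpus_bleu(h, [r]).score for k, (h, r) in buckets.items()}
```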
[27] to further improve translation quality; we leave this for future work.

Results on English-German Translation

We evaluate our model on the WMT14 translation task for English to German, whose results are presented in Table 2. We find that our proposed look-ahead attention NMT model also obtains significant accuracy improvements on large-scale English-German translation. In addition, we compare our NMT systems with various other systems, including Zhou et al. [34], who use a much deeper neural network. Luong et al. [16] achieve a BLEU score of 19.00 with a 4-layer deep encoder-decoder model. Shen et al. [25] obtain a BLEU score of 18.02 with MRT techniques.

Related Work

The recently proposed neural machine translation has drawn more and more attention. Most of the existing approaches and models mainly focus on designing better attention models [16,19,20,28,18], better strategies for handling rare and unknown words [17,14,24], exploiting large-scale monolingual data [3,23,33], and integrating SMT techniques [25,7,35,30]. Our goal in this work is to design a smart attention mechanism to model the dependency relationship between target words. Tu et al. [28] and Mi et al. [19] proposed to extend attention models with a coverage vector in order to attack the problem of repeating and dropping translations. Cohn et al. [6] augmented the attention model with well-known features from traditional SMT. Unlike previous works, in which attention models are mainly designed to predict the alignment of a target word with respect to source words, we focus on establishing a direct bridge to capture the long-distance dependency relationship between target words. In addition, Wu et al. [31] lately proposed a sequence-to-dependency NMT method, in which the target word sequence and its corresponding dependency structure are jointly constructed and modeled. However, target dependency tree references are needed for training this model, while our proposed model does not need extra resources. Very recently, Vaswani et al. [29] proposed a new simple network architecture, the Transformer, based solely on attention mechanisms with multi-headed self-attention. Besides, Lin et al. [15] presented a self-attention mechanism which extracts different aspects of the sentence into multiple vector representations. The self-attention model has been used successfully in some tasks, including abstractive summarization and reading comprehension [22,2]. Here, in order to alleviate the burden on the LSTM of carrying the target-side long-distance dependencies of NMT, we propose to integrate the look-ahead attention mechanism into the conventional attention-based NMT, which is used in conjunction with a recurrent network.

Conclusion

In this work, we propose a novel look-ahead attention mechanism for generation in NMT, which aims at directly capturing the long-distance dependency relationship between target words. The look-ahead attention model not only aligns to source words, but also refers to the previously generated words when generating the next target word. Furthermore, we present and investigate three patterns to integrate our proposed look-ahead attention into the conventional attention model. Experiments on Chinese-to-English and English-to-German translation tasks show that our proposed model obtains significant BLEU score gains over strong SMT baselines and a state-of-the-art NMT baseline.
Classification of graph C*-algebras with no more than four primitive ideals

We describe the status quo of the classification problem of graph C*-algebras with four primitive ideals or less.

Introduction

The class of graph C*-algebras (cf. [Rae05] and the references therein) has proven to be an important and interesting venue for classification theory by K-theoretical invariants, in particular with respect to C*-algebras with finitely many ideals, and in 2009, the authors formulated the following working conjecture:

Conjecture 1.1. Graph C*-algebras $C^*(E)$ with finitely many ideals are classified up to stable isomorphism by their filtered, ordered K-theory $\mathrm{FK}^+_{\mathrm{Prim}(C^*(E))}(C^*(E))$.

Here, the filtered, ordered K-theory is simply the collection of all $K_0$- and $K_1$-groups of subquotients of the C*-algebra in question, taking into account all the natural transformations among them (details will be given below). The conjecture addresses the possibility of a classification result which is not strong (cf. [Ell10]) in the sense that we do not expect every possible isomorphism at the level of the invariant to lift to the C*-algebras. The conjecture remains open and we are forthwith optimistic about its veracity, although some of the results which have been obtained, as we shall see, seem to indicate that an added condition of finitely generated K-theory could be needed. In the present paper we will discuss the status of this conjecture for graph algebras with four or fewer primitive ideals; if the number is three or fewer we can present a complete classification under the condition of finitely generated K-theory, but for the number four there are many cases still eluding our methods. Adding, in some cases, the condition of finitely generated K-theory (or, even stronger, that the graph algebra is unital), we may solve 103 of the 125 cases, leaving less than one fifth of the cases open. Our main contribution in the present paper concerns the class of fan spaces, which has not been accessible through the methods we have used earlier, but we will also go through those results in our two papers [ERRa] and [ERR09] which apply here.

1.1. Tempered primitive ideal spaces. Invoking an idea from [ERS11] we organize our overview using a tempered ideal space of the C*-algebra in question. This is defined for any C*-algebra with only finitely many ideals as the pair $(\mathrm{Prim}(A), \tau)$, with $I_0$ denoting the maximal proper ideal of a given primitive ideal $I$ (this exists by the fact that $I$ is prime and contains only finitely many ideals). We set

$$\tau(I) = \begin{cases} 0 & \text{if } I/I_0 \text{ is an AF algebra} \\ 1 & \text{otherwise} \end{cases}$$

To be able to work systematically with these objects, we now give them a combinatorial description. Note that whenever $X$ or $\overline{X}$ are locally closed, standard results in graph C*-algebra theory give that $A(X)$ and $A(\overline{X})$ are AF algebras and $\mathcal{O}_\infty$-absorbing algebras, respectively.

Definition 1.3. Let $X$ be a topological space. The specialisation preorder $\prec$ on $X$ is defined by $x \prec y$ if and only if $x \in \overline{\{y\}}$. A topological space satisfies the $T_0$ separation axiom if and only if its specialisation preorder is a partial order.

Definition 1.4. A subset $H$ of a preordered set $(X, \le)$ is called hereditary if $x \le y \in H$ implies $x \in H$.

Definition 1.5. Let $(X, \le)$ be a preordered set. The Alexandrov topology on $X$ is the topology with the closed sets being the hereditary sets. A topological space is called an Alexandrov space if it carries the Alexandrov topology of some preordered set. The preorder is necessarily the specialisation preorder.
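Since closed sets in the Alexandrov topology are exactly the hereditary subsets, finite T0-spaces can be manipulated as plain partial orders. The short Python sketch below records that dictionary (hereditary test, closure of a point, all closed sets); representing the order as a set of pairs is our own choice of encoding.

```python
from itertools import chain, combinations

def is_hereditary(H, leq):
    """H is hereditary in (X, <=) iff x <= y and y in H imply x in H."""
    return all(x in H for (x, y) in leq if y in H)

def closure(point, X, leq):
    """Closure of {point}: all x with x <= point (the hereditary set it generates)."""
    return {x for x in X if (x, point) in leq}

def closed_sets(X, leq):
    """All closed sets of the Alexandrov topology: the hereditary subsets."""
    subsets = chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))
    return [set(s) for s in subsets if is_hereditary(set(s), leq)]

# Example: 4-point space with 1 <= 3, 1 <= 4, 2 <= 3, 2 <= 4 (plus reflexivity).
X = {1, 2, 3, 4}
leq = {(x, x) for x in X} | {(1, 3), (1, 4), (2, 3), (2, 4)}
print(closure(3, X, leq))  # {1, 2, 3}
```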
Since we are dealing with C*-algebras with finite primitive ideal spaces, these are all Alexandrov spaces satisfying the T_0 separation axiom. Consequently, we can equivalently consider all partial orders on finite sets. The tempered primitive ideal space of a C*-algebra with n primitive ideals may hence be uniquely described using a partial order on {1, . . ., n} and a map in {0, 1}^{{1,...,n}}.

The transitive reduction of a relation R on a set X is a minimal relation S on X having the same transitive closure as R. In general neither existence nor uniqueness is guaranteed, but if the transitive closure of R is antisymmetric and finite, there is a unique transitive reduction. We will illustrate our (finite) topological spaces with graphs of the transitive reduction of the specialisation order, where we write an arrow x → y if and only if x is less than y in the transitive reduction of the specialisation order (similar to the Hasse diagram). The value of τ will be indicated by the colors of the vertices of the graph: white for 0 and black for 1.

We obtain a unique signature for each tempered ideal space as follows. Consider the adjacency matrix of the graph of the specialisation order and recall that (by transitivity and antisymmetry) we can always permute the vertices so that the adjacency matrix becomes an upper triangular matrix. Since the relation is reflexive, we will have ones in the diagonal, so without loss of information we may write the values of τ there. To each such upper triangular matrix
\[
\begin{pmatrix}
t_1 & a_{1,2} & \cdots & a_{1,n-1} & a_{1,n} \\
    & t_2     & a_{2,3} & \cdots   & a_{2,n} \\
    &         & \ddots  &          & \vdots  \\
    &         &         & t_{n-1}  & a_{n-1,n} \\
    &         &         &          & t_n
\end{pmatrix}
\]
we associate the two binary numbers
\[
a = a_{1,2} a_{1,3} \cdots a_{1,n}\, a_{2,3} a_{2,4} \cdots a_{2,n} \cdots a_{n-1,n}
\quad\text{and}\quad
t = t_1 \cdots t_n .
\]
In general, there are several such binary numbers associated with a specialisation order by means of permuting the vertices. We choose the order of the vertices to obtain the smallest possible pair (a, t), ordered lexicographically, as the unique identifier for this specific tempered ideal structure. In the interest of conserving space we write the hexadecimal expansion of the numbers when referring to a certain structure. We write n.a and n.a.t to indicate signatures and tempered signatures, respectively, defined this way (where n and a are numbers written in decimal expansion and t is a number written in hexadecimal expansion).

If a primitive ideal space is disconnected, we may classify the C*-algebras associated to each component individually. We will hence assume throughout that the C*-algebras have connected primitive ideal space (when considering graph algebras, a necessary, but not sufficient, condition for this is that the underlying graphs are connected considered as undirected graphs). Determining the number of connected T_0-spaces with n points is hard for most n; the number has been computed up to n = 16 in [BM02]. But for small n even the number of tempered ideal spaces can readily be found by naive enumeration, by first counting all spaces and then performing the inverse Euler transform to obtain those that are connected:

|Prim(A)|                            1    2    3    4    5     6
Number of spaces                     1    2    5    16   63    318
Number of connected spaces           1    1    3    10   44    238
Number of tempered spaces            2    10   62   510  5292  69364
Number of connected tempered spaces  2    4    20   125  1058  11549
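The untempered rows of this table are easy to check mechanically. The following is a minimal sketch (our code, not from the paper) of the standard inverse Euler transform recurrence; it recovers the connected counts from the totals:

```python
def mobius(n):
    """Moebius function by trial factorisation (inputs here are tiny)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def inverse_euler(b):
    """Given b[0], b[1], ... counting all structures with 1, 2, ... points,
    return the counts of connected structures (inverse Euler transform)."""
    b = [0] + list(b)                     # switch to 1-based indexing
    N = len(b) - 1
    c = [0] * (N + 1)
    for n in range(1, N + 1):
        c[n] = n * b[n] - sum(c[k] * b[n - k] for k in range(1, n))
    return [sum(mobius(n // d) * c[d]
                for d in range(1, n + 1) if n % d == 0) // n
            for n in range(1, N + 1)]

print(inverse_euler([1, 2, 5, 16, 63, 318]))  # [1, 1, 3, 10, 44, 238]
```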
We will restrict our attention to |Prim(A)| ≤ 4 and hence have 15 (connected) primitive ideal spaces, which may be given temperatures in a total of 151 different ways, to concern ourselves with.

1.2. Filtered, ordered K-theory. The filtered, ordered K-theory FK^+_X(A) of a C*-algebra A over a finite T_0-space X collects the ordered K_0-groups and the K_1-groups of all subquotients of A, together with all the natural transformations among them; an isomorphism of the invariant is a family of group isomorphisms preserving all natural transformations in such a way that all components α_Y on the K_0-groups are also order isomorphisms. All components of this invariant are readily computable ([CET]), and often much of it is redundant. We will not pursue that issue here. The filtered K-theory FK_X(A) of A is defined analogously by disregarding the order structure on K_0. The filtered (ordered) K-theory over a finite T_0-space X can also be used for C*-algebras over X without being tight. (Although this is not exactly the same definition as the filtrated K-theory in [MNa], it is known to be the same in all cases where we have a UCT. For more on this invariant and C*-algebras over X the reader is referred to [MNa] and the references therein.)

1.3. Graph C*-algebras. A graph (E^0, E^1, r, s) consists of a countable set E^0 of vertices, a countable set E^1 of edges, and maps r : E^1 → E^0 and s : E^1 → E^0 identifying the range and source of each edge; see [Rae05] and the references therein for more on graph C*-algebras. If E is a graph, the graph C*-algebra C*(E) is the universal C*-algebra generated by mutually orthogonal projections {p_v : v ∈ E^0} and partial isometries {s_e : e ∈ E^1} with mutually orthogonal ranges satisfying

(1) s_e* s_e = p_{r(e)} for all e ∈ E^1;
(2) s_e s_e* ≤ p_{s(e)} for all e ∈ E^1;
(3) p_v = Σ_{s(e)=v} s_e s_e* for all v ∈ E^0 with 0 < |s^{-1}(v)| < ∞.

The countability hypothesis ensures that all our graph C*-algebras are separable, which is a necessary hypothesis for many of the classification results. We will be mainly interested in graph C*-algebras with real rank zero. For a graph E, we have that the real rank of C*(E) is zero if and only if E satisfies Condition (K), i.e., no vertex of E is the base point of exactly one simple cycle (see Theorem 3.5 of [JJ]). Moreover, by Proposition 3.3 of [JJ], every graph C*-algebra with finitely many ideals has real rank zero. Thus, every graph C*-algebra with finitely many ideals has a norm-full projection, and by [Bro77], every graph C*-algebra with finitely many ideals is stably isomorphic to a unital C*-algebra.

Throughout the paper we will use the following facts about graph C*-algebras without further mention.

(1) Every ideal of C*(E) is stably isomorphic to a unital graph C*-algebra.
(2) Every sub-quotient of C*(E) is stably isomorphic to a unital graph C*-algebra.
(3) The K-groups of every sub-quotient of C*(E) are finitely generated.
(4) Every non-unital simple sub-quotient of C*(E) that is an AF-algebra is isomorphic to K.

Proof. As in the proof of Theorem 5.7(4) of [MT07] (see also [BHRS, Proposition 3.4]), every ideal of a graph C*-algebra satisfying Condition (K) is Morita equivalent to C*(F), where F^0 ⊆ E^0. Hence, (1) holds since a graph C*-algebra C*(E) is unital if and only if E^0 is finite. (3) follows from (2) and [DT02, Theorem 3.1]. Suppose C*(F) is a simple unital AF-algebra. Then F has no cycles. Since C*(F) is unital, F^0 is finite. Therefore, F has a sink. By [DT05, Corollary 2.15], every singular vertex must be reached by any other vertex since C*(F) is simple. Thus, F must be a finite graph. Hence, C*(F) ≅ M_n. From this observation, (4) follows from (1) and (2) since any non-unital simple C*-algebra stably isomorphic to K is isomorphic to K.
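For finite graphs, Condition (K) is directly checkable; the following is a minimal sketch (our code, with our own function names) that counts the simple cycles based at each vertex:

```python
def count_return_paths(v, edges):
    """Count the simple cycles based at vertex v in a directed multigraph,
    given edges as a list of (source, range) pairs. A simple cycle based
    at v returns to v without revisiting any intermediate vertex."""
    def dfs(u, visited):
        total = 0
        for s, r in edges:
            if s != u:
                continue
            if r == v:
                total += 1            # closed a cycle back at the base point
            elif r not in visited:
                total += dfs(r, visited | {r})
        return total
    return dfs(v, {v})

def condition_K(vertices, edges):
    """Condition (K): no vertex is the base point of exactly one simple cycle."""
    return all(count_return_paths(v, edges) != 1 for v in vertices)

# A single loop at v violates Condition (K) (exactly one simple cycle) ...
assert not condition_K(['v'], [('v', 'v')])
# ... while two parallel loops at v restore it.
assert condition_K(['v'], [('v', 'v'), ('v', 'v')])
```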
General theory

We first describe the situations in which the graph algebras can be classified using widely applicable results.

2.1. The AF case. The AF case corresponds to temperatures that are constantly 0. We incur these at the tempered signatures

2.2. The purely infinite case.

Proof. It follows from Theorem 9.1 and Corollary 9.2 of [KR02] that (c) implies (b), that (b) implies (a), and that the three coincide in the simple case. It follows from Proposition 3.5 of [KR02] that pure infiniteness passes to ideals and subquotients. Thus it follows from [TW07] that (a) implies (c).

The isomorphism result of Kirchberg (cf. [Kir94] and [Kir00]) reduces the classification problem of nuclear and strongly purely infinite C*-algebras which are also in the bootstrap class to an isomorphism problem in ideal-related KK-theory. Since all purely infinite graph C*-algebras fall in this class, we may hence confirm Conjecture 1.1 in the purely infinite case by providing a universal coefficient theorem which allows the lifting of isomorphisms at the level of filtered K-theory to invertible KK_X-classes. This, however, is not known to be possible in general. Indeed, Meyer and Nest in [MNa] showed that there are purely infinite C*-algebras over the space 4.A which fail to have this property, but since the examples provided there cannot possibly come from graph algebras, the question remains open in general. The work of Bentmann and Köhler established that general UCTs are available precisely when the space X is an accordion space, and Arklint with the second and third named authors provided UCTs for other spaces, including 4.A, under the added assumption that the C*-algebra has real rank zero, which is automatic here. Specializing even further, Arklint, Bentmann and Katsura provided a UCT which applies to our space 4.3B under the added assumptions that the C*-algebra has real rank zero and that the K_1-groups of all subquotients are free, which also is automatic here. (Cf. [Kir94], [NCP00], [Ror97], [Res08], [MNa], [BK], [Kir00].)

2.3. The separated case. The classification problem for the two mixed cases with |Prim(A)| = 2 not covered by the results mentioned above (the tempered signatures 2.1.1 and 2.1.2) was resolved in [ET10], drawing heavily on [ERR09]. In [ERRa], we generalized this to more complicated cases having the separation property, which is automatic in the two-point case, as detailed below. The idea is to find an ideal I such that I is AF and A/I is O_∞-absorbing, or vice versa. We do not know in general how to prove classification in this case, but under certain added assumptions related to the notion of fullness, this leads to results that may be used to resolve the cases of tempered signature 3.

Definition 2.4. Let n > 1 be a given integer. Then we let X_n denote the partially ordered set (actually totally ordered) X_n = {1, 2, . . ., n} with the usual order. For a, b ∈ X_n with a ≤ b, we let [a, b] denote the set {x ∈ X_n : a ≤ x ≤ b}.

Proposition 2.6. Let A_1 and A_2 be graph C*-algebras satisfying Condition (K).

Fan spaces

In this section, we develop methods to deal mainly with the spaces 3.3, 3.6, 4.A, 4.38. We observe the following in [ERRa].

Lemma 3.1.
Let E be a graph such that C * (E) has finitely many ideals and assume that I ⊳ J C * (E) are ideals. Then Lemma 3.3. Let (B i ) i∈I be a family of C * -algebras (small enough for direct sums and products to exist). Let π j : i∈I B i → B j denote the canonical projection, for each j ∈ I. Then there is a canonical isomorphism i∈I π i : as the j'th coordinate map. In this case, the direct product coincides with the direct sum. Proof. Here we view the multiplier algebras as the algebras of double centralisers (cf. pp. 39 and 81-82 in [Mur90]). Let (ρ 1 , ρ 2 ) be a double centralizer on i∈I B i (i.e., an arbitrary element of M( i∈I B i )). Using an approximate unit, it is easy to see that ρ 1 and ρ 2 restricted to B j map into B j itself. In this way we get a canonical * -homomorphism from M( i∈I B i ) to M(B j ). By the universal property of the direct product, we get a * -homomorphism ϕ from M( i∈I B i ) to i∈I M(B i ), where the j'th coordinate map clearly is an extension of π j to the multiplier algebras, and hence it is the extension π j of π j . Clearly, ϕ is injective. It is also easy to show that ϕ is surjective by constructing the preimage. Therefore, if I is finite, the direct product of the short exact sequences Assumption 3.4. For this subsection, let n > 1 be a fixed integer, and let X i = X li for i = 1, 2, . . . , n, where l 1 , l 2 , . . . , l n are fixed positive integers. Let, moreover, and define a partial order on X as follows. The element m is the least element of X, and for each i = 1, 2, . . . , n, if x, y ∈ X i then x ≤ y in X if and only if x ≤ y in X i . No other relations exist between the elements of X. Proof. Note that the diagram Lemma 3.6. Let A be a tight C * -algebra over X. Then is full if and only if e · π k is full for all k = 1, 2, . . . , n. Proof. By Lemma 3.5, η e·π k = π k • η e . Thus, if e is a full extension, then e · π k is a full extension since π k is surjective. Suppose e · π k is a full extension for all k = 1, 2, . . . , n. Note that A(X \ {m}) is n j=1 A(X j ) and thus from Lemma 3.3 it follows that the j'th coordinate map of ( n i=1 π i ) • η e is exactly π j • η e = η e·πj (according to Lemma 3.5). Since n i=1 π i is an isomorphism and since e · π k is a full extension for all k = 1, 2, . . . , n, we have that e is a full extension. That this direct sum of full extensions is again full can easily be shown by first cutting down to each coordinate. Theorem 3.7. Let A and B be graph C * -algebras that are tight C * -algebras over X. Assume that there exists an isomorphism α : Proof. We may assume that A and B are stable C * -algebras. Note that for each is an AF algebra. First we assume that X = ∅ and X \ {m} = ∅. Note that A(X ) and B(X ) are AF algebras. Since α X : K 0 (A(X )) → K 0 (B(X )) is a positive isomorphism, there exists an isomorphism β : A(X ) → B(X ) such that K 0 (β) = α X (by Elliott's classification result [Ell76]). Since A(X ) and B(X ) are AF algebras and β is an X -equivariant isomorphism, we Let X min be the set of minimal elements of X , and for each a, b ∈ X let Let x ∈ X min be given. Let i x ∈ {1, 2, . . . , n} be the unique number such that . So by Theorem 4.14 of [MNa], Kirchberg [Kir00], and Theorem 3.3 of [ERRa], there exists an isomorphism ϕ x : As in the proof of Proposition 6.3 of [ERRa], Corollary 5.3 of [ERRa] implies that η e A x and η e B x are full extensions, and thus also the extensions with Busby maps η e B x • β [m,x) and ϕ x • η e A x are full. 
Since the extensions are non-unital and B([x, ∞)) satisfies the corona factorization property, there exists a unitary u x ∈ M(B([x, ∞))) such that where u x is the image of u x in the corona algebra (this follows from [EK01] and [KN06]). Hence, by Theorem 2.2 of [ELP99], there exists an isomorphism η x : Since A( X ix ) and B( X ix ) have linear ideal lattices, this induces an isomorphism So now by construction, for all x ∈ X min , and for all j = 1, 2, . . . , n satisfying that A(X j ) is an AF algebra. Now we define an isomorphism θ from A(X \ {m}) to B(X \ {m}) as the direct sum of the ψ x 's and β Xj 's. We get that (from Lemma 3.3 and Lemma 3.5) where the θ j 's denote the corresponding ψ x 's and β Xj 's. Hence, by Theorem 2.2 of [ELP99], A ∼ = B. If X = ∅ the result is due to Elliott's classification result [Ell76], and if X = {m} the theorem follows easily by making modifications to the above proof. Remark 3.8. Let A and B be graph C * -algebras that are C * -algebras over X, so that A(X i ) and B(X i ) are tight C * -algebras over X i , for i = 1, 2, . . . , n. Assume that This follows from the proof above. The above extensions are essential, e.g., if A({x i }) is the least ideal of A({x i , m}), for all i = 1, 2, . . . , n, and the remark applies to the cases 3 (a) 4.E.1, where we view the algebra A that is tight over the space 4.E as a C * -algebra over a ← b → c as indicated by the assignment b → a ← b → c. (b) 4.1E.1 and 4.1E.3, where we view the algebra A that is tight over the space 4.1E as a C * -algebra over a ← b → c as indicated by the assignment (c) 4.3E.1, where we view the algebra A that is tight over the space 4.3E as a C * -algebra over a ← b → c as indicated by the assignment The following proposition follows from the results in [ET10]. Proposition 3.9. Let A be a graph C * -algebra with exactly one nontrivial ideal I. Using the UCT for accordion spaces (see [MNa] and [BK]) and for many other four-point spaces under the added assumption of real rank zero as described in [ARR12], the cases 3.6.2, 3.6.3, 4.38.8, 4.38.9, 4.38.B, can be classified using the following theorem. Theorem 3.10. Let A and B be graph C * -algebras that are tight C * -algebras over X, with X i being a singleton, for each i = 1, 2, . . . , n. Suppose there exists an isomorphism α : Proof. If A({m}) is an AF algebra, the result follows from Theorem 3.7. Suppose A({m}) is an O ∞ -absorbing simple C * -algebra and that A and B are stable C * -algebras. Then by Lemma 3.5 and Proposition 3.9, π i • η e A : A({m}) → Q(A(X i )) and π i • η e B : B({m}) → Q(B(X i )) are full extensions, for all i = 1, 2, . . . , n. Hence, by Lemma 3.6, η e A and η e B are full extensions. The theorem now follows from the results of [ERRa]. 3.2. Primitive ideal space with n minimal elements. Assumption 3.11. For this subsection, let n > 1 be a fixed integer, and let X i = X li for i = 1, 2, . . . , n, where l 1 , l 2 , . . . , l n are fixed positive integers. Let, moreover, X = {M } ⊔ X 1 ⊔ X 2 ⊔ · · · ⊔ X n and define a partial order on X as follows. The element M is the greatest element of X, and for each i = 1, 2, . . . , n, if x, y ∈ X i then x ≤ y in X if and only if x ≤ y in X i . No other relations are between the elements of X. Lemma 3.12. Let A be a tight C * -algebra over X and let Y ∈ O(X \ {M }) be given. Consider the extensions Proof. Note that the diagram Lemma 3.13. Suppose the following diagram of C * -algebras with short exact rows is commutative Proof. We first prove (1). 
Let x ∈ E 1 and y ∈ E 2 such that 0 ≤ y ≤ ϕ 1 (x). Since ϕ 2 (A 1 ) is a hereditary sub-C * -algebra of A 2 , we have that there exists z ∈ ϕ 1 (E 1 ) such that π 2 (y) = π 2 (z). Thus, y − z ∈ B. Since the map on the ideals is the identity, we have that y − z ∈ ϕ 1 (E 1 ). Hence, y ∈ ϕ 1 (E 1 ). Therefore, ϕ 1 (E 1 ) is a hereditary sub-C * -algebra of E 2 . We now prove (2). Let x ∈ E 2 . Since ϕ 2 (A 1 ) is full in A 2 , there exists y in the ideal of E 2 generated by ϕ 1 (E 1 ) such that x − y ∈ B. Since the map on the ideals is the identity, we have that y − z ∈ ϕ 1 (E 1 ). Hence, x is in the ideal of E 2 generated by ϕ 1 (E 1 ). Lemma 3.14. Let e : 0 → I → A → n k=1 A k → 0 be an extension and let ι k : A k → n k=1 A k be the inclusion. Suppose η e • ι k is full for each k. Then η e is full. Proof . Let (a 1 , a 2 , . . . , a n ) be a nonzero positive element in n k=1 A k . Without loss of generality, we may assume that a 1 = 0. Note that ideal in Q(I) generated by η e (a 1 , . . . , a n ) contains the ideal in Q(I) generated by η e • ι 1 (a 1 ). Since η e • ι k is full, we have that the ideal in Q(I) generated by η e • ι 1 (a 1 ) is Q(I). Thus, the ideal in Q(I) generated by η e (a 1 , . . . , a n ) is Q(I). The following result applies to the cases 3. Note that A(X ) and B(X ) are AF algebras. Since α X : K 0 (A(X )) → K 0 (B(X )) is a positive isomorphism, there exists an isomorphism β : A(X ) → B(X ) such that K 0 (β) = α X (by Elliott's classification result [Ell76]). Since A(X ) and B(X ) are AF algebras and β is an X -equivariant isomorphism, we have that K 0 (β Y ) = α Y for all Y ∈ LC(X) such that Y ⊆ X . In particular, Since β is an X -equivariant isomorphism, by Lemma 3.12 above and Theorem 2. be the canonical projections. Note that the range of η e A • ι A,X \{M} and the range of η e A • ι A,X are orthogonal and the range of η e B • ι B,X \{M} and the range of η e B • ι B,X are orthogonal. Moreover, We claim that there exist full hereditary sub-C * -algebras E 1 and E 2 of A and B, respectively, such that E 1 ∼ = E 2 . Then by Theorem 2.8 of [Bro77], Choose full projections p 1 , q 1 ∈ A(X ) and p 2 , q 2 ∈ A(X \ {M }) such that p 1 + p 2 is orthogonal to q 1 + q 2 in A(X \ {M }) (to do this, we use stability, and that graph algebras with finitely many ideals satisfies Condition (K) and hence are of real rank zero). Therefore, η e A (p 1 + p 2 ) = 1 Q(A({M})) since η e A (p 1 + p 2 ) is orthogonal to η e A (q 1 + q 2 ). Set e 1 = ψ(p 1 ), e 2 = β X \{M} (p 2 ), f 1 = ψ(q 1 ), and f 2 = β X \{M} (q 2 ). Then e 1 + e 2 and f 1 + f 2 are nonzero orthogonal projections. So, η e B (e 1 + e 2 ) = 1 Q(B({M})) . Set is an AF algebra, by Corollary 2.11 of [Zha91] f lifts to a projection f ′ in M(B({M })). Note that there exists an isomorphism γ from 3.14, pp. 147 of [BB]). Thus, we have an isomorphism is commutative. By Corollary 5.6 of [ERRa], η e A • ι A,Xi and η e B • ι B,Xi are full extensions for each i = 1, 2, . . . , n with X i being O ∞ -absorbing (i.e., X i ⊆ X ). Thus, by Lemma 3.14, η e A • ι A,X and η e B • ι B,X are full extensions since A(X ) = i∈{1,2,...,n},Xi⊆X A(X i ) and B(X ) = i∈{1,2,...,n},Xi⊆X B(X i ). Since η e A (p 1 + p 2 ) = 1 Q(A({M})) and η e B (e 1 + e 2 ) = 1 Q(B({M})) and since β {M} and ψ are isomorphisms, we have that β {M} • η e A • ι A,X • j (p 1 ) = f and η e B • ι B,X • ψ • j (p 1 ) = f . Thus, η e1 (p 1 ) and η e2 (p 1 ) are not equal to 1 Q(f ′ B({M})f ′ ) . Therefore, e 1 and e 2 are non-unital full extensions. 
Since which is invertible, and since γ is an isomorphism, we have that Let a 1 ∈ p 1 A(X )p 1 and a 2 ∈ p 2 A(X \ {M })p 2 . Then Hence, Note that the Busby invariant of the extension (1), Theorem 2.2 of [ELP99], and the five lemma, E 1 ∼ = E 2 . By Lemma 3.13, E 1 is isomorphic to a full hereditary sub-C * -algebra of A and E 2 is isomorphic to a full hereditary sub-C * -algebra of B. We have just proved the claim. If X = ∅ the result is due to Elliott's classification result [Ell76], and if X \ {M } = ∅ the theorem follows easily by making modifications to the above proof. Remark 3.16. Let A and B be graph C * -algebras satisfying Condition (K) that are C * -algebras over X such that each of A(X i ), B(X i ) are either AF algebras or O ∞absorbing and such that A(X i ) and B(X i ) are tight C * -algebras over X i , whenever A(X i ) and B(X i ) are O ∞ -absorbing. Assume that there exists an isomorphism α : FK + X (A) → FK + X (B). Assume moreover, that A({M }) is an AF algebra and that for every ideal I of A, we have that I ⊆ A({M }) or A({M }) ⊆ I. Then A ⊗ K ∼ = B ⊗ K. This follows from the proof above together with Corollary 5.6 of [ERRa] and applies to the cases 4 (a) 4.1E.4 and 4.1E.C, where we view the algebra A that is tight over the space 4.1E as a C * -algebra over a → b ← c as indicated by the assignment (b) 4.1F.4 and 4.1F.C, where we view the algebra A that is tight over the space 4.1F as a C * -algebra over a → b ← c as indicated by the assignment The following result resolves the cases 3. 3.2, 3.3.3, 4.A.1, 4.A.3, 4.A.7. Theorem 3.17. Let A and B be graph C * -algebras that are tight C * -algebras over X, with X i being a singleton, for each i = 1, 2, . . . , n. Suppose there exists an isomorphism α : FK + X (A) → FK + X (B) such that α lifts to an invertible element in KK (X; A, B). Proof. Note that we may assume that A and B are stable C * -algebras. If A({M }) is an AF algebra, then the theorem follows from Theorem 3. Suppose A({M }) is O ∞ -absorbing. Then B({M }) is O ∞ -absorbing. Hence, by Proposition 3.9 and Lemma 3.14, the extensions are full extensions. The theorem now follows from the results of Theorem 4.6 of [ERRa]. A pullback technique The main idea of this section is to write the algebra as a pullback of extensions we can classify coherently. The problem is, that classification usually does not give us unique isomorphisms on the algebra level. But when the quotient is an AF algebra we can in certain cases use that the KK -class of the isomorphism is unique. The main idea here is similar to the main idea of Section 3. Assume that there are isomorphisms ϕ A : A 1 → A 2 , ϕ B : B 1 → B 2 and ϕ C : C 1 → C 2 , such that the following diagram commutes: Then we get a canonically induced isomorphism from P 1 to P 2 . Proof. The existence of the * -homomorphism from P 1 to P 2 follows from the universal property of the pullback. That this * -homomorphism is an isomorphism also follows from the universal property. Proof. This follows from Proposition 3.1 of [Ped99] by noting that we have a commuting diagram with short exact rows. Assume also Z = ∅ and that A(Z) is an AF algebra. is O ∞ -absorbing, then we assume that: (a) There exist two disjoint clopen subsets Proof. We may assume that A and B are stable C * -algebras. Note that for each Note that the diagram is commutative with short exact rows and columns, analogously for B. 
If both A(O 0 ) and A(O 1 ) are AF algebras, then it follows from the permanence properties of AF algebras that A is an AF algebra, and thus also B. In this case the theorem follows from Elliott's classification result [Ell76]. Now assume that A(O 0 ) is an AF algebra and that A(O 1 ) is O ∞ -absorbing. Let Z 1 1 = Z \ Y 2 1 and Z 2 1 = Y 2 1 . Then Z 1 1 and Z 2 1 are locally closed subsets of X, and Z is the disjoint union of Z 1 1 and Z 2 1 . Since A(Y 0 ) and B(Y 0 ) are extensions of AF algebras, these are themselves AF algebras. Since α Y0 : K 0 (A(Y 0 )) → K 0 (B(Y 0 )) is a positive isomorphism, there exists an isomorphism β : A(Y 0 ) → B(Y 0 ) such that K 0 (β) = α Y0 (by Elliott's classification result [Ell76]). Since A(Y 0 ) and B(Y 0 ) are AF algebras and β is an is an isomorphism, we also have an isomorphism . So by Theorem 4.14 of of [MNa], Kirchberg [Kir00], and Theorem 3.3 of [ERRa], there exists an isomorphism ϕ : in KK 1 (A(Z 1 ), B(O 1 )), since KK (β Z 1 ) is the unique lifting of α Z 1 . As in the proof of Proposition 6.3 of [ERRa], Corollary 5.3 of [ERRa] implies that η e A and η e B are full extensions, and thus also the extensions with Busby maps η e B • β Z 1 and ϕ • η e A are full. Since the extensions are non-unital and B(O 1 ) satisfies the corona factorization property, there exists a unitary u ∈ M(B(O 1 )) such that where u is the image of u in the corona algebra (this follows from [EK01] and [KN06]). Hence, by Theorem 2.2 of [ELP99], there exists an isomorphism η : and analogously for B, we get an isomorphism from 0 which is equal to β Z on the quotient. Now the theorem follows from Lemma 4.2 and Lemma 4.1. Now assume instead that both I and J are O ∞ -absorbing. The proof is similar to the case above. Instead of lifting α Y0 : Ad hoc methods In this section we present arguments which resolve the classification question for some examples of tempered ideal spaces which are not covered by the general results above. Most of the results are based on knowing strong classification for smaller ideal spaces, as explained below. Our results of this nature, presented in [ERRc], are of a rather limited scope, and require restrictions on the K-theory, requiring the K-groups to be finitely generated, or even for the graph C * -algebra to be unital. We will see this idea in use in a very clear form in the two open cases for three primitive ideals (cf. Section 5.1) and in more complicated four-point cases. Our starting point is Theorem 5.1. Let A 1 and A 2 be graph C * -algebras that are tight C * -algebras over a finite T 0 -space X and let U ∈ O(X) be non-empty. Let e i be the extension (1) e i is a full extension; (2) there exists an invertible element α ∈ KK (X; A 1 , A 2 ); and Then Proof. By (3), there exists an isomorphism ϕ Y : It follows from (1) that e i are essential, so by [ERRa,Theorem 3 . Hence, by [ERRa, Proposition 6.1 and Lemma 4.5], we have that A 1 ⊗ K ∼ = A 2 ⊗ K. Definition 5.2. For a T 0 topological space X, we will consider classes C X of separable, nuclear C * -algebras in the bootstrap category of Rosenberg and Schochet N such that (1) any element in C X is a C * -algebra over X; (2) if A and B are in C X and there exists an invertible element α in KK (X; A, B) which induces an isomorphism from FK + X (A) to FK + X (B), then there exists an isomorphism ϕ : A → B such that KK (ϕ) = α X , where α X is the element in KK (A, B) induced by α. Remark 5.3. 
Let X be a finite T 0 -space, let U be an open subset of X, and let C U and C X\U be classes of C * -algebras satisfying the conditions of Definition 5.2. If A 1 and A 2 are separable C * -algebras such that A 1 (U ), A 2 (U ) ∈ C U and A 1 (X \ U ), A 2 (X \ U ) ∈ C X\U , then (3) in Theorem 5.1 holds. Let C X and C Y be classes of C * -algebras satisfy the conditions in Definition 5.2. Let C X⊔Y be the classes of C * -algebras consisting of elements A ⊕ B with A ∈ C X and B ∈ C Y . Then C X⊔Y satisfies the conditions in Definition 5.2. Remark 5.4. Here we will provide some examples of classes satisfying the conditions in Definition 5.2. (1) By [Kir00], the class all stable, nuclear, separable, O ∞ -absorbing C * -algebras that are tight over a finite T 0 -space satisfy the conditions in Definition 5.2. By [ERRc, Corollary 3.10 and Theorem 3.13] and by the results of [EK], the following classes of C * -algebras satisfies the conditions in Definition 5.2. (2) Let C Xn be the class of nuclear, separable, tight C * -algebras A over X n such that A is stable, A({n}) is a Kirchberg algebra, A([1, n − 1]) is an AF-algebra, and K i (A[Y ]) is finitely generated for all Y ∈ LC(X n ). (3) Let C ′ X2 be the class of unital graph C * -algebras with exactly one non-trivial ideal with the ideal being an AF algebra and the quotient O ∞ -absorbing, simple C * -algebras. Let C X2 be the class of C * -algebras A such that A ∼ = B ⊗ K for some B ∈ C ′ X2 . By [Ell76], the following class of C * -algebras satisfy the conditions in Definition 5.2. (4) Let C X be the class of stable AF-algebras over X. Linear spaces. This case is solved in [ERRc], and the reader is referred there for details. However, since this is the most basic case in which our approach via Theorem 5.1 is applied, we will explain the methods for the benefit of the reader. Lemma 5.5. Let A be a graph C * -algebra such that A is a tight C * -algebra over X n . Proof. In [ERRc], we prove (i) and (ii). We now prove (iii). Note that is full since this is an essential extension and , the extension in (iii) is full by [ERRa,Proposition 5.4]. To solve the cases 3.7.5 and 4.3F.9, we now argue as follows: Theorem 5.6. Let A 1 and A 2 be graph C * -algebras that are tight C * -algebras over is an AF-algebra; and (iii) the K-groups of A i are finitely generated. Note now that x induces invertible elements r has a smallest ideal A i ({n}) which is O ∞ -absorbing and the quotient A i ([2, n − 1]) is an AF algebra. By Theorem 3.9 of [ERRc], there exists an isomorphism Xn (x). We have just shown that Assumption (3) of Theorem 5.1 holds. By Theorem 5.1, we can conclude that A 1 ⊗ K ∼ = A 2 ⊗ K. Proof. First note that the extension 0 → I⊗K → A⊗K → A/I⊗K → 0 is essential. Hence, in the case 4.F.x for x = 3, 5, 7, 9, B, D the extension is full since I ⊗ K is a simple, purely infinite, stable C * -algebra, which implies that Q(I ⊗ K) is simple. If A is unital and Y is the space 4.F.x for x = 2, 4, and C, then the extension is full since in this case I ∼ = K and Q(K) is simple. We are left with showing the extension is full for the case 4.F.A. This case follows from [ERRa, Proposition 5.4 and Corollary 5.6]. Lemma 5.9. Let A be a graph C * -algebra with tempered signature 4.3F. x for x = 5, 6, A, D. Then the ideal lattice of A is 0 I 1 I 2 I 3 A and the extension Proof. We will for show that e : 0 → I 2 ⊗ K → I 3 ⊗ K → I 3 /I 2 ⊗ K → 0 is a full extension. By Lemma 5.5, e is a full extension for x = 5, A, D. Consider the case x = 6. 
Note that I 2 and I 3 /I 1 are isomorphic to non-AF graph C * -algebras with exactly one nontrivial ideal. Therefore, by Proposition 3.9, (1) If A is unital, then the extension 0 → I ⊗ K → A ⊗ K → A/I ⊗ K → 0 is full. Proof. Suppose A is unital. Using the general theory of graph C * -algebras with this specific ideal structure, we have that I is stable. Since A/I is simple and unital, the conclusion now follows from [ERR09, Lemma 1.5 and Proposition 1.6]. We now prove the extension 0 → I ⊗ K → A ⊗ K → A/I ⊗ K → 0 is always full for the spaces 4.39.x with x = 9, B, C, D. Note that I = I 1 ⊕ I 2 with I 1 simple and I 2 a tight C * -algebra over X 2 . By [ERRc,Lemma 4.5] and [ERRa, Corollary 5.3 and Corollary 5.6], we have 0 → I 2 ⊗ K → A/I 1 ⊗ K → (A/I) ⊗ K → 0 is full. Since A/I 2 ⊗ K is a non-AF graph C * -algebra with exactly one nontrivial ideal, the extension 0 → I 1 ⊗ K → A/I 2 ⊗ K → A/I ⊗ K → 0 is a full extension (cf. Proposition 3.9). Thus, by Lemma 3.6, 0 → I ⊗ K → A ⊗ K → A/I ⊗ K → 0 is full. Using the above lemmas and the Universal Coefficient Theorem of Bentmann and Köhler [BK], we get the following cases: Corollary 5.11. Let A and B be graph C * -algebras that are tight over a finite accordion space X. Assume that there exists an isomorphism from FK + X (A) to FK + X (B Proof. By the above lemmas, all the extensions are full. Note that the specified ideal and quotient for each space belongs to classes of C * -algebras satisfying the conditions in Definition 5.2. Hence, the result now follows from Theorem 5.1 and the UCT for accordion spaces. Lemma 5.12. Let A be a graph C * -algebra with tempered signature 4.1F. x for x = 2, 5, 6, 7, or D, and let I 1 be the smallest ideal of A and let I 2 be the ideal of A containing I 1 such that I 2 /I 1 is simple. Lemma 5.13. Let A be a graph C * -algebra with tempered signature 4.3E.x for x = 3, 4, 5, 9, B, or D, and let I 1 and I 2 be the minimal ideals of A. We now prove the extension is full for the case x = 3. Note that in this case I 1 ⊗ K and I 2 ⊗ K are purely infinite, simple C * -algebras. Let I be the ideal of A containing (I 1 ⊕I 2 ) such that I/(I 1 ⊕I 2 ) is simple. By Lemma 3.5 and Lemma 3.6, 0 → (I 1 ⊕I 2 )⊗K → I⊗K → I/(I 1 ⊕I 2 )⊗K → 0 is a full extension. The conclusion now follows from [ERRa, Proposition 5.4] since I/(I 1 ⊕ I 2 ) ⊗ K is an essential ideal of A/(I 1 ⊕ I 2 ) ⊗ K. Suppose x = 9 and A is unital. Then I i is either K or a stable, purely infinite, simple C * -algebra. Let I be the ideal containing I 1 ⊕ I 2 such that I/(I 1 ⊕ I 2 ) is simple. Note that the signature of I is 3.6. By Lemma 3.5, the push forward extension of the extension 0 → (I 1 ⊕ I 2 ) ⊗ K → I ⊗ K → I/(I 1 ⊕ I 2 ) ⊗ K → 0 via the coordinate projection (I 1 ⊕ I 2 ) ⊗ K → I i ⊗ K is essential, and hence full since Q(I i ⊗ K) is simple. Thus, by Lemma 3.6, 0 → (I 1 ⊕ I 2 ) ⊗ K → I ⊗ K → I/(I 1 ⊕ I 2 ) ⊗ K → 0 is full. By [ERRa, Proposition 5.4], 0 → (I 1 ⊕ I 2 ) ⊗ K → A ⊗ K → A/(I 1 ⊕ I 2 ) ⊗ K → 0 is a full extension since I/(I 1 ⊕ I 2 ) is an essential ideal of A/(I 1 ⊕ I 2 ). Lemma 5.14. Let A be a graph C * -algebra with tempered signature 4.3E.7. Let I be the ideal of A such that A/I is simple. Then 0 → I⊗ K → A ⊗ K → A/I⊗ K → 0 is a full extension. Proof. Let I 1 and I 2 be the minimal ideals of A which is contained in I. Since I/(I 1 + I 2 ) is a non-unital, purely infinite, simple C * -algebra, we have that 0 → I/(I 1 + I 2 ) ⊗ K → A/(I 1 + I 2 ) ⊗ K → A/I ⊗ K → 0 is a full extension. 
The conclusion of the lemma now follows from Corollary 5.3 of [ERRa].

Lemma 5.15. Let A be a graph C*-algebra with tempered signature 4.1F.E, and let I be the smallest ideal of A. Then 0 → I ⊗ K → A ⊗ K → A/I ⊗ K → 0 is a full extension.

Proof. Let I_1 be the ideal of A such that I_1 contains I and I_1/I is simple. Since I_1 is stably isomorphic to a non-AF graph C*-algebra with exactly one nontrivial ideal, we have that 0 → I ⊗ K → I_1 ⊗ K → I_1/I ⊗ K → 0 is full. Since I_1/I is an essential ideal of A/I, the conclusion of the lemma follows from Proposition 5.4 of [ERRa].

Using the above lemmas and the results of [ARR12], we get the following.

Proof. By the above lemmas, all the extensions are full. Note that the specified ideal and quotient for each space belong to classes of C*-algebras satisfying the conditions in Definition 5.2. Hence, the result now follows from Theorem 5.1.

O-shaped spaces.

Lemma 5.17. Let A be a graph C*-algebra that is a tight C*-algebra over the O-shaped space 4.3B.7. Let I be the smallest ideal of A and let I_1 and I_2 be the ideals of A which contain I such that I_k/I is simple. Then 0 → (I_1 + I_2) ⊗ K → A ⊗ K → A/(I_1 + I_2) ⊗ K → 0 is a full extension.

Proof. Note that A/I is a tight C*-algebra over the space 3.6.5. Then by Lemma 3.6, 0 → (I_1 + I_2)/I ⊗ K → A/I ⊗ K → A/(I_1 + I_2) ⊗ K → 0 is a full extension since I_1/I and I_2/I are purely infinite, simple C*-algebras. Also, since I is an essential ideal of I_1 + I_2 and since I is a purely infinite, simple C*-algebra, we have that 0 → I ⊗ K → (I_1 + I_2) ⊗ K → (I_1 + I_2)/I ⊗ K → 0 is a full extension. The conclusion of the lemma now follows from Proposition 3.2 of [ERR10] since A/(I_1 + I_2) is simple.

Lemma 5.18. Let A be a graph C*-algebra that is a tight C*-algebra over the O-shaped space 4.3B.E. Let I be the smallest ideal of A. Then 0 → I ⊗ K → A ⊗ K → A/I ⊗ K → 0 is a full extension.

Proof. Let I_1 and I_2 be the ideals of A which contain I such that I_k/I is simple. Since I_k ⊗ K is isomorphic to a graph C*-algebra with exactly one non-trivial ideal and I_k ⊗ K is not an AF algebra, by Proposition 3.9 we have that 0 → I ⊗ K → I_k ⊗ K → I_k/I ⊗ K → 0 is a full extension. By Lemma 3.14, 0 → I ⊗ K → (I_1 + I_2) ⊗ K → (I_1 + I_2)/I ⊗ K → 0 is a full extension. The conclusion of the lemma now follows from Proposition 5.4 of [ERR09] since (I_1 + I_2)/I ⊗ K is an essential ideal of A/I.

Using the above lemmas and the results of [ABK], we get the following cases:

Corollary 5.19. Let A and B be graph C*-algebras that are tight over an O-shaped space X. Assume that there exists an isomorphism from FK^+_X(A) to FK^+_X(B). If A and B both have tempered signature 4.3B.7 or 4.3B.E, then A ⊗ K ≅ B ⊗ K.

Proof. By the above lemmas, all the extensions are full. Note that the specified ideal and quotient for each space belong to classes of C*-algebras satisfying the conditions in Definition 5.2. Hence, the result now follows from Theorem 5.1.

Summary of results

In this final section, we index our results. Cases that are open are indicated by "?". Cases that are solved in general are marked by "√", and if we need to impose the conditions of finitely generated K-theory or unitality, this is indicated by "√ f.g." or "√ 1", respectively.

6.1. One point spaces. Having nothing new to add, we include the simple case only for completeness.

1.0.x   0   √   Theorem 2.1

6.4. Four point spaces. In this section, we present our results for the case of four primitive ideals.
As will be obvious below, the strength of our results varies dramatically with the nature of the spaces. In general, we can say quite a lot about all spaces apart from 4.E, 4.1E, and 4.3B. It may be interesting to note what makes these spaces difficult to handle; indeed the case 4.E is an accordion space in which a general UCT is known to hold, but it differs from the other accordion spaces by having poor separation properties when it comes to establishing fullness. The O-shaped spaces are also hard to separate fully, but have the added difficulty that no general UCT is known for them.
Enhance Unobservable Solar Generation Estimation via Constructive Generative Adversarial Networks

Jingyi Yuan and Yang Weng

Abstract-Power distribution grids experience a proliferation of solar photovoltaics (PV) at the system edge. However, the counterpart of sparse meter deployment provides insufficient monitoring of PVs, whose potential violations challenge the operators for energy management and stable operation. Some previous works use satellite imagery to detect distributed PVs because of the easy access to such data. However, their PV localization methods rely on label-rich areas with a unitary background/environment to work well; moreover, they do not provide precise metered-PV detection and quantification for estimating PV generation output in unobservable areas, which is essential to protect the edge from excessive two-way power flow and other violations. Thus, we combine the two steps of detecting PV existence and quantifying PV amount into one classification task. To boost the classification performance in unobservable edge areas, we construct a generative adversarial network that simultaneously augments the diversity of labelled PV satellite images and embeds distinct PV characteristics/features for training the classifier. Furthermore, the PV localization and quantification result is combined with geographic information, historical weather conditions, and neighboring generation patterns to estimate power output at the system edge. We validate the proposed approaches on PV systems in the southwest of the U.S. Experimental results show high accuracy and robustness in predicting distributed solar power without sufficient prior information.

Index Terms-Distributed PV forecast, diversified data sources, structured learning, weak supervised learning, variational GANs.
I. INTRODUCTION

The sustainable and inexhaustible solar energy is one of the fastest increasing renewable resources in the smart power grid. For example, research estimates that 150-530 GW of cumulative solar-based power will potentially be available in the U.S. by 2040 [1]. Unlike conventional energy sources with scheduled power output, PV generation depends on various temporal and spatial factors, e.g., weather, atmospheric conditions, and installation position and quantity [2]. They naturally bring variability and uncertainty, leading to bidirectional power flow and frequent fluctuations of voltages and currents in distribution grids [3], [4]. To maintain the safety and reliability of the grid, distribution system operators (DSOs) require accurate information on solar panel locations and PV generation forecasts for system remodeling and predictive energy management [5], [6], [7]. Moreover, the foreseeable future of PV energy-sharing and its economics in urban areas raises a high demand for precise and easily accessible information on distributed PV systems [8], [9].

For PV generation forecasting, existing approaches can be divided into two folds: physical model-based methods and data-driven approaches [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20]. Physical model-based methods rely on numerical weather prediction (NWP) or satellite imagery to analyze the atmospheric conditions for solar irradiance, with which the power output is computed using physical characteristics [10], [13]. Some of the data-driven approaches directly estimate PV outputs from historical data, primarily extracting statistical properties from PV measurements for new predictions [11], [12]. Machine learning models have been developed to consider highly correlated factors to characterize solar irradiance and predict PV generation. Such methods require complete information on PV locations, panel numbers, solar meter measurements, etc. However, in power distribution grids, the scattered PV generation data may come from different sources, including solar panel/inverter manufacturers, PV system development companies, utilities, and residential/commercial consumers. The methods mentioned above rely on timely data aggregation from these different sources, which requires constant and intensive manual effort [21], [22]. Thus, the knowledge is often incomplete or unavailable, especially for residential-level consumers. For example, the National Renewable Energy Laboratory (NREL) runs the Open PV Project to track distributed PV installations. The project relies on voluntary surveys and self-reports to provide a general understanding of PV distribution but is still unreliable in precision. Moreover, the database easily becomes outdated due to the rapid growth and widespread installation of PVs [23].
Instead of relying on multiple PV data sources, another group of data-driven methods uses smart meter data to detect unauthorized PV installations and estimate behind-the-meter PV generation [14], [15], [16], [17], [18], [19], [20]. These methods are summarized in Fig. 1. Specifically, smart meters record the net load data of customers, which is the conventional load minus the PV generation. While the PV itself is invisible, different model-free methods are adopted to uncover its existence and generation output. For example, the change points in historical measurements can be detected, verified, and estimated, but the unsupervised method requires predefined hyper-parameters to work properly [14]. The supervised learning methods in [17], [20] need sufficient labeled data to fit an accurate estimator, especially for deep neural networks. In [16], [19], only net load data is used to disaggregate the unknown PV generation based on inherent temporal and spatial correlations. However, distribution grids may have unobservable areas on the edge, for which we have no access to complete smart meter data [9].

(Fig. 1: A summary of model-free PV detection and generation estimation works using smart meter data.)

While the primitive information in distribution grids is limited, it can be inferred from extra public data sources that are easy to access, e.g., satellite images of PVs. Previous efforts have led to several approaches for identifying PV locations from a geographic information system (GIS), which is an image classification task [24], [25], [26], [27]. However, the informative image data has complex structures that make precise inference difficult, and the insufficient labeled data in practice makes the classification method hard to implement. For example, [24] and [25] train and test on similar PV system images without considering significant variations in solar panel positions, textures, numbers, and backgrounds. Though the rooftop solar detection tools based on convolutional neural networks (CNNs) claim better feature extraction for classification [26], [27], they rely on abundant hand-annotated data sets and are unclear about how to obtain the PV coordinates effectively [26], [27], [28]. Hence, these approaches are limited in solving the problem of locating and quantifying the distributed PVs in raw satellite images on their own.

Furthermore, a precise PV generation estimate helps maximize the information gain for DSOs. In the literature, [29] considers the solar irradiation over an extended time period at a particular region to estimate generation. However, it is limited in addressing the concern of feature limitation. Also, [30] introduces a support vector machine (SVM) as a base learner together with a meta learner based on the K-means algorithm, which clusters the training set for predicting short-term solar power generation. Such a methodology requires labeled historical PV data of the same solar power system, which is also the assumption of deep learning models such as probabilistic neural networks (PNNs) [31]. Therefore, previous methods find it difficult to extrapolate information and provide a solution when some PVs do not come with complete historical recordings.
To address these problems, we propose to utilize different data sources of publicly available information together with utility-owned measurements to enhance PV localization and generation forecasting. Specifically, we use available GIS and satellite image data for PV detection and quantification. While existing works have also utilized such data sources, they have not considered the practical problem of labeled data deficiency as the first step, not to mention generation estimation. Therefore, we enhance PV localization and quantification by designing weak-supervised generative adversarial networks (GANs). On the input side, we provide backbone structures as informative inputs to the generative model. The proposed model not only generates diversified labeled data to address data deficiency but also embeds PV characteristics during generation to enable distinct PV image augmentation. The PV images are augmented for more accurate PV detection in the downstream task. Instead of separating the two tasks, we integrate them into one model as a feedback loop, which improves both data augmentation and PV detection simultaneously during training. Based on the detected PV information, we adopt the K-nearest neighbors method to estimate the best possible generation considering the relevant factors of solar irradiance. Fig. 2 shows an overview of the proposed method for generation predictability enhancement.

Our contribution lies in designing a constructive GAN to expand both the volume and diversity of labeled datasets for PV detection and in enhancing the generation estimation by utilizing effective features and neighboring generation patterns. Specifically, the learning performance of detection and quantification is boosted via specific knowledge embedding: 1) using diversified backbone structures (grey-scale images with different PV quantities and locations) as additional informative inputs; 2) integrating the evaluation of solar panel classification into the data augmentation process as guidance; 3) adopting both content and style losses to train the GAN model specifically for rooftop solar panel datasets.

II. FORMULATE LEARNING PROBLEM TO PREDICT DISTRIBUTED SOLAR POWER FROM (DIVERSIFIED) DATA

To predict widely distributed solar power, we need to accomplish three steps: 1) detect PV existence with respect to locations, 2) quantify the amounts, and 3) estimate the output generation. While we have no access to the direct information, it is inherently contained in diversified data sources, which we fully utilize to infer the power output of distributed PVs in this paper. The task of latent information discovery amounts to solving a comprehensive, structured machine learning problem. Specifically, we start with publicly available data sources (i.e., Google Earth) to sample raw satellite images (resolution of 4800 × 2987) with coordinates in a geographic information system (GIS). Since each raw image covers a large geographical area, we segment it into M = 400 pieces to zoom into rooftops.
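The tiling itself is a simple array operation; the following is a minimal sketch of ours (the 20 × 20 grid implied by M = 400 and the array layout are assumptions consistent with the text):

```python
import numpy as np

def segment_raw_image(x_raw, grid=20):
    """Cut a raw satellite image (H x W x 3 array) into grid*grid tiles,
    e.g. 20 x 20 = 400 rooftop-scale segments; any remainder pixels at
    the borders are dropped."""
    h, w = x_raw.shape[0] // grid, x_raw.shape[1] // grid
    return [x_raw[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(grid) for j in range(grid)]

# Example: a 4800 x 2987 raw image yields 400 tiles of roughly 240 x 149 pixels.
tiles = segment_raw_image(np.zeros((2987, 4800, 3), dtype=np.uint8))
assert len(tiles) == 400
```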
Fig. 3 shows one sample image, where only a few locations of the raw image have solar panels. With M = 400, less than 2% of the segments are labeled as 1. For segment i ∈ {1, 2, . . ., M}, the label q_i ∈ {0, 1, 2, . . ., k} indicates the number of solar panels, which is used in a multi-class classification for PV quantification. For the simplified PV detection task, the label is q^e_i ∈ {0, 1}, indicating the non-existence or existence of solar panels in an image. Moreover, y_t, for t = 1, . . ., T time points, is the generation of the corresponding solar power system based on the generation g^K_t of the K nearest neighbor PVs. For generation estimation, other available data include the temperature, humidity, and cloud cover with respect to the coordinates, which are the same location information used to sample the satellite images. Therefore, the problem setup is as follows.

- Problem: location detection, quantity assessment, and power generation estimation of rooftop PV systems.
- Given: 1) raw images x_raw from Google Earth and available labels q_i, 2) the known PV generation time series g^K_t of solar panel systems covered in the x_raw image, and 3) time-series weather data f^K_t of the locations covered in the x_raw image.
- Find: for a new single segmented image x_i, 1) the existence q^e_i of solar panels, 2) the quantity q_i of solar modules, and 3) the power generation time series y_{i,t}.

This paper aims to estimate distributed PV generation based on accurate PV location detection and quantity assessment. A robust image detector requires a large quantity of diversified data to train, while the known installation locations from the utility are limited to less than 2% of the total segments. To enlarge the labeled image data, an intuitive way is to use augmentation techniques (e.g., flip, rotate, extract patches, and transform color spaces) that operate on obvious data invariants [32]. For the solar panel case, we can rotate slices of images to cover different orientation angles of PV installations. This data augmentation is realistic, but it creates no new instances for data variability/diversity, so the detection model may easily overfit and perform poorly on unseen data. For diversified image augmentation, deep learning-based generative models, such as generative adversarial networks (GANs), are popular for generating samples that are similar to but varied from the existing instances. GANs benefit from the rich feature extraction of neural networks and the adversarial training scheme; however, the lack of learning guidance causes the model to collapse easily, which creates bad samples like clustered solar panel pixels and object mixtures. Moreover, the labelled data is augmented with the goal of improving the training of a classification model (PV detection and quantification), and the separation of the two steps can lead to propagation errors. To address these concerns, we aim to integrate both tasks into one compact learning model and train the entire model with mutual benefits to achieve high accuracy of PV classification against unbalanced data.
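Before turning to the proposed GAN, note that the generation-estimation step of the problem above can be prototyped with an off-the-shelf K-nearest-neighbors regressor over location, panel-count, and weather features. The sketch below is a baseline of ours consistent with the setup, not the paper's final estimator; the file names and feature layout are hypothetical:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Assumed feature layout per (site, time) sample:
# [latitude, longitude, panel_count, temperature, humidity, cloud_cover]
X_train = np.load("observed_site_features.npy")    # hypothetical file names
y_train = np.load("observed_site_generation.npy")  # metered PV output (kW)

# Distance weighting lets nearby sites with similar weather dominate.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_train, y_train)

# Estimate the output of an unobservable site whose panel count came
# from the satellite-image classifier.
x_new = np.array([[33.42, -111.94, 12, 305.0, 0.18, 0.10]])
print(knn.predict(x_new))
```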
III. PROPOSED VARIATIONAL GANS WITH WEAK SUPERVISION FOR PV DETECTION AND QUANTIFICATION

To address the concern of insufficient labeled data, we propose in this section an enhanced solar image classification based on weak supervision over GANs, as shown in the middle of Fig. 2.

A. Recall Basic GANs to Augment Image Samples

As mentioned in Section II, the goal of augmentation is to generate various solar images that mimic the original data in feature patterns while providing diversity. To achieve this, we train a generator G, a feed-forward neural network parametrized by θ, to produce new data x_aug, e.g., a rooftop solar panel image. x_aug is a random variable whose complex distribution is expected to be learned from the distribution of real data samples x_real. In basic generative adversarial networks (GANs), the generator captures the mapping from a random variable z to an image x_aug, where z is usually sampled from a Gaussian noise distribution of the same size as x_real. To enforce similar feature patterns in the augmented images, the generative model G is trained discriminatively against another neural network, a discriminator D. The discriminator, parameterized by φ, scores the outputs (comparing x_aug with x_real), computing the probability that an image comes from the real dataset. It aims to assign a high score to a real image x_real while assigning low scores to the generated image x_aug. Mathematically, the discriminator is realized as a classifier that maximizes the binary cross-entropy loss, increasing the distinguishability between real and fake data. In contrast, the goal of the generator G is to produce outputs that achieve high scores from the discriminator D, satisfying the constraints imposed by D in the process; therefore, G minimizes the loss. Thus, for training the GANs, the objective takes the expectation over the random variables as

L_GAN(θ, φ) = E_{x_real}[log D_φ(x_real)] + E_z[log(1 − D_φ(G_θ(z)))],   (1)

and the training optimizes over min_θ max_φ L_GAN. Such an adversarial training scheme can augment images on a larger invariant space, which is implicit for the data space.

GANs have become popular for solving power system problems because they adopt an adversarial strategy in generative learning to augment the diversity of labeled data. [33] first leveraged this characteristic to generate scenarios of renewable outputs, which can mimic diverse conditions and uncertainties to produce more renewable data. Similarly, a GAN model has been used to estimate the unknown power injection at unobservable loads based on the available historical measurements [34], [35]. While generation from Gaussian noise alone is insufficient to cover the need for specific data patterns, conditional inputs are used to restrict the generated data to a particular class, such as weather conditions of high wind, real-time system configurations of topology/admittances, and electricity market data [33], [34], [36], [37].
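The min-max objective in (1) translates directly into alternating gradient steps. A minimal PyTorch-style sketch of ours is given below; the network definitions are omitted, D is assumed to output sigmoid probabilities of shape (batch, 1), and the non-saturating generator loss is a common substitution for the inner minimization:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, x_real, z_dim=100):
    """One alternating update following (1): D maximizes the binary
    cross-entropy score, G is updated to fool D."""
    n = x_real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: push D(x_real) -> 1 and D(G(z)) -> 0.
    opt_D.zero_grad()
    z = torch.randn(n, z_dim)
    d_loss = (F.binary_cross_entropy(D(x_real), ones)
              + F.binary_cross_entropy(D(G(z).detach()), zeros))
    d_loss.backward()
    opt_D.step()

    # Generator step: push D(G(z)) -> 1 (non-saturating form).
    opt_G.zero_grad()
    g_loss = F.binary_cross_entropy(D(G(torch.randn(n, z_dim))), ones)
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```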
Therefore, we adopt the conditional setting of GANs to consider extra information together with the Gaussian noise. Specifically, y is an embedding variable that conditions the generative model on external information. In our case, solar panels naturally have a characteristic shape that distinguishes them from other objects, so we use grey-scale outlines for y. The joint inputs of y and noise z contribute to the replication of real PV images with both a "backbone" representation and a flexible background, as presented on the left-hand side of Fig. 4. Similar to the basic GANs, the generator improves during training to fool the discriminator into giving a relatively high score compared to the real images. Mathematically, the generator in conditional GANs learns the mapping {y, z} → x_aug, where y is the given information, z is the random noise, and x_aug is the output image. Meanwhile, the discriminator D is trained to distinguish between real and generated images, for which Fig. 4 illustrates the adversarial training scheme in the middle part. D is also conditional in this setting: it is fed with the concatenated image and the corresponding "backbone" information y. In implementation, we have compared the conditional and unconditional settings of D, and it appears that a conditional D better leads the augmented image x_aug to follow the PV characteristics. Thus, we have modified the objective for training the GAN in (1) to take the extra information into account:

L_Aug(θ, φ) = E_{x_real}[log D_φ(x_real | y)] + E_z[log(1 − D_φ(G_θ(z, y) | y))].    (2)

Despite the benefit, the input of grey-scale outlines reduces the information on PV characteristics, mainly embedding only shape and position. To compensate for the remaining information that is essential for PV classification, we consider regularizing the generation process with the limited but available labeled data. Although GANs do not use an explicit loss function, adding a traditional loss can benefit image generation. Motivated by [38], we consider a content loss and a style loss to retain consistency with PV characteristics at the pixel level and in the feature space. In the PV image generation task, the content loss is the mean square error between the generated image and the real image at the pixel level. Minimizing the content reconstruction loss recovers the detailed pixel information when a pair of grey-scale outline and ground-truth image is available. Notably, using the content loss alone yields only blurred, deterministic output images, which makes the conditional inputs of the GANs in (2) necessary for our task. Moreover, while the labeled PV image data is limited, we use style loss minimization to guarantee style consistency in the feature space among different images. Thus, the loss for image consistency of PV characteristics is

L_Im = α L_Content + β L_Style.    (3)

Similar to the content loss, the style loss is the error between the feature correlations expressed by Gram matrices, A_ij = Σ_{k=1}^{K} F_ik F_jk (F is the feature map of the image). The hyperparameters α and β are weights that determine the emphasis of the two losses.
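A sketch of the two consistency terms in (3) follows, assuming PyTorch; the feature extractor itself is abstracted away and all tensors are random stand-ins.

```python
# Sketch of the content (pixel MSE) and style (Gram-matrix MSE) losses in (3).
import torch
import torch.nn.functional as F

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix A_ij = sum_k F_ik F_jk for feature maps of shape (B, C, H, W)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # normalized for stability

def consistency_loss(x_aug, x_real, feats_aug, feats_real, alpha=1.0, beta=1.0):
    content = F.mse_loss(x_aug, x_real)                     # pixel level
    style = F.mse_loss(gram(feats_aug), gram(feats_real))   # feature space
    return alpha * content + beta * style

x_aug, x_real = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
feats_aug, feats_real = torch.rand(2, 8, 16, 16), torch.rand(2, 8, 16, 16)
loss_im = consistency_loss(x_aug, x_real, feats_aug, feats_real)
```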
C. Integrate Classification Feedback Into PV Image Augmentation for Mutual Benefits

The generator is enhanced by embedding prior knowledge of PV characteristics in both the inputs and the loss function. Besides, GANs use the discriminator to judge the quality of generated data by assigning scores. Although this discriminative learning scheme is beneficial, the DNN-based discriminator can be too general to control the generation process as expected. In our case, the specific goal of image generation/augmentation is to improve the accuracy of PV detection and quantification: the GANs aim at enlarging the data used to train the classifier for rooftop solar panels. Usually, data augmentation and classification are separate steps, where the former cannot receive performance feedback from the latter. This makes improvement based on causal reasoning difficult, and propagation errors between the steps may exist. To solve these problems, we propose to integrate the two steps in a loop, which guides the augmentation towards improving the classifier f_1. As Fig. 4 shows in the bottom half, we feed the augmented images from the generator into f_1 and include the classification loss L_Class (the cross-entropy of the classifier's output on the augmented images) during training, where η parameterizes the classifier network and P is the probability of x belonging to class 1, i.e., that a solar panel exists in the augmented image. In this way, the evaluation metric constrains the generation process. Since the classifier f_1 is integrated into the GAN structure, we regard it as a second discriminator that distinguishes augmented images which can improve PV classification.

In short, the final optimization is min_{θ,η} max_φ L_Aug + L_Im + λ L_Class, where λ is a hyperparameter (defaulted to 10 in this paper) that specifies the weight of the classification loss relative to the generator loss L_Aug. In this way, the labeled satellite images are effectively augmented to improve PV system detection.

Moreover, since we utilize task feedback as weak supervision, the PV classifier is trained by minimizing L_Class. During the training process, the generator and the classifier are updated in turn for multiple iterations until convergence. After learning, we simultaneously finish dataset augmentation and classification, so no extra step is needed to train an additional model for the solar panel detection task.

D. Specify Conditional Inputs to Represent Distinct PV Features

The self-enhancement and feedback control discussed above provide weak supervision for generating images of rooftop solar panels. Not only is increasing the volume of labeled data beneficial, but diversity is also essential. For example, effective classifier training expects images with different numbers of solar panels and various background objects. Although the available training dataset never has such instances, we introduce a randomization function to form the backbone, i.e., the grey-scale outline shown in Fig. 4. To enable it, we first apply different abstractions to the real labeled images, such as a Sobel filter, a semantic segmentor, a landmark extractor, and a color-specific filter [39], [40], after which we crop the basic backbone of the solar panel target. We then define t and s as the center and size (amount) of the single target, and r as the rotation degree if needed. The backbones of the diversified samples are constructed with different values of t, s, and r, and the reconstructed backbone images are fed into the proposed variational GANs.
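To illustrate the backbone diversification with (t, s, r), here is a minimal sketch using a synthetic rectangular outline in place of a real cropped panel backbone; the canvas size, module dimensions and placement rule are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of constructing a grey-scale "backbone" with centre t, amount s,
# and rotation r, as described in Section III-D.
import numpy as np
from scipy.ndimage import rotate

def make_backbone(canvas_hw=(128, 128), t=(64, 64), s=3, r=15.0) -> np.ndarray:
    canvas = np.zeros(canvas_hw, dtype=np.float32)
    panel = np.ones((10, 16), dtype=np.float32)
    panel[2:-2, 2:-2] = 0.0                       # keep only the outline
    for k in range(s):                            # s adjacent modules in a row
        y = t[0] - 5
        x = t[1] - 8 * s + 16 * k
        canvas[y:y + 10, x:x + 16] = np.maximum(canvas[y:y + 10, x:x + 16], panel)
    return rotate(canvas, angle=r, reshape=False, order=0)

backbone = make_backbone(t=(64, 64), s=3, r=20.0)  # one diversified sample
```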
E. Proposed GAN Architectures and Optimization for PV Classification

The previous designs involve different deep learning functions to enhance PV detection and quantification. In this section, we specify the model architecture of each function and illustrate the training setup.

1) Generator G for Image Generation: To embed PV location and shape, we input the conditional "backbones" to the generator. The desired output should not only retain the localized information from the grey-scale image but also reach high contextual precision for classification. To enable such a transformation between high-dimensional inputs and outputs, we adopt the U-shaped architecture [41]. Its upsampling layers balance accurate object localization and context usage, which is capable of generating distinct features of solar panels.

2) Discriminator D With Markovian Consistency: As consistency at both the pixel level and in the feature space is important for solar panel images, the discriminator follows the same rule. Different from a regular classifier, the whole image (real or generated) is scored identically and independently in units of patches, each an N × N (N ∈ {1, 16, 70} [42]) square of the image. A smaller N focuses on the pixel level to benefit color representation, and a larger value sharpens spatial statistics across features. For example, N = 1 is the special pixel-level assessment, but the images generated with it cannot bring greater color diversity in our task. We find that N = 70 works best to generate distinct objects in solar panel images.

3) Classifier f_1 for PV Quantification: The classifier is the last and most important step to detect and quantify solar panels in images. Meanwhile, it is expected to weakly supervise the generation process through feedback evaluation. The Inception-v3 model achieves state-of-the-art performance in image classification, but its feature extraction layers require a large and diversified dataset to train. Therefore, we pre-train the Inception-v3 model on the 1.28 million images of 1,000 different classes in ImageNet, achieving 93.3% accuracy [43], [44]. We reuse the feature extraction layers and retrain the last decision-making layer on our generated solar panel dataset for evaluation.

4) Configuration: During training, we use the Adam optimizer with a learning rate of 0.0002 and momentum parameters β_1 = 0.5, β_2 = 0.999, training for 200 epochs in each experiment. The batch size is chosen as 5 due to device limits. All the experiments are completed on a computer equipped with an Intel(R) Core(TM) i7-9700K CPU and an Nvidia GeForce RTX 3080 Ti GPU.

IV. ESTIMATE GENERATION OUTPUT VIA CLOSE PROXIMITY OF PV SYSTEMS

Knowing the quantity of solar panels at a specific location helps predict generation capability, but its effectiveness needs to be further enhanced by historical PV generation profiles. Thus, to estimate solar generation output, we integrate the proposed PV detection and quantification with data-driven solar irradiance forecasting, as shown on the right-hand side of Fig. 2. Specifically, based on historical observations, the PV generation patterns of neighboring installations are similar because of similar numerical weather conditions. In residential areas, neighboring houses are usually covered by one basic spatial unit and therefore share the same feature values of numerical weather predictions. Therefore, this section describes the feature selection and generation estimation based on neighbor information.
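Stepping back to the classifier configuration of Section III-E, the transfer-learning setup can be sketched as follows. This is a minimal sketch assuming a recent torchvision; the class count and input batch are illustrative, not the study data.

```python
# Sketch of Section III-E: reuse pretrained Inception-v3 features, retrain
# only the final layer, Adam with the stated hyperparameters.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 6                                   # e.g., 0..5 solar modules
net = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for p in net.parameters():
    p.requires_grad = False                       # freeze feature extraction
net.fc = nn.Linear(net.fc.in_features, num_classes)   # retrain the last layer

optimizer = torch.optim.Adam(net.fc.parameters(), lr=2e-4, betas=(0.5, 0.999))

net.eval()                                        # aux head is unused in eval
x = torch.rand(5, 3, 299, 299)                    # batch size 5 as in the paper
logits = net(x)                                   # shape (5, num_classes)
```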
A. Select Relevant Features Based on Weather Conditions

Many different features are available for generation estimation: for example, the geographical features of longitude, latitude and altitude, the weather conditions of temperature, pressure, humidity and cloudiness, and the quantity of modules. To increase the information gain and select the most relevant features, we follow a filter method for feature selection based on the information gain of minimum description length (MDL) [45]. Let a_{•,•} denote the total number of training samples, a_{i,•} the number of training samples from class P_i, and a_{•,j} the number of samples with the j-th value of the given feature; a_{i,j} is then the number of instances from class P_i that have the j-th value of the given feature. If we have P classes, the MDL information gain is defined through the logarithm of the number of possible arrangements of class labels consistent with these counts, following the standard formulation of [45]. A high information-gain score on the geographical space shows that neighboring regions have similar PV generation patterns. The quantity of modules is already available from the previous learning results and serves as one of the most important variables, in that more solar panel units generate more power.

Moreover, to further boost the information gain for generation estimation, we consider the thermal space. Temperature is correlated with the efficiency of solar cells [46], and hence with the efficiency of power generation. Based on this feature set, we learn to predict the power generation time series of the unknown solar power system.

B. Apply Nearest Neighbor Approach

With a set of carefully selected relevant features, we aim to fully utilize the latent correlations to make a precise estimate of PV generation. Fig. 5 shows a clear correlation among the generations of "neighboring" PV systems located in different ZIP (Zone Improvement Plan) code areas. The Euclidean distance is a promising measure of the distance between any two samples in feature space, not limited to geographical distance only [47]. With the PVs identified in the previous learning process, the coordinates are known; each system is thus described by a feature vector built from q_i, its latitude x_i^latitude and longitude x_i^longitude, and its weather features x_i^weather.

In the feature space, the closest K points to the test feature vector S_F^test are used for the estimate. Their respective distances d_K serve as weights, and the PV generation time series of the K nearest neighbors are denoted g_t^K. For time points t = 1, ..., T, the PV generation y_t of the unknown solar power system is then taken as the distance-weighted average of the neighbors' generation,

y_t = Σ_{k=1}^{K} w_k g_{k,t},  with  w_k ∝ 1/d_k  and  Σ_k w_k = 1.

This procedure gives an estimate of the power generation of the unknown rooftop solar power system. We select the hyperparameter K using two error metrics, namely the mean square error (MSE) and the mean absolute percentage error (MAPE). MSE is calculated from the predicted generation time series y_t and the real generation y_{t,real} over the sequential time slots t = 1, ..., T,

MSE = (1/T) Σ_{t=1}^{T} (y_{t,real} − y_t)²,

and the variance of the distribution is captured by the mean absolute percentage error,

MAPE = (1/T) Σ_{t=1}^{T} |y_{t,real} − y_t| / y_{t,real} × 100%.
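The neighbor-based estimate and both error metrics fit in a few lines. A minimal sketch assuming scikit-learn; all arrays are random stand-ins for the selected feature rows and generation series.

```python
# Sketch of the distance-weighted K-NN estimate and the MSE/MAPE metrics.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
S_train = rng.random((50, 4))          # [latitude, longitude, temperature, q]
g_train = rng.random((50, 96)) + 0.1   # per-PV generation series (T=96 slots)
S_test = rng.random((1, 4))

knn = KNeighborsRegressor(n_neighbors=3, weights="distance")  # K = 3
knn.fit(S_train, g_train)
y_hat = knn.predict(S_test)[0]         # estimated series y_t for the new PV

y_real = rng.random(96) + 0.1
mse = np.mean((y_real - y_hat) ** 2)
mape = np.mean(np.abs(y_real - y_hat) / y_real) * 100.0
```

Here `weights="distance"` implements exactly the inverse-distance weighting described in the text.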
V. NUMERICAL TESTS

To evaluate the proposed methods for PV detection and generation estimation, we conduct experiments on realistic test cases in the following.

A. Data Pre-Processing and Tool Selection for Validation

First, we aim to enlarge the labeled solar panel image data so that we can train a classifier to detect rooftop solar panel locations from the geographic information system (GIS). We collect the original data from SunPower Inc. and the local utility, which contains the known installation locations of rooftop solar panels in GIS. Using the longitudes and latitudes, the satellite images are sampled at a fixed resolution of 4800 × 2987 from Google Earth (fixed zoom level). It is free and has several benefits compared to its commonly used rivals such as SPOT5 or DigitalGlobe satellite imagery: [48] shows that the high-resolution imagery of the Google Earth archive has robust positional accuracy and plays a vital role in solar panel image recognition and quantification. The original number of positive samples is only 1017 + 2813 = 3830, while related detection work used tens or hundreds of times as many positive samples for training. Classification performance is greatly affected, since the limited image data cannot cover the scenarios that exist in reality. Due to the high imbalance of the original image dataset, we select a subset of images without solar panels as negative samples, obtaining 15000 images in total. Moreover, we use the geographic coordinates to integrate time series of power generation data (612 + 239 data sequences). Typically, the recorded PV time series has a resolution of 15 minutes over a whole year, so to align the data formats we chose a 15-minute resolution for the weather conditions as well. We aim to show robustness by validating the method in two different states in the U.S.: 1) Tempe, Arizona, and 2) Santa Ana, California.

B. Enriched Labeled Data With Physics Embedding Improves Detection Accuracy

In the following, we validate the accuracy of PV detection and quantification of the proposed weakly-supervised GANs.

1) Effective Augmentation: To address the data deficiency problem, we first consider standard augmentation techniques reflecting the real conditions that residential houses can face in different directions while house styles can be quite similar within a community. Therefore, by adequately rotating images via the rotation matrix R = [cos θ, − sin θ; sin θ, cos θ], θ ∈ [0, 2π], we increase the labeled dataset. Since direct rotation changes the horizontal structure of the image, we pre-process the data by cropping the images circularly, using the circular segmentation-based approach of [49], so that the shape stays consistent no matter how the image is rotated (Fig. 6).
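A minimal sketch of this circular crop-then-rotate operation follows, assuming numpy/scipy; the interpolation settings and the stand-in image are illustrative.

```python
# Sketch of the circular crop-then-rotate augmentation: mask the image to a
# disc so arbitrary-angle rotation keeps a consistent shape.
import numpy as np
from scipy.ndimage import rotate

def circular_rotate(img: np.ndarray, theta_deg: float) -> np.ndarray:
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (min(h, w) / 2.0) ** 2
    disc = np.where(mask[..., None], img, 0.0)    # zero outside the disc
    return rotate(disc, theta_deg, reshape=False, order=1)

img = np.random.default_rng(1).random((64, 64, 3))
views = [circular_rotate(img, a) for a in (0, 45, 90, 135)]
```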
In such augmentation, the information around the edges and vertices is compromised, and performance suffers. Meanwhile, improving the classifier requires sufficient and varied positive samples, e.g., images with solar panels in different scenarios. Therefore, we expand the labeled dataset with the GANs. We validate the proposed GAN with two types of metrics on generated data quality. On the one hand, the Fréchet Inception Distance (FID) [50] measures the closeness of extracted features between generated data and real data for image generation tasks; a lower FID score reveals more similarity, i.e., a better quality of augmented images. On the other hand, the image dataset is enlarged to boost classification performance, so we use the test performance of the solar panel classification to assess augmentation quality. The classifier f_1 outputs labels for the quantification task, and we compute accuracy by comparing the f_1 outputs with the ground-truth labels:

Quantification Accuracy = (Number of Samples with q̂_i = q_i) / (Total Number of Samples).

We further compute the commonly used metrics to evaluate the detection performance:

Detection Precision = TP / (TP + FP),    (7)
Detection Recall = TP / (TP + FN),    (8)
Overall Accuracy = (TP + TN) / (Total Number of Samples).    (9)

The terms "true" and "false" represent whether or not the classification result agrees with the label, while "positive" and "negative" mean with and without solar panels in the image as the two classes. Namely, "true positive (TP)" means that solar panels exist and are correctly detected; similarly, "false positive (FP)" means a positive (wrong) estimate on a negative sample. For our task, we aim to evaluate the model's capability of correct solar panel detection. First, we want to know what proportion of the detected solar panels actually exist. The detection precision in (7) measures the ratio of correct solar panel detections among all positive outputs (samples classified as containing solar panels); it is used to evaluate the quality of the augmented positive examples, as shown in Table I. Meanwhile, we are interested in what proportion of the images that actually contain solar panels are detected; this is computed by the detection recall in (8). Moreover, we include 1) the overall classification accuracy in (9) to represent correct classification over both positive and negative samples, and 2) the quantification accuracy to represent correct identification of the number of existing solar panels. First, we tried popular architectures such as DCGAN, LSGAN, and WGAN-GP [51], [52], [53]. However, the learned mapping from simple noise to target images poorly follows our expected direction of distinct solar panels in images, as shown in Fig. 7. Thus, the proposed model improves upon two aspects: providing informative inputs and embedding feedback evaluation.

2) Informative Inputs for Self-Enhancement: In this task, the dependent information y is a grey-scale image that encodes the number of solar panels and their positions in an image. We adopt the Pix2Pix model [43] as a basis for its superior image translation. The abstraction into grey-scale images is an edge detection task, for which we select the Sobel filter among the different filters in [39]. It convolves the image with small, separable, integer-valued filters in the horizontal and vertical directions and is relatively efficient to compute. The Sobel filter enhances the edges of objects in grey-scale images by providing differentiation (which gives the edge response) and smoothing (which reduces noise) concurrently.
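A minimal sketch of the Sobel-based backbone extraction just described, assuming SciPy; the random grey-scale image is a stand-in for a real rooftop tile.

```python
# Sketch of Sobel edge extraction used to build grey-scale "backbones":
# horizontal and vertical derivatives combined into an edge magnitude.
import numpy as np
from scipy import ndimage

def sobel_backbone(gray: np.ndarray) -> np.ndarray:
    gx = ndimage.sobel(gray, axis=1)              # horizontal derivative
    gy = ndimage.sobel(gray, axis=0)              # vertical derivative
    mag = np.hypot(gx, gy)                        # edge response
    return mag / mag.max() if mag.max() > 0 else mag

gray = np.random.default_rng(2).random((64, 64))
backbone = sobel_backbone(gray)                   # grey-scale outline image
```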
During training, we found that using only the base model (Pix2Pix) makes the generator hard to converge: the discriminator converges fast and cannot further improve the generation, because our solar panel data is complex, with multiple objects. Such data complexity can also be observed from the high base FID value (131.07 ± 3.47) obtained by randomly separating the real solar panel dataset into two groups and measuring the similarity between them. Therefore, we increase the depth of the discriminator D to balance the generator's capability and use the more task-specific losses proposed above, leading to the complete model in Table I. The solar images generated by the proposed complete model achieve a much lower FID score than the base data generation model (Pix2Pix). The visual comparison in Fig. 8 shows intuitive improvement. The images "translated" by Pix2Pix in the middle resemble the real ones from Google Earth and are more comparable than those from standard GANs (Fig. 7). Nevertheless, the generated solar panels often mix with background objects, and the Pix2Pix model sometimes collapses around 100 epochs or even earlier. In contrast, our model with specific guidance can better recover the target solar panels in the image with clear edges and color, as shown at the bottom of Fig. 8. To demonstrate the contribution of each component, we conduct an ablation study. The first row of Table I shows that the content loss plays an essential role in synthesizing images. We select L_Content and L_Style, which both make a large difference in the FID score, as shown in Fig. 9. While the pixel-level loss can encourage color similarity of the image, it can also produce blurred results; the style loss better corrects the color of the target objects, leading to distinct solar panels. Moreover, the contribution to the downstream classification task is even more essential for PV detection, which is revealed in the next rows of Table I and analyzed in the next section.

3) Feedback Evaluation to Constrain the Augmentation: As the goal is to improve the classifier for solar panel detection, we channel feedback from the state-of-the-art pretrained Inception-v3. The images generated under such weak supervision are then fed into the classifier as positive samples, and the cross-entropy loss is fed back to the generator for control. The GAN and the Inception-v3 classifier are trained simultaneously to augment the positive samples. Specifically, the last four columns of Table I show the results of an ablation study of the proposed variational GAN model, where each column reveals the contribution of one design to PV image augmentation and PV classification. Without any of the designs, the performance deteriorates. Comparing the third and last columns of Table I, we observe that although integrating the classifier does not significantly improve FID on image similarity, it benefits the post-classification task with higher accuracy.
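For completeness, the FID metric reported in Table I can be sketched as follows, assuming feature vectors already extracted by an Inception network for both image sets; the Gaussian inputs are stand-ins.

```python
# Sketch of the Frechet Inception Distance between two feature sets:
# ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):                  # drop tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(3)
print(fid(rng.normal(size=(200, 64)), rng.normal(size=(200, 64))))
```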
Beyond the comparisons in the ablation study, we also explore the improvement of the classifier under different data availability to show the benefit of GAN-based augmentation. The following datasets are considered for training the PV detection and quantification classifier: 1/2 of the original dataset with limited labels (1/2), the entire original dataset (original), the labeled dataset enlarged by basic functional image augmentation such as rotation and flip (basic Aug.), and expanded labeled datasets of different sizes from the proposed variational GAN (GAN Aug. 1×, 2×, and 3×), respectively. Fig. 10 demonstrates the loss trends during training. There are significant decreases in the converged losses when training the classifier with the data augmented by the proposed GAN. Specifically, Fig. 10(a)-(c) show fast loss decreases before the 50th epoch, after which the classification losses cannot go lower. In contrast, the training losses in Fig. 10(d)-(f) first increase and then go down. This is due to the simultaneous training of the classifier and the generator for image augmentation: in the first few epochs, the proposed GAN keeps updating its parameters from random initialization, so the generated labeled images are not yet perfect, and the classifier trained on them has high losses. With the image generation improving, the classifier loss keeps decreasing until convergence. We observe that the losses close to convergence are lower and more stable. Moreover, we test the trained model on randomly sampled images of residential areas in California and Arizona and show the results in Fig. 11. Previous data-driven methods assume sufficient labeled data for training, so accuracy is high only when sufficient original or basic-augmented data is available. The adopted classifier f_1, the Inception-v3 model, has state-of-the-art performance in image classification and serves as a benchmark model. In our case, using only the original labeled dataset or the dataset with basic augmentation, the accuracy of PV detection and quantification is low. We observe a noticeable increase in accuracy in the bar plot when using the proposed GAN method to augment the labeled data. With an increasing amount and diversity of augmented data, the testing accuracy keeps going up and reaches comparable and even better performance than previous data-driven methods that assume sufficient labeled data.

C. Robust Generation Estimation Via Flexible Features and K-NN

The validation of PV detection and quantification brings us to the next stage, the validation of generation estimation. Many different features are available for estimating PV generation, but to maximize the information gain, we select the most relevant features using the filter-method-based approach; the result is a feature set that includes the geographical coordinates, the weather conditions of temperature and cloudiness, and the quantity of modules. Furthermore, the regions differ from one another in solar irradiance, climate, soiling profile, and terrain [54]. The generation of these solar panels is provided by SunPower Inc. and is used as the training set for learning. We use the K-nearest neighbor (K-NN) method, assigning weights to the time-series data of the geographically nearest points. The only hyperparameter to be tuned here is K, and we present the performance changes with respect to K later.

Then, we apply the weighted K-NN regression to predict short-term solar power generation. Fig. 12 compares the predicted generation with the ground truth for an entire year. The previous data-driven method refers to the support vector regression (SVR) model, which has shown good performance in PV generation estimation [11], [12], [15]. Since the experimental setups differ, we use the SVR model but change the inputs to be the same as in our case; specifically, the SVR implementation does not have a GAN-based data augmentation to train a classifier for PV quantification as a preliminary step. To better visualize the comparison, both the real and predicted generation data are downsampled.
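The SVR baseline used for comparison can be sketched as follows, assuming scikit-learn; since SVR is single-output, one regressor per time slot is wrapped with MultiOutputRegressor, and all arrays are stand-ins for the same feature rows used by the K-NN estimator.

```python
# Sketch of the SVR baseline for PV generation estimation.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(4)
S_train, g_train = rng.random((50, 4)), rng.random((50, 96))
svr = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.05))
svr.fit(S_train, g_train)
y_hat = svr.predict(rng.random((1, 4)))[0]        # baseline generation series
```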
We plot the long-term accumulated generation by month in Fig. 13. The results are shown as a bar plot over the 12 months of 2015, where the black error bars indicate the variance of the generation estimates. From a general energy-production point of view, the model performance is stable in the months when the weather is stable; for example, the estimates for June, July, November, and December have higher accuracy than those for March, April, September, and October.

To select the best K for the neighbor data samples in PV generation estimation, we compare the performances in Table II. Clearly, K = 3 is the optimal choice for both error metrics. If the data samples of more or fewer nearest neighbors are considered, the estimation errors increase. In particular, for K > 4 the error is much higher, since too large a feature distance between the considered PV systems can increase the estimation variance.

VI. CONCLUSION

To accommodate the limited availability and timeliness of PV data in distributed power systems, we propose to systematically enhance PV localization and generation forecasting using multiple data sources such as satellite imagery and numerical weather conditions. Specifically, we first design weakly supervised GANs for solar panel image augmentation. Multiple aspects of GAN enhancement are designed to augment images that improve PV classification, including 1) preparing backbone images as conditional inputs to embed PV characteristics, and 2) restricting the inexplicit learning process of the GAN model with specific losses. Moreover, we leverage the discriminative training mode of GANs to integrate PV detection and quantification into the augmentation loop. In this way, the performance of the targeted downstream classification task guides the image generation process. Thus, we obtain the detection results without further effort and combine them with historical neighboring measurements to estimate the PV generation. We validate the proposed approaches on areas of distribution grids that have wide PV coverage but limited prior information. The results show that the proposed approaches can efficiently avoid model collapse in image generation, reach classification performance comparable to methods trained on sufficient data, and obtain accurate generation estimates.

Manuscript received 19 October 2022; revised 10 February 2023; accepted 14 March 2023. Date of publication 29 March 2023; date of current version 26 December 2023. This work was supported in part by the Department of Energy under Grants DE-AR00001858-1631 and DE-EE0009355, in part by the National Science Foundation (NSF) under Grants ECCS-1810537 and ECCS-2048288, and in part by the BIRD Foundation. Paper no. TPWRS-01582-2022. (Corresponding author: Yang Weng.)

Fig. 2. An overview of the proposed approaches.
Fig. 3. A real image acquired from Google Earth and sliced.
Fig. 4. Block diagram of the proposed variational GANs with weak supervision for PV detection and quantification.
Fig. 5. Plot of raw data from the industrial partner SunPower Inc. shows homogeneous curves of power generation for three different ZIP code areas on a single day.
Fig. 6. The classic augmentation of a solar panel image via the basic operation of circular rotation.
Fig. 7.
Generated results after training the popular architectures of GANs. There appears to be visual under-fitting and a missing training direction, seen as repeated noise textures across multiple samples.
Fig. 8. Given conditional inputs of "backbones" (top), compare the generated rooftop solar panel images from Pix2Pix (middle) and our proposed model (bottom). Without specific guidance, the solar panels easily mix with background objects.
Fig. 9. Visual results of the ablation study based on the obvious differences in FID values (Table I). While the content loss incentivizes color similarity of the whole image (pixel level), the logical constraint focuses more on the target objects. Such weak supervision improves the generation quality, leading to distinct solar panels in the image.
Fig. 10. Classification loss during training (epochs) using different training datasets with respect to augmentation methods: (a) 1/2 of the original dataset with limited labels, (b) the entire original dataset, (c) the enlarged labeled dataset by basic functional image augmentation, and (d)-(f) different sizes of expanded labeled datasets from the proposed variational GAN (GAN Aug. 1×, 2×, and 3×).
Fig. 11. The testing accuracy of PV detection and quantification when training the classifier f_1 using different training datasets with respect to augmentation methods.
Fig. 12. Compare the real and the predicted downsampled PV generation of a rooftop solar power system.
Fig. 13. Comparison of accumulated real and predicted monthly PV energy generations for the year 2015.
Table I. The quality of generated data after 200 epochs. A lower FID indicates better similarity, and a higher accuracy means that the data augmentation better benefits the post-classification task. As a baseline, the FID between two randomly separated real datasets is 131.07 ± 3.47, and the classifier trained with real data has a test accuracy of 0.59 ± 0.030.
Table II. Comparison of generation estimation errors with respect to different K values.
2023-03-31T15:09:54.406Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "dc2622dbf9cd699f6127b6fa1b91b0dfe39ab4d9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1109/tpwrs.2023.3262773", "oa_status": "CLOSED", "pdf_src": "IEEE", "pdf_hash": "79cd22934951533f854ddea0c5fa825de13dc1c3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
9687744
pes2o/s2orc
v3-fos-license
Relationship of high sensitivity C-reactive protein with presence and severity of coronary artery disease.

OBJECTIVE Inflammation plays a key role in the pathogenesis of atherosclerosis. This study aimed to assess the relationship of the serum inflammatory marker high sensitivity C-reactive protein (hsCRP) with the presence and severity of angiographically evaluated coronary artery disease (CAD). METHODS This study was conducted at the departments of physiology and cardiology, College of Medicine & King Khalid University Hospital, King Saud University, Riyadh, from August 2009 to March 2012. Eighty-seven patients (57 males and 30 females) with angiographically evaluated CAD were studied. In all these patients CAD severity was assessed by Gensini scoring and vessel scoring. The control group consisted of 29 healthy subjects (17 males and 12 females). Fasting venous blood samples were analyzed for lipid profile and high sensitivity C-reactive protein (hsCRP). RESULTS There were non-significant differences in age, weight and BMI among healthy subjects and CAD patients. Comparison of the lipid profile between control and CAD patients showed that CAD patients had significantly higher TG and significantly lower HDL levels than control subjects. CAD patients presented with significantly higher hsCRP levels than controls. Linear regression analysis between hsCRP and CAD severity determined by Gensini scores showed a significant positive correlation (r=0.423, p=0.018). Triple vessel disease patients had significantly higher hsCRP levels than one-vessel and two-vessel disease patients, while the difference was non-significant between the one- and two-vessel disease groups. CONCLUSIONS These results suggest that patients with angiographically evaluated CAD have significantly higher hsCRP levels than healthy individuals, and that these levels correlate with the presence & severity of CAD.

INTRODUCTION There is strong evidence that cardiovascular conditions are linked with inflammation, and inflammation likewise plays a role in the pathogenesis of atherosclerosis. 1 This ultimately leads to the occurrence of acute cardiovascular events. 2 The chronic inflammatory process in atherosclerosis usually results in an acute clinical event through plaque rupture and therefore causes acute coronary syndromes. 3 Many large prospective trials have shown that the inflammatory biomarker high-sensitivity C-reactive protein (hsCRP) is an independent predictor of future cardiovascular events. 4 Several studies from Europe and the United States indicate that elevated levels of hsCRP among apparently healthy men and women are a strong predictor of future cardiovascular events. 5,6 Adding hsCRP to conventional risk factors provides an independent, significant predictor of cardiometabolic risk. 7 hsCRP has been reported to be an independent significant predictor and a risk factor of cardiometabolic risk, with additive value to metabolic syndrome components. 8 It has long-term predictive value in patients with diagnosed coronary artery disease (CAD) and angina pectoris. 9,10 It is also useful as a predictor in individuals with multiple risk factors. 11 hsCRP is an important predictor not only of a first myocardial infarction but also of recurrent coronary events. [12][13][14] In most of the studies reported, the association of hsCRP with cardiovascular risk has been found to be highly significant in global risk-assessment programs. 15 Little data is available regarding the association of hsCRP with the presence and severity of CAD.
To the best of our knowledge there are no studies correlating hsCRP levels in CAD with Gensini and vessel scoring of CAD severity. This study aimed to assess the relationship of the serum inflammatory marker high sensitivity C-reactive protein (hsCRP) with the presence and severity of angiographically evaluated coronary artery disease (CAD). In this cross-sectional study, 87 patients (57 males and 30 females) who had undergone angiography and were found to have CAD were studied. They were recruited from the department of cardiology, King Khalid University Hospital, Riyadh. The control group consisted of 29 healthy subjects (17 males and 12 females) matched for age and BMI. They were in a stable metabolic state and were not suffering from any acute or chronic inflammatory condition that could affect hsCRP levels. They were free of any clinical manifestations of coronary, peripheral or cerebral artery disease by history, physical examination and electrocardiographic findings. Demographic data, family history and the results of coronary angiography were obtained from patients' files and entered into a specially designed data collection form. Inclusion criteria consisted of adult patients of both sexes with ischemic heart disease who had attacks of angina or myocardial infarction and had undergone coronary angiography. Exclusion criteria included acute or chronic renal disease, thyroid disorders, acute infections, recent stroke, diabetic ketoacidosis, non-ketotic hyperosmolar diabetes and any recent surgery in the last two months. Blood samples were collected after overnight fasting; serum was separated and stored at −80 °C until assayed as a single batch. hsCRP was measured using a turbidimetric assay (Quantex CRP ultra sensitive kits, BIOKIT, S.A., Barcelona, Spain) on a Hitachi 911 auto-analyzer (ROCHE Diagnostics, Indianapolis, Indiana, USA). The hsCRP kits measure a range from 0.10 to 20.0 mg/L. All our patients underwent left ventriculography and selective coronary angiography. Coronary arteries were imaged in standard views with cranial and caudal positions. Presence of ischaemia was defined on the basis of a minimum 50% stenosis in the coronary vessels. The Gensini scoring system was used to determine the severity of CAD. With the help of this scoring system, the percentage of blockage in different coronary vessels at different sites of blockage is calculated and each vessel under consideration is given a score. 16 The left main coronary artery, left anterior descending artery (LAD), left circumflex (LCx) and right coronary artery (RCA) were assessed. Multiple lesions in the same vessel were regarded as one-vessel disease. Vessel scores were also calculated and graded into single, double and triple vessel disease. Statistical Analysis: We used the Statistical Package for Social Sciences (SPSS) version 19 for data analysis. To assess differences in age, blood pressure, TC, LDL, HDL, TG, and BMI, Student's t test was used. hsCRP, due to its non-parametric distribution, was analyzed by the Mann-Whitney U test for two groups and the Kruskal-Wallis test for more than two groups. A p-value of <0.05 was considered statistically significant. Spearman's correlation coefficients were also calculated between the Gensini score of CAD severity, vessel scores, hsCRP and lipid profile in all CAD patients. RESULTS There were non-significant differences in age, weight and BMI among healthy subjects and CAD patients (Table-I).
hsCRP levels, however, were significantly higher in CAD patients than in healthy individuals. Table-II shows the comparison of the lipid profile between control and CAD patients: CAD patients had significantly higher TG (p=0.0074) and significantly lower HDL (p=0.0001) levels than control subjects. Table-III shows Spearman's correlations between the Gensini score of CAD severity, vessel scores, hsCRP and lipid profile in CAD patients. Although CAD patients presented with higher hsCRP levels, there was no significant correlation of CAD severity with hsCRP or blood lipids. Fig. 1 shows the mean values of the Gensini score and the percentage of blockage in the LAD, LCx and RCA determined by angiography. Fig. 2 shows the linear regression analysis between hsCRP and CAD severity determined by Gensini scores in all CAD patients, with a significant positive correlation (r=0.423, p=0.018). We compared hsCRP levels between the control group and the CAD groups according to vessel scores in all CAD patients. All CAD groups had significantly higher mean hsCRP values than control subjects (Fig. 3). Triple vessel disease patients had significantly higher hsCRP levels than one-vessel and two-vessel disease patients; the difference between the one- and two-vessel disease groups was non-significant. DISCUSSION The main observation of this study is that hsCRP is a marker of the presence and severity of CAD defined by Gensini scoring or vessel scoring. This can be explained by hsCRP being an acute-phase reactant protein marker that can demonstrate subclinical inflammatory states by detecting lower serum levels of CRP. There are many advantages to hsCRP measurement in relation to CAD. One advantage is that it is a stable compound and can be measured at any time of the day without special relevance to the biological clock. 17 Other markers, such as lipids and IL-6, exhibit circadian variations and are also related to meals. Thus, we can perform hsCRP testing in clinical settings without regard to time of day. 18 Despite all these advantages, there is still controversy, and there are limitations of hsCRP levels and other confounding variables as markers of cardiovascular disease. [19][20][21][22] Cushman et al. have re-evaluated the prevalence and correlates of increased hsCRP and reported a significant impact of hsCRP measurement on coronary heart disease risk reclassification. They observed that with the inclusion of hsCRP in their testing data, the Reynolds risk score classified the population differently from the new Framingham risk scores. 23 This observation is in agreement with our previous study regarding lipoprotein(a) and its significant correlation with the presence, diffuseness and severity of CAD. 24 A similar study was performed in an Indian population to determine the concentration of hsCRP and its association with coronary atherosclerosis assessed by coronary angiography. In line with our results, they reported that the serum concentration of hsCRP was associated with the presence of CAD, but the correlation with severity was non-significant. 25 It has recently been reported that there is state-level geographic variation in inflammatory biomarkers among otherwise healthy women which cannot be completely attributed to traditional clinical risk factors and lifestyle; it is suggested that future research should aim to identify additional factors that may explain this geographic variation. 26
In a recent study, Hrira et al. reported that ApoB and hsCRP levels were markedly associated with the severity of CAD in Tunisian patients; their findings are similar to our results. 27 The possible limitations of our study are the limited number of subjects and the cross-sectional design. Large-scale prospective studies are needed to explore the true pathogenic role of hsCRP in assessing cardiovascular risk. CONCLUSION We conclude that patients with angiographically evaluated CAD have significantly higher hsCRP levels than healthy individuals, and that these levels correlate with the presence & severity of CAD. *** Correlation is significant at the 0.001 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed). VScore: Vessel Score; G Score: Gensini Score.
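As a concrete illustration of the statistical workflow described in the Methods (Mann-Whitney U for two groups, Kruskal-Wallis across vessel-score groups, and Spearman correlation with the Gensini score), here is a minimal sketch assuming SciPy; all values are random stand-ins, not the study data.

```python
# Sketch of the nonparametric tests used for hsCRP in this study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
hscrp_controls = rng.lognormal(0.0, 0.5, 29)      # stand-in control values
hscrp_cad = rng.lognormal(0.8, 0.5, 87)           # stand-in CAD values
gensini = rng.gamma(2.0, 15.0, 87)                # stand-in severity scores

u, p_u = stats.mannwhitneyu(hscrp_cad, hscrp_controls, alternative="two-sided")
h, p_kw = stats.kruskal(hscrp_cad[:30], hscrp_cad[30:60], hscrp_cad[60:])
rho, p_rho = stats.spearmanr(hscrp_cad, gensini)
print(f"Mann-Whitney p={p_u:.3f}, Kruskal-Wallis p={p_kw:.3f}, rho={rho:.2f}")
```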
2016-05-04T20:20:58.661Z
2013-09-30T00:00:00.000
{ "year": 2013, "sha1": "8916852f55123a99ed307d57defa0f003b465d0d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.12669/pjms.296.3302", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8916852f55123a99ed307d57defa0f003b465d0d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55674020
pes2o/s2orc
v3-fos-license
Geomagnetic Survey to Explore High-Temperature Geothermal System in Blawan-Ijen, East Java, Indonesia

The Ijen geothermal area is a high-temperature geothermal system located in Bondowoso regency, East Java. It is categorized as a caldera-hosted geothermal system, covered by Quaternary andesitic volcanic rocks with steep topography at the surroundings. Several surface thermal manifestations are found, such as altered rocks near Mt. Kukusan and a group of Blawan hot springs in the northern part of the caldera. A geomagnetic survey was conducted at 72 stations distributed inside the caldera to delineate the existence of hydrothermal activity. The magnetic anomaly was obtained by reducing the total magnetic field measured in the field by the IGRF and the diurnal variation. The reduction to pole (RTP) method was applied with a geomagnetic inclination of about −32°. In general, the result shows that high magnetic anomalies are distributed at the boundary of the study area, while low magnetic anomalies are observed in the centre. The low anomaly indicates demagnetized rock, probably caused by hydrothermal activity. It correlates well with the surface alteration observed close to Mt. Kukusan as well as with the high-temperature reservoir drilled in the centre of the caldera. Accordingly, the low magnetic anomaly also indicates the possible location of the geothermal reservoir in the Ijen geothermal area.

Introduction

The Ijen Volcanic Complex is located in Bondowoso, East Java, Indonesia. The expected presence of a geothermal system in the volcanic complex is indicated by the occurrence of the Blawan hot spring in the northern part of the caldera rim and steam-heated, extremely acid sulphate water in the Ijen crater as the only surface manifestations in this area. The hot spring expresses an outflow of the system, while the Ijen crater indicates a young volcano which is uneconomical to develop. Therefore, further research is necessary to find an appropriate target location in such a challenging hidden geothermal system.

In 2017, a magnetic survey was conducted over the Ijen geothermal prospect area at 72 stations. The magnetic method measures magnetic field variations at the earth's surface. Studies of magnetic anomalies are often useful for investigating high-temperature geothermal systems hosted by Quaternary volcanic rocks [1]. These anomalies represent the demagnetization of reservoir rocks caused by hydrothermal processes due to thermal fluid and rock interactions. In this study, we used the reduce to pole (RTP) technique for imaging the total-force magnetic anomalies. The RTP transformation delineates the hydrothermally demagnetised rock, which appears as negative anomalies [2]. This paper presents the magnetic anomaly, the RTP anomaly and the upward-continuation anomaly, which were then interpreted to delineate the geothermal prospect zone in the Ijen Volcanic Complex.

Regional Geology

The Ijen Volcanic Complex (IVC) is located in East Java. The Old Ijen volcano is thought to have formed around the Pleistocene [3]. The caldera itself was formed by a Plinian eruption that ejected 80 km³ of volcanic material, dating back to 0.2–0.05 Ma [4]. Currently, the product is partly exposed in the northern part, while in the southern part it has been covered by younger volcanoes.
The post-caldera volcanoes are classified into two groups: caldera rim volcanoes and intra-caldera volcanoes [5]. Some faults are found in the caldera; one extends right through the centre of the caldera, strikes the remains of the caldera rim in the north and tears down the wall into a steep creek. The structures occurring in the IVC are the Blawan fault, Kawahwurung fault, Krepekan fault, Cemara-Kukusan fault, Kalipahit-Banyulinu fault, Djampit fault, Rante fault, Pawenan-Blau fault and Kendeng-Merapi fault (Fig. 1).

The deformation may be caused by tectonic activity, by magma refilling in the chamber, or by a combination of the two reactivating the caldera-floor structures. The hot springs in the Blawan area indicate that secondary permeability affects the occurrence of geothermal manifestations.

Acquisition and Data Processing

The acquisition of magnetic data was carried out in the Ijen geothermal prospect area in 2017. The magnetic measurements were conducted using a Proton Precession Magnetometer (PPM). A total of 72 magnetic stations were measured with 1 to 1.5 km grid spacing, distributed inside the caldera rim.

Several corrections were then applied to the measured magnetic field data set in order to isolate the anomaly reflecting the geological target. As a first step, the observed magnetic field was corrected for diurnal variation; the data was then corrected for the IGRF to remove the normal geomagnetic field. By reducing the total magnetic field by the diurnal variation and the IGRF correction, the magnetic anomalies associated with local magnetic variations of the rocks are obtained.

Moreover, the reduction to the pole (RTP) technique was carried out by reducing the magnetic data to the pole with an inclination angle of −32°. It is used to place the response of the geothermal reservoir directly below the low magnetic anomaly. Upward continuation was also applied to reduce the effects of shallow/local anomalies; in this case, we performed the continuation to a height of 250 meters (a minimal frequency-domain sketch of these grid reductions is given after the anomaly description below).

Magnetic Anomaly

The magnetic anomaly map of the Ijen geothermal prospect area is shown in Fig. 2, with the highest anomaly value of about 1984 nT and the lowest of about −1206 nT. Low magnetic anomalies were found covering almost the entire study area, surrounded by high magnetic anomalies. Several bipolar anomalies were identified at three locations. In the southeast, the anomaly is probably associated with the existence of Mt. Kukusan, while the anomaly in the southwest could be correlated with Mt. Pendil. The other anomaly, in the northern part of the study area, has not been clearly tied to the geological conditions in this area.

Reduce to Pole (RTP)

The result of the reduction to the pole (RTP) can be seen in Fig. 3. The focus in the investigation of the geothermal reservoir location is the low (negative) anomaly area caused by hydrothermal demagnetization. The negative magnetic anomalies occur over the northern part of the study area and extend to the southeast.

Fig. 2. Magnetic Anomalies over Ijen Geothermal Area

The pattern of negative anomaly found in the centre of the study area is probably associated with the presence of hydrothermally demagnetised rocks of Mt. Kukusan. This is supported by the surface alteration observed close to Mt. Kukusan.
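The grid reductions described in the processing section (diurnal and IGRF removal, followed by upward continuation) can be sketched in the frequency domain. A minimal sketch assuming a regular anomaly grid; the grid values, spacing, and reference-field levels are stand-ins, not the survey data.

```python
# Sketch of diurnal/IGRF removal and upward continuation of a magnetic grid.
# Upward continuation uses the standard operator exp(-|k| h) in the
# wavenumber domain, which attenuates short-wavelength (shallow) anomalies.
import numpy as np

def upward_continue(grid: np.ndarray, dx: float, h: float) -> np.ndarray:
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))            # radial wavenumber |k|
    spec = np.fft.fft2(grid) * np.exp(-k * h)
    return np.real(np.fft.ifft2(spec))

total_field = np.random.default_rng(6).normal(45000.0, 500.0, (32, 32))
anomaly = total_field - 45000.0 - 10.0            # stand-in IGRF and diurnal
regional = upward_continue(anomaly, dx=1000.0, h=250.0)  # 250 m continuation
```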
The location of the prospect zone is further supported by the intersection of several faults (the Blawan fault, the Cemara-Kukusan fault, and the Kawahwurung fault), which could form a highly permeable zone.

Upward Continuation

Fig. 4 shows the upward continuation anomaly map. In general, the pattern of high and low magnetic anomalies in the upward continuation map is the same as in the RTP anomaly.

The upward continuation anomaly represents a regional anomaly. As the result shows, it further strengthens the finding that the negative magnetic anomalies occur in the northern part of the study area and extend to the southeast, which could constitute the prospect zone in the Blawan-Ijen geothermal area.

Conclusion

This study was carried out in the Ijen geothermal prospect area. Surface manifestations are very rare: only the Blawan hot spring is found, in the northern part of the study area, together with extremely acid fluid at the top of Mt. Ijen. Therefore, the magnetic method was used to determine the location of the geothermal prospect zone between the Blawan area and Mt. Ijen. A negative anomaly is identified in the centre (north to southeast) of the study area and was interpreted as the hydrothermally demagnetised rocks of Mt. Kukusan. This negative anomaly correlates well with the altered rock found near Mt. Kukusan. The intersection of several faults (the Blawan, Cemara-Kukusan and Kawahwurung faults) in the centre of the study area supports the possible location of the geothermal prospect zone.

Fig. 3. The result of Reduce to Pole (RTP) of Magnetic Anomalies of Ijen Geothermal Area
Fig. 4. The Map of Upward Continuation Anomalies of Ijen Geothermal Area
2018-12-06T21:24:51.371Z
2018-02-21T00:00:00.000
{ "year": 2018, "sha1": "654dec6473b0361ffd35539caef589f0f3b79258", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/06/e3sconf_icenis2018_02003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "654dec6473b0361ffd35539caef589f0f3b79258", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
5070334
pes2o/s2orc
v3-fos-license
Genomic imprinting as an adaptative model of developmental plasticity Developmental plasticity can be defined as the ability of one genotype to produce a range of phenotypes in response to environmental conditions. Such plasticity can be manifest at the level of individual cells, an organ, or a whole organism. Imprinted genes are a group of approximately 100 genes with functionally monoallelic, parental-origin specific expression. As imprinted genes are critical for prenatal growth and metabolic axis development and function, modulation of imprinted gene dosage has been proposed to play a key role in the plastic development of the unborn foetus in response to environmental conditions. Evidence is accumulating that imprinted dosage may also be involved in controlling the plastic potential of individual cells or stem cell populations. Imprinted gene dosage can be modulated through canonical, transcription factor mediated mechanisms, or through the relaxation of imprinting itself, reactivating the normally silent allele. Developmental plasticity Developmental plasticity can be defined as the ability of one genotype to produce a range of phenotypes in response to environmental conditions. Such plasticity can be manifest at the level of individual cells, an organ, or a whole organism. The totipotent zygote represents the pinnacle of cellular developmental plasticity and, as the embryo develops, lineage restriction events reduce the developmental potential of subpopulations of cells such that cellular plasticity declines with developmental age. Epigenetic modifications, which can be defined as 'the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states' [1], play a key role in stabilising these lineage restriction events. Consequently, the reacquisition of developmental potential, as occurs naturally during germ cell development and fertilisation and artificially during cloning or during the generation of induced pluripotent stem cells (iPSCs), requires extensive epigenetic reprogramming. This includes the reprogramming of epigenetic marks at imprinted loci. Genomic imprinting Imprinted genes are a unique class of approximately 100 genes which are expressed predominantly from one chromosome in a parental-origin dependent manner (Fig. 1). Imprinted genes are not distributed uniformly through the genome, but are often found in clusters where the parental allele-specific pattern of gene expression is coordinately regulated by imprinting control regions (ICRs) through long-range cis-acting mechanisms. ICRs are characterised by differing epigenetic marks on the two parentally inherited chromosomes [2]. DNA methylation and the post-translational modification of core histones are important epigenetic modifications and these play key roles in imprinting control. To date, all ICRs identified are differentially DNA methylated regions (DMRs) on the two parental chromosomes. These differential methylation marks are acquired in the developing oocytes and sperm and, in normal circumstances, are heritably maintained after fertilisation in the developing embryo and throughout life. Secondary or somatic DMRs, found at some imprinted promoter regions, acquire their parental-origin specific methylation post-fertilisation. This requires the gametic ICR and is thought to reinforce imprinted gene expression (Fig. 1, [3]).
The role of histone modifications in imprinting control is less clear; however, DMRs are characterised by the asymmetrical accumulation of different histone modifications on the two parental chromosomes, and recently a requirement for histone demethylation in order to establish germline CpG methylation has been identified at some ICRs [4]. The existence of parental-origin specific DMRs necessitates a process of epigenetic reprogramming during gamete development such that germ cells exhibit the appropriate epigenetic marks at ICRs to ensure the successful development of future offspring (Fig. 1). This begins after the primordial germ cells (PGCs) have been specified at E7.5 and continues throughout the migration of the PGCs to the genital ridge. A second wave of demethylation occurs around E11.5 and includes the dramatic and rapid erasure of methylation at imprinted loci (reviewed by Sasaki and Matsui [5]). Dogma dictates that this demethylation is complete and that there is no epigenetic inheritance through meiosis; however, some elements, such as intra-cisternal A particles, can partially escape this methylation reprogramming [6,7]. Imprinted genes have been proposed as key modulators of organismal developmental plasticity, but there is also evidence for their involvement in the plasticity of organs and single cells. There are two mechanisms through which the expression of an imprinted gene may be modulated: through a canonical, transcription factor driven mechanism (Fig. 2A), or through the modulation of imprinting itself (Fig. 2B). Imprinted genes are not universally mono-allelically expressed; rather, the umbrella classification of "imprinted" conceals an extraordinary variety of temporal and tissue specificity of mono-allelic expression and, for some genes, inter-individual heterogeneity [8,9]. Our understanding of what initiates mono-allelic expression remains sketchy, although in some cases it coincides with differentiation events which restrict cellular developmental potential. Altered imprinted gene dosage, through loss of imprinting, the activation of the normally silent allele, or the silencing of the normally active allele of an imprinted gene, has been observed in various pathological states; however, it remains unclear whether this is utilised as a mechanism of dosage control during normal development. Imprinting dynamics in early development and in stem cells Because the derivation and in vitro culture of embryonic stem (ES) cells are potential points of origin for epigenetic abnormalities, the epigenetic status of all stem cells and their derivatives must be established prior to their therapeutic use in humans. Recently, induced pluripotent stem cells (iPSCs) have been generated by the forced over-expression of defined sets of transcription factors in human somatic cells [10][11][12]. iPSCs hold great potential for the study of genetic diseases and as a source of patient-specific stem cells for regenerative medicine therapies.
Consequently, the rigorous characterisation of these cells, including the epigenetic and expression status of imprinted loci, is of paramount importance. It remains unclear whether iPSCs are molecularly and functionally equivalent to blastocyst-derived ES cells. Recently, a controversial study showed overall messenger RNA and microRNA expression patterns to be indistinguishable between murine iPSCs and ES cells, with the exception of the aberrant silencing, in some iPSC lines, of the non-coding RNA transcripts of the imprinted Dlk1-Dio3 domain on chromosome 12qF1 [13]. This was associated with reduced contribution to chimaeras and a failure to produce viable all-iPSC derived mice, implying that expression of these imprinted non-coding RNAs is required for full developmental plasticity. While this study is compromised because restoration of expression and rescue of the phenotype were not conducted, it has reignited the debate on imprinting status and its functional implications during early development and in stem cell models. Evidence is accumulating for the relaxation of imprinting in the stem cell niche of some tissues. Our recent work on the role of the Dlk1-Dio3 locus in adult neurogenesis suggests that the selective modulation of imprinting is a normal mechanism of altering gene dosage and is associated with the control of developmental potential in the adult neurogenic niche [14]. We demonstrate that during early postnatal life, the normally silent maternal copy of Dlk1 is derepressed specifically in the multipotent stem cells of the neurogenic niche [14]. This is associated with the partial gain of methylation at the imprinting control region of this locus, the IgDMR. Interestingly, imprinting of Gtl2, a maternally expressed non-coding RNA in the same cluster, is unaffected, and differential methylation of a secondary DMR at the Gtl2 promoter is maintained. Differentiation of adult neural stem cells both in vivo and in vitro is associated with the reacquisition of imprinting at Dlk1. These data force us to reconsider imprinting control mechanisms and the role of imprinting in developmental plasticity. However, it is currently unknown how many stem cell populations and imprinted genes behave in this way, and how, mechanistically, such dynamic imprinting modulation is achieved. While the body of data on the tissue and temporal specificity of imprinting at many loci is growing, the expression and imprinting status of imprinted genes during very early embryonic development remains largely uncharacterised. The emergence of monoallelic expression occurs at different developmental stages at different loci and also varies between different genes within a single imprinted locus. Data acquired from the study of undifferentiated ES cells derived from the pluripotent inner cell mass of the blastocyst have been used as an in vitro model of early development which complements in vivo data. This has revealed that at some loci the acquisition of mono-allelic expression occurs in tandem with differentiation or lineage restriction events. In undifferentiated ES cells, equal expression has been shown from both Igf2r promoters. Differentiation is associated with the gain of imprinting at the Igf2r locus through the specific upregulation of expression from the maternally inherited allele [15]. In the early embryo, Igf2r is biallelically expressed from the 4-cell stage up to and including the blastocyst stage.
Monoallelic expression is gained from E4.5-E6.5 and is dependent on the expression of the overlapping non-coding RNA Airn. Imprinting at the Kcnq1 cluster is also dependent on the expression of a non-coding RNA from the paternally inherited allele, Kcnq1ot1. In contrast to Airn, Kcnq1ot1 is paternally expressed in preimplantation embryos from the two-cell stage. Genes located close to Kcnq1ot1 are ubiquitously imprinted, and monoallelic expression is already detected in blastocysts and undifferentiated ES cells [16,17]. More distal genes are imprinted only in the extra-embryonic tissues, and restriction of expression to one parentally inherited allele coincides with trophoblast specification [17]. Together, these data lead us to suggest that imprinting is a mechanism of dosage control which may, in some instances, be associated with the control of developmental potential. The careful study of the dynamics of imprinted gene expression at defined lineage restriction decisions, in different cell populations and at different developmental stages, during in vivo development and during in vitro differentiation and derivation, is now required to test how widespread or rare such a strategy is.

Perturbation of imprinted gene dosage is associated with neoplastic transformation

Another interesting model in which to interrogate the role of imprinted genes in cellular plasticity is provided by cancer cells, which are characterised by an abnormal gain in developmental potential. The importance of epimutation in cancer is increasingly being recognised. Indeed, some consider cancer to be as much an epigenetic disease as it is a genetic disease [18]. Many imprinted genes play roles in cellular growth and proliferation, and consequently there may be selective pressure for their deregulation in cancer cells. Loss of imprinting (LOI) has been reported to be the most abundant alteration in some cancers and tends to be an early event in neoplastic transformation, demonstrating the importance of imprinted dosage in the maintenance of cellular and tissue identity [19-21]. Indeed, patients with congenital imprinting syndromes and deregulated imprinted gene dosage have an increased risk of cancer [22,23]. The gene encoding the insulin-like growth factor II (IGF2) and the H19 gene (a putative tumour suppressor gene) are imprinted in humans and expressed from the paternally inherited and maternally inherited allele, respectively. Studies in solid tumours showed that the biallelic expression of IGF2 in gliomas and invasive breast cancers is associated with the aggressiveness of tumour growth [21]. There is evidence that LOI may predate and predispose to carcinogenesis, potentially by retarding cellular differentiation and derepressing developmental and proliferative potential (Fig. 3). Igf2 imprinting is lost in the colonic mucosa of 10% of the population and is associated with a personal and/or family history of colonic adenocarcinoma [24]. A murine Igf2 LOI model recapitulates the altered morphology of the normal colonic mucosa seen in patients with IGF2 LOI: an increased proportion of undifferentiated cells and expanded colonic crypts in the absence of proliferative changes [25]. This is associated with an increased incidence of colon cancer, strongly suggesting that LOI at the IGF2 locus promotes neoplastic transformation. While much literature documents LOI during neoplastic transformation, reports of transcription factor mediated deregulation of imprinted gene expression in these processes are also growing.
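As an aside on methodology, LOI of the kind described above for IGF2 is typically assayed as the balance of expression between the two parental alleles at a transcribed polymorphism. The sketch below illustrates this calculation in Python; the read counts and the 0.9 bias threshold are hypothetical placeholders for illustration, not values taken from the studies cited.

```python
# Illustrative sketch: quantifying loss of imprinting (LOI) as allelic
# expression bias at a transcribed SNP. Read counts and the threshold
# below are hypothetical, for illustration only.

def allelic_ratio(maternal_reads: int, paternal_reads: int) -> float:
    """Fraction of transcripts derived from the maternal allele."""
    total = maternal_reads + paternal_reads
    if total == 0:
        raise ValueError("no informative reads at this SNP")
    return maternal_reads / total

def classify_imprinting(ratio: float, bias_threshold: float = 0.9) -> str:
    """Call imprinting status from the allelic ratio.

    A ratio near 1.0 or 0.0 indicates monoallelic (imprinted) expression;
    intermediate values indicate relaxation or loss of imprinting.
    """
    if ratio >= bias_threshold:
        return "monoallelic (maternal)"
    if ratio <= 1 - bias_threshold:
        return "monoallelic (paternal)"
    return "biallelic / loss of imprinting"

# Hypothetical RNA-seq read counts at an exonic SNP of a normally
# paternally expressed gene, in tumour vs. matched normal tissue.
samples = {"normal": (12, 488), "tumour": (210, 290)}  # (maternal, paternal)
for name, (mat, pat) in samples.items():
    r = allelic_ratio(mat, pat)
    print(f"{name}: maternal ratio = {r:.2f} -> {classify_imprinting(r)}")
```

In practice, the choice of threshold and the statistical handling of low-coverage SNPs vary between studies, so calls of loss of imprinting are best interpreted against matched normal tissue, as in the colonic mucosa work described above.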
Imprinting is generally maintained at the DLK1-DIO3 locus in tumours; however, a variety of neuroendocrine and glial tumours are characterised by high levels of DLK1 expression, implicating dosage perturbation via a transcription factor mediated mechanism [26]. A recent in vitro analysis found hypoxia-mediated DLK1 upregulation to be associated with increased 'stemness', tumourigenic potential and reduced differentiation [27], supporting the hypothesis that imprinted gene dosage may be related to tumorigenesis and malignant transformation (Fig. 3). In each organ there exists a stem cell population which replenishes the cells of that organ through asymmetrical divisions, one cell remaining a stem cell while the other differentiates [28]. Several reports indicate that LOI in tissue-specific stem cells may cause the population to abnormally proliferate and expand [20,25,29]. Stem cells and cancer cells commonly share gene expression patterns, regulatory mechanisms, and signalling pathways. This has led to 'the cancer stem cell hypothesis', which suggests that tumours arise from stem cell populations with dysregulated self-renewal caused by epigenetic and/or genetic initiating events, resulting in abnormal expansion and aberrant differentiation [25,30] (Fig. 3). Furthermore, tumour cell heterogeneity has been proposed to be due in part to epigenetic variation and epigenetic plasticity in these progenitor cells [31]. As discussed above, there is evidence for the relaxation of imprinting in the stem cell niche of some tissues [14]. The mechanisms involved in regulating such selective relaxation of imprinting are almost entirely unknown, but are of potentially great importance to our understanding of how cellular developmental potential is controlled and of the processes underlying neoplastic transformation.

Developmental plasticity of a whole organism

Organismal developmental plasticity, the adaptive modification of developmental phenotype in response to environment, can result in astonishing phenotypic diversity. For example, polyphenism in invertebrates produces the colourfully different dry and wet season morphs of certain butterflies [32] or the sexual, asexual, winged and wingless forms of the pea aphid [33]. In mammals there is increasing recognition of the power of the environment during prenatal development to shape adult growth, metabolism and behavioural phenotype. Indeed, studies on laboratory mice have shown that environmental influences can be a greater determinant of phenotype than genetic variation [34]. The study of genetically identical inbred mouse strains essentially eliminates inter-individual genetic variation; consequently, any inter-individual phenotypic variation must stem from epigenetic differences. As imprinted genes are crucial for reproductive and maternal behaviour, embryonic growth and the development and function of key metabolic axes (Fig. 4), they have been proposed as candidates to play a key role in mammalian developmental plasticity. It has also been hypothesised that, as the expression of imprinted genes is functionally monoallelic, exquisitely dosage sensitive and controlled by multiple layers of epigenetic regulation, imprinting and imprinted gene dosage may be more susceptible to environmental changes which impinge on the normal function of the cellular epigenetic apparatus [35].
However, we propose that the converse may instead be true: given the dependence of imprinted gene expression on epigenetic modifications, these may be more tightly safeguarded in the face of environmental perturbations during development, and any mechanism which requires the action of the canonically repressed allele is likely to be highly regulated. Proper investigation of these hypotheses requires the analysis of how the expression of imprinted genes, as a class, responds to environmental challenge relative to the whole transcriptome and to other functionally related gene sets. In the absence of such analyses in the published literature, we review the existing data on the stability of imprinted gene dosage and the epigenetic status of imprinted DMRs in response to environmental challenge during early life.

The role of imprinted genes in developmental plasticity in response to peri-conception environmental challenges

As the penultimate carbon donor to the methyltransferase enzymes is the essential amino acid methionine, diet may impinge on methyl-group availability for biological processes, including epigenetic modifications. It has been proposed that nutritional availability around conception may affect the post-fertilisation wave of epigenetic reprogramming [36,37]. Multiple studies of embryos fertilised and cultured in vitro have suggested that imprinting control elements may be more susceptible to the environment during this period than previously thought [38,39]. However, these studies are potentially confounded by the effects of superovulation, which has been shown to alter the epigenetic status of maternal ICRs [40]. There is some evidence of an association of peri-conception famine exposure with increased susceptibility to cardiovascular disease and earlier disease onset [41,42]. Exposure at this time point has been associated with subtle changes in methylation at three DMRs in different imprinted clusters in blood samples of affected versus unaffected sibs. However, the functional significance of this is unclear, as leukocyte methylation is notoriously variable and the studies did not examine any associated expression changes or effects on imprinting, nor attempt any correlation with known phenotypic outcomes [43,44]. In a rat model of peri-implantation low protein diet, Kwong et al. [36,37] demonstrated a male-specific reduction in birth weight and the development of hypertension at a young age, associated with a 30% reduction in male blastocyst H19 expression. While methylation at the H19 DMR was slightly altered [37], it did not correlate with the observed change in expression, indicating that it was not mechanistically responsible and, although not directly tested, imprinting was likely to be intact. Furthermore, the phenotypic implications of a subtle reduction in H19 expression during early development are unknown. In summary, although in vitro studies provide some evidence that the epigenetic status of DMRs in the early embryo is labile and susceptible to culture conditions, there is currently little evidence for this from in vivo studies.

The role of placental imprinted gene expression in developmental plasticity

The placenta controls nutrient supply to the foetus, is the site of foeto-maternal interaction and is a highly active endocrine tissue, secreting factors which alter maternal metabolism and behaviour [45,46]. The placenta is also a highly plastic organ, responsive to foetal demand for resources [47].
Alterations in placental development can therefore have a dramatic effect on foetal growth; indeed, placental insufficiency is a leading cause of intra-uterine growth restriction in the developed world. Imprinted genes play key roles in placental growth, patterning and function and in the coordination of foetal resource demand and maternal supply, as exemplified by analysis of the Peg3 and Igf2 mutants [48-50]. Consequently, several studies have sought to address whether deregulation of placental imprinted gene expression is associated with human developmental programming and intra-uterine growth restriction (IUGR). The maternally expressed Phlda2/Ipl acts to restrain placental growth, while the paternally expressed Mest promotes it [51,52]. Apostolidou et al. [53] screened 200 human placentas by qPCR for PHLDA2, IGF2, IGF2R and MEST. Only PHLDA2 expression significantly correlated (negatively) with birth weight, but imprinting was not affected, implicating a transcription factor mediated mechanism. In contrast, McMinn et al. [54] assessed the transcriptome of a small sample of human IUGR and normal placentas and observed increased expression of PHLDA2 and decreased expression of MEST, MEG3, GATM, GNAS and ZAC1 in IUGR placentas. They observed no methylation changes at the PHLDA2 or MEST ICRs, nor was the spatial distribution of PHLDA2 expression changed. Imprinted genes constituted 7% of their expression changes, a significantly higher proportion than would be expected, potentially implicating imprinted genes as a class as playing a key role in human IUGR. However, morphological adaptations occur in small placentas in an effort to sustain foetal growth [47], and thus the observed expression changes may be indirect, reflecting a secondary effect of the altered morphology.

[Fig. 4 caption: There is also evidence that imprinted genes act coordinately in the foetus to regulate growth, thus altering foetal demand for maternal resources. Imprinted genes play key roles in the development of metabolic organs and modulate key adult metabolic pathways. Adapted from Charalambous et al. [66].]

In human studies much effort has focussed on establishing whether circulating foetal IGF2 levels correlate with foetal growth. However, the evidence is conflicting: a variety of studies have found that IGF2 levels in the placenta and/or cord blood correlate positively with birth weight [55-59], while others find no such relationship [60-64]. The discrepancies may partly be due to a failure to take into account the impact of changes in the levels of the circulating non-imprinted binding proteins which alter IGF2 bioavailability, the IGFBPs. Serum levels of several IGFBPs have been found to correlate with birth weight and may be modulated by in utero nutrition [57,58,61,65]. This relationship with proteins which modulate bioavailability makes Igf2 a particularly challenging model for assessing phenotypic plasticity and imprinted gene dosage, and also suggests that the effective dosage of imprinted genes may be modulated post-transcriptionally by non-imprinted pathways. In summary, while there is some evidence of altered placental imprinted gene dosage in IUGR, there is no evidence that this involves changes in the epigenetic status of imprinted DMRs, suggesting that transcription factor-mediated dosage modulation is responsible. Therefore, there is currently little evidence to suggest that placental imprinting is susceptible to environmental perturbation.
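The enrichment claim above, that imprinted genes constituted 7% of the expression changes, more than expected by chance, rests on a standard over-representation calculation. A minimal sketch of that logic follows, using SciPy's hypergeometric distribution; all counts are hypothetical placeholders rather than the actual figures from McMinn et al. [54].

```python
# Illustrative sketch of the over-representation logic described above.
# All counts are hypothetical placeholders, not the values from [54].
from scipy.stats import hypergeom

total_genes = 20000        # genes assayed (assumed)
imprinted = 100            # approximate number of known imprinted genes
changed = 400              # differentially expressed genes (assumed)
imprinted_changed = 28     # imprinted genes among them, i.e. 7% of 400

# Probability of seeing at least this many imprinted genes by chance,
# modelled as drawing `changed` genes without replacement.
p_value = hypergeom.sf(imprinted_changed - 1, total_genes,
                       imprinted, changed)

expected = changed * imprinted / total_genes
print(f"expected by chance: {expected:.1f}, observed: {imprinted_changed}")
print(f"enrichment p-value: {p_value:.2e}")
```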
Imprinted genes and the postnatal sequelae of altered in utero development

The careful analysis of murine genetic models has demonstrated that imprinted genes play critical roles in the development of key metabolic organs, with obvious consequences for postnatal metabolic phenotype. This includes neuroendocrine and endocrine organs involved in the control of homeostatic metabolic axes, such as the brain, pituitary, adrenal and pancreas, as well as tissues critical for energy storage and utilisation, such as muscle, white and brown adipose tissue and liver (Fig. 4) (reviewed in [66]). Perturbed development and postnatal function of these tissues are thought to contribute to the metabolic sequelae of developmental plasticity in response to in utero deprivation, but few studies have investigated whether altered somatic imprinted gene expression may be involved.

Imprinted genes and the pancreatic consequences of in utero growth restriction

Pancreatic sensitivity to blood levels of glucose, insulin, IGF1 and other hormones is critically important for metabolic health. The pancreas is a plastic organ, and early life events may play a role in determining the capacity for adult pancreatic plasticity. Several animal models have demonstrated altered pancreatic development following in utero deprivation [67-70]. A group of imprinted genes including Igf2, Rasgrf, Grb10, Neuronatin and Zac1 play key roles in pancreatic development and maturation and may be involved in the pathogenesis of these defects, although direct evidence of this remains scant. Martin et al. [67] examined the IGF axis and pancreatic function in rats which had been protein restricted in utero during the last week of gestation. These rats have a phenotype similar to that of local Igf2 overexpression [71]. However, pancreatic Igf2 mRNA expression was reduced and there was no change in hepatic or serum levels of IGF2. Waterland and Garza [72] investigated the role of nutrition on pancreatic maturation by altering rat litter size during lactation. Both overnourished and undernourished animals had impaired pancreatic islet glucose-stimulated insulin secretion. Expression of Neuronatin was found to be significantly reduced in the overnourished individuals. Given the phenotypic similarities with an in vitro siRNA knockdown [73], reduced Neuronatin expression in this model may have contributed to the insulin secretory defects. However, interpretation of these data in the context of the whole pancreatic transcriptome is required to determine whether imprinted genes as a group are uniquely susceptible in the pancreas to environmental perturbation. Convincing evidence for the role of progressive reductions in pancreatic expression of the key transcription factors Hnf4a and Pdx1 following compromised early life conditions suggests that this may be unlikely [69,70].

Imprinted genes in brain development and the central control of metabolic axes

The brain is perhaps the most plastic organ of the body, capable of remarkable feats of learning and memory which involve rapid and widespread alterations of neuronal architecture and biochemistry. The majority of imprinted genes show high expression in the brain, and many are imprinted only here (our observations; [74]).
Furthermore, the human congenital imprinting syndromes, for example Prader-Willi syndrome (PWS) and Angelman syndrome (AS), are all characterised by neurological and behavioural impairments and learning difficulties, indicating the importance of imprinting in brain development and function [75]. Stress or deprivation in utero and negative experiences early in life have been associated, in humans and animals, with lasting changes in behaviour and emotionality and with various psychiatric diseases [76]. Hotspots of imprinted gene expression are found in many areas critical for motivation, emotion and reward, such as the brainstem monoaminergic nuclei, the amygdala, nucleus accumbens and ventral tegmental area (our observations; [74,77]). While there has been much speculation on the possible role of imprinted genes in these areas, and hence in psychiatric illness, direct evidence of this is sparse [78,79]. However, Dlk1 and Grb10 have recently been shown to be involved in the development of the midbrain dopaminergic population [80], while loss of Magel2 is associated with defects in serotonergic signalling [81]. There is increasing evidence that developmental plasticity alters the central regulation of homeostatic axes such as those involved in the control of blood volume, stress susceptibility and energy balance [82-84]. Many imprinted genes show high expression in key components of the hypothalamo-pituitary axis (our observations; [74,85]) and, although genetic mouse models of altered dosage at the Dlk1-Dio3, Peg3 and Gnas loci show altered 'set points' of metabolic axes, there are, to our knowledge, currently no data linking changes in the early life environment with changes in the central nervous system expression of imprinted genes [86-89].

Concluding remarks

Dosage control at imprinted loci is essential for successful embryonic development. The temporal dynamics of the acquisition of imprinted expression at certain loci coincide with cellular differentiation or lineage restriction events, and the abnormal silencing of a cluster of imprinted non-coding RNAs has been associated with reduced developmental potential of iPSCs [13]. Conversely, recent data suggest that the highly selective and regulated relaxation of imprinting is associated with cellular developmental potential in some stem cell populations [14]. Furthermore, loss of imprinting and altered imprinted gene expression dosage have been associated with neoplastic transformation [21]. This leads us to suggest that imprinting may be associated with the control of cellular developmental plasticity. The investigation of the temporal dynamics of imprinting in vivo during early development and in further tissue-specific stem cell populations is required to determine the extent of the physiological role of imprinted gene expression in cellular developmental plasticity. It has been proposed that imprinted genes may be more susceptible to dosage perturbation due to early life environmental challenges, and therefore that they may play a key role in the plastic developmental response of an organism to the early life environment. However, we propose that the opposite may be true: imprinted genes may be protected from, or may be less susceptible to, such environmental perturbation. To properly test such hypotheses, the expression of imprinted genes in the context of the whole transcriptome response to environmental challenge during early life must be assessed, and such data are currently lacking.
Most studies have been hampered by low sample size, but there is emerging evidence that genes such as Phlda2 may be involved in the altered placental development associated with intra-uterine growth restriction. However, untangling cause and effect in such a morphologically plastic tissue is complicated. Where there is some evidence of altered expression of imprinted genes in developmental plasticity, this is generally not associated with substantial relaxation of imprinting and does not consistently correlate with changes in DNA methylation, implicating transcription factor mediated mechanisms rather than loss of imprinting. Therefore, modulation of gene dosage through loss of imprinting, as a developmental mechanism, may be rare, and any mechanism which requires the action of the canonically repressed allele is likely to be highly regulated.
2018-04-03T02:10:35.691Z
2011-07-07T00:00:00.000
{ "year": 2011, "sha1": "9e344c858e0a37cc653032a377c4859fa270b0fb", "oa_license": "CCBY", "oa_url": "https://febs.onlinelibrary.wiley.com/doi/pdfdirect/10.1016/j.febslet.2011.05.063", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "3bc80140063f8cfe6d3f47111eebdc56aaf64fad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
213363569
pes2o/s2orc
v3-fos-license
Plasma-prepared arsenic telluride films: relationship between physico-chemical properties and the parameters of the deposition process

Previously, we demonstrated that it is possible in principle to synthesize arsenic telluride films of different chemical and phase composition by PECVD using elemental arsenic and tellurium directly as the precursors. This paper presents the results of a systematic study of the physicochemical properties of As-Te films prepared in a low-temperature non-equilibrium RF (40 MHz) argon plasma discharge at low pressure (0.1 Torr). The surface morphology, structure and thermal crystallization behavior of the films obtained were studied as a function of the plasma parameters of the deposition process. The characteristics of the stationary and transient photoconductivity of the films were studied as well.

Introduction

Tellurium-based chalcogenide materials are considered promising for application in IR optoelectronics and photonics [1-6], and also for the fabrication of three-dimensional architectures of integrated optical elements for a future quantum computer [7-9]. Although the effect of threshold switching was first observed in telluride glasses by Northover and Pearson in the 1960s [10,11], the As-Te binary system has so far served as a key model for understanding the phase-forming behavior of the whole family of tellurium-based materials [12-18]. Te-based chalcogenide materials are a strong candidate for implementation as dense, one-selector one-resistor (1S1R) resistive switching memory arrays, but their low thermal stability is one of the key factors that prevents rapid adoption by emerging resistive switching memory technologies [19]. Thermochemical and, in particular, crystallization properties of As-Te bulk samples have been studied in detail and published in [18-22]. Structural properties of As-Te materials were studied and reported in [22-24], where the chalcogenide films were obtained by a spin-coating technique from bulk samples and underwent annealing and quenching under different conditions. The authors of [24] have also suggested that the structural ordering of As-Te films is determined by two parallel processes of dissociation that occur during the formation of the glassy net structure. The physico-chemical properties of chalcogenide materials are largely determined by the method of their preparation, since the method of synthesis directly affects their structure and the state of defects in it. The validity of this statement is vividly confirmed when studying the properties of As-Te glasses obtained under different conditions. Thus, in [16], samples of AsxTe100−x thin films were obtained under different quenching conditions and studied using Raman spectroscopy. It was found that AsTe and As2Te3 were the main structural units. In addition, it was noted in [17-20] that, depending on the annealing modes, and especially with fast cooling of the samples, structural units of AsTe and AsTe2 composition are present in the As-Te glass net. In most cases, thin films obtained using state-of-the-art deposition technologies do not have the same optical properties as the original bulk samples. The As-Te films obtained are chemically and structurally disordered. Potential fluctuations due to disorder create shallow localized states near the allowed bands in the form of exponentially falling tails, which are characteristic of all amorphous semiconductors.
According to [21], the tail width of the valence band can, as a first approximation, be estimated from the Urbach slope of the optical absorption curve. Since holes are the more mobile carriers in chalcogenides, it is also possible that this tail is probed by appropriate measurements of electric transport. In addition, the structural glass net of any amorphous chalcogenide glass film contains a large number of defects such as 'broken bonds'. According to the model proposed in [22], in chalcogenide glasses these defects are charged and have no unpaired electrons (D+ and D− dangling bonds, with and without two spin-paired electrons, respectively). The neutral dangling bond D°, with one unpaired electron, is energetically unfavorable and manifests itself only in the excited state. These defects are responsible for localized states located more deeply in the forbidden zone of the amorphous material. Direct observation of these defects, for example by measuring the optical absorption spectrum, is often difficult due to the lack of sensitivity in the case of thin layers of amorphous chalcogenide glasses. The antisite defects in glassy As-Te alloys were investigated by emission Mössbauer spectroscopy in [25]. Measurement of the photoconductivity of chalcogenide films, whose magnitude and temporal characteristics are determined by the density and type of the defects responsible for the existence of deep localized states in the band gap (mobility gap) of an amorphous semiconductor, is one of the indirect methods of sensing (detecting) such defects in chalcogenide glasses. Vacuum evaporation and spin-coating are the most common methods for synthesizing planar structures of the As-Te system. In [22-24,26] we emphasize the advantages of using plasma-chemical methods for the synthesis of chalcogenide glasses in the form of bulk samples and thin films. Compared to classical methods such as CVD, thermal vacuum evaporation or spin coating, plasma-enhanced chemical vapor deposition (PECVD) provides much greater possibilities for controlling the deposition process and the formation of the final structure. Additional influencing factors are the temperature and concentration of electrons in the plasma. The purpose of this work is a systematic study of the physicochemical properties of amorphous As-Te films of different composition, obtained by plasma-chemical deposition from the gas phase in a low-temperature argon plasma under reduced pressure.

Experimental

The samples of chalcogenide films were obtained at the facility whose schematic diagram is given in [24]. The layers were deposited on high-purity fused quartz substrates with a thickness of 0.5 mm and on crystalline NaCl; the substrate temperature was about 40°C. In order to avoid thermal crystallization of the amorphous films, flat slit electrodes with aluminium contacts (distance between contacts 0.5 mm, contact width 10 mm) for measurements of dark and photoconductivity were made on a fused silica substrate before deposition of the As-Te films. The chemical composition of the obtained samples was studied by x-ray microanalysis on a JSM IT-300LV scanning electron microscope (JEOL) with an X-MaxN 20 energy dispersive attachment for elemental analysis (Oxford Instruments) under high vacuum and an accelerating voltage of 20 kV. The Raman spectra were studied on an NTEGRA Spectra Raman spectroscopy complex (NT-MDT) using a HeNe laser with a wavelength of 632.8 nm.
The Raman spectra were recorded in reflection geometry. All spectra were recorded at room temperature. The thickness of the As-Te films was estimated by atomic force microscopy from the step formed in the process of obtaining the samples; it ranged from 5 to 8 μm. To measure the photoconductivity, a semiconductor continuous-wave laser with a power of 40 mW and a wavelength of 785 nm was used as the light source; the light intensity was controlled by calibrated neutral-density filters. A shutter with a time constant of about 10−3 s was used to turn the light on and off. Photoconductivity relaxation was recorded using a Keithley Series 2420 SourceMeter® with a time constant of not more than 0.02 s. The temperature of the test samples was set using a Linkam Scientific Instruments THMS350V vacuum thermal stage with an accuracy of 0.1°C. The electric field applied to the test sample did not exceed 200 V cm−1. The spectral distribution of the stationary photoconductivity was obtained using an MDR-204 monochromator with a 100 W halogen lamp as the light source.

Study of the process of plasma deposition

The study of the plasma-chemical process consisted of two stages. The first stage included optical emission diagnostics of the chemically active plasma, which makes it possible to determine what kinds of active particles are formed in the plasma and to propose possible mechanisms for the plasma-chemical reactions. The second stage included mass-spectrometry of the exhausted gas mixtures to determine the degree of conversion of the precursors as well as the impurities in the plasma discharge.

OMS of the plasma discharge

The optical emission spectra of the inductively coupled plasma discharge of high-purity argon (for comparison) and of (Ar+As) and (Ar+Te) mixtures at different values of energy input were studied in the range of 180-1100 nm. The power supplied to the plasma discharge was varied in the range of 30-50 W; the total pressure in the system was maintained constant at 0.1 Torr. Individual emission spectra of As and Te in the argon discharge are presented in figure 1. The excited states of As(I) atoms at 228.81, 234.98, 238.12, 245.65 and 278.02 nm and of As(II) ions at 283.08, 299 and 311.51 nm [24-27] may be identified in the spectra. Besides, the emission spectrum of the Ar-As plasma includes peaks corresponding to excited As2 molecules at 248.74, 289.03, 298.9, 302.65 and 305.8 nm [27]. With an increase of the energy input into the plasma discharge, the intensity of the molecular bands decreases, but the intensity of the atomic lines increases. It is established that in the vapour phase arsenic exists in the form of As4 and As2 agglomerates [28]. Under the influence of electron impact, the agglomerates dissociate by the following reactions: As4 + e− → 2As2 + e− and As2 + e− → 2As + e− [30]. Meanwhile, the intensity of the argon lines in the range of 690-850 nm remains almost unchanged (not shown in the spectra). As a result of this plasma initiation, the precursors are obtained in the gas phase in an excited atomic state. Judging by the fact that As-Te molecular lines are absent in the spectra, we can assume that the reaction of formation of the As-Te solid phase occurs on the substrate simultaneously with the quenching process. We may also assume that the atomization of the initial clusters in the plasma discharge is the reason for the deposition of arsenic telluride materials with a more structurally and chemically uniform surface.
Since the dissociation energies of As4, As2 and Te2 are different, this also affects the content of As and Te in the final chalcogenide thin films deposited.

Chromato-mass-spectrometry of the exhausted gas mixtures

The investigation of the exhausted gas mixtures is of special interest because, on the one hand, it allows the degree of conversion of the precursors to be determined and, on the other hand, it makes it possible to clarify the behavior of some volatile impurities, especially those of a carbon nature. The exhausted gas mixtures after each process were accumulated in traps cooled by liquid nitrogen and analyzed by chromato-mass-spectrometry. The data are shown in table 1. The quantity of water vapor traces typically rises with the growth of the power input into the plasma discharge [25,27]. One of the reasons may be that the plasma enhances outgassing of water traces from the walls of the plasma-chemical chamber. Both commercial precursors, arsenic and tellurium, include heterophase inclusions of a carbon nature. In the plasma discharge, their intensive conversion takes place. This may involve interaction with traces of oxygen and water, with the formation of CO2 or carbon derivatives of different molecular masses, which may be partly removed during the deposition process. With an increase of the plasma power, the concentration of carbon-containing gas-forming impurities in the exhausted gas mixture increases.

3.2. Study of the materials obtained

3.2.1. Dependence of the chemical composition of the AsxTe100−x films on the energy input into the plasma discharge. EDX and SEM studies of the samples

In order to study the effect of plasma power on the chemical composition, structure and properties of AsxTe100−x films, first of all, a sample with the composition As20Te80 was obtained at the minimum generator power of 10 W. Then, at the same ratio of the precursors in the gas phase, samples of films of different compositions were obtained, while the plasma power was varied in the range from 10 to 65 W. The composition of the films was investigated by x-ray microanalysis (EDX). The data are presented in table 2. As follows from these data, with an increase of the energy input from 10 to 64 W, the arsenic content increases from 20 to 80 at%. The SEM images of the AsxTe100−x films are presented in figure 3. Judging by the images obtained, the As20Te80 sample has a distinctly crystalline structure due to the substantial excess of tellurium, the As40Te60 film looks like a two-phase one, and the As50Te50 and As80Te20 samples are, presumably, single-phase but formed by different structural units. The activation energies of photoconductivity of the AsxTe100−x films are presented in Table 4.

The XRD patterns of the As-Te plasma-prepared films

The XRD measurements were carried out for the films deposited on crystalline NaCl substrates at a substrate temperature of 40°C. All the prepared samples are amorphous (figure 4(a)), except for the sample with the chemical composition As40Te60 (figure 4(b)) [17]; the corresponding curves, illustrating the dependence of the signal intensity on the double angle 2Θ, include only several broad and structureless bands. The curve for the As40Te60 sample consists of reflections corresponding to the As2Te3 crystal phase (inset of figure 4(b)) and reflections corresponding to the AsTe crystal phase.
Due to the similarity of the x-ray diffraction patterns of the samples, only one pattern, corresponding to the As50Te50 sample, is presented in figure 4.

DSC measurements of the As-Te samples

In order to investigate the phase composition of the AsxTe100−x plasma-prepared materials in the range from 310 to 620 K, a differential scanning calorimeter (model: DSC 204 F1 Phoenix, Netzsch Gerätebau, Germany) was used. The calorimeter was calibrated and tested against the melting of n-heptane, mercury, tin, lead, bismuth and zinc. The temperatures and enthalpies of the transitions were evaluated according to the standard Netzsch Proteus software procedure. The technique for determining transition values from DSC measurements is described in detail in [31] and in the Netzsch Proteus software documentation. The heating and cooling rates were 5 K min−1. The measurement was carried out in an argon atmosphere; 0.025 g of the compound under study was placed in an aluminum crucible. The data of the measurements are illustrated in figure 5. SEM images are additionally given for each film in the insets to illustrate the phase composition. The As20Te80 sample has three exothermic crystallisation reactions, at 415.8, 482.1 and 514.9 K, and one endothermic glass transition at 373 K. In a previous paper [20], Titus and Asokan reported only one crystallization peak, at 415 K, and one endothermic glass transition, at 373 K, for AsxTe100−x bulk samples with an arsenic content below 40 at%. However, the structure or, in our case, the phase composition of chalcogenide materials strongly depends on the method of their preparation. The Tg at 373 K was explained in [20] by the 'virgin glass → supercooled melt' transition, and the Tc1 at 415 K was attributed to the formation of the hexagonal Te phase. The change in the phase state was proved by x-ray diffraction patterns of two glasses, As25Te75 and As30Te70, representing the region of 25-40 at% of arsenic. In the case of the plasma-prepared As20Te80, we can see two additional crystallization processes, attributed to the formation of the fss As-Te and monoclinic As2Te3 crystal phases at 482.1 K (Tc2) and 514.9 K (Tc3), respectively, which were not observed previously in [20]. The identification of the fss As-Te and monoclinic As2Te3 crystal phases was done in [20] on the basis of the analysis of x-ray diffraction patterns of As40Te60 glass annealed in a sealed evacuated (10 Torr) ampoule. The As40Te60 sample possesses a barely noticeable transition, attributed to the formation of hexagonal Te at 415 K, and one endothermic 'virgin glass → supercooled melt' glass transition at 518.6 K, followed by two crystallisation peaks at 524.4 and 553.5 K due to the appearance of the fss As-Te and monoclinic As2Te3 crystal phases [20]. The As50Te50 sample has one Tg of the 'virgin glass → supercooled melt' transition at 473 K and one Tc at 520 K, corresponding to the formation of the As-Te crystal phase. Finally, the As80Te20 sample has two crystallization temperatures, at 544 and 574.1 K, referring to the fss As-Te and monoclinic As2Te3 crystal phases [20]. The excess of arsenic, which is usually present in the form of an amorphous phase, has not manifested itself in terms of phase transitions in the measured temperature range. As intermediate conclusions, we can formulate the following statements:
1. we once again confirmed the fact that the phase composition of chalcogenide materials significantly depends on the method of their preparation;
2. the plasma-chemical method allows the phase composition to be varied, if necessary, by changing the quenching parameters on the substrate.

3.3. Raman spectroscopy of the AsxTe100−x plasma-prepared samples

Raman spectra of the AsxTe100−x plasma-prepared films are shown in figure 6. All the spectra consist of three broad peaks referring to the vibrations of structural fragments containing Te-Te bonds at ~160 cm−1, As-Te bonds at ~197 cm−1 and As-As bonds at ~234 cm−1. The As20Te80 sample, containing an excess of tellurium, has its most intense peak at 160 cm−1, attributed to vibrations of tellurium chains or their fragments in the As-Te glass net [24]. The Raman spectra of the As40Te60 and As50Te50 samples appear approximately symmetrical, with a dominant band at 197 cm−1, the vibration mode of the AsTe3/2 trigonal pyramid. In the sample with composition As80Te20, amorphous arsenic exhibits a broad maximum at a frequency of 235 cm−1; it is characteristic of glasses enriched with arsenic [22]. In paper [23] it was suggested that the structural ordering of AsxTe100−x films depends on two parallel reactions of dissociation that take place during the formation of the glass structure: As2Te3 → 2As + 3Te and AsTe → As + Te. This assumption is in good agreement with the data we obtained.

Mass-spectrometry of the arsenic telluride materials

Preliminarily, the films were deposited on a soluble substrate (high-purity NaCl) and separated in deionized, double-distilled water. In order to obtain the mass-spectra of the arsenic telluride films, we used the standard analytical procedure, in which 3-5 micrograms of the film were placed into a microreservoir of the direct injection setup of the QMS and entered via a vacuum gate directly into the ion source of the mass-spectrometer. The quartz glass microreservoir was gradually heated from 35 to 500°C at a rate of 50°C/min. The mass-spectra of the arsenic telluride materials deposited under various conditions were recorded in the mass number range 30-650 at an electron ionization energy of 75 eV. The spectra of two samples, As40Te60 and As50Te50, are presented in figure 7. Both mass-spectra include the structural units of the original films: As, Te, As2, As3, Te2, As4, AsTe2 and As2Te2. The AsTe2 species are likely fragments of the As2Te3 phase, while the As2Te2 fragments appear to represent the As-Te phase. Logically, with an increase of the arsenic content, the concentration of the As-Te structural fragments increases. The presented spectra once again confirm the previously stated theory that the structure of the films is determined by the course of two parallel reactions: As2Te3 → 2As + 3Te and AsTe → As + Te. The obtained mass-spectra, together with the analysis of the exhausted gas mixtures, allow us to discuss the conversion of carbon-containing impurities during the process of plasma-chemical synthesis. Commercial high-purity precursors always include carbon in the form of nanoparticles for two reasons. First of all, carbon is a non-limiting impurity in terms of semiconductors. Secondly, there is a lack of appropriate methods for the deep purification of arsenic and tellurium from nanoparticles of different natures. That is why in both mass-spectra, besides the main lines related to the structural fragments of the glass net, a large number of impurity lines of carbon derivatives are observed.
With an increase of the energy input, the quantity of carbon-containing impurities with masses below 400 decreases, partly due to removal into the trap and partly due to polymerization.

3.5. Transparency of AsxTe100−x films in the range of 2-25 microns

The IR transparency of the plasma-prepared AsxTe100−x films was studied in the range of 2.0 to 25 μm. For this aim, the films were synthesized on high-purity NaCl substrates. The results are presented in figure 8. The sample with composition As50Te50 possesses the widest transparency window, from 2.0 to at least 26 μm. The other films include a substantial quantity of the As2Te3 phase and show a sharp decline in transparency near 22 μm. This feature appears, presumably, because As50Te50 comprises different structural fragments than As2Te3, and these units do not manifest bands of intrinsic absorption at 22 μm. This assumption correlates with the data of the structural investigations reported in [24].

3.6. Investigation of the photoconductivity of AsxTe100−x samples obtained at various plasma powers

The electrical conductivity of the As-Te films was studied as follows. Two aluminum contacts 10 mm long, with a distance of 0.5 mm between them, were deposited in vacuum on clean glass substrates by magnetron sputtering. Then, over the contacts, As-Te films of certain compositions were formed by plasma-chemical deposition. Measurement of the electrical conductivity was carried out in vacuum at temperatures from +60 to −100°C, with the samples placed in a thermostatted transparent cuvette. Temperature dependences of the dark current in As-Te films of different composition are shown in figure 9. The temperature dependences of the photocurrent in As-Te films of different composition, when irradiated with light at a wavelength of 795 nm and an intensity of 40 mW cm−2, are presented in figure 10. The electrical conductivity of the films increases with an increase in the content of tellurium, while a decrease in photosensitivity is observed. The temperature dependences of the dark and photoconductivity have an activation character (figures 9 and 10). The dependences of the dark current on temperature are described by two exponents, which indicates two mechanisms of conductivity. Both mechanisms possess low activation energies and low values of the pre-exponential factor, which indicates a hopping mechanism of conduction over localized states in the forbidden zone of the amorphous semiconductor. Theoretical models of the mechanisms of conductivity in chalcogenide semiconductors have been repeatedly discussed in the works of other authors. In our case, the model of charged intrinsic defects [32] probably applies. Defects of this type arise due to the violation of the ground state of the chemical bond and are charged dangling bonds D+ and D− [33-37].

Conclusions

The AsxTe100−x films of different chemical and phase composition have been prepared by changing the energy input into the plasma discharge. As a result of this plasma initiation, the precursors are obtained in the gas phase in an excited atomic state. Judging by the fact that As-Te molecular lines are absent in the spectra, we can assume that the reaction of formation of the As-Te solid phase occurs on the substrate simultaneously with the quenching process. We may also assume that the atomization of the initial clusters in the plasma discharge is the reason for the deposition of arsenic telluride materials with a more structurally and chemically uniform surface.
The exhausted gas mixtures after each process were accumulated in traps cooled by liquid nitrogen and analyzed by chromato-mass-spectrometry. Both commercial precursors, arsenic and tellurium, include heterophase inclusions of a carbon nature, and their intensive conversion takes place in the plasma discharge. Judging by the Raman and mass-spectra obtained, the structural ordering of the AsxTe100−x films depends on two parallel reactions of dissociation that take place during the formation of the glass structure: As2Te3 → 2As + 3Te and AsTe → As + Te. The electrical conductivity of the films increases with an increase in the content of tellurium, while a decrease in photosensitivity is observed. The dependences of the dark current on temperature are described by two exponents, which indicates two mechanisms of conductivity. Both mechanisms possess low activation energies and low values of the pre-exponential factor, which indicates a hopping mechanism of conduction over localized states in the forbidden zone of the amorphous semiconductor.
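To make the two-exponential description of the dark current concrete, the sketch below fits a sum of two thermally activated conduction channels, sigma(T) = s1·exp(−E1/kT) + s2·exp(−E2/kT), to a synthetic data set; the prefactors and activation energies used are hypothetical and are not the values measured for these films.

```python
# Illustrative sketch of the two-exponential (two conduction channel)
# description of the dark conductivity noted above:
#   sigma(T) = s1*exp(-E1/(k*T)) + s2*exp(-E2/(k*T))
# All parameter values and the synthetic data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def sigma_model(T, s1, E1, s2, E2):
    """Sum of two thermally activated conduction channels."""
    return s1 * np.exp(-E1 / (K_B * T)) + s2 * np.exp(-E2 / (K_B * T))

def log_sigma_model(T, s1, E1, s2, E2):
    """Log of the model; fitting the log improves numerical stability."""
    return np.log(sigma_model(T, s1, E1, s2, E2))

# Synthetic "measurement" over roughly the reported range (-100 to +60 C)
T = np.linspace(173.0, 333.0, 40)
true_params = (1e-2, 0.30, 1e-6, 0.05)  # assumed, not measured, values
rng = np.random.default_rng(0)
sigma = sigma_model(T, *true_params) * rng.normal(1.0, 0.02, T.size)

popt, _ = curve_fit(log_sigma_model, T, np.log(sigma),
                    p0=(1e-2, 0.25, 1e-6, 0.04))
print("fitted prefactors (a.u.) and activation energies (eV):", popt)
```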
2019-12-19T09:10:52.598Z
2020-01-10T00:00:00.000
{ "year": 2019, "sha1": "1aed2699d1e481d90e0bbe2eb62c6ef7485c04e3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ab62ea", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6171287f11266e95a5efc0ba078ebe15b7528641", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
257758988
pes2o/s2orc
v3-fos-license
Graphene nanoparticles as data generating digital materials in industry 4.0

One of the potential applications of 2D materials is to enhance the multi-functionality of structures and components used in the aerospace, automotive, civil and defense industries. These multi-functional attributes include sensing, energy storage, EMI shielding and property enhancement. In this article, we have explored the potential of using graphene and its variants as data generating sensory elements in Industry 4.0. We have presented a complete roadmap covering three emerging technologies, i.e. advanced materials, artificial intelligence and block-chain technology. The utility of 2D materials such as graphene nanoparticles is yet to be explored as an interface for the digitalization of a modern smart factory, i.e. the “factory-of-the-future”. In this article, we have explored how 2D material enhanced composites can act as an interface between physical and cyber spaces. An overview of employing graphene-based smart embedded sensors at various stages of composites manufacturing processes and of their application in real-time structural health monitoring is presented. The technical challenges associated with interfacing graphene-based sensing networks with the digital space are discussed. Additionally, an overview of the integration of associated tools such as artificial intelligence, machine learning and block-chain technology with graphene-based devices and structures is also presented.

[Figure 1 caption: Flow chart illustrating the digitalization of composite structures using graphene nanoparticles as an interface for creating a digital factory environment. Reduced graphene oxide based sensors, along with traditional sensors, can be incorporated in the manufacturing setup for digital manufacturing. Using advanced tools such as block-chain technology, artificial intelligence, virtual simulations and digital twins, smart manufacturing can be achieved within the Industry 4.0 framework.]

Appropriate signal processing tools and a data management system are also required for smart operations 30. Block-chain technology is a promising tool for data collection and management, whereas artificial intelligence tools can provide the required signal processing capabilities. Hence, rGO based sensors, AI-powered tools and block-chain technology can form a triad that could enable smart manufacturing. Moreover, the database can be diversified with the help of simulation tools and digital twins. In this article, we have explored the prospective utilization of graphene nanoparticles as digital materials within the context of Industry 4.0. First, we have explained how to use rGO as an embedded sensor, followed by the types of data generated by these sensors during the manufacturing process as well as during the service life of a structure. The use of block-chain technology and artificial intelligence tools for collecting and processing the data, and the role of digital twins in smart manufacturing, are presented. The data generated using traditional and rGO based sensors can be collected and stored in an efficient and secure manner using block-chain technology. Machine and deep learning tools can be used for creating calibration, detection and predictive models from this database, which can analyze real-time signals captured using graphene-based sensors. In summary, we have presented a roadmap to converge three emerging technologies, i.e. advanced 2D materials, artificial intelligence and block-chain, in order to realize smart manufacturing in Industry 4.0.
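To make the proposed role of block-chain technology concrete, the sketch below implements a minimal hash-chained ledger of sensor readings, in which each record commits to the hash of the previous one so that retrospective tampering is detectable. This is an illustrative, single-node sketch; the field names and readings are hypothetical, and a production deployment would add distribution and consensus.

```python
# Minimal sketch of block-chain-style, tamper-evident logging of sensor
# readings. Field names and readings are hypothetical placeholders.
import hashlib, json, time

def add_block(chain, payload):
    """Append a block whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Recompute hashes; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "payload", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_block(ledger, {"sensor": "rGO-01", "fcr": 0.012})  # hypothetical reading
add_block(ledger, {"sensor": "rGO-01", "fcr": 0.034})
print("ledger intact:", verify(ledger))
```

Verification fails as soon as any stored payload or timestamp is edited, which is the property that makes such a ledger attractive for archiving process and calibration data.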
Creating reduced graphene oxide strain sensors

To make rGO based strain sensors for composites, graphene is used as a precursor, usually synthesized either by a top-down method or by a bottom-up approach 31. The top-down approaches, such as mechanical exfoliation, oxidation-reduction of GO, liquid phase exfoliation and arc discharge, involve the structural breakdown of a precursor such as graphite, followed by interlayer separation to produce graphene sheets 32. Chemical vapor deposition (CVD), epitaxial growth and total organic synthesis, which utilize a carbon source gas to synthesize graphene on a substrate, are examples of bottom-up techniques 31. Graphene nanoparticles and similar 2D materials can be embedded within a fiber reinforced composite structure either by dispersing them in the matrix or by coating them directly on the fiber reinforcements 33,34.

Reduced graphene oxide mixed within the matrix. In this approach, the polymeric resin (matrix) is modified by dispersing graphene nanoparticles within the resin, resulting in a traditional nanocomposite 35,36. An enormous amount of useful data is gathered, such as mixing ratios, mechanical stirring force, centrifugal mixing parameters, etc. The data gathered during the mixing of graphene nanoparticles in the resin system are useful for predicting the physical state of the reduced graphene oxide, such as the exfoliation and the quality of reduction achieved, which can directly influence properties such as electrical conductivity, EMI shielding and a number of different mechanical properties 29. However, the modified resin may also cause issues such as altered resin viscosity, particle agglomeration, premature gelation, a filtering effect within the fabric while infusing the resin, and uneven distribution of the filler throughout the composite laminate 37,38. These issues have hindered the practical application of rGO loaded resins and their composites, especially during the manufacturing of large and thick parts, e.g. wind turbine blades, where mold filling can become very challenging.

Reduced graphene oxide coating on reinforcements. Coating rGO directly onto the fibrous reinforcements instead of modifying the matrix is an alternative approach to overcome the issues highlighted above. In addition to imparting sensing abilities, the coating of reinforcements with rGO also provides the possibility of improving the mechanical and physical properties of the composite, hence endowing multifunctional properties to the final structure 39. Techniques for depositing rGO onto fibrous reinforcements include (i) chemical vapor deposition (CVD) 40, (ii) electrophoretic deposition 41, (iii) solution and spray coating 42, and (iv) sizing containing rGO applied directly to the fibers during the fiber manufacturing process 43. When rGO is deposited on the reinforcements, the composite part/structure becomes electrically conductive due to the formation of a network of meso-scale rGO nanoparticles 30. When subjected to external stimuli, such as fluid pressure or mechanical forces, the conductive path is disrupted and the overall electrical resistance/conductivity of the part/structure is altered. This change in resistance/conductivity is measured and correlated with the external stimuli. The overall resistance of the conductive network formed by rGO can be divided into three types: (i) the intrinsic resistance of rGO, (ii) the contact resistance, and (iii) the tunneling/hopping resistance.
This can be expressed using the following equation 29:

$R_{total} = R_i + R_c + R_t$

where $R_i$ is the intrinsic resistance, $R_c$ is the contact resistance and $R_t$ is the tunneling resistance. The key requirement for these sensors is the ability to detect any small change in the overall resistance ($\Delta R$). The signal is normally expressed as a relative or Fractional Change in Resistance (FCR) rather than as an absolute measurement. The measured value is taken relative to a reference value ($R_0$) and normalized by the same, given as

$FCR = \Delta R / R_0 = (R - R_0) / R_0$

where $R$ is the measured value and $R_0$ is the reference value. The coated rGO can make the fabric material "digitally responsive" by generating signals which can be measured using any data acquisition (DAQ) system. The physical changes happening during manufacturing can easily be monitored, such as the compaction response of the reinforcement, mold-clamping forces, resin pressure distribution, flow front tracking and resin cure kinetics, which were traditionally collected using external sensors and actuators that were not part of the material itself 44. For process monitoring, the changes in electrical resistance can be expressed in terms of a gain factor, which is a measure of the percentage change in the initial resistance of the structure. Apart from the signal, different parameters also need to be archived, such as sensor calibration and coating parameters 45. A large number of quantifiable parameters can be recorded at the coating stage, such as the concentration of the coating solution, sonication parameters (time, temperature and frequency), the number of coating layers, and the rGO reduction time and temperature. These parameters affect the final resistance value, and hence the sensitivity of the rGO-based sensors 29. The gathered data can be stored and analyzed for designing molds, selecting optimum injection gates and vents, measuring reinforcement permeability and predicting resin curing 46-48.
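As a concrete illustration of the FCR and gain-factor definitions above, the following Python sketch converts a stream of raw resistance readings into FCR values. The reference resistance and the readings are illustrative placeholder numbers, not measured data.

```python
import numpy as np

def fractional_change_in_resistance(r, r0):
    """FCR = (R - R0) / R0, the relative change from the reference R0."""
    return (np.asarray(r, dtype=float) - r0) / r0

# Illustrative readings (ohms) from an rGO-coated fabric sensor during
# compaction; real values depend on coating parameters and the DAQ setup.
r0 = 1.2e4
readings = [1.20e4, 1.26e4, 1.35e4, 1.41e4]

fcr = fractional_change_in_resistance(readings, r0)
gain_percent = 100.0 * fcr  # gain factor: percentage change from R0
print(gain_percent)         # [ 0.    5.   12.5  17.5]
```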
Data generated by reduced graphene oxide strain sensors
While in operation, the rGO sensors generate signals that correspond to various physical phenomena, depending on the environment the smart material is exposed to. In a typical composites manufacturing process, such as a Liquid Composite Molding (LCM) process, there are three main stages, i.e. compaction of the dry reinforcement, resin injection and resin curing, as shown in Fig. 2. All three stages are prone to process variability and need to be monitored using strain and pressure sensors. In the reported literature, rGO embedded fabric sensors have been employed for monitoring LCM processes 44,45, which are among the commonly used out-of-autoclave composite manufacturing processes. The rGO coated fabric based sensors are now being used in a variety of geometric forms (point sensors, line sensors, or area sensors) and configurations for monitoring applications 45. It is also desirable that the concept of embedded sensors be applied to other composites manufacturing processes, such as filament winding and pultrusion for the civil and construction industries. The embedded rGO-based sensors provide useful data at each stage throughout the manufacturing cycle, with vital information extracted related to the void content and structural health of the manufactured structure.

Data generation during manufacturing. The first step in the manufacturing of composites via Liquid Composite Molding (LCM) is the preforming step, in which the dry reinforcements are subjected to transverse compaction so that they conform to the mold shape and achieve the target part thickness and fiber volume content. The compaction stage varies depending on the type of LCM technique used. Resin Transfer Molding (RTM) is a closed-mold process where rigid mold platens apply high compaction forces on the reinforcements using a press, whereas in Vacuum Assisted RTM (VARTM), a vacuum force is applied to compress the vacuum bag against the reinforcement laid on a single-sided mold. During the compaction phase, uneven compression within a mold may result in thickness variations, particularly in the case of VARTM. In both cases, the applied compaction forces determine the fiber volume fraction of the composite, which in turn determines the quality of the final part and the mechanical properties of the composite.

Figure 2. Data generated during the life cycle of a smart composite component, during and after manufacturing. Fabric compressibility is quantified by the applied stress required to achieve the target fiber volume fraction. The evolution of reinforcement permeability and flow characteristics are the important characteristics during the resin infusion, followed by the cure kinetics of the resin. The distribution of stress within a structure is crucial for monitoring its health and for adopting prognostic measures. All aspects are monitored using in-situ coated fabrics.

The rGO-based embedded sensors have been used to monitor compaction forces acting on the reinforcements in both VARTM and RTM processes. The rGO-based sensors are able to detect the compaction forces of dry and resin impregnated reinforcements in the form of a resistance change. During this stage, the mold clamping forces and stress relaxation data are required, which are usually determined proactively through characterization experiments 49. This information is now being obtained in-situ via sensors based on 2D materials 50. Recently, Ali et al. 50 demonstrated that even a very complex time-dependent phenomenon such as stress relaxation of the reinforcements in a closed mold can be monitored using rGO and MXene based embedded sensors. During resin injection, the pressure distribution within the mold changes rapidly. This phenomenon is generally monitored using point sensors drilled within the mold 51-55. rGO coated fabrics can act as an attractive alternative to these sensor arrays 44. The resistance change data generated from the coated fabrics depends on the conductivity and dielectric properties of the resin system used 44. The conductivity/resistivity of graphene nanoparticles plays an important role when resin impregnates the coated fibers. The gradual change in pressure inside the mold is also an indicator of resin impregnation, captured via the change in resistance of the embedded sensors. Moreover, race tracking and dry spot formation within the part can be detected by comparing signals from sensors placed at different spatial positions within the preform 45. The interaction of the resin with the sensors can provide information about the distribution of resin within the mold. It is also possible to make 2D plots of the resin infusion process by spatial mapping during impregnation 56,57. This requires a virtual array of sensors and a multiplexing system in combination with a Source Measure Unit (SMU) or similar resistance measuring unit.
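A minimal sketch of such a scanned sensor array is given below. The `read_resistance` function is a hypothetical stand-in for the multiplexer/SMU readout, which the article does not specify; thresholding the resulting FCR map gives a coarse flow-front image at each time step.

```python
import numpy as np

def read_resistance(row, col):
    # Placeholder for selecting a multiplexer channel and reading the SMU;
    # here we simply simulate a value so the sketch runs end to end.
    return 1.2e4 * (1.0 + 0.05 * np.random.rand())

def scan_sensor_grid(n_rows, n_cols, r0):
    """Build a 2D map of fractional change in resistance over the preform."""
    fcr_map = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            r = read_resistance(i, j)
            fcr_map[i, j] = (r - r0[i, j]) / r0[i, j]
    return fcr_map

r0 = np.full((4, 6), 1.2e4)                 # per-cell reference resistances
wetted = scan_sensor_grid(4, 6, r0) > 0.03  # cells likely reached by resin
print(wetted.astype(int))
```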
The resistance of the embedded sensors is sensitive to gelation and crosslinking, as the resin undergoes shrinkage during these stages and applies compaction forces on the graphene nanoparticles, thus resulting in a change in the overall electrical resistance 56. Various stages of curing, including initial gelation, hardening (where resin shrinkage takes place) and post-cure, are detected by monitoring relative changes in the electrical resistance of the sensors, as described by Khan et al. 45.

Data generation during post-manufacturing. Composite structures are frequently subjected to a number of loading scenarios in multiple applications throughout their service life. Depending on the type of application, these loads can range from high-velocity to low-velocity impacts producing large deflections 58,59. Any structural health monitoring system consists of sensing elements, preferably embedded within the structure and connected to a signal processing unit with diagnostic algorithms, and a data management resource 30. Sensors coated with carbon-based nanomaterials have shown great potential in recent years for sensing applications in composite structures. Compared to carbon nanotubes, rGO and graphene flake sensors stand out in sensing applications due to their higher aspect ratio and cost-effectiveness 60. rGO embedded composite structures can be used to sense strain and damage during their lifetime. The mechanism of piezo-resistive sensing in FRPCs depends on whether rGO is coated on the fabrics or mixed within the resin. When rGO is present in the matrix, an irreversible increase of the electrical resistance 61 can be detected due to the initiation of cracks in the matrix and delamination of fabric layers. Alternatively, in cases where rGO is coated directly on the reinforcements, the conductive networks are confined to the fiber surface; hence, detecting matrix cracks becomes relatively difficult. Nonetheless, the enormous amount of data that is generated can be used for preventive measures on composite structures before any catastrophic failure happens 62. A lot of work has been reported on Structural Health Monitoring (SHM), where composites were tested in different modes including tensile, compression, bending, impact, creep and stress relaxation 63. A comprehensive literature review on the subject shows that a number of studies have reported the successful application of rGO-coated fabric sensors for monitoring the flexural response of composite structures. It is quite interesting to note that apart from precise strain sensing capability under flexural loading, these smart sensors can also exhibit a distinct response for tensile and compressive loads if placed above and below the neutral surface in flexural loading 41,64. A number of researchers have pushed these rGO-coated sensors one step further to investigate their feasibility for sensing repetitive long-term loading in composite structures. Remarkable repeatability in the piezoresistive response has been reported in both flexural and tensile cyclic loading for as many as 3000 loading cycles 65,66.
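The cyclic repeatability reported above can be quantified with a simple per-cycle peak analysis. The sketch below uses a synthetic FCR trace, not experimental data, and reports the drift of the peak response between the first and last cycles; real traces would come from the DAQ system described earlier.

```python
import numpy as np

samples_per_cycle = 100
n_cycles = 3000

# Synthetic piezoresistive trace: one FCR peak per flexural loading cycle.
t = np.arange(n_cycles * samples_per_cycle)
fcr = 0.05 * np.abs(np.sin(np.pi * t / samples_per_cycle))

# Peak FCR for each cycle, and the relative drift across all cycles.
peaks = fcr.reshape(n_cycles, samples_per_cycle).max(axis=1)
drift = (peaks[-1] - peaks[0]) / peaks[0]
print(f"peak FCR drift over {n_cycles} cycles: {100 * drift:.2f}%")
```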
It is also worth mentioning that graphene nanoparticle based fiber sensors have also been adopted in complex composite structures for successful in-situ SHM. In fact, these smart sensors were yet again capable of reporting distinctive responses to compression and tension loading based on their placement above and below the neutral surface 67. Interestingly, a couple of studies recently further extended the use of rGO-coated sensors in the form of smart composite face sheets in honeycomb sandwich structures for in-situ SHM. Smart aerospace sandwich structures were not only sensitive to span length and core thickness 68, but also exhibited distinctive responses to multiple loading rates in beams of any arbitrary width of interest 69. Considering that sandwich composites based on Nomex™ honeycomb cores are an integral part of modern aerostructures, these recent findings show remarkable potential in terms of the sensing capabilities of active rGO-coated piezoresistive sensors in the aerospace industry. Significant progress has been made thus far in terms of sensing the conventional mechanical response in composite structures. However, it is critically important to note that the inherent viscoelastic nature of the polymer resin and fiber reinforcements makes their mechanical response time-dependent; hence, the piezoresistive response of these smart sensors also becomes a function of time 70. Therefore, it is crucial to investigate the long-term creep and viscoelastic stress relaxation response using rGO-based smart sensors. Despite the importance of such response in the long-term application of composite structures, this area of research has not been explored properly yet. Irfan et al. 65 conducted the first study of this kind based on rGO-based smart sensors, investigating the effect of temperature on the mechanical performance of composites using dynamic mechanical analysis. The results were also compared with the response of MXene-coated fabric sensors under similar dynamic mechanical analysis using temperature sweep experiments. The results were quite promising, as the sensors were not only capable of detecting the thermomechanical response, but also detected the glass transition phenomenon and the transition from the glassy to the rubbery region. In fact, rGO-based sensors exhibited a smoother response compared to MXene-based sensors. Therefore, rGO-based sensors have shown great potential for self-sensing applications in multiple industrial applications of composite structures. Nonetheless, self-sensing smart composite structures can be regarded as an emerging field, with a number of limitations and challenges remaining for researchers working in this field. Before their implementation on an industrial scale, a number of areas need rigorous research. Some of these areas may include: (i) the scalability of these sensors; (ii) calibration; (iii) the effect of other external stimuli, such as environmental factors; (iv) comparison with well-established conventional sensors for these applications, such as piezoelectric (PZT) and Fiber Bragg Grating (FBG) sensors; and (v) making these sensors smart enough to convey signals directly to portable devices such as mobile phones.

The meta-verse of composites manufacturing
Given that rGO-based sensors have great potential to be used in an industrial environment, their integration with the cyber world is still a challenge and not much work has been done. In this section, we present a roadmap of Industry 4.0 technologies and describe how these technologies can use the data generated through these sensors (as described in the preceding sections) to create smart factories.
A smart factory is a self-adapting and highly automated manufacturing environment capable of autonomously running entire production processes and making data-driven decisions 71,72. Such a manufacturing setup has the ability to self-optimize performance and improve efficiency, flexibility and quality control by self-adapting to new conditions through learning in real or near-real time 73. It integrates digital and physical systems through an interconnected network of machines, communication mechanisms and computing power, and uses advanced technologies such as block-chain, artificial intelligence and machine learning to gather and analyze data 74,75. This integration is achieved through a network of sensors and actuators enabling a physical system to access the capabilities of the virtual space or the "meta-verse" 75,76. Data gathered by rGO-based sensors can be used for conducting virtual experiments and decision making in a smart factory. The rGO-coated sensors can feed digital information from the physical space to the digital space, such as rGO mixing ratios, mold clamping forces and pressure distribution in the mold. This digital information comes in various formats (numeric data, images, time-dependent data, etc.). The role of the digital space or "meta-verse" is to collect this data securely, interpret the data and generate actionable commands. These actionable commands could be a decision tree that enables or disables resin feeding lines based on the information gathered from the mold using rGO-based sensors, as in the sketch below.
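A toy version of such an actionable command follows: when a mold zone's rGO sensor reports an FCR above an assumed wetting threshold, the controller issues a command to close that zone's feed line. The function names and the threshold are hypothetical; the article does not define a control API.

```python
WETTED_FCR = 0.10  # assumed FCR level indicating resin arrival in a zone

def update_feed_lines(zone_fcr, feed_open):
    """Return close commands for zones whose sensors report resin arrival.

    zone_fcr: dict mapping zone id -> latest FCR reading
    feed_open: dict mapping zone id -> whether its feed line is open
    """
    commands = []
    for zone, fcr in zone_fcr.items():
        if feed_open.get(zone) and fcr >= WETTED_FCR:
            feed_open[zone] = False
            commands.append(("close_feed_line", zone))
    return commands

print(update_feed_lines({"A": 0.02, "B": 0.14}, {"A": True, "B": True}))
# -> [('close_feed_line', 'B')]
```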
Creating digital material twins using data from rGO sensors. The concept of the "meta-verse" is very broad, and its key components are virtual/augmented reality using digital twins, artificial intelligence, block-chain, IoT, etc. The Digital Twin (DT) is one of the core components of Industry 4.0; it is a virtual replica or digital prototype of the physical process, fully integrated with the physical system and capable of performing virtual simulations in real time 77-79. Virtual simulation is a key aspect of DT that requires continuous iterations between physical and virtual entities 80,81. These simulations include physics-based computational approaches (FEM/CFD) 82-87 as well as data-driven stochastic simulations 88-90. The advantages of digital simulations over experimental procedures are evident in material consumption, labor hours and overall cost reduction. Apart from these advantages, these simulations can be used to generate datasets for training and creating machine learning models. Although such simulations cannot be performed in real time, machine learning models based on the synthetic data can be useful 91. The capabilities of the digital twin are sometimes enhanced with Virtual and Augmented Reality technologies 92 that enable human-machine interactions 93,94. For example, Perez et al. 95 presented and validated a VR-enhanced DT for designing the automated process of a multi-robot manufacturing setup as well as its enhanced implementation and in-operando monitoring. Digital twins are implemented at different yet interlinked levels 13. In the context of composite structures, these levels include the design, manufacturing/assembly and in-service/operation phases 80. At the design level, the DT is also known as a "digital material twin" (DMT), which refers to realistic computational models of the composite material that can be used for design verification and predicting the mechanical properties of the final composite, as well as for estimating process parameters such as the compaction response and resin flow properties within the reinforcing fibers 96-98. These parameters are well captured by rGO coated fabrics (as described in previous sections) and this information can be stored and used to create "near to reality" DMTs. Moreover, rGO coated fabric sensors can also be used for the experimental validation of DT simulations. Digital material twins for virtual manufacturing can be generated from different 3D scanning techniques, such as X-ray computed tomography (XCT) 99-103. During production, the DT is implemented at the shop-floor level for effective process monitoring, control and optimization 16,104,105. Seon et al. created a DT for optimizing the de-bulking process of autoclave composites for mitigating void formation 106. Zambal et al. 107 generated a DT for the detection of defects during carbon fiber layup using data collected from various sensors along with analytical modeling and finite element simulations. Finally, in the operational phase, the DT is used for prognostics and diagnostics activities 108, for example, for stiffened composite panels, by estimating the load acting on the structure using strain data acquired from Fiber Bragg Grating (FBG) sensors. Sisson et al. 111 pursued a digital twin approach to optimize rotorcraft flight parameters by minimizing stress on critical mechanical components and through probabilistic diagnosis, prognosis and optimization. Using the data collected from strain sensors, it is not only possible to detect the presence of damage but also the evolution of the damage; hence, the remaining useful life of the part can also be predicted 109. The knowledge about the health of structures and parts helps in taking pre-emptive measures such as part replacement, repairing the damage, arresting cracks, etc.

AI assisted digital manufacturing using data from rGO sensors. Artificial Intelligence (AI) generally refers to machines that are designed to perform tasks that typically require human-like intelligence, such as perception, reasoning and decision making 112-114. Inherently, AI systems consist of data-driven mathematical models for inference and solving problems autonomously 114. AI encompasses the sub-fields of machine and deep learning, computer vision, natural language processing and cognitive computing, each of which focuses on different aspects of AI technology. Artificial intelligence and 2D materials are two disruptive technologies that are intertwined 115-117. On one hand, 2D materials could be an enabler for constructing devices for AI, such as memristors, photodetectors, etc. 118-122. On the other hand, AI tools such as machine and deep learning can not only accelerate the discovery, design and optimization of 2D materials 123-126, but can also interpret the signals generated by sensors based on 2D materials. Here, since we are discussing graphene as a potential sensor, we will restrict our discussion to AI tools for signal processing. The role of AI techniques in digital manufacturing using rGO sensors can be primarily viewed as that of a signal processing tool.
Monitoring the manufacturing process usually involves detecting anomalies and measuring physical quantities such as pressure, temperature etc., which can be easily captured using rGO sensors. The real-time processing of signals with very low computational power makes these tools very attractive 127,128 . The signals measured by rGO-based sensors would normally be in the form of resistance/voltage/current measurements. These signals need to be converted to physical parameters such as pressure, stress, strain, temperature etc. through different calibration and correlation models 61,[129][130][131][132] . Such calibration models can be easily developed using machine learning tools 17,50,133 . Zhu et al. 17 employed a machine learning tool (principle component analysis) to predict the concentration of hydrogen gas from the measured response of rGO based gas sensor. Ali et al. 50 calibrated MXene coated glass fabric sensors using supervised machine learning algorithms to correlate the compressive stress with the measured signal. Hajizadegan et al. 133 extracted the concentration levels of the bio-chemical dopants from the harmonic spectrum of graphene-based harmonic sensors using artificial neural networks (ANN). Other than the calibration models, AI tools can be easily employed for detection, inspection and monitoring tasks 134 . These tasks may include detection of resin race-tracking in molds 135 , flow disturbances 136 , and unfilled zones formation 137 during the filling stage of an LCM process as well as inspection of broken-filaments during fiber production 138 . Novel AI-based methods for the inspection of the Automated Fiber Placement (AFP) process have also been presented by several researchers [139][140][141][142][143] . As part of health monitoring of structures, machine/deep learning models have been used for defect/damage detection [144][145][146][147][148][149][150] , characterization of cracks/delamination [151][152][153] and classification of impact levels 154 . Yu et al. 154 demonstrated that probabilistic Bayesian and traditional artificial neural networks can successfully classify the energy levels of different impact events based on the signals obtained from a network of piezoelectric sensors. Deep learning tools are particularly capable of such tasks when the signal is in the form of 2D/3D fields and maps 56,57 . In such cases, these models are not only able to detect these defects, but also locate them 152 . Finally, the machine/deep learning-based surrogate/predictive models can be used for process simulations [155][156][157] as well as for failure predictions in diagnostic and prognostic maintenance [158][159][160] . Using the data provided by a set of pressure sensors, Zhu et al. 161 implemented a neural network model for the prediction of flow-front patterns at any impregnation time. Similar predictive models were also presented for forecasting resin cure 162 and flow front progression 163 . Stieber et al. presented neural network based models FlowFrontNet 164 and PermeabilityNets 165 for the prediction of dry spot formation and permeability maps from a sequence of flow front images respectively. Pratim et al. 166 presented an ANN framework to predict the life (durability) and residual strength (damage tolerance) of fiber-reinforced polymer (FRP) composites from real-time acquired dielectric permittivity of the material. Hassan et al. 
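In the same spirit as these calibration studies, the sketch below fits a simple polynomial calibration curve mapping FCR to compressive stress. The training pairs are synthetic placeholders for real characterization data, and a polynomial is only one of many possible model choices.

```python
import numpy as np

# Synthetic (FCR, stress) calibration pairs; real data would come from
# characterization experiments on the coated fabric sensor.
fcr_train = np.array([0.00, 0.04, 0.09, 0.15, 0.22])
stress_kpa = np.array([0.0, 20.0, 45.0, 76.0, 110.0])

coeffs = np.polyfit(fcr_train, stress_kpa, deg=2)  # quadratic calibration
calibrate = np.poly1d(coeffs)

# Real-time use: convert an incoming FCR reading into an estimated stress.
print(f"estimated stress at FCR = 0.12: {calibrate(0.12):.1f} kPa")
```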
Hassan et al. 167 used genetic algorithms for failure prediction in self-sensing nanocomposites based on conductivity changes observed via electrical impedance tomography. Other than calibration models, AI tools can be readily employed for detection, inspection and monitoring tasks 134. These tasks may include the detection of resin race-tracking in molds 135, flow disturbances 136, and the formation of unfilled zones 137 during the filling stage of an LCM process, as well as the inspection of broken filaments during fiber production 138. Novel AI-based methods for the inspection of the Automated Fiber Placement (AFP) process have also been presented by several researchers 139-143. As part of the health monitoring of structures, machine/deep learning models have been used for defect/damage detection 144-150, characterization of cracks/delamination 151-153 and classification of impact levels 154. Yu et al. 154 demonstrated that probabilistic Bayesian and traditional artificial neural networks can successfully classify the energy levels of different impact events based on the signals obtained from a network of piezoelectric sensors. Deep learning tools are particularly capable of such tasks when the signal is in the form of 2D/3D fields and maps 56,57. In such cases, these models are not only able to detect defects, but also to locate them 152. Finally, machine/deep learning-based surrogate/predictive models can be used for process simulations 155-157 as well as for failure predictions in diagnostic and prognostic maintenance 158-160. Using the data provided by a set of pressure sensors, Zhu et al. 161 implemented a neural network model for the prediction of flow-front patterns at any impregnation time. Similar predictive models were also presented for forecasting resin cure 162 and flow front progression 163. Stieber et al. presented the neural network based models FlowFrontNet 164 and PermeabilityNets 165 for the prediction of dry spot formation and of permeability maps from a sequence of flow front images, respectively. Pratim et al. 166 presented an ANN framework to predict the life (durability) and residual strength (damage tolerance) of fiber-reinforced polymer (FRP) composites from the real-time acquired dielectric permittivity of the material. In summary, these tools can be integrated within the digital manufacturing setup as calibration, detection and predictive models, as summarized in Fig. 3. Moreover, these models can be periodically re-trained as new data becomes available, without losing the old weights, hence truly updating the whole manufacturing process. Some of the models discussed here used data generated from traditional sensors or synthetic data rather than data collected by piezoresistive rGO sensors. However, the methods discussed here can easily be adapted for analyzing data obtained via rGO sensors.

Block-chain technology based on rGO sensor data. While AI tools can analyze the data collected through rGO-based sensors efficiently, block-chain technology can collect and manage the data in a secure, trustworthy and traceable manner 168,169. By definition, a block-chain is a growing list of immutable records, called blocks, which are linked together using cryptography and stored on a decentralized network of computers or nodes in chronological order 170. Block-chain technology employs self-executing pieces of code, known as smart contracts, to automate processes in a reliable and trustworthy way 171. Currently, this technology is being exploited extensively by the financial and banking, healthcare and supply chain sectors 172,173. When using rGO as a sensing element for the manufacturing of fiber reinforced polymer composites, data is generated at various stages, including physical properties as well as process parameters. These stages form the multi-echelon supply chain that comprises the raw materials, the manufacturing process and the finished components/structures 173,174. The nature and format of the data vary depending on the processing step, and include numeric values, time/temperature dependent curves, two/three dimensional fields, and even subjective descriptions. All the data generated at each step can be collected and stored in an efficient and secure manner using block-chain. A conceptual illustration of the use of block-chain in collecting and storing the generated data is given in Fig. 4.

Figure 4. Data collection at various stages of composites manufacturing using block-chain technology. The data generated at the various manufacturing stages, including the data sheets of the raw materials and in-service signals, can be gathered in an efficient and secure manner by using block-chain technology.

Apart from the data directly collected from rGO sensors, the data related to the physical characteristics of the reinforcement and matrix, as well as data generated from physical and virtual experiments, is also crucial for efficient processing. The physical characteristics of the reinforcement and matrix are usually provided by the supplier (first block in Fig. 4). These properties are then validated, and new characteristics determined, via characterization experiments and virtual simulations using digital twins (second block in Fig. 4). The shape of the part to be produced, in the form of a 3D geometry, is another important piece of data. Mold designs and other process parameters depend on the type of manufacturing method used. In the case of LCM, the process parameters include the number and location of inlet/outlet ports, the injection pressure, etc. For processing prepregs, the cure cycle and temperature are the main parameters. The rGO coated materials can play a vital role in in-situ data acquisition during the process. The inspection of finished parts will produce further data related to the quality of the part, such as porosity maps, void content and tolerance levels 175. Finally, while in service, the smart structure based on rGO sensors will generate signals related to its structural health, which can be managed in the maintenance log-book on the block-chain ledger 176. Apart from this direct involvement, block-chain can also help in creating DTs 169,177 and can work in conjunction with artificial intelligence for an overall impact 178. Nevertheless, block-chain technology is a secure, large-scale and reliable data collection and management tool for implementing smart operations using networks of sensory elements 179, including rGO based sensors.
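The following toy sketch illustrates the hash-chained record keeping described above: each manufacturing stage appends a block holding its data, and altering any earlier block invalidates the chain. It is a single-process illustration only, not a distributed ledger or smart-contract platform, and all field names are hypothetical.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Append-only record: the hash covers the data and the previous hash."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"stage": "raw material", "fiber": "glass fabric"}, "0")]
chain.append(make_block({"stage": "infusion", "peak_fcr": 0.17},
                        chain[-1]["hash"]))
chain.append(make_block({"stage": "inspection", "void_content": "ok"},
                        chain[-1]["hash"]))

# Tampering with any earlier block changes its hash and breaks the links
# that follow, which is what makes the stored process history auditable.
print(all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:])))
```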
Concluding remarks and outlook
There are numerous challenges and opportunities for the technological application and market penetration of graphene nanoparticles as a digital material in various real-world applications. It is vital to consider these challenges prior to the large-scale commercialization of graphene as a sensing element in fiber reinforced polymer composites, and to make such sensors compatible with the standards of Industry 4.0. The material selection process is of paramount importance, as there are several 2D materials now available, and the chosen 2D material will affect not only the processing steps but also the final sensing properties of the product. The economy of scale is also a factor when choosing a 2D material. Atomistic modeling can be a tool to narrow down the material selection for a particular application. This becomes very important when multifunctional composites are involved. Engineered 2D materials such as MXenes can be designed to obtain optimized properties, and atomistic modeling can also help in making hybrids of two or more materials. The synthesis of good quality materials is also a challenge, especially if the processes are not well defined in the literature and in practice. It must be decided whether in-house synthesis is required or off-the-shelf options may work for an application. Adding graphene and other 2D materials into the process chain is the next challenge. There are numerous ways in which graphene can be incorporated into composites, for example, mixing it in the resin system, coating it on the reinforcements, weaving a coated tow into the reinforcement fabric, or coating the final composite with the graphene solution. There is no single solution; the user needs to decide which method is optimal for the target application. Sensor manufacturing is another closely related challenge. It is also important to decide on the size, number and spatial location of the sensors in a structure. Embedding a sensor in a complex 3D shape while maintaining its sensing properties can be a difficult task. It is also important to keep the manufacturing process in mind, as different processes require different approaches to embed a sensor. Whichever technique is used to embed graphene nanoparticles into the composite, it is important to quantify the capability of the composite to sense any physical changes. Sensor calibration is a major challenge in this field, especially when inter-lab sensors are involved. There is no standardization of these sensors yet; however, for a commercial application, a standardization protocol is desirable. The property retention of sensors over time is also a critical factor. Environmental factors such as temperature and humidity might affect the sensing capability over time. This is also important in commercial sensors such as FBG sensors, for which routine inspection is performed to ensure their working in real-world applications such as bridges.
In the same way, graphene nanoparticle based sensors should have a provision for inspection over time. Meanwhile, in lab environments, accelerated tests can be performed to quantify the property retention. A large-scale production system is essential for the commercialization of graphene nanoparticles as a viable digital material. As mentioned earlier, various commercial vendors are available for the provision of graphene materials; however, the application of graphene in different fields poses unique challenges. Graphene and other 2D materials are viable nanomaterials to be used as smart sensors in fiber reinforced composites. They can provide process and structural health monitoring at every stage of composites manufacturing and application. In addition, these materials can also enhance other base properties of the neat composite, including mechanical properties and EMI shielding. The entities in the meta-verse are far more mature than 2D materials. The digital space has seen tremendous advancements in computational capabilities, including cloud computing, big data analytics, IoT and artificial intelligence (AI). However, their integration with sensors based on 2D materials has not yet been achieved, and even the compatibility of the various digital tools is not yet clear. One of the key characteristics of block-chain technology is publicly available information; however, most of the information in a manufacturing environment is of a proprietary nature. In this regard, consortium or federated block-chains can be used, where the information is restricted to a target audience only. The AI tools are data driven and require carefully curated data sets for training. Such data is scarce at the moment but is expected to grow with time. Lastly, the concept of a digital twin based on graphene nanoparticle sensors is also still in its conceptual phase. The growth of all these technologies together can bring in the true essence of Industry 4.0. There is no doubt that there are rich opportunities for the application of graphene and other 2D materials in this area. It is high time that academia and the composites industries, including the aerospace and automotive sectors, work together to solve the challenges in the field and aim for the wide-scale adoption of graphene as a digital material, to reap the benefits of this wondrous material.

Data availability
All data generated or analyzed during this study are included in this published article.
Biosynthesis of high density lipoprotein by chicken liver: intracellular transport and proteolytic processing of nascent apolipoprotein A-I

To study the in vivo processing and secretion of apolipoprotein A-I (Apo A-I), young chickens were administered individual L-[3H]amino acids intravenously and the time of intracellular transport of nascent Apo A-I from the rough endoplasmic reticulum (RER) to the Golgi apparatus was measured. Within 3 to 9 min there was maximal incorporation of radioactivity into Apo A-I in both the RER and the Golgi cell fractions. By contrast, the majority of radioactive albumin was also present in the RER by 3 to 9 min, but did not reach peak amounts in the Golgi fraction until 9 to 25 min. Both radioactive Apo A-I and albumin appeared in the blood at about the same time (between 20 and 30 min). NH2-terminal amino acid sequence analysis of nascent intracellular Apo A-I showed that it contains a pro-hexapeptide extension identical to that of human Apo A-I. After 30 min of administration of radioactive amino acids, radioactive Apo A-I was isolated by immunoprecipitation from the liver and serum. NH2-terminal sequence analysis of 20 amino acids indicated that chicken liver contained an equal mixture of nascent pro-Apo A-I and fully processed Apo A-I, whereas the serum contained only processed Apo A-I. Further studies showed that the RER contained only pro-Apo A-I, whereas a mixture of pro-Apo A-I and processed Apo A-I was found in the Golgi complex. These results indicate that, in chicken hepatocytes, there is a more rapid transport of Apo A-I than of albumin from the RER to the Golgi cell fractions, and that Apo A-I remains in the Golgi apparatus for a longer period of time before it is secreted into the blood. In addition, these studies show that the in vivo proteolytic processing of chicken pro-Apo A-I to Apo A-I occurs in the Golgi cell fractions.

Earlier sequence studies of the primary translation product of Apo A-I indicated that the first 18 amino acids resemble a "signal" prepeptide and that the following hexapeptide is akin to the propeptides seen in several secretory polypeptide hormones. However, the propeptide contains carboxy-terminal gln-gln residues, and the prosegment is not cleaved intracellularly either by human liver and intestine in organ culture, by perfused rat intestine and liver, or by rat hepatocytes and human hepatoma cells in culture (12, 13, 16, 38, 39, 41). Nascent apolipoproteins are complexed intracellularly with lipids prior to secretion into the blood. The sites of synthesis of the apoprotein, lipid (1, 14), and carbohydrate (34) moieties of lipoproteins have been studied, but there is little information on the nature of the nascent lipid-protein complexes in the various organelles and on the elapsed time of transfer of nascent Apo A-I from its site of synthesis in the rough endoplasmic reticulum (RER) to the Golgi apparatus. Recently we showed that newly synthesized Apo A-I, even though it is present within the RER and the smooth endoplasmic reticulum, failed to float between densities 1.063-1.21 g/ml, whereas the Apo A-I which is present within the Golgi complex is capable of floating at a buoyant density similar to that of plasma HDL (3). Using radioactive glycerol, we have further shown that although glycerol is quickly incorporated into lipids of the endoplasmic reticulum and the Golgi cell fractions, the nascent lipids are mostly conjugated with Apo A-I in the Golgi apparatus (4). This indicates that most of the lipid-protein assembly occurs late in the secretory process, probably in the Golgi apparatus.
In this report we describe the in vivo proteolytic processing of Apo A-I, by measuring the NH2-terminal amino acid residues of nascent chicken apoprotein A-I in the RER, Golgi apparatus, and plasma; and we show that, compared with albumin, Apo A-I has a distinctively rapid transport time into the Golgi apparatus.

MATERIALS AND METHODS

Rabbit immunoglobulin monospecific to chicken serum albumin was purchased from Cappel Laboratories, Cochranville, PA, and young leghorn chickens (5-10-d old) were obtained from Spafas Poultry Farms, Norwich, CT.

Preparation of Apo A-I and Development of Antibodies: Blood was collected from the jugular vein of adult roosters and allowed to clot. The serum was separated from the clot and adjusted to 1 mM EDTA, pH 7.4, and 154 mM NaCl. HDL was floated from the adjusted serum between densities 1.063-1.21 g/ml, dialyzed against 0.15 mM NaCl, 0.01 M sodium phosphate (pH 7.4) at 4°C, and delipidated as previously described (3). Rabbit antiserum to HDL apoproteins was prepared and tested for its specificity as previously described (4). This antiserum was used to isolate nascent Apo A-I from the liver and serum.

In Vivo Incorporation of Radioactive Amino Acid into Apo A-I: Nascent Apo A-I was radiolabeled in vivo by injecting individual radioactive amino acids into the jugular veins of young chickens. At various times (from 3 to 60 min), the livers were removed and blood was collected. The livers were rinsed thoroughly and homogenized in phosphate-buffered saline (10 mM sodium phosphate buffer, pH 7.4, 154 mM NaCl) containing 100 µg/ml PMSF, 100 U/ml Trasylol, 1 mM benzamidine, and 1 mM TPCK. When total intracellular Apo A-I was isolated, the homogenate was disrupted by the addition of 0.5% sodium deoxycholate and Triton X-100, and a soluble supernatant fraction was obtained by centrifuging at 105,000 g for 60 min. The Apo A-I or albumin present in this fraction was obtained by immunoprecipitation. To obtain serum Apo A-I, the blood samples were allowed to clot in the presence of 100 µg/ml PMSF, 100 U/ml Trasylol, 1 mM benzamidine, and 1 mM TPCK for several hours at room temperature. The serum was adjusted to 10 mM sodium phosphate, pH 7.4, 154 mM NaCl, 0.5% sodium deoxycholate, 0.5% Triton X-100, and secreted nascent Apo A-I was isolated by immunoprecipitation.

Preparation of RER and Golgi Cell Fractions: Livers were homogenized in 0.25 M sucrose, filtered through a layer of cheesecloth, and a postmitochondrial supernatant was removed by centrifuging the homogenate at 16,000 g for 10 min. A total microsomal fraction was obtained by centrifuging the postmitochondrial supernatant fraction at 105,000 g for 90 min. The RER and Golgi cell fractions were further separated and characterized as described earlier (4).

Isolation of Radioactive Nascent Apo A-I and Albumin from the RER and Golgi Complex Fractions: The cell fractions were treated with 0.5% sodium deoxycholate, 0.5% Triton X-100 in phosphate-buffered saline (10 mM sodium phosphate, pH 7.4, 154 mM NaCl, 100 µg/ml PMSF, 100 U/ml Trasylol, 1 mM benzamidine, and 1 mM TPCK), and a detergent-soluble fraction was obtained by centrifugation at 105,000 g for 60 min. The radioactive nascent Apo A-I and albumin present in these fractions were recovered by immunoprecipitation and, in some experiments, also by SDS PAGE.
Immunoprecipitation of Albumin and Apo A-I: Three equal aliquots, each containing 0.5-g tissue equivalent of detergent-soluble fractions prepared from either liver homogenate or isolated organelles, or 0.1 ml of serum prepared as described above, were taken. Each sample received one of the following: (a) 75 µl of antiserum to rooster serum Apo A-I (1 ml antiserum precipitated ~0.15 mg of Apo A-I); (b) 50 µl of antiserum to chicken serum albumin (1 ml antiserum precipitated 0.3 mg of serum albumin); or (c) 75 µl of rabbit preimmune serum. All samples were incubated at 37°C for 60 min, followed by 48 h at 4°C with gentle shaking. The antigen-antibody precipitates were collected by centrifugation and washed several times with 0.02 M sodium phosphate buffer, pH 7.4, containing 154 mM NaCl, 100 µg/ml PMSF, 1 mM benzamidine, 1 mM TPCK, and 100 U/ml Trasylol. The samples treated with preimmune rabbit serum did not contain a precipitate. The immune complex was either (a) suspended in 6 M urea, 1% SDS, and 2% β-mercaptoethanol in 0.01 M Tris-phosphoric acid, pH 6.7, and heated in a boiling water bath for 2 min prior to separation of Apo A-I and/or albumin on SDS PAGE, or (b) dissolved in 200-400 µl aldehyde-free 10% acetic acid. The first method was used when measuring the incorporation of radioactive leucine into Apo A-I and albumin and the rate of transport of these proteins from the RER to the Golgi apparatus, whereas the second method was applied when radioactive Apo A-I was prepared for amino acid sequencing.

NH2-terminal Sequence Analysis of Intracellular and Secreted Chicken Apo A-I: Samples of immunopurified Apo A-I that were labeled in vivo by individual radioactive amino acids were mixed with 200 mg polybrene (Beckman Instruments, Inc., Palo Alto, CA) and 50 nmol sperm whale apomyoglobin (Beckman Instruments, Inc.) and subjected to 20 cycles of automated sequential Edman degradation using a 0.1 M Quadrol buffer program in a Beckman 890D Sequencer (Beckman Instruments, Inc.). The phenylthiohydantoin-derivatized amino acids obtained at each cycle were dried under N2 and their radioactivity measured in the presence of 10 ml of Betafluor (National Diagnostics, Inc., Somerville, NJ) using a Packard liquid scintillation counter. The yield in each cycle was obtained by identifying and calculating the amount of amino acids recovered when standard apomyoglobin was used. The derivatized amino acids were separated by high pressure liquid chromatography using a C18 Zorbax ODS column (DuPont Co., Wilmington, DE).

Other Methods: SDS PAGE was performed in 10% polyacrylamide slab gels containing 0.1% SDS using the gel and buffer system described by Laemmli (26). In some experiments, the radioactive proteins were detected by cutting the gels into 0.5-mm slices and determining the radioactivity in the excised pieces. The slices were digested at 70-75°C in 0.4 ml 30% hydrogen peroxide and 0.2 ml of 60% perchloric acid for 2 to 4 h. The samples were cooled at 4°C and counted in the presence of 10 ml of Scintiverse II (Fisher Scientific Co., Fair Lawn, NJ) in a liquid scintillation spectrometer. The amino acid composition of Apo A-I was determined in a Beckman amino acid analyzer 6300 using ninhydrin reaction program No. 1 (Beckman Instruments, Inc.).

RESULTS

Time Course of Albumin and Apo A-I Secretion
These initial experiments were designed to establish and compare the time required for secretion of nascent albumin and Apo A-I. Both radioactive proteins appeared in the blood at about the same time (Fig. 1).
In the first 15 min following the administration of L-[3H]leucine, there was very little radioactive albumin or Apo A-I in the blood. After this initial 15-min period, both nascent albumin and Apo A-I entered the blood and continued to accumulate until 30 min. Radioactive albumin reached a maximal level in the blood by 45 min and radioactive Apo A-I by 30 min.

Time of Intracellular Transport of Albumin and Apo A-I from RER to the Golgi Cell Fractions
To determine whether nascent Apo A-I and albumin are transported at the same rate from the site of synthesis on the RER to the Golgi apparatus, young chickens were administered L-[3H]leucine intravenously; at different times the livers were removed and fractionated into RER and Golgi cell fractions, and the time taken for radioactive albumin and Apo A-I to appear in these cell fractions was measured. At given times, the amounts of both albumin and Apo A-I were measured in the same animal. Pulse-labeled albumin reached maximal amounts in the RER between 3 and 9 min after the administration of L-[3H]leucine and between 9 and 25 min in the Golgi apparatus (Fig. 2A). By contrast, maximal Apo A-I radioactivity occurred at the same time (from 3 to 15 min) in both the RER and the Golgi cell fractions (Fig. 2B). There is an indication, at 3 min, that Apo A-I first enters the RER, but it also appears very rapidly in the Golgi cell fraction. The amounts of radioactive Apo A-I present in the RER and the Golgi cell fraction at 15 min suggest that Apo A-I is cleared from the RER before it is emptied from Golgi vesicles. These experiments indicate that although albumin and Apo A-I are secreted into the blood at similar rates (Fig. 1), nascent Apo A-I undergoes a much more rapid transport from the RER to the Golgi apparatus than does albumin.

Amino Acid Composition and NH2-Terminal Sequence of Serum Apo A-I Obtained from Young Chickens
Because the NH2-terminal amino acid sequence of serum Apo A-I obtained from young chickens has been reported to be different from that obtained from hens (21, 32), we isolated Apo A-I from young chicken serum HDL, determined its amino acid composition and the amino acid sequence of 20 NH2-terminal residues, and compared it to that of Apo A-I obtained from hen, rooster, and human serums. Apo A-I from the serum of young chickens contained 248 amino acid residues per mol, and the overall amino acid composition was very similar to that of hen (21), rooster (25), and human (2) Apo A-I, with a few exceptions (Table I). There are only six residues of serine, as compared to 13 for hen, 10 for rooster, and 14 for human Apo A-I. Young chicken serum Apo A-I also appeared to contain slightly higher amounts of glycine, which may be due to the fact that in the final stage of purification of Apo A-I by SDS PAGE, a glycine buffer was used. Twenty cycles of automated Edman degradation of young chicken serum Apo A-I revealed the amino acid sequence shown in Table II. This amino-terminal sequence is identical to that previously reported by Shackelford and Lebherz (32) for chickens and differs from that of Apo A-I from hen and humans. The reported NH2-terminal sequence for hen serum Apo A-I is D.E.P.Q.P.E.L (21), and for human serum Apo A-I it is D.E.P.P.Q.S.P. (6).

Amino-Terminal Sequence of Nascent Intracellular and Secreted Apo A-I
Aspartic acid and proline occur at the amino-terminal portion of both chicken and human Apo A-I, and we suspected that the pro-segment of chicken Apo A-I may be similar to that of the pro-segment of human Apo A-I.
Therefore, in order to determine whether or not the pro-segment of chicken Apo A-I is homologous to that of human Apo A-I and whether it is processed intracellularly, we administered to young birds, individually in separate experiments, the radioactive form of each of the amino acids present in the pro-segment of human Apo A-I, and of aspartic acid and proline. After 30 min, the blood and livers were collected and the radioactive Apo A-I present within the liver and that secreted into the blood was obtained by immunoprecipitation. The immunoprecipitates were subjected to twenty cycles of automated Edman degradation and the radioactivity in each cycle determined. The radioactive profile obtained in each cycle for the various radioactive amino acids used is shown in Fig. 3. Intracellular radioactive Apo A-I contained two amino-terminal residues, arginine and aspartic acid, suggesting that it is a mixture of processed Apo A-I (containing NH2-terminal aspartic acid) and of another form of Apo A-I, with NH2-terminal arginine (Fig. 3A). Having determined the sequence of 20 NH2-terminal amino acids of serum (processed) Apo A-I (Table II), any additional amino acids found in these positions should be due to the presence of pro-Apo A-I. Therefore the occurrence of arginine in cycles 1, 15, and 17, of histidine in cycle 2, of phenylalanine in cycle 3, of tryptophan in cycle 4, of glutamine in cycles 5, 6, and 10, of aspartic acid in cycles 7, 14, and 18, and of proline in cycles 9 and 12 indicates the following pro-hexapeptide extension: R.H.F.W.Q.Q. The occurrence of arginine in cycles 9 and 11, of glutamine in cycle 4, of aspartic acid in cycles 1, 8, 12, and 15, and of proline in cycles 3 and 6 identifies the other radioactive protein as processed Apo A-I, since this assignment is identical to that obtained when serum Apo A-I was sequenced (Table II). NH2-terminal sequence analysis of the secreted radioactive protein obtained from blood showed radioactive arginine in cycles 9 and 10, glutamine in cycle 4, aspartic acid in cycles 1, 8, 12, and 15, and proline in cycles 3 and 6 (Fig. 3B). This indicates that nascent Apo A-I in serum has the same NH2-terminal sequence as processed serum Apo A-I (Table II) and that the processing is complete by the 30-min time point. A summary of the NH2-terminal sequences of intracellular and secreted Apo A-I is presented in Fig. 4.

Cellular Sites of Processing of Pro-Apo A-I
To determine whether the processing of radioactive pro-Apo A-I occurs within the endoplasmic reticulum or the Golgi apparatus, young chickens were administered L-[2,3-3H]proline (present in NH2-terminal residues 3 and 6 of processed Apo A-I) or L-[G-3H]glutamine (present in residues 5 and 6 of the pro-segment and in residue 4 of processed Apo A-I), and after 10 to 15 min the livers were fractionated into RER and Golgi cell fractions. Radioactive Apo A-I was isolated by immunoprecipitation from detergent-soluble RER and Golgi cell fractions and subjected to 20 cycles of automated amino acid sequence analysis. The results are given in Fig. 5. In the RER, glutamine radioactivity was found in cycles 5, 6, and 10 (Fig. 5A, upper panel), whereas in the Golgi cell fractions radioactive glutamine was present in cycles 5, 6, and 10 and also in cycle 4 (Fig. 5B, upper panel). Proline radioactivity was found in cycles 9 and 12 for Apo A-I isolated from the RER (Fig. 5A, lower panel) and in cycles 3, 6, 9, and 12 for Apo A-I obtained from the Golgi apparatus (Fig. 5B, lower panel).
This indicates that nascent Apo A-I in the RER is present only in the pro-Apo A-I form, whereas the Golgi cell fractions contain a mixture of pro-Apo A-I and processed Apo A-I.

Table III legend: Individual radioactive amino acids were injected intravenously, and at either 10 to 15 min (RER and Golgi apparatus) or 30 min (total intracellular and serum) nascent Apo A-I was isolated from the various cell fractions by immunoprecipitation and subjected to automated amino acid sequencing. The percent of pro-Apo A-I and Apo A-I was calculated from the yield obtained from the internal apomyoglobin standard and from the amount of radioactivity in each cycle when informative radiolabeled amino acids were used (Figs. 3 and 5). * Calculations based on the averages from four different radioactive amino acids. * Based on two experiments with different radioactive amino acids.

From the radioactivity in cycles which are distinctive for either pro-Apo A-I or for fully processed Apo A-I, and taking into account the recovery of derivatized amino acids (when using sperm whale apomyoglobin), the percent of pro-Apo A-I and processed Apo A-I in each sample was calculated. Intracellularly, in the total liver, 56.5% of pro-Apo A-I is processed to Apo A-I, and a similar ratio (49% pro-Apo A-I; 51% Apo A-I) was obtained in the Golgi cell fractions. By contrast, nascent Apo A-I in the RER is 100% in the pro-Apo A-I form; and in the serum, 30 min after the administration of radioactive amino acid, 100% of the nascent Apo A-I is fully processed (Table III).
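The following short Python sketch illustrates the arithmetic behind such a quantitation under simplifying assumptions: radioactivity in sequencer cycles unique to each form, corrected for cycle-dependent losses with an assumed constant repetitive yield, gives the fraction of each form. All numbers are illustrative, not the paper's data.

```python
def corrected(cpm, cycle, repetitive_yield=0.94):
    """Correct raw cpm for sequencer losses accumulated up to this cycle."""
    return cpm / (repetitive_yield ** cycle)

# [3H]proline example: cycle 9 reports only the pro form (mature residue 3
# shifted by the hexapeptide), while cycle 3 reports only the processed form.
pro = corrected(1500, 9)
processed = corrected(1400, 3)

total = pro + processed
print(f"pro-Apo A-I: {100 * pro / total:.0f}%  "
      f"processed Apo A-I: {100 * processed / total:.0f}%")
```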
DISCUSSION

Since 1971, when Morgan and Peters studied the in vivo secretion of albumin and transferrin, it has been known that hepatic proteins travel through the intracellular secretory pathway at different rates (31). This variation in intracellular transit time is reflected in different secretion times for hepatic plasma proteins (27). The translocation of secretory proteins from the site of synthesis on the RER to the Golgi apparatus is the step in which most variation occurs, and this has led to the suggestion that receptors in the endoplasmic reticulum membrane select and regulate the transport of proteins from the endoplasmic reticulum to the Golgi apparatus (29). Thus it is not surprising that nascent Apo A-I travels to the Golgi apparatus at a different rate than albumin. What is of interest, however, is the quickness with which Apo A-I enters the Golgi fractions and the length of time it is retained within Golgi apparatus-derived vesicles prior to secretion into the blood. Studies which have measured the rate of entry of nascent secretory hepatic proteins into the Golgi apparatus have shown that albumin usually leads, and other proteins such as α1-antichymotrypsin and transferrin follow at a slower rate (29). The secretion of Apo A-I differs in that its transfer from the RER to the Golgi cell fraction is so rapid that it is difficult, by in vivo pulse-labeling methods, to measure its rate. Peak amounts of Apo A-I radioactivity are noticed in the RER and the Golgi cell fractions at the same time (between 3 and 15 min), and there is only a slight indication, at the earliest time point (3 min), that Apo A-I has entered the RER prior to the Golgi cell fraction (Fig. 2). By contrast, in the same animal, the stepwise progression of nascent albumin from the RER to the Golgi apparatus is clearly apparent. If albumin and Apo A-I are synthesized in similar locations on the RER and we accept the hypothesis that small carrier vesicles bud from the endoplasmic reticulum and carry the nascent proteins to the Golgi apparatus (29), then our results imply that vesicles carrying nascent Apo A-I are immediately formed and transported to the Golgi region. Alternatively, the polysomes synthesizing Apo A-I may be located in specialized regions of the cytoplasm, in close juxtaposition to the Golgi apparatus, thus allowing faster translocation. Membrane-attached polysomes have been noticed close to the Golgi apparatus (10). Another possibility is that, at the early times following pulse labeling, Apo A-I is not present in the Golgi apparatus but in some other cellular sub-fraction that co-fractionates with the Golgi cell fractions. This is unlikely, since the Golgi cell fractions (described in references 3 and 4) are devoid of RER vesicles. If the pulse-labeled Apo A-I is present in a vesicle which is not derived from the Golgi apparatus, it would have to be a specialized smooth endoplasmic reticulum compartment which does not contain nascent albumin, such as the postulated carrier vesicles which travel from the endoplasmic reticulum to the Golgi apparatus. Our previous studies showed that nascent Apo A-I in the RER contains very little lipid and that most of the assembly of Apo A-I into lipoproteins occurs in the Golgi cell fraction (4). What determines when and how lipid-protein conjugation occurs is not known, but a mechanism must exist in the endoplasmic reticulum to protect nascent Apo A-I from binding to, or from being inserted permanently into, existing lipid particles. There may be a need, therefore, for Apo A-I to be quickly segregated within the endoplasmic reticulum, and the haste with which it is transported to the Golgi apparatus may serve to position it in the locale at which proper lipid-protein interactions may occur. The rapidity with which nascent Apo A-I enters the Golgi cell fraction is reminiscent of the rate at which pulse-labeled total membrane proteins enter this fraction (30). Studies with rats and humans, using both liver and intestine, with either perfused tissues, organs, or cell culture, have shown that pro-Apo A-I is secreted into the blood and is then later processed, by the removal of an NH2-terminal hexapeptide, to Apo A-I (9, 12, 13, 15, 38, 41). In young chickens the secretion of Apo A-I is clearly different. Pro-Apo A-I is the only form present in the RER, a near equal mixture of pro-Apo A-I and Apo A-I is found in the Golgi cell fraction, and 100% of the newly secreted apolipoprotein present in the serum is processed Apo A-I. These results may be interpreted in several ways. Young chickens may begin to process pro-Apo A-I to Apo A-I in the Golgi cell fraction, with processing completed prior to secretion; in this case only processed Apo A-I is secreted into the blood. Another possibility is that, since there is a mixture of pro-Apo A-I and Apo A-I in the Golgi fraction, this mixture is secreted and pro-Apo A-I is then immediately converted by an extracellular enzyme to Apo A-I. We do not detect, however, at 30 min of secretion, any pro-Apo A-I in the circulating blood. A third possibility is that there is no processing intracellularly and that blood enzymes present in the liver during homogenization may account for the presence of intracellular processed Apo A-I.
It is unlikely, however, that these processing enzymes would penetrate the Golgi vesicles and not enter the vesicles derived from the RER. However, we cannot rule out the possibility that pro-Apo A-I is sequestered in the RER differently than in the Golgi apparatus or is in a conformation unfavorable to processing. A fourth possibility is that processing occurs extracellularly and that nascent Apo A-I is rapidly endocytosed by the liver. The method used in this study for preparation of Golgi cell fractions has been shown to contain endosomes (24), and these endosomes could provide the 51% radioactive processed Apo A-I found in the Golgi cell fraction. This latter possibility is considered unlikely since the amount of processing in the Golgi cell fraction was measured between 10 and 15 min after the administration of radioactive amino acids, and at that time there was little or no radioactive Apo A-I secreted into the blood (see Fig. 1). The most likely possibility is that processing begins in the Golgi cell fraction and continues to occur in the terminal events of secretion. A metalloenzyme which converts human pro-Apo A-I to Apo A-I has been detected in plasma HDL, in mesenteric lymph, and in lymph chylomicrons (9). The source and exact nature of this extracellular enzyme have not been determined, and we do not know whether or not a similar enzyme is present in chickens. The facts that the processing enzyme has been detected in lipoprotein particles and that, in chicken, processing occurs intracellularly in a Golgi cell fraction suggest that the processing enzyme could be a part of nascent HDL. The processing enzyme may be inserted into HDL during assembly, and processing may occur at any subsequent stage of HDL secretion. This would be different from the processing of other secretory pro-proteins, whose points of cleavage are usually marked by arginine residues. In these latter cases the processing enzyme is thought to reside in the Golgi membrane (36). Paired glutamine residues at the site of proteolytic processing are not unique to pro-Apo A-I. The initial translation product of tropoelastin b mRNA contains a 24 amino acid NH2-terminal peptide which, like the prosegment of Apo A-I, contains a gln-gln dipeptide at its carboxy-terminus (23). Tropoelastin b, a major component of connective tissue, is an extracellular protein. It is not known, however, whether the paired glutamine residues mark the cleavage site of a signal or a pro-sequence. If the paired glutamine residues mark the end of the signal sequence, then it could be assumed that cleavage occurs co-translationally. If, however, tropoelastin b also contains a prosegment, which is marked by the two glutamine residues, then cleavage may be either intracellular, as in chicken Apo A-I, or extracellular, as in rat and human Apo A-I. Our studies show that the prosegments of chicken and human Apo A-I are both six amino acids in length with identical sequences, including a pair of glutamines at the carboxy-terminal end. In addition, the NH2-terminal portions of processed Apo A-I are similar, both commencing with asp-glu-pro... residues. Yet the pro-Apo A-I in human and rat, which presumably also travels through the Golgi apparatus, is not cleaved intracellularly to Apo A-I, whereas that of chicken is processed. That pro-Apo A-I may be cleaved intracellularly in one species and not in another, and yet both have identical prosegments, is puzzling.
The difference in processing between human, rat, and chicken may be a species difference, may be due to a difference in the mode of action of the processing enzyme, or may be due to the fact that only in chicken has processing been measured in vivo, while in the human and rat organ and cell culture methods have been used.
Encryption Quality Analysis and Security Evaluation of CAST-128 Algorithm and its Modified Version using Digital Images

This paper demonstrates an analysis of the well-known block cipher CAST-128 and its modified version using the avalanche criterion and other tests, namely encryption quality, correlation coefficient, histogram analysis and key sensitivity tests.

I. INTRODUCTION

CAST-128 [1], [2], [3] is a design procedure for a symmetric encryption algorithm developed by Carlisle Adams and Stafford Tavares. CAST has a classical Feistel network (Fig. 1) consisting of 16 rounds and operating on 64-bit blocks of plaintext to produce 64-bit blocks of ciphertext. The key size varies from 40 bits to 128 bits in 8-bit increments. The function F includes the use of four S-boxes, each of size 8 × 32, the left circular rotation operation and four operation functions that vary depending on the round number. We label these operation functions as f1_i, f2_i, f3_i and f4_i (Fig. 2). We use I to refer to the intermediate 32-bit value after the left circular rotation function, and the labels I_a, I_b, I_c and I_d to refer to the 4 bytes of I, where I_a is the most significant and I_d is the least significant. With these conventions, function F is defined in terms of the four S-box lookups combined by the per-round operation functions. In [7] we have shown that this modification leads to a 20% improvement in the execution time of function F. Now we will show that the above modification to the function does not violate the security of the algorithm when compared to that of the original algorithm. For this, we will make use of the avalanche effect, encryption quality, key sensitivity test and statistical analysis.

II. AVALANCHE EFFECT

We have used the avalanche effect [1], [2] to show that the modified algorithm also possesses diffusion characteristics as good as those of the original algorithm. We have taken 60000 pairs of plaintexts, with each pair differing only by one bit. We have encrypted them first by using the original algorithm and then by using the modified one. For both algorithms the same key (K1) is used, which is selected arbitrarily. We have counted the number of times the original algorithm gives better avalanche, the number of times the modified algorithm gives better avalanche, and the number of times both algorithms give the same avalanche. Tabulation of the results observed by changing one bit of plaintext in the samples for rounds 2, 4, 6, 8, 10, 12, 14 and 16 of the original and modified algorithms is shown in Table I. We have carried out similar tests by changing one bit in the key and using a set of 60000 plaintext samples. First we encrypted these plaintexts with a key using both algorithms. Then, just by changing the key by one randomly chosen bit, the same set of plaintexts was encrypted using both algorithms. We have observed the change in the number of bits. The results are tabulated in Table II for different rounds. From the results, we can observe that both algorithms possess good avalanche properties.

III. ENCRYPTION QUALITY ANALYSIS

The quality of image encryption [6], [11] may be determined as follows: let F and F′ denote the original image (plainimage) and the encrypted image (cipherimage) respectively, each of size M*N pixels with L grey levels. The above results show that the modification done to the function does not degrade the quality of encryption.
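As a compact illustration of the avalanche measurement described in Section II, the sketch below (our own, not the paper's code) counts how many ciphertext bits change when one plaintext bit is flipped; the `encrypt` parameter is a placeholder for the 64-bit block cipher under test (original or modified CAST-128), which is not implemented here.

import random

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_trial(encrypt, key: bytes, block: bytes) -> int:
    """Flip one random plaintext bit and count changed ciphertext bits.
    encrypt(key, block) stands in for the block cipher being evaluated."""
    bit = random.randrange(len(block) * 8)
    flipped = bytearray(block)
    flipped[bit // 8] ^= 1 << (bit % 8)
    return hamming(encrypt(key, block), encrypt(key, bytes(flipped)))

# Over many trials (the paper uses 60000 plaintext pairs), an average near
# 32 changed bits out of 64 indicates good diffusion:
# diffs = [avalanche_trial(cast128, k1, random.randbytes(8)) for _ in range(60000)]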
IV. KEY SENSITIVITY TEST

We have conducted the key sensitivity test [6], [11] on the image Cart.bmp for the original and modified CAST-128 algorithms using the following 128-bit keys K1 and K2, where K2 is obtained by complementing one randomly selected bit of the 128 bits of K1. The hexadecimal digits of K1 and K2 which contain this difference bit are shown in bold case. K1 = ADF278565E262AD1F5DEC94A0BF25B27 (Hex) K2 = ADF238565E262AD1F5DEC94A0BF25B27 (Hex) First the plainimage Cart.bmp (Fig. 3A) is encrypted with K1 using the original CAST-128 algorithm and then by using K2. These cipherimages are shown in Fig. 3B and 3C. Then we have counted the number of pixels that differ between the encrypted images. The result is that 99.610687% of the pixels of the image encrypted with the key K2 differ from those of the image encrypted with K1. The difference image shown in Fig. 3D confirms this. When we tried to decrypt the image encrypted with K1 using K2 (Fig. 3E), or vice versa (Fig. 3F), no original information was revealed. The above experiment was repeated for modified CAST-128: 99.602608% of pixels differ between the image encrypted with K1 (Fig. 4B) and the image encrypted with K2 (Fig. 4C). Fig. 4D shows the difference of the two images. When we tried to decrypt the images encrypted with K1 and K2 by using keys K2 and K1 respectively, decryption completely failed, as happened with original CAST-128, and the results are shown in Fig. 4E and 4F. The textures visible in the cipherimages of the above tests are an indication of the presence of large areas in the original image where pixel values rarely differ. It is a property of block ciphers that for a given input there will be a fixed ciphertext, which means that as long as a plaintext block repeats, the ciphertext block also repeats. This can be avoided by using one of the modes of operation other than ECB mode.

V. STATISTICAL ANALYSIS

This is shown by a test on the histograms [6], [11] of the enciphered images and on the correlations of adjacent pixels in the ciphered image.

A. Histograms of Encrypted Images

We have selected the Ape.bmp image as the plainimage for histogram analysis. We have encrypted this image first by using the original CAST-128 algorithm and then by using the modified CAST-128 algorithm. Then we have generated histograms for the plainimage and its encrypted images. Fig. 5 shows the histograms for the original image and its corresponding cipherimage obtained using the original CAST-128 algorithm. Fig. 6 shows the histogram for the cipherimage encrypted using the modified CAST-128 algorithm. From these figures we can see that the histogram of the encrypted images is fairly uniform and is significantly different from that of the original image. From the histogram we can also observe that for the plainimage there is a huge variation in the percentage of pixels with a certain grey-scale value, varying from 0 to 1%. For the cipherimages this percentage is almost constant: the number of pixels with a certain grey-scale value is almost the same, around 0.4% approximately. This is clearly shown in Fig. 5B, 5D and 6B.

B. Correlation of Two Adjacent Pixels

To determine the correlation between horizontally adjacent pixels [6], [11] in an image, the procedure is as follows: first, randomly select N pairs of horizontally adjacent pixels from the image. Then compute their correlation coefficient using the following formulae, where x and y represent grey-scale values of horizontally adjacent pixels in the image.
E(x) represents the mean of the x values, D(x) represents the variance of the x values, cov(x, y) represents the covariance of x and y, and r_xy represents the correlation coefficient. We have randomly selected 1200 pairs of two adjacent pixels from the plainimage Ape.bmp and the corresponding cipherimages encrypted using the original and modified algorithms. Then we have computed the correlation coefficient using the above equations. The correlation coefficient for the plainimage was found to be 0.874144. For the cipherimage encrypted using original CAST-128 it is 0.016693, and it is 0.012245 for the image encrypted using modified CAST-128. Figs. 7, 8 and 9 show the correlation distribution of two horizontally adjacent pixels for the plainimage Ape.bmp and the encrypted images obtained using the original and modified CAST-128 algorithms, respectively. Table V gives the correlation coefficients for the two bitmap images Ape and Cart and their encrypted images using the original and modified CAST-128 algorithms. The correlation coefficient values for the plainimages are much larger than those for the encrypted images in both cases. All the observations from the tests we conducted reveal that the modified algorithm is at least as strong as the original one.

VI. CONCLUSION

We have made an attempt to analyse the security of the original and modified versions of the CAST-128 algorithm. We have also tried to demonstrate that the modification made to the function does not violate the security and is at least as strong as the original algorithm. For this purpose, we have used the avalanche criterion, encryption quality, histogram analysis, key sensitivity test and correlation coefficient.
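To make the Section V.B procedure concrete, here is a small sketch (our own illustration in NumPy, not the authors' code) that samples horizontally adjacent pixel pairs and evaluates r_xy from the E(x), D(x) and cov(x, y) quantities defined above.

import numpy as np

def adjacent_pixel_correlation(img: np.ndarray, n_pairs: int = 1200) -> float:
    """Correlation coefficient of randomly chosen horizontally adjacent
    pixel pairs, using population mean/variance/covariance as in the paper."""
    rng = np.random.default_rng()
    rows = rng.integers(0, img.shape[0], n_pairs)
    cols = rng.integers(0, img.shape[1] - 1, n_pairs)
    x = img[rows, cols].astype(float)
    y = img[rows, cols + 1].astype(float)      # horizontal neighbour
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())

# A plainimage typically gives a value near 0.87 as reported above, while a
# well-encrypted cipherimage gives a value close to 0.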
Numerical Investigations on Hybrid Fuzzy Fractional Differential Equations by Improved Fractional Euler Method

In this paper, the improved Euler method is used for solving hybrid fuzzy fractional differential equations (HFFDE) of order q ∈ (0, 1) under Caputo-type fuzzy fractional derivatives. This method is based on the fractional Euler method and the generalized Taylor's formula. The accuracy and efficiency of the proposed method are demonstrated by solving numerical examples.

Introduction

The study of fractional differential equations (FDE) forms a suitable setting for the mathematical modelling of the real world in various fields, viz. physical and chemical processes. Several forms of FDE have been proposed for more accurate models, and there has been considerable interest in developing numerical methods. M. Mazandarani et al. proposed the modified fractional Euler method (in the fuzzy context) to solve fuzzy fractional differential equations [16, 11]. Hybrid fuzzy differential equations (HFDE) have been the focus of many studies due to their natural way of modelling dynamic systems with embedded uncertainty [12, 13, 14]. So far, numerical methods such as the Euler method [12] and the Runge-Kutta method [13] have been used to solve these equations. For instance, Pederson et al. [14] investigated hybrid fractional differential equations. The aim of this paper is to solve the HFFDE by the improved Euler method under Caputo-type fuzzy fractional derivatives. The paper is organized as follows. After a preliminary section, we study the Caputo-type fuzzy fractional derivatives. In the next section, we discuss HFFDE. Subsequently, we briefly describe the improved fractional Euler method. In the penultimate section, we present numerical examples to illustrate the theory. Finally, in the last section, we give concluding remarks.

Preliminaries

We denote by R_F the class of fuzzy subsets u : R → [0, 1] satisfying the following properties: (a) u is normal, that is, there exists x_0 ∈ R with u(x_0) = 1; (b) u is fuzzy convex; (c) u is upper semi-continuous on R; (d) cl{x ∈ R | u(x) > 0} is compact, where cl denotes the closure of a subset. Then the α-level set [u]^α is a non-empty compact interval for all 0 ≤ α ≤ 1 and any u ∈ R_F. The notation [u]^α = [u^α, ū^α] denotes explicitly the α-level set of u. We refer to u^α and ū^α as the lower and upper branches of u, respectively. For u ∈ R_F, we define the length of u by len(u) = ū − u. For u, v ∈ R_F and λ ∈ R, the sum u + v and the product λu are defined level-wise by [u + v]^α = [u]^α + [v]^α and [λu]^α = λ[u]^α, where [u]^α + [v]^α means the usual addition of two intervals (subsets) of R and λ[u]^α means the usual product between a scalar and a subset of R. The metric structure is given by the Hausdorff distance d : R_F × R_F → [0, ∞). It is easy to see that d is a metric on R_F.

Definition 2.2. Let F : I → R_F and fix t_0 ∈ (a, b). We say that F is (1)-differentiable at t_0 if there exists an element F′(t_0) ∈ R_F such that, for all h > 0 sufficiently near 0, the H-differences F(t_0 + h) ⊖ F(t_0) and F(t_0) ⊖ F(t_0 − h) exist and the limits (in the metric d)

lim_{h→0+} [F(t_0 + h) ⊖ F(t_0)] / h = lim_{h→0+} [F(t_0) ⊖ F(t_0 − h)] / h = F′(t_0).

We say that F is (2)-differentiable if, for all h > 0 sufficiently near 0, the H-differences F(t_0) ⊖ F(t_0 + h) and F(t_0 − h) ⊖ F(t_0) exist and the limits (in the metric d)

lim_{h→0+} [F(t_0) ⊖ F(t_0 + h)] / (−h) = lim_{h→0+} [F(t_0 − h) ⊖ F(t_0)] / (−h) = F′(t_0).

If t_0 is an end point of I, then we consider the corresponding one-sided derivative.

Theorem 2.2. Let F, G : I → R_F be integrable and λ ∈ R.
Then (1) ∫_I (F + G)(x) dx = ∫_I F(x) dx + ∫_I G(x) dx, and (2) ∫_I (λF)(x) dx = λ ∫_I F(x) dx.

Fuzzy fractional integral and derivative

The space of all continuous fuzzy number valued functions on I, the space of all absolutely continuous fuzzy number valued functions on I, and the space of all Lebesgue integrable fuzzy number valued functions on I are denoted by C_F(I), (AC)_F(I) and L_F(I), respectively. Throughout this paper, let β ∈ (0, 1).

Definition 3.1. [5, 16] Let f ∈ L_F(I). The Riemann-Liouville fractional integral of order β of the fuzzy number valued function f is defined as

(J^β f)(t) = (1 / Γ(β)) ∫_a^t (t − s)^{β−1} f(s) ds,

where Γ(β) is the well-known Gamma function.

Theorem 3.1. [11] Let f ∈ L_F(I). The Riemann-Liouville fractional integral of order β of the fuzzy number valued function f, based on its α-cut representation, can be expressed as [J^β f(t)]^α = [J^β f^α(t), J^β f̄^α(t)].

Definition 3.2. [16] If f ∈ AC(I), then the Riemann-Liouville fractional derivative of order β of the crisp function f exists almost everywhere on I and can be represented by

(D^β f)(t) = (1 / Γ(1 − β)) (d/dt) ∫_a^t (t − s)^{−β} f(s) ds.

Note that the Riemann-Liouville fractional derivative of order β of f is the first-order derivative of the fractional integral of order 1 − β of f.

Definition 3.3. [16] If f ∈ AC(I), then the Caputo fractional derivative of order β of the crisp function f exists almost everywhere on I and can be represented by

(C_a D^β f)(t) = (1 / Γ(1 − β)) ∫_a^t (t − s)^{−β} f′(s) ds.

Note that the Caputo fractional derivative of order β of f is the fractional integral of order 1 − β of the first-order derivative of f.

Definition 3.4. [16] Let f ∈ (AC)_F(I). If the fuzzy number valued function f is (1)-differentiable, then f is said to be Caputo differentiable in the first form, denoted by C_a D^β_{(1)} f.

Theorem 3.3 (Characterization theorem). Consider the fuzzy fractional initial value problem

C_a D^β x(t) = f(t, x(t)), x(a) = x_0 ∈ R_F, (3.1)

where f satisfies conditions (A1)-(A3); in particular, (A2): x^α and x̄^α are continuous and uniformly bounded on any bounded set. Then the FFIVP (3.1) and the system of fractional differential equations (FDEs) for the lower and upper branches are equivalent.

Proof. Assume the hypotheses (A1)-(A3) are satisfied. First fix ε > 0. Choose δ = ε/H and suppose ∥(t, x, y) − (t, x_1, y_1)∥ < δ. Then the required continuity estimate follows. Next we must show that f^α and f̄^α are uniformly bounded on any bounded set. Let S be any bounded set. Hence f^α is uniformly bounded on S. Similarly, f̄^α is uniformly bounded on any bounded set. Therefore, eqn. (3.1) and the system of FDEs are equivalent.

Generalized Taylor's formula under the Caputo-type fuzzy fractional derivative was introduced in [11].

4 The hybrid fuzzy fractional differential system

Consider the hybrid fuzzy differential system, where we assume that existence and uniqueness of solutions of the hybrid system hold on each [t_m, t_{m+1}]. Pederson and Sambandham [14] introduced hybrid terms in the fractional differential equations. We note that β ∈ (0, 1) and C_a D^β x_m represents some type of fractional differentiation (fixed for all m's). By the solution of Equation (4.3) we mean the piecewise function assembled from the solutions on the intervals [t_m, t_{m+1}]. We further assume (A5): f^α_m and f̄^α_m are equicontinuous and bounded on any bounded set; that is, for any ε > 0 there is a δ_m(ε) > 0 with the corresponding property. Then, (3.1) and the hybrid system of FDEs are equivalent.
Proof. Assume the hypotheses of Theorem 3.3. Suppose x(t) is a solution of eqn. (3.1). Fix m = 0, 1, 2, .... For a hybrid fuzzy fractional differential equation (4.3), we develop the improved fractional Euler method via an application of the method for fuzzy fractional differential equations in [11]. The HFFIVP (4.3) is equivalent to a system of fractional ordinary differential equations. The numerical method for (4.3) is the same for Caputo differentiability of either form. We assume that x is Caputo-differentiable in the form C_a D^β x(t). The initial value problem (4.3) is then equivalent to the corresponding integral equations (4.9). By substituting t = t_1 into eqn. (4.9) and approximating J^β F(t, x^α, x̄^α) and J^β G(t, x^α, x̄^α) by the modified trapezoidal rule with h = t_1 − t_0, we obtain the corrector step. Expanding x(t) about t_0 = 0 and neglecting the second-order term (involving h^{2β}), the formula for the fractional Euler method is

x_{j+1} = x_j + (h^β / Γ(β + 1)) f(t_j, x_j).

A system of points that approximates the solution x(t) is produced by the above recursive method. At each step, the fractional Euler method is used as a predictor, and the modified trapezoidal rule is used to make a correction to obtain the final value. The general formula for the improved fractional Euler algorithm combines this predictor with the modified trapezoidal corrector (4.12).

Theorem 4.2. [14] Consider the systems (4.8) and (4.12), for a fixed k ∈ Z+ and α ∈ [0, 1].

Proof. Fix k ∈ Z+ and α ∈ [0, 1]. Choose ε > 0. For each i = 0, 1, ..., k we will find a δ*_i > 0 such that h_i < δ*_i implies the required bound, where the h_i values are those allowed by a regular partition of the [t_i, t_{i+1}]'s. By convergence of the numerical method [15] over [t_k, t_{k+1}], we may assume δ*_k < 1; then h_k < 1. By numerical stability there exists a δ_k > 0 such that, if h_k < δ*_k and (4.15) holds, the estimate follows. By numerical stability there exists a δ_{k−1} > 0 such that, if h_{k−1} < δ*_{k−1} and (4.18) holds, the estimate follows. Likewise, by numerical stability there exists a δ_i > 0 such that, if h_i < δ*_i and (4.21) holds, the estimate follows. In particular, there exists a δ*_1 > 0 such that, if h_1 < δ*_1 and (4.21) holds with i = 1, the bound at t_1 follows. By convergence of the numerical method over [t_0, t_1], we may choose δ*_0 > 0 such that h_0 < δ*_0 implies |y^α(t_1; 1) − x^α_{0,N_0}(α)| < δ_1 and |ȳ^α(t_1; 1) − x̄^α_{0,N_0}(α)| < δ_1.

Numerically, Pederson and Sambandham [12, 13] solved some examples in the fuzzy context with integer order. To give a clear overview of our study and to illustrate the method discussed above, numerical examples are presented in this section.

Example 5.1. [12] Consider the following HFFIVP, where the HFFIVP (5.26) is equivalent to a system of HFFIVPs; f(t, x, λ_k(x(t_k))) is a continuous function of t, x and λ_k(x(t_k)), and the HFFIVP has a unique solution on each [t_k, t_{k+1}]. To numerically solve the HFFIVP (5.26) we use the improved fractional Euler method for hybrid fuzzy fractional equations on the system (5.26). The results are shown in Tables 1 and 2. Furthermore, the approximate solutions in the interval [0, 2] are illustrated in Fig. 1. The numerical results are shown in Tables 3 and 4. Furthermore, approximate values in the interval [0, 2] are illustrated in Fig. 2.
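To make the predictor-corrector idea concrete, the sketch below implements the closely related fractional Adams-Bashforth-Moulton scheme (fractional Euler-type predictor plus trapezoidal-type corrector with memory weights) for a crisp Caputo initial value problem. This is our own illustration, not the paper's code (the paper's computations were done in Matlab); the function name and example are assumptions, and the fuzzy version would apply the same recursion to the lower and upper α-branches on each hybrid interval [t_m, t_{m+1}].

from math import gamma

def frac_pred_corr(f, beta, x0, t_end, N):
    """Predictor-corrector for the crisp Caputo IVP D^beta x = f(t, x),
    x(0) = x0, with 0 < beta < 1, on [0, t_end] using N steps."""
    h = t_end / N
    t = [j * h for j in range(N + 1)]
    x = [x0]
    fv = [f(t[0], x0)]                     # history of f values
    for n in range(N):
        # Predictor: fractional rectangle-rule (Euler-type) step with memory.
        b = [h**beta / beta * ((n + 1 - j)**beta - (n - j)**beta)
             for j in range(n + 1)]
        xp = x0 + sum(bj * fj for bj, fj in zip(b, fv)) / gamma(beta)
        # Corrector: modified trapezoidal weights over the whole history.
        a = [(n**(beta + 1) - (n - beta) * (n + 1)**beta) if j == 0 else
             ((n - j + 2)**(beta + 1) + (n - j)**(beta + 1)
              - 2 * (n - j + 1)**(beta + 1))
             for j in range(n + 1)]
        xn = x0 + h**beta / gamma(beta + 2) * (
            f(t[n + 1], xp) + sum(aj * fj for aj, fj in zip(a, fv)))
        x.append(xn)
        fv.append(f(t[n + 1], xn))
    return t, x

# Example (illustrative): D^0.5 x = -x, x(0) = 1, on [0, 2].
# t, x = frac_pred_corr(lambda t, x: -x, 0.5, 1.0, 2.0, 100)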
Conclusion

In this paper, we utilized the improved fractional Euler method to solve hybrid fuzzy fractional differential equations of order q ∈ (0, 1). The fractional derivative is considered under the Caputo-type fuzzy fractional derivative based on strongly generalized fuzzy differentiability. Consistency and convergence of the numerical method are discussed. The solutions obtained using the suggested method show that this technique can solve the problem effectively. All numerical results were obtained using Matlab. Higher-order methods will be considered in our future work.

If there exists z ∈ R_F such that x = y + z, then z is called the H-difference of x and y, and it is denoted x ⊖ y. Throughout this paper, the sign ⊖ always stands for the H-difference, and we remark that x ⊖ y ≠ x + (−1)y in general. Usually we denote x + (−1)y by x − y. In the sequel, we fix I = [a, b], for a, b ∈ R.

Theorem 2.1. [11] Let F : [0, ∞) → R_F. Assume that F^α(x) and F̄^α(x) are Riemann-integrable on [a, b] for every b ≥ a, and assume that there are two positive functions M^α and M̄^α such that ∫_a^b F^α(x) dx ≤ M^α and ∫_a^b F̄^α(x) dx ≤ M̄^α for every b ≥ a. Then F(x) is improper fuzzy Riemann-integrable on [0, ∞) and the improper fuzzy Riemann integral is a fuzzy number. Furthermore, [∫_0^∞ F(x) dx]^α = [∫_0^∞ F^α(x) dx, ∫_0^∞ F̄^α(x) dx].

(4.9) Let g(t) be a crisp continuous function, (⌈β⌉)-times differentiable in the independent variable t over the interval of differentiation (integration) [0, b]. Let the interval [0, b] be subdivided into N subintervals [t_j, t_{j+1}] of step size h = b/N using the nodes t_j = jh for j = 0, 1, ..., N. Consider the corresponding Riemann-Liouville integral.

Figure 1: The approximate solution to the HFFIVP. Figure 2: The approximate solution to the HFFIVP.
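Throughout the paper, fuzzy data are manipulated through their lower and upper α-branches. As a minimal self-contained sketch (our own; the triangular shape and the numerical values are assumptions for illustration only), the α-cut of a triangular fuzzy number, a common choice for fuzzy initial values in such examples, can be computed as:

def alpha_cut_triangular(a: float, b: float, c: float, alpha: float):
    """Return the α-level interval [lower, upper] of the triangular
    fuzzy number with support [a, c] and peak at b, for 0 <= alpha <= 1."""
    return a + alpha * (b - a), c - alpha * (c - b)

# Example: a fuzzy initial value "about 1" modeled as (0.75, 1.0, 1.125);
# at alpha = 1 both branches coincide at the crisp value 1.0.
lo, hi = alpha_cut_triangular(0.75, 1.0, 1.125, 0.5)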
Recombinant Thrombopoietin Effectively Shortens the Time to Response and Increases Platelet Counts in Elderly Patients with Severe Immune Thrombocytopenia

Background: This study was conducted to investigate the short-term efficacy and safety of rhTPO for the management of severe ITP in the elderly as first-line treatment. Methods: A total of 54 elderly patients with severe ITP were studied, including 39 patients treated with a combination regimen of rhTPO plus standard treatment (glucocorticoid; rhTPO group) and 15 patients treated with glucocorticoid treatment alone (control group). The response rate, time to initial response, peak platelet counts, and time to peak platelet counts were compared, and clinical characteristics correlated with the efficacy of rhTPO were analyzed. The efficacy of rhTPO in the elderly is comparable to that in the non-elderly in terms of the OR, CR, time to initial response, and peak platelet counts. Results: There were no differences in the overall response (OR) and the complete response (CR) in the rhTPO group compared to the control group. The time to initial response in the rhTPO group was shorter than that in the control group (p = 0.032). In patients without intravenous immunoglobulin (IVIg) and platelet transfusion, the peak platelet counts in the rhTPO group were higher than those in the control group (p = 0.003). Conclusions: Standard glucocorticoid treatment plus rhTPO effectively shortens the time to response and increases platelet counts in the elderly with severe ITP.

Introduction

Primary immune thrombocytopenia (ITP) is an acquired autoimmune hemorrhagic disorder characterized by low platelet counts (PLT) and increased risks of bleeding [1,2]. The clinical manifestations of ITP range from asymptomatic to mild bruising, mucosal bleeding, and even intracranial hemorrhage. According to present ITP guidelines, corticosteroids remain the most important and effective frontline treatment of ITP, although accompanied by various side effects [3,4]. For life-threatening bleeding events, which are quite frequent in severe ITP, the major aim of management is to increase platelet counts to a safe level (usually defined as exceeding 30 × 10^9/L) as soon as possible [5]. High-dose intravenous immunoglobulin (IVIg) and platelet transfusion are widely recommended as emergency treatments to increase platelet counts rapidly [6]. Previous studies have established that antibody-mediated and/or T-cell-mediated platelet destruction plays a critical role in the pathogenesis of ITP [4,7-10]. Megakaryocytes are also strongly affected (e.g., impaired development ability, diminished function of platelet release) [11]. Thrombopoietin (TPO) is an endogenous growth factor that effectively promotes the development of megakaryocytes and stimulates platelet production. Noticeably, the level of TPO in patients with ITP is usually relatively low, suggesting that TPO could be a potential target for the management of ITP. At present, TPO receptor agonists (TPO-RAs), such as eltrombopag and romiplostim, have shown remarkable efficacy in increasing platelet counts in patients with chronic/persistent ITP [12-15] and are thereby widely adopted as second-line treatment around the world. However, the administration of TPO-RAs in patients with severe ITP is still limited, partly due to the long response time; it usually takes 2-4 weeks to increase the platelet count to 50 × 10^9/L in patients with ITP.
Recombinant human thrombopoietin (rhTPO) is a natural TPO analogue synthesized by genetic recombination technology and has been widely used for over 20 years in China. rhTPO is isolated and highly purified from Chinese hamster ovary (CHO) cells containing a gene for efficient expression of human thrombopoietin. As a glycosylated, full-length molecule identical to endogenous TPO, rhTPO acts in a manner similar to that of natural TPO and shows profound effects on megakaryocyte development and platelet production. Previous studies have suggested that rhTPO markedly increases platelet counts in patients with chemotherapy-induced thrombocytopenia [16] as well as in patients with myelodysplastic syndromes and aplastic anemia [17,18]. Wang et al. conducted a multicenter randomized controlled trial in 2012, showing that platelet counts in patients with chronic ITP increase rapidly after rhTPO treatment [19]. Other studies have also demonstrated that rhTPO with/without glucocorticoid could both rapidly increase platelet counts and improve the complete response rate in severe newly diagnosed ITP [20], ITP in pregnancy [14,21], and corticosteroid-resistant/relapsed ITP [22]. Similarly, it is worth mentioning that a prospective, randomized, controlled clinical trial conducted by Hou suggested that the combination of high-dose dexamethasone with rhTPO could remarkably improve the initial and sustained response, strongly supporting the efficacy of rhTPO as a potential frontline treatment for newly diagnosed adult ITP [23]. Although extensive research has been carried out on the efficacy and safety of rhTPO in adult ITP, there is no single study that focuses on elderly patients with severe ITP. While ITP has been traditionally considered a disorder predominantly affecting young or middle-aged women, an epidemiological study conducted by Moulis et al. illustrated that the incidence of ITP in the elderly (>60 years old) was approximately 5-9/100,000 person-years, noticeably higher than 2.94/100,000 person-years in the non-elderly [24]. Additionally, it has been previously observed that the risk of bleeding in elderly patients, particularly in those with severe ITP (PLT < 10 × 10^9/L), is significantly higher than that in non-elderly patients, even at equivalent platelet counts [24-26]. Whereas corticosteroids are effective in elderly patients with ITP, the features of the elderly (e.g., various severe comorbidities, high bleeding risk) limit the long-term application of corticosteroids and make the side effects unacceptable. There is an urgent demand among elderly patients with severe ITP for an alternative treatment that can increase platelet counts rapidly to reduce the risk of catastrophic bleeding events. However, there are no studies yet concentrating on the efficacy and safety of rhTPO in the elderly with severe ITP, and it is still uncertain whether the elderly respond to rhTPO as effectively as the non-elderly do. This retrospective study was designed to further clarify the efficacy of rhTPO as frontline treatment in elderly patients with severe ITP.

Patients

In total, 227 patients with ITP between March 2016 and March 2021 were screened at the Department of Hematology of the Zhongshan Hospital Qingpu Branch, the Department of Hematology of Jinshan Hospital, and the Department of Hematology of Minhang Hospital.
According to age, platelet counts, and clinical symptoms, 54 treatment-naive elderly patients (≥65 years) with severe ITP were enrolled in this study (Figure 1 shows the strategy of case selection). Of these 54 patients, 39 received standard frontline treatment (corticosteroids ± IVIg) plus rhTPO (rhTPO group) and 15 received standard frontline treatment alone (control group). All patients met the diagnostic criteria for severe ITP recommended by the updated international consensus report (PLT < 10 × 10^9/L and/or bleeding symptoms sufficient to mandate treatment) [1]. Bone marrow aspiration and examinations were applied to all patients to exclude secondary thrombocytopenia. Clinical data (including age, gender, comorbidities, bleeding symptoms, baseline platelet counts, megakaryocyte counts, lymphocyte subsets, quantitation of immunoglobulins, treatment regimen, platelet counts, and adverse events) were recorded for analysis. The bleeding manifestation was evaluated by an ITP bleeding scale recommended in a consensus of Chinese experts on the diagnosis and treatment of adult ITP by the Chinese Medical Association [5,20]. This scale showed strong assessment consistency and close correlation with the ITP-specific bleeding assessment tool (ITP-BAT), while requiring a less time-consuming calculation. The baseline clinical characteristics of patients are shown in Table 1.

Treatments

Patients in the control group received standard frontline treatment (corticosteroids ± IVIg). Corticosteroid regimens included high-dose dexamethasone and methylprednisolone. High-dose dexamethasone was administered to patients at a dose of 40 mg/day for 4 days.
Methylprednisolone was administered at a dose of 60~120 mg/day and then tapered off gradually. IVIg at a dose of 400 mg/kg/day for 5 days and/or platelet transfusion were prescribed to patients at the physician's discretion. Patients in the rhTPO group received standard frontline treatment plus rhTPO. rhTPO was subcutaneously injected into patients at a dose of 15,000 U/day for no longer than 14 days and stopped when platelet counts exceeded 100 × 10^9/L. The numbers of patients treated with IVIg and platelet transfusion are also shown in Table 1.

Outcome Evaluation

According to the criteria recommended by the international consensus [1], we defined response criteria based on the peak platelet counts in the 14 days from initial treatment. The short-term response criteria were as follows: (1) complete response (CR), peak PLT ≥ 100 × 10^9/L, without bleeding; (2) partial response (PR), peak PLT ≥ 30 × 10^9/L but < 100 × 10^9/L, without bleeding; and (3) no response (NR), peak PLT < 30 × 10^9/L, or bleeding after treatment. The overall response (OR) was defined as CR plus PR. The time to initial response was defined as the duration from the day of initial treatment to the day when the platelet counts first exceeded 30 × 10^9/L.

Statistical Analysis

GraphPad Prism 8.0.2 software was used for statistical analysis. Descriptive summaries of the data were performed in Excel (Microsoft Corp., Redmond, WA, USA). Normally distributed continuous variables are summarized as the mean ± SD, while non-normally distributed continuous variables are summarized as the median (first quartile, third quartile). Discrete variables are expressed as percentages. Quantitative and qualitative data were compared by the Mann-Whitney U and Fisher's exact tests, respectively. The Pearson correlation analysis method was used to analyze the correlation between two groups. Statistical significance was defined as p < 0.05 and high significance as p < 0.001.

Response Rate

Overall, no significant difference in the response rate between the rhTPO group and the control group was found. The overall response rate in the rhTPO group was 100% (39/39), slightly higher than the 93.3% (14/15) in the control group (p = 0.278). The complete response rates were 71.8% (28/39) in the rhTPO group and 73.3% (11/15) in the control group (p > 0.999). As high-dose IVIg and platelet transfusion can rapidly increase the platelet counts of patients with severe ITP, patients treated with IVIg and platelet transfusion were excluded and the rest were analyzed. The overall response rates in the rhTPO group (17 cases) and the control group (7 cases) were 100% (17/17) and 85.7% (6/7), respectively (p = 0.292). The complete response rate in the rhTPO group was remarkably higher than that in the control group (82.4% (14/17) vs. 42.9% (3/7)), although the difference did not reach statistical significance (p = 0.137), as shown in Table 2.

Time to Initial Response

The time to initial response was defined as the duration from the day of initial treatment to the day when platelet counts first exceeded 30 × 10^9/L. Within 14 days after initial treatment, the time to initial response in the rhTPO group was 5.0 (3.0, 6.0) days, significantly shorter than the 6.0 (4.0, 7.0) days in the control group (p = 0.032). Among patients not receiving IVIg and platelet transfusion, the figure for the rhTPO group was still shorter than that for the control group (4.0 (3.0, 6.0) days vs. 7.0 (4.0, 10.0) days, p = 0.041; Table 2).
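As an illustration of the comparisons described under Statistical Analysis, the following sketch applies SciPy's Mann-Whitney U test to a continuous endpoint and Fisher's exact test to a response table; the arrays and counts below are placeholders for illustration, not the study's patient-level data.

from scipy.stats import mannwhitneyu, fisher_exact

# Hypothetical times to initial response (days) for the two groups.
days_rhtpo = [5, 3, 6, 4, 5, 4, 6]
days_ctrl = [6, 4, 7, 7, 6, 8]
u_stat, p_quant = mannwhitneyu(days_rhtpo, days_ctrl, alternative="two-sided")

# 2x2 table of [responders, non-responders] per group (illustrative counts
# matching the overall response rates reported above: 39/39 vs. 14/15).
table = [[39, 0], [14, 1]]
odds_ratio, p_qual = fisher_exact(table)

print(p_quant, p_qual)  # compare each against the p < 0.05 threshold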
Peak Platelet Counts

The peak platelet counts in the rhTPO group were 141.0 (91.0, 253.0) × 10^9/L, higher than the 127.0 (84.0, 209.0) × 10^9/L in the control group, with no statistical significance (p = 0.276). However, after excluding patients receiving IVIg and platelet transfusion, the peak platelet counts in the rhTPO group were markedly higher than those in the control group (159.0 (114.5, 263.0) × 10^9/L vs. 84.0 (46.0, 104.0) × 10^9/L, p = 0.003). In terms of the time to peak platelet counts, there was no significant difference between the rhTPO (8.3 ± 2.9 days) and control (8.8 ± 1.7 days; p = 0.544) groups. Similarly, in patients not treated with IVIg and platelet transfusion, the times to peak platelet counts in the rhTPO and control groups were 7.8 ± 3.2 days and 8.7 ± 2.4 days, respectively, with no statistical significance (p = 0.487); see Table 2.

Factors Related to the Time to Initial Response

There was no significant correlation between age and the time to initial response (R² = 0.011, p = 0.529). Differences of gender also had no significant effect on the time to initial response (t = 1.708, p = 0.096). Other clinical characteristics collected in this study (e.g., the megakaryocyte counts of bone marrow, the total lymphocyte counts, and the level of immunoglobulin) did not show any correlation with the time to initial response either.

Adverse Events

Current treatments were well tolerated. Only one case (6.7%) of infection during hospitalization in the control group was observed, while there were no infection cases (0%) in the rhTPO group (p = 0.278). No catastrophic bleeding events were observed during hospitalization. While there were seven cases (18.0%) in the rhTPO group and one case (6.7%) in the control group where the peak platelet counts exceeded the upper normal limit (>300 × 10^9/L), no thromboembolic events were reported during hospitalization.

Efficacy Comparison with Non-Elderly Patients

A total of 35 rhTPO-treated non-elderly patients (<65 years) with severe ITP were selected from the same database. The screening criteria were the same as those for elderly patients, except for age. The baseline platelet counts were 3.0 (1.0, 8.0) × 10^9/L in the elderly and 5.0 (3.0, 7.0) × 10^9/L in the non-elderly, respectively (p = 0.213). The overall response rates in the elderly and non-elderly were both 100% (p > 0.999). The complete response rate in the elderly was 71.8% (28/39), slightly lower than the 82.9% (29/35) in the non-elderly, with no significant difference (p = 0.284). There was also no significant difference in the time to initial response between the two groups: the figure for the elderly was 5.0 (3.0, 6.0) days, consistent with the 5.0 (4.0, 5.0) days for the non-elderly (p = 0.919). Similarly, the peak platelet counts in the non-elderly were higher than those in the elderly (199.0 (125.0, 384.0) × 10^9/L vs. 141.0 (91.0, 253.0) × 10^9/L), while showing no statistical significance (p = 0.115). The time to peak platelet counts showed the same trend, with 8.3 ± 2.9 days in the elderly and 8.4 ± 2.5 days in the non-elderly (p = 0.851), as shown in Table 3. Table 3. Comparison of short-term efficacy in elderly and non-elderly patients.

Discussion

The urgent and optimum therapeutic goal for elderly patients with severe ITP is to increase platelet counts to a safe level and thereby avoid catastrophic bleeding events. Immune abnormalities lead to increased platelet destruction and decreased platelet production in patients with ITP [11,27-29].
Given the high risk of bleeding in elderly patients, it is necessary to find a better treatment option for elderly patients with severe ITP. Recombinant human TPO is a full-length and glycosylated TPO expressed in Chinese hamster ovary cells and purified by bioengineering techniques. It was approved by the China State Food and Drug Administration as a second-line treatment option for ITP. As illustrated in the Introduction, there are no studies concerning the efficacy of rhTPO in elderly patients with severe ITP; this is the first study focusing on elderly patients with severe ITP to analyze the short-term efficacy of rhTPO as frontline treatment. Elderly patients have a high risk of bleeding, and the risk of fatal intracerebral hemorrhage is much higher in elderly patients than in young patients if the platelet count is not promptly and rapidly increased. As first-line prothrombogenic therapy, TPO-RAs or TPO can be helpful for rapid platelet increase in elderly patients with ITP [15]. This study provides clinical evidence for prothrombogenic therapy to become first-line treatment. In our study, the most important clinical finding was that the time to initial response in the rhTPO group was significantly shorter than that in the control group, with a considerable difference of 1 day. This result could still be observed after excluding patients treated with IVIg and platelet transfusion, suggesting that rhTPO plays a crucial role in accelerating the increase in platelet counts, independent of IVIg and/or platelet transfusion. Given the higher risk of bleeding and ITP-related death in the elderly compared to the non-elderly [6], the primary aim is to increase the platelet count to a safe level as quickly as possible. Our results showed that the time to initial response was reduced by 1 day in the rhTPO group, demonstrating that standard frontline therapy plus rhTPO could raise platelet counts to a safe level more quickly and effectively in elderly patients with severe ITP, thereby potentially reducing the incidence of catastrophic bleeding episodes. In our study, there was no occurrence of anti-TPO antibodies in patients, which might be related to the short-term use of TPO and the relatively low incidence of anti-TPO antibodies in the Chinese population. The response rate of the elderly to rhTPO treatment was also evaluated. The data showed that the OR in the rhTPO group was slightly higher than that in the control group, whether or not patients receiving IVIg and platelet transfusion were excluded. The CR in the rhTPO group was consistent with that in the control group, and it became remarkably higher than that in the control group after excluding patients treated with IVIg and platelet transfusion. However, the differences in the OR/CR rates between the two groups did not reach statistical significance. Previous studies have reported that rhTPO combined with glucocorticoids could significantly improve the complete response rate in patients with severe ITP [19]; the limited sample size in our study possibly led to a different conclusion. Well-designed prospective, randomized controlled studies are necessary to clarify the role of rhTPO as frontline therapy in improving the response rate in elderly patients with severe ITP. In terms of the peak platelet counts and the time to peak platelet counts, this study did not show any significant difference between the two groups.
However, in the subgroup of patients not receiving IVIg and platelet transfusion, the peak platelet counts in the rhTPO group were significantly higher than those in the control group. Considering the sample size of this subgroup (only 24 cases), the result needs to be confirmed by further studies. Interestingly, there was a result that seems contrary to previous studies, which have suggested that TPO-RAs are associated with a higher risk of thromboembolic events [30,31], especially in patients with other risk factors for thrombosis (e.g., age over 65 years). In this study, there were several patients whose peak platelet counts exceeded the upper normal limit (>300 × 10^9/L), but no thromboembolic events were reported during hospitalization. It seems that rhTPO may not greatly increase the risk of thrombosis in elderly patients with severe ITP. However, clinicians should still pay attention to elderly patients treated with rhTPO, because rapidly increased platelet counts may still put elderly patients at an extremely high risk of thromboembolic events. rhTPO was well tolerated in the elderly. Adverse events were mild, and there were no catastrophic bleeding events or thromboembolic events reported during hospitalization, while one case of infection occurred in the control group. To further explore the difference in the efficacy of rhTPO treatment between elderly and non-elderly patients, non-elderly patients with severe ITP (also treated with rhTPO) were selected from the same database and the data were analyzed. The results showed that there was no significant difference in any of the endpoints between the two groups, suggesting that the efficacy of rhTPO in elderly patients is comparable to that in non-elderly patients. As it is usually thought that the response of the elderly to rhTPO could be poorer than that of the non-elderly, this result probably suggests that rhTPO is also an effective treatment in elderly patients with severe ITP. In conclusion, rhTPO combined with standard frontline treatment could significantly shorten the time to initial response in elderly patients with severe ITP and could increase platelet counts to a safe level rapidly, thereby potentially reducing the risk of bleeding. In addition, after excluding the influence of IVIg and platelet transfusion, rhTPO significantly improves peak platelet counts without eliciting extra thromboembolic events. Given the retrospective nature and small sample size of this study, well-designed prospective studies with larger sample sizes and longer follow-up periods are needed to verify the efficacy and safety of rhTPO as the frontline option in elderly patients with severe ITP.

Institutional Review Board Statement: The study was in accordance with the ethical standards formulated in the Declaration of Helsinki and was approved by the respective local Medical Ethics Committees of Zhongshan Hospital, Qingpu Branch, Fudan University (no. 2020-13). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Y.L. had access to the complete dataset used in the study and takes responsibility for the integrity of the data and the accuracy of the data analyses. The dataset is available upon justified request.
Library usage pattern of school teachers in Sri Lanka: problems and issues

The aim of the study was to explore Sri Lankan school teachers' information seeking behavior, their intentions and occasions for accessing libraries, and to measure their library skills and the challenges and barriers that they faced during library use. It further examined whether there were any associations between the extent of library awareness skills of teachers and four variables: gender, age, work experience and school location. A simple random sampling technique was applied to draw the sample for the study, and 400 school teachers who had followed postgraduate courses in education were selected as the sample from the total population (106,756). A questionnaire survey method was used to collect data. Data analysis was done by means of simple and inferential statistics using the SPSS statistical package. The results that emerged from the present study showed that the studied sample did not regularly access libraries and that teachers were uncertain about their abilities for effective library use. It was further found that the respondents were rather dissatisfied with the weak collections of school libraries and with improper library services and library arrangements. The unavailability of up-to-date electronic materials and of proper guidance also created barriers to school teachers' library use. The analysis further found that there was no association between the library awareness skills of the teachers and the four given variables (gender, age, work experience and school location).

Introduction

School libraries are precious sources of information that contribute to productive educational achievement (CILIP, 2004). They are an imperative supplement to education as well as a base for generating creative thinking, cultural improvement and the self-development of a person (Önal, 2009). The school library, therefore, is an essential institution for the learning society that helps teachers and students in achieving educational goals. The school library also has full potential to improve the quality of educational processes in a school, since it provides knowledge and skills for students and teachers (De la Vega & Puente, 2010). Today, most of the resources relevant to teaching can be obtained in digital form and, as a result, users are changing their habits as they do not need to come to the physical library for getting information (Hiller, 2002). The physical library has become a student-dominated place, and the usage of the physical library by teachers has declined dramatically since the mid-1990s (Martell, 2008). Thus, an investigation needs to be done to determine the factors that cause the low library use of school teachers. On the other hand, it is believed that school teachers are the key players in successful education and that their teaching and learning can be enhanced by establishing functional libraries with professionally qualified librarians (Wessels & Mnkeni-Saurombe, 2012). Therefore it is a prime necessity to conduct studies examining school teachers' library usage, their satisfaction with library services, their library skills and the problems that they face while using libraries. The findings of such studies will certainly be helpful in identifying the factors that affect teachers' library use, and consequently they may help in planning and organizing more customer-focused school libraries for the betterment of school education in the country.
Moreover, the library needs to continue working on users' information needs because users' needs change continuously (Song, 2009). An understanding of users' information needs and of their level of satisfaction with library services will be helpful for improving library services to keep up with upgraded information systems in a digital age with limited funding. Therefore the present study explores the nature of library use by school teachers and also the challenges faced by them when using libraries to fulfill their information needs. This study can successfully be used to develop new policies in relation to school libraries and their services. This in return will help school libraries to provide a better and more effective information service for teachers by upgrading library services, which is an imperative need for quality education in a country.

Research Objectives

The main objective of this study is to examine the library usage patterns of school teachers in Sri Lanka and to identify the reasons that prevent them from using school libraries, their services and resources effectively. The specific objectives of the study are as follows:
i. To examine the purpose and frequency of library use by the teachers
ii. To assess the level of user satisfaction towards the library services and resources
iii. To identify the problems faced by the teachers when using school libraries
iv. To find out any associations between the extent of library awareness skills of teachers and selected variables

Literature Review

Although a substantial amount of research literature is available globally on the role and importance of school libraries in education, a limited number of studies have been conducted to examine the role of the school library in delivering the school curriculum (Sim, 2001; Tan, 2003) and the extent of its use by teachers. Mokhtar and Majid (2005) found that teachers in Singapore did not use school libraries effectively. Consistent with these studies, Williams and Coles (2007) found that teachers perceived the school library as a resource for pupils and not for their own professional development. They further identified the lack of time and the lack of access to research information in the school library as significant barriers to school teachers' information gathering. A study conducted by Tachie-Donkor and Dadzie (2017) found that secondary school teachers in Ghana used school libraries to supplement their teaching notes on a topic or subject to be taught in schools and to keep abreast of current information in their subject areas. However, most of them claimed that materials available in school libraries were not relevant for their teaching. A similar study conducted by Korobili et al. (2011) among public secondary schools in Greece found that most of the teachers preferred using their personal collection rather than the library collection because the library did not comprise relevant materials that could help them in developing the curriculum. The study further found that teachers did not have the requisite skills needed to browse the library's collections. Asselin and Doiron (2003) noted that school library programs were not included in the training of pre-service teachers. As a result, most teachers were unable to develop adequate skills for effective school library use. Another study, by McLelland and Crawford (2004), revealed that the school library was seriously under-utilized by teachers as a result of limited staff.
Although there are many studies on the school library usage of learners, there is little evidence of research on the library usage of school teachers in Sri Lanka. In the Sri Lankan context, only one study was found on a similar topic. The study conducted by Rathnayake (2013) found that most of the teachers visited the library to find information relevant to the subjects that they taught in the schools. In addition, they used school libraries for reading newspapers (32%) and for referring to curriculum-related materials (28%) such as syllabi and teacher's guides. The study further revealed that school libraries helped teachers by providing supplementary reading materials relevant to teaching. Of the total, 76% of respondents mentioned that the library helped teachers by providing required materials for their professional development as well. With regard to the frequency of library use, the majority (72%) visited the library when the need arose and 20% visited once a week, which gave evidence of the poor library usage of teachers. It was further evident from the study that the lack of time and a heavy workload were the main problems encountered by the teachers when using libraries.

Methodology

The participants who took part in this study were all in-service graduate teachers who followed postgraduate courses at the Department of Education, University of Peradeniya, which is one of the main higher educational institutions in Kandy district. The total population of graduate school teachers who work in government schools in Sri Lanka is 106,756 (Statistical Abstract, 2017). Based on the sample size calculation prescribed by Krejcie and Morgan (1970), 400 teachers were determined as the sample of the present study from the above population (106,756). A specifically designed self-administered questionnaire (printed version) was used as the main data collection tool, and 400 questionnaires were distributed randomly among the participants at their lecture halls in early September 2018. The collected data were analyzed using the SPSS software package (version 21.0).

Results and Discussion

As mentioned earlier, 400 questionnaires were distributed among the sample and 318 were returned, resulting in a 79.5% response rate.

Personal characteristics of the respondents

As can be gleaned from Table 1, the majority of the sample (88%) was female and males comprised only 12%. With regard to the age of the respondents, the majority were in the 31-40 years age group, with the remainder being between 20-30 (20%) or 41-50 (19.5%) years. More than half of the respondents had engaged in the teaching profession for 1-5 years. Teachers who had been in the profession for 6-10 years made up 23.6% of the sample, followed by those with more than ten years (23.9%). With regard to the area in which the school was located, more than one third of the respondents, 125 (39%), indicated that their schools were situated in a suburban area, followed by 35% in rural areas and 24% in a central city. The schools where the respondents taught fairly represented town areas as well as rural areas.

School library use

The respondents were asked whether they used the library to obtain information relevant for teaching. Two hundred and seventy (84.9%) respondents indicated that they used the library to obtain information to fulfill their information needs, and 48 respondents (15.1%) indicated that they did not use the library to fulfill their information needs.
When the respondents were asked whether they met their information needs from the library, 80.8% mentioned that their information needs were fulfilled by the library, while 11.3% mentioned that their needs were not fulfilled and 7.9% refrained from responding. Respondents were then asked to mark the reasons for their school library use; the results are shown in Table 2. As shown in Table 2, the majority of the respondents used the school library to obtain information for their own reading, to get supplementary information for classroom teaching, and to keep abreast of current information in their subject areas. More than 75% of the respondents used the library for lesson preparation, followed by reading newspapers or magazines and reference work. The respondents were asked to rate the adequacy of the information resources available in the school library on a four-point scale: "Very good", "Good", "Fair", and "Below standard". At the same time, they were asked to indicate how they accessed information available in the library and what action they took if their information needs were not met by the library. The results are depicted in Table 3. According to Table 3, for more than 40% of the respondents the information resources and facilities available at the school's library were "fair", followed by "good" (26%), "below standard" (15.4%), and "very good" (6.3%). With regard to access to the library, the majority of the respondents browsed the library stock (79.6%), while 75.5% sought help from the library staff and only 44% used the library catalogue. It was further revealed that the majority (83%) of respondents asked the officer in charge of the library for assistance or sought help from the internet when the information they needed was not available at the school library. Of the respondents, 75% consulted fellow teachers, whereas 4.4% gave up when their information needs were not met by the school library. (Table 3)

Frequency of library use

The respondents were asked about the frequency of their visits to the school library. More than one third indicated that they visited the library weekly, while 25.8% visited occasionally, 14.2% visited monthly, and only 9.4% visited daily (Figure 1). As a whole, the results confirmed that a majority of the respondents did not visit their school libraries regularly.

Library skills of school teachers

The respondents were asked to rate their agreement with given statements about library skills on a five-point scale (from 1, "Strongly Disagree", to 5, "Strongly Agree"). The library skill scale developed by Kampen (2004) was used to measure teachers' library skills, after necessary modifications to suit the local context. Statements are listed in descending order of Likert mean score. It was found that 64% of the respondents were of the opinion that the school library should provide more services for teachers, while 45% felt that the library was easy to use and 38% were confident about their awareness of the resources available in the library (Table 4).
However, the majority (72%) disagreed with the statement "availability of too many information sources in the library", while 55% of the respondents disagreed with "adequacy of information training sessions offered by the library for the teachers" and 45% disagreed with "comfortable use of computers in the library". These results suggest that the teachers had the skills to use libraries easily and were aware of the available library resources, while also urging more library services for teachers. On the other hand, they felt that libraries lacked information sources, and they urged the library to offer more information-skills training sessions.

Satisfaction with available library services and resources

The respondents were asked to rate their level of satisfaction. More than half agreed with the statement "library arrangement is good and relevant information sources can be easily located", while slightly under half agreed with "helpfulness of user awareness programs conducted by the library" (47%) and "helpfulness of library staff" (46%), the items with the highest mean scores. However, a substantial proportion of respondents were uncertain (43%) about, or dissatisfied (38%) with, the library resources available in the subjects they taught in schools, and 45% of the respondents disagreed with the statement "library has a current and updated collection". The results revealed that a substantial proportion of respondents were satisfied with library services, particularly the user awareness programs, staff assistance and library arrangement, but they were not satisfied with the availability of current and updated resources in their subject fields.

Problems faced in using school libraries

The respondents were asked to indicate the types of problems they encountered in using their school libraries; the twelve problems most frequently encountered were listed. From Table 6, it can be seen that the majority had faced problems such as the lack of relevant materials in the library, lack of information on available resources, non-availability of e-resources and outdated library collections. These results are consistent with the study by Williams and Coles (2007), who identified inadequate library facilities and the unavailability of, and inability to locate, up-to-date information as problems faced by teachers in using school libraries. The findings also align with the studies by Johnson (2000) and Tachie-Donkor and Dadzie (2017), which concluded that teachers did not find relevant materials in their school libraries.

Association of library awareness skills and variables

In order to find out whether there are any associations between teachers' library awareness skills and four selected variables, namely gender, age, experience and school location, an exploratory factor analysis was conducted. The same library skill scale developed by Kampen (2004) was used; it was tested for reliability, and the Cronbach's alpha value was 0.701 for the overall scale.

Factor analysis of the library skills

Listwise deletion was used to exclude from the factor analysis respondents who did not check any of the items. Principal component analysis was employed to extract factors, which were retained by a scree test and Kaiser's criterion (eigenvalue > 1).
Statements with the highest loading on a particular factor were grouped under that factor, and those that correlated less than 0.5 with a factor were not loaded. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy tests the amount of variance within the data that could be explained by factors. As a measure of factorability, a KMO value of 0.5 is poor and 0.6 is acceptable, with values closer to 1 being better; Bartlett's test indicates that the data are probably factorable if p < .05 (Brace, Kemp & Snelgar, 2006). Data from the library awareness scale were analysed by means of a principal component analysis with varimax rotation. The various indicators of factorability were very good, and the residuals indicate that the solution was a good one. Three components with an eigenvalue > 1.0 were found, and the items with the highest loadings were chosen for each factor (see Tables 7 and 8). The three-factor solution accounted for 54.325% of the total variance (see Table 7). The components (factors) can be thought of as representing three main themes: component 1 relates to awareness of what the library offers and its use; component 2 relates to attitudes towards using the library; and component 3 relates to the services offered by the library. Factor one consisted of six items with factor loadings > .3 and accounted for 26.069% of the total variance. The items with the highest loadings in this factor related to the respondents' awareness of library resources and the ease of using the library. Factor two was named "attitudes towards using the library" and accounted for 16.167% of the total variance, with four items loading highly. Items in this factor addressed whether the ability to use the library had a negative effect on teaching and the perception of too many possible sources being available in the library. Factor three was named "services offered by the library" and accounted for 12.089% of the total variance, with two items loading. The item with the highest loading in this factor related to the perception that the library should provide more services for teachers. (See Tables 7 and 8.) These factors were used for the regression analysis and for verifying the study's model.

Regression analysis of the library skills

Logistic regression analysis was conducted using SPSS version 21.0 to identify the predictors of the library skills of the school teachers. The goal of this analysis was to study the associations between each dependent variable representing key factors of library skills and four independent variables: gender, age, experience and location of school. In this section, a series of ordinal logistic regressions were run to achieve the corresponding objective of the study (objective iv). First, a logistic regression was run for the highest-loading factor of the library skill scale, that is, "the library should provide more services for teachers". Three hundred and eighteen cases were analyzed, and the full model significantly predicted "library should provide more services for teachers" (omnibus chi-square χ² = 26.856, df = 16, p = 0.043). Overall, 43.4% of predictions were accurate. Table 9 gives a detailed description of the values calculated for the independent variables entered in the last step of the forward stepwise regression. As can be gleaned from Table 9, no variables were found to be statistically significant.
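The factor-analytic steps described above (KMO and Bartlett checks, principal components with varimax rotation, grouping items by their highest loadings) can be sketched in a few lines of Python. The study itself used SPSS 21.0; the code below is only a rough equivalent, and the input file and column layout are hypothetical stand-ins for the library-skill items.

```python
# Sketch of the factor analysis described above (assumed data layout:
# one numeric column per library-skill item, one row per respondent).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("library_skill_items.csv").dropna()  # listwise deletion

chi2, p = calculate_bartlett_sphericity(items)  # factorable if p < .05
_, kmo_total = calculate_kmo(items)             # acceptable if > 0.6
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.3f}; KMO = {kmo_total:.2f}")

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))            # group each item under its highest loading
print(fa.get_factor_variance()[2])  # cumulative variance explained (~0.54 here)
```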
Conclusion

As emerged from the findings, teachers' topmost reason for using the school library was to obtain information for their own reading. The next most important reasons were to obtain information to supplement classroom teaching and to keep abreast of current information in the subject areas related to their teaching. The results revealed that most of the teachers felt that the library should provide more services for teachers, and most of them were uncertain about their ability to use the library easily. Moreover, the results showed that most of the teachers agreed that the user awareness programs conducted by the library were very helpful, and they were satisfied with the helpfulness of the library staff. The results further revealed that the lack of relevant materials in the school library, lack of information about available sources, unavailability of electronic resources, outdated library materials, and difficulty in finding resources in the existing collection were the main problems encountered by the teachers when using school libraries. Furthermore, no association was found between the library awareness skills of the teachers and the four variables of gender, age, work experience and school location.

Recommendations

Based on the findings of the study, the following recommendations are made:
i. Upgrading school libraries with the necessary information resources and services to develop the knowledge and skills of teachers.
ii. Providing more up-to-date information materials relevant to school teaching and learning.
iii. Conducting information literacy skills courses for teachers to make their use of the library more effective and to enable them to transfer these skills to learners.
iv. Developing innovative marketing strategies in libraries to promote the services offered to school teachers.
Relationship of QTAIM and NOCV Descriptors with Tolman's Electronic Parameter

The σ-donor properties of various P-donor ligands have been studied at the PBEPBE level of theory, which has proved to be accurate in computing the symmetric carbonyl stretching frequencies in nickel(0) tricarbonyl complexes containing P-donor ligands. The delocalization index from the QTAIM methodology and the energy component associated with the NOCV deformation density representing the donor interaction give the best correlation with Tolman's electronic parameter, whereas the electron density at the bond critical point and the Wiberg bond index are connected with the donor strength of the ligands to a lesser extent.

Introduction

In the last decades, the continuous increase in computing performance, as well as the discovery of several computational principles and effective algorithms, has resulted in a huge leap in the accuracy and speed of quantum chemical methods. Computational chemistry can now handle many catalytic reactions with acceptable to excellent agreement with experimental observations. Moreover, a theoretical project is often faster than an experimental one and is sometimes the only way to obtain certain data, such as the evaluation of partial charges within a molecule or the geometry of transition state structures.

The importance of selective catalysts has become enormous in drug discovery and green chemistry, for instance in reducing waste, simplifying processes, and promoting the use of renewable resources. Asymmetric catalysis is one of the most economical and environmentally friendly ways to produce enantiopure fine chemicals. But in spite of the importance of homogeneous catalysis and the rapid development of computational facilities, most organometallic catalysts have still been discovered through serendipity rather than systematic design.
Phosphines, phosphites, and other P-donor ligands are of crucial importance in carbonylation and in many other reactions catalyzed by transition metal (TM) compounds. Changing the coordinated ligands is a powerful way of modifying the properties of transition metal complexes, especially those active in homogeneous catalysis. Their structural variation allows fine tuning of catalytic activity and of chemo-, regio-, and enantioselectivity. It has long been known that varying the substituents can change the behavior of the uncoordinated ligands as well as that of their TM complexes. Information about the nature of the transition metal-phosphorus bond is crucial for the characterization of catalytically active compounds and for the tuning of their properties in order to develop more efficient catalysts. Phosphines bound to TM-carbonyl complexes can be ranked in an electronic series based on CO stretching frequencies [1,2]. As the different behavior of various phosphines cannot be explained entirely in terms of their electronic character, Tolman also introduced the ligand cone angle as a fundamental descriptor of ligand steric effects. Employing computational chemistry brings the advantage that determining the properties of ligands and their TM complexes is usually faster and cheaper with quantum chemical methods. Calculating Tolman's electronic parameter (TEP), for instance, is also much safer because work with the poisonous Ni(CO)₄ complex can be avoided. It should be noted, however, that definite progress has also been made in the field of the experimental determination of electronic parameters. For instance, Vaska-type Rh complexes are easy to synthesize, and a strong correlation has been established between the ν(CO) of Ni(CO)₃(PR₃) and that of the corresponding RhCl(CO)(PR₃)₂ complexes [3]. Moreover, the rate of Se=P bond formation from KSeCN and tertiary phosphines spans 5 orders of magnitude and reveals a strong correlation with the electronic properties of the P-donor ligands [4].

In recent years, several attempts have been made to employ theoretical methods to characterize the donor and acceptor properties of phosphines and other (mainly P-donor) ligands. These methods can be divided into two categories. The first group deals only with the isolated ligand, focusing on its electronic and steric properties and neglecting the influence of the metal-containing fragment. As a prominent example, the molecular electrostatic potential at the lone pair of the phosphorus atom should be emphasized, which correlates well with the TEP, according to Suresh and Koga [5]. Moreover, the method known as quantitative analysis of ligand effects (QALE) relies on experimental data for known ligands and provides the resolution of net donating ability into QALE parameters [6,7]. The second category uses approaches that focus on the entire transition metal complex, thereby including the possibility of scrutinizing ligand-ligand effects as well.
The goal of this paper is to investigate the relationship of descriptors from some popular electronic structure methods with Tolman's electronic parameter, which is still the most generally accepted measure of the net donating ability of a ligand. In this context, the delocalization index and the electron density at the bond critical point from the Quantum Theory of Atoms in Molecules (QTAIM) methodology [8] are compared with the Wiberg bond index of the Ni-P bond and with the bonding energy and the energy component of the donor interaction derived from the Natural Orbitals for Chemical Valence (NOCV) methodology [9].

Computational Details

All structures were fully optimized at the DFT/PBEPBE level of theory [10] with ultrafine grids, employing the Gaussian 09 suite of programs [11]. The PBEPBE functional has already been employed successfully for computing Tolman's electronic parameters [12]. For nickel the def2-TZVP basis set was used, whereas for the other atoms the def2-SVP basis sets were used [13] for geometry optimizations (denoted as PBEPBE/def2-TZVP(def2-SVP)), while the full triple-ζ basis set (def2-TZVP) was employed for the single-point energy calculations. Local minima were identified by the absence of negative eigenvalues in the vibrational frequency analyses. For the QTAIM (Quantum Theory of Atoms in Molecules) calculations [8] the AIMAll software package was utilized [14]. Natural bond orbital (NBO) analyses [15] were performed with the GENNBO 5.0 program, and Wiberg bond indices were calculated on the natural atomic orbital basis. For the ETS-NOCV calculations [9] the ADF 2012 software [16,17] was employed. The cone angles of the various P-donor ligands were determined with the steric program [18], preserving the geometry of the ligands in the coordinated Ni(CO)₃L complexes and thereby adapting the steric parameters to the molecular environment.

Results and Discussion

For a proper comparison of experimental carbonyl stretching frequencies with electronic structure descriptors, a set of simple monodentate ligands was chosen covering the range of known TEP values of P-donor ligands. This comprises monotertiary phosphines, phosphorus trifluoride, and the phosphites P(OMe)₃ and P(OPh)₃. The ν(CO) values and the most important structural parameters, that is, the Ni-P bond distances of the corresponding Ni(CO)₃L complexes, are compiled in Table 1. Because of the high conformational flexibility of the phosphite ligands [19], the global minima for the phosphite complexes were searched according to Figure 1. For both P(OMe)₃ and P(OPh)₃, conformation "A" proved to be the least favored energetically, whereas the relative free energies of conformers "B"-"D" fell within a range of 2 kcal/mol. The most stable structure is "C" for Ni(CO)₃(P(OPh)₃), whereas it is conformer "B" for Ni(CO)₃(P(OMe)₃). The set of ligands was also extended with P(o-Tol)₃ in order to gain insight into the deviation of the electronic parameters when the steric demand of the ligand is increased significantly in comparison to P(p-Tol)₃. The ligand 1,3,5-triaza-7-phosphaadamantane (PTA) was also taken into consideration because of its unique properties, being a water-soluble ligand with small steric bulk and fairly enhanced basicity [20].
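As a small illustration of the frequency-analysis step mentioned in the Computational Details above, the sketch below checks a Gaussian output for imaginary modes and picks out a carbonyl stretch. It is an assumption-laden convenience script: the filename is hypothetical, cclib is not named in the paper, and the 1800-2200 cm⁻¹ window is merely a typical range for terminal metal-carbonyl stretches.

```python
# Post-processing sketch (hypothetical file; cclib used for convenience).
import cclib

data = cclib.io.ccread("NiCO3_PMe3_freq.log")
freqs = data.vibfreqs  # harmonic frequencies in cm^-1; imaginary modes < 0

# A local minimum has no imaginary (negative) frequencies.
assert min(freqs) > 0, "not a local minimum"

# Terminal CO stretches of carbonyl complexes typically fall near 2000 cm^-1;
# the highest one in that window is taken here as the symmetric A1 stretch.
co_band = [f for f in freqs if 1800.0 < f < 2200.0]
print(f"estimated A1 nu(CO): {max(co_band):.1f} cm^-1")
```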
The Ni-P distance depends not only on the donating ability of the ligands but also on their steric demand; therefore, no unambiguous relationship can be established between the Ni-P distances and the TEPs. It is worth noting, however, that electron-withdrawing substituents on phosphorus result in shorter bonds, as seen, for instance, in Ni(CO)₃(P(CF₃)₃) (2.167 Å) as opposed to Ni(CO)₃(PMe₃). The computed carbonyl stretching frequencies reproduce the experimental TEP values closely, with a deviation of only +3 cm⁻¹; that is, the PBEPBE/def2-TZVP(def2-SVP) level of theory seems reliable for predicting Tolman's electronic parameters for unknown P-donor ligands.

The QTAIM methodology provides various suitable descriptors for characterizing the bonding (as well as the weak) interactions in transition metal complexes [22,23]. Detailed information on the electron density distribution can be obtained from the Laplacian of the electron density, ∇²ρ(r), which indicates regions with relative charge concentration (∇²ρ(r) < 0) and charge depletion (∇²ρ(r) > 0) in a molecule. Upon the formation of a chemical bond, the Laplacian distribution is no longer spherical, and the distortion of the valence shell charge concentration (VSCC) is characteristic of the atomic interactions. The ellipticity (ε), obtained from the two negative eigenvalues (λ₁ and λ₂) of the Hessian of ρ(r) at the bond critical point (BCP), is the measure of the deviation of the density distribution from the axial symmetry of a chemical bond and is defined as ε = λ₁/λ₂ − 1. As expected, ellipticities of zero or very close to zero were obtained for the Ni-P bond of all the complexes, indicating that the charge distribution along the Ni-P interaction is highly cylindrical.

Comparing the Laplacians of the two prototype complexes Ni(CO)₃(PMe₃) and Ni(CO)₃(P(CF₃)₃), representing a basic and the least basic phosphine, some differences can be observed (see Figure 2). The charge concentration region depicting the lone pair of phosphorus is responsible for the σ-donor interaction. In Ni(CO)₃(P(CF₃)₃) the charge concentration is definitely closer to the P atom, a consequence of the electron-withdrawing property of the trifluoromethyl groups. The more compact density distribution is also reflected in the remarkable difference in the NPA charge of phosphorus, which is significantly less negative for the complex containing P(CF₃)₃. The P(CF₃)₃ ligand also takes some electron density from the carbonyl ligands, decreasing their partial charges. Interestingly, the Ni center becomes somewhat more negative in this case, preserving some of the charge withdrawn from the carbonyl ligands. Based on the electron density partitioning scheme of QTAIM, Bader and Stephens introduced the delocalization index, denoted here as DI(A,B). It provides the number of electron pairs delocalized between the basins of atoms A and B [24]. The DIs between atomic basins are somewhat related to formal bond orders for an equally shared pair between two atoms in a polyatomic molecule.

The delocalization index for the Ni and P atomic basins in the Ni(CO)₃L complexes is depicted as a function of the computationally determined carbonyl stretching frequency (see Figure 3(b)), with a reasonable linear correlation of R² = 0.946. A notably worse correlation (R² = 0.844) was obtained for the electron densities at the bond critical points, ρ(BCP); in particular, the phosphite ligands and P(t-Bu)₃ deviate from linearity (Figure 3(a)).
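The linear correlations quoted above (R² = 0.946 for DI(Ni,P) versus ν(CO), R² = 0.844 for ρ(BCP)) are ordinary least-squares fits. A minimal sketch of such a fit follows; the numerical arrays are hypothetical placeholders, not the values from Table 2.

```python
# Least-squares correlation of a QTAIM descriptor with nu(CO) (sketch;
# the data points below are hypothetical, not those of Table 2).
from scipy.stats import linregress

nu_co = [2056.1, 2061.7, 2064.1, 2076.3, 2110.8]  # computed nu(CO), cm^-1
di_nip = [0.95, 0.93, 0.91, 0.87, 0.79]           # DI(Ni,P), electron pairs

fit = linregress(nu_co, di_nip)
print(f"slope = {fit.slope:.4f} per cm^-1, R^2 = {fit.rvalue ** 2:.3f}")
```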
The phosphite ligands show some deviation when the Wiberg bond indices are plotted as a function of ν(CO). The moderate correlation (R² = 0.806) is also a consequence of the behavior of the P(CF₃)₃ ligand, which shows a remarkably low WBI value (see Figure 4). The QTAIM parameters as well as the Wiberg bond indices are compiled in Table 2. In order to scrutinize the σ-donor properties of the P-donor ligands, the ETS-NOCV methodology was selected, where ETS stands for Extended Transition State. Within this approach, the interaction energy between the selected ligand and the remaining part of the complex is decomposed into chemically meaningful components representing different steps toward the formation of the molecule from its fragments. When the orbital interaction part is expressed in NOCVs, rather than in orthogonalized fragment orbitals, only a few complementary pairs contribute to the interaction energy in a significant amount. This provides a better visualization of the deformation densities and of their energy contributions to the bond energy [9,25,26].

The deformation density Δρ_orb representing the donor interaction in the selected complex Ni(CO)₃(PMe₃) is depicted in Figure 5. The interaction energies resulting from the energy decomposition, as well as the energy components for the σ-donor interaction, are compiled in Table 3. The deformation density can be expressed as a sum of pairs of complementary NOCV orbitals (ψ₋ₖ and ψₖ) corresponding to eigenvalues equal in absolute value but opposite in sign [27]. The complementary pairs of NOCVs define the channels through which the electron charge transfer takes place between the molecular fragments, that is, from the coordinating lone pair of phosphorus towards the π* orbitals of the carbonyl ligands.

The energy components of the σ-donor interaction for the selected ligands are in good linear correlation with the computed ν(CO) values, with R² = 0.931. A somewhat looser relationship was found for the bond energies between the metal-containing fragment and the P-donor ligands (R² = 0.836) (see Figure 6). A strong deviation was obtained, however, when the sterically demanding ligand P(o-Tol)₃ was included in the training set. Thus, the deformation density Δρ_orb representing the σ-donor interaction is also a suitable quantum chemical descriptor for the prediction of Tolman's electronic parameter.

Concluding Remarks

In the present study, some quantum chemical descriptors were scrutinized in order to examine how suitable they are for predicting Tolman's electronic parameter for various P-donor ligands. The delocalization index from the QTAIM methodology and the interaction energy associated with the NOCV deformation density for the σ-donor interaction are the two descriptors that reveal a reasonable linear correlation with the computed carbonyl stretching frequencies. Whereas this is the expected behavior for the NOCV energy contributions, it is somewhat surprising for the DIs, meaning that they are much more directly connected to the donor strength of the ligand than other descriptors, such as the ρ(BCP) values and the Wiberg bond indices. It was also found that the PBEPBE functional in combination with the def2-SVP basis set (def2-TZVP for the nickel atom) gives accurate predictions for the A₁ (or A) carbonyl stretching frequencies in nickel tricarbonyl complexes. A significant increase in the steric demand of the ligand, represented by P(o-Tol)₃, causes a deviation from linearity for all descriptors, albeit to different extents.
Table 2: Computed CO stretching frequencies, electron densities at bond critical points, delocalization indices, and Wiberg bond indices for Ni(CO)₃L complexes.
Table 3: Computed CO stretching frequencies, donor energy components, and interaction energies between fragments, determined by NOCV calculations for Ni(CO)₃L complexes.
Health care utilization in general practice after HPV vaccination—A Danish nationwide register-based cohort study

Objective

The Human Papillomavirus (HPV) vaccine has increasingly been suspected of adverse effects in Denmark since 2013. Using consultations with the general practitioner (GP) as an indicator of morbidity, this study examines the association between HPV vaccination and morbidity in girls in the Danish childhood immunization program.

Methods

The study is a nationwide register-based cohort study. Both the HPV and the Measles, Mumps and Rubella (MMR) vaccines were offered to 12-year-old girls in Denmark in the study period (2008-2015). Therefore, both vaccines were included as exposures to allow differentiation between potential effects. This resulted in four exposure groups: HPV only vaccinated, HPV+MMR vaccinated, MMR only vaccinated, and non-vaccinated girls. The outcomes were daytime consultation rates and frequent GP attendance (more than 7 annual GP consultations). We estimated consultation rates by negative binomial regression analysis and frequent GP attendance by logistic regression analysis. Both analyses were stratified on the years 2008-2013 versus 2014.

Results

The study included 214,240 girls born in 1996-2002. All vaccinated groups consulted the GP more often than the non-vaccinated group, both before and after vaccination. After vaccination, an increase in consultations was observed for all three vaccinated groups, most distinctly for girls vaccinated in 2014. For girls vaccinated before 2014, we found a slightly higher risk of frequent GP attendance after vaccination in the HPV only group compared to the non-vaccinated group, whereas in 2014 frequent GP attendance was seen for all three vaccinated groups, most substantially for the MMR only vaccinated group.

Conclusion

In this study, no exclusive increase in health care utilization was detected after HPV vaccination. However, a general difference in health care utilization patterns was found between vaccinated and non-vaccinated girls, which increased after the time of vaccination, primarily for girls vaccinated after 2013.

Introduction

Cervical cancer is the second most common cancer in women worldwide, accounting for 270,000 deaths in 2012 [1,2]. Human papillomavirus (HPV) is the cause of virtually all cervical cancer cases [3,4], and more than sixty of the 195 countries worldwide have included the HPV vaccine in their national immunization programs [5]. The HPV vaccination was introduced in the Danish national childhood immunization program in 2009. Since then, the vaccination has been offered to all 12-year-old girls [6]. Large international randomized controlled trials showed that the HPV vaccine was safe and well tolerated [7,8], and subsequent epidemiological studies have not found a higher risk of autoimmune or neurological diseases in vaccinated girls or women compared to unvaccinated ones [9-11]. However, the HPV vaccination program has been challenged in Denmark since 2013 due to an increasing number of suspected adverse effects of the HPV vaccine, followed by intense media attention and public debate [12]. Some of the reported symptoms have been classified as postural orthostatic tachycardia syndrome (POTS), chronic fatigue syndrome, long-lasting dizziness, headache, syncope, seizures, abdominal pain, joint and muscle pain, and cognitive dysfunction [13,14].
The Global Advisory Committee on Vaccine Safety, set up by the World Health Organization (WHO), reported in 2015 that no safety issues had been found that would alter the recommendation of HPV vaccination [15]. As the reported adverse effects are very heterogeneous, and most of them do not have a specific hospital diagnosis code, this is a complex area that is difficult to study. It has therefore been discussed whether diagnosis-based epidemiological studies fully cover these possible adverse effects. In Denmark, all citizens have free and direct access to the general practitioner (GP). The GP is the first point of contact in the health care system, and many health problems are handled and treated by the GP, who acts as gatekeeper to secondary care, e.g. through referral to specialists and hospitals [16,17]. Girls experiencing possible adverse effects of the HPV vaccine are most likely to seek help first at their GP, and we expect that health care utilization at the GP will reflect potential health effects of the HPV vaccine. The aim of the current study was therefore to examine the association between the HPV vaccine and primary health care utilization as an indicator of increased morbidity among girls included in the Danish childhood immunization program. As the Measles, Mumps and Rubella (MMR) vaccination was offered to girls at the same age as the HPV vaccination, information on MMR vaccination was included in the study.

Study population

The study was designed as a register-based cohort study. Every citizen in Denmark is registered with a unique 10-digit civil registration number (CRN) [18]. This number was used to link individual-level information across the national registers. From the Danish Civil Registration System [18], we obtained information on emigration and death, and girls who were still alive and living in Denmark at their 14th birthday or at the end of follow-up on 31 December 2015, whichever came first, were included in the study (n = 214,424). Girls with missing information on region of residence were excluded (n = 184); the final study population thus consisted of 214,240 girls.

Information on exposure and outcome

Information on both exposure and outcome was obtained from the Danish National Health Insurance Service Register (NHSR) [20]. All primary health care services provided to citizens in Denmark are registered in the NHSR with specific codes. The registrations are based on the fee-for-service payment of the GPs, and the registered records are virtually complete. This makes it possible to follow individual contacts with primary care over time. The register includes information on the year and week in which each service is provided [20].

Exposure

The Danish immunization program includes vaccines against nine different infectious diseases given at different ages throughout childhood. In addition, girls are offered the HPV vaccine. The vaccinations are provided by the GP and are free of charge [21]. The HPV and MMR vaccinations were both offered to 12-year-old girls as part of the Danish immunization program in the period 2009-2016. The main exposure in the study was the HPV vaccine, but the MMR vaccine was also considered as an exposure in order to examine potential effect modification.

HPV vaccine. In January 2009, the HPV vaccination was introduced in the national childhood immunization program and offered to all 12-year-old girls [6]. In the period between January 2009 and August 2014, girls were vaccinated with the quadrivalent Gardasil vaccine three times within a year.
After this, the vaccination program was changed, and Gardasil was administered only twice. The vaccines were injected by the GP and registered in the NHSR (service codes: 8328, 8329, 8330 or 8334, 8335, 8336). In the current study, a girl was categorized as exposed to the HPV vaccine if one of the HPV vaccine service codes appeared in the register before her 14th birthday or before the end of follow-up, whichever came first.

MMR vaccine. From 1987 until April 2016, the MMR vaccine was offered to children in Denmark as part of the national childhood immunization program at fifteen months and twelve years of age. After the inclusion of the HPV vaccine in the national childhood immunization program in 2009, the MMR and HPV vaccines were both offered to 12-year-old girls at the same time. In this study, a girl was categorized as MMR vaccinated if either the MMR vaccine service code for 12-year-old children was registered (8612) or the MMR vaccine service code for the vaccination given at fifteen months was registered after the age of eleven years (8601).

Exposure status. The exposure was categorized into four mutually exclusive groups: HPV only vaccinated, HPV+MMR vaccinated, MMR only vaccinated, and non-vaccinated (neither HPV nor MMR vaccinated). For the two HPV vaccinated groups, the date of vaccination was defined as a random date in the week of the first HPV vaccination according to the NHSR. For the MMR only vaccinated girls, it was defined as a random date in the week of the MMR vaccination. The non-vaccinated girls have no date of vaccination, and this group served as a reference representing health care utilization at a given age and in a given calendar period.

Outcome

The outcome of the study was primary health care utilization, measured as face-to-face daytime consultation rates (hereafter referred to as consultations) and high frequency of GP attendance (hereafter referred to as frequent GP attendance). Frequent GP attendance was defined as more than 7 daytime face-to-face consultations during the year following the vaccination/index date. Information about consultations at the GP (service code for consultations: 0101) two years before and two years after the date of vaccination was obtained from the NHSR. A vaccination is not supposed to be registered in addition to a consultation code unless an actual consultation at the GP has taken place. A high number of girls had consultation codes (0101) registered in the same week as a vaccination. Such consultations were therefore disregarded, as they were considered either registration errors or minor health concerns that did not prevent the GP from vaccinating the girl.

Covariates

Potential confounders were all selected a priori. Information on age, region of residence, ethnicity, birth order, type of household, parental education and socioeconomic status was obtained from Statistics Denmark [22]. Information on parental covariates was obtained for the year before the date of vaccination for the vaccinated girls and for the year before the 12th birthday for the non-vaccinated girls. When data were missing, data from two years before the date of vaccination/12th birthday were used.

Statistics

Negative binomial regression models were used to estimate incidence rate ratios (IRRs) and 95% confidence intervals (CIs) comparing the rate of face-to-face consultations per three months for each of the three vaccinated groups with the consultation rate of the non-vaccinated girls.
This was calculated for the time period from two years before until two years after the time of vaccination. We used categorical covariates as presented in Table 1 to ensure that estimates were adjusted for age, calendar year, ethnicity, month of vaccination, region of residence, birth order, type of household, parental education and socioeconomic status. As the HPV vaccination coverage decreased steeply in Denmark in 2014 and 2015 [18,19], a sub-analysis stratifying on year of first vaccination (2008-2013 versus 2014-2015) was performed. In a supplementary analysis, we studied the association between HPV vaccination and frequent GP attendance. The vaccinated girls were matched on birthdate with non-vaccinated girls, and the non-vaccinated girls were allocated an index date equal to the vaccination date of the matched vaccinated girl. The odds ratio (OR) of frequent GP attendance among vaccinated girls was calculated using logistic regression analysis, with adjustment for all combinations of calendar time of vaccination and age at vaccination, as presented in Table 1. The estimates were also adjusted for prior health care attendance (continuously), ethnicity, region of residence, birth order, type of household, parental education and socioeconomic status. The analysis of frequent GP attendance was stratified on years (2008-2013 versus 2014). Girls who had the HPV vaccination in 2015 were excluded from this analysis, as it required one year of follow-up. The adjusted results are presented with 95% CIs. In all analyses, cluster-robust variance estimation was applied to account for dependence between repeated observations on the same subjects. The statistical analyses were performed in Stata 13.1 (Stata Corporation, College Station, Texas). A two-sided p-value of 0.05 or less was considered statistically significant. In this period, the numbers were 25,220 (69.5%) for HPV+MMR, 3,839 (10.6%) for HPV only, and 3,537 (9.8%) for MMR only; hence 3,665 (10.1%) were non-vaccinated. Both before and after 2013, vaccinated girls were more likely than non-vaccinated girls to have parents who were married and who had a higher socioeconomic position. Non-vaccinated girls had the lowest mean number of consultations during the entire period. In our study population, the mean ages at HPV and MMR vaccination were 12.37 and 12.16 years, respectively. Of the girls receiving both vaccines, 31% were vaccinated on different occasions, and in most of these cases (85%) the girls had received the MMR vaccine before the HPV vaccine (median 225 days). Girls receiving the MMR vaccine only were vaccinated on average 40 days earlier than girls receiving the HPV vaccine only.

Consultations at the General Practitioner

The mean number of consultations at the GP is visualized by age and vaccination status in Fig 1. The mean number of visits to the GP varied from 1.5 visits a year for nine-year-old non-vaccinated girls to 2.8 a year for sixteen-year-old HPV only vaccinated girls. All groups had stable consultation rates until age fourteen and progressively increasing (although varying) consultation rates after the age of fourteen. The mean number of consultations was generally highest in the group of HPV only vaccinated girls, followed by the HPV+MMR group and the MMR only group. In the adjusted analysis, higher consultation rates were observed for all three groups of vaccinated girls compared to the group of non-vaccinated girls, both before and after the time of vaccination (Fig 2).
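To make the rate analysis described in the Statistics section concrete, a schematic version of the main model is sketched below in Python's statsmodels (the study itself used Stata 13.1). The data frame, its column names and the covariate set are hypothetical stand-ins: one row per girl per three-month period, a categorical exposure group, and cluster-robust standard errors by girl.

```python
# Sketch of the negative binomial rate model with cluster-robust variance.
# All names below (file, columns) are hypothetical; this is not the
# study's actual Stata code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("gp_consultations.csv")  # girl_id, n_consult, group,
                                          # age_cat, year, region, ...

model = smf.glm(
    "n_consult ~ C(group, Treatment('non_vaccinated'))"
    " + C(age_cat) + C(year) + C(region)",
    data=df,
    family=sm.families.NegativeBinomial(),
)
# Cluster-robust variance accounts for repeated observations per girl.
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["girl_id"]})

irr = np.exp(res.params)     # incidence rate ratios vs. the non-vaccinated
ci = np.exp(res.conf_int())  # 95% CIs on the IRR scale
print(pd.concat([irr, ci], axis=1).round(2))
```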
There was a tendency toward a decrease in consultation rate ratios in the nine months before vaccination for the HPV+MMR and MMR only vaccinated girls, whereas an increase was observed for the HPV only vaccinated group, relative to the non-vaccinated girls. After the time of vaccination, an increase in consultation rate ratios was observed for all three groups of vaccinated girls; this was most distinct for the HPV only and HPV+MMR vaccinated groups. In the stratified analysis, the consultation rate ratios for those vaccinated from 2008 through 2013 were very similar to the overall analysis. In 2014, the three vaccinated groups had a similar consultation pattern both before and after the time of vaccination, with a tendency towards a steeper increase in consultation rate ratios after vaccination, particularly for the MMR group. Table 2 presents the results of the analysis of frequent GP attendance stratified by year of vaccination. The percentage of frequent attenders among the included girls was approximately 2-3%. This percentage was very similar for the HPV only and HPV+MMR vaccinated groups in the two periods, whereas the percentage of frequent attenders decreased for the non-vaccinated girls and increased for the MMR only vaccinated girls in 2014. For those vaccinated from 2008 through 2013, we found a slightly higher risk of frequent GP attendance after vaccination in the HPV only group compared to the non-vaccinated group, which was, however, not statistically significant. In contrast, for those vaccinated in 2014, an indication of a higher OR of frequent GP attendance was detected for all three vaccination groups; this was especially pronounced in the group of MMR only vaccinated girls, who had an OR of 1.41 (CI: 1.01-1.99).

Discussion

This nationwide population-based cohort study investigated primary health care utilization as an indicator of increased morbidity after HPV vaccination among girls included in the Danish national immunization program. Overall, the study found that vaccinated girls in all groups had higher consultation rates than non-vaccinated girls both before and after vaccination. The consultation rate ratios, however, tended to increase after vaccination. This was evident in all three vaccination groups, but most distinct for the HPV only and HPV+MMR vaccinated groups. The consultation rate ratios for those vaccinated in the period from 2008 through 2013 were very similar to the overall analysis, whereas the increase in consultation rate ratios after vaccination tended to be steeper for those vaccinated from 2014. In addition, we observed a higher probability of frequent GP attendance in the year following vaccination for all girls vaccinated in 2014 compared to non-vaccinated girls. This increased probability of frequent attendance in the vaccinated groups in 2014 was possibly partly due to a decrease in the percentage of frequent attenders in the non-vaccinated group. In the study, no exclusive association between the HPV vaccine and increased health care utilization following vaccination was detected, but a general difference in health care utilization was found between vaccinated and non-vaccinated 12-year-old girls. Although the results cannot exclude that vaccination is associated with increased morbidity, the similar results for all vaccinated groups do not indicate any specific concerns about the HPV vaccine.
Both the steeper increase in consultation rate ratios observed for those vaccinated in 2014-2015 and the higher ORs for frequent attendance observed for all three groups vaccinated in 2014, but not for those vaccinated earlier, indicate that the association between vaccination and increased health care use is not due to adverse events related to the vaccination. As the same vaccines were used during the entire study period, there is no obvious biological explanation for this time-dependent change. Intense media attention concerning the potential adverse effects of the HPV vaccine has been seen in Denmark since 2013. This massive media attention could potentially have led to increased awareness of potential symptoms, but it could also have made the girls and their parents more inclined to draw a link between experienced symptoms and the HPV vaccine. This might have increased their consultation rate, which could partly explain the results. A recently published study by Héquet et al. [23] found that the use of medical services was a strong driver of HPV vaccination initiation at the individual level. This finding is in line with our results on the difference in health care utilization patterns between vaccinated and non-vaccinated girls prior to vaccination. We are not aware of other studies investigating the association between the HPV vaccine and later primary health care utilization. However, our findings are compatible with the findings reported in other post-licensure epidemiological studies, where no safety concerns have been detected [9-11,24,25]. One important strength of the study is the prospective design and the large study population. The study population consisted of almost all girls born in Denmark in the period from 1996 to the end of 2002, with practically no loss to follow-up. Furthermore, all information at the individual level was obtained from national registers, which eliminates the risk of recall bias. A limitation of the study is the conditioning on the future: the study only includes girls born in Denmark who were alive and living in Denmark at their 14th birthday or at the end of follow-up. However, less than 0.5% (918) of the girls died or emigrated between their 11th and 14th birthday, and as the risks of both emigration and death are thought to be independent of HPV vaccination and health care attendance, the risk of bias is considered very limited. Another limitation is the possibility of missing registrations of the given vaccinations. Still, as the reimbursement of the GPs depends on the registrations in the NHSR, severe underreporting is unlikely. A recently published study, however, found that MMR vaccination coverage at fifteen months of age in Denmark was higher than reported in the NHSR [26]. Such potential administration errors could cause some misclassification of vaccination status in the current study. This is mainly a concern for the MMR vaccine, as the HPV vaccine was given three times and it is considered less likely that the registration fails all three times. Until 2014, the HPV only vaccinated girls had significantly higher consultation rates in the nine months before vaccination compared to the HPV+MMR and MMR only vaccinated girls. This might partly be explained by missing registrations of the MMR vaccines given.
Hence, in the group of girls who had received both the HPV and MMR vaccines, 31% had the vaccinations on different occasions, and in 85% of these cases the MMR vaccine was given earlier than the HPV vaccine. Therefore, a missing registration of an MMR vaccination (with a potential registration of a consultation instead) is thought to occur mainly before HPV only vaccination. The higher consultation rate for the HPV only vaccinated girls before vaccination was not present in 2014/2015. This might be due to the intense public debate about the HPV vaccine in 2013, which led to stricter registration of the vaccinations given in general practice. In this study, a service code for a consultation recorded in the NHSR was disregarded in the analyses if it appeared in the same week as a vaccination was given. This was done because it was considered more likely to be an administrative error than an indication of morbidity. As the service codes are recorded only weekly in the NHSR, it was not possible to disregard only the consultation codes on the specific day of vaccination. A consultation service code was registered along with a vaccination service code in approximately 15% of the vaccination weeks. Some of these registered consultations are likely not due to administrative errors but reflect actual consultations with the GP. Thus, there is a risk of underestimating health care utilization among the vaccinated girls in the week the vaccination took place. Unmeasured confounding may be a limitation of the present study. Non-vaccinated girls may differ from vaccinated girls with respect to other characteristics. As an example, comorbidity may be linked to both vaccination status and consultation rates. However, due to the lack of information on comorbidity, we could not adjust for it in this study. Also, as a consequence of the media attention concerning possible adverse effects of the HPV vaccine starting in 2013, the HPV vaccination coverage decreased sharply in 2014. Hence, the distribution of covariates in the three vaccination groups, particularly in the non-vaccinated group, has probably changed. This potential variation in the distribution of unmeasured confounders could partly explain some of the differences in the results observed between those vaccinated before and those vaccinated after January 2014. In our study, GP attendance was used as an indicator of morbidity among the included girls. Unfortunately, the specific reasons for GP contact and potential diagnoses are not stated in the NHSR. GP attendance is therefore a crude measure of morbidity, and more severe morbidity might not be captured in our study, as health care use in secondary care is not necessarily associated with more GP contacts.

Conclusion

In this study, no exclusive increase in health care utilization was detected as an indicator of morbidity after HPV vaccination. However, a general difference in health care utilization patterns was found between vaccinated and non-vaccinated 12-year-old girls in the Danish childhood immunization program. A difference in health care utilization between vaccinated and non-vaccinated girls was present already before vaccination, but it increased after the time of vaccination, primarily for girls vaccinated in 2014/2015.
This might reflect a general difference in health care utilization between vaccinated and non-vaccinated girls and/or an increased awareness of potential adverse effects after the intense media attention from 2014 onwards. To our knowledge, this study is the first of its kind; hence, our results need to be further explored.
Human and bovine tuberculosis knowledge, attitude and practice (KAP) among cattle owners in Ethiopia

Tuberculosis (TB) is a re-emerging disease occurring worldwide, resulting in multi-billion-dollar losses and human deaths annually. The situation is worse in developing countries like Ethiopia, where people's knowledge, attitude, and practice (KAP) regarding the disease is poor. A questionnaire-based cross-sectional study was conducted to assess livestock owners' KAP towards human and bovine tuberculosis in Gondar, Ethiopia. A total of 349 study participants were addressed through face-to-face interviews. Descriptive statistics and Pearson's chi-square analysis were used to analyze the data and to examine the association between the outcome (KAP level) and the predictor variables (sociodemographic characteristics). Of the 349 respondents, 223 (63.9%) were male, while 126 (36.1%) were female. The KAP interviews indicated that 97.4% of the participants were aware of human tuberculosis, while only 84 (24.1%) knew about the cause and mode of transmission of bovine tuberculosis. Inhalation was reported as the main route of transmission for human TB (41.1%), whereas 50% of the respondents mentioned inhalation, contact, and ingestion of raw animal products as the main routes of TB transmission from animals to humans. Among those who had heard of bovine tuberculosis, only 56 (66.7%) considered bovine tuberculosis a significant threat to public health. The study showed low KAP on bovine TB among cattle owners in the study area. Therefore, community health education about the impact of the disease and its transmission, control, and prevention should be integrated with One Health-oriented education and research to eradicate the disease from the country.

Introduction

Tuberculosis (TB) is a re-emerging disease occurring worldwide, causing multi-billion-dollar losses and human deaths annually. The disease affects both humans and animals and is caused by a group of bacteria called the Mycobacterium tuberculosis complex, comprising different species including Mycobacterium tuberculosis and Mycobacterium bovis (Thoen et al., 2009). M. tuberculosis primarily causes TB in humans, whereas M. bovis predominantly affects cattle (Pal et al., 2014). The latter is the cause of zoonotic TB in humans, which can spread from infected vertebrate animals to humans (Cosivi et al., 1998; Ashford et al., 2001; Pal, 2007; Pal et al., 2014). The burden of human TB in Ethiopia is one of the world's highest (Pal et al., 2014; WHO, 2014). The country remains an epicenter for potential zoonotic diseases such as bovine tuberculosis (Grace et al., 2012), putting the public health sector at risk. The exponential growth of the country's population has increased the demand for animal products. In turn, this has resulted in the intensification of dairy and feedlot farms with productive animal breeds (Elias et al., 2008). The situation has created a conducive environment for the spread of zoonotic diseases like bovine tuberculosis (Ameni et al., 2003). Bovine TB in cattle is manifested throughout the different agro-ecological zones of Ethiopia. Its prevalence in cattle ranges from 16.2% to 65.8% in different farming systems (Shitaye et al., 2007), although a meta-analysis indicated that the pooled prevalence of bovine tuberculosis in Ethiopia is 5.8%. In Ethiopia, the prevalence of Mycobacterium tuberculosis reaches 0.6%.
Members of the Mycobacterium tuberculosis complex cause tuberculosis (TB) in various mammalian hosts but exhibit specific host tropisms (Ameni et al., 2011). The bacterium has demonstrated a potential for reverse zoonosis due to microclimate sharing between humans and animals, although the prevalence in this context may not exceed 1% (Ocepek et al., 2005). In Ethiopia, there have been reports of Mycobacterium tuberculosis infecting bovines (Ameni et al., 2011); a study in central Ethiopia reported that 27% of isolates from grazing cattle were Mycobacterium tuberculosis. In this particular study area, TB is one of the most prominent health constraints and the leading killer of people living with HIV/AIDS. Furthermore, bovine tuberculosis in Gondar town poses a substantial public health risk. Its prevalence in different production systems has been reported at 8.3% in an abattoir-based study (Tintagu Gizaw, 2017) and at up to 11% in dairy production herds (Mekonnen et al., 2019). Even though the disease poses a substantial public health risk in the Gondar area, people's KAP has not been studied. Studies conducted so far on human and bovine TB in Ethiopia indicate that there is still a gap in KAP about the diseases. A study conducted in Addis Ababa indicated that only 13.9% knew about bovine TB, showing that even community members living in the capital have a knowledge gap about the disease. Besides, a study conducted in Gondar measuring high school students' level of understanding of human TB showed that only 59% were knowledgeable about the disease. All these results indicate a knowledge gap to be filled (Hibstu and Bago, 2016; Kidane et al., 2015). In developing countries like Ethiopia, low living standards of both animals and humans play a significant role in the transmission of bovine tuberculosis from human to human and from human to cattle or vice versa (Ejeh et al., 2013). Educational efforts have largely been reserved for addressing human-to-human transmitted TB, even though the impact of TB transmitted from animals to humans is significant. Cattle owners and those in contact with animals and their products are at risk of acquiring bovine tuberculosis (Ameni et al., 2003). Community-based public health education remains the most powerful weapon for promoting awareness among cattle owners. Knowledge about the implications of bovine tuberculosis for humans has to be developed and disseminated adequately. Before planning an educational program, the level of understanding of livestock owners towards the disease has to be measured. With this understanding, a cross-sectional study was designed to assess the community's KAP on human and bovine TB.

Study population and area description

The study involved interviewing household heads who rear cattle and other animals, have contact with animals, and consume animal products. The study was conducted from February to April 2019 in and around Gondar town. Gondar town is located in the northwestern part of Ethiopia, 748 km from Addis Ababa and 180 km northeast of Bahir Dar. The human population is estimated at 207,044, and the town covers a total area of 5,560 ha (CSA, 2007).

Study design and sample size determination

A cross-sectional study was conducted from February to April 2019. Participants were selected using a random sampling approach. The list of cattle owners was obtained from the agricultural and rural development office of the Gondar town administration, Amhara National Regional State.
Once the list of all cattle owners was obtained, random numbers were generated to select the participating cattle owners. The sample size was determined using the method described by Yamane (1967), applying the finite population correction for proportions with a 5% margin of error at a 95% confidence interval; the resulting total sample size was 349 cattle owners.

Questionnaire design and data collection methods

A closed-ended questionnaire consisting of four parts was developed to measure owners' KAP levels, based on common understanding of the disease and literature reviews. The questions covered participants' demographic information and knowledge, attitude, and practice (KAP) items. The questionnaire was pre-tested and administered through face-to-face interviews during house-to-house visits. The main KAP questions focused on measuring participants' knowledge about the causes, transmission modes, treatment, and control and prevention mechanisms of human and bovine TB. In addition, we introduced questions assessing participants' attitudes and practices regarding raw meat/milk consumption habits, husbandry/management practices, herd size and structure, watering/feeding, production system, and contact between humans and cattle; any known current or previous history of TB in their households was also recorded.

Ethics statement

According to the National Research Ethics Review Guideline of Ethiopia, this research does not require formal ethical approval. However, we obtained verbal consent from participants, guaranteeing the confidentiality of the information they provided.

Data management and analysis

The collected data were cleaned, checked, and entered using EpiData software. They were then exported to Microsoft Excel and analyzed with the SPSS version 20 software package. A KAP score was calculated for each participant: one point was given for each correctly answered question and zero for each wrong answer. The correct answers were summed, divided by the number of participants, and multiplied by 100 to determine the mean percentage level for the knowledge, attitude, and practice questions. Participants who scored above the average were classified as having good KAP, while those who scored below the average were classified as having poor KAP. For the association analysis between KAP level and sociodemographic predictor variables, respondents were categorized as having good KAP (score greater than or equal to the mean) or poor KAP (score less than the mean). The relationships between the predictor variables (age, gender, marital status, educational status, occupation, residence) and KAP scores were examined using Pearson's chi-square test.

Socio-demographic characteristics of the participants

A total of 349 cattle owners participated in this study. Among them, 223 (63.9%) were males and 126 (36.1%) were females. Most of the respondents (39.5%) were between 18 and 30 years old. Regarding educational status, the largest share of respondents (25.2%) were illiterate. The majority of the respondents (40.4%) were engaged in cattle-rearing practices. Regarding residential area, 236 (67.6%) study participants lived in rural peasant associations, while the rest lived in the city administration (Table 1).
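As an aside, the sampling and scoring rules described in the Methods above are simple arithmetic, and the minimal Python sketch below illustrates them. It is not the authors' code: the source population size (N = 2737) and the number of questionnaire items (15) are not reported in the paper and are assumed here purely so that the numbers line up with the reported n = 349.

import numpy as np

def yamane_sample_size(N: int, e: float = 0.05) -> int:
    """Yamane's finite-population formula: n = N / (1 + N * e**2)."""
    return round(N / (1 + N * e**2))

print(yamane_sample_size(2737))                 # -> 349 at a 5% margin of error

# KAP scoring: 1 per correct answer, 0 otherwise; respondents at or above the
# mean total score are classed as having good KAP, the rest as poor KAP.
rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(349, 15))    # simulated correctness matrix
scores = answers.sum(axis=1)                    # total KAP score per respondent
good_kap = scores >= scores.mean()              # boolean: good vs. poor KAP
print(f"good KAP: {good_kap.sum()} of {len(scores)} respondents")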
Species of animals owned, purpose and husbandry system

The majority of the participants, 232 (66.5%), owned local cattle breeds (Bos indicus), and the remaining 117 (33.5%) had crossbred (Bos indicus × Bos taurus) cattle. In addition, 194 (55.6%) kept cattle only, while the rest reared cattle together with other livestock. The primary purposes for keeping cattle were milk production (42.75%), milk and draft (36.5%), and meat production. Regarding cattle husbandry practices, 168 (48.1%) grazed their animals freely in the fields, 110 (31.5%) kept their animals under intensive management, and the remaining 71 (20.3%) respondents practiced semi-intensive management systems (Table 2).

Knowledge of respondents towards human and bovine tuberculosis

Of the total participants, 97.4% had knowledge of human TB, while the remaining 2.6% had no information about the disease. In contrast, most of the participants (75.9%) had never heard about bovine TB. Misperceptions such as bad weather (both hot and cold) and genetic inheritance from parents were thought to be associated with the disease. Most of the respondents mentioned TB patients (40.0%) and radio/TV (41.1%) as the primary sources of information on human and bovine TB. Concerning the modes of human TB transmission, 130 (41.1%) said that M. tuberculosis can be transmitted by inhaling exhaled air when a person with TB coughs, sneezes, or speaks (droplet transmission). Among the respondents who had information about bovine TB, 42.9% regarded the ingestion of raw animal products (milk and meat) as the mode of transmission from animals to humans (Table 3). The majority of respondents believed that human TB is a curable disease, and 298 (94.3%) of the participants stated that drugs from a health center are the best treatment for TB. Most respondents stated that TB transmission is preventable. Furthermore, 99 (31.7%) of the participants mentioned covering the mouth and nose when coughing and sneezing, avoiding the sharing of utensils, and separating the patient's room as commonly used methods for preventing the spread of TB. Moreover, 51 (16.7%) respondents mentioned that the spread of TB could be reduced through vaccination of humans and cattle, even though vaccination against bovine TB is not a common practice in the area. Among participants who were aware of TB in humans and animals, 30.9% had witnessed TB among their family members or friends. Regarding the type of TB observed, 78.2% referred to a pulmonary form and 21.9% to an extrapulmonary form. Regarding the TB patients' treatment history, 92% of them took modern drugs provided by the health center (Table 4).

The attitude of respondents towards human and bovine tuberculosis

Regarding attitudes towards people with TB, 233 (68.5%) of the respondents felt compassion and a desire to help, while 57 (16.8%) felt compassion but tended to stay away, 31 (9.1%) said they feared contracting the disease and would not get close to patients, and the remaining 19 (5.6%) had no particular feelings towards TB patients. Most of the respondents, 143 (42.6%), said that the community usually segregates TB patients. Among the participants, 48 (14.1%) responded that TB affects only poor people. The largest proportion of respondents, 212 (62.4%), did not consider the consumption of raw animal products (milk and meat) to pose a risk of exposure to bovine tuberculosis, whereas 31 (9.1%) respondents were not sure about it.
Two-thirds of the respondents (67.6%) stated that vaccination against TB would protect anyone from the disease (Table 5).

The practice of participants towards bovine tuberculosis

Of the total participants, 10.3% of the households practiced raw milk consumption, while 41.3% boiled fresh milk before consumption. Most respondents boiled milk for fear of milk-borne diseases, while 42.2% of the household heads boiled milk for cultural reasons. More than one-fourth (27.5%) of participants responded that they share the same watering point with their cattle, and 28 (8%) respondents stated that they share the same house with their animals. Nearly three-fourths (74.1%) of the respondents would advise TB patients to get a checkup at a health center. Nine out of ten (90.3%) respondents would go to the hospital if they thought they had been infected with TB, 23 (6.8%) would go to the pharmacy, and the rest would prefer visiting traditional healers (Table 6).

Knowledge, attitude, and practice towards the zoonotic potential of bovine tuberculosis

Among the 84 (24.1%) study participants who had information about bovine tuberculosis, 56 (66.7%) regarded it as a significant public health threat. More than half (60.7%) of these participants named raw milk and meat as the source of bovine tuberculosis. However, more than thirty percent (33.3%) of respondents thought that bovine TB affects animals only. Most of the respondents (69.6%) mentioned that consuming cooked meat and boiled milk reduces the transmission of bovine tuberculosis from animals to humans (Table 7).

Factors associated with KAP level of the respondents towards human and bovine tuberculosis

The KAP level was calculated by scoring one for each correct answer and zero for each wrong answer. Respondents were categorized relative to the mean score (11.02 ± 3.575 for human TB and 3.07 ± 2.058 for bovine TB): scores greater than or equal to the mean were regarded as good KAP, while scores below the mean were categorized as poor KAP. Based on this calculation, 178 (51%) and 65 (18.6%) respondents had good KAP levels for human TB (Table 8) and bovine TB (Table 9), respectively. There was a significant association between KAP scores and the respondents' age (p < 0.05). The highest proportion of respondents with a good KAP level towards bovine tuberculosis, 25 (7.1%), was in the 31-40 age group, while respondents with good KAP towards human TB were in the 18-30 age category. Educational status and current occupation were also associated with KAP scores (p < 0.05). The study participants' residence was associated with KAP scores on bovine TB (χ² = 10.361, p < 0.05).

Discussion

The present study revealed that almost all cattle owners (97.4%) had information about human TB, while awareness of bovine tuberculosis was much lower (24.1%). With regard to M. tuberculosis in humans, this result agrees with studies conducted in Addis Ababa and in southern Ethiopia that reported awareness levels of 99.5% (Kidane et al., 2015) and 99.6% (Hibstu and Bago, 2016), respectively, among high school students. Nevertheless, Romha et al. (2014) indicated a lower (29.7%) awareness of bovine tuberculosis among cattle owners in the southern part of Ethiopia.
Likewise, Getahun and Eshetu (2018) reported that 69.0% of respondents had no information about bovine tuberculosis among the community in the Gambella region of Ethiopia. The current study revealed a higher proportion of respondents knowledgeable about bovine tuberculosis than the report of Kidane et al. (2015), which found 13.9% of knowledgeable high school students in Addis Ababa, Ethiopia. On the contrary, several studies showed a higher proportion of respondents knowledgeable about bovine tuberculosis: Kuma et al. (2013) in the Jimma zone in southwest Ethiopia, Ameni and Erkihun (2007) in Adama, central Ethiopia, Munyeme et al. (2010) in Zambia, and Addo et al. (2011) in China reported 45.6%, 35%, 39.6%, and 88%, respectively. More than 20% of cattle owners said that they got information and awareness about the disease from radio/television (TV), both national and local channels. Similarly, Hoa et al. (2009) reported that 64.6% of respondents got information from television. This may be due to the recent attention given by the government and NGOs operating in Ethiopia, which regularly air information on these diseases on TV and radio to create awareness. On the other hand, Yadav et al. (2006) described neighbors, friends, and family members as a significant source of information in India. Thus, different intervention means and efforts are suggested that consider the peculiar nature of each setting and target group (Hoa et al., 2009). In this study, the greater awareness of human tuberculosis could reflect remarkable educational efforts directed at human TB through multiple information sources and the participation of a large number of multicultural respondents in animal production, health, and husbandry. Despite a higher proportion of the study participants having information about human TB, more than half (56.7%) had little knowledge about the disease's cause. More than half (63.1%) of the respondents mentioned a germ/bacterium as the actual cause of bovine tuberculosis. However, misperceptions such as bad weather (both cold and hot air) and genetic inheritance from parents were implicated as causes of human and bovine TB. This finding is in line with Gebremariam et al. (2011), Bati et al. (2013), and Getahun and Eshetu (2018), who reported similar misperceptions among the general community in Addis Ababa and the Gambella region, in the southwestern part of Ethiopia. We found that the zoonotic potential of bovine tuberculosis was not well known by cattle owners. Among those who were aware of bovine tuberculosis, 33.3% believed that no transmission of TB from animals to humans occurs. In line with this, Kidane et al. (2015) reported similar results among high school students in Addis Ababa. Likewise, Bati et al. (2013) and Romha et al. (2014) highlighted that only 22.9% and 16.6% of respondents, respectively, believed that TB can be acquired from animals. Apart from the variation due to differences in study populations with multicultural practices in the respective study areas, this also indicates a wide knowledge gap among the general community regardless of age group. Of the respondents who had information about bovine tuberculosis, 42.9% stated that the ingestion of raw animal products (milk and meat) is the mode of transmission of zoonotic TB. Similarly, different studies have reported the culture of raw milk consumption in Ethiopia as a potential transmission route of M. bovis to humans (Ameni and Erkihun, 2007; Bati et al., 2013; Romha et al., 2014).
More than half (57.8%) of the study participants boiled milk for fear of milk-borne diseases such as tuberculosis, brucellosis, and E. coli infection. Similar but much higher findings were reported by Kidane et al. (2015) in Addis Ababa and Getahun and Eshetu (2018) in the Gambella region, southwest Ethiopia, where 66.2% and 90.9% of respondents, respectively, boiled milk for fear of milk-borne diseases. This suggests that the awareness of people who practice boiling improves disease prevention practice. Less than half (41.1%) of the respondents recognized that human TB can be transmitted through the inhalation of exhaled air when a person with TB coughs, sneezes, speaks, or sings. This result was inconsistent with studies conducted in different areas of Ethiopia (Legesse et al., 2010; Abebe, 2010) and in Selangor (Noremillia and Haliza, 2015), which reported 80.8% and 96%, respectively. The inconsistency could be due to variability in information and study populations. A significant portion (30.9%) of respondents had closely witnessed TB cases in relatives or friends; typical clinical signs included coughing, fever, and loss of appetite. Regarding the type of TB observed, more than three-fourths (78.2%) of participants referred to a pulmonary form and 21.9% to an extrapulmonary form. These rates were higher than those reported by Kidane et al. (2015) in Addis Ababa and Getahun and Eshetu (2018) in southwest Ethiopia, where 21.7% and 19.3%, respectively, reported pulmonary forms. Regarding the TB patients' treatment history, 92.0% of them took modern drugs provided by health centers, in line with a study conducted in southwestern Ethiopia (Getahun and Eshetu, 2018). Most of the participants responded that TB is curable with modern drugs and mentioned covering the mouth and nose when coughing and sneezing, avoiding the sharing of utensils, and separating the patient's room as important prevention and control approaches, indicating that the study community's awareness of appropriate treatment and prevention measures could play a significant role in reducing the spread of the disease (Bati et al., 2013). More than two-thirds (68.5%) of the respondents felt compassion and a desire to help TB patients. This proportion was higher than that reported by Hibstu and Bago (2016) in southern Ethiopia. More than ten percent (14.1%) of the study participants stated that TB affects only poor people, in line with findings from rural Ethiopia by Yimer et al. (2009). Nine out of ten (90.3%) respondents would go to a health facility if they thought TB had infected them, while the rest would prefer other self-treatment options such as herbs or visiting traditional healers. This result is similar to a study conducted in southern Ethiopia (Hibstu and Bago, 2016). The study participants' educational status was significantly associated with the KAP score for awareness of TB in humans and animals (p < 0.05). All respondents with grade eight or above had good KAP of TB in humans and animals. A possible reason is that, as education increases, people acquire better access to information about the diseases. This result is consistent with previous reports from Ethiopia (Mesfin et al., 2005; Bati et al., 2013). The findings of this study also revealed that farmers and merchants were more knowledgeable than the rest of the study groups.

Conclusions

Even though a relatively good understanding of TB was observed compared to previous studies, the KAP level was not adequate.
97.4% of the participants knew about human tuberculosis, including its cause, transmission, symptoms, and prevention approaches, while only 24% knew about the cause and transmission mode of bovine tuberculosis. Respondents had a lower level of understanding of the zoonotic potential of bovine TB. This indicates that the public health wing of the country's veterinary service providers should develop education programs on zoonotic diseases such as tuberculosis, brucellosis, and anthrax, as well as on food safety. If the country is to eradicate such diseases with a substantial public health impact, the plan should start at the grassroots level by creating awareness among livestock owners and animal product consumers. Community health education about the impact, transmission, control, and prevention of the disease should be integrated with one-health-oriented education and research to eradicate tuberculosis and other zoonotic diseases from the country.

Declarations

Author contribution statement

Amare Bihon: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Solomon Zinabu: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Yimer Muktar and Ayalew Assefa: Analyzed and interpreted the data; Wrote the paper.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

Data included in article/supplementary material/referenced in article.
“I Want, Therefore I Am” – Anticipated Upward Mobility Reduces Ingroup Concern

Empirical findings suggest that members of socially disadvantaged groups who join a better-valued group through individual achievement tend to express low concern for their disadvantaged ingroup (e.g., denial of collective discrimination, low intent to initiate collective action). In the present research, we investigated whether this tendency occurs solely for individuals who have already engaged in social mobility, or also for individuals who psychologically prepare themselves for, that is ‘anticipate’, social mobility. Moreover, we examined the role of group identification in this process. In two studies, we looked at the case of ‘frontier workers’, that is, people who cross a national border every day to work in another country where the salaries are higher, thereby achieving a better socio-economic status than in their home country. Study 1 (N = 176) examined the attitudes of French nationals (both the socially mobile and the non-mobile) and of Swiss nationals toward the non-mobile group. As expected, results showed that the mobile French had more negative attitudes than their non-mobile counterparts, but less negative attitudes than the Swiss. In Study 2 (N = 216), we examined ingroup concern at different stages of the social mobility process by comparing the attitudes of French people who worked in Switzerland (mobile individuals) with those who envisioned working in Switzerland (anticipators) and those who did not (non-anticipators). The findings revealed that anticipators’ motivation to get personally involved in collective action for their French ingroup was lower than the non-anticipators’, but higher than the mobile individuals’. Moreover, we found that the decrease in ingroup concern across the different stages of social mobility was accounted for by a lower identification with the inherited ingroup. These findings corroborate the deleterious impact of social mobility on attitudes toward a low-status ingroup, and show that the decrease in ingroup concern already occurs among individuals who anticipate moving up the hierarchy. The discussion focuses on the role of the discounting of inherited identities in both the anticipation and the achievement of a higher-status identity.

INTRODUCTION

Individuals are members of inherited social groups defined, for instance, by their gender, ethnicity, or age. They belong simultaneously to more malleable categories qualified by their educational and professional achievements. The present work is interested in how individuals cope with such multiple group memberships, in particular when these memberships are associated with different value and prestige (i.e., social status). Nowadays, most societies are still organized around a hierarchical principle of distribution of resources and power, creating and reinforcing economic, cultural, and political inequalities. Some groups are associated with a high social status whereas others with a low one (Tajfel and Turner, 1986; Sidanius and Pratto, 1999). Nevertheless, while in traditional societies the various group memberships tended to be aligned in status (Lenski, 1954), the stronger social fluidity of contemporary societies leads individuals to belong to multiple groups of conflicting status. For example, individuals from disadvantaged inherited backgrounds (e.g., women, ethnic minorities) may achieve higher status through professional attainments.
The present research seeks to better understand the socio-psychological processes at play when individuals are confronted with such status inconsistency due to upward mobility. We first consider how they cope with the contradicting demands arising from such multiple group memberships, by investigating their concern for the inherited low-status group members who did not achieve social mobility (Study 1) and more generally toward the inherited low-status ingroup (Study 2). We then investigate whether the anticipation of social mobility already leads to a decreased ingroup concern. Although being hierarchically organized, modern societies are characterized by an ideal of meritocracy that leads people to believe that personal investment and effort are the main causes of success (McCoy and Major, 2007). People are encouraged to focus on their personal trajectory and to engage in individual strategies in order to improve their social standing and to achieve self-worth (Tajfel and Turner, 1986; Wright, 2001). The social mobility strategy, as defined by social identity theory (SIT: Tajfel and Turner, 1986), describes individuals who suffer from the low status associated with their group membership and decide to quit their group to join a better-valued one. However, although clear-cut scenarios can be designed in the laboratory in order to make salient one specific membership, the study of identities in real life is more complex. Most often, individuals are confronted with contexts in which several of their group memberships are salient. Moreover, we argue that the possibility to leave a social group for another drastically varies depending on the nature of the group memberships. Whilst some group memberships are achieved by individuals throughout their lives (e.g., professional occupation, political affiliation), other groups are imposed from birth and are thus inherited in quality (e.g., gender, ethnicity). For individuals who are members of low-status achieved groups, the social mobility process can effectively occur as they move from one group to another. An illustration is an individual's attempt to quit their employee status by moving up the social ladder and becoming a manager. However, when the low-status membership is inherited, this status is quite impermeable, meaning that the individual has little power to modify it. For instance, a woman cannot easily change her sex, but she can focus on her professional standing and become a manager (Ellemers, 2001; Derks et al., 2016, for a review). Past research has shown that there is a tendency toward status crystallization, meaning that the probability to achieve a high-status membership is greater for members of high-status inherited groups than for members of low-status inherited groups (Lenski, 1954; Bourdieu, 1984). Nonetheless, societies have become increasingly fluid over the past decades, notably because of social and political movements (e.g., feminism, human rights movements) which have contributed to breaking societal barriers. A product of these more fluid societies is the increasing number of individuals experiencing status-inconsistent identity configurations. According to Lenski (1966), individuals who are simultaneously members of low- and high-status groups experience a psychological tension derived from their motivation to improve their social identity while still belonging to a low-status group.
In line with this idea, Wright and Taylor (1999) showed that low-status group members who succeeded as a token felt more negative emotions than individuals who succeeded in a non-discriminatory context. This means that, when being simultaneously members of low- and high-status groups, individuals face contradicting social expectations. Indeed, such expectations (e.g., stereotypes) differ to a great extent according to group status (Fiske et al., 2002; Kervyn et al., 2009). Moreover, while high-status groups promote norms and values related to independence, individualism, and self-fulfillment, low-status groups convey norms and values that promote interdependence and solidarity among their members (Lorenzi-Cioldi, 1988, 2009). Thus, by conforming to the norms of one of their memberships, individuals in status-inconsistent identity configurations deviate from the norms of their other membership, and expose themselves to various forms of social punishment. As an illustration, female managers may be punished for enacting agentic behaviors, because these behaviors contradict the female stereotype despite the fact that agency is expected for the professional role (Rudman and Glick, 2001). Consistent with the motive to achieve self-worth as posited in SIT, individuals who possess multiple social identities have "a natural tendency to think of themselves in terms of that status or rank which is highest, and to expect others to do the same" (Lenski, 1966, p. 87). Providing evidence for Lenski's reasoning, a series of studies conducted by Derks and colleagues showed that, among women who achieved a high-responsibility professional role, those who reported low levels of gender identification and who reported having experienced gender discrimination tended to describe themselves as more similar to the high-status group, compared to women high in gender identification and/or women having experienced low gender discrimination. In this way, they portrayed themselves using more masculine traits (the characteristics of the high-status group), while using the same amount of feminine traits as other women (Derks et al., 2011a,b). In addition, research showed that female faculty rated their male Ph.D. students more favorably than their female Ph.D. students, whilst no difference was observed in the male faculty's evaluations (Ellemers et al., 2004). Of interest, the same pattern of results was observed among Hindustanis in the Netherlands, who self-described as more Dutch when lowly identified with their ingroup and when having experienced discrimination (Derks et al., 2015). Lack of ingroup support has also been reported from the perspective of achieved low-status individuals. Research showed that female employees and Non-White employees felt less support from ingroup supervisors (i.e., female and Non-White supervisors) than from outgroup supervisors (i.e., male or White supervisors) in organizations with an adverse diversity climate (Paustian-Underdahl et al., 2017). Such parallel findings across social categories suggest that low ingroup concern among socially mobile women is not specific to gender, and that it can be broadly attributed to the status dynamics between different group memberships. Providing evidence for this reasoning, Kulich et al. (2015) compared the ingroup concern of low-status inherited group members who had successfully engaged in social mobility to that of their congeners who had not.
In their research, the authors observed that mobile members of different social categories of inherited low-status groups (e.g., Afro-Americans, immigrants, and women) expressed greater hostility and lesser support toward the inherited low-status group compared to non-mobile members. Taken together, these findings suggest that individuals who experience a status-inconsistent identity configuration describe themselves as more similar to the achieved high-status group and are less supportive of the low-status inherited group. An important issue is the kind of motivation that leads to such unsympathetic ingroup attitudes. Predictions derived from SIT (see Ellemers, 2001) would explain the lack of ingroup concern by a decrease in ingroup identification. However, Kulich et al.'s (2015) findings revealed that identification with the inherited group did not play a role, as mobile and non-mobile participants were similarly identified with their inherited group. This suggests that, despite their self-distancing from the inherited group on the attitudinal dimension and their stronger counter-stereotypical self-descriptions, the mobile remained identified with this group, and also self-described similarly on the dimension that is stereotypical of it. In addition, the authors found that the lack of support for the inherited ingroup was accounted for by an increased identification with the achieved high-status group. Mobile individuals identified more strongly with their achieved group than the non-mobile. Such a pattern suggests that individuals with multiple identities do not necessarily disengage from their low-status group. They may have to cope with the simultaneous presence of several identities, and thus the coping strategies on the attitudinal, self-evaluation, and self-categorization levels are not aligned. For example, the disparagement of the inherited low-status ingroup may be motivated by an effort to become accepted in the new high-status group (Wright and Taylor, 1999), while keeping the ties with the low-status inherited group. Indeed, the conflicting nature of their identity configuration becomes evident in the contrasting phenomena of identification with, and simultaneous negative attitudes toward, the low-status ingroup. Research is thus needed to identify the specifics of this assimilation process, which does not appear to influence the different dimensions of analysis (e.g., attitudes, identification) to the same extent. One way to look at this assimilation process is to compare the attitudes of mobile individuals to the attitudes of high-status group members. Indeed, considering the motivation of low-status groups' members to enhance the positivity of their social identity, one may expect mobile individuals to adopt attitudes similar to those of high-status groups' members, that is, members of the groups that mobile individuals joined through their social mobility. This could be considered a strategy to increase their chances of being accepted in the new group by showing its members that they no longer consider themselves members of the low-status (out)group (Merton, 1968). Indeed, Van Laar et al. (2014) showed that members of the high-status group offered support to the mobile only if they perceived that the latter were not behaving in a manner that was prototypical of their low-status group. Nevertheless, we also know from the social identity perspective that social groups' members need to feel their membership not only as positive, but also as distinct (Tajfel and Turner, 1986).
Thus, to achieve this feeling of distinctiveness, individuals tend to express ingroup bias. This bias describes the tendency of individuals to show more negative attitudes toward outgroup members, compared to ingroup members, especially when social categorization is salient (Mullen et al., 1992). This may help to ensure the distinctiveness of their group membership, but also, indirectly, to increase their self-esteem (see, for example, Brown, 2000). In sum, it appears more reasonable to think that, in order to maintain the distinctiveness of their membership, members of the high-status group will still express more negative attitudes toward the low-status group members than the mobile individuals do, thus preventing a complete assimilation of mobile individuals, which would threaten their distinctiveness. A second way of looking at this assimilation process is to ask whether social mobility in itself is related to a lack of ingroup concern, or whether this relationship is already present among individuals who merely aspire to undertake social mobility. Indeed, Merton (1968) suggests that social mobility is often preceded, or facilitated, by the expression of positive attitudes toward the group to which the individual seeks to belong. He theorized this process as an anticipatory socialization which contributes to increasing the probability of successful individual mobility, as well as the integration in the new group once joined. In line with this idea, research by Ellemers et al. (1993) showed that, when people find themselves in a permeable intergroup context, they tend to act in order to defend their individual interests rather than the interests of their group (see also Wright et al., 1990). Ellemers et al. (1990) further showed that, in a permeable context where individual mobility is facilitated, competent individuals strongly identify with the high-status group. Thus, on the basis of this literature, we expect different levels of ingroup concern between the non-mobile who do not strive for social mobility and the non-mobile who do. In conformity with Merton's (1968) suggestion, anticipators of social mobility should reduce their ingroup concern in order to enhance their chances of achieving the mobility. Finally, even if anticipators should reveal a lower concern than the non-anticipators, we expected them to still be more concerned than mobile individuals. The latter, who successfully achieved social mobility, should be motivated to maintain their distinctiveness by distancing even more strongly from their low-status inherited ingroup.

The Present Research

In order to examine the identity management strategies in dual identity configurations, we conducted two correlational studies. Our target group consisted of French nationals living in areas around the Swiss border. As the costs of living are higher in this region than the average costs in France, many French from this region attempt to join the Swiss workforce, which grants a number of financial and symbolic advantages: the unemployment rate in Switzerland is almost half (5.1%, FSO, 2014) of the rate in France (9.9%, INSEE, 2014), and the median salary in Switzerland (5,560 euros) is about three times the median salary in France (1,712 euros). We took advantage of this natural setting and compared these socially mobile French ‘frontier workers’, who achieved a considerably higher socio-economic standing, with the non-mobile French who worked in France.
Our first aim was to test whether mobile individuals (i.e., French frontier workers) are less concerned with the achieved low-status group that they have left (i.e., French workers in France; see Study 1) and with their inherited ingroup as a whole (i.e., French people who live in border regions of Switzerland; see Study 2), compared to non-mobile individuals (i.e., French workers in France). Moreover, we looked at the relevant inherited high-status outgroup (i.e., Swiss workers in Switzerland), who are granted consistency between their inherited and achieved memberships (see Study 1). This design offered the opportunity to assess the extent to which mobile individuals assimilate to the high-status group. Finally, we examined whether the actual achievement of social mobility is a necessary condition to undermine ingroup concern, or whether the mere prospect of undertaking social mobility is sufficient to do so. This was done by measuring non-mobile participants' willingness to engage in social mobility (Study 2). From these general goals, we derived the following two hypotheses: Hypothesis 1 (tested in Study 1) predicts a linear effect of social mobility on the concern for the low-status achieved group. More specifically, we expect non-mobile individuals to express more concern than the mobile individuals, who in turn should express more concern than the high-status inherited group members. We thus aimed to demonstrate that mobile individuals, who achieved a high-status position through individual mobility, express a lower concern for the fate of their inherited group members who did not succeed individually, and that they have the willingness to assimilate with the high-status group. Moreover, we sought to highlight that the high-status group's members can feel threatened by mobile individuals and should therefore express an even lower concern. This would safeguard their ingroup distinctiveness. Hypothesis 2 (tested in Study 2) predicts a linear effect of the stages of social mobility (non-anticipators, anticipators, mobile individuals) on the concern for the inherited low-status group. More specifically, we expect the concern for the low-status inherited ingroup to be highest among non-mobile individuals who do not wish to undertake mobility (i.e., non-anticipators), moderate among non-mobile individuals who strive for mobility (i.e., anticipators), and lowest among individuals who have succeeded in their mobility (i.e., the mobile), the latter being motivated to claim their distinctiveness in the face of the anticipators. Indeed, even if anticipators have a strong desire to improve their social status, they are still part of the non-mobile group (i.e., by being simultaneously members of low-status achieved and inherited groups), and for this reason, they should still express a higher concern for their inherited membership compared to mobile individuals, who can focus on their high-status achieved membership in order to reduce the identity threat associated with their low-status inherited membership. In addition, we aimed to investigate the mechanisms underlying these attitudinal differences. Although the social identity perspective leads to the expectation that self-ingroup distancing derives from a lower identification with the inherited group, this has not been found in past research (Kulich et al., 2015).
We believe that different levels of inherited ingroup identification could have been concealed in previous studies because such studies only looked at non-mobile individuals without making the distinction between anticipators and non-anticipators of social mobility. To move a step forward, our research considered social identification with the inherited group among these two non-mobile subgroups. From this follows our next hypothesis: consistent with assumptions derived from SIT, Hypothesis 3 (tested in Study 2) predicts that the lower ingroup concern among anticipators of social mobility (as compared to non-anticipators), and the even lower concern among mobile individuals, should be explained by a lower identification with the low-status inherited group.

Study 1

Study 1 tested H1, which predicts a linear effect of social mobility on concern for the low-status achieved group. More specifically, we examined this concern among French workers in France (i.e., the non-mobile individuals), French workers in Switzerland (i.e., the mobile individuals), and Swiss workers in Switzerland (i.e., the members of the high-status group). In addition, we also explored the identification patterns associated with these different categories.

Method

Participants

A total of 176 participants (122 women and 54 men, M_age = 34.53, SD_age = 9.23, ranging from 19 to 62 years old) were recruited through social networks and were asked to complete an online questionnaire. One hundred and fifteen participants were French and 61 were Swiss.

Materials and measures

Participants indicated their citizenship and were then presented with a short excerpt in which France and Switzerland were compared on several domains, such as employment rates and average wages. The aim of this introductory part was to emphasize the current socio-economic status gap between the two national groups. Participants were then asked to answer several items, which are listed below in the chronology of their occurrence. Social mobility. The social mobility variable distinguished between three groups of participants. French nationals working in France were the non-mobile group (n = 43). These participants are characterized by their relatively low inherited status (i.e., French as compared to Swiss nationals) along with a low achieved status (i.e., they work in France). French nationals working in Switzerland were the mobile group (n = 72). These participants are characterized by a low inherited status combined with a high achieved status (i.e., French working in Switzerland). Finally, Swiss nationals working in Switzerland (n = 61) were the high-status group, characterized by high inherited and achieved statuses. Identification with the inherited and the achieved groups. Identification with the inherited and the achieved groups was assessed with the 10 items of the self-investment dimension of the hierarchical model of ingroup identification (Leach et al., 2008). Sample items are "I feel a bond with [Ingroup]", "I'm glad to be [Ingroup]", and "I often think about the fact that I am [Ingroup]" (1 fully disagree to 7 fully agree). First, participants were asked to answer these items for their inherited group (i.e., "French people in general" or "Swiss people in general"). The reliability of this scale was satisfactory for both targets (respectively, α = 0.90; M = 4.77, SD = 1.36 for the French, n = 115, and α = 0.89; M = 5.42, SD = 1.03 for the Swiss, n = 61).
Second, they were asked to answer these items for their achieved group (i.e., "workers in France" or "workers in Switzerland"). The reliability of this scale was also satisfactory for both targets (respectively, α = 0.84; M = 4.43, SD = 1.16 for the workers in France, n = 43, and α = 0.84; M = 5.29, SD = 0.96 for the workers in Switzerland, n = 133). Concern for the low-status achieved group. We measured participants' motivation to engage in social action aimed at improving the situation of French nationals who lived in border regions of Switzerland and worked in France. We measured support for both personal involvement and group involvement in social action because this allowed us to capture potential psychological distancing from the group in the form of a simultaneous expression of high support for group involvement and low motivation to get personally engaged. These measures were taken on a 7-point scale, from 1 not at all to 7 totally. Support for group involvement was assessed with two items (e.g., "French people who work in France and live in border areas should fight collectively for financial compensation for the difference they face between the cost of living and the level of their wages", r = 0.64, p < 0.001; M = 4.67, SD = 1.79). Personal involvement was assessed with two items measuring participants' motivation to get personally involved in social action (e.g., "I would be willing to sign a petition to call for more economic support for French people who work in France and live in border regions", r = 0.55, p < 0.001; M = 3.78, SD = 1.87). We also introduced a direct measure of concern for the low-status achieved group with the single item "I feel concerned by the fate of French people living in border areas of Switzerland and working in France" (1 not at all to 7 totally; M = 4.19, SD = 1.95). Socio-demographic information. Finally, participants indicated their gender, their professional status (overall, 67.6% employees, 8.5% entry-level managers, 9.1% middle managers, 8% senior managers, and 6.9% missing data), and their age (M = 34.53, SD = 9.23; ranging from 19 to 62 years old). We also measured the subjective social status of their occupation with two items (i.e., "To what extent do you think that your professional occupation is valued / prestigious in society?") (1 not at all to 7 totally, r = 0.46, p < 0.001, M = 3.98, SD = 1.34). Moreover, we measured the perceived status of working in Switzerland and France with two items ("To what extent do you think that working in France/Switzerland is valorizing?", 7-point scale from 1 not at all to 7 totally, with, respectively, M = 3.34, SD = 1.62 for working in France, and M = 5.29, SD = 1.22 for working in Switzerland).

Preliminary analyses

We performed several analyses in order to detect potential differences on relevant socio-demographic indicators between the three groups of participants. First, we looked at gender, as the occupational gender divide may lead men and women to occupy professions that differ in type and status (e.g., Charles and Grusky, 2004). Chi-square analysis showed that men and women were similarly distributed across the three mobility groups, χ²(2, N = 176) = 3.34, p = 0.18. Moreover, the three groups did not differ in professional status, χ²(6, N = 164) = 8.80, p = 0.18. An ANOVA of the continuous variable measuring the subjective social status of the occupation, with social mobility as a between-participants factor, revealed no effect, F(2,172) = 0.69, p = 0.50, η²p = 0.16.
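As a computational footnote, the reliability coefficients reported for the identification scales above (e.g., α = 0.90) are Cronbach's alphas. The sketch below shows the standard formula on a simulated 176 × 10 rating matrix; the simulation is an assumption standing in for the study's actual data, not a reproduction of it.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(4.5, 1.2, size=(176, 1))                   # shared trait
ratings = np.clip(latent + rng.normal(0, 0.8, (176, 10)), 1, 7)  # 1-7 items
print(f"alpha = {cronbach_alpha(ratings):.2f}")                # high, ~0.9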
We also tested whether the status that the three investigated groups attributed to working in France and in Switzerland differed. In a repeated-measures ANOVA with the two status items as a within-participant factor and the three participant groups as a between-participants factor, we observed that all groups of participants rated working in Switzerland as more valued than working in France, F(1,171) = 144.49, p < 0.001, η²p = 0.46. Furthermore, this effect was qualified by a significant interaction between the two factors, F(2,171) = 4.11, p = 0.018, η²p = 0.05. Although the French workers also believed that working in Switzerland was more valued than working in France (p < 0.001), pairwise comparisons revealed that the French working in France attributed a higher value to working in France than the other two groups did (ps < 0.04). Finally, we tested for age differences in the three mobility groups and observed a marginal effect, F(2,173) = 2.53, p = 0.08, η²p = 0.03. Pairwise comparisons showed that French workers in France (M = 31.81, SD = 6.94) were younger than French (M = 35.28, SD = 9.04, p = 0.051) and Swiss (M = 35.57, SD = 10.54, p = 0.04) workers in Switzerland. No difference was observed between the two groups working in Switzerland (p > 0.85). In light of this unexpected finding, participant age was entered as a covariate in all of the following analyses.

Hypotheses testing

Concern for the low-status achieved group. In order to test H1, which predicts a linear effect of social mobility on concern for the low-status group, we computed two orthogonal contrasts with the social mobility variable. The first contrast (C1) opposed the French workers in France (i.e., non-mobile), coded −1, to the Swiss (i.e., high-status group members), coded 1, with the French workers in Switzerland (i.e., mobile), coded 0, lying in between these groups. The residual contrast (C2) opposed the French workers in Switzerland, coded −2, to the two other groups, the French workers in France and the Swiss, both coded 1. H1 predicted a significant effect of C1 but not of C2, thus highlighting a linear effect of the social mobility variable (Judd et al., 2011). We performed a repeated-measures ANCOVA with the personal and group involvement measures of social action as a within-participant factor, the two orthogonal contrasts as between-participants factors, and age as a covariate. The findings showed a significant main effect of involvement in social action, F(1,172) = 59.57, p < 0.001, η²p = 0.26, such that personal involvement (M = 3.78, SD = 1.87) was lower than group involvement (M = 4.67, SD = 1.79). The analysis further produced an interaction between involvement in social action and C1, F(1,172) = 15.02, p < 0.001, η²p = 0.08, showing that C1 had a significant effect on the motivation to get personally involved in social action, t = −3.61, p < 0.001, but did not significantly impact the group dimension, t = −0.67, p = 0.50. As the interaction between involvement in social action and C2 was not significant, F(1,172) = 0.34, p = 0.56, η²p = 0.002, the C1 effect can be interpreted as a linear effect. As predicted in H1, Swiss participants (M = 3.20, SD = 1.81) were less motivated to get personally involved in social action for the French working in France than were the French workers in France themselves (M = 4.53, SD = 1.83, t = −3.61, p < 0.001), with the French workers in Switzerland (M = 3.82, SD = 1.81) situated between these two groups.
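To make the contrast-coding logic concrete, the sketch below tests C1 and C2 as regressors on simulated personal-involvement scores. It is a minimal illustration, not the authors' analysis script: the group sizes, codes, and means follow the text above, while the response distribution (normal, SD ≈ 1.8) is an assumption.

import numpy as np
import statsmodels.api as sm

C1 = {"non_mobile": -1, "mobile": 0, "swiss": 1}   # linear-trend contrast
C2 = {"non_mobile": 1, "mobile": -2, "swiss": 1}   # residual contrast

rng = np.random.default_rng(2)
groups = np.repeat(["non_mobile", "mobile", "swiss"], [43, 72, 61])
means = {"non_mobile": 4.53, "mobile": 3.82, "swiss": 3.20}   # reported means
y = np.array([rng.normal(means[g], 1.8) for g in groups])     # assumed SD

X = sm.add_constant(np.column_stack([[C1[g] for g in groups],
                                     [C2[g] for g in groups]]))
fit = sm.OLS(y, X).fit()
# A significant C1 coefficient together with a null C2 indicates a linear trend.
print(dict(zip(["intercept", "C1", "C2"], fit.pvalues.round(4))))

The design choice here mirrors the paper's logic: because C1 and C2 jointly exhaust the two between-group degrees of freedom, a reliable C1 with a negligible C2 is what licenses the "linear effect" interpretation.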
The interaction between the involvement measures and age was not significant, F(1,172) = 0.89, p > 0.34, η²p < 0.01. We then conducted an ANOVA on the single item of concern for the low-status achieved group, with the two orthogonal contrasts as between-participants factors and age as a covariate. The pattern of results was quite similar to the one observed for personal involvement in social action. The analysis showed a significant effect of C1, F(1,172) = 37.75, p < 0.001, η²p = 0.18, and a marginal effect of C2, F(1,172) = 3.14, p = 0.078, η²p = 0.018. Thus, as predicted in H1, we observed a linear tendency showing that Swiss participants (M = 3.03, SD = 1.95) expressed a lower concern for the low-status achieved group compared to the non-mobile French (M = 5.16, SD = 1.97). The French workers in Switzerland (M = 4.60, SD = 1.81) were situated between these two groups, and were closer to the French workers in France than to the Swiss. Identification with the inherited and the achieved groups. In order to investigate the identity patterns of the non-mobile, the mobile, and the high-status group members, we performed a repeated-measures ANCOVA with inherited versus achieved identification as a within-participant factor, the three participant groups as a between-participants factor (i.e., corresponding to the social mobility variable), and age as a covariate. Results revealed an interaction between group and identification, F(1,172) = 18.61, p < 0.001, η²p = 0.18 (means are presented in Figure 1, left panel). Pairwise comparisons first indicated that mobile participants identified more strongly with their achieved group than the Swiss (p = 0.01) and the non-mobile French (p < 0.001), while the Swiss identified more strongly with their achieved group than the non-mobile participants (p = 0.002), F(2,172) = 15.47, p < 0.001, η²p = 0.15. Second, we also observed that the Swiss identified more strongly with their inherited group than the non-mobile (p = 0.009) and mobile French (p = 0.003), while no difference was observed between these last two groups (p = 0.99), F(2,172) = 5.60, p < 0.01, η²p = 0.06. We then investigated this interaction by focusing on the difference between identification with the inherited group and identification with the achieved group. Findings revealed that mobile participants identified more strongly with the achieved than the inherited group, F(1,172) = 27.68, p < 0.001, η²p = 0.14, while the Swiss and the non-mobile French showed a reversed pattern and identified more strongly with the inherited than the achieved group, with, respectively, F(1,172) = 6.60, p = 0.01, η²p = 0.04 (for the Swiss), and F(1,172) = 3.69, p = 0.056, η²p = 0.02 (for the non-mobile French).

Discussion

Consistent with Hypothesis 1, Swiss participants reported lower concern and lower motivation to get personally involved in social action for the French working in border regions of Switzerland, compared to the French working in these regions themselves. Moreover, the attitudes of the frontier workers (i.e., mobile individuals) were situated in between these two groups' attitudes. Thus, we observed that even if the mobile French appeared to distance themselves from the low-status achieved group, they did not fully assimilate to the Swiss high-status group members. Indeed, they still expressed a higher concern and a higher motivation to get involved in social action than the Swiss participants reported.
This discrepancy between the mobile French and the Swiss participants can be interpreted through the lens of SIT (Tajfel and Turner, 1986). More specifically, considering that the Swiss constituted an outgroup on both identity dimensions (i.e., inherited and achieved), it is expected that they show less support for the French, as compared to the French themselves (either mobile or non-mobile). From the perspective of mobile individuals, French workers in France also belong to an outgroup, but on the achieved dimension exclusively. Consistent with previous research (e.g., Kulich et al., 2015), these findings highlight the negative impact of social mobility on attitudes toward the ingroup. They also provide evidence that an assimilative dynamic toward the high-status group can be a consequence of the social mobility process. Indeed, we observed that even if French mobile participants effectively expressed less concern than their non-mobile counterparts, they still appeared more concerned than the high-status group members (i.e., the Swiss). As discussed in the Introduction, we argue that this attitudinal difference between French mobile and Swiss participants may also have been due to a motivation of Swiss participants to maintain their distinctiveness, potentially threatened by the arrival of mobile individuals. Of interest, we observed different degrees of involvement in social action for the low-status achieved group on the individual and the group level. Although we found the predicted negative impact of social mobility on the motivation to get personally involved in social action, we did not observe any differences in the motivation for group involvement. These findings suggest that individuals, regardless of their status, acknowledged the disadvantaged conditions endured by the French working in border regions of Switzerland, and that they were favorable toward group involvement in social action. If we consider the normative pro-egalitarian context of contemporary societies, we can understand such support for group involvement as a socially valued opinion reflecting shared conformity to social norms. Concerning the identification dimension, we observed a significantly higher identification with the achieved group among mobile French and Swiss workers compared to non-mobile French workers. Consistent with Kulich et al. (2015), this result indicates that mobile participants clearly focus on their achieved, higher-valued identity and distance themselves from their low-status group, as was also observed on the attitudinal dimension. It is also consistent with the literature showing that individuals identify more strongly with ingroups that are more socially valued (e.g., Ellemers et al., 1990; Roccas, 2003). Moreover, still in line with the findings of Kulich et al. (2015), we did not observe any difference in identification with the inherited group between mobile and non-mobile participants. This suggests that the mobile keep their ties with their inherited group and manage their social mobility through an increase in identification with the new high-status achieved group. Nevertheless, as we claimed in the Introduction, we believe that the absence of an effect on identification with the inherited group may be due to the fact that the non-mobile group is quite heterogeneous in its members' desire to engage in mobility in the future. Study 2 will address this issue.
Finally, comparison of the inherited identity patterns of the Swiss nationals and the mobile individuals revealed a higher identification with the inherited identity by the Swiss compared to the two French groups. This is not surprising, as the Swiss have a more positive inherited identity in this intergroup context. Moreover, we observed that the Swiss revealed a preference for their inherited identity as compared to their achieved identity, which was the exact opposite of the pattern observed for the mobile. The difference between the two identification levels was considerably smaller for the Swiss than for the mobile French. This suggests that the mobile French were motivated to emphasize their higher-valued identity and to distance it from the lower-valued one. The Swiss, although to a smaller extent, focused more on the inherited than the achieved identity. This may be because it is their inherited identity that clearly differentiates them from the French mobile individuals, and so fulfills their need for a positive, but also distinct, social identity (Tajfel and Turner, 1986). Indeed, their achieved identity is more malleable and also shared by a portion of the French, and it may thus be considered as less important.

Study 2

In Study 2, we focused on French nationals in order to examine concern for the inherited low-status group at different stages of the social mobility process. We used the same setting as in Study 1, with three modifications. The main modification consisted in distinguishing, among the French working in France, between those who anticipated social mobility by expressing the desire to work in Switzerland in the future (i.e., mobility anticipators) and those who did not (i.e., non-anticipators). Second, the target of the involvement dependent measure was the inherited low-status group (and not the achieved low-status group). Third, we measured prejudice toward the mobile group. The aim was to test if the mere anticipation of social mobility is sufficient to produce a tendency toward self-ingroup distancing, a phenomenon that should be most prominent among the mobile. As predicted in H2, we expected a linear effect of social mobility on concern for the low-status inherited group, showing the highest concern among non-anticipating individuals, a moderate concern among anticipators, and the lowest concern among mobile individuals. In addition, we investigated whether the differentiation of mobility anticipating, mobility non-anticipating, and mobile individuals revealed different levels of inherited group identification. As predicted in H3, such differences should, in turn, account for the gap in these groups' ingroup attitudes.

Participants and procedure

Participants were 216 French nationals (137 women and 79 men, Mage = 34.54, SDage = 10.16, ranging from 20 to 61 years of age) living in border regions of Switzerland. We used the same recruitment procedure as in Study 1. After participants reported their nationality, we presented a short introductory text in order to prime the status gap between French nationals working in France (low-status achieved group) and French nationals working in Switzerland (high-status achieved group). Following this, participants indicated the country of their employment. Then, participants proceeded to the measures outlined below in chronological order.

Measures

Social mobility. We distinguished between three groups at different social mobility stages. Participants who worked in Switzerland were categorized as mobile (n = 95).
Participants who worked in France (n = 121) were asked to report the extent to which they would like to work in Switzerland in the future (1 not at all to 7 totally; M = 4.69, SD = 2.32). A median-split on the responses to this question (median = 5) provided two subgroups of participants: the mobility anticipators (n = 58), who expressed a strong desire to work in Switzerland (M = 6.76, SD = 0.43), and the non-anticipators (n = 63; M = 2.78, SD = 1.61), who reported a lower desire for mobility. Thus, the mobile and the anticipators can be considered as "psychologically mobile" because they either are, or are considering becoming, mobile, whereas the non-anticipators are not.

Identification with the inherited and the achieved groups. As in Study 1, participants' identification with the inherited and the achieved groups was measured with Leach et al.'s (2008) identification scale. Participants were first asked to state their identification with the inherited group (i.e., French people in general; α = 0.93; M = 4.38, SD = 1.44), and then with the achieved group (i.e., workers in France for anticipators and non-anticipators, α = 0.92, M = 4.10, SD = 1.43; or workers in Switzerland for mobile participants, α = 0.89; M = 5.40, SD = 1.11).

Concern for the inherited group. To assess ingroup concern, we measured participants' motivation to get involved in actions aimed to improve the situation of French people living in border regions of Switzerland. As in the first study, we used two items indicating support for group involvement in collective action (e.g., "The French should unite and show solidarity with each other to collectively fight against a decline in their standard of living"; r = 0.64, p < 0.001; M = 4.67, SD = 1.79), and two items indicating the motivation to get personally involved in collective action (e.g., "I would be willing to get personally involved to improve the economic and social situation of the French in a precarious situation (e.g., pay more taxes)"; r = 0.55, p < 0.001; M = 3.78, SD = 1.87). Both constructs were measured with 7-point scales from 1 not at all to 7 totally.

Prejudice toward frontier workers. In addition, we measured participants' prejudice toward frontier workers with four items. Sample items are: "Because of their special status, frontier workers should pay a solidarity tax to help French people living in border areas and working in France, who are suffering from rising prices (e.g., estate market)" and "Frontier workers only think of their own interest and often forget their origins" (α = 0.71; M = 3.68, SD = 1.58).

Socio-demographic information. Finally, participants reported their gender, their professional status (62.5% employees, 8.8% entry-level managers, 12% middle managers, 4.6% senior managers, and 12% missing data), their age, and the subjective status of their occupation (same two items as in Study 1; r = 0.36, p < 0.001, M = 3.86, SD = 1.34).

Preliminary analyses

As in Study 1, we conducted preliminary analyses in order to examine potential socio-demographic differences between the three social mobility groups. A chi-square test showed that gender was not similarly distributed across the three groups, χ²(2, N = 216) = 10.26, p = 0.006, Cramer's V = 0.22. Whilst men and women were equally represented among mobile participants, the sample showed an overrepresentation of women among the non-anticipators (73% women vs. 27% men) and the anticipators (72.4% women vs. 27.6% men).
A further chi-square analysis showed no differences in terms of professional status between the three groups, χ²(6, N = 190) = 2.80, p = 0.83. An ANOVA testing the effect of the three groups on the subjective professional status of the participants' occupation revealed no significant effect, F(2,216) = 2.00, p = 0.14, ηp² = 0.02. Finally, an ANOVA showed an effect of participants' social mobility stage on age, F(2,216) = 6.28, p = 0.002, ηp² = 0.07, revealing that non-anticipators (M = 37.75, SD = 11.28) were significantly older than anticipators (M = 30.47, SD = 8.88, p < 0.001), and marginally older than mobile participants (M = 34.91, SD = 9.33, p = 0.08). The latter group was also older than the anticipators (p = 0.007). Based on these results, we included gender and age as covariates in all the following analyses.²

² …identification did not affect the pattern of results, thus we have not reported the details here.

Hypotheses testing

Concern for the inherited ingroup. In order to test H2, which predicts a linear effect of social mobility, we computed two orthogonal contrasts with the social mobility variable. The first contrast (C1) opposed the non-anticipators of social mobility (i.e., French workers in France who do not wish to work in Switzerland in the future), coded −1, to the mobile (i.e., French workers in Switzerland), coded 1. Anticipators (i.e., French workers in France who wish to work in Switzerland) were coded 0 and were thus situated between the two former groups. The residual contrast (C2) tested differences between the anticipators, coded −2, and the two other groups, the non-anticipators and the mobile, both coded 1. As stated in H2, we predicted a significant effect of C1 but not of C2, thus highlighting a linear effect of our social mobility variable (Judd et al., 2011); an illustrative sketch of this contrast coding is given below. We conducted a repeated-measures ANCOVA with the two orthogonal contrasts as between-participants factors, involvement in collective action (group versus personal involvement) as a within-participant factor, and age and gender (coded women −1 and men 1) as covariates. The findings first showed a main effect of involvement, F(1,211) = 165.28, p < 0.001, ηp² = 0.44. Participants reported greater support for group involvement in collective action (M = 5.55, SD = 1.31) than for personal involvement (M = 3.96, SD = 1.67). The interaction between the involvement dimensions and C1 was also significant, F(1,211) = 4.61, p = 0.03, ηp² = 0.02; while C1 had a significant impact on the personal dimension of collective action, t = −2.62, p < 0.01, its impact on the group dimension was not significant, t = −0.55, p > 0.58. Moreover, we also observed a significant interaction between the involvement dimensions and C2, F(1,211) = 6.45, p = 0.01, ηp² = 0.03, showing a significant impact of C2 on the group dimension of collective action, t = −2.26, p = 0.02, indicating that the anticipators expressed higher support for collective action from the group (M = 5.84, SD = 1.20) than the non-anticipators and the mobile combined (respectively, M = 5.46, SD = 1.23 and M = 5.38, SD = 1.40). As C2 was not significant for the personal dimension, the effect of C1 on the personal dimension can be interpreted as a linear effect: As expected in H2, the non-anticipators (M = 4.40, SD = 1.63) were more motivated to get personally involved in collective action than the mobile (M = 3.69, SD = 1.73), with the anticipators (M = 3.92, SD = 1.55) situated between these two groups.
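To make the contrast coding above concrete, the following sketch (our illustrative addition, not the authors' analysis script) fits the same C1/C2 coding in an OLS model with statsmodels. The group sizes, variable names, and simulated score distributions are assumptions chosen only for illustration.

```python
# Illustrative sketch (simulated data): orthogonal contrast coding C1/C2
# for the three mobility groups, fitted with covariates as in the ANCOVA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# (C1, C2) codes: C1 tests the linear trend, C2 the residual contrast.
codes = {"non_anticipator": (-1, 1), "anticipator": (0, -2), "mobile": (1, 1)}
rows = []
for group, (c1, c2) in codes.items():
    for _ in range(60):
        rows.append({"c1": c1, "c2": c2,
                     # Simulated 1-7 involvement score, lower for mobile (C1 = 1)
                     "involve": float(np.clip(rng.normal(4.2 - 0.4 * c1, 1.6), 1, 7)),
                     "age": rng.normal(35, 10),
                     "gender": rng.choice([-1, 1])})  # women -1, men 1
df = pd.DataFrame(rows)

# A significant c1 coefficient with a non-significant c2 coefficient is the
# pattern the authors interpret as a linear effect of social mobility.
model = smf.ols("involve ~ c1 + c2 + age + gender", data=df).fit()
print(model.summary().tables[1])
```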
Finally, the analysis produced a significant interaction between gender and involvement, F(1,211) = 4.09, p = 0.04, ηp² = 0.02. However, none of the pairwise comparisons reached significance for this interaction (ps > 0.27, ηp² < 0.006). All other effects were non-significant (ps > 0.29, ηp² < 0.005).

Prejudice toward the frontier worker status. We performed an ANCOVA on the prejudice expressed toward frontier workers with the two contrasts as between-participants factors, and gender and age as covariates. The findings revealed a main effect of C1, F(1,216) = 41.73, p < 0.001, ηp² = 0.16, but not of C2, F(1,216) = 1.44, p = 0.23, ηp² < 0.01. Non-anticipators (M = 4.40, SD = 1.54) reported more prejudice toward frontier workers than mobile participants (M = 2.95, SD = 1.28), with anticipators (M = 4.09, SD = 1.60) situated between these two groups. Consistent with H2, we therefore observed a linear effect of the social mobility variable on prejudice toward the frontier worker status.

Identification with the inherited and the achieved groups. In order to test H3, which predicts that inherited identification should explain anticipators' and mobile participants' lower ingroup concern (i.e., compared to non-anticipators), we performed a PROCESS Model 4 mediation analysis, using 10,000 bootstrapped samples, following Hayes's (2013) recommendations. The model included C1 (non-anticipators versus anticipators versus mobile) as a predictor, personal involvement in collective action as the dependent variable, and identification with the inherited group as the potential mediator, controlling for C2 (anticipators versus non-anticipators and mobile participants), gender, and age (see full results in Table 1). The analysis revealed a significant effect of mobility stage (i.e., C1) on identification with the inherited group (path a: B = −0.34, SE = 0.12, p = 0.007), which in turn was positively associated with personal involvement (path b: B = 0.30, SE = 0.08, p < 0.001). Moreover, identification with the inherited group proved to be a significant mediator, CI 95% [−0.23, −0.02], of the relation between social mobility (C1) and the motivation to get personally involved in collective action for the ingroup. The direct effect of C1 became marginally significant when controlling for the mediator (path c': B = −0.28, SE = 0.14, p = 0.051). A Sobel test confirmed that the difference between path c and path c' was significantly different from 0 for the indirect effect of identification with the inherited group, z = −2.16, p = 0.03, corroborating the mediating role of identification with the inherited group. In sum, this study provides evidence of an identity discount strategy in social mobility trajectories. Findings are graphically represented in Figure 2 (see also the illustrative bootstrap sketch below). In addition, in order to test the replicability of the identification pattern observed among non-mobile and mobile participants in Study 1, we performed a repeated-measures ANCOVA with inherited versus achieved identification as a within-participant factor and the three mobility groups as a between-participants factor, controlling for age and gender. Means are displayed in the right panel of Figure 1. Results revealed only an interaction between group and identification, F(1,211) = 31.5, p < 0.001, ηp² = 0.23.
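For readers who want to reproduce the logic of the bootstrapped indirect effect without the PROCESS macro, here is a minimal sketch on simulated data. It omits the C2, gender, and age covariates for brevity, and all names and numeric values are our assumptions rather than the study's data.

```python
# Illustrative sketch (simulated data): percentile bootstrap of the indirect
# effect a*b, mirroring the logic of a PROCESS Model 4 mediation analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 216
c1 = rng.choice([-1.0, 0.0, 1.0], n)              # mobility-stage contrast
ident = 4.5 - 0.34 * c1 + rng.normal(0, 1.3, n)   # inherited identification (mediator)
involve = 2.5 + 0.30 * ident - 0.28 * c1 + rng.normal(0, 1.5, n)  # outcome

def indirect(idx):
    """Indirect effect a*b estimated on one (re)sample of row indices."""
    x, m, y = c1[idx], ident[idx], involve[idx]
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # path a
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # path b
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```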
Pairwise comparisons first showed that, for achieved identification [F(2,211) = 39.34, p < 0.001, ηp² = 0.27], mobile participants identified more strongly with the achieved group than the non-anticipators (p = 0.007) and the anticipators (p < 0.001), and the anticipators identified less than the non-anticipators (p < 0.001). Second, pairwise comparisons showed for inherited identification [F(2,211) = 4.32, p = 0.01, ηp² = 0.04] that while anticipators and mobile participants identified to a similar extent (p = 0.96), they both identified less with their inherited group than non-anticipators (p = 0.01 for the anticipators, and p = 0.007 for the mobile). Finally, we also examined the discrepancy between identification with the inherited group and identification with the achieved group. Findings showed that the mobile identified more with the achieved than with the inherited group, F(1,211) = 61.58, p < 0.001, ηp² = 0.23, and that the reverse pattern occurred for the anticipators, F(1,211) = 11.6, p = 0.001, ηp² = 0.05. No difference between identification with the achieved and the inherited groups was observed for the non-anticipators, F(1,211) = 0.28, p = 0.60, ηp² = 0.001.

Table 1 note: C1, Non-anticipators (coded −1) vs. Mobile (coded 1; with Anticipators coded 0); C2, Anticipators (coded −2) vs. Non-anticipators and Mobile (both coded 1). Estimates are unstandardized. †p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

Discussion

Study 2 investigated the role of the mobility stage in self-distancing from the inherited ingroup, by looking at ingroup attitudes and identification. The novelty of Study 2 is that it took a more fine-grained perspective on the non-mobile group by distinguishing between those who desired to engage in social mobility in the future and those who did not. Such an analysis provided preliminary insight into the crucial role of the mobility stage in ingroup attitudes and identification. On the attitudinal dimension, we observed different effects of social mobility on support for group involvement in collective action and on the motivation to get personally involved in it. First, results revealed an unexpected effect: anticipators of social mobility expressed greater support for group involvement in collective action compared to the two other groups aggregated. This finding highlights the particular dissatisfaction of anticipators regarding the fate of their inherited membership. In line with our expectations, results on personal involvement in collective action further revealed that anticipators preferred to focus on their personal trajectory in order to improve their chances of enhancing their social identity rather than to join the group in its claim. This result is consistent with Taylor and McKirnan's (1984) model of social mobility stages, arguing that low-status group members would only act collectively if they had failed to individually mobilize. Indeed, as predicted by H2, even if anticipators were more motivated to get personally involved compared to the mobile, they were less motivated compared to the non-mobile, preferring to focus on an individualistic strategy to improve the value of their social identity. Consistent with previous evidence in the literature showing a negative impact of the social mobility process on ingroup concern (Derks et al., 2011a,b, 2015; Kulich et al., 2015), mobile individuals showed the weakest levels of personal involvement.
Finally, moving a step further, the present findings demonstrated that the mere anticipation of social mobility is sufficient for triggering a decrease in ingroup concern. We thus argue that experiencing the socialization process of social mobility per se is not a necessary condition for lower ingroup concern, but that imagining the possibility of being socially mobile, thus a purely psychological process, is sufficient to engage in attitudinal change. In this study, we also assessed prejudice toward the mobile group. In parallel to what was observed on ingroup concern, the findings showed that prejudice toward frontier workers was also contingent on the social mobility stage. Again, we observed a linear effect of social mobility, such that non-mobile participants expressed higher prejudice toward frontier workers than mobile individuals, with the anticipators of social mobility situated between these two groups. Although non-mobile participants had to evaluate their most direct and relevant outgroup, individuals who anticipated social mobility expressed less prejudice toward this group, compared to the non-anticipators. Consistent with Merton's theorization concerning anticipatory socialization, these findings highlight the positive orientation individuals develop toward an outgroup they aspire to belong to (Merton, 1968). Moreover, the fact that non-anticipators demonstrated a higher level of prejudice toward frontier workers may be related to the previous literature investigating reactions toward deviance, and particularly the "black sheep effect" (Marques et al., 1988; Pinto et al., 2010). According to this literature, deviance tends to be more severely punished when it comes from an ingroup member than when it comes from an outgroup member. Individuals indeed perceive the deviant's behavior as threatening to the identity of the ingroup. By rejecting this behavior and its actor, they reaffirm the ingroup's standards and contribute to the longevity of the group. Thus, it is not surprising that non-anticipators had unfavorable attitudes toward frontier workers, who are ultimately perceived as betrayers, preferring to improve their own status while the whole group continues to suffer from inferior conditions (Blair and Jost, 2003).

FIGURE 2 | Unstandardized regression coefficients for the relationship between mobility stage (Contrast 1) and the motivation to get personally involved in collective action for the inherited group as mediated by identification with the inherited group, controlling for C2, age and gender (Study 2). The coefficients in parentheses correspond to the total effect (path c). †p < 0.10, **p < 0.01, ***p < 0.001.

As for group identification, novel insights were obtained through the distinction between non-anticipators and anticipators in the non-mobile group. Indeed, as expected in H3, the linear decrease of ingroup concern observed throughout social mobility stages was accounted for by a lower identification with the inherited group. Such a finding suggests an identity discount strategy, as derived from SIT assumptions. This strategy points to individuals who distance themselves from their inherited low-status ingroup on both the attitudinal and the identification dimensions (Ellemers, 2001).
Extending past research (e.g., Derks et al., 2011b, 2015; Kulich et al., 2015), the findings from Study 2 illustrate the willingness of individuals to increase their chances of attaining a better-valued social identity through individual mobility, despite the fact that they are unable to actually part with their low-status membership. Moreover, by revealing similar levels of ingroup identification among anticipators and mobile individuals (even though these two groups still differed in their ingroup concern), our data support the idea that ingroup identification plays a crucial role in the process of social mobility (Tajfel and Turner, 1986; Ellemers, 2001). This is also in line with Taylor and McKirnan's (1984) claim that members of low-status groups who perceive themselves as competent, and so as non-prototypical of their low-status membership, will try everything possible to dissociate themselves from this group. Consistently, Study 2 revealed the same identification pattern for the mobile French as in Study 1. The mobile identified more with their higher-valued achieved group than with their lower-valued inherited group. As for the non-mobile individuals, their identification pattern strongly depended on their desire to engage in social mobility. Indeed, those who wished to be socially mobile were less identified with both the inherited and the achieved groups compared to the non-anticipators. These results thus highlight the conflict anticipators may feel between their desire to improve their condition and their actual low-status memberships (Lenski, 1966). Moreover, the non-anticipators were more identified with the inherited group compared to the mobile and the anticipators. Thus, the absence of a difference in inherited identification between non-mobile and mobile participants in Study 1, as well as in Kulich et al. (2015), may have been due to the fact that all non-mobile individuals were treated as one group, thereby mixing two groups (anticipators and non-anticipators) with very different identification patterns. In support of this reasoning, the marginal difference observed in Study 1 between inherited and achieved identification became significant only for the anticipators in Study 2. In sum, it seems that no distancing in identification occurs for non-mobile non-anticipators, but a clear motivation to distance the two identities arises for anticipators. The correlational nature of the present research limits the interpretation of causal relationships between social mobility, attitudes and identification. This study only shows results from anticipators and mobile individuals, but not the actual process of people who move from the anticipator to the mobile stage. Thus, at least two different mechanisms could be responsible for the patterns observed for anticipators. First, they may start to disidentify from the unwanted achieved group in order to replace it by a higher identification with the new high-status group as soon as they actually successfully engage in social mobility. Interestingly, in line with this idea, we observed that anticipators showed in Study 2 the same identification with the inherited group as the mobile participants. As previously discussed, this may describe a psychological strategy through which anticipators are adapting to a potential social mobility rather than behaving like other non-mobile individuals.
According to Sidanius and Pratto (1999), such an individual orientation rests on meritocratic beliefs, which they conceptualize as legitimating myths that contribute to protecting the social hierarchy by valuing individualistic behavior and strengthening the unequal treatment of members of the two groups. Moreover, this is also in line with the theorization of Merton (1968) and the results we observed on attitudes toward frontier workers in Study 2. Second, an alternative explanation that cannot be completely discarded is that anticipators and mobile individuals differ from the non-mobile due to previous experiences or socialization processes. Controlling for all the possible differences in socialization, attitudes, and employment histories that may exist between the three investigated French groups is an important but difficult task. Future research should thus aim at experimentally manipulating social mobility in order to investigate its impact on attitudes and identification. Longitudinal studies could also be informative, as they would allow researchers to assess actual changes in attitudes and (dis)identification patterns. In addition, further research is needed to determine the extent to which the present findings on ingroup concern can be generalized to broader intergroup attitudes, such as ingroup bias and prejudice expression. Indeed, the measures we used in this research were targeting the personal interests of individuals (e.g., the reduction of the costs of life in the French border regions of Switzerland), thus maximizing the differences between the three groups of participants. Another limitation of this study is its inability to provide insights concerning the consequences of such an identity discount strategy on the quality of the relationship mobile individuals maintain with their inherited group members. Indeed, it is reasonable to think that by being simultaneously weakly identified with the inherited ingroup and expressing lower concern for it, mobility anticipators take the risk of being judged as disloyal (Blair and Jost, 2003). This may in turn motivate ingroup members to devalue them as a way to punish the deviance and at the same time to reaffirm the norm. Paralleling this idea, research demonstrates that when women attain high-responsibility positions in the workplace, meaning that they successfully achieved social mobility in a male-dominated domain, they are perceived as less communal than women in general (see for example Rudman and Glick, 2001). Research conducted by Heilman and Okimoto (2007) illustrated that unexpected (i.e., gender-incongruent) competence demonstration by agentic women led to punishment in hiring procedures. However, by adding information reaffirming the communality of the female candidates (i.e., by emphasizing their mother status), the selection bias against agentic women decreased as they appeared more stereotypical. Consistently, Phelan et al. (2008) showed that whereas perceived competence was the most important factor predicting the selection of candidates in hiring procedures, the criteria of selection shifted when it came to agentic female candidates: Rather than being evaluated based on their competence, they were evaluated based on their social skills, thus being punished if they did not live up to expectations that women should be socially skilled.

GENERAL DISCUSSION

This research was aimed at providing further insights about the deleterious impact of upward mobility on attitudes toward the inherited low-status ingroup.
In Study 1, we observed that socially mobile individuals expressed negative attitudes toward non-mobile group members, simultaneously with a higher identification with their achieved group. In parallel, inherited identification did not differ between these groups, a pattern that illustrates their desire to assimilate to the high-status group. On the one hand, mobile individuals have the willingness to improve their social condition, but on the other hand they are bound by the inevitable membership in a low-status group. Of interest, Van Laar et al. (2014) showed that members of low-status groups and those of high-status groups do not offer support for mobile individuals under the same conditions. On the one hand, high-status group members appear to be more sensitive to the behavioral dimension than the affective one, preferring mobile individuals who did not behave as a prototype of their low-status ingroup, regardless of their identification with this group. On the other hand, low-status group members prefer mobile individuals who keep a strong identification with their ingroup, regardless of their level of behavioral prototypicality or their competence (Campos et al., 2016). In light of these recent works, the negative ingroup attitudes expressed in the present research by the French mobile, together with the expression of affective proximity with this group, can be understood as a strategy to increase their integration into the high-status group without breaking the affective ties they have with their inherited ingroup. Extending these findings to actual social mobility, Study 2 further examined the impact of the willingness to engage in social mobility among individuals who have so far been non-mobile. Of interest, results showed that the mere anticipation of social mobility was sufficient to produce a lower concern for the ingroup. In addition, this tendency appeared to be due to an identity discount strategy (Tajfel and Turner, 1986; Ellemers, 2001). Indeed, we observed that the lower ingroup concern expressed by the mobile and, to a lesser extent, by the anticipators was accounted for by their lower identification with their inherited group. Therefore, by distinguishing between non-mobile individuals based on their willingness to undertake mobility, we provided further insights about the process of social mobility. Despite recent findings illustrating the maintenance of identification with the low-status group and its coexistence with high levels of identification with the high-status group (Kulich et al., 2015), our research rather emphasizes an identity discount strategy as a privileged way for mobile individuals to cope with their status-inconsistent identity configuration. Nevertheless, little is known about the mechanisms leading social mobility anticipators, as well as mobile individuals, to engage in such a coping strategy. Further investigations are thus needed in order to unravel such social dynamics and to identify precisely the conditions favoring the expression of such an identity discount. In summary, it seems that individuals anticipating upward mobility follow the principle "I want, therefore I am". Indeed, they already start to dissociate from their low-status group, not only through their (more negative) attitudes toward it, but also through their level of identification with the inherited group, as characterized by the identity discount strategy uncovered in this research.
These attitudinal and identity discounts allow them to reduce the dissonance they may experience as a result of the asymmetrical statuses of their different group memberships. The felt dissonance could even be stronger than the one experienced by mobile individuals, because of the coexistence of their motivation to improve their condition and the inescapable nature of their inherited membership.

ETHICS STATEMENT

This study was carried out in accordance with the recommendations of the 'Ethical code concerning research at the Faculty of Psychology and Educational Sciences at the University of Geneva, Ethical Commission', with written informed consent from all subjects. In all studies, participants ticked a box before starting the survey and after finishing it, indicating their informed consent and agreement to the use of their responses for research purposes. The protocol was approved by the 'Ethical Commission of the Faculty of Psychology and Educational Sciences at the University of Geneva'.
Optimization of Waste Plastics Gasification Process Using Aspen-Plus

Introduction

In this era of a plastics-dominated world, it remains a fact that there exists an ever-increasing margin between the volume of waste plastics generated and the volume recycled [1]. Of the total plastic waste, recyclable thermoplastics like polyethylene, polystyrene, polypropylene and PVC account for nearly 78% of the total, and the rest is composed of non-recyclable thermosets like epoxy resins and polyurethane [2]. Typically, plastics waste management is practiced according to the following hierarchical order: reduction, reuse, recycling, and finally energy recovery. Although reuse of plastics seems to be the best option to reduce plastic wastes, it becomes unsuitable beyond a certain number of cycles due to the degradation of the plastic. Mechanical recycling of plastics involves significant costs related to collection and segregation, and is not recommended for the food and pharmaceutical industries. While chemical recycling focuses on converting waste plastics into other gaseous or liquid chemicals that act as a feedstock for many petrochemical processes, energy recovery utilizes the stored calorific value of the plastics to generate heat energy to be used in various plant operations. Moreover, since plastic wastes always consist of a mixture of various polymeric substances, chemical recycling and energy recovery seem to be the best possible solutions, both in terms of economic and technological considerations. One of the major processes of chemical recycling involves thermal treatment of the waste plastics. The inevitable shift in the world's energy paradigm from a carbon-based to a hydrogen-based economy has revolutionized the capabilities of thermal treatment processes, viz. combustion, gasification and pyrolysis, in particular the latter two techniques. In fact, recent technical investigations on novel municipal solid waste (MSW) management methods reveal that a combined gasification and pyrolysis technique is more energy efficient and environmentally friendly than other processes [3]. In general, the process of gasification for energy extraction from a solid carbon source involves three simultaneous or competing reactions, namely combustion, pyrolysis and gasification. The partial combustion of the solid fuel creates an oxygen-devoid, high-temperature condition within the reactor which promotes the pyrolysis reaction, breaking the fuel into products that are a mixture of char and volatiles containing short- and long-chain hydrocarbons. The presence of a gasifying agent (steam) drives the water-gas shift reaction, converting the carbon sources into a mixture of valuable chemicals, tar, fuel gases and some residual particulate matter. The products undergo various downstream operations in order to separate and purify the valuable gaseous products that are later utilized for energy generation. This autothermal feature makes the gasification process an economically viable and efficient technique for recovery of energy from waste plastics. Commercial-scale gasification is practiced in batch, semi-batch and continuous modes of operation, depending upon the processing capacity of the plant. Typically, a plant processing large throughput utilizes fluidized beds due to advantages such as enhanced gas-solid contact, excellent mixing characteristics [4], operating flexibility [5], and ease of solids handling [6] that lead to a better overall gasification efficiency.
Fluid beds are preferred as they offer high heat and mass transfer rates and a constant reaction temperature, which results in a uniform product spectrum in a short residence time. It is important to maintain the good fluidization characteristics of the bed, since the introduction of material with properties different from those of the original bed components affects the quality of fluidization. Introduction of plastic material into fluidized beds demands additional attention due to its softening nature and the possibility of blocking the feeding line. As soon as the plastic enters the hot reaction zone, it is thermally cracked and undergoes a continuous structural change until it is eliminated from the bed. The sequence of interaction between the inert particles in the fluidized bed and the plastic material has been described by Mastellone et al. [7]. Gas-solid fluidization is the operation by which a bed of solid particles is led into a fluid-like state through suspension in a gas. Large-scale gasifiers employ one of two types of fluidized bed configurations: the bubbling fluidized bed and the circulating fluidized bed. A bubbling fluidized bed (BFB) consists of fine, inert particles of sand or alumina, which are selected based on the suitability of their physical properties, such as size, density and thermal characteristics. The fluidizing medium, typically a combination of air/nitrogen and steam, is introduced from the bottom of the reactor at a flow rate specified so as to maintain the bed in a fluidized condition (a sketch for estimating the minimum fluidization velocity is given after this paragraph). The reactor section between the bed and the freeboard is designed to expand progressively so as to reduce the superficial gas velocity, which prevents solid entrainment, and to act as a disengaging zone. A cyclone is provided at the end of the fluidized bed either to return fines to the bed or to remove fines from the system. The plastic waste is introduced into the fluidized bed at a specified location, either over-bed or in-bed, using an appropriately designed feeding system. Pyrolysis experiments by Mastellone et al. [7] have shown that when the feed is introduced over the bed (from the freeboard region), it results in uniform surface contact with the bed material, thus enhancing transfer properties. The bed is generally pre-heated to the startup temperature either by direct or indirect heating. After the bed reaches the ignition temperature, plastic wastes are slowly introduced into the bed to raise the bed temperature to the desired operating temperature, which is normally in the range of 700-900 °C. The plastic wastes are simultaneously pyrolyzed as well as partially combusted. The exothermic combustion reaction provides the energy to sustain the bed temperature and promote the pyrolysis reactions. One of the main disadvantages of the fluidized bed is the formation of large bubbles at higher gas velocities that bypass the bed, reducing transfer rates significantly. If the gas flow of a bubbling fluidized bed is increased, the gas bubbles become larger, forming large voids in the bed and entraining substantial amounts of solids. The bubbles basically disappear in a circulating fluidized bed (CFB), and in a CFB the solids are separated from the gas using a cyclone and returned to the bed, forming a solids circulation loop. A CFB can be differentiated from a BFB in that there is no distinct separation between the dense solids zone and the dilute solids zone.
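As a rough check on the fluidizing flow rate mentioned above, the following minimal sketch estimates the minimum fluidization velocity with the widely used Wen and Yu correlation. The correlation choice and the sand/hot-air property values are our assumptions for illustration, not parameters taken from this chapter.

```python
# Illustrative sketch: minimum fluidization velocity via the Wen & Yu correlation.
import math

def u_mf_wen_yu(d_p, rho_s, rho_g, mu, g=9.81):
    """Minimum fluidization velocity [m/s].

    d_p   : particle diameter [m]
    rho_s : particle density [kg/m^3]
    rho_g : gas density [kg/m^3]
    mu    : gas dynamic viscosity [Pa.s]
    """
    # Archimedes number for the gas-particle system
    ar = d_p**3 * rho_g * (rho_s - rho_g) * g / mu**2
    # Wen & Yu: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)

# Assumed example: 500 um silica sand fluidized by air at ~800 C and 1 atm.
print(u_mf_wen_yu(d_p=500e-6, rho_s=2600.0, rho_g=0.33, mu=4.4e-5))  # ~0.09 m/s
```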
The residence time of the solids in the circulating fluid bed is determined by the solids circulation rate, the attrition of the solids and the collection efficiency of the solids in the cyclones. The advantage of circulating fluidized bed gasifiers is that they are suitable for rapid reactions, resulting in high conversion. The disadvantages are: (i) temperature gradients in the direction of the solid flow, (ii) limitations on the size of fuel particles, and (iii) high velocities resulting in equipment erosion. Although there are many different types of fluidized beds available for gasification and combustion, the bubbling fluidized bed is the most preferred type whenever steam is used as a gasifying medium [8]. The advantages of steam gasification have been well addressed in the literature [9]. A wide variety of plastics are in use depending upon the type of application, of which the most widely utilized are polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), polystyrene (PS) and polyethylene terephthalate (PET). Each type differs in physical and chemical properties, and so do their applications. In general, the combustion of most plastics is considered safe, with the exception of PVC, which generates dioxins due to the presence of chlorine in its structure. In contrast with combustion, pyrolysis and gasification are endothermic processes which require a substantial amount of energy to promote the reactions. The pyrolysis process generally produces gas, liquid and solid products, the proportions of which depend on the operating conditions, while gasification predominantly involves reactions of carbon or carbon-based species with steam, producing syngas (CO and H2) and minor amounts of higher molecular weight hydrocarbons [6]. Cracking of PE either into its constituent monomer or into other low molecular weight hydrocarbons has become a vital process due to the increased amounts of polyethylene wastes in the present world. Pyrolysis and/or gasification of PE serves as an appropriate tool for the recovery of energy and for waste plastic disposal simultaneously. Compared with alternative feedstocks like biomass and coal, PE possesses a relatively higher heating value, and is much cleaner in terms of fuel quality, leading to lower fuel pre-processing costs. Pyrolysis or gasification of PE results in a product stream rich in hydrogen with minimal CO or CO2 content, as compared to cellulose-based wastes that yield a relatively higher carbon monoxide and lower hydrogen product composition, mainly due to the presence of oxygen in cellulose-based feedstocks. Irrespective of the type of reactor and the type of waste being handled, the key operating parameters that play a vital role in the gasification process are the equivalence ratio, reactor temperature, steam-to-fuel ratio, gasifying medium and residence time. In order to ensure better reliability of the system, the operating variables have to be optimized and controlled with significant accuracy. The cheapest and most effective technique to qualitatively understand the effect of each operating variable and to identify possible optimal conditions is process simulation. Such attempts to develop simulation models for process optimization have been reported in the open literature for fuel sources such as tyre [6], coal [10-13], and biomass [8, 14-16], using various computer simulation packages.
However, the utility of process simulation tools has not been well explored or recorded in the literature for modeling plastics gasification. This chapter discusses recent work by the authors on an Aspen Plus based process model to analyze the performance of a plastics gasification process under equilibrium conditions. The primary goal of this work is to successfully test and demonstrate the applicability of Aspen Plus to simulate the gasification process for one of the most abundantly used plastics, polyethylene (PE). This study will provide some preliminary qualitative and quantitative information on the overall behavior of the gasification process, including the sensitivity of process parameters.

Modeling the gasification process

The gasification process models available in the literature can be generally classified as steady-state, quasi-steady-state or transient models. The steady-state models do not consider the time derivatives and are further classified as kinetics-free equilibrium models or kinetic rate models [17]. The following is a list of a few researchers who have used the above-mentioned models for modeling the gasification process of various fuels: transient model for coal by Robinson [18]; steady-state kinetic model for biomass by Nikoo [14]; steady-state kinetic model for plastic wastes by Mastellone [7]; kinetics-free equilibrium model for biomass by Doherty [15], Paviet [17], and Shen [8]; kinetics-free equilibrium model for tyre by Mitta [6]. Of these, the kinetics-free equilibrium steady-state model is the most preferred for predicting the product gas composition and temperature, and more importantly for studying the sensitivity of the process parameters. Table 1 shows a summary of a few gasification simulation models developed in Aspen Plus for various materials. The model used in this work to investigate the simulation of PE gasification in a fluidized bed reactor is based on the model previously developed by Mitta et al. [6] for simulating tyre gasification. The simplified tyre gasification equilibrium model was simulated using Aspen Plus and was successfully validated using experimental data. Such an equilibrium type of approach considers only the equilibrium products, namely methane, hydrogen, carbon monoxide, carbon dioxide, water, and sulphurous and nitrogen compounds formed within the reactor. Any other high molecular weight hydrocarbons, such as tars and oils, are less likely to form under equilibrium conditions and hence are not included in the simulation. More importantly, the equilibrium condition facilitates an exhaustive optimization study focusing on key process parameters, including the gasification temperature, equivalence ratio, steam-to-fuel ratio, and gasifying medium, thereby neglecting the complexities of the gasifier hydrodynamics and reaction kinetics. The following assumptions are made in the current study for developing the process model:
1. All the chemical reactions were assumed to have reached equilibrium within the gasifier.
2. Only methane, hydrogen, carbon monoxide, carbon dioxide, oxygen, nitrogen, H2S, and water were considered to be present in the product stream.
3. The primary components of char are only carbon and ash.
The entire gasification process was modeled using Aspen Plus's built-in unit operation library in two stages: pre-processing and gasification. The two stages are discussed separately in the following sections. Figure 1 illustrates the process flow sheet of the simplified PE gasification model.
The first stage corresponds to fuel preprocessing, where the polyethylene sample was processed or conditioned to remove any moisture present before the start of the gasification process. Drying and separation are the unit operations grouped in this stage and are represented by the respective modules in Aspen Plus. The fuel polyethylene stream, labeled "PE", was defined as a non-conventional stream, and the ultimate and proximate analyses were provided as input to the model (refer to Table II for parameter values). The polymer NRTL model with Henry's law ("POLYNRTL") and the polymer Redlich-Kwong equation of state ("POLYSRK") were chosen as property models to calculate the thermophysical properties of the components. The fuel stream was first introduced into a drying unit, "DRIER", which was modeled in Aspen Plus using an RSTOIC module. A temperature of 110 ºC and a pressure of 1 atm were selected as the drier operating conditions. The stream leaving the drier, labeled "DRIED", contains the dried PE in the solid phase and the removed moisture in the vapor phase. This stream was fed to a separation unit, "SEPARATOR", that splits the feed stream into two product streams, labeled "DRYPE" and "MOISTURE".

Volatiles and char gasification

In a typical gasification process, the fuel is first pyrolyzed by applying external heat, whereby it breaks into simpler constituent components. These volatile components, along with char, are then combusted, and the heat liberated from the combustion reactions is used up by the subsequent endothermic gasification reactions. In the Aspen Plus model, the dried portion of the fuel, "DRYPE", exiting from the "DRIER" enters a pyrolyzer, "PYROL", modeled as an RYIELD block in Aspen Plus. Based on the ultimate analysis of PE shown in Table II, the product yield distribution was calculated in the RYIELD module using the Aspen Plus built-in calculator. An operating temperature of 500 ºC and a pressure of 1 atm were chosen in order to set the exiting stream "VOLATILE" to a pre-heated temperature of 500 ºC. The volatiles stream, along with char, was then passed to a gasifying unit, "GASIFIER", that was modeled as an RGIBBS module. As can be noticed in the model, the combustion and gasification reactions are allowed to take place within the "RGIBBS" module itself. The RGIBBS module calculates the equilibrium composition of the system using the Gibbs free energy minimization technique (an illustrative sketch of this calculation is given below). It provides an option either to consider all the components present in the system as equilibrium products, or to restrict the components based on specific reactions, or to restrict them based on a temperature approach. In this study, all components from the gasification reactions listed in Table IV, along with H2S, were included as possible fluid-phase or solid products in the RGIBBS module. The gasifying media, air and steam, are preheated and mixed before being sent to the gasifier. The outlet stream labeled "PRODUCTS" contains the product gases resulting from the gasification process, while the "ASH" stream contains any residual solids.

Parameters

The flow rate of the fuel stream was held constant at 6 kg/h for all simulations. The two key parameters that influence the reactor temperature and the product distribution are the equivalence ratio and the steam-to-fuel ratio, and hence these were the only variables considered in the simulation.
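To illustrate the principle behind the RGIBBS block, the following sketch minimizes the total Gibbs energy of a small C-H-O gas set subject to element balances. It is a toy calculation, not the chapter's Aspen Plus model: the standard Gibbs energies are 298 K values used only for brevity (a real calculation evaluates them at the gasifier temperature), and the feed of 1 mol CH2 (a PE repeat unit) plus 1 mol steam is an assumed basis.

```python
# Illustrative sketch: ideal-gas equilibrium by Gibbs energy minimization,
# the same principle the RGIBBS block applies (here at 298 K and 1 atm).
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15                      # J/(mol K), K
species = ["H2", "CO", "CO2", "CH4", "H2O"]
g_f = np.array([0.0, -137.2e3, -394.4e3, -50.5e3, -228.6e3])  # dGf [J/mol], 298 K
# Element matrix: rows C, H, O; columns follow `species`
A = np.array([[0, 1, 1, 1, 0],            # C
              [2, 0, 0, 4, 2],            # H
              [0, 1, 2, 0, 1]])           # O
b = np.array([1.0, 4.0, 1.0])             # mol C, H, O from 1 mol CH2 + 1 mol H2O

def gibbs(n):
    """Dimensionless total Gibbs energy of an ideal-gas mixture at P = 1 atm."""
    n = np.clip(n, 1e-10, None)
    return np.sum(n * (g_f / (R * T) + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(5, 0.2),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-10, None)] * 5, method="SLSQP")
# At 298 K the result is CH4/CO2/H2O-rich; at gasifier temperatures the
# minimum shifts toward CO and H2, which is what the chapter's model predicts.
print(dict(zip(species, res.x.round(4))))
```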
Equivalence ratio can be defined as the ratio of the mass of oxygen/air supplied to the mass of oxygen/air necessary for complete combustion of all the carbon and hydrogen present in the feed to carbon dioxide and water, respectively.

Model validation

The base-case model for the gasification process was developed using Aspen Plus built-in modules, based on the simulations popularly adopted in the literature. In order to validate the appropriateness of the present model, simulations were performed for the gasification of tyre, and the results were compared with the work of Mitta et al. [6]. The ultimate and proximate analysis data used for the tyre simulation in this study are listed in Table II. However, since the simulation parameters were not fully detailed by the authors, the parameters utilized in the present simulation are not the same as those reported by Mitta et al. Therefore, only a qualitative comparison of the effect of parameters on the product distribution was considered for comparison purposes. The results showed good agreement in terms of the trends of the composition versus temperature plots, and that serves as a basis for model validation. In this work, a similar kind of study was performed to investigate the performance characteristics of the PE gasification process. In the case of isothermal gasification studies, it is challenging to include the temperature variation effects resulting from the entering steam flow, the exclusion of which results in significant deviations in the simulation results [14]. Hence, in this work, an adiabatic type of gasification reactor was modeled to investigate the effects of two key parameters, namely the equivalence ratio and the steam-to-fuel ratio. The response variables include the gas composition, the carbon monoxide efficiency, the hydrogen efficiency, and the combined CO and hydrogen efficiency. The carbon monoxide efficiency measures the extent of conversion of the carbon present in the fuel to carbon monoxide:

ηCO (%) = 100 × (moles of CO in the product gas) / (moles of carbon in the fuel)

The definitions of the hydrogen efficiency and the combined efficiency follow the same form; the hydrogen efficiency is based on the volume (mole) fraction of hydrogen in the product gas:

ηH2 (%) = 100 × (moles of H2 in the product gas) / (maximum moles of H2 obtainable from the fuel)

The combined efficiency represents the fraction of the maximum possible conversion or production achievable by the system. This maximum limit corresponds to the case when all the available carbon and hydrogen present in the fuel are converted to CO and H2 [18]. The performance of the gasifier is also analyzed in terms of the cold gas efficiency (CGE), defined as:

CGE (%) = 100 × (Vg × Qg) / (Mb × Cb)

where Vg = gas generation rate (m³/s), Qg = heating value of the gas (kJ/m³), Mb = fuel consumption rate (kg/s), and Cb = heating value of the fuel (kJ/kg). A short computational sketch of these performance measures is given below.

Effect of steam-to-PE ratio

The effect of the steam-to-PE mass ratio on the PE gasification process was investigated in the range of 0.05 to 5 (corresponding to a mole ratio of 0.04 to 3.9), with a constant PE feed rate of 6 kg/h and an equivalence ratio of 0.15 (air flow rate of 15 kg/h). It can be expected that at low concentrations of water, oxidation via Reactions (1-3) would dominate, resulting in a higher temperature. The resulting temperature rise would in turn propel Reactions (4 and 6), which according to chemical equilibrium principles would shift forward, resulting in the formation of CO and hydrogen. When the partial pressure of the reactant steam was increased, Reactions (4-6) would exhibit a tendency to shift forward, thus leading to a higher CO2 and hydrogen content with a simultaneous drop in the CO molar composition.
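The following sketch implements the performance measures defined above. The feed basis (PE as CH2 repeat units) matches the fuel described in this chapter, but the product flows and heating values below are assumed example numbers, not outputs of the Aspen Plus model.

```python
# Illustrative sketch: CO, H2, combined and cold gas efficiencies for a PE feed.
M_PE_UNIT = 14.027   # g/mol for a CH2 repeat unit of polyethylene

def efficiencies(pe_kg_h, n_co, n_h2):
    """pe_kg_h: PE feed rate [kg/h]; n_co, n_h2: product flows [mol/h]."""
    n_c_fuel = pe_kg_h * 1000.0 / M_PE_UNIT   # mol C fed per hour (1 C per CH2)
    n_h2_max = n_c_fuel                        # 2 H per CH2 -> 1 mol H2 per mol C
    eta_co = 100.0 * n_co / n_c_fuel
    eta_h2 = 100.0 * n_h2 / n_h2_max
    eta_combined = 100.0 * (n_co + n_h2) / (n_c_fuel + n_h2_max)
    return eta_co, eta_h2, eta_combined

def cold_gas_efficiency(v_g, q_g, m_b, c_b):
    """CGE (%) = 100 * Vg*Qg / (Mb*Cb), with units as defined in the text."""
    return 100.0 * v_g * q_g / (m_b * c_b)

# Assumed example values: 6 kg/h PE feed, hypothetical product flows, and a
# typical PE lower heating value of ~43 MJ/kg.
print(efficiencies(pe_kg_h=6.0, n_co=110.0, n_h2=150.0))
print(cold_gas_efficiency(v_g=0.004, q_g=9000.0, m_b=6.0 / 3600.0, c_b=43.0e3))
```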
Due to the participation of the endothermic reactions at higher steam compositions, the overall equilibrium temperature would show a decreasing trend. At some point, when there is enough hydrogen available to react with the carbon, the formation of methane would be favored as per Reactions (8-10). Subsequently, the methane formed would react with the excess steam to form back CO and hydrogen, as depicted by Reaction (6). Overall, at any steam-to-PE ratio, the equilibrium system temperature and product composition would be a result of the competing simultaneous endothermic and exothermic reactions. Figure 2 illustrates the variation of the product molar composition and the equilibrium reactor temperature as a function of the steam-to-PE mass ratio. The simulation-predicted equilibrium temperature resulting from the gasification process helps to deduce certain qualitative conclusions on the overall gasification reaction and thus to validate the theoretical explanations. From the simulation results, it can be noticed that when the steam content is much less than the stoichiometric amount required for Reaction (4), which is equivalent to a steam-to-PE mass ratio of 1.33, the composition of hydrogen displays a sharp increasing trend while that of methane decreases. The high temperature and high methane content at lower steam-to-PE ratios are a result of the methanation and oxidation reactions. Above the stoichiometric point, hydrogen, along with carbon monoxide, shows a gradual decreasing tendency with a simultaneous increase in CO2 content. This is in agreement with the theoretical explanation, wherein it was predicted that an increase in the amount of steam would strongly favor the forward endothermic reaction forming carbon monoxide and hydrogen. With higher steam content, the oxidation of CO is favored, resulting in a steady increase of carbon dioxide during the gasification process. The steam composition in the product stream is a result of the excess and unreacted steam entering and exiting the reactor. As expected, above the stoichiometric point, the temperature of the reactor remains constant at around 850 K, possibly balanced by the competing endothermic and exothermic gasification reactions. Figure 3 shows the effect of the steam-to-PE ratio on the fractional efficiencies of CO, CO2 and H2. It is evident that at around a steam-to-PE ratio of 0.4, the production of CO and hydrogen peaks while that of carbon dioxide is at a minimum. This is a favorable condition for any waste gasification process where it is desired to minimize carbon dioxide as much as possible. Hence, it can be concluded that the favorable steam-to-PE mass ratio for the gasification process should be between 0.4 and 0.6, where the combined as well as the individual compositions of CO and H2 are at a maximum. Furthermore, the cold gas efficiency (CGE) of the process seems to be affected only at lower steam-to-PE ratios. The predicted CGE values are much higher than those obtained in a typical waste gasification process, which is about 60%. It can be expected that under equilibrium conditions, as considered in this study, the gas yield is significantly higher than in a real process, which directly contributes to the increased efficiency.

Effect of equivalence ratio

The effect of the equivalence ratio on the overall gasification efficiency was studied at two different steam-to-PE ratios. Typically, a commercial biomass gasifier is operated at an ER value of 0.25 in order to maintain autothermal conditions (van den Bergh, 2005).
Hence, a range of 0.05 to 0.3 was selected for this study in order to determine the optimum ER for the PE gasification process. The cases for the two different steam-to-PE ratios are presented and discussed separately below. The oxidation reactions of carbon, CO and hydrogen, depicted by Reactions (1-3), are spontaneous and exothermic, resulting in the release of a significant amount of heat energy. It can be expected through Reaction (1) that at low values of ER (low values of stoichiometric air), only incomplete combustion of carbon would take place, leading to the formation of CO with the release of heat. Therefore, for the range of ER considered in this study, only Reactions (1) and (3) are the possible oxidation reactions, and thus any heat released during the combustion process can be directly attributed to these two reactions. In general, at any fixed steam-to-PE ratio, the other parameters that drive the gasification process would be the ER and, consequently, the heat released from the combustion reactions. The intensity of the heat released controls the temperature, which in turn affects the directional shift in the equilibrium of the gasification reactions (a numerical illustration of this temperature dependence is given below). For example, the endothermic Reactions (4, 6, and 7) would tend to shift in the forward direction with an increase in temperature, and vice versa. Hence, with increasing ER, it can be expected that the conversion of carbon to CO and hydrogen would be highly favored over other products such as carbon dioxide and methane.

Case 1: Steam-to-PE ratio 0.6

At low ER and low steam content, Reactions (4, 5 and 7) would possibly be controlled by the temperature and the partial pressure of steam. At such conditions, it could be expected that Reaction (5) would not be driven forward, resulting in lower carbon dioxide formation. Furthermore, at low ER values, reactions with water would significantly compete with the oxidation reactions, thus limiting the resulting equilibrium temperature. At high ER and low steam content, this effect would be compounded, such that temperature would be the primary variable determining the direction of the gasification reactions. In addition, at higher ER the composition trend of CO could be expected to fall due to the subsequent combustion and methanation reactions of CO. Figure 4 illustrates the variation of the product gas composition and temperature as a function of the equivalence ratio. Between ER values of 0.05 and 0.2, the reactor temperature, CO content and hydrogen content increase steadily, while the composition of methane decreases very sharply. In addition, the composition of carbon dioxide shows a steady decrease, whereas the molar composition of water remains constant. At ER values higher than 0.2, it can be observed that the temperature increases very sharply, along with a steady decrease of hydrogen and carbon monoxide. It can also be noticed that beyond this point, hydrogen, CO and water are the only major components of the product stream. The low values of carbon dioxide predicted throughout the range can be explained by the fact that at the low ER and steam-to-PE ratios considered in this study, neither complete oxidation nor steam gasification of the carbonaceous components, depicted by Reactions (2) and (5) respectively, proceeds at any significant rate. The sharp increase in the temperature beyond ER = 0.2 is due to the domination of the exothermic combustion reactions over the others.
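To illustrate how temperature steers such equilibrium shifts, the following sketch evaluates the equilibrium constant of the water-gas shift reaction, CO + H2O ⇌ CO2 + H2, using an empirical correlation commonly attributed to Moe (1962). The correlation is our illustrative choice and is not part of the chapter's model; RGIBBS resolves the same equilibrium implicitly through Gibbs energy minimization.

```python
# Illustrative sketch: temperature dependence of the water-gas shift equilibrium.
import math

def k_wgs(T):
    """Water-gas shift equilibrium constant at T [K] (Moe-type correlation)."""
    return math.exp(4577.8 / T - 4.33)

# The shift reaction is exothermic, so Kp falls as temperature rises: CO2 and
# H2 are favored at low temperature, while CO survives at high temperature.
# Raising the steam partial pressure pushes the reaction forward at any T.
for T in (700.0, 850.0, 1000.0, 1200.0):
    print(f"T = {T:6.0f} K  Kp = {k_wgs(T):6.2f}")
```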
The simulation results are very much in agreement with the theoretical expectations discussed earlier in this section. Figure 5 illustrates the variation of the fractional efficiencies with the equivalence ratio. It is clear that the conversion efficiency rises rapidly at lower ERs and reaches a maximum at an ER of 0.2 at the fixed steam-to-PE ratio of 0.6. The effect of ER on CGE is not significant at lower values, since the compositions of CO, hydrogen and methane, which directly contribute to the heating value of the product gas, increase until ER = 0.2. Beyond this point, since the yield of these products decreases, CGE follows a decreasing trend and records a value of about 75% at an ER value of 0.3.

Figure 5. Illustration of the effect of equivalence ratio on gasification efficiency at a fixed steam-to-PE ratio of 0.6.

Case 2: Steam-to-PE ratio 4

An additional study of the effect of ER on the gasification process at a higher steam-to-PE ratio was included to provide a better and more comprehensive understanding of the sensitivity to the equivalence ratio. In this case, the gasification reactions are driven not only by the heat released by the preceding combustion reactions, but also by the partial pressure of steam. At a higher steam-to-PE ratio, it could be expected that Reaction (4) would significantly compete with Reaction (1) to consume the carbon present in the feed. Hence, the absolute value of the equilibrium temperature would be lower than in the previous case (steam-to-PE ratio of 0.6). Although high ER values would restrict the forward shift of the exothermic Reaction (5), the presence of higher steam content would shift the equilibrium in the forward direction, resulting in a higher net carbon dioxide content. Referring to Figures 4 and 6, it is evident that the composition and temperature trends follow those of case 1, but with different absolute values. It should be noted that the simulations predicted a temperature of about 800 K at an ER of 0.1 for case 2, compared with a value of ca. 850 K for case 1. It can also be observed that the composition of carbon dioxide was slightly higher, and that of carbon monoxide significantly lower, than the results reported for case 1. It can also be noticed from Figures 5 and 7 that the absolute maximum of the combined CO and H2 efficiency differs significantly between the two cases: it is predicted as 40% for case 1 and 7% for case 2. The composition of carbon dioxide in the product gas is negligible at the lower steam content, while it reaches about 4% for the case of higher steam content. Nevertheless, in both cases the maximum fractional efficiency of all the components occurs at an ER value of ca. 0.2. Furthermore, as discussed earlier in section 3.1, the effect of the steam-to-PE ratio on CGE is remarkable only up to 0.6. Thus, the trend of CGE in Figure 7 for the case of the higher steam-to-PE ratio resembles that of Figure 5. Hence, it can be concluded that an ER value of 0.2 and a steam-to-PE ratio of 0.4 to 0.6 would yield a product stream containing 35% hydrogen, 25% CO and negligible CO2 at a temperature of 1000 K. These values seem acceptable for all practical purposes and are very much in agreement with the literature, where a steam-to-fuel ratio of 0.42 and an ER value of 0.15 were reported as the optimum parameters for co-gasification of wood and polyethylene [18].
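As a rough cross-check on the composition just quoted, the heating value of the predicted product gas can be estimated from its mole fractions. A minimal sketch using standard handbook LHVs for H2, CO and CH4; these values are assumptions, not taken from the paper:

```python
# Back-of-envelope check on the heating value of the predicted product gas.
# Assumption: only H2, CO and CH4 contribute; the volumetric LHVs below are
# standard handbook values (MJ per normal cubic metre), not the paper's data.

LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # MJ/Nm3

def mixture_lhv(mole_fractions):
    """LHV of a gas mixture from the mole fractions of its combustible species."""
    return sum(LHV[sp] * x for sp, x in mole_fractions.items())

# Composition quoted for ER = 0.2 and steam-to-PE of 0.4-0.6 (CH4 ~ 0 there).
gas = {"H2": 0.35, "CO": 0.25}
print(f"LHV ~ {mixture_lhv(gas):.1f} MJ/Nm3")   # ~6.9 MJ/Nm3
```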
Conclusions

The gasification process of waste polyethylene was successfully modeled using a combination of the unit operation modules available in the Aspen Plus simulation package. The model used in this work to simulate PE gasification in a fluidized bed reactor is based on a model previously reported in the literature for simulating waste tyre gasification. The equilibrium model developed in this study makes it possible to predict the behavior of the PE gasification process under various operating conditions. Moreover, the results obtained are easy to interpret and could thus be directly corroborated with actual plant data. Although temperature plays a vital role in controlling the conversion and product composition, it has been treated as a free variable in this study. The other process conditions were optimized in order to attain a temperature suitable for different applications, which ideally lies between a high-temperature, low-calorific-value product gas and a low-temperature, high-calorific-value one. The product distribution was the result of many competing simultaneous reactions, dictated mainly by the temperature and the steam flow. The effects of the equivalence ratio and the steam-to-PE ratio on the gasification efficiency were investigated in the ranges of 0.05 to 0.3 and 0.05 to 5, respectively. Based on the simulation results, the behavior of the conversion process was characterized and the values of the combined and individual fractional efficiencies have been presented. The following results summarize the findings of this study:

• The optimum steam-to-PE ratio was determined to be between 0.4 and 0.6 for low-temperature applications. Under this condition, the yield of syngas and the cold gas efficiency reach a maximum.
• Product gas temperatures as high as 1273 K could be attained at higher steam-to-PE ratios, at the expense of a decrease in calorific value.
• Sensitivity analysis on ER suggests an optimum value of about 0.2. Both CGE and syngas efficiency reach a maximum at this point.

Due to the lack of detailed experimental data on waste PE gasification for various process conditions, the predicted data could not be validated. Although the results from this work depend heavily on the assumption made, i.e., thermodynamic equilibrium, significant qualitative results were deduced that help to establish a sound reference for any detailed process optimization study. Furthermore, this model can be used to estimate the final gas composition and other parameters, including gas yield and temperature, for other solid waste fuels and mixtures. Upon including the hydrodynamics and gasification kinetics, this model could be used to evaluate the performance and behavior of many types of gasifiers under different process conditions.
A Proposal for a Robust Validated Weighted General Data Protection Regulation-Based Scale to Assess the Quality of Privacy Policies of Mobile Health Applications: An eDelphi Study

Background: Health care services are undergoing a digital transformation in which the Participatory Health Informatics field has a key role. Within this field, studies aimed at assessing the quality of digital tools, including mHealth apps, are conducted. Privacy is one dimension of the quality of an mHealth app. Privacy consists of several components, including organizational, technical, and legal safeguards. Within legal safeguards, giving transparent information to the users on how their data are handled is crucial. This information is usually disclosed to users through the privacy policy document. Assessing the quality of a privacy policy is a complex task, and several scales supporting this process have been proposed in the literature. However, these scales are heterogeneous and not very objective. In our previous study, we proposed a checklist of items guiding the assessment of the quality of an mHealth app privacy policy, based on the General Data Protection Regulation. Objective: To refine the robustness of our General Data Protection Regulation-based privacy scale for assessing the quality of an mHealth app privacy policy, to identify new items, and to assign weights to every item in the scale. Methods: A two-round modified eDelphi study was conducted involving a privacy expert panel. Results: After the Delphi process, all the items in the scale were considered "important" or "very important" (4 and 5 on a 5-point Likert scale, respectively) by most of the experts. One of the original items was suggested to be reworded, and eight tentative items were suggested; only two of them were finally added after Round 2. Eleven of the 16 items in the scale were considered "very important" (weight of 1), while the other 5 were considered "important" (weight of 0.5). Conclusion: The Benjumea privacy scale is a new robust tool to assess the quality of an mHealth app privacy policy, providing a deeper and complementary analysis to other scales. This robust scale also provides a guideline for the development of high-quality privacy policies of mHealth apps.

Reporting checklist (extract):
- Purpose: The purpose of the study should be clearly defined and demonstrate the appropriateness of the use of the Delphi technique as a method to achieve the research aim. A rationale for the choice of the Delphi technique as the most suitable method needs to be provided. (The purpose is reported in the Abstract and the Introduction, pages 1 and 2; appropriateness and rationale can be found in the section Study Design, pages 3 and 4.)
- Expert panel (item 9): Criteria for the selection of experts and transparent information on recruitment of the expert panel, socio-demographic details including information on expertise regarding the topic in question, and (non)response rates over the ongoing iterations should be reported. (Expert panel information is reported in the section Selection Criteria and Recruitment, page 4; socio-demographic details and response rates are reported in the section Expert Panel, page 5.)
- Description of methods (item 10): The methods employed need to be comprehensible; this includes information on preparatory steps, piloting of material and survey instruments, design of the survey instrument(s), the number and design of survey rounds, methods of data analysis, processing and synthesis of experts' responses to inform the subsequent survey round, and methodological decisions taken by the research team throughout the process. (Information about methods is reported in the section Round 1, pages 4-5, and the section Round 2, page 5.)
- Procedure (item 11): Flow chart to illustrate the stages of the Delphi process, including a preparatory phase, the actual "Delphi rounds," interim steps of data processing and analysis, and concluding steps. (The flow chart is reported in the section Round 1, page 6.)
- Discussion of limitations (item 14): Reporting should include a critical reflection of potential limitations and their impact on the resulting guidance. (Limitations are reported in the section Limitations, pages 8-9.)
- Adequacy of conclusions (item 15): The conclusions should adequately reflect the outcomes of the Delphi study with a view to the scope and applicability of the resulting practice guidance.

Supplementary Appendix B

Round 1 Questionnaire (Translated from Spanish)

Categorization of the relevance of components of privacy policies. Thank you very much for participating in this study; this page shows basic information about the project. You can access the Participant Information Sheet at https://uses0-my.sharepoint.com/:b:/g/personal/jaimebm_us_es/EbIFkEg28eZAvXQJFyLaspsB6eeNeKifGSGPZQHIubIH7A?e=qwEt3T. If you want to see the content of the survey before continuing, you can see it at https://uses0-my.sharepoint.com/:b:/g/personal/jaimebm_us_es/EdJL1oxgG65Lup3whfXif_8BMRiVWLfyfy4h-FLCXW6V9w?e=TClpYm.

Your participation in this study consists of two phases:
• In the first one (this questionnaire) you must fill in a questionnaire, in which your opinion will be asked about the importance of the presence of certain items in the privacy policy documents of mobile health applications. These items are indicated in Article 13 of the General Data Protection Regulation (GDPR). Additionally, you will be asked to point out, if you wish, any other item that, in your opinion, should be used to assess privacy policies.
• In the second round (an email containing a link will be sent to you in the coming weeks), you will be shown aggregated statistical data from the answers of other participants in the study, together with a comparison with your previous round answers. You will be asked again to rate the importance of these items together with others that could be identified in the previous round.

Remember that your participation in this study is voluntary and, by sending this form, you give your consent for your personal data to be processed, in accordance with the information clause, available at https://sic.us.es/sites/default/files/pd/cievaluacionpolprivacidad.pdf.
If you need more information, you may contact Alejandro Carrasco Muñoz (acarrasco@us.es).

Contact and demographic data. Enter your personal data below (all fields are required): Surname; Name; Position; Institution; Email address (it will be used throughout the study). By checking the following box, you agree to participate in the project and give your consent for your data to be processed in accordance with our privacy policy: [ ]

Assessing the Importance of Certain Items in Privacy Policies. Point out the relative importance, in your opinion, of the presence of certain information (items) in the privacy policies of mobile health applications. When answering this questionnaire, keep in mind that, beyond strict compliance with the GDPR (and, specifically, Article 13), you must give your opinion on the importance of these items. Rate the importance of the following information appearing in the privacy policies of mobile health applications: [item table with identifiers and brief descriptions, not reproduced here]. Is there any other item that you think should appear in the privacy policy documents of mobile health applications? If so, use the space below to describe it, together with a brief account of the reasons for your proposal.

Round 2 Questionnaire (Translated from Spanish)

Categorization of the relevance of components of privacy policies (Round 2). Thank you very much for participating in the second round of this study. Remember you can access the Participant Information Sheet at https://uses0-my.sharepoint.com/:b:/g/personal/jaimebm_us_es/EbIFkEg28eZAvXQJFyLaspsB6eeNeKifGSGPZQHIubIH7A?e=qwEt3T.

Your participation in this study consists of two phases:
• In the first one (already completed) you filled in a questionnaire, in which we asked your opinion about the importance of the presence of certain items in the privacy policy documents of mobile health applications. These items are indicated in Article 13 of the General Data Protection Regulation (GDPR). Additionally, you were asked to point out, if you wished, any other item that, in your opinion, should be used to assess privacy policies.
• In the second round (this one), we have sent you an email with aggregated statistical data from the answers of other participants in the study, together with a comparison with your previous round answers. You are now asked to rate again the importance of these items together with others that have been identified in the previous round.

Remember that your participation in this study is voluntary and, by sending this form, you give your consent for your personal data to be processed, in accordance with the information clause, available at https://sic.us.es/sites/default/files/pd/cievaluacionpolprivacidad.pdf. If you need more information, you may contact Alejandro Carrasco Muñoz (acarrasco@us.es).

Email address (use the same email you used in Round 1). Rate the importance of the following information appearing in the privacy policies of mobile health applications: [item table, not reproduced here]. Regarding the purposes for the processing (item I4), what characteristics of the purposes for the processing should be included? (One or more options may be selected.) [ ] General description of the purposes for the processing. [ ] Specific description of the purposes for the processing. [ ] Potential benefits to the user and to the data controller.
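Given the weighting reported in the abstract (11 items at weight 1 and five at weight 0.5), one plausible way to aggregate an assessment into a single score is a weighted fraction of satisfied items. A minimal sketch; the item identifiers, the split of identifiers across weights, and the aggregation rule are all assumptions for illustration, not the paper's published scoring procedure:

```python
# Minimal sketch of applying a weighted 16-item privacy-policy scale.
# Assumptions: which identifiers carry which weight is hypothetical; the
# paper reports 11 "very important" items (weight 1.0) and 5 "important"
# items (weight 0.5). The score is shown as a weighted fraction.

WEIGHTS = {
    **{f"I{i}": 1.0 for i in range(1, 12)},    # hypothetical weight-1.0 items
    **{f"I{i}": 0.5 for i in range(12, 17)},   # hypothetical weight-0.5 items
}

def policy_score(assessment):
    """Weighted share of satisfied items for one privacy policy.

    assessment: dict mapping item id -> True if the policy covers the item.
    """
    total = sum(WEIGHTS.values())
    achieved = sum(w for item, w in WEIGHTS.items() if assessment.get(item))
    return achieved / total

example = {item: (item != "I4") for item in WEIGHTS}   # hypothetical audit
print(f"score = {policy_score(example):.2f}")          # 0.93
```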
Cardiac progenitor cell-derived exosomes prevent cardiomyocyte apoptosis through exosomal miR-21 by targeting PDCD4

Cardiac progenitor cells derived from the adult heart have emerged as one of the most promising stem cell types for cardiac protection and repair. Exosomes are known to mediate cell-cell communication by transporting cell-derived proteins and nucleic acids, including various microRNAs (miRNAs). Here we investigated the role of cardiac progenitor cell (CPC)-derived exosomal miRNAs in protecting the myocardium under oxidative stress. Sca-1+ CPC-derived exosomes were purified from conditioned medium and identified by nanoparticle trafficking analysis (NTA), transmission electron microscopy and western blotting, using CD63, CD9 and Alix as markers. Exosome production was measured by NTA; the results showed that oxidative-stress-induced CPCs secrete more exosomes than under normal conditions. Although six apoptosis-related miRNAs could be detected in the exosomes from both treatment conditions, only miR-21 was significantly upregulated in oxidative-stress-induced exosomes compared with normal exosomes. The same oxidative stress caused low miR-21 and high cleaved caspase-3 expression in H9C2 cardiac cells, but cleaved caspase-3 was significantly decreased when miR-21 was overexpressed by transfecting a miR-21 mimic. Furthermore, miR-21 mimic or inhibitor transfection and a luciferase activity assay confirmed that programmed cell death 4 (PDCD4) is a target gene of miR-21, and that the miR-21/PDCD4 axis has an important role in the anti-apoptotic effect in H9C2 cells. Western blotting and Annexin V/PI results demonstrated that exosome-pretreated H9C2 cells exhibited increased miR-21 and decreased PDCD4, and were more resistant to the apoptosis induced by oxidative stress than non-treated cells. These findings revealed that CPC-derived exosomal miR-21 has an inhibitory role in the apoptosis pathway through downregulating PDCD4. Restoring the miR-21/PDCD4 pathway using CPC-derived exosomes could protect myocardial cells against oxidative-stress-related apoptosis. Therefore, exosomes could be used as a new therapeutic vehicle for ischemic cardiac disease.

Cardiovascular disease is one of the leading pathological causes of mortality worldwide. Cardiovascular ischemic diseases such as heart failure, acute myocardial infarction and myocardial ischemia/reperfusion injury produce plenty of reactive oxygen species (ROS) in the ischemic zone, 1,2 which is a major contributor to cardiomyocyte apoptosis and death and exacerbates cardiac disease. Therefore, it is urgent to find an effective way to restore the cardiovascular system under oxidative stress. Stem cell transplantation is an effective way to replace apoptotic or dead cardiomyocytes, but the underlying mechanism of this repair process has not been fully explained. Cardiac progenitor cells (CPCs) resident in the adult heart have emerged as one of the most promising stem cell types for cardiac regeneration and repair. Thinking about the post-transplantation mechanism has long been predicated on the hypothesis that these cells engraft, differentiate and replace damaged cardiac tissues. Although both direct cell differentiation and indirect paracrine mechanisms have been implicated in the therapeutic benefit, accumulating evidence suggests a predominant role for paracrine secretion by CPCs. 3 Furthermore, many researchers indicate that transplanted CPCs secrete numerous factors to reduce tissue injury and/or enhance tissue repair.
4,5 Over the past few years, several lines of experimental evidence have demonstrated that CPCs release specialized membranous nano-sized vesicles termed exosomes that improve cardiac function in the damaged heart. [5-7] Exosomes are small (30-100 nm) membrane vesicles that merge their membrane contents into the recipient cell membrane and deliver effectors, including transcription factors, oncogenes, small and large non-coding regulatory RNAs (such as microRNAs (miRNAs)), mRNAs and infectious particles, into recipient cells. 8,9 In this way, exosomes secreted by CPCs are considered to participate in cardiac protection and repair. 7,10,11 But exosome contents vary with the pathological condition, and these differences might cause a completely reversed fate of the target cells. Hence, it is fruitful to investigate the biological function of exosomes under a specific pathological condition, including oxidative stress. In addition, this study provides a new theoretical basis for the treatment of myocardial injury. Among the contents of exosomes, miRNAs have been shown to govern important processes that contribute to the pathophysiological consequences of acute myocardial infarction. 12 They are a class of short (about 22 nucleotides), single-stranded non-coding RNAs that have key roles in the regulation of gene expression. miRNAs can either promote or inhibit cardiomyocyte apoptosis, 13 and also regulate ROS-mediated heart disease. 14,15 But whether miRNAs from CPC-derived exosomes have an important role in ROS-injured cardiomyocytes remained undetermined. Here we investigated the protective effect of CPC-derived exosomes on myocardial cells in an ischemic myocardial injury model, acting mainly through the transfer of exosomal miR-21, which inhibits programmed cell death 4 (PDCD4) in myocardial cells. This work provides a potential cell therapy strategy for myocardial ischemic diseases.

Results

CPC-derived exosomes were collected and identified in morphology and phenotype. The Sca-1+ cells isolated from adult mouse heart presented as long spindle-shaped, fibrocyte-like adherent cells (Figure 1a). The percentage of Sca-1+ CPCs was determined with flow cytometry, and the results showed that up to 95.04 ± 4.29% of the population were Sca-1+ cells after magnetic-activated cell sorting (MACS; Figure 1b). To obtain the CPC-derived exosome particles, the culture medium of CPCs was collected and precipitated. The morphology and phenotype of the isolated particles were then identified according to the characteristics of exosomes described previously. 16 First, the concentration and size range of the particles were measured using nanoparticle tracking analysis (Nanosight, Malvern, UK); the results demonstrated that the concentration of the particles was 1.31 × 10^9 ± 0.29 × 10^9 particles per ml, and the diameters of the particles were within the range of 50-150 nm, with an average of 145 nm (Figure 1c). Secondly, the morphology of the CPC-derived particles was observed directly by transmission electron microscopy (TEM); the particles were revealed as round-shaped vesicles with a double-layer membrane structure and diameters of about 100 nm (Figure 1d). Finally, the protein levels of the exosome markers CD63, CD9 and Alix were measured by western blotting; all three markers could be detected in the CPC-derived exosomes (Figure 1e). The above analyses indicated that the CPC-derived particles collected in our experiments were exosomes.
Oxidative stress enhanced the production of CPC-derived exosomes and caused apoptosis of cardiomyocytes. To test the effects of oxidative stress on cardiac cells, CPCs and H9C2 cells (a cardiomyocyte cell line) were treated with H2O2. After treatment with H2O2 at the indicated concentrations for 6 h, the H9C2 cells were harvested for protein collection and western blotting. The results showed that 100 μM H2O2 increased the level of cleaved caspase-3 (the active form of caspase-3), suggesting that early apoptosis of the cardiomyocytes was induced under the oxidative stress (Figure 2a). Then, to observe whether the same oxidative stress condition could affect the exosome secretion of CPCs, exosomes were collected from CPCs treated with 100 μM H2O2 for 6 h, and the exosome concentration was analyzed with nanoparticle trafficking analysis (NTA). The results showed that the exosome concentration increased from 1.31 ± 0.29 × 10^9 particles per ml to 3.36 ± 0.66 × 10^9 particles per ml after the H2O2 treatment, suggesting that oxidative stress enhances the exosome production of CPCs (Figure 2b).

MiR-21 in the CPC-derived exosomes increased under oxidative stress and was potentially involved in protecting cardiomyocytes from apoptosis. The exosome, acting as a carrier particle, has an intriguing role in cellular communication through the exchange of miRNAs or proteins between cells. 17 It is essential to investigate the miRNA contents with potential biological functions in exosomes secreted under specific pathological situations. 12 We therefore selected 13 miRNAs reported to be involved in oxidative stress (miR-150, miR-21), 15,18 in cardiomyocyte apoptosis (miR-195, 320, 140, 24, 214, 34a), [19-22] or to be contained in extracellular vesicles (EVs) (miR-126, 146, 132, 210, 21, 451), 5,23-25 as shown in Figure 3a. Whether oxidative stress would affect the profiles of these 13 miRNAs in CPC-derived exosomes was assessed by quantitative PCR. The agarose gel electrophoresis results showed that the cellular and exosomal RNAs were intact (Supplementary Figure S1). Six miRNAs (miR-21, 24, 214, 132, 195, 210) were detected in both H2O2-induced exosomes and non-induced ones. Among these miRNAs, miR-21 in CPC-derived exosomes was significantly upregulated (45-fold change) after the H2O2 treatment (Figure 3b). This provided a potential exosomal miRNA target, which might have a role in affecting the apoptosis of cardiomyocytes under oxidative stress. The level of miR-21 was examined in H2O2-treated H9C2 cells, and the results showed that miR-21 was significantly downregulated in H9C2 cells under H2O2 treatment (Figure 3c), suggesting a possible connection between the decrease of miR-21 and the apoptosis of cardiomyocytes under oxidative stress. Gain-of- and loss-of-function experiments were therefore performed using a miR-21 mimic/inhibitor. The miR-21 mimic or inhibitor specifically increased or decreased miR-21 expression in H9C2 cells (Supplementary Figure S2). To detect the effects of miR-21 on the H2O2-induced apoptosis of cardiomyocytes, the levels of procaspase-3 and cleaved caspase-3 were measured by western blotting. The results showed that the miR-21 mimic markedly decreased cleaved caspase-3 expression, whereas the inhibitor increased it (Figure 3d), in H9C2 cells under oxidative stress (100 μM H2O2).
This confirmed the anti-apoptotic function of miR-21 and implied that rescuing the downregulated miR-21 in cardiomyocytes under oxidative stress might be a potential strategy to protect the cardiomyocytes from apoptosis. 18,26 PDCD4 has been reported as a direct target of miR-21 in tumor cells. 27 As shown in Supplementary Figure S3, there are conserved binding sites in the 3′UTR of PDCD4 mRNA in different species. To confirm whether PDCD4 is a target of miR-21 in H9C2 cells, the expression levels of PDCD4 in H2O2-treated H9C2 cells were measured by western blotting; the results showed that PDCD4 was upregulated under the oxidative stress.

(Figure 1 caption, in part: flow cytometry of purified Sca-1+ CPCs, with typical purity of at least 95% after magnetic bead sorting; NTA of exosomes, showing a representative screen shot of the NTA videos in which each bright white dot indicates one moving particle, a size estimate between 90 and 300 nm with a mode of 85 nm, a predicted concentration of around 1.04 × 10^8 particles per ml at a 1:80 dilution, a heat map pattern and a detailed statistical report.)

To further address whether miR-21 directly binds the 3′UTR region of PDCD4, we generated chimeric constructs harboring the luciferase wild-type 3′UTR sequence (WT-3′UTR) or a mutant 3′UTR sequence (Mut-3′UTR; Figure 4c). As expected, the miR-21 mimic exclusively inhibited the luciferase activity of Luci-WT-3′UTR, suggesting that the putative binding site is important for miR-21 suppression of PDCD4 expression (Figure 4d). We next explored the relationship between PDCD4 and apoptosis in H9C2 cells. Through siRNA-mediated gene silencing, we found that PDCD4-siRNA markedly decreased the cleaved caspase-3 level (Figure 4e). The knockdown efficiency of PDCD4-siRNA was also verified by western blotting (Figure 4e, upper). In addition, the Annexin V/PI assay showed that PDCD4-downregulated cells had a decreased percentage of apoptotic cells, 13.26%, compared with 23.84% in the H2O2 group and 30.83% in the siRNA-NC group (Figure 4f-j). Taken together, these results confirmed that the effects of miR-21 on oxidative-stress-induced apoptosis act through targeting PDCD4.

CPC-derived exosomes restored the miR-21/PDCD4 pathway and attenuated apoptosis in cardiomyocytes under oxidative stress. Exosomes are important mediators of cell-to-cell communication. The first step in exosomes releasing their cargoes into target cells is fusion with the membrane of the target cells. To determine whether CPC-exosomes can be taken up by cardiomyocytes, CPC-exosomes were labeled with PKH26, a fluorescent cell linker compound that is incorporated into the cell membrane by selective partitioning. After incubating the labeled exosomes with cardiomyocytes for 12 h, the exosome pellet showed strong red fluorescence in the cytoplasm of H9C2 cells (Supplementary Figure S5), indicating that many exosomes were taken up by the H9C2 cells. Not surprisingly, when the cells were pre-treated with CPC-derived exosomes, the decrease of miR-21 under oxidative stress was rescued (Figure 5a) and the increases of PDCD4 and cleaved caspase-3 were suppressed (Figure 5b). Consistent with the higher yields of exosomes and exosomal miR-21 under oxidative stress, the exosomes derived from H2O2-treated CPCs showed even stronger effects on increasing miR-21 levels and decreasing PDCD4 expression in the recipient cells.
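The luciferase experiment above probes a predicted seed-match site in the PDCD4 3′UTR. A minimal sketch of how such a site is located in sequence data; the mature miR-21-5p sequence is the widely reported one (quoted from memory) and the UTR fragment is invented for illustration, not the rat PDCD4 sequence:

```python
# Illustrative seed-match search of the kind that motivates a luciferase assay.
# Assumptions: MIR21_5P is the commonly reported mature sequence (from memory);
# the 3'UTR fragment is a made-up example, not the rat PDCD4 3'UTR.

MIR21_5P = "UAGCUUAUCAGACUGAUGUUGA"

def seed_site(mirna, seed_len=7):
    """Reverse complement of the miRNA seed (positions 2..1+seed_len), 5'->3'."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:1 + seed_len]
    return "".join(comp[b] for b in reversed(seed))

utr = "GGCAAUAAGCUAACCGU"           # hypothetical 3'UTR fragment
site = seed_site(MIR21_5P)          # "AUAAGCU" for miR-21-5p
print(site, "found" if site in utr else "absent", "in UTR")
```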
To determine whether the CPC-derived exosomes protected cardiomyocytes from the apoptosis caused by oxidative stress, an Annexin V/PI analysis was carried out. The results showed that cells pre-treated with H2O2-exosomes had a decreased percentage of apoptotic cells, 13.58%, compared with 33.29% in the H2O2 group, whereas the normal exosomes (non-H2O2-induced) could only reduce the apoptotic percentage to 17.39% (Figure 5c-h). Therefore, the CPC-derived exosomes might be crucial for protecting cardiomyocytes from apoptosis caused by oxidative stress, and this effect was achieved by delivering miR-21, which targets PDCD4, to the recipient cardiomyocytes.

Discussion

Oxidative stress has been identified as critical in many key steps in cardiac diseases, such as atrial enlargement, 28 mitral regurgitation 29 and heart failure. 30 Stem cell-based therapies have shown promise in repairing detrimental myocardial remodeling and cardiac dysfunction, but significant obstacles to this approach remain. Thus, the amplification and delivery of beneficial paracrine signals generated by stem cells could overcome obstacles associated with cell injection-based approaches to repairing damaged myocardium. 31 Because CPCs are specialized to function in the heart, CPC-generated signals may be particularly well suited to treat cardiac pathologies; 32 the paracrine effect of CPCs has been considered an important mechanism of cardiac protection. 7 Exosomes, one of the most important paracrine factors, have been reported in many other cells, such as tumor cells and stem cells, and exosome cargoes have been verified as crucial signaling molecules in key pathways. 9 It has been reported that miR-451 contained in CPC-derived exosomes plays a cardioprotective role in acute ischemia/reperfusion injury. However, we did not observe appreciable expression of miR-451 by qPCR in our experimental system, either in normal exosomes or in H2O2-induced ones. Considering that miRNAs are normally expressed and act in a very sensitive manner, we propose that the differences between the exosomal miRNAs described in our work and in others' are due to different experimental conditions, including different concentrations or treatment times of the stimulus. Molecules other than the selected miRNAs, however, were not evaluated in this study. miRNAs are small non-coding RNAs that block translation or induce degradation of mRNA and thereby control patterns of gene expression. 39 Many miRNAs have been reported to contribute to the pathophysiological consequences of acute myocardial infarction. 12 Several miRNAs regulate apoptosis and survival pathways in cardiomyocytes; inhibition of apoptosis or activation of survival programs enhances cardiac regeneration. Pro-apoptotic miRNAs include the miR-15 family, 20 miR-34, 19 miR-320 40 and miR-140, 22 and anti-apoptotic miRNAs include miR-24 41 and miR-214. 13 Other miRNAs regulate H2O2-induced cell apoptosis, like miR-150, 15 miR-21 42 and miR-103/107. 43 Further miRNAs, found in extracellular vesicles or exosomes, are reported to have an essential role in cardiac regeneration, 24 including miR-126, 25 miR-132, 24 miR-146 44 and miR-210. 5 Here we detected six of the selected miRNAs encapsulated in CPC-exosomes under oxidative stress. Interestingly, only miR-21 was upregulated by ≥5-fold under oxidative stress at the 6-h time point, making it the most probable candidate for regulating the cardiac functions of interest.
miRNAs were normalized to U6. U6 is widely used as a housekeeping gene in miRNA quantification, including exosomal miRNA quantification, but Lin et al. 45 found that human U6 promoter activity was downregulated in the presence of hydrogen peroxide. In our study, we used hydrogen peroxide to induce oxidative stress in cardiomyocytes, and our study mainly focused on the exosomal miRNAs, which respond to environmental stimuli with a different profile than cellular miRNAs do. When we compared the expression of exosomal U6 from oxidative-stress-treated cells with that of untreated control cells, there were no noticeable differences. The research on U6 regulation by oxidative stress has mostly focused on the human U6 promoter, whereas our work focused on rat cardiomyocytes (H9C2). Meanwhile, little research has demonstrated that U6 expression can be influenced by oxidative stress in the rat. As the regulation of U6 could show species diversity, a detailed study is necessary to clarify this subject. miR-21 has been reported to mediate gene regulation and the cellular injury response in H2O2-induced vascular smooth muscle cells. 18 Our results demonstrated that oxidative stress caused a decrease of miR-21 levels in cardiomyocytes, indicating that miR-21 may be one of the key factors regulating cardiomyocyte function under oxidative stress. In the gain-of- and loss-of-function experiments, we found that upregulated miR-21 levels effectively inhibit H2O2-induced cardiomyocyte apoptosis. miRNAs usually mediate cell function by post-transcriptionally repressing downstream target genes; previous studies revealed that miR-21 specifically targets and regulates PDCD4, PTEN, RECK and Bcl-2 in tumor proliferation, invasion and migration. 26,27,46 Our data confirmed that PDCD4 is negatively regulated by miR-21 in oxidative-stress-induced cardiomyocytes, and that PDCD4 silencing reproduced the anti-apoptotic effect of miR-21. These findings strongly support PDCD4 as the underlying mechanism of miR-21-mediated cellular protection. The role of exosomes as mediators of cell-to-cell communication, for example from stromal cells to breast cancer cells or from mesenchymal stem cells to endothelial cells, has been well established. 47,48 The understanding of exosome biogenesis and endocytosis remains incomplete, and whether exosomes can specifically recognize their recipient cells still needs to be explored in depth. 9 In our study, when the cardiomyocytes were pretreated with CPC-derived exosomes, the exosomes were taken up at high efficiency, and the exosomal miR-21 was transferred into the cardiomyocytes and took part in the cellular signaling pathway. This delivery successfully caused downstream responses, including decreased PDCD4 expression and a decreased percentage of apoptotic cells in the cardiomyocytes. CPC-exosomes can therefore rescue injured cardiomyocytes by reconstructing the miR-21/PDCD4 pathway; CPC exosomal miR-21 may be one of the protective factors that prevent cardiomyocytes from entering the apoptotic process. Although our data suggest that exosomal miR-21 has a critical role in the apoptotic regulation of recipient cells, we do not rule out the contribution of other exosomal cargoes. Whether CPC-exosomes have the same function in animal heart disease models remains to be further explored. In conclusion, this work explains an intricate exosome-mediated cross-talk between CPCs and cardiomyocytes (Figure 6).
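The exosomal miRNA levels discussed above were quantified by qPCR with U6 as the normalizer; fold changes such as the 45-fold increase of miR-21 follow from the standard Livak 2^-ΔΔCt arithmetic. A minimal sketch with invented Ct values chosen to reproduce a roughly 45-fold change:

```python
# Minimal sketch of the 2^-ddCt fold-change arithmetic behind statements like
# "miR-21 was upregulated 45-fold". Assumption: the Ct values are invented for
# illustration; U6 is the reference gene, as in the study.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt fold change (treated vs control), normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_ctrl)

# Hypothetical Ct values (miR-21 vs U6, H2O2-treated vs control exosomes):
print(f"{fold_change(22.0, 20.0, 27.5, 20.0):.0f}-fold")   # ~45-fold
```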
CPC-derived exosomes prevent the cardiomyocyte apoptotic program, at least partly, via the miR-21 contained in the exosomes. Generally, our data indicate that individual miRNA species have a crucial role in exosome function. Therefore, exosomal miR-21 represents a promising therapeutic strategy for ROS-mediated cardiac disease.

Exosome purification. The CPC-exosome isolation procedures were performed as previously described. 49 Briefly, 10 ml of conditioned culture medium with 10% Exo-FBS was used for culturing CPCs in a T75 flask. After 48 h, the supernatant was centrifuged at 3000 r.p.m. for 15 min to remove cells, followed by filtration through a 0.22 μm filter to remove cell debris. Exosomes in the medium were precipitated with ExoQuick TC (System Biosciences) following the manufacturer's instructions.

Transmission electron microscopy. For the TEM morphology investigation, 3 μl of the exosome pellet was placed on formvar carbon-coated 200-mesh copper electron microscopy grids, incubated for 5 min at room temperature, and then subjected to standard uranyl acetate staining. The grid was washed with three changes of PBS and allowed to semi-dry at room temperature before observation in a transmission electron microscope (Hitachi H7500 TEM, Tokyo, Japan). Micrographs were used to quantify the diameter of the exosomes.

Nanoparticle trafficking analysis. Analysis of the absolute size distribution of exosomes was performed using a NanoSight NS300 (Malvern, UK). Whole-cell proteins were extracted for western blotting analysis; the procedures were as described above.

Luciferase reporter assay. HEK293 cells were co-transfected with 500 ng psiCHECK-2-PDCD4-3′UTR (wild type or mutant type) and 50 nM miR-21 mimic, 100 nM inhibitor or negative control (RiboBio) using Lipofectamine 2000 (Invitrogen, USA) following the manufacturer's instructions. After 48 h, cells were lysed for the Dual-Luciferase Reporter Assay System (Promega, Madison, WI, USA), and luciferase activity was measured using a GloMax 20/20 Luminometer (Promega). Luciferase activity was normalized by the Renilla/Firefly luciferase signal in HEK293 cells.

RNA interference. The synthesized siPDCD4 and scramble (GeneCopoeia, Rockville, MD, USA) were transfected into cells with Lipofectamine (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. Briefly, H9C2 cells (8 × 10^4 per well) were plated onto 6-well plates and allowed to grow for 12 h. siPDCD4 (50 nM) or scramble with 5 μl Lipofectamine was added to the cells. After transfection, cells were incubated at 37 °C for 24 h to 72 h, then treated with 100 μM H2O2 for 6 h. Cells were harvested for analysis after the indicated time points as described above; the proteins were used for the western blot assay and the cells for the Annexin/PI assay. The siPDCD4 sequence: F: GAAAGCGUAAGGAUAGUGUdTdT, R: ACACUAUCCUUACGCUUUCdTdT.

Apoptosis assay of H9C2 treated with CPC-derived exosomes. H9C2 cells were cultured in DMEM/F12 (Hyclone, Logan, UT, USA) medium supplemented with 10% FBS. H9C2 cells were pre-incubated with 10% Exo-FBS DMEM with stressed, normal or no CPC-exosomes (2 × 10^9 particles per ml) for 24 h, then treated with 100 μM H2O2 for 6 h. Following treatment, the apoptosis rate was analyzed by flow cytometry with an Annexin V/PI kit (BD Bioscience, Franklin Lakes, NJ, USA), according to the manufacturer's instructions.
Whole-cell lysates were prepared by adding cell lysis buffer (Thermo), and the protein concentration was determined with a BCA Protein Assay Kit (Thermo). Proteins were then resolved on a 10% sodium dodecyl sulfate bis-tris gel and transferred to an Immobilon FL PVDF membrane (Millipore). The membrane was blocked with 5% non-fat milk in TBST buffer and incubated overnight with rabbit anti-caspase-3 (1:1000, #9662, Cell Signaling Technology) or rabbit anti-PDCD4 (1:1000, #9535, Cell Signaling Technology), and then incubated with HRP-linked goat anti-rabbit IgG (1:5000, Cell Signaling Technology); the protein bands were visualized using an automatic imager (General Electric). The blots were quantified using FluorChem 8900 software (Alpha Innotech Corporation, San Leandro, CA, USA), and the relative protein expression was normalized to β-actin.

Statistical analysis. An unpaired t-test and one-way ANOVA were performed using GraphPad Prism version 5.0 for Windows (GraphPad Software, Inc., La Jolla, CA, USA) to determine P-values in repeated experiments. All values are expressed as mean ± S.E.M. A value of P < 0.05 was considered to indicate statistically significant differences. Unless otherwise noted, all results were obtained through a minimum of three independent experimental replications.

Conflict of Interest. The authors declare no conflict of interest.
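The statistical tests named above (unpaired t-test, one-way ANOVA) were run in GraphPad Prism; an equivalent open-source re-creation in SciPy looks as follows, with invented replicate values for illustration:

```python
# Minimal re-creation of the statistical tests named above using SciPy rather
# than GraphPad Prism. Assumption: the arrays are invented example replicates.

from scipy import stats

ctrl = [23.8, 25.1, 22.9]            # e.g., % apoptotic cells, three replicates
treated = [13.2, 14.0, 12.8]

t, p = stats.ttest_ind(ctrl, treated)          # unpaired two-sample t-test
print(f"t = {t:.2f}, P = {p:.4f}")

group3 = [17.1, 18.0, 16.5]
f, p_anova = stats.f_oneway(ctrl, treated, group3)   # one-way ANOVA
print(f"F = {f:.2f}, P = {p_anova:.4f}")
```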
Paleoseismological data from a new trench across the El Camp Fault (Catalan Coastal Ranges, NE Iberian Peninsula)

The El Camp Fault (Catalan Coastal Ranges, NE Iberian Peninsula) is a slow-slipping normal fault whose seismic potential has only recently been recognised. New geomorphic and trench investigations were carried out during a training course across the El Camp Fault at the La Porquerola alluvial fan site. A new trench (trench 8) was dug close to a trench made previously at this site (trench 4). With the aid of two long topographic profiles across the fault scarp we obtained a vertical slip rate ranging between 0.05 and 0.08 mm/yr. At the trench site, two main faults, which can be correlated between trenches 8 and 4, make up the fault zone. Trench analysis identified three paleoseismic events, two between 34,000 and 125,000 years BP (events 3 and 2) and another event younger than 13,500 years BP (event 1), which can be correlated, respectively, with events X (50,000-125,000 years BP), Y (35,000-50,000 years BP) and Z (3,000-25,000 years BP). The last seismic event at the La Porquerola alluvial fan site is described for the first time, albeit with some uncertainties.

Introduction

Paleoseismologic methods developed in high-seismicity areas for high-slip-rate faults are increasingly used to improve the calculation of seismic hazard on faults with very low slip rates (< 0.1 mm/yr). This is the case for some fault systems in Europe, such as the Rhine Graben, the Catalan Coastal Ranges and the Betic Cordillera. In fact, the recognition of the seismogenic characteristics of a fault, i.e. the evaluation of its slip rate, the size of the maximum expected earthquake and the age of the most recent surface-faulting earthquake, can substantially change the perception of seismic hazard in regions traditionally considered stable or not very active.

The «Europaleos field training course in paleoseismology» focused on the El Camp Fault (Catalan Coastal Ranges, NE Iberian Peninsula) and was held in Cambrils (Spain) in February 2001. The aim of this course was to provide training for young researchers in paleoseismology. The course centred on the El Camp Fault for the following reasons: a) earlier studies have demonstrated its potential for paleoseismic investigation; b) some uncertainties concerning the fault geometry and its seismic behaviour remained to be resolved; c) it is a good example of a low-slip-rate fault of the kind that could be encountered in other parts of Europe and around the Mediterranean, and d) the area is characterised by high seismic vulnerability. This paper seeks to gain an insight into the slip rate and paleoseismic history of the fault, to account for the change in the scarp direction visible at the trench site, and to present the results obtained during the Europaleos course.

The El Camp Fault

The El Camp Fault is located on the southeastern flank of the Catalan Coastal Ranges (fig. 1),
which are bounded by the València trough basin on the west. The Catalan Coastal Ranges are characterised by an en échelon array of NE-SW faults, where the main faults are listric, dip to the SE and have their detachment level at a depth of 15 km (Roca and Guimerà, 1992; Roca, 1996). The Catalan Coastal Ranges are the result of an E-W extension that affected the eastern part of the Iberian Peninsula during the Neogene (Mauffret et al., 1973; Fontboté et al., 1990; Banda and Santanach, 1992; Roca and Guimerà, 1992; Roca, 1996). This extension has been interpreted as the product of an extensional back-arc basin related to the Apennine subduction (Doglioni et al., 1997, 1999; Gueguen et al., 1998). Earlier studies on the Catalan Coastal Ranges suggest that these ranges have been in a post-rift stage with weak tectonic activity since the middle Miocene (Roca and Guimerà, 1992; Roca, 1996).

The El Camp Fault is made up of two en échelon faults (north and south segments). From commercial and deep seismic profiles, it has been shown that the El Camp Fault is a normal fault, with a dip of 60° and the main detachment level located at 13-15 km depth (Roca, 1992; Roca and Guimerà, 1992; Sàbat et al., 1997). The El Camp Fault constitutes the north-western limit of the El Camp basin. Near the town of Reus, the detrital sedimentary infill of the El Camp basin has a thickness of 1400-2000 m, which extends from the Miocene to the present time (Nuñez et al., 1980; Medialdea et al., 1986).

The El Camp Fault has not shown significant historical or instrumental seismic activity. However, recent paleoseismologic studies based on detailed geomorphological and trench analyses have shown that the El Camp Fault is active, particularly its southern segment. This segment is approximately 24 km long, bearing in mind that the fault extends under the sea for a distance of 10 km. The segment is characterised by: a) a slip rate ranging between 0.02 and 0.08 mm/yr; b) surface-faulting earthquakes of Mw = 6.7 maximum magnitude; c) an average recurrence interval of 30,000 years, and d) the occurrence of the last event 3,000 years ago (Masana, 1995, 1996; Masana et al., 2000, 2001a,b; Santanach et al., 2001). Seven trenches located at different sites in the southern segment of the El Camp Fault enabled these authors to describe three seismic events during the last 125,000 years: event X, event Y and event Z. Event X, between 125,000 and 50,000 years BP, has been clearly described at two sites (the site of trenches 1 and 2 and the site of trench 4). Event Y, between 50,000 and 35,000 years BP, has only been described at the site of trench 4. Event Z, between 25,000 and 3,000 years BP, has been described at two sites (at trenches 1 and 2, strongly based, and at trench 3).
Geomorphologic and topographic survey

The geological and geomorphological study of the southern part of the El Camp basin (fig. 2) reveals the presence of recent alluvial fans of four generations: G1, G2, G3 and G4 (Villamarín et al., 1999; Santanach et al., 2001), which were mapped and surveyed in the field during Europaleos. Fans belonging to the G2 and G3 generations stretch from the mountain range to the sea, whereas those corresponding to the G4 generation are located at the foot of the mountain range or in the lowlands between the fans of the older generations. According to different dating methods (U/Th, thermoluminescence, paleomagnetic studies and correlation between different fan generations and sea-level highstands), minimum ages of 300,000 and 125,000 years have been attributed to the tops of the G2 and G3 generations, respectively (Villamarín et al., 1999; Santanach et al., 2001), the G4 generation being younger than 125,000 years. The oldest fans, belonging to the G1 generation, were not considered in this study since they do not interact with the fault where it is exposed.

The geomorphological map (fig. 2) shows a discontinuous fault scarp offsetting alluvial fans belonging to the G2, G3 and G4 generations. The El Camp Fault scarp offsets the G4 generation fans between the La Porquerola and the Les Planes fans and south of the Les Planes fan. These observations indicate that this segment of the fault has been active during the last 125,000 years. For logistical reasons, we focused on one portion of the La Porquerola fan, where the fault affects the top of the G3 generation fan. This fan is highly cemented and therefore difficult to erode, in contrast with the unconsolidated materials in the lower part of the scarp. Consequently, the scarp is clearly visible at the surface and a change in its direction is recognisable. The scarp is the result of a normal fault-propagation fold with a small scarplet at the bottom. Locally, a small wall constructed along the foot of the scarp operates as a sediment trap and has produced some sharp morphologies that are visible on the trench walls. To the NE of trench 4, the scarp shows a change in direction close to a gully that is entrenched in the scarp.

Using a total station (Leica 1700), we made a microtopographic map of the selected area (fig. 3) and five topographic profiles. This information, together with the geomorphology, helped us to select the most favourable site for trenching. We decided to dig the trench in the zone where the fault scarp changes its direction (fig. 3) in an attempt to determine whether the change in the scarp direction was due to a change in the fault direction or to erosion, evidenced by the small gully located there. In addition, the location of the new trench in the vicinity of the gully would enable us to obtain a more complete section of recent sediments on the downthrown wall than at trench 4. Thus, the possibility of detecting the most recent paleoseismic event, which had not been observed at this site, is greater.
Only two topographic profiles (fig. 4) were long enough to obtain a preliminary evaluation of the slip rate at this location. The offset of the 125,000-year fan surface measured in the profiles is between 6.7 and 10.5 m. This provides a maximum vertical slip rate between 0.05 and 0.08 mm/yr. Although these results are consistent with the slip rates obtained in earlier studies on the El Camp Fault, which vary from 0.02 to 0.08 mm/yr (Masana, 1995, 1996; Masana et al., 2000, 2001a,b; Santanach et al., 2001), the 125,000-year fan surface offset obtained by these authors in the same zone was 6.5 m, which is closer to the lower values obtained from our profiles. Therefore, the lower slip rate is probably the more realistic. On the assumption that the fault is purely normal with a dip of 60° at depth, a dip slip of 0.06 to 0.09 mm/yr is calculated (see the sketch at the end of this section).

Trench analysis

The trench dug through the scarp located at the La Porquerola fan (fig. 5) was the eighth trench dug along the El Camp Fault (Masana et al., 2000, 2001a,b; Santanach et al., 2001). Fracture F2 has a subhorizontal dip and affects unit F. F2 is an open fracture produced by dip slip along a normal fault (F1) and by the obstruction of the shallower part of the downthrown block of this fault (Santanach et al., 2002). This kind of movement could have produced the opening of the hollow visible in the trench along F2. The subsequent collapse of the obstructed part could have closed the hollow and could have produced a «reverse» fault geometry (fig. 8). Two minor faults are also present, F3 and F4, the former being more clearly defined than the latter. Fault F3, a small antithetic fault, affects the hard layer located at the top of unit K with a maximum offset of 20 cm. F4 is defined by the 60 cm offset at the top of unit K, despite being poorly constrained at lower levels. Unit J shows a mixture of materials, and clasts with a preferred vertical orientation, which could be the result of the F4 movement.

One question we attempted to answer with the opening of the trench was whether the change in the scarp direction is due to: a) a shift in the direction of the fault, or b) erosion of the scarp. The analysis of the relationship between F1, the main fault, and the surface scarp at trench 8 shows that the change in the scarp direction is attributable to the shift in the fault direction, since the scarp follows the fault direction (fig. 6). However, erosion controls the trench sedimentary architecture and could accentuate the change in the scarp direction.

Eight samples were collected for radiocarbon dating. Only three of them contained sufficient material to be dated and only two (ERT8-2 and ERT8-6) provided a measured age (table I). Samples ERT8-2 (charcoal) and ERT8-6 (a shell) were collected from unit L1, ERT8-2 near the top of the unit and ERT8-6 near the limit between L1 and L. These datings indicate that unit L1 was deposited between 34,000 and 13,000 years BP. The other available age constraint is the age of the top of the G3 generation (Villamarín et al., 1999). Therefore, the top of unit F is ca. 125,000 years old and all the units exposed on the upthrown block are the same age or older, with the exception of M, which is the most recent unit. The exposed units locally contain hollows, which hinders the distinction between the different layers (fig. 7).
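The slip-rate arithmetic from the topographic profiles above is short enough to write out explicitly. A minimal sketch using the paper's offsets, surface age and 60° fault dip:

```python
# Slip-rate arithmetic from the La Porquerola topographic profiles.
# Inputs are the paper's numbers; the sin(dip) conversion assumes pure
# normal dip slip on a 60-degree fault plane, as stated in the text.

import math

SURFACE_AGE = 125_000          # years, age of the offset G3 fan surface
DIP = math.radians(60)

for offset_m in (6.7, 10.5):
    vertical_rate = offset_m / SURFACE_AGE * 1000    # mm/yr
    dip_slip_rate = vertical_rate / math.sin(DIP)
    # The paper rounds these to 0.05-0.08 (vertical) and 0.06-0.09 (dip slip).
    print(f"offset {offset_m} m -> vertical {vertical_rate:.2f} mm/yr, "
          f"dip slip {dip_slip_rate:.2f} mm/yr")
```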
On the upthrown block, units B, C, D and F are planar and sub-parallel to each other. Only unit F, which corresponds to the top of the G3 generation alluvial fan, can be followed across the fault zone. On the downthrown block, seven units (G, H, J, K, L, L1 and M) overlie unit F near the scarp, but F appears at the surface again at a distance of approximately 40 to 50 m from it. Thus, units G, H, J, K, L, L1 and M infill a local depression produced by the fault movement. The geometry of the sedimentary units on the downthrown block is more complex than on the upthrown block. Units H to M cover part of the eroded scarp produced by the faulting and bending of unit F. The geometrical relationships between these lithological units are discussed below in terms of paleoseismic events.

The fault zone is composed of a fault (F1) and a fracture (F2). Fault F1 is located along unit W2 and offsets units B, C, D and F. The fault plane has a subvertical attitude or dips strongly to the WNW (fig. 6). The offset top of unit F along F1 on the south wall shows a buried scarp that is 1.3 m high. However, this is a minimum offset, since the scarp had probably been eroded. To the east of F1 lies fracture F2, whose geometry is described above.

Unit K - Poorly consolidated and poorly sorted, matrix-supported gravel. Clasts are carbonatic, subangular, and range from a few centimetres to a few decimetres in size. The matrix is silty and brown in colour. At the top there is a hard layer.

Unit L - Unconsolidated, poorly sorted, matrix-supported gravel. Clasts are carbonatic, subrounded, and range from a few centimetres to a few decimetres. The matrix is silty and reddish-orange in colour.

Unit M - Unconsolidated brown silt, including a few centimetre-size (max. 2 cm) carbonate clasts. At the top there is a 10 cm thick modern soil.

Paleoseismic events

At trench 8 we found evidence of at least one individual surface-faulting earthquake. Furthermore, evidence for two more events will be discussed. Briefly, we found evidence for three possible events (fig. 6): a) event 1 at the top of L1 (uncertain); b) event 2 at the top of K (good), and c) event 3 at the top of H (uncertain). We shall discuss the events beginning with the most certain event and ending with the most uncertain one. It was not possible to obtain the single-event displacement at trench 8.

Displacements along F3 (and F4?), which reach the top of unit K, and the greater warp of this unit with respect to the overlying units suggest an event horizon, event 2, at the top of unit K. Units L and L1 represent the post-event deposit.

Evidence for event 1, albeit with some uncertainty, is provided by: a) the erosional truncation of unit M over L and L1, and b) the geometrical relations of unit M with respect to units L and L1. Units L and L1, filling a gentle depression, appear to join the original slope, which is not consistent with the present one. Moreover, the origin of unit M differs from that of L and L1 in that it extends much higher up the scarp. This could be explained by a change in the slope/scarp caused by a displacement along the fault (an earthquake?) and a consequent sinking of the downthrown block.

Event 3 could be located at the bottom of unit K based on: a) unit J, a deformed unit discussed above, being covered by unit K, which shows less deformation, and b) unit K lying unconformably over unit F, sealing the subhorizontal fracture and the associated collapse breccia.
Given the low number of available datings, it has not been possible to determine with precision the time bracket for the three paleoseismic events. With our data, the three possible events would be younger than 125,000 years BP. The oldest events (2 and 3) are older than 34,000 years BP, whereas the youngest one (event 1) would be younger than 13,500 years BP.

Comparison with earlier studies

By comparing the data obtained from this new trench (trench 8) across the El Camp Fault with the paleoseismological information obtained from earlier studies on the same fault (Masana, 1995, 1996; Masana et al., 2000, 2001a,b; Santanach et al., 2001), we were able to gain a new insight into the fault behaviour.

The comparison between trenches 8 and 4 (fig. 9) shows that F1 and F2 have the same dip tendency and relative location with respect to the fault scarp at both trenches and could, therefore, be interpreted as the same faults. Accordingly, at trench 8 it is assumed that F2 becomes a vertical fault at a certain depth and branches into F1.

The accommodation space at trenches 4 and 8 due to the fault movement should be very similar because of their proximity, resulting in a similar sedimentary thickness. Nevertheless, the sedimentary thickness at trench 8 is greater than at trench 4. Moreover, the top of unit F is approximately 2 m deeper at trench 8 than at trench 4 (fig. 10). This modification of the accommodation space can be attributed to a marked erosion in the vicinity of trench 8, evidenced by the small gully located to the north of it. The modification of the accommodation space should be pointed out to avoid interpreting the sedimentary relationship between units F, G and H as another event.

Three seismic events have been described in relation to the El Camp Fault in the last 125,000 years (Masana, 1995, 1996; Masana et al., 2000, 2001a,b; Santanach et al., 2001): a) event X has a time bracket between 125,000 and 50,000 years BP and is the oldest; b) event Y is between 50,000 and 35,000 years BP, and c) event Z occurred between 25,000 and 3,000 years BP. At trench 8 we also found 3 events, although some uncertainty exists for events 1 and 3. As stated above, the two oldest events at trench 8 (3 and 2) occurred in a time bracket between 125,000 and 34,000 years BP, whereas event 1 is younger than 13,500 years BP. Therefore, events 3 and 2 could correspond to events X and Y, respectively, and event 1 could correspond to event Z. Thus, trench 8 could yield evidence of the last event, which has not been recorded at trench 4.

Although it was not possible to obtain the single displacements for events X, Y and Z at trench 8, they have been reported in earlier works (Masana et al., 2000, 2001a,b; Santanach et al., 2001): a) a displacement ranging between 1.4 and 2.0 m for event X, although the authors suggest that this displacement could correspond to the accumulation of two events; b) 0.4 m of displacement for event Y, and c) a displacement ranging between 0.7 and 1.0 m for event Z. These displacements could correspond to a maximum earthquake with an Mw ranging between 6.3 and 6.8 and, consequently, a rupture surface length between 13 and 24 km (Wells and Coppersmith, 1994), which is in agreement with the length of the El Camp Fault.

Conclusions

Two long topographic profiles made across the El Camp Fault at the La Porquerola alluvial fan allowed us to establish an offset bracket for the top of the G3 generation fan. This offset bracket ranges between 6.7 and 10.5 m. Based on these data, the vertical slip rate for the last 125,000 years ranges between 0.05 and 0.08 mm/yr and the dip slip between 0.06 and 0.09 mm/yr. We consider the lower values to be more realistic when these are compared with the earlier studies.

The change in the scarp direction visible at the La Porquerola site is attributed to the change in the fault direction, although the erosion evidenced by the adjacent gully influenced the architecture of trench 8.

The stratigraphic and structural analyses of the different units at trench 8 constrain one clear paleoseismic event and two less certain ones. The datings obtained give an age for the two oldest paleoseismic events (3 and 2) ranging from 125,000 to 34,000 years BP and an age younger than 13,500 years BP for the last one (event 1). Events 1, 2 and 3 at trench 8 can be correlated with events Z (3,000-25,000 years BP), Y (35,000-50,000 years BP) and X (50,000-100,000 years BP) described in earlier studies. Event Z had not been described at the La Porquerola site to date.

Fig. 1. Location maps: a) location of the Catalan Coastal Ranges within the Iberian Peninsula. The map shows the Neogene basins and the faults with Neogene extension; b) location of the El Camp Fault within the Catalan Coastal Ranges. The map shows the en échelon array of NW-SE Neogene listric faults in the Catalan Coastal Ranges.

Fig. 2. The El Camp basin geomorphological map (modified from Masana et al., 2001b). Location is shown in fig. 1. The map shows the distribution of the different alluvial fan generations as well as the zones where the fault scarp intersects the different alluvial fans.

Fig. 3. Microtopographic map of the trench site. The map shows the location of trenches 8 and 4 and the long profiles 1 and 3. The fault trace is also shown. The contour line interval is 0.5 m.

Fig. 4. Topographic profiles 1 and 3.
A maximum and a minimum offset are plotted for each profile, taking into account that the original fan slope surface could correspond to the surface on the downthrown (dotted line) or upthrown block (dashed line). The position of trench 8 is projected on profile 1. The location of the two profiles is shown in fig. 3. The vertical scale is exaggerated.

Fig. 5. Trench 8 photograph taken from the southeast. The people in the photograph are on the upthrown block of the fault.

Fig. 7. Photographic assembly showing the fault zone on the south wall of trench 8. The photograph shows the complexity of the fault zone as well as the weathering degree of the units located close to the fault. The main features and sedimentary units are shown (see the legend in fig. 6).

Table I. Trench 8 dating results. 13C values are the assumed values according to Stuiver and Polach (1977) when given without decimal places. Values measured for the material itself are given with a single decimal place. The quoted age is in radiocarbon years using the Libby half-life of 5568 years and following the conventions of Stuiver and Polach (1977). Radiocarbon concentration is given as fraction Modern, D14C, and conventional radiocarbon age (Stuiver and Reimer, 1993). Sample preparation backgrounds have been subtracted, based on measurements of samples of 14C-free coal for ERT8-2 and ERT8-8, and of 14C-free calcite for ERT8-6. Backgrounds were scaled relative to sample size. Comments: the material dated was acid-alkali-acid treated charcoal. The large uncertainty for ERT8-2 is due to the small sample size.
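For readers who want to sanity-check the magnitude bracket quoted in the comparison with earlier studies, the sketch below applies empirical scaling of the Wells and Coppersmith (1994) type to the single-event displacements. The regression coefficients used here are the commonly quoted all-slip-type values recalled from memory and are an assumption — the exact regressions used in the paper may differ — so the output is illustrative only; it lands in the same ballpark as the quoted Mw 6.3-6.8 and 13-24 km brackets.

```python
import math

def mw_from_max_displacement(md_m: float) -> float:
    """Mw from maximum displacement MD (m); assumed all-slip-type coefficients."""
    return 6.69 + 0.74 * math.log10(md_m)

def srl_from_mw(mw: float) -> float:
    """Surface rupture length (km) from Mw; assumed all-slip-type coefficients."""
    return 10 ** ((mw - 5.08) / 1.16)

for md in (0.4, 0.7, 1.0, 1.4, 2.0):   # single-event displacements quoted in the text
    mw = mw_from_max_displacement(md)
    print(f"MD = {md:.1f} m -> Mw ≈ {mw:.1f}, SRL ≈ {srl_from_mw(mw):.0f} km")
```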
A model to predict S-N curves for surface and subsurface crack initiations in different environmental media The influence of environmental media on crack propagation of a structural steel at high-cycle and very-high-cycle fatigue (VHCF) regimes was investigated with the fatigue tests in air, water and 3.5% NaCl aqueous solution. The fatigue strength in water and 3.5% NaCl solution is significantly decreased and the cracking morphology due to different driving forces is presented. A model is proposed to explain the influence of environmental media on fatigue life, which reflects the variation of fatigue life with applied stress, grain size, inclusion size and material yield stress. The model prediction is in good agreement with experimental observations. (C) 2013 Elsevier Ltd. All rights reserved. Introduction Very-high-cycle fatigue (VHCF) [1], (also named ultra-high-cycle fatigue [2], ultra-long-life fatigue [3], or gigacycle fatigue [4]) of metallic materials is generally regarded as fatigue failure at stress levels below conventional fatigue limit and the corresponding fatigue life beyond 10 7 loading cycles. Lots of modern engineering structures and components, such as airplanes, turbines, nuclear structures, automobiles and high speed trains are expected to have fatigue life in the range of 10 7 to 10 10 load cycles. One typical feature of VHCF for high strength steels is that the S-N curve consists of two parts corresponding to subsurface and surface crack initiation, resulting in a stepwise or duplex shape [5][6][7][8][9][10][11]. Generally, the crack initiation in VHCF regime is observed as fisheye pattern on fracture surface, which is located at specimen subsurface region and originated from a nonmetallic inclusion for high strength steels [3,[5][6][7][8][9][10][11][12][13][14][15][16]. Since the pioneering work performed by Naito et al. [17,18] and by Atrens et al. [19], there have been a variety of studies on the VHCF behavior for different materials. Among these studies, the crack initiation mechanism in VHCF attracted most of the attention. Murakami et al. [3] attributed the mechanism of subsurface crack initiation to the interaction of hydrogen embrittlement with cyclic damage. Bathias and Paris [4] found that subsurface crack initiation originated from either nonmetallic inclusions or other microstructural inhomogeneities, e.g. perlite colonies and long platelets. They [4] argued that the probability of finding a sufficient stress concentration inhomogeneity is much higher in the interior of the material than at the surface. Nishijima and Kanazawa [15] attributed the reason why the fatigue life for internal failure is longer than that for surface failure to the fact that the stress intensity factor for flaws with the same size in the material interior is smaller than that at the surface. The influence of some factors, such as loading frequency [12,14,20,21], surface finishing condition [8,16,22], material microstructure state [23][24][25] and environmental media [10,11,26], on the VHCF properties of high strength steels has been widely studied. Among them, the effect of ultrasonic frequency on the estimated fatigue strength has been intensively studied in order to make sure that the fatigue results obtained by using ultrasonic testing and conventional fatigue equipment with a low frequency are comparable. Stanzl-Tschegg and Mayer [20] showed that the frequency influences might be divided into intrinsic and extrinsic. 
The former one is related to strain rate, dislocation structures, crack formation and propagation. Whereas, the latter influence includes the correlation of test frequency with environmental effect, influence of creep, specimen heating during ultrasonic testing, etc. However, Furuya et al. [14] found that the loading frequency does not have significant impact on the VHCF behavior of a high strength steel. Recently, Zhao et al. [12] showed that loading frequencies do have effect on fatigue strength of materials, but for materials with some specific microstructure the resultant of the effect may defer. Loading frequencies have little influence on specimens with high strength, while for the specimens with low tensile strength the fatigue resistance is markedly high in ultrasonic testing. Shiozawa and Lu [16] found that for surface shot-peened specimens, due to the surface residual stress induced by shot peening subsurface crack initiation dominated. We [8] studied the influence of surface notch on VHCF behavior of a structural steel, which revealed that surface notch decreases the fatigue strength and the possibility of subsurface crack initiation. Krupp et al. [25] studied the effect of the microstructure of an austenitic-ferritic duplex steel on the VHCF behavior and revealed that the formation of slip bands caused by fatigue damage in VHCF regime leads to the initiation and propagation of microstructurally short cracks in a very localized manner. Lei et al. [5] found that the inclusion size and location has a significant impact on VHCF behavior for high strength steels. The degradation of VHCF strength caused by the increase of inclusion size is ascribed to the decrease of the critical stress of fine-granular-area (FGA) formation for large inclusions. We [10] investigated the effect of environmental media on the fatigue strength and crack initiation of a high strength steel in VHCF regime and the decrease of fatigue strength in environmental media is reported. However, the crack initiation and propagation process of high strength steels in environmental media in VHCF regime is still not clear. In addition to experimental investigations, theoretical or empirical models for fatigue strength and life prediction in VHCF regime are of significant importance for both scientific and engineering applications. Murakami et al. [3] developed a model to predict fatigue strength in VHCF regime based on crack initiation site, crack area and material hardness. Hong et al. [6] demonstrated that the formation of FGA is responsible for a majority part of total fatigue life. It is shown in [27][28][29] that in VHCF regime the crack growth constitutes insignificant portion of the total fatigue life. Instead, the importance of fatigue crack initiation stage has been repeatedly emphasized. Chapetti et al. [30] showed a relation between the FGA size, the inclusion size and the fatigue life by fitting the experimental data of high strength steels. Liu et al. [31] proposed an expression in the form of Basquin equation for predicting the S-N curves based on the fatigue strengths at 10 6 cycles and at 10 9 cycles. Lai et al. [32] presented a unified model, which provides the prediction of fatigue behavior of hardened steels in different regimes, that is, low cycle fatigue regime quantified by the tensile strength, high cycle fatigue regime obeying Basquin's law and VHCF regime featured by the fisheye and FGA surrounding an initiating inclusion on the fracture surface. 
A combination of the deterministic model with a stochastic model describing the inclusion size distribution allows prediction of fatigue strength and the associated reliability of a steel component. Sun et al. [33] developed a model for estimating the fatigue life of high-strength steels in high cycle and VHCF regimes with fisheye mode failure based on the cumulative fatigue damage, which takes into account the inclusion size, FGA size and tensile strength of materials. We [9,10] developed a model to investigate the competition between surface and subsurface crack initiation at VHCF regime, and showed that high strength steels with fine grain size tend to initiate crack in the subsurface, whereas surface notch and environmental medium will lead to surface crack initiation. However, models to predict S-N curves in VHCF regime in different environmental media are still lacking due to the complicated crack initiation mechanisms. Recently, new models have been further proposed to predict or estimate the fatigue life for high cycle and VHCF regimes by taking into account the failure mechanism and cumulative damage characteristics [34][35][36][37]. In this paper, the process of crack initiation and propagation for a high strength steel in environmental media in VHCF regime is investigated. The specimens of a structural steel were subjected to rotary bending up to VHCF regime in the environments of laboratory air, fresh water and 3.5% NaCl aqueous solution, respectively. The influence of environmental media on the variation of fatigue strength and cracking process is presented. Based on the experimental observations, a model is proposed to study the S-N curves of the material in high cycle and VHCF regimes in different media. Material and experimental method In this paper, hour-glass type specimens (Fig. 1a) of a structural steel 40Cr (main compositions: 0.4% C and 1% Cr) were tested with a rotary bending machine operating at a frequency of 52.5 Hz and the testing environments were of three types: laboratory air, fresh water and 3.5% NaCl aqueous solution, respectively, so as to investigate the influence of environmental medium on the variation of fatigue strength and cracking process. The average size of original respectively. In addition, the hour-glass type specimens with Vnotch at the reduced section, as shown in Fig. 1(b), were also used to investigate the process of fatigue crack propagation. Based on the fatigue test data and scanning electron microscopy (SEM) observations of fracture surfaces, the effect of environment on the fatigue behavior at high cycle and VHCF regimes was examined. The mechanism of crack initiation and propagation under different environmental media was discussed. S-N curves For specimens tested in laboratory air (triangular symbols in Fig. 2), single crack originated from the surface of the specimens with fatigue life less than 10 7 loading cycles and the corresponding stress levels are above 700 MPa, whereas the crack started from subsurface for the specimens with fatigue life beyond 10 7 loading cycles and the stress levels are below 700 MPa. SEM observations showed that for the specimens cyclically fractured in laboratory air, the crack origination is due to a single origin from either surface or subsurface initiation. For fatigue testing in fresh water, a similar stepwise S-N curve is presented (square symbols in Fig. 2), but the stress of transition part in the S-N curve is dramatically decreased. 
The maximum stress with respect to the failure cycles of 10^5 is 600 MPa, which is about 70% of the value obtained in the laboratory air testing. When the failure cycles extend to 5 × 10^5, the corresponding maximum stress is 350 MPa for the water medium testing, which is only half of the value obtained in the laboratory air testing. In the transition part, the average value of the maximum stress (260 MPa) is only 36% of that given in the laboratory air testing case (720 MPa). In the VHCF regime, the difference of the maximum stress between the two cases is even larger. The big difference of fatigue strength presented by these two S-N curves implies that the environmental effect of fresh water on the degradation of fatigue strength is remarkable. In the case of fatigue testing in the water medium, for fatigue lives shorter than 5 × 10^5 loading cycles, the crack origination mode observed by SEM is surface-related initiation, the same as the case of laboratory air. For fatigue lives beyond 5 × 10^5 cycles, namely the high cycle fatigue and VHCF regimes, the crack origination observed is a mixed mode of surface-related and subsurface initiation.

For the fatigue tests in 3.5% NaCl aqueous solution, the S-N curve (lower part of Fig. 2) displays a continuously descending shape. The fatigue strength is even lower than that tested in the water medium from the low cycle to the VHCF regime, implying that the effect of 3.5% NaCl aqueous solution on the degradation of fatigue strength for the structural steel is more remarkable than that of the water medium. For the case of fatigue testing in 3.5% NaCl aqueous solution (circular symbols in Fig. 2), the crack origination observed by SEM for all the specimens is the mixed mode of surface-related and subsurface initiation.

The fatigue strength in aqueous solution is substantially lower than that in laboratory air, and the reduction increases gradually with decreasing stress level. It is also seen from Fig. 2 that the ratio of the applied maximum stress for the case tested in fresh water to that in air for the failure life of 10^7 is about 34%. This ratio decreases to 21% in the vicinity of 10^8 failure cycles. For the case tested in 3.5% NaCl aqueous solution, the fatigue strength decreases even more dramatically. The ratio of the applied maximum stress for the case tested in 3.5% NaCl aqueous solution to that in air is only 10% for 10^7 failure cycles, and drops further to 5.8% in the vicinity of 10^8 failure cycles. The above description is expressed as Eqs. (1) and (2), where σ_max^w is the applied maximum stress for the tests in fresh water, σ_max^s is the applied maximum stress for the tests in 3.5% NaCl aqueous solution, and σ_max^a is the applied maximum stress for the tests in laboratory air.

Fractography for specimens tested in air

For the specimens tested in laboratory air, all the fatigue fracture surfaces of both surface initiation and subsurface initiation modes present the morphology of three regions, as shown in Fig. 3(a). Region A [Fig. 3(b)] is the crack initiation and early propagation zone, in which the crack propagation velocity is very slow, producing a relatively smooth fracture surface with transgranular cleavage-like morphology and fatigue striations. This region is responsible for a substantially large part of the total fatigue life. As shown in Fig. 3(b), the crack initiated at the subsurface of the specimen in the VHCF regime, forming a fisheye pattern originated from a nonmetallic inclusion with main chemical compositions examined as Al, Ca and O.
The average size of the inclusion located inside the fisheye and acting as the crack origin is 12 µm, obtained from 10 measurements. Region B is the steady and relatively fast crack growth zone, and Fig. 3(c) is a local micrograph of this zone showing quasi-cleavage morphology. Region C is the final fracture zone, and the fracture surface presents the ordinary morphology of a dimple pattern [Fig. 3(d)]. Regarding the inner boundary as the crack tip for Regions A and B, one may calculate the stress intensity factor K_I via Eq. (3). In the calculation, Region A is regarded as elliptical in shape and Region B as circular [9]. The values of K_I remain almost constant at 16 MPa·m^(1/2) from the high cycle to the VHCF regime for Region A. The values of K_I for Region B are between 35 and 60 MPa·m^(1/2), which correspond to the material fracture toughness.

The values of K_I for Regions A and B are used to calculate the crack tip plastic zone size based on Eq. (4). The plastic zone size for Region A is about 12.1 µm, which approximately equals one grain size. In the crack initiation and early propagation stage (Region A), the grain boundary serves as a microstructural obstacle. In Region B, the plastic zone size ranges between 57.7 µm and 169.5 µm. With the increase of the plastic zone size, the crack propagation rate increases significantly. In Region C, with the increase of the crack driving force and the decrease of the ligament of the specimen, the specimen displays a plane stress state. Thus, the fracture morphology shows a shear fracture with an angle of 45° along the tensile direction. For the tested specimens, the fracture is under plane strain conditions in the crack initiation stage, and the crack tip has a high constraint as a result of the small plastic zone size and high stress triaxiality. With the decrease of the ligament in Region C, the crack tip suffers a constraint loss, which causes large plastic deformation.

It is noted that the fatigue life consumed by the FGA within a fisheye for high strength steels constitutes a significant portion of the relevant total fatigue life, which is further confirmed by our recent investigation [6]. For a high strength steel, the size of the FGA ranges from 40 to 100 µm and the size of the fisheye is between 100 and 300 µm [6]. Normally, the size of the fisheye comprises about 10 grains, and the size of the FGA comprises between 2 and 4 grains. It is proposed that the size of the FGA is the intrinsically characteristic dimension of crack initiation for VHCF [6], with ΔK_FGA = ΔK_th [6,13]. The mechanism of crack initiation and early propagation from the subsurface for VHCF due to the fisheye (containing the FGA) differs from that of surface short cracks. The latter is attributed to the PSBs induced by localized plastic deformation. For the case of VHCF with subsurface crack initiation originated from inclusions, the nominal stress is below the value of the conventional fatigue limit and the localized surface slip deformation becomes un-activated. Thus, the site of a subsurface inclusion becomes the weak point to act as a crack origin after a large number of loading cycles. Therefore, there is a competition process of crack initiation from the surface or subsurface [9,10]. Indeed, the micro mechanism of crack initiation from the subsurface due to nonmetallic inclusions is still not very clear, and needs further in-depth investigations. It is certain that the size of the inclusion responsible for crack initiation is vital with respect to the cracking mechanism and to the fatigue life [5,33].
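To make the Region A and Region B numbers above concrete, here is a minimal sketch of the two estimates. The interior-crack expression assumes Murakami's widely used form K_I ≈ 0.5σ√(π√area), and the plastic zone assumes the plane-strain Irwin estimate r_p = (K/σ_y)²/(6π); the paper's exact Eqs. (3) and (4) were not recovered in the extraction, so both forms are assumptions, and the yield stress of ≈1060 MPa is back-solved so that the quoted 12.1 µm and 57.7-169.5 µm zone sizes are reproduced.

```python
import math

def k_interior(sigma_mpa: float, area_um2: float) -> float:
    """Murakami-type K_I (MPa*m^0.5) for an interior crack of projected area in um^2."""
    sqrt_area_m = math.sqrt(area_um2) * 1e-6   # sqrt(area), um -> m
    return 0.5 * sigma_mpa * math.sqrt(math.pi * sqrt_area_m)

def plastic_zone_um(k_mpa_sqrt_m: float, sigma_y_mpa: float = 1060.0) -> float:
    """Plane-strain Irwin plastic-zone size in micrometres (sigma_y back-solved)."""
    return (k_mpa_sqrt_m / sigma_y_mpa) ** 2 / (6.0 * math.pi) * 1e6

for k in (16.0, 35.0, 60.0):   # Region A and Region B values quoted in the text
    print(f"K_I = {k:4.1f} MPa*m^0.5 -> r_p = {plastic_zone_um(k):6.1f} um")

# e.g. a 12 um inclusion (area ~ pi * 6^2 um^2) loaded at 700 MPa:
print(f"K_I at the inclusion: {k_interior(700.0, math.pi * 36.0):.1f} MPa*m^0.5")
```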
In most cases, the inclusion size is smaller than the FGA size. Thus, the morphology of the FGA appears, and a smaller inclusion leads to a longer fatigue life due to the larger capacity for related cracking within the FGA. For some other cases, the size of the inclusion responsible for crack initiation is larger than the intrinsic FGA size. Thus, the condition for FGA formation is no longer met, which leads to a relatively smaller value of total fatigue life. Also, other factors such as inclusion propensity and the interface cohesion between inclusion and matrix have an influence on the behavior of crack initiation and therefore on the total fatigue life.

Fractography for specimens tested in environmental media

The observations show that the crack initiation in aqueous solution is of multiple crack origins (Fig. 4b) with a surface (Fig. 4a) and subsurface (Fig. 4c) mixed mode. This indicates that the aqueous solution promotes surface crack initiation. As an example, subsurface crack initiation at nonmetallic inclusions is observed for specimens tested in the aqueous solution in VHCF, as shown in Fig. 4(b-d), where the crack initiated at several small nonmetallic inclusions. The diameter of the inclusions is about 3.5 µm, and the initiated small cracks coalesced to form an initial crack. This implies that in environmental media, attention should be paid to both small and large inclusions with regard to the increase of fatigue life, although the coalescence effect of the cracks on the fatigue behavior still needs further study. In addition, it is commonly believed that a maximum inclusion size exists, below which no crack would initiate at inclusions. The maximum inclusion sizes are 5 µm [38], 8 µm [39] and 15 µm [40] for different steels tested in air. Yang et al. [41] proposed an empirical expression (Eq. (5)) to calculate the maximum size for high strength steel, i.e. the critical inclusion size (CIS), by correlating the relation between fatigue strength and Vickers hardness, where a is 0.813 for surface, 0.528 for subsurface and 0.969 for interior inclusions. In this study, the CIS for interior inclusions is calculated to be 3.2 µm according to Eq. (5). This is in general agreement with the experimental observations of previous studies.

For the case of fatigue testing in water and 3.5% NaCl aqueous solution, the crack origination observed by SEM is mainly surface-related initiation. Additionally, unlike the single crack origin for specimens tested in air, multiple fatigue crack origins were observed, and the fracture surface morphology in the fatigue crack steady growth zone is predominantly intergranular, as shown in Fig. 5(a). From the measurements on SEM micrographs, the ratio of intergranular morphology is about 75% for the specimens tested in water, indicating that in the water medium, fatigue crack growth along grain boundaries is a major mechanism. For the specimens tested in 3.5% NaCl aqueous solution, the intergranular morphology is about 90% in the fatigue crack steady growth zone, indicating that crack growth along grain boundaries is the dominant mechanism. Fig. 5(a and b) also shows secondary cracks along grain boundaries and the cross section of the specimen, which is the phenomenon of grain boundary embrittlement due to the aqueous environmental effect. The presence of widespread secondary cracks is observed in the fatigue crack propagation period for the cases tested in water and in 3.5% NaCl aqueous solution, which is damage evidence of the environmental media acting on the material.
Crack propagation process for specimens tested in 3.5% NaCl solution

In addition to the SEM observation on the fracture surfaces of the broken specimens after fatigue testing, we designed a specific method for further examination of the fatigue crack propagation process by taking advantage of a low-temperature breaking technique, for which a group of 7 specimens were cyclically loaded in 3.5% NaCl aqueous solution at the loading value of σ_max^s = 22.3 MPa. The specimens used for this examination are also of the hour-glass type but with a V-notch at the reduced section, as shown in Fig. 1(b). One of the specimens was loaded to fatigue failure, and the others were controlled to terminate at different loading cycles (in sequence) before failure. Then the unloaded specimens were broken by means of the low-temperature fracture technique, i.e. the unloaded specimen was broken immediately after being immersed in liquid nitrogen for about 20 min, such that the morphology produced during the process of fatigue crack propagation can be separated from the morphology formed at low-temperature fracture. The loading cycles of the specimens, which are ready for the observation of fatigue crack propagation, are listed in Table 1. Before the observation on the fracture surfaces of the fatigue unloaded specimens, we checked and confirmed that the low-temperature fractography of a specimen without fatigue testing is normally of cleavage and quasi-cleavage morphology.

One important aspect is the observation of the unloaded specimens, for which the failure cycles of specimen S_f were 8 × 10^7 at σ_max^s = 22.3 MPa. Fig. 6 is the SEM photograph of the whole fracture surface for specimen S_f. It is observed that the typical morphology of multiple crack origins prevails at the specimen surface or subsurface, covering almost the circumference of the notch root, and that a large portion of the fracture surface resulted from corrosion fatigue cracking, which is 65.2% as obtained from image analysis. The observations on the unloaded specimens show clear evidence of fatigue crack initiation and early growth at the surface and subsurface in the circumference of the V-notch specimen. The fraction of fatigue cracking surface, as listed in Table 1, is small for the specimen unloaded after 10^5 cycles of loading, and it increases with fatigue loading cycles until failure. Fig. 7(a and b) are two examples of the specimens unloaded after fatigue cycling of 6 × 10^6 and 5 × 10^7, namely specimens S_4 with a cracking area fraction of 24.3% and S_6 with that of 48%, respectively. The cracking area fraction of the 7 specimens measured by the image analysis method increases with the number of loading cycles.

The variation of the crack area fraction in the environmental media is attributed to the crack propagation mechanism in the solution. Under aqueous environment, the failure mechanism of high strength steel has been widely confirmed as hydrogen-induced embrittlement [42-45]. The hydrogen effect is superimposed by the triaxial stress state or the stress concentration due to the material heterogeneity to cause stress corrosion cracking or corrosion fatigue [44,45]. In the process of hydrogen-induced failure, the diffusion and concentration of hydrogen is critical to the fatigue damage.

Table 1. Loading cycles and cracking area fraction of V-notch specimens tested in 3.5% NaCl aqueous solution. (a) Specimen S_f broke after cyclic loading; the other specimens stopped loading at the given loading cycle and were then broken in liquid nitrogen.
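A rough trend sketch of how the cracking area fraction in Table 1 grows with loading cycles: the three data points are the ones quoted above, and the power-law form is an illustrative choice, not the authors' analysis.

```python
import numpy as np

cycles = np.array([6e6, 5e7, 8e7])         # specimens S4, S6 and Sf
fraction = np.array([24.3, 48.0, 65.2])    # cracking area fraction, %

# Least-squares fit of fraction ~ a * N^b in log-log space.
b, log_a = np.polyfit(np.log(cycles), np.log(fraction), 1)
print(f"fraction ≈ {np.exp(log_a):.3g} * N^{b:.2f}")  # monotonic growth with N
```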
It is obvious that a longer period of exposure to the aqueous environment during cyclic loading will lead to deeper hydrogen diffusion into the specimen and therefore a more severe effect of the test environment superimposed on the mechanical loading. This is the qualitative explanation of the increase of the cracking area with loading cycles, and of why the difference in fatigue strength between the cases tested in air and in aqueous solution is large at higher failure lives; i.e., the maximum stress is inversely related to the fatigue failure cycles when the aqueous environment is introduced, as previously shown in Eqs. (1) and (2) and Fig. 2.

Model for S-N curve prediction in different environmental media

It is known that crack initiation related to inclusions is attributed to the weak cohesive state between inclusion and matrix. Under cyclic loading, a crack may easily form due to interface debonding and grow into the matrix. In such a case, the subsurface crack initiation cycle N_i can be expressed following [46], where W_i is the surface energy related to subsurface crack initiation, l is the grain radius and ΔU_i is the unit increment of energy for subsurface crack initiation. W_i and ΔU_i are functions of the grain radius l, the inclusion radius r (w = r/l), the stress amplitude Δσ and the resistance to dislocation movement k (u = 0.5Δσ/k). N_i is then normalized to give the dimensionless initiation life n_i.

The variation of n_i with u and w is demonstrated in Fig. 8 by assuming u to be 1.1, 1.2, 1.4, 2 and 4, and w varying from 0 to 2. It is shown that the fatigue life n_i increases with the decrease of u, i.e. the decrease of the fatigue loading Δσ or the increase of the resistance to dislocation movement k. For a given loading state (u constant), the fatigue life generally decreases with the increase of w, namely the increase of the inclusion size r or the decrease of the grain size l. These trends are in agreement with experimental observations. Yang et al. [41,47] observed that the fatigue life decreases with the increase of inclusion size for an alloy steel. It is widely observed that the fatigue life increases with the decrease of applied loading. Zhao et al. [12] found that the fatigue life increases with the resistance to dislocation movement, i.e. the yield stress of the material.

For fatigue crack initiation at the surface, by considering the surface crack factor and the half-cycling process [46], the surface crack initiation cycle N_s is obtained analogously, where ΔU_s is the unit increment of energy for surface crack initiation and W_s is the surface energy related to surface crack initiation. W_s and ΔU_s are functions of the grain radius l, the inclusion radius r, the stress amplitude Δσ and the resistance to dislocation movement k, and ΔU_s is approximated following [10,46]. Note that both n_s and n_i are functions of u and w. In short, Eqs. (17) and (21) are derived for the calculation of the fatigue life for crack initiation at the surface or at the subsurface in different environmental media. For the case tested in air, k_w is taken as 3 in the calculation [48]. For the case tested in 3.5% NaCl solution, k_w is taken as 25 times that in air, i.e. 75, from the relationship of K_IC in air and in the aqueous solution [10,49].

The fatigue life for surface crack initiation n_s and for subsurface crack initiation n_i in air as a function of u and w is compared in Fig. 9(a). It is seen that the subsurface crack initiation life is higher than the surface crack initiation life for the same high value of u (high loading or low material yield stress).
Thus, surface crack initiation occurs much more easily in this stage. With decreasing u, the surface crack initiation life becomes higher than the subsurface crack initiation life at the same u, which as a consequence leads to subsurface crack initiation. At points A, B and C, the subsurface crack initiation life equals the surface crack initiation life. The three points correspond to the transition plateau in an S-N curve from subsurface to surface crack initiation. Fig. 9(b) compares the surface crack initiation life with the subsurface crack initiation life in 3.5% NaCl solution. A similar trend to that in air is present. However, the fatigue life for surface crack initiation n_s in 3.5% NaCl solution is significantly decreased, which interprets the observation in Section 5 that surface crack initiation occurs even in the VHCF regime. This is due to the fact that aqueous media promote surface crack initiation in the competition of surface versus subsurface crack initiation. It is also seen from Fig. 9(a and b) that when subjected to the same loading, the fatigue life in air is much longer than that in aqueous media. This explains the characteristics of the S-N curves for the fatigue behavior in Fig. 2. In addition, it is seen that no pronounced stepwise tendency appears in the transition. If the uncertainty of the fatigue life is considered, the S-N curves for subsurface and surface fatigue fall into the same scatter band. This is in agreement with the experimental results by Wang et al. [50] obtained from push-pull fatigue tests of tool steels, as shown in Fig. 10. Also, the S-N curve in Lai et al. [32] agrees well with our model. The stepwise S-N curve (Fig. 2 for air) obtained from the rotary bending test is partly explained by the smaller "control volume" of the specimen due to the stress gradient along the specimen section. The effect of the control volume has a significant impact on the crack initiation and fatigue life of the specimen according to the weakest-link concept.

It is shown in Fig. 9(a and b) that the loading curves at different u tend toward a plateau with increasing fatigue life. This is the threshold for plastic deformation, below which the dislocations are assumed to be locked in this model.

In all, this model predicts that the fatigue life decreases with the applied loading and the inclusion size, whereas it increases with the material yield stress. The plateau of the S-N curve corresponding to the transition from surface to subsurface crack initiation is predicted. In 3.5% NaCl solution, the fatigue life decreases significantly and surface crack initiation occurs even in the VHCF regime. The competition of crack initiation at the surface and subsurface in different environmental media is also predicted by the model. Note that this model qualitatively predicts S-N curves for surface and subsurface crack initiation in different environmental media. A quantitative prediction of S-N curves based on this method is planned.

Conclusions

Based on this study, the following conclusions are drawn:

(1) During the crack propagation process for specimens tested in air, the fracture surface displays three regions with different propagation mechanisms. The formation of the different morphologies in these regions is attributed to different crack driving forces and plastic zone sizes (crack tip constraint) ahead of the crack tip.

(2) The values of fatigue strength for specimens tested in water and in 3.5% NaCl aqueous solution are significantly decreased compared to those tested in air.
The fractography characteristics for specimens tested in aqueous solution are multiple crack origins and an intergranular cracking mode, with widespread secondary cracks in the fatigue crack steady propagation period.

(3) For fatigue testing in water and 3.5% NaCl aqueous solution, subsurface crack initiation is observed at small nonmetallic inclusions. The cracks initiated and coalesced into a larger crack.

(4) For fatigue testing in 3.5% NaCl aqueous solution, the cracking area fraction of specimens increases with loading cycles, which is attributed to the effect of mechanical cycling superimposed by the corrosive action of the environment.

(5) A model is proposed to study the relationship between fatigue life, applied stress and material properties in VHCF in different environmental media. This model predicts that the fatigue life decreases with the increase of loading and inclusion size, whereas it increases with the material yield stress. In 3.5% NaCl solution, the fatigue life decreases significantly and surface crack initiation occurs even in the VHCF regime. The competition of crack initiation at the surface and subsurface in different environmental media is also predicted by the model. The model prediction is in good agreement with experimental observations.
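As a small numerical footnote to conclusion (2), the sketch below reproduces the environmental knock-down ratios read from Fig. 2: the air-curve stress at 10^7 cycles is taken as about 700 MPa from the text, and the water and salt values are back-computed from the quoted 34% and 10% ratios, so the numbers are illustrative rather than measured inputs.

```python
def knockdown(sigma_env_mpa: float, sigma_air_mpa: float) -> float:
    """Ratio of the environmental maximum stress to the laboratory-air value."""
    return sigma_env_mpa / sigma_air_mpa

# Approximate readings at 1e7 failure cycles (assumed values, see lead-in).
sigma_air, sigma_water, sigma_salt = 700.0, 238.0, 70.0
print(f"water/air: {knockdown(sigma_water, sigma_air):.0%}")  # ~34%
print(f"salt/air:  {knockdown(sigma_salt, sigma_air):.0%}")   # ~10%
```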
Machine-Learning-Aided Understanding of Protein Adsorption on Zwitterionic Polymer Brushes Constructing antifouling surfaces is a crucial technique for optimizing the performance of devices such as water treatment membranes and medical devices in practical environments. These surfaces are achieved by modification with hydrophilic polymers. Notably, zwitterionic (ZI) polymers have attracted considerable interest because of their ability to form a robust hydration layer and inhibit the adsorption of foulants. However, the importance of the molecular weight and density of the ZI polymer on the antifouling property is partially understood, and the surface design still retains an empirical flavor. Herein, we individually assessed the influence of the molecular weight and density of the ZI polymer on protein adsorption through machine learning. The results corroborated that protein adsorption is more strongly influenced by density than by molecular weight. Furthermore, the distribution of predicted protein adsorption against molecular weight and polymer density enabled us to determine conditions that enhanced (or weaken) antifouling. The relevance of this prediction method was also demonstrated by estimating the protein adsorption over a wide range of ionic strengths. Overall, this machine-learning-based approach is expected to contribute as a tool for the optimized functionalization of materials, extending beyond the applications of ZI polymer brushes. INTRODUCTION Zwitterionic (ZI) polymers are garnering considerable attention in practical applications owing to their ability to impart high hydrophilicity and antifouling properties at the material interface.They are notably used in coatings on various material surfaces, where the adsorption of contaminants (foulant) can impact the performance of the device, such as membranes for water treatment, 1 biosensors, 2,3 medical devices, 4 and drug delivery systems. 5The widely studied ZI polymers contain anions (such as phosphorylcholine, sulphobetaine, and carboxybetaine groups) and cations (primarily the ammonium group) pendent to a methacrylic polymer backbone.These ZI polymers possess unique properties that distinguish them from typical non-ionic polymers.The high antifouling properties of ZI polymers in aqueous environments result from the strong electrostatic interaction of the zwitterions with water molecules: they have 6−11 non-freezing and 4−11 intermediate water molecules per unit, which form a strong and thick hydration layer. 6This hydration layer inhibits the proximity of the foulant to the material interfaces and increases the Gibbs energy required for adsorption. 7,8As a result, ZI polymer brushes enable better stability and antifouling properties than typical hydrophilic polymer brushes [e.g., polyethylene glycol (PEG)], which form a hydration layer through hydrogen bonds.In addition, ZI polymers specifically interact with ions in solution.At low ionic strength in solution, ZI polymers adopt a collapsed conformation, owing to strong intra/interchain electrostatic dipole−dipole interactions. 9,10In contrast, as the salt concentration increases, the ions in solution shield this interaction, causing it to shift to an extended conformation.This effect, known as the "antipolyelectrolyte effect", exhibits an opposite behavior to that of typical polyelectrolytes. 
11,12esigning an optimum ZI polymer brush that maximizes the antifouling properties in each aqueous environment is an important but challenging subject.Experimental approaches considered the appropriate conditions, such as ionic groups, 13 aqueous ionic strength, 12,14−16 pH, 17 temperature, 18 and flow rate, 19 to exhibit excellent antifouling properties.Further, the effect of brush properties, such as polymer density 20 and molecular weight, 21,22 on protein adsorption has also been extensively studied via analytical methods, such as quartz crystal microbalance with dissipation monitoring (QCM-D) and surface plasmon resonance (SPR).However, conducting quantitative studies remains difficult because of the complex contribution of various factors with different physical origins to the adsorption phenomenon on polymer brushes.Therefore, no optimal brush structure is established for each water quality, and it is only empirically understood for the correlation between each factor and the adsorption properties.Machine learning (ML) is a powerful approach for such complex experimental systems.ML is a method that can analyze vast amounts of data in a very short period, enabling the prediction of unknown data and the analysis of the importance of descriptors from the learning of existing data.Indeed, in various fields, such as polymer chemistry, 23 batteries, 24 and catalytic chemistry, 25 the discovery of parameters important for performances 26 and optimization of the elemental composition of materials 27,28 have been successfully achieved.However, there are surprisingly few validations using ML for understanding protein adsorption on ZI polymer brush surfaces.To the best of our knowledge, only a few reports have predicted the amount of protein adsorption from brush-related descriptors. 29,30Notably, Liu et al. conducted ML using 94 entries to study the correlation between the polymer layer thickness in the dry state and the amount of protein adsorption. 29They obtained the following important conclusions: among the descriptors, the polymer layer thickness most considerably contributes to protein adsorption, and an optimal polymer layer thickness exists that minimizes adsorption.Some experimental approaches through contact angle measurement 18 and serum adsorption test confirmed the existence of an optimum polymer thickness (typically in the range of 30−60 nm) 31 that demonstrated the applicability of ML to the design of antifouling interfaces. Meanwhile, the integration of the ML and the design of the ZI polymer brushes are still in their early stages, and several aspects remain unclear.Initially, the external environment outside the polymer layer (i.e., the properties of the protein solution and the operating conditions, such as the flow rate) was largely overlooked.Furthermore, the characteristics of the polymer layer, which are the most important factors influencing the adsorption properties, are not sufficiently considered.Indeed, most previous studies (in experimental approaches 31,32 and material informatics 29,30 ) used the layer thickness in the dried state obtained by ellipsometry as the descriptor of the polymer layer.However, such thickness information does not adequately describe the polymer morphology in the swelling state.For instance, even if the layer thickness in the dry state is comparable, the layer thickness and polymer molecular weight on swelling may considerably differ. 
Therefore, to understand the fouling phenomenon and design an optimized antifouling surface, a comprehensive ML-based platform is required that incorporates the detailed features of the polymer layer and the external environment.

Herein, we developed a ML model for the fouling phenomenon of single-component protein adsorption on ZI polymer brushes. This model can evaluate the influence of the molecular weight and density of the brushes. In addition to a detailed examination of the brush structure, the model introduces descriptors for external factors to identify the key factors involved in protein adsorption. Herein, the impact of the molecular weight and density of the ZI polymer brush is separately assessed for the first time. As a result, the amount of protein adsorption (Figure 1) was predicted with high accuracy. On the basis of the constructed model, we identified the brush conditions that enhance antifouling properties.

2. DATA SET AND METHODS

2.1. Data Set Construction. Data samples were collected from previously reported literature that provided the density of the ZI polymer, the molecular weight, and the thickness, along with the corresponding protein adsorption. For the literature in which exact values were not mentioned in the text, the values were extracted from the graphs. We further excluded data where protein adsorption on the substrate (without polymer) was not stated and where the protein composition in the solution was unclear (e.g., adsorption data using fetal bovine serum). As a result, 12 descriptors were considered, as shown in Table 1. Furthermore, the correlation r_ab between descriptors a and b was assessed using the Pearson coefficient 34:

r_ab = (1/n) Σ_{i=1..n} (a_i − ā)(b_i − b̄) / (s_a s_b)

where a_i and b_i are the ith sample values, ā and b̄ are the means of the sample values, and s_a and s_b are the standard deviations of the sample values of descriptors a and b, respectively.

2.2. ML Models and Interpretations. All ML processes were performed using the Scikit-Learn Library 1.3.0 in Python 3.11.5. The Shapley additive explanations (SHAP) value for each feature was estimated using the SHAP Library 0.42.1. Herein, we examined the performance of six algorithms: multiple linear regression (MLR), least absolute shrinkage and selection operator regression (LASSO), ridge regression (Ridge), random forest regression (RFR), gradient-boosted regression (GBR), and extra-tree regression (ETR). The hyperparameters for each algorithm were first optimized using the grid search cross-validation (GridSearchCV) technique from the range of values provided in Table S1 of the Supporting Information to prevent underfitting and overfitting of the model. The prediction performance was evaluated with the root-mean-square error (RMSE) and the coefficient of determination (R²):

RMSE = sqrt( (1/n) Σ_{i=1..n} (ŷ_i − y_i)² )

R² = 1 − Σ_{i=1..n} (y_i − ŷ_i)² / Σ_{i=1..n} (y_i − y_m)²

where n is the total number of data, ŷ_i is the predicted value of the ith sample, y_i is the measured protein adsorption amount, and y_m is the mean value of all corresponding true values in the training set. A small value of RMSE indicates a better prediction by the model, and a value of R² close to 1.0 implies a better match between the measured and predicted data.
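As a sketch of how the descriptor screening of Section 2.1 might look in code — assuming the cleaned samples sit in a table with the Table 1 descriptor columns (the file name and column layout are hypothetical) — the following computes the pairwise Pearson matrix and flags strongly correlated pairs:

```python
import pandas as pd

# Hypothetical file: the 12 descriptors of Table 1 plus the adsorption target,
# one row per cleaned data sample.
df = pd.read_csv("zi_brush_dataset.csv")

corr = df.corr(method="pearson", numeric_only=True)
THRESHOLD = 0.9   # screening threshold discussed in the text

# Flag descriptor pairs whose |r| exceeds the threshold (none did in this study).
pairs = [(a, b, round(corr.loc[a, b], 2))
         for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:]
         if abs(corr.loc[a, b]) > THRESHOLD]
print(pairs or "no strongly correlated pairs; keep all 12 descriptors")
```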
To enhance model interpretability, we further implemented SHAP. SHAP is a method developed by Lundberg and Lee 35 based on coalition game theory to describe the output of ML models. Using SHAP values, we can quantify the contribution of each descriptor to the predicted value from the constructed model. The SHAP value φ_x for an input feature x (out of a total of n input features in the set F), given a prediction p from the constructed ML model, is represented by the following equation:

φ_x(p) = Σ_{S ⊆ F\{x}} [ |S|! (n − |S| − 1)! / n! ] ( p(S ∪ x) − p(S) )

where S represents a subset of features without feature x, p(S ∪ x) represents the prediction through ML considering feature x, and p(S) represents the prediction without considering feature x. Herein, SHAP analysis was conducted to analyze the impact of each descriptor on protein adsorption. The relative importance of each descriptor was quantitatively compared on the basis of the absolute SHAP values.

Notes to Table 1: (a) the types of zwitterionic polymers were defined as the amount of hydrated water per monomer unit (Figure S1 of the Supporting Information); (b) the charge of each protein was defined as pI − pH using the isoelectric point (pI) of the protein.

3. RESULTS AND DISCUSSION

3.1. Data Set of Protein Adsorption for the ZI Polymer Brush. Data sets of protein adsorption have previously been constructed for other polymer brush systems. However, the data set for ZI polymers remains unconstructed. Therefore, this study began by constructing a series of data samples that summarize protein adsorption together with brush properties (i.e., density, molecular weight, and polymer type), solution properties (i.e., pH, temperature, concentration, protein characteristics, and ionic strength), and operating conditions (i.e., flow rate). To construct a reliable data set from the literature, we need to consider the homogeneity of the polymer brushes. When brushes are formed by typical radical polymerization, the distribution of the molecular weight is wide; thus, we cannot properly assess the effects of the molecular weight and density. Therefore, all cited literature controls the molecular weight by introducing precisely controlled polymerization, such as atom transfer radical polymerization (ATRP). In the controlled grafting-to method, a brush can be formed by directly bonding the length-controlled polymer to the surface through terminal functional groups (e.g., thiol groups). In the controlled grafting-from method, surface-initiated ATRP (SI-ATRP) is primarily applied to the substrate. SI-ATRP allows for the indirect identification of the polymer molecular weight using sacrificial initiators, 40−42 and the polymer density is determined from the amount of introduced initiator.

On the basis of these experimental reports, we ultimately obtained 125 data samples without missing values, as provided in Table S2 of the Supporting Information. We removed entries 1 and 2 from the data set because protein adsorption on the substrate was extremely large and difficult to properly assess (see the Supporting Information and Figure S2 of the Supporting Information). Consequently, 123 data samples were used for ML.
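A compact sketch of this workflow — grid-searched random forest plus SHAP's TreeExplainer — is shown below. The file name, the "Adsorption" target label and the hyperparameter grid are placeholders standing in for the cleaned 123-sample data set and the ranges of Table S1; they are assumptions, not the paper's exact settings.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("zi_brush_dataset.csv")                 # hypothetical file name
X, y = df.drop(columns="Adsorption"), df["Adsorption"]   # hypothetical target label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Placeholder grid; the actual ranges are listed in Table S1 of the paper.
grid = {"n_estimators": [100, 300, 500], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)
model = search.best_estimator_
print("R2 (test):", model.score(X_test, y_test))

# TreeExplainer gives exact SHAP values for tree ensembles; the mean |SHAP|
# per column is the importance score of Figure 4b.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(dict(zip(X.columns, abs(shap_values).mean(axis=0).round(3))))
```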
As shown in Table 1, the data set contains five descriptors for the polymer brush properties and seven descriptors for the solution/operating conditions.Before implementing the ML model, we calculated the Pearson correlation coefficients for each feature.A strong correlation between the two features can increase the difficulty of the training model and may affect prediction accuracy.Thus, when the correlation coefficient between two features exceeds 0.9 43 or 0.95, 30,44 the feature that has a high correlation with the prediction target is typically retained and the other feature is deleted.Figure 2 shows that none of the 12 features used in this study has a strong correlation beyond this threshold; thus, we did not delete any descriptors from this study. Comparison and Selection of the Regression Methods. Three linear regression algorithms (MLR, LASSO, and RIDGE) and three decision tree-based nonlinear regression algorithms (RFR, GBR, and ETR) were performed using the previously constructed training and test data sets.Initially, ML was performed using all of the descriptors presented in Table S2 of the Supporting Information.Note that the thickness, molecular weight, and density are all included in the first run.However, thickness is the dependent variable of molecular weight and density: M n = hρN A /σ, where M n is the molecular weight, h is the dry brush thickness, ρ is the dry polymer density, N A is Avogadro's number, and σ is the chain density. 41,42,45Elimination of this overlap is considered in sections 3.4 and 3.5.Figure 3a presents the results of the analysis obtained from each algorithm, showing that the nonlinear regression algorithms have high accuracy.This suggests that the adsorption phenomenon cannot be represented by a simple linear sum of each descriptor.Figure 3b confirmed the superiority of the tree-based regression algorithms, presenting the efficiency of each algorithm as R 2 scores and RMSE.In a previous report that applied ML, similar trends are presented to protein adsorption on non-ionic polymer brushes. 30Figure 3b also indicates that RFR is the most suitable algorithm in this study; thus, RFR was used for further validation. 3.3.SHAP-Analysis-Based Importance Estimation of the Descriptors.Next, we quantitatively evaluated the importance of each descriptor in the built RFR model using SHAP.Large SHAP values are strongly weighted in the prediction; therefore, the magnitude of positive and negative SHAP values indicates the importance of the feature.Figure 4a shows that the descriptors of polymer density, molecular weight, and ionic strength negatively contributed to protein adsorption (i.e., higher values led to lower adsorption).In contrast, the flow rate and substrate adsorption positively contributed.These results align with previous experimental findings.For instance, Zhang et al. explicitly demonstrated an inverse correlation between the ionic strength and protein adsorption under various ionic species. 12Similarly, Amoako et al. showed that protein adsorption increased when the ZI polymer was under shear stress at high flow rates. 19The consistency of the SHAP analysis with these experimental facts supports the validity of the ML model constructed in this study. 
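Before turning to the brush-descriptor comparison, note that the thickness relation M_n = hρN_A/σ quoted in the regression discussion above converts directly into code. The helper below does the unit bookkeeping; the example values are illustrative, not taken from the data set.

```python
AVOGADRO = 6.022e23  # mol^-1

def mn_from_thickness(h_nm: float, rho_g_cm3: float, sigma_chains_nm2: float) -> float:
    """M_n (g/mol) from dry thickness h, dry polymer density rho and chain
    density sigma, via M_n = h * rho * N_A / sigma."""
    h_cm = h_nm * 1e-7                   # nm -> cm
    sigma_cm2 = sigma_chains_nm2 * 1e14  # chains/nm^2 -> chains/cm^2
    return h_cm * rho_g_cm3 * AVOGADRO / sigma_cm2

# e.g. a 10 nm dry layer at 1.3 g/cm^3 and 0.3 chains/nm^2 (illustrative values)
print(f"M_n ≈ {mn_from_thickness(10.0, 1.3, 0.3):,.0f} g/mol")
```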
Furthermore, we calculated the importance scores for each descriptor from the absolute average of the SHAP values (Figure 4b). Upon examination of the descriptors with high importance, polymer density had the highest score (+0.59), almost 3 times higher than that of the molecular weight M_n (+0.19). Both parameters effectively inhibit protein adsorption; however, the antifouling mechanisms are presumed to be different. High polymer density promotes the formation of a robust hydration layer. As a result, high density increases the osmotic pressure opposing protein insertion into the polymer layer. On the other hand, high polymer molecular weight prevents protein adsorption by increasing the distance between the substrate surface and the solution. Therefore, our results indicate that the formation of rigid hydration and the increase in osmotic pressure owing to high brush density are more important than the increase in diffusion distance from high molecular weight. In addition, solution/protein properties (Temp, Charge, pH, and Mpro) and the type of ZI polymer (Pol_Type) were found to have a small influence (<0.05) on protein adsorption. The minimal influence of the solution temperature and pH may be because the cited literature adopted the applicable pH and temperature ranges 42 of the ZI polymers. Meanwhile, the small influence of the monomer structure (Pol_Type) and protein properties (Charge and Mpro) differs from the results acquired by ML for non-ionic hydrophilic polymers, 29,30 indicating that this is a property specific to the ZI polymer brushes. This is likely because typical hydrophilic polymers, such as PEG, rely on their own steric repulsion, 8 whereas the ZI polymers operate through the hydration layer on the polymer brush. Ultimately, we excluded these parameters and did not consider them in the later ML modeling. Moreover, Figure 4b also shows that descriptors such as ionic strength (+0.17), protein concentration (+0.13), and flow rate (+0.06) have intermediate importance. Thus, for the first time, the importance of the interface, operating conditions, and external environment for protein adsorption was quantified, which had not been understood even qualitatively thus far.

3.4. Consideration of the Descriptor for the Polymer Brush. The grafted chain configuration, which is estimated from the Flory radius, is a well-known indicator of the degree of the brush state. It is characterized by the ratio s/2R_F, where R_F is the Flory radius of the hydrated polymer, expressed as R_F = lN^(3/5) (l is the monomer length of ≈0.3 nm and N is the degree of polymerization), 49 and s is the average distance between grafted polymers, which is expressed as the inverse of the square root of the polymer density. Generally, the polymer chain forms a mushroom structure at s/2R_F ≫ 1 and a brush structure at s/2R_F ≪ 1, and a weakly overlapped structure is formed between mushrooms and brushes. 50
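A small sketch of the configuration indicator just defined may help; it evaluates s/2R_F from the degree of polymerization and the graft density (chains per nm²), using the monomer length l ≈ 0.3 nm quoted above. The numeric cutoffs standing in for "≫ 1" and "≪ 1" are illustrative choices of ours.

```python
# Classify the grafted-chain configuration from s/2R_F as described above.
def chain_regime(n_monomers: int, sigma: float, l: float = 0.3) -> str:
    r_f = l * n_monomers ** (3 / 5)   # Flory radius of the hydrated chain (nm)
    s = sigma ** -0.5                 # mean distance between graft points (nm)
    ratio = s / (2 * r_f)
    if ratio >= 2:                    # illustrative cutoff for s/2R_F >> 1
        return f"mushroom (s/2R_F = {ratio:.2f})"
    if ratio <= 0.5:                  # illustrative cutoff for s/2R_F << 1
        return f"brush (s/2R_F = {ratio:.2f})"
    return f"weakly overlapped (s/2R_F = {ratio:.2f})"

print(chain_regime(n_monomers=100, sigma=0.2))
```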
We further investigated the most appropriate descriptor for the polymer brush structure using the built RFR model. In the RFR model, (i) the conventionally investigated polymer thickness (dry state), (ii) the s/2R_F value, and (iii) the density and M_n were each examined as the descriptor for the polymer brush (Figure 5). Considerable improvement in prediction performance was not achieved even when the s/2R_F value was used as a brush descriptor (R²_train = 0.85 and R²_test = 0.60). This is because the descriptor s/2R_F cannot identify specific molecular weights and densities. In comparison to cases i and ii, the best prediction accuracy (R²_train = 0.93 and R²_test = 0.97) was achieved in case iii, which used polymer density and molecular weight as the descriptors for the polymer brush. Here, there are still some outliers in the training data that are due not to the polymer configuration but to differences in the environment of each experiment (see the Supporting Information and Figure S3 of the Supporting Information). In case iii, the trend of the SHAP plots and the estimated importance distribution is similar to that when all descriptors are considered in Figure 4; the importance of density is the highest (2.1 times higher than that of the molecular weight). These results show that thickness or grafted chain configuration alone is insufficient to accurately represent the properties of polymer brushes; thus, polymer density and molecular weight are required to estimate protein adsorption.

3.5. Dependence of Protein Adsorption upon the Brush Density and Molecular Weight. The correlation between surface properties and adsorption was investigated using the trained RFR model to clarify the effect of the molecular weight and density of the brush polymer on protein adsorption. Herein, the amount of adsorbed protein was predicted for 1200 conditions at various polymer molecular weights (M_n up to ∼40 000, in intervals of 1000) and densities (σ up to ∼0.6, in intervals of 0.02). First, we predicted protein adsorption by fixing the parameters at Ionic Strength = 150 mM, Sub_Ad = 450 ng cm−2, Pro_Conc = 1 g L−1, and Flow Rate = 0.01 mL min−1. Figure 6a shows the mapping of the predicted amounts of protein adsorption against the molecular weights and densities of the polymer brushes. The predicted adsorption distribution cannot be interpreted simply in terms of differences in polymer configurations, as previously described. In fact, the boundary of s/2R_F = 1 differs from the adsorption trend (Figure S4 of the Supporting Information). Therefore, adsorption on ZI polymer brushes is a complex phenomenon that involves the influence of external hydration layers beyond the brush structures.

In the prediction mapping, blocky color changes were generated, probably as a result of the limited number of data samples. However, the mapping expressed the features of the experimental facts well, and the prediction can be used for further investigation. Figure 6b shows the color map of the grouped data samples within a high ionic strength range of 100−200 mM. Unlike the prediction, the parameter values (e.g., flow rate and protein concentration) differ for each experimental data point. Nevertheless, the experimental data exhibited a robust correlation with the predicted outcomes, validating the predictions. Further, Figure 6b shows the limitations of experimentally feasible surfaces. Notably, no data samples were observed for surface densities with σ > 0.4 chains nm−2, indicating that such surface densities are difficult to achieve experimentally as a result of steric hindrance by the graft polymers.
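The 1200-point prediction grid of section 3.5 can be generated as in the following sketch; `model` is again an assumed trained RFR, and the descriptor ordering is ours, for illustration only.

```python
# Build the (M_n, sigma) prediction grid described above: 40 molecular
# weights x 30 densities = 1200 conditions, other descriptors held fixed.
import numpy as np

mn_grid = np.arange(1_000, 40_001, 1_000)     # M_n up to 40,000, step 1,000
sigma_grid = np.arange(0.02, 0.61, 0.02)      # density up to 0.6, step 0.02
fixed = dict(ionic_strength=150.0, sub_ad=450.0, pro_conc=1.0, flow_rate=0.01)

rows = np.array([[mn, sg, fixed["ionic_strength"], fixed["sub_ad"],
                  fixed["pro_conc"], fixed["flow_rate"]]
                 for mn in mn_grid for sg in sigma_grid])
adsorption_map = model.predict(rows).reshape(len(mn_grid), len(sigma_grid))
```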
ML-aided prediction of protein adsorption can help to understand adsorption properties beyond current experimental facts and to extract desirable surface conditions for practical environments. Thus, we further predicted protein adsorption on ZI polymer brushes as a function of the molecular weight and density under various water environments. Here, the influence of the ionic strength, which shows high importance in case iii of Figure 5, was investigated. The consideration of the ionic strength is particularly crucial in the application of antifouling porous membranes for water treatment. Generally, groundwater and surface water have millimolar ranges of ionic strength, whereas seawater has approximately 700 mM. Therefore, we examined ionic strengths in the range of 1−1000 mM (Figure 7a).

Overall, the amount of adsorption tended to be larger at a lower ionic strength, which reproduces the decrease in the antifouling properties associated with the previously mentioned antipolyelectrolyte effect. Focusing on the dependence of protein adsorption upon the molecular weight and density, the degree of influence of graft density is clearly greater than that of the molecular weight. Notably, favorable antifouling performance is exhibited in the region of densities above 0.2 chains nm−2. We also observed regions of high adsorption at low graft densities (<0.15) and moderate M_n (20 000−23 000). This region of high protein adsorption may correspond to the "hot spot" that has been newly discovered by an experimental approach. 51 Under such conditions, a decrease in ionic strength causes the hydration structure to collapse, and the distance between the polymer chains exceeds the protein size. Consequently, the protein is inserted within the three-dimensional space of the ZI polymer layer, and adsorption occurs via electrostatic interactions. However, all adsorption at M_n > 15 000 can be efficiently decreased through the formation of a hydration layer as the ionic strength increases. Meanwhile, protein adsorption is significant in areas with low densities and molecular weights (σ < 0.15 and M_n < 15 000). The adsorption is not sufficiently decreased even in a high ionic strength environment, attributable to the existence of a defective surface with an exposed substrate. On such a surface, the protein can adsorb directly onto the substrate, and the antifouling effect of the hydration layer is not effectively exhibited.

Focusing on the correlation between protein adsorption and ionic strength, it is suggested that ZI polymer brushes effectively exhibit antifouling properties at ionic strengths over 100 mM. This is also evident from Figure 7b, which shows the ionic strength dependence of protein adsorption at specific molecular weights and densities. This result agrees with previous reports showing experimentally that ZI brushes can effectively resist protein adsorption at ionic strengths above 100 mM.
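Continuing the previous sketch, the ionic-strength sweep of Figure 7a amounts to regenerating the same grid at each ionic strength; the values below span the groundwater-to-seawater range mentioned in the text, and `model`, `mn_grid`, `sigma_grid`, and `fixed` are the assumed objects defined in the earlier sketch.

```python
# Repeat the (M_n, sigma) mapping at several ionic strengths (in mM).
for ionic in (1, 10, 100, 150, 700, 1000):
    rows = np.array([[mn, sg, float(ionic), fixed["sub_ad"],
                      fixed["pro_conc"], fixed["flow_rate"]]
                     for mn in mn_grid for sg in sigma_grid])
    grid = model.predict(rows).reshape(len(mn_grid), len(sigma_grid))
    print(f"{ionic} mM: predicted adsorption range "
          f"{grid.min():.1f}-{grid.max():.1f}")
```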
Overall, the findings gathered from the application of ML to ZI polymer brushes are as follows:
(1) RFR was chosen as the best model to maximize the prediction performance. The positive or negative contribution of each descriptor to protein adsorption agreed with the experimental facts, which supports the reliability of the built model.
(2) The effects of the brush density and molecular weight on protein adsorption were evaluated separately for the first time. The results show that protein adsorption is more strongly influenced by the density than by the molecular weight.
(3) Several descriptors (thickness, the s/2R_F value, and density with M_n) for the brush structure were investigated. The highest prediction performance was achieved using density and M_n.
(4) A visualization mapping protein adsorption against brush density and molecular weight was provided, identifying brush conditions that increase or decrease antifouling properties. Further, the effect of the ionic strength on antifouling properties was investigated to demonstrate the relevance of this study.

As demonstrated in this study, ML is useful for understanding the overall adsorption phenomena from limited experimental data and for determining the optimal brush structure in each environment. However, this research is still in its early stages and currently has several limitations. For example, more data samples are required to predict specific water environments. An increase in the number of data samples would allow for a detailed analysis of the effects of each factor (such as the protein concentration, protein species, and flow rate) on adsorption behavior. In addition, the antifouling performance of ZI polymers varies with the anion/cation moieties in the polymers and the ion species in the solution, but their contributions were not considered in this study. Further advanced predictions incorporating these factors will be realized in the future, with the expansion of relevant experimental data.
CONCLUSION

Herein, we initially gathered a data set on protein adsorption on ZI polymer brushes from literature sources. This data set includes details about brush structures (molecular weight, density, thickness, polymer type, and substrate characteristics), solution conditions (pH, ionic strength, temperature, and protein characteristics), and control conditions (flow rate) that correspond to protein adsorption. Three linear (MLR, LASSO, and RIDGE) and three decision-tree-based nonlinear (RFR, GBR, and ETR) regressions were applied to compare prediction performance. RFR was chosen as the primary ML model because it showed the highest R² value (R² ≈ 0.9 for training and test data) and the lowest RMSE value. The SHAP analysis for the constructed RFR model showed that polymer density, molecular weight, ionic strength, and polymer layer thickness contributed negatively, while the flow rate and adsorption on the substrate contributed positively to the overall protein adsorption. This agrees well with previous experimental reports and supports the validity of the model. Comparing the importance of each descriptor via SHAP analysis shows that polymer density has the highest importance, 2−3 times higher than that of the molecular weight. In addition, the thickness, grafted chain configuration, and density with M_n were compared as descriptors of the polymer brush, and the density with M_n provided the best prediction in the RFR model. Furthermore, the trained ML model was used to produce a prediction mapping of protein adsorption against the molecular weight and graft density. This mapping allowed for the determination of regions with enhanced antifouling properties. Finally, the effect of the ionic strength on antifouling properties was investigated to demonstrate the relevance of this study.

To the best of our knowledge, this is the first report to accurately estimate the contribution of density and molecular weight to protein adsorption using an ML-based approach. This work quantitatively evaluated the importance of the polymer brush to the antifouling properties, which was not understood even empirically until now. Although this study focuses only on ZI polymers based on a limited number of data sets, the approach may be applicable to investigating various brush interfaces and could help in designing future antifouling surfaces.

Supporting Information: list of hyperparameters for ML algorithms, full data set for ML that encompasses brush and external conditions and the consequent protein adsorption, and results of ML using all data samples without trimming (PDF).

Figure 1. Schematic of this research. In addition to external factors, such as flow velocity and pH/ionic strength, our ML model considers molecular weight and density as the polymer layer descriptors.

Figure 2. Heatmap of the Pearson coefficients among the 12 selected features. The Pearson correlation coefficient ranges from −1 to 1, where 1 represents an absolute positive correlation and −1 represents an absolute negative correlation. Each feature is described in Table 1.

Figure 3. (a) Images of the prediction performance of ML using three types of linear regressions and three types of nonlinear regressions. (b) R² and RMSE values of each algorithm in predicting the amount of adsorbed protein. Values for test data are shown with diagonal lines.
Figure 4. (a) SHAP plot for protein adsorption in the RFR model. Colors from red to blue represent feature values from high to low. (b) Estimated importance (mean |SHAP| value) of the considered descriptors.

Figure 5. Results of ML by the RFR model using (i) layer thickness in the dry state, (ii) the s/2R_F value, and (iii) polymer density and molecular weight as the descriptors for the grafted ZI polymer. The quantity of adsorption without polymer, protein concentration, ionic strength, and flow rate were used as external descriptors.

Figure 6. (a) Visualization of the mapping of protein adsorption against the molecular weight and density of the ZI polymer brushes. The degree of protein adsorption was predicted using the trained RFR model, with the descriptors Ionic Strength, Sub_Ad, Pro_Conc, and Flow Rate fixed. For each mapping, molecular weights up to 40 000 g mol−1 and densities up to 0.6 chains nm−2 were investigated, providing 1200 predicted data points. (b) Color map of experimental data samples within a high ionic strength range of 100−200 mM. The data were extracted from the constructed data set.

Figure 7. (a) Ionic strength dependence of the protein adsorption mapping. The descriptors are fixed as follows: Sub_Ad = 450 ng cm−2, Pro_Conc = 1 g L−1, and Flow Rate = 0.01 mL min−1. (b) Predicted protein adsorption on the surface of ZI polymer brushes as a function of the ionic strength. The molecular weight is fixed at 10 000, and the polymer density is fixed at 0.2.

Table 1. Summary of the Descriptors Used in This Study. (a) Types of zwitterionic polymers (Pol_Type) were defined by the amount of hydrated water per monomer unit (Figure S1 of the Supporting Information). (b) The charge of each protein was defined as pI − pH using the isoelectric point (pI) of the protein.
A $2\ell k$ Kernel for $\ell$-Component Order Connectivity

In the $\ell$-Component Order Connectivity problem ($\ell \in \mathbb{N}$), we are given a graph $G$ on $n$ vertices and $m$ edges and a non-negative integer $k$, and are asked whether there exists a set of vertices $S\subseteq V(G)$ such that $|S|\leq k$ and the size of the largest connected component in $G-S$ is at most $\ell$. In this paper, we give a kernel for $\ell$-Component Order Connectivity with at most $2\ell k$ vertices that takes $n^{\mathcal{O}(\ell)}$ time for every constant $\ell$. On the way to obtaining our kernel, we prove a generalization of the $q$-Expansion Lemma to weighted graphs. This generalization may be of independent interest.

Introduction

In the classic Vertex Cover problem, the input is a graph G and an integer k, and the task is to determine whether there exists a vertex set S of size at most k such that every edge in G has at least one endpoint in S. Such a set is called a vertex cover of the input graph G. An equivalent definition of a vertex cover is that every connected component of G − S has at most 1 vertex. This view of the Vertex Cover problem gives rise to a natural generalization: can we delete at most k vertices from G such that every connected component in the resulting graph has at most ℓ vertices? Here we study this generalization. Formally, for every integer ℓ ≥ 1, we consider the following problem, called ℓ-Component Order Connectivity (ℓ-COC).

ℓ-Component Order Connectivity (ℓ-COC)
Input: A graph G on n vertices and m edges, and a positive integer k.
Task: Determine whether there exists a set S ⊆ V(G) such that |S| ≤ k and the maximum size of a component in G − S is at most ℓ.

The set S is called an ℓ-COC solution. For ℓ = 1, ℓ-COC is just the Vertex Cover problem. Aside from being a natural generalization of Vertex Cover, the family {ℓ-COC : ℓ ≥ 1} of problems can be thought of as a vulnerability measure of the graph G: how many vertices of G have to fail for the graph to break into small connected components? For a study of ℓ-COC from this perspective see the survey of Gross et al. [12]. From the work of Lewis and Yannakakis [16] it immediately follows that ℓ-COC is NP-complete for every ℓ ≥ 1. This motivates the study of ℓ-COC within paradigms for coping with NP-hardness. Our kernel is built around the notion of a reducible pair (X, Y) of vertex sets, which raises three questions: (a) why does a reducible pair allow us to safely reduce the instance, (b) when is a reducible pair guaranteed to exist, and (c) given that a reducible pair exists, how can we find one in polynomial time?

To answer (a), we restrict ourselves to reducible pairs with the additional property that each connected component C of G[Y] can be assigned to a vertex x ∈ N(C), such that for every x ∈ X the total size of the components assigned to x is at least ℓ. Then x together with the components assigned to it forms a connected set of size at least ℓ + 1, which has to contain a vertex from the solution. Since we obtain such a connected set for each x ∈ X, the solution has to contain at least |X| vertices from X ∪ Y. Again we remark that this definition of a reducible pair is local to this section, and not the one we actually end up using.

To answer (b), we first try to use the q-Expansion Lemma (see [5]), a tool that has found many uses in kernelization. Roughly speaking, the Expansion Lemma says the following: if q ≥ 1 is an integer and H is a bipartite graph with bipartition (A, B) and B is at least q times larger than A, then one can find a subset X of A and a subset Y of B such that N(Y) ⊆ X, and an assignment of each vertex y ∈ Y to a neighbor x of y, such that every vertex x in X has at least q vertices in Y assigned to it.
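Before developing the kernel machinery, it may help to pin down the object being computed: the following is a minimal sketch of the solution check in the ℓ-COC definition above, using networkx purely for illustration.

```python
# Check whether S is an l-COC solution: |S| <= k and every connected
# component of G - S has at most l vertices.
import networkx as nx

def is_coc_solution(G: nx.Graph, S: set, k: int, l: int) -> bool:
    if len(S) > k:
        return False
    H = G.copy()
    H.remove_nodes_from(S)
    return all(len(comp) <= l for comp in nx.connected_components(H))
```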
Suppose now that the graph does have an ℓ-COC solution S of size at most k, and that V(G) \ S is sufficiently large compared to S. The idea is to apply the Expansion Lemma to the bipartite graph H, where the A side of the bipartition is S and the B side has one vertex for each connected component of G − S. We put an edge in H between a vertex v in S and a vertex corresponding to a component C of G − S if there is an edge between v and C in G. If G − S has at least ℓ·|S| connected components, we can apply the ℓ-Expansion Lemma to H and obtain a set X ⊆ S and a collection Y of connected components of G − X satisfying the following properties. Every component C ∈ Y satisfies N(C) ⊆ X and |C| ≤ ℓ. Furthermore, there exists an assignment of each connected component C to a vertex x ∈ N(C) such that every x ∈ X has at least ℓ components assigned to it. Since x has at least ℓ components assigned to it, the total size of the components assigned to x is at least ℓ. But then X and Y = ∪_{C∈Y} C form a reducible pair, giving an answer to question (b). Indeed, this argument can be applied whenever the number of components of G − S is at least ℓ·|S|. Since each component of G − S has size at most ℓ, this means that the argument can be applied whenever |V(G) \ S| ≥ ℓ²·|S| ≥ ℓ²k. Clearly this argument fails to yield a kernel of size 2ℓk, because it is only applicable when |V(G)| = Ω(ℓ²k).

At this point we note that the argument above is extremely wasteful in one particular spot: we used the number of components assigned to x to lower bound the total size of the components assigned to x. To avoid being wasteful, we prove a new variant of the Expansion Lemma, where the vertices on the B side of the bipartite graph H have non-negative integer weights. This new Weighted Expansion Lemma states that if q, W ≥ 1 are integers, H is a bipartite graph with bipartition (A, B), every vertex in B has a non-negative integer weight which is at most W, and the total weight of B is at least (q + W − 1)·|A|, then one can find a subset X of A and a subset Y of B such that N(Y) ⊆ X, and an assignment of each vertex y ∈ Y to a neighbor x of y, such that for every vertex in X, the total weight of the vertices assigned to it is at least q. The proof of the Weighted Expansion Lemma is based on a combination of the usual, unweighted Expansion Lemma with a variant of an argument by Bezáková and Dani [1] for rounding the linear program for Max-min Allocation of goods to customers.

We are now left with question (c): the issue of how to find a reducible pair in polynomial time. Indeed, the proof of existence crucially relies on knowledge of an (optimal) solution S. To find a reducible pair we use the linear programming relaxation of the ℓ-COC problem (a plausible formulation is sketched below). We prove that an optimal solution to the LP-relaxation has to highlight every reducible pair (X, Y), essentially by always setting all the variables corresponding to X to 1 and the variables corresponding to Y to 0. For Vertex Cover (i.e., 1-COC), the classic Nemhauser-Trotter Theorem [18] implies that we may simply include all the vertices whose LP variable is set to 1 into the solution S. For ℓ-COC with ℓ ≥ 2 we are unable to prove the corresponding statement. We are, however, able to prove that if a reducible pair (X, Y) exists, then X (essentially) has to be assigned 1 and Y (essentially) has to be assigned 0. We then give a polynomial time algorithm that extracts X and Y from the vertices assigned 1 and 0, respectively, by the optimal linear programming solution.
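The text does not spell out the LP relaxation it uses. A natural covering formulation, which is our assumption here and is consistent with the stated n^{O(ℓ)} running time (connected vertex sets of size ℓ+1 can be enumerated in n^{O(ℓ)} time), is:

```latex
\begin{aligned}
\text{minimize}\quad & \sum_{v \in V(G)} x_v\\
\text{subject to}\quad & \sum_{v \in C} x_v \;\ge\; 1
   \qquad \text{for every connected } C \subseteq V(G) \text{ with } |C| = \ell + 1,\\
& 0 \le x_v \le 1 \qquad\; \text{for every } v \in V(G).
\end{aligned}
```

Every ℓ-COC solution induces a feasible 0/1 point of this LP, since each connected set of ℓ+1 vertices must contain a solution vertex, which is exactly the observation used for question (a) above.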
Together, the arguments for (b) and (c) yield the kernel with 2ℓk vertices. We remark that, to the best of our knowledge, after the kernel for Vertex Cover [3] our kernel is the first example of a kernelization algorithm based on linear programming relaxations.

Overview of the paper. In Section 2 we recall basic definitions and set up notation. The kernel for ℓ-COC is proved in Sections 3, 4 and 5. In Section 3 we prove the necessary adjustment of the results on Max-min Allocation of Bezáková and Dani [1] that is suitable to our needs. In Section 4 we state and prove our new Weighted Expansion Lemma, and in Section 5 we combine all our results to obtain the kernel.

Preliminaries. In a cycle with edge sequence e_1, …, e_t, the edge e_i is adjacent to e_j if and only if |i − j| = 1 mod t. The length of a path (cycle) is the number of edges in the path (cycle). A triangle is a cycle of length 3. In G, for any pair of vertices u, v ∈ V(G), dist(u, v) denotes the length of a shortest path between u and v. A tree is a connected graph that does not contain any cycle. A rooted tree T is a tree with a special vertex r called the root of T. With respect to r, for any edge uv ∈ E(T) we say that v is a child of u (equivalently, u is the parent of v) if dist(u, r) < dist(v, r). A forest is a collection of trees. A rooted forest is a collection of rooted trees. A clique is a graph that contains an edge between every pair of vertices. A vertex cover of a graph is a set of vertices whose removal makes the graph edgeless.

Fixed Parameter Tractability. A parameterized problem Π is a subset of Σ* × N. A parameterized problem Π is said to be fixed parameter tractable (FPT) if there exists an algorithm that takes as input an instance (I, k) and decides whether (I, k) ∈ Π in time f(k)·n^c, where n is the length of the string I, f(k) is a computable function depending only on k, and c is a constant independent of n and k. A kernel for a parameterized problem Π is an algorithm that, given an instance (I, k), runs in time polynomial in |I| and outputs an instance (I′, k′) such that |I′|, k′ ≤ g(k) for a computable function g, and (I, k) ∈ Π if and only if (I′, k′) ∈ Π. For a comprehensive introduction to FPT algorithms and kernels, we refer to the book by Cygan et al. [5]. A data reduction rule, or simply reduction rule, for a parameterized problem Q is a function φ : Σ* × N → Σ* × N that maps an instance (I, k) of Q to an equivalent instance (I′, k′) of Q such that φ is computable in time polynomial in |I| and k. We say that two instances of Q are equivalent if (I, k) ∈ Q if and only if (I′, k′) ∈ Q; this property of the reduction rule φ, that it translates an instance into an equivalent one, is referred to as the safeness of the reduction rule.

Max-min Allocation. We will now view a bipartite graph G := ((A, B), E) as a relationship between "customers" represented by the vertices in A and "items" represented by the vertices in B. If the graph is supplied with two functions w_a : A → N and w_b : B → N, we treat these functions as a "demand" function and a "capacity" function, respectively. That is, we consider each item v ∈ B to have value w_b(v), and every customer u ∈ A wants to be assigned items worth at least w_a(u). An edge between u ∈ A and v ∈ B means that the item v can be given to u. A weight function f : E(G) → N describes an assignment of items to customers, provided that the items can be "divided" into pieces and the pieces can be distributed to different customers.
However, this "division" should not create more value than the original value of the items. Formally, we say that the weight function f satisfies the capacity constraint of an item v ∈ B if

Σ_{uv ∈ E(G)} f(uv) ≤ w_b(v).

The weight function satisfies the capacity constraints if it satisfies the capacity constraints of all items v ∈ B. For each customer u ∈ A, we say that f allocates Σ_{uv ∈ E(G)} f(uv) value to u. The weight function f satisfies the demand w_a(u) of u ∈ A if it allocates at least w_a(u) value to u, and f satisfies the demand constraints if it does so for all u ∈ A. In other words, the weight function satisfies the demands if every customer gets items worth at least her demand. The weight function f over-satisfies a demand constraint w_a(u) of u if it allocates strictly more than w_a(u) to u. We will also be concerned with the case where items are indivisible. In particular, we say that a weight function f : E(G) → N is unsplitting if for every v ∈ B there is at most one edge uv ∈ E(G) such that f(uv) > 0. The essence of the next few lemmas is that if we have a (splitting) weight function f of items whose value is at most W, and f satisfies the capacity and demand constraints, then we can obtain in polynomial time an unsplitting weight function f′ that satisfies the capacity constraints and violates the demand constraints by at most (W − 1). In other words, we can make a splitting distribution of items unsplitting at the cost of making each customer lose approximately the value of the most expensive item. Allocating items to customers so as to maximize satisfaction is well studied in the literature. Lemmata 1 and 2 below are very similar, both in statement and proof, to the work of Bezáková and Dani [1, Theorem 3.2], who themselves were inspired by Lenstra et al. [15]. However, we do not see a way to directly use the results of Bezáková and Dani [1], because we need a slight strengthening of (a special case of) their statement.

Lemma 1. There is a polynomial-time algorithm that, given a bipartite graph G := ((A, B), E), a capacity function w_b : B → N, a demand function w_a : A → N and a weight function f : E(G) → N that satisfies the capacity and demand constraints, outputs a function f′ : E(G) → N such that f′ satisfies the capacity and demand constraints and the graph G_f′ = (V(G), {uv ∈ E(G) | f′(uv) > 0}) induced on the non-zero weight edges of G is a forest.

Proof. We start with f and, in polynomially many steps, change f into the required function f′. In each step, we take a cycle in G_f and cyclically shift the smallest weight on the cycle along it; this keeps the value allocated at every vertex unchanged, and therefore f′ satisfies the capacity and demand constraints. Furthermore, at least one edge that is assigned non-zero weight by f is assigned 0 by f′, and G_f′ = (V(G), {uv ∈ E(G) | f′(uv) > 0}) has one less cycle than G_f. For a polynomial-time algorithm, repeatedly apply the process described above to reduce the number of edges with non-zero weight, as long as G_f contains a cycle.

The next lemma converts such an f into an unsplitting assignment h that satisfies the capacity constraints, the reduced demands w′_a(u) = w_a(u) − W + 1 for every u ∈ A, and the full demand w_a(r) of a designated special vertex r ∈ A.

Proof. Without loss of generality, the graph G_f := (V(G), {uv ∈ E(G) | f(uv) > 0}) is a forest. If it is not, we may apply Lemma 1 to f and obtain a function f′ that satisfies the capacity and demand constraints, such that G_f′ = (V(G), {uv ∈ E(G) | f′(uv) > 0}) is a forest. We then rename f′ to f. By picking a root in each connected component of G_f we may consider G_f as a rooted forest. We pick the roots as follows: if the component contains the special vertex r, we pick r as root. If the component does not contain r but contains at least one vertex u ∈ A, we pick that vertex as the root. If the component does not contain any vertices of A, then it does not contain any edges and is therefore a single vertex in B; we pick that vertex as root.
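A small sketch formalizing these constraint checks may be useful, with edges written as (u, v) pairs (u a customer in A, v an item in B) and f a mapping from edges to non-negative integer weights; the function names are ours, for illustration.

```python
# Capacity: no item v is split into more total value than w_b(v).
def satisfies_capacity(f: dict, w_b: dict) -> bool:
    used = {}
    for (u, v), weight in f.items():
        used[v] = used.get(v, 0) + weight
    return all(used.get(v, 0) <= cap for v, cap in w_b.items())

# Demand: every customer u is allocated at least w_a(u) value.
def satisfies_demands(f: dict, w_a: dict) -> bool:
    alloc = {}
    for (u, v), weight in f.items():
        alloc[u] = alloc.get(u, 0) + weight
    return all(alloc.get(u, 0) >= dem for u, dem in w_a.items())
```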
Thus, every item v ∈ B that is incident to at least one edge in G_f has a unique parent u ∈ A in the rooted forest G_f. We define the new weight function h as follows: for every edge uv ∈ E(G) with u ∈ A and v ∈ B, we set h(uv) = w_b(v) if uv ∈ E(G_f) and u is the parent of v, and h(uv) = 0 otherwise. Clearly h is unsplitting and satisfies the capacity constraints. We now prove that h also satisfies the demand constraints w′_a and satisfies the demand constraint w_a(r) of r. Consider the demand constraint for an arbitrary customer u ∈ A. There are two cases: either u is the root of its component of G_f or it is not. If u is the root, then for every edge uv ∈ E(G) such that f(uv) > 0 we have that uv ∈ E(G_f) and consequently that u is the parent of v. Hence h(uv) = w_b(v) ≥ f(uv), and therefore h satisfies the demand w_a(u) of u. Since w_a(u) ≥ w′_a(u), we have that h satisfies the demand w′_a(u). Furthermore, since r is the root of its component, this also proves that h satisfies the demand w_a(r).

Consider now the case that u is not the root of its component in G_f. Then u has a unique parent in G_f; call this vertex v′ ∈ B. We first prove that f(uv′) ≤ w_b(v′) − 1. Indeed, since v′ is incident to the edge uv′, we have that v′ has a parent u′ in G_f, and that u′ ≠ u because v′ is the parent of u. Since f(u′v′) > 0 and f satisfies the capacity constraint of v′, we have that

f(uv′) ≤ w_b(v′) − f(u′v′) ≤ w_b(v′) − 1.

We now proceed to proving that h satisfies the demand w′_a(u). For every edge uv ∈ E(G) \ {uv′} such that f(uv) > 0, we have that uv ∈ E(G_f) and consequently that u is the parent of v. Hence we have that

Σ_{uv ∈ E(G)} h(uv) ≥ Σ_{uv ∈ E(G)\{uv′}} f(uv) ≥ w_a(u) − f(uv′) ≥ w_a(u) − W + 1 = w′_a(u).

Therefore h satisfies the demand w′_a(u).

Figure 1. Proof of Lemmas 1 and 2. Cyclically shift the smallest weight in a non-zero weight cycle to obtain a forest. Root each tree of the forest at a vertex in A such that each vertex in B has a parent in A. Assign the value of v ∈ B to its parent u ∈ A. In this new assignment, a non-root vertex u ∈ A loses its parent v_0 ∈ B, and f(v_0u) ≤ W − 1, which explains the cost of making a splitting assignment unsplitting.

Proof. We describe a recursive algorithm. If A = ∅ or B = ∅, then output no and terminate. Otherwise, construct the twin graph T_BA with weight function w : A → N, where w(u) = q for all u ∈ A, and let M be a maximum matching in T_BA. Consider the graph G′ obtained in this step. If there are no sets X, Y such that there is a q-expansion of X into Y, then for any pair of sets A′ ⊆ A, B′ ⊆ B, either N(B′) \ A′ ≠ ∅ or |B′| < q|A′|. Since at each recursive step the size of the graph with which the algorithm calls itself decreases, eventually either A becomes empty or B \ N_G(A \ A′) becomes empty. Hence, the algorithm outputs no. Now we need to show that if there exist sets (A*, B*) such that there is a q-expansion of A* into B*, then at each recursive call we have that A* ⊆ A and B* ⊆ B. At the start of the algorithm, A* ⊆ A and B* ⊆ B. Since N(B*) ⊆ A* and d_G(u) ≥ q for all u ∈ A*, we have that A* ∪ B* ⊆ V(G′). If N(B′) ⊆ A′, then the algorithm of Lemma 6, when run on G′ and q, will output (A*, B*). Note that B* ⊆ B′. At the recursive step, A* ⊆ A, and since B* ⊆ B as well, this concludes the correctness of the algorithm. Since at each recursive call the size of the graph decreases by at least 1, the total time taken by the above algorithm is polynomial in n.

One may think of a q-expansion in a bipartite graph with bipartition (A, B) as an allocation of the items in B to each customer in A such that every customer gets at least q items.
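The rounding at the heart of the proof above is simple once the forest is rooted: every item hands its full value to its parent. A minimal sketch, with `parent` (item to parent customer) and `w_b` (item values) as assumed inputs:

```python
# Turn a rooted-forest (splitting) assignment into an unsplitting one: each
# item v in B gives its entire value w_b(v) to its parent customer in A.
def unsplit(parent: dict, w_b: dict) -> dict:
    """Return h as {(u, v): value}; every item v goes wholly to parent[v]."""
    return {(parent[v], v): w_b[v] for v in parent}
```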
For our kernel we will need a generalization of q-expansions to the setting where the items in B have different values, and every customer gets items of total value at least q.

Definition 8 (Weighted q-expansion). Let G := ((A, B), E) be a bipartite graph with capacity function w_b : B → N. Then, a weighted q-expansion in G is an edge weight function f : E(G) → N that satisfies the capacity constraints w_b and also satisfies the demand constraints w_a ≡ q. For an integer W ∈ N, the q-expansion f is called a W-strict q-expansion if f allocates at least q + W − 1 value to at least one vertex r in A, and in this case we say that f is W-strict at r. Further, a q-expansion f is strict (at r) if it is 1-strict (at r). If f is unsplitting we call f an unsplitting q-expansion.

Lemma 22. There exists a polynomial time algorithm that, given an integer ℓ and an ℓ-COC instance (G, k) on at least 2ℓk vertices, either finds a reducible pair (X, Y) or concludes that (G, k) is a no-instance.

Proof. If (G, k) is a yes-instance of ℓ-COC, then by Lemma 16 there exists a reducible pair (X, Y). We use the following algorithm to find one (an outline in code is given below):

Step 1. Run the LP algorithm. Let A be the set of variables set to 1 and B the set of variables set to 0 in the LP solution.
Step 2. If both A and B are non-empty, then run the algorithm of Lemma 15 with input (G, k), A, B. If it outputs a reducible pair (X, Y), then return (X, Y) and terminate. Otherwise, go to Step 3.
Step 3. Now we do a linear search for a vertex in X. For each vertex v ∈ V(G), do the following: in the original LP, introduce an additional constraint that sets the value of the variable x_v to 1, i.e., x_v = 1, and run the LP algorithm. If the optimal value of the new LP is the same as the optimal value of the original LP, then let A and B be the sets of variables set to 1 and 0, respectively, in the optimal solution of the new LP and go to Step 2.
Step 4. Output a trivial no-instance.

Step 1 identifies the sets of variables set to 1 and 0 by the LP algorithm. By Lemma 21, we have that if there is a minimal reducible pair (X, Y) in G, then X ⊆ A and Y ⊆ B. So, in Step 2, if the algorithm succeeds in finding one, we return the reducible pair and terminate; otherwise, we look for a potential vertex in X and set its variable to 1. If (X, Y) exists, then for at least one vertex v, setting x_v = 1 forces the variables of X to 1 and the variables of Y to 0 (by Lemma 21) without changing the LP value, and we go to Step 2 to find it. If for each choice of v ∈ V(G) the LP value changes when x_v is set to 1, we can conclude that there is no reducible pair and output a trivial no-instance. Since we need to do this search at most n times and each step takes only polynomial time, the total time taken by the algorithm is polynomial in the input size.
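An outline, not an implementation, of Steps 1-4 may clarify the control flow; `solve_lp` and `find_reducible_pair` are hypothetical routines standing in for the LP algorithm and the algorithm of Lemma 15, respectively.

```python
# Outline of the reducible-pair search; solve_lp is assumed to return
# (optimum, ones, zeros), optionally under the extra constraint x_v = 1.
def search_reducible_pair(G, k, l):
    base, ones, zeros = solve_lp(G, l)                 # Step 1
    if ones and zeros:
        pair = find_reducible_pair(G, k, ones, zeros)  # Step 2
        if pair:
            return pair
    for v in G.nodes:                                  # Step 3: linear search
        val, ones, zeros = solve_lp(G, l, force_one=v)
        if val == base and ones and zeros:
            pair = find_reducible_pair(G, k, ones, zeros)
            if pair:
                return pair
    return None                                        # Step 4: no-instance
```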
X-ray absorption near-edge spectra of overdoped La_2-xSr_xCuO_4 high-T_c superconductors

We present results for realistic modeling of the x-ray absorption near edge structure (XANES) of the overdoped high-T_c superconductor La_2-xSr_xCuO_4 in the hole doping range x = 0.20-0.30. Our computations are based on a real-space Green's function approach in which strong-correlation effects are taken into account in terms of a doping-dependent self-energy. The predicted O K-edge XANES is found to be in good accord with the corresponding experimental results in this overdoped regime. We find that the low energy spectra are dominated by the contribution of O-atoms in the cuprate planes, with little contribution from apical O-atoms.

I. INTRODUCTION

In their undoped parent compounds, high-T_c cuprate superconductors are antiferromagnetic insulators which are characterized by a gap driven by strong electron correlations. For this reason these materials are commonly referred to as Mott insulators. Strong correlation effects weaken with increased electron or hole doping, eventually yielding a metallic state. In La_{2−x}Sr_xCuO_4 (LSCO), for example, at a hole doping level of x ∼ 0.16, a paramagnetic state emerges and the material appears to recover Fermi liquid properties. However, despite over two decades of intense experimental and theoretical effort, the underlying principles governing how a Mott insulator transitions into a Fermi liquid with doping are still not well understood. 1 The answer seems to be hidden within the mechanisms through which the quasiparticle spectral weight passes from the insulating Mott-Hubbard bands to the in-gap states near the Fermi level. In electron doped cuprates, the Mott gap and the associated lower Hubbard band can be directly probed by photoemission spectroscopy. 2,3 In the hole doped cuprates, on the other hand, this gap lies above the Fermi energy, so that techniques sensitive to empty states within a few eV above the Fermi energy must be deployed. Accordingly, light scattering techniques have been used, including optical spectroscopy, resonant inelastic x-ray scattering (RIXS), and x-ray absorption near edge spectroscopy (XANES), which probes the density of states (DOS) of empty states above the Fermi energy via excitations from core levels. 1,4-6

The purpose of this study is to model the XANES spectrum of LSCO realistically, as an exemplar hole-doped cuprate, and to compare and contrast our theoretical predictions with available experimental data. The analysis is carried out using a real-space Green's function (RSGF) approach as implemented in the FEFF9 code. 7,8 Strong correlation effects on the electronic states near the Fermi energy (E_F) are incorporated by adding additional self-energy corrections to the one-particle electron and hole propagators. We concentrate in this initial study on the overdoped system because the cuprates are in a paramagnetic state in this doping range. Consequently, the treatment of correlation effects is simpler, due to the absence of the pseudogap in the electronic spectrum. In LSCO, the pseudogap is found to vanish near x = 0.20. 1,3 We start with a generic plasmon-pole self-energy and then dress our calculations with a doping-dependent paramagnetic self-energy Σ obtained within the self-consistent quasiparticle-GW (QP-GW) scheme. 1,9,10 Here GW refers to the Hedin approximation to the self-energy Σ = iGW, where G is the one-particle Green's function and W the screened Coulomb interaction. 11
This self-energy has been shown to capture key features of strong electronic correlations in various cuprate spectroscopies, including ARPES, 12 RIXS, 13 optical 1 and neutron scattering, 4 in good agreement with experiments in both electron and hole doped systems. Many key properties of cuprates, including the physics of superconductivity, involve hybridized Cu d_{x²−y²} and O p_{x,y,z} orbitals near the Fermi energy E_F. Thus the natural choices for the probe atoms in which the incoming x-ray excites a core hole are Cu and O. Since dipole selection rules do not allow K-edge excitations in Cu atoms to couple to d-bands, we focus here mainly on the O K-edge XANES. This edge may be expected to reflect doping-dependent changes in the near-E_F spectrum through its sensitivity to the O-p states. In O K-edge XANES experiments on LSCO, 14-16 two 'pre-peaks' have been observed to vary with Sr concentration. Our analysis indicates that the energy separation between these two peaks, which is comparable to the optical gap in the insulating phase, 1 is associated with the Mott gap. 17 In particular, the upper XANES peak corresponds to the empty states of the upper Hubbard band, and the lower peak to empty states in the lower Hubbard band resulting from hole doping. With increasing doping, the lower peak, which is absent for x = 0, starts to grow while the upper peak loses intensity. In the overdoped regime, the lower peak reaches a plateau, 18 while the intensity of the upper peak is substantially suppressed. In order to assess effects of core-hole screening, we have also calculated the Cu K-edge XANES, again using the same RSGF approach. 7,8 In this connection, two different core-hole models were considered: (i) full screening, i.e., without a core hole, as in the "initial state rule" (ISR), 19 and (ii) RPA screening, as is typically used in Bethe-Salpeter equation (BSE) calculations. 20

The remainder of this article is organized as follows. The introductory remarks in Section I are followed by a brief account of the methodological details of the RSGF formalism in Section IIA, and of the QP-GW self-energy computations in Section IIB. XANES results based on the plasmon-pole self-energy are discussed in Section IIIA, while results based on doping-dependent QP-GW self-energies are taken up in Section IIIB. Finally, Section IV contains a summary and conclusions.

A. Real-space Green's function formulation

Here we briefly outline the real-space Green's function multiple-scattering formalism underlying the FEFF code. More detailed accounts are given elsewhere. 7,8 The quasiparticle Green's function for the excited electron at energy E is defined as

G(E) = [E − H − Σ(E)]^{−1}.    (1)

Here H is the independent-particle (i.e., Kohn-Sham) Hamiltonian, with V being the Hartree potential plus a ground-state exchange-correlation density functional, which in FEFF9 is taken to be the von Barth-Hedin functional. 21 Throughout this paper Hartree atomic units (e = ħ = m = 1) are implicit. This Hamiltonian, together with the Fermi energy E_F, is calculated self-consistently using the RSGF approach outlined below. In Eq. (1) the quantity Σ(E) is the energy-dependent one-electron self-energy. In this work we use a GW self-energy designed to incorporate the strong-coupling effects in cuprates, as discussed further in Section IIB below.
In the RSGF approach it is useful to decompose the total Green's function G(E) as

G(E) = G_c(E) + G_sc(E),

where G_c(E) is the contribution from the central atom where the x-ray is absorbed and G_sc(E) is the scattering part. For points within a sphere surrounding the absorbing atom, the angular dependence of the real-space Green's function can be expanded in spherical harmonics as

G(r, r′; E) = Σ_{L,L′} Y_L(r̂) G_{L,L′}(r, r′; E) Y*_{L′}(r̂′).    (4)

Here Y_L is a spherical harmonic and L = (l, m) denotes both orbital and azimuthal quantum numbers. The physical quantity measured in XANES for x-ray photons of polarization ε̂ and energy ω = E − E_c is the x-ray absorption coefficient μ(ω), where E_c is the core electron energy and E is the energy of the excited electron. The absolute edge energy is set by the core-level energy and the Fermi level; the local density of states ρ_l^n(E) at site n and the Fermi energy E_F are calculated self-consistently. These quantities can be expressed in terms of the Green's function in Eq. (4) as

μ(ω) ∝ −(2/π) Im ⟨φ_0| ε̂·r′ G(r′, r; E) ε̂·r |φ_0⟩ θ(E − E_F)

and

ρ_l^n(E) = −(2/π) Im ∫_{r < R_n} G_{L,L}(r, r; E) dr,

respectively. Here |φ_0⟩ is the initial state of the absorbing atom and R_n is the Norman radius 22 around the n-th atom, which is analogous to the Wigner-Seitz radius of neutral spheres, and the factor 2 accounts for spin degeneracy.

B. Self-energy corrections from strong correlations in cuprates

In the optimal or overdoped regime of present interest, cuprates do not exhibit any signature of a symmetry-breaking order parameter, and thus the quasiparticle dispersion can be treated within a paramagnetic scheme in which the susceptibility takes the RPA form

χ(q, ω) = χ_0(q, ω) / [1 − Ū χ_0(q, ω)].

Here Ū is the renormalized Hubbard U value. The imaginary part of the RPA susceptibility provides the dominant fluctuation interaction to the electronic system, which can be represented by W(q, ω) = (3/2)Ū² χ″(q, ω). The resulting self-energy correction to the LDA dispersion within the GW approximation is

Σ(k, ω) = Σ_q ∫ (dω′/π) W(q, ω′) Γ [ (1 − f(ξ̄_{k+q})) / (ω − ω′ − ξ̄_{k+q}) + f(ξ̄_{k+q}) / (ω + ω′ − ξ̄_{k+q}) ],

where f(ξ) is the Fermi function and Γ is the vertex correction defined below. Different levels of self-consistency within the GW scheme involve different choices for χ_0 and the dispersion ξ̄_k. Within our QP-GW scheme, the Green's function entering the χ_0 bubble is renormalized by an approximate renormalization factor Z, which is evaluated self-consistently. The corresponding vertex correction is taken within the Ward identity, Γ = 1/Z, and ξ̄_k = Z(ξ_k − μ) is the renormalized dispersion, where μ is the chemical potential. The renormalized band is employed to calculate the full spectrum of the spin susceptibility. The doping dependence of Ū is discussed in the references cited above.

III. RESULTS AND DISCUSSION

In Subsection IIIA below, we discuss O and Cu K-edge XANES using a generic GW plasmon-pole self-energy and RPA core-hole screening, but without the self-energy correction arising from strong correlation effects. Subsection IIIB examines doping-dependent effects of self-energy corrections on the O K-edge XANES.

A. XANES without self-energy corrections

Since K-edge XANES probes the site-dependent p-density of states (p-DOS), we consider first the projected p-DOS from O and Cu sites near E_F as obtained using the FEFF code. The low-temperature orthorhombic crystal structure with space group Bmab was used. 28 It is important to note that the structure involves two inequivalent O-atoms with different chemical environments, namely, the O-atoms in the cuprate planes (O_pl) and the apical O-atoms. The O K-edge XANES pre-peak is thus mainly associated with unoccupied electronic states from atoms lying in the Cu-O planes. Notably, the Cu-p DOS (not shown) in the near-E_F energy window of Fig. 1 is also quite small and structureless and becomes significant only several eV above E_F. The experimental evidence for the aforementioned features of XANES spectra has been discussed previously by several authors. 14-16
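As a schematic illustration (not the FEFF9 workflow) of how such a self-energy enters a near-edge spectrum, one can smear each unoccupied state of a projected p-DOS by a Lorentzian of width |Im Σ| and shift it by Re Σ; the array inputs below are assumptions standing in for quantities a real calculation would supply.

```python
# Illustrative broadening of a p-DOS into a XANES-like spectrum; E, pdos,
# sigma_re, and sigma_im are arrays on a common energy grid (assumed inputs).
import numpy as np

def broadened_spectrum(E, pdos, sigma_re, sigma_im, e_fermi):
    mu = np.zeros_like(E)
    for Ep, w, dre, dim in zip(E, pdos, sigma_re, sigma_im):
        if Ep <= e_fermi:                 # occupied states do not absorb
            continue
        gamma = max(abs(dim), 1e-3)       # Lorentzian half-width from Im(Sigma)
        mu += w * gamma / np.pi / ((E - (Ep + dre)) ** 2 + gamma ** 2)
    return mu
```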
The treatment of core-hole screening in the O K-edge spectrum is addressed in Fig. 2, where we compare the experimental results from overdoped LSCO with computations using two different core-hole screening models. The computed XANES spectrum using a fully screened hole (blue solid curve) is seen to be in good accord with the results for the RPA-screened hole (green dashed curve), although the intensity of the feature at 532 eV differs somewhat between the ISR and RPA results. All three curves in Fig. 2 show the presence of the pre-edge peak around 528.5 eV, a weaker pre-edge feature around 530 eV, and the prominent peak at 532 eV, which is due to the apical oxygen, as demonstrated by the calculated partial absorption, red (dashed-dotted) curve.

Fig. 3 presents results along the preceding lines for the Cu K-edge XANES in undoped LSCO. Since the Cu K-edge probes p-states, which are only weakly correlated, it provides a useful check of our theoretical method for the case of weak correlation. The results in Figs. 2 and 3 clearly indicate that the K-edge spectra are not sensitive to the core-hole screening model, as both the RPA and the ISR give results in reasonable agreement with experiment.

B. Strong correlation effects and doping dependence

We emphasize that the conventional LDA-based formalism is fundamentally limited in its ability to describe the full doping dependence of the electronic structure of the cuprates, because the LDA yields a metallic instead of an insulating state in the undoped system. As the doping is reduced, the intensity of the 528.5 eV peak decreases while a new peak appears near 530 eV and rapidly grows with underdoping until, at x = 0, the 528.5 eV peak is completely gone. The remaining 530 eV peak represents the upper Hubbard band, and its shift in energy from the Fermi level is consistent with optical measurements. 1

Turning to the x = 0.10 spectra in Fig. 4, we see that, as expected, theory now differs substantially from the experimental results. Although theory correctly reproduces the reduced intensity of the 528.5 eV peak, it does not show the observed enhanced intensity of the upper peak at 530 eV. Instead, the spectral weight is shifted halfway between the lower and upper peaks. In Fig. 5, the experimental results indicate the opening of a gap in the spectrum which is not captured in our modeling. However, we were able to reproduce the experimental doping dependence 18 in a simpler calculation in which the XANES spectrum is modeled via the empty density-of-states, but in which self-energy corrections including the magnetic gap are incorporated. 4 Fig. 5 compares the experimental XANES spectrum with this DOS approximation at x = 0.10. The splitting of the spectrum into two peaks with the appropriate gap is well reproduced. It should be noted that this same self-energy reproduced the optical and RIXS gaps as a function of doping. 1,5,13

Interestingly, the good agreement between theory and experiment in the overdoped case implies that the upper peak at 530 eV possesses little weight in the overdoped system. In particular, we find a saturation of the lower-energy feature (∼528 eV) in the O K-edge spectra, with significant contributions to the spectrum appearing only at higher energies, greater than 531 eV. In examining effects of the core-hole screening, we find that the spectra are insensitive to the core-hole screening model, at least in the overdoped regime.
In the underdoped case, as expected, our self-energy corrections, which are appropriate for the overdoped paramagnetic system, fail to correctly describe salient features of the spectra. A simple calculation suggests that correcting this will require a more comprehensive modeling of XANES, including effects of pseudogap physics on the self-energies in the underdoped regime.
When the Going Gets Tough: A Case Report and Review of Calcinosis Cutis in an Infant with Pseudo-Hypoaldosteronism

Calcium gluconate solutions are an essential part of the intensive care medication armamentarium. Calcium-related extravasations are not an infrequent occurrence. However, occult extravasation presenting solely as an isolated mass lesion with no preceding cutaneous manifestation is rare. Calcinosis cutis is an extraosseous collection of calcium deposits in the skin and subcutaneous tissues. Multiple etiopathogenetic factors play a role in its manifestations. We illustrate a case of a seven-week-old infant diagnosed with pseudo-hypoaldosteronism with a mysterious swelling on the left leg during the third week of hospitalization, which was attributed to occult iatrogenic calcinosis cutis.

Introduction

Extravasation injuries are among the most common iatrogenic morbidities in hospitalized patients [1]. Extravasation is the inadvertent extravenous administration of a medication or solution into the surrounding tissues, with the potential for severe tissue or cellular damage [2]. The reported incidence of extravasation injuries with non-vesicant medications is nearly 11% in pediatric patients, compared to 0.1-6% in adults [3].

Intravenous calcium infusions have garnered considerable disrepute over the years due to their local complications. A well-recognized complication of their usage is their propensity for extravascular leaks and ensuing tissue damage. The commonly used intravenous calcium-containing solutions, such as calcium gluconate and calcium chloride, possess osmolalities of 669 and 2040 mOsm/L, respectively [4]. The hyperosmolarity, combined with the cationic properties of calcium, has the potential to cause deep penetrating tissue trauma.

Fortunately, the damage is usually acute, painful, and visible, mandating remedial measures to prevent further injury. Occasionally, minor extravasations may be subtle and go unnoticed. Rarely, they may transform into a calcified mass or nodular lesion called iatrogenic calcinosis cutis [5]. Pathologically, these extraneous calcium deposits contain hydroxyapatite crystals, which result from the combination of calcium with exposed collagen in tissues. In vitro, calcium chloride has a greater dissociative tendency; however, in practice, calcinosis cutis has been documented with both types of salts.

Clinically, it may mimic conditions such as osteomyelitis, cellulitis, abscess, or thrombophlebitis. In addition, the delayed appearance of this rare lesion makes the temporal correlation between the antecedent injury and the ensuing lesion relatively tenuous.
Case Presentation

A term male infant weighing 2810 grams, born by normal vaginal delivery, was brought to the pediatric department with a history of poor feeding, vomiting, and lethargy for the preceding two weeks. His admission weight was 2670 grams. On examination, his heart rate was 120/min, respiratory rate was 48/min, oxygen saturation was 99% on room air, blood pressure was 84/50 mm Hg, and random blood sugar was 110 mg/dl. The detailed examination did not reveal any dysmorphism or abnormal genitalia. Serum potassium was 8.21 mmol/L (normal: 3.5-5.1 mmol/L). Serum calcium was 8.5 mg/dl (normal: 8-11 mg/dl). An electrocardiogram was performed, which showed changes of hyperkalemia with tall T waves. His urine test was negative for blood, leukocytes, and nitrites. He was given supportive therapy for hyperkalemia with intravenous calcium gluconate solution, insulin-dextrose solutions, and potassium-binding resins. Given the persistent hyperkalemia, other treatments, such as peritoneal dialysis, were also added later on. A full sepsis workup was performed, which was negative. In addition, 17-OHP (17-hydroxyprogesterone) and serum aldosterone levels were also normal. Thus, the patient was admitted to the pediatric intensive care unit on day two of admission with a presumed diagnosis of pseudo-hypoaldosteronism, for correction of the life-threatening dyselectrolytemia. High doses of intravenous 10% calcium gluconate (200 mg/kg/day) were given by direct intravenous injection into the peripheral veins during the first two weeks of hospitalization to treat persistent hyperkalemia with ECG changes.

On day 50 of his hospital stay, a swelling was noticed over the left leg. The lesion was hard, non-tender, and gradually progressive (Figure 1a), with no visible signs of inflammation. There was no intravenous catheter in the vicinity of the lesion at this time. The possibilities of cellulitis, osteomyelitis, or abscess were excluded on the basis of appropriate investigations and the clinical course. Skeletal radiographs demonstrated radio-opacities in the extraosseous space suggestive of calcinosis cutis (Figure 1b). A retrospective review revealed that an intravenous catheter had been in situ in the same limb 10 days earlier. There was no history of frank extravasation. Investigations revealed normal calcium, phosphorus, and vitamin D levels. His hormonal analysis was within normal limits, including parathormone, cortisol, and aldosterone levels (Table 1). The lesion was thus attributed to occult extravasation with delayed presentation as iatrogenic calcinosis cutis. Spontaneous recovery was documented within four months of its appearance. The genetic basis of pseudo-hypoaldosteronism was later established by confirmatory mutational analysis in the child.

Discussion

Calcinosis cutis is associated with hypoparathyroidism, hyperphosphatemia, leukemia, connective tissue disorders, trauma, and renal insufficiency [5]. However, iatrogenic calcinosis cutis associated with calcium-containing medications is an unusual occurrence. The first report of soft tissue calcification after an intramuscular calcium injection in an infant was published in 1936 by Tumpeer et al. [5]. Following this, several similar reports emerged, leading to a change in the route of administration from intramuscular to intravenous in the late 1940s. However, the 1970s witnessed reports of complications from intravenous calcium gluconate infusions. Berger et al. published the first series documenting this phenomenon [6]. Goldminz et al.
reported a higher incidence of adverse effects related to calcium infusions in premature infants [7].

In a systematic review of calcium infusion-related complications, the most common site of lesions was the dorsum of the hand (42%), followed by the upper limb (20%) and the lower limb (18%). The two most frequent symptoms were erythema (65%) and swelling/edema (48%), followed by skin necrosis (47%), indurated skin (33%), and yellow-white plaques or papules (33%). In nearly two-thirds of reported cases, calcium gluconate was the causative agent. The average volume of calcium infused was 19.2 ml (median 4.9 ml) [2]. A recent report by Soon et al. [8] observed calcinosis cutis in two infants as a complication of parenteral calcium gluconate therapy in the postoperative period. Although little information is available in published reports regarding cumulative doses of calcium administered and the incidence of iatrogenic calcinosis, our index case received three to four doses of intravenous calcium, each of 200 mg/kg, in the first 10 days of admission.

Pathologically, calcinosis cutis can be classified into four types: dystrophic, metastatic, idiopathic, and iatrogenic [9]. The most common variety is dystrophic, resulting from systemic pathologies such as infection, inflammatory processes, cutaneous neoplasm, or connective tissue diseases. Metastatic calcinosis occurs in the setting of hypercalcemia, causing tissue deposition. The term idiopathic is used when neither local tissue trauma nor systemic metabolic insults can explain the extraneous calcium deposits. The least common entity is iatrogenic calcinosis. Some reports indicate that repetitive heel-stick injuries in neonates can give rise to calcified nodules, pathologically of the pure dystrophic variety [3].

The pathogenesis of iatrogenic calcinosis cutis is multifactorial. It is thought to result from a combination of transiently elevated local concentrations of calcium and tissue damage at the extravasation site. However, overt extravasation is not a prerequisite for tissue calcification. Some key factors are the friability of delicate tissues in a small infant, alkaline pH, and an infusion with a propensity for tissue trauma [10]. Concomitant administration of sodium bicarbonate to treat hyperkalemia may have been an aggravating factor in our index case. Other medications exacerbating calcium deposition are prednisolone, sodium phosphate, prochlorperazine maleate, streptomycin sulfate, and amphotericin [7].
Interestingly, calcium infusions are characteristically transparent in vitro and remain radiologically inapparent in the first few days following extravasation. One proposed explanation is that traumatized tissues promote calcium influx and enhance the retention of phosphates bound to intracellular proteins. This leads to the crystallization of calcium phosphate, which is radiopaque. The process evolves and becomes clinically and radiologically apparent in roughly two weeks. The average time interval documented in the literature is 13 days (range: two hours to 24 days) [11]. Likewise, in our case the last calcium infusion was administered nearly 10 days before the onset of the calcinotic lesion. A recently published work established a chronogram for extravasation injuries following calcium infusions: erythema and inflammation appear in the first week, and nodular lesions tend to occur between the first and fourth weeks. Papules or plaques are reportedly present during the first four weeks, and necrosis occurs within three weeks.

Differential diagnoses include cellulitis, osteomyelitis, arthritis, abscess, periostitis, myositis ossificans, and thrombophlebitis. Simple skeletal X-rays can delineate the characteristic findings of calcinosis cutis and help exclude other competing diagnoses. Lee and Gwinn's classification describes three types of radiological appearance of calcification following calcium extravasation; the most commonly reported is Type 1, followed by Type 2 and Type 3 [12]. The Type 1 pattern is an amorphous mass close to the injection site, resembling myositis ossificans or periostitis. Type 2 is characterized by diffuse calcification in subcutaneous plaques, with lesions resembling the calcifications seen in fat necrosis or juvenile dermatomyositis. The Type 3 pattern has a vascular and perivascular distribution of calcification, simulating the appearance of arteriosclerosis. Radiographs in our index case showed subcutaneous plaque-like deposits resembling a Type 2 pattern.

The natural course of iatrogenic calcinosis cutis is usually benign and self-limited. Spontaneous resolution occurs through trans-epidermal elimination but may take three to six months. Conservative therapies have included local cooling and, at times, local heat application. Selected cases with secondary complications may require surgical intervention. The literature reports the use of antidotes such as hyaluronidase in acute calcium extravasations. Hyaluronidase has also been used successfully to treat extravasation of other substances, such as 10-50% dextrose, TPN, calcium, radiographic contrast media, potassium, mannitol, aminophylline, and nafcillin. However, no standardized protocol for the management of calcinosis cutis exists. There are several emerging therapies for calcinosis, depending on the primary cause. Garcia et al. recently reported a case of iatrogenic calcinosis cutis successfully treated with topical sodium thiosulfate [13].
Conclusions
While meticulous vigilance when using calcium-containing solutions in children remains the best form of prevention, certain safeguards must be exercised. These include switching to oral supplements as early as possible, ensuring flow rates of <2 ml/min, and avoiding co-administration of anions such as bicarbonate, phosphates, and sulfates. In addition, regular checks of the cannulation site and of the patency of intravenous catheters should be emphasized. Timely recognition of these adverse effects can help formulate wise investigative plans and save unnecessary costs and patient suffering.

In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

FIGURE 1: (a) Firm, indurated swelling over the dorsal aspect of the left leg in proximity to the previous cannulation site. (b) Anterior-posterior and lateral view radiographs depicting soft tissue calcifications in the left lower limb
2023-10-26T15:17:46.037Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "f854fbdd810d2693c644d1884990f6dd71327978", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/195424/20231024-3448-p9wm1s.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e0945c427494fa43dbde6020bed27061a9a89da0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
36731244
pes2o/s2orc
v3-fos-license
Effects of women's groups practising participatory learning and action on preventive and care-seeking behaviours to reduce neonatal mortality: A meta-analysis of cluster-randomised trials

Background
The World Health Organization recommends participatory learning and action (PLA) in women's groups to improve maternal and newborn health, particularly in rural settings with low access to health services. There have been calls to understand the pathways through which this community intervention may affect neonatal mortality. We examined the effect of women's groups on key antenatal, delivery, and postnatal behaviours in order to understand pathways to mortality reduction.

Methods and findings
We conducted a meta-analysis using data from 7 cluster-randomised controlled trials that took place between 2001 and 2012 in rural India (2 trials), urban India (1 trial), rural Bangladesh (2 trials), rural Nepal (1 trial), and rural Malawi (1 trial), with the number of participants ranging between 6,125 and 29,901 live births. Behavioural outcomes included appropriate antenatal care, facility delivery, use of a safe delivery kit, hand washing by the birth attendant prior to delivery, use of a sterilised instrument to cut the umbilical cord, immediate wrapping of the newborn after delivery, delayed bathing of the newborn, early initiation of breastfeeding, and exclusive breastfeeding. We used 2-stage meta-analysis techniques to estimate the effect of the women's group intervention on behavioural outcomes. In the first stage, we used random effects models with individual patient data to assess the effect of groups on outcomes separately for the different trials. In the second stage of the meta-analysis, random effects models were applied using summary-level estimates calculated in the first stage of the analysis. To determine whether behaviour change was related to group attendance, we used random effects models to assess associations between outcomes and the following categories of group attendance and allocation: women attending a group and allocated to the intervention arm; women not attending a group but allocated to the intervention arm; and women allocated to the control arm. Overall, women's groups practising PLA improved behaviours during and after home deliveries, including the use of safe delivery kits (odds ratio [OR] 2.92, 95% CI 2.02–4.22; I² = 63.7%, 95% CI 4.4%–86.2%), use of a sterile blade to cut the umbilical cord (1.88, 1.25–2.82; 67.6%, 16.1%–87.5%), birth attendant washing hands prior to delivery (1.87, 1.19–2.95; 79%, 53.8%–90.4%), delayed bathing of the newborn for at least 24 hours (1.47, 1.09–1.99; 68.0%, 29.2%–85.6%), and wrapping the newborn within 10 minutes of delivery (1.27, 1.02–1.60; 0.0%, 0%–79.2%). Effects were partly dependent on the proportion of pregnant women attending groups. We did not find evidence of effects on uptake of antenatal care (OR 1.03, 95% CI 0.77–1.38; I² = 86.3%, 95% CI 73.8%–92.8%), facility delivery (1.02, 0.93–1.12; 21.4%, 0%–65.8%), initiating breastfeeding within 1 hour (1.08, 0.85–1.39; 76.6%, 50.9%–88.8%), or exclusive breastfeeding for 6 weeks after delivery (1.18, 0.93–1.48; 72.9%, 37.8%–88.2%). The main limitation of our analysis is the high degree of heterogeneity for effects on most behaviours, possibly due to the limited number of trials involving women's groups and context-specific effects.
Conclusions
This meta-analysis suggests that women's groups practising PLA improve key behaviours on the pathway to neonatal mortality, with the strongest evidence for home care behaviours and practices during home deliveries. A lack of consistency in improved behaviours across all trials may reflect differences in local priorities, capabilities, and the responsiveness of health services. Future research could address the mechanisms behind how PLA improves survival, in order to adapt this method to improve maternal and newborn health in different contexts, as well as improve other outcomes across the continuum of care for women, children, and adolescents.

Author summary
Why was this study done?
• A systematic review and meta-analysis of trials of participatory learning and action in women's groups found a 25% reduction in neonatal mortality associated with these groups, but the pathways to improved survival have not been explored using available evidence from all trials.
• We used data from cluster-randomised trials of women's groups to explore behaviours in the antenatal, delivery, and postnatal periods in order to better explain the reduction in neonatal mortality associated with these groups.
We also examined whether women who were assigned to the intervention arm and attended group meetings were more likely to have improved care practices than women who were also in the intervention arm but did not attend group meetings.

What did the researchers do and find?
• We conducted a meta-analysis using individual-level data to explore the relationship between women's groups and key behaviours in the antenatal, delivery, and postnatal periods. Our findings suggest that women's groups are able to improve key behaviours for home deliveries, including clean delivery practices and thermal care practices.
• To determine whether women who attended group meetings were more likely to have improved behaviours compared with women who did not attend, we compared behaviours between these women separately for the different trials. Overall, we found that women who attended group meetings were more likely to have improved behaviours than women who did not attend.

What do these findings mean?
• Our meta-analysis showed that women's groups were associated with improvements in critical practices, including clean deliveries and appropriate thermal care for home deliveries. Evidence suggests that these care practices are essential for reducing neonatal mortality because of the importance of sepsis and hypothermia in areas with high neonatal mortality and low rates of facility births. Although this finding explains how women's groups improved survival in these contexts, we also found that women's groups improved survival in areas with lower neonatal mortality, such as rural Bangladesh and rural Malawi. It is possible that women's groups were able to help families make more timely, better informed decisions about care seeking.
• Women's groups have demonstrated flexibility in adapting to a shifting environment to improve birth outcomes through important pathways. Key to the continued reduction in adverse birth outcomes will be sustained improvement in community-level practices, as well as ensuring that health facilities are equipped to support quality care.

Introduction
Between 1990 and 2015, mortality rates in children aged between 2 months and 5 years declined globally by 58% [1-3]. Neonatal mortality decreased by 47% over the same period, but the proportion of deaths occurring during the neonatal period out of all deaths among children under 5 years of age increased from 37% to 45% [3]. If these trends continue, neonatal mortality will constitute over 50% of deaths among children under 5 years of age by 2030 [3]. Increased coverage of effective interventions is required to improve neonatal survival [4].

Scaling up community interventions to improve maternal and newborn health outcomes has the potential to reduce neonatal mortality by 25% (risk ratio 0.75, 95% CI 0.67-0.83; 21 studies, n = 302,464). The most effective interventions are community mobilisation through women's groups, counselling for care and referral through home visits, and combinations of these 2 approaches [5]. A meta-analysis of home visiting programmes with or without home-based neonatal care found that interventions in proof-of-principle studies led to a 45% reduction in neonatal mortality (relative risk 0.55, 95% CI 0.48-0.63), while interventions tested at scale, in programmatic conditions, led to a 12% reduction (risk ratio 0.88, 95% CI 0.82-0.95) [6].
A meta-analysis of 7 trials evaluating the effects of women's groups practising participatory learning and action (PLA) found a 20% reduction in neonatal mortality (odds ratio [OR] 0.80; 95% CI 0.67-0.96) with high levels of heterogeneity (I² = 73.2%, p = 0.001) [7]. The WHO and UNICEF Every Newborn Action Plan now recommends both home visits and participatory meetings with women's groups as community strategies to improve maternal and newborn health [8].

In most of the studies included in the above-mentioned meta-analysis, women's groups went through a PLA cycle with 4 distinct phases [7]. In the first phase, groups identified and prioritised common maternal and newborn health problems in their community. In the second phase, they discussed potential solutions and prioritised them. In the third phase, groups implemented their chosen solutions, and in the fourth, they evaluated their progress and planned for the future [7,9-13]. The cycle of meetings was intended to build the capacity of individuals, groups, and communities to take action to improve maternal and neonatal health [14].

Although women's groups practising PLA have been shown to reduce newborn mortality in some settings, questions remain about the mechanisms through which they achieve this [7]. In rural eastern India, the proof-of-principle Ekjut cluster-randomised controlled trial and its process evaluation suggested that improved clean delivery practices and thermal care were partially responsible for increased neonatal survival [15]. In Malawi, the MaiMwana trial process evaluation noted that groups used varied strategies to address maternal and neonatal health concerns, including health education, bicycle ambulances, distribution of insecticide-treated nets, establishment of mobile antenatal and under-5 clinics, and group funds [14]. In Nepal, the process evaluation suggested that improvement in mortality was possibly due to increases in care-seeking and preventive care practices for home deliveries [16].

Results from the meta-analysis showing the value of women's groups in improving neonatal survival were heterogeneous [7]. Although most of the trials in rural South Asia found reductions in neonatal mortality, this was not the case for the trial that took place in an urban Indian setting [7,17]. These findings and ongoing changes in the coverage of key strategies to improve maternal and neonatal survival, including facility-based deliveries, suggest a need to gain better insight into the mechanisms through which this complex intervention works. We sought to examine the effects of women's groups practising PLA on behaviours in the antenatal, delivery, and postnatal periods in order to understand the pathways to mortality reduction. Because the effects on neonatal mortality appeared to be greater in studies where more pregnant women attended meetings, we hypothesised that improved behaviours would also be related to whether a woman attended women's group meetings [7].

Ethics
Ethical approval for the trials that collected the data for this study came from the UCL Great Ormond Street Institute of Child Health and Great Ormond Street Hospital for Children (UK) and in-country research ethics committees, as previously detailed [7].

Search criteria
We did a meta-analysis of trials of women's groups practising PLA. Our search strategy and inclusion criteria were similar to those of a previous systematic review and meta-analysis.
Briefly, we searched PubMed, Embase, Cochrane Library, CINAHL, African Index Medicus, Web of Science, the WHO Reproductive Health Library, and the Science Citation Index for studies published from the databases' inception dates until March 1, 2017, with no language restrictions. Search terms included a combination of 'community mobilisation', 'community participation', 'participatory learning and action', 'women's groups', and 'women'. We also sought unpublished data from researchers known to be active in this area. Studies were included if they were randomised controlled trials, participants were women aged 15-49 years, and the trial tested a PLA cycle with women's groups and reported information on at least 1 of our chosen outcomes [7]. Six of the 7 studies in the previous review met our inclusion criteria, as did 1 additional study from rural India [13]. In total, our analysis included 7 trials that took place between 2001 and 2012 within socio-economically disadvantaged communities in 4 countries, including rural communities in Bangladesh, Malawi, and Nepal, and rural and urban communities in India [7,10-13,17-19]. We used individual-level data collected during these 7 cluster-randomised controlled trials. Table 1 describes the characteristics of each study, including the number of participants.

[Notes to Table 1 (included studies): 1. Published estimate comparing the women's group intervention to the control group, adjusting for covariates, unless otherwise specified. 2. This number may differ from the number reported in the mortality estimate for the main trial paper as it includes liveborn infants with information collected as part of the survey questionnaire only. 3. This number may differ from the number reported in the mortality estimate for the main trial paper as it includes pregnancies with information collected as part of the survey questionnaire only. 4. Bangladesh 2005-2007 trial data used in this analysis include both women's groups and traditional birth attendant training intervention and control areas. 5. The Malawi trial was a 2-by-2 factorial cluster-randomised controlled trial of a women's group intervention and an infant feeding programme; results are from the women's group intervention and control arms. OR, odds ratio; RR, risk ratio.]

Two of the trials used a 2-by-2 factorial design. The first Bangladesh trial used a factorial design to assess the effects of the women's group intervention and of a traditional birth attendant (TBA) training intervention [11]. There was no evidence of interaction between these 2 interventions, so we included data collected from all study participants [11]. The trial in Malawi used a factorial design to assess both the women's group intervention and an infant feeding intervention. Because there was significant interaction between the 2 interventions and the infant feeding intervention had an independent effect on neonatal mortality, we did not include participants in the infant feeding arm in this analysis [20]. We also included 2 studies that took place in the same geographical region of Bangladesh. The initial Bangladesh trial did not find evidence of a reduction in neonatal mortality for the women's group intervention. This may have been due to very low coverage; only 3% of women reported attending women's groups. The objective of the second trial was therefore to determine whether scaling up the coverage of women's groups in the same geographical area would have an effect on neonatal mortality.
In all studies except the trials in Nepal and Malawi, the data collection systems involved a female, community-based key informant who reported births and deaths in her area, which covered a population ranging from 250 to 800 households. For the trials in Nepal and Malawi, the key informant identified women in pregnancy. This key informant met with a trained interviewer once a month. The interviewer verified the informant's reports and paid her an incentive for each correct identification. In the Malawi trial, cluster enumerators, who were similar to key informants, were paid a monthly salary. Four to 6 weeks after delivery, the interviewer visited the home where a birth or death had been identified and collected information on the mother's and family's sociodemographic characteristics, as well as events in the antenatal, delivery, and postnatal periods, using a structured questionnaire [9-12,17,19,20]. In the event of a maternal death, an interviewer or supervisor conducted a verbal autopsy with a relative or close friend [9,10,19].

Measures
We selected outcomes representing a variety of important behavioural indicators in the antenatal, delivery, and postnatal periods, including the following: appropriate antenatal care, facility delivery, use of a safe delivery kit, hand washing by the birth attendant prior to delivery, use of a sterilised instrument to cut the umbilical cord, immediate wrapping of the newborn after delivery, delayed bathing of the newborn, immediate initiation of breastfeeding, and exclusive breastfeeding for the first 6 weeks after delivery. A safe delivery kit was normally available at low cost and typically included the following, at a minimum: soap, a clean string, a razor blade, and a plastic sheet [21]. Information collected in the different surveillance systems did not allow us to understand whether clean delivery practices were used independent of kit use. Although the Malawi trial collected data on clean delivery practices, including hand washing by the birth attendant and use of a sterilised blade to cut the cord, the Ministry of Health's position was to promote facility deliveries, and it was not acceptable for the study's women's groups to discuss clean home delivery practices or TBA training. Table 2 lists and defines the outcomes used in the analysis for each trial. We assessed the quality of evidence for each outcome using Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria, and these results can be found in S1 Table [22].

The previous meta-analysis assessing the effect of women's groups on mortality outcomes found that the coverage of groups and the proportion of pregnant women participating in them were key to mortality reduction [7]. As part of an additional analysis to test whether coverage also affected the success of the intervention in improving the behaviours of interest, we created a variable indicating whether a woman attended group meetings. Women who were allocated to the intervention arm and reported attending at least 1 group meeting were considered women's group attendees.

Statistical methods
We examined the prevalence of behaviours of interest either at baseline or, when this was not available, in the trial's control arm. We also tabulated the prevalence of each behaviour by treatment arm and women's group attendance (S2 Table). We then used 2-stage meta-analysis techniques to estimate the effect of the women's group intervention on behavioural outcomes.
In the first stage, we used individual records to assess the effect of women's groups on the selected outcomes separately for the different trials. We used logistic regression with random effects (xtmelogit command) in Stata to account for the clustered nature of the data [23]. For trials that used a stratified or paired trial design, we adjusted for the different strata/pairs using a dummy variable that we treated as a fixed effect. These analyses also adjusted for any baseline differences between the intervention and control arms that existed before the inception of any intervention activities (S1 Box). Although the Nepal trial collected information on whether a woman had a facility delivery, due to very few women having a facility delivery and the paired nature of this cluster-randomised trial, these models would not converge. Likewise, for the urban Indian trial, the model assessing the effect of groups on exclusive breastfeeding failed to converge because only 0.9% of women reported a positive response for this outcome.

For the second stage of the meta-analysis, we used random effects models via the metan command in Stata [23]. We chose to do a 2-stage meta-analysis rather than use summary estimates from the published trials, as not all trials reported all behaviours of interest for our analysis, and this method also allowed us to adjust for additional confounders that were not accounted for in the original trials. For trials with outcomes or covariates with greater than 10% missing data and significant differences in missingness between the control and intervention arms, we applied multiple imputation by chained equations (MICE) using the MI command in Stata, assuming data were missing at random (MAR) [24]. Variables included in the MICE models were the outcome of interest, treatment arm, and covariates that were considered to be predictors of missingness [25,26]. We used a weighted sensitivity analysis using the selection model approach with multiply imputed data to test for modest departures from MAR [27-29]. In all instances, there was no evidence that missingness biased our main study findings.

Women's group attendance
For each of the studies, we used logistic regression with random effects (xtmelogit command) in Stata to assess associations between outcomes and the following categories of group attendance and allocation: women attending a group and allocated to the intervention arm, women not attending a group but allocated to the intervention arm, and women allocated to the control arm. Stata's postestimation command 'test' was used to determine if there were significant differences in the ORs between (1) women who attended groups in the intervention arm versus women in the control arm and (2) women who did not attend groups in the intervention arm versus women in the control arm. Models were adjusted using methods similar to those described for the first stage of the meta-analysis, in addition to including covariates likely to influence health behaviours and women's group attendance: parity, maternal age, and maternal educational attainment (S1 Box). We identified these covariates by discussing the intervention with principal investigators and reviewing process evaluations and qualitative research on the women's group interventions [14-16].
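To make the second-stage pooling concrete, the minimal sketch below implements DerSimonian-Laird random-effects pooling of trial-level log odds ratios, one standard random-effects estimator of the kind computed by the metan command; the function name and the five trial estimates in the example are hypothetical placeholders, not values from the trials analysed here.

```python
import numpy as np

def pool_random_effects(log_or, se):
    """DerSimonian-Laird random-effects pooling of trial-level log odds ratios.

    log_or: per-trial log odds ratios (stage 1 estimates); se: their standard errors.
    Returns the pooled OR, its 95% CI, and the I-squared heterogeneity statistic.
    """
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se**2                               # inverse-variance (fixed-effect) weights
    theta_fe = np.sum(w * log_or) / np.sum(w)     # fixed-effect pooled log OR
    Q = np.sum(w * (log_or - theta_fe) ** 2)      # Cochran's Q statistic
    df = len(log_or) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                 # between-trial variance estimate
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    theta = np.sum(w_re * log_or) / np.sum(w_re)  # pooled log OR
    se_theta = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    lo = np.exp(theta - 1.96 * se_theta)
    hi = np.exp(theta + 1.96 * se_theta)
    return np.exp(theta), (lo, hi), i2

# Hypothetical stage-1 estimates (log OR and standard error) from 5 trials
or_pooled, (lo, hi), i2 = pool_random_effects(
    log_or=[0.95, 1.20, 0.40, 1.50, 0.80],
    se=[0.30, 0.25, 0.40, 0.35, 0.20],
)
print(f"pooled OR {or_pooled:.2f}, 95% CI {lo:.2f}-{hi:.2f}, I2 = {i2:.1f}%")
```

A two-stage design of this kind only needs each trial's stage-1 log odds ratio and standard error, which is why it can accommodate trials that reported different subsets of behaviours.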
Although the second rural Indian trial (the Jharkhand Odisha Health Action Research [JOHAR] trial), the trial in urban India, and the Malawi trial adjusted for baseline differences, we did not adjust for baseline differences in this analysis, as it would not have been possible for women to attend group meetings before their inception [13]. We chose not to do a pooled analysis of the associations between health behaviours and women's group attendance because we expected both the determinants of women's group attendance and the types of behaviours discussed at the women's groups to differ substantially across trials, meaning that a single summary effect would not capture this heterogeneity adequately. All analyses were conducted in Stata 14 [23].

General
The prevalence of antenatal, delivery, and postnatal health behaviours among women who were not exposed to the intervention (baseline period or control arm of the trial) differed substantially between studies (Table 3). For example, 2% of women delivered in health facilities in the control group of the trial in rural Nepal, compared with 84% of women in the baseline group in the urban India trial. Appropriate thermal care was uncommon in the first rural India trial, with only 12% of neonates being wrapped within 10 minutes of birth and only 17% having delayed bathing. Exclusive breastfeeding was rarely practised in urban India (1% at baseline, compared with between 20% and 94% at baseline or in the control arm in the other trials). Prevalence of behaviours for both the intervention and control arms can be found in S2 Table.

[Notes to Table 3: 1. Prevalence in control clusters. 2. Prevalence in baseline data. 3. Outcome not collected for this study.]

Effect of women's groups on behavioural outcomes in the antenatal, delivery, and postnatal periods
The meta-analysis found no evidence that women's groups improved the uptake of antenatal care (OR 1.03, 95% CI 0.77-1.38; I² = 86.3%, 95% CI 73.8%-92.8%; Fig 1) (GRADE criteria: low; S1 Table) or health facility delivery (OR 1.02, 95% CI 0.93-1.12; I² = 21.4%, 95% CI 0%-65.8%; Fig 2) (GRADE criteria: high; S1 Table), but we cannot rule out changes in the selectivity and speed of uptake of healthcare-seeking behaviours.

The meta-analysis suggests that women's groups were effective in improving hygiene practices for home deliveries. Overall, there was evidence that women's groups increased hand washing by birth attendants (OR 1.87, 95% CI 1.19-2.95; I² = 78.9%, 95% CI 53.8%-90.4%; Fig 3) (GRADE criteria: low; S1 Table). There was also some evidence that women's groups improved the use of new or sterile blades for cord cutting (OR 1.88, 95% CI 1.25-2.82; I² = 67.6%, 95% CI 16.1%-87.5%; Fig 4) (GRADE criteria: low; S1 Table). There was moderate evidence that groups increased the use of safe delivery kits (OR 2.92, 95% CI 2.02-4.22; I² = 63.7%, 95% CI 4.4%-86.2%), wrapping of the newborn within 10 minutes of delivery (OR 1.27, 95% CI 1.02-1.60; I² = 0.0%, 95% CI 0%-79.2%), and delayed bathing of the newborn for at least 24 hours (OR 1.47, 95% CI 1.09-1.99; I² = 68.0%, 95% CI 29.2%-85.6%) (S1 Table). There was no evidence that groups improved early initiation of breastfeeding (OR 1.08, 95% CI 0.85-1.39; I² = 76.6%, 95% CI 50.9%-88.8%) or exclusive breastfeeding for 6 weeks after delivery (OR 1.18, 95% CI 0.93-1.48; I² = 72.9%, 95% CI 37.8%-88.2%).

Effect of women's group attendance on improving selected behaviours
We anticipated a positive relationship between exposure to the intervention and behaviour change, such that there would be a difference in the uptake of preventive and care-seeking behaviours between (1) women who attended groups in the intervention arm versus women in the control arm and (2) women who did not attend groups in the intervention arm versus women in the control arm. We expected that women who attended group meetings in the intervention arm would be more likely to modify their behaviours than women who were also in the intervention arm but did not attend group meetings.
In most studies, and for the majority of behaviours, women who reported attending at least 1 group meeting were more likely to practise the behaviour in question. Detailed results can be found in Table 4.

Results suggested improvements for group attendees compared to non-attendees in antenatal care visits with a skilled provider in the first Bangladesh trial (OR comparing non-attendees to control: 0.78, 95% CI 0.55-1.13; OR comparing attendees to control: 1.72, 95% CI 1.11-2.66; p-value of adjusted Wald test comparing equality of parameters: p < 0.001) and the second Bangladesh trial (OR comparing non-attendees to control: 1.31, 95% CI 0.96-1.80; OR comparing attendees to control: 2.01, 95% CI 1.46-2.77; Wald test p < 0.001). Improvements for group attendees compared to non-attendees were also present in the rural Malawi trial (OR comparing non-attendees to control: 0.66, 95% CI 0.35-1.26; OR comparing attendees to control: 0.79, 95% CI 0.42-1.50; Wald test p = 0.019).

Facility delivery was more likely for group attendees compared to non-attendees in 4 trials. The first India trial demonstrated improved rates of facility delivery in group attendees compared to non-attendees (OR comparing non-attendees to control: 0.73, 95% CI 0.56-0.96; OR comparing attendees to control: 0.86, 95% CI 0.65-1.14; p-value of adjusted Wald test comparing equality of parameters: p = 0.027). The second Bangladesh trial also demonstrated a difference between attendees and non-attendees (OR comparing non-attendees to control: 1.13, 95% CI 0.91-1.40; OR comparing attendees to control: 0.99, 95% CI 0.80-1.24; Wald test p = 0.024). The JOHAR trial [13] in rural India also found a difference in facility-based deliveries when comparing group attendees and non-attendees (OR comparing non-attendees to control: 0.89, 95% CI 0.52-1.52; OR comparing attendees to control: 1.17, 95% CI 0.70-1.95; Wald test p = 0.017). Results from the trial in rural Malawi also suggest that facility deliveries were more likely for group attendees compared to non-attendees (OR comparing non-attendees to control: 0.99, 95% CI 0.48-2.03; OR comparing attendees to control: 1.17, 95% CI 0.57-2.40; Wald test p = 0.014).

Hand washing by the birth attendant prior to delivery was more likely for group attendees compared to non-attendees in all trials except the urban Indian trial and the JOHAR trial in rural India. Use of a safe delivery kit was more likely for group attendees compared to non-attendees in all trials except the JOHAR trial in rural India. Cutting the umbilical cord with a sterilised instrument was more likely for group attendees compared to non-attendees in all studies except the Bangladesh trials and the urban Indian trial. Results suggested improvements for group attendees compared to non-attendees in wrapping the newborn within 10 minutes of delivery for the first Bangladesh trial (OR comparing non-attendees to control: 1.76, 95% CI 0.58-5.36; OR comparing attendees to control: 2.85, 95% CI 0.91-8.91; p-value of adjusted Wald test comparing equality of parameters: p < 0.001) and the second Bangladesh trial (OR comparing non-attendees to control: 1.30, 95% CI 0.79-2.12; OR comparing attendees to control: 1.49, 95% CI 0.91-2.45; Wald test p = 0.033).
Not bathing a newborn within 24 hours of birth was more likely for group attendees compared to non-attendees in all trials except the Malawi trial and the JOHAR trial. Breastfeeding a newborn within an hour of delivery was more likely for group attendees compared to non-attendees in the 2 rural Bangladesh trials and the first Indian trial. However, exclusively breastfeeding an infant for the first 6 weeks of life was more likely for group attendees in all trials except the first Bangladesh trial and the Malawi trial.

Discussion
This meta-analysis suggests that women's groups practising PLA improved home delivery and home care practices during birth and the postnatal period. We found evidence that women's groups improved clean delivery practices for home deliveries, including the use of safe delivery kits, hand washing with soap by birth attendants prior to delivery, and clean cord cutting. We also found evidence that groups improved home care practices, including wrapping newborn infants within 10 minutes of delivery and delaying the bathing of infants for at least 24 hours after delivery. There was no evidence that groups improved the uptake of facility deliveries, antenatal care, early breastfeeding, or exclusive breastfeeding for at least 6 weeks following delivery. Most of the estimates for the separate behaviours had a high degree of heterogeneity. The lack of consistency in improving behaviours across all trials was unsurprising, given that groups were involved in a process where women identified, prioritised, and implemented solutions for problems that differed between settings and groups.

The previous meta-analysis that assessed the effect of groups on neonatal mortality suggested that the effect of the intervention was partly dependent on the proportion of pregnant women attending groups, and on the population coverage of the groups [7]. Our analysis tested whether the uptake of different behaviours was dependent on group attendance, and found improvements in some of the behaviours for women who attended groups compared to women who did not. Interestingly, although the first Bangladesh trial did not show any differences between the intervention and control arms in either neonatal mortality or the different care practices, results from our analysis demonstrated that attendees in the intervention arm were more likely to improve care practices compared to non-attendees in the intervention arm. This suggests that population coverage is an important factor in improving newborn health.

Although not all outcomes measured suggested an improvement for group attendees compared to non-attendees, it is possible that some behaviours were not emphasised in the group meetings for some of the trials. It is also possible that some women did not attend meetings where particular behaviours were discussed. Finally, it is possible that we did not have an adequate sample size to test for these effects, given that the original trial papers were powered to detect a reduction in neonatal mortality and not a difference in behaviours, some of which would have had much higher intracluster correlation coefficients [13,30]. The main limitation of our analyses was the high degree of heterogeneity for most of the selected behaviours. This may be due to the limited number of trials involving women's groups and the contextual heterogeneity of the settings in which they were conducted.
Behaviours identified and promoted by groups as part of their solutions to improve maternal and newborn health were likely to be different in different settings, given that 5 of the trials took place in rural South Asia, 1 trial in urban India, and 1 trial in rural Malawi. The mechanisms that influenced improvements in neonatal and maternal health in these different settings are also likely to have been affected by local social and cultural norms and by environmentally specific conditions. For example, neonatal mortality rates are higher in winter in rural India, which may have resulted in more women's groups identifying thermal care as an important practice, compared to groups in the Malawi trial [13,31].

Another potential limitation of this study was that most of the behaviours documented in the surveillance system were self-reported, and women in the intervention arm may have been more likely to report socially desirable behaviours compared to women in the control arm. This is a general limitation of self-reported data from trials that attempt to modify behaviours. Women in the intervention arm may also have been more likely to remember whether a care practice was used compared to women in the control arm. If women in the control arm were also less likely to practise the acceptable behaviour, this could have introduced bias. The sensitivity analysis testing the MAR assumption for the multiple imputation verified that our estimates were likely to be unbiased by missing data.

[Table 4. Odds ratios (95% CI) for each health behaviour, comparing the intervention arm with the control arm. Notes: Attendees are women who were assigned to the intervention arm who attended at least 1 women's group meeting; non-attendees are women who were assigned to the intervention arm but did not attend any women's group meetings. Odds ratios are for these groups compared to women assigned to the control arm. Values in bold indicate behaviours that were affected by women's group attendance or trial arm allocation (p < 0.05) and for which there was a difference between the odds ratios for attendees and non-attendees (p < 0.05 on Wald test comparing 2 parameters). 1. Models would not converge. 2. Outcome not discussed in women's group meetings. 3. Outcome not measured for this trial. 4. It was not possible to compute estimates because the category for attended in the 'allocated, attended' variable had too few newborns that were not bathed early. 5. There were too few breastfed children to estimate results.]

Our findings suggest that home care behaviours over which women and their families had greater control, including the use of clean delivery practices and appropriate thermal care, were more amenable to change than behaviours involving access to routine health services. Given findings from a previous study that clean delivery practices were associated with a reduction in neonatal mortality, it seems possible that the groups' ability to improve clean delivery practices reduced cases of neonatal sepsis, and that better thermal care practices reduced the danger of hypothermia, an important contributing factor to mortality [21]. The data on care seeking are less clear. Lack of improvement in most care-seeking practices may have been due to concerns around the availability, affordability, or quality of care in these areas [32-35].

We cannot rule out other mechanisms through which women's groups may work, but these could not be examined in this study. For example, groups may change antenatal risk behaviours in diet, infection prevention, and substance use. Groups may also help families make more timely decisions about appropriate care seeking based on better information about the quality of care in local facilities. Finally, groups may also work by shifting a family's ideas about complications from fatalism to response, and by improving access to resources and help in finding transport and care options [14-16].

Although our analysis identified improvements in some behaviours, there are still many unknowns. Attempting to understand the causal pathways behind the success or failure of complex interventions is important, and UK Medical Research Council guidance recommends a rigorous process evaluation to help gain insight into such mechanisms [36]. It is now possible to identify where more insight into the mechanisms behind the women's groups' success could be useful.
For example, it may be useful to collect information on the number of group meetings attended by each individual participant, as this would provide better estimates of the dose-response relationship with exposure. In addition, recording the problems and strategies discussed at each meeting attended by individual women would provide a more sensitive measure of exposure.

Trials included in this meta-analysis took place between 2001 and 2012, which was a period of rapid change for maternal and neonatal health [37,38]. Not only did mortality decrease, there were also significant changes in behaviours on the pathway to mortality reduction. Importantly, there were substantial increases in facility deliveries and skilled birth attendance [1]. It is likely that different behaviours were emphasised at different time points between 2001 and 2012 [39]. Likewise, in Malawi, facility deliveries increased nationally from 55% to 91% between 2000 and 2015 [40]. Results from the rural Indian trial taking place between 2005 and 2008 showed that groups did not have an impact on improving the proportion of women delivering in health facilities, but the JOHAR trial (2009-2012) found that groups improved the uptake of facility-based delivery. This may highlight one of the benefits of 'agile' interventions such as participatory women's groups, which are dialogue-based rather than dependent on a fixed set of messages: they are flexible by design, which allows groups to respond to changes in the social environment and health system.

The flexibility of women's groups in offering context-specific solutions to problems suggests that this approach may also be appropriate for settings with a medium to high proportion of facility deliveries. For example, findings from a trial in Vietnam suggest that PLA using local stakeholder groups composed of health workers and other community workers may reduce neonatal mortality in areas with mainly facility-based deliveries and moderate levels of mortality [41]. A recent meta-analysis of community-based approaches to improve neonatal mortality found that community interventions had negligible effects in settings where mortality rates were less than 32 per 1,000 live births [42]. Findings from this meta-analysis also suggested that community interventions are less effective when facility-based deliveries are greater than 44% [42]. The authors further explained that in such contexts, unhealthy home care practices are easily addressable risk factors.
These findings are supported by the results of our meta-analysis, which showed improvements in crucial home care practices, including clean deliveries and appropriate thermal care.

All trials included in this meta-analysis were conducted by University College London's Institute for Global Health, with separate partner organisations responsible for leading the interventions and data collection. Lessons learned from the initial trials were used to improve subsequent studies. As an example, in the first Bangladesh trial, the population coverage of women's groups was probably insufficient to achieve results. To address this, coverage was increased and a second trial conducted. Questions may arise as to the reproducibility of findings from the studies included in this meta-analysis, and whether PLA will be effective when brought to scale. These are valid concerns that are being addressed in scale-up initiatives, for example with accredited social health activists (ASHAs) and their supervisors supported by the National Health Mission in rural India. Results from the non-randomised, controlled evaluation of this initiative will help us better understand whether PLA will be effective when brought to scale.

The Global Strategy for Women's, Children's and Adolescents' Health is a roadmap for ending preventable deaths ('survive'), ensuring health and well-being ('thrive'), and expanding enabling environments ('transform') [43]. The UN Secretary General has made 'community empowerment' the priority for the transformative component of this agenda [44]. Findings from our meta-analysis suggest that women's groups practising PLA can improve care pathways that are key to reducing maternal and neonatal morbidity and mortality. Future research can help to assess whether such interventions can be used to address health-related issues along the continuum of care for women, children, and adolescents.

Supporting information
S1 Box. List of adjusted covariates used in different models. (DOCX)
S1 Table. Results of GRADE scoring system used for chosen behavioural outcomes. (DOCX)
S2 Table. Prevalence of behaviours among women allocated to the control arm, women allocated to the intervention arm and not attending women's groups, and women allocated to the intervention arm and attending women's groups.
2017-12-19T04:29:10.030Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "1716c086ceb2e749c970cd9229f63dd703ebbde3", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.1002467&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4b0d8c4a7d07a4c0f930fe9e4ef9c89c87933b07", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
37985598
pes2o/s2orc
v3-fos-license
Investigation of the Structural, Optical and Electrical Properties of Copper Selenide Thin Films

Introduction
Copper selenide (CuSe) belongs to the I-VI compound semiconductor materials. Copper (I) selenide exists in cubic, orthorhombic, tetragonal or monoclinic forms [1]. Copper selenide heterojunction solar cells are cost-effective and high-efficiency devices used in solar energy conversion. CuSe is also used in the fabrication of photovoltaic devices, serving as a window material, superionic conductor, electro-optical device, optical filter, thermoelectric converter, and photoelectrochemical cell. CuSe alloys have been among the most studied in recent years, with stoichiometric (α-Cu2Se, Cu3Se2, CuSe, and Cu2Se) and non-stoichiometric (Cu2-xSe) compositions exhibiting a continuous change of physical properties. In addition, various crystalline phases have been reported with orthorhombic, cubic, hexagonal, and tetragonal structure, depending on the stoichiometry and the growth methods [2-4]. These features make the electrical and optical properties interesting for applications in solar cells [5], superionic conductors [6], optical filters [7] and lasers [8]. CuSe is a semiconductor with a direct band gap of 2.2 eV or an indirect gap of 1.4 eV [9]. Thin, continuous films with the desired electrical and optical properties are required for the preparation of photoelectrochemical solar cells [10]. It is, however, difficult to obtain a continuous, single-phase CuSe film with the above-mentioned properties. Electrodeposition is one of the suitable methods for preparing thin, continuous semiconducting films. This technique provides numerous advantages, such as low-temperature processing, low cost of synthesis, no need for a vacuum facility, and no contamination of the surroundings. Film thickness and morphology can readily be controlled by adjusting the electrical parameters as well as the composition of the electrolytic solution [11]. CuSe thin films prepared by thermal evaporation have been studied with respect to their structural, electrical and optical properties [12]. The preparation of CuSe thin films by the vacuum evaporation technique and the effect of annealing on their structural, morphological, compositional and optical properties have been investigated [13]. The growth of CuSe thin films by the thermal evaporation method and their properties have been investigated using structural, optical absorption, and Raman spectroscopic techniques [14]. Grown CuSe thin films and their properties have also been investigated using X-ray diffraction, scanning electron microscopy and optical absorption techniques [15]. To the best of our knowledge, no such detailed investigation is available on the properties of CuSe thin films obtained using the electrodeposition technique.

Copper selenide has electrical and optical properties that are appropriate for a number of photovoltaic applications, and it attracts much interest since it has been broadly used in solar cell applications [16]. CuSe thin films can be deposited by different techniques such as physical vapour deposition, pulsed laser evaporation, electrodeposition, spray pyrolysis, metal organic vapour phase epitaxy (MOVPE)/metal organic chemical vapour deposition (MOCVD), screen printing, successive ionic layer adsorption and reaction (SILAR), RF sputtering, and chemical bath deposition (CBD) [17-22].
Thin film heterojunction solar cells play a significant role as low-cost, large-area and high-efficiency devices in solar energy conversion.
Copper selenide has electrical and optical properties appropriate for a number of photovoltaic applications, and it has attracted much interest because it is widely used in solar cell applications [16]. CuSe thin films can be deposited by different techniques such as physical vapour deposition, pulsed laser evaporation, electrodeposition, spray pyrolysis, metal organic vapour phase epitaxy (MOVPE)/metal organic chemical vapour deposition (MOCVD), screen printing, successive ionic layer adsorption and reaction (SILAR), RF sputtering, and chemical bath deposition (CBD) [17-22]. Thin-film heterojunction solar cells play a significant role as low-cost, large-area, high-efficiency devices in solar energy conversion. The present paper discusses the deposition of CuSe thin films on glass substrates by the CBD method and their characterization by X-ray diffraction, scanning electron microscopy (SEM), UV analysis, dielectric studies and photoconductivity measurements.

Experimental Procedure

Substrate cleaning is very important in the deposition of thin films. Commercially available glass slides with a size of 75 mm × 25 mm × 2 mm were washed using soap solution, subsequently kept in hot chromic acid, cleaned with deionized water and rinsed in acetone. Finally, the substrates were ultrasonically cleaned with deionized water for 10 min, wiped with acetone and stored in a hot oven. CuSe thin films were prepared on commercial microscopic glass slides by the CBD technique. The deposition bath consisted of an aqueous solution of copper sulfate pentahydrate (0.5 M), trisodium citrate (0.1 M), sodium hydroxide (0.5 M), 4 mL of sodium selenosulphate solution and deionized water to make a total volume of 50 mL. The deposition was carried out at 60 °C. The pH of the solution was about 9, and the solution was stirred very slowly during deposition. A glass substrate was placed vertically inside the vessel with the help of a suitably designed substrate holder. After a period of 60 min, the glass slide was removed from the bath, cleaned with deionized water and dried in the hot oven. A uniform CuSe film with a thickness of 0.6 μm and good adherence was obtained. Many trials were made, optimizing the deposition parameters, to obtain a good-quality CuSe thin film. The resultant films were homogeneous and well adhered to the substrate, with a mirror-like surface. The deposited good-quality CuSe thin films were subjected to characterization studies. The XRD pattern of the CuSe thin films was recorded using a powder X-ray diffractometer (Shimadzu model XRD 6000) with CuKα radiation (λ = 0.154 nm) and a diffraction angle between 0° and 90°. The crystallite size was determined from the broadening of the corresponding X-ray spectral peaks using the Debye–Scherrer formula. Scanning electron microscopy (SEM) studies were carried out on a JEOL JSM-67001. The optical absorption spectrum of the CuSe thin films was recorded using a VARIAN CARY model 5000 spectrophotometer in the wavelength range of 400–1400 nm. The dielectric properties of the CuSe thin films were analyzed using a HIOKI 3532-50 LCR HiTester over the frequency range 50 Hz–5 MHz. Photoconductivity measurements were carried out at room temperature by connecting the film in series with a picoammeter (Keithley 480) and a DC power supply.
X-ray diffraction analysis

The phase composition and the structure of the film were studied by X-ray diffraction analysis. The XRD patterns of the CuSe thin films are shown in Figure 1. Prominent peaks corresponding to the (101), (102), (006), (110), (108), and (116) planes were obtained in the powder X-ray diffraction studies. The peaks were compared with the JCPDS diffraction patterns [JCPDS Data File No. 00-020-1020]. The observed peaks correspond to the formation of the hexagonal phase of CuSe and were indexed according to the hexagonal structure. Knowing the wavelength (λ), the full width at half maximum (FWHM) of the peaks (β), and the diffraction angle (θ), the particle size (D) was calculated using the Scherrer formula,

D = 0.9λ / (β cos θ).

From the above relation, the average size of the CuSe crystallites was determined to be ≈37.5 nm, which agrees well with the reported value of 38 nm [23].
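As an illustration of the Scherrer analysis above, the short Python sketch below evaluates D from peak positions and widths. The (2θ, FWHM) pairs are hypothetical placeholders chosen to give sizes near the reported ≈37.5 nm; they are not the measured values behind Figure 1.

```python
import numpy as np

# Scherrer estimate of crystallite size from XRD peak broadening:
#   D = K * lambda / (beta * cos(theta))
# with K ~ 0.9, lambda the X-ray wavelength, beta the FWHM in radians,
# and theta the Bragg angle (half of the measured 2-theta).

WAVELENGTH_NM = 0.154  # Cu K-alpha, as used in the paper
K = 0.9                # shape factor

def scherrer_size_nm(two_theta_deg, fwhm_deg):
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * WAVELENGTH_NM / (beta * np.cos(theta))

# Hypothetical (2-theta, FWHM) pairs in degrees -- illustrative only.
peaks = [(28.1, 0.22), (31.1, 0.21), (46.9, 0.24)]
sizes = [scherrer_size_nm(tt, fw) for tt, fw in peaks]
print("crystallite sizes (nm):", np.round(sizes, 1))
print("average (nm):", round(float(np.mean(sizes)), 1))  # ~37 nm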
SEM analysis

Scanning electron microscopy (SEM) was used to study the surface morphology and the microstructural features of the as-prepared CuSe thin films. An SEM image was obtained for a CuSe thin film deposited on a glass substrate in order to study the surface of the film. Figure 2 shows the SEM image of the CuSe thin films. The micrograph shows a compact structure composed of a single type of small, densely packed microcrystals. The grains are well defined, spherical and of almost similar size. An increase in grain size leads to a decrease in the grain boundaries; in the as-deposited film, the grains are smaller and more compact, with a smooth grain background, which is an indication of one-step growth by multiple nucleation.

AFM analysis

The surface morphology of the film was analyzed by atomic force microscopy (AFM). Figures 3a and 3b show the AFM images of the as-deposited CuSe thin films grown by the CBD technique on the glass substrate. It is observed from the surface image that the particles are uniformly distributed on the surface of the film. From the 2D image, the CuSe particles are found to agglomerate on the surface of the film. The AFM images show the granular nature of the particles. This observation indicates that the film surface is somewhat rough.

Optical studies

Optical properties are very significant for applications in any optoelectronic device. The optical band gap and the absorption coefficient are two important parameters of a solar cell material. In the present study, optical characterization was performed to determine the nature of the absorption spectrum and the energy band gap of the CuSe thin films. The optical transmission spectrum of the CuSe films was recorded in the wavelength region 400–1400 nm and is shown in Figure 4. It is important to note that the CuSe films were highly transparent in the visible region. The dependence of the optical absorption coefficient on photon energy helps to analyze the band structure and the type of electronic transition.

The optical absorption coefficient (α) was calculated from the transmittance using the relation

α = (1/d) ln(1/T),

where T is the transmittance and d is the thickness of the film. Determination of the optical band gap is based on the photon-induced electronic transition between the conduction band and the valence band. As a direct band gap material, the film under study has an absorption coefficient (α) obeying the following relation for high photon energies (hν):

αhν = A (hν − Eg)^(1/2),

where Eg is the band gap of the CuSe films and A is a constant. A plot of (αhν)² versus hν is shown in Figure 5. Using Tauc's plot, the energy gap (Eg) was calculated to be 2.40 eV, which agrees well with the reported values [24]. This plot was used to determine the nature of the transition in the thin-film material.

[Figure 5. Plot of (αhν)² vs photon energy (hν).]

Determination of optical constants

Two of the most important optical properties are the refractive index and the extinction coefficient, generally called the optical constants. The amount of light transmitted through a thin-film material depends on the amount of reflection and absorption that takes place along the light path. The optical constants, namely the refractive index (n), the real dielectric constant (εr) and the imaginary part of the dielectric constant (εi), were calculated. The extinction coefficient (K) is obtained from

K = αλ / 4π.

The extinction coefficient (K) was found to be 10.7 at λ = 1400 nm. The transmittance (T) is given by

T = (1 − R)² exp(−αd) / (1 − R² exp(−2αd)),

from which the reflectance (R) can be obtained in terms of the absorption coefficient. The refractive index (n) can then be determined from the reflectance data using

n = (1 + √R) / (1 − √R).

The refractive index (n) was found to be 2.3 at λ = 1400 nm. The high refractive index makes CuSe film suitable for use in optoelectronic devices.

From the optical constants, the electric susceptibility (χc) can be calculated using the relation

χc = n² − K² − ε0,

where ε0 is the permittivity of free space (taken as unity in relative units). The value of the electric susceptibility (χc) was 4.29 at λ = 1400 nm. Since the electric susceptibility is greater than 1, the material can be easily polarized when the incident light is sufficiently intense.

The real part of the dielectric constant (εr) and the imaginary part (εi) can be calculated from the relations

εr = n² − K²,  εi = 2nK.

The values of the real dielectric constant (εr) and the imaginary dielectric constant (εi) at λ = 1400 nm were estimated to be 3.756 and 9.802 × 10⁻⁵, respectively. The low value of the dielectric constant and its positive sign indicate that the material is capable of producing induced polarization under intense incident light radiation.
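The optical-constant relations above chain together directly. The sketch below evaluates them in Python; the transmittance and reflectance inputs are assumed illustrative values (not digitized from Figure 4), so the printed numbers only show the flow of the calculation, not the reported results.

```python
import numpy as np

# Optical constants from the relations above; T and R are assumed
# placeholder values at 1400 nm, chosen for illustration only.

d = 0.6e-4    # film thickness in cm (0.6 um, as reported)
lam = 1.4e-4  # wavelength in cm (1400 nm)

T = 0.65      # transmittance (assumed)
R = 0.155     # reflectance (assumed)

alpha = np.log(1.0 / T) / d              # absorption coefficient, cm^-1
K = alpha * lam / (4.0 * np.pi)          # extinction coefficient
n = (1 + np.sqrt(R)) / (1 - np.sqrt(R))  # refractive index from reflectance
chi = n**2 - K**2 - 1.0                  # electric susceptibility
eps_r = n**2 - K**2                      # real part of dielectric constant
eps_i = 2.0 * n * K                      # imaginary part

print(f"alpha = {alpha:.3e} cm^-1, K = {K:.3e}")
print(f"n = {n:.2f}, chi = {chi:.2f}, eps_r = {eps_r:.3f}, eps_i = {eps_i:.3e}")
```

With the assumed R = 0.155, the sketch reproduces n ≈ 2.3 and χc ≈ 4.29, the values quoted above.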
Dielectric studies

The dielectric constant was analyzed as a function of frequency at different temperatures, as shown in Figure 6, while the corresponding dielectric loss is shown in Figure 7. The dielectric constant was evaluated using the relation

εr = C d / (ε0 A),

where C is the capacitance, d is the thickness of the film, ε0 is the permittivity of free space, and A is the area of the film. The plot of the dielectric constant against frequency for various temperatures is shown in Figure 6. The curve reveals that the dielectric constant decreases with increasing frequency and then reaches an almost constant value in the high-frequency region [25]. It also indicates that the value of the dielectric constant increases with increasing temperature. The large value of the dielectric constant at low frequencies can be attributed to the lower electrostatic binding strength arising from space-charge polarization near the grain boundary interfaces. On application of an electric field, the space charges are stimulated and dipole moments are produced; this is called space-charge polarization. Apart from this, the dipole moments are rotated by the applied field, resulting in rotation polarization, which also contributes to the high values. Whenever the temperature increases, more dipoles are produced and the value increases [26]. In the high-frequency region, the charge carriers can no longer follow the field reversal, and the dielectric constant falls to a low value. Figure 7 shows the variation of the dielectric loss with frequency for various temperatures. These curves show that the dielectric loss depends on the frequency of the applied field, comparable to the dielectric constant. The dielectric loss decreases with increasing frequency at almost all temperatures, but appears to reach saturation in the higher frequency range at all temperatures [27,28].

In the proposed relation, only one parameter, namely the high-frequency dielectric constant ε∞, is required as input to evaluate electronic properties such as the valence-electron plasma energy, the average energy gap or Penn gap, the Fermi energy and the electronic polarizability of the CuSe thin films. Theoretical considerations show that the high-frequency dielectric constant depends explicitly on the valence-electron plasma energy, an average energy gap referred to as the Penn gap, and the Fermi energy. The Penn gap is determined by fitting the dielectric constant with the plasmon energy [29]. The following relation [30] is used to calculate the valence-electron plasma energy:

ħωp = 28.8 (Zρ/M)^(1/2),

where Z is the total number of valence electrons, ρ is the density and M is the molecular weight. According to the Penn model [31], the average energy gap for the CuSe thin films is given by

EP = ħωp / (ε∞ − 1)^(1/2),

where ħωp is the valence-electron plasmon energy, and the Fermi energy [29] is given by

EF = 0.2948 (ħωp)^(4/3).

Then, the electronic polarizability α is obtained using the relation [32,33]

α = [(ħωp)² S0 / ((ħωp)² S0 + 3 EP²)] × (M/ρ) × 0.396 × 10⁻²⁴ cm³,

where S0 is a constant given by

S0 = 1 − EP/(4EF) + (1/3) [EP/(4EF)]².

The Clausius–Mossotti relation,

α = (3M / 4πNaρ) (ε∞ − 1)/(ε∞ + 2),

gives the polarizability in terms of the dielectric constant, where Na is Avogadro's number. The following empirical relationship is also used to calculate α [34]:

α = [1 − (Eg)^(1/2)/4.06] × (M/ρ) × 0.396 × 10⁻²⁴ cm³,

where Eg is the band gap value determined from the UV transmission spectrum. The high-frequency dielectric constant of a material is a very important parameter for calculating its physical and electronic properties [27]. All the above parameters, as estimated, are shown in Table 1.
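The Penn-model chain above is straightforward to evaluate numerically. In the sketch below, the valence-electron count Z, the density ρ and the molecular weight M of CuSe are assumed inputs chosen for illustration; only ε∞ follows the text, so the outputs will not exactly reproduce Table 1.

```python
import numpy as np

# Penn-model electronic parameters from the relations above. Z, rho and M
# are assumed values (not given in the text); eps_inf follows the paper.

M = 142.51       # g/mol, Cu (63.55) + Se (78.96)
rho = 5.99       # g/cm^3, assumed density of CuSe
Z = 17           # assumed valence electrons per formula unit (11 Cu + 6 Se)
eps_inf = 3.756  # high-frequency dielectric constant from the text

hw_p = 28.8 * np.sqrt(Z * rho / M)   # valence-electron plasma energy, eV
E_P = hw_p / np.sqrt(eps_inf - 1.0)  # Penn gap, eV
E_F = 0.2948 * hw_p**(4.0 / 3.0)     # Fermi energy, eV

S0 = 1 - E_P / (4 * E_F) + (E_P / (4 * E_F))**2 / 3.0
alpha_penn = (hw_p**2 * S0 / (hw_p**2 * S0 + 3 * E_P**2)) * (M / rho) * 0.396e-24

# Clausius-Mossotti route for comparison
N_A = 6.022e23
alpha_cm = (3 * M / (4 * np.pi * N_A * rho)) * (eps_inf - 1) / (eps_inf + 2)

print(f"plasma energy = {hw_p:.2f} eV, Penn gap = {E_P:.2f} eV, E_F = {E_F:.2f} eV")
print(f"polarizability (Penn) = {alpha_penn:.3e} cm^3, (C-M) = {alpha_cm:.3e} cm^3")
```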
AC electrical conductivity studies

The conductivity of a material depends on its overall characteristics, such as its chemical composition, purity and crystal structure. Measurements taken with continuous (DC) currents provide only the total conductivity. In the present study, ohmic electrical contacts were made using air-drying silver paint on the opposite faces. Electrical measurements were taken in the frequency range 20 Hz to 1 MHz using a HIOKI 3532-50 LCR HiTester. A chromel–alumel thermocouple was employed to record the sample temperature. A 30 min interval was allowed for thermal stabilization at each measurement temperature. All measurements were carried out in atmospheric air. The temperature-dependent AC electrical conductivity of the CuSe thin films is shown in Figure 8. It is observed that the conductivity (σac) increases with increasing temperature and frequency. The activation energy of the CuSe thin films was found to be 0.032 eV, which agrees well with the reported values [35]. Figure 9 shows the temperature-dependent conductivity of the CuSe thin films. The figure indicates the exponential behavior of the temperature-dependent current, confirming the semiconducting property of the material [36].
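Since the activation energy quoted above follows from the temperature dependence of σac, a minimal Arrhenius fit makes the procedure concrete. The conductivity values below are synthetic, generated from an assumed prefactor; they are not the measured data behind Figures 8 and 9.

```python
import numpy as np

# Activation energy from the temperature dependence of the AC conductivity,
#   sigma_ac = sigma_0 * exp(-E_a / (k_B * T)),
# fitted as ln(sigma) versus 1/T.

k_B = 8.617e-5  # eV/K

T = np.array([313.0, 333.0, 353.0, 373.0, 393.0])  # K
E_a_true = 0.032                                   # eV, used to build synthetic data
sigma = 1e-6 * np.exp(-E_a_true / (k_B * T))       # S/cm, assumed prefactor 1e-6

slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
E_a = -slope * k_B
print(f"fitted activation energy = {E_a:.3f} eV")  # recovers ~0.032 eV
```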
Photoconductivity studies

Photoconductivity is an important property of materials by which the conductivity changes under incident radiation. Photoconduction includes the generation and recombination of charge carriers and their transport to the electrodes. The thermal and hot-carrier relaxation processes, charge-carrier statistics, electrode effects, and several recombination mechanisms are involved in photoconduction. Photoconductivity is due to the absorption of photons (either by an intrinsic process or by impurities, with or without phonons), leading to the creation of free charge carriers in the conduction band and/or in the valence band. It provides valuable information about the physical properties of materials and offers applications in photodetection and radiation measurements. Field-dependent dark- and photoconductivity plots of the CuSe thin films are shown in Figure 10. The plots indicate a linear increase of the current with applied field, both in the dark and under visible-light illumination, depicting the ohmic nature of the contacts [37]. The photocurrent is always higher than the dark current, and both increase linearly with the applied voltage; the excess photocurrent may be attributed to the generation of mobile charge carriers caused by the absorption of photons, that is, to an increase in the number of charge carriers or in their lifetime in the presence of radiation. The dark current is low, and although the rise in current upon visible-light illumination is modest, it consistently exceeds the dark current. Hence, the material exhibits positive photoconductivity, confirming the conducting nature of the material, in agreement with reported results [37,38].

I–V characteristics

The I–V characteristics of the CuSe thin films are shown in Figure 11. The current flowing through the film increases linearly with increasing voltage across the electrodes. A remarkably large forward current is obtained at all voltages, implying the higher conductivity of these films. The remarkably increased conductivity may be helpful in obtaining higher efficiency in solar cells.

Conclusion

CuSe thin films were prepared by the chemical bath deposition (CBD) technique. The structural and morphological properties of the CuSe thin films were investigated by XRD and SEM. The XRD studies showed well-crystallized CuSe thin films with a hexagonal structure. The size and morphology of the CuSe thin films were characterized using SEM and AFM studies. The UV–visible transmission spectrum showed excellent transmission in the entire visible region. The optical properties such as the band gap, refractive index, extinction coefficient, and electric susceptibility were evaluated. The dielectric constant and the dielectric loss of the CuSe thin films were calculated for different frequencies and temperatures. In addition, the valence-electron plasma energy, the Penn gap or average energy gap, the Fermi energy, and the electronic polarizability of the CuSe thin films were determined. The AC electrical conductivity was found to increase with increasing temperature and frequency. The activation energy was found to be 0.032 eV. The temperature-dependent conductivity showed the exponential behavior of the current, confirming the semiconducting nature of the thin films. The photoconductivity study ascertained the positive photoconductivity of the CuSe thin films. The I–V characteristics of the CuSe thin films were also investigated.

[Figure 2. SEM image of the CuSe thin films.]
[Figure 8. Variation of conductivity with log frequency.]
[Table 1. Electronic parameters of the CuSe thin films (excerpt): electronic polarizability (Clausius–Mossotti relation) 6.452 × 10⁻²⁴ cm³; electronic polarizability (from the band gap) 6.232 × 10⁻²⁴ cm³.]
2019-04-28T13:07:14.150Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "042fef2b9bd05f65270a0f3cb85aeed175df1a23", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1590/1516-1439.039215", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "042fef2b9bd05f65270a0f3cb85aeed175df1a23", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
266502350
pes2o/s2orc
v3-fos-license
DNase improves the efficacy of antimicrobial photodynamic therapy in the treatment of candidiasis induced with Candida albicans

The study evaluated the association of the DNase I enzyme with antimicrobial photodynamic therapy (aPDT) in the treatment of oral candidiasis in mice infected with fluconazole-susceptible (CaS) and -resistant (CaR) Candida albicans strains. Mice were inoculated with C. albicans, and after the infection had been established, the tongues were exposed to DNase for 5 min, followed by photosensitizer [Photodithazine® (PDZ)] and light (LED), either singly or combined. The treatments were performed for 5 consecutive days. Treatment efficacy was evaluated by assessing the tongues via fungal viable population counts, clinical evaluation, and histopathological and fluorescence microscopy methods immediately after finishing treatments and after 7 days of follow-up. The combination of DNase with PDZ-aPDT reduced the fungal viability in mice tongues immediately after the treatments by around 4.26 and 2.89 log10 for CaS and CaR, respectively (versus animals only inoculated). In fluorescence microscopy, the polysaccharides produced by C. albicans and the fungal cells were less labeled in animals treated with the combination of DNase with PDZ-aPDT, similar to the healthy animals. After 7 days of the treatment, DNase associated with PDZ-aPDT maintained a lower count, but not as pronounced as immediately after the intervention. For both strains, mice treated with the combination of DNase with PDZ-aPDT showed remission of oral lesions and mild inflammatory infiltrate in both periods assessed, while animals treated only with PDZ-aPDT presented partial remission of oral lesions. The DNase I enzyme improved the efficacy of the photodynamic treatment.

The addition of the DNase enzyme improves the susceptibility of mature C. albicans biofilms to some antifungal agents (Martins et al., 2010). Furthermore, the presence of polysaccharides or eDNA has been reported as a bacterial biofilm mechanism of protection against the diffusion of antibiotics (Al-Fattani and Douglas, 2006; Anderson and O'Toole, 2008; Mulcahy et al., 2008).

Because the ECM has been related to biofilm protection (Nobile et al., 2008), the use of enzymes capable of hydrolyzing polysaccharides and nucleic acids has been investigated, as it represents an alternative way of increasing the susceptibility of the biofilm to antifungal drugs (Nobile et al., 2008). The DNase I enzyme can significantly reduce the eDNA, soluble matrix proteins, and water-soluble polysaccharides of a fluconazole-resistant C. albicans strain (Panariello et al., 2019). This enzyme acts externally to the cell, reducing biofilm stability and enhancing its susceptibility to photodynamic therapy and antifungals (Liao, 1974; Martins et al., 2012; Tetz and Tetz, 2016; Panariello et al., 2019). The treatment of mature biofilms with DNase I (50 mg/mL) inhibited adhesion and biofilm formation and reduced the biomass by approximately 30% (Perezous et al., 2005). Exposure of 48-h-old in vitro biofilms to DNase I for 5 min reduced eDNA and extracellular polysaccharides in the ECM of fluconazole-susceptible and -resistant C. albicans strains (Panariello et al., 2019).
Due to the increase of resistant microorganisms (Kumar et al., 2022), the side effects of antifungals (Campoy and Adrio, 2017), and recolonization and organization into biofilms (Kumar et al., 2022), studies have evaluated alternative strategies to manage fungal infections. In this context, antimicrobial photodynamic therapy (aPDT) has been suggested for inactivating microorganisms and treating oral candidiasis (Lambrechts et al., 2005; Konopka and Goslinski, 2007; Donnelly et al., 2008). The photodynamic process requires a photosensitizing agent (PS) combined with light of a wavelength corresponding to the PS absorption band (Donnelly et al., 2008). The interaction of light with the PS, in the presence of oxygen, produces reactive species capable of inducing cell inactivation (Machado, 2000). Reactive species have non-specific reactivity with organic molecules and can cause irreversible damage to cellular targets, such as membrane lysis and protein inactivation. Thus, any cellular macromolecule can be considered a target for aPDT (Bonnett and Martínez, 2001; Donnelly et al., 2008).

Photodithazine (PDZ)-aPDT has inactivated Candida biofilms and treated oral candidiasis (Seneviratne et al., 2008; Martins et al., 2010). A single application of PDZ-aPDT in a murine model decreased the viability of fluconazole-susceptible C. albicans (ATCC 90028) by 4.36 log10 (Carmello et al., 2015). On the other hand, mice infected by a fluconazole-resistant C. albicans strain (ATCC 96901) that received a single session of PDZ-aPDT presented fungal cell viability reduced by 1.96 log10 (Alves et al., 2018). Five consecutive applications of PDZ-aPDT or the antifungal nystatin promoted reductions in fluconazole-susceptible C. albicans (ATCC 90028) of 3 and 3.2 log10, respectively, and yielded remission of tongue lesions 24 h after treatment (Carmello et al., 2016). When the animals were inoculated with a fluconazole-resistant strain (ATCC 96901), PDZ-aPDT was as effective as topical nystatin, reducing viability by around 1.2 log10 (Hidalgo et al., 2019); however, the animals showed white or pseudomembranous patches on the dorsum of the tongue. In animals inoculated with fluconazole-resistant C. albicans, the association of treatments (PDZ-aPDT and nystatin) reduced fluconazole-resistant C. albicans by ~2.3 log10, and macroscopic analysis revealed remission of oral lesions of around 95% after 24 h (Hidalgo et al., 2019). In general, previous studies (Carmello et al., 2015, 2016; Alves et al., 2018; Hidalgo et al., 2019) demonstrated the efficacy of PDZ-aPDT in treating infections caused by fluconazole-susceptible C. albicans. However, fluconazole-resistant C. albicans has reduced susceptibility to aPDT, and to achieve similar outcomes in the treatment of infections with these strains it is necessary to combine treatments.

DNase treatment might be an adjuvant to anti-biofilm therapies, since it reduces most ECM components that can hinder antifungal drug penetration into biofilms without interfering with cell viability (Panariello et al., 2019; Abreu-Pereira et al., 2022). Incubation of fluconazole-susceptible C. albicans biofilm with DNase I (5 min) before PDZ-aPDT reduced the count of viable colonies (CFU) and the quantity of eDNA in the ECM (Panariello et al., 2019).
This treatment strategy applied to fluconazole-resistant C. albicans biofilm decreased CFU (~1.62 log10), water-soluble polysaccharides (36.3%), and eDNA (72.3%) (Abreu-Pereira et al., 2022). Hence, the effect of the photodynamic treatment was potentiated because DNase I disturbed the ECM and allowed the diffusion of PDZ and light through the ECM of fluconazole-susceptible and -resistant C. albicans biofilm, increasing treatment efficacy (Abreu-Pereira et al., 2022). The present study therefore evaluated whether the application of DNase could potentiate the action of the PDZ-aPDT treatment in mice infected with fluconazole-susceptible and -resistant C. albicans, focusing on fungal viable population recovery and resolution of candidiasis lesions on the mice's tongues.

Photosensitizer, DNase enzyme and LED parameters

Photodithazine® (PDZ) is a chlorin e6 derivative (VETAGRAND Co., Moscow, Russia) with an absorption peak at 660 nm. The PDZ was prepared on the day of use from the stock solution (5,000 mg/L) at a concentration of 200 mg/L in natrosol gel (Farmácia Santa Paula, Araraquara, SP, Brazil) and was kept protected from light (Hidalgo et al., 2019). The DNase I was prepared on the day of use in 0.1 M sodium acetate buffer (pH 5.5) at a concentration of 20 units/mL (Abreu-Pereira et al., 2022).

The red LED light device (LXHL-PR09, Luxeon® III Emitter, Lumileds Lighting, San Jose, California, USA) was used with an emission band at 660 nm, and the light intensity at the end of the device (5 mm in diameter) was 44.6 mW/cm². Thus, a light dose of 50 J/cm² (19 min) was applied to the tongues of the animals infected with C. albicans.
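The 19 min exposure follows directly from the reported irradiance and target dose. The one-line check below uses only the two values stated above (plain Python arithmetic, no additional assumptions).

```python
# Cross-check of the light dose: dose (J/cm^2) = irradiance (W/cm^2) x time (s).
irradiance = 44.6e-3  # W/cm^2, LED output reported above
target_dose = 50.0    # J/cm^2

time_s = target_dose / irradiance
print(f"{time_s:.0f} s = {time_s / 60:.1f} min")  # ~1121 s ~ 18.7 min, i.e. ~19 min
```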
Experimental oral candidiasis and treatments performed

The present study was approved by the Animal Ethics Committee of the School of Dentistry of Araraquara, UNESP (case number 09/2020). A total of 180 female mice of the Swiss strain (≅5 weeks old) were used, from the vivarium of the School of Dentistry of Araraquara, UNESP. The animals were allocated to cages, with five animals per cage according to the study groups, and kept in a room with controlled temperature (23 ± 2 °C) with standard chow and water ad libitum (Carmello et al., 2016; Hidalgo et al., 2019).

For the induction of oral candidiasis, the methodology described before (Takakura et al., 2003; Carmello et al., 2016) was used with some modifications. Tetracycline (0.83 mg/mL) was administered in the water available to the animals during the experimental period. The animals were immunosuppressed with subcutaneous injections of prednisolone at a dose of 100 mg/kg of body mass on days 1, 5, 9, and 13. Inoculation with the strains was performed on day 2 of the experiment (Figure 1; Takakura et al., 2003; Carmello et al., 2016; Hidalgo et al., 2019). For this procedure, the animals were sedated with 0.1 mL of chlorpromazine hydrochloride (2 mg/mL), and sterile mini-swabs soaked in the CaS or CaR suspension were scrubbed across the dorsum of the animals' tongues for 30 s.

On day 7, the presence of white patches or pseudomembranous lesions was verified, and the treatments were performed. The animals were anesthetized with an intraperitoneal injection of ketamine [100 mg/kg body weight (National Pharmaceutical Chemistry Union S/A, Embu, SP, Brazil)] and xylazine [10 mg/kg body weight (Veterinary JA Ltda., Patrocínio Paulista, SP, Brazil)]. Then, the animals were placed in a supine position on the work table, the tongues were gently drawn out of the oral cavity, and 50 μL of PDZ diluted in natrosol gel (200 mg/L) was applied with a pipette (Carmello et al., 2016; Hidalgo et al., 2019). The mice stayed in the dark for 20 min as a pre-incubation time. Then, the dorsum of each tongue was illuminated with the LED for 19 min (50 J/cm²) (P+L+ group). The effects of the isolated application of PDZ (P+L- group) and of the LED (P-L+ group) were also evaluated. For DNase treatment, the tongue of the mouse received 50 μL of DNase (20 units/mL) for 5 min. One group received the combined treatment with the enzyme and PDZ-aPDT (DNase+P+L+ group). After the treatments, neither DNase nor PDZ was removed from the mice's tongues. The untreated control group (P-L- group) received no PDZ, light, or DNase. In addition, two negative infection control groups (NIC groups) with healthy animals were evaluated: in one, mice were immunosuppressed on days 1, 5, 9, and 13 (NIC+ group); in the other, animals did not receive immunosuppression (NIC- group). Seven animals were evaluated for each experimental condition, except for the NIC+ (n = 3) and NIC- (n = 3) groups. The therapies were performed once a day for five consecutive days (from day 7 until day 11).

Fungal viable population

After five consecutive days of treatment (day 11) and 7 days after the end of the treatment (day 18), C. albicans cells were recovered from the tongues of the mice. For this procedure, the mini-swabs were swabbed on the dorsum of each tongue for 1 min. They were then transferred to tubes with 1 mL of saline solution and vortexed for 1 min to detach the C. albicans cells. Serial dilutions were then made (10⁻¹ to 10⁻³) and plated in duplicate on SDA culture medium containing 5 μg/mL of chloramphenicol. After 48 h of incubation at 37 °C, the viable colonies were counted, and the values of CFU/mL were determined.

Macroscopic analysis of the tongue lesions

The white patches or pseudomembranous lesions on the tongues of the mice were photographed before the beginning of the treatments, 24 h (day 12), and 7 days (day 18) after the last application. All photographs were standardized and obtained with the same digital camera (Sony Cyber-Shot DSC-F717; Sony Corporation, Tokyo, Japan), by the same operator and under the same conditions (place, light, angle, and position of the animals), to facilitate reproducibility. The extension area of the lesion on the tongue in each photograph was evaluated using the ImageJ.exe program. The percentage of the extension area of each lesion over the total area of the tongue was calculated using this software (Hidalgo et al., 2019).
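The area measurement described above reduces to counting pixels. The sketch below mirrors the ImageJ workflow in Python; the threshold value and the toy image are illustrative assumptions, since the actual segmentation settings used in ImageJ were not reported.

```python
import numpy as np

# Lesion-area percentage: threshold the photograph, then express lesion
# pixels as a fraction of the tongue-region pixels.

def lesion_percentage(gray, tongue_mask, lesion_threshold=200):
    """gray: 2-D uint8 image; tongue_mask: boolean mask of the tongue region."""
    lesion = (gray >= lesion_threshold) & tongue_mask  # white patches are bright
    return 100.0 * lesion.sum() / tongue_mask.sum()

# Toy example: a 100x100 "tongue" containing a 20x30 bright patch.
img = np.full((100, 100), 120, dtype=np.uint8)
img[40:60, 30:60] = 230
mask = np.ones_like(img, dtype=bool)
print(f"lesion area = {lesion_percentage(img, mask):.1f}% of the tongue")  # 6.0%
```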
Histopathological analyses and animal sacrifice

Initially, the mice were anesthetized with an intraperitoneal injection of ketamine and xylazine. Then, the animals' tongues were surgically removed and used for histopathological and fluorescence microscopic analyses. After the tongue excision, the mice were euthanized by intramuscular injection of a lethal dose of ketamine (0.2 mL) and xylazine (0.4 mL), 24 h (day 12) and 7 days (day 18) after the last application of treatment (Carmello et al., 2016; Hidalgo et al., 2019). The tongues were placed in plastic cassettes for the histopathological and fluorescence microscopic analyses. These cassettes were immersed in 10% paraformaldehyde, pH 7.2 (441244, Sigma-Aldrich, St Louis, MO, USA). After the histological fixation process, the blocks were fixed on wooden supports and placed in a rotating microtome. Sixteen serial histological sections were obtained from each block. These sections were placed on glass slides and stained with hematoxylin-eosin (HE) to evaluate the histological events that occurred in each of the groups by light microscopy [Zeiss LSM 700 microscope (Carl Zeiss, Heidelberg, Germany)] at 100× and 200× magnification. A pathologist performed the histological analysis, and the following aspects were evaluated: the presence/absence of yeast and inflammatory infiltrate, epithelial tissue integrity, and the adjacent connective tissue response. The material was classified into scores: 0 = absence of inflammation; 1 = presence of inflammatory infiltrate; 2 = moderate inflammation; 3 = severe inflammation; and 4 = abscess formation (ISO 7405:1997). The evaluation was performed by a single examiner blinded to each experimental group at each evaluated time after treatment.

Microscopy analysis of fluorescence to determine fungal colonization on tongues

Initially, the samples were deparaffinized and hydrated in water. Antigen retrieval was performed by heat: the sections were immersed in 10 mM buffered sodium citrate, pH 6.0, and placed in the microwave twice for 5 min each (El-Habashi et al., 1995). Next, the slides were dried, the sections were circled with a hydrophobic barrier pen (Sigma Advanced PAP Pen, Z377821), and 20 μL of the primary antibody against (1→4)-β-mannan and galacto-(1→4)-β-mannan (400-4) (Table 1), diluted in 2% bovine serum albumin (BSA) and 0.1% Triton X-100 (1:20 dilution), was pipetted onto each section (Lobo et al., 2019). The slides were incubated overnight (4 °C). After incubation, the sections were carefully washed with 0.89% NaCl solution, and a blocking solution (3% BSA) was added, followed by incubation for 15 min (room temperature). The sections were then washed again with 0.89% NaCl solution, and the secondary antibody (20 μL) labeled with Alexa Fluor® 594 nm (1:500 dilution in 2% BSA) was added (Table 1), followed by incubation for 2 h (4 °C). After the secondary antibody incubation, the sections were washed with 0.89% NaCl and incubated with 20 μL of concanavalin A lectin conjugated with Alexa Fluor® 488 nm (200 μg/mL) (Table 1) and Hoechst (6 μg/mL) (Table 1) for 30 min. Next, the samples were washed with 0.89% NaCl. The mounting medium [Fluoromount™ Aqueous Mounting Medium (F4680, Sigma-Aldrich, St Louis, MO, USA)] was then added, and the slides were ready for image acquisition. Images were acquired using a Leica DM2500 LED microscope (Leica Microsystems, Wetzlar, Germany).

[Figure 1. Experimental protocol. On days 1, 5, 9, and 13 the animals were immunosuppressed with subcutaneous injections of prednisolone. On day 2, the animals were inoculated with CaS or CaR. The treatment was performed over 5 days (days 7-11). On days 11 and 18, the fungal load was recovered from the mice's tongues. Tongue removal and euthanasia were performed on days 12 and 18. During the experimental period (18 days), tetracycline hydrochloride was administered in the water supply.]
Statistical analysis

Analyses were performed using IBM SPSS Statistics for Windows, Version 27 (IBM Corp., Armonk, NY, USA). Data from each strain were evaluated separately. The normality and homoscedasticity of the CFU data, converted to base-10 logarithms for each strain, were assessed using the Shapiro-Wilk and Levene's tests, respectively. The data were normal and heteroscedastic, so they were analyzed by a two-way ANOVA test considering the two treatment evaluation periods (immediately and 7 days after), with Games-Howell post-hoc analysis for multiple comparisons (α = 5%). The percentage values of the tongue lesions were normally distributed and homoscedastic for the data evaluated in both periods (24 h and 7 days after the treatments) for CaS and CaR; thus, they were analyzed by one-way ANOVA followed by Tukey's post-hoc test (α = 5%). Descriptive analyses were performed for the images obtained in the histopathological and fluorescence microscopy evaluations.
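As a rough illustration of the comparison pipeline described above, the sketch below runs the normality and variance checks and an ANOVA on synthetic log10(CFU/mL) values. The group means are loosely inspired by Figure 2 but are not the study data, and the two-way design and Games-Howell post-hoc used in the paper are only indicated in a comment.

```python
import numpy as np
from scipy import stats

# Synthetic log10(CFU/mL) values standing in for three of the groups (n = 7 each).
rng = np.random.default_rng(1)
groups = {
    "P-L-":       rng.normal(6.0, 0.3, 7),
    "P+L+":       rng.normal(3.5, 0.3, 7),
    "DNase+P+L+": rng.normal(1.7, 0.3, 7),
}

# Normality within each group (Shapiro-Wilk)
for name, vals in groups.items():
    _, p = stats.shapiro(vals)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Homoscedasticity across groups (Levene)
_, p_lev = stats.levene(*groups.values())
print(f"Levene p = {p_lev:.3f}")

# Omnibus comparison (one-way here; the paper fits a two-way model with period)
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, p = {p_anova:.2e}")
# For heteroscedastic data, a Games-Howell post-hoc (e.g.
# pingouin.pairwise_gameshowell) would replace Tukey's test, as in the paper.
```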
Fungal viable population from mice inoculated with CaS and CaR

The viability results for CaS immediately after the treatments demonstrated that the animals treated with DNase followed by PDZ-aPDT (DNase+P+L+ group) showed the highest log10 reduction (CFU/mL) compared to the negative control group (P-L-), equivalent to 4.26 log10 (Figure 2), and differed from the other groups and the control (p ≤ 0.0001). The P+L+ group (PDZ-aPDT) showed a reduction of approximately 2.50 log10 compared to the control (P-L- group) (Figure 2). The other groups showed values statistically similar to the control (P-L-) (p ≥ 0.05) (Figure 2).

The results for CaR showed that the DNase+P+L+ group immediately after the treatments was statistically different from the other groups and exhibited the highest log10 reduction (CFU/mL) compared to the P-L- group (p ≤ 0.0001), equivalent to 2.89 log10 (Figure 3). The group treated only with PDZ-aPDT (P+L+ group) showed a reduction of 0.34 log10 compared to the P-L- group. The P+L-, P-L+, P+L+, DNase and P-L- groups presented statistically similar effects (Figure 3).

The results at 7 days after the end of the treatments demonstrated that the DNase+P+L+ group exhibited the greatest reduction in viable colonies of CaS compared to the negative control group (P-L-) (p ≤ 0.0001); this reduction was approximately 1.97 log10 (Figure 2). The P+L+ group was similar to the DNase+P+L+ group and statistically different from the other groups, with a reduction of 1.18 log10 compared to the P-L- group (p ≤ 0.0001). The other experimental groups (P+L-, P-L+, DNase) showed statistically similar effects among themselves and with the negative control (P-L-) (p ≥ 0.05) (Figure 2).

For CaR at 7 days after the end of the treatments, the DNase+P+L+ group showed the greatest reduction in viable colonies compared to the P-L- group (p ≤ 0.0001), with a reduction of approximately 1.27 log10 (Figure 3). The P+L+ group showed a reduction of 0.22 log10 compared to the P-L- group. The other experimental groups (P+L-, P-L+, P+L+, and DNase) showed values statistically similar to one another and to the P-L- group (p ≥ 0.05) (Figure 3).

[Figure 2. Mean values ± standard deviation of log10(CFU/mL) for the different experimental groups and periods evaluated (immediately and 7 days after treatments) for animals inoculated with CaS. A different number of asterisks denotes a statistical difference between groups.]

[Figure 3. Mean values ± standard deviation of log10(CFU/mL) for the different experimental groups and periods evaluated (immediately and 7 days after treatments) for animals inoculated with CaR. A different number of asterisks denotes a statistical difference between groups.]

Clinical evaluation of mice inoculated with CaS and CaR

The results 24 h after treatment (Figure 4) for mice inoculated with CaS showed that the DNase+P+L+ and P+L+ groups significantly reduced the oral lesions, by 98.92 and 97.71%, respectively, compared to the P-L- group (p ≤ 0.0001). The results 7 days after the treatments showed a reduction in oral lesions of 83.31% for the DNase+P+L+ group compared to the P-L- group (p ≤ 0.0001). The DNase+P+L+ group was statistically different from the P+L+ group, which reduced the oral lesions by around 63.81% compared to the P-L- group (p ≤ 0.0001). The other groups, evaluated immediately and after 7 days (Figure 4), showed an effect statistically similar to the P-L- control (p ≥ 0.05). The images presented in Figure 5 illustrate the presence of white patches or pseudomembranous plaques on the dorsum of the tongues of animals inoculated with CaS for each group, 24 h and 7 days after the treatments.

The results obtained for CaR (Figure 6) demonstrated that the DNase+P+L+ and P+L+ groups immediately after the treatments exhibited reductions of oral lesions of 96.07 and 50.41%, respectively, compared to the P-L- group (p ≤ 0.0001). Immediately after the treatments, the other groups showed an effect statistically similar to the P-L- group (p ≥ 0.05). In addition, 7 days after the treatments, the DNase+P+L+ group showed a significant reduction in oral lesions of 75.24% compared to the P-L- group (p ≤ 0.0001). The other groups showed values statistically similar to the P-L- control (p ≥ 0.05) (Figure 6). The images in Figure 7 illustrate the presence of white patches or pseudomembranous plaques on the dorsum of the tongues 24 h and 7 days after the treatments performed on the animals inoculated with CaR.

[Figure 4. Mean values ± standard deviation of the lesion size (size of the patches) in percentages (%) on the dorsum of the tongue of mice inoculated with CaS, evaluated 24 h (black circles) and 7 days (blue circles) after the end of the treatments. A different number of asterisks denotes a statistical difference between groups.]

[Figure 5. Representative images of the white or pseudomembranous patches of the tongues of mice inoculated with CaS for the groups P-L-, P+L-, P-L+, and DNase, 24 h and 7 days after the end of treatment, together with representative images of the remission of tongue lesions observed in the mice submitted to P+L+ (24 h) and DNase+P+L+ (24 h and 7 days after the treatments).]

[Figure 6. Mean values ± standard deviation of the lesion size (size of the patches) in percentages (%) on the dorsum of the tongue of mice inoculated with CaR, evaluated 24 h (black circles) and 7 days (blue circles) after the end of the treatment. A different number of asterisks denotes a statistical difference between groups.]

[Figure 7. Representative images of the white or pseudomembranous patches of the tongues of mice inoculated with CaR for the groups P-L-, P+L-, P-L+, P+L+, and DNase, 24 h and 7 days after the end of treatment, together with representative images of the remission of tongue lesions in the mice submitted to the DNase+P+L+ treatment, 24 h and 7 days after the treatment.]
Histopathological evaluation

The histopathological evaluation demonstrated that the sections from tongues contaminated with CaS exhibited mild inflammatory infiltrates for the DNase+P+L+ and P+L+ groups 24 h after the treatment (Figure 8). These groups presented histopathological characteristics similar to those observed in the NIC- and NIC+ groups. The stratified epithelium exhibited normal and healthy features, with lingual papillae covered by a fine keratin layer. The other groups (P-L-, P-L+, P+L- and DNase) presented similar histopathological characteristics, with moderate inflammatory infiltration, numerous hyphae/pseudohyphae on the keratin layer, and some hyphae/pseudohyphae invading the epithelial tissue of the tongues (Figure 8). Regarding the histological sections evaluated 7 days after the treatment for CaS, the morphological characteristics remained relatively unchanged, with the exception of the P-L-, P+L-, P-L+, and DNase groups, which showed moderately degraded muscle tissue (Figure 8).

For the animals inoculated with CaR (Figure 9), the group treated with DNase+P+L+ presented histopathological characteristics similar to those observed in the NIC- and NIC+ groups (Figure 8) 24 h after the treatments. The stratified epithelium exhibited normal and healthy features, with lingual papillae covered by a fine keratin layer (Figure 9). The group treated with P+L+ showed a greater number of hyphae and pseudohyphae within the keratin epithelial layer. The animals in the P-L-, P+L-, P-L+, and DNase groups presented extensive amounts of C. albicans covering the epithelial tissue, and there was a loss of the papillae (Figure 9). This epithelium showed intense inflammation, with mononuclear cells inside blood vessels dilated by the inflammation. The underlying connective tissue was formed by muscle fibers with normal characteristics. The histological findings 7 days after the treatments (Figure 9) for the P-L-, P+L-, P-L+, and DNase groups contaminated with CaR were similar to those observed 24 h after the treatments, associated with damage in the muscular tissue. In the P+L+ and DNase+P+L+ groups, many hyphae and pseudohyphae were observed in the keratin layer of the epithelium, but the epithelial tissue retained normal characteristics (Figure 9).

[Figure 8. Representative images of the histological sections of tongues from mice inoculated with CaS, recovered 24 h and 7 days after the end of treatments. Muscle tissue was stained with hematoxylin-eosin (HE) (40×). Black arrow: keratin layer contaminated with hyphae and pseudohyphae; green arrow: lamina propria; yellow arrow: dilated blood vessels in response to the local inflammatory reaction.]

[Figure 9. Representative images of the histological sections of tongues from mice inoculated with CaR, recovered 24 h and 7 days after the end of treatments. Muscle tissue was stained with hematoxylin-eosin (HE) (40×). Black arrow: keratin layer contaminated with hyphae and pseudohyphae; green arrow: lamina propria; yellow arrow: dilated blood vessels in response to the local inflammatory reaction.]

Microscopy fluorescence evaluation

The representative images from animals in the P-L- group inoculated with CaS and CaR 24 h after the treatment (Figures 10 and 11, respectively) showed a thick layer of biofilm (green) surrounded by polysaccharides ((1→4)-β-mannan and galacto-(1→4)-β-mannan) (red) produced by C. albicans. The images of the DNase+P+L+ group presented similarity with the NIC group, since the polysaccharides (red) and fungal cells (green) were not labeled. The P+L+ group showed a small amount of biofilm (green) and polysaccharides (red) (Figures 10, 11).

After 7 days of the treatment for CaS and CaR (Figures 12, 13), the P-L- group was labeled with polysaccharides produced by C. albicans (red), with a higher intensity of labeling of the fungal cells (green) (Figures 12, 13). The DNase+P+L+ group of animals contaminated with CaS presented a small layer of C. albicans biofilm and polysaccharides (red) (Figure 12). The P+L+ group demonstrated a slight presence of fungal biofilm (green) and polysaccharides (red) (Figure 12). For CaR, the DNase+P+L+ group presented less biofilm and polysaccharides than the group treated only with PDZ-aPDT (P+L+ group), in which there was a thick layer of CaR biofilm (green) surrounded by polysaccharides (red) (Figure 13).

[Figure 10. Representative fluorescence microscopy images of histological tongue sections recovered 24 h after the treatments from mice inoculated with CaS, labeled with Hoechst (first left column), concanavalin A conjugated with Alexa 488 nm (second column), and with the primary antibody against (1→4)-β-mannan and galacto-(1→4)-β-mannan (400-4) paired with a secondary antibody conjugated with Alexa Fluor 594 nm (third column). Tongue tissue cells (cell nuclei) are shown in blue; fungal cells and matrix polysaccharides of C. albicans are shown in green and red, respectively. The last column shows the merged images.]

[Figure 11. As in Figure 10, for sections recovered 24 h after the treatments from mice inoculated with CaR.]

[Figure 12. As in Figure 10, for sections recovered 7 days after the end of the treatments from mice inoculated with CaS.]

[Figure 13. As in Figure 10, for sections recovered 7 days after the end of the treatments from mice inoculated with CaR.]

Discussion
In vitro studies report improved efficacy of PDZ-aPDT combined with DNase against C. albicans biofilms (Panariello et al., 2019; Abreu-Pereira et al., 2022). Enzymes decrease the integrity of the ECM by hydrolyzing proteins, polysaccharides, and nucleic acids. DNase I reduces biofilm stability, increasing the susceptibility of the biofilm to the action of antifungals and aPDT (Liao, 1974; Martins et al., 2012; Tetz and Tetz, 2016; Panariello et al., 2019; Abreu-Pereira et al., 2022). The current study investigated whether applying the DNase I enzyme could potentiate PDZ-aPDT outcomes in mice infected by fluconazole-susceptible and -resistant C. albicans. Here, DNase I (20 units/mL) combined with PDZ-aPDT promoted antifungal effects against CaR in the oral lesions of mice with experimental oral candidiasis, and macroscopic analysis showed that, 24 h after completion of treatment, the animals presented 96.07% remission of the lesions. To our knowledge, no previously published study has investigated in vivo the efficacy of DNase I associated with aPDT in treating induced oral candidiasis in mice.

The results from mice inoculated with CaR showed that DNase I prior to PDZ-aPDT promoted reductions in viable colony counts of around 2.89 and 1.27 log10, respectively, immediately and 7 days after the treatments. In addition, the macroscopic oral lesions were reduced by around 96.07 and 75.24%, 24 h and 7 days after the treatment, respectively. The results observed here corroborate a previous study (Hidalgo et al., 2019) that evaluated the combination of PDZ (200 mg/L)-mediated aPDT with nystatin in the treatment of mice infected with fluconazole-resistant C. albicans (ATCC 96901).
Using the same methodology for the induction of infection in mice, the combination of treatments reduced the fungal viability by ~2.60 log10 and the macroscopic oral lesions by ~95% 24 h after the treatment (Hidalgo et al., 2019). The results obtained 7 days after the end of treatment showed that nystatin combined with PDZ-aPDT promoted a reduction of ~1 log10 in fungal viability and a macroscopic reduction in oral lesions of around 50% (Hidalgo et al., 2019). Here, DNase yielded an outcome more favorable than nystatin in decreasing fungal viability and reducing the rate of lesion recurrence. Using alternative sources can lead to access to novel therapeutic agents with fewer side effects and without the risk of antifungal resistance (Sandai et al., 2016; Torabi et al., 2022). DNase may be an adjuvant for biofilm treatments, since it degrades eDNA in the extracellular matrix of biofilms (Rajendran et al., 2014; Hamblin and Abrahamse, 2019). Therefore, we suggest that DNase enabled PDZ diffusion in the extracellular matrix of the biofilms, potentiating the effects of PDZ-aPDT and promoting the inactivation of biofilms in vivo. The enzyme associated with PDZ-aPDT reduced the fungal viability of CaS by around 4.26 and 1.97 log10, respectively, immediately and 7 days after the treatments. In addition, there was a remission of the oral lesions of around 98.92 and 83.31%, respectively, after 24 h and 7 days. An in vivo investigation demonstrated that five applications of PDZ-aPDT or the antifungal nystatin promoted reductions in the cell viability of C. albicans (ATCC 90028) of 3 and 3.22 log10, respectively, 24 h after the treatment (Carmello et al., 2016). After 7 days of treatment, a reduction of ~2 log10 was observed for the PDZ-aPDT and nystatin groups (Carmello et al., 2016). Compared to antifungals, the lack of development of antimicrobial resistance encourages further studies using aPDT (Hamblin and Abrahamse, 2019). Thus, our findings reveal that DNase improves aPDT (Panariello et al., 2019). When fluconazole-resistant C. albicans biofilms were treated with DNase prior to PDZ-aPDT, there was a decrease of ~1.92 log10 in fungal viability, and reductions in water-soluble polysaccharides (36.3%) and eDNA (72.3%) (Abreu-Pereira et al., 2022). Here, the fluorescence microscopy images of histological tongue sections 24 h after treatment demonstrated that DNase I before PDZ-aPDT promoted a reduction in fungal polysaccharides, similar to the healthy animals (NIC). After 7 days of the treatment, the presence of polysaccharides was observed only in mice infected by fluconazole-resistant C. albicans; still, the fungal viable count was smaller than that in the PDZ-aPDT group. Thus, DNase treatment reduced the fungal polysaccharides in vitro (Panariello et al., 2019; Abreu-Pereira et al., 2022) and in the animal model used here. Among the components present in the ECM, eDNA, associated with β-glucans and β-mannans, contributes to the organizational integrity of biofilms and the antifungal tolerance of C. albicans (Martins et al., 2012; Rajendran et al., 2014; Mitchell et al., 2016). Some reports have found an association between eDNA levels and increased microbial resistance to antibiotics (Rice et al., 2007; da Silva et al., 2008). Therefore, disrupting the ECM and reducing eDNA levels of C. albicans biofilms are essential to optimize antifungal therapies.
The histological sections of the tongues recovered after the treatment with DNase before PDZ-aPDT for both strains (CaS and CaR) demonstrated histological characteristics similar to those of the NIC group (healthy animals). The tissues presented a reduced amount of hyphae/pseudohyphae/blastospores on the keratin layer, minor inflammation in the subjacent connective tissue, and intact muscle tissue. In a previous study, mice treated with five applications of nystatin associated with PDZ-aPDT presented normal histological characteristics, outcomes very similar to those observed in the present study (Hidalgo et al., 2019). In addition, the inoculation of mice with fluconazole-resistant C. albicans promoted an intense inflammatory response in the subjacent connective tissue (Alves et al., 2018; Hidalgo et al., 2019), and five applications of PDZ-aPDT decreased the inflammatory reaction from intense to mild (Hidalgo et al., 2019).
Here, the control groups (P-L-, P+L-, P-L+, and DNase) 24 h after the treatment presented a large area of hyphae/pseudohyphae covering the epithelial tissue, which demonstrated acanthosis associated with destruction of the papillae. Moreover, there was intense inflammation of the epithelium, with dilated blood vessels. Seven days after the treatments, the muscle fibers were partially degraded in the superficial region of the tissue in the control groups (P-L-, P+L-, and P-L+ groups). In the sections from animals inoculated with CaR, 7 days after the end of treatment, there were hyphae and pseudohyphae on the keratin layer, suggesting recurrence of the oral infection in the DNase+P+L+ group, but its keratin layer was thinner than that observed for the P+L+ group. Furthermore, in the PDZ-aPDT group, there was a return of the oral lesions. Our results demonstrate that DNase before PDZ-aPDT reduces the lesion recurrence rate after 7 days compared to the other treatments for fluconazole-resistant C. albicans. Therefore, DNase I affected the biofilm composition and hampered new formation of C. albicans biofilm, probably because complete epithelial restructuring was promoted in the groups treated with DNase+P+L+.

The overall increased occurrence of pathogens resistant to conventional antifungals and the toxicity of the drugs have motivated searches for strategies to inactivate fungal species. In addition, the ECM of C. albicans biofilms limits the penetration of antimicrobials, antiseptics, and photosensitizers, influencing the efficacy of aPDT and other fungal therapies. In summary, DNase before PDZ-aPDT promoted expressive outcomes by reducing fungal viability and healing the oral lesions in mice. Therefore, this study further demonstrates that DNase is a promising adjuvant for the in vivo photoinactivation of antifungal-susceptible and -resistant strains.
FIGURE 8 FIGURE 8Representative images of the histological sections of tongues from mice inoculated with CaS and recovered 24 h and 7 days after the end of treatments.Muscle tissue was stained with hematoxylin-eosin (HE) (40X).Black arrow -keratin layer contaminated with hyphae and pseudohyphae; green arrow -lamina propria and yellow arrow -dilated blood vessels in response to the local inflammatory reaction. FIGURE 9 FIGURE 9Representative images of the histological sections of tongues from mice inoculated with CaR and recovered 24 h and 7 days after the end of treatments.Muscle tissue was stained with hematoxylin-eosin (HE) (40X).Black arrow -keratin layer contaminated with hyphae and pseudohyphae; green arrow -lamina propria and yellow arrow -dilated blood vessels in response to local inflammatory reaction. TABLE 1 Probes and stains used in the fluorescence microscopic analysis.
2023-12-24T16:13:20.931Z
2023-12-22T00:00:00.000
{ "year": 2023, "sha1": "74c6094baa6b5c38beb8b4d1bc1019825760db22", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1274201/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f34ad989928253251cff41b5fbf09372f559c4e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
266596811
pes2o/s2orc
v3-fos-license
Inclusion of autistic students in schools: Knowledge, self-efficacy, and attitude of teachers in Germany

To provide inclusive education to autistic students, it is important that teachers possess knowledge about autism, feel competent in teaching autistic students, and have a positive attitude toward the inclusion of autistic students. In this study, we explored knowledge, self-efficacy, and attitude concerning autism among N = 887 teachers in Germany. The results showed that knowledge about autism was only moderate and teachers held some typical misconceptions about autism. Moreover, teachers did not possess overwhelmingly high self-efficacy beliefs, whereas their attitude toward inclusion of autistic students was rather positive. Experience with teaching autistic students was associated with more knowledge and higher self-efficacy. Also, female teachers were more knowledgeable about autism and felt more competent in teaching autistic students than male teachers. However, the type of school where teachers were working made hardly any difference in their knowledge, self-efficacy, and attitude. Overall, the results suggest that teachers in Germany need more autism training to increase their knowledge about autism and their self-efficacy beliefs in teaching autistic students.

Lay Abstract

Nowadays, autistic students are often enrolled in mainstream schools. To successfully include autistic students in general education, teachers need to possess knowledge about autism, feel competent in teaching autistic students, and have a positive attitude toward their inclusion. However, in Germany, little is known about the knowledge, the self-efficacy, and the attitude concerning autism among teachers working at mainstream schools. Therefore, we conducted a study in which we used items to assess knowledge, self-efficacy, and attitude. A total of 887 general education teachers participated in the study. The results showed that the level of knowledge about autism was moderate among teachers. Similarly, teachers did not hold overwhelmingly high self-efficacy beliefs. However, their attitude toward inclusion of autistic students was rather positive. At the same time, teachers who had experience with teaching autistic students possessed more knowledge and higher self-efficacy than teachers who had no experience with teaching autistic students. In addition, female teachers were more knowledgeable about autism and felt more competent in teaching autistic students than male teachers. In contrast, knowledge, self-efficacy, and attitude were rather similar among teachers from different types of schools. The findings suggest that teachers in Germany should possess more knowledge about autism and feel more competent in teaching autistic students. Therefore, it is important to systematically include autism trainings in teacher education programs.
Introduction

Autism is a neurodevelopmental disorder characterized by difficulties in social communication and social interaction as well as restricted, repetitive patterns of behavior, interests, or activities (American Psychiatric Association, 2013; for other conceptualizations of autism, see Milton, 2019). Approximately 1% of children and teenagers are diagnosed with autism spectrum disorder worldwide (Zeidan et al., 2022). Nowadays, a growing number of autistic students are included in general education. For example, in the United States (Meindl et al., 2020) and Australia (Roberts & Webster, 2022), the majority of autistic students are enrolled in mainstream schools. Analogous to the trend around the world, there is an increase in autism prevalence in Germany, too (Bachmann et al., 2018). Markowetz (2020) estimates that about 50% of all autistic students in Germany are in a general education classroom.

The growing number of autistic students in mainstream schools provides unique challenges to teachers. Cognitive (e.g. Demetriou et al., 2018), motivational (e.g. Meindl et al., 2020), and emotional factors (e.g. Huggins et al., 2021) can make it difficult for autistic students to perform academically well (e.g. Keen et al., 2023). In addition, communication and interaction with teachers and other students might be challenging for autistic students because of the very characteristics of autism (Watkins et al., 2019). Also, it is possible that autistic students display aggressive or self-injurious behavior, which can seriously impair successful inclusion (Odom et al., 2021). To address these challenges, teachers need to be prepared for the inclusion of autistic students in mainstream schools (Reed & Osborne, 2014).

In Germany, little is known about how successfully teachers include autistic students. Therefore, we conducted a study to examine knowledge, self-efficacy, and attitude concerning autism among general education teachers in Germany as important aspects of their professional competence in teaching autistic students in mainstream schools.

Professional competence of teachers in including autistic students

The professional competence of teachers can be conceptualized as a multifaceted construct that consists of different aspects such as knowledge, self-efficacy, and attitude (e.g. Kunter et al., 2013). In the context of inclusive education for autistic students, knowledge about autism is part of the professional knowledge of teachers (e.g. Voss et al., 2011). In addition, feeling competent in including autistic students in mainstream schools refers to teacher self-efficacy (e.g. Woodcock et al., 2022). The attitude toward the inclusion of autistic students is another important aspect of a teacher's professional competence (e.g. Yada et al., 2022).
Knowledge about autism

Knowledge about autism, such as knowledge about the symptoms of autism (e.g. Harrison et al., 2017), is usually associated with more positive impressions of autistic persons (e.g. Sasson & Morrison, 2019) and lower levels of stigma (e.g. Obeid et al., 2015). In particular, possessing knowledge about autism can help teachers to provide appropriate education to autistic students. For example, Segall and Campbell (2012) showed that more knowledgeable teachers more often reported using educational practices such as providing choice-making opportunities (e.g. White et al., 2023) and using visual supports (e.g. Watkins et al., 2019) to include autistic students than less knowledgeable teachers. Moreover, having knowledge about autism can prevent teachers from overestimating their own knowledge about autism and help them to overcome their misconceptions about autism (Jones et al., 2021; McMahon et al., 2020).

In their summary of 25 studies, Gómez-Marí et al. (2021) showed that knowledge about autism among teachers in countries around the world was rather low. At the same time, the provision of training in autism and contact with autistic students were related to more knowledge about autism. Al-Sharbati et al. (2015) and Vincent and Ralston (2020) additionally found that female teachers possessed more knowledge about autism than male teachers, even though this effect was rather small.

Self-efficacy for inclusive education of autistic students

Teacher self-efficacy refers to a teacher's perception of their capabilities to support students and perform teaching tasks successfully (Tschannen-Moran et al., 1998). In their review, Zee and Koomen (2016) demonstrated that teachers with higher self-efficacy toward the inclusion of students with special educational needs felt more responsible for the learning problems of these students and were more willing to include them in their classroom than teachers with lower self-efficacy. Moreover, Woodcock et al. (2022) presented synthesis research showing that teacher self-efficacy was positively associated with the reported use of inclusive practices in the classroom (e.g. Sharma et al., 2021; Sharma & Sokal, 2016).

Research that specifically focuses on teacher self-efficacy regarding autism suggests that feeling competent in including autistic students is linked with less teacher stress (Boujut et al., 2017) as well as with more teacher engagement and higher student outcomes (Love et al., 2020). However, teachers working at mainstream schools might lack confidence in their capabilities to appropriately deal with autistic students (Anglim et al., 2018). For example, Lu et al. (2020) showed that the level of self-efficacy among primary school teachers of autistic students was only moderate (see also Dignath et al., 2022).

Attitude toward the inclusion of autistic students

The attitude toward the inclusion of students with special educational needs refers to the way teachers think and feel about teaching these students in mainstream classrooms (Kielblock & Woodcock, 2023). A positive attitude toward inclusion is important because it can influence a teacher's instructional practices in the classroom (Sharma & Jacobs, 2016). In their review, Gómez-Marí et al. (2022) found that teachers in most studies had a neutral attitude toward teaching autistic students in mainstream schools. In contrast, in the synthesis conducted by Russell et al.
(2023), the majority of studies reported a positive attitude. In both summaries (Gómez-Marí et al., 2022; Russell et al., 2023), experience with teaching autistic students was not necessarily associated with a more positive attitude. Interestingly, the meta-analysis conducted by Kim et al. (2023) suggests that women in general possess a more positive attitude toward autism than men. This difference has been found between female and male teachers, too (Gómez-Marí et al., 2022).

Relationship between knowledge, self-efficacy, and attitude

Knowledge, self-efficacy, and attitude are important aspects of a teacher's professional competence (e.g. Kunter et al., 2013). Therefore, they are examined not only in isolation but also in concert with each other (e.g. Lu et al., 2020). Research has shown a positive association between knowledge and self-efficacy (e.g. Lauermann & König, 2016), although there are also findings that do not confirm such a relationship (e.g. Depaepe & König, 2018). In the context of autism, Lu et al. (2020) observed a significant but rather small correlation between both variables for primary school teachers. Also, teachers' knowledge about autism might be associated with their attitude toward the inclusion of autistic students. For example, knowing why autistic students might engage in aggressive behavior can raise a positive attitude toward them. Accordingly, Gómez-Marí et al. (2022) found that a number of studies reported a positive association between knowledge and attitude. As shown by the meta-analysis conducted by Yada et al. (2022), the attitude toward inclusion is linked to teacher self-efficacy, too. Similarly, in the context of teaching autistic students, the attitude toward their inclusion is related to teacher self-efficacy (Lu et al., 2020).

The present study

The number of autistic students enrolled in mainstream schools is increasing (e.g. Meindl et al., 2020). Therefore, it is important that general education teachers possess knowledge about autism, have high self-efficacy in teaching autistic students, and hold a positive attitude toward their inclusion (e.g. Reed & Osborne, 2014). Although the topic of inclusion has been a mandatory part of all teacher education programs in Germany since 2013 (e.g. Liebner & Schmaltz, 2021), autism is not a special education disability category in most of the federal states of Germany (Lindmeier et al., 2020). As a result, many general education teachers in Germany might not be systematically prepared for working with autistic students in mainstream schools (Markowetz, 2020). At the same time, little is known about their professional competence in including autistic students. To address this gap, the present study investigated knowledge, self-efficacy, and attitude concerning autism among general education teachers in Germany. In line with research examining teachers from other countries, we expected that teachers in Germany would possess limited knowledge about autism (Gómez-Marí et al., 2021) and hold moderate self-efficacy beliefs in teaching autistic students (Lu et al., 2020), but have a rather positive attitude toward their inclusion (Russell et al., 2023).
A number of factors have been discussed as influencing the level of a teacher's professional competence in including autistic students (Reed & Osborne, 2014). In this study, we examined the following teacher factors: First, research suggests that contact with autistic students can positively influence a teacher's knowledge, self-efficacy, and attitude (e.g. Gómez-Marí et al., 2021; Russell et al., 2023). Hence, we looked at whether knowledge, self-efficacy, and attitude would differ depending on whether teachers had experience with teaching autistic students. Second, the type of school where teachers are working might make a difference in their knowledge, self-efficacy, and attitude. For example, the reviews conducted by Gómez-Marí et al. (2021, 2022) suggest that primary school teachers not only have more knowledge about autism but also possess a more positive attitude toward autistic students than secondary school teachers. In Germany, schools are divided into primary and secondary schools. Secondary schools have a three-tiered structure and are split into lower, intermediate, and upper secondary schools. In some federal states of Germany, there are also comprehensive or integrative schools where students are taught in the same classroom but usually achieve different qualifications. We studied knowledge, self-efficacy, and attitude among teachers in Germany as a function of these different school types. Third, the gender of teachers might be linked to what they know about and how they think and feel about the inclusion of autistic students. In particular, women have been found to have more knowledge about autism (Al-Sharbati et al., 2015; Vincent & Ralston, 2020) and a more favorable attitude toward autistic people than men (e.g. Gómez-Marí et al., 2022; Kim et al., 2023). Thus, we examined the role of a teacher's gender for knowledge, self-efficacy, and attitude in the context of autism.

Finally, given that knowledge, self-efficacy, and attitude concerning autism are important aspects of a teacher's professional competence in including autistic students, we were interested in how they were related to each other. Lu et al. (2020), who studied primary school teachers, found a rather strong association between self-efficacy and attitude, whereas knowledge was only weakly related to self-efficacy and attitude. We also investigated possible relationships among knowledge, self-efficacy, and attitude regarding autism but, due to the cross-sectional nature of our study, did not interpret them causally.

Overall, we addressed the following research questions:

Research Question 1 (RQ1). What is the level of knowledge, self-efficacy, and attitude concerning autism that general education teachers in Germany possess?

Research Question 2 (RQ2). Is the level of knowledge, self-efficacy, and attitude concerning autism associated with autism experience, school type, and gender of general education teachers in Germany?

Research Question 3 (RQ3). What is the relationship between knowledge, self-efficacy, and attitude concerning autism among general education teachers in Germany?
Sample

A total of N = 887 teachers from Baden-Wurttemberg, the federal state with the third largest population in Germany, participated in the study. Of them, 726 teachers were female, 123 teachers were male, and 38 teachers did not indicate their gender. The teachers were working at primary schools (Grundschule, n = 351), lower secondary schools (Haupt-/Werkrealschule, n = 41), intermediate secondary schools (Realschule, n = 115), upper secondary schools (Gymnasium, n = 235), and integrative schools (Gemeinschaftsschule, n = 109). Some teachers (n = 36) did not indicate the type of school they were working at. The teachers who reported their teaching experience (n = 820) had M = 15.72 years (SD = 9.18) of teaching experience. A total of n = 695 teachers reported that they had already taught an autistic student, whereas n = 154 teachers indicated that they had not. A few teachers (n = 38) did not answer the question about their experience with teaching autistic students.

Measures

Knowledge about autism. Knowledge about autism was assessed by using items of the Autism Stigma and Knowledge Questionnaire (ASK-Q) constructed by Harrison et al. (2017). The ASK-Q consists of 49 items covering the four aspects of (1) diagnosis, (2) etiology, (3) treatment, and (4) stigma. In our study, we used 27 adapted items of the ASK-Q. We did not assess stigma because we separately examined attitude toward the inclusion of autistic students. To add items that assess misconceptions about autism, we also included two adapted items from the Autism Awareness Survey (AAS) developed by Tipton and Blacher (2014) and an adapted item used in the study by Segall and Campbell (2012). Thus, we had in total 30 items, with 12 items assessing knowledge about symptoms (e.g. "Many autistic students show the need for routines and sameness"), 10 items assessing knowledge about etiology (e.g. "Genetics play an important role in the development of autism"), and 8 items assessing knowledge about treatment (e.g. "There is currently no cure for autism"). Each item was presented with the three answer options true, false, and don't know. The items that were correctly answered were coded as correct, whereas the items that were incorrectly answered or whose answer was not known were coded as incorrect. Reliability was good (Cronbach's α = 0.77). The percentage of items correctly answered by a teacher was computed by dividing the number of all correctly answered items by the total number of items. In addition, the percentage of items correctly answered by a teacher for each of the three aspects of autism (i.e. symptoms, etiology, treatment) was computed by dividing the number of all correctly answered items for each of the three aspects by the total number of items for each of the three aspects.

Self-efficacy for inclusive education of autistic students. Self-efficacy for inclusive education of autistic students was assessed by using items from two scales that measure self-efficacy of teachers in the context of autism. The Self-Efficacy for Autism Scale (TSEAS) developed by Love (2016) consists of 14 items, and the Autism Self-Efficacy Scale for Teachers (ASSET) developed by Ruble et al.
(2013) comprises 30 items. In our study, we used a total of 23 adapted items (for examples, see Figure 2) with a 6-point rating scale ranging from 1 (disagree) to 6 (agree). Reliability was very good (Cronbach's α = 0.94). The overall score for every teacher was computed by averaging the points assigned to all items. A higher score indicated higher self-efficacy for the inclusion of autistic students.

Attitude toward the inclusion of autistic students. Attitude toward the inclusion of autistic students was assessed by using the Autism Attitude Scale for Teachers (AAST) developed by Olley et al. (1981). The original scale consists of 14 items. We used 11 adapted items (for examples, see Figure 3) with a 6-point rating scale ranging from 1 (disagree) to 6 (agree). Reliability was very good (Cronbach's α = 0.87). Most items were worded negatively. To compute an overall score for every teacher, the negatively worded items were recoded. Then, the points assigned to all items were averaged. Thus, a higher overall score reflected a more positive attitude.

Demographic information. The teachers provided demographic information such as age, gender, and school type. In addition, we asked them to indicate whether or not they already had experience with teaching autistic students.

Procedure

Principals of all mainstream schools in Baden-Wurttemberg were contacted via email and asked to inform the teachers at their schools about the study. Teachers who agreed to participate in the study followed a link to the online survey and proceeded as follows: First, after the teachers had been provided with information about the purpose of the study and the principles of data protection, their informed consent was obtained. Second, the teachers completed the questionnaire measuring their attitude toward the inclusion of autistic students. Third, the teachers answered the items assessing their knowledge about autism. Fourth, the teachers filled in the questionnaire that measured their self-efficacy for inclusive education of autistic students. Fifth, the teachers provided demographic information. The study was approved by the Ministry of Education, Youth and Sports Baden-Wurttemberg and the Ethics Committee of the University of Freiburg.

Community involvement statement

No autistic person was directly involved in the development of the research questions addressed in this study. However, the first author has familial experience with autism.

Results

We used an alpha level of 0.05 for all statistical analyses.

Knowledge, self-efficacy, and attitude

The descriptive statistics regarding knowledge, self-efficacy, and attitude concerning autism are displayed in Table 1.
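Before turning to the results, the scoring rules described in the Measures section can be summarized in a short sketch. The response arrays, the answer key, the item ordering per aspect, and the assumed set of negatively worded items are hypothetical placeholders, not the study's actual materials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical answer key for the 30 knowledge items (True means "true" is correct).
key = rng.integers(0, 2, size=30).astype(bool)

# Hypothetical responses of one teacher: "true", "false", or "dont_know".
responses = rng.choice(["true", "false", "dont_know"], size=30)

# "Don't know" is coded as incorrect, as in the coding scheme above.
answered_true = responses == "true"
answered = responses != "dont_know"
correct = answered & (answered_true == key)

knowledge_pct = 100 * correct.mean()  # percentage of correctly answered items

# Sub-scores per aspect: 12 symptom, 10 etiology, and 8 treatment items
# (this item ordering is assumed purely for illustration).
aspects = {"symptoms": slice(0, 12), "etiology": slice(12, 22), "treatment": slice(22, 30)}
aspect_pct = {name: 100 * correct[s].mean() for name, s in aspects.items()}

# Attitude: 11 items on a 1-6 scale; negatively worded items are recoded
# as 7 - x before averaging, so a higher score means a more positive attitude.
attitude_raw = rng.integers(1, 7, size=11).astype(float)
is_negative = np.ones(11, dtype=bool)  # placeholder: treat all items as negatively worded
attitude_score = np.where(is_negative, 7 - attitude_raw, attitude_raw).mean()

# Self-efficacy (23 items, 1-6 scale) is simply the mean of the item scores.
self_efficacy_score = rng.integers(1, 7, size=23).mean()
```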
On average, the teachers possessed a moderate level of knowledge about autism. Their mean score differed significantly from 50%, that is, from the score that would be obtained when merely guessing the correct answers, t(886) = 32.66, p < 0.001, η² = 0.203. At the same time, they had significantly more knowledge about symptoms than about etiology, F(1, 886) = 496.87, p < 0.001, η² = 0.359, and treatment, F(1, 886) = 8.74, p = 0.003, η² = 0.010. Also, they were significantly more knowledgeable about treatment than about etiology, F(1, 886) = 382.28, p < 0.001, η² = 0.301. The 10 items that were most often answered incorrectly by the teachers and the five items that were least often answered incorrectly by the teachers are shown in Figure 1. For example, the majority of teachers were not aware that traumatic experiences in early phases of life cannot cause autism. Conversely, nearly all teachers knew that many autistic students show the need for routines.

The mean level of a teacher's self-efficacy for including autistic students was slightly positive and significantly differed from the theoretical mean score of 3.50 of the rating scale, t(854) = 15.36, p < 0.001, η² = 0.066. The three items that received the lowest agreement by the teachers and the three items that received the highest agreement by the teachers are displayed in Figure 2. For example, the majority of teachers did not feel very competent in assessing the causes of an autistic student's problematic behavior. In contrast, self-efficacy with regard to collaborating with special educators was quite high among most teachers.

The attitude toward the inclusion of autistic students was, on average, rather positive and significantly higher than the theoretical mean score of 3.50 of the rating scale, t(886) = 35.38, p < 0.001, η² = 0.262. The three items that received the highest agreement by the teachers, reflecting a more negative attitude, and the three items that received the lowest agreement by the teachers, reflecting a more positive attitude, are depicted in Figure 3. For example, nearly half of the teachers assumed that only teachers with extensive special education training could help an autistic student. However, almost all teachers disagreed with the statement that they would not want their students to have to put up with autistic classmates.

Knowledge, self-efficacy, and attitude as a function of experience with teaching autistic students, school type, and gender

To statistically analyze differences in knowledge, self-efficacy, and attitude concerning autism as a function of experience with teaching autistic students, school type, and gender, we performed for every independent variable three ANOVAs with knowledge, self-efficacy, and attitude as the dependent variable. In addition, we conducted for every independent variable a MANOVA to examine differences in the three knowledge aspects, namely, symptoms, etiology, and treatment.
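The comparisons against chance level and against the scale midpoint reported above are one-sample t-tests. A minimal sketch with simulated scores (the arrays are made-up stand-ins, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated per-teacher scores standing in for the real data.
knowledge_pct = rng.normal(loc=60, scale=12, size=887)    # % correct
self_efficacy = rng.normal(loc=3.9, scale=0.8, size=855)  # 1-6 scale
attitude = rng.normal(loc=4.4, scale=0.8, size=887)       # 1-6 scale

# Knowledge vs. the 50% score expected from guessing between true and false.
t_know, p_know = stats.ttest_1samp(knowledge_pct, popmean=50)

# Self-efficacy and attitude vs. the theoretical scale midpoint of 3.50.
t_se, p_se = stats.ttest_1samp(self_efficacy, popmean=3.5)
t_att, p_att = stats.ttest_1samp(attitude, popmean=3.5)

def eta_squared(t, df):
    """Eta squared effect size from a t statistic and its degrees of freedom."""
    return t**2 / (t**2 + df)

print(t_know, p_know, eta_squared(t_know, len(knowledge_pct) - 1))
print(t_se, p_se, eta_squared(t_se, len(self_efficacy) - 1))
print(t_att, p_att, eta_squared(t_att, len(attitude) - 1))
```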
Experience with teaching autistic students. The descriptive statistics regarding knowledge, self-efficacy, and attitude as a function of experience with teaching autistic students are displayed in Table 2. The first ANOVA with knowledge (total score) as the dependent variable showed a significant effect, F(1, 847) = 24.26, p < 0.001, η² = 0.028. Teachers with experience possessed significantly more knowledge than teachers without experience. Moreover, the MANOVA with knowledge about symptoms, etiology, and treatment as dependent variables was significant, F(3, 845) = 10.32, p < 0.001, η² = 0.035. Teachers with experience had significantly more knowledge about all three aspects of autism than teachers without experience: symptoms, F(1, 847) = 27.88, p < 0.001, η² = 0.032; etiology, F(1, 847) = 6.74, p = 0.010, η² = 0.008; and treatment, F(1, 847) = 16.55, p < 0.001, η² = 0.019. The second ANOVA with self-efficacy as the dependent variable was also significant, F(1, 845) = 9.79, p = 0.002, η² = 0.011. Teachers with experience held significantly higher self-efficacy beliefs than teachers without experience. The third ANOVA with attitude as the dependent variable just failed to reach the level of statistical significance, F(1, 847) = 3.09, p = 0.080, η² = 0.004. However, descriptively, teachers with experience had a more positive attitude than teachers without experience. Overall, the teachers who had experience with teaching autistic students possessed more knowledge and felt more competent in teaching autistic students than teachers who had no experience with teaching autistic students. However, all effects were rather small.

School type. To statistically test for differences in knowledge, self-efficacy, and attitude associated with the type of school where teachers were working, we used (1) primary schools, (2) lower and intermediate secondary schools, (3) integrative schools, and (4) upper secondary schools as levels of the independent variable. The descriptive statistics regarding knowledge, self-efficacy, and attitude as a function of school type are shown in Table 3. The first ANOVA with knowledge (total score) as the dependent variable showed no significant effect, F(3, 847) = 1.14, p = 0.331, η² = 0.004. Similarly, the MANOVA with knowledge about symptoms, etiology, and treatment as dependent variables was not significant, F(9, 2541) = 1.05, p = 0.397, η² = 0.004: symptoms, F(3, 847) = 1.04, p = 0.373, η² = 0.004; etiology, F(3, 847) = 1.51, p = 0.211, η² = 0.005; treatment, F(3, 847) = 0.81, p = 0.490, η² = 0.003. However, the second ANOVA with self-efficacy as the dependent variable showed a significant effect, F(3, 845) = 4.70, p = 0.003, η² = 0.016. Post hoc tests with Bonferroni correction revealed that the self-efficacy of primary school teachers was significantly higher than the self-efficacy of lower and intermediate secondary school teachers, p = 0.018, η² = 0.018, and the self-efficacy of upper secondary school teachers, p = 0.011, η² = 0.017. Even so, the effects were rather small. All other post hoc tests were not statistically significant. The third ANOVA with attitude as the dependent variable was also not significant, F(3, 847) = 1.62, p = 0.183, η² = 0.006. Altogether, teachers of different school types were rather similar in knowledge, self-efficacy, and attitude, with the exception of small differences in self-efficacy between primary school teachers and lower, intermediate, and upper secondary school teachers.
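As a sketch of the group comparisons above, a one-way ANOVA of knowledge by teaching experience can be run as follows; the group arrays and group means are simulated placeholders, not the reported data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated knowledge scores for teachers with and without experience
# teaching autistic students (placeholders, not the study data).
with_exp = rng.normal(loc=62, scale=12, size=695)
without_exp = rng.normal(loc=56, scale=12, size=154)

# With two groups, the one-way ANOVA is equivalent to an independent t-test.
f_stat, p_value = stats.f_oneway(with_exp, without_exp)

# Eta squared: between-group sum of squares over the total sum of squares.
all_scores = np.concatenate([with_exp, without_exp])
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (with_exp, without_exp))
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total

print(f_stat, p_value, eta_sq)
```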
Correlations between knowledge, self-efficacy, and attitude

To statistically analyze the relationship between knowledge, self-efficacy, and attitude, we computed correlations. All three variables significantly correlated with each other. The highest correlation was between self-efficacy and attitude, r = 0.59, p < 0.001, followed by the correlation between knowledge and self-efficacy, r = 0.23, p < 0.001. The correlation between attitude and knowledge was the lowest, r = 0.19, p < 0.001.

Discussion

In this study, we examined knowledge, self-efficacy, and attitude concerning autism among general education teachers in Germany. The results showed that their knowledge about autism was, on average, quite moderate. Even though performance on the knowledge test was better than expected by chance, some statements about autism, such as that traumatic experiences can cause autism, were not known to be misconceptions by the majority of teachers. At the same time, teachers were rather knowledgeable about the symptoms of autism. However, the factors that cause autism were less well known. Knowledge about the causes of autism could help teachers to better understand and, thus, accept autism. Therefore, teacher trainings might specifically focus on risk factors for autism. Similarly, teachers possessed limited knowledge about the treatment of autism. For example, most teachers were not aware that behavior therapy is an effective treatment for autism. Important elements of behavior therapy are modeling, prompting, and reinforcing (Alberto et al., 2021). These techniques have been shown to be evidence-based practices for working with autistic students in schools, too (Hume et al., 2021).

Given that many teachers in our study did not know that behavior therapy is an effective treatment for autism, it is very likely that they are also ignorant of modeling, prompting, and reinforcing as effective methods that they could use when teaching autistic students. Overall, the average level of knowledge about autism observed in general education teachers in Germany is comparable with the knowledge about autism possessed by teachers worldwide (e.g. Gómez-Marí et al., 2021). The fact that teachers in our study not only had moderate knowledge about autism but also possessed misconceptions about autism suggests that it could be helpful to use methods such as refutation texts to help teachers systematically overcome their false beliefs (Paynter et al., 2019; Prinz et al., 2019).

In addition, we found that teachers did not feel overwhelmingly competent in teaching autistic students because their self-efficacy beliefs were, on average, moderate. This might not be surprising given that teachers in Germany are usually not systematically prepared for including autistic students in mainstream schools. In particular, their self-efficacy with regard to assessing the causes of an autistic student's problematic behavior was quite low. Usually, determining the cause of a problematic behavior by means of a functional behavior analysis serves as a basis for developing a support plan for autistic students (Alberto et al., 2021). Consequently, when teachers lack confidence in assessing the causes of a problematic behavior, they cannot systematically help autistic students, for example, by teaching a new behavior to replace a problematic behavior. That the level of self-efficacy held by teachers in this study was only moderate is consistent with the study conducted by Lu et al.
(2020), who examined self-efficacy in teaching autistic students among primary school teachers in China. Our results also confirm the findings obtained in the meta-analysis by Dignath et al. (2022), who synthesized research on teacher self-efficacy in including students with special educational needs in general.

Furthermore, our study showed that a teacher's attitude toward the inclusion of autistic students was, on average, rather positive. For example, the great majority of the teachers disagreed with the statement that they would not want their students to have to put up with autistic classmates. At the same time, some teachers were reserved about the inclusion of autistic students. For example, nearly half of all teachers believed that only teachers with extensive special education training could help autistic students. Our results are in line with the synthesis by Russell et al. (2023), who showed that teachers in most studies had a rather positive attitude toward the inclusion of autistic students. This suggests that teacher trainings could systematically draw upon a teacher's positive attitude to increase their willingness to develop professional knowledge and self-efficacy concerning autism.

Our study also revealed that experience with teaching autistic students was significantly associated with more knowledge and higher self-efficacy as well as, descriptively, with a more positive attitude. The finding that experience with autistic students is linked with more knowledge is consistent with previous research (e.g. Gómez-Marí et al., 2021) and suggests that teachers who teach autistic students actively seek to acquire knowledge about autism to work with autistic students effectively. In addition, experience with teaching autistic students could directly serve as a source that teachers use to form their self-efficacy beliefs (Tschannen-Moran et al., 1998). Hence, when teachers feel successful in teaching autistic students, their self-efficacy beliefs might increase. Also, it is plausible to assume that teachers who get to know an autistic student personally have more empathy toward this autistic student and, thus, think about the inclusion of autistic students more positively. Although we found differences in knowledge, self-efficacy, and attitude as a function of experience with teaching autistic students, these differences were rather small. An explanation for this finding might be that we used a rather rough indicator for experience because we only asked the teachers whether or not they had experience with teaching autistic students. It is possible that other factors, such as the years of experience with teaching autistic students or the number of autistic students being taught, play an even more important role in acquiring knowledge about autism and increasing self-efficacy and attitude. However, we did not measure the years of experience with teaching autistic students or the number of autistic students being taught in our study.

We were also interested in whether the type of school where teachers were working made a difference in knowledge, self-efficacy, and attitude. In contrast to the reviews conducted by Gómez-Marí et al.
(2021, 2022), which suggested that primary school teachers have more knowledge about autism and possess a more positive attitude toward autistic students than secondary school teachers, knowledge and attitude were quite similar among all teachers in our study. There was only a statistically significant but small difference in self-efficacy in favor of primary school teachers. Our findings might not be surprising given that autism training is usually not an integral part of teacher education programs in Germany (Markowetz, 2020).

Moreover, our study showed that female teachers were more knowledgeable about autism than male teachers, even though this effect was rather small. This is in line with prior research (e.g. Al-Sharbati et al., 2015; Vincent & Ralston, 2020). In addition, our study revealed that female teachers also felt more competent in teaching autistic students than male teachers. This result is rather new because prior research has shown higher teacher self-efficacy in men than in women (e.g. Klassen & Chiu, 2010) or no effect of gender on teacher self-efficacy (e.g. Desombre et al., 2019). Why female teachers were more knowledgeable and confident than male teachers needs further investigation. For example, Kim et al. (2023) suggest that women are socialized to show more empathy toward others than men. Thus, female teachers might be more inclined to care about autistic students than male teachers.

Finally, we explored how knowledge, self-efficacy, and attitude concerning autism were related to each other. We found a positive and rather strong correlation between self-efficacy and attitude, which is in line with prior research (Lu et al., 2020; Yada et al., 2022). This correlation indicates that teachers with higher self-efficacy had a more positive attitude. In contrast, and consistent with prior research (Lu et al., 2020), knowledge was only moderately associated with self-efficacy and attitude. Given the fact that we assessed a teacher's general knowledge about autism but not their knowledge about practices that could be used to teach autistic students, it might not be surprising that knowledge was not more strongly related to self-efficacy and attitude.

Limitations

The sample of teachers in this study, that is, teachers from mainstream schools in Baden-Wurttemberg, was large, and the proportions of teachers by school type and gender were similar to the corresponding proportions in the population of teachers in Germany. Nevertheless, the results might not be generalizable to all teachers in Germany. In three of the 16 federal states of Germany, namely, Berlin, Hamburg, and Schleswig-Holstein, autism is officially a disability category requiring schools to provide special education (Lindmeier et al., 2020). Therefore, it would be interesting to examine possible differences in knowledge, self-efficacy, and attitude between teachers in these federal states and teachers in this study.
The design of our study was cross-sectional. Thus, we refrained from interpreting the correlations between knowledge, self-efficacy, and attitude in terms of cause-effect relationships. Further research is encouraged to examine in more detail how these variables are causally related to each other. For example, experimental studies might investigate whether teacher trainings that aim to improve knowledge about autism also increase self-efficacy for inclusive education of autistic students. Similarly, longitudinal studies could use cross-lagged panel analysis to delve more deeply into the relationships between knowledge, self-efficacy, and attitude over time.

In the study, we assessed a teacher's knowledge about general issues related to the treatment of autism, such as the fact that there is currently no cure for autism (Harrison et al., 2017). However, we did not examine a teacher's specialist knowledge about interventions that could be used to systematically support autistic students in school. Given that research has identified evidence-based practices that teachers can implement when working with autistic students in school, such as task analysis, direct instruction, prompting, or reinforcement (Hume et al., 2021), it would be interesting to investigate the extent to which there is a research-to-practice gap among teachers in Germany. Other studies already suggest that teachers in mainstream schools often lack knowledge about evidence-based practices and, thus, seldom engage in evidence-based practices to support autistic students (e.g. Barry et al., 2022).

Implications

To successfully work with autistic students in mainstream schools, teachers need knowledge about autism, high self-efficacy beliefs in teaching autistic students, and a positive attitude toward the inclusion of autistic students. However, our study shows that knowledge and self-efficacy are rather low among teachers in Germany. Therefore, it seems necessary to systematically prepare teachers for the inclusion of autistic students. To do so, teacher trainings might provide valuable information about autism such as symptoms, risk factors, and treatments (Reed & Osborne, 2014). In addition, trainings could address evidence-based practices that teachers could use to support the academic, social, and communicative skills of autistic students (Hume et al., 2021). As a side effect, such trainings might improve self-efficacy beliefs and the attitude toward the inclusion of autistic students (e.g. Saade et al., 2021). Research suggests that preparation programs in teacher education might not be sufficient to guarantee the successful implementation of evidence-based practices at school (Odom et al., 2021). Therefore, it is important that teachers who already work at school also engage in professional development to acquire more knowledge and skills related to autism (e.g. Ruble et al., 2010). A teacher's professional competence notwithstanding, successful inclusion of autistic students also requires schools to offer educational programs of high quality including, among others, a positive school climate, curriculum accommodations, cooperation in interdisciplinary teams, and the involvement of the autistic student's family (for more details, see Odom et al., 2022).

Figure 1: Knowledge items with the highest and lowest percentage of wrong answers given by teachers. Note: The figure shows the 10 items with the highest percentage of wrong answers and the five items with the lowest percentage of wrong answers.
Figure 2: Self-efficacy items with the lowest and highest percentage of agreement provided by teachers. Note: The figure shows the three items with the lowest percentage of agreement (= lower self-efficacy) and the three items with the highest percentage of agreement (= higher self-efficacy).

Figure 3: Attitude items with the highest and lowest percentage of agreement provided by teachers. Note: The figure shows the three (negatively worded) items with the highest percentage of agreement (= more negative attitude) and the three (negatively worded) items with the lowest percentage of agreement (= more positive attitude).

Table 1: Mean values and standard deviations of knowledge, self-efficacy, and attitude concerning autism.

Table 2: Mean values and standard deviations of knowledge, self-efficacy, and attitude concerning autism as a function of experience with teaching autistic students.

Table 3: Mean values and standard deviations of knowledge, self-efficacy, and attitude concerning autism as a function of school type.

Table 4: Mean values and standard deviations of knowledge, self-efficacy, and attitude concerning autism as a function of gender.
2023-12-30T06:18:15.070Z
2023-12-28T00:00:00.000
{ "year": 2023, "sha1": "1202b038ffe3a8b83c234aa2db14ad2fa5fe400d", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/13623613231220210", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c68d410263cef819cbb069e0f860e55c9c5799e9", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
2180008
pes2o/s2orc
v3-fos-license
Bi-directional Attention with Agreement for Dependency Parsing

We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions. The parsing procedure for each direction is formulated as sequentially querying the memory component that stores continuous headword embeddings. The proposed parser makes use of soft headword embeddings, allowing the model to implicitly capture high-order parsing history without dramatically increasing the computational complexity. We conduct experiments on English, Chinese, and 12 other languages from the CoNLL 2006 shared task, showing that the proposed model achieves state-of-the-art unlabeled attachment scores on 6 languages.

Introduction

Recently, several neural network models have been developed for efficiently accessing long-term memory and discovering dependencies in sequential data. The memory network framework has been studied in the context of question answering and language modeling (Weston et al., 2015; Sukhbaatar et al., 2015), whereas the neural attention model under the encoder-decoder framework has been applied to machine translation (Bahdanau et al., 2015) and constituency parsing (Vinyals et al., 2015b). Both frameworks learn the latent alignment between the source and target sequences, and the mechanism of attention over the encoder can be viewed as a soft operation on the memory. Although already used in the encoder for capturing global context information (Bahdanau et al., 2015), the bi-directional recurrent neural network (RNN) has yet to be employed in the decoder. Bi-directional decoding is expected to be advantageous over the previously developed uni-directional counterpart, because the former exploits richer contextual information. Intuitively, we can use two separate uni-directional RNNs where each one constructs its respective attended encoder context vectors for computing RNN hidden states. However, the drawback of this approach is that the decoder would often produce different alignments, resulting in discrepancies between the forward and backward directions. In this paper, we design a training objective function to enforce attention agreement between both directions, inspired by the alignment-by-agreement idea from Liang et al. (2006). Specifically, we develop a dependency parser (BiAtt-DP) using a bi-directional attention model based on the memory network. Given that the golden alignment is observed for dependency parsing in the training stage, we further derive a simple and interpretable approximation for the agreement objective, which makes a natural connection between the latent and observed alignment cases.

¹ Our software and models are available at https://github.com/hao-cheng/biattdp.
The proposed BiAtt-DP parses a sentence in a linear order via sequentially querying the memory component that stores continuous embeddings for all headwords. In other words, we consider all possible arcs during the parsing. This formulation is adopted by graph-based parsers such as the MSTParser (McDonald et al., 2005). The consideration of all possible arcs makes the proposed BiAtt-DP different from many recently developed neural dependency parsers (Chen and Manning, 2014; Weiss et al., 2015; Alberti et al., 2015; Dyer et al., 2015; Ballesteros et al., 2015), which use a transition-based algorithm by modeling the parsing procedure as a sequence of actions on buffers. Moreover, unlike most graph-based parsers, which may suffer from high computational complexity when utilizing high-order parsing history (McDonald and Pereira, 2006), the proposed BiAtt-DP can implicitly inject such information into the model while keeping the computational complexity in the order of O(n²) for a sentence with n words. This is achieved by feeding the RNN in the query component with a soft headword embedding, which is computed as the probability-weighted sum of all headword embeddings in the memory component.

To the best of our knowledge, this is the first attempt to apply memory network models to graph-based dependency parsing. Moreover, it is the first extension of neural attention models from uni-direction to multi-direction by enforcing agreement on alignments. Experiments on English, Chinese, and 12 languages from the CoNLL 2006 shared task show that the BiAtt-DP can achieve parsing accuracy competitive with several state-of-the-art parsers. Furthermore, our model achieves the highest unlabeled attachment score (UAS) on Chinese, Czech, Dutch, German, Spanish, and Turkish.

A MemNet-based Dependency Parser

The proposed parser first encodes each word in a sentence into continuous embeddings using a bi-directional RNN, and then performs two types of operations: 1) headword predictions based on bi-directional parsing history, and 2) relation prediction conditioned on the current modifier and its predicted headword, both in the embedding space.

In the following, we first present how the token embeddings are constructed. Then, the key components of the proposed parser, i.e. the memory component and the query component, are discussed in detail. Lastly, we describe the parsing algorithm using a bi-directional attention model with agreement.

Token Embeddings

In the proposed BiAtt-DP, the memory and query components share the same token embeddings. We use the notion of additive token embedding as in (Botha and Blunsom, 2014) to utilize the available information about the token, e.g., its word form, lemma, part-of-speech (POS) tag, and morphological features. Specifically, the token embedding is computed as

x̃_i = Σ_f E^(f) e_i^(f),

where the sum runs over the available features f, the e_i^(f)'s are one-hot encoding vectors for the i-th word, and the E^(f)'s are parameters to be learned that store the continuous embeddings for the corresponding feature. Note that those one-hot encoding vectors have different dimensions, depending on the individual vocabulary sizes, and all E^(f)'s have the same first dimension but different second dimensions. The additive token embeddings allow us to easily integrate a variety of information. Moreover, we only need to make a single decision on the dimensionality of the token embedding, rather than a combination of decisions on word embeddings and POS tag embeddings as in the concatenated token embeddings used by Chen and Manning (2014), Dyer et al. (2015) and Weiss et al.
(2015). It reduces the number of model parameters to be tuned, especially when many different features are used. In our experiments, the word form and fine-grained POS tag are always used, whereas other features are used depending on their availability in the dataset. All singleton words, lemmas, and POS tags are replaced by special tokens.

The additive token embeddings are transformed into another space before they are used by the memory and query components, i.e.

x_i = LReL(P x̃_i),

where P is the projection matrix, which is shared by the memory and query components as well. The activation function of this projection layer is the leaky rectified linear (LReL) function (Maas et al., 2013) with 0.1 as the slope of the negative part. In the remaining part of the paper, we refer to x_i ∈ R^p as the token embedding for the word at position i. Note that the subscript i is substituted by j and t for the memory and query components, respectively.

Components

As shown in Figure 1, the proposed BiAtt-DP has three components, i.e. a memory component, a left-to-right query component, and a right-to-left query component. Given a sentence of length n, the parser first uses a bi-directional RNN to construct n + 1 headword embeddings, m_0, m_1, ..., m_n ∈ R^e, with m_0 reserved for the ROOT symbol. Each query component is a uni-directional attention model. In a query component, a sequence of n modifier embeddings q_1, ..., q_n ∈ R^d is constructed recursively by conditioning on all headword embeddings. To address the vanishing gradient issue in RNNs, we use the gated recurrent unit (GRU) proposed by Cho et al. (2014), where an update gate and a reset gate are employed to control the information flow. We replace the hyperbolic tangent function in the GRU with the LReL function, which is faster to compute and achieves better parsing accuracy in our preliminary studies. In the following, we refer to headword and modifier embeddings as memory and query vectors, respectively.

[Figure 1: The three components of the BiAtt-DP; the operator symbols indicate element-wise multiplication and addition, respectively. For simplicity, the token embedding x_t connected to the RNN hidden layers m_j, q^l_t and q^r_t is omitted.]

Memory Component: The proposed BiAtt-DP uses a bi-directional RNN to obtain the memory vectors. At time step j, the current hidden state vector h^l_j ∈ R^(e/2) (or h^r_j ∈ R^(e/2)) is computed as a non-linear transformation based on the current input vector x_j and the previous hidden state vector h^l_{j−1} (or h^r_{j+1}), i.e. h^l_j = GRU(h^l_{j−1}, x_j) (or h^r_j = GRU(h^r_{j+1}, x_j)). Ideally, the recursive nature of the RNN allows it to capture all context information from one side, and a bi-directional RNN can thus capture context information from both sides. We concatenate the hidden layers of the left-to-right RNN and the right-to-left RNN for the word at position j as the memory vector m_j = [h^l_j ; h^r_j]. These memory vectors are expected to encode the words and their context information in the headword space.

Query Component: For each query component, we use a uni-directional RNN with GRUs to obtain the query vectors q_t, which are the hidden state vectors of the RNN. Each q_t is used to query the memory component, returning association scores s_{t,j} between the word at position t and the headword candidate at position j:

s_{t,j} = v^⊤ φ(C m_j + D q_t),    (1)

where φ(•) is the element-wise hyperbolic tangent function, and C ∈ R^{h×e}, D ∈ R^{h×d} and v ∈ R^h are model parameters. Then, we can obtain probabilities (aka attention weights), a_{t,0}, ..., a_{t,n}, over all headwords in the sentence by normalizing the s_{t,j}'s using a softmax function:

a_{t,j} = exp(s_{t,j}) / Σ_{j'=0}^{n} exp(s_{t,j'}).    (2)

The soft headword embedding is then defined as m̃_t = Σ_{j=0}^{n} a_{t,j} m_j. At each time step t, the RNN takes the soft headword embedding m̃^l_{t−1} or m̃^r_{t+1} as the input, in addition to the token embedding x_t. Formally, for the forward case, q_t can be computed as q_t = GRU(q_{t−1}, [m̃_{t−1} ; x_t]). Although the RNN is able to capture long-span context information to some extent, the local context may very easily dominate the hidden state. Therefore, this additional soft headword embedding allows the model to access long-span context information through a different channel. On the other hand, by recursively feeding both the query vector and the soft headword embedding into the RNN, the model implicitly captures high-order parsing history information, which can potentially improve the parsing accuracy (Yamada and Matsumoto, 2003; McDonald and Pereira, 2006). However, for a graph-based dependency parser, utilizing parsing history features is computationally expensive. For example, a k-th order MSTParser (McDonald and Pereira, 2006) has O(n^{k+1}) complexity for a sentence of n words. In contrast, the BiAtt-DP implicitly captures high-order parsing history while keeping the complexity in the order of O(n²): for each direction, we compute n(n+1) pair-wise probabilities a_{t,j} for t = 1, ..., n and j = 0, ..., n.

In this paper, we choose to use soft headword embeddings rather than making hard decisions on headwords. In the latter case, beam search may potentially improve the parsing accuracy at the cost of higher computational complexity, i.e. O(Bn²) with a beam width of B. When using soft headword embeddings, there is no need to perform beam search. Moreover, it is straightforward to incorporate parsing history from both directions by using two query components at the cost of O(2n²), which cannot be easily achieved when using beam search. The parsing decision can be made directly based on attention weights from the two query components or further rescored by the maximum spanning tree (MST) search algorithm.

Parsing by Attention with Agreement

For the bi-directional attention model, the underlying probability distributions a^l_t and a^r_t may not agree with each other. In order to encourage agreement, we use a mathematically convenient metric, the squared Hellinger distance H²(a^l_t ‖ a^r_t), for quantifying the distance between these two distributions. For dependency parsing, when the golden alignment ā_t is known during training, we can derive an upper bound on the latent agreement objective as

H²(a^l_t ‖ a^r_t) ≤ √2 √( D(ā_t ‖ a^l_t) + D(ā_t ‖ a^r_t) ),

where D(•‖•) is the KL-divergence. The complete derivation is provided in Appendix A. During optimization, we can safely drop the constant scaler and the square root operation in the upper bound, leading to the following loss function:

ℓ_t = −ā_t^⊤ log(a^l_t ⊙ a^r_t),    (3)

where ⊙ indicates element-wise multiplication. The resulting loss function is equivalent to the cross-entropy loss, which is widely adopted for training neural networks.
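To make the attended memory access and the agreement loss concrete, here is a minimal numpy sketch of equations (1)-(3). The dimensions, the random parameters, and the golden heads are illustrative placeholders, and the GRU recursions that actually produce the m_j and q_t vectors are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, e, d, h = 6, 8, 4, 5            # sentence length and layer sizes (illustrative)

M = rng.normal(size=(n + 1, e))    # memory vectors m_0..m_n (m_0 is ROOT)
C = rng.normal(size=(h, e))        # model parameters C, D, v as defined above
D = rng.normal(size=(h, d))
v = rng.normal(size=h)

def attention(q_t):
    """a_{t,j} = softmax_j( v^T tanh(C m_j + D q_t) ) over the n+1 headwords."""
    s = np.tanh(M @ C.T + q_t @ D.T) @ v       # association scores s_{t,0..n}
    a = np.exp(s - s.max())
    return a / a.sum()

# Stand-ins for the query vectors produced by the two directional components.
A_left = np.stack([attention(rng.normal(size=d)) for _ in range(n)])
A_right = np.stack([attention(rng.normal(size=d)) for _ in range(n)])

# Soft headword embeddings (probability-weighted sums of memory vectors),
# which are fed back into the query GRUs together with the token embeddings.
M_soft_left = A_left @ M                       # shape (n, e)

# Agreement loss (3): cross-entropy of the golden heads under a^l ⊙ a^r.
gold_heads = rng.integers(0, n + 1, size=n)    # hypothetical golden alignment
rows = np.arange(n)
loss = -(np.log(A_left[rows, gold_heads]) + np.log(A_right[rows, gold_heads])).sum()
```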
As we can see, the loss function (3) tries to minimize the distance between the golden alignment and the intersection of the two directional attention alignments at every time step. Therefore, during inference, the headword prediction for the word at time step t can be obtained as argmax_j ( log a^l_{t,j} + log a^r_{t,j} ), seeking agreement between both query components. This parsing procedure is also similar to the exhaustive left-to-right modifier-first search algorithm described in (Covington, 2001), but it is enhanced by an additional right-to-left search with the agreement enforcement. Alternatively, we can treat (log a^l_{t,j} + log a^r_{t,j}) as the score of the corresponding arc and then search for the MST to form a dependency parse tree, as proposed in (McDonald et al., 2005). The MST search is achieved via the Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967), which can be implemented in O(n²) for dense graphs according to Tarjan (1977). In practice, the MST search slows down the parsing speed by 6-10%. However, it forces the parser to produce a valid tree, and we observe a slight improvement in parsing accuracy in most cases.
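A small sketch of the greedy agreement decoding just described; the MST rescoring via Chu-Liu-Edmonds is left out, so this version does not guarantee a well-formed tree:

```python
import numpy as np

def decode_heads(a_left, a_right, eps=1e-12):
    """Greedy agreement decoding.

    a_left, a_right: (n, n + 1) attention matrices from the two query
    components; row t holds a_{t,0..n}, with column 0 for the ROOT symbol.
    Returns the predicted head position for each of the n modifiers.
    """
    scores = np.log(a_left + eps) + np.log(a_right + eps)
    return scores.argmax(axis=1)

# The same `scores` matrix can instead be handed to a maximum-spanning-tree
# search to force the predictions to form a valid dependency tree.
```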
The gradients are computed via the back-propagation algorithm (Rumelhart et al., 1986). Errors of y_t come from the arc labels, whereas there are two sources of error for a_t: one from the headword labels and the other back-propagated from the errors of y_t. We use stochastic gradient descent with the Adam algorithm proposed in (Kingma and Ba, 2015). The learning rate is halved at each iteration once the log-likelihood of the dev set decreases. The whole training procedure terminates when the log-likelihood decreases for the second time. All learning parameters except bias terms are initialized randomly according to the Gaussian distribution N(0, 10⁻²). In our experiments, we tune the initial learning rate with a step size of 0.0002, and choose the best one based on the log-likelihood of the dev set at the first epoch. Empirically, the selected initial learning rates fall in the range of [0.0004, 0.0010] for hidden layer sizes in [128, 320], and tend to be larger when using a smaller hidden layer size, i.e., [0.0016, 0.0034] for hidden layer sizes around 80. The training data are randomly shuffled at every epoch.

Experiments

In this section, we present the parsing accuracy of the proposed BiAtt-DP on 14 languages. We report both UAS and the labeled attachment score (LAS), obtained by the CoNLL-X eval.pl script, which ignores punctuation symbols. The headword predictions are made through the MST search, which slightly improves both UAS and LAS (less than 0.3% absolutely). Overall, the proposed BiAtt-DP achieves parsing accuracy competitive with state-of-the-art parsers on all languages, and obtains better UAS in 6 languages. We also show the impact of using POS tags and pre-trained word embeddings. Moreover, different variants of the full model are compared in this section.

For English, POS tags are obtained using the Stanford POS tagger v3.3.0 (Toutanova et al., 2003), whereas for Chinese, we use gold segmentation and POS tags. When constructing the token embeddings for English and Chinese, both the word form and the POS tag are used. We also initialize E_form with pre-trained word embeddings. For the 12 other languages, we randomly hold out 5% of the training data as the dev set. In addition to the word form and fine-grained POS tags, we use extra features such as lemmas, coarse-grained POS tags, and morphemes when they are available in the dataset. No pre-trained word embeddings are used for these 12 languages.
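The optimization schedule just described (Adam with the learning rate halved once the dev log-likelihood decreases, termination at the second decrease) can be sketched as follows. The model object and its methods are placeholders of our own invention.

```python
import random

def train(model, train_data, dev_data, lr):
    """Schematic training loop: shuffle each epoch, take Adam steps on the
    negative log-likelihood, halve the learning rate at every iteration
    once the dev log-likelihood has decreased, stop at the second decrease."""
    prev_ll = float("-inf")
    decreases = 0
    while True:
        random.shuffle(train_data)          # reshuffle at every epoch
        for sentence in train_data:
            model.adam_step(sentence, lr)   # gradient of the loss in (3)-(5)
        ll = model.log_likelihood(dev_data)
        if ll < prev_ll:
            decreases += 1
            if decreases == 2:
                break                        # second decrease: terminate
        if decreases >= 1:
            lr *= 0.5                        # halving after the first decrease
        prev_ll = ll
```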
Model Configurations

The hidden layer size is kept the same across all RNNs in the proposed BiAtt-DP. We also require the dimension of the token embeddings to be the same as the hidden layer size. Note that we concatenate the hidden layers of two RNNs for constructing m_j, and thus we have e = 2d. The weight matrices C and D respectively project the vectors m_j and q_t to the same dimension h, which is equal to d. For English and Chinese, since the dimension of the pre-trained word embeddings is 300, we use 300 × h as the dimension of the embedding parameters E. For the 12 other languages, we use square matrices for the embedding parameters E. For all languages, we tune the hidden layer size and choose one according to UAS on the dev set. The selected hidden layer sizes for these languages are: 368 (English), 114 (Chinese), 128 (Arabic), 160 (Bulgarian), 224 (Czech), 176 (Danish), 220 (Dutch), 200 (German), 128 (Japanese), 168 (Portuguese), 128 (Slovene), 144 (Spanish), 176 (Swedish), and 128 (Turkish).

Results

We first compare our parser with state-of-the-art neural transition-based dependency parsers on PTB and CTB. For English, we also compare with state-of-the-art graph-based dependency parsers. The results are shown in Table 1 and Table 2. It can be seen that the BiAtt-DP outperforms all other graph-based parsers on PTB. Compared with the transition-based parsers, it achieves better accuracy than Chen and Manning (2014), which uses a feed-forward neural network, and Dyer et al. (2015), which uses three stack LSTM networks. Compared with the integrated parsing and tagging models, the BiAtt-DP outperforms Bohnet and Nivre (2012) but has a small gap to Alberti et al. (2015). On CTB, it achieves the best UAS and similar LAS. This may be because the relation vocabulary size is relatively small compared with the average sentence length, which biases the joint objective to be more sensitive to UAS. The parsing speed is around 50-60 sents/sec, measured on a desktop with an Intel Core i7 CPU @ 3.33 GHz using a single thread. Next, in Table 3 we show the parsing accuracy of the proposed BiAtt-DP on 12 languages in the CoNLL 2006 shared task, including a comparison with state-of-the-art parsers. Specifically, we show the UAS of the 3rd-order RBGParser as reported in (Lei et al., 2014), since it also uses low-dimensional continuous embeddings. However, there are several major differences between the RBGParser and the BiAtt-DP. First, in (Lei et al., 2014), the low-dimensional continuous embeddings are derived from low-rank tensors. Second, the RBGParser uses combined scoring of arcs by including traditional features from the MSTParser (McDonald and Pereira, 2006) / TurboParser (Martins et al., 2013). Third, the RBGParser employs a third-order parsing algorithm based on (Zhang et al., 2014), although it also implements a first-order parsing algorithm, which achieves lower UAS in general. In Table 3, we show that the proposed BiAtt-DP outperforms the RBGParser in most languages except Japanese, Slovene, and Swedish.
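All scores above follow the CoNLL convention of skipping punctuation tokens. The following function is a simplified stand-in of our own for the eval.pl computation, not the official script.

```python
def attachment_scores(gold, pred, is_punct):
    """UAS/LAS in percent.  gold and pred are lists of (head, relation)
    pairs per token; is_punct flags tokens ignored by the evaluation."""
    total = uas = las = 0
    for (gh, gr), (ph, pr), punct in zip(gold, pred, is_punct):
        if punct:
            continue
        total += 1
        if gh == ph:                 # correct head: counts toward UAS
            uas += 1
            if gr == pr:             # correct head and relation: LAS
                las += 1
    return 100.0 * uas / total, 100.0 * las / total
```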
It can be observed from Table 3 that the BiAtt-DP achieves parsing accuracy highly competitive with state-of-the-art parsers. Moreover, it achieves the best UAS for 5 out of the 12 languages. For the remaining seven languages, the UAS gaps between the BiAtt-DP and the state-of-the-art parsers are within 1.0%, except for Swedish. An arguably fair comparison for the BiAtt-DP is the MSTParser (McDonald and Pereira, 2006), since the BiAtt-DP replaces the scoring function for arcs but uses exactly the same search algorithm. Due to space limits, we refer readers to (Lei et al., 2014) for the results of the MSTParsers (also shown in Appendix B). The BiAtt-DP consistently outperforms both parsers, by up to 5% in absolute UAS.

Finally, following (Pitler and McDonald, 2015), we also analyze the performance of the BiAtt-DP on both crossed and uncrossed arcs. Since the BiAtt-DP uses a graph-based non-projective parsing algorithm, it is interesting to evaluate the performance on crossed arcs, which result in the non-projectivity of the dependency tree. The last three columns of Table 3 show the recall of crossed arcs, that of uncrossed arcs, and the percentage of crossed arcs in the test set. Pitler and McDonald (2015) reported numbers on the same data for Dutch, German, Portuguese, and Slovene as in this paper. For these four languages, the BiAtt-DP achieves better UAS than that reported in (Pitler and McDonald, 2015). More importantly, we observe that the improvement in the recall of crossed arcs (around 10-18% absolutely) is much more significant than that of uncrossed arcs (around 1-3% absolutely), which indicates the effectiveness of the BiAtt-DP in parsing languages with non-projective trees.

Ablative Study

Here we study the impact of using pre-trained word embeddings, POS tags, as well as the bi-directional query components on our model. First of all, we start from our best model (Model 1 in Table 4) on English, which uses 300 as the token embedding dimension and 368 as the hidden layer size. We keep those model parameter dimensions unchanged and analyze the different factors by comparing the parsing accuracy on the PTB dev set. The results are summarized in Table 4. Comparing Models 1-3, it can be observed that without using pre-trained word embeddings, both UAS and LAS drop by 0.6%, and without using POS tags in token embeddings, the numbers further drop by 1.6% in UAS and around 2.6% in LAS. In terms of query components, using a single query component (Models 4-5) degrades UAS by 0.7-0.9% and LAS by around 1.0%, compared with Model 2. For Model 6, the soft headword embedding is only used for arc label predictions but not fed into the next hidden state, which is around 0.3% worse than Model 2. This supports the hypothesis about the usefulness of the parsing history information. We also implement a variant of Model 6 which produces one a_t instead of two, by using both q^l_t and q^r_t in (1). It gets 92.44% UAS and 89.26% LAS, indicating that naively applying a bi-directional RNN may not be enough.
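The crossed/uncrossed-arc analysis above requires marking which gold arcs are crossed; a straightforward O(n²) pairwise test suffices. The sketch below is our own.

```python
def crossed_arc_flags(heads):
    """heads[m-1] is the gold head of word m (words are 1-based, 0 = root).
    Returns one flag per arc: True if the arc is crossed by another arc,
    i.e. the two arcs' spans strictly interleave."""
    arcs = [(min(h, m), max(h, m)) for m, h in enumerate(heads, start=1)]
    flags = []
    for a, b in arcs:
        crossed = any(a < c < b < d or c < a < d < b for c, d in arcs)
        flags.append(crossed)
    return flags  # recall is then computed over the crossed/uncrossed subsets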
Related Work

Neural Dependency Parsing: Recently developed neural dependency parsers are mostly transition-based models, which read words sequentially from a buffer into a stack and incrementally build a parse tree by predicting a sequence of transitions (Yamada and Matsumoto, 2003; Nivre, 2003; Nivre, 2004). A feed-forward neural network is used in (Chen and Manning, 2014), where the current state is represented with 18 selected elements, such as the top words on the stack and buffer. Each element is encoded by concatenated embeddings of words, POS tags, and arc labels. Their dependency parser achieves improvements in both accuracy and parsing speed. Weiss et al. (2015) improve the parser using semi-supervised structured learning and unlabeled data. The model is extended to integrate parsing and tagging in (Alberti et al., 2015). On the other hand, Dyer et al. (2015) develop the stack LSTM architecture, which uses three LSTMs to respectively model the sequences of buffer states, stack states, and actions. Unlike the transition-based formulation, the proposed BiAtt-DP directly predicts the headword and the dependency relation at each time step. Specifically, there is no explicit representation of actions or headwords in our model. The model learns to retrieve the most relevant information from the input memory to make decisions on headwords and head-modifier relations.

Graph-based Dependency Parsing: In addition to the transition-based parsers, another line of research in dependency parsing uses graph-based models. Graph-based parsers usually build a dependency tree from a directed graph and learn to score the possible arcs. Due to this nature, non-projective parsing can be done straightforwardly by most graph-based dependency parsers. The MSTParser (McDonald et al., 2005) and the TurboParser (Martins et al., 2010) are two examples of graph-based parsers. The MSTParser formulates parsing as searching for the MST, whereas the TurboParser performs approximate variational inference over a factor graph. The RBGParser proposed in (Lei et al., 2014) can also be viewed as a graph-based parser, which scores arcs using low-dimensional continuous features derived from low-rank tensors, as well as the features used by the MSTParser/TurboParser. It also employs a sampler-based algorithm for parsing (Zhang et al., 2014).

Neural Attention Model: The proposed BiAtt-DP is closely related to the memory network (Sukhbaatar et al., 2015) for question answering, as well as the neural attention models for machine translation (Bahdanau et al., 2015) and constituency parsing (Vinyals et al., 2015b). The way we query the memory component and obtain the soft headword embeddings is essentially the attention mechanism. However, different from the above studies, where the alignment information is latent, in dependency parsing the arc between the modifier and the headword is known during training. Thus, we can utilize these labels for the attention weights. A similar idea is employed by the pointer network in (Vinyals et al., 2015a), which is used to solve three different combinatorial optimization problems.
Conclusion

In this paper, we develop a bi-directional attention model by encouraging agreement between the latent attention alignments. Through a simple and interpretable approximation, we make the connection between latent and observed alignments for training the model. We apply the bi-directional attention model, incorporating the agreement objective during training, to the proposed memory-network-based dependency parser. The resulting parser is able to implicitly capture high-order parsing history without suffering from the high computational complexity typical of high-order graph-based dependency parsing.

We have carried out empirical studies over 14 languages. The parsing accuracy of the proposed model is highly competitive with state-of-the-art dependency parsers. For English, the proposed BiAtt-DP outperforms all graph-based parsers. It also achieves state-of-the-art performance in 6 languages in terms of UAS, demonstrating the effectiveness of the proposed mechanism of bi-directional attention with agreement and its use in dependency parsing.

B UAS Scores of MSTParsers

Numbers in brackets indicate the absolute improvement of the proposed BiAtt-DP over the MSTParsers.

Figure 1: The structure of the BiAtt-DP. The figure only illustrates the parsing process at the time step for "has". Blue and yellow circles are memory and query vectors, respectively. Red and purple circles represent headword probabilities predicted from the corresponding query components. Green circles represent soft headword embeddings. Black arrowed lines are connections carrying weight matrices. ⊗ and ⊕ indicate element-wise operations.

Table 1: Parsing accuracy on the PTB test set. Our parser uses the same POS tagger as C&M (2014) and Dyer et al. (2015), whereas the other parsers use a different POS tagger. Results with † and * are provided in (Alberti et al., 2015) and (Andor et al., 2016), respectively.

Table 2: Parsing accuracy on the CTB dev and test sets.

Table 3: UAS on 12 languages in the CoNLL 2006 shared task (Buchholz and Marsi, 2006); the corresponding LAS are reported in square brackets. The results of the 3rd-order RBGParser are reported in (Lei et al., 2014). Best published results on the same dataset in terms of UAS are among (Pitler and McDonald, 2015), (Zhang and McDonald, 2014), (Zhang et al., 2013), (Zhang and McDonald, 2012), (Rush and Petrov, 2012), (Martins et al., 2013), (Martins et al., 2010), and (Koo et al., 2010). To study the effectiveness of the parser in dealing with non-projectivity, we follow (Pitler and McDonald, 2015) and compute the recall of crossed and uncrossed arcs in the gold tree, as well as the percentage of crossed arcs.

Table 4: Parsing accuracy on the PTB dev set for different variants of the full model. INIT refers to using pre-trained word embeddings to initialize E_form. POS refers to using POS tags in token embeddings. L2R and R2L respectively indicate whether the left-to-right and right-to-left query components are used.

Table 5: UAS scores of 1st-order and 2nd-order MSTParsers on 12 languages in the CoNLL 2006 shared task.
2016-08-09T08:50:54.084Z
2016-08-06T00:00:00.000
{ "year": 2016, "sha1": "0c9211b3c08a32bc4d73d04f3c427c7db5e0fe91", "oa_license": null, "oa_url": "https://doi.org/10.18653/v1/d16-1238", "oa_status": "BRONZE", "pdf_src": "ArXiv", "pdf_hash": "f61257118b196eb19c2975194617f4ec439741a6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225902228
pes2o/s2orc
v3-fos-license
Resolving cosmological singularity problem in logarithmic superfluid theory of physical vacuum A paradigm of the physical vacuum as a non-trivial quantum object, such as a superfluid, opens an entirely new perspective on the origins and interpretations of Lorentz symmetry and spacetime, black holes, cosmological evolution and singularities. Using the logarithmic superfluid model, one can formulate a post-relativistic theory of superfluid vacuum, which is not only essentially quantum but also successfully recovers special and general relativity in the "phononic" (low-momenta) limit. Thus, it represents spacetime as an induced observer-dependent phenomenon. We focus on the cosmological aspects of the logarithmic superfluid vacuum theory and show how the related singularity problem can be resolved in this approach. Introduction It is now a general consensus that the physical vacuum, or a non-removable background, is a non-trivial object whose properties are of utmost importance to study, because it affects the most fundamental notions our physics is based upon, such as space, time, matter, field, and fundamental symmetries. The internal structure of the physical vacuum is still a subject of debate, based on different views and approaches, which generally agree on the main paradigm but differ in details; some introduction can be found in the monographs by Volovik and Huang [1,2]. It is probably Dirac who can be regarded as a forerunner of the superfluid vacuum theory (SVT). As early as the 1950s, he noticed that if space were filled with a medium of a quantum nature, then Michelson-Morley-type experiments would be insensitive to it, unlike in the case of the classical aether, which was abandoned in the 20th century [3]. The reason is that the velocity of the quantum matter would be related to the gradient of its wavefunction's phase, while such phases are non-observable (at least in a space of trivial topology). In other words, such a quantum "aether", unlike its classical counterpart, would create no preferred directions in space; therefore, an observer would see a mostly isotropic universe around. The simplest analogy of this phenomenon would be the s-wavefunction of a hydrogen atom, which is rotationally invariant even though the atom's original (classical) two-body Hamiltonian is not. Dirac's theoretical views were definitely a step in the right direction, but not yet the full story. In particular, he did not explain why this background medium does not slow down celestial bodies moving through it for billions of years, nor how the Lorentz symmetry would emerge in this conventional quantum-mechanical picture. This is where the superfluid vacuum approach takes over. Superfluid vacuum theory is a post-relativistic approach in high-energy physics and classical and quantum gravity, which advocates that the physical vacuum is a superfluid, and all elementary particles are excitations above its ground state; the latter is assumed to be observed as a Bose-Einstein condensate of some sort. The term 'post-relativistic' implies that SVT is generally a non-relativistic theory but contains relativity as a subset, or as a special case or limit with respect to some dynamical value, akin to how the Newtonian theory of gravity turned out to be a special limit of Einstein's theory of general relativity at small values of gravitational fields. As for the notion of a superfluid itself, it is usually understood as a non-relativistic quantum liquid with suppressed dissipative fluctuations and an absence of macroscopic viscosity [4,5,6].
Its laboratory examples include liquid helium-4 below 2.17 K (at normal pressure), known as superfluid helium-4 or the helium II phase [7,8], and "bosonized" fermionic fluids of Cooper pairs of electrons in BCS-type superconductors. In this paper, we are going to describe why and how Lorentz symmetry, relativistic gravity and cosmology occur in a superfluid vacuum theory, and discuss the physical implications thereof.

Relativity and SVT

The easiest way to see how relativity arises in superfluid vacuum theory is through the energy spectrum of excitations of a typical superfluid. If one plots this energy versus momentum for superfluid helium-4, one finds that it has the following distinct shape, predicted by Landau: as the excitation's momentum increases, the energy grows from the origin until it reaches a local maximum (called the "maxon" peak), which is crucial for suppressing the dissipative fluctuations. Then it dives down to a local minimum (called the "roton" regime, for historical reasons); then it climbs up again, ad infinitum. In the regime of small momenta, called the phononic regime, the dispersion relation is approximately linear with respect to momentum, which is the behaviour typical of relativistic particles if one replaces the speed of light with the speed of sound. As a matter of fact, one can still use some sort of relativistic description (by adding extra fields to account for small deviations from the linear law) until the momentum reaches the value corresponding to the "maxon" peak. From there up, the relativistic approximation is no longer robust or natural. If one assumes that the actual physical vacuum has a similar energy spectrum of excitations, then one can project the above-mentioned picture from the condensed-matter realm to the realm of elementary particle physics and gravity, by replacing phonons with photons, the speed of sound with the speed of light, etc. [9]. Moreover, one can further explore this analogy to find the analogs of helium-II phenomena in high-energy physics and quantum gravity. There exists also a more formal way to describe a theory of relativity as a subset of the superfluid vacuum theory. It starts with the mathematical map between inviscid liquids and manifolds of non-vanishing Riemann curvature, usually referred to as the fluid/gravity correspondence [1,2]. Essentially, it means that the propagation of small acoustic perturbations inside an inviscid irrotational barotropic fluid, described by background values of density ρ, pressure P and velocity u, is analogous to the propagation of probe particles along the geodesics of a four-dimensional pseudo-Riemannian manifold whose metric is, in Cartesian coordinates,

ds² ∝ (ρ/c_s) [−(c_s² − u·u) dt² − 2 u·dx dt + dx·dx],   (1)

where c_s = √(∂P/∂ρ) is the propagation speed of fluid oscillations. The metric tensor is defined up to a constant factor whose value is determined by measurement units and boundary conditions. Notice that while the background fluid is essentially non-relativistic, the small perturbations themselves couple to the metric, which treats space and time as a spacetime. If we regard such a fluid as a physical vacuum or a non-removable background, then this metric describes the induced spacetime geometry. This effect should not be confused with the relativistic gravitational effect of an ideal fluid as a source introduced via the stress-energy tensor in the Einstein field equations.
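To make the structure of (1) concrete, the metric components at a single point can be assembled numerically. The sketch below uses the standard acoustic-metric ordering (t, x, y, z) and leaves the overall constant conformal factor at unity, which is our own convention.

```python
import numpy as np

def acoustic_metric(rho, u, c_s):
    """Induced 4x4 metric of eq. (1) at one point, up to a constant factor.

    rho : background density, u : length-3 background velocity,
    c_s : local propagation speed of the fluid oscillations."""
    u = np.asarray(u, dtype=float)
    g = np.zeros((4, 4))
    g[0, 0] = -(c_s**2 - u @ u)   # g_tt
    g[0, 1:] = -u                 # g_ti = -u_i (mixed time-space terms)
    g[1:, 0] = -u
    g[1:, 1:] = np.eye(3)         # flat spatial part
    return (rho / c_s) * g
```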
Instead, for a given metric (1), one can always define the induced matter stress-energy tensor

T̃_μν ≡ (1/κ) [R_μν(g) − (1/2) R(g) g_μν],   (2)

where κ is Einstein's gravitational constant, and R_μν(g) and R(g) are, respectively, the Ricci tensor and scalar curvature corresponding to the induced metric g_μν. Thus, superfluid vacuum theory interprets the Einstein field equations not as differential equations for an unknown metric, but rather as a derivation procedure for the induced stress-energy tensor of the matter to which the small fluctuations and probe particles couple. If an observer operates with those only, then this is the matter he is going to observe. In other words, we reveal two types of observers in the SVT approach. The first type, called the relativistic observer or R-observer, is the one whose measuring apparatus is based on small fluctuations of the superfluid vacuum. This observer sees a relativistic picture, and the Lorentz symmetry is a fundamental symmetry for him. The second type, called the full observer or F-observer, can measure things with objects which can violate the smallness condition. She measures the Bose liquid or condensate, which technically flows in an empty Euclidean space. Note that the symmetry of the latter is no longer relevant, because this empty space is unobservable to an F-observer as long as the vacuum condensate exists as an integral entity described by its wavefunction. Obviously, an F-observer is capable of seeing phenomena an R-observer is unable to; therefore, her picture of reality must be more consistent and free of any divergences or anomalies. A condensed-matter analogy of such a difference would be the so-called sonic black holes in liquids [10], which "exist" in a picture drawn by phonons (sound waves), but not in a picture drawn by photons. However, our current conventional observer is still of the R-type; therefore, the information from F-observer models must be translated into an R-observer's language. The corresponding "dictionary" must be based on the metric (1) and the associated Einstein field equations.

Spacetime induced by background superfluid

Let us further specify the values of the fluid's density, velocity and speed of oscillations in eq. (1). Typically, one deals with a quantum liquid described by a condensate wavefunction obeying a nonlinear wave equation. The latter can be chosen to have the minimal U(1)-symmetric form

iħ ∂_t Ψ = [−(ħ²/2m) ∇² + V_ext(x, t) − F(|Ψ|²)] Ψ,   (3)

where m is the mass of a constituent particle, F(ρ) is a differentiable function of ρ on the positive semi-axis, V_ext(x, t) is an external potential representing a trapping potential or container (we shall neglect it in what follows), and Ψ is a condensate wavefunction which obeys the normalization condition

∫_V |Ψ|² dV = M/m,   (4)

where M and V are the total mass and volume of the liquid. Then the condensate wavefunction can be written in the Madelung form [11],

Ψ = √ρ exp(iS),   (5)

where ρ = ρ(x, t) = |Ψ|² (6) is the fluid density, and S = S(x, t) is a phase which is related to the fluid velocity via

u = (ħ/m) ∇S.   (7)

By substituting the Madelung ansatz into eq. (3) and separating the real and imaginary parts, one obtains, in the leading-order approximation with respect to the Planck constant, the hydrodynamic continuity equation ∂ρ/∂t + ∇·(ρu) = 0 together with an Euler-type flow equation, with the effective sound speed obeying

c_s² ≈ ∓(ħ/m) ρ F′(ρ),   (8)

where a prime denotes a derivative with respect to the argument in brackets; the sign ∓ differentiates between the regimes of stable and unstable flow, and must be chosen in such a way that c_s stays real-valued. The detailed derivation of these formulae can be found in refs. [12,13]. With these formulae in hand, the 4D metric (1) takes the form [12]

ds² ∝ (|Ψ|²/c_s) [−(c_s² − u·u) dt² − 2 u·dx dt + dx·dx],   (9)

where c_s ≈ (ħ/m)^{1/2} |Ψ| √(∓F′(|Ψ|²)).
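The Madelung variables of eqs. (5)-(7) are easy to extract from a sampled wavefunction. Below is a small one-dimensional illustration of our own, using central-difference gradients and ħ = m = 1 units.

```python
import numpy as np

def madelung(psi, dx, hbar=1.0, m=1.0):
    """Decompose a condensate wavefunction sampled on a uniform 1D grid
    into the fluid density rho = |psi|^2 and velocity u = (hbar/m) dS/dx,
    where S is the (unwrapped) phase of psi."""
    rho = np.abs(psi)**2
    S = np.unwrap(np.angle(psi))          # smooth phase S(x)
    u = (hbar / m) * np.gradient(S, dx)   # eq. (7) in one dimension
    return rho, S, u
```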
The value c_s thus becomes the maximum attainable propagation velocity of any fluctuation of the physical vacuum whose quantum wave amplitude is much smaller than |Ψ|. In our case, in the low-momenta ("phononic") limit, c_s approaches c, where c = 2.9979 · 10⁸ m s⁻¹ is a universal constant, which is historically called the speed of light in vacuum. In the framework of superfluid vacuum theory, c_s is the maximal velocity which can be measured by an R-observer.

Why logarithm

The logarithmic fluid is a quantum liquid described by eq. (3) with

F(ρ) = b ln(ρ/ρ̄),   (11)

where b and ρ̄ are real-valued parameters; the former is also called the nonlinear coupling. One can show that the two signs of b mark two different phases our fluid can be in. According to equations (8), its macroscopic equation of state has an ideal-fluid form, P ∝ ρ, in the leading-order approximation with respect to the Planck constant. Although the logarithmic nonlinearity itself has been studied since the works by Rosen and by Bialynicki-Birula and Mycielski [14,15] (there have also been extensive mathematical studies; to mention just very recent literature, [16,17,18,19,20,21,22,23]), the logarithmic fluid approach itself was proposed relatively recently [12], as a further development and generalization of the non-perturbative theory of quantum gravity with a logarithmically nonlinear wave equation [24,25]. Currently, there exist at least two independent arguments for why it is this type of fluid which describes the physical vacuum. The first argument is of a statistical nature and closely related to the theory of many-body open quantum systems. One can show that the logarithmic nonlinearity universally occurs in leading-approximation models of a large class of condensate-like matter in which the interaction potentials between constituent particles are substantially larger than their kinetic energies [26,27]. According to that approach, the nonlinear coupling must be linearly related to the thermodynamic values of the fluid,

b ∼ T, T_Ψ,   (12)

where T and T_Ψ are the thermal and quantum-mechanical temperature, respectively; the symbol "∼" means "a linear function of". The quantum-mechanical temperature is defined as a thermodynamical conjugate of the quantum information entropy function S_Ψ = −∫_V |Ψ|² ln(|Ψ|²/ρ̄) dV, which was proposed and studied by Everett, Hirschman and others; some bibliography can be found in ref. [26]. In particular, this entropy function directly emerges from equations (3) and (11) when averaged using a Hilbert space's inner product [24,28,29]. It is thus not a big surprise that the logarithmic model turns out to be robust for describing the microscopic properties of the superfluid component of liquid helium-4: it analytically reproduces with high accuracy three main observable facts, the Landau spectrum of excitations, the structure factor, and the speed of sound at normal pressure, while using only one non-scale parameter to fit the excitation spectrum's experimental data [5,6]. According to the introductory section above, it is natural to expect the physical vacuum to be condensate-like matter, composed of a superfluidic condensate and quantum fluctuations thereof. Therefore, the logarithmic fluid should be a robust model here too. The second argument is not related to any statistics but relies on the correspondence principle. The latter implies that in the low-momenta limit, superfluid vacuum theory has to recover Einstein's relativistic postulates.
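Anticipating the correspondence argument developed next, one can verify numerically that the choice (11) is precisely the one for which the leading-order sound speed (8) loses its density dependence. This is a small self-contained check of our own, with illustrative parameter values and the sign chosen for the stable regime:

```python
import numpy as np

hbar, m, b, rho_bar = 1.0, 1.0, -2.5, 1.0   # illustrative values
rho = np.linspace(0.1, 10.0, 50)            # a range of densities

F_prime = b / rho                            # F(rho) = b*ln(rho/rho_bar) => F'(rho) = b/rho
c_s_sq = -(hbar / m) * rho * F_prime         # eq. (8) with the sign giving c_s^2 > 0

# the density dependence cancels: c_s^2 = (hbar/m)*|b| for every rho
assert np.allclose(c_s_sq, (hbar / m) * abs(b))
```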
One of these postulates, the constancy of c, implies that c_s should not depend on density, at least in the leading order with respect to the Planck constant. Recalling eq. (8), this results in the differential equation

∓(ħ/m) ρ F′(ρ) ≈ c² = const,

whose solution is the logarithmic function (11), as one can easily check. Similarly to eq. (8), the approximation sign indicates a leading-order approximation with respect to ħ. The solution of the equation above implies that for an R-observer the physical meaning of the nonlinear coupling is dynamical, while for an F-observer it is quantum-thermodynamical, cf. eq. (12). In other words, thermal processes inside the superfluid vacuum, such as a change of temperature, heavily influence the dynamical processes therein, and hence the structure of the induced spacetime observed by an R-observer. This should have profound implications for many high-energy and strong-gravity phenomena, including those occurring in cosmology. It also means that one can use the relativistic approach (with tweaking by adding additional fields) for a large range of energies, all the way up to the "maxon" peak threshold, which can be as high as hundreds of TeV and above. These intermediate relativistic models can still provide valuable understanding of various fundamental phenomena, such as the mass generation mechanism and the non-zero extent of particles [30,31]. However, drastically new physics will step in when one manages to reach the "maxon" threshold and thus go into an essentially non-relativistic regime; vacuum Cherenkov radiation and superluminal boom are some examples of phenomena which will occur [9,25].

Superfluid vacuum cosmology

Let us study here the case when the background logarithmic superfluid is in a state described by the wavefunction Ψ₀(x, t), while quantum fluctuations are disregarded. If we assume the simplest case, in which the phase of Ψ₀ depends linearly on the spatial coordinates, then its gradient is a constant three-vector. Therefore, the mapping (9) gives us the induced 4D geometry

ds² ∝ (|Ψ₀|²/c_b) [−(c_b² − u⁽⁰⁾·u⁽⁰⁾) dt² − 2 u⁽⁰⁾·dx dt + dx·dx],   (16)

where c_b is constant, according to the above; we can assume here that c_b ≈ √(ħ|b|/m). From the viewpoint of an R-observer, the value of u⁽⁰⁾ is not observable and can be set to any value by an appropriate coordinate transformation. This confirms our remarks on isotropy made in the introductory section above. Obviously, for manifolds with the line element (16), the Weyl tensor vanishes. Therefore, they are of type O according to the Petrov classification [32]. This is the class to which all Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime metrics belong, including those which describe worlds expanding with acceleration. A subtle technical point is that in the SVT approach the induced spacetime metrics come out written in conformally flat coordinates [33,34], which thus requires additional coordinate transformations to present the metrics in the form more commonly used in relativistic cosmology nowadays. By using eq. (16) and the conformal rescaling technique, one can derive the induced four-dimensional stress-energy tensor corresponding to this metric; see ref. [12] for details. This stress-energy tensor strongly resembles a theory with a non-minimally coupled scalar field, which can be interpreted as a dilaton or inflaton. One can also demonstrate this process by moving in the opposite direction: it was shown that a logarithmic nonlinearity appears in the field equations when performing an ADM-type reduction of dilatonic gravity [35].

Conclusion

In conclusion, let us demonstrate how the cosmological singularity problem gets resolved in the SVT approach.
In the previous section, it was shown that if the superfluid vacuum is represented by a logarithmic fluid in a state with a phase linearly dependent on the radius-vector, then our R-observer sees himself embedded into an FLRW-type universe. This means that the expansion of the Universe is a phenomenon whose existence and interpretation depend on which type of observer we are talking about: an R-observer sees the expanding 4D spacetime, while an F-observer observes a non-relativistic superfluid flow in a 3D space. Interestingly, superfluid vacuum cosmology offers its own explanation for the temperature of the cosmic microwave background (CMB): it is the temperature of photon-type excitations of the superfluid vacuum, which are close to being in thermal equilibrium with the background superfluid itself. Such a conjecture immediately explains, without involving specific models or fine-tuning of the initial conditions, why the CMB temperature is so close to the temperature scale of the quantum liquids we know of, which is about two kelvin. Furthermore, the metric (16) obviously becomes singular in a domain where the factor |Ψ₀(x, t)|² approaches zero. From the viewpoint of an R-observer, this looks like a serious issue: one cannot impose Cauchy-type initial conditions at a singular point, therefore the whole dynamics is ill-defined. From the viewpoint of an F-observer, however, nothing drastic happens. In a quantum-mechanical theory, be it a theory of point-like particles or of Bose liquids and condensates, wavefunctions' amplitudes can take zero values. This can happen, for example, at the boundary of a system, or even at the origin if a wavefunction is odd. Wavefunctions can also take asymptotically zero values if a system occupies an infinite-size region of space. As for infinite values of |Ψ|², these are usually forbidden by normalization conditions, like the one given by eq. (4), which ensure a probabilistic or condensate interpretation of a wavefunction. To summarize, cosmological singularities "exist" only in the incomplete picture seen by an R-observer, whose measuring facilities are restricted to small excitations of the vacuum, as discussed in section 2. This illustrates and reaffirms the nearly obvious fact that Einstein's theory of relativity, like any other viable physical theory we have dealt with, has a finite applicability domain. Any physical processes in the vicinity of, or resulting from, spacetime singularities must be described by means of post-relativistic theories and notions.
2020-02-24T10:34:57.906Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "887dd30f0f32f3270d761f73435ab1f200b29319", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1557/1/012038", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "57e98b38051877be35b18bc3747d79c6fb9deb38", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
229720355
pes2o/s2orc
v3-fos-license
Localization of Scattering Objects Using Neural Networks The localization of multiple scattering objects is performed using scattered waves. An up-to-date approach, neural networks, is used to estimate the corresponding locations. In the scattering phenomenon under investigation, we assume known incident plane waves, fully reflecting balls with known diameters, and measurement data of the scattered wave on one fixed segment. The training data are constructed using the simulation package μ-diff in Matlab. The structure of the neural networks, which are widely used for similar purposes, is further developed. A complex locally connected layer is the main component of the proposed setup. With this and an appropriate preprocessing of the training data set, the number of parameters can be kept at a relatively low level. As a result, using a relatively large training data set, the unknown locations of the objects can be estimated effectively. Introduction Determining the location and material properties of objects is an important practical problem in a number of measurement systems. For this, a general approach is to detect how the objects scatter certain incoming waves. One can apply either sonic or electromagnetic waves, depending on the experimental setup. This general framework includes radar and sonar systems, seismic methods in geophysics, and a number of methods in medical imaging, such as ultrasonic methods and computed tomography. In many cases, only the reflection of certain pulse waves is measured. At the same time, a scattered periodic wave carries more information, which can make it possible to perform a quantitative analysis of the scattered objects. This is used, e.g., in ground penetrating radar systems and through-the-wall imaging for monitoring buildings, assisting archaeological explorations and seismic surveys. In this case, given point-source wave pulses or plane waves are emitted. Typically, time-harmonic waves are used, often with multiple frequencies. In realistic cases, even the detection of the scattered waves is non-trivial, since one can only detect them in a very limited region, in contrast to conventional computed tomography or magnetic resonance imaging. The corresponding mathematical models, leading to inverse problems, also contain a number of challenges for theoretical studies. Accordingly, different numerical methods were developed in order to assist and fine-tune radar systems and ultrasound systems, or to evaluate various geological measurements. All of these conventional methods use a kind of regularization, since the "raw" mathematical equations corresponding to the physical laws lead to unstable and ill-posed problems. Even the regularized methods, which become stable and possess unique solutions, are computationally rather expensive and somewhat arbitrary, owing to the choice of the regularization. In any case, they lead to involved models, and the corresponding simulations demand extensive computational resources. Neural networks offer a modern approach to replacing these conventional methods [1,2]. Having a large number of direct measurements or simulations at hand, one can try to automate the solution and develop a relatively simple localization procedure using neural networks. In this case, the time-consuming component is obtaining sufficiently large training data and training the network. This can be identified with making an immense number of observations and, based on these, carrying out a long calibration procedure.
Having a trained network at hand, the corresponding inverse problem can be solved quickly, even in real time. Accordingly, in the last decade, a number of neural networks were developed for specific tasks to assist or replace conventional sensing and measuring.

Statement of Problem

We focus on the case where a scattered plane wave is analyzed to find the locations of multiple objects. In order to mimic a realistic situation, the scattered waves were only detected in a limited region. In concrete terms:
• Two uniform unit disks with reflecting boundaries were placed in free space.
• Given plane waves were applied from several directions.
• The scattered waves were measured only on the bottom edge of a square-shaped domain, where the phenomenon was simulated.
• The position of the obstacles had to be determined using the scattered waves.

Figure 1: Absolute value of the scattered wave in a simulation with given obstacles. An upward planar incident wave was applied with k = 4π.

Mathematical Model

The conventional direct mathematical model of wave scattering is the following partial differential equation (PDE from now on) for the unknown complex-valued function u:

Δu + k²u = 0  in R² \ Ω⁻,
u = −u_inc  on ∂Ω⁻,
lim_{r→∞} √r (∂u/∂r − iku) = 0,   (1)

where |u| corresponds to the amplitude, Ω⁻ denotes (all of) the obstacles (the union of the two disks in Figure 1) with the surface ∂Ω⁻, and k denotes the wavenumber of the scattered wave. In the first line, the Helmholtz equation is given for the propagation, the boundary condition in the second line corresponds to the reflecting obstacles, and in the third line the usual Sommerfeld far-field radiation condition is formulated. For more details, we refer to the review paper [3]. For fixed obstacles Ω⁻, this leads to a well-posed problem, which is usually solved with an integral representation using the Green's function of the operator Δ + k². This is also how the training data for our approach are generated. However, we investigate the corresponding inverse problem: the solution u is known in Ω₀ ⊂ R² \ Ω⁻, and we have to determine Ω⁻. In our case, Ω₀ is neither an open subset of R² \ Ω⁻ nor a closed boundary: it is only one side of the computational domain. In this way, no theory ensuring well-posedness can be applied to our inverse problem. Nevertheless, despite all of these difficulties, methods based on mathematical analysis have been developed; see, e.g., [4].

Neural Network Approach

A recent research direction for investigating inverse problems corresponding to (1) is given by neural networks. Here, using an enormous number of location-scattered wave "pairs", the network learns to associate a pair of locations to a certain scattered wave. A comprehensive work explaining the background and the basics, and dealing with earlier achievements, can be found in [5,6]. We refer to these works for the standard structure of the corresponding neural networks. In the last few years, these methods were developed further and applied to real-life cases: deep neural networks were constructed to detect complex shapes using scattered waves in [7,8], and the method is also applicable in the case of non-linear waves [9]. A recent review on this topic with a number of further references can be found in [10]. Our geometrical setup and the equations in (1) are also related to the problem of sound source localization, which is likewise investigated using neural networks; see, e.g., [11,12].
Detecting the structure of entire domains is one of the most important and challenging problems in this area, and it has also recently been tackled by applying neural networks [13]. Obviously, the structure of the neural network has a significant impact on the results. The starting point of such constructions is mostly a well-working neural network structure, often one used for image recognition problems. Such general structures for the present purposes are shown in [5,6]. In [14], a new direction for neural networks is suggested: efficient shallow networks are constructed, combined with special activation functions and parallel backpropagation gradient algorithms, in order to keep the number of unknown parameters at a relatively low level. Our approach here is the same: using a neural network with a special structure, involving a moderate number of parameters. Besides speeding up the corresponding simulations, this can prevent overfitting and lead to stable and reliable computing processes. Another novelty of our approach is that the scattered waves are only detected on a narrow section, such that we use only a part of the information that arises from the scattering. Of course, this realistic case has its own limitation: complex structures can hardly be recognized in this way. Recent results that are closely related to our work can be found in [7,8]. Here, the authors developed neural networks to detect very complex shapes. At the same time, they assumed only one object at a fixed position. Moreover, the scattered wave was detected all around the scattering object. We did not apply these assumptions; at the same time, only the locations of simple disks were detected.

Obtaining Training and Validation Data

In our case, the training data consist of a set of vector pairs (F_j, G_j), j = 1, 2, ..., J. Here G_j = (G_{j1}, G_{j2}) ∈ R² × R² corresponds to a geometrical setup, which can be identified with the two coordinate pairs of the disk midpoints, while F_j ∈ R¹⁶¹ denotes the dimensionless wave intensity at our observation points, as shown in Figure 3. Shortly and formally: the task is to predict G_{j1} and G_{j2} using F_j. In order to obtain a satisfactory set of training data for this, we considered each geometrical setup of disks with different integer midpoint coordinates, i.e., the elements of the set {(G_{j1}, G_{j2}) : G_{j1} ≺ G_{j2}}, where ≺ denotes the lexicographical ordering. This means 735 different cases, which proved to be sufficiently many. Note that taking much more training data could not further increase the accuracy of our prediction. On the contrary, in several runs it made the learning process exhibit a more oscillating accuracy. For the pairs of disks given by G, we simulated the scattered waves using the simulation package µ-diff in Matlab [15]. In this framework, it is also possible to choose different boundary conditions and point sources for the incident waves. Figures 1 and 2 depict sample simulated waves. The dimensionless wave intensity is measured on the bottom edge of the computational domain at 161 gridpoints. For a geometrical setup G_j, this gives F_j. The choice of 161 is a good balance: it means approximately eight measurement points per dimensionless wavelength of 1/2, which is a minimum for identifying a scattered wave. Using more gridpoints could lead to extremely long simulation times and an unnecessarily large number of parameters.
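As a sanity check on such simulated data, the direct problem (1) for a single sound-soft disk has a classical separation-of-variables solution; the short script below (our own, with illustrative parameter choices) evaluates it with SciPy's Bessel and Hankel functions.

```python
import numpy as np
from scipy.special import jv, hankel1

def scattered_field_single_disk(x, y, k=4*np.pi, a=1.0, n_terms=40):
    """Scattered field of the incident plane wave e^{ikx} hitting one
    sound-soft disk of radius a centered at the origin.  The Dirichlet
    condition u_inc + u = 0 on r = a fixes the series coefficients."""
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    u = np.zeros(np.shape(r), dtype=complex)
    for n in range(-n_terms, n_terms + 1):
        c_n = -(1j**n) * jv(n, k*a) / hankel1(n, k*a)
        u = u + c_n * hankel1(n, k*r) * np.exp(1j*n*theta)
    return u   # |u| is the amplitude measured on the bottom edge
```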
A total of 92 percent of the generated data set was used for training, and the remaining 8 percent for validation purposes. In concrete terms, we used J = 676 samples for training and a validation data set with 59 samples, which were chosen randomly. We used sixteen different plane waves, with angles kπ/16, k = 0, 1, ..., 15, and a dimensionless wavelength of 1/2. To summarize, the raw training data we worked with can be given by an array of size 16 × 676 × 161.

Preprocessing of Data

The data coming from the simulations can be highly oscillating, such that a small shift results in completely different values at a fixed observation point. To get rid of this main problem, we considered the observation points with maximal amplitude, and the measurement was then interpolated and extrapolated from these locations, in order to obtain an estimated amplitude function at each of the 161 measurement points. The corresponding process was executed using built-in subroutines in Matlab. Figure 3 shows this interpolation step in a sample case. One may expect that the simplified data set can have a shorter representation. Accordingly, we applied a max pooling layer before sending the data to the convolution layer, in order to push down the number of parameters in the subsequent computation. It turned out that this radical simplification does not harm the prediction: we could successfully localize the disks using 11 data points instead of the original 161. One can observe in Figure 3 that the real simplification occurs in the previous step of the preprocessing. All of this can lead to a loss of data. From a practical point of view, we may then obtain very similar transformed data sets that correspond to completely different geometric setups of the scattering objects. This problem will also be discussed later. This last step could also correspond to a layer within the neural network, but in this discussion we consider the above preprocessed data as the input of the network. Additionally, by applying this procedure, noise, which pollutes most real observations, can be filtered out. For testing purposes, we added Gaussian noise to all of our training and simulation data. The amplitude of the noise was one-fifth of the standard deviation of the original plane waves. The top of Figure 4 shows such a pair. Additionally, in this example, one can compare the preprocessed data arising from the original simulated data and from the corresponding noisy data in the bottom of Figure 4.
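The two preprocessing steps (envelope estimation from the local maxima, followed by pooling down to 11 values) can be mimicked in a few lines. The following NumPy/SciPy fragment is our own reconstruction, since the exact Matlab subroutines and peak criteria are not specified.

```python
import numpy as np
from scipy.signal import find_peaks

def preprocess(signal, n_out=11):
    """Estimate the amplitude envelope of a length-161 measurement by
    interpolating between its local maxima, then max-pool to n_out values."""
    s = np.abs(signal)
    peaks, _ = find_peaks(s)                    # points with maximal amplitude
    grid = np.arange(len(s))
    envelope = np.interp(grid, peaks, s[peaks]) # interpolate/extrapolate everywhere
    chunks = np.array_split(envelope, n_out)    # pooling windows of nearly equal size
    return np.array([c.max() for c in chunks])
```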
We present here one specific network, which delivered the most accurate estimation of the locations of the unknown disks. Finding appropriate convolution windows is an important building block in neural networks. In concrete terms, a convolution window w = [w₁, w₂, ..., w_n] transforms the vector v = [v₁, v₂, ..., v_N] with N > n to w ∗ v ∈ R^{N−n+1}, given by

(w ∗ v)_j = Σ_{i=1}^{n} w_i v_{j+i−1},  j = 1, 2, ..., N − n + 1.   (2)

This window is associated with some "feature", which is found by the neural network during the learning process by optimizing the components (or weights) of w. Usually, a couple of such convolution windows are included in the networks to capture all of the characteristic features in the training set. The main improvement in our construction is that we have used a so-called two-dimensional locally connected layer. In this case, for each output component in (2), we use different weights. Formally, the convolution window now becomes w_LC = [w_{1,1}, w_{1,2}, ..., w_{1,n}, w_{2,1}, w_{2,2}, ..., w_{2,n}, ..., w_{N−n+1,1}, w_{N−n+1,2}, ..., w_{N−n+1,n}], and instead of (2), we have

(w_LC ∗ v)_j = Σ_{i=1}^{n} w_{j,i} v_{j+i−1},  j = 1, 2, ..., N − n + 1.   (3)

We defined a couple of these more complicated windows in order to capture the features of the scattered wave. Moreover, they are varied according to the angle of the incident plane waves. The motivation for this is that the observation points are located on the bottom edge; therefore, depending on the angle of the incident plane waves, different kinds of scattering occur. The description in the forthcoming points can easily be followed in Figure 5, where the overall structure of our network is shown. The connections between the layers are represented with dashed and straight lines, according to their operations. When information is just passed through the layers (no parameters are added), we used dashed lines. In the other cases, continuous lines correspond to operations with parameters. For a clear visualization, the lengths of the layers are scaled down so that they fit in one figure. The size (dimension) of the different layers is always displayed.

The Layers

The first layer collects the information from the scattered waves arising from the given incident plane waves with 16 different angles. In concrete terms, after preprocessing, for each geometric setup, we have an input of size 11 × 16. In the first hidden layer, for each incident wave direction, we allowed 12 different two-dimensional locally connected convolution windows of size 8 × 4. The application of each such window in (3) to an input vector of length 11 results in a vector of length 11 − 4 + 1 = 8. An arrow in Figure 5 corresponds to one of the convolutions in (3). This is performed in 12 cases, such that we obtain a matrix of size 8 × 12. The total size of the first hidden layer is 16 × 8 × 12, because we have 16 incident waves. In the second hidden layer, we collect the information given by the first hidden layer. This does not increase the complexity of the model, since we did not use any weights or unknown parameters for this layer. This results in a layer that consists of one vector of length 1536. The following hidden layers are all dense. In these layers, each component is affected by the entire data of the previous layer. In practice, dense layers are used when the information in the data of the previous layer has to be compressed into a smaller one. In our neural network, we achieved this by using dense layers of lengths 50, 40, and 8, respectively.
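Eq. (3) is easy to state in code. The following NumPy function is our own minimal implementation of a single locally connected window, with the example dimensions taken from the text.

```python
import numpy as np

def locally_connected(v, W):
    """Eq. (3): a locally connected window, where each output position j
    has its own weight row W[j] of length n (unlike an ordinary
    convolution (2), which reuses one weight vector everywhere).

    v : input vector of length N;  W : weight matrix of shape (N-n+1, n)."""
    n = W.shape[1]
    return np.array([W[j] @ v[j:j + n] for j in range(W.shape[0])])

# dimensions from the text: input of length 11, window length 4 -> 8 outputs
v = np.random.rand(11)
W = np.random.rand(8, 4)   # one of the 12 windows of size 8 x 4
assert locally_connected(v, W).shape == (8,)
```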
Neither the number nor the sizes of these layers are indicated a priori. This choice is the result of a series of experiments. Obviously, as the output we used a dense layer of size 4, which represents the four coordinates of the unknown midpoints of the scattering objects.

Parameter Reduction

The transformation between the layers is governed by parameters which, combined with the above structure, completely describe the action of the neural network. In concrete terms, we counted the number of parameters layer by layer. Even though the total seems to be an overly large number of parameters, in the practice of neural networks it is still a moderate value. Observe that the major component of this sum corresponds to the application of the first dense layer. The contribution of the second one is still larger than that of the locally connected layer. The so-called overfitting effect is the real danger of using an unnecessarily large number of parameters. When overfitting occurs, the prediction becomes consistently worse after a certain number of iterations. Therefore, "unnecessary" parameters should be eliminated. This is done automatically by using a dropout step [17] in given layers. As it turned out, the optimal choice was a radical cut of the parameters by halving them in the first dense layer, such that, indeed, we used 48,509 of them. Note that this step may also result in a loss of information. Accordingly, the application of further dropout steps led to less accurate predictions.

The Activation Function

The data we are using are bounded and positive. Therefore, we applied the ReLU activation function in each step but the final one. This real function is identically 0 for negative inputs and the identity for positive ones. Other activation functions were also tried, but they delivered less accurate results. In the literature, for similar neural networks [18], a sigmoid activation function was applied in the last step. Our experiments confirm that this choice is optimal. Note that the complexity and the computational time are not affected by trying different activation functions.

Loss Function and Optimization

When working with neural networks, the difference between the predicted and the real values must be minimized. We chose the mean squared error as a common measure of the prediction error. The accuracy of the neural network is measured by this value. In practice, we can only minimize the loss function on the training set and validate the result on the validation data set. A significant difference between the training and validation losses indicates the presence of overfitting. In this case, the model will be extremely inaccurate for everything except the training dataset. To sum up, the quality of our predictions can be characterized by the validation loss. The conventional optimization processes in neural networks are stochastic gradient methods [19], from which we chose the ADAM algorithm [20][21][22]. Because inverse problems are extremely unstable, a small change in the parameters can result in a completely different output. Accordingly, after calibration of the parameters, a relatively small learning rate of 0.0005 proved to be optimal. Two further parameters had to be chosen in the ADAM algorithm; we took β₁ = 0.9 and β₂ = 0.8. The number of global iteration steps, called epochs, in which the full training data set is used, was also experimentally determined. The process was terminated when the validation loss did not decrease any more.
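Putting the pieces together, the architecture and training setup described above can be approximated by the following Keras sketch. This is our reconstruction, not the published notebook: LocallyConnected2D is only available in older Keras/TensorFlow releases, the exact placement of the dropout step is our guess, and the sigmoid output presumes midpoint coordinates rescaled to [0, 1].

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Reshape((16, 11, 1), input_shape=(16, 11)),
    # 12 locally connected windows of width 4 per incident direction:
    # every output position carries its own weights, as in eq. (3)
    layers.LocallyConnected2D(12, kernel_size=(1, 4), activation="relu"),
    layers.Flatten(),                        # 16 * 8 * 12 = 1536 values
    layers.Dropout(0.5),                     # the parameter-"halving" dropout step
    layers.Dense(50, activation="relu"),
    layers.Dense(40, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(4, activation="sigmoid"),   # four midpoint coordinates
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0005, beta_1=0.9, beta_2=0.8),
    loss="mse",                              # mean squared error of the midpoints
)
```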
Results

The neural network was implemented using the Keras library in Python. With the parameters shown above, after around 100 epochs the validation loss no longer decreases significantly. Therefore, running many further epochs would not increase the quality of our prediction. The training and validation losses are strongly correlated, and they remain close to each other. This is a common indicator of a successful learning process. The oscillation of the loss function accompanies the stochastic optimization algorithm. The validation dataset is smaller and was not used in the minimization procedure, so it exhibits larger oscillations. Figure 6 shows this behavior, which is typical for neural networks.

Prediction of the Locations

In order to measure the quality of our prediction, we computed the average squared error of the obstacle midpoints. In Figure 6, one can observe that this value goes below 0.002 during the optimization process. Figures 7 and 8 display some concrete midpoint predictions. The larger disks represent the real size of the scattering objects, while the smaller ones show their midpoints, in order to demonstrate the accuracy of the predictions. The deviations of the predicted and real midpoints are shown next to the graphs. The total (real) distance between the centers is denoted shortly by dist. We used the short notations dev_b and dev_g for the deviations between the predicted and real centers, where b and g refer to the blue and the green centers, respectively. These are summed up to obtain the variable dev_total. In order to analyze the simulation results further, we calculated the distances between the scattered waves on the bottom edge. In accordance with our optimization procedure, the distance was taken to be the squared distance between the two data vectors (of length 161). In this way, we intended to detect the problematic cases, where the scattered waves are very close to each other. Figure 9 shows the distance distribution of the waves. As one can see, a lot of relatively small distances occur, which clearly indicates the unstable nature of the present scattering problem. One would expect that almost identical scattered waves correspond to very similar geometries of the scattering objects. Surprisingly, this is not the case, as shown in Figures 10 and 11. A natural attempt to enhance the efficiency of the neural network is to remove these pairs from the training data set. Interestingly, this did not significantly improve the accuracy, pointing again to the rather complex nature of the problem. We also investigated the effect of noisy data in the simulations. Because our preprocessing could filter the additive noise, the input of the neural network was only very slightly affected by it; see the bottom of Figure 4. Therefore, we obtained the same accuracy for the midpoint prediction. We could not even separate the final results arising from the noisy and noiseless data sets, due to the randomness of the splitting into training and validation data and the built-in stochastic optimization method ADAM.

Comparison with Other Approaches

The main components of our construction were an appropriate preprocessing of the data and an appropriate locally connected layer in the network. In order to demonstrate the importance of these, we performed two additional series of simulations. For testing purposes, we used noiseless data in all cases. In the first series, we did not apply any preprocessing.
In this case, the loss function remained 2-3 times larger than that of the original network, as shown in Figure 12. At the same time, since we had an input of size 16 × 161 (instead of 16 × 11), the computational time was about ten times longer than in the original case. In the second series, we substituted the locally connected layer with a convolutional layer using sliding convolution windows. Here, the computational costs are at the same level as in the original case, but the loss function becomes about twice the original value. Figure 13 shows the simulation results. The following observations also point to the power of our approach:

• Our network could process even the oscillating full data set with an acceptable loss.
• Using a conventional convolutional network, the training loss lies above the validation loss, which suggests that new variables should be included; the locally connected layer serves exactly this purpose.

The simulation codes with detailed explanations, comments, and figures on the results can be found in the Supplementary Materials. They are given as Python notebooks, where each of the main steps can be run independently. All of the relevant information on these is collected in the file readme.txt, and the package is completed with the data sets that were used for our simulations.

Computing Details

To point out the computational efficiency of our method, we mention that a simple laptop with an Intel i3 processor and 4 GB RAM was used for the simulations. In the following, we summarize the computing time of the main steps in the simulations.

• Computing time of the simulation data: approx. 20 h.
• Computing time for the data transformation: approx. 10 s.
• Training the neural network and computing the prediction: 2-3 min.

Conclusions

A neural network based approach was presented for inverse scattering problems with restricted observation of the scattered data. Including a complex locally connected layer in the network seems to strike the right balance: it ensures sufficient complexity while using a moderate number of parameters. A feasible preprocessing of the data was the other cornerstone: we assisted the network by pre-mining information. In this way, the number of parameters could be reduced, which not only sped up the simulations but also made them more reliable by avoiding overfitting. Moreover, it has the capability to diminish the effect of noise. This study provides a general framework, establishing that deep neural networks with an appropriate preprocessing and locally connected layers are well suited to this task. At the same time, the present work has its limitations: in a robust method, the number and shape of the scattering objects and their material properties are unknown. For a corresponding extension, one should first detect the number of objects with a simple network and generate an enormous amount of training data for different shapes and locations.
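As a footnote to the comparison above (the second series of simulations), the tested substitution amounts to swapping a position-specific layer for a weight-sharing one; a minimal sketch, with assumed filter count and window size, is:

```python
# Conv1D shares one filter bank across all window positions, whereas
# LocallyConnected1D learns separate weights per position. The filter
# count (8) and window size (3) are assumed placeholders.
from tensorflow.keras import layers

local = layers.LocallyConnected1D(8, 3, activation="relu")  # unshared weights
conv = layers.Conv1D(8, 3, activation="relu")               # shared weights
```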
Identification of Novel Protein Targets of Dimethyl Fumarate Modification in Neurons and Astrocytes Reveals Actions Independent of Nrf2 Stabilization*

Dimethyl fumarate (DMF) is a reactive fumarate ester used in the treatment of relapsing remitting multiple sclerosis; however, the neuroprotective mechanisms of DMF action are incompletely understood. The results uncover novel DMF-modified cysteine residues in neurons and astrocytes, including cytoskeletal proteins whose modulation by DMF may alter the response to neurodegenerative cues and myelination.

Highlights
• Dimethyl fumarate covalently modifies cysteine residues in neurons and astrocytes.
• Cofilin-1, tubulin and collapsin response mediator protein 2 (CRMP2) are targets.
• DMF-modified cofilin-1 reduces actin-severing ability, preserving filamentous actin.

The fumarate ester dimethyl fumarate (DMF) has been introduced recently as a treatment for relapsing remitting multiple sclerosis (RRMS), a chronic inflammatory condition that results in neuronal demyelination and axonal loss. DMF is known to act by depleting intracellular glutathione and modifying thiols on the Keap1 protein, resulting in the stabilization of the transcription factor Nrf2, which in turn induces the expression of antioxidant response element genes. We have previously shown that DMF reacts with a wide range of protein thiols, suggesting that the complete mechanisms of action of DMF are unknown. Here, we investigated other intracellular thiol residues that may also be irreversibly modified by DMF in neurons and astrocytes. Using mass spectrometry, we identified 24 novel proteins that were modified by DMF in neurons and astrocytes, including cofilin-1, tubulin and collapsin response mediator protein 2 (CRMP2). Using an in vitro functional assay, we demonstrated that DMF-modified cofilin-1 loses its activity and generates less monomeric actin, potentially inhibiting its cytoskeletal remodeling activity, which could be beneficial in the modulation of myelination during RRMS. DMF modification of tubulin did not significantly impact axonal lysosomal trafficking. We found that the oxygen consumption rate of N1E-115 neurons and the levels of proteins related to mitochondrial energy production were only slightly affected by the highest doses of DMF, confirming that DMF treatment does not impair cellular respiratory function. In summary, our work provides new insights into the mechanisms supporting the neuroprotective and remyelination benefits associated with DMF treatment in addition to the antioxidant response by Nrf2.
Fumarate esters have been utilized for the treatment of autoimmune psoriasis for two decades (1). More recently, a dimethyl fumarate (DMF) formulation has been developed for the treatment of relapsing-remitting multiple sclerosis (RRMS), a chronic inflammatory condition resulting in neuronal demyelination and axonal loss (2). Two randomized, double-blind, placebo-controlled trials (DEFINE and CONFIRM) with DMF demonstrated that treatment of RRMS results in sustained clinical and neuroradiological efficacy, and a reduced progression toward disability (3-5). As a result of these positive outcomes, this DMF formulation (marketed as Tecfidera®) was approved in the US in March 2013. DMF therapy is associated with beneficial immunomodulatory and neuroprotective effects; however, the complete mechanism of action remains unknown. In the current study we propose that DMF chemically modifies cysteine residues on a range of intracellular protein targets. Early studies demonstrated that fumarate esters mediated anti-inflammatory effects through the modulation of T helper (Th) cells (6,7), inducing a shift from the Th1 profile toward a favorable Th2 profile and the production of IL-4 and IL-5 (8), as well as an increase in type II dendritic cells (DCs) (9). Mechanistically, the immunomodulatory and neuroprotective effects are mediated in part through the cysteine modification of both reduced glutathione and reactive thiols on Kelch-like ECH-associated protein 1 (Keap1) by the fumarate esters (10,11). The depletion of reduced glutathione and modification of Keap1 result in the stabilization of nuclear factor (erythroid-derived 2)-related factor 2 (Nrf2), a transcription factor regulating the cellular response to oxidative stress via the transcription of antioxidant response element (ARE) genes. This leads to an increase in proteins such as heme oxygenase 1 (HO-1), NADPH-quinone oxidoreductase 1 (NQO1) and glutamate cysteine ligase (GCL), facilitating the replenishment of glutathione and a sustained defense against oxidative stress (12,13). Although Nrf2 activation remains the primary described mechanism of action of DMF (14-16), recent studies demonstrate that DMF is therapeutically beneficial for the treatment of multiple sclerosis models in Nrf2 knockout mice (17).
Upon ingestion, DMF may be rapidly converted to monomethyl fumarate (MMF, generated after removal of a methyl group), and MMF agonism of the hydroxycarboxylic acid receptor 2 (HCAR2) also appears to be responsible for some of the positive immunologic effects of DMF therapy (18). A comprehensive quantitative proteomic approach (isotopic tandem orthogonal proteolysis activity-based protein profiling, isoTOP-ABPP), focused on the immune system, has recently identified ~40 DMF-sensitive cysteines in primary human T cells, confirming the reactivity of this potent electrophile with immunomodulatory proteins including inhibitor of κB kinase (IKKβ), tumor necrosis factor-α-induced protein 3 (TNFAIP3) and IL-16 (19). DMF decreased cell surface levels of the IL-2 receptor to a similar extent in Nrf2+/+ and Nrf2−/− mouse splenocytes, again suggesting that the modulation of T cell activation by DMF involves additional protein targets. Blewett et al. demonstrated that the modification of a CXXC motif on PKCθ by DMF disrupted its interaction with CD28 at the immunological synapse, preventing T cell activation and IL-2 production (19). We have recently described the increased succination of protein thiols by endogenously produced fumarate in the brainstem of NDUFS4 knockout mice (20), a model of the mitochondrial disease Leigh syndrome, prompting us to consider further the impact of succination in neurons. The chemical modification of proteins by fumarate yields S-(2-succino)cysteine (2SC) (21), and succination is also increased in adipocytes under diabetic conditions (22-25) as well as in fumarase-deficient tumors (26,27), where fumarate also leads to the succination of Keap1 thiols (28). We have used several molecular and chemical experimental approaches to increase intracellular fumarate levels, including DMF treatment (29), and we have described the reactivity of tubulin thiols with DMF in vitro (30). When using DMF to rapidly increase protein succination in cells in vitro, we had predicted that the methyl groups of DMF would be hydrolyzed by esterases upon entry into the cell, and that fumarate would react with protein thiols and be detected using a specific anti-2SC antibody (22). However, the lack of visible protein succination upon DMF or MMF treatment led us to consider that the continued presence of the methyl groups was preventing detection of the modified proteins with the anti-2SC antibody. It is therefore significant to note that in vitro DMF treatment does not replicate the modification generated by endogenous fumarate accumulation, such as that observed in the fumarase-deficient tumors described above. Consequently, we designed a brief saponification procedure to facilitate immunodetection and observed a wide range of succinated protein bands by immunoblotting (29). In the current investigation we first confirmed increased modification of proteins by DMF using this procedure, followed by a proteomic approach to identify and confirm the sites of modification of novel DMF-modified proteins in neurons and astrocytes. We propose that DMF exerts direct effects in neural cell populations, independent of the activation of the peripheral immune response, to modulate disease progression in multiple sclerosis. This is supported by evidence that DMF and its metabolites accumulate in the brain and induce differential changes in gene expression in regions such as the cortex, cerebellum and hippocampus (17,31).
We further propose that the mechanism of action of DMF in neural cell populations is not centered solely upon the activation of the Keap1/Nrf2 antioxidant system; instead, other abundant targets of DMF modification may impact a broad range of cellular functions that also contribute to therapeutic efficacy. We performed select functional analyses on several of the novel protein targets we identified to determine if DMF modification altered well-characterized functions of these proteins; in some cases, the reactive thiols identified on these proteins have been studied in the context of neurodegenerative processes. These observations better define the extensive action of DMF in vitro and provide more insight on neuroprotective mechanisms that may be exploited to improve MS treatment through the design of targeted therapeutics.

Primary Neuron Isolation and Culture
All animal use described in this and other sections was consistent with the guidelines issued by the National Institutes of Health and was approved by the University of South Carolina Institutional Animal Care and Use Committee. Primary neurons from newborn rat brain cortices were isolated and cultured using an adaptation of the method described by Brewer (32). Briefly, postnatal day 1 rats were sacrificed by decapitation, the brains were aseptically dissected and the cortices were separated from the rest of the brain in ice-cold Hibernate A medium (Cat # A1247501, Gibco/Thermo Fisher Scientific, Waltham, MA) containing 2% (v/v) B-27 supplement (Cat # 17504044, Gibco/Thermo Fisher Scientific) and 0.5 mM glutamine (Cat # 25030149, Gibco/Thermo Fisher Scientific). The tissue was minced into fragments of about 1 mm³ with a scalpel and subjected to digestion with 2 mg/ml papain (Cat # LS003120, Worthington Biochemical Corp., Lakewood, NJ) in the supplemented Hibernate A medium for 20 min at 30°C in a shaker incubator set at 100 rpm. After thorough trituration through a fire-polished Pasteur pipette, the tissue was allowed to settle for 5 min and the supernatant was carefully layered on top of a discontinuous OptiPrep (Cat # D-1556, Sigma-Aldrich) gradient prepared in Hibernate A medium; the layers contained 35, 25, 20 and 15% OptiPrep. The gradient was centrifuged at 800 × g for 15 min at room temperature, and layers 1 (15% OptiPrep) and 2 (20%) were discarded. Layer 3 (25%) was collected, and a 5-fold volume of Neurobasal A medium (Cat # 10888022, Gibco/Thermo Fisher Scientific) containing 0.5 mM glutamine and 2% B-27 supplement was added. After centrifugation at 500 × g for 5 min, cells were resuspended in Neurobasal A medium containing 0.5 mM glutamine, 2% B-27 supplement, and 5 ng/ml bFGF (Cat # 13256029, Invitrogen/Thermo Fisher Scientific), counted, and plated on 24-well plates pretreated with 0.01% poly-L-lysine (Cat # P4707, Sigma-Aldrich) at a density of 200,000 cells/well. Fifty percent of the medium was replaced every third day, with the addition of 5 µM AraC (Cat # BP2512100, Fisher Scientific, Thermo Fisher Scientific, Waltham, MA) from DIV 3 to inhibit glial proliferation. On DIV 8, cells were left untreated or treated for 24 h with 10 µM or 100 µM dimethyl fumarate (DMF, Cat # 242926, Sigma-Aldrich) prepared in Dulbecco's PBS (DPBS, Cat # 21316003, Corning Cellgro, Manassas, VA) and filtered.
On DIV 9, medium was removed, cells were rinsed 3 times with DPBS and collected after the addition of 250 µl radioimmunoprecipitation assay (RIPA) lysis buffer [50 mM Tris-HCl (Cat # BP152-5, Fisher Scientific, Thermo Fisher Scientific), 150 mM NaCl (Cat # S7653, Sigma-Aldrich), 1 mM EDTA (Cat # ED2SS, Sigma-Aldrich), 0.1% Triton X-100 (Cat # T9284, Sigma-Aldrich), 0.1% SDS (Cat # BP166-5, Fisher Scientific, Thermo Fisher Scientific), 0.5% sodium deoxycholate (Cat # D6750, Sigma-Aldrich), pH 7.4], with the addition of 2 mM diethylenetriaminepentaacetic acid (Cat # D6518, Sigma-Aldrich) and a protease inhibitor mixture (Cat # P8340, Sigma-Aldrich). Homogenization was performed by pulse sonication at 2 watts using a Model 100 sonic dismembrator (Thermo Fisher Scientific) for 30 s before resting on ice for 30 min in lysis buffer as described before (30). Protein in the lysates was precipitated with 9 volumes of cold acetone for 10 min on ice. After centrifugation at 3000 × g for 10 min and removal of the acetone, the protein pellet was resuspended in 150 µl RIPA buffer. The protein content in the lysates was determined by the Lowry assay (33).

N1E-115 Cell Culture
N1E-115 cells (subclone N1E-115-1 neuroblastoma cells) were obtained from Sigma-Aldrich (Cat # 08062511). The cells were expanded in nondifferentiation medium (NDM): 90% DMEM (Cat # 12430-054, Gibco/Thermo Fisher Scientific, containing 25 mM glucose, no pyruvate, 25 mM HEPES, 4 mM glutamine) and 10% FBS. At 80% confluence, the cells were differentiated into neurons in the presence of 2% FBS and 1.25% dimethyl sulfoxide (DMSO, Cat # 32434, Alfa Aesar, Thermo Fisher Scientific) in DMEM for 5 days (34). In addition to the assessment of the neuronal phenotype by light microscopy, the detection of synaptophysin protein levels confirmed successful differentiation. During the final 24 h of differentiation the cells were treated with 10-100 µM DMF, and protein was collected as described for primary neurons.

Lentiviral Transduction of 3T3-L1 Fibroblasts
The lentiviral vectors were prepared by the University of South Carolina Viral Vector Facility. Briefly, TRC2 Fh1 shRNA (clone TRCN0000246831) or MISSION TRC2 pLKO.5-puro nonmammalian shRNA control plasmids (Cat # SHC202, Sigma-Aldrich) were used to generate the lentiviral vectors, which also contained a puromycin resistance gene. Fifteen micrograms of vector plasmid, 10 µg psPAX2 packaging plasmid (Cat # 12260, Addgene, Cambridge, MA), 5 µg pMD2.G envelope plasmid (Cat # 12259, Addgene) and 2.5 µg pRSV-Rev plasmid (Cat # 12253, Addgene) were transfected into 293T cells. The filtered conditioned medium was collected and stored at −80°C. 3T3-L1 fibroblasts (Cat # CL-173™, ATCC, Manassas, VA) were incubated overnight with 150 µl of filtered conditioned medium containing Fh1 shRNA or control lentivirus. Successfully transduced fibroblasts were selected using 1 µg/ml puromycin (Cat # P9620, Sigma-Aldrich). The selected fibroblasts were propagated in the presence of puromycin until confluent and harvested in RIPA buffer as described above. Successful knockdown of fumarase expression was determined by immunoblotting, and fumarate levels were determined by GC-MS as described previously (20).

Saponification of Fumarate Esters
Sixty micrograms of protein from control and DMF-treated cell lysates was incubated with 80% DMSO, 6 mM potassium hydroxide (KOH, Cat # 484016, Sigma-Aldrich), and 1 mM EDTA at room temperature for 30 min, with vortexing at 5 min intervals.
The pH was adjusted to 7, and the protein was precipitated with 90% acetone (Cat # BDH1101-4LG, VWR, Radnor, PA) before being resuspended in 40 µl RIPA buffer. The pH was again adjusted to 7 before gel electrophoresis and immunoblotting.

One-dimensional PAGE and Western Blotting
Western blotting was performed as described previously, after separation of the proteins by SDS-PAGE (22,23). For protein identification purposes, gels were stained with Coomassie Brilliant Blue R (Cat # 27816, Sigma-Aldrich) following electrophoresis to allow band isolation and mass spectrometry (see below). In some cases, membranes were stripped with 62.5 mM Tris, pH 6.8, containing 2% SDS and 0.7% 2-mercaptoethanol (Cat # M6250, Sigma-Aldrich) for 20 min at 65°C before reprobing with a different antibody.

Protein Identification from SDS-PAGE Gel Bands by LC-MS/MS
To identify the sites of fumarate ester modification, 60 µg of protein from primary neurons, 200 µg of protein from primary astrocytes or 120 µg of protein from differentiated N1E-115 neurons were resolved by SDS-PAGE, and the gels were stained with Coomassie Brilliant Blue R. The visible protein bands were excised from the gels and subjected to in-gel digestion with trypsin (the gel was cut into segments based on the intensity of the Coomassie stain and the protein bands were excised). Briefly, after destaining, the proteins were reduced with 10 mM dithiothreitol (Cat # V3155, Promega, Madison, WI) and alkylated with 170 mM 4-vinylpyridine (Cat # V3204, Sigma-Aldrich). Protein digestion was carried out overnight at 37°C in the presence of 500 ng sequencing grade modified trypsin (which cleaves C-terminal to lysine and arginine residues; Cat # V5280, Promega) in 50 mM ammonium bicarbonate (Cat # 09830, Sigma-Aldrich). After gel extraction, the peptides were analyzed in a blinded manner on a Dionex Ultimate 3000 LC system (Thermo Scientific) coupled to a Velos Pro Orbitrap mass spectrometer (Thermo Scientific). The LC solvents were 2% acetonitrile (Cat # 34851, Sigma-Aldrich)/0.1% formic acid (Cat # 85178, Pierce, Thermo Scientific) (Solvent A) and 80% acetonitrile/0.1% formic acid (Solvent B); the water used for these solvents was LC-MS grade (Cat # 39253, Honeywell International Inc., Morris Plains, NJ). Peptides were first trapped on a 2 cm Acclaim PepMap-100 column (Thermo Scientific) with Solvent A at 3 µl/min. At 4 min the trap column was placed in line with the analytical column, a 75 µm C18 stationary-phase LC PicoChip Nanospray column (New Objective, Inc., Woburn, MA). The peptides were eluted with a gradient from 98%A:2%B to 40%A:60%B over 30 min, followed by a 5 min ramp to 10%A:90%B that was held for 10 min. The Orbitrap was operated in data-dependent acquisition (DDA) MS/MS analysis mode and excluded all ions below 200 counts. Following a survey scan (MS1), up to 8 precursor ions were selected for MS/MS analysis. All spectra were obtained in the Orbitrap at 7500 resolution. The DDA data were analyzed using Proteome Discoverer 1.4 software with the SEQUEST algorithm against the uniprot_ref_mouse database (2014-10-03 version, 52,474 proteins) or the uniprot_ref_rat database (2011-5-11 version, 39,765 proteins), with XCorr validation >2 (+2) or >2.5 (+3). An allowance was made for 2 missed cleavages following trypsin digestion. No fixed modifications were considered.
The variable modifications of methionine oxidation (M[ox]), proline hydroxylation (P[ox]), cysteine pyridylethylation (C[PE], +105.058), cysteine succination by fumarate (C[2SC], +116.011), cysteine modification by monomethyl fumarate (C[MMF], +130.026) or cysteine modification by dimethyl fumarate (C[DMF], +144.042) were considered, with a mass tolerance of 15 ppm for precursor ions and a mass tolerance of 10 ppm for fragment ions. The results were filtered with a false discovery rate of 0.01 for both proteins and peptides (Percolator node). A minimum of 8 unique peptides was reported for all proteins identified (maximum 106 peptides). For all identifications, the spectra were manually inspected to confirm the identity of the proposed DMF-modified Cys-containing peptides. Any low-quality spectra or incorrect identifications were discarded before performing MS/MS analyses on select masses of interest from the high-quality protein identifications.

Select MS/MS Analyses
The samples were reanalyzed to target the expected DMF-modified Cys-containing peptide masses from the data obtained in DDA mode; further MS/MS analyses were used to monitor select modified peptide masses of interest. The spectrometer repeatedly acquired an MS1 spectrum followed by the desired MS/MS spectra (CID fragmentation in the ion trap, all spectra in the Orbitrap at 7500 resolution). These MS/MS spectra were then averaged over the entire LC peak, yielding a spectrum superior to that obtained by DDA (where the acquisition time was split looking for many peptides at once). The CID-MS/MS data were inspected in Proteome Discoverer 1.4 software for the masses of interest, with either cysteine modification by monomethyl fumarate (C[MMF], +130.026) or cysteine modification by dimethyl fumarate (C[DMF], +144.042) being considered based on the expected modification from the DDA run. The variable modifications of methionine oxidation (M[ox]), proline hydroxylation (P[ox]) or cysteine pyridylethylation (C[PE], +105.058) were considered. The MS/MS spectra were inspected again using Thermo Xcalibur 2.2 software. Manual sequencing of the spectra was used to confirm the sequence and modification site of the peptides. The mass spectrometry analyses constitute a Tier 3 measurement, and the proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (35) partner repository with the data set identifier PXD008314. The file names in the PRIDE data reflect the protein identified in the original DDA run from a gel band.

DRG Culture and Fluorescence Time-Lapse Microscopy
Dorsal root ganglion (DRG) neurons were isolated from rat sciatic nerve as described previously (36). Forty-eight hours after sciatic nerve crush to induce a regenerative response, adult rat DRG neurons were plated for 16-24 h onto poly-D-lysine (Cat # P6407, Sigma-Aldrich)- and laminin (Cat # 11243217001, Sigma-Aldrich)-coated German glass coverslips and maintained in Ham's F12 medium (Cat # 11765-047, Gibco, Thermo Fisher Scientific) supplemented with 10% horse serum (Cat # 16050-122, Gibco, Thermo Fisher Scientific). Neurons were exposed to 500 nM LysoTracker Red (Cat # L7528, Invitrogen, Thermo Fisher Scientific) for 5 min before imaging. Coverslips were transferred into fresh medium containing 25 mM HEPES (Cat # 15630080, Gibco, Thermo Fisher Scientific), pH 7.4, and 1% OxyFluor (Cat # OF-0005, Oxyrase, Inc., Mansfield, OH) in a water-heated microscope stage warmed to 37°C.
Cells expressing a relatively low level of LysoTracker Red were selected for time-lapse imaging using an Axiovert 200 inverted microscope (Carl Zeiss, Inc.). Fluorescent images were acquired every 0.3 s for 2 min using a Plan-Apo 63×/1.2 W/0.17 water objective. Kymographs were generated from the time-lapse movies using NIH ImageJ software. The kymographs were generated such that the direction toward the cell body was always to the right, so lines that sloped toward the right at any point with a net displacement of >5 µm were categorized as retrograde organelles. Lines that sloped toward the left >5 µm at any time during the recording interval were considered anterograde organelles. Lines that zigzagged were categorized as bidirectional, and lines that showed <5 µm lateral displacement in any direction during the recording interval were categorized as static. A total of fifteen axons per experimental group in three independent experiments were used for quantification purposes, and the number of organelles in each category of movement, expressed as % of total organelles, was used to construct a box-and-whiskers graph (Fig. 4B). The data were further analyzed by calculating the speed (in µm/s) and run length (in µm) for the anterograde and retrograde motile events; results were expressed as mean ± S.E. (Fig. 4C). Finally, the speed and run length for anterograde and retrograde motile events were grouped into 4 different intervals, transformed into %, and expressed as mean ± S.E. (Fig. 4D).

Cofilin Activity Assay
Cofilin activity was determined by measuring its ability to sever actin filaments, as described by Yonezawa et al. (37). Briefly, aliquots of 7.5 µg human recombinant cofilin (containing Tris, NaCl, sucrose and dextran; Cat # CF01, Cytoskeleton Inc., Denver, CO) were dissolved in water (final concentrations: 10 mM Tris-HCl, pH 8.0, 10 mM NaCl, 5% sucrose, 1% dextran); sodium diethylenetriaminepentaacetate (Cat # D6518, Sigma-Aldrich) was added to 100 µM and Tris(2-carboxyethyl)phosphine hydrochloride (Cat # 20490, Thermo Fisher Scientific) to 250 µM, and the aliquots were incubated overnight at room temperature with a 5-fold molar excess of dimethyl fumarate (DMF), or with vehicle. The following day, aliquots of 15 µg rabbit muscle actin (Cat # AKL99, Cytoskeleton Inc.) dissolved in ice-cold General Actin Buffer (5 mM Tris-HCl pH 8.0, 0.2 mM CaCl2; Cat # BSA01001, Cytoskeleton Inc.) supplemented with 0.2 mM ATP (Cat # BSA04, Cytoskeleton Inc.) were polymerized into F-actin by addition of 1/10 volume of Actin Polymerization Buffer (100 mM Tris-HCl pH 7.5 containing 20 mM MgCl2, 500 mM KCl, 10 mM ATP and 50 mM guanidine carbonate; Cat # BSA02, Cytoskeleton Inc.), followed by 1 h incubation at room temperature. At this point, the pH of the F-actin aliquots was adjusted to 8.0 with an excess of 10 mM Tris-HCl before the addition of the cofilin aliquots that had been incubated overnight with or without DMF, and the incubation continued for another 30 min at room temperature. The samples were then centrifuged at 150,000 × g for 1 h at 25°C, with deceleration set without brakes. After separation of the supernatants (containing depolymerized actin monomers) from the pellets (containing F-actin), fractions of supernatants and pellets containing the equivalent of 3 µg actin and 1.5 µg cofilin in the initial mix were resolved by SDS-PAGE, and gels were stained with Coomassie Brilliant Blue R (n = 4 replicates per experimental group).
Images of the gels were captured using a Gel Doc XR+ scanner (Bio-Rad Laboratories, Inc.), and the integrated OD × area value of the bands was analyzed with Image Lab Software V5.2.1 (Bio-Rad Laboratories, Inc.). For each protein (actin and cofilin), the band integrated OD in supernatants and pellets was expressed as % of the total integrated OD (supernatant band + pellet band), grouped by treatment, and then reported as the mean ± S.E. Additional control experiments included F-actin samples without added cofilin, or F-actin samples adjusted to pH 6.8 (to inhibit cofilin severing activity on F-actin) before cofilin addition.

Ubiquitin C-Terminal Hydrolase L1 (Uchl1) Activity Assay
Uchl1 deubiquitinase activity was assessed using recombinant Uchl1 protein (Cat # 50690-M07E, Sino Biological, Beijing, China) and the substrate Ubiquitin-Rhodamine110-Glycine (Cat # U-555, Boston Biochem, Cambridge, MA). Uchl1 protein (0.1 µg) was preincubated with 0, 25, 50, or 100 µM DMF for 24 h prior to the addition of 0.1 µM fluorogenic ubiquitin rhodamine in a deubiquitination buffer (50 mM Tris-HCl, 150 mM NaCl, 5 mM DTT). Fluorescence was measured at 535 nm (following excitation at 485 nm) over a 2 h period using a Tecan SAFIRE spectrofluorimeter. The initial reaction velocities were analyzed, and the results were averaged for each experimental group and expressed as the mean ± S.E. (n = 5 per condition analyzed).

Measurement of Oxygen Consumption Rate (OCR)
N1E-115 cells were seeded on V7 cell culture microplates coated with 0.2% gelatin (Cat # G1890, Sigma-Aldrich) at a density of 10,000 cells/well. After 3 days in culture, the cells were differentiated for 5 days as described above and treated with 0, 10, or 50 µM DMF for the last 24 h (n = 6/group). A Seahorse XF-24 extracellular flux analyzer (Agilent Technologies, Inc., Santa Clara, CA) was used to measure the oxygen consumption rate (OCR), using XF Assay Medium (Cat # 102365-100, Agilent Technologies, Inc.) supplemented with 25 mM glucose (Cat # G8270, Sigma-Aldrich) (38). After measurement of basal respiration, oligomycin (5 µg/ml, Cat # O4876, Sigma-Aldrich), carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP, 0.5 µM, Cat # C2920, Sigma-Aldrich), and rotenone (3 µM, Cat # R8875, Sigma-Aldrich) plus antimycin A (4 µM, Cat # A8674, Sigma-Aldrich) were added sequentially to determine ATP production/proton leak, spare respiratory capacity and nonmitochondrial respiration. After completion of the assay, the medium was removed and the cells were immediately washed 3 times with cold PBS. The plate was stored at −70°C before the measurement of the total protein content to normalize individual measurements. The results were averaged for each experimental group and expressed as the mean ± S.E.

Experimental Design and Rationale
The purpose of these experiments was to identify novel targets of DMF-mediated electrophilic modification of cysteine residues in neurons and astrocytes in vitro, in an attempt to better understand the biological targets and mechanism of action of this drug. Fumarate ester-derived succination will only be present in the DMF-treated cells. For experiments using primary rodent cultures, at least 3-6 rodent pups were used to generate cells for primary culture. A minimum of three replicate treatment plates/wells (untreated controls versus DMF-treated) were used, and each separate experiment was reproduced with new cells one to four times.
In some cases, immunoblotting for increased heme oxygenase-1 was used to confirm a biological effect of DMF treatment on the cells. Protein from individual cell culture replicates was separated by electrophoresis, and peptides extracted from parallel excised gel bands were pooled in order to obtain enough material for confirmation of the site of modification. At least three technical repeats (LC-MS/MS analyses) were conducted in order to confirm the protein identification and site of modification. All other analyses (including respiration analyses and monitoring of lysosomal trafficking) were performed with a minimum of 3 independent biological replicates per group, with some up to 8 replicates per group (n = 3-8). The Uchl1 deubiquitinase activity was studied with n = 5 per group. Data are summarized throughout as mean ± S.E. and are plotted using SigmaPlot 11 software (Systat Software, Inc., San Jose, CA) and Prism 4 (GraphPad Software, La Jolla, CA). Statistical analyses were performed using SigmaPlot 11 and Prism 4. Differences between more than two groups were analyzed using one-way ANOVA with either the Holm-Sidak or Tukey's post-test. When two groups were compared, the unpaired Student's t test was used. In all cases, p < 0.05 was considered statistically significant.

RESULTS

DMF-Induced Protein Modification in Neural Cells
Although the immunomodulatory and neuronal benefits of DMF are often attributed to the succination of Keap1 and activation of the antioxidant response element (ARE), we predicted that a wider range of protein thiols might also be modified by DMF or its primary metabolite monomethyl fumarate (MMF), because we had observed that this occurs in adipocytes in vitro (29). In order to detect succinated proteins in neurons treated with DMF for 24 h, we employed a procedure that we had developed previously, using alkaline hydrolysis to remove the ester and permit the immunological detection of protein succination (supplemental Fig. S1B) (29). In the absence of ester hydrolysis there was limited detection of succinated proteins in rat primary neurons using the anti-2SC antibody, with only one band at ~50 kDa showing a significant increase in intensity following 100 µM DMF treatment (Fig. 1A, lanes 7-9). This indicated that at least one or both of the methyl groups had not been removed by intracellular esterases and was preventing interaction with the anti-2SC antibody that recognizes the S-(2-succino)cysteine epitope (22). To ensure that the succinated proteins detected were solely of neuronal origin, we differentiated N1E-115 neuroblastoma cells to a neuronal phenotype (confirmed by increased synaptophysin protein content) (20) and treated these with DMF for 24 h. The hydrolysis of the ester in the presence of KOH facilitated the detection of a large number of succinated proteins (Fig. 1B, last panel, lanes 5 and 6), similar to what we have observed previously in adipocytes following fumarate ester treatment (29). Fibroblasts in which fumarase had been knocked down using a lentiviral shRNA approach to increase endogenous fumarate levels (39) were used as a positive control for succination (Fig. 1B, lane 2). The intensity of succinated proteins in the DMF-treated neurons versus the positive control indicated that fumarate esters readily enter cells and react with a wide range of protein thiols.
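As an aside, the group comparisons used throughout these results (one-way ANOVA with a Holm-Sidak post-test, significance at p < 0.05, as described under Experimental Design and Rationale) could be sketched in Python as follows; the replicate arrays are invented for illustration, and the post-test is approximated by Holm-Sidak-corrected pairwise t tests.

```python
# Hypothetical sketch of the statistics described above; not the authors' code.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

g_control = np.array([1.00, 0.95, 1.05, 0.98])
g_dmf10 = np.array([1.10, 1.08, 1.15, 1.12])    # e.g. 10 uM DMF replicates
g_dmf100 = np.array([1.40, 1.38, 1.45, 1.36])   # e.g. 100 uM DMF replicates

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(g_control, g_dmf10, g_dmf100)

# Pairwise t tests with Holm-Sidak correction as the post-test
pairs = [(g_control, g_dmf10), (g_control, g_dmf100), (g_dmf10, g_dmf100)]
p_raw = [stats.ttest_ind(a, b).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm-sidak")
print(p_anova, list(p_adj), list(reject))  # p < 0.05 taken as significant
```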
Because DMF is known to modify thiols in Keap1, leading to the induction of ARE-derived proteins such as heme oxygenase 1 (HO-1), we next confirmed that DMF treatment significantly induced HO-1 protein levels (Fig. 1C). In order to identify the novel protein targets in DMF-treated neurons, we used LC-MS/MS to analyze distinct gel bands representing electrophoretically separated proteins. The peptides obtained were analyzed to determine whether modification of protein thiols by either DMF (C[DMF]) or the de-methylated metabolites MMF (C[MMF]) and fumarate (C[2SC]) had occurred. This proteomic approach allows targeted confirmation of the exact sites of modification, and it does not require the base hydrolysis of the ester that can also result in a partial loss of protein. As noted in Table I, all of the proteins identified were modified by DMF or MMF, rather than 2SC, suggesting that both of these fumarate esters reacted with intracellular proteins more rapidly than they could be converted to fumarate. In primary rat neurons we confirmed the identity of 4 modified proteins, and in N1E-115 neurons we confirmed a total of 15 protein subunits modified by either DMF or MMF. Because the N1E-115 cultures were devoid of any glial cells that may be present in primary neuronal cultures, the identified proteins reflect true neuronal targets of fumarate ester modification. Because fumarate esters were previously shown to induce Nrf2 in glial cells in vivo (13), we also investigated additional protein targets of DMF-mediated protein modification in primary astrocyte cultures (supplemental Fig. S2A confirms the enrichment of glial fibrillary acidic protein (GFAP)-positive cells). Table I confirms the detection of 11 modified protein subunits in primary rat astrocytes after a 24 h treatment with up to 100 µM DMF. Overall, this targeted approach confirmed the novel identification of 24 distinct protein subunits in both neurons and astrocytes that are directly modified by either DMF or MMF.

DMF Modification Affects Cofilin-1 Functionality
The chemical modification of several cytoskeletal proteins was observed in all cell types examined (Table I). In astrocytes this included abundant cytoskeletal proteins such as glial fibrillary acidic protein (GFAP) and vimentin, as well as cofilin-1, a dynamic regulator of actin polymerization. Cofilin-1 modulates the actin cytoskeleton by depolymerizing filamentous (F) actin, generating monomeric actin that can be used to reorganize the actin cytoskeleton in response to cellular dynamics. To confirm the initial observation that Cys139 of cofilin-1 is succinated by MMF in astrocytes in vitro, we performed additional selected reaction monitoring and detected the +3 charge state of the modified peptide (Fig. 2A). Because 100% modification of the peptide by fumarate esters was not expected, we also detected the pyridylethylated version of the same peptide (supplemental Fig. S2B, +3 charge state, 613.6218 m/z). Importantly, this cysteine residue is conserved across rat, mouse and human sequences. To determine if the succination of Cys139 impacted its activity as a regulator of actin dynamics, we examined the pH-sensitive depolymerization activity of cofilin-1 on F-actin. Cofilin binds to F-actin in a 1:1 ratio below neutral pH, but does not sever the actin filaments (37). In contrast, at pH 8.0 the presence of cofilin results in actin severing and the production of monomeric actin.
Fig. 2B confirms that F-actin remains polymerized in the absence of cofilin, and in the presence of cofilin at pH 6.8 (F-actin is detected in the pellet fraction (P)). The adjustment of the pH to 8.0 facilitates F-actin depolymerization and the detection of monomeric actin in the supernatant (S). Cofilin preincubated with DMF in vitro (500 µM, equivalent to DMF:cofilin cysteines, 5:4) was unable to fully sever F-actin at pH 8.0 (Fig. 2C, 2D and 2E), resulting in a 29.3% decrease in actin-severing activity. Supporting this, there was a greater proportion of actin (DMF: 53.49 ± 0.60% versus CONT: 34.20 ± 2.12%, n = 4, p < 0.01) and cofilin (DMF: 52.94 ± 0.84% versus CONT: 27.23 ± 3.25%, n = 4, p < 0.01) present in the pellet fraction (P) when the DMF-modified cofilin was used (Fig. 2C, 2D and 2E). The specific modification of Cys139 in these in vitro cofilin preparations was also confirmed separately following CID (data available via the PRIDE repository). These data suggest that the chemical modification of Cys139 on cofilin decreases its ability to dynamically regulate actin depolymerization.

FIG. 2. Modification of cofilin impairs the depolymerization of actin filaments. A, MS/MS spectrum from astrocyte protein extracts after DMF treatment showing the MMF-succinated Cys139 of cofilin-1 in the peptide HELQANC[MMF]YEEVKD. B, Confirmation that the cofilin severing effect on F-actin is pH-dependent. Samples of cofilin (7.5 µg, lanes 3-6) were incubated with F-actin (15 µg) in a buffer at pH 6.8 (lanes 3-4) or 8.0 (lanes 5-6); tubes containing actin only at pH 8.0 (lanes 1-2) were included as negative controls. After centrifugation, SDS-PAGE separation and Coomassie blue staining, the distribution of actin and cofilin was studied in the pellet after centrifugation (P) and the supernatant (S). Only the mix incubated at pH 8.0 showed cofilin severing activity on F-actin (lanes 5-6; note that the distribution of actin and cofilin is similar in the P and S fractions); cofilin was inactive at pH 6.8 (lanes 3-4). C, Succination by DMF reduces the cofilin severing effect on F-actin. Samples of cofilin (7.5 µg) were incubated with vehicle (0, lanes 1-8) ...

Modification of Tubulin by DMF Does Not Alter Axonal Trafficking of Lysosomes
Tubulin was identified as another cytoskeletal target of DMF reactivity (Table I). Tubulin α and β subunits form dimers that polymerize into microtubules, and we have previously described that in vitro treatment of purified porcine tubulin with DMF results in the succination of 11 of the 20 cysteines in the αβ-tubulin dimer (30). In the current study we confirmed the DMF-mediated succination of the tubulin α-1A peptide TIQFVDWC[DMF]PTGFK (+2 charge state, 843.3976 m/z) in N1E-115 neurons; the site of succination was identified as Cys347, as designated by the prominent y6 and b8 fragment ions in the annotated spectrum (Fig. 3A). In addition, the pyridylethylated Cys347-containing peptide was also detected in the same N1E-115 neuronal protein preparation (+2 charge state, 823.9054 m/z, supplemental Fig. S3). We have previously described that increased succination by fumarate can affect the detectability of tubulin by antibodies directed against a cysteine-containing antigen (30), as succination alters the epitope size and conveys two novel carboxylate groups. We have observed this in adipocytes cultured in high glucose (30 mM) versus normal glucose (5 mM), and on succinated porcine tubulin prepared in vitro (30).
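As a quick plausibility check on the reported precursor masses, the m/z of the DMF-modified tubulin peptide can be recomputed from standard monoisotopic residue masses and the modification delta given under Methods (C[DMF], +144.042); the helper below is a sketch, not part of the authors' pipeline.

```python
# Recompute the +2 precursor m/z of TIQFVDWC[DMF]PTGFK and compare with the
# reported 843.3976, which should agree within the 15 ppm precursor tolerance.
RESIDUE = {  # standard monoisotopic residue masses (Da)
    "T": 101.04768, "I": 113.08406, "Q": 128.05858, "F": 147.06841,
    "V": 99.06841, "D": 115.02694, "W": 186.07931, "C": 103.00919,
    "P": 97.05276, "G": 57.02146, "K": 128.09496,
}
WATER, PROTON = 18.010565, 1.007276
DMF_ADDUCT = 144.042  # cysteine modification by dimethyl fumarate

def precursor_mz(sequence, mod_mass, charge):
    """Neutral monoisotopic peptide mass plus modification, as m/z."""
    neutral = sum(RESIDUE[aa] for aa in sequence) + WATER + mod_mass
    return (neutral + charge * PROTON) / charge

mz = precursor_mz("TIQFVDWCPTGFK", DMF_ADDUCT, 2)
ppm = (mz - 843.3976) / 843.3976 * 1e6
print(f"{mz:.4f} m/z ({ppm:+.2f} ppm vs. reported)")  # ~843.3975, <1 ppm off
```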
We examined if succination of purified porcine tubulin in vitro by DMF (versus fumarate) would also interfere with tubulin detection. Porcine tubulin was incubated with either 100 or 500 µM DMF, 500 mM fumarate, or without any addition for up to 24 h. Fig. 3B demonstrates that tubulin succination is pronounced in fumarate-treated samples after a short film exposure (2SC short panel), whereas it is undetectable for the tubulin samples treated with DMF or left untreated. A longer exposure (Fig. 3B, 2SC long panel) shows some basal tubulin succination in untreated samples and a very strong signal in fumarate-treated samples. However, we found no increase in modification of DMF-treated tubulin compared with the untreated samples, which confirms the inability of the anti-2SC antibody to detect succination in the absence of ester hydrolysis, as shown above (Fig. 1A). After stripping the membranes, we probed the same blots with the α-tubulin antibody B-7, which is sensitive to tubulin succination (30). As shown in Fig. 3B (α-tubulin B-7 panel), only the tubulin samples treated with fumarate showed a decrease in tubulin detectability, and this correlates with the increased succination detected by our 2SC antibody (Fig. 3B, 2SC short and long panels). Interestingly, tubulin samples treated with up to 500 µM DMF, though succinated (Fig. 3A and Table I), did not show decreased detectability by the α-tubulin antibody B-7 compared with controls, suggesting that succination by DMF, which leads to an uncharged modification of the cysteine residues, does not affect the interaction of succinated tubulin with the antibody. Moreover, pronounced succination of tubulin by either fumarate or DMF did not affect its detectability by the α-tubulin antibody DM1A, as already described, and we routinely use and recommend this antibody to detect total levels of tubulin in samples where succination may be present (30). Considering the abundance of commercially available antibodies against diverse epitopes for the same protein, it is important to examine the antigens used in order to establish reliable immunoblotting methods for total protein detection when post-translational modifications are endogenously present or introduced. Although the succination of tubulin by DMF did not affect tubulin interaction with the tested antibodies, we were interested in determining if tubulin interactions with motor proteins such as dynein or kinesin were altered, potentially impacting microtubule dynamics.

FIG. 3. Tubulin as a target of DMF succination. A, MS/MS spectrum showing the DMF-succinated Cys347 of α-tubulin in the peptide TIQFVDWC[DMF]PTGFK from N1E-115 neuron protein digests. B, Tubulin modification by DMF does not affect detectability by tubulin antibodies. 1 µg purified porcine brain tubulin was incubated for 6 or 24 h with 0 (control), 100 or 500 µM DMF, or for 24 h with 50 mM fumarate (Fum) before SDS-PAGE separation and immunoblotting. If no saponification is performed, the anti-2SC antibody only detects succination by fumarate but not by DMF, even after a long exposure (2SC short and long panels; compare with Fig. 1). The α-tubulin B.7 antibody shows decreased detectability of tubulin after succination by fumarate, but no change when tubulin is modified by DMF, whereas the DM1A antibody is insensitive to succination (α-tubulin B.7 and DM1A panels, respectively). Coomassie staining of a duplicate gel run in parallel was used to verify even loading of the lanes.
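The trafficking results that follow rely on the kymograph scoring rules given under Methods (net displacement thresholds of 5 µm toward or away from the cell body); a minimal sketch of that categorization, with a hypothetical 1-D trace of lateral positions, could read:

```python
# Sketch of the kymograph scoring rules: rightward (toward the cell body)
# excursion >5 um = retrograde, leftward >5 um = anterograde, both =
# bidirectional, otherwise static. `positions` is a hypothetical array of
# lateral positions (um), positive values toward the cell body.
import numpy as np

def classify_organelle(positions, threshold=5.0):
    disp = np.asarray(positions) - positions[0]
    toward_body = disp.max()   # maximal excursion toward the cell body
    away = -disp.min()         # maximal excursion away from the cell body
    if toward_body > threshold and away > threshold:
        return "bidirectional"
    if toward_body > threshold:
        return "retrograde"
    if away > threshold:
        return "anterograde"
    return "static"

print(classify_organelle([0, 2, 4, 7, 9]))  # net movement toward the cell body
```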
Primary cultures of rat dorsal root ganglion (DRG) neurons were prepared and treated with 50 µM DMF for 24 h or left untreated, and axonal lysosomal trafficking was monitored following lysosomal labeling with LysoTracker. Fig. 4A shows representative kymographs for the untreated DRG neurons (control panel) and DRG neurons treated with 50 µM DMF (50 µM DMF panel). No differences due to DMF treatment were observed in the percentage of lysosomes undergoing either anterograde or retrograde trafficking, or in the percentage of lysosomes changing directions (both), or of static lysosomes per axon (Fig. 4B). The average speed and run length of the motile lysosomes were also unchanged by DMF treatment (Fig. 4C). However, when motile events were separated according to different speeds (SP), DMF treatment changed the distribution of speed for retrograde events (Fig. 4D). Compared with control, there was a higher percentage of 1-2 µm/s motile events but a lower percentage of 0.5-1 µm/s motile events in DMF-treated DRG neurons, with no changes in the run length (RL). The anterograde distribution of speed and run length was unaffected by DMF treatment. Overall, the results indicate that lysosomal trafficking was not significantly altered, suggesting that DMF does not adversely affect this aspect of microtubule dynamics.

Collapsin Response Mediator Protein 2 (CRMP2) as a Target of DMF Modification
All three cell preparations confirmed succination of the collapsin response mediator protein 2 (CRMP2) peptide GLYDGPVCEVSVTPK by either MMF or DMF (Table I). The y8 and b8 fragment ion designations in the annotated spectrum shown in Fig. 5 confirm the designation of Cys504 as the site of succination by DMF in primary rat neurons (+2 charge state, 854.4109 m/z). The presence of pyridylethylated Cys504, representative of the unmodified peptide, was also detected in the primary rat neurons (+2 charge state, 834.9182 m/z, supplemental Fig. S4A). In addition, the MMF modification of Cys504 was also confirmed, as shown by the annotated spectra of the GLYDGPVC[MMF]EVSVTPK peptide from DMF-treated primary astrocytes and DMF-treated N1E-115 cells (Fig. 5B and supplemental Fig. S4B, respectively). The C terminus of CRMP2 contains several GSK3β phosphorylation sites (Thr509, Thr514 and Ser522) that have a role in mediating axonal growth cone retraction. The proximity of Cys504 to these phosphorylation sites, as well as the dependence of phosphorylation on Cys504 oxidation (40), suggest that site-specific succination of CRMP2 may contribute to DMF-mediated axonal preservation (41).

Ubiquitin C-Terminal Hydrolase L1 (Uchl1) Activity is Reduced
Ubiquitin C-terminal hydrolase L1 (Uchl1) is an abundant deubiquitinase that constitutes up to 5% of total neuronal protein (42). Uchl1 cleaves ubiquitin from small peptide substrates and contributes to the regeneration of the intracellular ubiquitin pool (43). The modification of Uchl1 by both DMF and MMF was confirmed by a prominent y2 ion on the Cys152-containing peptide NEAIQAAHDSVAQEGQCR (supplemental Fig. S5A and S5B), and this site has previously been shown to be subject to chemical modification by other agents (44-46). DMF treatment did not alter the intracellular protein abundance of Uchl1 (detected using an antibody that specifically recognizes an N-terminal Uchl1 portion that does not contain any cysteines), nor did it alter the total levels of ubiquitinated protein by immunological detection (supplemental Fig. S5C and S5E).
Uchl1 deubiquitinase activity was reduced in the presence of increasing concentrations of DMF (supplemental Fig. S5D, n = 5, p < 0.002 for 100 µM versus control), suggesting that further analyses are warranted to determine if the intracellular pools of free ubiquitin or the Uchl1 structure are negatively impacted following exposure to DMF.

Mitochondrial Respiration is Minimally Altered by DMF
Because DMF treatment also resulted in succination of the abundant mitochondrial outer membrane protein voltage-dependent anion channel-1 (VDAC-1, Table I), we examined mitochondrial respiration and the levels of proteins related to energy production. The latter, including the ETC complex II 30 kDa subunit and succinate dehydrogenase a, the complex III core 2 subunit, complex V subunit α and the tricarboxylic acid cycle enzyme fumarase, were unaffected by the 50 µM DMF treatment. Taken together, these results suggest that low-dose 10 µM treatment with DMF does not affect the energy production machinery or respiration, whereas higher concentrations of DMF (50 µM) may lead to a partial decrease in some respiration parameters.

DISCUSSION

Dimethyl fumarate (DMF) is an approved anti-inflammatory agent for the treatment of relapsing remitting multiple sclerosis and psoriasis. As an electrophile, DMF has been demonstrated to chemically modify cysteine residues in Keap1 and induce Nrf2-driven antioxidant response element gene transcription. Because DMF has been shown to improve the survival of neurons and astrocytes both in vivo and in vitro, we examined novel targets of DMF thiol reactivity in these cell types in order to better explain why DMF also offers benefit in Nrf2-knockout mouse models of inflammatory disease (17). Using both primary neurons and astrocytes, in addition to a neuronal cell line, we confirmed the identity of 27 uniquely modified cysteine residues, representing 24 distinct protein subunits in these cell types (Table I). We observed that DMF entered the cells and reacted rapidly with specific intracellular proteins, often with both methyl groups intact, indicating that DMF modifies many proteins before being hydrolyzed to the less reactive monomethyl fumarate (MMF) or fumarate itself. Fig. 1 further confirms that DMF is not completely hydrolyzed to fumarate inside the cell, as we could only detect succinated proteins using an anti-2-succinocysteine antibody following base hydrolysis. Our results agree with the observations of Blewett et al., who recently described the reactivity of DMF versus MMF toward primary T cell proteins after 4 h of exposure to DMF in vitro (19). They indirectly quantified ~40 DMF-sensitive cysteine residues, following detection of the cysteine residues that no longer reacted with the electrophile iodoacetamide-alkyne (isoTOP-ABPP method) (19). These residues were representative of ~1% of the 2400 total cysteine residues identified, and confirm that fractional modification of functionally significant thiols is sufficient to contribute to altered T-cell activity (measured by reduced IL-2 production). In contrast to the isoTOP-ABPP approach, we searched directly for cysteine residues that had been variably modified by DMF, MMF or fumarate, in addition to pyridylethylated cysteines, and found that both DMF- and MMF-modified cysteine residues were detected across 24 proteins. Because our DMF incubation was for 24 h, it is possible that MMF may have more opportunity to be generated by esterase activity and then react with proteins.
We did not detect any endogenously succinated residues directly modified by fumarate; however, because we were using positive ion mode mass spectrometry, the negatively charged succinocysteine is more difficult to detect than the fumarate esters. The longer exposure to DMF likely increased the probability of detecting modified proteins with extended half-lives in the cell, such as the cytoskeletal protein subunits that were detected for tubulin, vimentin, glial fibrillary acidic protein and cofilin-1. Exogenously applied DMF is more likely to react quickly with cytosolic proteins versus endogenously produced intramitochondrial fumarate that increases in other models such as fumarase-deficient cancer cells (28). In the current study we observed the modification of only one mitochondrial protein by exogenous DMF, voltage-dependent anion channel-1, which is located on the outer mitochondrial membrane. Cofilin-1 contributes to the remodeling of the cytoskeleton by depolymerizing filamentous (F) actin to provide actin monomers for growing actin filaments. We detected the modification of cofilin by MMF on Cys139 in astrocytes (Fig. 2A) and confirmed this site of modification when cofilin was incubated with DMF in vitro. The functional assessment of cofilin activity revealed that Cys139 modification resulted in a ~30% decrease in cofilin's ability to depolymerize actin versus unmodified cofilin.

[FIG. 6 legend, partial] ...transferred to a PVDF membrane and blotted with antibodies to the following ETC components: Complex I NDUFB8, Complex II subunit 30 and succinate dehydrogenase a (SDHa), Complex III core 2 subunit, and Complex V α-subunit. DMF treatment did not significantly change the levels of the ETC markers, except for CI-NDUFB8, which was decreased with 50 µM DMF. Fumarase and α-tubulin antibodies, as well as Coomassie staining of the membrane, were used to verify even loading; n = 3 per treatment group. The molecular masses of the proteins are indicated on the left-hand side.

DMF has been shown to enhance the differentiation of oligodendrocyte precursor cells (OPC) to oligodendrocytes, as evidenced by increased expression of O4 (47). Recently, Zuchero et al. described that early oligodendrocyte differentiation from OPCs is accompanied by the extension of processes containing ordered arrays of actin filaments, requiring polymerized actin and low cofilin activity (48). As the oligodendrocyte begins to mature further, the actin cytoskeleton is disassembled, particularly by proteins such as cofilin; differentiation therefore requires dynamic changes in actin structure. Our data suggest that DMF-modified cofilin would be beneficial for actin arborization during early OPC differentiation. DMF may also act in a neuroprotective manner by inducing oligodendrocyte ensheathment of the axon, even if myelination is not dramatically increased. In addition, DMF treatment reduces localized microglial activation in models of neuroinflammation (49,50), and cofilin has been shown to be critical for lipopolysaccharide-mediated microglial activation, as cofilin knockdown significantly inhibited microglial activity (51). Recently, magnetization transfer ratio measurements in humans have suggested that myelin density may be increased in those receiving delayed-release DMF treatment versus placebo (52), suggesting that DMF may beneficially modulate aspects of myelination during MS treatment.
Together with the current literature, our data suggest that the DMF-mediated regulation of cofilin's actin-severing activity in dynamic glial cell populations, including oligodendrocytes, warrants further investigation.

Collapsin response mediator protein 2 (CRMP2) is an intracellular phosphoprotein that contributes to axonal growth by transporting αβ-tubulin heterodimers to the plus ends of the growing microtubule. The phosphorylation of CRMP2 leads to the collapse of the growth cone in response to environmental cues such as Semaphorin3A (Sema3A). CRMP2 is phosphorylated by Cdk5 at Ser522, and subsequently by GSK3β at Thr509, Thr514 and Ser518, leading to the disruption of the CRMP2:tubulin association and neurite retraction (53). In the current study, DMF contributed to the modification of Cys504 (GLYDGPVCEVSVTPK) in both primary neurons and N1E-115 derived neurons (Fig. 5A and supplemental Fig. S4A). Interestingly, the oxidation of Cys504 has been demonstrated to be critical for the Sema3A-mediated recruitment of GSK3β (40). Sema3A signaling stimulates the production of intracellular hydrogen peroxide, leading to the oxidation and dimerization of adjacent Cys504 residues. Thioredoxin (TRX) interacts to reduce the oxidized proteins, and this CRMP2/TRX complex recruits GSK3β, resulting in CRMP2 phosphorylation and growth cone collapse (40). Because the oxidation of CRMP2 is necessary for GSK3β recruitment, it is possible that modification of this cysteine prevents oxidation-mediated phosphorylation. Consequently, DMF treatment might be expected to preserve axonal integrity and prevent retraction in response to local degenerative cues. Enhanced phosphorylation of another CRMP2 site, Thr555, is abundant in active multiple sclerosis lesions from human autopsy (54). It is also significant that DMF, in combination with interferon β therapy, resulted in significant axonal preservation in a murine MS model (41). Our data suggest that CRMP2 Cys504 succination could be a significant mediator of the beneficial effects of DMF directly in neuronal cells, making it an attractive site for targeted therapies to prevent axonal loss.

We observed a trend toward the stimulation of mitochondrial respiration parameters following 10 μM DMF treatment of N1E neurons for 24 h, indicating that this dose does not acutely impair mitochondrial function. In contrast, 50 μM DMF tended to reduce ATP production, maximal respiration and spare respiratory capacity; however, this did not appear to be because of a loss of mitochondrial content, as the components of the electron transport chain (ETC) and tricarboxylic acid cycle examined were not affected (Fig. 6C). The exception was the decreased protein level of the NDUFB8 subunit of Complex I, a 22 kDa accessory protein that is required for the complete assembly of Complex I; it is possible that a loss of efficient electron transfer because of impaired Complex I assembly decreased NADH oxidation at this site. This may also explain why basal respiration appears to be unaffected by 50 μM DMF (Fig. 6A and 6B); FADH2 may sustain respiration under basal conditions, but the subsequent chemical challenges used to determine maximal respiration and spare capacity put pressure on the ETC to increase electron transfer in cells where Complex I function may be compromised. The decrease in mitochondrial respiration observed with 50 μM DMF is comparable to the decreased OCR parameters reported by Ahuja et al. using 20 μM DMF in murine embryonic fibroblasts for 24 h (55).
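The respiration parameters discussed here (basal, ATP-linked, maximal and spare capacity) are derived arithmetically from an oxygen consumption rate (OCR) trace in a standard mitochondrial stress test; the following is a minimal sketch with hypothetical values (none are data from this study):

```python
# Minimal sketch of how mitochondrial stress-test parameters are derived
# from an OCR trace (oligomycin, FCCP, rotenone/antimycin A injections).
# All numbers are hypothetical illustrations, not data from this study.

def stress_test_parameters(basal, post_oligomycin, post_fccp, post_rot_aa):
    """Compute standard respiration parameters (pmol O2/min)."""
    non_mito = post_rot_aa                 # residual, non-mitochondrial OCR
    basal_resp = basal - non_mito          # basal mitochondrial respiration
    atp_linked = basal - post_oligomycin   # ATP production-linked respiration
    maximal = post_fccp - non_mito         # maximal (uncoupled) respiration
    spare = maximal - basal_resp           # spare respiratory capacity
    return {"basal": basal_resp, "ATP-linked": atp_linked,
            "maximal": maximal, "spare": spare}

# Hypothetical example, loosely shaped like a control trace.
print(stress_test_parameters(basal=120.0, post_oligomycin=45.0,
                             post_fccp=210.0, post_rot_aa=20.0))
```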
Interestingly, Hayashi et al. have recently reported that 10 and 30 μM DMF increase mtDNA and maximal respiration after 48 h in human fibroblasts, and have shown tissue-specific increases in mtDNA content (including the cerebellum) following 2 weeks of DMF administration to healthy mice (56). Taken together, the data suggest that variable doses and incubation times can have different effects on the models studied, although prolonged exposures in humans need to be monitored (57). In the current study we observed only a mild effect of DMF on lysosomal trafficking along microtubules in neurons; however, in patients, DMF may have more profound effects on other aspects of microtubule dynamics in cells directly exposed to the oral dosage.

In summary, although DMF is a prominent activator of Nrf2-mediated transcription, it also ameliorates experimental autoimmune encephalitis and the inflammatory status in Nrf2-knockout models. The novel discovery in this study that DMF and MMF may modify regulatory thiols on proteins such as cofilin-1 and CRMP2 suggests that DMF treatment may directly contribute to axonal preservation and remyelination, which is distinct from other MS treatments that are only known to modulate the inflammatory cell profile. Further investigation of the effects of DMF therapy on these novel neuroprotective targets in vivo will guide the development of more specific compounds for MS therapy.
Convolutional neural network for automated mass segmentation in mammography

Background

Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC).

Results

We compared the performance of the proposed model with those of the state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, the original U-Net, and Faster R-CNN, as well as the conventional region growing (RG) method. The proposed Vanilla U-Net model outperforms the Faster R-CNN model significantly in terms of the runtime and the Intersection over Union metric (IOU). Trained with digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909.

Conclusions

The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images. This is because the segmentation process incorporates more multi-scale spatial context and captures more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These detected maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance in terms of the mean accuracy, the mean DI, and the mean IOU in detecting mass lesions compared to the other DL and conventional models.

Supplementary materials

Pre-processing

The AMF [1] is a nonlinear filter that removes impulse noise while preserving edges and corners to improve the image quality. The CLAHE filter increases the contrast between the masses and their surrounding tissues [2][3][4][5]. The CLAHE [1] filter operates on small regions in the image, called tiles, rather than the entire image. It calculates the contrast for each tile individually, producing local histograms. Each tile's contrast is enhanced, and the neighboring tiles are then combined using bilinear interpolation to eliminate artificially induced boundaries. The contrast in homogeneous regions can be limited using a clipLimit factor to avoid amplifying any noise that might be present in the image. We used 8×8 tiles and a clipLimit factor of 0.005 with the CLAHE technique. Figure 1 shows a sample of the combined data-set we used in our experiments. Figure 2 shows images containing suspicious areas and their associated pixel-level GTMs. All full MGs and GTMs are converted into png format and re-sized to 512×512. All pixels in the GTM are labeled as belonging to the background (0) or breast lesion (255) (see Fig. 2).
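As a rough illustration of this pre-processing pipeline, the following sketch uses OpenCV; note that the adaptive median filter (AMF) is approximated here by a plain median filter, and OpenCV's clipLimit is on a different scale than the MATLAB-style 0.005 used above, so the value shown is illustrative only:

```python
import cv2

def preprocess_mg(path):
    """Median-filter + CLAHE pre-processing sketch for a mammogram."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Impulse-noise removal; a plain median filter stands in for the AMF here.
    img = cv2.medianBlur(img, 3)
    # CLAHE on 8x8 tiles; clipLimit=2.0 follows OpenCV's convention and is
    # chosen illustratively (the paper's 0.005 follows MATLAB's normalized scale).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    # Resize to the 512x512 network input used in the paper.
    return cv2.resize(img, (512, 512))
```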
Semantic segmentation using FCN

Semantic segmentation is an active research area for medical images where deep CNNs are used to classify each pixel in the image individually. Semantic segmentation results in a map image that is segmented by classes. The fully convolutional network (FCN) [6] is an encoder-decoder network. The encoder path uses a pre-trained VGG16 model [7] and transfers its learned representations by fine-tuning to the segmentation task. The decoder path uses up-sampling operations and replaces the final fully connected layer (FCL) with an N×1×1 convolution layer, which outputs probabilities for N classes. A skip architecture is proposed by [6], where the weights of shallow, fine layer features are combined with deep, coarse layer features to produce accurate and detailed segmentations, as intensive up-sampling can lead to coarse segmentation maps. There are three versions of FCN (FCN-32s, FCN-16s, FCN-8s) based on the VGG16 network [6]. In this research, we adapt the FCN-8s VGG16-based network [6] to our segmentation task. FCN-8s up-samples the final feature map by a factor of 8 after fusing feature maps from the third and fourth max-pooling layers.

Semantic segmentation using SegNet

The SegNet architecture [8] adopts the VGG16 network [7] along with an encoder-decoder framework wherein it drops the FCLs of the network. SegNet shares a similar architecture with the encoder-decoder U-Net described in the previous subsection. However, in SegNet, the indices at each max-pooling layer in the encoder contracting path at each level are stored and later used to up-sample the corresponding feature map in the decoder by unpooling it using those stored indices (Fig. 3). Storing the indices from the contraction path helps keep the high-frequency information intact; however, it also misses neighboring information when unpooling from low-resolution feature maps. Finally, a Softmax classifier is used to produce the final segmentation maps with the same resolution as the original MG image. In this work, we used a SegNet that is pre-initialized with layers and weights from a pre-trained VGG16 model with an encoder depth of 5.

Semantic segmentation using Dilated-Net

Recently, Dilated-Net [9], also known as atrous convolutions, has been used in different image segmentation tasks [10][11][12][13][14]. Dilated convolutions [9] allow us to explicitly control the resolution at which feature responses are computed and incorporate larger context without increasing the number of parameters or the amount of computation. We adopt the dilated CNN in [9] with some modification to the network. The implemented dilated CNN architecture consists of ten cascaded 3×3 convolutional layers with dilation factors 1, 1, 2, 4, 8, 16, 32, 1, 1 and 1 (Fig. 4). Figure 4 illustrates 3×3 convolution kernels with dilation factors of 1, 2, and 3. The last three layers are FCLs of 1×1 convolutions followed by dropout of 0.5 [15]. The first nine convolutional layers are followed by a BN layer [16] and a ReLU activation function [17]. To classify the pixels, the last convolutional layer has two 1×1 convolutions, followed by a Softmax classifier.
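The dilated stack just described maps naturally onto a few lines of PyTorch; the following is a minimal sketch under stated assumptions (single-channel input, two output classes, an illustrative channel width of 64; none of these widths are taken from the paper):

```python
import torch.nn as nn

def dilated_net(in_ch=1, n_classes=2, width=64):
    """Sketch of a ten-layer dilated CNN with the stated dilation factors."""
    dilations = [1, 1, 2, 4, 8, 16, 32, 1, 1, 1]
    layers, ch = [], in_ch
    for i, d in enumerate(dilations):
        # 'Same' spatial size: padding equals dilation for a 3x3 kernel.
        layers.append(nn.Conv2d(ch, width, kernel_size=3, padding=d, dilation=d))
        if i < 9:  # the paper follows the first nine layers with BN + ReLU
            layers += [nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        ch = width
    # 1x1 "fully convolutional" head with dropout, ending in per-pixel scores.
    layers += [nn.Dropout(0.5), nn.Conv2d(ch, width, 1), nn.ReLU(inplace=True),
               nn.Dropout(0.5), nn.Conv2d(width, n_classes, 1)]
    return nn.Sequential(*layers)  # apply softmax over dim=1 at inference
```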
Localization using Faster R-CNN

We adapt the Faster R-CNN method proposed in [18] to compare its performance, in terms of detection accuracy and inference time, with that of the proposed Vanilla U-Net model. Faster R-CNN is based on a VGG16 model [7] with additional components for detecting, localizing and classifying lesions in MG images. Faster R-CNN outputs a bounding box (BB) for each detected lesion and a score that reflects the confidence in the class of the lesion. The Faster R-CNN method in [18] is trained with our pre-processed and augmented data-set. Further details about the implemented Faster R-CNN method can be found in the original article [18]. One limitation stated in the study of [18] is that the training data come from small publicly available pixel-level annotated data-sets. However, in our study we are using our combined large-sized data-set to reproduce their work.

Semantic segmentation using region growing (RG)

We also implemented the region growing (RG) model proposed in [19] and applied it to our MG images. RG is a traditional image segmentation CAD model that starts with selecting an initial seed point and then groups pixels or sub-regions into larger regions according to a similarity criterion (a minimal sketch of this procedure follows at the end of this section). As RG results are sensitive to the initial seeds, automated accurate seed selection is very critical for image segmentation. Further details about the implemented RG method can be found in the original article [19].

Comparison between state-of-the-art DL methods

Table 1 lists information about the architecture, databases, the number of images, the evaluation methods (i.e., Accuracy (ACC.), area under curve (AUC), Dice index (DI)), TPR@FPR, and the testing time per image as provided in the literature.

Author details

List of tables

Table 1: Comparison between the proposed segmentation method and the current state-of-the-art DL methods for segmentation or localization of lesions in MG images.

List of figures

Fig. 1: The databases used in our experiments.
Fig. 2: MG images and their corresponding GTMs.
Fig. 3: In SegNet, the indices at each max-pooling layer in the encoder contracting path at each level are stored and later used to up-sample the corresponding feature map in the decoder by unpooling it using those stored indices.
Fig. 4: Architecture of the Dilated-Net, containing ten convolutional layers with dilation factors, indicated in red, increasing from 1 in the first layer to 32 in the seventh layer. The last 1×1 convolutional layer is followed by a Softmax classifier.
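The region-growing baseline referenced above can be sketched in a few lines; a minimal version, assuming a 2-D grayscale image, a manually chosen seed, and a simple running-mean intensity criterion (the seed-selection and similarity rules of [19] differ):

```python
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `tol` of the running region mean. Illustrative only."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    stack, total, count = [seed], float(img[seed]), 1
    mask[seed] = True
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    stack.append((ny, nx))
    return mask

# Usage sketch: seg = region_grow(mg_image, seed=(256, 256), tol=12)
```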
PET imaging during hypoglycaemia to study adipose tissue metabolism

Abstract

Background: Disturbances in adipose tissue glucose uptake may play a role in the pathogenesis of type 2 diabetes, yet its examination by 2-deoxy-2-[18F]fluorodeoxyglucose ([18F]FDG) PET/CT is challenged by relatively low uptake kinetics. We tested the hypothesis that performing [18F]FDG PET/CT during a hypoglycaemic clamp would improve adipose tissue tracer uptake to allow specific comparison of adipose tissue glucose handling between people with or without type 2 diabetes.

Design: We enrolled participants with or without diabetes who were at least overweight to undergo a hyperinsulinaemic hypoglycaemic clamp or a hyperinsulinaemic euglycaemic clamp (n = 5 per group). Tracer uptake was quantified using [18F]FDG PET/CT.

Results: Hypoglycaemic clamping increased [18F]FDG uptake in visceral adipose tissue of healthy participants (P = 0.002). During hypoglycaemia, glucose uptake in visceral adipose tissue of type 2 diabetic participants was lower as compared to healthy participants (P < 0.0005). No significant differences were observed in skeletal muscle, liver or pancreas.

Conclusions: The present findings indicate that [18F]FDG PET/CT during a hypoglycaemic clamp provides a promising new research tool to evaluate adipose tissue glucose metabolism. Using this method, we observed a specific impairment in visceral adipose tissue [18F]FDG uptake in type 2 diabetes, suggesting a previously underestimated role for adipose tissue glucose handling in type 2 diabetes.

| INTRODUCTION

The amount and distribution of adipose tissue (AT) are important contributing factors in the development of insulin resistance and type 2 diabetes. 1 While hepatic 2 and muscle 3 insulin resistance are generally regarded as the key factors in disrupted glucose homeostasis in type 2 diabetes, AT glucose handling could also play an important role. AT has been shown to make a considerable contribution to insulin-stimulated glucose uptake from the circulation. 4-6 Furthermore, there is substantial mechanistic evidence for impaired insulin-stimulated glucose uptake in AT in relation to obesity. 7

Several previous studies have used [18F]FDG PET during a hyperinsulinaemic euglycaemic clamp to examine glucose uptake in different AT depots in individuals of different metabolic health, but results have not been completely consistent. Both Oliveira et al 8 and Virtanen et al 4 have shown an obesity-associated decrease in [18F]FDG uptake in visceral AT. However, a similar effect in subcutaneous AT was only demonstrated by one study 4 and not by the other. 8 In both these studies, [18F]FDG uptake in AT was not significantly different between metabolically healthy obese participants and obese individuals with type 2 diabetes. 4,8 These studies used different methods of [18F]FDG PET imaging, which could in part explain the differing results. However, using [18F]FDG PET during a hyperinsulinaemic euglycaemic clamp to study tracer uptake in AT is challenging because of the low uptake values of [18F]FDG in AT as compared to skeletal muscle. Consequently, it is difficult to determine differences in tracer uptake between different AT depots and between individuals. The role of AT in glucose homeostasis could therefore be underestimated. An interesting case was reported in 2011 by Hofman et al in which an [18F]FDG PET/CT scan was performed in a healthy lean individual during unintentional insulin-induced hypoglycaemia.
Surprisingly, an altered biodistribution of the radiopharmaceutical [18F]FDG was observed, with a markedly increased uptake in AT, most prominently in visceral AT, as compared to the uptake in skeletal muscle. 9 Performing [18F]FDG PET during a hyperinsulinaemic hypoglycaemic clamp could thus stimulate tracer uptake in AT and thereby represent a new method to better examine AT glucose handling. We hypothesized that this method could elucidate differences in AT metabolism associated with type 2 diabetes that would remain undetected in previous studies. In the current study, we therefore performed [18F]FDG PET/CT during a hyperinsulinaemic hypoglycaemic clamp in overweight or obese people with type 2 diabetes and healthy participants matched for BMI, to compare [18F]FDG uptake in AT between these groups.

| Enrolment

We enrolled five participants with type 2 diabetes who were at least overweight (BMI > 25 kg/m2) and five BMI-matched individuals without diabetes to undergo a hyperinsulinaemic hypoglycaemic clamp. In addition, five nondiabetic participants, again matched for BMI, underwent a hyperinsulinaemic euglycaemic clamp. All people with type 2 diabetes met the following inclusion criteria: clinically overt type 2 diabetes for at least 2 years, treated by diet or oral glucose-lowering medication alone (no previous insulin use), free from micro- and macrovascular complications except for background retinopathy, and haemoglobin A1c (HbA1c) levels below 75 mmol/mol (9.0%). Healthy obese participants met the following inclusion criteria: fasting glucose < 6.1 mmol/L, HbA1c < 42 mmol/mol (6%) and a normal glucose tolerance test (plasma glucose levels below 6.5 mmol/L 2 hours after a 75 g glucose challenge). The study was approved by the Radboud University Medical Center institutional review board and has been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All participants provided written consent.

| Hyperinsulinaemic glucose clamps

All participants presented in the morning at 8.00 AM after an overnight fast (at least 8 hours). Participants with type 2 diabetes omitted their morning oral glucose-lowering medication (if applicable). Upon arrival, two catheters were inserted intravenously. One catheter was placed into the antecubital vein for frequent blood sampling and was positioned in a heated box (55-60°C) to obtain arterialized venous blood. The second catheter was inserted in the antecubital vein of the contralateral arm for infusion of insulin (Insulin Aspart; Novo Nordisk) and glucose 20% w/w (Baxter). Subsequently, infusion of insulin was initiated at a rate of 120 mU per m2 per min after a bolus of 1 U (referred to as time point 0). Plasma glucose levels were brought to predetermined levels using a variable infusion of glucose 20%, based on plasma glucose measured at 5-minute intervals (Biosen C-line; EKF Diagnostics), which was sustained during infusion of [18F]FDG, the incubation time and the scan. At the start of the clamping procedure, when plasma glucose levels were within the euglycaemic range, and just prior to positioning the patient in the scanner, when plasma glucose levels were within the predetermined range, blood was drawn to determine levels of growth hormone, cortisol, epinephrine and norepinephrine.

| PET imaging with [18F]FDG

PET imaging was performed using a Siemens Biograph mCT-40 time-of-flight PET/CT scanner.
When stable plasma glucose levels were reached (targeting 3.0 and 5.0 mmol/L for hypoglycaemia and euglycaemia, respectively), 1.6 MBq/kg [18F]FDG was infused. PET/CT images were obtained approximately 1 hour after injection, from the base of the skull to the knees, at 4 minutes per bed position. A low-dose CT acquisition (40 mAs and 130 kV) of the same area as that covered by the PET scan was used for PET attenuation correction and as an anatomic reference. The size of the CT transaxial matrix was 512 × 512 (0.98 × 0.98 mm), and the CT slice width was 3 mm. High-definition reconstruction of the images was performed with three iterations, 21 subsets and a post-reconstruction Gaussian filter of 3 mm full width at half maximum. The transaxial PET matrix size was 256 × 256, and the pixel size was 3.18 × 3.18 × 3 mm.

| PET image tissue uptake quantification

PET/CT images were reviewed using Inveon Research Workplace software (version 4.1; Siemens Healthcare). CT images were smoothed with a Gaussian filter of 1 mm. Subsequently, regions of interest (ROIs) were drawn delineating different tissues on the CT image. AT was delineated by thresholding at −110 to −70 HU. 6 The resulting ROI was then manually divided between visceral AT, abdominal subcutaneous AT and gluteofemoral AT. Skeletal muscle was delineated by manually drawing ROIs around the upper legs, lower back and right shoulder and thresholding at −21 to 104 HU within these ROIs. ROIs around the liver and pancreas were drawn manually. All ROIs were drawn by a blinded observer. Tracer activity within the ROIs was recorded and expressed as standard uptake value (SUV), defined as the activity concentration per mL of tissue divided by the injected dose in MBq per g of bodyweight.

| Statistical analysis

Data analysis was performed using SPSS (version 22; SPSS), with P < 0.05 considered statistically significant. Results are expressed as mean ± SD. Glucose infusion rates during the procedure were compared between the groups using a two-way repeated-measures ANOVA. The effects of type 2 diabetes and hypoglycaemia were tested using one-way ANOVA. Post hoc analyses were performed using the Bonferroni-Dunn test to reveal statistically significant differences between the groups. Levels of hormones before and during the procedure were compared using a Wilcoxon signed rank test.
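The SUV definition above is simple arithmetic; a minimal sketch with hypothetical inputs (the dose, weight and activity values below are illustrative, not study data):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_g):
    """Standard uptake value: tissue activity concentration divided by
    injected dose per gram of body weight (assuming 1 mL ~ 1 g of tissue)."""
    activity_mbq_per_ml = activity_kbq_per_ml / 1000.0
    return activity_mbq_per_ml / (injected_dose_mbq / body_weight_g)

# Hypothetical example: a visceral AT ROI at 1.8 kBq/mL after a
# 150 MBq injection in a 90 kg participant.
print(round(suv(1.8, 150.0, 90_000.0), 2))  # ~1.08
```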
Table 1 shows the baseline characteristics of the participants. Patients with type 2 diabetes were generally well controlled and older than the two control groups, but well matched for BMI to the healthy obese participants.

| Plasma glucose levels during the clamps

During the clamping procedure, glucose levels were 5.0 ± 0.3 mmol/L for the euglycaemia group. In the hypoglycaemia groups, glucose levels were 3.0 ± 0.1 mmol/L for the healthy obese participants and 3.0 ± 0.1 mmol/L for the patients with type 2 diabetes (P = NS, Figure 1A). Mean glucose infusion rates during the procedure were 7.1 ± 1.9 mg/kg/min for the […] (Figure 1B).

| Counterregulatory hormones

For each group, levels of cortisol, growth hormone, epinephrine and norepinephrine measured before the clamping procedure were compared to levels measured during the procedure, when stable blood glucose values were reached. Hypoglycaemia significantly stimulated the release of the counterregulatory hormones epinephrine, norepinephrine, cortisol and growth hormone, whereas plasma levels of these hormones did not change in response to euglycaemia (Table 2). There were no major differences in these responses between subjects with type 2 diabetes and those without diabetes.

| The effect of hypoglycaemia on AT [18F]FDG uptake

The distribution of [18F]FDG was altered under hypoglycaemic conditions compared to euglycaemic conditions in healthy obese individuals, with glucose uptake in visceral AT being significantly higher during hypoglycaemia (P = 0.004). Hypoglycaemia also caused a numerical increase in [18F]FDG uptake in abdominal subcutaneous AT and gluteofemoral subcutaneous AT, but this effect did not reach statistical significance (Figure 2A).

| Comparison of [18F]FDG uptake in AT between healthy participants and patients with type 2 diabetes

Under hypoglycaemic conditions, patients with type 2 diabetes had a lower [18F]FDG uptake in visceral AT than the participants without diabetes (Figure 2A, P < 0.0001). Patients with type 2 diabetes and healthy participants showed no statistically significant difference in [18F]FDG uptake either in abdominal subcutaneous AT or in gluteofemoral subcutaneous AT (Figure 2A). The total AT volume, as well as the AT volume in the different depots, did not differ significantly between the groups (data not shown, but available upon request). […] (Figure 2A, P < 0.0001). Quantification of [18F]FDG accumulation in skeletal muscle, liver and pancreas did not reveal significant differences between the groups. However, because there was a numerically lower [18F]FDG uptake in skeletal muscle during hypoglycaemia, the tracer uptake ratio between AT and skeletal muscle was 1:6.3 versus 1:3.2 in subjects under euglycaemic and hypoglycaemic conditions, respectively (P = 0.018, Figure 2A).

| DISCUSSION

The main finding of this study is the stimulatory effect of hypoglycaemia on the uptake of [18F]FDG by AT. This method thereby enables a more precise and specific analysis of glucose handling by both visceral and subcutaneous AT, which has so far been challenging because of the low uptake values in AT compared to skeletal muscle under non-hypoglycaemic conditions. By using this approach, we also showed that the uptake of glucose in visceral AT of patients with type 2 diabetes is about 40% lower than in BMI-matched healthy participants (Figure 2), which suggests a specific impairment in visceral AT metabolism in patients with type 2 diabetes.

[18F]FDG PET provides a tissue-specific, noninvasive method for whole-body analysis of glucose handling. Previous studies evaluating AT glucose metabolism using [18F]FDG PET were performed either under fasting conditions without a clamping procedure or during a hyperinsulinaemic euglycaemic clamp. 3-6,8,10 Under these conditions, uptake values of [18F]FDG in AT are low, especially compared to uptake levels in skeletal muscle. Performing [18F]FDG PET during a hyperinsulinaemic hypoglycaemic clamp significantly increased the tracer uptake in visceral AT of healthy participants and also resulted in a numerical increase in [18F]FDG uptake in subcutaneous AT. Since tracer uptake was not raised in skeletal muscle during hypoglycaemia, the uptake ratio between AT and skeletal muscle increased, allowing better discrimination (Figure 2A). We therefore propose that this method may provide a new, attractive research tool that permits the comparison of glucose uptake in different AT depots and examination of glucose handling in patients with differences in metabolic health.
This technique could potentially be valuable as a research tool to examine the underlying pathophysiology of patients at risk of developing type 2 diabetes. While providing interesting methodological opportunities, it is not fully clear what causes the stimulatory effect of hypoglycaemia on visceral [18F]FDG uptake. Catecholamines, which increase in response to hypoglycaemia, suppress insulin-stimulated glucose uptake in skeletal muscle by activating protein kinase A through the stimulation of G-protein coupled receptors. 11-13 In contrast, while the counterregulatory response to hypoglycaemia was shown to increase lipolysis rates in AT, insulin signalling seems to remain intact. 14 This could provide a possible explanation for the increased uptake ratio between AT and skeletal muscle observed in this study. Another contributing factor could be activation of the enzyme 5′-adenosine monophosphate-activated protein kinase (AMPK) in adipocytes. AMPK is activated by metabolic stresses that inhibit ATP production, such as hypoglycaemia. 15 This can in turn result in an increase in glucose uptake by adipocytes. 16-18

By performing [18F]FDG PET during a hyperinsulinaemic hypoglycaemic clamp, we found a specific impairment in visceral AT metabolism in patients with type 2 diabetes compared to BMI-matched, healthy participants. In subcutaneous AT depots, skeletal muscle, liver and pancreas, no significant differences in [18F]FDG uptake were found between healthy and type 2 diabetic participants (Figure 2A). Also, in line with previous literature, 4-6,8,10 the uptake of glucose was found to be significantly higher in visceral AT than in subcutaneous AT (Figure 2A), which can be explained by the greater abundance and metabolic activity of visceral AT adipocytes. 10,19 Because of the tissue specificity of the observed impairment in [18F]FDG uptake in this study, combined with the glucose handling capacity of visceral AT, we hypothesize that visceral AT glucose handling could play a role in the pathophysiology of type 2 diabetes. Following this pilot study, future investigations with larger patient numbers are key to further substantiate this hypothesis. Also, in this study, static PET scans were performed one hour after the injection of the radiopharmaceutical to quantify the tissue-specific uptake of [18F]FDG. Future studies with dynamic PET could give insight into uptake kinetics and provide further information on systemic glucose metabolism and its role in type 2 diabetes.

The mechanism behind the observed impairment in visceral AT remains unclear. A possible explanation is a loss of visceral AT insulin sensitivity in type 2 diabetes, which is supported by clear mechanistic evidence for AT insulin resistance. 7 However, differences in glucose uptake by AT between healthy and type 2 diabetic subjects have not been found during euglycaemia in previous studies. 4,8 Metabolic effects of counterregulatory hormones could also play a role. Cortisol and growth hormone, which increased significantly in participants with type 2 diabetes during hypoglycaemia while remaining stable in healthy participants (Table 2), are known to repress glucose uptake in AT. 20,21 However, basal levels of these hormones were also higher in healthy participants. Also, the effects of these hormones would not become acutely apparent, 22,23 making their influence on the observed differences in this study less likely.
Norepinephrine is known to increase the metabolic activity of adipose tissue via β3-adrenergic receptors. Levels of norepinephrine were slightly increased in participants with type 2 diabetes as compared to healthy participants. While this effect is generally described for brown adipose tissue, a deficiency in the β3-adrenergic pathway could potentially contribute to the observed impairment in [18F]FDG uptake in visceral AT of participants with type 2 diabetes in this study.

Limitations of this study that should be considered are the inequalities in age and gender between the participants with type 2 diabetes and the healthy participants. However, it has been reported that age alone does not significantly influence insulin sensitivity. 24 Furthermore, uptake of [18F]FDG in AT has been shown to increase with age, 25 which would merely mask the decrease in [18F]FDG uptake in AT of type 2 diabetes patients observed in this study. Also, there is no report of any influence of gender on the biodistribution of [18F]FDG. We therefore do not expect the inequalities in age and gender between the groups to affect the validity of these results.

In conclusion, the present data indicate that [18F]FDG PET/CT performed during a hyperinsulinaemic hypoglycaemic clamp could be a promising new research tool to examine AT metabolism. Using this method, we show that [18F]FDG uptake during hypoglycaemia is impaired in type 2 diabetes patients, specifically in visceral AT. This study thereby suggests a role for AT glucose handling in the pathophysiology of type 2 diabetes.

| PRIOR PRESENTATION

Parts of this study were presented at.
Pest categorisation of Diaphania indica

Abstract

The EFSA Panel on Plant Health performed a pest categorisation of Diaphania indica (Lepidoptera: Crambidae), the cucumber moth, for the territory of the European Union (EU), following the commodity risk assessment of Jasminum polyanthum from Uganda, in which D. indica was identified as a pest of possible concern to the European Union. D. indica is native to South Asian countries and is now distributed in tropical and subtropical areas of the Americas, Africa, Asia and Oceania. In the EU, D. indica occurs in Madeira (Portugal). It is a polyphagous pest, feeding on 16 genera in 6 plant families, primarily on plants of the Cucurbitaceae family. Important cucurbit hosts in the EU include cucumber (Cucumis sativus), melon (Cucumis melo), pumpkin (Cucurbita moschata), summer squash (Cucurbita pepo) and watermelon (Citrullus lanatus). Plants for planting, fruits and cut flowers provide potential pathways for entry into the EU. Climatic conditions and the availability of host plants in southern EU countries would most probably allow this species to successfully establish and spread. Establishment could also occur in greenhouses in the northern parts of the EU. Economic impact on cultivated hosts, especially cucurbit crops, is anticipated if establishment occurs. This insect is not listed in Annex II of Commission Implementing Regulation (EU) 2019/2072. Phytosanitary measures are available to reduce the likelihood of entry and further spread. D. indica meets the criteria that are within the remit of EFSA to assess for this species to be regarded as a potential Union quarantine pest.

The new Plant Health Regulation (EU) 2016/2031, on the protective measures against pests of plants, applies from 14 December 2019. Conditions are laid down in this legislation in order for pests to qualify for listing as Union quarantine pests, protected zone quarantine pests or Union regulated non-quarantine pests. The lists of the EU regulated pests, together with the associated import or internal movement requirements of commodities, are included in Commission Implementing Regulation (EU) 2019/2072. Additionally, as stipulated in Commission Implementing Regulation 2018/2019, certain commodities are provisionally prohibited from entering the EU (high-risk plants, HRP). EFSA is performing the risk assessment of the dossiers submitted by countries exporting HRP commodities to the EU, as stipulated in Commission Implementing Regulation 2018/2018. Furthermore, EFSA has evaluated a number of requests from countries exporting to the EU for derogations from specific EU import requirements.

In line with the principles of the new plant health law, the European Commission and the Member States discuss monthly the reports of interceptions and outbreaks of pests notified by the Member States. Notifications of an imminent danger from pests that may fulfil the conditions for inclusion in the list of Union quarantine pests are included. Furthermore, EFSA has been performing horizon scanning of media and literature.
As a follow-up of the above-mentioned activities (reporting of interceptions and outbreaks, HRP, derogation requests and horizon scanning), a number of pests of concern have been identified. EFSA is requested to provide scientific opinions for these pests, in view of their potential inclusion by the risk manager in the lists of Commission Implementing Regulation (EU) 2019/2072 and the inclusion of specific import requirements for relevant host commodities, when deemed necessary by the risk manager.

| Terms of reference

EFSA is requested, pursuant to Article 29(1) of Regulation (EC) No 178/2002, to provide scientific opinions in the field of plant health. EFSA is requested to deliver 53 pest categorisations for the pests listed in Annex 1A, 1B, 1D and 1E (for more details see mandate M-2021-00027 on the Open.EFSA portal). Additionally, EFSA is requested to perform pest categorisations for the pests so far not regulated in the EU, identified as pests potentially associated with a commodity in the commodity risk assessments of the HRP dossiers (Annex 1C; for more details see mandate M-2021-00027 on the Open.EFSA portal). Such pest categorisations are needed in cases where risk assessments for the EU are not available.

When the pests of Annex 1A qualify as potential Union quarantine pests, EFSA should proceed to a phase 2 risk assessment. The opinions should address entry pathways, spread, establishment and impact, and include a risk reduction options analysis.

Additionally, EFSA is requested to develop further the quantitative methodology currently followed for risk assessment, in order to have the possibility to deliver an express risk assessment methodology. Such methodological development should take into account the EFSA Plant Health Panel Guidance on quantitative pest risk assessment and the experience obtained during its implementation for the Union candidate priority pests and for the likelihood of pest freedom at entry for the commodity risk assessment of High Risk Plants.

| Interpretation of the Terms of Reference

Diaphania indica is one of a number of pests relevant to Annex 1C of the Terms of Reference (ToR) to be subject to pest categorisation to determine whether it fulfils the criteria of a potential Union quarantine pest (QP) for the area of the EU excluding Ceuta, Melilla and the outermost regions of Member States referred to in Article 355(1) of the Treaty on the Functioning of the European Union (TFEU), other than Madeira and the Azores, and so inform EU decision making as to its appropriateness for potential inclusion in the lists of pests of Commission Implementing Regulation (EU) 2019/2072. If a pest fulfils the criteria to be potentially listed as a Union QP, risk reduction options will be identified.

| Additional information

This pest categorisation was initiated following the commodity risk assessment of Jasminum polyanthum plants for planting from Uganda performed by EFSA (EFSA PLH Panel, 2022), in which D. indica was identified as a relevant non-regulated EU pest which could potentially enter the EU on J. polyanthum plants.

| Information on pest status from NPPOs

In the context of the current mandate, EFSA is preparing pest categorisations for new/emerging pests that are not yet regulated in the EU. When official pest status is not available in the European and Mediterranean Plant Protection Organization (EPPO) Global Database (EPPO, online), EFSA consults the NPPOs of the relevant MSs. To obtain information on the official pest status for D. indica, EFSA consulted the NPPO of Portugal. The results of this consultation are presented in Section 3.2.2.
| Literature search

A literature search on D. indica was conducted at the beginning of the categorisation in the ISI Web of Science bibliographic database, using the scientific name of the pest as the search term. Papers relevant to the pest categorisation were reviewed, and further references and information were obtained from experts, from citations within the references and from grey literature.

| Database search

Pest information on hosts and distribution was retrieved from the European and Mediterranean Plant Protection Organization (EPPO) Global Database (EPPO, online), the CABI databases and the scientific literature databases referred to above in Section 2.1.1. Data about the import of commodity types that could potentially provide a pathway for the pest to enter the EU, and about the area of hosts grown in the EU, were obtained from EUROSTAT (Statistical Office of the European Communities).

The Europhyt and TRACES databases were consulted for pest-specific notifications on interceptions and outbreaks. Europhyt is a web-based network run by the Directorate General for Health and Food Safety (DG SANTÉ) of the European Commission as a subproject of PHYSAN (Phyto-Sanitary Controls), specifically concerned with plant health information. TRACES is the European Commission's multilingual online platform for sanitary and phytosanitary certification required for the importation of animals, animal products, food and feed of non-animal origin and plants into the European Union, and for the intra-EU trade and EU exports of animals and certain animal products. Up until May 2020, the Europhyt database managed notifications of interceptions of plants or plant products that do not comply with EU legislation, as well as notifications of plant pests detected in the territory of the Member States and the phytosanitary measures taken to eradicate or avoid their spread. The recording of interceptions switched from Europhyt to TRACES in May 2020.

GenBank was searched to determine whether it contained any nucleotide sequences for Diaphania indica which could be used as reference material for molecular diagnosis. GenBank (www.ncbi.nlm.nih.gov/genbank/) is a comprehensive publicly available database that, as of August 2019 (release version 227), contained over 6.25 trillion base pairs from over 1.6 billion nucleotide sequences for 450,000 formally described species (Sayers et al., 2020).

| Methodologies

The Panel performed the pest categorisation for D. indica following the guiding principles and steps presented in the EFSA guidance on quantitative pest risk assessment (EFSA PLH Panel, 2018), the EFSA guidance on the use of the weight of evidence approach in scientific assessments (EFSA Scientific Committee, 2017) and the International Standards for Phytosanitary Measures No. 11 (FAO, 2013).

The criteria to be considered when categorising a pest as a potential Union QP are given in Regulation (EU) 2016/2031, Article 3 and Annex I, Section 1 of the Regulation. Table 1 presents the Regulation (EU) 2016/2031 pest categorisation criteria on which the Panel bases its conclusions. In judging whether a criterion is met, the Panel uses its best professional judgement (EFSA Scientific Committee, 2017) by integrating a range of evidence from a variety of sources (as presented above in Section 2.1) to reach an informed conclusion as to whether or not a criterion is satisfied.
The Panel's conclusions are formulated respecting its remit and particularly with regard to the principle of separation between risk assessment and risk management (EFSA founding regulation (EU) No 178/2002); therefore, instead of determining whether the pest is likely to have an unacceptable impact, deemed to be a risk management decision, the Panel will present a summary of the observed impacts in the areas where the pest occurs, and make a judgement about potential likely impacts in the EU. While the Panel may quote impacts reported from areas where the pest occurs in monetary terms, the Panel will seek to express potential EU impacts in terms of yield and quality losses and not in monetary terms, in agreement with the EFSA guidance on quantitative pest risk assessment (EFSA PLH Panel, 2018). Article 3(d) of Regulation (EU) 2016/2031 refers to unacceptable social impact as a criterion for quarantine pest status. Assessing social impact is outside the remit of the Panel. The EPPO code 1 (EPPO, 2019; Griessinger & Roy, 2015) for this species is DPHNIN (EPPO, online).

| Biology of the pest

D. indica is a multivoltine species with four development stages: egg, larva (five larval instars), pupa and adult (Hosseinzade et al., 2014; Pilania et al., 2022). Females lay their eggs singly or in groups on the lower surface of leaves, leaf buds and young stems (Barma & Jha, 2014; Ganehiarachchi, 1997), and preferably on mature leaves rather than on younger developing leaves.

[Footnote 1: An EPPO code, formerly known as a Bayer code, is a unique identifier linked to the name of a plant or plant pest important in agriculture and plant protection. Codes are based on genus and species names. However, if a scientific name is changed the EPPO code remains the same. This provides a harmonised system to facilitate the management of plant and pest names in computerised databases, as well as data exchange between IT systems (EPPO, 2019; Griessinger & Roy, 2015).]

TABLE 1: Pest categorisation criteria under evaluation, as derived from Regulation (EU) 2016/2031 on protective measures against pests of plants (the number of the relevant sections of the pest categorisation is shown in brackets in the first column).

Criterion of pest categorisation: Is the identity of the pest clearly defined, or has it been shown to produce consistent symptoms and/or to be transmissible?
Conclusion: Yes, the identity of the pest is established and Diaphania indica (Saunders) is the accepted name.

In Bangladesh, the life cycle (from eggs to adults) of the insect took about 17.5 days (Rahman et al., 2023). In laboratory experiments at 30°C, the life cycle varied from about 18 days in a Japanese stock colony and 20 days in an Iranian colony to 23 days in an Indian one (Hosseinzade et al., 2014). In China, no adults are found earlier than July or later than November, and the peak abundance of adults occurs from August to early September (Ke et al., 1988). D. indica has a similar phenology in South Korea (Choi et al., 2003). In the Bangalore district in India, it was present throughout the year, indicating overlapping generations (Sharada Devi & Venkatesha, 2022). Ke et al. (1988) recorded a maximum of 4 generations per year in China, while in Hainan, China, Liu (2004) predicted that D. indica could complete 12 generations per year (Everatt et al., 2015; MacLeod, 2005).
Larvae feed mainly on the leaves, but also attack flowers and fruits (Hosseinzade et al., 2014). The first two instars feed on the lower epidermis of leaves, while the later instars (third to fifth) feed on the whole leaf (Debnath et al., 2020). In Bangladesh, larval development lasted about 12 days under field conditions (Rahman et al., 2023). In laboratory experiments at Raichur, Karnataka, India, on bitter gourd (Momordica charantia), the larval duration was 9.5 days (Nagaraju et al., 2018). Pupation takes place within a white silky cocoon, which remains attached to leaves rolled by the larvae prior to pupation (Barma & Jha, 2014). The pupal stage takes about 5 days in Bangladesh (Rahman et al., 2023) and 7-9 days in Dalugama, Sri Lanka (Ganehiarachchi, 1997). In South Korea, D. indica was reported to overwinter as pupae in the soil (Choi et al., 2003). The biology of the pest is summarised in Table 2.

In a field study, Ba-Angood (1979) found that among three cucurbit crops, D. indica preferred to lay eggs on melon (C. melo) over watermelon (C. lanatus) and cucumber (C. sativus).

TABLE 2: Important features of the life history strategy of Diaphania indica.
Egg: In a laboratory study in Yemen, the average egg incubation period was found to last about 6, 4, 3 and 3.5 days at 20°C, 25°C, 30°C and 35°C, respectively (Ba-Angood, 1979). In New Zealand, eggs hatch in 7-20 days, whereas in Malaysia they hatch in 2-4 days (Ali et al., 2016). Kinjo and Arakaki (2002) found the developmental threshold temperature for the egg to be 13.7°C, whereas Peter and David (1992) estimated 12.92°C, with 52.88 degree-days required for hatching.
Prepupa-Pupa: The average pupal period in Yemen (laboratory experiment) was found to be about 10, 7, 5 and 6 days at 20°C, 25°C, 30°C and 35°C, respectively (Ba-Angood, 1979). In South Korea, larvae were found to descend from hosts and enter the soil during October, burrowing to between 5 and 10 cm below the soil surface, where they form pupae and overwinter (Choi et al., 2003; MacLeod, 2005). Kinjo and Arakaki (2002) found the developmental threshold temperature for the pupa to be 14.90°C, whereas Peter and David (1992) […].
Adult: At 25°C, the mean adult longevity of males (21.6 days) on cucumber (Cucumis sativus) was significantly longer than that of females (16.7 days) (Kinjo & Arakaki, 2002). The number of eggs laid varied with host and season: Ke et al. (1986) found that 510 eggs per female were laid on Cucurbita pepo in August and about 340 in September.

In field host preference experiments in Bangladesh, Rahman et al. (2023) tested three summer cucurbit species (bitter gourd, ridge gourd and snake gourd) and found that snake gourd was the most preferred host, while bitter gourd was the least preferred. Choi et al. (2003) ranked larval host preference as follows: Cucumis sativus, Lagenaria siceraria, Citrullus lanatus > Cucumis melo L. var. makuwa, Sicyos angulatus > Luffa aegyptiaca, Gossypium arboreum. A complete list of hosts is provided in Appendix A.

| Intraspecific diversity

No intraspecific diversity has been reported for D. indica.

Detection

Visual examination of plants is an effective way to detect D. indica larvae as soon as they start to hatch (CABI, online). D. indica pheromone lures can be used for monitoring (Choi et al., 2009; Wakamura et al., 1998). Bucket traps or Delta traps are used as trapping and monitoring tools for adult moths (Lenin, 2011).
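Thermal thresholds and degree-day requirements such as those quoted in Table 2 feed directly into simple phenology models; a minimal sketch, using the egg parameters cited above (threshold 12.92°C, 52.88 degree-days) and hypothetical daily mean temperatures:

```python
def degree_days(daily_mean_temps, threshold_c):
    """Accumulate degree-days above a lower developmental threshold."""
    return sum(max(0.0, t - threshold_c) for t in daily_mean_temps)

def days_to_hatch(daily_mean_temps, threshold_c=12.92, dd_required=52.88):
    """Return the day on which cumulative degree-days reach the egg-hatch
    requirement, or None if it is never reached. Illustrative only."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - threshold_c)
        if total >= dd_required:
            return day
    return None

# Hypothetical fortnight of daily mean temperatures (degrees C).
temps = [22, 23, 25, 26, 24, 23, 25] * 2
print(round(degree_days(temps[:3], 12.92), 2))  # 31.24 DD after 3 days
print(days_to_hatch(temps))                     # day 5 with these values
```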
Symptoms

The main symptoms of D. indica infestation are (Everatt et al., 2015; Patel & Kulkarny, 1956; Pilania et al., 2022):

• folding or binding of leaves;
• skeletonisation, or lace-like patches of intact small leaf veins;
• damage to the inner portions of flowers, which can prevent fruit developing;
• entry of larvae into fruit and feeding on it, which often makes the fruit unmarketable, particularly after secondary infection by pathogens.

Identification

The identification of D. indica requires microscopic examination and verification of the presence of key morphological characteristics. Detailed morphological descriptions of all development stages, illustrations and keys for D. indica can be found in Clarke (1986); Clavijo (1990); Everatt et al. (2015); Mondal et al. (2020) and Neunzig (1990).

D. indica is often confused with D. hyalinata, to which it is similar in external appearance. D. indica can be distinguished from D. hyalinata by the expansion of the brown marking at the tornus of the forewing, which is not found in D. hyalinata. D. indica can be distinguished from some other species of the genus Diaphania by the absence of a brown spot internal to the brown band along the costa of the forewings. For a confirmed identification, the male or female genitalia should be examined.

Description

Eggs are white or whitish. They are small, usually laid in very small clumps of around 2-6 eggs, and are roughly 0.7-0.95 mm long and 0.3-0.6 mm wide in D. indica.

Larvae develop through five instars, with mature larvae growing up to 25 mm. The young larvae are transparent and change to green or yellow-green as they develop. Upon maturity, two white dorsal stripes can be seen running the length of their bodies, and they may have four very small black spots in a square just behind the head.

Are detection and identification methods available for the pest? Yes, visual detection is possible, and morphological and molecular identification methods are available.

Pupae are 12-20 mm long and around 3-4 mm wide and are often found in a loose cocoon formed by spinning leaves together with silk. The pupae turn from white to brown as they develop.

Adults are about 13-16 mm long, with a wingspan of 24-33 mm. The wings have a white patch banded by brown and exhibit a purple iridescence. A well-developed tuft of light brown hairs at the tip of the abdomen is present in females, but it is vestigial in males. The tuft is formed by long scales which are carried in a pocket on each side of the seventh abdominal segment. The head, the first two thoracic segments and a section near the tuft are generally white. In males, the first two basal antennal segments (three in females) are fully covered by brown scales (scape with dorsal side white); the remaining segments are scaled only on the dorsal side (Clavijo, 1990; Everatt et al., 2015; Mondal et al., 2020).

| Pest distribution outside the EU

D. indica is native to south-Asian countries (Dai et al., 2018). The present distribution of D. indica includes tropical and subtropical regions in Africa, Asia, the Caribbean, Oceania and South America. It is also found in Florida, United States (EPPO, online) (Figure 2). In areas with cold winters, such as Jiroft in Iran, Japan and South Korea, D. indica is a greenhouse pest (Hosseinzade et al., 2014; Kinjo & Arakaki, 2002; MacLeod, 2005). A record of D. indica has been reported in the UK, but it appears to have been transient, and the pest is not established in the UK (Everatt et al., 2015).

| Pest distribution in the EU
In the EU, D. indica is known to be present only on Madeira Island in Portugal (EPPO, online; CABI, online; Aguiar & Karsholt, 2006). The Portuguese NPPO confirmed that the pest has been present in Madeira for a long time with few occurrences and does not occur in mainland Portugal or in the Azores. So far, no damage has been reported, and official surveys are not carried out.

Is the pest present in the EU territory? If present, is the pest in a limited part of the EU or is it scarce, irregular, isolated or present infrequently? If so, the pest is considered to be not widely distributed. Yes. D. indica has been recorded in the EU territory, in Madeira, Portugal.

3.3.2 | Hosts or species affected that are prohibited from entering the Union from third countries

According to Commission Implementing Regulation (EU) 2019/2072, Annex VI, the introduction of D. indica hosts into the Union from third countries is not prohibited. However, soil is prohibited from third countries other than Switzerland (Table 3). According to Annex I of Regulation (EU) 2018/2019, fruits of Momordica L. originating from third countries, or areas of third countries, where Thrips palmi Karny is known to occur and where effective mitigation measures for that pest are lacking are considered high-risk plants, plant products and other objects, and their introduction into the Union territory shall be prohibited pending a risk assessment. Fruits of Momordica charantia L. originating in Honduras, Mexico, Sri Lanka and Thailand are allowed under special requirements ((EU) 2022/853). D. indica is present in Sri Lanka and Thailand.

| Entry

Plants for planting, fruits and cut flowers are the main potential pathways for the entry of D. indica (Table 4).

Is the pest able to enter into the EU territory? If yes, identify and list the pathways. Diaphania indica has entered the EU territory (Madeira, Portugal). Possible pathways of entry are plants for planting, fruits, cut flowers and soil.

Comment on plants for planting as a pathway. Plants for planting provide the most likely pathway for entry into, and spread within, the EU (Table 4).

| Establishment

Climatic mapping is the principal method for identifying areas that could provide suitable conditions for the establishment of a pest, taking key abiotic factors into account (Baker, 2002). Availability of hosts is considered in Section 3.4.2.1; climatic factors are considered in Section 3.4.2.2.

Many genera of D. indica host plants are present or are grown widely across the EU, such as beans (Phaseolus spp.), cowpea (Vigna unguiculata), cucumber (Cucumis sativus), melon (Cucumis melo), pumpkin (Cucurbita moschata), watermelon (Citrullus lanatus) and summer squash (Cucurbita pepo), which are all important within the EU region. The main hosts of the pest cultivated in the EU between 2018 and 2022 are shown in Table 5.

| Climatic conditions affecting establishment

The global Köppen-Geiger climate zones (Kottek et al., 2006) describe terrestrial climate in terms of average minimum winter temperatures and summer maxima, amount of precipitation and seasonality (rainfall pattern) (EFSA PLH Panel, 2022). D. indica is currently present in tropical and subtropical areas in the Americas, Africa, Asia, Macaronesia (Madeira) and Oceania (Figure 3).
indica may be capable of establishing outdoors in southern Europe, but it seems unable to survive in cooler climates. Low temperatures, as indicated by frost, may limit establishment in northern areas. In Japan and South Korea, D. indica is a pest in greenhouses protected from cold and wet stress, and it may be capable of living in more temperate climates in such situations (Choi et al., 2003; Kinjo & Arakaki, 2002). Moreover, in South Korea, pupae of D. indica overwinter in soil (Choi et al., 2003). There is uncertainty as to whether D. indica could establish outdoors in central Europe. Nevertheless, there is a possibility that D. indica could occur in greenhouses and indoor plantings in cooler areas.

Figure 4 shows frost-free areas in the EU which could perhaps be colonised by D. indica. Data for Figure 4 represent the 30-year period 1988-2017 and were sourced from the Climatic Research Unit high-resolution gridded data set CRU TS v. 4.03 at 0.5° resolution (https://crudata.uea.ac.uk/cru/data/hrg/).

Is the pest able to become established in the EU territory? Yes. There are climate zones in the EU that match those where D. indica occurs, and hosts occur in these zones that can support establishment. The pest is already established in Madeira, Portugal.

| Spread
Plants for planting are the main spread mechanism for D. indica over long distances. There is no information on the flight capacity of the species.

Describe how the pest would be able to spread within the EU territory following establishment. Natural spread by flying adults can occur. Although adults fly, spread is unlikely to be rapid and would probably be restricted to the southern EU (MacLeod, 2005). Eggs, larvae and pupae may be moved over long distances in trade of infested plant materials, specifically plants for planting, fruits and cut flowers.

Comment on plants for planting as a mechanism of spread. Plants for planting are the main spread mechanism for D. indica over long distances.

| Impacts
D. indica feeds on the leaves; however, it has also been observed to feed on tender stems, flowers and fruits of cucurbitaceous vegetables (Hosseinzade et al., 2014). After defoliation, the caterpillars also attack flowers and fruits of the plant, resulting in loss of crop yield (Debnath et al., 2020). In Madeira, Portugal, no damage has been reported so far, but the magnitude of impact if D. indica were to spread to areas of concentrated cucumber production, such as the Netherlands or south-eastern Spain, is uncertain. In India, the pest was regarded as a minor pest of cucurbits in the past, but in recent years its infestation has become significant and regular (Halder et al., 2017). In Goa, India, D. indica caused an average of about 93% damage, with a maximum of 97.5%, on watermelon (Maruthadurai & Veershetty, 2023). In Gujarat, India, it caused 60% and 90% fruit damage in bitter gourd and little gourd, respectively, during 2003 and 2004. In pointed gourd, the foliage damage by the larvae was 25%-30% (Jhala et al., 2005). In Karnataka, India, the foliage damage by larvae ranged from 25% to 30% in pointed gourd and from 3% to 14% in bitter gourd (Nagaraju et al., 2018). D. indica was also found on cucumber plants in West Java, Indonesia. In cucumber plants, one D. indica larva per leaf can cause a yield loss of 10% (Schreiner, 1991). In Bengkulu Tengah Regency, Indonesia, Nadrawati et al. (2023) observed variations in the density of D.
indica larvae and in the percentage of melon leaf damage, with a mean larval population density of 1.5 per plant and 29.5% of leaves infested. Also, in India and Sri Lanka, D. indica is an important pest of edible snake gourd (Debnath et al., 2020; Debnath et al., 2022) and gherkins (Ganehiarachchi, 1997).

| Identification of potential additional measures
Phytosanitary measures (prohibitions) are currently applied to soil (see Section 3.3.2).

| Additional potential risk reduction options
Potential additional control measures are listed in Table 6.

Would the pest's introduction have an economic or environmental impact on the EU territory? Yes. If D. indica established more widely in the EU, larval feeding would probably cause an impact on cucurbit crops, but the magnitude of impact is uncertain.

Are there measures available to prevent pest entry, establishment, spread or impacts such that the risk becomes mitigated? Yes; selected control measures are summarised below.

Chemical treatments on crops, including reproductive material: used to mitigate the likelihood of infestation by pests susceptible to chemical treatments. In a field efficacy trial against D. indica on snake gourd and ridge gourd in Bangladesh, Barmon et al. (2021) found that the application of neem oil, deltamethrin, mahogany oil and cypermethrin reduced the infestation level compared to the untreated control (23.3%, 18.8%, 30.8%, 24.2% and 37.0% leaf infestation, respectively). The efficacy of insecticides against D. indica larvae was investigated on zucchini flowers in Queensland, Australia. It was found that, compared to the unsprayed control, fewer larvae were found in the Bacillus thuringiensis aizawai treatment, while only a few were found in the other treatments (methomyl, emamectin benzoate, indoxacarb, bifenthrin, spinosad, novaluron and methoxyfenozide) (Kay, 2007). Laboratory experiments in Pakistan on watermelon plants revealed that treatments with emamectin benzoate, triazophos, cartap hydrochloride and dimethoate were effective (100% mortality) against D.
indica caterpillars (Khanzada et al., 2021). (Risk stages addressed: entry/establishment/spread/impact.)

Chemical treatments on consignments or during processing: use of chemical compounds that may be applied to plants or plant products after harvest, during processing or packaging operations, and in storage. The treatments addressed in this information sheet are: (a) fumigation; (b) spraying/dipping pesticides; (c) surface disinfectants; (d) process additives; (e) protective compounds.

Physical treatments on consignments or during processing: this information sheet deals with the following categories of physical treatments: irradiation/ionisation; mechanical cleaning (brushing, washing); sorting and grading; and removal of plant parts. It does not address heat and cold treatment (information sheet 1.14). The measure is expected to have an effect, although specific information for the pest is not available. (Risk stages addressed: entry/spread.)

Cleaning and disinfection of facilities, tools and machinery: the physical and chemical cleaning and disinfection of facilities, tools, machinery and other accessories (e.g. boxes, pots, hand tools).

Waste management: treatment of waste (deep burial, composting, incineration, chipping, production of bioenergy) in authorised facilities, and official restriction on the movement of waste. The measure is expected to have an effect, although specific information for the pest is not available. (Risk stages addressed: establishment/spread.)

Heat and cold treatments: controlled temperature treatments aimed at killing or inactivating pests without causing any unacceptable prejudice to the treated material itself.

| Uncertainty
No key uncertainties of the assessment have been identified.

| CONCLUSIONS
D. indica satisfies all the criteria that are within the remit of EFSA to assess for it to be regarded as a potential Union quarantine pest.

| Glossary
Establishment (of a pest): perpetuation, for the foreseeable future, of a pest within an area after entry (FAO, 2023).
Greenhouse: a walk-in, static, closed place of crop production with a usually translucent outer shell, which allows controlled exchange of material and energy with the surroundings and prevents release of plant protection products (PPPs) into the environment.
Hitchhiker: an organism sheltering or transported accidentally via inanimate pathways, including with machinery, shipping containers and vehicles; such organisms are also known as contaminating pests or stowaways (Toy & Newfield, 2010).
Impact (of a pest): the impact of the pest on the crop output and quality and on the environment in the occupied spatial units.
Introduction (of a pest): the entry of a pest resulting in its establishment (FAO, 2023).
Pathway: any means that allows the entry or spread of a pest (FAO, 2023).
Phytosanitary measures: any legislation, regulation or official procedure having the purpose to prevent the introduction or spread of quarantine pests, or to limit the economic impact of regulated non-quarantine pests (FAO, 2023).
Quarantine pest: a pest of potential economic importance to the area endangered thereby and not yet present there, or present but not widely distributed and being officially controlled (FAO, 2023).
Risk reduction option (RRO): a measure acting on pest introduction and/or pest spread and/or the magnitude of the biological impact of the pest should the pest be present. An RRO may become a phytosanitary measure, action or procedure according to the decision of the risk manager.
Spread (of a pest): expansion of the geographical distribution of a pest within an area (FAO, 2023).
CONFLICT OF INTEREST
If you wish to access the declaration of interests of any expert contributing to an EFSA scientific assessment, please contact interestmanagement@efsa.europa.eu.

COPYRIGHT FOR NON-EFSA CONTENT
EFSA may include images or other content for which it does not hold copyright. In such cases, EFSA indicates the copyright holder, and users should seek permission to reproduce the content from the original source.

MAP DISCLAIMER
The designations employed and the presentation of material on any maps included in this scientific output do not imply the expression of any opinion whatsoever on the part of the European Food Safety Authority concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.

| Identity and biology of the pest
3.1.1 | Identity and taxonomy
D. indica (Saunders) (Figure 1) is an insect within the order Lepidoptera and the family Crambidae. It is commonly known as the cucumber moth, melon moth, pumpkin caterpillar and cotton caterpillar (EPPO, online). D. indica was originally described as Eudioptes indica by Saunders in 1851 (CABI, online). Other synonyms are Glyphodes indica, Hedylepta indica, Margaronia indica, Palpita indica and Phacellura indica (EPPO, online).

FIGURE 2: Global distribution of Diaphania indica (data sources: EPPO, online; Ashfaq et al., 2017; for details, see Appendix B). The polygons highlighted in orange indicate the administrative areas where D. indica is present.

3.3 | Regulatory status
3.3.1 | Commission Implementing Regulation 2019/2072
D. indica is not listed in Annex II of Commission Implementing Regulation (EU) 2019/2072, an implementing act of Regulation (EU) 2016/2031, or in any emergency plant health legislation.

FIGURE 3: World distribution of the Köppen-Geiger climate types that occur in the EU and which occur in countries where Diaphania indica has been reported.
FIGURE 4: Annual frost days in the world (mean 1988-2017) (source: Climatic Research Unit, University of East Anglia, UK).

Criterion of Regulation (EU) 2016/2031 regarding a Union quarantine pest (Article 3), identity of the pest (Section 3.1): Is the identity of the pest clearly defined, or has it been shown to produce consistent symptoms and to be transmissible?

TABLE 3: List of plants, plant products and other objects that are Diaphania indica hosts whose introduction into the Union from certain third countries is prohibited (source: Commission Implementing Regulation (EU) 2019/2072, Annex VI). Column headings: Description; CN code; Third country, group of third countries or specific area of third country.

TABLE 4: Potential pathways for Diaphania indica into the EU. Annual imports of D. indica hosts from countries where the pest is known to occur are provided in Appendix C. Notifications of interceptions of harmful organisms began to be compiled in Europhyt in May 1994 and in TRACES in May 2020. As of December 2023, 114 interceptions of D. indica and 5 interceptions of Diaphania spp. have been reported in the Europhyt and TRACES databases. The interceptions of D. indica for the period 2005-2023 are provided in Appendix D. Hosts imported as plants or fruit fall under Regulation (EU) 2019/2072, Annex XI, Part A, unless exempt by being listed in Annex XI, Part C of the same Regulation; however, no specific requirements are specified in relation to D.
indica.

Soil (life stage concerned: pupae): soil as such, consisting in part of solid organic substances, is prohibited from being introduced into the EU from third countries other than Switzerland (Regulation (EU) 2019/2072, Annex VI). No requirements are specified for D. indica.

TABLE 5: Crop area of Diaphania indica key hosts in the EU, in 1000 ha (Eurostat, accessed on 22/1/2024).

TABLE 6: Selected control measures (a full list is available in EFSA PLH Panel, 2018) for pest entry/establishment/spread/impact in relation to currently unregulated hosts and pathways. Control measures are measures that have a direct effect on pest abundance. One such measure, used to mitigate the likelihood of infestation at origin, requires that plants collected directly from natural habitats have been grown, held and trained for at least two consecutive years prior to dispatch in officially registered nurseries, which are subject to an officially supervised control regime.

TABLE 7: Selected supporting measures (a full list is available in EFSA PLH Panel, 2018) in relation to currently unregulated hosts and pathways. Supporting measures are organisational measures or procedures supporting the choice of appropriate risk reduction options that do not directly affect pest abundance.

3.6.1.3 | Biological or technical factors limiting the effectiveness of measures
Internal feeding in fruit, with entry holes sealed, makes infested fruit difficult to detect unless cut open.

TABLE 8: The Panel's conclusions on the pest categorisation criteria defined in Regulation (EU) 2016/2031 on protective measures against pests of plants (the number of the relevant section of the pest categorisation is shown in brackets in the first column). There are measures available to prevent entry, establishment and spread of D. indica in the EU. Risk reduction options include inspections, chemical and physical treatments on consignments of fresh plant material from infested countries, and the production of plants for import into the EU in pest-free areas. Key uncertainties: none. Conclusion (Section 4): D. indica satisfies all the criteria that are within the remit of EFSA to assess for it to be regarded as a potential Union quarantine pest. (Remaining column: aspects of assessment to focus on / scenarios to address in future if appropriate.)

TABLE C.1: Fresh or chilled pumpkins, squash and gourds 'Cucurbita spp.' imported in tonnes into the EU from regions where Diaphania indica is known to occur (source: Eurostat, accessed on 23/2/2024).
TABLE C.2: Cucumbers and gherkins, fresh or chilled, imported in tonnes into the EU from regions where Diaphania indica is known to occur (source: Eurostat, accessed on 23/2/2024).
TABLE C.3: Fresh or chilled beans 'Vigna spp., Phaseolus spp.', shelled or unshelled, imported in tonnes into the EU from regions where Diaphania indica is known to occur (source: Eurostat, accessed on 23/2/2024).
A Secure IoT-Based Cloud Platform Selection Using Entropy Distance Approach and Fuzzy Set Theory

With the growing emergence of Internet connectivity in this era of Gen Z, many IoT solutions have come into existence for exchanging data at large scale securely, each backed by its own cloud service provider (CSP). This has created the need for customers to decide which IoT cloud platform suits their vivid and volatile demands in terms of attributes such as security and privacy of data, performance efficiency, cost optimization, and other properties specific to each user. Although many software solutions exist for this decision-making problem, they have proved inadequate when the distinct attributes unique to an individual user are considered. This paper proposes a framework that represents the selection of an IoT cloud platform as an MCDM problem, thereby providing a solution of optimal efficacy with a particular focus on user-specific priorities, so as to create a unique solution for volatile user demands and agile market trends, using an optimized distance-based approach (DBA) aided by Fuzzy Set Theory.

Introduction
One of the defining inventions of the Gen Z era, the Internet has rapidly emerged over the last two decades, connecting people and organizations into one giant family. This connectivity has given rise to the Internet of Things (IoT) [1], which involves sensors, software, and other technologies for the purpose of maintaining the security and privacy of the huge volumes of data transmitted among devices and systems [2]. For this purpose, several distinct IoT platforms have come into existence, each with its own cloud service provider (CSP) at the backend. But, as every coin has two sides, this has also led to a problematic situation when it comes to selecting the ideal CSP for a given set of attributes covering a finite set of requirements: the decision-making process must weigh multiple attributes, possible scenarios, market trends, and user biases [3]. To the best of our knowledge and observation, no compatible and comprehensive study or solution exists for this integrated set of requirements in the field of cloud service provider selection (CSPS). There is, however, a considerable body of work that covers some of these factors, formulating algorithms for certain quality factors or for the technical aspects [4,5]. Diverging from the pre-existing schemes, this work thus provides a flexible, realistic, and compatible methodology for cloud service selection (CSS) that considers all the factors required for an ideal cloud service.

1.1. Significance of the Research. Despite the existence of many software solutions for this decision-making problem, they have proved inadequate when the distinct attributes unique to an individual user are considered.

Research Gaps in the IoT-Based Cloud Service Providers. The requirement of decision-making among the various cloud service providers amalgamated with IoT applications has led to the emergence of several software solutions in recent years. However, multiple demerits can be observed in the performance and efficiency of these platforms [6]. The current shortcomings of IoT-based cloud service provider solutions span numerous dimensions, including the inability of the platforms to extend their support to heterogeneous sensing technologies.
Other demerits include the proprietorship of data, with its implications for privacy and security [7]. The processing and sharing of information is another gap, especially in scenarios where novel services must be supported. The absence of assistance for application developers is another shortcoming faced by several IoT cloud platforms [4]. Furthermore, most of these IoT platforms cannot be expanded with new components to accommodate emerging technologies and provide economies of scale. Lastly, the delivery of purchased software to the respective connected devices is not supported by a majority of the marketplaces dedicated to IoT applications.

Multicriteria Decision-Making (MCDM) techniques provide a scientific and straightforward solution. MCDM deals with organizing the variegated attributes that come under the purview of decision-making. It specializes in handling problems where the attributes are closely matched and human cognitive abilities alone cannot reach a logical decision; it does so by performing trade-offs, replacing one criterion with an equivalent amount of another. This paper presents an integrated set of factors that contribute to solving the problem of selecting an optimal IoT cloud platform. In a nutshell, the qualities mentioned below make the proposed methodology novel when compared to state-of-the-art techniques:

(1) Identification and categorization of selection attributes (SA): after a thorough and detailed study of more than fifty research papers, the relevant factors, i.e., selection attributes (SA), were filtered out. About 90 factors were carefully studied, and the relevant ones were mined out by removing redundant elements and near-duplicates. These factors were then, after extensive reasoning and filtration, grouped into three broad categories, namely, quality factors, technical factors, and economic factors.

Literature Review
This section of the research is concerned with the existing studies in the field of selecting an optimal cloud computing service provider for IoT-based applications, where the problem of service selection has been represented as an MCDM problem. To search the relevant data, keywords such as Cloud Computing for IoT, Cloud Platforms for IoT Services, IoT based Cloud Service Selection, IoT Service Selection Attributes, and Cloud Service Selection for IoT were used. As a result of this search, a total of 104 research papers from highly reputed journals and conferences were analyzed in detail. These papers were first screened by examining whether their primary focus was related to cloud service selection. In the second screening, the approaches used, the case studies mentioned, and the selection attributes (SA) reported in these papers were considered in a comparative study. The comprehensive tabular literature survey is shown in Table 1.

This paper presents and develops a hybrid decision-making framework using two methodologies, namely, Fuzzy Set Theory and Matrix Multicriteria Decision-Making (MMCDM), in which 14 selection attributes are identified and categorized into three groups: quality factors, technical factors, and economic factors.
After removing redundant features and filtering unnecessary information, the framework is relatively less vulnerable to prerequisites and limitations than the frameworks and techniques currently available for cloud computing service selection.

Security and Privacy Challenges in Cloud-Based IoT Platforms. While IoT and its applications are well explored and secure, cloud-based IoT platforms are still comparatively less explored and nascent in nature [18]. Categorized into two purviews, static and mobile-based platforms both face varied challenges regarding security and privacy. The security challenges include identity privacy, which deals with protecting details of the user of the cloud devices, such as personal real-world information. Other threats include disclosure of the real-time location of the user, termed location privacy [19]. The node compromising attack is also one of the most enduring threats to user privacy, as it involves a planned attack to gain access to the user's private information [20]. Removal or addition of transmission layers is a common breach performed by various IoT users; it involves manipulating the concept of reward.

Table 1. Literature survey (citation, methodology, advantages, disadvantages).

[8] Methodology: this study proposes a multistep approach to evaluate, categorize, and rate cloud-based IoT platforms by implementing Multicriteria Decision-Making (MCDM) and probabilistic linguistic term sets (PLTSs); finally, a probabilistic linguistic best-worst (PLBW) method is used to score all platforms. Advantages: though the proposed method seems complex, a real-time implementation via a case study provides cogent proof of its efficiency; it also outperforms individual scoring, classification, and evaluation methods. Disadvantages: the data used in the case study is limited, which illustrates the flow of the method but falls short of proving its cogency; moreover, including the latest hybrid techniques in the domain for comparative analysis could further strengthen the study's significance.

[9] Methodology: a cloud service provider selection approach is proposed via the application of MCDM, the analytical hierarchical process (AHP), the technique for order of preference by similarity to ideal solution (TOPSIS), and the best-worst method (BWM); a case study is presented in support. Advantages: the study successfully identifies, and provides solutions for, the drawbacks of classical MCDM approaches in terms of accuracy, time required, and computational complexity; AHP is outperformed by the proposed approach. Disadvantages: the use-case scenario relies on simulated scenarios and data, which raises questions about the cogency of the study.

[10] Methodology: an additive-manufacturing-based cloud service-providing framework is proposed, including both hard and soft services for ease of customer use: data-based testing, design, 3D printing, remote control of printers, and face recognition using AI. Advantages: the study understands and addresses real-time consumer problems; its feature-providing framework proves easy, feasible, and effective. Disadvantages: the study only provides a framework and its merits, without details on implementing or developing the framework for real-world application.

[11] Methodology: the study aims to identify the determinants behind the deprecation of the micro, small, and medium enterprises (MSMEs) overseen by the relevant Indian ministry, which have a huge impact on the Indian economy; data were collated from 500 Indian MSMEs.
Multiple criteria were considered, including social influence, the Internet of Things, perceived ease of use, trust, and perceived IT security risk, among others. Advantages: this study evaluates real-time data from 500 MSMEs, which supports its cogency; moreover, it provides insights that policy makers can deliberate directly to create maximum impact. Disadvantages: a comparative analysis with other policy-insight algorithms, along with the impact of implementing the recommended changes, would add clarity and value to the research.

[12] Methodology: a comparative analysis is performed to obtain the best cloud-based IoT platform for a business or organization by deliberating multiple criteria and functional and non-functional requirements among five giants, namely, Azure, AWS, SaS, ThingWorx, and Kaa IoT, applying techniques such as the analytical hierarchical process (AHP), K-means clustering, and statistical tests. Advantages: the hierarchical method of requirement classification gives the method an edge, and the statistical tests applied to the results lend the study an increased sense of cogency. Disadvantages: the set of cloud-based IoT platforms considered is limited, creating a false sense of performance when more than five platforms must be evaluated; moreover, hierarchical requirement classification is very time- and effort-intensive.

[13] Methodology: IoT applications built via cloud-based platforms are assayed for security challenges and data inconsistency issues that arise from third-party auditors and phishing attacks; strategies to prevent them are also provided. Advantages: the objective of the study is highly relevant to the need of the hour, providing valid and much-needed information along with recommendations. Disadvantages: the scope of the study is limited to theoretical analysis, without any real data implementation or case study to prove the cogency of the points mentioned.

[14] Methodology: this study establishes the need for authorization in cloud-enabled IoT systems by assaying the security threats that such a setup encounters, via two case studies, and proposes a control-based authorization system. Advantages: the aim of the study is cogent and current, deliberating recent developments in cloud-based IoT applications; the case studies presented support the cause of the study and the significance of the proposed framework. Disadvantages: the framework proposed for control-based authorization lacks any implementation or effort toward prototype development.

[15] Methodology: an attack distribution detector is proposed to prevent malfunctioning of trust boundaries in IoT-based applications, which can lead to severe data theft; a downsampler-encoder-based cooperative data generator is proposed to discriminate noisy data that may cause such malfunctions. Advantages: the continuous updating and verification of the model yields optimal results and performance in detecting probable data thefts; the model outperforms classical machine learning and deep learning techniques. Disadvantages: including the latest hybrid techniques in the domain for comparative analysis could further strengthen the study's significance.

[16] Methodology: various cogent issues with IoT middleware are brought to attention, and a state-of-the-art IoT middleware is proposed that can integrate with MQTT, CoAP, and HTTP as application-layer protocols. Advantages: the problem addressed by the In.IoT framework is cogent, and its relevance is demonstrated accurately in the study. Disadvantages: a comparative analysis with classical middleware and the latest hybrid techniques could further strengthen the significance of the study.

[17] Methodology: an intrusion detection technique for cloud-based IoT applications is proposed by implementing machine learning, to obtain state-of-the-art accuracy and an in-depth analysis of the source and type of intrusion. Advantages: the survey of 95 developments in intelligence-based intrusion detection techniques gives the study significant relevance and a ground for comparative analysis with the proposed technique. Disadvantages: although the study shows optimal accuracy, false-positive results still hamper its cogency.

3.2. Distance-Based Approach. The distance-based approach (DBA) is an effective and efficient MCDM method. The initial step of the proposed method is identifying and defining the optimized state of the multiple attributes that are part of the process. The optimal state, represented by the vector OP, is the set of best values of the criteria over the range of alternatives. Depending on the type of criterion, the best value can be a maximum or a minimum. As indicated in Figure 1, the vector OP is the optimal point in a multidimensional space. It acts as a reference point against which the values of all the alternatives are compared quantitatively. In other words, the arithmetical difference between the current values of the alternatives and their corresponding optimal values is taken, which represents the ability of the considered alternatives to achieve the optimal state. The decision-making problem then amounts to searching for a feasible solution on the basis of its proximity to the optimal state. In Figure 1, H represents the feasible region and Alt an alternative. The distance-based technique aims at determining the point in the H region that lies in closest proximity to the optimal point.

To implement this approach, let i = 1, 2, …, n index the alternatives and j = 1, 2, …, m index the selection attributes. A matrix representing the entire set of alternatives along with their respective criteria is created, as shown in (1). This matrix is known as the decision matrix [d]. Next, the priority weights of the attributes are taken according to the opinions of various experts and averaged. The sum of these averages is then taken, and each average divides this sum. The result is a matrix with one row and as many columns as there are attributes, known as the priority weights matrix [PW], as shown in (2). Using Equations (3), (4) and (5), the decision matrix is standardized to minimize the impact of the different units of measurement and to simplify the process: in (3), the mean of each attribute over all alternatives is computed; in (4), the standard deviation S_j of each attribute over all alternatives is computed; and in (5), each raw value d_ij is converted to its standardized counterpart d′_ij. The final matrix is known as the standardized matrix [d′] and is represented in (6). The best value of each attribute is then selected over the set of alternatives; the best value can be a maximum or a minimum, depending on the type of attribute. The matrix formed by this set of values is known as the optimal matrix [O], as shown in (7). The distance of each alternative from its optimal state is calculated as the numerical difference between the value of each of its attributes and the corresponding optimum. The resulting values form the distance matrix [O′], as represented in (8).
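Since the display equations themselves are referenced but not reproduced above, they can be written out from the prose definitions as follows; this is a reconstruction consistent with the text rather than a verbatim copy of the original typesetting (here, w̄_j denotes the expert-averaged weight of attribute j):

```latex
\begin{aligned}
&[d] = \bigl(d_{ij}\bigr)_{n\times m} && (1)\\
&PW_j = \frac{\sum_{k=1}^{m}\bar{w}_k}{\bar{w}_j},\qquad [PW]=(PW_1,\dots,PW_m) && (2)\\
&\bar{d}_j = \tfrac{1}{n}\textstyle\sum_{i=1}^{n} d_{ij} && (3)\\
&S_j = \sqrt{\tfrac{1}{n}\textstyle\sum_{i=1}^{n}\bigl(d_{ij}-\bar{d}_j\bigr)^2} && (4)\\
&d'_{ij} = \bigl(d_{ij}-\bar{d}_j\bigr)/S_j && (5)\\
&[d'] = \bigl(d'_{ij}\bigr)_{n\times m} && (6)\\
&O_j = \operatorname*{best}_{i}\, d'_{ij}\ (\max\text{ or }\min),\qquad [O]=(O_1,\dots,O_m) && (7)\\
&O'_{ij} = O_j - d'_{ij},\qquad [O'] = \bigl(O'_{ij}\bigr)_{n\times m} && (8)
\end{aligned}
```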
Each value of this matrix is then squared and multiplied by the corresponding priority weight, as expressed by Equation (9). The resulting matrix is called the weighted distance matrix [W], as shown in (10). Equation (11) is used to calculate the composite distance CD between each alternative and the optimal state; the one-column matrix formed as a result is called the composite distance matrix [CD], as shown in (12). The last step of the method ranks the alternatives by their composite distance values: the smallest value gets the 1st rank, the second smallest the 2nd rank, and so on. This is how the DBA MCDM approach is used for cloud service provider selection; a runnable sketch of the whole pipeline is given after the implementation steps below. Figure 2 represents the model development of the methodology.

Estimation-of-Distribution Algorithms. These algorithms are general metaheuristics applied in optimization that represent a recent alternative to classical approaches [21]. EDAs build probabilistic models of promising solutions by repeatedly sampling and selecting points from the underlying search space. They typically work with a population of candidate solutions to the problem, starting with a population generated according to the uniform distribution over all admissible solutions [22]. Many distinct approaches have been proposed for the estimation of the probability distribution.

Implementation, Results, and Discussions
Evaluating the various cloud service providers using the DBA (distance-based approach) methodology with Fuzzy Set Theory to calculate ranks based on the selection attributes is described by the following steps:

(2) Identification of selection attributes: three major groups of factors were identified, namely, quality factors, technical factors, and economic factors. They were further classified as quality factors (functionality, reliability, usability, efficiency, maintainability, and portability), technical factors (storage capacity, CPU performance, memory utilization, platform design, and network speed), and economic factors (service induction cost, maintenance cost, and promotion cost), after detailed analysis and intensive study of the cloud service providing industry, its prerequisites, and the market in which this industry thrives (Figure 3).

Fuzzy logic distinguishes real-world problems on the basis of human comprehension rather than absolute Boolean logic. In other words, a fuzzy system implements scales rather than 0/1 values for coherent human understanding, where 0 represents absolute falsity, 1 represents absolute truth, and the middle values represent the fuzziness, i.e., the fuzzy values. In this study, we have implemented a triple fuzzy number scale, which uses a triplet of the form [a, b, c] with a sensory scale (Tables 2 and 3) [28].

(3) Conducting expert surveys: a survey was conducted among a group of 40 selected experts associated with the technical field. The first questionnaire (Table 2) consisted of 14 pristine questions, based upon which a priority weights matrix was created, consisting of the weights or values of the assorted attributes. In the second questionnaire (Table 3), the nine already selected CSPs were appraised on the grounds of the 14 categorized selection attributes by an adept team of 5 experts.
The extracted data from the questionnaires mentioned above were converted from a literal scale to the TFN (Triple Fuzzy Number) scale and then averaged to a fuzzy number.

(4) Determination of weights and performance ratings: the expert-assigned linguistic terms were first converted into the corresponding TFNs using the fuzzy scale and then defuzzified to get crisp score values (see the defuzzification sketch after this list). The data were extracted from the questionnaires and then evaluated using a combination of mathematical formulas and the concepts of aggregation and averaging.

(5) Creating performance rating matrices: a decision matrix of the performance ratings (Figure 4(a)) and a single-row matrix of the priority weights (Figure 4(b)) were created under expert guidance using the fuzzy scale and MCDM.

(6) Calculating the standardized matrix: the root mean square of each selection attribute is carefully evaluated; the previously determined mean is then subtracted from each value, and the result is divided by the corresponding root mean square of that particular selection attribute to get the standardized matrix (Figure 5).

(7) Creating the optimal and distance matrices: the optimal matrix is estimated by targeting the best values of each selected attribute of the standardized matrix (Figure 6(a)), i.e., the maximum values for quality and technical factors and the minimum values for economic factors. Additionally, the distance matrix is calculated by finding the distance between each value of a particular selection attribute and its corresponding best value (Figure 6(b)).

(8) Calculating the weighted and composite distance matrices: by squaring the respective values of the distance matrix and multiplying them by the corresponding priority weights, the weighted distance matrix is obtained (Figure 7(a)). This matrix is then used to evaluate the composite distance matrix by calculating the square root of the total sum for each alternative (Figure 7(b)).

(9) Ranking of cloud service providers: finally, the alternatives are ranked in increasing order of their corresponding values in the composite distance matrix. Therefore, the lowest rank, rank 1, is the most preferable, while the maximum rank, i.e., rank 9, is the least preferable within the given set of alternatives (Table 4). The full pipeline is sketched in code after this list.

The selection of cloud service providers is a problematic task, as many decision-making parameters are taken into consideration: security, cost optimization, availability, reliability, and fault tolerance, to name a few. Most of these factors are not constant but individualistic: every consumer who requires a cloud service provider has an almost unique set of demands and requisites, and each attribute in the selected set carries a different weight from the others, i.e., prioritized attributes are not rare. In this scenario, the Multicriteria Decision-Making technique has shown significant efficacy and is widely used in the field, as it provides both individualistic and concurrent results. Table 4 shows the ranking of nine cloud service providers based on fourteen carefully discerned attributes categorized into three groups, namely, quality factors (functionality, reliability, usability, efficiency, maintainability, and portability), technical factors (storage capacity, CPU performance, memory utilization, platform design, and network speed), and economic factors (service induction cost, maintenance cost, and promotion cost).
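Step (4) hinges on converting linguistic ratings to TFNs and defuzzifying them. A minimal Python sketch is given below; the five-term scale and the centroid defuzzification rule are illustrative assumptions, since the exact scale of Tables 2 and 3 is not reproduced here:

```python
# Centroid defuzzification of triangular fuzzy numbers [a, b, c].
# The linguistic scale below is illustrative, not the paper's exact Table 2/3 scale.
FUZZY_SCALE = {
    "very low":  (1, 1, 3),
    "low":       (1, 3, 5),
    "medium":    (3, 5, 7),
    "high":      (5, 7, 9),
    "very high": (7, 9, 9),
}

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number: (a + b + c) / 3."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def aggregate(ratings):
    """Average the experts' TFNs component-wise, then defuzzify."""
    mean_tfn = tuple(sum(t[k] for t in ratings) / len(ratings) for k in range(3))
    return defuzzify(mean_tfn)

expert_ratings = ["high", "very high", "medium", "high", "high"]  # 5 experts, one attribute
crisp = aggregate([FUZZY_SCALE[r] for r in expert_ratings])
print(f"crisp score: {crisp:.2f}")
```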
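Putting Equations (1)-(12) together, the whole ranking pipeline of steps (5)-(9) fits in a few lines of NumPy. The sketch below uses random placeholder scores and weights (stand-ins, not the survey data of Tables 2 and 3) and assumes the last three attributes are the economic (cost) factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_alt, n_attr = 9, 14                      # 9 CSPs, 14 selection attributes
d = rng.uniform(1, 9, (n_alt, n_attr))     # decision matrix [d] (placeholder scores)
w_avg = rng.uniform(1, 9, n_attr)          # expert-averaged weights (placeholder)

# Priority weights (2): the sum of the averages divided by each average.
PW = w_avg.sum() / w_avg

# Standardization (3)-(6): subtract the column mean, divide by the column spread.
d_std = (d - d.mean(axis=0)) / d.std(axis=0)

# Optimal matrix (7): maxima for benefit attributes, minima for cost attributes.
is_benefit = np.array([True] * 11 + [False] * 3)   # last 3 = economic (cost) factors
O = np.where(is_benefit, d_std.max(axis=0), d_std.min(axis=0))

# Distance (8), weighting (9)-(10) and composite distance (11)-(12).
O_dist = O - d_std
CD = np.sqrt((PW * O_dist**2).sum(axis=1))

# Ranking: the smallest composite distance gets rank 1.
ranks = CD.argsort().argsort() + 1
for i, (cd, r) in enumerate(zip(CD, ranks), start=1):
    print(f"CSP{i}: composite distance = {cd:.3f}, rank = {r}")
```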
The step-by-step procedure described above, in which the priority weights and the decision matrix are extracted from survey data using a fuzzy scale and then standardized and refined by the priority weights, proves to be a simple, effective, and reliable course of action for selecting the optimal cloud service provider. Figure 8 shows a graphical representation of the result.

Conclusion and Future Work
The current extensive use of cloud-based IoT services for computation, storage, infrastructure, and other needs has led to a greater demand for an efficient methodology for deciding which cloud service provider meets one's unique and individualistic demands for the ever-changing solutions in the field of IoT. Given the scenario of current cloud-based IoT applications, with multiple service providers and varied requirements, many decision-making criteria and methodologies already exist. These include TOPSIS, a useful and straightforward technique for ranking possible alternatives according to their closeness to the ideal solution; AHP; VIKOR, which is based on an aggregating fuzzy merit representing the closeness of an alternative to the ideal solution, compromising between two or more options to obtain a unified opinion across multiple criteria; and PROMETHEE, which compares the available measures by the technique of outranking. Despite these pre-existing techniques for classifying, evaluating, and rating IoT-based cloud service providers, the continuously changing set of attributes, challenges to users' privacy and security, user authentication, location privacy, disparate customer demands, and the colossal pool of available attributes ranging from performance and cost optimization to quality make it very difficult to obtain virtually concurrent results for one's ever-changing characteristics, even after careful study of the methodologies available in the literature. This research therefore meets all the scenarios, demands, and unique characteristics mentioned above by using an optimized matrix methodology aided by the distance-based approach (DBA). Some of its salient features are as follows:

(a) A broad set of categories, further graded into subattributes, i.e., quality (performance), technical, and economic factors, is individually optimized for ultimate efficacy.
(b) The concept and procedure are simple, straightforward, reader-friendly, and easily grasped by anyone.
(c) The result is obtained by taking into consideration the priorities among attributes as extracted from user survey data.

Conclusively, the presented research methodology uses a distance-based approach, optimized to consider priorities set by data extracted from a user survey, in a simple and lucid yet compelling procedure for selecting an ideal cloud service provider for IoT applications. This study considers nine alternatives, i.e., popular cloud service providers, and 14 attributes, i.e., deciding criteria.

Privacy and security are the two most pressing challenges in IoT applications as provided by cloud service providers, owing to the nascent nature of the field. Although IoT-based applications have already been explored from the aspects of privacy and security, implementing IoT applications via cloud-based platforms leads to a new set of possible threats.
In future work, this study intends to evaluate variegated cloud-based IoT platforms from the aspects of security and privacy by analysing them under the purview of three criteria. First, the future work will deal with users' individualistic threats to privacy and security, such as location privacy, breach of personal information, protection of users' hardware and software devices, and user profile authentication. Second, it will address privacy and security challenges for a multilevel organization, namely, secure route establishment, isolation of malicious nodes, self-stabilization of the security protocol, and preservation of location privacy. Lastly, this study will assay multiple case studies of breaches of security and privacy in leading cloud-based IoT platforms, to perform a comparative analysis of the same.

Data Availability
The data will be provided upon request to the evaluation team.

Consent
All the authors of this paper have participated voluntarily.
Super chirped rogue waves in optical fibers

The super rogue wave dynamics in optical fibers are investigated within the framework of a generalized nonlinear Schrödinger equation containing group-velocity dispersion, Kerr and quintic nonlinearity, and the self-steepening effect. In terms of the explicit rogue wave solutions up to the third order, we show that a rogue wave solution of order n can be shaped into a single super rogue wave state with a peak amplitude 2n + 1 times the background level, which results from the superposition of n(n + 1)/2 Peregrine solitons. In particular, we demonstrate that these super rogue waves involve a frequency chirp that is also localized in both time and space. The robustness of the super chirped rogue waves against white-noise perturbations, as well as the possibility of generating them in a turbulent field, is numerically confirmed, which anticipates their accessibility to experimental observation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Of particular interest here is the chirped Peregrine soliton [29], which can possess an extra doubly localized chirp while keeping the intensity features of the original Peregrine soliton. Such a chirped version of the Peregrine soliton is reminiscent of the well-established chirped soliton concept [30,31], which has been used for many practical purposes (e.g., compression [32], amplification [33], communication [34], etc.). In the past decade, intriguing rogue wave dynamics of higher hierarchy were also explored; these can be classified into two categories: super rogue waves and multi-rogue waves. While the multi-rogue wave, as its name implies, is a combination of multiple well-separated Peregrine solitons [35,36], by the super rogue wave [37] we mean a rogue wave whose intensity takes its maximum allowable value and is much stronger than that of Peregrine solitons. Typical experiments include those carried out by Chabchoub et al., who observed super hydrodynamical rogue waves in a water-wave tank [37], and by Baronio et al., who observed the vector dark 'three sisters' in a telecommunication fiber [38], to name a few. No doubt, the success of these experimental observations justifies the quest for higher-order rogue wave solutions [39-41].

In this article, we investigate the chirped version of higher-order rogue waves, termed super chirped rogue waves for their super high peak amplitude, within the framework of a generalized NLS equation that contains the group-velocity dispersion (GVD), the Kerr and quintic nonlinearity, and the self-steepening effect [42]. Such a generalized NLS equation, with different reductions, usually applies to the description of ultrashort pulse propagation in optical fibers [43,44] and also to the description of high-intensity pulse propagation [45], controllable self-steepening [46,47], and the generation of Cherenkov radiation [48] in quadratic crystals. In terms of the explicit rogue wave solutions up to the third order, we show that a rogue wave solution of order n can be shaped into a single super rogue wave state with a peak amplitude 2n + 1 times the background level, arising from the superposition of n(n + 1)/2 Peregrine solitons. In particular, we reveal that these super rogue waves involve a frequency chirp that is also localized in both time and space. The stability of the super chirped rogue waves, as well as the possibility of generating them in a turbulent field, is numerically confirmed, which anticipates accessibility to experimental observation.
Theoretical framework
The propagation of ultrashort pulses in a single-mode optical fiber can be modelled by the dimensionless cubic-quintic (CQ) NLS equation, Eq. (1) of Ref. [42], where E(z, t) is the normalized complex envelope of an optical pulse, and z and t are the distance and retarded time, respectively. Subscripts z and t stand for partial derivatives. While the constant coefficient 1/2 multiplies the GVD term, the coefficient σ denotes the Kerr nonlinearity, γ accounts for the pulse self-steepening effect (we assume γ ≥ 0 without loss of generality) [46,47], and μ relates to the nonlinearity dispersion, which can result in a self-frequency shift if μ is complex [43], and to the quintic nonlinearity, which is often found in highly nonlinear materials such as chalcogenide fibers [44]. In the cases of self-focusing, self-defocusing, and zero Kerr nonlinearity, σ can be normalized to 1, −1, and 0, respectively. In the context of fiber optics, the term |E|²E in Eq. (1) is often referred to as self-phase modulation (which is actually a temporal analog of self-focusing); the coefficient σ can then be scaled out to the GVD term, which is termed anomalous dispersion if σ > 0 and normal dispersion if σ < 0 [6,44]. It is worth noting that, to attain integrability [42], the last three terms on the left-hand side of Eq. (1) are related through the two real free parameters γ and μ. Besides, in order to weigh the nonlinearity factors that affect the chirped rogue wave dynamics, we have excluded from Eq. (1) the higher-order dispersion terms, which usually appear in the higher NLS equation hierarchy [49].

It is easy to show that this general integrable equation is equivalent to the compatibility of the Lax pair of the linear eigenvalue problem (2), where R = [r(z, t, λ), s(z, t, λ)]^T (T denotes the matrix transpose), and the matrices of the Lax pair are defined in Eq. (3), with λ being the free spectral parameter and σ_3 = diag(1, −1). The asterisk over the field variables signifies the complex conjugate, and 'diag' means a diagonal matrix.

For our present purpose, we consider the plane-wave solution of Eq. (1) as the initial potential, which can be defined by its amplitude a, wavenumber k, and frequency ω through E_0 = a exp[i(kz − ωt)] (Eq. (4)), under the dispersion relation (5). With this plane-wave potential, the linear eigenvalue problem (2) can readily be solved to obtain the eigenfunctions (6), where Γ_j (j = 1, 2, the same below) are arbitrary complex constants, G = diag(1, E_0*/a), and the phases φ_j and θ_j are defined through Eqs. (7)-(9).

An inspection of Eqs. (8) and (9) reveals that, if λ = (β/a + iγa)² ≡ λ_0, then φ_1 = φ_2 = φ_0 and θ_1 = θ_2 = θ_0, with φ_0 and θ_0 given by Eq. (10). This further implies that N_1 = N_2. Then, by choosing appropriate parameters Γ_{1,2} in Eq. (6), the ratio of the entry r of the column vector R to the other entry s can take a simple rational form (excluding the plane-wave exponential factor), which, according to the Darboux dressing formalism, results in the rogue wave solutions of Eq. (1) [60,61]. On the other hand, as is evident below, λ_0 must be complex, or equivalently, β must be real, which gives the parameter condition for the existence of a rogue wave. This condition is the same as that obtained using the theory of baseband MI [3,62,63], and it suggests that rogue waves can exist in both the self-focusing (or equivalently, anomalous dispersion) and the defocusing (normal dispersion) regimes, when the self-steepening effect, denoted by the parameter γ, comes into play [29,61]. As employed in [59], there is a more convenient way to obtain the rogue wave solutions.
To this end, one can perturb the spectral parameter λ around λ_0 as in Eq. (13), where ε is a complex perturbation parameter, and expand the constants Γ_{1,2} as in Eq. (14), where γ_j (j = 1, 2, …, 2n) are arbitrary complex constants (which should not be confused with the system parameter γ). Then, the factorized eigenvector Θ(λ) = G^{-1}R(λ) = Γ_1 N_1 + Γ_2 N_2 can be expanded in a Taylor series in powers of ε², Eq. (15), with series coefficients Θ^(m) = [R_m, S_m]^T exp[i(θ_0 z + φ_0 t)] of order m. As a result, the nth-order rogue wave solution can be expressed in the compact determinant form (16), where the dagger sign † indicates the complex-conjugate transpose, 'det' means taking the determinant of a square matrix, Y_j (j = 1, 2) are 1 × n row vectors defined through Eq. (17), and M is an n × n matrix with entries M_ij determined by Eq. (18). We would like to emphasize that the compact expression (16) for the nth-order rogue wave solution has, to the best of our knowledge, never been reported before, and it is distinctly different from the solution forms previously obtained for the NLS equations or their extensions [40,59].

Super chirped rogue wave dynamics
As an illustrative example of our general solution (16), we demonstrate in Fig. 1 the first-order (fundamental), second-order, and third-order rogue wave dynamics, respectively, obtained with the same set of system parameters a = 1, σ = 1, γ = 1, μ = 3/2, and ω = −1, but with different structural parameters γ_j. For convenience, the explicit solution forms for these three low-order rogue waves are provided in Appendix A (see Eqs. (A3)-(A5)). It is seen that, for the structural parameters specified in the caption, the fundamental rogue wave always takes the shape of a Peregrine soliton (see Fig. 1(a)), while the second-order and third-order rogue waves appear as a rogue wave triplet (see Fig. 1(b)) and sextet (see Fig. 1(c)), consisting of 3 and 6 Peregrine solitons, respectively. Depending on the relative values of the structural parameters γ_j, the multiple rogue wave dynamics can display patterns that are not as regular as those seen in Figs. 1(b) and 1(c). Generally, an nth-order rogue wave can evolve into at most n(n + 1)/2 Peregrine solitons, each with a peak amplitude three times the level of the background field [36]. (Fig. 1 caption, in part: (c) γ_2 = 1, γ_5 = 2000; in each case, the other γ_j not shown are set to zero.)

More interestingly, we find that, with a specific choice of the parameters γ_j, the nth-order rogue wave can reach a climax of 2n + 1 times the background height. In contrast to the multiple rogue wave dynamics mentioned above, this kind of rogue wave state manifests itself as a single main hump, and hence can be referred to as a super rogue wave [37] when n > 1. Physically, such a super rogue wave state results from the superposition of n(n + 1)/2 Peregrine solitons. Figure 2 shows the first three low-order rogue wave states on the same plane-wave background formed in the normal dispersion regime (σ = −1), which have peak amplitudes higher by factors of 3, 5, and 7, respectively, than the background height. In these plots, we have used the special sets of structural parameters given in the caption, which give rise to the unique super rogue wave states shown, after translations in the (z, t) plane. As one can check via Eqs. (A3)-(A5) in Appendix A, these rogue wave states cannot have a peak amplitude higher than their respective factors specified above, no matter what values of γ_j are used and no matter whether the nonlinear system is self-focusing or not.
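As a quick numerical sanity check on the threefold peak of a fundamental rogue wave, one can evaluate the textbook Peregrine soliton of the standard focusing NLS equation, i.e. the σ = 1, γ = μ = 0 reference case rather than the chirped solution (20) of this paper; a minimal Python sketch:

```python
import numpy as np

def peregrine(z, t):
    """Peregrine soliton of the standard focusing NLS on a unit background:
    E = [1 - 4(1 + 2iz) / (1 + 4t^2 + 4z^2)] * exp(iz)."""
    return (1 - 4 * (1 + 2j * z) / (1 + 4 * t**2 + 4 * z**2)) * np.exp(1j * z)

z, t = np.meshgrid(np.linspace(-5, 5, 801), np.linspace(-5, 5, 801))
amp = np.abs(peregrine(z, t))
print(f"peak amplitude / background = {amp.max():.3f}")  # -> 3.000, reached at (0, 0)
```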
In addition, unlike the symmetric super rogue waves in the NLS system [37,39], the super rogue waves associated with Eq. (1) are generally anti-symmetric in shape, as indicated in Figs. 2(b) and 2(c). Moreover, aside from the anti-symmetric amplitude (or intensity) distribution, these super rogue waves are generally endowed with an extra nonlinear phase as well, as implied by the exponential factor [−det(M†)/det(M)]^(μ/γ) in solution (16). More precisely, as will be shown below, this extra phase is caused by the exponential factor [−det(M†)/det(M)]^(μ/γ−1), i.e., it equals the phase of det(χM†) multiplied by 2(μ/γ − 1). It is time and space dependent, and hence leads the rogue waves to undergo a frequency shift (or chirping) during evolution. Such a chirping effect does not exist in many integrable systems, e.g., in the NLS equation [37], in the Maxwell-Bloch system [25], or in the Manakov system [59,62]. For this reason, the super rogue wave solutions discussed here can be termed super chirped rogue waves, to mark the distinction. In the following, we take a closer look at these super chirped rogue wave dynamics.

First of all, we find that the super rogue waves (including the fundamental solution) have a unique solution form, which does not involve any structural parameters γ_j. For instance, by performing the replacements z → z − z_0 and t → t − t_0, where z_0 and t_0 are the magnitudes of translation along the z and t axes given by Eq. (19), the fundamental rational solution (A3) of Appendix A simplifies to the form (20), with an extra phase Φ given by Eq. (21). Here Im and Re denote the imaginary and real parts of a complex number, respectively. Clearly, this simplified fundamental solution is independent of any structural parameter, displaying a 3-fold peak amplitude located at the origin and an extra phase Φ proportional to the factor 2(μ/γ − 1). Indeed, this rational solution is none other than the chirped Peregrine soliton solution obtained in [29] via a gauge transformation method. Noteworthily, as discussed in [29], the solution (20) also has an inherent phase caused by the complex term inside the square brackets, but that phase is intrinsic to all Peregrine soliton categories and thus will not be used to define what we mean by a chirped Peregrine soliton.

On the basis of the translations defined by Eq. (19), if one further sets γ_2 = 1 without loss of generality and expresses γ_3 accordingly, the second-order rogue wave solution (A4) reduces to the super rogue wave state (22), which involves an extra nonlinear phase Φ, again proportional to 2(μ/γ − 1), as written in Eq. (23). Here C, D, G, and H are real polynomials of z and τ ≡ t − (a²μ + ω)z. The polynomial C reads, in part,

C = … + 384a⁶β²γ³τz³ − 288(2a⁴γ² + β²)(a⁴γ² + β²)τ²z² + 384a⁶γ³τ³z − 48(3a⁴γ² − β²)τ⁴ + 36(28a⁸γ⁴ + 35a⁴β²γ² + 11β⁴)z² − 288a⁶γ³τz + 36(7a⁴γ² + 3β²)τ² + 9,

while

D = 192a²γ(a⁴γ² + β²)²(β²z² + τ²)²(a²γz − τ) + 96a²γ[a²γ(6a⁸γ⁴ + 13a⁴β²γ² + 9β⁴)z³ + (6a⁸γ⁴ + 15a⁴β²γ² + 3β⁴)τz² + 3a²γ(a⁴γ² − β²)τ²z − (3a⁴γ² + β²)τ³] + 36a²γ(11a²γz − 3τ);

the corresponding expressions for G and H are comparably lengthy and are not reproduced here. Obviously, the simplified rational solution (22) does not involve any structural parameters γ_j. As the polynomial C is positive definite for arbitrary system parameters, this solution can describe the super second-order rogue wave dynamics in either the anomalous or the normal dispersion regime.
It is easy to check that this super rogue wave will have a 5-fold peak amplitude, as shown in Fig. 2(b). In addition, it will undergo a frequency chirp, defined as in [44], which is also localized in both time and space. This chirp is different from that of traveling solitons, which is usually of tanh shape in the transverse dimension, namely, nearly linear across the pulse width [30,31]. Here, for the same reason that applies to the chirped Peregrine soliton, we do not consider the intrinsic chirping effect arising from the complex term inside the large round brackets in Eq. (22). Figure 3 shows the super second-order rogue wave solutions in the self-focusing (or anomalous dispersion) regime for the GI equation (µ = 0), the CLL-NLS equation (µ = γ), and the KN-NLS equation (µ = 2γ), respectively, with the other system parameters kept the same, i.e., a = 1, σ = 1, γ = 1, and ω = −1. All these super rogue waves have a 5-fold peak amplitude, and their spatiotemporal distribution extends as µ increases. Meanwhile, depending on whether the GI, CLL, or KN model is used, the chirp of these rogue waves exhibits a dark doubly localized structure, zero chirp, or a bright doubly localized structure, respectively, as suggested by the surface plots in the right column of Fig. 3.

As a limiting case, when γ = 0 (which corresponds to the KE equation scenario), it follows that D = 0 and \(\beta = a\sqrt{\sigma}\). The second-order rogue wave solution (22) then boils down to the form of Eq. (26), with

\[
C = 64\beta^6(\beta^2 z^2 + \tau^2)^3 + 48\beta^4(3\beta^2 z^2 - \tau^2)^2 + 36\beta^2(11\beta^2 z^2 + 3\tau^2) + 9.
\]

It is clear that the polynomial C is positive definite when σ > 0 but fails to be so if σ < 0. Therefore, only in the self-focusing (or anomalous dispersion) regime does the rational solution (26) represent a genuine rogue wave, as occurs in the NLS situation [3]. Besides, the extra nonlinear phase Φ leads to a frequency chirp that is likewise doubly localized. As seen in Fig. 4, this special super second-order rogue wave is symmetric in amplitude distribution (see left column), as in the NLS equation, but has a nonvanishing nonlinear phase (see middle column) and thus a doubly localized chirp (see right column) that is absent in the NLS equation. Naturally, if we further let µ = 0, the solution (26) reduces to that of the NLS equation, in which the nonlinear phase Φ vanishes. In a similar fashion, one can readily obtain the super third-order rogue wave solution from Eq. (16) or from Eq. (A5) in Appendix A, which will also be unique in form, although lengthy. It is easy to show that this simplified super rogue wave solution involves a 7-fold peak amplitude, as shown in Fig. 2(c), and, if µ ≠ γ, a nonlinear phase Φ that results in a chirp. Here, for the sake of brevity, we do not present this simplified yet lengthy solution.

Numerical simulations

We performed extensive numerical simulations to inspect the stability of super chirped rogue waves against white-noise perturbations, based on the split-step Fourier method [11]. Here, in contrast to the intuitive "stability" concept intended for usual solitons, we refer to the rogue wave as being stable if its structure can unfold without significant distortion over a rather long distance, irrespective of whether this type of wave packet is transient or not. As typical examples, we chose to simulate the GI and KE super chirped rogue waves whose structures are shown in Figs. 3(a) and 4, respectively.
We put the noise onto the initial profile by multiplying the real and imaginary parts of the optical field E by factors [1 + εrᵢ(x)] (i = 1, 2), respectively, where r₁,₂ are two uncorrelated random functions uniformly distributed in the interval [−1, 1] and ε is a small parameter defining the noise level. Figure 5 displays the numerical results, where, in order to reveal other periodic wave structures arising from MI, we used quite large noise levels in both situations, namely, ε = 0.01 for the GI super chirped rogue wave and ε = 0.02 for the KE super chirped rogue wave. The initial amplitude profiles used for our simulations are indicated by red lines in Figs. 5(a) and 5(d), each compared to the respective analytical solutions given at z = −2 and −5 (blue lines). Even under such large noise perturbations, these super chirped rogue waves can still propagate very neatly over a rather long distance, despite the onset of the spontaneous MI activated by the white noise, as seen in Figs. 5(b) and 5(e). To evaluate the consistency, we plotted in Figs. 5(c) and 5(f) the numerical amplitude profiles obtained at t = 0 (red lines), which agree very well with the analytical solutions (blue lines). Besides, we notice that, compared with its GI cousin, the KE super chirped rogue wave can recover from a larger noise level on a less unstable background and thus can propagate over a longer distance without significant distortion. For example, for the case shown in Fig. 5(e), such distortion-free propagation can unfold within around 8 dispersion lengths or, more intuitively, around 0.4 km for a 1.55 µm pulse of duration 1 ps propagating in a telecommunication fiber with a GVD of −20 ps²/km.

Further, we have also investigated the robustness of these super chirped rogue waves by inspecting whether they can appear spontaneously in a turbulent field. For this purpose, we have performed a number of simulations, integrating Eq. (1) numerically with an initial field defined by the plane-wave solution (4) perturbed by white noise of very low amplitude. This low-amplitude noise may produce a turbulent field via an MI process, and one can then monitor the maxima of the field amplitude over all t at each z value (i.e., |E(z)|_peak) so as to detect the presence of extreme waves in such a field. Here we again take the GI super chirped rogue wave shown in Fig. 3(a) as an example and use the same noise level ε = 0.01 as in Fig. 5(a). For some realizations, we typically obtained what is shown in Fig. 6(a): the continuous wave remains stationary for a short propagation distance and then develops exponentially into a turbulent field, as expected. Quite strikingly, in this turbulent field, one can clearly observe extreme peaks with amplitudes close to 5, which can be associated with super rogue waves. One of them can be seen at around z = 14 (see the yellow region). Figure 6(b) presents the evolution of the field amplitude around this z value within a suitably chosen narrow temporal interval; the rogue wave encircled by the black curve bears a strong resemblance to the GI super chirped rogue wave shown in Fig. 5(b) or Fig. 3(a), despite the random field surrounding it. Accordingly, as one might envision, this impressive robustness of super chirped rogue waves may enable them to be observed in realistic physical settings (e.g., in optical fibers), as long as the self-steepening effect functions properly.
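Schematically, the integration scheme described above can be sketched in a few dozen lines of Python. The model below is a stand-in NLS-type equation with Kerr, quintic, and self-steepening terms; the coefficients sigma, kerr, quintic, and steep, the dispersion sign, and the noise seeding are illustrative assumptions to be matched against Eq. (1) and the text, not the code used to produce Figs. 5 and 6.

```python
import numpy as np

# Periodic grid in the retarded time t; z is the propagation distance.
nt, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=T / nt)  # angular frequencies

# Schematic coefficients -- placeholders, NOT the coefficients of Eq. (1).
sigma, kerr, quintic, steep = 1.0, 1.0, 0.5, 0.2
a, eps = 1.0, 0.01  # background amplitude and noise level

def n_rhs(E):
    """dE/dz from the nonlinear part: Kerr + quintic + self-steepening."""
    I = np.abs(E) ** 2
    d_IE = np.fft.ifft(1j * w * np.fft.fft(I * E))  # d/dt of |E|^2 E, spectral
    return 1j * (kerr * I + quintic * I ** 2) * E - steep * d_IE

def step(E, dz):
    """One Strang-split step: half linear, full nonlinear (RK2), half linear."""
    half = np.exp(-1j * (sigma / 2) * w ** 2 * dz / 2)  # sign convention assumed
    E = np.fft.ifft(half * np.fft.fft(E))
    E = E + dz * n_rhs(E + 0.5 * dz * n_rhs(E))  # midpoint rule
    return np.fft.ifft(half * np.fft.fft(E))

# Plane-wave background with uniform white noise, in the spirit of the
# [1 + eps * r_i(x)] multiplicative seeding described in the text.
rng = np.random.default_rng(0)
E = (a * (1 + eps * rng.uniform(-1, 1, nt))
     + 1j * a * eps * rng.uniform(-1, 1, nt))

peaks = []
for _ in range(20000):
    E = step(E, dz=1e-3)
    peaks.append(np.abs(E).max())  # track |E(z)|_peak to spot rogue events
print(f"max |E| over the run: {max(peaks):.2f} (background a = {a})")
```

In practice one would scan the recorded peaks along z, as in Fig. 6(a), and compare windows where the maximum approaches the super-rogue-wave level against the analytical profile.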
Fig. 6. Numerical excitation of the GI super chirped rogue wave from a turbulent field under otherwise the same parameter conditions as in Fig. 3(a). Panel (a) shows the maximum peak amplitude chosen from a very large t window for each specific value of z, and panel (b) displays the evolution of the field amplitude around z = 14 within a narrow temporal interval, where a typical super chirped rogue wave has been singled out by the black curve.

Conclusion

We have studied the super rogue wave dynamics of optical pulses in optical fibers within the framework of a generalized CQ NLS equation that contains the GVD, the Kerr and quintic nonlinearities, and the self-steepening effect. With the help of the nonrecursive Darboux transformation technique, we have presented for the first time the nth-order rogue wave solution and, in particular, its explicit solution forms up to the third order. It is unveiled that a rogue wave solution of order n can be shaped into a single super rogue wave state whose peak amplitude is as high as 2n + 1 times the background level, resulting from the superposition of n(n + 1)/2 Peregrine solitons. More interestingly, we have found that these super rogue waves involve a frequency chirp that is also localized in both time and space. In addition, we have performed numerical simulations to confirm the stability of these super chirped rogue waves in spite of the onset of the spontaneous MI activated by white noise, and have demonstrated their numerical excitation from a turbulent field caused by a low-amplitude noise. In light of this impressive recurrence stability and the universality of the model used, we anticipate that these super chirped rogue waves can be observed in optical fibers, e.g., in highly nonlinear chalcogenide fibers [44], where the cubic-quintic nonlinearity (including the self-steepening effect) is important, while the higher-order dispersions beyond GVD can be ignored, for pulses in the picosecond range and for small propagation distances. We would also like to remark that such super chirped rogue waves could be observed as well in quadratic crystals (e.g., β-barium borate or periodically poled lithium tantalate crystals) in the high phase-mismatch cascading regime, which may produce a controllable self-steepening effect [45-47]. On the other hand, as Eq. (1) significantly generalizes such integrable models as the NLS equation, the CLL-NLS equation, the KN-NLS equation, the GI equation, and the KE equation, we expect that the universal solutions presented here might serve as a platform for exploring the rogue wave dynamics of many complex and non-integrable systems which, to a first-order approximation, are well described by the latter equations [64,65].

Appendix A - Explicit rogue wave solutions up to the third order

In this Appendix, we derive the explicit rogue wave solutions up to the third order from Eq. (16). Only the first three series coefficients Θ^(0,1,2) ≡ [R₀,₁,₂, S₀,₁,₂]ᵀ e^{i(θ₀z+φ₀t)} in Eq. (15) need to be determined; after some algebra, they can be found explicitly, with γⱼ (j = 1, 2, ..., 6) being arbitrary complex constants. Accordingly, the expressions of Y₁,₂ and M intended for these three low-order rogue wave solutions can be found as well, via Eqs. (17) and (18). Therefore, from Eq. (16), the explicit first-, second-, and third-order solutions (A3)-(A5) follow, where |[mᵢⱼ]| signifies the determinant of the involved matrix and the entries mᵢⱼ are defined accordingly. It should be noted that in the formulas for mᵢⱼ one can use \(\sqrt{\lambda_0} = \beta/a + i\gamma a\) and \(\sqrt{\lambda_0^*} = \beta/a - i\gamma a\), which have been separated into real and imaginary parts. As one may check, the solutions given by Eqs. (A3)-(A5) have no singularity problems.
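For reference, squaring the stated root yields the spectral value itself; this is simple algebra under the reading given above, not an equation taken from the source:

```latex
% From sqrt(lambda_0) = beta/a + i*gamma*a, squaring gives
\[
  \lambda_0 = \Big(\frac{\beta}{a} + i\gamma a\Big)^{2}
            = \frac{\beta^{2}}{a^{2}} - \gamma^{2} a^{2} + 2 i \beta \gamma ,
  \qquad
  \lambda_0^{*} = \frac{\beta^{2}}{a^{2}} - \gamma^{2} a^{2} - 2 i \beta \gamma .
\]
```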
Structural Correlates of Cytoplasmic and Chloroplast Lipid Body Synthesis in Chlamydomonas reinhardtii and Stimulation of Lipid Body Production with Acetate Boost ABSTRACT Light microscopy and deep-etch electron microscopy were used to visualize triacylglyceride (TAG)-filled lipid bodies (LBs) of the green eukaryotic soil alga Chlamydomonas reinhardtii, a model organism for biodiesel production. Cells growing in nitrogen-replete media contain small cytoplasmic lipid bodies (α-cyto-LBs) and small chloroplast plastoglobules. When starved for N, β-cyto-LB formation is massively stimulated. β-Cyto-LBs are intimately associated with both the endoplasmic reticulum membrane and the outer membrane of the chloroplast envelope, suggesting a model for the active participation of both organelles in β-cyto-LB biosynthesis and packaging. When sta6 mutant cells, blocked in starch biosynthesis, are N starved, they produce β-cyto-LBs and also chloroplast LBs (cpst-LBs) that are at least 10 times larger than plastoglobules and eventually engorge the chloroplast stroma. Production of β-cyto-LBs and cpst-LBs under the conditions we used is dependent on exogenous 20 mM acetate. We propose that the greater TAG yields reported for N-starved sta6 cells can be attributed to the strain's ability to produce cpst-LBs, a capacity that is lost when the mutant is complemented by a STA6 transgene. Provision of a 20 mM acetate “boost” during N starvation generates sta6 cells that become so engorged with LBs—at the expense of cytoplasm and most organelles—that they float on water even when centrifuged. This property could be a desirable feature for algal harvesting during biodiesel production. There is currently keen interest in cultivating eukaryotic algae as sources of triacylglycerides (TAGs) to be converted into diesel and jet transportation fuel (16,37,43,50). In the past 2 years, several laboratories, including ours, have reported that the unicellular green soil alga Chlamydomonas reinhardtii, in response to nitrogen (N) starvation, produces TAG-filled lipid bodies (LBs) (9,21,23,24,30,31,44,49,51), also called lipid droplets, oil droplets, and oil bodies. Since C. reinhardtii currently boasts the best-developed resources for algal molecular-genetic analysis and manipulation (14), it could serve as an important model organism for algal biodiesel research even if it eventually proves to be unsuitable as a production strain. The structural correlates of LB formation are poorly detailed in algae, in part because algal morphology tends to be poorly preserved when chemical fixatives are used. We therefore undertook an analysis of LB formation in C. reinhardtii using phase-contrast and bright-field light microscopy of living cells and deep-etch electron microscopy (DEEM) of quickfrozen living cells. We compared starch-forming strains, primarily a cw15 strain (herein designated the STA6 strain), with the sta6 strain, a starch-null mutant strain derived from the STA6 strain that has been shown to produce more LBs and TAG than starch-forming strains in several studies (23,49,51). We analyzed cells in log and stationary phases, in various stages of N starvation both in liquid medium and on agar plates, and in maturing zygotes. We also assessed the influence of exogenous acetate on LB formation. Results of ongoing collaborations to analyze TAG and gene expression profiles under these various conditions will be reported in separate communications. Our microscopic findings include the following. 
(i) Starch-forming and starchless cells growing in N-replete medium contain occasional small LBs in the cytoplasm (α-cyto-LBs) and occasional small plastoglobules in the chloroplast stroma that make punctate contact with thylakoids.
(ii) N-starved starch-forming cells greatly augment starch biosynthesis and β-cyto-LB production; β-cyto-LBs are intimately associated with both the outer membrane of the chloroplast envelope and the endoplasmic reticulum.
(iii) N-starved starchless cells augment β-cyto-LB production in the same fashion as starch-forming strains. In addition, they produce chloroplast LBs (cpst-LBs) that are far larger than plastoglobules and are commonly enclosed within one or more thylakoids.
(iv) Formation of both β-cyto-LBs and cpst-LBs is dependent on exogenous acetate.
(v) When given a 20 mM acetate "boost" after 2 days of N starvation, both the STA6 strain and the sta6 strain continue to augment their LB content until, after 7 to 9 days, they are filled with LBs at the expense of cytoplasmic organelles, notably the chloroplast, a condition we term "obese." Obese starchless cells float on water.

MATERIALS AND METHODS

Strains and culture conditions. Most experiments were conducted with the non-arginine-requiring cw15 strain CC-4349, described in reference 49, and the cw15 sta6 strain CC-4348 (Chlamydomonas Center), derived from the cw15 strain. For clarity, these strains are designated the STA6 strain and the sta6 strain in this report to emphasize that their key difference lies in their ability versus inability to synthesize starch. Wild-type (wt) C. reinhardtii has cell walls and flagella; the STA6 strain lacks both flagella and cell walls but engages in normal starch biosynthesis, while the sta6 strain, derived from the parental STA6 strain by insertional mutagenesis, is wall-less and flagellum-less, carries a deletion of the STA6 gene (which encodes the small subunit of ADP-glucose pyrophosphorylase [53]), and synthesizes no detectable starch (references 23 and 51 and our microscopic observations presented here). Complemented sta6 strains C2, C4, and C6 were kindly provided by David Dauvillée and Steven Ball (CNRS, Villeneuve d'Ascq, France). Zygotes were the products of matings between wt CC-125 and CC-621. Liquid cultures (75 ml in 150-ml Erlenmeyer flasks) were grown in phosphate-buffered high-salt medium (HSM) (45) containing 9.3 mM NH₄Cl as a nitrogen source and supplemented with 20 mM potassium acetate. Flasks were rotated at 125 rpm under continuous 30-μE illumination from five 20-W daylight fluorescent bulbs (GE F20T12/D). Plate-grown cells were maintained for 30 days on TAP (13) medium supplemented with 1.5% agar (Fluka). Zygotes were matured on N-free TAP plates. Cultures were inoculated from plates and grown to log phase (mean hemacytometer count of 48 cultures, 2.6 × 10⁶ ± 1.2 × 10⁶ cells/ml), pelleted at 800 × g, and resuspended in 75 ml HSM containing 20 mM acetate and lacking NH₄Cl. In some experiments, 1 ml of 1.5 M potassium acetate was added to cultures that had been N starved for 2 days (the 20 mM acetate boost). The pH of a freshly inoculated HSM+acetate culture is 7.0; during 5 days of N starvation with an acetate boost, the pH of the culture increases to 8 to 8.5.
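The "boost" arithmetic can be checked directly from the volumes stated above:

```latex
% Adding 1 ml of 1.5 M potassium acetate to a 75 ml culture:
\[
  \Delta c \;=\; \frac{1\,\text{ml} \times 1.5\,\text{M}}{75\,\text{ml} + 1\,\text{ml}}
  \;\approx\; 19.7\,\text{mM} \;\approx\; 20\,\text{mM},
\]
% i.e., the spike roughly doubles the initial 20 mM acetate concentration.
```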
Microscopy. For light microscopy, 750 μl of cell culture was pelleted at 800 × g and brought up in 15 μl of its own supernatant to generate dense fields of cells for photography (a procedure not possible for obese cells; hence, their images are more dispersed). Cells were examined and photographed using a Wild M20 phase-contrast bright-field microscope with a 40× objective, a 1.25× Variomag, and a 2.5× camera adapter (Canon EOS Rebel XTi). All fields were photographed at the same magnification. Calculations of numbers of LBs/cell (see Fig. 6) were made using micrographs of "popped" cells or intact cells that were sufficiently dry that their LB content could be readily scored. More than 2,000 light micrographs from 72 independent samples were examined for this study. For electron microscopy, live cells were pelleted at 800 × g or scraped from agar plates. Obese cells were recovered from the meniscus after centrifugation. Cells were layered onto cushioning material, dropped onto the surface of a helium-cooled copper block, fractured, etched, and rotary replicated with platinum and carbon using the protocols and apparatus developed by Heuser (15). More than 2,000 DEEM micrographs from 48 independent samples were examined for this study.

RESULTS

Light microscopy. Figures 1 to 5 present montages of living STA6 and sta6 cells visualized by phase-contrast and bright-field (Fig. 4) microscopy, all photographed and printed at the same magnification; additional images are presented in File 1 in the supplemental material. The immotile cells settle onto the glass slide without fixation, and since they lack cell walls, they flatten out as they dry, permitting high-resolution images. Eventually they "pop" when the plasma membrane lyses (49), depositing their starch and LBs (the STA6 strain) or their LBs (the sta6 strain) in situ on the slide. The two contractile vacuoles continue to pump until a cell pops, indicating that the cells are still operant during the drying process.

(i) N starvation for 2 days from log phase. Figure 1A shows sta6 cells in mid-log phase (2 × 10⁶ to 3 × 10⁶ cells/ml); Fig. 1B and C show sta6 cells after 1 and 2 days of N starvation from log phase in 20 mM acetate. The range in cell size reflects different stages of the cell cycle. Cell numbers in cultures increase after transfer to N-free medium (27), stabilizing at 0.8 × 10⁷ to 1 × 10⁷ after 1 day. Round luminous LBs are just visible in 1-day N-starved sta6 cells (Fig. 1B) and conspicuous in 2-day N-starved sta6 cells (Fig. 1C). Included in Fig. 1C (arrow) is a popped cell (49) displaying its LB content. A time course of popping cells, found in File 2 in the supplemental material, illustrates several important features of the popping process: when the cell membrane lyses, the LBs neither fragment nor fuse, nor do they change in size when they adsorb to the glass slide. STA6 cells make little starch during growth (Fig. 1D), but during the first 2 days of N starvation they produce abundant starch (44,49,51), visible in Fig. 1E as a rim of refractile granules around each cell perimeter and as brown clumped material in the cell interior. Larger brown spherical LBs are also evident in the cell interior (Fig. 1E, arrows).

(ii) N starvation of cells from log versus stationary phase. In our previous study (49), cells were N starved after entering stationary phase (~1 × 10⁷ to 2 × 10⁷ cells/ml). As will be described in detail elsewhere (R. Roth and U. Goodenough, unpublished data), STA6 and sta6 cells undergo an extensive autophagocytic program when they enter stationary phase, observable in Fig. 2A as large populations of cytoplasmic vacuoles containing polyphosphate granules (derived from organelles called acidocalcisomes [8,39]).
Stationary-phase cells produced smaller LBs after 2 days of N starvation than log-phase cells (compare Fig. 1C and 2B); moreover, they became moribund and tended to lyse after 2 days (Fig. 2B). Therefore, most of the images in this report depict cells that were N-starved from log phase.

(iii) N starvation of cells without acetate. Figure 2C shows sta6 cells that were N starved in the light for 2 days without acetate. The omission of acetate prevents the accumulation of any visible LBs in both the sta6 strain and the STA6 strain, although there is no inhibitory effect on starch accumulation (27). When the acetate-free cells are provided with 20 mM acetate after 2 days, they engage in robust LB formation during the following 2 days (see File 3 in the supplemental material).

(iv) N starvation of cultures grown in the absence of acetate. While both strains require exogenous acetate for N-stress-induced LB formation under the conditions employed, it is not necessary that the cells be cultivated in the presence of acetate. Figure 2D shows sta6 cells that were grown in the light in acetate-free HSM and then transferred in log phase to acetate-containing N-free HSM for 2 days. LB levels are comparable to those in cells that were grown in acetate-containing medium (Fig. 1C).

(v) Long-term maintenance of N-starved sta6 cells and effects of an acetate boost. Figure 3A shows sta6 cells N starved in acetate from log phase for 4 days. LB size clearly increased compared with that in 2-day cells (Fig. 1C), but when such cells were incubated for more than 4 days, they became moribund and lysed. However, when sta6 cells were given an additional 20 mM acetate (from a concentrated stock) after a 2-day N starvation in 20 mM acetate, they remained viable for up to 2 weeks, and their LBs continued to enlarge. An acetate boost also enhanced the LB size of sta6 cells first grown to log phase in minimal medium (see File 3 in the supplemental material). Figure 3B to E show N-starved, acetate-boosted sta6 cells after 4, 6, 8, and 10 days of culture. The LBs greatly increased in size with continued incubation. This is also evident in popped-cell fields (a gallery of popped-cell images is found in File 4 in the supplemental material) and in bright-field images (Fig. 4A to D), where LBs increasingly fill the cells until they appear "stuffed." We designate such cells as obese. When observed by phase-contrast microscopy, the LBs of obese sta6 cells are lighter in color at the cell perimeter than in the interior (Fig. 3). As documented with DEEM (see below), the LBs at the perimeter are located in the peripheral chloroplast (cpst-LBs), while those in the interior are located in the cytoplasm (β-cyto-LBs). The color differential is lost when the sta6 cells pop (Fig. 3D and E), indicating that it is a phase-optics effect. Beginning at 5 days after N starvation, obese sta6 cells became sufficiently lipid filled that they tended to float up to the meniscus of a culture tube, and they collected at the meniscus when a culture aliquot was centrifuged at 800 × g, 16,000 × g, or 100,000 × g (cells normally pellet at 800 × g). Full "floatability" was displayed by boosted sta6 cultures after 7 to 9 days. Obese sta6 cultures became increasingly yellow with long-term culture, during which time their thylakoids were all but eliminated (see below). Although the LBs continued to increase in size, cell size itself remained fairly constant, since the cells were losing both chloroplast and cytoplasmic volume (see below).
When aliquots of yellow cultures were inoculated into N-replete medium, at least some of the cells remained viable (quantitative analyses of viability are in progress). The culture turned white when the cells died (typically after 2 weeks), at which time the cells lysed and the LBs tended to clump.

(vi) Long-term maintenance of N-starved STA6 cells and effects of acetate boost. As documented with DEEM (see below), the STA6 strain did not make cpst-LBs, producing LBs only in the cytoplasm. After 4 days of N starvation, these LBs appeared as brown internal spheres (white arrows in Fig. 5A) surrounded by a rim of white refractile starch, and they were visibly larger than in the 2-day sample (Fig. 1E). When given an acetate boost after 2 days of N starvation, STA6 cells at 4 days displayed two differences from nonboosted cells (compare Fig. 5A and B): the LBs in the boosted cells were somewhat larger, and some were located at the periphery and hence appeared lighter (white arrows in Fig. 5B). These trends continued with longer incubations: at 9 days (Fig. 5C) and 14 days (Fig. 5D), the LBs were greatly enlarged, and many had the light color imparted by a peripheral location. Accompanying the increase in LB size and change in distribution, the cells retained their starch, which is very dense (46). Hence, the majority of STA6 cells continued to pellet at 800 × g up to 14 days. Obese STA6 cultures turn yellow and die more slowly than sta6 cultures, and the cells retain more cytoplasm (see below), possibly because the cells are provided with starch reserves.

(vii) Lipid bodies per cell. In a previous study (49) we quantitated the area of Nile red-stained LBs to obtain an estimate of LB yield. Since area increases as a function of the square of the radius (r²), whereas volume increases as a function of r³, area measurements become increasingly uninformative as LB volume increases, as occurs during the long-term studies reported here. TAG yield is therefore being evaluated with biochemist collaborators and will be reported elsewhere. Meanwhile, the light-microscopy images obtained in the current study are well suited to evaluation of the number of LBs per cell, regardless of size, under various induction conditions. Figure 6 plots these findings. The STA6 strain maintained a narrow range of numbers of LBs/cell (4 to 12) during the first 96 h, while the median increased somewhat with extended culture. The sta6 strain had 1.5 to 2 times more LBs/cell than the STA6 strain at each time point, presumably due to its cpst-LB population, and the range (6 to 25 LBs/cell) was considerably larger, but the median held steady at 12 to 15 LBs/cell. While not evaluated at all time points, sta6 cells complemented with STA6 transgenes showed an LB/cell distribution identical to that of the STA6 strain after 48 h of N starvation, with a range of 4 to 12, a mean of 7.2, and a median of 7 (n = 22).
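The r² versus r³ point can be made explicit for an idealized spherical LB (an illustrative calculation, not taken from the source):

```latex
% For a sphere of radius r imaged in projection:
\[
  A = \pi r^{2}, \qquad
  V = \tfrac{4}{3}\pi r^{3} = \frac{4}{3\sqrt{\pi}}\,A^{3/2},
\]
% so doubling r raises the measured area 4-fold but the TAG volume 8-fold,
% which is why area increasingly underreports yield as LBs enlarge.
```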
Deep-etch electron microscopy. Pellets of live cells were quick-frozen at liquid-helium temperatures, fractured, deep-etched, and replicated using Pt/C rotary shadowing (15). Cellular inclusions containing neutral lipid are unmistakable in such replicas: the fracture plane travels through their interior to create a blunt, smooth, featureless domain, much like a pat of butter. Additional DEEM figures are presented in File 5 in the supplemental material.

(i) Neutral-lipid-containing inclusions in N-replete log-phase cells. When N-replete C. reinhardtii cells are solvent extracted and analyzed biochemically, low levels of TAG are detected (23,31,44), as are low levels of Nile red-fluorescent bodies (49,51). DEEM identifies three morphological correlates of this "constitutive TAG": eyespot granules and plastoglobules in the chloroplast and α-cyto-LBs in the cytoplasm.

(a) Eyespot granules. Eyespot granules (Fig. 7A), 75 to 100 nm in diameter, associate to form an opaque shield behind a circular photosensitive patch of the plasma membrane (20). They contain orange carotenoids (Fig. 4), and the TAG profile of purified eyespots has been characterized (31). The granules tend to be hexagonal (Fig. 7A), suggesting that the fibrillin proteins in their encapsulating membranes or coats (41) may impose structural constraints.

(b) Plastoglobules. Plastoglobules (Fig. 7B), 50 to 150 nm in diameter, are round proteo-membrane-limited inclusions that are ubiquitous in land plant chloroplasts (5,18). Presumed homologous inclusions have been observed in the stroma of certain algae (reviewed in reference 20) and are evident in EM studies of C. reinhardtii from the Palade laboratory (4,7,34,40). They make punctate contacts with thylakoids in the fashion of their land plant counterparts (2). Their TAG content is inferred from their featureless fracture faces and from analyses of land plant plastoglobules (17), but nothing is yet known about their protein content or their TAG profile or about whether they contain pigments.

(c) α-cyto-LBs. α-Cyto-LBs (Fig. 7C and D) presumably correspond to the bodies occasionally visualized by Nile red or Bodipy fluorescence in N-replete cells (49,51). They are infrequent and small, ranging in size from 250 to 1,000 nm. Most are in contact with the endoplasmic reticulum (ER) and/or nuclear envelope (Fig. 7C), and some make contact as well with mitochondria (Fig. 7D) or acidocalcisomes, but chloroplast contact, when observed, is punctate. α-Cyto-LBs often localize between the chloroplast and the plasma membrane. They became uncommon after the onset of N starvation and were not encountered after 24 h, suggesting that they may serve to "seed" the β-cyto-LBs (see below). β-Cyto-LBs were also abundant in developing and mature wt zygotes (see File 6 in the supplemental material). In our previous study (49), β-cyto-LBs from STA6 and sta6 cells were purified, and their fatty acid methyl esters (FAMEs) and charged polar lipids were characterized. A fraction enriched in wt β-cyto-LBs was also analyzed biochemically by Moellering and Benning (31). β-Cyto-LBs invariably localized between the interior surface of the cup-shaped chloroplast and the nucleus (Fig. 8A), and they displayed close relationships with two membrane systems, the ER and the outer membrane of the chloroplast envelope (OMCE). Features of these relationships are shown in Fig. 8 to 10.

(iii) β-Cyto-LB relationships with ER. An element of the ER is almost invariably found in close association with one surface of a β-cyto-LB, a relationship also encountered in land plant seeds (42) and animal cells (11). Some cross-fractures (Fig. 8A, arrow; enlarged in Fig. 8B) provide excellent views of the continuity between the outer leaflet of the ER membrane and the lipid monolayer surrounding the β-cyto-LB, concordant with ER-LB topology in other systems (42). In many cases, the fractures reveal more extensive ER-β-cyto-LB associations, sometimes involving up to half the β-cyto-LB surface, with numerous punctate associations between the enfolding ER bilayer and the β-cyto-LB monolayer.
Figure 8C shows a "multitasking" ER cisterna coming off the nuclear envelope, blebbing off vesicles to the Golgi on the left side and making extended contact with a β-cyto-LB on the right side. Figure 8D shows a large ER cisterna giving off a narrow tubular element that makes extended β-cyto-LB contact. Figure 9A and B show additional features of the LB/ER relationship. In Fig. 9A, an ER membrane displays a signature array of intramembranous-particle (IMP) pits that form when transmembrane proteins are pulled out of the membrane during fracture. At its junction with an LB (arrow), a naked monolayer spreads over the LB, leaving the IMP pits behind. Figure 9B shows two β-cyto-LBs and two ER cisternae. The cisterna to the right makes lateral contact with the one LB and then feeds directly into the second LB. Such "direct feeds" were visualized ~20 times in the course of this study (additional examples are presented in File 5 in the supplemental material), suggesting that they are rare and/or transient occurrences. The cisterna to the left, coming off the nuclear envelope, makes the same kind of pit/no-pit junction as was noted in Fig. 9A. Such junctions support our earlier finding that purified β-cyto-LBs are devoid of protein (49).

(iv) β-Cyto-LB relationships with the OMCE. The OMCE is almost invariably closely associated with the non-ER-associated surface of a β-cyto-LB (Fig. 8 and 9); indeed, β-cyto-LBs typically "snuggle" into infoldings of the chloroplast surface (Fig. 8A, C, and D). Figure 10A to C show additional features of the β-cyto-LB-OMCE relationship. In Fig. 10A and B, the OMCE, which is virtually IMP free in N-starved cells (cf. Fig. 8A), appears to "flow" over the β-cyto-LB exterior in the manner described above for the flow of the ER membrane (Fig. 9A and B); an interpretation of these configurations is offered in the Discussion. Figure 10C shows these relationships in cross-fracture: a β-cyto-LB is in extensive contact with the nuclear envelope and ER on one face (upper arrow), while a second face makes a long association, with frequent punctate contacts, with the OMCE (left arrow).

(v) Relationship between α-cyto-LBs and β-cyto-LBs. Careful scrutiny of hundreds of DEEM micrographs recording the early hours of N starvation failed to yield examples of small nascent β-cyto-LBs sandwiched between ER and OMCE membranes; instead, when β-cyto-LBs were first encountered, at ~15 h, they were invariably already in the size range of large α-cyto-LBs (Fig. 7D). This observation suggests that in response to N starvation, α-cyto-LBs may seed the formation of β-cyto-LBs, recruiting the stable ER and OMCE associations entailed in the extensive β-cyto-LB enlargement that occurs at later stages.

(vi) Cpst-LBs: general features. In exhaustive analyses of growing and N-starved starch-producing cells in liquid medium, on agar plates sampled over the course of 30 days, and in zygotes, including three starch-producing sta6 strains complemented by STA6 transgenes, none was observed to contain cpst-LBs. In contrast, cpst-LBs are an invariant feature of N-starved sta6 cells. In a time course DEEM study, cpst-LBs were not detected at 2, 4, or 8 h after sta6 cells were N starved from log phase, but they were frequently encountered at 12 h, and they increased in size until they dominated the chloroplast stroma.
In our previous study (49), cpst-LBs were scored in popped-cell assays of the sta6 strain, but they did not contribute to our purified sta6 LB preparations, since the cell breakage procedure employed was designed to leave chloroplasts intact. Cpst-LBs are shown in Fig. 10A and 11A to C. Each cpst-LB is delimited by a membrane monolayer; examples are indicated with asterisks in Fig. 11A and B. In addition, most are at least partially enveloped by one or more thylakoids, a configuration we designate the "thylakoid wrap." Cross-fractured single wraps are indicated by arrows in Fig. 11A and B. In en face views of wraps (Fig. 10A and 11A and C), the IMP-free thylakoid membrane appears to flow over the cpst-LB surface, reminiscent of ER and OMCE relationships with β-cyto-LBs. Images such as the upper region of Fig. 11C suggest that the thylakoid membrane may also disassemble in conjunction with cpst-LB assembly.

Fig. 9. Membrane relationships between β-cyto-LBs and ER (additional images are presented in File 5 in the supplemental material). (A) β-Cyto-LB associated with an ER cisterna whose membrane flows over the LB surface. The arrow indicates the junction between the intramembranous-particle (IMP)-pit-rich and IMP-free domains. ce, OMCE; v, vacuole. sta6 cells were N starved for 12 h from stationary phase. Bar, 100 nm. (B) Two β-cyto-LBs, the right fractured through its TAG interior, the left along its surrounding monolayer. Right arrow, ER membranes feeding directly into the IMP-free LB monolayer; left arrow, IMP-pit-rich ER, extending off the nuclear envelope, continuous with the IMP-free LB monolayer. a, acidocalcisome; n, nucleus; th, thylakoids. sta6 cells were N starved for 30 h from stationary phase. Bar, 250 nm.

(vii) Relationship between cpst-LBs and plastoglobules. When first encountered in N-starved sta6 cells, cpst-LBs already measured 0.5 to >1 μm; small nascent cpst-LBs with thylakoid wraps were not observed, although these might be difficult to identify. This is reminiscent of our failure, noted above, to identify nascent β-cyto-LBs. Possibly, therefore, the small plastoglobules in the chloroplast stroma serve to seed cpst-LB formation in the sta6 strain, with their punctate thylakoid contacts shifting to the more extensive wrapped configurations (see Discussion).

(viii) Fine structure of cells subjected to extended N starvation and acetate boost. After 4 days of N starvation with an acetate boost (see above), the chloroplasts of obese STA6 cells are starch replete and contain extended thylakoids, and the β-cyto-LBs retain their canonical relationship with the ER and OMCE (Fig. 12A). After 14 days (Fig. 12B), STA6 chloroplasts are greatly reduced in size but retain some starch, and the remaining thylakoids are extended and display IMPs. While the enormous β-cyto-LBs often extend out to the cell surface, as previously noted by phase microscopy (Fig. 5), their relationship with the ER and OMCE persists. After 4 days of N starvation with an acetate boost, the thylakoids of obese sta6 cells lost their extended configuration and were severed into short segments (Fig. 13A), a phenotype that correlates temporally with the onset of yellowing of the culture. After 10 days, the chloroplasts containing severed thylakoids were reduced to small islands (Fig. 13B), and the cellular interior was largely LBs, where it was no longer possible to distinguish cytoplasmic and chloroplast species.
Figure 14 compares a 14-day-boosted STA6 cell with a 7-day-boosted sta6 cell, highlighting the extreme obesity achieved by the sta6 strain in a far shorter period. The only identifiable organelles in such sta6 cells are the nucleus, the chloroplast envelope (which retains its normal surface area even when devoid of thylakoids), the eyespot (Fig. 4D), and the plasma membrane.

DISCUSSION

The two strains differ in their starch content and size, their LB content and size, and the extent to which LBs fill cellular volumes. The light micrographs in Fig. 1 and 2 illustrate a feature of LB content noted in our previous report (49), namely, an unexplained range in LB number and LB area per cell in a given culture during the first 2 days of N starvation. Interestingly, a similar range is encountered in differentiating adipocytes in culture (25). When N starvation was continued beyond 2 days, the per-cell variation persisted (Fig. 3 to 5) even as the LBs became much larger.

Since LB area does not accurately translate into volume as LB size increases, LB area was not quantitated in this study, but four features of LB biogenesis were revealed by counting total LBs per cell regardless of their size (Fig. 6). First, sta6 cells had 1.5 to 2 times as many LBs as STA6 cells at all stages of induction, presumably reflecting their production of both β-cyto-LBs and cpst-LBs. Second, the range of numbers of LBs/cell after the first 2 days of N starvation was much greater in the sta6 strain than in the STA6 strain, suggesting that control of LB number is more stringent for β-cyto-LBs than for cpst-LBs. Third, the general overall similarity in numbers of LBs/cell with time for a given strain indicates that LBs do not increase in size during induction by fusing with one another. And fourth, the increase in TAGs/cell with N starvation was not accomplished by the de novo synthesis of more LBs but rather by the progressive filling of existing LBs.

Conditions conducive to LB formation. We found that N-stress-induced LB formation is acetate dependent in the STA6 and sta6 strains (Fig. 2C), as reported for the STA6 strain using TAG quantitation (9). In contrast, Li et al. (24) reported robust TAG production in the sta6 strain in the absence of acetate. A possible explanation for this discrepancy (24) is that the cells were exposed to 10-fold-higher light intensities in addition to N starvation, which may enhance photosynthetic contributions and/or induce additional stress. A distantly related species, C. monoica, is an obligate phototroph, incapable of exogenous acetate utilization, yet it produces abundant LBs after N starvation in low light (26, 38; K. VanWinkle-Swift and U. Goodenough, unpublished data). Hence the acetate-requirement trait is clearly plastic and putatively amenable to genetic and/or environmental manipulation. When acetate is present and log-phase STA6 or sta6 cells are N starved in the dark, LB production is compromised (9,24). The contribution of photosynthesis to final LB yield in C. reinhardtii may not be extensive, given reported decreases in CO₂ fixation rates (28), RuBisCO and cpst-ATP synthase levels (35), chlorophyll levels (44,51), thylakoid integrity (28, 31; also this study), and photosynthetic electron transport (6,24,51) during the course of N starvation. That said, the dark inhibition indicates that a feature(s) of photosynthesis, and/or perhaps an additional light-sensitive pathway(s), may be important to at least the early phases of LB production.
During the course of N starvation in C. reinhardtii, cells undergo extensive autophagy, influenced by the target-of-rapamycin (TOR) pathway (36), that initially involves the destruction of ribosomes (27,28) and then the dismantling of other organelles, notably the chloroplast thylakoids (this study), until each LB-engorged obese cell is reduced to a minimal set of organelles, a chloroplast envelope, and a nucleus (Fig. 12 to 14). We are currently collaborating with investigators who employ solid-state nuclear magnetic resonance (NMR) methods to ascertain the extent to which LB biogenesis utilizes carbon skeletons derived from photosynthesis versus exogenous acetate versus products of autophagy. In one scenario, autophagy may supply substrates to maintain cell viability while exogenous acetate feeds into LB production; in other scenarios, these tasks may be more evenly distributed. Importantly, while exogenous acetate is necessary for the production of β- and cpst-LBs under the conditions we employed, acetate is not necessary for the production of C. reinhardtii biomass; the LB output of cells grown phototrophically in minimal salt medium and then N starved in the presence of acetate was comparable to that of cells grown mixotrophically (Fig. 2E; also, see File 3 in the supplemental material). Hence, the energy and carbon needed to generate C. reinhardtii cellular biomass can be derived exclusively from photosynthetic electron transport and CO₂ fixation, an important consideration should this species be considered as a production strain for biodiesel.

Fig. 12. Extended N starvation and acetate boost of STA6 cells (additional images are presented in File 5 in the supplemental material). (A) A STA6 cell that was N starved for 4 days with an acetate boost. The chloroplast is replete with starch (s) and contains extended thylakoids (th) and a plastoglobule (arrow). ce, OMCE; n, nucleus. Bar, 500 nm. (B) STA6 cell that was N starved for 14 days with an acetate boost. The reduced chloroplast domain contains some thylakoid stacks (th), limited starch (s), and a plastoglobule (arrow). n, nucleus. Bar, 500 nm. Canonical β-cyto-LB interactions with the ER and nuclear envelope and OMCE (ce) can be seen in both panels.

Cytoplasmic LBs. LBs are often regarded as organelles associated with lipid storage in specialized tissues (e.g., adipocytes and seeds); however, as detailed in reviews (10,29,33,47), LBs are in fact ubiquitous and dynamic eukaryotic cellular components that feature in various aspects of intracellular lipid trafficking. In the C. reinhardtii cytoplasm, the small α-cyto-LBs behave as such dynamic organelles: they are encountered at low levels in cycling cells, they are restricted to a narrow size range, and they exist either as apparently single entities or in contact with the ER or other organelle membranes, possibly thereby receiving signals and/or substrates for expansion or breakdown. The small constitutive plastoglobules (Fig. 7B and 12A and B) may play analogous roles in lipid trafficking in the C. reinhardtii chloroplast. When subjected to N stress, the C. reinhardtii cytoplasm developed β-cyto-LBs that enlarged until, with an acetate boost, they eventually occupied most of the cytoplasmic volume (Fig. 3 to 5 and 12 to 14). In our previous study (49), the β-cyto-LBs from the STA6 strain and the sta6 strain were purified and analyzed biochemically (the purification protocol excluded cpst-LBs) and were found to have very similar fatty acid profiles.
The ER is well established as the primary locus of glycerol-3-P esterification to form diacylglycerides (DAGs) and TAGs (3); in land plants, the ER localization of diacylglycerol acyltransferase (DAGAT) and phospholipid:diacylglycerol acyltransferase (PDAT), which catalyze TAG synthesis, is under tight regulation (12). We observed a close association between one face of each β-cyto-LB and an ER cisterna, where continuities between the outer leaflet of the ER bilayer and the delimiting cyto-LB monolayer could be resolved (Fig. 8B). Unexpectedly, the opposite face of each β-cyto-LB was associated with the outer membrane of the chloroplast envelope (OMCE) (Fig. 8 to 10 and 12A and B), with membrane continuities also readily resolved. These observations suggest that both the ER membrane and the OMCE participate in β-cyto-LB biogenesis in C. reinhardtii. In green organisms, fatty acids are synthesized in the chloroplast and are then somehow shuttled to the ER for esterification to glycerol backbones (3,9,19). Evidence has recently been obtained in land plants for ER domains called PLAMs (plastid-associated membranes) that represent stable associations between the ER and the outer chloroplast envelope (1,48,52). We have not identified stable PLAM configurations in log- or stationary-phase C. reinhardtii, but they may exist transiently, allowing chloroplast-derived acyl coenzyme A to interact with ER enzymes to generate polar lipids without an intervening transit through the cytoplasm. Such configurations, in contrast, are highly stable in N-starved cells, the result being TAG-filled β-cyto-LBs enclosed in a monolayer of polar lipids apparently provided by the outer leaflets of both the OMCE and ER membranes. A model depicting these relationships is shown in Fig. 15. The monolayer lipids may occupy segregated domains as depicted in Fig. 15 or, more likely, may diffuse in the plane of the half-membrane to form a hybrid mixture. Recent data from Fan et al. (9) indicate that such a route may be traversed by chloroplast-derived DAG as well as by free fatty acids. Two biochemical observations support this model. We showed previously (49) that the two prominent polar lipids associated with purified β-cyto-LBs in C. reinhardtii are 1,2-diacylglyceryl-3-O-4′-(N,N,N-trimethyl)homoserine (DGTS), unique to ER membranes (3), and sulfoquinovosyldiacylglycerol (SQDG), unique to chloroplast membranes (3), suggesting that both organelles contribute polar lipids to the enclosing β-cyto-LB monolayer. We also found that ~10% of the lipid in purified β-cyto-LBs takes the form of free fatty acids. The morphological studies presented here are best interpreted as indicating that under conditions of N stress, α-cyto-LBs establish stable ER-OMCE relationships and enlarge to form β-cyto-LBs, as contrasted with the alternative possibility that β-cyto-LBs form de novo.

Chloroplast LBs. Cpst-LB formation in the sta6 strain is prominent during the 12- to 24-h period following N starvation from log phase, a period of intensive starch accumulation in wt strains (27,32,41,51), consistent with the hypothesis that the block in starch biosynthesis allows substrates and/or ATP/NADPH to shuttle into cpst-LB production. In our confocal microscopic assays (49), sta6 cells were calculated to contain three times the total LB volume of STA6 cells after a 2-day N starvation from stationary phase; enhanced TAG synthesis by the sta6 strain has also been reported by others (23,51).
While Siaut et al. (44) documented variability in the numbers of TAGs/cell of various C. reinhardtii strains under various induction conditions, the microscopic data presented here demonstrate that under the conditions we employed, the sta6 strain accumulates more (Fig. 6), and larger, LBs than the STA6 strain until late in the acetate boost regimen, at which time the STA6 strain has consumed most of its starch reserves and appears to catch up (Fig. 5 and 14).

Fig. 15. Proposed relationship between ER and chloroplast during β-cyto-LB formation. Fatty acids synthesized in the chloroplast stroma, and perhaps also DAG (9), diffuse along the LB-delimiting monolayer created jointly by the OMCE and ER; when they reach diacylglycerol acyltransferases (DAGAT) in the ER membrane, they are esterified to glycerol to form TAG, which then enters the LB interior.

To date, cpst-LBs have been observed only in the sta6 strain; we have not observed any cpst-LBs in starch-producing strains under a variety of conditions or in three sta6 strains complemented with STA6 transgenes. Recently, Fan et al. (9) published thin-section images showing sta6 cpst-LBs. They did not, however, examine images of starch-producing cells and hence failed to recognize that cpst-LBs do not form in wt C. reinhardtii under the induction conditions that they, and we, employed. In STA6 and sta6 cells, the chloroplasts contain small plastoglobules (Fig. 7B and 12A and B) that are presumably homologous to the plastoglobules detected in other algae (20,22) and land plants (5,18), where they are reported to contain TAG (17). The chloroplasts also contain carotenoid-rich eyespot granules (Fig. 7A) that contain TAG (31). Hence it seems likely, although it has not yet been demonstrated experimentally, that the C. reinhardtii chloroplast contains enzymes that catalyze TAG biosynthesis in N-replete cells. One possible explanation for the sta6 cpst-LB phenotype, therefore, is that substrates, reductants, and ATP normally directed to starch biosynthesis are diverted to a constitutive cpst-TAG-biosynthetic pathway under N-stress conditions. An alternative explanation is that the C. reinhardtii genome includes information for an inducible cpst-TAG-biosynthetic pathway that is expressed under N-stress conditions when starch biosynthesis is blocked (and possibly under other conditions as well). The finding that C. reinhardtii possesses two TAG-biosynthetic pathways, one of which is exposed or stimulated in the presence of a single gene mutation, expands opportunities for genetic and/or environmental manipulation of the LB trait in this species, and possibly in other algae as well. Morphologically, the sta6 cpst-LBs are very different from plastoglobules, growing to >1 μm in diameter and commonly wrapped with thylakoid membranes rather than making punctate thylakoid contact. As noted above for β-cyto-LBs, no DEEM images have been encountered that could be interpreted as showing small incipient cpst-LBs, consistent with the possibility that plastoglobules may serve as "seeds" for cpst-LBs. That said, the plastoglobule population of sta6 cells remains stable throughout cpst-LB production, as does the cpst-LB population (Fig. 6), so presumably only a subpopulation of plastoglobules would serve as such seeds. Thylakoids in apparent states of disassembly are also often encountered in proximity to cpst-LBs
(Fig. 11C), consistent with the possibility that their breakdown (via lipase or PDAT activity) may contribute to cpst-LB TAG synthesis. And presumably, the bulk of cpst-LB TAG biosynthesis takes advantage of pathways and substrates that otherwise would have participated in starch biosynthesis. Each cpst-LB is surrounded by a lipid monolayer, and each is also partially or totally wrapped by one or more IMP-free thylakoids. The images suggest that cpst-LB monolayers derive from thylakoid-wrap bilayers in a fashion similar to the derivation of β-cyto-LB monolayers from ER and OMCE bilayers. The structural complexity of this system is concordant with the thesis that cpst-LB formation in C. reinhardtii is an encoded trait and not simply the consequence of aberrations generated by the sta6 mutation.

Extended induction and the acetate boost. In our prior study using stationary-phase STA6 and sta6 cells (49), cell viability became compromised after 2 days of N starvation. In contrast, cells remain viable up to 4 days when N starved from log phase, the protocol employed in this study and also in other published studies of the C. reinhardtii LB system (9,21,23,24,30,31,44,51). Siaut et al. (44) documented a linear increase in the number of TAGs/cell in the STA6 strain during days 1 to 5 after N starvation from log phase, followed by a plateau; we found that the STA6 and sta6 strains become moribund after 4 days even when induced from log phase. If, however, either strain was given a 20 mM acetate boost after 2 days of N starvation, the cells remained viable for up to 2 weeks, and the sizes of both β-cyto-LBs and, in the sta6 strain, cpst-LBs increased dramatically until the cells became fully engorged, or obese (Fig. 3 to 5 and 12 to 14). Obese STA6 cells are hardier and die later than the sta6 cells, possibly because they only slowly deplete their starch reserves and retain normal-looking thylakoids. Neither STA6 nor sta6 cells changed in size while becoming obese, because their cytoplasmic contents, most prominently their large chloroplasts, undergo autophagy as the LBs enlarge. Given that C. reinhardtii cells can be induced to fill virtually their entire volume with LBs, further augmentation of acetate-promoted TAG yield with this species is most likely to be achieved by either increasing cell size or increasing cell number/ml. By 7 days after N starvation with an acetate boost, obese sta6 cells floated and failed to pellet with centrifugation at 100,000 × g. This was not the case for obese STA6 cells even after 14 days, presumably because they continued to contain starch (Fig. 4, 5, 12, and 14), which is very dense (46). The sta6 flotation property may have applications in cell harvesting, a key challenge in developing algae for biofuel production.
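A back-of-envelope density estimate makes the flotation behavior plausible. In the sketch below, all densities and volume fractions are assumed, literature-typical placeholder values chosen for illustration; none are measurements from this study.

```python
# Back-of-envelope check of why TAG-engorged sta6 cells float while
# starch-retaining STA6 cells pellet. Densities (g/ml) and volume
# fractions are assumed illustrative values, not data from the study.
RHO = {"tag": 0.92, "starch": 1.50, "cytoplasm": 1.05}

def cell_density(fractions):
    """Volume-weighted mean density; fractions should sum to 1."""
    return sum(RHO[name] * frac for name, frac in fractions.items())

obese_sta6 = {"tag": 0.80, "starch": 0.00, "cytoplasm": 0.20}
obese_STA6 = {"tag": 0.45, "starch": 0.25, "cytoplasm": 0.30}

for label, fractions in [("sta6", obese_sta6), ("STA6", obese_STA6)]:
    rho = cell_density(fractions)
    verdict = "floats" if rho < 1.0 else "pellets"
    print(f"{label}: ~{rho:.2f} g/ml -> {verdict} in water")
```

With these placeholder numbers the sta6 composite density comes out below 1 g/ml and the starch-retaining STA6 composite above it, mirroring the centrifugation behavior described above.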
Short-term single treatment of chemotherapy results in the enrichment of ovarian cancer stem cell-like cells leading to an increased tumor burden

Over 80% of women diagnosed with advanced-stage ovarian cancer die as a result of disease recurrence due to failure of chemotherapy treatment. In this study, using two distinct ovarian cancer cell lines (epithelial OVCA 433 and mesenchymal HEY), we demonstrate enrichment of a population of cells with high expression of CSC markers at the protein and mRNA levels in response to cisplatin, paclitaxel and the combination of both. We also demonstrate a significant enhancement in the sphere-forming abilities of ovarian cancer cells in response to chemotherapy drugs. These in vitro findings are supported by in vivo mouse xenograft models, in which intraperitoneal transplantation of cisplatin- or paclitaxel-treated residual HEY cells generated a significantly higher tumor burden than control untreated cells. Both the treated and untreated cells infiltrated the organs of the abdominal cavity. In addition, immunohistochemical studies on tumors from mice injected with cisplatin- or paclitaxel-treated residual cells displayed higher staining for the proliferative antigen Ki67, oncogenic CA125, epithelial E-cadherin, and cancer stem cell markers such as Oct4 and CD117, compared to tumors from mice injected with control untreated cells. These results suggest that a short-term single treatment of chemotherapy leaves residual cells that are enriched in CSC-like traits, resulting in an increased metastatic potential. The novel findings of this study are important in understanding the early molecular mechanisms by which chemoresistance and subsequent relapse may be triggered after the first line of chemotherapy treatment.

Introduction

Epithelial ovarian cancer (EOC) is the fifth most common cancer among women and is the leading cause of death among gynaecological cancers. Over 80% of women with EOC are diagnosed at a late stage, with dissemination of tumor implants throughout the peritoneal cavity [1]. The combination of cisplatin- and paclitaxel-based chemotherapy was introduced as a first line of treatment for the clinical management of advanced-stage ovarian cancer patients nearly 17 years ago [2]. Cisplatin is a DNA strand cross-linking drug that generates DNA damage leading to the activation of cyclin-dependent kinase inhibitors such as p21 and wee1/mik1, which subsequently arrest cells in either G1 or G2 phase [3]. Resistance to cisplatin has been associated with increased glutathione and metallothionein levels, decreased drug uptake, increased DNA repair (due to enhanced expression of excision repair enzymes) and tolerance of the formation of platinum-DNA adducts [4]. The status of p53 mutation plays a significant role in DNA repair, proliferative arrest and apoptosis, and there is a correlation between cancer cell p53 status and cisplatin sensitivity [5,6]. Paclitaxel, on the other hand, is a mitotic inhibitor that promotes the formation and stabilization of microtubules, leading to a cell cycle block at the metaphase-to-anaphase transition [7]. In contrast to cisplatin, the cytotoxic effect of paclitaxel is independent of p53 status [8], and alterations in β-tubulin isotypes have been associated with paclitaxel resistance in cancer cells [8]. Both cisplatin and paclitaxel, through distinct molecular mechanisms, trigger an apoptotic cascade resulting in the death of the majority of ovarian cancer cells.
In spite of this, approximately 80% of ovarian cancer patients experience incurable recurrent cancer within 6-20 months post-chemotherapy [1], as a consequence of the survival of a very small percentage of chemotherapy-resistant residual tumor cells which facilitate the development of recurrent progressive disease [1]. Concerted research efforts to tackle the failure of combination chemotherapy have resulted in no effective salvage strategies for the last 17 years [9]. Hence, there is increasing pressure to seek alternative approaches, which has resulted in the use of combinations of drugs that usually belong to the platinum or taxane families [9]. These alternative drug combinations have provided temporary hope to patients but have had no clinically effective outcome [9]. To establish an effective treatment protocol for advanced-stage ovarian cancer patients, a systematic approach is needed to understand the responses of ovarian cancer cells to platinum- and taxane-based drugs, individually and in combination. In vivo experiments, initially with each drug treatment, will provide insights into the molecules that facilitate the evasion of chemotherapy-associated cytotoxicity against each individual drug and the subsequent re-growth of tumor cells as recurrent tumor masses. This is particularly important for the large proportion of chemorefractory ovarian cancer patients who are resistant to platinum-based drugs and are normally prescribed taxane-based treatment. On the other hand, some ovarian cancer patients respond poorly to taxane-based drugs and develop serious side effects, in which case they are prescribed platinum-based treatment. We and others have recently demonstrated an association between chemoresistance and the acquisition of epithelial-mesenchymal transition (EMT) and CSC-like phenotypes in cancer [10-12] and found chemoresistant recurrent ovarian tumors to be enriched in CSCs and stem cell pathway mediators, suggesting that CSCs may contribute to recurrent disease [13,14]. The first involvement of stem cells in ovarian cancer was reported in the ascites of an ovarian cancer patient, derived from a single cell that could sequentially propagate tumors over several generations [15]. CSCs have also been isolated from ovarian cancer cell lines based on their ability to differentially efflux the DNA-binding dye Hoechst 33342 [16]. This population of cells, termed the 'side population' (SP), displayed classical stem cell properties in tumorigenicity assays. More recently, a population of normal murine OSE cells [17] has been identified with putative stem cell characteristics, indicating that these may be the originators of CSCs in the ovaries. A few other recent reports have shown the presence of CSCs in ovarian tumors as well as in patients' ascites [18-20]. CSCs in these studies were reported to be resistant to conventional chemotherapy and were able to recapitulate the original tumor in vivo, suggesting that these CSCs control self-renewal as well as metastasis and chemoresistance. In this study, we demonstrate that a short-term single exposure to chemotherapy (cisplatin, paclitaxel or both in combination) induced in surviving ovarian cancer cells a CSC-like profile which was independent of the type of chemotherapy and the associated cytotoxicity.
We further demonstrate that chemotherapy-surviving residual cells were able to generate tumors with greater capacity (tumor burden) than control untreated cells, and that they retained their inherent CSC-like profile in tumor xenografts. These novel findings emphasize the need to understand the CSC-like phenotype of ovarian tumors which may arise after the first line of chemotherapy treatment and may be crucial in facilitating the aberrant events leading to recurrent disease.

Cell lines

The human epithelial ovarian cancer line OVCA 433 was derived from the ascites of an ovarian cancer patient and generously provided by Dr Robert Bast Jr. (MD Anderson Cancer Centre, Houston, TX). The cell line was grown as described previously [11]. The human ovarian HEY cell line was derived from a peritoneal deposit of a patient diagnosed with papillary cystadenocarcinoma of the ovary [21]. The cell line was grown as described previously [22].

Treatment of ovarian cancer cells with cisplatin, paclitaxel and the combination of both

Ovarian cancer cell lines OVCA 433 and HEY were treated for 3-5 days with cisplatin and paclitaxel at the concentrations at which 50% growth inhibition was obtained (GI50). OVCA 433 cells were treated with cisplatin (5 μg/ml) for five days, and with paclitaxel (2 ng/ml) or the combination (2.5 μg/ml of cisplatin and 1 ng/ml of paclitaxel) for three days. HEY cells were treated with cisplatin (1 μg/ml) for five days, and with paclitaxel (1 ng/ml) or the combination (1 μg/ml of cisplatin and 1 ng/ml of paclitaxel) for three days. For combination treatment, samples were screened for response to different combinations of drug treatment, and the combination concentration which gave the GI50 value while maintaining the enhancement in the resistant phenotype (ERCC1 and β-tubulin expression) and cancer stem cell marker expression was chosen for experiments.

Immunofluorescence analysis

Immunofluorescence analysis of ERCC1 and β-tubulin isotype III was performed as described previously [13]. Images were captured by the photomultiplier tube (PMT) using the Leica TCS SP2 laser, and viewed on an HP workstation using the Leica Microsystems TCS SP2 software. The mean fluorescence intensity was quantified using Cell-R software (Olympus Soft Imaging Solutions).

Flow cytometric analysis

Flow cytometry was performed as described previously [23]. Briefly, untreated or chemotherapy-treated cells were collected and rinsed twice with phosphate buffered saline (PBS). 10^6 cells were incubated with primary antibody for 1 hr at 4°C, and excess unbound antibody was removed by washing twice with PBS. Cells were stained with secondary antibody conjugated with phycoerythrin for 20 minutes at 4°C, washed twice with PBS and then resuspended in 0.5 ml PBS prior to FACScan analysis. In each assay, background staining was detected using an antibody-specific IgG isotype. All data were analysed using Cell Quest software (Becton-Dickinson, Bedford, MA, USA). Results are presented as histogram overlays.

Sphere forming assay

The sphere forming ability of untreated and chemotherapy-treated OVCA 433 and HEY cells was determined as described previously [11]. Sphere formation was photographed over 21 days using a phase contrast microscope (Axiovert 100, Zeiss, Germany) and assessed with the DeltaPix Viewer software (Denmark). Cellular aggregates with a diameter larger than 50 μm were classified as 'spheres'.
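As a side note on the GI50 values quoted above, the sketch below (Python; the function, the interpolation-on-log-dose convention and the dose-response numbers are illustrative assumptions, not taken from the paper) shows one common way a GI50 can be read off a measured dose-response curve.

```python
import numpy as np

def gi50(doses, growth_pct):
    """Interpolate the dose giving 50% growth inhibition (GI50) from a
    dose-response curve (growth as % of untreated control). Doses are
    interpolated on a log scale, a common convention (an assumption here)."""
    doses = np.asarray(doses, dtype=float)
    growth_pct = np.asarray(growth_pct, dtype=float)
    # np.interp needs an increasing x-axis; growth falls as dose rises,
    # so sort the points by increasing growth before interpolating.
    order = np.argsort(growth_pct)
    return 10 ** np.interp(50.0, growth_pct[order], np.log10(doses)[order])

# Hypothetical cisplatin response for an OVCA 433-like line:
doses = [0.5, 1, 2.5, 5, 10]   # ug/ml
growth = [92, 80, 62, 48, 30]  # % of untreated control
print(f"GI50 ~ {gi50(doses, growth):.1f} ug/ml")
# ~4.5 ug/ml with these made-up numbers, in the range of the ~5 ug/ml
# GI50 quoted in the text for OVCA 433.
```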
RNA extraction and real-time PCR

RNA extractions were performed using Trizol (Life Technologies, USA) followed by the Qiashredder and RNeasy kits (QIAGEN, Australia) according to the manufacturer's instructions. The concentration and purity of RNA were determined using spectrophotometry (Nanodrop ND-1000 spectrophotometer, Thermo Scientific, USA), and 0.5 μg of RNA was used for cDNA synthesis. cDNA synthesis was performed using Superscript VILO (Invitrogen, Australia) according to the manufacturer's instructions. Quantitative determination of mRNA levels of various genes was performed in triplicate using SYBR green (Applied Biosystems, Australia) as described previously [13]. The primers for Oct-4A, Nanog, CD44, CD117 and EpCAM have been described previously [11].

Animal studies

Animal ethics statement

This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Health and Medical Research Council of Australia. The experimental protocol was approved by the Ludwig/Department of Surgery, Royal Melbourne Hospital and University of Melbourne's Animal Ethics Committee (Project-006/11), and was endorsed by the Research and Ethics Committee of Royal Women's Hospital, Melbourne, Australia.

Animal experiments

Female Balb/c nu/nu mice (age, 6-8 weeks) were obtained from the Animal Resources Centre, Western Australia. Animals were housed in a standard pathogen-free environment with access to food and water. HEY cells were treated with cisplatin and paclitaxel as described previously. 5×10^6 residual cisplatin- or paclitaxel-surviving cells, treated for 4 days, were injected intraperitoneally (ip) into nude mice. Mice were inspected weekly, and tumor progression was monitored based on overall health and body weight until one of the predetermined endpoints was reached. Endpoint criteria included loss of body weight exceeding 20% of initial body weight, anorexia, and general patterns of diminished wellbeing such as reduced movement and lethargy resulting from lack of interest in daily activities. Mice were euthanized, and organs (liver, stomach, lungs, gastrointestinal tract, pancreas, uterus, skeletal muscle, colon, kidney, peritoneum, ovaries and spleen) and solid tumors were collected for further examination. Metastatic development was documented by a Royal Women's Hospital pathologist according to histological examination (H & E staining) of the organs.

Immunohistochemistry of mouse tumors

For immunohistochemistry, formalin-fixed, paraffin-embedded 4 μm sections of the xenografts were stained using a Ventana Benchmark Immunostainer (Ventana Medical Systems, Inc, Arizona, USA). Detection was performed using Ventana's Ultra View DAB detection kit (Roche/Ventana, Arizona, USA) with the method described previously [24]. Briefly, tumor sections were dewaxed with Ventana EZ Prep, and endogenous peroxidase activity was blocked using Ventana's Universal DAB inhibitor. Primary antibodies against Oct4, Ki67, E-cadherin, vimentin, CA125, cytokeratin 7 and CD117 (c-Kit) were diluted according to the instructions provided by the manufacturer. The sections were counterstained with Ventana Haematoxylin and Blueing Solution. Immunohistochemistry images were taken using an Axioskop 2 microscope, captured using a Nikon DXM1200C digital camera and processed using NIS-Elements F3.0 software.

Statistical analysis

Student's t-test was used for the statistical analyses of sphere formation and qPCR data. Data are presented as mean ± SEM. A probability level of p < 0.05 was adopted throughout to determine statistical significance. Treatment groups were compared with the control group using one-way ANOVA and Dunnett's Multiple Comparison post-tests.
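For concreteness, here is a minimal sketch of the statistical comparisons just described (Python with SciPy; the sphere-count data are simulated and the group sizes are assumptions, not the paper's numbers; scipy.stats.dunnett requires SciPy >= 1.11).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical sphere counts per field (4 replicates each), illustrative only
control    = rng.normal(10, 2, 4)
cisplatin  = rng.normal(18, 2, 4)
paclitaxel = rng.normal(16, 2, 4)
combo      = rng.normal(14, 2, 4)

# Pairwise Student's t-test, as used for sphere formation and qPCR data
t, p = stats.ttest_ind(cisplatin, control)
print(f"t-test cisplatin vs control: p = {p:.4f}")

# One-way ANOVA across all groups, then Dunnett's test against the control
f, p_anova = stats.f_oneway(control, cisplatin, paclitaxel, combo)
res = stats.dunnett(cisplatin, paclitaxel, combo, control=control)
print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {res.pvalue}")
```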
Chemotherapy induced morphological changes in ovarian cancer cell lines

Treatment with cisplatin resulted in a loss of cell polarity in epithelial OVCA 433 cells, consistent with the fibroblast-like, spindle-shaped morphology of treated cells described previously [11,12] (Figure 1A). Due to the inherent mesenchymal morphology of HEY cells, changes in mesenchymal morphology in response to cisplatin treatment were not prominent in HEY cells (Figure 1B). On the other hand, treatment with paclitaxel resulted in the appearance of rounded epitheloid cells within three to five days in both cell lines (Figures 1A-B). The change to epithelial morphology in response to paclitaxel was more prominent in HEY than in OVCA 433 cells, due to their initial mesenchymal appearance. Some HEY cells seemed to undergo dramatic cellular enlargement, up to five-fold (approximately 50 μm in diameter) greater than the control untreated cells. This may be due to the formation of multi-nucleated cells in response to paclitaxel treatment, which may result from the inhibition of the mitotic cycle at the metaphase-to-anaphase stage, i.e. when the cell fails to divide into two daughter cells even though the distribution of centrosomes/nucleosomes to the daughter cells has occurred. Morphological changes in response to cisplatin or paclitaxel were dose-dependent (data not shown). Cisplatin-induced morphological changes were evident at concentrations between 1-10 μg/ml (GI50 ≈ 5 μg/ml) for OVCA 433 cells. However, HEY cells responded to a much lower cisplatin concentration of 0.5-5 μg/ml (GI50 ≈ 1 μg/ml) (Figures 1A-B). On the other hand, paclitaxel-induced epithelial morphology was evident at a concentration of 0.5-2.5 ng/ml (GI50 ≈ 2 ng/ml) for OVCA 433 cells, and 0.1-2 ng/ml (GI50 ≈ 1 ng/ml) for HEY cells. A similar change to epithelial morphology in clones of surviving cells, but to a greater extent than that observed with paclitaxel only, was evident after combination treatment (cisplatin + paclitaxel). Both OVCA 433 and HEY cells demonstrated discrete epithelial colonies and very few mesenchymal cells, which were scattered in between epithelial cells (Figures 1A-B). Different concentrations of combination treatments were tried, but as described previously [11], drug concentrations at or below the GI50 value were used for further study.

Chemotherapy induces the expression of cisplatin and paclitaxel resistant phenotypes

In order to determine if the morphological changes induced by cisplatin and paclitaxel were consistent with the chemoresistant phenotype of ovarian tumors described previously [25,26], we evaluated the expression of ERCC1 and β-tubulin isotype III by cancer cells which survived cisplatin, paclitaxel and combination treatments, using immunofluorescence. Compared to untreated control cells, enhanced expression of ERCC1 was evident in cisplatin, paclitaxel and combination treated HEY cells (Figure 2). Enhanced β-tubulin isotype III staining was also evident in HEY cells surviving cisplatin, paclitaxel and combination treatment (Figure 2A). In most cases, the same population of residual cells stained for ERCC1 and β-tubulin isotype III after the three treatments, suggesting cross resistance to cisplatin and paclitaxel in HEY cells.
However, β-tubulin isotype III was more dominant in paclitaxel and combination treated cells (Figure 2B). The expression of ERCC1 was confined mainly to peripheral membranes in most cells, and few cells displayed nuclear staining. In response to paclitaxel treatment, an increase in the expression of β-tubulin isotype III was evident on the peripheral membrane as well as in the nucleus of the surviving cells (Figure 2B). However, there was more nuclear than membrane β-tubulin isotype III staining after combination treatment (Figure 2B). OVCA 433 cells demonstrated a similar ERCC1 and β-tubulin isotype III staining pattern (Additional file 1: Figure S1). Quantitative measurement demonstrated significant enhancement in the expression of ERCC1 in both HEY and OVCA 433 cells in response to cisplatin treatment (Figure 2 and Additional file 1: Figure S1). The expression of ERCC1 was significantly higher in paclitaxel and combination treated OVCA 433 cells, but this was not evident in HEY cells under similar treatment conditions (Figure 2 and Additional file 1: Figure S1). On the other hand, β-tubulin isotype III expression was significantly higher in paclitaxel treated OVCA 433 and HEY cells. No change in β-tubulin isotype III expression was observed in cisplatin and combination treated OVCA 433 cells, while significant enhancement in expression was observed in cisplatin and combination treated HEY cells compared to control untreated cells (Figure 2 and Additional file 1: Figure S1).

Figure 2. Expression and immunolocalization of (A) ERCC1 and (B) β-tubulin isotype III in the HEY cell line in response to cisplatin, paclitaxel and combination treatment. The images were evaluated using mouse monoclonal (green) and rabbit polyclonal (red) antibodies as described in the Materials and methods section. Cellular staining was visualized using secondary Alexa 488 (green) and Alexa 590 (red) fluorescently labelled antibodies. Nuclear staining was visualized using DAPI (blue). Images are representative of three independent experiments. Magnification 200x; scale bar = 10 μm. (C) The mean fluorescence intensity was quantified using Cell-R software (Olympus Soft Imaging Solutions). Significant variations between the groups are indicated by ** P < 0.01, *** P < 0.001.

Chemotherapy enhances the expression of CSC markers

Recently, a CSC-like phenotype has been demonstrated in drug resistant ovarian cancer cell lines [16,27] and also in primary and metastatic ovarian cancer cells from patients [14,19,28]. In order to assess the status of this phenomenon in response to cisplatin, paclitaxel and combination chemotherapy treatments, we assessed the cell surface expression of some known CSC markers [18] by flow cytometry in OVCA 433 and HEY cells. Moderate to low expression of CD44, CD24, CD117, CD133 and EpCAM was evident by flow cytometry in OVCA 433 and HEY cells (Figure 3 and Additional file 2: Figure S2). The expression of CD24, CD117, CD133 and EpCAM increased in HEY cells with cisplatin, paclitaxel and combination treatments, while there was no change in the expression of CD44 in response to cisplatin and combination treatments (Figure 3). Paclitaxel treatment, on the other hand, resulted in a decrease of CD44 expression in HEY cells. In OVCA 433 cells there was an increase in the expression of CD44, CD24, CD117, CD133 and EpCAM in response to cisplatin, paclitaxel and combination treatments (Additional file 2: Figure S2).
However, the increase in CD44 was not pronounced in response to cisplatin. The CSC-like profile of drug-treated ovarian cancer cells was further assessed at the mRNA level by qPCR (Figure 4). Significantly enhanced mRNA expression of CD44, EpCAM, CD117, Oct4 and Nanog in response to paclitaxel and combination chemotherapy was observed in HEY cells (Figure 4). Although significant increases in the mRNA levels of CD44, CD117, Oct4 and Nanog were observed in response to cisplatin treatment, no enhancement in the expression of EpCAM was observed. Hence, the results obtained for EpCAM and CD44 in response to cisplatin treatment differed at the protein and mRNA levels (Figure 4). In OVCA 433 cells, however, the mRNA expression of CD44, EpCAM, CD117, Oct4A and Nanog was significantly enhanced under all three treatment conditions compared to untreated controls (Additional file 3: Figure S3).

Figure 4. mRNA expression of EpCAM, Nanog, CD44, CD117 and Oct4 in the HEY cell line in response to chemotherapy treatments (cisplatin, paclitaxel and combination). Cells were treated with or without chemotherapy, RNA was extracted, cDNA was prepared and qPCR was performed as described in the Materials and methods section. The resultant mRNA levels were normalized to 18S mRNA. The experiments were performed using four independent HEY samples in triplicate. Significant intergroup variations are indicated by * P < 0.05, ** P < 0.01, *** P < 0.001.

As sphere formation has been described as an important feature for the survival of ovarian CSCs [15], we evaluated the sphere forming abilities of control, cisplatin, paclitaxel and combination treated HEY and OVCA 433 cells (Figure 5 and Additional file 4: Figure S4). In long term cultures, control and chemotherapy treated cells demonstrated the ability to form spheres on low attachment plates (Figure 5 and Additional file 4: Figure S4). Within 21 days, the aggregates formed by cisplatin, paclitaxel and combination therapy treated cells took the shape of spheres with a defined outer rim and were significantly greater in number than those of control cells (Figure 5 and Additional file 4: Figure S4). However, the majority of the spheres formed by paclitaxel-treated HEY cells were much bigger in size than the spheres generated from cisplatin or combination treated cells. This was due to the aggregation of relatively bigger multinucleated cells. Hence, the number of spheres with a diameter larger than 50 μm was lower than that of cisplatin or combination treated HEY cells in each field counted under the microscope (Figure 5). In response to combination treatment, cells produced viable spheres, but these were smaller than spheres formed by either cisplatin or paclitaxel treated cells. This may be due to the mixture of epithelial and mesenchymal cells, which may not have the inherent capacity to aggregate and form bigger spheres. Many cellular aggregates (spheres) formed from control untreated cells disaggregated within the 21 day time point, but those formed by drug-treated cells persisted, suggesting that chemotherapy-transformed residual cells have a greater ability to survive in anchorage independent conditions and are enriched in self-renewing capability compared to control untreated cells.

Residual cancer cells after chemotherapy treatments exhibited metastatic and CSC-like features in nude mice

In order to assess if the residual cancer cells after chemotherapy treatment retain tumorigenic abilities, an in vivo mouse intra-peritoneal (ip) HEY xenograft model was established.
Five out of six mice injected with untreated HEY cells developed solid tumors in the form of 3-4 small lesions (<0.5 cm^3) in the peritoneum within six to eight weeks. Tumors weighing 4.7 ± 1.1% of the total body weight were observed in all five cases (Figure 6). All twelve mice injected with the same number (5×10^6) of cisplatin or paclitaxel treated cells (n = 6 in each group) developed tumors at the same time as control untreated cells, but with significantly enhanced tumor burden: almost double for cisplatin treated cells (8.7 ± 2.1% of the total body weight) and three-fold for paclitaxel treated cells (13.32 ± 2.3% of the total body weight) (Figure 6). H & E staining of tumor-infiltrated organs generated by control and treated cells showed the epithelial morphology of the cells infiltrating the abdominal organs (Figure 7). Injected control cells infiltrated the liver, pancreas, stomach and colon of mice, but surrounded the kidney with no invasion (Figure 7A-B). Invasion into the liver and pancreas was common for cisplatin and paclitaxel treated injected cells (Figure 7A). Paclitaxel-treated HEY cells invaded the kidney, but invasion by the cisplatin treated cells was not consistent and differed between mice. In two out of the three mice analysed, invasion of the kidney was observed, but in one mouse, cells surrounded the kidney with no invasion (Figure 7B). Immunohistochemistry analysis of mouse tumors demonstrated positive staining of cytokeratin 7 in xenografts from both untreated and treated HEY cells (Figure 8A). Mouse xenografts also exhibited positive staining for Ki67, which was enhanced in cisplatin and paclitaxel treated cell-derived xenografts compared to untreated control xenografts (Figure 8A). Patches of E-cadherin staining localized to discrete cell-cell junctions were observed in untreated HEY xenografts (Figure 8B). This pattern of staining was enhanced in cisplatin and paclitaxel treated cell derived mouse xenografts (Figure 8B). A similar pattern of enhanced staining of CA125 was evident in treated cell mouse xenografts, compared to xenografts obtained from mice injected with untreated cells (Figure 8A). Mouse xenografts were also assessed for the expression of the stem cell marker CD117 (c-Kit) and the embryonic stem cell marker Oct4. A dramatic increase in the expression of these two markers was observed in xenografts derived from cisplatin or paclitaxel treated cells, compared to the xenografts derived from control cells (Figure 8B).

Discussion

Chemoresistance is a major obstacle to the successful treatment of ovarian cancer patients. The molecular and cellular mechanisms of the resistance of ovarian cancer cells to platinum- and taxane-based therapies, the two agents used as standard treatment, remain unknown in vitro and in vivo. In this study we have used two very different ovarian cancer cell lines, OVCA 433 (mainly epithelial) and HEY (mainly mesenchymal), treated short-term with cisplatin, paclitaxel or the combination of both, to dissect the initial cellular responses that facilitate the survival of residual cells and their subsequent regrowth in an in vivo mouse model. We have demonstrated that cisplatin, paclitaxel or combination treatment of ovarian cancer cell lines generates in each case a population of residual cells with features of CSC-like cells.
An enhanced expression of CSC markers in the residual cancer cells after chemotherapy treatments coincided with an enhanced expression of ERCC1 and/or β-tubulin isotype III, the two proteins commonly associated with resistance of cancer cells to platinum- and taxane-based chemotherapies [29,30]. Enhancement in ERCC1 expression in response to cisplatin was consistent with the enhanced expression of β-tubulin isotype III within the same population of cells after paclitaxel treatment. However, in response to paclitaxel and combination treatments a greater degree of β-tubulin isotype III expression was observed, suggesting that cisplatin resistant cells may be cross resistant to paclitaxel, but the reverse may not be the case. ERCC1 has been associated with cisplatin resistance in ovarian tumors and cancer cell lines [25,29]. Recent clinical trials suggest that patients with low ERCC1 levels benefit preferentially from cisplatin-based chemotherapy compared to patients who have a higher expression of ERCC1 in their tumors [31]. On the other hand, tumors resistant to paclitaxel, or cancer cell lines rendered resistant to paclitaxel, have substantially enhanced levels of isotype III or IV β-tubulin [32-34]. Evidence for the enhancement of isotype-specific taxane-resistant tubulin has also been described in the tumors of ovarian cancer patients [26]. Paired samples from advanced-stage ovarian cancer patients who developed clinical paclitaxel resistance showed increases in β-tubulin isotypes I (3.6-fold), III (4.4-fold) and IV (7.6-fold) [26].

Figure 6. Tumor burden of mice injected with untreated control and chemotherapy treated HEY cells. (A) Total tumor burden obtained from mice 6 weeks after ip injection of control and chemotherapy treated HEY cells. 5×10^6 cells were inoculated in each case. (B) Average percentage of tumor debulked from mice 6 weeks post ip injection of control and chemotherapy treated HEY cells. The average tumor weight was standardised to total mouse body weight. Data have been extrapolated from a minimum of n = 6 mice in each group. Significant increase in tumor burden in cisplatin and paclitaxel treated HEY cell derived tumors compared to the control untreated group, * P < 0.05. Images represent tumors debulked from one mouse in each group.

Long-term repeated chemo-treatment approaches have been shown to generate chemoresistant cancer cell lines with features of CSCs [35,36]. The novelty of the current study is the demonstration of CSC-like features in ovarian cancer cell lines after a single short-term exposure to chemotherapeutic agents. The fact that a short-term single exposure to chemotherapeutic agents is capable of modulating the expression of specific chemoresistance genes (ERCC1 and β-tubulin III) and potential CSC genes suggests that a pre-existing chemoresistant CSC-like subpopulation of ovarian cancer cells, embedded within the bulk of the original cancer population, is being selected. As shown in our previous studies, this pattern of selection of CSC-like cells is not limited to ovarian cancer cell lines but can be displayed by tumor cells isolated from primary ovarian tumors and ascites of ovarian cancer patients [11]. This suggests that in the clinical scenario, CSC enriched residual cells are generated in the host tumor microenvironment after the first round of chemotherapy treatment.
Whether these cells further enrich their CSC-like characteristics after consecutive chemotherapy treatments, or retain the original CSC-like features to facilitate the re-growth of secondary tumors, is not known. However, we have previously demonstrated that the expression level of CSC-like markers in OVCA 433 cells remains unchanged after single or long-term treatments with cisplatin [12]. In this context, a few previous studies have demonstrated the existence of a CSC-enriched side population of cells [28,37], or of populations of cells enriched in CD44, CD117, CD133 and CD24, in ovarian cancer cell lines or ovarian cancer patients' samples [38-40]. These CSC-enriched cells have been shown to develop tumors on sequential inoculation in nude mice and to retain the original CSC-like phenotype observed in the parental sample. Recent data suggest that CSCs rely on the presence of a 'CSC niche' which controls their self-renewal and differentiation [41]. Current studies have also shown that residual cells after chemotherapy treatment secrete soluble factors that provide a favourable microenvironment to facilitate the growth of residual cells [42,43]. This close relationship between chemotherapy-surviving cells and their secretory microenvironment represents a potential 'CSC niche' that can provide survival signals to residual cells for re-growth into a recurrent cancer. Moreover, CSCs can also be generated by the complex tumor microenvironment composed of diverse stromal cells, including tissue specific fibroblasts, cancer associated fibroblasts, tissue specific and bone marrow-derived mesenchymal stem cells, infiltrating immune cells, endothelial cells and their associated vascular network, soluble and other growth factors and/or extracellular matrix components [41]. Growth of recurrent tumors seems to rely on the permissive microenvironment provided by each component of the 'CSC niche'. The CSCs retain their exclusive abilities to self-renew and give rise to differentiated progenitor cells, while staying in an undifferentiated state themselves [41]. In the current study we have demonstrated that both the epithelial OVCA 433 and mesenchymal HEY cell lines respond to cisplatin or paclitaxel by enhancing the expression of CD24, CD117, CD133 and EpCAM. However, the enhancement of CD44 in response to cisplatin or paclitaxel treatments differed between the cell lines and may depend on the inherent epithelial or mesenchymal phenotype of the cell line. CD44 is not only a stem cell marker but has been shown to be highly expressed in cells with a mesenchymal phenotype. The HEY cell line is inherently mesenchymal, with high endogenous expression of CD44 prior to chemotherapy. On the other hand, OVCA 433 is an epithelial cell line with minimal expression of CD44. The addition of cisplatin drives both cell lines to a mesenchymal state [12]. This correlates well with a slight increase in the expression of CD44 in both OVCA 433 and HEY cell lines. On the contrary, paclitaxel treatment induced a more epithelial-like morphology in the inherently mesenchymal HEY cell line, which may result in the down regulation of CD44 expression. This holds true only at the protein level. At the mRNA level, the expression of CD44 was elevated with all chemotherapy treatments in both cell lines. This suggests an inability to translate CD44 mRNA in HEY cells, which may occur due to epigenetic changes in CD44 with paclitaxel treatment in HEY cells [44].
However, the disparity of EpCAM expression at the protein and mRNA levels in HEY cells is difficult to explain. One possible explanation is that the DNA damage response initiated by cisplatin has no effect on the transcriptional expression of EpCAM, but may trigger enhanced translation of EpCAM from the existing endogenous EpCAM mRNA. Tumors generated from control untreated and cisplatin/paclitaxel treated cells were invasive and invaded peritoneal organs such as the pancreas and liver. With the small number of tumor xenografts analysed in this study (n = 3), we have demonstrated some differences in kidney invasion by chemotherapy treated cells. No pattern of kidney invasion was observed in control untreated mice. However, paclitaxel-treated HEY cells invaded the kidney, while invasion by cisplatin treated cells was not consistent and differed between mice. In two out of the three mice analysed, invasion of the kidney was observed, but in one mouse, tumor cells surrounded the kidney with no apparent invasion. This variation in the invasion pattern between the control and chemotherapy treated cells may be due to the phenotypic changes induced in the cells by the chemotherapeutic agents, or it may be due to the induced 'CSC niche' created by the cells within the tumor microenvironment. Enhanced CSC-like characteristics observed in ovarian cancer cells after a single dose of chemotherapy treatment were retained in in vivo mouse xenografts (enhanced expression of Oct4 and CD117 in tumors derived from cisplatin and paclitaxel treated cells). Tumor cells within the xenografts of chemotherapy treated cells had a greater proliferative potential, as evaluated by enhanced Ki67 staining, and a greater tumor burden within the same time frame as the tumors derived from control untreated cells. In addition, tumors derived from chemotherapy treated cells had an enhanced expression of CA125 and were more epithelial in phenotype, with enhanced E-cadherin expression compared to tumors generated from control untreated HEY cells. The relatively high abundance of epithelial markers (enhanced expression of E-cadherin and CA125) in tumors derived from HEY cells treated with chemotherapy in vitro, compared to untreated control cells, is consistent with our recent observation that ascites tumor cells of recurrent patients had an enhanced expression of epithelial and CSC-like markers compared to tumor cells of ascites obtained from chemo-naive untreated patients [13]. We have previously reported that ovarian cancer cells possess a certain level of epithelial-mesenchymal plasticity that allows them to change their phenotype and acquire different functions and properties under the influence of the local tumor environment [12,45,46]. Considering that HEY cells have an inherently mesenchymal phenotype and very low/no expression of E-cadherin and CA125 in vitro, the expression of E-cadherin and CA125 in the in vivo control mouse xenografts implies such plasticity. The dynamics of ovarian tumor cell plasticity in relation to tumor cell dissemination and engraftment at secondary sites is not well understood, but a potential 'mesenchymal to epithelial transition' (MET) is assumed to occur in the late phase of ovarian tumor dissemination, when the tumor cells adapt to the ascites microenvironment [46-49].
The expression of E-cadherin and CA125 in xenografts obtained from mesenchymal HEY cells, and the enhancement of that expression in mouse xenografts derived from residual chemotherapy treated cells, further illustrates plasticity-related changes in HEY cells influenced by the in vivo microenvironment, which acts as a 'CSC niche' and may facilitate the rapid proliferation of chemotherapy-treated, CSC-rich residual cells, resulting in increased tumor burden. These novel observations are consistent with a recent study that demonstrated the epithelial phenotype of side population cells sorted from ovarian cancer lines and ascites of ovarian cancer patients [50]. These stem-like side population cells exhibited decreased adhesive and invasive potential compared to the more differentiated non-side population cells and were localized on the tumor boundary when implanted into nude mice along with non-side population cells [50]. These results suggest that the relationship between malignant potential, CSC phenotype and cellular plasticity in ovarian cancer is a developing field, and more research is needed to understand the processes. In this context, the identification of E-cadherin rich metastatic tumors in breast and brain cancers [48,51,52], and an association between increased pluripotency and the epithelial subcomponent of human bladder and prostatic carcinoma cells [53] and normal breast cells [54], suggests a strong link between epithelial plasticity and CSCs. Perhaps consistent with this is the observation that BRCA1-associated basal breast cancers better resemble aberrant luminal progenitor cells rather than the mesenchymal-like mammary stem cells [55,56]. The results from this novel study show that (a) a short-term early phase chemotherapy treatment leaves residual cells that are enriched for CSC-like traits, (b) in an in vivo environment, these cells are more proliferative and result in a larger tumor burden, and (c) the cells retain the CSC enriched phenotype in the resultant tumors. These findings are strikingly similar to ovarian cancer patients who relapse post-chemotherapy treatment with increased tumor burden and metastasis, with recurrent tumors that are enriched for CSC-like traits [13,14]. On the basis of our novel findings, a model of chemoresistance and recurrence in ovarian carcinomas is described in Figure 9.

Figure 9. Mouse model of chemoresistance and associated recurrence in ovarian cancer. Control untreated and residual HEY cells after treatment with cisplatin or paclitaxel in vitro were injected (ip) into nude mice (n = 18, n = 6/group) and followed for 5-7 weeks. Cisplatin and paclitaxel treated cells enriched in CSC-like markers generated significantly increased tumor burden as well as xenografts with enhanced expression of CD117, Oct4, CA125, Ki67 and E-cadherin compared to tumors derived from non-treated HEY cells. This suggests that chemotherapy treatment promotes CSC-dependent enhanced tumor progression in a mouse model of ovarian cancer.
Quantum advantage through the magic pentagram problem

Through two specific problems, the 2D hidden linear function problem and the 1D magic square problem, Bravyi et al. have recently shown that there exists a separation between $\mathbf{QNC^0}$ and $\mathbf{NC^0}$, where $\mathbf{QNC^0}$ and $\mathbf{NC^0}$ are the classes of polynomial-size and constant-depth quantum and classical circuits with bounded fan-in gates, respectively. In this paper, we present another problem with the same property, the magic pentagram problem based on the magic pentagram game, which is a nonlocal game. In other words, we show that the problem can be solved with certainty by a $\mathbf{QNC^0}$ circuit but not by any $\mathbf{NC^0}$ circuits.

I. INTRODUCTION

Although Shor's factoring algorithm [1] tells us that there exists a quantum algorithm which can be almost exponentially faster than any known classical algorithm, it remains unproven that there exist no classical algorithms faster than Shor's algorithm. Similarly, quantum speed-up in most studies on quantum algorithms has been proved by comparison with the most efficient known classical algorithms or under computational complexity assumptions. Recently, it has been rigorously proved [2,3] that quantum computers can outperform classical computers, by showing that there exists a problem which can be solved on a quantum computer in constant time independent of input size, but cannot be solved on any classical (probabilistic) computer in constant time. Furthermore, it has also been shown that such a problem is related to quantum nonlocality, which is one of the unique quantum features with no classical counterpart. In particular, Bravyi et al. [3] have shown that a problem called the magic square problem can be defined by exploiting the magic square game [4-7] based on quantum nonlocality, and that the problem provides us with a rigorously provable quantum advantage. Interestingly, there is another game similar to the magic square game, called the magic pentagram game [6,8]. Note that the magic square game and the magic pentagram game have similar quantum strategies, based on two and three copies of Bell states, respectively. Hence it is natural to ask whether we can construct, from the magic pentagram game, a magic pentagram problem from which such a quantum advantage can also be derived, even though the magic pentagram game has a more complicated structure than the magic square game.

In this paper, we give a completely affirmative answer to the question. In other words, as in the case of the magic square game, we construct the magic pentagram problem related to the magic pentagram game, and we prove that the problem can be solved by a QNC^0 circuit but not by any NC^0 circuits, where QNC^0 and NC^0 are the classes of polynomial-size and constant-depth quantum and classical circuits with bounded fan-in gates (unbounded fan-out gates are allowed in NC^0 circuits), respectively. Before getting into our main results, we review the magic pentagram game in the following subsection.

A. Magic pentagram game

The magic pentagram game [6,8] is a cooperative game played by two players with a referee: either both of the players win or both lose. The two players, Alice and Bob, start the game by preparing a pentagram like Figure 1(a). From the referee, they receive two distinct random numbers x and y in {0, 1, 2, 3, 4}, respectively, where x and y denote two different hyperedges of the pentagram.
Each player assigns the number +1 or −1 to each of the four vertices on his or her hyperedge, in the order determined by the function o_s(t) in the table of Figure 1(b): if a player receives a hyperedge s, then his or her j-th assignment is given to the vertex at which the hyperedge s intersects the hyperedge t satisfying o_s(t) = j. For example, if the hyperedge 0 is given to a player, then the player assigns the numbers ±1 to the vertices on the hyperedge in the order v_10, v_5, v_7 and v_2 in Figure 1(a), since o_0(j) = j for j ∈ {1, 2, 3, 4}. Although the players can discuss before the game, once the game starts their communication is not allowed. We say that the players win the game when the following two winning conditions are satisfied:

1. The product of the values assigned to the four vertices on each player's hyperedge s is equal to the value of the function e, where e is the function from the set {0, 1, 2, 3, 4} to the set {+1, −1} defined by e(s) = (−1)^{δ_{s,4}}, that is, e(s) = +1 for s ∈ {0, 1, 2, 3} and e(4) = −1. In other words, if z = (z_1, z_2, z_3, z_4) in {+1, −1}^4 is the assignment given to the four vertices on the hyperedge s of a player, then ∏_{i=1}^{4} z_i = e(s).

2. If Alice and Bob receive the hyperedges x and y, and z = (z_1, z_2, z_3, z_4) and w = (w_1, w_2, w_3, w_4) in {+1, −1}^4 are their assignments given to the vertices on the hyperedges, then both of them return the same value on the vertex at which x and y intersect, that is, z_{o_x(y)} = w_{o_y(x)}.

It is clear that we cannot construct a classical strategy which always makes the players win the magic pentagram game, since there is no way to assign the numbers ±1 to all vertices such that the two winning conditions are satisfied, as we can see through the example in Figure 1(c). In particular, the players can exchange any amount of classical information at the outset of the game, and can employ shared randomness in the classical strategy. However, they cannot always win the game, which is shown precisely in the following proposition.

Proposition 1. The maximal probability that a classical (probabilistic) strategy makes the players win the magic pentagram game is 19/20.

On the other hand, it has been known that there exists a quantum strategy [6,8] which always allows the players to win the game with certainty if they share a proper number of copies of the Bell state |Φ⟩ = (1/√2)(|00⟩ + |11⟩) in advance, as follows.

Proposition 2. Players can win the magic pentagram game by sharing the maximally entangled state |Φ⟩^{⊗3} and measuring with the observables corresponding to the vertices as in Figure 2(a).

Hence the magic pentagram game can be considered as an example demonstrating quantum nonlocal characteristics, as with other nonlocal games, since the quantum strategy allows the players to share unlimited amounts of entanglement and to perform local quantum operations on their qubits, while the classical one allows them to share randomness [7].

This paper is organized as follows. In Sec. II, we generalize the magic pentagram game, and construct the magic pentagram problem from the generalized game in Sec. III. In Sec. IV, we prove that the same quantum advantage as in the magic square problem can be gained from the magic pentagram problem. Finally, we conclude with a discussion of our results in Sec. V. For readability, we defer all the proofs to the Appendix.
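Before moving on, a minimal brute-force sketch (Python; not from the paper) makes the classical impossibility behind Proposition 1 concrete. Since Figure 1 is not reproduced here, the sketch labels each vertex by the unique pair of hyperedges meeting at it, which is all the game description above requires; it checks that no global ±1 assignment satisfies all five parity constraints, so at most 4 of the 5 line constraints can hold, consistent with the 19/20 bound.

```python
from itertools import combinations, product
from math import prod

# Each pair of hyperedges {s, t} meets in exactly one vertex, so the
# 10 vertices are labelled by 2-element subsets of {0, ..., 4}.
verts = list(combinations(range(5), 2))
e = lambda s: -1 if s == 4 else +1  # e(s) = (-1)^{delta_{s,4}}

def line_product(assign, s):
    """Product of the four +/-1 values on hyperedge s."""
    return prod(assign[tuple(sorted((s, t)))] for t in range(5) if t != s)

best = 0
for vals in product((+1, -1), repeat=len(verts)):
    assign = dict(zip(verts, vals))
    best = max(best, sum(line_product(assign, s) == e(s) for s in range(5)))
print(best)  # prints 4: no assignment satisfies all five constraints
```

The parity argument behind the output: each vertex lies on exactly two hyperedges, so the product of all five line products is always +1, while the required product ∏_s e(s) is −1.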
II. GENERALIZED MAGIC PENTAGRAM GAME

In this section, we define the generalized magic pentagram game, and show that the players can win the game with certainty by an entanglement-based quantum strategy, while the winning probability of any classical strategy is at most 19/20. For each (α_1, β_1, α_2, β_2, α_3, β_3) ∈ {+1, −1}^6, the generalized magic pentagram game with the 6 parameters α_i and β_i is defined as follows. The rule of the generalized magic pentagram game is essentially equivalent to that of the original magic pentagram game except for the second winning condition. The following two conditions are the winning conditions of the generalized magic pentagram game: for players' hyperedges x and y, let z = (z_1, z_2, z_3, z_4) and w = (w_1, w_2, w_3, w_4) be their assignments given to the vertices on the hyperedges x and y, respectively.

Remark 3. Since the generalized magic pentagram game turns into the original magic pentagram game when α_s = β_s = +1 for all s ∈ {1, 2, 3}, the success probability over all possible classical strategies is less than or equal to 19/20, as in the original game. Moreover, there exists a quantum strategy that allows the players to win the generalized magic pentagram game with probability 1, as in the original magic pentagram game, which we will see in Proposition 4.

For α and β in {+1, −1}, let Φ_{α,β} be the maximally entangled state defined as |Φ_{α,β}⟩ = CNOT(H ⊗ I)|(1−α)/2, (1−β)/2⟩. Then the following proposition can be obtained.

Proposition 4. Players can win the generalized magic pentagram game with certainty by sharing the state ⊗_{s=1}^{3} |Φ_{α_s,β_s}⟩ and measuring with the observables in Figure 2(a), which are the same as those in the original magic pentagram game.

We can now construct a constant-depth quantum circuit for the generalized magic pentagram game, and obtain the following proposition, since U(·) can be shown to be the unitary operator changing the computational basis to the basis related to the observables in Figure 2(a), as will be seen from its proof in Appendix D.

Proposition 5. The quantum circuit in Figure 3(a) exhibits the quantum strategy in Proposition 4.

III. MAGIC PENTAGRAM PROBLEM

In this section, we define the magic pentagram problem, which has 6n input bits and 6n output bits. We see that the problem can be solved by a constant-depth quantum circuit with nearest neighbor gates. On the other hand, we also show that no classical probabilistic circuit composed of bounded fan-in gates can solve the problem with certainty. In order to define the magic pentagram problem, let us consider the quantum circuit C_MPP in Figure 4, which is clearly a constant-depth quantum circuit. For x, y ∈ {0, 1}^3, the gate V(y, x) in Figure 4 is defined via Eq. (1), where M_{ij} = (H_i ⊗ I_j)CNOT_{ij} is the Bell basis change on the i-th and j-th qubits, mapping the Bell basis to the computational one.

Remark 6. Let X_in and Z_out be the classical input and the classical output of the circuit C_MPP. The input X_in can be expressed as X_in = (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n) ∈ {0, 1}^{6n}, where x_j, y_j ∈ {0, 1}^3 for j ∈ {1, 2, ..., n}, as seen in Figure 4. Assume that x_k, y_l ∈ {000, 001, 010, 011, 100} for some k and l with 1 ≤ k < l ≤ n, and x_i, y_j ∈ {101, 110, 111} for all i ≠ k and j ≠ l.
Then the two outputs z_k and w_l after running the circuit C_MPP are equivalent to the outputs z and w after running the circuit in Figure 3(a) for some proper α_i and β_i, since entanglement swapping is implemented through the Bell measurements on the qubits p_s(2t) and p_s(2t + 1) for s = 1, 2, 3 and t = k, k + 1, ..., l − 1 in the circuit.

We now define the magic pentagram problem by exploiting the input-output relation obtained from the circuit.

Definition 7 (Magic pentagram problem). We say that a circuit with 6n-bit input and 6n-bit output solves the magic pentagram problem if for every input X_in ∈ {0, 1}^{6n}, the circuit outputs Z_out ∈ {0, 1}^{6n} such that Z_out is a possible measurement outcome of C_MPP^{X_in} applied to |0^{6n}⟩, where C_MPP^{X_in} is the quantum circuit C_MPP with the input X_in in Figure 4.

Then we directly have the following theorem by the definition of the magic pentagram problem.

Theorem 8 (Magic pentagram problem is in QNC^0). The magic pentagram problem can be solved with certainty by a QNC^0 circuit.

IV. MAGIC PENTAGRAM PROBLEM IS NOT IN NC^0

In this section, we first consider a specific subset of the full instance set for the magic pentagram problem, and then show that any NC^0 circuit with the full instance set as input cannot solve the magic pentagram problem with probability greater than 19/20 for a randomly chosen input from the subset.

FIG. 4: Quantum circuit C_MPP for the magic pentagram problem. In the circuit, there are 6n data qubits labelled by p_s(t) with s ∈ {1, 2, 3} and t ∈ {1, 2, ..., 2n}. The gate U is the same as in Figure 3(a) and Figure 3(b), and the gate V is defined as in Eq. (1). Here, we also abuse the notation of three-bit strings z_i and w_j so that z_i and w_j are in {+1, −1}^3. Remark that for each j = 1, 2, ..., n, the partial circuit between classical inputs x_j and y_j before applying the gate V is equivalent to the circuit in Figure 3(a) when all α_i and β_i are one, and V(y_j, x_{j+1}) represents the basis change for performing entanglement swapping properly according to the values y_j and x_{j+1}.

For each 1 ≤ k < l ≤ n, let S_{k,l} be the set of all 6n-bit strings (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n) such that x_k, y_l ∈ {000, 001, 010, 011, 100}, x_i = 111 for all i ≠ k, and y_j = 111 for all j ≠ l, and let S be the instance subset of the magic pentagram problem defined as S = ∪_{k<l} S_{k,l}.

Remark 9. Assume that X_in = (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n) is randomly chosen from S. Then there exist 1 ≤ k < l ≤ n such that x_i = 111 and y_j = 111 for all i ≠ k and j ≠ l, and x_k and y_l are random numbers in {000, 001, 010, 011, 100}. Let Z_out = (z_1, z_2, ..., z_n, w_1, w_2, ..., w_n) ∈ {+1, −1}^{6n} be a measurement outcome after applying the circuit C_MPP^{X_in} to |0^{6n}⟩, and for i ∈ {1, 2, 3}, define α_i and β_i from the intermediate outcomes, where z_{j+1} = z^1_{j+1} z^2_{j+1} z^3_{j+1} and w_j = w^1_j w^2_j w^3_j are in {+1, −1}^3. Then, employing almost the same argument as in Lemma 3 of Bravyi et al.'s result on the magic square problem [3], we can see that for players' hyperedges x_k and y_l, the two quadruples z = (z^1_k, z^2_k, z^3_k, z^1_k z^2_k z^3_k e(x_k)) and w = (w^1_l, w^2_l, w^3_l, w^1_l w^2_l w^3_l e(y_l)) satisfy the second winning condition of the generalized magic pentagram game with the 6 parameters α_i and β_j. However, this does not imply that the players win the generalized magic pentagram game, since it is not guaranteed that z and w are independent of y_l and x_k, respectively.
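To make the instance subset S concrete, here is a minimal sketch (Python; the function name and the 0-based indexing are my own conventions, not the paper's) that samples a uniformly random input from one S_{k,l}.

```python
import random

def sample_instance(n):
    """Draw a random input from S = union of S_{k,l}: pick 0 <= k < l < n
    (0-based), set x_k and y_l uniformly from {000,...,100}, and set every
    other three-bit block to 111, as in the definition of S_{k,l}."""
    k, l = sorted(random.sample(range(n), 2))
    special = ['000', '001', '010', '011', '100']
    x = ['111'] * n
    y = ['111'] * n
    x[k] = random.choice(special)
    y[l] = random.choice(special)
    return ''.join(x + y)  # a 6n-bit string (x_1, ..., x_n, y_1, ..., y_n)

print(sample_instance(4))  # e.g. '111010111111' + '111111111100'-style output
```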
Applying the properties in Appendix E to the magic pentagram problem, we can derive our main theorem, described below.

Theorem 10 (Magic pentagram problem is not in NC^0). Let C be a classical probabilistic circuit with 6n-bit inputs, 6n-bit outputs and gates of fan-in at most B, and assume that C solves the magic pentagram problem with probability p > 19/20 for an arbitrary random input from S. Then the depth of C is at least the value given in Eq. (2).

The value in Eq. (2) cannot be bounded above by a constant number, since as n tends to infinity, it also goes to infinity. Therefore, Theorem 10 implies that the magic pentagram problem cannot be solved with certainty by any NC^0 circuits.

Remark 11. In Ref. [3], there is a result similar to Theorem 10, which states that if a classical circuit with gates of fan-in at most B solves the magic square problem with probability at least 9/10 for an arbitrary random input from a proper subset, then the depth of the circuit has the lower bound in Eq. (3). As in Theorem 10, the above result in Ref. [3] can be slightly improved as follows: if a classical circuit with gates of fan-in at most B solves the magic square problem with probability p > 8/9 for an arbitrary random input from the subset, then the depth of the circuit admits a corresponding lower bound which is greater than the original lower bound of the circuit depth in Eq. (3) for p ≥ 0.89.

V. CONCLUSION AND DISCUSSION

We have first considered the magic pentagram game, which is based on quantum nonlocality, have constructed the magic pentagram problem by exploiting a quantum strategy to win the game, and have then shown that the problem can be solved with certainty by a QNC^0 circuit, whereas no NC^0 circuit can solve the problem with certainty. Hence, we can conclude that the magic pentagram problem presented here is another example showing the explicit separation between shallow quantum circuits and bounded fan-in shallow classical ones.

We note that there exists a problem which can be solved with near certainty using a noisy shallow quantum circuit if the noise rate is below a certain threshold value, while the problem cannot be solved with high probability by any noise-free shallow classical circuits [3]. The problem is the noise-tolerant version of the magic square problem, which results from the rigidity of the magic square game [9]. This implies that all near-optimal strategies for the game are approximately equivalent to a unique quantum strategy exploiting quantum entanglement. Since it was also proved that the magic pentagram game is rigid [8], if the winning probability of a strategy for the magic pentagram game is close to one, then the strategy is approximately equivalent to the quantum strategy based on three copies of the Bell state presented in Proposition 2. Hence, we may obtain the same result by defining the noise-tolerant version of the magic pentagram problem, which can be satisfied with probability close to one by the input-output statistics of a noisy shallow quantum circuit, as in the result of [3]. In the result of Watts et al. [10], it has been shown that the 2D hidden linear function problem, which provides the same quantum advantage [2] as the magic pentagram problem, cannot be solved with certainty even by any AC^0 circuits, where AC^0 is the class of polynomial-size and constant-depth classical circuits in which unbounded fan-in gates are allowed. It would be an interesting future work to investigate whether the magic pentagram problem is not in AC^0.
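As a bridge to the appendix, the following numerical sketch (Python with NumPy; illustrative, not from the paper) checks the circuit description |Φ_{α,β}⟩ = CNOT(H ⊗ I)|(1−α)/2, (1−β)/2⟩ used in Proposition 4 and Appendix D, confirming that α = β = +1 reproduces the Bell state |Φ⟩.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
# CNOT with the first qubit as control, second as target
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])

def phi(alpha, beta):
    """|Phi_{alpha,beta}> = CNOT (H (x) I) |(1-alpha)/2, (1-beta)/2>."""
    a, b = (1 - alpha) // 2, (1 - beta) // 2
    ket = np.zeros(4)
    ket[2 * a + b] = 1.0  # computational basis state |a, b>
    return CNOT @ np.kron(H, I2) @ ket

print(phi(+1, +1))  # (|00> + |11>)/sqrt(2), the Bell state |Phi>
print(phi(-1, +1))  # (|00> - |11>)/sqrt(2), a phase-flipped Bell state
```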
There are several quantum nonlocal games, known as quantum pseudo-telepathy games [7], in which the winning probability of quantum players sharing quantum entanglement is strictly greater than that of classical players with shared randomness but no shared entanglement. Therefore, by properly exploiting those games, we could construct various kinds of problems attaining quantum advantage.

Comparing the observables in Figure 2(a) and the definition of L_{x,y} in Figure 2(b), we can see that α_j and β_k correspond to X_j and Z_k, respectively. Consequently, the outcomes satisfy the winning conditions of the generalized magic pentagram game.

D. PROOF OF PROPOSITION 5

It can readily be seen that |Φ_{α,β}⟩ = CNOT(H ⊗ I)|(1−α)/2, (1−β)/2⟩, and that U(·) implements the basis changes necessary for measuring with the observables in Figure 2(a). In particular, we can see the role of the gate U(100) from its properties in Figure 5. The players can assign the outputs z = (z_1, z_2, z_3) ∈ {+1, −1}^3 and w = (w_1, w_2, w_3) ∈ {+1, −1}^3 to the vertices on the received hyperedges s and t, respectively, following the order given by o_s(t). In addition, z_4 = z_1 z_2 z_3 e(x) and w_4 = w_1 w_2 w_3 e(y) can be assigned to the remaining final vertices, respectively. Therefore, this completes the proof.

E. CLASSICAL CIRCUITS AND DISJOINT LIGHTCONES

In this section, by means of the concept of lightcones, we first consider the situation in which the input values and the output values related to the generalized magic pentagram game, as in Remark 9, are independent, and then investigate the relation with the magic pentagram problem, which is similar to Bravyi et al.'s result on the magic square problem [3].

Definition 12. For variables x_i, z_j ∈ {0, 1} with 1 ≤ i ≤ N and 1 ≤ j ≤ M, which denote the i-th input bit and the j-th output bit of a classical circuit C with N input bits and M output bits, respectively, we say that x_i and z_j are correlated if there exists an input string X_in ∈ {0, 1}^N such that the j-th bit of C(X_in) changes when the i-th bit of X_in flips. For an input variable x_i, let L_C(x_i) be the set of output bits z_t such that (x_i, z_t) are correlated, which is called the lightcone of x_i. For a set of input bits I, we define L_C(I) = ∪_{x_i ∈ I} L_C(x_i).

We hereafter assume that C is a depth-D classical probabilistic circuit composed of gates of fan-in at most B which has inputs (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n) ∈ {0, 1}^{6n} and outputs (z_1, z_2, ..., z_n, w_1, w_2, ..., w_n) ∈ {0, 1}^{6n}. For each 1 ≤ k < l ≤ n, let E_{k,l} be the subset of S_{k,l} such that L_C(x_k) ∩ L_C(y_l) = ∅, w_l ∉ L_C(x_k) and z_k ∉ L_C(y_l), and let E ⊆ S be the event defined as E = ∪_{k<l} E_{k,l}. Then we can obtain the following proposition, which is almost the same as Lemma 7 in the Supplementary Information of Bravyi et al.'s result [3].

Proposition 13. If we choose an input from S uniformly, the probability that E occurs is at least 1 − (216/n)B^{2D}.

We now assume that a classical circuit C solves the magic pentagram problem for a randomly chosen input from S. Then its output is equal to a measurement outcome on the resulting state after applying the circuit C_MPP with the input to the initial state |0^{6n}⟩. Thus Remark 9 tells us that two quadruples satisfying the second winning condition for the generalized magic pentagram game with some proper 6 parameters can be obtained from the output.
If the event E occurs for a randomly chosen input from S, and the input is in S_{k,l} for some 1 ≤ k < l ≤ n, then the outputs correlated with x_k are all independent of the outputs correlated with y_l, and x_k and y_l are independent of w_l and z_k, respectively. Hence, we can show that the players with the hyperedges x_k and y_l win the generalized magic pentagram game with the six parameters obtained from the output of the circuit C as in Remark 9. Accordingly, by Remark 3, we clearly obtain the following lemma.

Lemma 14. Assume that the event E occurs for a randomly chosen input from S. Then the average probability that C solves the magic pentagram problem is at most 19/20.

F. PROOF OF THEOREM 10

Let D be the depth of C. Then by Lemma 14 and Proposition 13, we can find the upper bound on p as follows:

p ≤ (19/20)·Pr[E] + (1 − Pr[E]) = 1 − (1/20)·Pr[E] ≤ 1 − (1/20)(1 − (216/n)B^{2D}) = 19/20 + (216/(20n))·B^{2D}.

Since p > 19/20, this yields B^{2D} ≥ (20p − 19)n/216, and hence D ≥ (1/2) log_B((20p − 19)n/216), which is the bound (2). This completes the proof.
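To make the growth of the bound concrete, one can evaluate the lower bound (2), as reconstructed in the proof above, numerically; the sample values of n, B and p below are arbitrary illustrations, not data from the paper.

```python
# Sanity check of the depth bound D >= (1/2) * log_B((20p - 19) * n / 216).
from math import log

def depth_lower_bound(n, B, p):
    return 0.5 * log((20 * p - 19) * n / 216, B)

for n in (10**4, 10**6, 10**9):
    print(n, round(depth_lower_bound(n, B=2, p=0.96), 2))
# The bound grows like (1/2) log_B n, so it exceeds any constant as n -> infinity,
# which is exactly why no NC^0 circuit family can solve the problem.
```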
Unramified Cohomology of Quadrics in Characteristic Two

Let $F$ be a field of characteristic 2 and let $X$ be a smooth projective quadric of dimension $\ge 1$ over $F$. We study the unramified cohomology groups with 2-primary torsion coefficients of $X$ in degrees 2 and 3. We determine completely the kernel and the cokernel of the natural map from the cohomology of $F$ to the unramified cohomology of $X$. This extends the results in characteristic different from 2 obtained by Kahn, Rost and Sujatha in the nineteen-nineties.

Introduction

Let F be a field. Let m be a positive integer not divisible by the characteristic of F. For each j ≥ 1, the tensor product $\mathbb{Z}/m(j-1) := \mu_m^{\otimes(j-1)}$ of m-th roots of unity can be viewed as an étale sheaf on F-schemes. Let X be a proper smooth connected variety over F. The unramified cohomology group $H^j_{nr}(X, \mathbb{Z}/m(j-1))$ is the group $H^0_{Zar}(X, \mathcal{H}^j_m(j-1))$, where $\mathcal{H}^j_m(j-1)$ denotes the Zariski sheaf associated to the presheaf $U \mapsto H^j_{et}(U, \mathbb{Z}/m(j-1))$. By taking the direct limit, we can also define

$H^j_{nr}(X, (\mathbb{Q}/\mathbb{Z})'(j-1)) := \varinjlim_{\mathrm{char}(F)\nmid m} H^j_{nr}(X, \mathbb{Z}/m(j-1))$.

These groups can also be described in terms of residue maps in Galois cohomology, thanks to the Bloch-Ogus theorem on Gersten's conjecture ([BO74]). As important birational invariants, they have found many important applications, for instance to the rationality problem (see e.g. [Sal84], [CTO89], [CTP16]), and have been extensively studied in the literature. With the development of motivic cohomology theory (by Beilinson, Lichtenbaum, Suslin, Voevodsky, et al.), even more machinery can be applied to compute unramified cohomology nowadays.

When F has characteristic different from 2 and X is a smooth projective quadric, the above unramified cohomology groups were computed up to degree j ≤ 4 by Kahn, Rost and Sujatha in a series of papers ([Kah95], [KRS98], [KS00], [KS01]). Some of their results are further developed and used by Izhboldin [Izh01] to solve a number of problems on quadratic forms, including a construction of fields of u-invariant 9 in characteristic ≠ 2.

It has been noticed for decades that unramified cohomology theory can be formulated in a more general setting (see [CT95], [CTHK97], [Kah04]). In particular, when F has positive characteristic p, the aforementioned groups have p-primary torsion variants. Indeed, the unramified cohomology functors $H^j_{nr}(\cdot, \mathbb{Z}/p^r(j-1))$ for all r ≥ 1 and their limit $H^j_{nr}(\cdot, \mathbb{Q}_p/\mathbb{Z}_p(j-1))$ can be defined by using the Hodge-Witt cohomology (see (3.1) and (3.10) for a brief review). In contrast to the prime-to-p case, there has been much less work on these p-primary unramified cohomology groups.

In this paper, we are interested in the case of a smooth projective quadric X over a field F of characteristic 2. We investigate the unramified cohomology groups via the natural maps

$\eta^j_r : H^j(F, \mathbb{Z}/2^r(j-1)) \longrightarrow H^j_{nr}(X, \mathbb{Z}/2^r(j-1))$, r ≥ 1,

and

$\eta^j_\infty : H^j(F, \mathbb{Q}_2/\mathbb{Z}_2(j-1)) \longrightarrow H^j_{nr}(X, \mathbb{Q}_2/\mathbb{Z}_2(j-1))$.

For each j ≥ 1, it is not difficult to see that the maps $\eta^j_r$ for different r ≥ 1 have essentially the same behavior (Lemma 4.3). So we may focus on the two maps $\eta^j := \eta^j_1$ and $\eta^j_\infty$. They are both isomorphisms if j = 1 (Prop. 3.4) or X is isotropic (Prop. 4.1 (1)). In our main results, we determine completely the kernel and the cokernel of $\eta^j$ and $\eta^j_\infty$ for j = 2, 3.

For any c ∈ F, we denote by (c] its canonical image in the quotient F/℘(F) (where ℘ is the map $x \mapsto x^2 - x$).
It is the $e_1$-invariant (or Arf invariant) of the quadratic Pfister form $\langle\langle c]] : (x, y) \mapsto x^2 + xy + cy^2$. A similar and perhaps more familiar notation is $(a) \in F^*/F^{*2}$, which we use to denote the canonical image of an element $a \in F^*$.

The following theorem extends Kahn's results in [Kah95] to characteristic 2.

Theorem 1.1. Let F be a field of characteristic 2 and let X be the smooth projective quadric defined by a nondegenerate quadratic form ϕ with dim ϕ ≥ 3. Assume that ϕ is anisotropic.

Theorem 1.2 (See 4.3, 6.7 and 8.5). Let F be a field of characteristic 2 and let X be the smooth projective quadric defined by a nondegenerate quadratic form ϕ with dim ϕ ≥ 3. Then $\mathrm{Ker}(\eta^j_r) = \mathrm{Ker}(\eta^j_\infty)$ for all r ≥ 1, j ≥ 1, and $\mathrm{Coker}(\eta^2_\infty) = 0$. If ϕ is an anisotropic Albert form, then $\mathrm{Coker}(\eta^3_\infty) \cong \mathbb{Z}/2$. Otherwise $\mathrm{Coker}(\eta^3_\infty) = 0$.

As in [Kah95] and [KRS98], main tools in our proofs include the Bloch-Ogus and the Hochschild-Serre spectral sequences. A key difference between the p-primary torsion cohomology and the prime-to-p case is the lack of homotopy invariance. This results in the phenomenon that our spectral sequences look different from their analogues in characteristic different from 2. Due to vanishing theorems for local cohomology and the fact that the field F has 2-cohomological dimension at most 1, these spectral sequences still have many vanishing terms.

Cycle class maps with finite or divisible coefficients are also studied and used in the paper. In this respect we need information about the structure of Chow groups in low codimension. This information can be found in Karpenko's work [Kar90] in characteristic ≠ 2, and recently the paper [HLS21] has provided the corresponding results in characteristic 2.

We mention a situation where the distinction of characteristic affects the study of a cycle class map, and hence also makes a remarkable difference in our proofs. Suppose the quadric X is defined by a neighbor of a 3-Pfister form and consider the cycle class map $\mathrm{cl}^2_X$ on the codimension two Chow group. In characteristic different from 2, the image of the torsion element under $\mathrm{cl}^2_X$ can be described as a cup product element ([Shy90, Prop. 5.4.6]). The lack of such a description forces us to proceed differently for results concerning $\mathrm{Coker}(\eta^3)$ and $\mathrm{Coker}(\eta^3_\infty)$ (see the proofs of Theorem 6.11 and Corollary 8.5).

In the above two theorems the case of Albert quadrics is more subtle than the others. In that case we have to utilize more techniques from the algebraic theory of quadratic forms, especially residue maps on Witt groups of discrete valuation fields of characteristic 2 ([Ara18]).

Notation and conventions. For any field k, denote by $\overline{k}$ a separable closure of k. For an algebraic variety Y over k, we write $Y_L = Y \times_k L$ for any field extension L/k, and $\overline{Y} = Y \times_k \overline{k}$. We say Y is k-rational if it is integral and birational to the projective space $\mathbb{P}^{\dim Y}_k$ over k. We say Y is geometrically rational if $Y_L$ is L-rational for the algebraic closure L of k.

Milnor K-groups of a field k are denoted by $K^M_i(k)$, $i \in \mathbb{N}$. For an abelian group M, we denote by $M_{tors}$ the subgroup of torsion elements in M. For any positive integer n, we define M[n] and M/n via the exact sequence $0 \to M[n] \to M \xrightarrow{n} M \to M/n \to 0$.

For any scheme X, let $\mathrm{Br}(X) = H^2_{et}(X, \mathbb{G}_m)$ denote its cohomological Brauer group.

In the rest of the paper, F denotes a field of characteristic 2.

2 Quadrics and their Chow groups

(2.1) We recall some basic definitions and facts about quadratic forms in characteristic 2.
For general reference we refer to [EKM08]. For two quadratic forms ϕ and ψ over F, we say ψ is a subform of ϕ if $\psi \cong \varphi|_W$ for some subspace W in the vector space V of ϕ. For n ≥ 2, an n-Pfister neighbor is a form of dimension $> 2^{n-1}$ that is similar to a subform of an n-Pfister form.

We write $I_q(F)$ or $I^1_q(F)$ for the Witt group of even-dimensional nondegenerate quadratic forms over F. For n ≥ 2, let $I^n_q(F)$ denote the subgroup of $I_q(F)$ generated by the n-Pfister forms. For a quadratic form ϕ over F, we will write $\varphi \in I^n_q(F)$ if ϕ is nondegenerate, of even dimension, and its Witt class lies in $I^n_q(F)$. We also have the Witt ring W(F) of nondegenerate symmetric bilinear forms over F, in which the classes of even-dimensional forms form an ideal I(F), called the fundamental ideal. For each n ≥ 1, let $I^n(F)$ be the n-th power of the ideal I(F) and put $I^0(F) = W(F)$. The group $I_q(F)$ has a W(F)-module structure, and we have $I^n_q(F) = I^{n-1}(F) \cdot I_q(F)$ for all n ≥ 1.

The Galois cohomology group $H^1(F, \mathbb{Z}/2)$ can be identified with F/℘(F) by Artin-Schreier theory, where ℘ denotes the map $x \mapsto x^2 - x$. For any b ∈ F, we write (b] for its canonical image in $F/℘(F) = H^1(F, \mathbb{Z}/2)$. The map $e_1 : I_q(F) \to H^1(F, \mathbb{Z}/2)$, sending the class of a form to its Arf invariant, is a well defined homomorphism, often called the discriminant or Arf invariant. It is well known that $e_1$ is surjective with $\mathrm{Ker}(e_1) = I^2_q(F)$. A 6-dimensional form in $I^2_q(F)$ (i.e. a 6-dimensional nondegenerate form with trivial Arf invariant) is called an Albert form.

For n ≥ 2, by using the Kato-Milne group $H^n(F, \mathbb{Z}/2(n-1))$ (cf. [Kat80], [Mil76]), a generalization of which will be discussed in (3.1), one can also define a functorial homomorphism (see [Sah72] for n = 2 and [Kat82b] for general n)

$e_n : I^n_q(F) \longrightarrow H^n(F, \mathbb{Z}/2(n-1))$

which is surjective with $\mathrm{Ker}(e_n) = I^{n+1}_q(F)$, such that

$e_n(\langle\langle a_1, \cdots, a_{n-1}; a_n]]) = (a_1) \cup \cdots \cup (a_{n-1}) \cup (a_n]$,

where for any $a \in F^*$, (a) denotes its canonical image in $F^*/F^{*2}$. (The maps $e_2$ and $e_3$ are more classical, called the Clifford invariant and the Arason invariant respectively.) Note that for all n ≥ 1,

(2.1.1) $e_n(\varphi) = e_n(c\cdot\varphi)$ for all $\varphi \in I^n_q(F)$ and all $c \in F^*$,

because $\varphi - c\varphi = \langle\langle c \rangle\rangle \cdot \varphi \in I^{n+1}_q(F) = \mathrm{Ker}(e_n)$.

The group $H^2(F, \mathbb{Z}/2(1))$ may be identified with the 2-torsion subgroup of the Brauer group Br(F) of F. For any $a \in F^*$ and $b \in F$, let (a, b] be the quaternion algebra generated by two elements i, j subject to the relations $i^2 = a$, $j^2 + j = b$ and $ij = ji + i$.

(2.2) Now we recall some known facts about Chow groups of projective quadrics (which are valid in arbitrary characteristic). More details can be found in [Kar90, § 2], [EKM08, § 68] and [HLS21, § 5].

Let ϕ be a quadratic form of dimension ≥ 3 over F. Let X be the projective quadric defined by ϕ. Unless otherwise stated, we always assume ϕ is nondegenerate, which means X is smooth as an algebraic variety over F. For each $i \in \mathbb{N}$, let $\mathrm{CH}^i(X)$ denote the Chow group of codimension i cycles of X. Let $h \in \mathrm{CH}^1(X)$ be the class of a hyperplane section. Using the intersection pairing as multiplication in the Chow ring ([EKM08, § 57]), we get elements $h^i \in \mathrm{CH}^i(X)$ for each i.

Set $d = \dim X$. For every integer $j \in [0, d/2]$, let $\ell_j \in \mathrm{CH}^{d-j}(\overline{X})$ be the class of a j-dimensional linear subspace contained in $\overline{X}$. Then, for each $0 \le i \le d$, we have $\mathrm{CH}^i(\overline{X}) = \mathbb{Z}h^i$ if $2i < d$ and $\mathrm{CH}^i(\overline{X}) = \mathbb{Z}\ell_{d-i}$ if $2i > d$, with $h^i = 2\ell_{d-i}$ in the latter case. If $\dim X = 2m$ is even, there are exactly two different classes of m-dimensional linear subspaces $\ell_m, \ell'_m$ in $\mathrm{CH}^m(\overline{X})$, and the sum of these two classes is equal to $h^m$.
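As an aid to the reader, the standard relations among these classes on the split quadric can be summarized as follows; this is the classical computation of [EKM08, § 68], recalled here for convenience and not an additional claim of this paper:

$$h \cdot \ell_j = \ell_{j-1} \ \ (1 \le j \le d/2), \qquad h^i = 2\,\ell_{d-i} \ \ (2i > d), \qquad h^m = \ell_m + \ell'_m \ \ (d = 2m).$$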
Moreover, $\mathrm{CH}^m(\overline{X})$ is a trivial Galois module if and only if $e_1(\varphi) = 0$ ([Kah99, Lemma 8.2]). When $e_1(\varphi) \ne 0$, the Galois action permutes the two classes $\ell_m$ and $\ell'_m$.

3 Unramified cohomology in positive characteristic

Throughout this section, we fix a positive integer r and a field k of characteristic p > 0.

(3.1) For each $i \in \mathbb{N}$, let $\nu_r(i) = W_r\Omega^i_{\log}$ be the i-th logarithmic Hodge-Witt sheaf on the big étale site of k ([Ill79], [Shi07]). Define $\mathbb{Z}/p^r(i) := \nu_r(i)[-i]$, as an object in the derived category of étale sheaves. This object can also be viewed as an étale motivic complex ([GL00]). Without giving details of the constructions, let us mention that our whole paper relies on the fact that $\mathbb{Z}/p^r(i)$ is the correct analogue of the more frequently used étale sheaf $\mathbb{Z}/m(i)$ for m prime to p, from both the étale cohomological and the K-theoretic points of view. In particular, the Bloch-Kato-Gabber theorem ([BK86]) plays a crucial role in the subsequent discussions.

For every integer b, we have the cohomology functor $H^b(\cdot, \mathbb{Z}/p^r(i))$ on k-schemes. For shorthand, we sometimes write $H^b_{p^r}(\cdot, i)$ instead of the precise notation $H^b(\cdot, \mathbb{Z}/p^r(i))$. The Zariski sheaf associated to the presheaf $U \mapsto H^b(U, \mathbb{Z}/p^r(i))$ is denoted by $\mathcal{H}^b_{p^r}(i)$. For a smooth connected k-variety X, we define the unramified cohomology group $H^b_{nr}(X, \mathbb{Z}/p^r(i)) := H^0_{Zar}(X, \mathcal{H}^b_{p^r}(i))$. This group can also be described by using the Cousin complex of X. It is naturally a subgroup of $H^b(k(X), \mathbb{Z}/p^r(i))$.

In the theorem below, we collect some well known results that are needed in this paper. It is worth noticing that (3.2.2) is a special phenomenon in characteristic p. (It fails dramatically in characteristic ≠ p.)

Theorem 3.2. Let X be a smooth connected k-variety.

1. We have the Bloch-Ogus spectral sequence $E_2^{a,b} = H^a_{Zar}(X, \mathcal{H}^b_{p^r}(i)) \Rightarrow H^{a+b}(X, \mathbb{Z}/p^r(i))$. (3.2.1)

2. There are natural isomorphisms

3. For smooth proper connected k-varieties, the group $H^b_{nr}(X, \mathbb{Z}/p^r(i))$ is a k-birational invariant.

4. Let π : X → Y be a proper morphism between smooth connected k-varieties whose generic fiber is k(Y)-rational. Then the natural map $\pi^* : H^b_{nr}(Y, \mathbb{Z}/p^r(i)) \to H^b_{nr}(X, \mathbb{Z}/p^r(i))$ is an isomorphism.

Proof. (2) The isomorphisms in (

(3.3) Let X be a smooth connected k-variety. For every j ≥ 1, we have a natural restriction map and the natural map. For i = 0 this coincides with the natural map.

Proposition 3.4. With notation as in (3.3), suppose that X is proper and geometrically rational. Then the natural map $\eta^1_r : H^1(k, \mathbb{Z}/p^r) \to H^1_{nr}(X, \mathbb{Z}/p^r)$ is an isomorphism.

Proposition 3.5. Let X be a smooth, proper, k-rational variety. Then for every j ≥ 1, the map is an isomorphism.

Proof. This is a special case of Theorem 3.2 (4).

(3.6) Let X be a smooth connected k-variety. The Bloch-Ogus spectral sequence (3.2.1) is concentrated in two horizontal lines by (3.2.2) (which relies particularly heavily on the characteristic p assumption). So we can obtain natural homomorphisms (3.6.1). In particular, we have a natural map (3.6.2), called the cycle class map. Let $j \in \mathbb{N}$ be another integer with j ≥ i. By composing $e^i(j)$ with the map $\mu^{i,j}_r$ in (3.3.4) we get a natural map (3.6.3). Compatibility of the Bloch-Ogus spectral sequence with cup products (cf. [KRS98, p.868]) gives the commutative diagram (3.6.4).

In particular, the cycle class map $\mathrm{cl}^1_X$ is injective, and if X is proper and k-rational, we have $\mathrm{Coker}(\mathrm{cl}^1_X) \cong H^0_{Zar}(X, \mathcal{H}^1(1)) = 0$ by (3.2.2). Thus, taking i = 1 in (3.6.1) and noticing the isomorphism (3.2.5) yields the result.

Proposition 3.8. Let X be a smooth proper k-rational variety. Then there is an exact sequence

Proof.
Taking i = 2 in (3.6.1) we obtain an exact sequence (3.8.2). The desired exact sequence follows from (3.8.2), because for the (smooth proper) k-rational variety X we have $H^3_{nr}(X, \mathbb{Z}/p^r(2)) \cong H^3(k, \mathbb{Z}/p^r(2))$ (Prop. 3.5).

(3.9) Let X be a smooth connected k-variety. For each $i \in \mathbb{N}$ we have the Hochschild-Serre spectral sequence (3.9.1). Since $\mathrm{cd}_p(k) \le 1$, we have $H^a(k, H^b(\overline{X}, \mathbb{Z}/p^r(i))) = 0$ for all a > 1. (Here again, there is a significant difference with the prime-to-p cohomology theory.) Thus, the spectral sequence (3.9.1) yields an isomorphism and an exact sequence (3.9.3).

Now assume X is proper and geometrically rational (e.g. X is a smooth projective quadric of dimension ≥ 1). Then by (3.2.4) and Thm. 3.2 (4), we have (3.9.4). Here the identification with Milnor K-theory in the last equality of (3.9.4) is given by the Bloch-Kato-Gabber theorem ([BK86, Cor. 2.8]). It is proved in [Izh91, Cor. 6.5] that (3.9.5) holds. Hence, combining (3.9.4), (3.9.5) and the case j = i of (3.9.3), we obtain an exact sequence (3.9.6).

On the other hand, by Thm. 3.2 (4), it follows from (3.6.1) that (3.9.7) holds.

For i = 0, we can deduce from (3.9.7) and (3.2.2) that $H^1(\overline{X}, \mathbb{Z}/p^r) = 0$. Hence (3.9.6) yields $H^1(k, \mathbb{Z}/p^r) \cong H^1(X, \mathbb{Z}/p^r)$ in this case, recovering the result of Prop. 3.4.

For i = 1, from (3.9.7) and (3.2.3) we get (3.9.8). (This can also be deduced from Prop. 3.7.) So the case i = 1 of (3.9.6) gives an exact sequence (3.9.9).

Let $\mathcal{K}_i$ denote the Zariski sheaf defined by Quillen's K-theory. Using the Bloch-Kato-Gabber theorem, we can deduce an exact sequence ([GS88, Thm. 4.13]) (3.9.10).

Now assume further that X is a projective homogeneous variety. Then $\mathrm{CH}^i(\overline{X})$ is torsion-free and we can apply [Mer95, § 1, Prop. 1]. (As is well known, the Quillen K-group $K_1(k)$ here agrees with the Milnor K-group $K^M_1(k)$. But the Quillen K-theoretic viewpoint is a more natural way to understand the essentials in the proof.) So, in this case from (3.9.10) we get (3.9.11).

For i = 2 we can combine (3.9.11) and (3.9.7) to get an isomorphism (3.9.12). In particular, this holds when X is a smooth projective quadric of dimension ≥ 1.

We end this section with a few remarks on cohomology with divisible coefficients.

(3.10) Given integers $b, i \in \mathbb{N}$, by taking direct limits we can define the functors $H^b(\cdot, \mathbb{Q}_p/\mathbb{Z}_p(i))$. It is easy to extend all the previous discussions in this section to these cohomology groups with divisible coefficients. In particular, for a smooth connected k-variety X, by taking the limits of (3.3.1) and (3.6.2) we obtain a natural map. A useful standard fact (which follows from [Izh91, Lemma 6.6]) is that the sequence

(3.10.1) $0 \to H^j(K, \mathbb{Z}/p^r(i)) \to H^j(K, \mathbb{Q}_p/\mathbb{Z}_p(i)) \xrightarrow{p^r} H^j(K, \mathbb{Q}_p/\mathbb{Z}_p(i)) \to 0$

is exact for any field extension K/k. As a consequence, we have an identification

(3.10.2) $H^j(K, \mathbb{Z}/p^r(i)) = H^j(K, \mathbb{Q}_p/\mathbb{Z}_p(i))[p^r]$.

4 Some general observations for quadrics

From now on, we work over a field F of characteristic 2 (although this characteristic restriction is unnecessary in some results, e.g., Prop. 4.1, Cor. 4.2 and Prop. 4.4). Let X denote a smooth projective quadric of dimension ≥ 1 over F. Notation and results in § 3 will be applied with p = 2. For each j ≥ 1, we have the natural maps (cf. (3.3.1) and (3.10))

$\eta^j_{r,X} : H^j(F, \mathbb{Z}/2^r(j-1)) \to H^j_{nr}(X, \mathbb{Z}/2^r(j-1))$ and $\eta^j_{\infty,X} : H^j(F, \mathbb{Q}_2/\mathbb{Z}_2(j-1)) \to H^j_{nr}(X, \mathbb{Q}_2/\mathbb{Z}_2(j-1))$.

Proposition 4.1 ([KRS98, Prop. 2.5]). With notation as above, the following statements hold:

1. If X is isotropic over F, then the maps $\eta^j_{r,X}$ and $\eta^j_{\infty,X}$ are all isomorphisms.

2. In general, the maps $\eta^j_{r,X}$ and $\eta^j_{\infty,X}$ all have 2-torsion kernel and cokernel.

3. Let Y be another smooth projective quadric of dimension ≥ 1 over F.
If X is isotropic over the function field F(Y), then there is a natural commutative diagram. If moreover Y is isotropic over F(X), the map ρ in the above diagram is an isomorphism.

Similar results hold in the case of divisible coefficients.

Note that Prop. 4.1 is characteristic-free, because the proof is more or less a formal consequence of some general facts from the algebraic and geometric theories of quadratic forms. For example, part (1) holds simply because a smooth quadric with a rational point is a rational variety, and part (2) is immediate from functoriality and the easy fact that any anisotropic quadratic form becomes isotropic over a quadratic extension.

The lemma below relies on the characteristic 2 assumption, because in the proof we need the surjectivity part of the exact sequence (3.10.1). In fact, this is also a source of some differences between our results and the known results in characteristic ≠ 2. (See e.g. the proof of Prop. 5.3 below.)

Lemma 4.3. For every r ≥ 1 and j ≥ 1, $\mathrm{Ker}(\eta^j_r) = \mathrm{Ker}(\eta^j_\infty)$ and there is an exact sequence.

Proof. The first assertion is immediate from (3.10.2) and Prop. 4.1 (2). By functoriality and (3.10.1) we have a commutative diagram with exact rows. Applying the snake lemma to this diagram yields the desired exact sequence, noticing that $\mathrm{Coker}(\eta^j_\infty)$ is 2-torsion.

Thanks to the above lemma, when studying the kernel and the cokernel of $\eta^j_r$ we may restrict to the case r = 1. The following result is a special case of [Kah99, Prop. 5.2].

5 Results for conic curves

To simplify the notation, we henceforth write $H^j(K) := H^j(K, \mathbb{Z}/2(j-1))$ for fields K of characteristic 2 and $H^j_{nr}(X) := H^j_{nr}(X, \mathbb{Z}/2(j-1))$.

In this section and the next, we study the maps $\eta^j$ and $\eta^j_\infty$ for the quadric X. They are both isomorphisms if X is isotropic (Prop. 4.1 (1)) or j = 1 (Prop. 3.4). So we assume X is anisotropic and j ≥ 2 in the sequel. For a conic curve an explicit description is known for the kernel of the map $\eta^j$.

Proposition 5.1. Let $X \subseteq \mathbb{P}^2_F$ be the conic associated to a quaternion F-algebra D. Then for all j ≥ 2, $\mathrm{Ker}(\eta^j) = (D) \cup H^{j-2}(F)$, where $(D) \in H^2(F)$ denotes the Brauer class of D.

Proof. Since $H^j_{nr}(X) = H^0_{Zar}(X, \mathcal{H}^j(j-1))$ is a subgroup of $H^j(F(X))$, $\mathrm{Ker}(\eta^j)$ coincides with the kernel of the natural map $H^j(F) \to H^j(F(X))$. The result thus follows from [AJ09, Thm. 3.6].

Remark 5.2. In Prop. 5.1, the case j = 2 amounts to an exact sequence $0 \to \mathbb{Z}/2 \xrightarrow{\delta} H^2(F) \to H^2(F(X))$, where the map δ sends 1 to the Brauer class (D). This is in fact a special case of Amitsur's theorem (cf. [GS17, Thm. 5.4.1]). For j = 3, the proposition gives a characteristic 2 analogue of a theorem of Arason [Ara75, Satz 5.4], i.e., we have an exact sequence $F^* \xrightarrow{\cup(D)} H^3(F) \to H^3(F(X))$. The kernel of the first map is the group $\mathrm{Nrd}(D^*)$ of reduced norms of D by [Gil00, p.94, Thm. 6]. So in this case we have an isomorphism $F^*/\mathrm{Nrd}(D^*) \cong \mathrm{Ker}(\eta^3)$.

The following result is slightly different from its counterpart in characteristic ≠ 2 (cf. [Kah95, p.246, Remarks (4)]). As in Lemma 4.3, the characteristic 2 assumption is crucial since the surjectivity in (3.10.1) is needed.

Our goal now is to extend Peyre's results in [Pey95, § 2] to characteristic 2. A common feature of Peyre's arguments and ours is the use of vanishing results for terms in the Hochschild-Serre spectral sequence. But the spectral sequences in the two different cases look remarkably different, because vanishing holds for different reasons, and hence the positions of the vanishing terms are not the same. (In characteristic ≠ 2, homotopy invariance is an ingredient guaranteeing some vanishing results.)
Indeed, the cohomology groups of the projective line are already different in different characteristics (see (5.4) below). As was mentioned in (3.9), in our situation the cohomological 2-dimension of the base field plays a key role, and the description of certain cohomology groups relies on the vanishing result in (3.2.2) (or its consequence (3.2.4)).

(5.4) Let $X \subseteq \mathbb{P}^2_F$ be a smooth conic. Fix $i \in \mathbb{N}$. In the Hochschild-Serre spectral sequence $H^p(F, H^q(\overline{X}, i)) \Rightarrow H^{p+q}(X, i)$ we have (5.4.1) and an exact sequence (5.4.2). Since dim X = 1, (3.6.1) gives a short exact sequence. Also, by the Gersten resolution of $\mathcal{H}^i(i)$ we have, in particular, a natural surjection, and the diagram (5.4.3) is commutative.

Let D be the quaternion F-algebra corresponding to the conic X. For each $x \in X^{(1)}$ we have $(D)_{\kappa(x)} = 0$ in $H^2(\kappa(x)) = \mathrm{Br}(\kappa(x))[2]$. Therefore, this together with the commutative diagram (5.4.3) shows that the composite map is 0. Now, using (5.4.2) we can define N to be the unique homomorphism making the following diagram commute:

Note that (5.4.2) also yields a homomorphism such that the diagram (5.4.4) is commutative with exact rows. Now we have the following complex, where N′ is induced by the map N in (5.4.4).

Proof. The injectivity of τ is a consequence of the injectivity of $H^{i+1}(F) \to H^{i+1}(X)$, and the surjectivity of N′ follows from that of the map ρ (cf. (5.4.1)). The equality $\mathrm{Im}(\tau) = \mathrm{Ker}(\rho \circ \iota)$ can easily be shown by a diagram chase, and the equality $\mathrm{Im}(\cup(D)) = \mathrm{Ker}(\eta^{i+1})$ was Prop. 5.1. To get the isomorphism for $\mathrm{Ker}(N')/\mathrm{Im}(\rho \circ \iota)$, it suffices to apply the snake lemma to the following commutative diagram:

From the diagram (5.4.3) we find $\mathrm{Im}(\rho \circ \iota) = \mathrm{Im}(\oplus\,\mathrm{Cor}_{\kappa(x)/F})$, and it is equal to the kernel of the map to $\mathbb{Z}/2$.

Corollary 5.6. Let $X \subseteq \mathbb{P}^2_F$ be a smooth conic with associated quaternion algebra D. Then we have isomorphisms

Proof. The first isomorphism follows from the i = 2 case of Prop. 5.5. The other isomorphisms have been discussed in Remark 5.2.

Corollary 5.7. Suppose ϕ is the reduced norm form of a quaternion division algebra D. Then we have isomorphisms

Proof. This follows from Cor. 4.2 and the corresponding results for conics (Props. 5.2, 5.3 and Cor. 5.6).

By Prop. 4.4, if one of the two groups $\mathrm{Ker}(\mathrm{cl}^2_\infty)$ and $\mathrm{Coker}(\eta^3_\infty)$ is trivial, then so is the other. If dim ϕ = 3, then dim X = 1 and $\mathrm{CH}^2(X) = 0$, so trivially $\mathrm{Ker}(\mathrm{cl}^2_\infty) = 0$. If dim ϕ = 4, as in the proof of Cor. 5.7 we may reduce to the case where ϕ is a 2-Pfister form. Then we can apply Cor. 4.2 to obtain $\mathrm{Coker}(\eta^3_\infty) = 0$ by passing to the case of conics. As a corollary, we have an isomorphism $\mathrm{Ker}(\eta^3) \cong \mathrm{Coker}(\eta^3)$ in view of Lemma 4.3. In fact, it is shown in [HLS21, Thm. 5.6] that this kernel consists of the classes $e_3(\langle\langle a, b; c]])$ with $a, b, c \in F^*$ such that ϕ is similar to a subform of $\langle\langle a, b; c]]$.

6 Quadrics of dimension ≥ 2

In this section, we prove our main results about low degree unramified cohomology for quadrics of dimension at least 2. These extend the main theorems of [Kah95] to characteristic 2. While the basic strategy is derived from Kahn's paper, we do need to check quite a few details, and in doing so (e.g. in the proofs of Lemma 6.3 (2) and Prop. 6.4) we use some ingredients (such as the exact sequences (3.9.3) and (3.9.6)) that do not show up in characteristic ≠ 2. In the proof of Thm. 6.11 we even have to use a different approach, which builds upon Lemma 4.3, a result that is itself special to characteristic 2.

Throughout this section, let ϕ be a nondegenerate quadratic form of dimension ≥ 3 over the field F and let X be the projective quadric defined by ϕ. Otherwise $\xi_1$ is an isomorphism.

Proof.
The idea of the proof is to compute the maps $\xi_i$ explicitly, by using the Galois module structures of the Chow groups in question. Indeed, the results we need about Chow groups can be found in [Kar90] and [HLS21].

Recall that we have defined the cycle class maps in (3.6.2). Here we only need the mod 2 case.

Note that the proof of Lemma 6.3 (2) used a special case of the second isomorphism in (3.9.5), which is only valid in characteristic 2. This result is relied on in the proof of Prop. 6.4 below, and together with (3.9.3) it serves as a characteristic 2 substitute for Shyevski's computation in [Shy90, § 5], which was used in [Kah95] in characteristic ≠ 2.

We have the commutative diagram (6.4.1). The left vertical map in (6.4.1) is injective since $(D) \ne 0$ in $H^2(F)$. The bottom horizontal map $\iota^*$ in (6.4.1) can be identified with the map via the first isomorphism given in Lemma 6.3 (2). Hence $\iota^*$ is injective. Thus, the diagram (6.4.1) implies that $(D) \cup \mathrm{cl}^1_X(h) \ne 0$ in $H^2(F, H^2(\overline{X}) \otimes \mathbb{Z}/2(1))$. As we have said before, this completes the proof.

The statement of the following lemma resembles [Kah95, Lemma 3], but in its proof we need results that are special to characteristic 2.

The following theorem is the characteristic 2 version of [Kah95, Thm. 1]. Notice however that unlike the case of characteristic different from 2, the maps $\mu^{0,j}$ and $\eta^j$ are different in our situation.

Proof. By Prop. 3.7, we have a natural exact sequence. This sequence together with Lemma 6.5 (1) proves the theorem immediately.

Applying the snake lemma to the above diagram and using Lemma 6.5 (2) (and (3.8.2)), we get an exact sequence. The theorem follows immediately from the above sequence.

In Case (a), the map δ in (6.5.1) can be identified with the zero map from $H^1(F, 1)$ to itself, since the map $\mathrm{CH}^1(X)/2 \to \mathrm{CH}^1(\overline{X})/2$ is the zero map from $2\mathbb{Z}/4\mathbb{Z}$ to $\mathbb{Z}/2\mathbb{Z}$. Thus, (6.5.1) yields an exact sequence. In Case (b), we can get an exact sequence of the same form, because the map δ can be viewed as the map

Theorem 6.10. Assume dim ϕ > 4. The maps $\mu^{1,2}$ and $\eta^3$ are both isomorphisms in each of the following cases:

In Theorem 6.11 below, the result for $\mathrm{Coker}(\eta^3)$ is different from its counterpart in characteristic ≠ 2 ([Kah95, Thm. 2 (b)]). The proof given in [Kah95] relies on a description of $\mathrm{cl}^2_X(h^2 - 2\ell_1)$ provided in [Shy90, Prop. 5.4.6]. We do not have a characteristic 2 analogue of that result. So we proceed with a different method.

Now we have computed $\mathrm{Ker}(\eta^3)$ and $\mathrm{Coker}(\eta^3)$ except in the case where ϕ is an Albert form. This last case will be treated in § 8.

7 Unramified Witt groups in characteristic 2

We need to use residue maps on the Witt group of a discrete valuation field of characteristic 2. We recall some key definitions and facts that will be used in the next section. Throughout this section, let K be a field extension of F (so char(K) = 2) and let R be the valuation ring of a nontrivial discrete valuation v on K. Let $\pi \in R$ be a uniformizer and let k be the residue field of R.

(7.1) Let $I_q(R) = W_q(R)$ be the Witt group of nonsingular quadratic spaces over R as defined in [Bae78, p.18, (I.4.8)]. It is naturally a subgroup of $I_q(K)$. For n ≥ 2, let $I^n_q(R)$ be the subgroup of $I_q(R)$ generated by Pfister forms of the type $\langle\langle a_1, \cdots, a_{n-1}; b]]$ where $a_i \in R^*$, $b \in R$. Put $I^1_q(R) = I_q(R)$. There is a natural homomorphism $I^n_q(R) \to I^n_q(k)$, $\varphi \mapsto \overline{\varphi}$, for each n ≥ 1.
Following [Ara18], we define the tame (or tamely ramified) subgroup of $I_q(K)$ to be the subgroup $I_q(K)_{tr} := W(K) \cdot I_q(R)$. For general n ≥ 1, we put

$I^n_q(K)_{tr} := I^{n-1}(K) \cdot I_q(R) = I^n_q(R) + \langle 1, \pi \rangle_{bil} \cdot I^{n-1}_q(R)$.

By [Ara18, Props. 1.1 and 1.2], there is a well defined residue map such that the corresponding sequence is exact. For each n ≥ 1 we have an induced residue map $\partial : I^n_q(K)_{tr} \to I^{n-1}_q(k)$ (with $I^0_q(k) = I^1_q(k)$ by convention), and putting

$I^n_q(K)_{nr} := \mathrm{Ker}(\partial : I^n_q(K)_{tr} \to I^{n-1}_q(k)) = I^n_q(K)_{tr} \cap I_q(R)$,

we get an exact sequence. Note that the term "tame" and the residue map depend on the discrete valuation v (or the valuation ring R).

(7.2) We use the shorthand notation for cohomology functors introduced at the beginning of § 5. Given $i \in \mathbb{N}$, localization theory in étale cohomology gives rise to a long exact sequence. By the Bloch-Kato-Gabber theorem, the map $\delta_0$ in this sequence can be identified with the residue map $K^M_{i+1}(K)/2 \to K^M_i(k)/2$ in Milnor K-theory, which is surjective. Hence, we have an exact sequence. An element of $H^{i+2}(K)$ is called unramified at v if it lies in the subgroup $H^{i+2}(R) = \mathrm{Ker}(\delta_1)$.

For each n ≥ 2, we define the tame (or tamely ramified) part $H^n_{tr}(K)$ of $H^n(K)$, where $\hat{K}$ denotes the v-adic completion of K and $\hat{K}_{nr}$ is the maximal unramified extension of $\hat{K}$. Kato constructed (cf. [Kat82a], [Kat86, § 1]) a residue map $\partial_H : H^n_{tr}(K) \to H^{n-1}(k)$ satisfying

(7.2.1) $\partial_H((u\pi) \cup (a_1) \cup \cdots \cup (a_{n-2}) \cup (b]) = (\bar{a}_1) \cup \cdots \cup (\bar{a}_{n-2}) \cup (\bar{b}]$ for $u, a_i \in R^*$, $b \in R$,

such that the corresponding sequence is exact. Therefore, an element $\alpha \in H^n(K)$ is unramified if and only if $\alpha \in H^n_{tr}(K)$ and $\partial_H(\alpha) = 0$. Using the formula (7.2.1), one easily proves the following result by direct calculation.

Proposition 7.3. For each n ≥ 2, we have $e_n(I^n_q(R)) \subseteq H^n(R)$ and $e_n(I^n_q(K)_{tr}) \subseteq H^n_{tr}(K)$, and the following diagram with exact rows is commutative:

8 Nontrivial unramified class for Albert quadrics

In this section we investigate the case of Albert quadrics and complete our study of the map $\eta^3$. Recall some standard notation. The hyperbolic plane $\mathbb{H}$ is the binary quadratic form $(x, y) \mapsto xy$. For any nondegenerate form q of dimension ≥ 3 over F, let F(q) denote the function field of the projective quadric defined by q.

Lemma 8.1. Let q be an Albert form over F which represents 1, and let $q_1$ be a form over F(q) such that $q_{F(q)} = q_1 \perp \mathbb{H}$. If $q_1$ represents 1, then q must be isotropic over F.

We claim that $e_3(\tau - \varphi_1)$ lies in the unramified cohomology group $H^3_{nr}(X)$. In fact, we can prove the following:

Proposition 8.2. For every discrete valuation v of K that is trivial on F, $e_3(\tau - \varphi_1)$ is unramified at v.

Proof. Let R be the valuation ring of v in K = F(X). By the commutative diagram in Prop. 7.3, it is sufficient to show $\tau - \varphi_1 \in I^3_q(K)_{nr} = I_q(R) \cap I^3_q(K)_{tr}$. It remains to prove $\tau - \varphi_1 \in I_q(R)$. We already know $\varphi_1 \in I^2_q(R) \subseteq I_q(R)$. So it suffices to show $\tau \in I_q(R)$. This is equivalent to $\tau_{\hat{K}} \in I_q(\hat{R})$, where $\hat{R}$ denotes the completion of R and $\hat{K}$ is the fraction field of $\hat{R}$ (see e.g. [Ara18, p.106]).

Proof. Since $e_3 : I^3_q(F) \to H^3(F)$ is a surjection and $I^3_q(F)$ is additively generated by 3-Pfister forms, every element of $H^3(F)$ is a sum of finitely many symbols (by a symbol in $H^3(F)$ we mean the class $e_3(\pi)$ of a 3-Pfister form π). We may define the symbol length of an element $\beta \in H^3(F)$ to be the smallest positive integer n such that β is a sum of n symbols.
We use induction to show: for every n ≥ 1, there is no element $\beta \in H^3(F)$ of symbol length n such that $\beta_{F(X)} = e_3(\tau - \varphi_1)$. This contradicts the induction hypothesis.

Theorem 8.4. Let X be the projective quadric defined by an anisotropic Albert form over F.
PREPARATION AND CHARACTERIZATION OF THERMOSENSITIVE MUCOADHESIVE IN-SITU GELS FOR NASAL DELIVERY OF ONDANSETRON HYDROCHLORIDE

A nasal mucoadhesive thermoreversible in-situ gel appears very attractive since it is fluid-like prior to nasal administration and can thus easily be instilled as a drop, allowing accurate drug dosing. The feasibility of developing an efficacious intranasal formulation of the potent antiemetic drug Ondansetron HCl has been undertaken in this work. The ultimate goal is to circumvent the first-pass elimination of the drug when taken orally. Poloxamers P407 and P188 (20/5% w/v) were used, via the cold method, to prepare thermoreversible gels, as they have excellent thermosensitive gelling properties, water solubility, good drug release, and low toxicity and irritation. Mucoadhesive polymers, namely chitosan of high molecular weight (HMW), sodium carboxymethyl cellulose of low molecular weight (LMW) and polyvinylpyrrolidone K30 (PVP), were used at a concentration of 0.5% (w/v) to form the thermoreversible gels. Three nasal in-situ gels with a desirable T(sol-gel) in the range of 30–35°C were developed. pH, mucoadhesion, rheological measurements, in-vitro release and ex-vivo permeation studies were performed to evaluate the prepared gels. The incorporation of chitosan into the poloxamer gels produced a significant increase in mucoadhesion ability. The prepared gels exhibited non-Newtonian shear thinning behavior at 35°C. Drug contents were in the range of 97.8–100.1%. The release pattern was enhanced by the PVP polymer; in opposition, chitosan and NaCMC retarded it. Concerning permeation through sheep nasal mucosa, the steady state flux (Jss) of the three formulae was found to be 3.57, 5.64 and 3.81 µg/cm²·min, respectively. No marked alteration in the histological structure of the nasal epithelial cell membrane of male Wistar rats was observed after application of the formed gels, confirming their safety. The bioavailability of the optimized formulation was 86.98%, indicating that the intranasal route could be promising for Ondansetron HCl delivery.

INTRODUCTION

Nasal delivery is increasingly considered to be an alternative route for drugs that currently require parenteral administration. As a site for systemic absorption, the nasal route provides a means of avoiding first-pass metabolism (Ultarwar et al., 2012). The development of in-situ gel systems has received considerable attention over the past few years. These systems possess potential advantages such as a simple manufacturing process, reduced dosing frequency, ease of administration, and improved patient compliance and comfort (Miyazaki et al., 2003). In-situ gel forming drug delivery is a type of mucoadhesive drug delivery system. In contrast to very strong gels, these systems can be easily applied in liquid form to the site of drug absorption, where they swell to form a strong gel capable of prolonging the residence time of the active substance. Both natural and synthetic polymers can be used for the production of in-situ gels. In-situ gel formation occurs due to one or a combination of different stimuli such as pH change, temperature modulation and ionic cross-linking (Kant et al., 2011). Poloxamers or pluronics are a series of commercially available copolymers of non-ionic nature. They were used as the in-situ gel forming polymer together with mucoadhesive polymers such as NaCMC, chitosan and PVP to ensure a long residence time at the application site (Alexandridis and Hatton, 1995).
Ondansetron hydrochloride (ON) has been used to prevent and control both nausea and vomiting after cancer chemotherapy, radiotherapy and surgery. Unlike metoclopramide, (ON) is known not to induce undesirable side effects such as extrapyramidal reactions. It should be administered 30 min before chemotherapy, and it tends to be discharged by vomiting (Rolia and Del Favero, 1995). It has been used by oral and injectable administration. It is rapidly absorbed orally, but extensively metabolized by the liver (Figg et al., 1996). Based on this, the feasibility of developing an effective intranasal formulation of the potent antiemetic drug (ON) has been undertaken in this study.

The aim of this study was to formulate (ON) in a mucoadhesive in-situ gelling system to increase the residence time of the drug in the nasal cavity. The system would allow accurate drug dosing. The poloxamer 407/188 gel was used as the base, whereby its gelation temperature was modulated so that the formulation is liquid at 25°C and gels at 32°C. Additionally, different mucoadhesive polymers were used together with the poloxamer to fortify the adhesion of the in-situ gel to the nasal mucosal surface.

MATERIALS AND METHODS

Materials

Ondansetron hydrochloride dihydrate (ON) was kindly supplied by (Ameriah pharmaceutical company, Cairo, Egypt). Risperidone (pharo, Egypt); Zofran 8 mg tablets and Danset ampoules 4 mg/2 ml (glakso, Egypt); Chitosan (Cs) of high molecular weight, mucin from porcine stomach, and Poloxamer 407 and 188 (Sigma–Aldrich Company, St. Louis, USA); Haematoxylin–eosin (Bark Scientific Limited, UK); thiopental (Epico, Egypt); 0.9% saline solution (Haydelina, Egypt); sodium carboxymethyl cellulose (NaCMC), polyvinylpyrrolidone K30 (PVP), sodium hydroxide, sodium chloride, potassium chloride, calcium chloride dihydrate and propylene glycol (Fluka Chemika-BioChemika, Switzerland). Benzalkonium chloride, isopropyl alcohol, formalin, zinc sulphate, ammonium acetate, glacial acetic acid and acetonitrile were purchased from (Sigma-Aldrich Chemicals, St. Louis, MO, USA). Phosphate buffer saline solution pH 6.4 was freshly prepared. All other chemicals were of reagent grade and were purchased from (EL-Nasr Company, Cairo, Egypt).

Equipment

Electric balance (Sartorius GMBH, Germany), Brookfield DV-III ultra programmable cone and plate rheometer fitted with spindle number 52 and controlled with rheometer operating software (Brookfield, USA), dissolution tester apparatus II (Hanson research test, USA), UV 240 double beam spectrophotometer (Schimadzu Corporation, Kyoto, Japan), pH meter (Genway ltd, UK), ultracentrifuge (Jouan, France), magnetic stirrer (Jenway, UK), fridge (Toshiba, Egypt), light microscope (Euromex, The Netherlands), HPLC equipped with G1311A quaternary pump and UV detector (VWD-G1314A, agilent, Germany), thermostatic water bath (Poly Science, USA), diffusion cell (designed as per the dimensions given by Pisal et al.), ultrasonic sonicator (Crest Trenton, U.S.A), vortex (snijders, Holland), and 0.45 µm membrane filter (nupore, India).

Preparation of (ON) thermoreversible gels

The formulations were prepared on a weight-ratio basis according to the cold method (Pisal et al., 2004). Medicated in-situ gelling formulations composed of 20/5% w/v P407/P188 were prepared with the addition of the mucoadhesive polymers, namely NaCMC (LMW), chitosan (HMW) and PVP K30.
The drug, benzalkonium chloride and the polymers were stirred in the calculated amount of distilled water with the proper amount of propylene glycol (10% v/v) at room temperature. The dispersions were cooled; the poloxamers were added slowly and then left to hydrate at 4°C (chitosan was dissolved in 0.1 N acetic acid). Table (1) illustrates the composition ratio of the in-situ gel components.

Visual appearance, clarity and pH of in-situ gel

The clarity and color of the formulated solutions were determined by visual inspection under black and white backgrounds (Mahadlek 2008). The pH of the medicated formulations was determined by bringing the electrode of the pH meter in contact with the surface of the formulation and allowing it to equilibrate for 1 min. The experiments were run in triplicate.

Gelation temperature determination

The gelation temperatures of the formulations were determined using a modified "Visual Tube Inversion Method" (Ur-Rehman et al., 2011). Approximately 4 g of thermosensitive gel was transferred to vials and incubated in a thermostatic water bath with a heating rate of 1°C/min; an equilibration period of 5 min was applied after each temperature raise. The gel surfaces were observed at every temperature point by tilting the vials to the horizontal position; the temperatures at which the surfaces remained immobile within 30 sec were measured by an inserted thermometer and were recorded as the gelation temperatures. The measurements were performed in triplicate.

Measurement of steady shear viscosity

The rheological properties of the in-situ gelling formulations were studied using a cone and plate Brookfield viscometer (Zaki et al., 2007). The measurements were made at 35 ± 0.1°C using spindle 52 at a shear rate ranging from 0.5 to 100 rpm. The shear rate (γ) in s⁻¹ and the viscosity (η) in cps were determined and fitted to the power law constitutive equation η = m·γ^(n−1) (Tung and Fang, 1994), where m is the consistency index and n is the flow index. If n = 1, this indicates Newtonian behavior, while n < 1 indicates shear thinning flow, and the lower the value of n the more shear thinning the formulation (Asasutjarit et al., 2011).

Mucoadhesion measurement

The mucoadhesive behavior was evaluated according to the method described by (Hassan and Gallo, 1990), based on the idea that the chemical interaction and entanglements between the polymer and glycoproteins in mucus cause a rheological synergism. Dried mucin was hydrated with simulated nasal electrolyte fluid (SNEF) by stirring for 3 hrs at room temperature. Six grams of mucin dispersion were mixed for 15 min with 2 g of each polymer solution before measurement. The viscosity of mucin (15% w/w) was measured in the absence and presence of polymer solution to evaluate the mucoadhesion properties of the tested polymer solution. The measurement was done at 35 ± 1°C and shear rates (D) of 10, 20, 50 and 100 s⁻¹. All measurements were performed in triplicate. The viscosity of the mucoadhesion component (η_b) was calculated from the equation η_b = η_t − η_m − η_p, where η_t is the viscosity of mucin with polymer, η_m is the viscosity of mucin without polymer and η_p is the viscosity of the corresponding in-situ gelling solution. The mucoadhesion index M [cp] was calculated using the shear rate D [s⁻¹] and the viscosity of the mucoadhesion component η_b [cp] according to the equation M = η_b·D, where η_b is calculated from the previous equation and D is the shear rate per second. Since η_b may decrease with an increase in the applied shear rate D, it was decided to use a high value of D to eliminate weakly mucoadhesive materials (Hassan and Gallo, 1990). The SNEF was composed of 7.45 mg/ml NaCl, 1.29 mg/ml KCl and 0.32 mg/ml CaCl₂·2H₂O, and the pH was adjusted to 5.5 (Pund and Borade, 2013).
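A minimal computational sketch of the mucoadhesion calculation just described is given below; the viscosity values used are hypothetical placeholders, not measurements from this study.

```python
# Mucoadhesion component eta_b = eta_t - eta_m - eta_p, and index M = eta_b * D,
# exactly as defined in the method above.
def mucoadhesion_component(eta_t, eta_m, eta_p):
    """Viscosity component of mucoadhesion (cp)."""
    return eta_t - eta_m - eta_p

def mucoadhesion_index(eta_b, shear_rate):
    """Mucoadhesion index M at shear rate D (1/s)."""
    return eta_b * shear_rate

eta_b = mucoadhesion_component(eta_t=850.0, eta_m=300.0, eta_p=400.0)  # cp (assumed)
print(mucoadhesion_index(eta_b, shear_rate=40.0))  # evaluated at D = 40 1/s
```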
Drug content

An accurately measured 1 ml of each formula was shaken with 100 ml SNEF until the drug completely dissolved. The solutions were filtered through Whatman filter paper. Drug content was estimated spectrophotometrically at 310 nm using plain SNEF as a blank and was calculated using the standard calibration curve. The mean percent drug content was calculated as an average of 3 readings.

In-vitro drug release study

Drug release was monitored using the USP dissolution test apparatus type II (Zaki et al., 2007). A dialysis tube containing 1 ml of gel formulation was immersed in 500 ml of SNEF as the dissolution medium at 35°C ± 0.5°C with rotation at 50 rpm. Aliquots of 1 ml were withdrawn at time intervals of 15, 30, 45, 60, 90, 120, 180, 240, 300, 360, 420 and 480 min, and each aliquot was replaced by 1 ml of fresh SNEF (the corresponding cumulative-release bookkeeping is sketched at the end of this section). The samples were measured spectrophotometrically as mentioned earlier. The experiments were run in triplicate.

Ex-vivo permeation studies using diffusion cell

Freshly excised sheep nasal mucosa, except the septum part, was collected from a local slaughterhouse. The superior nasal membrane was identified, separated from the nasal cavity and made free from adhered tissues. Maintaining the viability of the excised nasal tissues during the experimental period is crucial. Within 10 min of killing the animal, the mucosa was carefully removed, then immediately immersed in phosphate buffer saline solution pH 6.4 for 15 min and aerated (Pund and Borade, 2013). The membrane was mounted between the donor and the receptor compartments of the diffusion cell. The nasal diffusion cell was designed as per the dimensions given by (Pisal et al., 2004), as seen in Fig. (1). The position of the donor compartment was adjusted so that the mucosa just touches the permeation medium. A volume equivalent to 0.5 ml of the prepared in-situ gel was placed in the donor compartment, which was in contact with the mucosal surface of the membrane, while the receptor compartment was filled with 67 ml of SNEF and its temperature was maintained at 37°C. The content of the receptor compartment was stirred using a magnetic stirrer. An aliquot of 1 ml was withdrawn at suitable time intervals and replaced with the same volume of fresh medium. These samples were analyzed spectrophotometrically at 310 nm (Samson et al., 2012; Nisha et al., 2012). The experiments were run in triplicate.

In-vivo nasal irritation test

Briefly, male Wistar rats weighing 250–300 g were sedated with an intraperitoneal injection of thiopental (∼45 mg/kg) before each dosing to facilitate nasal administration. The rats were divided into 5 groups of 3 rats each. Group I received 0.9% saline solution in the right nostril (negative control), group II received isopropyl alcohol in the right nostril (positive control), and groups III, IV and V received in-situ gel formulae M1, M2 and M3, respectively, once daily for 14 consecutive days, after which the rats were sacrificed. The nasal septum with the epithelial cell membrane on each side was carefully separated from the bone. The septum was fixed with 10% formalin, sliced on a microtome and stained with haematoxylin–eosin. The left nostril (un-dosed) was used as a control. The slides of control and treated nasal mucosal tissues were examined using a light microscope (Banchroft and Stevens, 1996).
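As flagged in the in-vitro release protocol above, each 1 ml aliquot withdrawn is replaced with fresh SNEF, so the cumulative amount released has to be corrected for the drug removed in earlier samples. A minimal sketch of this standard correction is given below; the concentrations are hypothetical, not data from this study.

```python
# Cumulative release with replacement correction:
# Q_n = C_n * V_medium + sum over earlier samples of C_i * V_sample.
def cumulative_release(concentrations, v_medium=500.0, v_sample=1.0):
    """Cumulative amount released at each sampling time, corrected for the
    drug removed in earlier aliquots (amounts in mg for mg/ml inputs)."""
    released, removed = [], 0.0
    for c in concentrations:
        released.append(c * v_medium + removed)
        removed += c * v_sample
    return released

print(cumulative_release([0.010, 0.018, 0.024, 0.028]))  # hypothetical mg/ml values
```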
In-vivo pharmacokinetic study

Animal handling and drug administration

Nine male albino New Zealand rabbits weighing about 2.5 ± 0.1 kg were used. The rabbits were housed individually in stainless steel cages and fed a commercial laboratory rabbit diet. The rabbits were fasted for 18 hrs prior to and during the pharmacokinetic study. The animals were conscious during the experiments and were held in restrainers while blood samples were withdrawn. The animals were randomly divided into three groups of three rabbits each. One group received 400 µl (equivalent to 8 mg ON) of the selected developed nasal in-situ gel formula deposited into both nostrils. The second group received the commercial oral product (Zofran 8 mg tablets), administered at the back of the pharynx using a gastric intubation silicone rubber tube with one tablet set on the tip of the tube; 5 ml of water was immediately administered through the tube to ease swallowing. Finally, the third group received the commercial I.V. product (2 ampoules of Danset, 4 mg each) injected into the animal's marginal ear vein (Mahajan and Gattani, 2010). The study was conducted according to a 2-period, 2-sequence crossover design with a one-week washout period between the phases. All animal procedures were approved by the Ethics Committee of the Faculty of Pharmacy, Cairo University.

Sample collection and analysis

After administration of the three different dosage forms described above, blood samples were collected at time intervals of 0.25, 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 16 and 24 hrs. Blood samples were centrifuged at 4000 rpm for 15 min to separate plasma, which was stored at −20°C pending HPLC analysis. Risperidone (internal standard) was added to 0.5 ml plasma, and the sample was then deproteinized with a mixture of 1 ml acetonitrile and 50 µl of 10% w/v zinc sulphate solution. The treated samples were vortexed for 2 min and centrifuged at 10,000 rpm for 20 min. The supernatant was then filtered through a nylon membrane filter (0.45 µm) and injected into the HPLC cyano (CN) column (Phenomenex, 250 × 4.6 mm ID, 5 µm). The mobile phase consisted of a mixture of 50 mmole ammonium acetate, adjusted to pH 3.5 with glacial acetic acid, and acetonitrile (35:65 v/v), filtered through a 0.45 µm membrane filter and degassed by sonication prior to use. The flow rate was 1 ml/min, and the detection wavelength was 310 nm. All measurements were performed at ambient temperature (Shelsha et al., 2011). The experiments were run in triplicate.
Data treatment and statistics

The maximum plasma drug concentration (Cmax) and the time to achieve this peak (tmax) were determined directly from the data. The area under the concentration–time curve from 0 to the last measurable concentration (AUC0–t) was calculated by the linear trapezoidal rule. AUC0–∞ was the sum of the area under the plasma concentration–time curve from 0 to time t (AUC0–t) and the area under the curve from time t to infinity (AUCt–∞). AUCt–∞ was calculated by dividing the last measurable plasma concentration by the terminal elimination rate constant (Ke). The value of Ke was calculated using least-squares regression analysis of the terminal portion of the log plasma concentration vs. time curve. The elimination half-life (t1/2) was calculated by dividing 0.693 by Ke. The pharmacokinetic parameters AUC0–∞, Cmax, t1/2 and Ke were analyzed statistically using one-way analysis of variance (ANOVA) (Wagner, 1975). The values of AUC0–∞ and Cmax were logarithmically transformed before analysis. The tmax values were analyzed using the Wilcoxon Signed Rank test for paired samples. A statistically significant difference was considered when p < 0.05. All plasma concentration data were dose- and weight-normalized and analyzed using the Wagner–Nelson method for determination of (ON) pharmacokinetics. (A computational sketch of this treatment is given below.)

The absolute bioavailability F (%) of intranasal administration (IN) was calculated using the following equation: F (%) = (AUC0–t,IN × Dose IV) / (AUC0–t,IV × Dose IN) × 100. The relative bioavailability F (%) of nasal administration was calculated analogously, with the oral product as the reference: F (%) = (AUC0–t,IN × Dose oral) / (AUC0–t,oral × Dose IN) × 100.

RESULTS AND DISCUSSION

Visual appearance, clarity and pH of in-situ gel

Table (2) illustrates the appearance, clarity and pH. All prepared in-situ gels were in the acceptable pH range for nasal administration. Greater drug permeation is usually achieved at a pH lower than the drug pKa because, under such conditions, the penetrating molecules exist as un-ionized species. Because the pH of the nasal cavity can alter that of the formulation and vice versa, the ideal pH of a formulation should be within 4.5–6.5 (Nishan et al., 2012).

Gelation temperature determination

It could be concluded from Table (2) that the addition of mucoadhesive polymers generally reduced the gelation temperatures. This was in good agreement with that reported by (Choi et al., 1998). The gelation temperature-lowering effect of the mucoadhesive polymers used could be explained by their ability to bind to the polyoxyethylene chains of the poloxamer molecules, which promotes dehydration and causes an increase in the entanglement of adjacent molecules with more extensive intermolecular hydrogen bonding, thus producing gelation at a lower temperature (Ryu et al., 1999).

Measurement of steady shear viscosity

As seen in Figs. 2–5, all gels exhibited non-Newtonian, shear thinning behavior, as the n values were < 1. The lower the value of n, the more shear thinning the formulation (Owen et al., 2000). Mucoadhesive polymers had a viscosity-enhancing effect, as revealed by the values of the consistency index (m) shown in Table (3).
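The non-compartmental data treatment described above can be sketched as follows; the time and concentration values are hypothetical placeholders, not the rabbit data of this study, and the choice of the last four points for the terminal phase is an assumption.

```python
# Cmax/tmax read directly, AUC(0-t) by the linear trapezoidal rule,
# Ke from least-squares regression of the terminal log-linear phase,
# t1/2 = 0.693/Ke and AUC(0-inf) = AUC(0-t) + Clast/Ke.
import numpy as np

t = np.array([0.25, 0.5, 1, 1.5, 2, 4, 6, 8, 12])       # hr (hypothetical)
c = np.array([150, 324, 280, 210, 160, 70, 32, 15, 3])   # ng/ml (hypothetical)

cmax, tmax = c.max(), t[c.argmax()]
auc_t = np.trapz(c, t)                                   # linear trapezoidal rule
slope = np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]         # terminal phase: last 4 points
ke = -slope
auc_inf = auc_t + c[-1] / ke
print(cmax, tmax, round(auc_t, 1), round(0.693 / ke, 2), round(auc_inf, 1))
```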
Mucoadhesion measurement
The use of polymers with strong mucoadhesive capacity can significantly limit the total clearance of the formulation from the nasal cavity (Dodou et al., 2005). The findings show that the polymer/mucus mixtures exhibit synergistic rheological profiles, as the viscosity of the mixture is higher than the sum of the corresponding values of the separate components at all shear rates investigated (Alsarra et al., 2009). Table 3 reports the extent to which the viscosity of the polymer-mucin mixture differs from the expected value based on the additive polymer and mucin contributions, i.e., the viscosity component (ηb). As seen in Table 3, the mucoadhesive forces (F) calculated at D = 40 s⁻¹ were 6520, 11000 and 3920 dyne/cm² for M1, M2 and M3, respectively; the corresponding viscosities are also listed in Table 3. The mucoadhesion values and forces for formula M2 are significantly higher than for M1 (p < 0.01) and M3 (p < 0.001). This indicates that formula M2 interacts more strongly with mucin: hydrogen bonding between mucin and chitosan, as well as electrostatic interaction between the amine function of chitosan and the sialic acid and sulfonated residues of mucin, may be responsible (Suknuntha et al., 2011).
Drug content
As seen in Table 2, the drug content was within the acceptable range for all three formulated in-situ gels. It was in the range 97.8-100.1%, indicating uniform distribution of (ON) in the gels.
In-vitro drug release study
The release profiles of the (ON) mucoadhesive nasal in-situ gels (formulae M1, M2 and M3) in Fig. 6 show that the in-situ gel with PVP K30 enhanced drug release compared with the in-situ gel without mucoadhesive polymers (formula M). This enhancement is attributed to the water-soluble nature of PVP, which allowed more rapid penetration of the dissolution medium into the semisolid matrix and initiated surface dissolution/erosion (Jones et al., 1999). The retardation of drug release with NaCMC and chitosan could be due to a possible squeezing effect on the aqueous channels of the poloxamer micelles through which the drug diffuses, as well as to an increase in overall product viscosity (Desai and Blanchard, 1998). The correlation coefficient R² was found to be > 0.95, indicating goodness of fit of the data to the Korsmeyer-Peppas equation. When n equals 0.5, the fraction of drug released is proportional to the square root of time (Higuchi kinetics) and drug release is solely diffusion controlled (Fickian diffusion kinetics). If n = 1, drug release is swelling controlled (zero-order kinetics), while 0.5 < n < 1 indicates anomalous transport and a superposition of both phenomena (non-Fickian kinetics) (Zaki et al., 2007). The results of the in-vitro dissolution study revealed non-Fickian (n = 0.5977) or anomalous release of (ON) from the in-situ gel (M2), as shown in Table 4. This indicates that dissolution of the gel controlled the (ON) release. The decrease in the diffusion rate of the drug with time, owing to the decreasing concentration gradient, can be attributed to gel dissolution.
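The release exponent n above is obtained by fitting the Korsmeyer-Peppas model Mt/Minf = k x t^n to the early portion of the release curve (conventionally the points up to about 60% released). A minimal sketch, assuming numpy and illustrative release fractions rather than the study's data:

import numpy as np

# Illustrative cumulative fraction released vs time (hr); not the study data.
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
frac = np.array([0.12, 0.18, 0.27, 0.40, 0.50, 0.58])  # keep points <= ~60%

# Korsmeyer-Peppas: Mt/Minf = k * t**n  ->  log(frac) = log k + n * log t.
n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
residuals = np.log(frac) - (n * np.log(t) + log_k)
r2 = 1 - residuals.var() / np.log(frac).var()
print(f"n = {n:.4f}, k = {np.exp(log_k):.3f}, R^2 = {r2:.3f}")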
Ex-vivo permeation studies
To be successfully delivered through the nasal route, drug candidates should have adequate permeability. Fig. 7 and Table 5 illustrate the profile of (ON) permeation through sheep nasal mucosal membrane. Linear regression analysis of the pseudo-steady-state diffusion data allowed calculation of the steady-state fluxes (Jss), which were found to be 3.57, 5.64 and 3.81 µg/cm²·min for M1, M2 and M3, respectively. The apparent permeability coefficients (Papp) were 0.06, 0.0945 and 0.0638 cm·min⁻¹, and the diffusion coefficients (D) were 0.55, 1.566 and 0.7162 cm²·min⁻¹. Ranking the three formulae in descending order according to the percent amount of drug permeated per cm² after 300 min gives M2 ≥ M1 ≈ M3. Formula M2, containing chitosan as the mucoadhesive polymer, showed the highest percent of drug permeated, which is related to the higher mucoadhesion ability of chitosan. The pKa of (ON) is 7.4 (Roila and Del Favero, 1995), so it will be ionized at physiological pH (5-6.5) and hence will be polar. Polar drugs with molecular weights below 1000 Da generally pass the membrane by the paracellular route (Pires et al., 2009).
In-vivo nasal irritation test
The successful use of mucoadhesive nasal delivery systems is not limited to their mucoadhesion efficacy; of equal importance is their safety. After treatment of the epithelial cell membrane of male Wistar rats with the three prepared in-situ gels (M1, M2 and M3), no signs of irritation such as vascular congestion or sub-epithelial edema were observed, and there was no marked alteration of the histological structure compared with the negative control, as seen in Fig. 8 (A-E).
In-vivo pharmacokinetic study
The concentration of ON in rabbit plasma was determined by a validated HPLC assay. Fig. 9 shows a representative chromatogram of rabbit plasma containing (ON) and risperidone (IS), which were well separated at retention times of 4.94 and 5.7 min, respectively. The mean percentage recovery of (ON) from spiked plasma samples was 97.85% and the mean correlation coefficient of the standard curve was 0.9973. The bioavailability of the in-situ (ON) gel was determined for the optimized formulation M2 (composed of 2% (ON), 20% poloxamer 407, 5% poloxamer 188, 10% PG and 0.5% chitosan) owing to its high drug content, slow release rate and highest permeation. In-situ gel formula M2 was compared with commercial oral tablets and an intravenous solution containing the same (ON) dose. The mean plasma drug concentration-time profiles after administration of the IV, oral and in-situ gel forms of (ON) are illustrated in Fig. 10. The plasma profile of the mucoadhesive nasal in-situ gel shows two peaks, at 0.5 and 2 hr. The first corresponds to direct absorption from the nasal cavity and the second to oral drug absorption, which might have occurred because a portion of the drug solution was swallowed before conversion into gel following nasal instillation. Table 5 shows the plasma pharmacokinetic parameters for the different routes and formulations.
The Cmax values were 165.4±15.15 ng/ml and 324.1±20.18 ng/ml for the oral tablets and the nasal in-situ gel, respectively. Statistical analysis revealed that Cmax was significantly higher for the nasal in-situ gel (p < 0.001). Concerning the rate of absorption, tmax was 2 hr for the oral tablets and 0.5 hr for the nasal in-situ gel; statistical analysis revealed that tmax was significantly shorter for the nasal in-situ gel than for the oral tablets (p < 0.001). The higher plasma mean residence time (MRT) of (ON) obtained from the nasal in-situ gel (6.38 hr) compared with the IV solution (2.46 hr) indicates a sustained drug release.
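MRT values such as those quoted above are typically obtained noncompartmentally as MRT = AUMC/AUC, where AUMC is the area under the first-moment curve (t x C versus t). A minimal sketch, assuming numpy and an illustrative profile rather than the study's data:

import numpy as np

# Illustrative plasma profile (hr, ng/ml); not the study data.
t = np.array([0.25, 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 16, 24])
c = np.array([210., 324., 300., 250., 205., 170., 140., 100., 55., 30., 10., 4., 1.])

auc = np.trapz(c, t)        # area under C vs t
aumc = np.trapz(t * c, t)   # area under t*C vs t (first moment)
mrt = aumc / auc            # mean residence time, hr
print(f"AUC = {auc:.1f} ng*hr/ml, MRT = {mrt:.2f} hr")

A full analysis would also extrapolate both areas to infinity using the terminal elimination rate constant before forming the ratio.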
Statistical analysis revealed that the MRT was significantly higher for the nasal in-situ gel than for the oral tablets (p < 0.001). The calculated elimination rate constant (Kel) of the nasal in-situ gel (0.119 hr⁻¹) was significantly lower than that of the IV solution (0.375 hr⁻¹) (p < 0.01). A significant difference was found between the plasma AUC0-inf of the nasal in-situ gel (1026.815 ng·hr/ml) and the oral tablets (695.76 ng·hr/ml) (p < 0.05), the nasal route achieving excellent absolute and relative bioavailabilities of 86.98% and 147.5%, respectively, for the nasal in-situ gel. Improved nasal over oral bioavailability has been previously reported for verapamil chitosan microspheres (Abdel Mouez et al., 2014). Chitosan has been reported to improve bioavailability through a dual effect: its ability to increase epithelial permeability and its mucoadhesive nature (Hinchcliffe et al., 2005).
CONCLUSION
Taken together, the mucoadhesive nasal in-situ gel developed in the present study has favorable gelation, rheological and release properties in vitro, and it demonstrated adequate safety to the nasal mucosa of rats. The most prominent advantage of the in-situ gel over a preformed gel is that it is fluid-like prior to contact with the nasal mucosa, a feature that is warranted for convenience of administration, accuracy of drug dosing and avoidance of the bitter taste of the antiemetic drug. Intranasal (ON) in-situ gel could be considered an alternative to both oral and intravenous administration.
Linguistic adaptation and psychometric evaluation of the original Oral Health Literacy-Adult Questionnaire (OHL-AQ).
INTRODUCTION Linguistically adapted oral health literacy tools help assess oral health literacy among a local population with clarity and understandability. The original Oral Health Literacy Adult Questionnaire (OHL-AQ) was published in English (2013) and consists of 17 items under 4 domains. The present study set out to culturally adapt and validate the OHL-AQ in the Hindi language. Thus, we aimed to translate the OHL-AQ into Hindi and test its psychometric properties, such as reliability and validity, among primary school teachers.
METHODS The OHL-AQ was translated into the Oral Health Literacy Adult Questionnaire - Hindi Version (OHL-AQ-H) using the World Health Organization recommended translation back-translation protocol. During pre-testing, an expert panel assessed the content validity of the questionnaire. Face validity was assessed on a small sample of 10 individuals. A cross-sectional study was conducted (June-July 2015) and the OHL-AQ-H was administered to a sample of 170 primary school teachers. Internal consistency and test-retest reliability were assessed using Cronbach's alpha and the intra-class correlation coefficient (ICC), respectively, with a 2-week interval to ascertain adherence to the questionnaire responses. Predictive validity was tested by comparing OHL-AQ-H scores with clinical indicators such as oral hygiene scores and dental caries scores. Concurrent and discriminant validity were assessed through self-reported oral health and through negative association with sociodemographic variables, respectively. The data were analyzed by descriptive tests using chi-square and bivariate logistic regression in SPSS software, version 20, with p < 0.05 as the significance level.
RESULTS The mean OHL-AQ-H score was 13.58±2.82. The ICC and Cronbach's alpha for the OHL-AQ-H were 0.94 and 0.70, respectively. Comparisons of varying levels of oral health literacy with self-reported oral health established significant concurrent validity (p=0.01). Significant predictive validity was observed between OHL-AQ-H scores and clinical parameters such as oral hygiene status (p=0.005) and dentition status (p=0.001).
CONCLUSION The translated and culturally adapted OHL-AQ-H showed good reliability and validity among primary school teachers for assessing oral health literacy in the Hindi-speaking population. Hence, improving OHL levels and implementing education-oriented policies can improve quality of life.
Health literacy was defined in 1998 by the World Health Organization as "the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand and use information in ways which promote and maintain good health" (1). Oral health literacy, a concept that has emerged since the late 1990s, is defined as the "degree to which individuals have the capacity to obtain, process and understand basic oral health information and services needed to make appropriate health decisions" (2). There is a strong association between the level of health literacy and its impact on general health, as evident from previous literature (3,4). Oral health literacy has been an issue of concern both at the ground level and as a policy-making criterion.
In accordance with the Disease Control Priorities in Developing Countries maxim, "What gets measured gets done", various tools have been implemented to first measure oral health literacy and then strategically deploy prevention and promotion plans to promote good oral health outcomes (5). The majority of early oral health literacy tools were adapted from their medical health literacy counterpart scales. Tools such as the Rapid Estimate of Adult Literacy in Dentistry, REALD-30 (6); the Test of Functional Health Literacy in Dentistry, TOFHLiD (7); the Oral Health Literacy Instrument, OHLI; the Comprehensive Measure of Oral Health Knowledge, CMOHK (8); the Oral Health Literacy Assessment-Spanish; and the Hong Kong Oral Health Literacy Assessment Task for Pediatric Dentistry were among the most commonly used oral health literacy instruments. A chronological overview of oral health literacy tools from 2007-2014 identifies 14 such tools (9). Most of the tools measured oral health literacy in terms of specific domains ranging from word recognition ability to reading comprehension capacity. These domains provided scales with a limited objective for assessing oral health literacy, and the existing functional oral health literacy instruments are long and difficult with respect to the comprehension of the general population. The recently developed oral health literacy adult questionnaire (OHL-AQ) stands out among the preexisting tools as a more stable and comprehensive assessment instrument. Besides reading comprehension and numeracy sections, this questionnaire-based instrument also encompasses listening and decision making as two further domains (10). Our literature search revealed no previous study investigating the reliability and validity of the OHL-AQ for Hindi-speaking inhabitants. Despite the presence of certain modified tools for assessing oral health literacy, the need for a reliable, valid and comprehensive instrument for Hindi-speaking individuals persists (11,12). According to a critical appraisal, there is limited empirical evidence on the reliability and psychometric properties, especially the construct validity, of oral health literacy tools, and great variation also exists in item content across the domain distribution; this study thus aimed to compensate for this psychometric gap (13). We aimed to translate the OHL-AQ into Hindi and make the necessary cultural adaptations so that the instrument can be used with utmost reliability and validity to assess the oral health literacy of the Hindi-speaking population.
Methods
A cross-sectional study was conducted in the primary school teacher community over a period of two months from June to July 2015. A list of registered primary schools was obtained from the office of the Director of the education department, Indore city. A total of 15 primary schools, including both government and private schools, were randomly approached. A detailed description of the study methodology and the significance of the study was provided to the school authorities. The primary schools that provided written permission were enrolled in the study. The study sample of primary school teachers was selected using a simple random sampling technique. The sample size was derived based on the concept of the N/p ratio, i.e., an item-to-participant ratio of at least 1:10, indicative of 10 responders for each question in the scale (14). The 17-item questionnaire thus yielded a sample size of 170 participants.
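A minimal sketch of this item-to-participant arithmetic (the 1:10 ratio is taken from the text; the floor of 100 subjects is the rule of thumb cited later in the discussion, and the helper name is ours):

def sample_size_by_np_ratio(n_items: int, per_item: int = 10, floor: int = 100) -> int:
    """Sample size from the N/p (item-to-participant) rule of thumb."""
    return max(n_items * per_item, floor)

print(sample_size_by_np_ratio(17))  # -> 170 for the 17-item OHL-AQ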
Hindi translation of the oral health literacy adult questionnaire (OHL-AQ)
The translation of the OHL-AQ followed the four sequential stages of translation back-translation recommended by the World Health Organization (15). The primary instructions emphasized conceptual rather than literal translation, as well as the need to use a natural and acceptable linguistic approach for the majority of the Hindi-speaking audience while avoiding technical terms and jargon. A bilingual expert panel consisting of the original translator, experts in public health, and experts with experience in the translation and development of questionnaires reached a consensus regarding the translated version of the OHL-AQ and resolved discrepancies. The initial forward-translated Hindi version was back-translated into English by a single independent native translator who was blind to the questionnaire. The back-translated English version was cross-matched with the original OHL-AQ. A pre-testing phase was carried out on 10 participants drawn from the same sampling frame but not included in the main study. The instrument was administered to the participants with a short debriefing on the content. Face-to-face interview sessions were carried out by the primary investigator. The answers obtained in these sessions were matched with the actual responses marked by the respondents in the questionnaire. The respondents were also interviewed regarding the content and the ease of understanding of the questionnaire.
Psychometric assessment of the oral health literacy adult questionnaire Hindi (OHL-AQ-H)
The four main aspects of validity considered in the study were face validity, content validity, criterion validity and construct validity (16). Content validity assessment was undertaken to ascertain whether the content was appropriate and relevant. The complete range of attributes under study was submitted to the appointed expert panel to assess all intrinsic aspects of the questionnaire. The panel analyzed the stability of the questionnaire despite cultural and linguistic reframing. Face validity assessment of the translated scale indicated that the questionnaire appeared appropriate for the study purpose and content area. The target population was made part of the assessment protocol to ensure the feasibility, readability, consistency, formatting and clarity of the language, and iterations were made based on the difficulties encountered by the participants. In the present study, construct validity was assessed by examining predictive and discriminant validity. Clinical parameters such as oral hygiene status and dentition status were compared with the OHL-AQ-H scores to determine predictive validity, while discriminant validity was assessed by negative/insignificant association of OHL-AQ-H scores with sociodemographic variables such as gender, education and socioeconomic status. Criterion validity was assessed through concurrent validity by examining the correlation between self-reported oral health and oral health literacy levels. Internal consistency, or homogeneity, of the translated OHL-AQ scale was determined by subjecting the participants' responses for all 17 items of the scale to alpha reliability analysis. Cronbach's alpha values above 0.70 were considered to establish acceptable consistency.
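For reference, Cronbach's alpha can be computed directly from a respondents-by-items score matrix as alpha = k/(k-1) x (1 - sum of item variances / variance of the total score). A minimal sketch, assuming numpy and a simulated binary score matrix rather than the study data:

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 170 respondents x 17 dichotomously scored items sharing a
# common underlying "ability" factor, purely for demonstration.
rng = np.random.default_rng(0)
ability = rng.normal(size=(170, 1))
demo = (ability + rng.normal(size=(170, 17))) > 0
print(round(cronbach_alpha(demo.astype(float)), 2))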
The reliability of the translated questionnaire was evaluated through a test-retest approach, with the OHL-AQ-H randomly re-administered to one half of the participants. Although there is no definite evidence on the appropriate interval between the two tests, in the present study the questionnaire was re-administered after 2 weeks to minimize the chance of either deterioration or improvement in individuals' literacy levels. A single trained and calibrated investigator (kappa value = 0.84 for intra-rater examination) conducted the clinical examination, recording the Oral Hygiene Index-Simplified (OHI-S) and caries experience in terms of DMFT scores. The socioeconomic status of the participants was evaluated using the modified Kuppuswamy socioeconomic scale, with the socioeconomic classes reframed as per the Ministry of Labour and Employment Consumer Price Index, May 2015 (17,18). Self-reported oral health responses were also documented.
Measurement tools used in the study
A Type III oral examination was carried out under natural light and illumination using a mouth mirror, explorer and CPI probe to record the clinical data. A strict sterilization protocol was followed during the clinical examination.
Ethical approval
This study was approved by the ethics committee of Sri Aurobindo College of Dentistry, and informed consent was obtained from all participants. At the beginning of the interview, the participants were acquainted with the purpose of the study, the method of filling in the questionnaire, and the privacy and confidentiality of the study. The study sessions were carried out on the respective primary school premises, either in classrooms or in the staff room, at the convenience of the participating teachers. The participants were allowed to leave the study at any time. The collected data were entered into a Microsoft Excel data sheet and analyzed using the Statistical Package for the Social Sciences (SPSS, IBM version 20.0). The statistical analysis consisted of Cronbach's alpha, the intra-class correlation coefficient, kappa statistics, chi-square tests and binary logistic regression to assess the reliability and validity of the translated questionnaire at a 95% confidence interval and 5% significance level (p < 0.05).
Results
Among the various sociodemographic variables, age (p=0.014) and gender (p=0.005) were found to be significant factors for oral health literacy. No significant differences in literacy score were observed in relation to varying education level and socioeconomic status (see Table 1). Both of these results suggest partial fulfillment of discriminant validity for the OHL-AQ-H. The mean oral health literacy score of 14.56±2.16 for the younger age group (18-30 years) was significantly higher (p=0.014) than for the middle and post-middle age groups. These findings suggest that younger participants were more concerned and aware regarding oral health care. Female respondents also had significantly higher mean oral health literacy scores (13.94±2.45) than males (12.57±2.87) (p=0.005). The results suggest that female respondents had more proficiency in reading comprehension on oral health, listening to oral health advice, and making appropriate decisions for better oral health. The non-normal distribution of the data led us to evaluate statistical significance using non-parametric tests, namely the Kruskal-Wallis and Mann-Whitney U tests. The mean total score of the 17-item OHL-AQ-H indicated a shift towards high oral health literacy levels (Table 2). The maximum percentage difficulty (29.50%) was encountered in responding to questions in the listening domain, followed by difficulty in appropriate decision making (25.00%) regarding oral health.
The internal consistency of the OHL-AQ-H was acceptable, with an alpha value of 0.70. The test-retest reliability assessment, using bivariate correlation analysis, showed significant results with almost perfect agreement (ICC=0.93, CI=0.88-0.96), indicating a highly reliable translated scale (p<0.001). No marked increase in alpha values was encountered upon item deletion, so it was decided to retain all 17 items in the final questionnaire, with an overall internal consistency score of 0.70. The inter-item correlation matrix revealed significant but weak correlations, with the maximal correlation between items from the numeracy section (0.48). Oral hygiene status and DMFT values were found to be significantly associated with oral health literacy levels. Respondents with high oral health literacy had good to fair oral hygiene status (p<0.005) and DMFT values <5 (p<0.001). These results indicate good predictive validity for the translated scale. Self-rated oral health was also found to be significantly associated with OHL levels (see Table 3). Participants with high OHL reported good self-rated oral health, indicating that the OHL-AQ-H has good concurrent validity. A bivariate logistic regression analysis of the determinants of poor self-rated oral health concluded that males belonging to the middle age group, having at least moderate oral health literacy, and brushing twice daily were more likely to have poor self-rated oral health (see Table 4). Although statistically insignificant, our findings ascertained the correlation between oral health literacy level and self-reported oral health status, further strengthening the evidence for concurrent validity.
Discussion
The purpose of our study was to translate the original English version of the OHL-AQ into Hindi and evaluate its psychometric properties. Sufficient evidence for the scientific basis of our study was provided by systematic sample size determination through the existing literature (14). In contrast to the majority of studies, which were conducted using conveniently selected sample populations, the present study adopted a random sampling procedure to obtain the sample population. Supporting evidence enabled us to derive the sample size using the item-to-participant (N/p) ratio. As a rule of thumb, the number of subjects per variable may vary from 4 to 10, with a minimum of 100 subjects to ensure the stability of the variance-covariance matrix. The results of a systematic review of the quality of factor analyses of the Medical Outcome Short Form (SF-36) scale identified 3 out of 22 studies on cross-validation, justifying the use of a similar method for sample size estimation. Another systematic review considered a range from 2 to 20 subjects per item, with an absolute minimum of 100 to 250 subjects for cross-validation research (14,19). We focused on implementing the WHO-proposed methodology for the translation back-translation procedure, in contrast to similar studies on the linguistic adaptation of the OHLI (6). The OHL-AQ-H was found to have acceptable internal consistency (0.70), comparable to the pre-validated OHL-AQ (0.72) and the OHLI (>0.70) (6,10). The translated scale showed an inter-item correlation of 0.15, which is acceptable for scales measuring diverse characteristic domains (20). The high test-retest reliability (0.93) was comparable to similar oral health literacy scales, indicating the understandability and reproducibility of the responses (9,10,21-23).
The high test-retest reliability could be attributed to the acceptable face and content validity of the translated questionnaire. Despite the cross-sectional nature of the data, computation of test-retest reproducibility was an added advantage. Another advantage of the OHL-AQ-H scale was its limited number of questions, which the respondents found less time-consuming and easy to answer. To avoid discrepancies pertaining to the literacy level of the language used in the Hindi-translated questionnaire, the study was conducted on primary school teachers. The comparison of sociodemographic variables highlighted an insignificant association with education and socioeconomic status but revealed significant results for the age and gender categories, so the evidence supporting the divergent validity of our study was only partially favorable. The study findings were in agreement with the NAAL (National Assessment of Adult Literacy) instrument survey conducted by Ian M. Bennett et al. in 2009 (24). The influence of socioeconomic status on the level of oral health literacy was not significant in the present study; this can be explained by the fact that the majority of participants belonged to either the upper or the middle socioeconomic class, preventing us from ascertaining whether socioeconomic status actually influences oral health literacy outcomes. The significant association between poor self-rated oral health and oral health literacy levels was in line with similar studies and represents acceptable concurrent validity (11,12,21). Clinical parameters such as oral hygiene status and dentition status were highly correlated with the scale scores, and this significant association supports the predictive validity of the OHL-AQ-H scale. Participants with poor oral hygiene status and higher DMFT scores had a low level of oral health literacy compared with respondents having good oral hygiene status and lower DMFT scores. The majority of studies conducted on the translation and validation of literacy scales confirmed a significant association with clinical parameters (9-12, 21, 22). The rationale behind conducting the present study on primary school teachers was to reconsider the concepts of "dental socialization" and "significant others" in contributing to better OHL (23,24). The scientific rationale behind selecting our study population was a key factor differentiating the present study from other similar studies. The limited sample size of the study was a major concern, meaning that the psychometric properties of the scale may vary in a larger subset of the population. The participants brushing twice daily had poor oral hygiene and caries status, yet the same participants reported good self-rated oral health, leading us to suspect the possibility of social desirability bias in the study. The study results can be generalized to school teachers, but external validation on a larger sample consisting of the local population, with limited educational levels and differing levels of literacy in the language used in the OHL-AQ-H, should be done cautiously. We recommend conducting similar studies on a larger sample of the local population so as to obtain a more comprehensive assessment of the psychometric properties of the OHL-AQ-H, such as discriminant and convergent validity.
Comparative trials should be conducted in the future using the OHL-AQ-H and other similar scales to evaluate the effectiveness of different OHL assessment tools.
Conclusion
Initial testing demonstrated the Oral Health Literacy Adult Questionnaire Hindi (OHL-AQ-H) to be a valid and reliable instrument for assessing oral health literacy levels among primary school teachers. Although the OHL-AQ-H is an oral health literacy tool that is easy to administer and use, studies need to be conducted in local, tribal and rural communities to ascertain its external validity. The OHL-AQ-H can effectively be used in research to assess literacy levels and to implement preventive programs. The Hindi version thus enables its use at both the epidemiological and the clinical level.
The role of core and accessory type IV pilus genes in natural transformation and twitching motility in the bacterium Acinetobacter baylyi
Here we present an examination of type IV pilus genes associated with competence and twitching in the bacterium Acinetobacter baylyi (strain ADP1, BD413). We used bioinformatics to identify potential competence and twitching genes and their operons, and we measured the competence and twitching phenotypes of mutants in the bioinformatically identified genes. These results demonstrate that competence and twitching in A. baylyi both rely upon a core of the same type IV pilus proteins. The core includes the inner membrane assembly platform (PilC), a periplasmic assemblage connecting the inner membrane assembly platform to the secretin (ComM), a secretin (ComQ) and its associated pilotin (PilF) that assists with secretin assembly and localization, both cytoplasmic pilus retraction ATPases (PilU, PilT), and pilins (ComP, ComB, PilX). Proteins not needed for both competence and twitching instead specialize in one of the two traits. The pilins vary in their specialization, with some required for competence (FimT) and others for twitching (ComE). The protein that transports DNA across the inner membrane (ComA) specializes in competence, while signal transduction proteins (PilG, PilS and PilR) specialize in twitching. Taken together, our results suggest that the function of accessory proteins should not be inferred from homology alone. In addition, the results suggest that in A. baylyi the mechanisms of natural transformation and twitching are mediated by the same set of core type IV pilus proteins, with distinct specialized proteins required for each phenotype. Finally, since competence requires multiple pilins as well as both pilus retraction motors PilU and PilT, A. baylyi likely employs a pilus in natural transformation.
Introduction
Natural transformation is the ability of physiologically competent bacteria to internalize exogenous DNA and incorporate it into their own genomes. Twitching is a method of motility in
The pilus shaft exits the periplasm of a Gram negative cell through an aqueous pore in the outer membrane formed by a secretin protein [12][13][14][15], which in turn is stabilized by another outer membrane protein known as a pilotin [16]. A transenvelope pilus alignment complex with cytoplasmic and periplasmic components stabilizes this multiprotein arrangement and also plays a dynamic role during extension and retraction [17][18][19][20][21][22]. The various components of a Type IV pilus machine are used in a similar manner in the mechanisms of both competence and twitching. In the case of twitching, the bacteria extend a twitching pilus. In contrast to the known mechanism of twitching, the use of a pilus during natural transformation is not as clear, although recent work has shed light on this issue. It is known that competence in most organisms requires type IV pilus proteins including pilins, the secretin, and the motor proteins [23]. Evidence regarding the role of an actual extracellular pilus fiber in Neisseria, however, is mixed. Neisseria uptakes only species-specific DNA that contains repeated DUS sequences. For many years it was observed that the Neisseria type IV pilus does not bind preferentially to DNA with DUS sequences, suggesting that in Neisseria there is no competence associated pilus [24]. But more recently a minor pilin in the Neisseria pilus has been shown to bind better to DNA with DUS sequences than to DNA lacking them [25,26]. While a model to explain these complex observations in Neisseria has not emerged uncontested, a competence pilus has now been reported in other species such as Streptococcus pneumoniae and Vibrio cholerae [27][28][29]. Thus it now seems that in at least some species there is a competence pilus that cells extend that binds to extracellular DNA in order for transformation to occur (recently reviewed in [23,30]). Once the pilus has encountered its target, DNA for transformation or a surface for twitching, there must be some mechanism to bind the pilus to its respective target. For transformation, in cases where it is known that the pilus binds to DNA, the components of particular pili that bind to DNA are species-specific and, when known, are usually pilin subunits [23][24][25][26][27][29][30][31][32]. In an analogous manner, for twitching it may be an adhesin protein or particular pilin at the end of the pilus that allows it to attach to the surface, or there may be several sites of binding between a twitching pilus and the target surface (reviewed in [2,23]). After the pilus has attached to DNA for transformation or to a surface for twitching, in both cases dedicated ATPase retraction motor proteins reel in the pili (recently reviewed in [2,33,34]). For transformation in broth, pilus retraction likely moves the DNA to the cell's surface and for twitching, retraction pulls the cell forward. Because in one case the pilus is pulling only the weight of DNA while in the other it is pulling on the weight of the cell, these two behaviors may require different amounts of force to be exerted by the motors and different physical properties (such as elasticity) of the pilus. Some strains of Acinetobacter baumannii are exclusively competent during twitching on a surface, so there may be cases where pilus retraction drags the cell forward, fortuitously allowing it to collide with DNA in the surrounding environment [35]. 
For both twitching and natural transformation, the pilus protrudes through an aqueous channel formed by a secretin located in the cell's outer membrane [2,13]. For transformation the DNA enters the periplasm through the secretin pore. Now speaking only of transformation, the mechanism by which DNA enters the periplasm in both Neisseria and Vibrio appears to be biased diffusion facilitated by a periplasmic DNA binding protein [28,36,37]. After crossing the outer membrane, one strand of the DNA is degraded by an exonuclease while the other passes through a dedicated inner membrane transporter. Once inside the cytoplasm, the single strand of DNA is either degraded or recombined with DNA already resident in the cytoplasm. Although Type IV pili are required for both natural transformation and twitching and Type IV pili genes are present in many bacterial species, these species do not all share the same competence and twitching phenotypes. Neisseria is both competent and twitches and these phenotypes are both known to require many of the same type IV pilus genes in Neisseria [24]. Recently, other model organisms with type IV pilus genes (Myxococcus xanthus) that were once studied to understand twitching motility or biofilm formation were discovered to also be competent [34, [38][39][40][41][42]. In contrast, Pseudomonas aeruginosa is a well-studied model that appears to twitch but is not competent [33,43]. The lack of competence is probably not due to absent pilins; indeed PilA can bind to DNA, but model strains such as PAO1 do not appear to encode proteins for transporting DNA through the periplasm and inner membrane [32,44]. Since many bacteria have type IV pilus homologs, including all those mentioned above, it remains an open question whether such bacteria will be found to be both competent and twitch, and if not, why not? Perhaps there are core type IV pilus proteins required for both competence and twitching, while the presence or absence of accessory proteins specialized for one or the other determine physiology. Acinetobacter baylyi is a bacterium that can help address these questions. A. baylyi strain ADP1 (ATCC33305, BD413) is a non-pathogenic model for the Gram negative Acinetobacter genus, comprised of aerobic soil organisms that are tolerant of wide temperature ranges and desiccation, and can use a diverse array of carbon sources (reviewed in [45,46]). ADP1 is ideal for studying the genetic link between twitching motility and competence primarily because the species is both very good at DNA acquisition and very motile under the same nutritional conditions. As an example of A. baylyi's competence, every cell in the population imports more than 60 different molecules of DNA at peak transformation efficiency [47]. In terms of twitching, as reported herein, ADP1 twitch zones on soft agar are large and therefore accurately measured. In addition A. baylyi's entire genome is sequenced, and a full library of single-gene nonpolar deletion mutants of ADP1 is available, thereby facilitating its use in studies of the genetic basis of competence and twitching. Despite these advantages, genetic connections between twitching and competence have not been previously reported in A. baylyi. Twitching motility was actually first discovered through study of Acinetobacter in 1961 [48,49]. Since the discovery of competence in A. baylyi in 1969, about a dozen genes were tested for their effect on competence and twitching, but none of them were found to affect both phenotypes (reviewed in [50]). 
However, no studies have tested a comprehensive set of predicted type IV pilus genes for their effect on competence and twitching in ADP1. Here we present such an examination of ADP1 type IV pilus operons associated with competence and twitching. We began with a bioinformatics analysis to identify potential competence and twitching genes and their operons in ADP1. We found that the operons are scattered throughout the chromosome and that some of those operons contain essential genes. For each of the identified operons, we measured the competence and twitching phenotypes of at least one non-polar single-gene knockout mutant. We found that in A. baylyi competence and twitching both rely upon a core of the same type IV pilus proteins; however there are also proteins, including pilins, specialized in one functionality or the other. These findings suggest that the mechanism of natural transformation is likely very similar to that of twitching and that the function of accessory proteins in any given organism may not be predictable based on homology alone. Bioinformatic analysis We used the curated Kyoto Encyclopedia of Genes and Genomes (kegg.jp) database to search for ADP1 homologs of type IV pilus proteins from Neisseria gonorrhoeae FA 1090, Pseudomonas aeruginosa PAO1, and Vibrio cholerae N16961 [51]. We used the ComEC and ComEA sequences from N. gonorrhoeae and V. cholerae to identify ADP1 periplasmic or inner membrane DNA transport proteins. We used the P. aeruginosa twitching PilR, PilS, PilG, PilH, PilI, PilJ, and ChpA signal transduction proteins to find ADP1 homologs. We used an algorithm to predict operon composition in ADP1 (http://meta.microbesonline.org/operons/gnc62977. html; [52,53]. We used Kyoto Encyclopedia of Genes and Genomes software for analysis of protein motifs and evolutionary history [51]. Multiple alignments were performed using CLUSTAL-O [54]. Proteins were analyzed using Pfam [55,56] made available through KEGG. Pairwise alignments were performed using EMBOSS [57]. Bacteria and media We used an A. baylyi strain from the American Type Culture Collection (ATCC) strain 33305/ ADP1/BD413. We rehydrated ADP1 from the ATCC on minimal media plates with recipe: 25 ml 0.5 M KH 2 PO 4 , 10 ml 10% (NH 4 ) 2 SO 4 , 1 ml concentrated base, 3.35 g Na 2 HPO 4 , 18 g BD Bacto TM Agar, 10 We prepared LB using a recipe of 10 g tryptone, 5 g yeast extract, and 10 g NaCl per liter of H 2 O. Frozen stocks consisting of LB with 25% glycerol were prepared from rehydrated streak plates and used for no longer than two years from the date of cryopreservation. Starting with the cryogenically preserved cells, we made streak plates and used colonies from them to inoculate broths. Streak plates kept at 4˚C were used for up to seven days and then discarded. Plates contained 1.5% agar unless they were soft agar used specifically for twitching assays. Strain construction Gene knockouts in which a tdk-kan cassette replaces the coding sequence were obtained from the Genoscope collection [45]. Before testing the effects of mutations they were moved into wild type cells from ATCC (ATCC33305/ADP1/BD413). A strain table is found in the supplementary materials. To move the mutations into the ATCC33305 background, crude lysates from the knockout collection were prepared by growing overnight cultures in LB supplemented with kanamycin (10 μg/mL) at 37C with high aeration. 
Bacteria and media
We used an A. baylyi strain from the American Type Culture Collection (ATCC), strain 33305/ADP1/BD413. We rehydrated ADP1 from the ATCC on minimal media plates with the recipe: 25 ml 0.5 M KH2PO4, 10 ml 10% (NH4)2SO4, 1 ml concentrated base, 3.35 g Na2HPO4, 18 g BD Bacto Agar, 10. We prepared LB using a recipe of 10 g tryptone, 5 g yeast extract, and 10 g NaCl per liter of H2O. Frozen stocks consisting of LB with 25% glycerol were prepared from rehydrated streak plates and used for no longer than two years from the date of cryopreservation. Starting with the cryogenically preserved cells, we made streak plates and used colonies from them to inoculate broths. Streak plates kept at 4°C were used for up to seven days and then discarded. Plates contained 1.5% agar unless they were soft agar used specifically for twitching assays.
Strain construction
Gene knockouts in which a tdk-kan cassette replaces the coding sequence were obtained from the Genoscope collection [45]. Before testing the effects of the mutations, they were moved into wild-type cells from the ATCC (ATCC 33305/ADP1/BD413). A strain table is found in the supplementary materials. To move the mutations into the ATCC 33305 background, crude lysates from the knockout collection were prepared by growing overnight cultures in LB supplemented with kanamycin (10 μg/ml) at 37°C with high aeration. The overnight cultures were pelleted, and 1.5 ml of cells were resuspended in 100 μl of sterile water and heated at 95°C in a heat block for 2 hours to lyse the cells. We plated 5 μl samples of each lysate to assess sterility. The lysates were used to transform wild-type cells to kanamycin resistance. The location of each mutation was verified by PCR using the published P7 and P8 primers; strains giving unexpected amplicons were not used in further analysis. Attempts were made to complement the mutants by cloning pilX, comE, fimT, and pilV using a variety of single-copy strategies, but these were unsuccessful, likely due to toxicity [58,59], and thus no further complementation was attempted. In support of the toxicity hypothesis, fimT could be cloned when its expression was strongly depressed through the use of an especially poor ribosome binding site or by replacing the start codon with GUG (data not shown).
Donor DNA isolation for transformation assays
To serve as donor DNA for transformation assays, we isolated chromosomal DNA from a spontaneous streptomycin-resistant ADP1 strain provided by Bruce Voyles (Grinnell College). After culturing these strR cells in 12 ml of LB broth supplemented with 20 μg/ml streptomycin (str20) overnight with high aeration at 37°C, we pelleted the cells and resuspended them in 1 ml of 1% sterile saline. We split the total volume into two microfuge tubes and added 500 μl of phenol-chloroform-isoamyl alcohol (25:24:1) to each tube. After vortexing to mix and microfuging to separate the organic and aqueous phases, the aqueous phase was removed to a new microfuge tube and the DNA precipitated using standard methods with sodium acetate and ethanol. The DNA originating from 12 ml of cells was resuspended in a final volume of 100 μl of EB buffer (Qiagen), composed of 10 mM Tris-Cl (pH 8.5). A Nanodrop spectrophotometer indicated a final concentration of about 1 mg ml⁻¹ for each preparation, with a typical A260/A280 of 1.6 for the different preparations of chromosomal DNA.
Measuring natural transformation efficiency
We grew wild-type and mutant cells overnight at 37°C with high aeration in LB or LB supplemented with kanamycin (10 μg ml⁻¹), respectively. The next day we mixed 5 μl of DNA from a streptomycin-resistant mutant (1 mg ml⁻¹) with 50 μl of an overnight culture and transferred the 55 μl to the center of an LB plate, forming a puddle. After overnight incubation at 37°C, we scraped the cells in each puddle into 750 μl of saline (10 g l⁻¹ NaCl), pipetting and vortexing to mix thoroughly. We performed a ten-fold dilution series from 10⁰ to 10⁷ in a microtiter plate, diluting with saline (10 g l⁻¹ NaCl). For each independent trial, we plated two or three 10 μl spots of each dilution on LB agar to obtain total CFU ml⁻¹ and on LB agar supplemented with streptomycin (20 μg ml⁻¹) to obtain transformed CFU ml⁻¹. Transformation efficiency is defined as transformed CFU ml⁻¹ divided by total CFU ml⁻¹. The transformation efficiency of every mutant was determined in three independent trials on three separate days. We never observed any spontaneous streptomycin-resistant colonies in multiple control 10 μl spots of wild-type cells plated on streptomycin in any trial.
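A minimal sketch of the transformation-efficiency arithmetic implied above, converting colony counts in 10 μl spots at a known dilution into CFU ml⁻¹ (the counts are invented for illustration):

def cfu_per_ml(colonies_per_spot: list[float], dilution_exponent: int,
               spot_volume_ml: float = 0.010) -> float:
    """Back-calculate CFU/ml from replicate spot counts on one dilution."""
    mean_count = sum(colonies_per_spot) / len(colonies_per_spot)
    return mean_count / spot_volume_ml * 10 ** dilution_exponent

# Invented counts: total CFU read at the 10^-6 dilution,
# transformants read at the 10^-2 dilution.
total = cfu_per_ml([38, 42, 40], dilution_exponent=6)
transformed = cfu_per_ml([25, 31, 28], dilution_exponent=2)
print(f"TE = {transformed / total:.2e}")  # transformed CFU/ml over total CFU/ml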
Measuring twitching zones
We grew wild-type and mutant cells overnight at 37°C with high aeration in LB or LB supplemented with kanamycin (10 μg/ml), respectively. The next day we applied 4 μl of overnight cells to one quadrant of an LB soft agar plate (0.5% agar) containing 0.01% triphenyltetrazolium chloride, a redox-sensitive dye that turns red when oxidized by respiration. Every trial always included two wild-type spots. We incubated the plates at 37°C for 4-6 hours until the wild-type twitching zones achieved a minimum diameter of 17 mm. When necessary because of prevailing meteorological conditions, a humidified incubator was used, because low-agar plates dehydrate faster than 1.5% agar plates. We did not use data from trials with wild-type growth below this minimum. After the twitching period, we refrigerated the plates overnight to allow the cells to become deep red. We then measured the mutant and both wild-type twitching diameters. We define the twitching ratio as mutant diameter divided by wild-type diameter. We obtain two such ratios for each trial, one from each of the two wild-type measurements, and we average the two ratios to obtain one twitching measurement. At least two independent twitching measurements were obtained for every mutant, and for all but three mutants, six or more independent measurements were obtained.
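A minimal sketch of this twitching-ratio bookkeeping (the diameters in mm are invented; the 0.4 detection limit quoted later in the paper is used as the resolvable floor):

def twitching_measurement(mutant_mm: float, wt1_mm: float, wt2_mm: float,
                          detection_limit: float = 0.4) -> float:
    """Average the mutant/wild-type diameter ratios against both
    wild-type reference spots from the same plate."""
    ratio = (mutant_mm / wt1_mm + mutant_mm / wt2_mm) / 2
    if ratio < detection_limit:
        return float("nan")  # below the ~0.4 floor, ratios are unresolvable
    return ratio

# Invented example: mutant zone 12 mm, wild-type zones 18 and 20 mm.
print(round(twitching_measurement(12, 18, 20), 2))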
Candidate competence and twitching proteins in ADP1
Twitching is known to originate in type IV pili in many organisms. In addition, in ADP1 several type IV pilus genes are required for competence [50]. Motivated by this knowledge, we started with known type IV pilus protein sequences from Neisseria gonorrhoeae, Pseudomonas aeruginosa, and Vibrio cholerae. We used these sequences to search for the most similar protein encoded by the ADP1 genome in the Kyoto Encyclopedia of Genes and Genomes. By the same method, we also identified homologs of two other proteins, ComEA (ACIAD3064) and ComA (ACIAD2639), that are needed for competence in N. gonorrhoeae and V. cholerae for DNA transport (reviewed in [60,61]). Finally, we sought homologs of signal transduction proteins that affect twitching in P. aeruginosa [62]. Table 1 lists the ADP1 type IV pilus proteins together with their homologs in the other bacteria and the known functionalities of the proteins in those bacteria. Table 2 lists the other candidate ADP1 competence and twitching proteins identified, with their homologs in the other bacteria. Tables 1 and 2 thereby present a comprehensive list of the type IV pilus, DNA transport, and signal transduction proteins likely to be responsible for competence and/or twitching in ADP1. The homologous protein functionalities, taken together with the known structures of type IV pilus machines in the other bacteria, allow us to construct a diagram of the type IV pilus nanomachine in A. baylyi, as shown in Fig 1 [23,29].
Fig 1. The type IV pilus nanomachine in A. baylyi. Protein assignments are based on the homologies reported in Table 1 and are supported by data in Fig 3.
After identifying the candidate proteins, we identified the operon encoding each. To do this we used software developed by Price [52,53]. In this tool, operons are predicted based on the strand on which two adjacent genes are encoded, the distance between genes in nucleotides, whether the genes are conserved and near each other in multiple genomes, the correlation in gene expression data, whether they share a narrow gene ontology category, and whether they share a Cluster of Orthologous Groups (COG) functional category. To confirm the operons, we used the Gene Cluster algorithm in KEGG, in which operons are suggested by conserved groups of genes found in a similar order across multiple genomes. The Gene Cluster results agreed with the operons identified by the Price methodology. Based on these results, the type IV pilus, competence, and twitching genes are distributed among eleven operons, as reported in Tables 1 and 2 and shown schematically in Fig 2. As seen in Fig 2, the eleven operons encoding potential twitching and competence genes are scattered across the chromosome. The pilin genes in particular are distributed among three operons. The operons are not preceded by any common repeated sequences or other motifs that would suggest co-regulation by a shared activator or repressor, nor is any subset of them. About half of them are encoded on the leading strand for DNA replication. For each of the type IV pilus operons in Table 1, we referred to a genome-scale knockout screen to list the essential genes in the operons [45]. The essential genes include ones needed for isoprenoid synthesis (ispG), amino acid synthesis (aroB and aroK), protein synthesis (hisS), central metabolism (coenzyme A synthesis; coaE), or pilus extension (pilB). None of the proteins in Table 2 are encoded by operons that include essential genes.
Fig 2. Operons tested in this work. Genes are represented by arrows indicating their approximate length and are color-coded according to the general function of the proteins they encode, using the same color scheme as in Figs 1 and 3. Red = pilin; purple = adhesin; green = basal apparatus and extension/retraction motors; blue = signal transduction; grey = competence-specific DNA transport; white indicates no known twitching- or competence-associated function. To help orient relative to the genome annotation, the ACIAD numbers of the first and last genes in each operon are indicated. Gene 3337 is outlined in red because, although it does not encode a pilin, it encodes a glycosyltransferase that modifies ComP [63,64]. Genes tested in this work are labeled in boldface italics. Essential genes are indicated by pink names.
Null mutations in candidate competence and twitching genes
Having identified candidate competence and twitching genes in ADP1, as listed in Tables 1 and 2, we then constructed single-gene-deficient mutants to test their phenotypes. The knockouts employed originated from a whole-genome set and are non-polar by design [45]. Several of the knockouts tested (pilC, pilF, comQ, comM) lie in operons with downstream essential genes, as seen in Fig 2, confirming the non-polar nature of the mutations. The mutants were not complemented because many of the type IV pilus proteins were toxic when cloning was attempted, as has been observed previously [58,59]. We tested at least one knockout mutation in each of the eleven operons and at least one knockout mutation disrupting each of the major sub-assemblies of the type IV pilus, such as the assembly platform or the trans-periplasm assemblage that connects the platform to the secretin.
Competence phenotypes of null mutations in candidate genes
We tested a total of 20 knockout mutants by comparing their transformation efficiency with that of wild-type cells. These results are given along the x-axis of Fig 3. We first focus on the 6 data points that report on proteins comprising the basal structural components of a type IV pilus and its associated motor proteins. These proteins are colored green in Figs 1, 2 and 3. They include the inner membrane assembly platform (PilC), a periplasmic assemblage that connects the inner membrane assembly platform to the secretin (ComM), and a secretin (ComQ) with its associated pilotin (PilF) that assists with secretin assembly and localization. Because these four proteins are universal components of a Gram-negative type IV pilus, we expected them to be required for competence, and Fig 3 shows that they are. Both cytoplasmic pilus retraction ATPases (PilU, PilT) are also required for competence, which could not have been predicted from homology alone because PilT homologs play different roles in type IV pilus phenotypes depending on the organism [65-69]. There are 8 pilins (red) reported on in Fig 3.
Since the mechanism of DNA capture in ADP1 is unknown, it is unclear whether the absence of pilins should affect transformation. In agreement with previous results (reviewed in [50]), we found that it does. Although several pilins (ComP, PilV, ComB, PilX, and FimT) were required to observe any transformation, some were not. ComF, for example, is only partially required, while both ComE (a close homolog of ComF [58]) and FimU are not required at all. In the case of FimU, this contrasts with predictions based on homology: because of its homology to a P. aeruginosa pilin, we expected that FimU would be required for the assembly of an extracellular pilus, and thus for transformation [2]. We discuss this unexpected finding regarding FimU later in this paper. Taken together, the fact that pilins are required for transformation, as shown here and previously, suggests a role for an extracellular pilus in competence in A. baylyi; however, the fact that not all pilins are required for transformation suggests that some pilins may have other specialized functions unrelated to transformation. The only non-pilin protein encoded by the same operon as pilV, pilX, comB, comE, comF, and fimU is ComC (purple in Fig 3). The comC mutant was not competent. This phenotype could not have been predicted from homology alone because ComC homologs are associated with a variety of phenotypes depending on the organism. For example, they have been implicated as adhesins for attachment to biotic surfaces, as necessary for pilus retraction during twitching, or as required for type IV pilus assembly itself, depending on the organism (reviewed in [2]). ComC proteins have been detected in isolated pilus fibers and, because of their role in surface adhesion, have often been predicted to be at the pilus tip [3,5,7,70-75]. However, a role for ComC proteins in natural transformation appears to be species-specific. Vibrio cholerae bacteria apparently do not require ComC for natural transformation because they do not encode a homolog. N.
The pilR, pilS, and pilG genes (blue in Fig 3) were selected for investigation because they regulate type IV pilus function in P. aeruginosa, where the pilus is used only for twitching, since these bacteria are not competent. They may affect the frequency and/or directionality of pilus movement [62]. Although regulation of motor proteins might have been important for competence in A. baylyi, these genes have no apparent effect on transformation in ADP1. Finally, we consider the two DNA transport proteins, ComA and ComEA (grey). ComA is predicted to be an inner membrane protein that serves as a channel for single stranded DNA to pass from the periplasm into the cytoplasm. Based on this functionality we expect the comA mutant to be non-transformable, and that is what we found, in agreement with previous results [77]. ComEA homologs are periplasmic proteins needed for maximum transformation in genera such as Neisseria and Vibrio [28,31,37]. While deletion of the comEA homolog from Neisseria has a mild effect on transformation efficiency, such deletion in Vibrio has a dramatic effect [28-31,37]. Based on this we expected some effect on the transformation efficiency in A. baylyi, and we find an intermediate one. Based on its homology, the mechanism for this effect could be that ComEA binds to DNA in the periplasm, thereby trapping it there and increasing the chance that the DNA will encounter the ComA channel and enter the cytoplasm [36]. The residual transformation efficiency of a comEA knockout in ADP1 may be due to chance encounters of the DNA with ComA in the absence of ComEA binding to bias the random walk of the DNA, especially if cytoplasmic recombination-related proteins are abundant and assist in moving the DNA into the cell.

Another observation from Fig 3 is that knockouts in the fimU pilV comB pilX comCEF operon (ACIAD3321-3314) do not all have the same natural transformation phenotype, further demonstrating that the insertion cassette is non-polar [45]. For example, the upstream comE mutant has a higher transformation efficiency than the downstream comF mutant, indicating that the comE mutation is not polar on comF. By similar reasoning, the fimU mutation is not polar either, which has already been established by [45].

Fig 3. Competence and twitching phenotypes of null mutations. All data in this figure were taken using complex media and incubating at 37˚C. Both competence and twitching assays were performed on agar: 1.5% for competence and 0.5% for twitching. The color scheme is the same as in Fig 1. Like symbols indicate that genes are part of the same operon. All data points contain multiple measurements for both competence and twitching. Error bars represent the standard deviation of multiple measurements. For the x-axis the standard deviation is given by Δx = 0.434 ΔTE/<TE>, where x = log<TE> and ΔTE is the standard deviation from the mean transformation efficiency <TE>. The detection limit for competence is 10^-9 and for twitching is 0.4. Data points that fall below either detection limit appear on the graph in a "below detection limit" region. Their position within this region has no physical interpretation beyond indicating that they fall below this limit.
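The x-axis error bar formula quoted in the Fig 3 legend is standard propagation of uncertainty to the log scale (0.434 ≈ 1/ln 10). A minimal sketch, with made-up replicate transformation efficiencies:

```python
import numpy as np

# For x = log10<TE>, the legend's formula is dx = 0.434 * dTE / <TE>.
# The efficiencies below are illustrative replicates, not measured data.
te = np.array([2.1e-5, 3.4e-5, 2.8e-5])
mean_te, sd_te = te.mean(), te.std(ddof=1)

x = np.log10(mean_te)
dx = 0.434 * sd_te / mean_te    # standard deviation on the log scale
print(f"x = {x:.2f} +/- {dx:.2f}")
```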
Some of the A. baylyi genes we tested (comP, comC, comE, comF, comB, pilX and comA) had been previously tested for competence using different mutations and a different methodology: transformation in liquid at 30˚C with minimal succinate broth [58,59,77,78,79]. Despite the different methodologies and different null mutations, all but one of our results are the same as those published previously. The exception is pilX; the non-polar mutant we tested [45] is not competent, while a previous publication reported that a mutation in pilX caused only a 100-fold loss in transformation efficiency [59].

Twitching phenotypes of null mutations in candidate genes

Fig 4(A) shows an example twitch plate with multiple twitching phenotypes displayed. As described in detail in the methods section, we tested twitching by using a soft 0.5% agar surface made with LB and a redox-sensitive dye to improve contrast between the cells and the agar. We used exactly the same temperature and nutritional conditions for testing natural transformation and twitching. A twitching ratio is found by measuring the diameter of the mutant's twitch zone and dividing by the diameter of the wild type twitch zone, with mutants and wild types growing at the same time on the same batch of plates. It is clear from the figure that the comA and fimU mutants have twitching phenotypes similar to the wildtype, whereas the comE and comP mutants have substantial twitching impairments. The comA result agrees with previous work [77]. The comE and comP results disagree with previous work in which these mutants twitched the same as the wildtype when measured on 2% hard agar [58,79]. Fig 4(B) provides a reason for these differing results: on 1.5% hard agar, the 4 μl inoculum spreads a small distance as the liquid pools before the water absorbs into the plate. All the differentiated twitching phenotypes seen in Fig 4(A) show regions of undifferentiated size in Fig 4(B), thus confirming that the cells cannot twitch on hard agar. We used this fact to find the lowest detectable twitch ratio. We spotted wild type cells on soft and hard agar. On soft agar we let the wildtype cells twitch to a minimum 17 mm diameter, because this is the minimum wild type twitch diameter used in any mutant trial. At that time we measured the wild type diameter on hard agar. Dividing the hard agar diameter by 17 mm yields a detection limit of 0.4.

We now focus on the y-axis of Fig 3 to consider the twitching phenotypes of the knockout mutants. Looking at Fig 3, we focus first on the four green data points that report on the basal structural components of a type IV pilus (ComQ, ComM, PilF, and PilC). Given their role in constructing the core type IV pilus machinery, we expected all four of these to be required for twitching, and indeed they are. Both of the pilus retraction ATPases (PilT and PilU) are also required for twitching. There are 8 pilins (red) reported on in Fig 3. Given that twitching is known to require a pilus, we expected the pilins to be required for twitching. Indeed, many of the pilins (ComP, PilX, ComB, ComF and ComE) are required for twitching; however, there are several pilins that are not essential to twitching (PilV, and especially FimT and FimU). This suggests an alternative use for pilins by this machine, a topic that will be explored further later. ComC is encoded by an operon with pilins, so we discuss it here. The comC mutant had diminished twitching motility.
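The detection-limit arithmetic described above reduces to a single ratio. A minimal sketch, assuming diameters in millimeters and using the 17 mm minimum wild-type zone quoted in the text (the hard-agar value is chosen to reproduce the stated 0.4):

```python
# Worked arithmetic for the twitching detection limit.
soft_agar_wt_diameter = 17.0          # minimum wild-type twitch zone on 0.5% agar
hard_agar_wt_diameter = 0.4 * 17.0    # ~6.8 mm spread from liquid pooling alone

def twitching_ratio(mutant_diameter_mm, wildtype_diameter_mm):
    """Twitch zone of a mutant relative to wild type grown in parallel."""
    return mutant_diameter_mm / wildtype_diameter_mm

detection_limit = twitching_ratio(hard_agar_wt_diameter, soft_agar_wt_diameter)
print(f"detection limit = {detection_limit:.1f}")   # 0.4
```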
In some species, a ComC homolog is required for assembly and/or retraction of the type IV pilus (reviewed in [2]). But in ADP1, ComC is apparently dispensable for assembly and/or retraction of the pilus, or else the cells would not be able to twitch at all. At 1,450 amino acids in length, the ADP1 ComC homolog is much larger than the homologs from Neisseria (PilC1 and PilC2; about 1,000 amino acids) or Pseudomonas (PilY1; about 1,160 amino acids long), which may account for these functional differences. Although PilC1, PilC2, and PilY1 have regions of similarity along their whole length, the ADP1 homolog ComC is only similar to these in the predicted beta-propeller domain.

Deletion of the signal transduction pilR, pilS, and pilG genes (blue in Fig 3) resulted in reduced twitching. In P. aeruginosa, the PilS-PilR two-component system regulates transcription of the major pilin, and deletion of pilR or pilS reduces expression of the major pilin to basal levels [80]. We therefore expected deletion of pilR or pilS to reduce twitching motility in ADP1, which it does, although only partially, suggesting that PilR/PilS are not the only factors regulating twitching pilus production in ADP1. In P. aeruginosa, PilG regulates pilus extension, so deletion of pilG causes a severe loss of piliation and twitching motility [62]. Given this, we expected the pilG mutant in ADP1 to have reduced twitching motility, which it does. But as with deletion of pilS or pilR, the phenotype is milder in ADP1 compared with P. aeruginosa.

Finally, we consider the two DNA transport proteins, ComA and ComEA (grey). Since these proteins are associated only with competence in N. gonorrhoeae and V. cholerae, we do not expect them to affect twitching. The comA mutant behaves as expected, while the comEA mutant shows slightly diminished twitching motility. This phenotype was unexpected based on the phenotype of a knockout in Neisseria [31]. However, in Acinetobacter baumannii, transformation occurs preferentially during twitching [35], indicating there may be some mechanism that connects DNA import and twitching motility in Acinetobacter. So we speculate that the comEA deletion might affect twitching in A. baylyi because twitching and competence are closely physiologically intertwined in the genus, at least when the cells are associated with a surface.

Some of the A. baylyi genes we studied (comP, comC, comE, comF, comB, comA) have been previously tested for twitching motility using different mutations and a different methodology: twitching from streaks made on 2% LB agar at 30˚C [58,59,77,78,79]. These papers reported that all mutants twitched the same as the wildtype. In contrast, on 0.5% soft agar we found varying degrees of twitching impairment, from none (comA) to intermediate impairment (comC) to entirely impaired (comP, comE, comF, and comB), as shown in Figs 3 and 4(A). There were no quantifications or photographs of previous results to allow for a detailed comparison with our results; however, Fig 4 demonstrates that the difference in methodology between using hard (previous results) and soft (present results) agar explains these discrepancies. We did not observe twitching motility by wild type or any mutants on hard agar (≥1.5%) under any environmental or nutritional conditions, as shown in Fig 4(B). Using a liquid inoculum, the observed spreading on hard agar is due to pooling of the liquid on the agar before it is absorbed, and therefore the same size pool is observed independent of the cells in the pool.
In contrast, Fig 4(A) shows differentiated twitching zones for those same strains on soft agar.

Homolog case studies

Many of the proteins listed in Tables 1 and 2 have close, full-length homologs elsewhere in the ADP1 genome. We define close, full-length homologs as ones that have the same predicted conserved domains or protein motifs according to the KEGG database, which relies on multiple sequence alignments and the Pfam, PROSITE, and INTERPRO databases. These homologs are given in Table 3 (Full-length homologs of competence and twitching proteins encoded by the ADP1 genome; columns: homolog pair, proteins in the pair, SW-score, identity). Two sets of homologs that present interesting case studies are discussed below.

We consider first the case of the homologs FimU and FimT. FimU homologs are core minor pilins, required in small amounts for the structure of an extracellular pilus [74], so we expected that FimU would be required for competence and/or twitching as other such pilins are. Contrary to expectation, FimU is not required for competence or twitching. Fig 3 together with Table 3 may provide an explanation. They show that while FimU is not needed for competence, its homolog FimT is. In addition, Fig 5 shows a sequence alignment between FimU and FimT. Both proteins have a GspH motif and are especially similar to each other over the first 100 amino acids. Taken together, their overlapping structures and their phenotypes as given in Fig 3 suggest that some aspect of FimU's functionality serves transformation, and that in FimU's absence its homolog FimT can substitute for it. The evolutionary histories of FimU and FimT, described next, support this suggestion. Their operon structure suggests that the two genes encoding FimU and FimT have different evolutionary histories. FimT is encoded by a monocistronic operon unlinked to the polycistronic operon encoding FimU and the other pilins PilV, ComB, PilX, ComE, and ComF. Because the genes are unlinked, they likely have different origins in the ADP1 genome. Indeed, using the Gene Cluster tool in KEGG, we discovered that fimT is part of a cluster found only in the same family (Moraxellaceae) as A. baylyi, whereas fimU is part of a cluster of pilin genes that have conserved gene order in many more organisms, including distant relatives. FimT thus appears to have a more recent origin in the Moraxellaceae than FimU. So we posit that the function of a FimU/T homolog is in fact required for type IV pili, and that in ADP1 the latecomer homolog FimT can substitute for FimU as needed.

We now consider the retraction motor proteins PilU and PilT, which are close homologs of each other (Fig 6). Many organisms have multiple PilT homologs and their function, where known, is species-specific [65][66][67][68][69]. For example, in N. gonorrhoeae there are three closely related homologs: PilT, PilT2, and PilU. Deletion of each one has different effects. Deletion of pilT prevents pilus retraction [66]; deletion of pilU has little effect on twitching motility; and deletion of pilT2 causes a 2-fold decrease in the speed of twitching motility [65]. In ADP1, there are two possible type IV pilus retraction motors: PilT and PilU. They are encoded in the same operon and both are required for competence and for twitching. This fact implies they cannot substitute for each other and therefore have unique functionalities, like the PilT homologs in N. gonorrhoeae discussed above.
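Comparisons like the FimU/FimT alignment in Fig 5 can be reproduced with any standard pairwise aligner. The sketch below uses Biopython's PairwiseAligner with BLOSUM62; the two sequences are short fabricated stand-ins with a pilin-like N-terminus, not the real ADP1 proteins, and the gap penalties are generic choices.

```python
from Bio import Align
from Bio.Align import substitution_matrices

# Hypothetical pilin-like fragments, not the actual FimU/FimT sequences.
fimU = "MKAQKGFTLIELMIVVAIIGILAAIAIPQYQ"
fimT = "MKRQSGFTLIELMIVVAIIGILATIAVPSYQ"

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

alignment = aligner.align(fimU, fimT)[0]
print(alignment)
print("score:", alignment.score)
```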
To explore a structural explanation, we used protein alignment to examine their similarities and differences (Fig 6). We find that PilT and PilU are similar to one another along their whole lengths, except that PilU is longer, with a 37-amino acid extension at the C-terminal end. This C-terminal region is rich in charged amino acids such as R and K (10, or 27% of the amino acids) and E (5, or 13% of the amino acids). The unique C-terminal ends may account for why PilT and PilU are not interchangeable and are both necessary for both twitching and transformation.

Interrelationship of competence and twitching

Looking at Fig 3, we now consider the interrelationship between the transformation and twitching phenotypes. A striking observation is that 9 of the 20 genes tested are found in the lower left corner of the graph, including all that form the basal type IV pilus machine (green data points) along with the pilins ComP, ComB and PilX (red). This position on the graph indicates that these genes are required for both transformation and twitching. Therefore these proteins are likely required for type IV pilus assembly irrespective of the function of that pilus. These pilins should therefore be considered the core pilins in ADP1 and likely play an essential role in pilus assembly or in an activity common to both twitching and natural transformation, such as extension or retraction (reviewed in [2]). Further, these observations imply that the molecular basis of both twitching and natural transformation in ADP1 involves a shared type IV pilus comprised of the same core components.

Another striking observation from Fig 3 is that mutants that are not in the lower left corner are located mostly on the periphery of the graph; the middle of the graph is mostly empty. This implies that most proteins not required for both transformation and twitching have specialized in either transformation or twitching. For example, and as expected, ComA, the protein that transports DNA across the inner membrane, specializes in transformation, while the PilG, PilS, and PilR signal transduction proteins specialize in twitching. We refer to proteins specialized in one phenotype as accessory proteins, in contrast to the core type IV pilus proteins needed for both twitching and natural transformation. The pilins (red) are particularly varied in their specialization. Mutants lacking a single pilin are located in every region of Fig 3. As discussed earlier, three of the ADP1 pilins (ComP, ComB, and PilX) are required for both twitching and transformation, suggesting that they are universal components of the type IV pilus irrespective of its function. Other pilins are specialized for either transformation (FimT) or twitching (ComE). ComF is an accessory that enhances transformation but is absolutely required for twitching. The FimU pilin is not needed for either transformation or twitching and could therefore be used for another functionality entirely or, given the arguments above regarding its homolog FimT, may be involved in transformation. The varied functionalities of the pilins are supported by Fig 7, which shows the similarities and differences in the pilins' identifiable protein motifs. All the pilins are similar in their N-terminal regions. This similarity indicates that they are all targets for the same peptidase (PilD) and that they all can be incorporated into the pilus filament.
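The composition figures quoted for the 37-residue PilU C-terminal extension are easy to check mechanically. The sketch below does so on a fabricated 37-residue string built to echo the quoted counts (10 R/K, 5 E); it is not the real ADP1 PilU sequence.

```python
# Hypothetical 37-residue tail constructed to match the quoted composition.
tail = "ARKEDLKQRAKESGKPEVNRTAEILKASNDGRKEQSL"
assert len(tail) == 37

basic = sum(tail.count(aa) for aa in "RK")   # arginine + lysine
acidic = tail.count("E")                     # glutamate
print(f"R/K: {basic} ({100*basic/len(tail):.1f}%), "
      f"E: {acidic} ({100*acidic/len(tail):.1f}%)")   # ~27% and ~13%
```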
However, despite the similarities in their N-terminal regions, the pilins are very dissimilar along the rest of their lengths and in fact cannot be aligned as a group. These differences are the structural basis for their different specializations, in agreement with the varied locations of the pilin mutant phenotypes in Fig 3.

Discussion

In this paper we used bioinformatics to identify potential natural transformation and twitching genes and their operons in the bacterium Acinetobacter baylyi. We then measured the transformation efficiency and twitching phenotypes of knockout mutants of the bioinformatically identified genes. Although competence and twitching are two bacterial functionalities that on their surface appear dissimilar, the results presented here show that natural transformation and twitching in A. baylyi both utilize the same core type IV pilus proteins. The core includes the inner membrane assembly platform (PilC), a periplasmic assemblage connecting the inner membrane platform to the secretin (ComM), a secretin (ComQ) and its associated pilotin (PilF) that assists with secretin assembly and localization, both cytoplasmic pilus retraction ATPases (PilU, PilT), and three pilins (ComP, ComB, and PilX). We also find that these required proteins are encoded in 6 different unlinked operons. We also found that proteins not needed for both competence and twitching instead specialize in one or the other. The pilins studied herein (FimT, PilV, ComP, PilX, ComB, ComF, ComE, and FimU) are particularly varied in their specialization, with some specialized for transformation (FimT), others for twitching (ComE), and some required for both (ComP, PilX, ComB). In addition, we find these pilins are dissimilar along most of their length. Nonetheless, all pilins are similar in their N-terminal regions and can therefore be used in filaments attached to the same basal machine.

These findings allow us to address the nature of the transformation and twitching nanomachines. Some have argued that in some species the twitching nanomachine, but not the natural transformation nanomachine, involves extracellular pili, despite the involvement of type IV pilus genes in both phenotypes [23,82]. However, short extracellular transformation pili were recently detected in Vibrio cholerae and Streptococcus [27,29]. In Acinetobacter baumannii, twitching and competence are physiologically linked so that many isolates are naturally transformable only while they are twitching [35]. These findings and the results presented here suggest a model for twitching and competence in ADP1. Given the overlap of required genes for competence and twitching shown in Fig 3, it appears that both competence and twitching require use of the same type IV pilus basal apparatus. Since both functionalities require multiple pilins as well as both pilus retraction motors PilU and PilT, we suggest that both functionalities make use of a pilus protruding from this nanomachine. However, given that competence and twitching require different, yet overlapping, sets of pilins and that those pilins have unique amino acid sequences, this suggests that the competence and twitching pili employ distinct sets of pilins to create different pili specialized for each functionality, or that incorporation of different pilins in the same fiber allows a single appendage to carry out both transformation and twitching.
Species lacking certain key pilin or other accessory proteins required for natural transformation may not then be competent, and likewise for twitching. Thus, differences in competence and twitching phenotypes between species that encode core type IV pilus genes may arise from variations in which pilins or other accessory proteins are encoded in their genomes and under what conditions those proteins are expressed.
Central limit theorem for Hotelling's $T^2$ statistic under large dimension

In this paper we prove the central limit theorem for Hotelling's $T^2$ statistic when the dimension of the random vectors is proportional to the sample size.

Introduction and main results

Since the famous Marcenko and Pastur law was found in [16], the theory of large sample covariance matrices has been further developed. Among others, we mention Jonsson [14], Yin [24], Silverstein [18], Wachter [23], and Yin, Bai and Krishnaiah [25]. Lately, Johnstone [13] discovered the law of the largest eigenvalue of the Wishart matrix, Bai and Silverstein [5] established central limit theorems (CLT) for linear spectral statistics, and Bai, Miao and Pan [2] derived a CLT for functionals of the eigenvalues and eigenvectors. We also refer to [12], [22], [9] for CLTs on linear statistics of eigenvalues of other classes of random matrices.

The sample covariance matrix is defined by $S=\frac{1}{n}\sum_{j=1}^{n}(s_j-\bar{s})(s_j-\bar{s})^T$, where $\bar{s}=\frac{1}{n}\sum_{j=1}^{n}s_j$ and $s_j=(X_{1j},\cdots,X_{pj})^T$. Here $\{X_{ij}\}$, $i,j=1,2,\cdots$, is a double array of independent and identically distributed (i.i.d.) real r.v.'s with $EX_{11}=0$ and $EX_{11}^2=1$. However, in the large random matrices theory (RMT), the commonly used sample covariance matrix is $\underline{S}=\frac{1}{n}X_nX_n^T$, where $X_n=(s_1,\cdots,s_n)$. Note that $S=\underline{S}-\bar{s}\bar{s}^T$ and thus by the rank inequality there is no difference when one is only concerned with the limiting empirical spectral distribution (ESD) of the eigenvalues in large random matrices. Therefore the limiting ESD of $S$ is Marcenko and Pastur's law $F_c(x)$ (see [16] and [14]), which has a density function $F_c'(x)=\frac{1}{2\pi cx}\sqrt{(b-x)(x-a)}$ for $a\le x\le b$, with $a=(1-\sqrt{c})^2$, $b=(1+\sqrt{c})^2$, $c=\lim p/n$, together with a point mass $1-1/c$ at the origin when $c>1$. Here the Stieltjes transform for any distribution function $G(x)$ is defined by $m_G(z)=\int\frac{dG(x)}{x-z}$, $\Im z>0$.

Observe that the spectra of $n^{-1}X_nX_n^T$ and $n^{-1}X_n^TX_n$ are identical except for zero eigenvalues. This leads to the equality

(1.2) $m_{\underline{S}_n}(z)=-\frac{1-p/n}{z}+\frac{p}{n}\,m_{S_n}(z)$,

and therefore $\underline{m}(z)=-\frac{1-c}{z}+c\,m(z)$, where $m_{S_n}(z)$ and $m_{\underline{S}_n}(z)$ denote, respectively, the Stieltjes transforms of the ESDs of $n^{-1}X_nX_n^T$ and $n^{-1}X_n^TX_n$, and, correspondingly, $m(z)$ and $\underline{m}(z)$ are the limits of $m_{S_n}(z)$ and $m_{\underline{S}_n}(z)$.

Sample covariance matrices are also of essential importance in multivariate statistical analysis because many test statistics involve their eigenvalues and/or eigenvectors. A typical example is the $T^2$ statistic, which was proposed by Hotelling [10]. We refer to [1] and [15] for various uses of the $T^2$ statistic. The $T^2$ statistic, which is the origin of multivariate linear hypothesis tests and the associated confidence sets, is defined by

(1.4) $T^2=n(\bar{s}-\mu_0)^T S^{-1}(\bar{s}-\mu_0)$,

whose distribution is invariant under the transformation $s_j'=\Sigma^{1/2}s_j$, $j=1,2,\cdots,n$, with $\Sigma$ any non-singular $p\times p$ matrix, when $\mu_0=0$. If $\{s_1,\cdots,s_n\}$ is a sample from the $p$-dimensional population $N(\mu,\Sigma)$, then $\frac{T^2}{n-1}\cdot\frac{n-p}{p}$ follows a noncentral $F$ distribution; moreover, the $F$ distribution is central if $\mu=\mu_0$. When $p$ is fixed, the limiting distribution of $T^2$ for $\mu=\mu_0$ is the $\chi^2$ distribution even if the parent distribution is not normal. In the past three or four decades, in many research areas, including signal processing, network security, image processing, genetics, stock marketing and other economic problems, people have been interested in the case where $p$ is quite large or proportional to the sample size. Thus it is desirable to obtain the asymptotic distribution of the famous Hotelling $T^2$ statistic when the dimension of the random vectors is proportional to the sample size. It is the aim of this work.
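For readers who want to see the statistic in (1.4) concretely, a minimal numerical sketch follows; the $1/n$ normalization of $S$ after centering matches the definition above, while the sample sizes and seed are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of T^2 = n (s_bar - mu0)^T S^{-1} (s_bar - mu0) from (1.4).
rng = np.random.default_rng(0)
n, p = 200, 50                       # dimension proportional to sample size
X = rng.standard_normal((n, p))      # rows are the samples s_j^T
mu0 = np.zeros(p)

s_bar = X.mean(axis=0)
centered = X - s_bar
S = centered.T @ centered / n        # mean-centered covariance, 1/n scaling

d = s_bar - mu0
T2 = n * d @ np.linalg.solve(S, d)
print(f"T^2 = {T2:.2f}")
```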
In addition, we would like to point out that some discussions about the two-sample $T^2$ statistic under the assumption that the underlying r.v.'s are normal were presented in [3]. Before stating the results, let us introduce some notation. Let $m(z)=\int(x-z)^{-1}dF_c(x)$ and $m_n(z)=\int(x-z)^{-1}dF_{c_n}(x)$, where $c_n=p/n$ and $F_{c_n}(x)$ denotes $F_c(x)$ with $c_n$ substituted for $c$. The main results are then presented in the following theorems.

Theorem 2. In addition to assumption (1) of Theorem 1, suppose that $c_n=p/n\to c>0$, $EX_{11}=0$, $g(x)$ is a function with a continuous first derivative in a neighborhood of $c$, and $f(x)$ is analytic on an open region containing the interval

(1.5) $[I_{(0,1)}(c)(1-\sqrt{c})^2,\ (1+\sqrt{c})^2]$.

Then the corresponding CLT holds. Note that $\bar{s}^T\bar{s}\to c$ a.s. (see (1.16) in [17] or [20]); therefore $\bar{s}/\|\bar{s}\|$ can be treated as a fixed unit vector $x_n$ when dealing with $\bar{s}^Tf(S)\bar{s}/\|\bar{s}\|^2$. Theorem 2 relies on Lemma 1 below, which deals with the asymptotic joint distribution of $X_n(z)$ and $\sqrt{n}\,(g(\|\bar{s}\|^2)-g(c_n))$. The stochastic process $X_n(z)$ is defined on a contour $\mathcal{C}$, given as below. Let $v_0>0$ be arbitrary and set $\mathcal{C}_u=\{u+iv_0,\ u\in[u_l,u_r]\}$, where $u_l$ is any negative number if the left endpoint of (1.5) is zero, otherwise $u_l$ is any positive number smaller than the left endpoint of (1.5), and $u_r$ is any number larger than the right endpoint of (1.5). Then define $\mathcal{C}^+=\{u_l+iv,\ v\in(0,v_0]\}\cup\mathcal{C}_u\cup\{u_r+iv,\ v\in(0,v_0]\}$ and let $\mathcal{C}^-$ be the symmetric part of $\mathcal{C}^+$ about the real axis. Then $\mathcal{C}=\mathcal{C}^+\cup\mathcal{C}^-$. We further define $\widehat{X}_n(z)$, a truncated version of $X_n(z)$, as in [5]. Select a sequence of positive numbers $\rho_n$ satisfying $\rho_n\downarrow 0$ and $\rho_n\ge n^{-\beta}$ for some $\beta\in(0,1)$. We can now define the truncated process for $z=u+iv\in\mathcal{C}$ by

(1.9) $\widehat{X}_n(z)=X_n(z)$ for $z\in\mathcal{C}_n$, and $\widehat{X}_n(z)=X_n(u+i\rho_n/n)$ for $z=u+iv$ with $u\in\{u_l,u_r\}$ and $v\in[0,\rho_n/n]$,

where $\mathcal{C}_n=\mathcal{C}_n^+\cup\mathcal{C}_n^-$, $\mathcal{C}_n^+$ is $\mathcal{C}^+$ with the vertical segments truncated at height $\rho_n/n$, and $\mathcal{C}_n^-$ denotes the symmetric part of $\mathcal{C}_n^+$ about the real axis. Then $\widehat{X}_n(z)$ may be viewed as a random element in the metric space $C(\mathcal{C},\mathbb{R}^2)$ of continuous functions from $\mathcal{C}$ to $\mathbb{R}^2$. We are now in a position to state Lemma 1.

Lemma 1. Under the assumptions of Theorem 2, for $z\in\mathcal{C}$ the pair $(\widehat{X}_n(z),\ \sqrt{n}(g(\|\bar{s}\|^2)-g(c_n)))$ converges weakly to a Gaussian limit whose second component is independent of $X(z)$, a Gaussian stochastic process with mean zero and covariance function $\mathrm{Cov}(X(z_1),X(z_2))$.

Remark 2. Also, note that $X(z)$ is exactly the weak limit of the stochastic process $\sqrt{n}\,(x_n^T(S-zI)^{-1}x_n-m_n(z))$ when $\max_i x_{ni}\to 0$; its covariance function can be found in [2] and [17].

We conclude this section by presenting the structure of this work. To transfer Lemma 1 to Theorem 2 we introduce a new empirical distribution function $F_2^{S}(x)=\sum_{j=1}^{p}t_j^2\,I(\lambda_j\le x)$, where the $\lambda_j$ are the eigenvalues of $S$, $t=(t_1,\cdots,t_p)^T=U\bar{s}/\|\bar{s}\|$ and $U$ is the eigenvector matrix of $S$. It turns out that $F_2^{S}(x)$ and the ESD of $S$ have the same limit. Thus $\bar{s}^Tf(S)\bar{s}/\|\bar{s}\|^2$ in Theorem 2 is transferred to the Stieltjes transform of $F_2^{S}(x)$. Moreover, note that

$(B+arr^T)^{-1}=B^{-1}-\dfrac{a\,B^{-1}rr^TB^{-1}}{1+a\,r^TB^{-1}r}$,

where $B$ and $B+arr^T$ are both invertible, $r\in\mathbb{R}^p$ and $a\in\mathbb{R}$. The stochastic process $X_n(z)$ in Lemma 1 is then transferred to the stochastic process $M_n(z)$ introduced in the next section. The convergence of the stochastic process $M_n(z)$ is given in the next two sections. The proofs of Theorem 1, Lemma 1 and Remark 2 are included in Section 4. The last section picks up the truncation of the underlying r.v.'s. Throughout this paper, to save notation, $M$ may denote different constants on different occasions.

Weak convergence of the finite dimensional distributions

For $z\in\mathcal{C}_n^+$, let $M_n(z)=M_n^{(1)}(z)+M_n^{(2)}(z)$. In this section, the aim is to prove that, for any positive integer $r$ and complex numbers $a_1,\cdots,a_r$, the sum $\sum_{i=1}^{r}a_iM_n^{(1)}(z_i)$ converges in distribution to a Gaussian r.v., and to derive the asymptotic covariance function. Before proceeding, the r.v.'s need to be truncated; however, we shall postpone the truncation of the r.v.'s until the last section.
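As a quick numerical illustration of the Marcenko-Pastur limit underlying the contour construction above, the sketch below compares the empirical eigenvalue density of $n^{-1}X_nX_n^T$ with the density $F_c'(x)$ given earlier; the dimensions and random seed are arbitrary.

```python
import numpy as np

# ESD of X X^T / n versus the Marcenko-Pastur density with ratio c = p/n.
rng = np.random.default_rng(1)
n, p = 2000, 1000
c = p / n
X = rng.standard_normal((p, n))
eigs = np.linalg.eigvalsh(X @ X.T / n)

a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
grid = np.linspace(a + 1e-3, b - 1e-3, 5)
mp_density = np.sqrt((b - grid) * (grid - a)) / (2 * np.pi * c * grid)

hist, edges = np.histogram(eigs, bins=40, range=(a, b), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
est = np.interp(grid, centers, hist)
for x, f_mp, f_emp in zip(grid, mp_density, est):
    print(f"x={x:.2f}  MP={f_mp:.3f}  empirical={f_emp:.3f}")
```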
As a consequence of Lemma 7, we assume that the underlying r.v.'s satisfy where ε n is a positive sequence which converges to zero as n goes to infinity. We begin with a list of notation, mathematical tools and estimates. 2.1. Notation, mathematical tools and estimates. We first introduce some notation. Next we list some results which will be frequently used below. Lemma 3. (Theorem 35.12 of Billingsley (1995)) Suppose for each n, Y n,1 , Y n,2 , · · · , Y n,rn is a real martingale difference sequence with respect to the increasing σ-field {F n,j } having second moments. If as n → ∞ where σ 2 is a positive constant and ε is an arbitrary positive number, then rn j=1 Y n,j D → N(0, σ 2 ). where B * denotes the complex conjugate transpose of B. Lemma 5. Let C = (c ij ) p×p be a complex matrix with c jj = 0 and Y = (Y 1 , · · · , Y p ) T , defined in Lemma 4. Then for any k ≥ 2, Lemma 5 directly follows from the argument of Lemma A.1 of [4]. A direct calculation indicates that the following equalities are true: a ii e T i Br, (2.4) where B = (b ij ) p×p and A = (a ij ) p×p are deterministic complex matrices and r is a deterministic vector. Here e i is the vector with the i-th element being 1 and zero otherwise. In what follows, to facilitate the analysis in the subsequent subsections, we shall assume v = ℑz > 0. Note that β j (z), β tr j (z), β ij (z), b 1 (z), b 12 (z) are bounded in absolute value by |z|/v ( see (3.4) of [4]). From (1.13) we have and from Lemma 2.10 of [4] for any matrix B where · denote the spectral norm of a matrix. Moreover, Section 4 in [4] shows that . To simplify the statements, assume that the spectral norms of B, B i , A i , C involved in the equalities (2.8)-(2.16) are all bounded above by a constant. For k ≥ 2, it follows from Lemma 4, (2.1) and (2.7) that and that We shall establish the estimates (2.10)-(2.12) below: One should note that (2.10) and (2.11) also give the estimates for k = 2. For example . In addition, from (2.8) and (2.11) we also conclude that (2.14) Applying Lemma 2 twice to the second expectation in (2.15) gives The third expectation in (2.15) can be estimated by using Lemma 2 three times, where G i = σ(s 2 , · · · , s i ). It follows from (2.15) that for k ≥ 4 where · denotes the spectral norm of a matrix. This, together with Lemma 4, ensures that for k ≥ 4 which gives (2.10) as well as the order of E|α 1 (z)| k . Second, consider (2.11). Let y = (y 1 , · · · , y p ) T = Bs 2 and then by lemma 2 and (2.8), for k ≥ 4, where we also use the fact that for k ≥ 4 As for (2.12), if m = 0 and r = 0, then (2.12) directly follows from (2.8) and the Hölder inequality. If m ≥ 1 and r = 0, then by induction on m we have Repeating the argument above gives (m = 0 by (2.8) and m ≥ 1 by induction). Thus, for the case m ≥ 1 and r ≥ 1, by (2.10) we obtain When m = 0 and r ≥ 1, (2.12) can be obtained similarly. Thus we have proved (2.12). The simplification of M (1) n (z). Define the σ-field F j = σ(s 1 , · · · , s j ), and let E j (·) = E(·|F j ) and E 0 (·) be the unconditional expectation. Now write . The above first two terms will be further simplified one by one below. 
One should note that M and A −1 j (z) and splittings into the sum ofs j and s j /n, we have Appealing to (2.12) we have which, together with (2.20), leads to because, by (2.17) in [5], (2.7) and (2.10), Secondly, splittings into the sum ofs j and s j /n further gives and thus, as in treating a where in the last step we also use the estimate [5], (2.7) and (2.10), Consequently, for finite dimension convergence, we need consider only the sum and α (1) Lemma 5 and (2.16) show that E|α where k = 2 or 4 and A −1 j (z) denotes the complex conjugate of A −1 j (z). Since EX 4 11 I(|X 11 | > log n) → 0 we have E|α Here we also use E|Y Thus the condition (ii) of Lemma 3 is satisfied. Hence, the next task is to find, for z 1 , z 2 ∈ C\R, the limit in probability of To this end, it is enough to find the limits in probability for the following: The limits of (2.25), (2.26), (2.27) and finally (2.24) will be determined in the subsequent subsections. 2.3. The limit of (2.25). Introduce A −1 j (z) ands j like A −1 j (z) ands j , respectively, but A −1 j (z) ands j are now defined by s 1 , · · · , s j−1 , s j+1 , · · · , s n instead of s 1 , · · · , s j−1 , s j+1 , · · · , s n . Here {s j+1 , · · · , s n } are i.i.d copies of s 1 and independent of {s j , j = 1, · · · , n}. Therefore (2.25) is equal to Applyings j = 1 n n i =j s i and (1.13) further gives The next aim is to replace β ij (z 2 ) in the equality above by β tr ij (z 2 ). To this end, consider the case i > j first. By (2.12) ands ij =s j − s i /n. Then, when i < j, with notation , and It follows from (2.12) that E|c nj | ≤ Mn −1/2 , j = 1, 2, 3, 4. Moreover, note that In what follows we use the notation o L 1 (1) to denote convergence to zero in L 1 . This, together with (2.29) and (2.30), implies that Here in the last step we applys j = s i /n +s ij first, then use (1.13) and finally split A −1 j (z 1 ) into two parts as before. We claim that the terms d n2 and d n3 are both negligible. To see this, we first prove the following estimate Indeed, the left side of (2.32) may be expanded as From (2.10), the above term corresponding to i 1 = i 2 is bounded by 1 n 2 To treat the case i 1 = i 2 , we need to further split A −1 i 1 j (z 2 ) as the sum of i 1 j (z 1 ) ands i 1 j are also needed to be similarly split. To simplify notation, define By (2.10), (2.11) and (2.14) we have The above four estimates, together with the fact that imply that all terms in (2.33) corresponding to i 1 = i 2 are bounded in absolute value by Mn −3/8 , which ensures (2.32). Consider the term d n2 now. In view of (2.7) and (2.12) we may substitute b 12 (z 2 ) for β tr ij (z 2 ) in the term d n2 first and then applying (2.32) we conclude that E|d n2 | = o(1). As for the term d n3 , it follows from (2.7) and (2.12) that β tr ij (z 2 ), β ij (z 1 ) and , (note: b 12 (z) = b 12 (z)). Moreover, by an inequality similar to (2.6) we have Therefore from (2.10) we obtain As in (2.32) we may prove that (even simpler) which then implies that E|d n3 | = o(1). As for d n1 , we conclude from (2.7), (2.12) and (2.6) that Summarizing the above, we have thus proved that using the fact that, by (2.17) in [5] and (2.6), 2.4. The limit of (2.26). We first present a lemma below, which is necessary for finding the limit of (2.26), for the next subsection and Section 3. 
Multiplying by A −1 1 (z) from the right on both sides of the above equality gives we obtain (2.39) It follows that for where we also uses (2.9) and the fact that, as in (2.8), by Lemma 4 and (2.7), Here and in what follows (in this lemma) O(n −1/2 ) and other bounds are independent of i and j. We conclude from (2.9) that . For the second term in (2.40), first, by a martingale method similar to (2.18) and (2.9) we have, for e l = e i or e j , This and (2.7) ensure that Second, appealing to (2.3) gives It follows that = O(n −1/2 ). On the other hand, in view of (2.9) and (2.41) we obtain Therefore, combining the above argument with (2.36), we have Next, applying (2.38) two times gives ) . Obviously, we conclude from (2.41), (2.9) and Hölder's inequality that while (2.4), (2.6) and (2.43) yield Here we also use the estimate, via (2.7) and (2.42) Next we shall prove that e T i A −1 j (z 1 )s j above may be replaced by E(e T i A −1 j (z 1 )s j ). Using martingale decompositions as in (2.18) and the fact that Here one should notice that θ ij (z) and g nm (z) are the same. As in (2.17), one can verify that (2.45) . Thus, for k = 2 or 4, via (2.10), . and, via (2.8), These yield that E|θ ijm (z)| 2 = O(n −2 ), E|θ ijm (z)| 4 = O(n −2 ε n ) and then Here by (2.16) Thus, e T i A −1 j (z 1 )s j may be replaced by E(e T i A −1 j (z 1 )s j ), as expected. In addition, by (2.16) and (2.37) It follows from (2.47) and (2.48) that As we shall see, the above first term converges to zero in probability and the second term has a close connection with (2.25). Consider the second term of (2.50) first. Write We claim that To see this, let E ij = E(·|s 1 , · · · , s i , s j+1 , · · · , s n ). Then, recalling the definitions of A −1 j (z) ands j we havē and Note thats j is independent of s i for i > j. Then applying (2.10) yields which ensures that So (2.52) follows from the above estimate and , which may be obtained immediately by checking the argument of (2.16). As in (2.52) we may also prove that We now turn to the first term in (2.50) and claim that The second term above is not greater than Moreover, it follows from Lemma 6 and (2.49) that Consequently, the proof of (2.55) is complete. It follows that . Tightness ofM (1) n (z) and Convergence of M (2) n (z) First, we proceed to prove the tightness ofM (1) n (z) for z ∈ C, which is a truncated version of M n (z) as in (1.9). By (2.10) we have which ensures that Condition (i) of Theorem 12.3 of [6] is satisfied, as pointed out in [5]. Here Y j (z) is defined in (2.23). Condition (ii) of Theorem 12.3 of [6] will be verified if the following holds, In the sequel, since C + n and C − n are symmetric we shall prove the above inequality on C + n only. Throughout this section, all bounds including O(·) and o(·) expressions hold uniformly for z ∈ C + n . In view of our truncation steps, (1.9a) and (1.9b) in [5] apply to our case as well. That is, for any η 1 > (1 + √ c) 2 , 0 < η 2 < I(0, 1)(c)(1 − √ c) 2 and any positive l Note that when either z ∈ C u or z ∈ C l and u l < 0, A −1 j (z) is bounded in n. But this is not the case for z ∈ C r or z ∈ C l and u l > 0. In general, for z ∈ C + n , we have As in Section 2.1, now write Moreover, expanding the above difference we get It follows from (1.8), (2.10), (3.2) and (3.1) that For q n2 , expanding its difference term by term we have We conclude from (2.8), (2.10), (3.1), (3.2) and (3.4) that where we use, on the event ( S ≥ h r or λ min (A 1 ) ≤ h l ), by (2.5). 
Obviously, this argument also works for $q_{n2}^{(j)}$, $j=2,\cdots,6$. Moreover, we may split $q_{n1}$ further and apply the above argument to conclude the same bound; here the details are skipped.

The proofs of Lemma 1, Theorem 1 and Theorem 2

Proof of Lemma 1. To finish Lemma 1, $\bar{s}^T\bar{s}-c_n$ needs to be written as a sum of a martingale difference sequence, so that we can get a central limit theorem for $\bar{s}^T\bar{s}-c_n$ and, more importantly, obtain the asymptotic covariance between $\bar{s}^T\bar{s}-c_n$ and $\bar{s}^TA^{-1}(z)\bar{s}$. Writing the quantity as such a sum and using (2.10), Condition (ii) of Lemma 3 follows. Next, consider Condition (i) of Lemma 3; the terms corresponding to $k_1=k_2$ and to $k_1\neq k_2$ are handled separately, and the conclusion then follows from Lemma 3. We conclude from Sections 2 and 3 that $\widehat{M}_n(z)$ converges weakly to a Gaussian process on $\mathcal{C}$. Moreover, $m_n(z)\to m(z)$ uniformly on $\mathcal{C}$ by (4.2) in [5] and (1.2). These, together with (1.12), (4.3), (2.23) and (4.1), give, for any constants $a_1$ and $a_2$, the weak convergence of

(4.4) $a_1X_n(z)+a_2\sqrt{n}\,\bigl(g(\|\bar{s}\|^2)-g(c_n)\bigr)$.

Here, the first $o_p(1)$ denotes convergence in probability to zero in the $C$ space, and in the first step we use the fact that $g(x)=g(c_n)+g'(a)(x-c_n)+o(|x-c_n|)$ as $x\to c_n$. Thus, the tightness of $\widehat{X}_n(z)$ follows from the tightness of $\widehat{M}_n(z)$.

Proof of Theorem 1. By taking $f(x)=x^{-1}$ and $g(x)=x$ in Theorem 2 and noting that $c_n\to c$ as $n\to\infty$, we can complete the proof.

Truncation of the underlying random variables

To guarantee that the results hold under the fourth moment condition, it is necessary to truncate and centralize the underlying r.v.'s at an appropriate rate. As in (1.8) in [5] one may select a positive sequence $\varepsilon_n$ so that

(5.1) $\varepsilon_n\to 0$ and $\varepsilon_n^{-4}\,EX_{11}^4\,I(|X_{11}|\ge\varepsilon_n\sqrt{n})\to 0$.

Finally, the above argument for (5.2) of course works for (5.3). We are done.
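Condition (5.1) is easy to probe numerically for a concrete distribution. The sketch below uses a standard normal $X_{11}$, the closed form $E[X^4 1\{|X|\ge t\}]=2(\varphi(t)(t^3+3t)+3(1-\Phi(t)))$, and the decay rate $\varepsilon_n=n^{-1/8}$; the rate is an illustrative choice, not one taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Truncated fourth moment of a standard normal, in closed form.
def tail_fourth_moment(t):
    return 2 * (norm.pdf(t) * (t**3 + 3 * t) + 3 * norm.sf(t))

for n in [10**2, 10**3, 10**4]:
    eps = n ** (-1 / 8)                              # illustrative rate
    val = tail_fourth_moment(eps * np.sqrt(n)) / eps**4
    print(f"n={n:>6}  eps_n={eps:.3f}  "
          f"eps_n^-4 E[X^4; |X|>=eps_n sqrt(n)] = {val:.3e}")
```

The printed values decrease rapidly to zero, consistent with (5.1).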
Review of Chinese Species of Deferunda (Hemiptera: Fulgoromorpha: Achilidae) with Descriptions of Two New Species

Abstract

The Chinese species of the genus Deferunda Distant, 1912 are reviewed. They include 9 species as follows: D. acuminata Chou & Wang, 1985, D. diana Chen & He, 2010, D. ellipsoidea sp. nov., D. lua sp. nov., D. qiana Chen & He, 2010, D. rubrostigma (Matsumura, 1914), D. striata Wang & Liu, 2008, D. trimaculata Wang & Peng, 2008 and D. truncata Chen, Yang & Wilson, 1989. The 2 new species, D. ellipsoidea sp. nov. and D. lua sp. nov., are described and illustrated. A checklist of Chinese species and a key to all known species in the genus are provided.

The planthopper genus Deferunda (Hemiptera: Fulgoromorpha: Achilidae: Plectoderini) was established by Distant (1912), with D. stigmatica Distant, 1912 from Bangladesh as its type species. The genus consists of 13 species, which are widely distributed in the Oriental (China, India, Bangladesh and the Philippines), Palaearctic (Tajikistan, Japan and Korea) and Australian (Australia) regions. Most are distributed in the Oriental region, especially in southern China (Chen & He 2010). In this paper, the Chinese species of the genus Deferunda are reviewed and 2 new species are described and illustrated from Guizhou and Shandong Provinces, China. A checklist of Chinese species and a key to all species of Deferunda are provided.

MATERIALS AND METHODS

The morphological terminology and measurements used in this study follow Chen et al. (1989) and Yang & Chang (2000). Color photographs of the examined specimens were taken with a Keyence VHX-1000C camera. External morphology was observed under an Olympus SZX7 stereoscopic microscope, and characters were measured with an ocular micrometer. The genital segments of the examined specimens were macerated in 10% KOH and drawn from preparations in glycerin using an Olympus CX41 stereomicroscope. Illustrations were scanned with a Canon CanoScan LiDE100 and imported into Adobe Photoshop CS5 for labeling and plate composition. The spinal formula refers to the numbers of apical spines of the hind tibiae and of the 1st and 2nd hind tarsomeres (a small parsing sketch for this notation is given after the species treatments below). The type materials and other examined materials are deposited in the Institute of Entomology, Guizhou University, Guiyang, Guizhou Province, China (IEGU).

Diagnosis

Small achilids. Head with eyes distinctly narrower than pronotum. Vertex produced before eyes for two-thirds to a half of its length; anterior margin rounded, subtruncate or truncate; posterior margin subtruncate or slightly concave; anterior half of median carina obsolete, basal half prominent; lateral margins distinctly foliate, highly elevated, diverging posteriorly. Frons, in lateral view, slightly convex, longer in mid line than widest part; basal margin roundly convex or truncate, one-fifth as wide as broadest part; median carina simple, with basal half obsolete; lateral margins strongly foliate basally, extending laterad beneath antennae, hence incurved into suture; disk of frons depressed at basal third, or apparently so on account of the deeply foliate margins. Postclypeus shorter than frons in mid line, nearly straight in lateral view. Rostrum reaching mesothoracic trochanters. Pronotum shorter than vertex in mid line; anterior margin angularly or roundly convex; lateral carinae diverging posteriorly, reaching hind margin. Mesonotum longer than vertex and pronotum combined.
Forewing with Sc+R forked in basal third of forewing, Cu1 forked level with union of claval veins, M forked level with node; Cu1 deeply convex distad of claval apex, almost reaching M, hence slightly detached; with a callus in costal cell, 6 apical areoles distad of stigma; apical part behind apex of clavus folding down and covering apex of abdomen. Spinal formula of hind leg 8-7(8)-6(5).

Male genitalia. Anal segment in dorsal view (Fig. 8) ellipsoidal, with length greater than width (1.6:1); apical margin rounded; anal style relatively short, not extending beyond apical margin of anal segment. Pygofer in lateral view (Fig. 9) distinctly shorter dorsally than ventrally; anterior margin strongly concave; posterior margin with a rounded process in ventral quarter; pygofer in ventral view (Fig. 10) subtrapezoidal, each medioventral process finger-like, narrowing distally, the 2 processes connected basally, median cleft deep. Parameres (Figs. 11 and 12) arched in dorsal and ventral views, apex rounded; dorsal margin with 2 large processes, 1 extending anteriorly, the other extending laterally, the latter with a small basal process on inner surface. Phallic appendages (Fig. 14) longer than phallobase (2.8:1). Aedeagus with phallobase tubular, membranous, divided into 4 lobes at apex; in ventral view (Fig. 13) not quite bilaterally symmetrical, lateral processes reduced, with a strong spine-like process at mid line, directed basad; ventral lobe cleft at apex medially, the 2 ventral lobes rounded at apex, left side with 3 subapical spines, right side with 2 subapical spines, lateral margins of the dorsal aspect with dentate margins near the base; in dorsal view (Fig. 14), aedeagus with phallobase almost symmetrical, dorsal lobes apically digitate and bent forward, most of lateral margins strongly dentate.

Remarks

This species is similar to D. truncata Chen, Yang & Wilson, 1989 (China: Taiwan), but can be distinguished by the frons centrally without any marks (frons centrally with a V-shaped dark mark in truncata); anterior margin of vertex rounded (truncate in truncata); hind tibiae with a lateral spine at middle near the base, spinal formula of hind leg 8-7-6 (8-7-5 in truncata). This species also differs from other species of Deferunda in the anal segment in dorsal view being ellipsoidal, distinctly longer than its widest part (1.6:1), and in the dorsal margin of the parameres in dorsal view bearing 2 large processes, 1 extending anteriorly, the other extending laterally, the latter process with a small process on the inner surface basally. The feminine specific name refers to the type locality, with the word "lu" being the transliteration of the Chinese shortened form for Shandong Province.

Description

Measurement. Body length (from apex of vertex to tip of forewings): male 3.5-3.9 mm (n = 42), female 3.8-4.5 mm (n = 22); forewing length: male 2.8-3.2 mm (n = 42), female 3.0-3.6 mm (n = 22).

Coloration. General color yellowish white to fuscous. Vertex (Figs. 15, 17 and 19) yellowish white, with 2 long fuscous stripes along mid line from apical two-thirds to apex, lateral carinae brown. Frons and clypeus (Fig. 21) yellowish white. Rostrum light brown except fuscous apex. Gena (Fig. 20) and forewing (Fig. 23) pale brown; forewing veins brown, with a brown mark on anal region. Thorax with ventral areas brown to fuscous. Legs yellowish brown to brown, outer margin of each end of hind tibiae with a small brown mark. Abdomen fuscous, except lateral margins pale fuscous. Genital segment yellowish brown to brown.

Male genitalia.
Anal segment in dorsal view (Fig. 24) slightly shorter than widest part (0.84:1), with apical margin slightly incised in middle; anal style extending slightly beyond apical margin of anal segment. Pygofer, in lateral view (Fig. 25), distinctly shorter dorsally than ventrally; anterior margin strongly concave above ventral fourth; posterior margin with a rounded process at middle. Pygofer, in ventral view (Fig. 26), with basal margin incised in middle; medioventral processes swollen, narrowing apically, the 2 processes connected basally, median cleft deep. Parameres arched in dorsal and ventral views (Figs. 27 and 28), apex rounded, dorsal margin with 2 processes. Phallic appendages (Figs. 29 and 30) longer than phallobase (2.5:1). Aedeagus with phallobase almost bilaterally symmetrical, tubular, membranous, dividing into 4 lobes at apex; in ventral view (Fig. 29), ventral lobe cleft at apex medially, with a strong spine-like process at mid line, directed basad, lateral margins dentate, left lobe dividing at apex into 4 small processes, right lobe dividing at apex into 5 small processes; in dorsal view (Fig. 30), dorsal lobes reduced, rounded at apex, each with a longitudinal row of spines.

Deferunda truncata Chen, Yang & Wilson, 1989

Deferunda truncata Chen, Yang & Wilson, 1989.

Materials Examined

No specimens of this species were available for this study.
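The spinal formula notation used throughout the treatments above (for example, 8-7(8)-6(5) in the generic diagnosis) compresses three spine counts, hind tibia / 1st tarsomere / 2nd tarsomere, with parenthesized observed variants. A minimal parsing sketch, assuming the hyphen-and-parentheses convention as printed:

```python
import re

def parse_spinal_formula(formula):
    """Return [(typical_count, variant_count_or_None), ...] per segment."""
    segments = []
    for part in formula.split("-"):
        m = re.fullmatch(r"(\d+)(?:\((\d+)\))?", part)
        typical, variant = int(m.group(1)), m.group(2)
        segments.append((typical, int(variant) if variant else None))
    return segments

print(parse_spinal_formula("8-7(8)-6(5)"))   # generic diagnosis
print(parse_spinal_formula("8-7-6"))         # D. lua sp. nov.
print(parse_spinal_formula("8-7-5"))         # D. truncata
```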
Pion constituent quarks couplings strong form factors: a dynamical approach

Form factors for pion interactions with constituent quarks are investigated as the leading effective couplings obtained from a one loop background field method applied to a global color model. Two pion field definitions are considered, and the resulting eleven form factors are expressed in terms of components of the quark and gluon propagators that compose only two momentum dependent functions. A momentum dependent Goldberger-Treiman relation is also obtained as one of the ratios between the form factors. The resulting form factors with pion momenta up to 1.5 GeV are exhibited for different quark effective masses and two different non-perturbative gluon propagators, and they present behavior similar to fittings of experimental data from nucleon form factors. The corresponding pseudoscalar averaged quadratic radii (a.q.r.) and the correction to the axial a.q.r. are presented as functions of the sea quark effective mass, being equal respectively to the scalar and vector ones at the present level of calculation.

Introduction

The strong, electromagnetic and weak content of hadrons has been under continuous, intense theoretical and experimental scrutiny. Different hadron form factors are among the main observables for understanding details of their interactions and structures, including sizes, and they are important quantities for comparing theoretical and experimental results [1,2,3,4,5]. For example, the vector form factors provide the charge and electromagnetic hadron structure and interactions, while the nucleon axial form factor provides important information for the spin structure and for weak interaction observables such as neutron beta decay or CKM matrix unitarity. There are many theoretical calculations for the light hadrons strong form factors, for example [6,7,8,9,10,11,18,12,13,14,15,16,17] and references therein. Lately, lattice estimates for pion-nucleon/baryon interactions have been provided for progressively lower values of the pion mass, for example in [17,10,18]. Concerning their very low momentum behavior, experimental results for nucleon electromagnetic and strong averaged radii provide values $\sqrt{\langle r^2\rangle}\simeq 0.8-0.9$ fm [19,20,1,13]. In spite of the many difficulties in providing a complete description of hadrons and their interactions compatible with experimental data directly from QCD, in particular in the low and intermediate energy regimes, both effective models and effective theories have been considered to understand partial or isolated aspects of the strong interactions. Among these models, the constituent quark models (CQM) have been shown to describe many aspects of hadron structure and interactions by considering dressed quark degrees of freedom, Dynamical Chiral Symmetry Breaking (DChSB) and eventually a pion cloud [21,22,23,12]. Within the constituent quark model it has been argued that the zero momentum limit of the axial form factor should be g_A(0) = 3/4 or g_A(0) = 1 [12,21]. Also, a radius of the order of 0.2-0.3 fm has been estimated for constituent quarks [12,24]. In Weinberg's large Nc Effective Field Theory (EFT), constituent quarks and gluons interact with pions whose dynamics is ruled by the leading terms of Chiral Perturbation Theory (ChPT), in a way that copes with the large Nc expansion [21].
In [25,26] this EFT has been derived as the leading terms of a large quark and gluon effective mass expansion for the one loop background field method applied to a global color model, in the vacuum and with leading couplings to the electromagnetic field. It can be expected that, by comparing the strong and electromagnetic nucleon and light meson form factors with those for constituent quarks, the detailed role and contribution of each internal degree of freedom to the details of hadron structure and interactions might be clearly elucidated. Of course, to accomplish this program, besides further comparisons between different theoretical frameworks, it is also important to improve the amount and precision of experimental data. This means that the related developments might shed light on the partial or even complete reliability of CQM-type models to describe hadron interactions in particular energy ranges. Moreover, these comparisons might make explicit particular effects or mechanisms present in hadron structure and interactions by means of analytical or semi-analytical approaches, besides the well established lattice QCD framework. Eventually this can be used to assess or to improve field theoretic schemes for an eventual unambiguous parameterization of the nucleon and nuclear potentials [27].

In the present work the strong constituent quark form factors associated to the leading pion couplings to constituent quarks are derived and investigated. This method was considered before for the zero momentum limit of the corresponding pion-constituent quark couplings [25,26] and for the light vector meson momentum dependent couplings to constituent quarks [28,29]. The form factors are obtained from a large quark and gluon effective mass expansion of the one loop background field method applied to a global color model. The background field quark becomes the constituent quark due to the one loop calculation, in which an internal (non-perturbative) gluon line dresses the (background) quark. This is nearly independent of the dynamical chiral symmetry breaking, except for the fact that the same gluon propagator required to yield DChSB is considered. This momentum dependent constituent quark mass therefore emerges by means of a mechanism different from the usual DChSB, which might be in agreement with recent calculations [30]. The resulting couplings and form factors therefore correspond to tree level pion-constituent quark vertices. These pion-constituent quark form factors are investigated, and comparisons with experimental data for the pion-nucleon system are presented. Furthermore, four additional derivative pion couplings to scalar and pseudoscalar constituent quark currents, which emerge among the same leading terms of the determinant expansion, are also presented; they might contribute to the vector and axial channels. Direct and simple momentum dependent and independent relations between different form factors are also presented. In particular, one relation corresponds to a generalized momentum dependent Goldberger-Treiman relation (GTR). Besides that, the corresponding strong quadratic radii of constituent quarks (scalar, pseudoscalar, vector and axial) are also presented as functions of the quark effective mass. The axial (and vector) pion coupling presented in this work provides a further contribution, beyond those calculated in [28], to the corresponding axial (and vector) form factors and quadratic radii.
Two pion field definitions are considered: the Weinberg pion field, written in terms of covariant derivatives, and the usual parameterization in terms of the operators $U=e^{i\vec{\pi}\cdot\vec{\tau}}$. The conventional definition in terms of the functions $U=e^{i\vec{\pi}\cdot\vec{\tau}}$ provides the well known pseudoscalar pion coupling, which is not found in the Weinberg pion field case. The isospin non-degeneracy of the up-down quark masses is not considered in this work, since it should be responsible for smaller (higher order) effects.

This work is organized as follows. In the next Section the steps of the method are briefly reminded and the large quark effective mass expansion of the sea quark determinant is performed. By keeping the full momentum dependence of the resulting constituent quark-pion couplings, the corresponding form factors are presented for the two definitions of the pion field in the following section. Due to the momentum structure of some of the form factors, it is also convenient to perform a truncation that later provides the corresponding positive averaged quadratic radii. All the eleven form factors, five for the Weinberg pion field and six for the second pion field definition, are written in terms of only two momentum dependent functions, denoted $F_1(K,Q)$ and $F_2(K,Q)$. Besides that, the momentum dependent constituent quark mass correction, $M_3(Q)$, is investigated. In the following Section numerical results are exhibited for different values of the quark effective mass and for two very different gluon propagators: an effective longitudinal confining propagator considered by Cornwall [31] and a transversal one used extensively and successfully to provide hadron observables by Tandy and Maris [32]. Some ratios and comparisons of the form factors are also presented, including the estimation of a momentum dependent Goldberger-Treiman relation. The corresponding contributions to the pseudoscalar and axial strong constituent quark quadratic radii are also investigated as functions of the quark effective mass for the different gluon propagators. In the last Section a summary is presented.

The quark determinant, pions and constituent quark currents

Consider the non-perturbative one gluon exchange quark-quark interaction as one of the leading terms of the QCD effective action, whose generating functional is given by [33,34]:

(1) $Z[J,J^*]=N\int D[\psi,\bar\psi]\,\exp\Big\{i\int_x\bar\psi\,(i\gamma^\mu\partial_\mu-m)\,\psi-i\frac{g^2}{2}\int_x\int_y\,j_a^\mu(x)\,\tilde{R}_{\mu\nu}(x-y)\,j_a^\nu(y)+i\int_x(\bar\psi J+J^*\psi)\Big\}$,

where $N$ is the normalization, $J,J^*$ the quark sources, $\int_x$ stands for $\int d^4x$, and $a,b\ldots=1,\ldots,(N_c^2-1)$ stand for color indices in the adjoint representation, with $N_c=3$. The functional measure for the quark field was written as $D[\psi,\bar\psi]=D[\psi]D[\bar\psi]$. The quark-gluon coupling constant is assumed to be $g$, and the development below is akin to the rainbow-ladder Schwinger-Dyson equation (SDE). Below, indices $i,j,k=0,\ldots,(N_f^2-1)$ will be used for SU(2) isospin indices, and therefore $N_f=2$. The quark current mass will be assumed to be equal for the u, d quarks. The color quark currents are given by $j_a^\mu=\bar\psi\lambda_a\gamma^\mu\psi$, and the sums over color, flavor and Dirac indices are implicit. A Landau-type gauge will be considered for a non-perturbative gluon propagator that can be written as $\tilde{R}_{\mu\nu}$, whose transversal and longitudinal components are $R_T(x-y)$ and $R_L(x-y)$. This non-perturbative gluon kernel therefore incorporates to some extent the gluonic non-Abelian character, with a corrected quark-gluon coupling, such that it provides enough strength to yield dynamical chiral symmetry breaking (DChSB). This has been found in several approaches and extensions [35,7,36,31,37,38,39].
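Both effective gluon propagators invoked in this work have simple closed forms in the literature. The sketch below encodes representative parameterizations in Euclidean momentum space; the functional forms follow the published Maris-Tandy [32] and Cornwall [31] models, but the parameter values (ω, D, m_g, Λ) are illustrative choices rather than the ones used for the results quoted here.

```python
import numpy as np

# Representative effective gluon propagators (Euclidean k^2 in GeV^2).
def maris_tandy_ir(k2, omega=0.5, D=0.93):
    """Infrared part of the Maris-Tandy interaction, g^2 D(k^2)."""
    return (4 * np.pi**2 * D / omega**6) * k2 * np.exp(-k2 / omega**2)

def cornwall(k2, mg=0.5, lam=0.3):
    """Cornwall-type propagator with a dynamical gluon mass M_g^2(k^2)."""
    b = 12 / 11  # pure-gauge exponent
    mg2 = mg**2 * (np.log((k2 + 4 * mg**2) / lam**2)
                   / np.log(4 * mg**2 / lam**2)) ** (-b)
    return 1.0 / (k2 + mg2)

k2 = np.array([0.01, 0.1, 1.0, 4.0])
print("Maris-Tandy g^2 D(k^2):", maris_tandy_ir(k2))
print("Cornwall D(k^2):       ", cornwall(k2))
```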
The method was explained in detail in Refs. [40,25,26,28,29] and therefore it will be succinctly described below. A Fierz transformation of the model (1) is performed and, by picking up the leading color singlet terms that provide the usual pion couplings, it allows one to investigate the flavor structure in a more complete way. Besides that, color singlets, on the one hand, avoid problems with unconfined spurious color degrees of freedom and, on the other hand, provide a direct relation with the lightest observed quark-antiquark states. These states are to be identified with the light hadron degrees of freedom and the scalar chiral condensate by means of the corresponding fields to be introduced. Chiral structures with combinations of bilocal currents are obtained. The quark field must be responsible for the formation of both mesons and baryons, and these different possibilities are envisaged by considering the Background Field Method (BFM) [41,42]. Therefore the quark field is split into the sea quark, ψ_2, composing (light) quark-antiquark states including light mesons and the chiral condensate, and the (constituent) background quark, ψ_1, which composes baryons. The shift of quark bilinears corresponds to performing a one loop BFM calculation, and it might be written for each of the color singlet Dirac/isospin channels m = s, p, si, pi, ps, v, a, as, vs (scalar, pseudoscalar, scalar-isospin triplet, pseudoscalar-isospin triplet, vector, axial, vector-isospin triplet, axial-isospin triplet, where the isospin singlet states were omitted). Each of these channels might have a corresponding auxiliary field. However, only the lightest pseudoscalar-iso-triplet and isoscalar-scalar degrees of freedom will be investigated in the present work. The quark field shift is of the form ψ → ψ_1 + ψ_2 (sketched below). This separation preserves chiral symmetry. The sea quark can be integrated out exactly by means of the auxiliary field method, which gives rise to colorless quark-antiquark states, light mesons and the chiral quark condensate. Auxiliary fields are introduced by means of unity integrals multiplying the generating functional. The only degrees of freedom considered in this work are the chiral scalar and pseudoscalar-iso-triplet ones, which are needed for the pion sector at the leading order. The heavier vector and axial mesons can be neglected in the lower energy regime. Therefore one is left with a model for pions and a scalar field interacting with constituent quarks. The corresponding unity integral for the scalar and pseudoscalar auxiliary bilocal fields S(x, y), P_i(x, y) is introduced with a normalization N. Bilocal auxiliary fields for the different flavors can be expanded in an infinite orthogonal basis containing all the excitations in the corresponding channel. For the pseudoscalar isotriplet fields one has an expansion of the form sketched below, where the F_k are vacuum functions, invariant under translation, for each of the local fields P_{i,k}(u). For the low energy regime one might pick up only the lowest energy modes, the lightest one k = 0, which corresponds to the pions in this channel, i.e. P_{i,k=0} = π_i, making the form factors reduce to constants in the zero momentum limit, F_k(z) = F_k(0). The saddle point equations for each of the remaining auxiliary fields, after the integration of the sea quark, can be written from the condition ∂S_eff/∂φ_q = 0.
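As a minimal sketch of the structures just described - the explicit equations were not preserved here, so the forms below are assumptions consistent with the surrounding definitions (the background/sea split, the bilocal expansion with vacuum functions F_k, and the saddle point condition):

\psi \;\to\; \psi_1 + \psi_2 \qquad (\psi_1:\ \text{background/constituent quark},\quad \psi_2:\ \text{sea quark}),

P_i(x,y) \;=\; \sum_k F_k(z)\, P_{i,k}(u), \qquad u = \tfrac{x+y}{2},\quad z = x-y, \qquad P_{i,k=0}(u) = \pi_i(u),

\frac{\partial S_{eff}}{\partial \phi_q}\bigg|_{\phi_q=\bar\phi_q} = 0 \qquad (\phi_q = \bar s,\ \bar p_i,\ \dots).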
These equations, for the NJL model and for the model (1) with Schwinger-Dyson equations at the rainbow ladder level, have been analyzed in many works, in the vacuum or under a finite energy density. The scalar field has the only saddle point equation with a non trivial solution, for the quark-antiquark chiral condensate. This classical solution generates an effective mass for the sea quarks. Chiral symmetry leaves a freedom to define the pion field, and chiral rotations can be done to modify its definition. The scalar field can be frozen by means of a chiral rotation, and this produces the chiral condensate and a strongly non linear pion sector. The usual pion field definition is parameterized by the functions U = exp(iπ·σ) and U† = exp(−iπ·σ). To investigate this aspect, another pion field definition, the Weinberg one, is characterized by writing all of the chiral invariant sector in terms of a covariant pion derivative (sketched below). The chiral symmetry breaking terms, however, can depend on combinations of π and π². By doing the corresponding chiral rotations, particular sets of constituent quark-pion interactions are obtained. The corresponding Jacobian of the path integral measure will not be calculated, and it might induce extra terms for the resulting form factors. By performing a Gaussian integration of the sea quark field, the resulting determinant can be written, by means of the identity det A = exp Tr ln(A), as an effective action where Tr stands for traces over all discrete internal indices together with integration of the spacetime coordinates, and Ξ_s(x − y) stands for the coupling of the sea quark to the scalar-pseudoscalar fields for a particular pion field definition. This coupling term can be written respectively for the Weinberg pion field (Ξ_s^W(x − y)) and for the usual pion field (Ξ_s^U(x − y)) in terms of the unitary functions U, U† as in [25,26] (schematic forms below), where F = f_π is the pion field normalization and P_{R/L} = (1 ± γ_5)/2 are the chirality right/left hand projectors. The free quark kernel S_0(x − y) takes the standard form, where m is, so far, the current quark mass. The classical solution for the scalar field, found from its gap equation, is directly incorporated into an effective quark mass M* = m − ⟨s⟩, in terms of which the quark kernel is redefined. In expression (8) the following quantity, with the usual chiral constituent quark currents that yield the leading couplings to pions, has been used; in it, α = 2/9 arises from the Fierz transformation and R(x − y) was given in (4).

Leading form factors

In the following, consider the quark (and gluon) large effective mass expansion for the case in which the quark and pion fields exchange momenta. To provide the reader with one example, one of the leading pion constituent quark effective interactions is the pseudoscalar coupling, and it shows up in the first order terms of the expansion. With the insertion of complete sets of orthogonal momentum states, a pseudoscalar form factor at the constituent quark level emerges in momentum space, G_ps^U(K, Q), where the momenta K, Q are defined below. For this, the trace over internal indices (isospin, color and Dirac) was calculated.
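Schematic forms of the ingredients named in this Section; since the source's expressions were not preserved, the following uses the standard Weinberg covariant derivative and the standard chiral coupling through U, U† as assumptions (in particular, the precise normalization of Ξ_s^U below is a guess):

D_\mu \vec\pi \;=\; \frac{\partial_\mu \vec\pi}{1+\vec\pi^{\,2}} \qquad (\text{Weinberg pion field}),

\Xi^U_s(x-y) \;\sim\; F\,\big[\,P_R\, U(x) + P_L\, U^\dagger(x)\,\big]\,\delta(x-y), \qquad U = e^{\,i\vec\pi\cdot\vec\sigma},

S_0^{-1}(x-y) \;=\; \big(i\gamma\cdot\partial - m\big)\,\delta(x-y) \;\;\longrightarrow\;\; \big(i\gamma\cdot\partial - M^*\big)\,\delta(x-y), \qquad M^* = m - \langle s\rangle .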
By considering an incoming quark with momentum K, and pion(s) with total momentum Q, the set of leading momentum dependent effective couplings for the first pion field definition (W) in the weak pion field limit (1 + π² ≃ 1) is given by expression (15), where Q = Q_π is the total momentum carried by one or two pions in each of the vertices; this holds for both pion field definitions W and U, being that, in the couplings to the vector and scalar constituent quark currents, Q = q_a + q_b, and the pion field was kept dimensionless. The last two terms, the momentum dependent ones, were obtained with an integration by parts. In this expression M_3(K) is a running effective mass that will be defined below in (23), and the following dimensionless form factors were defined in terms of the functions F_1(K, Q) given below, where N_c = 3 and d_n = (−1)^{n+1}/(2n). It is interesting to note that the scalar pion coupling is proportional to the current quark mass, and therefore it is a consequence of explicit chiral symmetry breaking. There are a scalar and a pseudoscalar momentum dependent form factor. Although the usual pseudoscalar pion coupling to the pseudoscalar quark current does not emerge at this level of calculation for the W pion field definition, there is the coupling G_ps^{p,W}(K, Q) that might contribute to the axial channel. Because it is simply proportional to other form factors by means of the function F_2(K, Q), it will not be investigated explicitly numerically below. An analogous conclusion can be drawn for the derivative-scalar term G_s^{p,W}(K, Q), which might contribute to the vector channel. The complete set of leading momentum dependent couplings with their form factors for the second pion field definition, with the same convention for momenta as in expression (15) and a dimensionless pion field, is given by expression (19), where M_3(K) is the same mass as in expression (15), to be defined in expression (23). The other form factors were defined in (22). The derivative couplings with form factors G_ps^p(K, Q) and G_s^p(K, Q) have simply a different normalization with respect to the ones from the W pion field definition, G_ps^{p,W}(K, Q) and G_s^{p,W}(K, Q). For example, it can be seen that G_ps^{p,W}(K, Q) = M* F G_ps^p(K, Q). At this level, it is interesting to note that G_ps(K, Q) = G_{2js}(K, Q), in reasonable agreement with other results [43], and also that G_A(K, Q) = G_V(K, Q) for both pion field definitions. The loop momentum integrals of each of the form factors above will be written and investigated for a constituent quark with K = 0, except for the effective mass M_3(Q). After a Wick rotation to Euclidean momentum space these functions are given by integrals of the type sketched below, where ∫_k = ∫ d⁴k/(2π)⁴, together with the standard momentum space functions for the components of the quark and gluon propagators. The only form factor that might have an ultraviolet (UV) divergence is M_3(Q), if the gluon propagator does not possess a particular UV behavior. The others are completely finite if the non perturbative gluon propagator is infrared regular. The momentum structure of the form factor F_1(0, Q) has a positive first derivative with respect to Q² for very small Q, and therefore it yields negative quadratic radii. To overcome that, F_1(0, Q) might be truncated by approximating the quark kernel by S_0(k) → M* S̃_0(k). This yields for the function F_1(0, Q) the truncated expression F_1^{tr}(0, Q) sketched below. This truncation might be expected to correspond to making the effective mass M* momentum dependent in the expression of F_1(K, Q).
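The loop integrals behind F_1 and F_2 did not survive extraction; as an assumption-labeled sketch of their generic one loop structure (Euclidean space, K = 0), with S̃_0(k) the scalar denominator of the free quark propagator and R(k) the gluon kernel:

\int_k \equiv \int \frac{d^4k}{(2\pi)^4}, \qquad \tilde S_0(k) = \frac{1}{k^2 + M^{*2}},

F_i(0,Q) \;\sim\; \int_k R(k)\; \mathrm{Tr}\,\big[\,\Gamma_i\, S_0(k)\, \Gamma_i'\, S_0(k+Q)\,\big], \qquad i = 1, 2,

S_0(k) \;\xrightarrow{\ \text{truncation}\ }\; M^*\,\tilde S_0(k) \;\;\Rightarrow\;\; F_1^{tr}(0,Q) \;\sim\; M^{*2}\!\int_k R(k)\,\tilde S_0(k)\,\tilde S_0(k+Q),

where Γ_i, Γ_i' stand for the Dirac/isospin vertices of the corresponding channel. The truncation discards the k·γ part of the quark propagator, which is what restores a negative slope of the form factor at small Q² and hence positive averaged quadratic radii.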
In Figure (1) the diagrams corresponding to the expressions (15) for the Weinberg pion field definition are presented, where the pion-quark vertices with a square are the derivative ones and diagram (1d) stands for the effective mass M_3(Q). The dressed (non perturbative) gluon propagator is indicated by a wavy line with a full circle, and the pion is represented by dashed lines. In diagrams (1a-c) the incoming constituent quark has momentum K and the outgoing constituent quark has momentum K + Q, Q being the total momentum transferred by the pion(s). Figure (2) exhibits the diagrams for the pion constituent quark couplings for the usual pion field definition, given in expression (19), with the same conventions as Figure 1: the wavy line with a full dot is a (dressed) non perturbative gluon propagator, the solid lines stand for a constituent quark (external line) or a sea quark (internal line), dashed lines represent the pion field, and a full square in a vertex represents a derivative coupling.

Numerical results

To provide numerical results, two gluon propagators were chosen: a transversal one from Tandy and Maris, D_I(k) [32], and an effective longitudinal confining one by Cornwall, D_II(k) [31]. Both of them yield DChSB, and they are introduced through an association between the model kernel and the quoted propagators (sketched below), where D_{μν}^a(k) (a = I, II) is one of the chosen gluon propagators from the quoted articles and h_a is a real positive constant factor used in previous works [26,29] to fix the quark-gluon (running) coupling constant so as to reproduce one expected value, either of the vector/axial pion coupling constant in the vacuum or of the vector meson coupling constant to constituent quarks: g_V h_a = 1, g_A h_a = 1 or g_ρ h_a ≃ 12. In the present work this factor was chosen, for each of the gluon propagators and each pion field definition, to provide g_A(0) h_a = 1. Their values will be shown in the caption of the corresponding figure. For the first propagator the parameters are γ_m = 12/25, ω = 0.5 GeV and D = 0.55³/ω (GeV²); for the second, K_F = (2π M_k/(3 k_e))², where k_e = 0.15 and M_k = 220 MeV (the explicit propagator expressions are sketched below). In Figure (3) the resulting constituent quark (running) effective mass M_3(Q) is shown as a function of the constituent quark momentum for a UV cutoff Λ = 2 GeV, in dashed and continuous lines, and it is compared to a result from Schwinger-Dyson equations at the rainbow ladder approximation from Ref. [44]. The multiplicative factors 1/4 and 3/4 were chosen to fit the curves into a suitable scale; they are needed because of the large value of Λ. Figures (4) and (5) present the same behavior without meaningful differences, except for the relative normalization of the non truncated form factor. Besides that, a dipolar fitting of experimental results for the axial pion-nucleon coupling is drawn with the symbols +, with a normalization chosen to allow for a comparison of the momentum dependence. It is given by [16,20,18] G_A^{par}(Q²) = G_A^{par}(0)/(1 + Q²/M_A²)², by considering M_A = 1.1 GeV and by adopting the normalization for G_A^{par}(Q² = 0) obtained in the present work for each of the gluon propagators, for the case M* = 0.31 GeV. The fitting of the experimental values decreases more slowly than the (constituent quark) form factors G_A^W(0, Q), and two reasons might be directly identified for that. It might signal that there is missing strength from more complete quark and gluon kernels. However, it might also indicate the need to account for other effects, related rather to nucleon structure degrees of freedom.
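Since the explicit propagator expressions did not survive extraction, the following standard forms from the quoted references, written with the parameters listed above, are assumed; D_I(k) is shown through its infrared (Gaussian) part plus the usual ultraviolet logarithmic tail, whose detailed factor E(k²) and constants τ, Λ_QCD are omitted here, and D_II(k) is Cornwall's effective confining propagator:

D_I(k) \;\sim\; \frac{8\pi^2 D}{\omega^4}\, e^{-k^2/\omega^2} \;+\; \frac{8\pi^2\,\gamma_m\, \mathcal{E}(k^2)}{\ln\!\big[\tau + (1+k^2/\Lambda_{QCD}^2)^2\big]}, \qquad \gamma_m = \tfrac{12}{25},

D_{II}(k) \;=\; \frac{K_F}{(k^2 + M_k^2)^2}, \qquad K_F = \Big(\frac{2\pi M_k}{3 k_e}\Big)^{2},

and, for the coupling normalization, g^2 \tilde R_{\mu\nu}(k) \to h_a\, D^{a}_{\mu\nu}(k) with a = I, II (this association is also an assumption consistent with the text).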
These two possibilities are not mutually exclusive; however, they correspond to different types of constituent quark models for hadrons (baryons), since they would correspond to different roles of the constituent quark interactions in the baryon structure. In any case, apart from a possible difference in the overall normalization, the difference is not very large and it appears at intermediary momenta. It can be noted that the non truncated expressions provide a positive momentum slope at Q = 0; these expressions therefore would provide negative averaged quadratic axial radii. The truncated expressions correct this behavior. In Figure (6) … The axial coupling constant at the constituent quark level has been argued to be close to g_A ≃ 3/4 [12] or g_A ≃ 1 [21]. The results from the form factors are of the correct order of magnitude and value. Also, in the present work, it was shown in expressions (16) and (21) that the axial and vector form factors are equal to each other, due to chiral symmetry, for each of the two pion field definitions considered.

Pseudoscalar coupling

In Figures (7) and (8) the pseudoscalar form factor G_ps^U(0, Q) and its truncated version G_ps^{U,tr}(0, Q) are presented for the gluon propagators D_II(k) and D_I(k) respectively. The zero momentum (Q = 0) values are basically one order of magnitude larger than the zero momentum axial form factor, as expected from phenomenology. Results with D_I(k) have considerably larger absolute values than with D_II(k). The dipolar fitting (30) for data from lattice QCD calculations [45] is also shown, with a suitable normalization at G_ps(0, 0), to compare with the results from the expressions above for the case M* = 0.31 GeV. All the truncated expressions for G_ps^U(0, Q) yield similar results for M* = 0.31 and 0.35 GeV. Whereas the truncated version presents a monotonic decrease with the momentum Q, the complete expression increases up to around Q ∼ 0.40−0.45 GeV and then decreases for larger Q. It has therefore the same behavior as G_A^W(Q) shown in the previous section. The deviation of the momentum dependence of the form factor G_ps^{U,tr}(0, Q) from the fitting G_ps^{par}(0, Q) is slightly larger than the deviation of the axial form factor G_A^U(0, Q) with respect to the corresponding nucleon-pion experimental fitting. The reasons must be the same: the momentum dependence of the quark and gluon kernels and/or internal nucleon effects. Standard hadron effective coupling constants are usually obtained for particular values of the transferred momentum, such as Q² = 0 or Q² ≃ −m²_π. The only numerical values for the form factors at timelike momenta (Q² < 0) shown in this work are the next ones, for the usual pseudoscalar pion coupling at Q² = −m²_π, i.e. closer to the physical definition of G_πN, which is taken at timelike momenta at the muon or pion mass. For the quark effective mass M* = 0.31 GeV and the two gluon propagators, values were obtained for the complete expression (20) and for the momentum truncated expression G_ps^{U,tr}(0, Q) with (26). By considering the same factors h_a adopted for the figures of the pseudoscalar form factors (h_I = 1/0.83 and h_II = 1/0.27), one has, for D_II(k), G_ps^{U,tr}(0, Q² = −m²_π) = 15.2 and G_ps^{tr}(0, 0) = 13.3. The difference between the form factor G_ps^U(0, Q) and its truncated version G_ps^{U,tr}(0, Q) is of course present in these timelike values.
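As an arithmetic illustration of the timelike values just quoted, and assuming that the lattice fitting (30) has the standard dipolar shape (its explicit form was not preserved here, so the dipole mass M_ps below is a hypothetical parameter):

G^{par}_{ps}(Q^2) = \frac{G_{ps}(0)}{\big(1 + Q^2/M_{ps}^2\big)^2},
\qquad
\frac{G^{U,tr}_{ps}(0,\,Q^2=-m_\pi^2)}{G^{tr}_{ps}(0,0)} \;=\; \frac{15.2}{13.3} \;\simeq\; 1.14,

i.e. the continuation to the pion pole raises the truncated pseudoscalar coupling by roughly 14% with respect to its Q² = 0 value for D_II(k).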
The values from the truncated expression are also closer to the experimental data for the nucleon-pion coupling constant and to results from other calculations.

Goldberger-Treiman and other relations at spacelike momenta

Next, ratios of the form factors are calculated. The following momentum dependent ratios between dimensionless quantities were considered (schematic forms are given below), where the first one, GT_W(Q), is an equivalent of the GTR expression for the Weinberg pion field, in which the pseudoscalar pion coupling does not appear but the (symmetry breaking) scalar two pion coupling to the constituent quark does. This ratio is momentum independent and it depends on the current quark mass, m ∼ 5.75 MeV, for which 16m = f_π = F = 92 MeV, and therefore GT_W ≃ 1. The function GT(Q) for the second pion field definition has a constant factor F/M*, such that if the GTR is satisfied the ratio GT(Q) → 1; this is verified for very large M*. The last expression contains two chiral symmetry relations for the form factors, and for their corresponding effective coupling constants, for the second pion field definition. In Figure (9) the ratio GT(Q) is presented as a function of momentum for different effective quark masses M*. The ratio GT(Q) does not necessarily satisfy the GTR at Q = 0, because the quark effective masses are not large enough. This ratio GT(Q) has the same behavior found in other works [15]. The deviation from the GTR intrinsically due to the momentum dependence of each of the form factors in the nucleon level Goldberger-Treiman relation is usually denoted by R(Q). It is usually parameterized in terms of the nucleon mass M [15]; by substituting M by the quark effective mass M*, it is given by the expression sketched below, where G_πN(Q²) is to be substituted by G_ps(Q). By considering the constituent quark masses M* = 0.28 GeV and 0.31 GeV, this function is exhibited in Figure (10) for the second pion field definition. It goes to zero quite fast with increasing Q, depending not only on the quark effective mass M* but also on the gluon propagator considered.

Averaged quadratic radii

Next, the corresponding strong averaged quadratic radii are defined from the different pion-constituent quark couplings presented above. Since the form factors are dimensionless, the corresponding axial and pseudoscalar quadratic radii, such as ⟨r²⟩^{W,tr}_A, were defined in the standard way (sketched below); in the right hand side of these expressions the relations to the vector and scalar quadratic radii, from the form factors defined in the previous sections, are exhibited. In [28] the couplings of the light vector/axial mesons to constituent quarks were considered to provide the corresponding quadratic radii. The corresponding averaged axial and vector quadratic radii seen by the coupling to the pion, presented in this work, also turn out to be equal. Both results, from the pion and axial meson couplings, are to be added; i.e., in fact, expressions (39)-(41) provide corrections to the corresponding quadratic radii. However, their experimental values at the nucleon level must receive further corrections, since the vector and axial a.q.r. are different from each other and expected to follow ⟨r²_V⟩/⟨r²_A⟩ ≃ 1.6 [12]. In Figure (11) the different estimations for the axial quadratic radius contribution are shown for the two pion definitions, W and U, and for the two gluon propagators, as functions of the quark effective mass M*. In the figures with a.q.r. the factors h_a were taken as h_I = 3 and h_II = 1, so that the results could be compared with those from [28].
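Since the defining expressions for the ratios and radii did not survive extraction, the following is a hedged reconstruction from the surrounding description: the GT-type ratio is written with the factor F/M* noted in the text, the deviation R(Q²) follows the usual nucleon level parameterization with M → M* (the precise convention of Ref. [15] may differ), and the quadratic radii use the standard derivative definition:

GT(Q) \;=\; \frac{F\; G^{U}_{ps}(0,Q)}{M^*\; G^{U}_{A}(0,Q)} \;\longrightarrow\; 1 \quad (\text{GTR limit}),

R(Q^2) \;\simeq\; 1 \;-\; \frac{M^*\, G_A(Q^2)}{F\, G_{\pi N}(Q^2)} \qquad (G_{\pi N} \to G_{ps}),

\langle r^2 \rangle_X \;=\; -\,\frac{6}{G_X(0,0)}\, \frac{\partial\, G_X(0,Q)}{\partial\, Q^2}\bigg|_{Q^2=0}, \qquad X = A,\ ps,\ V,\ s .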
In the case of the Weinberg definition there are also results for the truncated expression. The axial radius (contribution) ⟨r²⟩^W_A is negative because of the behavior of the axial form factor close to zero exchanged momentum, and this unexpected behavior is corrected by the truncated expression, as discussed above. Besides the problem with the sign of ⟨r²⟩^W_A, a different behavior in the M*-dependence of the axial quadratic radii between ⟨r²⟩^W_A and ⟨r²⟩^{W,tr}_A is also noted: the former presents a stronger variation with increasing M* and the latter a smoother variation. These axial quadratic radii corrections due to the pion are smaller than the vector/axial quadratic radii due to the vector/axial light mesons, calculated with the same method for both gluon propagators in [28]. In that work the axial quadratic radii found from the coupling to the A_1 meson, ⟨r²_{a.m.}⟩_A, were estimated - for the same range of values of the quark effective mass M*, and keeping the factors h_a equal to the ones used in the figures for the a.q.r. - respectively for the gluon propagators D_II(k) and D_I(k). Of course the estimations for ⟨r²_{a.m.}⟩_A with D_I(k) are extremely large, as is also seen in Figures (11,12), and this had been attributed rather to the corresponding quark-gluon coupling constant and gluon propagator strengths. Both resulting values, however, are basically of the same order of magnitude as (or larger than) the estimation for the constituent quark radius, ⟨r²⟩_CQ ≃ 0.2−0.3 fm [12,24], apart from normalizations of the quark-gluon coupling constant. The experimental value for the axial radius of the nucleon is ⟨r²_A⟩^{1/2} ≃ 0.68 fm [1,13], and there are many estimations from the lattice, ⟨r²_A⟩^{1/2} ≃ 0.45−0.50 fm, for example in [46,18] and references therein. (Figure caption fragment: … and for the two gluon propagators. The limit in which the Goldberger-Treiman relation is recovered corresponds to GT(0) = 1.) A similar behavior was found for the pseudoscalar quadratic radii, presented in the next Figure (12) from expressions (42,43), the complete and truncated ones, as functions of the quark effective mass M* for the two gluon propagators. The non truncated expression provides negative values, and they are presented with a minus sign. One of them, for D_I(k), is divided by a factor 10 to fit into a reasonable scale of the figure. To allow a consistent comparison with the previous figure, h_I = 3 and h_II = 1 were assumed. The axial contribution ⟨r²⟩_A was found to be smaller than the pseudoscalar ⟨r²⟩_ps in all cases. This is related to the fact that the pseudoscalar form factor normalization is larger than the axial one. At this level all the form factors reduce to only F_1(K, Q) and F_2(K, Q), and the truncated version F_1^{tr}(K, Q). However, the difficulty in fixing the quark-gluon vertex and the overall momentum behavior of the quark and gluon propagators cannot be neglected. When compared to the value ⟨r²⟩_CQ ≃ 0.2−0.3 fm from [12,24], the gluon propagator D_I(k) provides larger values for ⟨r²⟩ and the gluon propagator D_II(k) again provides smaller values. The reasons for the differences between ⟨r²⟩_ps and the truncated ⟨r²⟩_ps must be the same as the ones responsible for the discrepancies in the axial radii of Figure (11).
Besides that, it is interesting, for the sake of comparison, to consider the scalar radius of the lightest hadron, the pion, which has been calculated, for example, on the lattice, with ⟨r²⟩_s = 0.6 fm² [47]. The pion charge radius has estimations, for example, on the lattice, ⟨r²⟩ = 0.37 fm² [8], and with SDE, ⟨r²⟩ = 0.46−0.48 fm² [48], whereas its experimental value is ⟨r²⟩ ≃ 0.45 fm² [20,9]. The pion scalar radius seems therefore to be larger than its charge radius, analogously to the fact that, according to the present results, the pseudoscalar, and also the scalar, radii are larger than the axial and vector radii.

Summary and discussion

Pion-constituent quark momentum dependent form factors were investigated from the one loop background field method for the non perturbative one gluon exchange quark interaction from the QCD effective action. At this level, the pseudoscalar coupling only shows up for the usual pion field definition in terms of the unitary functions U, U†, but not for the Weinberg pion field. Besides the usual pseudoscalar pion coupling, other form factors for derivative pion couplings to scalar and pseudoscalar currents were also found in the leading order of the determinant expansion, in expressions (18,15) and also (19,22). Several of them have a reduced strength with respect to the usual scalar and pseudoscalar form factors, by a constant coefficient of the order of 1/M*. By means of an integration by parts, these terms might contribute to the vector and axial channels. All the (eleven) resulting form factors - pseudoscalar, scalar, vector and axial - were found to be written in terms of only two momentum dependent functions, F_1(0, Q) and F_2(0, Q), for zero external constituent quark momentum, with different coefficients. A truncated momentum dependence of the quark kernel for F_1(0, Q) was also considered, such that the resulting form factors, G_A^{W,tr}(0, Q) and G_ps^{U,tr}(0, Q), were shown to have a monotonically decreasing behavior, more similar to the experimental results and corresponding rather to the function F_2(0, Q). The truncated expressions might in fact correspond to considering a running momentum dependent effective sea quark mass from the gap equation. Besides that, these truncated expressions yield positive averaged quadratic radii. Different values for the sea quark effective mass M* were considered, and it mostly contributes to the overall normalization of the form factors. The first momentum dependent function presented was the constituent quark effective mass correction M_3(Q). Its momentum dependence is in excellent agreement with estimations from SDE calculations, except for its overall normalization, which appeared to be very large due to the absence of a UV cutoff. It is important to stress that the mechanisms that give rise to the gap effective mass M* and to the mass M_3(Q) are different. However, the behavior of the constituent quark mass M_3(Q) is nearly independent of the scalar condensate contribution to the (constant) quark effective mass M*. At the level of the calculation presented, the axial and vector form factors are equal to each other for each of the pion field definitions. The same chiral relation appeared for the scalar and pseudoscalar form factors for the second pion field definition. The axial and pseudoscalar form factors were compared to fittings of available experimental data for pion-nucleon form factors by adjusting the values at zero momenta.
The results showed that the momentum dependence of the constituent quark coupling to pions is not very different from that of the nucleon coupling to pions. The larger difference between the experimental (nucleon form factor) values and the present form factors appears in the range 0.15 GeV < Q < 1.4 GeV for M* = 0.31 GeV. This might signal the need for an improved momentum structure of the quark and gluon kernels, but it might also signal the need to account for effects from nucleon structure. The pseudoscalar form factor has a larger strength than the axial one, in agreement with expectations from phenomenology. This conclusion remains valid if other components of the axial form factor are included, such as the coupling to light axial mesons, as seen by comparing with the results from Ref. [28], in which vector/axial meson couplings to constituent quarks had been investigated by means of the same method employed in the present work. A systematic and more general analysis will be presented elsewhere. The pseudoscalar form factor at the timelike point Q² = −m²_π, closer to the current physical definitions of g_πN, was obtained, being, for the complete (or truncated) expression, smaller (or larger) than at the zero momentum point Q² = 0. Different momentum dependent and momentum independent ratios between the form factors were also presented. Some of them simply show the resulting chiral symmetry relations, e.g. between the vector and axial ones, or between the scalar and pseudoscalar ones. (Figure 11 caption: The axial quadratic averaged radius (contribution) for the two pion definitions, W and U, and the two gluon propagators, I and II, as functions of the effective quark mass M*. The factors h_a were chosen to be h_I = 3 and h_II = 1. The numerical result for ⟨r²⟩^W_A carries a minus sign, and the results for the gluon propagator D_I are divided by 5 to fit in the scale of the figure.) The momentum dependence of the Goldberger-Treiman relation (GTR) was also presented, by considering the pseudoscalar and axial form factors at spacelike momenta, and a qualitative agreement with calculations at the nucleon level was found. Finally, the corresponding results for the pseudoscalar, and the contribution to the axial, constituent quark averaged quadratic radii were obtained as functions of a constant quark effective mass M* from the gap equation. In particular, the resulting values for the axial/vector quadratic radii are somewhat smaller than the estimations of the constituent quark axial/vector radii from the coupling to light axial/vector mesons obtained with the same method [28]. The structureless pion limit might have had an effect on these estimations, but this structureless limit had also been considered for the vector/axial mesons. In general, the pseudoscalar quadratic radius is larger than the axial radius (from both the couplings to pions and to axial mesons), due to the corresponding form factor normalizations. This becomes clear by noting that all the quadratic radii and form factors depend on only two momentum dependent functions. The relevance of each of the constituent quark degrees of freedom presented in this work and in [28] for nucleon structure and the corresponding form factors is to be investigated elsewhere.
2018-12-06T18:46:02.000Z
2018-09-20T00:00:00.000
{ "year": 2018, "sha1": "6c7da26eaf979c257ee8049370c6525fc8649976", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.99.014001", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "dd1cba9052d380e661d6023566027185e1a31738", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
268181864
pes2o/s2orc
v3-fos-license
The Metaphysics of Meaning: Aquinas and the Meaning of Life

While 'the meaning of life' has grown in prominence as a topic of philosophical inquiry, few Thomists have addressed it. Joshua Hochschild has recently offered a plausible explanation, arguing that 'the meaning of life' is a late modern 'invention', at home in a conceptual framework both philosophically problematic and incompatible with the principles of St. Thomas' thought. He therefore counsels Catholic intellectuals to avoid the question of life's meaning. I argue in contrast that St. Thomas offers the kind of metaphysical perspective that originally made 'the meaning of life' intelligible. First, I show that closer attention to the context in which the phrase emerges (that of German Romanticism) can clarify why much of the modern discourse on 'the meaning of life' succumbs to Hochschild's critique. I then show that, even in the writings of its earliest modern proponents, we find compelling reasons to hold that 'the meaning of life' was always more at home within a Christian conceptual framework. Finally, I argue that St. Thomas' account of providence and divine art in particular explains the purposefulness and significance of the world, such that Thomists who appeal to these notions are well positioned to address the question of life's meaning in contemporary philosophical debates.

Introduction

In her memoirs, Raïssa Maritain recounts the famous suicide pact that she formed with the young fellow student, Jacques Maritain, who would later become her husband and a prominent Thomist philosopher. Disillusioned with the 'metaphysical anguish' they encountered in the academic culture of the Sorbonne, the couple agreed to take their own lives if they could not discover the meaning of life. Mercifully, they were spared this fate: '[W]e decided for some time longer to have confidence in the unknown; we would extend credit to existence, look upon it as an experiment to be made, in the hope …'

… invention'.7 Because the question's very formulation marks a dramatic shift away from the concepts St. Thomas employed, Hochschild urges Catholic intellectuals to simply refrain from asking it.8 In spite of his counsel to avoid it, however, Hochschild does entertain the possibility that the question of life's meaning could be 'adopted as a relevant question', so long as it is 'brought back within the orbit of a more substantive moral and metaphysical perspective' - an approach he associates with John Paul II.9 In what follows, I argue that Thomists have good reason to pursue this possibility. This is because, even though he never uttered the phrase, St. Thomas offers the kind of moral and metaphysical perspective that originally made 'the meaning of life' intelligible. First, I argue that closer attention to the context in which the phrase emerges (that of German Romanticism) can clarify why much of the modern discourse on 'the meaning of life' succumbs to Hochschild's critique. I then argue, contra Hochschild, that even in the writings of its earliest modern proponents, there are compelling reasons to hold that 'the meaning of life' was always more at home within a Christian conceptual framework like St. Thomas'. Finally, I argue that St. Thomas' account of providence and divine art in particular explains the purposefulness and significance of the world, such that Thomists who appeal to these notions are well positioned to address the question of life's meaning in contemporary philosophical debates.
A modern history of meaning

In one sense, Hochschild is correct that 'the meaning of life' is a nineteenth - or rather, late eighteenth - century invention. Variants of the phrase (Der Sinn des Lebens) first appear in the writings of the German Romantics at the University of Jena, all of whom were disciples of Johann Gottlieb Fichte. As Stephen Leach and James Tartaglia have documented, Johann Wolfgang von Goethe was likely the first to use this phrasing when he wrote of 'life and life's meaning' (Leben und Lebenssinn) in a letter to Friedrich Schiller in 1796.10 Georg Philipp Friedrich Freiherr von Hardenberg (better known as Novalis) used similar wording in an unpublished manuscript from 1797 or 1798: 'only an artist can divine the meaning of life'.11 Soon after, their companion Friedrich Schlegel wrote of 'the holy meaning of life' at the end of his novel, Lucinde (1799), which popularized the phrase and influenced Thomas Carlyle, the first to use its English equivalent in his novel, Sartor Resartus (1834).12 It is indisputable, then, that 'the meaning of life' emerged within a small circle of European scholars and artists who shared a common intellectual heritage.13 These Jena Romantics also employ similar phrases, such as 'the meaning of the world' or 'the meaning of human existence' (Sinn des Daseins), and they do not consistently distinguish these from 'the meaning of life'.14 But what do they mean by 'meaning' in such phrases? Terms they use - such as Sinn and Bedeutung - had a variety of definitions by the end of the eighteenth century. All of them suggest some mental activity or content. According to Steven Cassedy, 'Sinn' originally meant 'sending', 'movement', or 'direction' and quickly came to encompass the relation between a mind and its object.15 In many instances, it overlaps with 'Bedeutung' (from bedeuten: 'to signify'), as when words, expressions, and works of art signify some idea in the mind or spirit (Geist) of an author. So when the Romantics speak of the 'meaning of life', they are suggesting that life has significance: that it signifies something intelligible, as words and works of art do. Novalis, for instance, writes that 'everything we experience is a communication. Thus the world is indeed a communication - the revelation of a spirit'.16 Schlegel notes that nature 'speaks' to us with 'the deep significance of the mysterious hieroglyphs'.17 And in his Wilhelm Meister's Apprenticeship (1795-1796), Goethe speaks of events in the world possessing 'a great sense' akin to the meaning of a story.18
'Sinn' and 'Bedeutung' could also mean 'will', 'desire', or 'inclination' - that is, the purpose that an agent gives to its objects or its actions, as when we ask after the purpose of an artifact or behavior ('did you mean to do that?').19 When the Romantics speak of meaning in this sense, they are suggesting that our lives or the world are directed toward some end, as if intended by a conscious agent. Such descriptions are unsurprising given the influence of Fichte, who saw the will of an 'I' (Ich) as the world's ultimate origin and explanation.20 Schlegel too affirms that only the creative intention of spirit could give meaning and purpose to the world.21 And in his The Novices of Sais (1802), Novalis notes that spirit can '[impart] to a whole life guidance, stability, and meaning'.22 So for the Jena Romantics, our lives and the world 'mean' in the sense that they (1) signify or (2) have purpose. But these poets, novelists, and artists do not justify their use of this language in light of one unified conceptual framework, as Hochschild suggests. The Romantics are all reacting to the 'disenchanting' effects of the French Enlightenment: the transformation in our understanding of nature from something sacred and purposeful to little more than a lifeless and quantifiable mechanism. So they see their work as part of a broader re-valuation of nature and, in consequence, of human life. Their use of 'meaning', therefore, embodies a more complex range of possibilities than Hochschild allows for. It is informed by competing conceptual frameworks, as they attempt both to champion Enlightenment ideals (such as subjectivity, autonomy, and freedom) and to revitalize premodern 'enchanted' understandings of the world. These competing conceptual frameworks unsurprisingly entail irreconcilable accounts of how and why our lives possess meaning. The Enlightenment framework to which the Romantics are indebted leads to a theory of meaning's origin that Leach and Tartaglia call the 'Romantic idea'. This is the view that we cannot discover an existing reference or order in the world, but must rather create it ourselves.23 We, in other words, are the minds who determine what things signify and what purposes they serve. Take, for instance, Novalis' assertion that 'the world must be romanticized'.24 To romanticize, he says, is to impose upon the world a reference or purpose of our own devising and to 'invest all actions with a great, deep sense [Sinn] - giving life a higher meaning [Bedeutung]'.25 If nature is a 'book', he notes, then we are its authors: 'each life is a story … life must not be a novel that is given to us, but one that is made by us'.26
Yet because such a view is only intelligible in light of modern Enlightenment ideals - many of which are shared by the Romantics' French antagonists - it suffers from the irresolvable tensions generated by those ideals. Here Hochschild's criticisms find their mark. If our creative efforts are really the source of meaning in the world, the implication is that the world as such - including human life - is originally devoid of meaning. The world is not a bearer or source from which we can derive significance and purpose. Rather, things only possess meaning insofar as we act upon them. To many critics, this is tantamount to denying that there is a 'meaning of life' after all: there can only be, at most, a meaning to our experience of it. Some of the earliest critics of Fichte and his heirs, such as Friedrich Jacobi, characterize this view as 'nihilism': the view that there is ultimately nothing meaningful in itself apart from our ego. In his Letter to Fichte (1799), he suggests that anything meaningful on this view amounts to little more than 'determinations of our own self', rather than something objective characterizing the way things are.27 Yet paradoxically, this ego of ours proves incapable of imbuing its own existence with any significance or purpose. It is itself no thing - 'the empty illusion of something'.28 We can perhaps give some significance and direction to our discrete conscious acts. But these, like our ego, would be adrift on a vaster sea of meaninglessness. 'Our entire cognition', Jacobi concludes, 'contains nothing, nothing whatsoever, that could have any truly objective meaning at all'.29 The subsequent history of this 'Romantic idea' appears to confirm Hochschild's judgment that the question of life's meaning dispenses with metaphysical claims about the world, nature, or existence and contents itself with a kind of subjectivism.30 Arthur Schopenhauer was among the first to use the phrase 'the meaning of life' after the Jena Romantics, in 1844.31 Yet for him, meaning is not a significance or purpose that characterizes reality as such. It is something that applies merely to our experience (the realm of 'representation'). The world in itself, by contrast, is simply an aimless and unending motion, unguided by any purposes.32 At a deeper level, therefore, life is utterly vain: '[E]very person invariably has purposes and motives by which he guides his conduct; and he is always able to give an account of his particular actions. But if he were asked why he wills generally, or why in general he wills to exist, he would have no answer; indeed, the question would indeed seem to him absurd.'33 Friedrich Nietzsche likewise invokes 'the meaning of life' in his 1874 'Untimely Meditations'.34 And his proclamation that 'God is dead' in The Gay Science (1882) is in part an acknowledgment that the world is devoid of significance and purpose. Meaning for Nietzsche becomes the sole patrimony of his 'Supermen' (Übermenschen): the future race he believed strong enough to face the inherent meaninglessness of the world and to forge new values of their own: 'It is a measure of the degree of strength of will to what extent one can do without meaning in things, to what extent one can endure to live in a meaningless world because one organizes a small portion of it oneself'.35
By the twentieth century, thinkers such as Jean-Paul Sartre and Albert Camus draw the logical conclusion that the absurd, rather than a world of meaning, is the real legacy of the 'Romantic idea'. For these theorists of the absurd, we are faced with the tension between our desire for significance and purpose in our lives and the world's apparent refusal to provide them. In Being and Nothingness (1943), Sartre interprets 'meaning' as a feature found only within our subjective agency: human agents form and pursue ends for a variety of projects in life. But no discrete choices of ours have the power to render the world itself, or the fact that we exist in the first place, something meaningful. Underlying our free acts is a brute reality devoid of sense and definition.36 In 'Existentialism is a Humanism' (1945), he confirms that a world of genuine meaning would require a divine artisan - whose existence Sartre himself rejects in the name of human freedom: 'When we think of God the Creator, we usually conceive of him as a superlative artisan …. Thus each individual man is the realization of a certain concept within the divine intelligence …. [However] there is no human nature, because there is no God to conceive of it …. Man is indeed a project that has a subjective existence …. Prior to that projection of the self, nothing exists, not even in divine intelligence, and man shall attain existence only when he is what he projects himself to be.'37 This brief historical survey suggests that even though 'the meaning of life' emerges in the milieu of post-Enlightenment European philosophy, it is nonetheless radically attenuated within this conceptual framework. The legacy of the 'Romantic idea' allows for a life whose distinct projects can be given some reference and direction, but whose existence as such can never be rendered meaningful, in a world without intelligible form or ends. We are left with a discourse that offers no account of the meaning of life after all, but only an account of how and why our lives lack significance and purpose. Consequently, a number of contemporary Analytic philosophers have resorted to labeling variants of this view theories of meaning 'in' life, rather than 'of' life.38 And they have largely abandoned the task of providing an account of the latter, as meaninglessness or absurdity appears to be the only possible fruit of such a labor.

36 Sartre, Being and Nothingness, pp. 481-556.
37 Jean-Paul Sartre, Existentialism Is a Humanism, trans. by Carol Macomber (New Haven: Yale University Press, 2007), pp. 21-23. Sartre's rejection of God as an artisan helps to illustrate what, in my estimation, makes 'the meaning of life' unintelligible within a modern philosophical framework. It is not merely that early modern philosophers abandoned a conception of nature governed by intrinsic formal and final causes (substantial form and teleology). It is that, on their principles, it no longer made sense to characterize nature and its intrinsic features as expressions of divine intelligence.

An ancient alternative?
However, the ambiguity of Romanticism's historical context permits us to draw a conclusion that Hochschild does not. The failure of post-Enlightenment philosophy to account for significance and purpose in the world suggests that 'the meaning of life' was never truly 'at home' in this conceptual discourse. In fact, Novalis himself suggests that it may be more intelligible within the kind of premodern religious world that he was attempting to revive. He notes that, far from a nineteenth century 'invention', 'meaning' is a word for what disappears in the late eighteenth and early nineteenth centuries. In his unpublished fragments from 1798, he argues that it is only with the rise of 'the modern way of thinking' that life and the world are first conceivable as meaningless (unbedeutend): 'The age has passed when the spirit of God could be understood. The meaning of the world is lost', and all that remains is its empty 'letter'.39 Moreover, Novalis occasionally describes 'romanticizing' not as investing the world with meaning, but as rediscovering something that the world already possesses, apart from our creative efforts.40 Both Goethe and Novalis characterize this as reclaiming an ancient, rather than a modern, conception of nature:41 that for which nature is 'visible spirit' and natural things signify the presence of souls (Seelen) and spirits (Geistern).42 According to Frederick Beiser, because the Romantics were engaged in a project of 're-enchanting' our understanding of the world, what they admired most about ancient religion is the notion that nature is the visible expression of an infinite Spirit: 'a unitary, self-sufficient substance'43 capable of infusing the natural world with 'a higher meaning … the finite with the appearance of the infinite'.44 This is the conceptual frame for which Novalis expresses nostalgia: 'Formerly, all things were spirit appearances. Now we can see nothing but dead repetition, which we do not understand. The meaning [Bedeutung] of the hieroglyph is missing. We are still living on the fruit of better times'.45 So although 'the meaning of life' is first uttered in the late eighteenth century, it is not always, as Hochschild would have it, uttered to express late eighteenth century ideas. It is just as correct to characterize it as a modern way of referring to something decidedly premodern: the significance and purpose given to the world by a divine spirit. Remarkably, in his Christendom or Europa (1799), Novalis identifies Medieval Catholicism as the zenith of this 'ancient' view of meaning. Catholic Europe, he writes, was a world of 'immortal meaning', wherein the 'meaning of the invisible' suffused all of life. Enlightenment disenchantment, with its 'stripping' of nature, is therefore a symptom of Europe's hatred of its Catholic past.46 Admittedly, Novalis' depiction of the Medieval Church was deeply contested and not particularly informed by history. But it contains nonetheless a suggestive kernel of truth. Cassedy affirms that in the ancient world there is 'virtually nothing like' this use of 'meaning' before the rise of Christianity.47 It appears first, he argues, in the writings of early Christians such as St. Augustine, who compares natural things to signs (signa) possessing a meaning (sensus) intended by a divine author.48 In fact, Novalis' image of life as a novel is undoubtedly a legacy of early Christian reflection. As Hans Blumenberg argues, the metaphor of nature as a 'book' that God authors was ubiquitous among the Church Fathers and Medieval Christians.
St. Anthony of Egypt compared the nature of created things to words written by God, always available for him to read.49 St. Augustine, Hugh of St. Victor, and St. Bonaventure all compare the natural world to Scripture - and thus things in the world to words whose 'meaning and significance [sensum et significationem]' reflect the intention of their creator.50 Mirela Oliva concurs, arguing that the 'spiritual meaning' (sensus spiritualis) developed in biblical hermeneutics is the true precedent for later, existential uses of 'meaning' - concerning as it does the significance of human life and of created things conforming to the divine will.51 She is right, therefore, to conclude that ''the meaning of life' comes from a long linguistic sedimentation' and 'is not, as Hochschild claims, a sudden appearance in Western philosophical vocabulary'.52 Indeed, in certain passages it is likely that the Jena Romantics utilized the lexicon of Sinn and Bedeutung in an attempt to reclaim the connotations that Latin meaning words (like sensus, sententia, significatio, etc.) took on in an ancient and Medieval Christian context.

Meaning as purpose in St. Thomas

We are now in a position to evaluate Hochschild's recommendation with respect to St. Thomas. Is the notion of life's 'meaning' incompatible with St. Thomas' metaphysical framework, as Hochschild suggests? As we've seen, phrases like 'the meaning of life' among the Romantics can refer to (1) significance: things in the world bear a signifying relation to ideas in the mind of an author, as words and artifacts do; or (2) purpose: things in the world exhibit order and direction, as if intended by a mind and will. We've also seen that, in at least some of his writings, Novalis uses the term 'meaning' to describe these as objective features of nature, caused by a divine mind or spirit. Finally, we've seen that for Novalis, this notion finds its most coherent expression in a pre-modern Christian worldview - especially that of Medieval Catholicism. If this suggestion has merit, then far from conflicting with 'the meaning of life', we would expect a framework like St. Thomas' to render it intelligible. Consider first meaning as purpose. Hochschild contrasts St. Thomas' understanding of purpose (finis) with meaning. Whereas 'meaning' suggests subjectivity, awareness, or consciousness, 'by "purpose", we don't mean an individual agent's intention or conscious sense of purpose, nor a particular path or vocation to fulfill, but the intrinsic, essential why of the species'.53 As Robert Pasnau notes, this view of nature as intrinsically purposeful was almost universally held in medieval physics and theology.54 As a medieval Aristotelian, St. Thomas agrees with Aristotle that nature (physis) is 'a certain principle and cause of change and stability' within things, determining not only what they are but what they act 'for the sake of'.55 A thing's purpose is therefore determined intrinsically by its substantial form (morphe): 'upon the form follows an inclination to the end, or to an action, or something of the sort; for everything, insofar as it is in act, acts and tends towards that which is in accordance with its form'.56 Aristotle too contrasts this kind of determination to ends with the kind imposed by an extrinsic agent or intelligence. He therefore argues in Book II of the Physics that nature is a sufficient cause of end-directedness in things, without the deliberative activity of a mind or will.57
No appeal to a divine intelligence or 'imperative ruler' is required to explain it.58 However, while St. Thomas affirms with Aristotle that nature is a genuine cause of purposefulness within things, he denies that it is a sufficient cause. Nature may be an intrinsic principle of motion, but not a principle of motion toward an end.59 Its teleological character must be 'traced back to an intellect' as its first, 'directing principle'.60 Take, for example, the pattern of reasoning displayed in St. Thomas' fifth way (quinta via): 'We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly [ex intentione], do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.'61 We find parallel arguments 'from the governance of things' (ex gubernatione rerum)62 in a number of his writings, among them his Commentary on Aristotle's Physics and the Summa Contra Gentiles: '[Aristotle] says, therefore, first that it must be pointed out that nature is among the number of causes which act for the sake of something …. For things which do not know the end do not tend toward the end unless they are directed by one who does know, as the arrow is directed by the archer. Hence if nature acts for an end, it is necessary that it be ordered by someone who is intelligent …'63 'Moreover, that natural bodies are moved and made to operate for an end, even though they do not know their end, was proved …. But it is impossible for things that do not know their end to work for that end, and to reach that end in an orderly way, unless they are moved by someone possessing knowledge of the end, as in the case of the arrow directed to the target by the archer. So, the whole working of nature must be ordered by some sort of knowledge.'64 As Lawrence Dewan notes, all of these arguments affirm that teleology has its proper origin in intelligence: there is a fundamental 'link between finality, i.e. the "telic", and mind'.65 And as St. Thomas notes in the De potentia, for an agent to order something to an end in the truest sense, it must know the end, grasp the concept (ratio) of an end, and know the relation between the thing and its end.66 Each of these requires a mind, and so is characteristic of 'an intelligent and voluntary agent' capable of directing and moving itself to ends: 'All ordering, therefore, is necessarily effected by means of the wisdom of a being endowed with intelligence'.67 It follows for St. Thomas that anything lacking intelligence which nonetheless exhibits purposefulness - such as nature - derives this feature from some intelligence. Even though in one sense nature is intrinsically purposeful, in another and more fundamental sense it owes this to the action of an intellect.68 The presence of purposefulness in the non-rational world is therefore a sign that nature has received something proper to intelligence, just as the determination of an arrow's flight to a definite target points to the intention of the archer.69
For St. Thomas, then, nature is revealed to be the instrument or medium of an intellectual activity transcending nature: 'the work of nature is the work of an intelligence'.70 It is, in the end, nothing but 'a certain kind of art [ratio cuiusdam artis], i.e., the divine art, impressed upon things, by which these things are moved to a determinate end'.71 And this entails that the ends of all things - as well as the order they bear toward those ends - must exist in God's mind and will, before and apart from their existence in nature. This is the feature of St. Thomas' account that best harmonizes with the notion of meaning, since 'meaning' suggests purpose as it exists in a mind and will, and not merely in nature. St. Thomas simply, and more traditionally, describes this as God's providence. Providentia for St. Thomas refers to the ends, and to the 'eternal ratio by which God orders all things' to those ends in the divine mind (ratio ordinis rerum in finem in mente divina). In its more general use, it is a kind of disposition (habitus) in the practical intellect that 'implies ordination to ends' and pertains to 'the form of a thing considered as directed to an end …'.72 In God, it is the aspect of his intellect to which purposes in the created world correspond: that 'type or order of things toward their end' that 'pre-exists in the divine mind', like the foresight exercised by a father over his family or by a ruler over his subjects.73 And this only comes to exist in nature by an act of God's will.74

Meaning as significance in St. Thomas

Consider next meaning as significance. Does St. Thomas hold that things in the world express or signify anything existing in a mind, akin to the way that words or artifacts signify? For thinkers such as Anaxagoras and Plato, natural things bear a 'likeness' to a mind (Nous) or to eternal forms and 'patterns' (paradeimata) existing apart from them. These function as exemplars, which are necessary to explain the order and determination of natural kinds: how they come to exist in this way rather than that. Yet in the Metaphysics, Aristotle dismisses this kind of causal explanation as so many 'empty words'.75 In the Physics, he notes that what distinguishes a natural substance from an artifact is that the former possesses a principle that is 'directly present in it' (substantial form).76

68 ST I-II, q. 6, a. 2, co.
69 De veritate 5.1.
73 Ibid.; cf. ST I, q. 22, a. 1, co.; De veritate 5.2, ad. 10: 'that order which is found in nature is not caused by nature …. Consequently, nature needs providence to implant such an order in it'.
74 ST I, q. 14, a. 8; De veritate, 5.1; SCG, 3.64. 'Governance' for St. Thomas signifies when an agent intends an end for another. It implies that a thing's tendency toward its end exists first in the intelligent agent directing it, and only exists within the thing itself because the agent wills it. It is analogous to how a ruler intends the good of his or her people and then communicates this tendency to them, ordering them to the common good. In the case of nature, this occurs principally through God's will, since both intention and inclinations - even those found within nature - are caused by the will: 'inclination is through the will'. Yet since on St. Thomas' view the 'will does not ordain' (ST I-II, q. 12, a. 1, ad. 3), it is more proper to say that God's intellect is responsible for orienting things to their ends, while his will acts as an efficient cause to move them.
On his view, nature is already ordered and determined from within, so its forms need not 'refer' to any extrinsic exemplars, whether existing in a mind or not. Aristotle, therefore, not only rejects Plato's Ideas, but he denies that gods design or craft natural things as human artisans do. 77 In many respects, St. Thomas appears to be a faithful Aristotelian. Like Aristotle, he gives substantial form pride of place in explaining the determination of natural things. Moreover, in his Commentary on the Metaphysics, Thomas echoes Aristotle's critique of Plato's Ideas. Yet just as he does with respect to purpose, St. Thomas denies that intrinsic forms are sufficient causes of the order and determination found within the natural world. 78 He insists that appeal must ultimately be made to exemplars in the divine mind: even substantial forms in nature must be 'reduced to the divine wisdom as its first principle, for divine wisdom devised the order of the universe…'. 79 St. Thomas once more distinguishes between the limited way in which nature functions as a cause and the fuller, sufficient way in which the divine intellect functions as a cause. He notes, following Aristotle, that every natural agent - that is, an efficient cause acting in virtue of its substantial form - acts to induce its form in the things it generates: humans generate other humans, fire generates more fire, etc. The resulting form thus bears a relation of likeness to - or signifies - the form of the agent. However, for St. Thomas, natural agents are in this way only able to educe or draw out the forms of what they generate from matter, determining why 'this' matter takes 'this' form. They cannot account for the very existence of the forms they educe. 80 This requires an act of creation, and thus the operation of an intelligent cause beyond nature. And since this cause acts by intellect and will, he must possess in his mind ideas which serve as exemplars of the intrinsic forms he creates in nature. These then help determine the forms of natural things, akin to the way that the idea of a house in the mind of a builder does; 'since the builder intends to build his house like the form conceived in his mind'. 81 St. Thomas draws the same conclusion from the principles of teleology we have already examined. In a variety of his works, he notes that when nature acts to generate new substances - 'as a man generates a man, or fire generates fire' - the form of the new substance 'must be the end' or goal that some natural agent acts for. Generation […] even though 'the question of the meaning of life barely even arises' for St. Thomas, it can 'readily be given an affirmative answer when it does arise'. 88 While Hochschild is correct that much of the post-Enlightenment 'meaning of life' discourse ends in incoherence, I hope to have shown that Thomists needn't respond by ignoring the question it raises. Careful attention to the diverse ways in which the German Romantics used the phrase supports a conclusion Hochschild refrains from drawing: that the 'meaning of life' was always in a sense more at home in a conceptual framework like St. Thomas'. This is because his metaphysics can account for the purposefulness and significance of the world. More specifically, it reveals that what some of its earliest proponents intended by 'the meaning of life' was always in principle accounted for by St.
Thomas' understanding of providence and divine art. Rather than refrain from asking the question, then, Thomists ought to engage in philosophical debates about life's meaning with confidence, ready to demonstrate the superior explanatory power of St. Thomas' thought before the many post-Enlightenment voices that dominate the discourse. In doing so, they may very well do for their contemporaries, despairing of the conceptual poverty of the alternatives, what St. Thomas did for the Maritains: 'enlist their total allegiance' and 'deliver' them 'from the nightmare of a sinister and useless world'. 89

[Note 89: … Society, 53 (2016), 294-96; Susan Wolf, Meaning in Life and Why It Matters (Princeton: Princeton University Press, 2012); Candace Vogler, 'The Place of Virtue in a Meaningful Life', in Self-Transcendence and Virtue: Perspectives from Philosophy, Psychology, and Theology, ed. by Jennifer A. Frey and Candace Vogler (New York: Routledge, 2018), pp. 84-92.]
The Significance of Spatial Reconstruction in Finite Volume Methods for the Shallow Water Equations

Noor Hidayat, Suhariningsih, Agus Suryanto and Sudi Mungkasi

We study the significance of the spatial reconstruction when solving the one dimensional shallow water equations using a finite volume method. For that aim, we implement the explicit forward Euler method for temporal integration, while the spatial discretization is performed by a finite volume method. We compare the results of constant spatial reconstruction with those of linear spatial reconstruction. The numerical tests include the steady state of a lake at rest, the steady state of moving water and an unsteady state of dam break problem. It is shown that the spatial reconstruction has a significant role in the accuracy of the finite volume method.

Introduction

The free surface, unsteady water flow is modeled by the well-known Saint-Venant equations. This model is also called the shallow water (wave) equations. Accurately solving these equations is important, because it can help simulations of natural events, such as floods, tsunamis, dam breaks, tides, etc. To get numerical solutions of these equations, there are many numerical methods available in the literature [8,9,12,15], for example finite difference and finite volume methods. Finite difference methods are based on the differential form of the equations. They may lead to some difficulties when we want to resolve discontinuities, because differential equations assume that solutions are smooth. In contrast, finite volume methods are based on the integral form of the equations. Integral equations do not assume smoothness of their solutions, and hence finite volume methods are able to resolve smooth and nonsmooth solutions (see [2,6,7,8,13]). However, the accuracy of those numerical methods will depend on the integration with respect to both time (temporal) and space (spatial).

In this paper we investigate the significance of spatial reconstruction in finite volume methods when solving the shallow water equations. We show that a higher order reconstruction of the spatial domain can improve the accuracy of the numerical methods. To do so we use one type of temporal integration. We then compare the performance of two types (that is, constant and linear) of spatial reconstructions.

This paper is organized as follows. Shallow water equations are recalled in Section 2. We present the finite volume method that we use to solve the shallow water equations in Section 3. Numerical results are presented in Section 4. We draw some concluding remarks in Section 5.

Shallow Water Equations

We consider the following one dimensional shallow water equations

$$h_t + (hu)_x = 0, \quad (1)$$

$$(hu)_t + \left( hu^2 + \tfrac{1}{2} g h^2 \right)_x = -g h B_x, \quad (2)$$

where $t$ denotes the time variable, $x$ denotes the space variable, $h = h(x,t)$ is water height or depth, $u = u(x,t)$ is velocity, $B = B(x)$ represents the bottom elevation or topography, and $g$ is the acceleration due to gravity. The absolute water level (stage) is defined as $w = h + B$. Equations (1) and (2) can be written in vector form as

$$\mathbf{q}_t + \mathbf{f}(\mathbf{q})_x = \mathbf{s}(\mathbf{q}), \quad (3)$$

in which $\mathbf{q} = [h,\ hu]^T$, $\mathbf{f}(\mathbf{q}) = [hu,\ hu^2 + \tfrac{1}{2} g h^2]^T$ and $\mathbf{s}(\mathbf{q}) = [0,\ -g h B_x]^T$. We refer to [5] for these forms of shallow water equations.
Finite Volume Methods

In this section, we recall a finite volume method proposed in [5,7], which was developed for steady state problems. The finite volume method can then be used to solve steady and unsteady state problems. Here we assume that the space is discretized into a finite number of cells uniformly with cell width $\Delta x$, and that time is also discretized uniformly with time step $\Delta t$. Then equation (3) can be solved using the finite volume method

$$\mathbf{q}_j^{n+1} = \mathbf{q}_j^{n} - \frac{\Delta t}{\Delta x} \left( \mathbf{H}_{j+1/2}^{n} - \mathbf{H}_{j-1/2}^{n} \right) + \Delta t\, \bar{\mathbf{s}}_j^{n},$$

where $\mathbf{H}_{j\pm 1/2}$ are numerical fluxes. Here subscript $j$ represents the $j$th cell and superscript $n$ denotes the time level at $t^n = n \Delta t$. This means $x_{j+1/2}$ is the right vertex of the $j$th cell. The variable $\bar{\mathbf{s}}_j$ is an approximation of the analytical source $\mathbf{s}(\mathbf{q})$. At a vertex of a cell, we use approximations $\mathbf{q}_{j+1/2}^{-}$ and $\mathbf{q}_{j+1/2}^{+}$ for both sides, in which the superscript "-" is for the left side approximation and the superscript "+" is for the right side approximation of that vertex. Both approximations at left and right sides of the vertex are obtained from polynomial reconstructions

$$\tilde{\mathbf{q}}(x, t) = \sum_j P_j(x, t)\, \chi_j(x),$$

where $P_j$ is a polynomial supported on the interval $[x_{j-1/2}, x_{j+1/2}]$, which is centered at the midpoint $x_j$ and is defined at time $t$, and $\chi_j$ is the characteristic function. Note that for the first order space discretization we only require constant polynomials, while for the second order we need piecewise linear polynomials. Let the linear functions be

$$P_j(x, t) = \mathbf{q}_j(t) + \sigma_j(t)\,(x - x_j),$$

where $\sigma_j$ is the slope. This slope must be chosen with care so that numerical solutions of the shallow water equations are non oscillatory. This requires that the value of the slope must be limited. A well-known type of that limiter is the minmod slope. The minmod limiter was used by a number of authors for their work, such as in [1,4,5,10,11,16]. In this paper, we use the following minmod limiter as in [5]:

$$\sigma_j = \operatorname{minmod}\!\left( \frac{\mathbf{q}_j - \mathbf{q}_{j-1}}{\Delta x},\ \frac{\mathbf{q}_{j+1} - \mathbf{q}_j}{\Delta x} \right),$$

where $\operatorname{minmod}(a, b) = \tfrac{1}{2}\left( \operatorname{sgn}(a) + \operatorname{sgn}(b) \right) \min(|a|, |b|)$. Note that if $\sigma_j = 0$ for all $j$, then the space discretization becomes first order.

In order that the finite volume method is able to solve the steady state problems (as well as unsteady state problems), we use the central semi discrete scheme. Here we implement a numerical flux $\mathbf{H}_{j+1/2}$, as in [7], and numerical source terms, $\bar{\mathbf{s}}_j$, as given in [5]. This scheme is then

$$\frac{d}{dt}\,\bar{\mathbf{q}}_j(t) = -\frac{\mathbf{H}_{j+1/2}(t) - \mathbf{H}_{j-1/2}(t)}{\Delta x} + \bar{\mathbf{s}}_j(t), \quad (12)$$

where $\bar{\mathbf{q}}_j$ denotes the cell average over the $j$th cell, and the flux and source evaluations use the reconstructed point values $\mathbf{q}_{j\pm 1/2}^{\mp}$. For simplicity, we apply the forward Euler method to solve equation (12).

Numerical Results

In this section we present numerical results for three test cases, namely (a) the steady state of a lake at rest, (b) the steady state of moving water and (c) an unsteady state of dam break problem. We compare the results of first order discretization in space (Method I) and those of second order spatial discretization (Method II). Our numerical setting is as follows. We test the finite volume method for the following three cases using uniform cells. The number of cells $N$ is chosen to be 100, 200, 400, 800, 1600, 3200. For the time step, we take a uniform $\Delta t$. We calculate the numerical error and the convergence rate. To quantify numerical errors, we use the absolute error $e_i = |q(x_i) - Q_i|$, where $q(x_i)$ and $Q_i$ are the exact and numerical solution at $x_i$, respectively. To compute the convergence rate, we use the following formula [14]:

$$\text{Rate} = \log_2 \frac{E_N}{E_{2N}}, \qquad E_N = \frac{1}{N} \sum_{i=1}^{N} e_i .$$

Here $e_i$ is the error at the $i$th cell. All quantities are measured in SI units. Therefore, any omitted units should be noted to have SI units.
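To make the reconstruction step concrete, the following is a minimal NumPy sketch of the piecewise-linear (Method II) reconstruction with the minmod limiter described above. It is an illustrative sketch under the notation of this section, not the authors' implementation; the array layout and function names are assumptions, and the numerical flux of [7] is not reproduced.

```python
import numpy as np

def minmod(a, b):
    """minmod(a, b) = 0.5*(sgn(a) + sgn(b)) * min(|a|, |b|), elementwise."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def linear_reconstruction(q, dx):
    """Piecewise-linear (Method II) reconstruction of cell averages.

    q has shape (n_cells, 2) holding the conserved quantities (h, hu).
    Returns the left ("-") and right ("+") point values at each interior
    vertex x_{j+1/2}; slopes are minmod-limited to keep the scheme
    non-oscillatory."""
    dq = np.diff(q, axis=0)                      # q_{j+1} - q_j
    sigma = np.zeros_like(q)                     # boundary cells keep zero slope
    sigma[1:-1] = minmod(dq[:-1], dq[1:]) / dx   # limited slope in interior cells
    q_minus = q[:-1] + 0.5 * dx * sigma[:-1]     # value at right edge of cell j
    q_plus = q[1:] - 0.5 * dx * sigma[1:]        # value at left edge of cell j+1
    return q_minus, q_plus
```

Forcing `sigma` to zero everywhere reduces the routine to the constant reconstruction of Method I, which is exactly the first-order/second-order comparison studied in this paper; the forward Euler step then advances the cell averages with the fluxes assembled from these point values.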
(a) The steady state of a lake at rest

The test of a lake at rest problem is intended to see if the above finite volume method is able to resolve the steady state of still water. We follow the test presented in [3]. Consider a lake with 1500 m of length. At the downstream boundary the water level is imposed to be 12 m, and at the upstream there is no discharge. The initial condition is water at rest at the level of 12 m. The analytical solution is obviously:
- water at rest: discharge and flow velocity are zero,
- flat free surface: water level stays at the initial level of 12 m.
We consider the geometry as given in Figure 1, and the complete description of this geometry (see [3]) is given in Table 1. We confirm that both methods are well-balanced, that is, the steady state of the lake at rest is preserved up to discrete level; see Figure 2 for the stage, momentum and velocity produced by Method I. We note that for that scale, Method II generates the same plots. These results are calculated using 400 cells with final time 10 seconds.

(b) The steady state of moving water over a bump

This test of steady flow over a bump is intended to verify if the numerical method can resolve the steady state of moving water. We use the following data geometry [3]. The channel length is 25 m (meter) and the bottom equation is

$$B(x) = \begin{cases} 0.2 - 0.05\,(x - 10)^2 & \text{if } 8 < x < 12, \\ 0 & \text{otherwise.} \end{cases}$$

The boundary and initial conditions are as follows. At downstream, the water level is imposed to be 2 m. At upstream, the water discharge is imposed to be 4.42 m³/s (s = second). Initially, we have a constant water level which is equal to the level imposed downstream, with discharge equal to zero. The stage, momentum and velocity obtained by Method II at final time 30 seconds are plotted in Figure 3. It is seen that the numerical solutions agree very well with the exact solution. We note that Method I also generates the same plot. However, detailed analysis shows that Method II has much better accuracy, as shown in Table 2. Here we use 400 cells and final time 30 seconds.

Conclusions

The influence of spatial reconstruction in finite volume methods when solving the shallow water equations has been investigated. The constant reconstruction for the space domain is simple and cheap to compute. However, we find that linear reconstruction of the space domain gives a great improvement to the accuracy of the methods. We conclude that the accuracy of the spatial reconstruction has a significant role in the accuracy of the numerical methods.

Figure 1. The geometry profile of the lake at rest.
Figure 2. Stage, momentum and velocity of lake at rest problem by Method I. Here we use 400 cells and final time 10 seconds.
Figure 3. Stage and momentum on flow of obstruction by Method II. Here we use 400 cells and final time 30 seconds.
Figure 4. Stage and momentum on simulation of dam-break problem by Method I and II. Here we use 400 cells and final time 0.05 seconds.
Table 1. Complete description of the geometry. Here x is the abscissa of B and B(x) is the value of the B function at point x. Both x and B(x) are measured in meters.
Table 2. The error of obstruction problems by Method I and II.

(c) An unsteady state of dam break problem

The dam break problem is intended to test if the numerical method can
Metabolomics to Predict Antiviral Drug Efficacy in COVID-19

Infection with the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can lead to severe pneumonia, lung function impairment, and multiple organ failure that can be fatal (1). There are currently no U.S. Food and Drug Administration-approved therapies across the spectrum for patients affected with coronavirus disease (COVID-19). However, several experimental approaches, including repurposing of the RNA polymerase-inhibiting antiviral agents, have improved the health outcomes among patients with COVID-19 (2). In Southeast Asia, a combination therapy of ribavirin, a nucleoside analog, together with two nonnucleosidic antivirals used to treat the human immunodeficiency virus (HIV) has shown some promise in mild-to-moderately ill patients (3), as did a study employing another nucleoside-based antiviral agent, favipiravir (4). In the United States, the most promising drug therapy thus far has been remdesivir (GS-441524). A multisite trial indicated that treatment with remdesivir was associated with speedy recovery among hospitalized patients infected with SARS-CoV-2, which prompted the U.S. Food and Drug Administration to allow emergency use access of the drug for COVID-19 treatment on May 1, 2020 (5). Despite these promising recent developments, strategies that could help clinicians predict which patients are most likely to respond effectively to a given therapeutic regimen remain perfunctory. Patient prioritization and treatment matching should be paramount in ensuring optimization of therapeutics to thwarting this pandemic. Along these lines, we reported that patients who die from sepsis syndrome and acute respiratory failure initially present in the emergency department and the medical intensive care unit with a conspicuous metabolomic profile (6-9). Among the most striking changes were the increases in metabolites related to the de novo production of nicotinamide adenine dinucleotide (NAD; a key cofactor central to metabolism), mitochondrial function, and production of ATP, as summarized in Table 1. In these patients, the normal endogenous precursors to NAD, as well as purine and pyrimidine nucleobases and nucleosides, were rerouted from their normal biosynthetic pathways. Furthermore, patients with poor outcomes presented with metabolomic dysfunction that appears to be irreversible, as evidenced by the accumulation of unprocessed tricarboxylic acid cycle metabolites and carnitine esters. Together, these markers not only predict mortality but also suggest that nonsurvivors have an acute bioenergetic crisis likely attributable to severe decrements in mitochondrial function and metabolism that we have observed several days prior to death (6-9).
Recent metabolomics and proteomics studies on patients with COVID-19 with associated severe respiratory distress demonstrated plasma metabolomic signatures similar to those described above for sepsis syndrome (10,11). The results implicated dysregulation of macrophage function, platelet degranulation and complement system pathways, and metabolic suppression, similar to the acute bioenergetic crisis profile we previously observed in patients with sepsis with poor outcomes (6,8). Here, we posit that success in reducing the viral burden in patients with SARS-CoV-2 using antiviral drugs that first require intracellular ATP-dependent activation will be contingent on the overall bioenergetic phenotype of the patient. All the nucleoside-based drugs currently considered for SARS-CoV-2 treatment (e.g., remdesivir, ribavirin, and favipiravir) require functional activation by host enzymes that employ endogenous ATP for their conversion to the active triphosphate species. For instance, remdesivir must be converted to its triphosphate form to become a substrate for the viral replicase-transcriptase and get integrated into the growing viral RNA chain to prevent the full replication of the virus (12). Ribavirin also needs ATP for activation, whereas favipiravir, a nucleobase analog, requires initial conversion to its nucleotide form via a mechanism that requires phosphoribosyl pyrophosphate, another high-energy intracellular biomolecule (13). These activation processes and their dependence on ATP levels may explain the limited success of some of the nucleoside-derived drugs targeted at the viral replicase-transcriptase. An impaired "energy status" of a patient, characterized by the decrement of high-energy metabolites such as ATP and phosphoribosyl pyrophosphate (14), may impede effective drug conversion and thus decrease efficacy against viral replication. The implications of the present considerations (as outlined in Figure 1) offer opportunities for stratification of patients with COVID-19 based on their metabolic phenotype to maximize drug efficacy. Monitoring patients' bioenergetics status might help rationalize why a given replicase-transcriptase inhibitor is successful in some patients and not in others. With this perspective, drugs with significant dependence on ATP will be less effective in patients presenting with advanced metabolic dysfunction. Therefore, we propose that ribavirin or favipiravir, drugs that require multiple-stage functionalization, would have a better chance of success in patients presenting with a near-normal metabolic profile. However, patients that present with a metabolomic phenotype of an acute bioenergetic crisis could be treated with drugs that require less energy, such as remdesivir, as it requires low ATP commitment for drug activation. One could also consider the severity of the bioenergetic crisis in the context of cellular metabolism and its relationship to patient outcomes and survival. For example, antiviral agents may be ineffective in patients presenting with an advanced metabolic dysfunction. This may explain why remdesivir can improve duration of symptoms but has no statistical benefit on patient survival (11,15). In such cases, targeted metabolic strategies or nutritional supplementation that include remediation of the NAD and ATP pools could be implemented to reduce the impact of the acute bioenergetic crisis on dysregulated immune and repair responses that lead to multiorgan failure.
Moreover, correction of these nutritional deficiencies may be necessary to optimize drug responses. In conclusion, metabolomic phenotyping may represent an important step toward personalized therapeutics in patients infected with COVID-19. First, it will help enhance the therapeutic efficacy of ATP-dependent replicase-transcriptase inhibitors currently under clinical investigation against COVID-19. Drugs with significant dependence on ATP to achieve functionality against the viral target might be less effective in patients presenting with advanced metabolic dysfunction. Second, this metabolomic phenotyping will also inform the need to integrate balanced metabolic and nutritional strategies within the treatment regimen to optimize patient recovery. Defining and modulating the bioenergetic state in a risk-stratified and personalized approach could have long-term impact in improving patient outcomes to SARS-CoV-2 infections.

Author disclosures are available with the text of this letter at www.atsjournals.org.

Figure 1. Response to viral drug therapies in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) may be dependent on the metabolic status of the patient. A patient's metabolomic phenotype can predict patient outcomes as well as the status of cellular metabolism. In particular, the function of nicotinamide adenine dinucleotide (NAD) is critical for cellular metabolism as well as energy production such as ATP. Because most viral transcriptase inhibitors are dependent on ATP for activation and incorporation with viral RNAs, cellular metabolism and energy production can critically affect the efficacy of certain antivirals. Monitoring the metabolomic phenotype in clinical trials that use antiviral drugs will be critical for optimization of drug efficacy.
Studies on Cleistopholis patens (Benth.) [Magnoliales: Annonaceae] powders as fumigant and contact insecticides to Plodia interpunctella (Hübner) [Lepidoptera: Pyralidae]

Post-harvest losses of agricultural produce, most especially cereals and grains, to stored-product pests call for the development of cheap, eco-friendly and readily available insecticides to combat this threat and achieve the goal of food security in developing countries. This study investigated the effects of Cleistopholis patens (leaf and stem bark) on Plodia interpunctella infestation in stored maize grains. Powders from the plant were administered at 0.5, 1.0, 1.5, 2.0 and 2.5 g dosages to maize grains containing developmental stages of P. interpunctella, both as contact and fumigant insecticides. The insecticidal activities were monitored at 24, 48, 72 and 96 h post-treatment periods. As a contact insecticide, C. patens was significantly (P˂0.05) more effective than as a fumigant against all developmental stages of the pest. The leaf powder was observed to be weakly effective both as contact and fumigant against P. interpunctella. At its peak, 69.17% larval mortality was achieved at 2.5 g dosage after 96 h exposure, but 1.5 g dosage of the stem bark achieved 0% egg hatchability and 100% larval and adult mortalities at the same length of exposure. Inferences from these results suggest that the plant has some bioactive constituents which, if properly harnessed, can be co-opted into integrated management of P. interpunctella infesting stored products.

INTRODUCTION

Damages to stored grains and their products by insects had been estimated as 5-10% in the temperate countries and 20-30% in the tropical zones (Nakakita, 1998). Grain storage around the world had been relying heavily on the use of synthetic pesticides, which of course have played a major role in food storage and protection and have tremendously benefited humankind in the past. Aside from these great contributions, their continued usage has triggered several ecological, resistance and health-related challenges (Verma and Dubey, 1999). It has been reported that over 2.5 million types of such pesticides are used in agricultural crop protection annually across the globe, and that over $100 billion is being spent annually to either combat or manage the side effects of these pesticides on man and the environment (USEPA, 2011). Hence the search for eco-friendly and biodegradable pesticides for crop protection and management had been greatly encouraged over the last five decades (Sengottaiyan, 2013). It is expected that the ideal insecticide should control the target pest adequately, be rapidly degradable and be non-toxic to humans and livestock. The use of botanical pesticides to make up for various shortfalls identified with synthetic pesticides had been promising over the years. There have been reviews on the use of plants' secondary metabolites/phytochemicals to control the threat of pests' infestation on stored grains by several authors (Mason et al., 1987; Rosenthal and Berenaum, 1992; Tan and Luo, 2011). This study is therefore designed to assess the bioactivity of the powders from Cleistopholis patens (Benth.) Engl.
& Diels against the egg, larval and adult stages of the Indian meal moth, Plodia interpunctella (Hübner, 1813), a worldwide insect pest of stored products and processed commodities.

Sourcing of plant materials

The required quantity of the plant C. patens was completely uprooted from the source in the forest region along Akure-Ondo Road, Akure, Nigeria (located at latitude 7.2571°N and longitude 5.2058°E). The maize grains were bought at Isikan Market, Akure, Nigeria.

Preparation of plant materials

C. patens samples collected from the field were transported in protective bags to the Biology Laboratory 2 of the Federal University of Technology, Akure. The plants were thoroughly washed with water, then the root barks were carefully removed with a sharp knife and air-dried in the laboratory for 50 days, after which they were pulverized into fine powder using a Binatone electric blender (Model 373). The powdery samples were further sieved to pass through 1 mm² perforations to obtain labeled samples of fine powders, which were kept in separate airtight plastic containers and stored at an ambient temperature of 28 ± 2°C and 75 ± 5% RH pending use. The maize grains used were first sorted to remove both contaminants and damaged ones. The grains were later disinfected in the oven at 60°C for 4 h and allowed to cool on an open laboratory bench for 5-6 h before being stored in plastic containers until usage.

Insect culture

500 g of maize grains was weighed into two kilner jars (1 L each). Ten newly emerged adults (5 males and 5 females) of P. interpunctella were introduced into each of the jars. The jars were kept in the culturing chamber until the F1 generation emerged. Insects were identified using standard entomological keys.

Contact bioassay of C. patens against stages of development of P. interpunctella

Twenty freshly laid eggs (0-24 h old) were placed on 20 g of maize grains treated with 0.5, 1.0, 1.5, 2.0 and 2.5 g dosages and a control (untreated); leaf and stem bark powders were separately placed inside plastic containers (8 cm diameter and 4 cm depth). Each treatment was replicated thrice with a corresponding control. Daily observations were made with a dissecting microscope to determine the number of hatching eggs from the total number of infested eggs. The experiments were randomly arranged and kept inside a breeding cage with wire mesh (75 × 50 × 60 cm). After the hatchability period (0-7 days), the rearing containers were covered with muslin cloths held in place with rubber bands; and during 40 days, the number and percentage of adults emerged was determined. The aforementioned procedure was repeated for both the larval and adult contact bioassays, but the container covers were punched with a hot iron rod and lined with muslin on the inside to prevent larvae and adults from escaping and to allow aeration. Ten larvae (third instar) were introduced into each bioassay of treated and untreated grains, replicated three times. The numbers of dead larvae and adults were counted and percentage mortality was determined.

Fumigant bioassay of C. patens against stages of development of P.
interpunctella

The following dosages: 0.5, 1.0, 1.5, 2.0 and 2.5 g and a control (untreated) of leaf and stem powders of the plant were separately weighed and sealed in muslin cloth (5 cm by 5 cm); they were hung on the lid of each of the plastic containers (8 cm depth, 4 cm diameter). Twenty freshly laid eggs (0-24 h old) were introduced into each of the plastic containers containing 20 g of maize grains and covered with a lid. The plant powder was hung between the lid and the bottom at equal distance, and the container was made airtight. The treated and the control (untreated) were replicated three times. Daily observations were made using a dissecting microscope to determine the number of eggs hatched from the total number of eggs introduced, and the experiment was left inside the insect breeding wire mesh cage pending adult emergence. At the end of the 40-day post-treatment period, the total number of adults that emerged was determined and the percentage mean was calculated. The same procedure was repeated for larvae. The dead larvae were counted and percentage mortality calculated after 24, 48, 72 and 96 h post treatment. The same procedure was repeated for the adult experiments.

Data analysis

Analysis of data was done using SPSS version 23. Means were separated using Tukey's test.

Contact bioassay of C. patens against stages of development of P. interpunctella

The leaf powder of C. patens was observed to be weakly effective against the developmental stages of the pest (Tables 2 and 3). Unlike the leaf powder, C. patens stem powder was observed to be very effective in the control of the developmental stages of P. interpunctella. Table 4 reveals the contact effects of the powder on egg hatchability. It shows that hatchability was completely suppressed to 0% at 1.5 g powder dosage, as against 63.33% recorded when the leaf powder was used. Table 5 reveals 100% larval mortality at 2.0 g powder dosage after 72 h post-treatment exposure. Also as a contact insecticide, 100% adult mortality was achieved at 1.5 g dosage, but after 96 h exposure, as reflected in Table 6.

Fumigant bioassay of C. patens against stages of development of P. interpunctella

The leaf powder was slightly less effective in the control of the developmental stages of the pest when used as a fumigant insecticide. Table 4 shows 68.33% egg hatchability at the highest powder dosage of 2.5 g. The same dosage yielded 36.26 and 53.33% larval and adult mortalities after 96 h post-treatment exposure (Tables 7 and 8). Tables 7 to 9 show that the fumigant insecticidal activity of the stem powder was slightly less effective than the contact effect. Table 7 shows 0% hatchability at 2.0 g powder dosage. It was also 0% under contact treatment, but at 1.5 g dosage (Tables 10 to 12).
DISCUSSION

Food security for the increasing world population, most especially in countries where pest control management is not of major concern, has been and is still being significantly challenged over the years (Olotuah, 2014). Grain storage across the globe had been relying heavily on the use of synthetic pesticides against insect infestation, the use of which has triggered a number of ecological, health-related and pest resistance problems (Verma and Dubey, 1999). Works on organic pesticides to make up for these shortfalls had been promising. Several botanical products have been discovered as potent in the control of storage pest infestation (Ofuya and Dawodu, 2002; Adedire and Ajayi, 2003; Tan and Luo, 2011). The results of this study have shown that the botanical powders of various compositions from C. patens are toxic to the egg, larval and adult stages of P. interpunctella in stored products, most especially maize grains. This is in agreement with Akinneye et al. (2006), who showed the efficacy of root bark, stem bark and leaf powders of C. patens at varied compositions, both as contact and fumigant insecticides, in the control of egg and adult emergence stages of some coleopteran and lepidopteran storage pests. This result reveals a significant contact effect as compared with the fumigant effects of the powders on the moth pest. A 1.5 g dosage of the stem bark powder effected 0% egg hatchability and 100% larval and adult mortalities within 72-96 h of exposure when used as a contact insecticide. The fumigant treatment achieved high larval and adult mortalities only after 96 h of exposure. The inability of the eggs to hatch may be because the powder inhibits gaseous exchange between the eggs and
Manage risks in complex engagements by leveraging organization-wide knowledge using Machine Learning

One of the ways for organizations to continuously get better at executing projects is to learn from their past experience. In large organizations, the different accounts and business units often work in silos, and tapping the rich knowledge base across the organization is easier said than done. With easy access to the collective experience spread across the organization, project teams and business leaders can proactively anticipate and manage risks in new engagements. Early discovery and timely management of risks is key to success in the complex engagements of today. In this paper, the authors describe a Machine Learning based solution deployed with MLOps principles to solve this problem in an efficient manner.

Introduction

For project-centric organizations, cost-effective, differentiated delivery is key to success. Early understanding of risks and mitigations plays a crucial role in achieving this. Our discussions with project managers and business leaders revealed a need to learn from the experience in similar projects, understand the risks they faced and plan to mitigate such risks in advance. Such learning from the experience of similar projects executed across the enterprise results in significant business benefits.
• Early discovery of risks results in proactive risk mitigation, cost savings, enhanced customer satisfaction and increased revenue-generation opportunities.
• Collaboration between teams from similar projects helps in sharing of ideas and best practices to improve delivery quality and create a culture of knowledge-sharing.
Manually maintained rule-based methods to identify similar projects involve using multiple, restrictive, subspace search rules. Rules need to be continuously managed and constantly updated. This approach has serious limitations.
• Inability to do contextual text comparison: It becomes an arduous task to define and maintain scalable rules to search similar terms, e.g., similar tools and technologies. It is almost impossible for the manual rules to scale and pick contextually similar risks.
• Poor user experience: Using filters defined by manual rules results in a very restrictive subspace search, often yielding no results beyond a point. Users typically expect auto-populated results, rather than a filtering approach.
Hence a scalable, enterprise-level, Machine Learning (ML) based solution is required to overcome these limitations.

Solution

Our solution comprises two components, as outlined in Figure 1.
• Project similarity: This component identifies similar projects across the organization.
• Risk similarity: This component then maps the risks tracked in such similar projects to contextually similar risks from a set of curated risks.

Project similarity

There is no labeled data available that identifies similar projects; hence this needs to be an unsupervised ML solution. We envisioned project information as a collection of all important text that describes the project. This way of envisioning the project information is highly scalable, as additional information that becomes available in future can be easily added as text without any change in architecture. Significant expressions related to the project are extracted from the text using a key phrase extraction algorithm [1]. The fastText [2] embedding is used instead of word2vec [3] due to its ability to produce rich word embeddings at sub-word level and its ability to handle minor misspellings.
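As a rough illustration of this component, the sketch below embeds extracted key phrases with fastText and scores project pairs with the arc-cosine similarity described in the next section. The pretrained-model path, the averaging of phrase vectors and the function names are assumptions for illustration; the paper does not specify them, and the key phrase extractor of [1] is assumed to have run upstream.

```python
import numpy as np
import fasttext  # official fastText Python bindings

# A pretrained model such as the public cc.en.300.bin; local path is hypothetical.
model = fasttext.load_model("cc.en.300.bin")

def project_vector(key_phrases):
    """Represent a project as the mean fastText vector of its key phrases.

    Sub-word embeddings keep the representation robust to minor
    misspellings and rare technology names."""
    vecs = np.array([model.get_sentence_vector(p) for p in key_phrases])
    return vecs.mean(axis=0)

def arc_cosine_similarity(u, v):
    """Arc-cosine similarity in [0, 1]: 1 - angle/pi.

    Unlike raw cosine, the angle does not flatten near the extremes,
    so highly similar projects stay distinguishable at high similarity."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi
```

Ranking all stored project vectors against a query project's vector by this score yields the auto-populated "similar projects" list that replaces manual filtering rules.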
Arc cosine similarity is used to get similar projects instead of cosine similarity, to avoid the limitation of flattening at the extreme ends of the spectrum, which makes similar projects indistinguishable at high similarities.

Risk discovery from similar projects

Risk discovery from similar projects involves discovering risks from the curated set that are contextually similar to the risks tracked in similar projects.

Risk and mitigation curation

The risks tracked in similar projects often have references to context and information that is very specific to the project. In order to make the risk and mitigation suggestions more useful to the users, it was decided to present these from a curated risk database instead of the raw risks from the similar projects. Commonly occurring risks and recommended mitigations were manually compiled by subject matter experts as a curated risk database, after analyzing historic risks and mitigations recorded in the enterprise risk platform.

Contextual risk similarity

The Universal Sentence Encoder [4], a large transformer-based module trained by Google and released on TensorFlow [5] Hub, has shown an excellent ability to understand the context of paragraphs and provide semantic similarity with high relevancy. It is used to get the cosine similarity between the raw risks and the manually curated risks. Highly similar curated risks above a similarity threshold are extracted. This threshold was decided based on functional evaluation of a random set of raw risks and curated risks. The raw risks that do not have a matching curated risk above the threshold are periodically evaluated and accommodated by a combination of the following approaches:
1. Creation of new curated risks
2. Usage of advanced pretrained modules
3. Siamese fine tuning of the module using semantically similar, but low threshold, inputs.
The Appendix section of this paper has further details on the research conducted on Siamese fine tuning.

Duplicate risk removal through semantic similarity

When a set of subject matter experts writes curated risks in silos, the same curated risk can be represented in different words, resulting in outputs with duplicate information for the end user. Hence, prior to showing the risks to the end user, duplicate removal is done by applying the same risk similarity check on the interim output. As a result, only unique risks are presented to the user.

Azure MLOps deployment

The Microsoft Azure Machine Learning (ML) platform was chosen as the ML deployment platform to automate the end-to-end flow of this solution using MLOps. Azure ML Pipelines are used to schedule and run the ML job frequently, connecting to Azure storage where project and risk data is stored. Registered models precompute similar projects and risks, to provide recommendations for a given project. These models are deployed in scalable Azure Kubernetes clusters, and REST APIs are exposed to enterprise portals via a secured Apigee gateway, as shown in Figure 2.

Business benefits

• Enterprise knowledge discovery: Integrated with the enterprise knowledge discovery portal, the solution presents learning from similar projects to the project owners. Here, collaboration options between similar project owners are provided through integration with the enterprise messaging and mailing platform, where they can either chat or get connected over email.
• Enterprise project management and risk discovery: Risk suggestions from similar projects are integrated with the enterprise risk management platform.
This enables project owners to discover relevant risks, assess recommendations to mitigate them, import them into their project's risk register and act on these risks in a timely manner. This solution can be used across all projects in the organization. Following are a few real-life cases where the project teams benefitted from the solution.
• For a large electrical manufacturing client, the project team was working on the ecommerce platform Magento. Since there were limited projects on Magento in the repository, similar projects in Drupal were also identified. The solution was able to correlate the two related, competing technologies without being explicitly instructed to do so. Magento being a niche skill, relevant risks related to resource availability were highlighted.
• For a large UK telecom provider, we were running an ETL testing project on Ab Initio. In addition to listing similar projects doing Ab Initio testing, highly relevant risks related to inadequate ETL configuration in the test environment, leading to delay in testing and defect leakage, were shown, along with suggestions to mitigate them.
• For an Australian financial services client, the team was working on a development project with secure connectivity requirements. They were able to anticipate potential infrastructure challenges due to the COVID-enforced work-from-home setup upfront and planned ahead based on learning from similar projects.

Further work

The manual risk and mitigation curation is an effort-intensive exercise. A hybrid approach to risk curation, where an ML-led abstractive summarization is reviewed by experts, is in the experimental stage. This is expected to assist the experts by substantially reducing their effort on risk curation. Usage of advanced pretrained modules and Siamese fine tuning of the prebuilt module to uplift similarity scores of functionally similar, but low similarity score, risks is being experimented with. Work is also in progress to build a search functionality on curated risks which can provide the relevant risks based on search keywords, independent of the pipeline flow of this solution.

Appendix: Siamese Fine Tuning

There will be a portion of base risks which will not find any matching curated risks above the similarity threshold when we use pretrained embedding modules without tuning. During the functional evaluation we found that some of these risks were functionally similar to already written curated risks and needed to be given higher similarity scores. This led to the research related to Siamese fine tuning, where a parallel corpus of the raw risks and corresponding curated risks is given to the universal sentence embedding module to fine-tune in a Siamese fine-tuning architecture, to elevate the similarity scores. During this work a document improvement to TensorFlow Hub was suggested, related to fine-tuning with a generic code. This change was accepted and published as a document improvement for the fine-tuning section of the TensorFlow Hub documentation [6]. During fine-tuning experiments, it was observed that while fine-tuning increases the similarity of the parallel corpus as expected, it also increased the similarity scores for other pairs which were in the low score region prior to fine-tuning. Sample parallel corpus cosine similarity results are presented in Table 1. The diagonal of the table represents parallel corpus similarity, while the other values show intra parallel corpus similarity.
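For concreteness, a minimal sketch of the contextual risk matching and deduplication steps is given below, using the public Universal Sentence Encoder module from TensorFlow Hub. The threshold value and function names are illustrative assumptions; the paper derives its threshold from functional evaluation rather than the value used here.

```python
import numpy as np
import tensorflow_hub as hub

# Public Universal Sentence Encoder (Transformer variant) on TensorFlow Hub.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

THRESHOLD = 0.70  # illustrative; set by functional evaluation in the paper

def embed(texts):
    """Return L2-normalized USE embeddings, shape [n, 512]."""
    e = np.asarray(encoder(texts))
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def match_curated(raw_risks, curated_risks):
    """For each raw risk, list curated risks clearing the threshold, best first."""
    sims = embed(raw_risks) @ embed(curated_risks).T  # cosine similarity matrix
    matches = []
    for row in sims:
        idx = [j for j in np.argsort(-row) if row[j] >= THRESHOLD]
        matches.append([(curated_risks[j], float(row[j])) for j in idx])
    return matches

def dedupe(risks):
    """Drop near-duplicate curated risks via the same similarity check.

    Re-embeds per comparison for clarity; a real pipeline would cache
    the embeddings."""
    kept = []
    for r in risks:
        if all(float(embed([r]) @ embed([k]).T) < THRESHOLD for k in kept):
            kept.append(r)
    return kept
```

Raw risks left unmatched by `match_curated` are exactly the residue that the curation, advanced-module and Siamese fine-tuning approaches in the appendix are meant to absorb.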
Using the Semantic Textual Similarity (STS) evaluation benchmark, the out-of-the-box module's Pearson correlation coefficient is found to be 0.78, with a p value of 3.8e-285, whereas the fine-tuned module's Pearson correlation coefficient is found to be 0.75, with a p value of 7.5e-254. This shows the drop in generalization post fine-tuning and the need for careful regularization during fine-tuning. Further experiments are being conducted to ensure the results generalize well, using dropouts [7] and regularizations, before the fine-tuned module can replace the out-of-the-box pretrained universal sentence embedding module.
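The generalization check described above can be reproduced along the following lines; the sketch assumes STS sentence pairs with gold scores are already loaded (the loader is not shown), and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def sts_pearson(encoder, pairs, gold_scores):
    """Correlate model cosine similarities with STS gold scores.

    `pairs` is a list of (sentence_a, sentence_b); `gold_scores` are the
    human similarity ratings from the STS benchmark."""
    a = np.asarray(encoder([s for s, _ in pairs]))
    b = np.asarray(encoder([s for _, s in pairs]))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    cosine = np.sum(a * b, axis=1)  # rowwise cosine of each sentence pair
    return pearsonr(cosine, gold_scores)  # (r, p-value)
```

Running this once with the pretrained module and once with the fine-tuned one surfaces the 0.78 versus 0.75 drop reported above.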
The clinical association between periodontitis and COVID-19

Objectives: The study aimed to clinically assess the association between periodontitis and COVID-19-related outcomes.

Material and methods: Data pertaining to patient demographics, medical history, blood parameters, periodontal clinical examination and aMMP-8 point-of-care diagnostics (both site-level and patient-level) was recorded for eighty-two COVID-19-positive patients. COVID-19-related outcomes such as COVID-19 pneumonia, death/survival, type of hospital admission and need of assisted ventilation were also assessed.

Results: Males were predominantly afflicted with COVID-19, with advanced age exhibiting a greater association with the presence of periodontitis. Higher severity of periodontitis led to 7.45 odds of requiring assisted ventilation, 36.52 odds of hospital admission, 14.58 odds of being deceased and 4.42 odds of COVID-19-related pneumonia. The aMMP-8 mouthrinse kit was slightly more sensitive but less specific than the aMMP-8 site-specific tests.

Conclusions: Based on the findings of the present study, periodontitis seems to be related to poorer COVID-19-related outcomes. However, within the constraints of this work, a direct causality may not be established. Periodontitis, by means of skewing the systemic condition for a number of comorbidities, may eventually influence COVID-19 outcomes in an indirect manner.

Clinical relevance: The study is the first to clinically, and by means of a validated point-of-care diagnostic methodology, assess the association between periodontal health and COVID-19-related outcomes. Assessment of the periodontal status of individuals can aid in the identification of risk groups during the pandemic, along with reinforcing the need to maintain oral hygiene and seek periodontal care.

Introduction

The COVID-19 pandemic has presented a conundrum like never before in terms of understanding its pathophysiology. With no cure in sight, it remains a significant aspect of research to identify and delineate factors which may alter the course of the disease, in order to aid in its understanding and subsequent management. This would continue to assume importance even with the advent of anti-COVID-19 vaccinations. Periodontal disease is considered a pandemic in its own right, with the reported case load far exceeding that of COVID-19. The disease process, though being non-fatal and chronic in nature, plays a crucial role not only in determining oral health but also as a significant contributor to the pathophysiology of a number of systemic conditions. There is sufficient evidence in the literature to warrant an association between the presence of periodontal disease and the development and
Detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in the gingival crevicular fluid (GCF) further gives credence to this theory and introduces the possibility of another point of entry [5]. SARS-CoV has been known to cause alterations in lung tissue due to numerous pathways, of which one involves mediation via matrix metalloproteinases (MMPs) [6]. MMPs cause extracellular matrix degradation along with mediating lung tissue remodelling; these factors eventually contribute to enhanced vascular permeability as well as damage to the endothelium. Acute respiratory distress syndrome (ARDS) management involves the utilisation of mechanical ventilation which can further lead to lung injury via ventilation-induced MMP-8 expression [6,7]. Indeed, the eventual mortality of patients has been related to the expression of MMP-8, MMP-9, MMP-2 and TIMP-1, as is observed in early sepsis. MMPs have also been implicated in facilitating early virus entry into cells [8]. Over time, it has also become clear that proteinases, particularly collagenase-2, responsible for causing matrix degradation are chiefly obtained from polymorphonuclear leukocytes (PMNs) found in the diseased periodontium [9,10]. Upon release from PMNs, the latent forms of these MMPs convert to their activated states by means of their interaction with reactive oxygen species or by proteolytic cleavage. Indeed, PMNderived MMP-8 activity is elevated in the gingival tissue, GCF, and saliva of patients suffering from periodontitis [9]. The active MMP-8 point-of-care (aMMP-8 POC) test has been validated in various countries in both adolescent and adult populations as a means to define active and inactive sites of periodontal disease, assess prognosis and evaluate patients in the treatment and maintenance phases [11][12][13][14]. This particular point-of-care testing methodology possesses a sensitivity of 76-83% and specificity of 96% with results being returned within 5-7 minutes [15,16]. A number of hypothetical models have been put forth to assess the possibility of a link between oral hygiene and/or periodontal disease and the COVID-19 disease process [3,17,18]. A few studies based on patient data have also been published, which generally point towards periodontal disease as a determinant of poorer COVID-19-related outcomes [19,20]. These studies, however, suffer from the fatal flaw of simply utilising previously collected patient data or few self-reported oral health indicators and correlating it to their current COVID-19 disease process. In the present study, the authors went several steps further and performed real-time clinical assessments of patients suffering from COVID-19 along with utilising a validated aMMP-8 point-of-care bedside diagnostic test kit in order to evaluate the presence of active periodontal disease. It is the belief of the authors that this is the first such study to perform clinical and diagnostic assessments in COVID-19 patients in the manner described. The aim of this study was to assess the association of periodontal health on the complications of COVID-19. Methods The cross-sectional analytical study was carried out by the The present study conforms to STROBE guidelines. Eightytwo patients reporting to the communicable diseases ward or admitted in the hospital between 15·January·2021 and 20· February·2021 were recruited into the study after their COVID-19 status was confirmed by nasopharyngeal swab (NPS) testing. 
A patient information sheet was given to all patients, and written informed consent was obtained from all subjects. Pregnant women, patients less than 18 years old and those unwilling or not in a position to give written informed consent were excluded from the study. The sample size was based on convenience sampling, owing to the fact that the study setting was a dedicated COVID-19 centre and to the close proximity required of the healthcare worker (HCW) with a potentially infectious patient to conduct the intraoral examination and aMMP-8 analysis. However, as no sample size estimation was done a priori, a post hoc power analysis was performed to validate the same. Demographic data were recorded, and chairside tests were run to evaluate the expression of aMMP-8 at the site with maximum periodontal breakdown as well as via a mouthrinse-based kit for general disease activity.

Training and calibration

For training and calibration of the examiners, a COVID-19-negative cohort of 10 subjects was enrolled from the Out Patient Department of the Oral Health Sciences Centre. It involved a comprehensive periodontal clinical examination by a single examiner (SG) and the running of chairside tests for evaluating expression of aMMP-8 by another examiner (MS). Inter-examiner reliability was found to be 0.91 using Cohen's kappa for categorical variables and 0.93 using the intraclass correlation coefficient for continuous variables.

Covariates

Covariates such as age, sex, smoking habits and other COVID-19-related comorbidities/risk factors (diabetes, hypertension, pulmonary disease, chronic kidney disease, cancer, coronary artery disease, obesity and any other comorbidities) were recorded. Blood parameters relevant to disease progression, such as C-reactive protein (CRP), D-dimer, platelet count, ferritin, glycosylated haemoglobin (HbA1c), haemoglobin (Hb), vitamin D3, neutrophil/lymphocyte ratio (N/L), troponin, procalcitonin and N-terminal pro-brain natriuretic peptide (NT-proBNP), were recorded. These parameters were noted from the patients' records, where available; hence, the number of samples varied for each parameter.

Periodontal clinical examination

The periodontal clinical examination was conducted by a single calibrated examiner (SG) using a 10-mm round-tip manual Williams periodontal probe. All permanent teeth, excluding the third molars, were examined at six sites per tooth (disto-buccal, mid-buccal, mesio-buccal, disto-palatal, mid-palatal, mesio-palatal). Gingival recession (GR), gingival marginal level (GML), periodontal probing depth (PPD), bleeding on probing (BOP) and the number of teeth present/missing/carious were recorded. Clinical attachment loss (CAL) was calculated. Patients were categorised into periodontally healthy, gingivitis and stage I-IV periodontitis, as per the new classification of periodontitis described by Chapple et al.

Sample collection and qualitative analysis for aMMP-8 PoC mouthrinse and site-specific kits

These tests were conducted by a second periodontist (MS), a priori unaware of the clinical examination results. The aMMP-8 chairside lateral flow mouthrinse immunoassay test (PerioSafe, Dentognostics GmbH, Solingen, Germany) and the aMMP-8 chairside lateral flow site-specific immunoassay test (ImplantSafe, Dentognostics GmbH, Jena, Germany) were run step by step according to the manufacturer's instructions, as described in the literature [11-14]. The colour changes due to immunoreactions were read after exactly 5 min.
In both cases, a single blue line indicated aMMP-8 levels below 20 ng/ml (negative; no risk), whereas two blue lines indicated aMMP-8 levels above 20 ng/ml (positive; increased risk), i.e., active periodontal disease.

Outcome variables

COVID-19-related complications such as the presence of COVID-19 pneumonia, death due to COVID-19, type of hospital admission and need of assisted ventilation were also assessed. Patients requiring oxygen via high-flow nasal cannula (HFNC), non-invasive ventilation (NIV) or through intubation and ventilator were categorised as patients requiring assisted ventilation, whereas those able to maintain their status quo on room air were categorised as patients not requiring assisted ventilation. Admissions were categorised into those isolated at home and those admitted to the hospital, either in the wards or in the ICU, as per their disease severity and treatment requirements. The presence of active periodontal disease was evaluated using a validated aMMP-8 point-of-care bedside diagnostic test kit.

Statistical analysis

Descriptive and inferential statistical analyses were carried out in the present study. The results were analysed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY). Results for continuous measurements are presented as mean ± SD (min-max) and those for categorical measurements as frequency (percentage). Normality of the data was assessed using the Shapiro-Wilk test/Kolmogorov-Smirnov test. Bivariate associations were examined using the Fisher exact test/chi-square test. The Kruskal-Wallis test was used to compare variables at different levels of periodontal disease. The Mann-Whitney U test/Kruskal-Wallis test was also used for within-group comparisons. Logistic regression was applied to obtain odds ratios with 95% confidence intervals wherever possible. Since no sample size calculation was undertaken a priori, a post hoc power analysis was performed using ClinCalc (http://clincalc.com/stats/Power.aspx) with a dichotomous primary endpoint. A maximum power of 98.9% was achieved between the periodontal status of the patient and the requirement of assisted ventilation.

Results

COVID-19-positive cohort characteristics

Table 1 presents the association of various parameters with the stages of periodontitis. Forty-eight male patients and thirty-four female patients were enrolled in the study. Age has been shown in the literature to be associated with periodontitis. The present study likewise exhibited an overall increase in age distribution with increasing stages of periodontitis in COVID-19 patients. Fifty-one patients had typical symptoms of COVID-19, whereas thirty-one were asymptomatic on presentation. The presence or severity of periodontal disease was not found to be associated with gender or with the presence/absence of COVID-19 symptoms. Of the patients, 52.43% presented with one or more comorbidities. A statistically significant association was observed for diabetes mellitus, cardiovascular diseases and cancer. Predictors of COVID-19-related outcomes such as hospital admission, requirement of assisted ventilation, COVID-19 pneumonia and eventual survival were observed to increase with a concomitant rise in the stage of periodontitis. In particular, patients with a higher stage of periodontitis underwent ICU admission, as opposed to those with a healthy periodontium or gingival disease, who were found to be under home isolation or ward admission.
Likewise, the requirement of assisted ventilation was higher amongst patients with stage III and IV periodontitis. Twenty-two patients presented with COVID-19 pneumonia and fourteen had ground-glass opacities on chest CT. The majority of patients survived, while 9.7% (n = 8) succumbed. The deceased patients had a greater severity of periodontitis. One of the eight deceased patients had diabetes along with hypertension. Five of the deceased had other comorbidities such as hypertension, CKD, a history of CAD and acute necrotising pancreatitis. Bleeding on probing was commensurate with the stage of periodontal disease.

Blood parameters

Tables 2, 3 and 4 report the associations between periodontal status and the blood parameters recorded at the time of examination. Bleeding on probing was not associated with any recorded blood parameter. Gingival recession and the number of teeth missing due to periodontal reasons were associated with D-dimer and troponin values. Probing depth was significantly associated with HbA1c, CRP, D-dimer and ferritin levels. Higher CAL was associated with elevated levels of CRP, D-dimer, pro-BNP, troponin and procalcitonin. Subjects with more severe forms of periodontitis had higher levels of D-dimer, pro-BNP and troponin.

Association of periodontal and blood parameters with COVID-19 complications

Table 5 presents the association between selected periodontal parameters and COVID-19 complications, in terms of requirement of assisted ventilation, hospital admission, presentation of COVID-19 pneumonia and survival. Patients with bleeding on probing had 4.14 odds of requiring assisted ventilation, 3.18 odds of hospital admission and 3.63 odds of suffering from COVID-19 pneumonia. Probing depth, gingival recession and CAL were significantly associated with all the included complications of COVID-19. Increasing probing depth, increasing CAL and the presence of gingival recession put these patients at increased odds of these complications. Patients with gingival recession had higher odds of requiring assisted ventilation (OR = 8.22), lower chances of survival (OR = 14.07) and 6.50 odds of COVID-19 pneumonia. Missing teeth, however, were only associated with increased odds of hospital admission (OR = 12.52). It was also found that deceased patients had significantly higher mean probing depth, gingival recession and CAL compared to the survivors. Periodontal status was associated with all the included complications of COVID-19 in the present study. Higher severity of periodontitis led to 7.45 odds of requiring assisted ventilation, 36.52 odds of hospital admission, 14.58 odds of death and 4.42 odds of COVID-19 pneumonia. Table 6 presents the association of COVID-19 complications with the blood parameters recorded at the time of examination. Subjects requiring admission to hospital had significantly elevated levels of HbA1c, CRP, D-dimer, ferritin, N/L ratio, haemoglobin, pro-BNP, troponin and procalcitonin. Survival was found to be associated with elevated N/L ratio and platelet count, whereas subjects with higher levels of HbA1c, CRP, D-dimer, ferritin and procalcitonin required assisted ventilation.

aMMP-8 mouthrinse tests and aMMP-8 site-specific tests

The aMMP-8 mouthrinse kit was positive in 38.1%, and the aMMP-8 site-specific kit in 33.3%, of patients with periodontal disease. However, the kits also tested positive in 21.1% (mouthrinse kit) and 13.9% (site-specific kit) of periodontally healthy subjects (Table 7).
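To make the reported effect sizes concrete, the short sketch below shows how an odds ratio with a Woolf 95% confidence interval, and the sensitivity/specificity of a kit, are computed from a 2x2 table. The counts used here are hypothetical placeholders, not the study data; only the formulas are standard.

```python
# Hypothetical example (placeholder counts, not the study data): odds ratio
# with a Woolf 95% CI, plus sensitivity/specificity of a diagnostic kit.
import math

# 2x2 table: rows = exposure (periodontitis yes/no),
#            columns = outcome (complication yes/no)
a, b = 30, 12   # periodontitis: with / without complication
c, d = 8, 32    # no periodontitis: with / without complication

or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Kit performance against the clinical examination as reference standard
tp, fn, fp, tn = 16, 26, 8, 30   # placeholder counts
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```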
Discussion

The findings of the present study establish an association between periodontal disease and COVID-19-related outcomes. The results are in concordance with the study by Marouf et al. (2021), who found a significant association between periodontal disease and COVID-19-related outcomes [20]. That study utilised available patient records to predict periodontal outcome measures as risk factors for COVID-19 prognosis. However, no clinical assessment of the patients' periodontal status at the time of the COVID-19 infection was made, and hence patients with previous bone loss but no active disease at the time of the study may have been grouped together with those experiencing active disease. In our study, we not only conducted a real-time clinical examination of all the patients involved but also ran the aMMP-8 PoC chairside kits to determine the current activity of periodontal disease in the cohort. The current study hence makes the important distinction of assessing active periodontal disease and its relation to COVID-19-related outcomes. The failure to maintain adequate ventilation is regarded as a significant marker of worsening COVID-19-related outcomes. Indeed, hypoxemia has been observed to be independently associated with in-hospital mortality [24]. In the present study, the authors found a significant association between bleeding on probing, increased periodontal probing depth, the presence of gingival recession, clinical attachment loss and the oxygen requirement of COVID-19 patients. By extension, it seems reasonable to conclude that poorer periodontal outcome measures might imply a worse COVID-19-related prognosis. Compromised periodontal outcome measures correlated significantly with the event of death in this patient population. Based on this finding, it appears that patients with periodontal disease had a poorer chance of survival compared to those without this ailment. It is generally regarded that patients necessitating admission had taken, or were expected to take, a turn for the worse in terms of their prognosis. A significant rate of mortality has been demonstrated in patients hospitalised as a result of COVID-19 [25]. Compromised periodontal outcome measures correlated significantly with hospital admission; i.e., patients suffering from COVID-19 with periodontal disease were more likely to be admitted to hospital than those without. It is established that a number of comorbidities, such as diabetes mellitus, obesity and those affecting the cardiovascular and respiratory systems, play a significant role in determining the prognosis of COVID-19 [26]. At the same time, it is well established in the literature that periodontal disease has definitive links to these chronic disease processes and is a bona fide part of their overall pathophysiological presentation [27,28]. Periodontal infections incite events which involve both innate and adaptive host immunity. That periodontitis, despite being a largely chronic disease, constitutes a systemic inflammation is confirmed by the presence of acute-phase reactants as part of the innate immune response. These acute-phase reactants, such as CRP, are pro-inflammatory in nature and drive complement activation along with stimulating tissue healing and neutralising invading pathogens [29].
CRP levels have been touted as an early biomarker for triaging the severity of COVID-19 infections [30]. The present study found that poorer periodontal outcome measures correlated with increased CRP levels in patients suffering from COVID-19. This increase in CRP levels in relation to periodontal compromise is independently substantiated in literature predating the pandemic [29]. Similarly, elevated D-dimer, ferritin, neutrophil/lymphocyte ratio and NT-proBNP have been reported as prognostic markers associated with a deteriorating prognosis in patients afflicted with COVID-19, wherein D-dimer is a cross-linked fibrin degradation product considered a sensitive marker for venous thromboembolism, ferritin is an indicator of systemic inflammation and NT-proBNP is a marker of reduced left ventricular systolic function [31-35]. Brain natriuretic peptide (BNP) is primarily released from the myocardium of the ventricles in response to stress exerted upon the myocardial walls. Produced as a pro-hormone, BNP, a thirty-two-amino-acid peptide, is cleaved into two peptides, one of which is the active form and the other, N-terminal pro-BNP (NT-proBNP), which remains biologically inactive. A longer half-life renders NT-proBNP a more viable biomarker of inflammation [36]. Increased levels of serum ferritin, an acute-phase reactant, have been detected in inflammation and have been demonstrated to correlate positively with CRP levels as well. The concentration of serum ferritin rises as a result of tissue leakage of this intracellular protein shell. Serum ferritin differs slightly from its tissue form in that it contains minimal to no iron. Inflammation may render clearance of serum ferritin ineffective or suboptimal, which may account for its elevated levels. In the midst of an acute-phase response, TNF-alpha and IL-1 beta upregulate the synthesis of ferritin H- and L-subunits, which is reflected as an increase in serum ferritin levels [37]. Another cardiac biomarker, troponin, has been evidenced to increase significantly in severe forms of COVID-19 [38]. Myocardial infarct size is significantly related to eventual patient outcomes, and troponins (cTnI and cTnT) form the gold-standard biomarkers for this evaluation. Of the two types, cTnI is considered to possess greater reliability for the purposes of determining survival and risk stratification [39]. The neutrophil/lymphocyte ratio (NLR) has been evidenced as a marker of systemic inflammation. Epidemiological studies have revealed that NLR correlates with classical systemic inflammation risk factors such as obesity, smoking, diabetes mellitus, hypercholesterolemia, hypertension and metabolic syndrome. In this sense, NLR may be indicative of the severity of inflammatory disease processes [40]. Serum levels of ferritin, NT-proBNP, neutrophil/lymphocyte ratio and troponin have been found to be significantly associated with periodontal disease [36,38,39,41,42]. This association points towards a commonality between periodontal disease and COVID-19-related adverse outcomes. Our study found a correlation between periodontal compromise and increased levels of these blood parameters in COVID-19 patients. HbA1c is regarded as the gold standard in blood glucose estimation, providing an average value over the past 3 months, with high levels being associated with complications in diabetic patients.
Inflammatory markers such as CRP and serum ferritin have been found to be positively correlated with levels of HbA1c [43]. It is generally acknowledged that increased levels of HbA1c are associated with hypercoagulability, low oxygen saturation and inflammation in patients suffering from COVID-19. The overall mortality rate of diabetic COVID-19 patients is also reportedly high [43]. In our study, we found high levels of HbA1c correlating with compromised periodontal outcome measures. Procalcitonin levels remain within reference ranges in patients suffering from non-complicated forms of COVID-19 infection; however, these values increase in patients with superadded bacterial infection [44]. In our study, we found that periodontal disease correlated significantly with elevated procalcitonin levels. Indeed, there is evidence in the literature to support the contribution of periodontal disease to the pathophysiology of respiratory illnesses [45]. It has recently also been hypothesised that the breakdown of the oral immune barrier, as may occur in periodontitis, may lead to the dissemination of SARS-CoV-2 into the systemic circulation via its oral reservoirs in saliva and GCF [5,46]. Within the constraints of the sample size utilised in the present study, it would be plausible to argue that, while the results seem indicative of this, a direct causal relationship may not be established between the presence of periodontitis and poorer COVID-19-related outcomes. It would hence be prudent to state that an indirect link may exist, in the form of periodontal disease resulting in chronic systemic compromise, which may further cascade into a so-called comorbidity affecting the eventual outcome of COVID-19 infection. Periodontal disease would then have both a direct and an indirect impact upon COVID-19-related outcomes by virtue of its presence. The aMMP-8 mouthrinse and site-specific kits correctly identified periodontal disease in 38.1% and 33.3% of patients with periodontal disease, respectively. However, the kits also tested positive in 21.1% (mouthrinse kit) and 13.9% (site-specific kit) of periodontally healthy subjects. The higher number of false positives recorded by the kits may be attributable to elevated aMMP-8 levels in the oral cavity due to the cytokine storm associated with COVID-19, wherein various inflammatory cytokines could have led to an upregulation in the expression and degranulation of aMMP-8. A previous study utilised three self-reported oral health indicators to determine a relationship between the presence of periodontal disease and COVID-19 prognosis and found a significant association of the indicators with mortality [19]. The authors of the current study found a similar relation in the included cohort. Seeing as periodontal disease predominantly stems from bacterial interactions with the host, the maintenance of oral hygiene assumes greater importance in the face of this novel entity. However, it would serve the scientific community well to base recommendations on substantiated claims and avoid the temptation of joining dots where none may exist. In the same vein, maintenance of oral hygiene continues to be of importance in the COVID-19 era, not only due to a direct correlation between periodontal compromise and the COVID-19 disease process but also due to the indirect systemic effects periodontal disease may have, which may eventually determine COVID-19-related prognosis and aid in the identification of potentially at-risk patient populations.
Most research currently seems to have concentrated on verifying whether the presence of periodontal disease affects COVID-19-related outcomes. It would, however, be interesting to see whether there exists the possibility of crosstalk between SARS-CoV-2 and the oral microbiome, either directly or in a phage-mediated manner [47,48].

Limitations

The study had some limitations, and its results need to be extrapolated with caution. A causal relationship cannot be established due to the cross-sectional design of the study. Another limitation is the small sample size. However, this is the first study in the literature to conduct intraoral examinations among potentially infectious patients. Further studies are required to validate the results of the present work.

Conclusion

There is a direct association between periodontal disease and COVID-19-related outcomes. However, as periodontal disease is both reflective and deterministic of systemic health, it might also play an indirect role in worsening the status of comorbidities more directly associated with a poorer prognosis of COVID-19-related adverse outcomes.

Authors' contribution

S.G. contributed to the conception, design, data acquisition, analysis and interpretation; drafted the manuscript; and critically revised the manuscript. R.M. contributed to the conception and design, acquisition, analysis and interpretation and critically revised the manuscript. M.S. and S.K. contributed to the acquisition and critically revised the manuscript. V.S. and A.K. contributed to the analysis and interpretation and drafted the manuscript. R.K.S. contributed to the acquisition and critically revised the manuscript. P.K., Kr.G., Ka.G., M.P.S., K.K., V.M., A.B., T.S. and I.R. contributed to the analysis and interpretation and critically revised the manuscript. All authors gave final approval and agree both to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even parts in which they were not personally involved, are appropriately investigated, resolved and the resolution documented in the literature.
The Nordic Tools for advanced analysis of interferometry data

I. Martí-Vidal, W. Vlemmings, L. Lindroos, S. Muller, et al.
Onsala Space Observatory, Chalmers Univ. of Technology (Sweden)

The Nordic tools for Interferometry are a set of algorithms developed by the Nordic Node of the ALMA Regional Center (Nordic ARC) and its collaborators. These are tools specially designed for advanced processing and analysis of astronomical interferometric observations. Here we present a subset of our tools and also give a list of tools currently being implemented. Please visit our website for more information: http://nordic-alma.se/support/software-tools. Any comment, suggestion and/or bug report can be sent to contact@nordic-alma.se.

UVMULTIFIT

UVMULTIFIT is a versatile tool for fitting generic source models to the visibilities. Any combination of model components, with a generic frequency dependence in any of their defining parameters, can be specified. Any algebraic relationship among the defining parameters of the model components can also be specified (see the examples below). Fits to continuum and/or spectral lines are supported, with limited support also for mosaic observations. See Martí-Vidal et al. 2014, A&A, 563, A136 (see also IMMULTIFIT, an extension working in the image domain).

Example sources: (a) two points with a fixed relative position, but free absolute position; (b) a disc plus a point at its center (the absolute position of the two sources and/or the disc size and intensity can also be set free); (c) a disc with a hole (built as another disc with negative intensity); (d) an optically thick jet (i.e., with a core shift). In all these cases, the free model components can depend on any algebraic function of frequency and/or fitting parameters.

Practical (real) example: ALMA observations of the lensed blazar PKS 1830-211 at 93 GHz. The two lensed images (each one with a different absorption-line spectrum) are fully blended in the image plane, due to the limited resolution. But UVMULTIFIT allows us to extract the spectrum of each image without blending, by fitting two sources with a known separation (but free absolute position!), i.e., example (a) above.

CLOSURES

CLOSURES generates diagnostic plots based on the statistics of amplitude and phase closures. This is useful to identify problematic antennas and/or bad frequency channels, even before the calibration. An example plot of CLOSURES shows a problematic antenna (ID 4) and ranges of problematic channels (from 0 to 1300).

UV-STACKING

UV-STACKING, developed by L. Lindroos, is a tool to perform stacking of weak sources in the Fourier domain. This approach has many advantages compared to image-based stacking; an example shows a source stacked with UV-STACKING (left) and with image-based stacking (right) on the same dataset. See Lindroos et al., MNRAS (submitted).

CHANDRAFERMI

CHANDRAFERMI, developed in collaboration with the German ARC Node (Bonn), allows the user to estimate magnetic fields from full-polarization images, using the Chandrasekhar-Fermi algorithm (Chandrasekhar, S., & Fermi, E. 1953, ApJ, 118, 113). An example plot of ChandraFermi was generated from SMA observations of NGC 1333 (from Girart et al., 2006, Science, 313, 812). Note: ChandraFermi will be published soon on our website.

OTHER TOOLS (DEVELOPED OR BEING DEVELOPED)

FAKEOBS: allows the user to substitute data from real observations with model visibilities, obtained from Fourier inversion of a FITS image (cube). This is useful for simulations, proposal preparation, tests, and even for data modelling (a rough alternative to UVMULTIFIT, if the user computes the chi-square minimization by him/herself).

CUBE-ANIMATE: a multimedia tool to make movies from data cubes.

... and others coming.
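The following is a minimal sketch, written from the conventions described in Martí-Vidal et al. (2014), of how a two-component fit like the PKS 1830-211 case might be set up with UVMULTIFIT inside a CASA session. The measurement-set name, the image separation and the initial values are placeholders, and the exact keyword names should be checked against the UVMULTIFIT documentation.

```python
# Hypothetical sketch: fit two point sources with a fixed relative separation
# (example (a) above), following the UVMULTIFIT conventions of
# Marti-Vidal et al. (2014, A&A 563, A136). All values are placeholders.
from NordicARC import uvmultifit as uvm

dRA, dDec = -0.642, 0.728   # placeholder separation between images (arcsec)

fit = uvm.uvmultifit(
    vis='pks1830.ms',                        # placeholder measurement set
    spw='0', column='data',
    model=['delta', 'delta'],
    # p[0], p[1]: absolute position of image A (free); p[2], p[3]: fluxes.
    # Image B is tied to image A through the known lens separation.
    var=['p[0], p[1], p[2]',
         'p[0]+%.3f, p[1]+%.3f, p[3]' % (dRA, dDec)],
    p_ini=[0.0, 0.0, 1.0, 0.5],              # placeholder initial values
)
print(fit.result['Parameters'])
```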
Efficient Neural Network Training via Forward and Backward Propagation Sparsification

Sparse training is a natural idea for accelerating the training of deep neural networks and saving memory usage, especially since large modern neural networks are significantly over-parameterized. However, most existing methods cannot achieve this goal in practice because the chain-rule-based gradient estimators (w.r.t. the structure parameters) adopted by previous methods require dense computation at least in the backward propagation step. This paper solves this problem by proposing an efficient sparse training method with completely sparse forward and backward passes. We first formulate the training process as a continuous minimization problem under a global sparsity constraint. We then separate the optimization process into two steps, corresponding to the weight update and the structure parameter update. For the former step, we use the conventional chain rule, which can be sparse by exploiting the sparse structure. For the latter step, instead of using the chain-rule-based gradient estimators as in existing methods, we propose a variance reduced policy gradient estimator, which only requires two forward passes without backward propagation, thus achieving completely sparse training. We prove that the variance of our gradient estimator is bounded. Extensive experimental results on real-world datasets demonstrate that, compared to previous methods, our algorithm is much more effective in accelerating the training process, up to an order of magnitude faster.

Introduction

A natural starting point for sparse training is the sparse subnetworks obtained from network pruning. However, most existing methods narrowly aim at finding a sparse subnetwork instead of simultaneously sparsifying the computation of training by exploiting the sparse structure. As a consequence, it is hard for them to effectively accelerate the training process in practice on general platforms, e.g., Tensorflow [1] and Pytorch [33]. Detailed reasons are discussed below:

• Non-parametric methods find the sparse network by repeating a two-stage procedure that alternates between weight optimization and pruning [11,8], or by adding a proper sparsity-inducing regularizer on the weights to the objective [24,44]. The two-stage methods prune the networks in weight space and usually require retraining the obtained subnetwork from scratch every time new weights are pruned, which makes the training process even more time-consuming. Moreover, the computation of the regularized methods is dense, since the gradients of zero-valued weights/filters are still nonzero.

• All the parametric approaches estimate the gradients based on the chain rule. The gradient w.r.t. the structure parameters can be nonzero even when the corresponding channel/weight is pruned. Thus, to calculate the gradient via backward propagation, the error has to be propagated through all the neurons/channels. This means that the computation of backward propagation has to be dense. A concrete analysis can be found in Section 3.

We notice that some existing methods [5,30] can achieve training speedup with careful implementation. For example, the dense-to-sparse algorithm [30] removes some channels if the corresponding weights remain quite small for a long time. However, these methods always need to work with a large model in the beginning epochs and consume huge memory and heavy computation in the early stage. Therefore, even with such careful implementations, the speedups they can achieve are still limited.
In this paper, we propose an efficient channel-level parametric sparse neural network training method, which comprises completely sparse (see Remark 1) forward and backward propagation. We adopt channel-level sparsity since such sparsity can be efficiently implemented on current training platforms to save computational cost. In our method, we first parameterize the network structure by associating each filter with a binary mask modeled as an independent Bernoulli random variable, which can be continuously parameterized by its probability (a minimal code sketch of this masking is given at the end of this introduction). Next, inspired by the recent work [50], we globally control the network size during the whole training process by controlling the sum of the Bernoulli distribution parameters. Thus, we can formulate the sparse network training problem as a constrained minimization problem on both the weights and the structure parameters (i.e., the probabilities).

The main novelty and contribution of this paper lies in our efficient training method, called completely sparse neural network training, for solving the minimization problem. Specifically, to fully exploit the sparse structure, we separate the training iteration into two parts, i.e., the weight update and the structure parameter update. For the weight update, conventional backward propagation is used to calculate the gradient, which can be sparsified completely because the gradients of the filters with zero-valued masks are also zero. For the structure parameter update, we develop a new variance reduced policy gradient estimator (VR-PGE). Unlike conventional chain-rule-based gradient estimators (e.g., straight-through [3]), VR-PGE estimates the gradient via two forward propagations, which are completely sparse because of the sparse subnetwork. Finally, extensive empirical results demonstrate that our method can significantly accelerate the training process of neural networks.

The main contributions of this paper can be summarized as follows:

• We develop an efficient sparse neural network training algorithm with the following three appealing features:
  - In our algorithm, the computation in both forward and backward propagation is completely sparse, i.e., it does not need to go through any pruned channels, making the computational complexity significantly lower than that of standard training.
  - During the whole training procedure, our algorithm works on small subnetworks with the target sparsity instead of following a dense-to-sparse scheme.
  - Our algorithm can be implemented easily on widely used platforms, e.g., Pytorch and Tensorflow, to achieve practical speedup.

• We develop a variance reduced policy gradient estimator, VR-PGE, specifically for sparse neural network training, and prove that its variance is bounded.

• Experimental results demonstrate that our method can achieve significant speedup in training sparse neural networks. This implies that our method can enable us to explore larger-sized neural networks in the future.

Remark 1. We call a sparse training algorithm completely sparse if both its forward and backward propagation do not need to go through any pruned channels. For such algorithms, the computational cost of forward and backward propagation can be roughly reduced to ρ² × 100%, with ρ being the ratio of remaining unpruned channels.
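The following is a minimal PyTorch sketch, our own illustration rather than the authors' released code, of the channel masking just described: each filter F_c carries one structure parameter s_c, and a binary mask m_c ~ Bernoulli(s_c) gates its output channel. Class and variable names are ours.

```python
# Minimal PyTorch sketch (illustrative, not the authors' implementation)
# of channel masking: x_out_c = m_c * (x_in * F_c), with m_c ~ Bernoulli(s_c).
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, ratio=0.3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One structure parameter s_c per output channel (filter),
        # initialized to the target remaining ratio rho.
        self.s = nn.Parameter(torch.full((out_ch,), ratio))

    def sample_mask(self):
        # Sample a binary channel mask m ~ Bernoulli(s); no gradient flows
        # through the sampling (s is updated by VR-PGE, not by backprop).
        with torch.no_grad():
            return torch.bernoulli(self.s.clamp(0.0, 1.0))

    def forward(self, x, m):
        out = self.conv(x)                 # in practice only the unpruned
        return out * m.view(1, -1, 1, 1)   # filters need to be computed

layer = MaskedConv2d(3, 16, 3)
m = layer.sample_mask()
y = layer(torch.randn(2, 3, 32, 32), m)
```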
Related Work

In this section, we briefly review the studies on neural network pruning, which refers to algorithms that prune DNNs after they are fully trained, and the recent works on sparse neural network training.

Neural Network Pruning

Network pruning [11] is a promising technique for reducing the model size and inference time of DNNs. The key idea of existing methods [11,10,49,22,29,15,51,43,46,35,18] is to develop effective criteria (e.g., weight magnitude) to identify and remove the massive number of unimportant weights contained in networks after training. To achieve practical speedup on general devices, some of them prune networks in a structured manner, i.e., remove the weights in a certain group (e.g., filter) together, while others prune the weights individually. It has been reported in the literature [10,27,49,22] that they can improve the inference efficiency and reduce the memory usage of DNNs by orders of magnitude with minor loss in accuracy, which enables the deployment of DNNs on low-power devices. We notice that although some pruning methods can be easily extended to train sparse networks, they cannot accelerate, and could even slow down, the training process. One reason is that they are developed for the scenario in which a fully trained dense network is given, and they do not work well on the models learned in the early stage of training. Another reason is that after each pruning iteration, one has to fine-tune or even retrain the network for many epochs to compensate for the accuracy degradation caused.

Sparse Neural Network Training

Research on sparse neural network training has emerged in recent years. Different from the pruning methods, these approaches can find sparse networks without pre-training a dense one. Existing works can be divided into four categories based on their granularity in pruning and on whether the network structures are explicitly parameterized. To the best of our knowledge, no significant training speedups achieved in practice have been reported in the literature. Table 1 summarizes some representative works. Weight-level non-parametric methods, e.g., [8,11,51,31,32], always adopt a two-stage training procedure that alternates between weight optimization and pruning. They differ in the schedules of tuning the prune ratio over training and layers. [11] prunes the weights with magnitude below a certain threshold, and [51,8] gradually increase the pruning rate during training. [32,6] automatically reallocate parameters across layers during training via controlling the global sparsity. Channel-level non-parametric methods [14,44] are proposed to achieve practical acceleration in inference. [44] is a structured sparse learning method, which adds a group Lasso regularization to the objective function of DNNs, with each group comprised of the weights in a filter. [14] proposes a soft filter pruning method: it zeroizes, instead of hard pruning, the filters with small ℓ2 norm, after which these filters are treated the same as the other filters in training. It is obvious that these methods cannot achieve significant speedup in training, since they need to calculate the full gradient in backward propagation, although the forward propagation could be sparsified if implemented carefully. Parametric methods multiply each weight/channel with a binary [50,47,40,45] or continuous [26,28,21,20] mask, which can be either deterministic [26,45] or stochastic [50,47,28,40,21,20]. The mask is always parameterized via a continuous trainable variable, i.e., a structure parameter. Sparsity is achieved by adding sparsity-inducing regularizers on the masks. The novelties of these methods lie in how they estimate the gradients w.r.t. the structure parameters in training. To be precise:

• Deterministic Binary Mask.
[45] parameterizes its deterministic binary mask as a simple step function and estimates the gradients via the sigmoid straight-through estimator (STE) [3].

• Deterministic Continuous Mask. [26] uses the linear coefficients of batch normalization (BN) as a continuous mask and enforces most of them to 0 by penalizing the objective with the ℓ1 norm of the coefficients. [20] defines the mask as a soft-threshold function with a learnable threshold. These methods can estimate the gradients via standard backward propagation.

• Stochastic Binary Mask. [47,40] model the mask as a Bernoulli random variable, and the gradients w.r.t. the parameters of the Bernoulli distributions are estimated via STE. [50] estimates the gradients via the Gumbel-Softmax trick [17], which is more accurate than STE.

• Stochastic Continuous Mask. [28,21] parameterize the mask as a continuous function g(c, ε), which is differentiable w.r.t. c, where ε is a parameter-free noise, e.g., Gaussian noise N(0, 1). In this way, the gradients can be calculated via conventional backward propagation.

Therefore, we can see that all of these parametric methods estimate the gradients of the structure parameters based on the chain rule in backward propagation. This makes the training iteration impossible to sparsify by exploiting the sparse network structure. For the details, please refer to Section 3.

Why Existing Parametric Methods Cannot Achieve Practical Speedup?

In this section, we reformulate existing parametric channel-level methods into a unified framework to explain why they cannot accelerate the training process in practice. Notice that a convolutional layer can be viewed as a generalized fully connected layer, i.e., viewing the channels as neurons and the convolution of two matrices as a generalized multiplication (see [9]). Hence, for simplicity, we consider the fully connected network in Figure 1. Moreover, since the channels in CNNs correspond to the neurons in fully connected networks, we consider neuron-level instead of weight-level sparse training in our example. As discussed in Section 2, existing methods parameterize the four kinds of mask in the following ways:

(i) m_i = φ(s_i);  (ii) m_i = ψ(s_i);  (iii) m_i = g(s_i, ε);  (iv) m_i ~ Bernoulli(s_i),

where the function φ(s_i) is binary, e.g., a step function; ψ(s_i) is a continuous function; and g(s_i, ε) is differentiable w.r.t. s_i. All existing methods estimate the gradient of the loss ℓ(ŷ, y) w.r.t. s_i based on the chain rule, which can be formulated into the unified form below. Specifically, taking the pruned neuron x_3 in Figure 1 as an example, with x̂_3 = m_3 · σ(w_{:,3}ᵀ x_in) denoting its masked output, the gradient is calculated as

∂ℓ(ŷ, y)/∂s_3 = [∂ℓ(ŷ, y)/∂x̂_3]_(a) · σ(w_{:,3}ᵀ x_in) · ∂m_3/∂s_3.   (1)

Existing parametric methods developed different ways to estimate ∂m_3/∂s_3. For cases (ii) and (iii), the gradients are well-defined and thus can be calculated directly. STE is used to estimate the gradient in case (i) [45]. For case (iv), [47,40,50] adopt STE and Gumbel-Softmax. In Eqn. (1), the term (a) is always nonzero, especially when x̂_3 is followed by BN. Hence, we can see that even for the pruned neuron x_3, the gradient ∂m_3/∂s_3 can be nonzero in all four cases. This means the backward propagation has to go through all the neurons/channels, leading to dense computation. Finally, we can see from Eqn. (1) that the forward propagation in existing methods cannot be completely sparse either. Although w_{:,3} x_in can be computed sparsely, since in general x_in could be a sparse tensor of a layer with some channels pruned, we still need to calculate it for each neuron via forward propagation in order to evaluate the right-hand side of Eqn. (1). Thus, even if carefully implemented, the computational cost of forward propagation can only be reduced to ρ × 100% instead of ρ² × 100% as in inference. That is why we argue that existing methods need dense computation at least in backward propagation, and hence cannot speed up the training process effectively in practice (a toy illustration is given below).
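To make the pathology concrete, here is a toy PyTorch illustration, ours rather than the paper's, showing that with a straight-through estimator the gradient w.r.t. s is generally nonzero even when the hard mask is 0, so the backward pass cannot skip the pruned unit.

```python
# Toy illustration: with a straight-through estimator, the gradient w.r.t.
# the structure parameter s is nonzero even when the neuron is pruned
# (m = 0), so the backward pass cannot skip pruned units.
import torch

s = torch.tensor([0.2], requires_grad=True)   # hard mask will be 0 (pruned)
x_hat = torch.tensor([1.5])                   # pre-mask activation w^T x_in

hard = (s > 0.5).float()                      # m = 0 here
m = s + (hard - s).detach()                   # STE: hard forward, identity backward

loss = (m * x_hat).pow(2).sum() + 2.0 * (m * x_hat).sum()
loss.backward()
print(hard.item(), s.grad.item())             # mask is 0, but grad is 3.0 != 0
```

Note also that computing this gradient requires the pre-mask activation x_hat, i.e., a forward pass through the pruned neuron.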
Remark 2. The authors of GrowEfficient [47] confirmed that they also calculate the gradient of q_c w.r.t. s_c in their Eqn. (6) via STE even if q_c = 0, and thus need dense backward propagation.

Channel-level Completely Sparse Neural Network Training

Below, we present our sparse neural network training framework and the efficient training algorithm.

Framework of Channel-level Sparse Training

Given a convolutional network f(x; w), let {F_c : c ∈ C} be the set of filters, with C being the set of indices of all the channels. To parameterize the network structure, we associate each F_c with a binary mask m_c, which is an independent Bernoulli random variable. Thus, each channel is computed as

x_out,c = m_c · (x_in * F_c),

with * being the convolution operation. Inspired by [50], to avoid problems such as gradient vanishing, we parameterize m_c directly by the probability s_c, i.e., m_c equals 1 and 0 with probabilities s_c and 1 − s_c, respectively. Thus, we can control the channel size by the sum of the s_c. Following [50], we can formulate channel-level sparse network training into the following framework:

min_{w, s}  E_{m~p(·|s)} L(w, m),  where  L(w, m) = (1/N) Σ_{i=1}^{N} ℓ(f(x_i; w, m), y_i),
s.t.  1ᵀs = K,  s ∈ [0, 1]^{|C|},   (2)

where {(x_i, y_i)}_{i=1}^{N} is the training dataset, w is the weights of the original network, f(·; ·, ·) is the pruned network, and ℓ(·, ·) is the loss function, e.g., the cross-entropy loss. K = ρ|C| controls the remaining channel size, with ρ being the remaining ratio of the channels.

Discussion. We would like to point out that although our framework is inspired by [50], our main contribution is the efficient solver comprised of completely sparse forward/backward propagation for Problem (2). Moreover, our framework can prune the weights in fully connected layers as well, since we can associate each weight with an independent mask.

Completely Sparse Training with Variance Reduced Policy Gradient

We now present our completely sparse training method, which solves Problem (2) via completely sparse forward and backward propagation. The key idea is to separate the training iteration into a filter update and a structure parameter update, so that the sparsity can be fully exploited.

Filter Update via Completely Sparse Computation

It is easy to see that the computation of the gradient w.r.t. the filters can be sparsified completely. To prove this point, we only need to clarify the following two facts, which the code sketch after this derivation also verifies numerically:

• We do not need to update the filters corresponding to the pruned channels. Consider a pruned channel c, i.e., m_c = 0. Due to the chain rule, we have

∂ℓ/∂F_c = (∂ℓ/∂x_out,c) · (∂x_out,c/∂F_c) = (∂ℓ/∂x_out,c) · m_c · ∂(x_in * F_c)/∂F_c = 0,

where the last equality holds since m_c = 0 (equivalently, x_out,c ≡ 0 regardless of F_c). This indicates that the gradient w.r.t. the pruned filter F_c is always 0, and thus F_c does not need to be updated.

• The error cannot pass through the pruned channels via backward propagation. Consider a pruned channel c, and denote its output before masking as x̂_out,c = x_in * F_c. Then the error propagating through this channel is

∂ℓ/∂x̂_out,c = (∂ℓ/∂x_out,c) · m_c = 0.

This demonstrates that, to calculate the gradient w.r.t. the unpruned filters, the backward propagation does not need to go through any pruned channels. Therefore, the filters can be updated via completely sparse backward propagation.
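The short PyTorch check below, our own illustration, confirms both facts numerically: filters of pruned output channels receive exactly zero gradient.

```python
# Sketch verifying the derivation above: filters of pruned channels
# (m_c = 0) receive exactly zero gradient, so backprop can skip them.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 4, 3, padding=1)
m = torch.tensor([1.0, 0.0, 1.0, 0.0])        # channels 1 and 3 pruned

x = torch.randn(2, 3, 8, 8)
out = conv(x) * m.view(1, -1, 1, 1)
out.sum().backward()

grad_per_filter = conv.weight.grad.flatten(1).abs().sum(dim=1)
print(grad_per_filter)   # entries for the pruned filters are exactly 0
```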
Structure Parameter Update via Variance Reduced Policy Gradient

We notice that the policy gradient estimator (PGE) can estimate the gradient via forward propagation, avoiding the pathology of chain-rule-based estimators discussed in Section 3. For brevity, we write L(w, m) as L(m), since w can be viewed as a constant here. The objective can be written as

Φ(s) = E_{m~p(·|s)} L(m),

which can be optimized using gradient descent,

s^{t+1} = s^t − η ∇_s Φ(s^t),

with learning rate η. One can obtain a stochastic unbiased estimate of the gradient ∇Φ(s) using PGE:

∇_s Φ(s) ≈ L(m) ∇_s ln p(m|s),  m ~ p(·|s),   (3)

leading to the policy gradient method, which may be regarded as a stochastic gradient descent algorithm:

s^{t+1} = s^t − η L(m) ∇_s ln p(m|s).

In Eqn. (3), L(m) can be computed via completely sparse forward propagation, and the computational cost of ∇_s ln p(m|s) = (m − s)/(s(1 − s)) (element-wise) is negligible; therefore PGE is computationally efficient. However, in accordance with the empirical results reported in [36,17], we found that standard PGE suffers from high variance and does not work in practice. Below we develop a variance reduced policy gradient estimator (VR-PGE), starting from a theoretical analysis of the variance of PGE. The variance of PGE is

Var(L(m) ∇_s ln p(m|s)),

which can be large because L(m) is large. Mean field theory [39] indicates that, while L(m) can be large, the term L(m) − L(m′) is small when m and m′ are two independent masks sampled from the same distribution p(m|s) (see the appendix for details). This means that we may consider the following variance reduced preconditioned policy gradient estimator:

(L(m) − L(m′)) · H_α(s) ∇_s ln p(m|s),  m, m′ ~ p(·|s),   (4)

where H_α(s) is a specific diagonal preconditioning matrix with entries (s_c(1 − s_c))^α, α ∈ (0, 1); equivalently, with • denoting the element-wise product, the preconditioned score is (s • (1 − s))^α • ∇_s ln p(m|s). It plays the role of an adaptive step size, and it is shown in the appendix that this term reduces the variance of the stochastic PGE term ∇_s ln p(m|s). Thus Φ(s) can be optimized via

s^{t+1} = s^t − η (L(m) − L(m′)) H_α(s^t) ∇_s ln p(m|s^t).   (5)

In our experiments, we set α to 1/2 for our estimator VR-PGE. The theorem below, proved in the appendix, demonstrates that VR-PGE has bounded variance.

Theorem 1. Under the assumptions stated in the appendix (Properties 1 and 2), for α ∈ [1/2, 1), the variance of the VR-PGE estimator in Eqn. (4) is bounded for any s.

Algorithm 1: Completely Sparse Neural Network Training
Input: target remaining ratio ρ, a dense network w, the step size η, and the parameter α in (4).
1: Initialize w, let s = ρ1.
2: for each epoch do
3:   for each minibatch do
4:     Sample two independent masks m, m′ ~ p(·|s).
5:     Compute L(m) and L(m′) via two completely sparse forward passes; compute the gradient w.r.t. the unpruned filters via a completely sparse backward pass, and estimate the gradient w.r.t. s via Eqn. (4).
6:     Update s and w, projecting s back onto the constraint set of Problem (2).
7:   end for
8: end for
9: return A pruned network w • m by sampling a mask m from the distribution p(m|s).

Algorithm 1 is essentially a projected stochastic gradient descent equipped with the efficient gradient estimators above. The projection operator in Algorithm 1 can be computed efficiently using Theorem 1 of [50]; a schematic implementation of one structure parameter update is sketched at the end of this section.

Discussion. In our algorithm, benefiting from the constraint on s, the channel size of the neural network during training can be strictly controlled. This is in contrast to GrowEfficient [47], which utilizes a regularizer to control the model size and can encounter situations where the model size drifts far from the desired value. The latter places larger demands on GPU memory and carries more risk that memory usage may explode, especially when sparse learning is used to explore larger models. Moreover, our forward and backward propagations are completely sparse, i.e., they do not need to go through any pruned channels. Therefore, the computational cost of each training iteration can be roughly reduced to ρ² × 100% of that of the dense network.
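Below is a schematic implementation of one structure parameter update of Algorithm 1. This is our own sketch, not the authors' released code: loss_fn is assumed to run one completely sparse forward pass for a given mask, the entries of H_α are taken as (s(1 − s))^α (consistent with the variance analysis in the appendix), and the projection onto {s : 1ᵀs = K, 0 ≤ s ≤ 1} is done by bisection in the spirit of Theorem 1 of [50].

```python
# Schematic single structure-parameter update of Algorithm 1 (our sketch,
# not the authors' released code). Assumptions: loss_fn(mask) runs one
# completely sparse forward pass and returns the scalar minibatch loss;
# H_alpha has diagonal entries (s(1-s))**alpha. s is a plain tensor:
# no autograd is needed for it.
import torch

def project_capped_simplex(s, K, iters=50):
    # Find lam such that sum(clamp(s - lam, 0, 1)) = K by bisection:
    # the sum is monotonically decreasing in lam.
    lo, hi = (s - 1.0).min(), s.max()
    for _ in range(iters):
        lam = (lo + hi) / 2
        if (s - lam).clamp(0.0, 1.0).sum() > K:
            lo = lam          # sum too large -> increase lam
        else:
            hi = lam
    return (s - lam).clamp(0.0, 1.0)

def vr_pge_step(loss_fn, s, K, eta=0.1, alpha=0.5, eps=1e-6):
    sc = s.clamp(eps, 1.0 - eps)
    m1 = torch.bernoulli(sc)               # two independent masks ->
    m2 = torch.bernoulli(sc)               # two sparse forward passes
    delta = loss_fn(m1) - loss_fn(m2)      # L(m) - L(m'), a scalar
    score = (m1 - sc) / (sc * (1.0 - sc))  # grad_s of ln p(m|s)
    h = (sc * (1.0 - sc)) ** alpha         # diagonal of H_alpha(s)
    g = delta * h * score                  # VR-PGE estimate, Eqn. (4)
    return project_capped_simplex(s - eta * g, K)
```

The weight update in the same iteration reuses the subnetwork defined by m1 together with standard backpropagation restricted to the unpruned filters.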
Experiments

In this section, we conduct a series of experiments to demonstrate the performance of our method. We divide the experiments into five parts. In part one, we compare our method with several state-of-the-art methods on CIFAR-10 [19] using VGG-16 [38], ResNet-20 [13] and WideResNet-28-10 [48] to directly showcase the superiority of our method. In part two, we compare directly with the state-of-the-art method GrowEfficient [47], especially in extremely sparse regions, on two high-capacity networks, VGG-19 [38] and ResNet-32 [13], on CIFAR-10/100 [19]. In part three, we conduct experiments on the large-scale dataset ImageNet [4] with ResNet-50 [13] and MobileNetV1 [16] and compare with GrowEfficient [47] across a wide sparsity region. In part four, we present the training computational time as a supplement to the conceptual train-cost savings, to justify the applicability of the sparse training method in practice. In part five, we present further analysis on epoch-wise train-cost dynamics and an experimental justification of the variance reduction of VR-PGE. Due to space limitations, we postpone the experimental configurations, the calculation schemes for train-cost savings and training computational time, and additional experiments to the appendix.

VGG-16, ResNet-20 and WideResNet-28-10 on CIFAR-10

Table 2 presents Top-1 validation accuracy, parameters, FLOPs and train-cost savings in comparison with the channel pruning methods L1-Pruning [22], SoftNet [14], ThiNet [29] and Provable [23], and the sparse training method GrowEfficient [47]. SoftNet can train from scratch but requires completely dense computation. The other pruning methods all require pretraining of a dense model and multiple rounds of pruning and finetuning, which makes them slower than vanilla dense model training. Therefore, the train-cost savings of these methods are below 1× and thus shown as ("-") in Table 2. GrowEfficient [47] is a recently proposed state-of-the-art channel-level sparse training method showing train-cost savings compared with dense training. As described in Section 3, GrowEfficient features a completely dense backward and a partially sparse forward pass, limiting its train-cost saving to at most 3/2. By contrast, the train-cost saving of our method is not limited by any such constraint. The details of how train-cost savings are computed can be found in the appendix. Table 2 shows that our method generally exhibits better performance in terms of validation accuracy, parameters and particularly FLOPs. In terms of train-cost savings, our method shows at least a 1.85× speed-up against GrowEfficient [47] and up to a 9.39× speed-up against dense training.

Wider Range of Sparsity on CIFAR-10/100 on VGG-19 and ResNet-32

In this section, we explore sparser regions of training efficiency to present a broader comparison with the state-of-the-art channel sparse training method GrowEfficient [47]. We plot eight figures demonstrating the relationships between the Top-1 validation accuracy, FLOPs and train-cost savings. We find that our method generally achieves higher accuracy under the same FLOPs settings. Notably, the train-cost savings of our method are drastically higher than those of GrowEfficient [47], reaching up to 58.8× when the sparsity approaches 1.56% on ResNet-32 on CIFAR-100, while the speed-up of GrowEfficient is limited by 3/2.

Table 3: Comparison with the channel pruning methods L1-Pruning [22], SoftNet [14], Provable [23] and the channel sparse training method GrowEfficient [47] on ImageNet-1K.

ResNet-50 and MobileNetV1 on ImageNet-1K

In this section, we present the performance boost obtained by our method on ResNet-50 and MobileNetV1 on ImageNet-1K [4].
Our method finds a model with 76.0% Top-1 accuracy, 48.2% of the parameters and 46.8% of the FLOPs, beating all compared state-of-the-art methods. The train-cost saving comes to 1.60× and is not prominent, due to the accuracy constraint needed to match the compared methods. We therefore impose a harder limit on the channel size and present sparser results in the same Table 3, reaching up to a 7.36× speed-up while still preserving 69.3% Top-1 accuracy. For the already compact model MobileNetV1, we plot two figures in Figure 3 comparing with GrowEfficient [47]. We find that our method is much more stable in sparse regions and obtains much higher train-cost savings.

Actual Training Computational Time Testing

In this section, we provide the actual training computational time on VGG-19 and CIFAR-10. The GPU used in the test is an RTX 2080 Ti and the deep learning framework is Pytorch [33]. The intent of this section is to justify the feasibility of our method in reducing actual computational time, rather than staying at conceptual training FLOPs reduction. The computational time cost is measured by wall-clock time, focusing on forward and backward propagation. We present the training computational time in Table 4 with varying sparsity as in Figure 2. It shows that the computational time savings increase steadily with the sparsity. We also notice a gap between the savings in FLOPs and in computational time. The gap comes from the difference between FLOPs and actual forward/backward time. More specifically, forward/backward time is slowed down by data-loading processes and is generally affected by hardware latency and throughput, network architecture, etc. In extremely sparse regions, the pure computation of the sparse networks occupies only a small fraction of the forward/backward time, and the cost of data management and hardware latency dominates the wall-clock time. Despite this gap, it can be expected that our train-cost savings will translate better into real speed-up when exploring large models, where the pure computation dominates the forward/backward time; this promises a bright future for making the training of otherwise infeasibly large models practical.

Further Analysis

[Epoch-wise Train-cost Dynamics of the Sparse Training Process] We plot the train-cost dynamics in Figure 3. The vertical axis is the ratio of train-cost to dense training, the inverse of the train-cost saving. This demonstrates the huge difference between our method and GrowEfficient [47].

[Experimental Verification of the Variance Reduction of VR-PGE against PGE] We plot the mean of the variance of the gradients of channels from different layers. The model checkpoint and input data are selected randomly. The gradients are calculated in two ways, VR-PGE and PGE. From the rightmost graph of Figure 3, we find that VR-PGE reduces the variance significantly, by up to 3 orders of magnitude. A toy reproduction of this effect is sketched below.
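The variance gap can be reproduced qualitatively with a small Monte Carlo experiment. The snippet below is our own toy illustration with a synthetic loss, not the paper's measurement code; it contrasts the per-coordinate variance of plain PGE with that of VR-PGE (α = 1/2).

```python
# Toy Monte Carlo check (synthetic loss, our own illustration) of the
# variance gap between PGE and VR-PGE.
import torch

torch.manual_seed(0)
d = 64
s = torch.full((d,), 0.3)
w = torch.randn(d)

def L(m):                                  # synthetic "loss" of a mask
    return (w * m).sum() ** 2 / d + 5.0    # large offset: L(m) is large

def score(m):
    return (m - s) / (s * (1 - s))

pge, vrpge = [], []
h = (s * (1 - s)).sqrt()                   # H_alpha with alpha = 1/2
for _ in range(2000):
    m1, m2 = torch.bernoulli(s), torch.bernoulli(s)
    pge.append(L(m1) * score(m1))
    vrpge.append((L(m1) - L(m2)) * h * score(m1))

print(torch.stack(pge).var(dim=0).mean())    # large
print(torch.stack(vrpge).var(dim=0).mean())  # orders of magnitude smaller
```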
Conclusion

This paper proposes an efficient sparse neural network training method with completely sparse forward and backward passes. A novel gradient estimator named VR-PGE is developed for updating the structure parameters, which estimates the gradient via two sparse forward propagations. We theoretically prove that VR-PGE has bounded variance. In this way, we can separate the weight and structure updates in training, making the whole training process completely sparse. Empirical results demonstrate that the proposed method can significantly accelerate the training process of DNNs in practice. This enables us to explore larger-sized neural networks in the future.

Supplemental Material: Efficient Neural Network Training via Forward and Backward Propagation Sparsification

This appendix is divided into four parts:
1. Section A gives the detailed proof of Theorem 1 and discusses the convergence of our method.
2. Section B presents the experimental configurations of this work.
3. Section C presents the calculation schemes for train-cost savings and training computational time.
4. Section D discusses the potentials and limitations of this work.

A Proof of Theorem 1

A.1 Properties of Overparameterized Deep Neural Networks

Before giving the detailed proof, we would like to present the following two properties of overparameterized deep neural networks, which are implied by the latest studies based on mean field theory. We empirically verify these properties in this section and adopt them as assumptions in our proof.

Property 1. For two independent masks m, m′ ~ p(·|s), the quantity V(s) := E_{m,m′}(L(m) − L(m′))² is small (in particular, much smaller than E_m L²(m)).

Property 2. Flipping a single component of the mask changes the loss by a negligible amount; consequently, the conditional counterpart V_max(s) := max_{j∈C} max_{b∈{0,1}} E_{m,m′}[(L(m) − L(m′))² | m_j = b] remains close to V(s).

The mean-field-theory-based studies [39,7] proved that discrete deep neural networks can be viewed as sampling neurons/channels from continuous networks according to certain distributions. As the numbers of neurons/channels increase, the output of the discrete networks converges to that of the continuous networks (see Theorem 3 in [39] and Theorem 1 in [7]). Although standard neural networks do not have the scaling operator of [39,7] for computing the expectation, the effect of this difference is largely eliminated by the batch normalization layers. The subnetworks m and m′ here can be roughly viewed as sampled from a common continuous network. Therefore, L(m) − L(m′) is always small; this is why Property 1 holds. In the mean-field-based studies [39,7], the output of a neuron/channel is modeled as an expectation of a weighted sum of the neurons/channels in the previous layer w.r.t. a certain distribution. Therefore, the effect of flipping one component of the mask on this expectation is negligible, which is why Property 2 holds.

A.2 Detailed Proof

Proof. In this proof, we denote (L(m) − L(m′)) H_α(s) ∇_s ln p(m|s) by G_α(m, m′|s). Note that the total variance is

Var(G_α(m, m′|s)) = E_{m~p(·|s)} E_{m′~p(·|s)} ‖G_α(m, m′|s)‖² − ‖E_{m~p(·|s)} E_{m′~p(·|s)} G_α(m, m′|s)‖²,

so we only need to prove that the term E_m E_{m′} ‖G_α(m, m′|s)‖² is bounded. Let m_{−j} and s_{−j} be all the components of m and s except the j-th component, j ∈ C. The j-th component of G_α(m, m′|s) is

G_{α,j}(m, m′|s) = (L(m) − L(m′)) · (s_j(1 − s_j))^α · (m_j − s_j)/(s_j(1 − s_j)).

Conditioning on the value of m_j and applying Property 2, and noting that E[(m_j − s_j)²] = s_j(1 − s_j), we obtain

E_m E_{m′} G_{α,j}(m, m′|s)² ≤ (s_j(1 − s_j))^{2α−1} V_max(s).

Summing over j ∈ C gives

E_m E_{m′} ‖G_α(m, m′|s)‖² ≤ Σ_{j∈C} (s_j(1 − s_j))^{2α−1} V_max(s).

Thus, when α ∈ [1/2, 1), we have E_m E_{m′} ‖G_α(m, m′|s)‖² ≤ |C| V_max(s), since the factor (s_j(1 − s_j))^{2α−1} ≤ 1 whenever 2α − 1 ≥ 0. Therefore, from Properties 1 and 2, we can see that the variance is bounded for any s.

Remark 3. The estimates above indicate that H_α(s) is introduced to reduce the variance of the stochastic PGE term ∇_s ln p(m|s). Without H_α(s) (i.e., α = 0), the bound on the total variance would instead contain the factors 1/(s_j(1 − s_j)). Because of the sparsity constraint, many s_j are close to 0; hence the total variance in that case could be very large.

Remark 4. Our preconditioning matrix H_α(s) plays the role of an adaptive step size. The hyperparameter α can be used to tune its effect on variance reduction: for a large variance of ∇_s ln p(m|s), one can use a large α. In our experiments, we find that simply letting α = 1/2 works well.
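For completeness, the score function used throughout the proof follows directly from the Bernoulli likelihood; the short derivation below is standard and is included as a worked step.

```latex
% Worked derivation of the score function used above. For independent
% Bernoulli masks, p(m|s) = \prod_{j \in C} s_j^{m_j} (1 - s_j)^{1 - m_j}, hence
\ln p(m|s) = \sum_{j \in C} \left[ m_j \ln s_j + (1 - m_j)\ln(1 - s_j) \right],
\qquad
\frac{\partial \ln p(m|s)}{\partial s_j}
  = \frac{m_j}{s_j} - \frac{1 - m_j}{1 - s_j}
  = \frac{m_j - s_j}{s_j (1 - s_j)} .
% With the diagonal entries of H_\alpha(s) taken as (s_j(1 - s_j))^\alpha and
% \alpha = 1/2, the j-th estimator component is
% (L(m) - L(m')) (m_j - s_j) / \sqrt{s_j (1 - s_j)},
% whose second moment stays controlled since E[(m_j - s_j)^2] = s_j(1 - s_j).
```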
A.3 Convergence of Our Method

For the weight update, convergence is guaranteed since we use standard stochastic gradient descent with the gradient calculated via backward propagation. For the parameter $s$, as stated in Section 4.2.2, we update it with $\Delta s(m, m'|s)$. Therefore, $\Delta s(m, m'|s)$ is an unbiased gradient estimator associated with an adaptive step size; i.e., our VR-PGE is a standard preconditioned stochastic gradient descent method, and its convergence is thus guaranteed.

A.4 Experiments Verifying Properties 1 and 2 in A.1

Figure 4 presents the values of $\mathbb{E}_{m \sim p(\cdot|s)} L^2(m)$, $V(s)$ and $V_{\max}(s)$ during the training process of ResNet-32 on CIFAR-10. We can see that $V(s)$ and $V_{\max}(s)$ remain very close during the whole training process and are smaller than $\mathbb{E}_{m \sim p(\cdot|s)} L^2(m)$ by four orders of magnitude. This verifies Properties 1 and 2.

B Experimental Configurations

The experimental configurations follow [34,20,27,51]. The channels of ResNet-32 for the CIFAR experiments are doubled, following the same practice as [42].

C Calculation Schemes

C.1 Train-cost Savings

The train-cost of vanilla dense training consists of two parts: in forward propagation, computing the loss, and in backward propagation, computing the gradients of the weights and the gradients of the activations of the previous layers. The FLOPs of backward propagation are about 2-3 times those of forward propagation [2]. In the following calculation, we compute the FLOPs of forward propagation concretely and, for simplicity, take the FLOPs of backward propagation to be 2 times those of forward propagation.

[GrowEfficient] The forward propagation of the dense network costs $f_D$ FLOPs. The forward propagation of GrowEfficient is partially sparse, with FLOPs $f_S$, and its backward propagation is dense. Therefore the train-cost saving is
$$\frac{f_D + 2f_D}{f_S + 2f_D} = \frac{3}{2 + f_S/f_D},$$
upper-bounded by $\tfrac{3}{2}$.

[Ours] The forward propagation of the dense network costs $f_D$ FLOPs. Our forward and backward propagations are totally sparse: the FLOPs of forward propagation are $f_S$, the FLOPs of backward propagation are $2f_S$, and the forward propagation has to be computed twice. Therefore the train-cost saving is
$$\frac{f_D + 2f_D}{2f_S + 2f_S} = \frac{3}{4 f_S/f_D}.$$
In fact, $f_S/f_D$ is roughly equal to $\rho^2$, leading to drastically higher train-cost savings.

C.2 Train-computational Time

The calculation of the train-computational time focuses on the forward and backward propagation of the dense/sparse networks. For both the dense and the sparse networks, we sum up the computation time of all forward and backward propagations in the training process as the train-computational time. The detailed time cost is presented in Table 5; significant speed-ups in computational time are achieved.
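The two closed-form savings derived in C.1 reduce to simple arithmetic in the channel remaining ratio rho; the helper below (our own naming, using the approximation f_S/f_D ≈ rho^2 stated above) evaluates both.

```python
def traincost_saving(rho):
    """Train-cost saving vs. dense training, with f_S/f_D ~= rho**2.

    GrowEfficient: sparse forward (f_S) + dense backward (2 f_D)
                   -> saving 3 / (2 + r), capped at 1.5x.
    Ours:          two sparse forwards (2 f_S) + sparse backward (2 f_S)
                   -> saving 3 / (4 r).
    """
    r = rho ** 2
    return 3.0 / (2.0 + r), 3.0 / (4.0 * r)

for rho in (0.5, 0.25, 0.1):
    grow, ours = traincost_saving(rho)
    print(f"rho={rho:4.2f}  GrowEfficient {grow:.2f}x  ours {ours:.2f}x")
```

At rho = 0.25, for instance, GrowEfficient saturates near its 1.5x cap while the fully sparse scheme reaches about 12x.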
D Potentials and Limitations of This Work

[On Computational Cost Saving] Although our method needs two forward propagations in each iteration, it still achieves significant computational cost savings. The reason is that our forward and backward passes are completely sparse, with a computational complexity of roughly $\rho^2 \times 100\%$ of that of conventional training algorithms, where $\rho$ is the remaining ratio of the channels.

[On Exploring Larger Networks] Regarding the potential of our method in exploring larger networks, we would like to clarify the following three points: 1. The memory cost of the structure parameters $s$ is negligible compared with the original weights $w$, as each filter is associated with only one structure parameter; therefore $s$ hardly increases the total memory usage. 2. Although our method needs to store the parameters of the full model, this does not hinder us from exploring larger networks. The reason is that, in each iteration, we essentially perform forward and backward propagation only on the sparse subnetwork. More importantly, we find that reducing the frequency of subnetwork sampling, e.g., sampling a new subnetwork every 50 iterations, does not affect the final accuracy. In this way, we can store the parameters of the full model in CPU memory, keep the current subnetwork on the GPU, and synchronize the parameter updates to the full model only when a new subnetwork needs to be resampled (see the sketch after this section). Hence, our method has great potential for exploring larger deep neural networks. We leave such engineering implementations as future work, and we welcome engineers in the community to implement our method more efficiently. 3. In exploring larger networks, the channel remaining ratio $\rho$ can be much smaller than in the experiments of the main text. Note that our method reduces the computational complexity to $\rho^2 \times 100\%$ of that of the full network, so in this scenario the potential of our method is stimulated even further. We leave this evaluation as future work, after the more efficient implementation discussed above.
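Below is a schematic sketch of the engineering idea in point 2, under assumed names (sample_mask, extract_subnet, and merge_back are hypothetical helpers, not the authors' code) and assuming a CUDA device: the full parameters live in CPU memory, only the sampled subnetwork is trained on the GPU, and updates are synchronized back at each resampling boundary.

```python
import torch
import torch.nn as nn

RESAMPLE_EVERY = 50  # iterations between subnetwork resamplings

full = nn.Linear(4096, 4096)              # full model lives on CPU
s = torch.full((4096,), 0.1)              # per-channel keep probabilities

def sample_mask(s):
    return torch.bernoulli(s).bool()

def extract_subnet(full, mask):
    # Copy only the kept output channels to a fresh GPU module.
    sub = nn.Linear(4096, int(mask.sum())).cuda()
    with torch.no_grad():
        sub.weight.copy_(full.weight[mask].cuda())
        sub.bias.copy_(full.bias[mask].cuda())
    return sub

def merge_back(full, sub, mask):
    # Write the trained subnetwork rows back into the CPU copy.
    with torch.no_grad():
        full.weight[mask] = sub.weight.cpu()
        full.bias[mask] = sub.bias.cpu()

mask, sub, opt = None, None, None
for step in range(200):
    if step % RESAMPLE_EVERY == 0:
        if sub is not None:
            merge_back(full, sub, mask)   # sync updates to the CPU copy
        mask = sample_mask(s)
        sub = extract_subnet(full, mask)  # only the subnetwork sits on GPU
        opt = torch.optim.SGD(sub.parameters(), lr=0.1)
    x = torch.randn(32, 4096, device="cuda")
    loss = sub(x).pow(2).mean()           # stand-in training objective
    opt.zero_grad(); loss.backward(); opt.step()
merge_back(full, sub, mask)
```

The single linear layer stands in for a full network; the same copy-in/copy-out pattern applies per layer, and the resampling period (50 iterations here, as suggested above) controls how often the host-device transfer cost is paid.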
The chemical components, action mechanisms, and clinical evidences of YiQiFuMai injection in the treatment of heart failure

YiQiFuMai injection (YQFM), derived from Shengmai Powder, is widely applied in the treatment of cardiovascular diseases, such as coronary heart disease and chronic cardiac insufficiency. YiQiFuMai injection is mainly composed of Radix of Panax ginseng C.A. Mey. (Araliaceae), Radix of Ophiopogon japonicus (Thunb.) Ker Gawl (Liliaceae), and Fructus of Schisandra chinensis (Turcz.) Baill (Schisandraceae); triterpene saponins, steroidal saponins, lignans, and flavonoids play a vital role in its potency and efficacy. Long-term clinical practice has confirmed the positive effect of YiQiFuMai injection in the treatment of heart failure, and few adverse events have been reported. In addition, the protective effect of YiQiFuMai injection is related to the regulation of mitochondrial function, anti-apoptosis, amelioration of oxidative stress, inhibition of the expression of inflammatory mediators, regulation of the expression of miRNAs, maintenance of the balance of matrix metalloproteinases/tissue inhibitors of metalloproteinases (MMP/TIMP), and anti-hypoxia.

Conventional pharmacotherapy remains the cornerstone of the treatment of heart failure, and new drugs have earned a place, such as the angiotensin receptor neprilysin inhibitor (ARNI) and the sodium glucose cotransporter two inhibitor (SGLT2i) (McDonagh et al., 2021). Although great progress has been realized in the prevention and treatment of heart failure, the overall prognosis is still poor, and the 5-year survival rate is equivalent to that of some malignant tumors (Stewart et al., 2001). Exploring an effective and thorough strategy for the treatment of heart failure remains an open task. Traditional Chinese Medicine, based on the concept of holism, differentiates syndromes and gives treatment in a multi-target and individualized way; its advantages lie in increasing exercise tolerance, improving the quality of life, elevating cardiac function, delaying myocardial remodeling, and reducing mortality and rehospitalization rates (Zhang and Li, 2014; Sun et al., 2016). Shengmai Powder, first described in the classic 'Yi Xue Qi Yuan', is composed of Radix of Panax ginseng C.A. Mey. (Araliaceae), Radix of Ophiopogon japonicus (Thunb.) Ker Gawl (Liliaceae), and Fructus of Schisandra chinensis (Turcz.) Baill (Schisandraceae), and has the effects of replenishing qi, recovering pulse, nourishing yin, and promoting body fluid production (Zhang, 1978). It is commonly applied in the treatment of heart failure, coronary heart disease, hypertension, and viral myocarditis (Wu, 1997). Chinese patent medicines derived from Shengmai Powder include YiQiFuMai injection (YQFM), Shengmai injection, Shenmai injection, Shengmai San, Shengmai Yin, Shengmai capsule, etc. Studies have demonstrated the cardioprotective effects of Shengmai-related formulas, including improving cardiac function, ameliorating ventricular remodeling, suppressing inflammation, reducing collagen deposition, and inhibiting apoptosis (Yin et al., 2020). YQFM, the product of Traditional Chinese Medicine combined with modern pharmaceutical technology, not only retains the efficacy of Shengmai Powder but also takes effect more quickly (Du et al., 2021). At present, it is the only traditional Chinese medicine cardiotonic powder injection approved for marketing by the state (Fu et al., 2020). YQFM is mainly comprised of Radix of Panax ginseng C.A. Mey. (Araliaceae), Radix of Ophiopogon japonicus (Thunb.)
Ker Gawl (Liliaceae), and Fructus of Schisandra chinensis (Turcz.) Baill (Schisandraceae). Modern pharmacological studies have shown that ginsenosides, as the main active substances of Radix of Panax ginseng C.A. Mey. (Araliaceae), effectively inhibit myocardial hypertrophy, improve myocardial ischemia, promote vascular regeneration and inhibit apoptosis. Research has found that Radix of Ophiopogon japonicus (Thunb.) Ker Gawl (Liliaceae) protects the cardiovascular system by resisting myocardial ischemia, arrhythmia and thrombosis and by improving microcirculation (Fan et al., 2020). Studies have shown that Fructus of Schisandra chinensis (Turcz.) Baill (Schisandraceae) acts on various signaling pathways to protect myocardial cells from inflammation, apoptosis, oxidative stress, atherosclerosis and other adverse effects. Therefore, YQFM can improve heart function and alleviate heart failure by reducing myocardial ischemia-reperfusion injury, counteracting oxidative stress, regulating ventricular remodeling, and reducing the release of inflammatory factors. In clinical practice, it is mainly used for the treatment of heart failure, coronary heart disease, angina pectoris and other cardiovascular diseases. In 2007, the China Food and Drug Administration approved YQFM for the treatment of cardiovascular diseases, including coronary heart disease, exertional angina pectoris and chronic cardiac insufficiency. YQFM is also recommended for acute exacerbation of heart failure in the 'expert consensus on diagnosis and treatment of chronic heart failure with integrated traditional Chinese and Western medicine' (Chinese Association of Integrative Medicine, 2016). Research has shown that YQFM significantly reduced the level of N-terminal pro-B-type natriuretic peptide (NT-proBNP), improved cardiac function, and relieved symptoms and signs in patients with acute heart failure. This review summarizes the components, mechanisms and clinical evidence of YQFM in the treatment of heart failure in order to provide a theoretical basis for clinical practice.

A number of saponins in YQFM have been identified by ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS), and among them, 13 saponins were reported for the first time (Liu et al., 2018). Furthermore, Wang et al. determined the contents of fructose, glucose, sucrose, and maltose in YQFM by HPLC with evaporative light scattering and electrospray detection.

Action mechanisms of YiQiFuMai injection in the treatment of heart failure

Heart failure is a chronic and progressive disease, and myocardial remodeling is a critical factor in its initiation and progression.

Improving cardiac function

In HF mice subjected to permanent coronary artery ligation (CAL), a 2-week intervention with YQFM (0.13 g/kg, 0.26 g/kg, and 0.53 g/kg) showed that YQFM (0.53 g/kg) improved left ventricular function and ameliorated structural injury. It was reported that YQFM restrained the activity of serum lactate dehydrogenase (LDH) and creatine kinase (CK) and lowered the levels of serum malondialdehyde (MDA), amino-terminal pro-peptide of pro-collagen type III (PIIINP), NT-proBNP, and myocardial hydroxyproline (HYP). YQFM thus appears to reduce oxidative stress, suppress myocardial collagen deposition and fibrosis, and ameliorate cardiac remodeling through a blocking effect on the mitogen-activated protein kinase (MAPK) signaling pathway (Pang et al., 2017). Wistar rats were subjected to abdominal aortic coarctation to establish a chronic heart failure model, and successful modeling was indicated by LVEF% ≤ 60% at 8 weeks after the operation.
After 14 days of continuous treatment with YQFM (520, 260 and 1,040 mg/kg), the results indicated that YQFM increased the left ventricular posterior wall thickness in systole (LVPWs), the ejection fraction (EF) and the fractional shortening (FS), and reduced the left ventricular end-systolic diameter (LVESD) and the levels of serum brain natriuretic peptide (BNP) and copeptin (CPP), thereby improving cardiac function and delaying ventricular remodeling in rats.

Reducing myocardial injury

SD rats were given YQFM (0.28, 0.55 and 1.10 g/kg) via tail vein injection for 7 days, followed by intraperitoneal injection of doxorubicin (25 mg/kg) for 5 days, to establish an acute myocardial injury model. The results showed that YQFM alleviated doxorubicin-induced myocardial injury and improved cardiac function in rats by reducing the serum levels of LDH, CK, and AST, decreasing the left ventricular end-diastolic diameter (LVEDD), and elevating FS (Wang XD. et al., 2020). In vitro experiments have also confirmed the cardioprotective effects of YQFM: the application of YQFM (5 mg/ml) boosted the viability of H9c2 cells exposed to H2O2 (0.2 mmol/L, 5 h).

Improving mitochondrial function

ICR mice were treated with different concentrations of YQFM (0.13, 0.26 and 0.53 g/kg, intraperitoneally) for 2 weeks after CAL. The results showed that YQFM redressed mitochondrial dysfunction by normalizing mitochondrial morphology, increasing the mitochondrial membrane potential (Δψm), inhibiting the generation of reactive oxygen species (ROS), up-regulating the expression of mitofusin 2 (Mfn2), and reducing the phosphorylation of dynamin-related protein 1 (Drp1), which was related to reductions in NADPH oxidase 2 (NOX2), p67phox, NOX4, calcium voltage-gated channel subunit α1C (CACNA1C) and the phosphorylation of calmodulin-dependent protein kinase II (p-CaMKII).

Ameliorating myocardial apoptosis

In HF mouse models induced by CAL, after intraperitoneal injection of YQFM (0.13 g/kg, 0.26 g/kg, 0.53 g/kg) for 14 days, the levels of serum creatine kinase-MB (CK-MB), aspartate aminotransferase (AST), interleukin-6 (IL-6), troponin, myosin, and myoglobin were down-regulated, and the omentin level was elevated. The study indicated that YQFM improved left ventricular systolic function and suppressed apoptosis by boosting the expression of phosphatidylinositol 3-kinase (PI3K) and the phosphorylation of protein kinase B (Akt) and adenosine monophosphate-activated protein kinase (AMPK), and by inhibiting the phosphorylation of p38, c-Jun N-terminal kinase (JNK), and extracellular signal-regulated kinase 1/2 (ERK1/2) (Li F. et al., 2019). In in vitro experiments, compared with the control group, namely H9c2 cells injured by doxorubicin (0.3 μmol/L), the intervention of YQFM (125, 625, 3,125 μg/ml) reduced cytotoxicity, increased cell viability, inhibited the activity of LDH, elevated the adenosine triphosphate (ATP) content and restored the mitochondrial membrane potential, exerting an anti-apoptotic effect (Zeng et al., 2018). Furthermore, the application of YQFM (2.5 mg/ml) to H9c2 cells significantly boosted cell viability and ATP content in apoptotic cells induced by tert-butyl hydroperoxide, and enhanced the phosphorylation of Akt. It also ameliorated the extent of hypertrophy in H9c2 cells induced by angiotensin II (0.1 μM) and elevated the expression of atrial natriuretic peptide (ANP) mRNA.
Suppressing inflammatory mediators

In chronic heart failure models induced by ligation of the rats' left anterior descending coronary artery, after treatment with YQFM (100 mg/kg/d) for 8 weeks, UPLC-Q-TOF-MS combined with a nuclear factor kappa-B (NF-κB) luciferase reporter assay was used to analyze the potential anti-inflammatory components. It was further demonstrated that YQFM reduced the size of the myocardial infarction, improved cardiac function, and inhibited the expression of inflammatory cytokines, such as tumor necrosis factor-alpha (TNF-α), NF-κB, IL-6, and interleukin-1β (IL-1β). Eight potential anti-inflammatory components were confirmed, namely ginsenosides Rb1, Rg1, Rf, Rh1, Rc, Rb2, Ro and Rg3 (Xing et al., 2013).

Anti-hypoxia effect

To investigate the anti-hypoxia effect of the YQFM extract, an animal model of chronic intermittent hypoxia was constructed and treated with YQFM (1.4, 2.8, and 5.5 g/kg/d) for 28 days, with betaloc (0.1516 g/kg/d) serving as the positive control. The results showed that YQFM reversed endothelial cell swelling and cardiac vacuolation, improved myocardial hypoxia tolerance and attenuated myocardial damage by increasing EF and stroke volume (SV), inhibiting the activity of CK and LDH, reducing the MDA content, and boosting superoxide dismutase (SOD) (Feng et al., 2016).

Anti-oxidative effect

In ICR mice given intraperitoneal injections of isoproterenol (0.02 g/kg/d) for 3 days followed by YQFM (1.352, 0.676 and 0.338 g/kg/d), the results showed that the serum levels of MDA, CK, and LDH and the activity of myeloperoxidase (MPO) decreased, while the serum SOD level rose, indicating that YQFM exerted a marked cardioprotective effect (Wang et al., 2013).

Maintaining the balance of matrix metalloproteinases/tissue inhibitors of metalloproteinases

In Wistar rats with chronic heart failure that underwent abdominal aorta constriction, after the intervention of YQFM (520 mg/kg, 260 mg/kg, 1,040 mg/kg) for 14 days, there were significant changes in the myocardial contents of MMP-2, MMP-3, MMP-9, TIMP-1 and TIMP-2. The study reported that YQFM decreased the levels of MMP-2, MMP-3, and MMP-9, and elevated the levels of TIMP-1 and TIMP-2, thereby improving cardiac function and delaying ventricular remodeling.

Regulating the expression of miRNAs

In chronic heart failure models established by ligation of the rats' left anterior descending coronary artery, after the administration of YQFM (5 mg/kg/d, ip) for 28 days, the differential expression of microRNAs was studied via a rat miRNA microarray and bioinformatics analysis. The results showed that YQFM increased the left ventricular ejection fraction (LVEF) and left ventricular fractional shortening (LVFS), decreased the left ventricular diameter, and boosted cardiac output (CO) by down-regulating the expression of miR-219a-2-3p, miR-466c-5p, and miR-702-5p, and up-regulating the expression of miR-21-3p, miR-216b-5p, miR-381-3p, and miR-542-3p.
Clinical evidences of YiQiFuMai injection in the treatment of heart failure

Clinical trials

A clinical study involving 1,134 patients with coronary heart disease and heart failure, performed at 35 research centers, in which patients were treated with YQFM (5.2 g/d) for 14 days, revealed that adding YQFM to routine heart failure treatment contributed to reductions in Lee's heart failure score, the Minnesota heart failure quality of life score, and the cardiothoracic ratio, elevated SV, CO, EF, and FS, and decreased LVESD. Therefore, YQFM showed great efficacy in improving the cardiac pumping performance and quality of life of patients and in reversing ventricular remodeling (Sun et al., 2012). In addition, research has evaluated the clinical efficacy of YQFM combined with western medicine via stress echocardiography. A study involving 52 patients with ischemic heart failure showed that, compared with the conventional treatment group, YQFM combined with conventional treatment increased the EF and the early diastolic peak flow velocity/late diastolic peak flow velocity (E/A) value and reduced the early mitral filling velocity/early diastolic mitral annular velocity (E/e') ratio and NT-proBNP levels, indicating that YQFM could improve cardiac function (Hu et al., 2014). Two randomized controlled trials involving 60 elderly patients with chronic heart failure illustrated that YQFM combined with basic treatment effectively reduced serum NT-proBNP levels and increased EF and the 6-min walking distance (6 MWD) compared with basic treatment alone, suggesting positive effects of YQFM on cardiac function and exercise tolerance (Yang et al., 2016). In another randomized controlled trial involving 108 patients with ischemic cardiomyopathy and heart failure, the treatment group was given YQFM combined with Qiliqiangxin Capsules, while the control group was given Qiliqiangxin Capsules alone. The results showed that the application of YQFM lowered serum NT-proBNP levels and elevated EF, CO, and 6 MWD, indicating that YQFM combined with Qiliqiangxin Capsules enhanced clinical efficacy, improved clinical symptoms, and promoted the recovery of cardiac function (Jiang et al., 2018). In a study including 103 patients with chronic heart failure and atrial fibrillation, on the basis of conventional treatment, the control group was given rosuvastatin while the treatment group was given rosuvastatin and YQFM. Compared with the control group, the treatment group showed significant improvements in cardiac function and 6 MWD performance, increased FS and EF, and reductions in the NT-proBNP level, LVEDD, the recurrence rate of atrial fibrillation, and the risk of permanent atrial fibrillation (Su and Wang, 2018). In a randomized controlled trial of 118 patients with coronary heart disease and chronic heart failure, the application of YQFM combined with atorvastatin improved cardiac function and delayed ventricular remodeling by decreasing LVEDD, lowering the levels of NT-proBNP, soluble CD40 (sCD40) and soluble CD146 (sCD146), and increasing nitric oxide (NO) levels. Furthermore, in a randomized controlled study involving 40 patients who underwent cardiac valve replacement surgery, the treatment group received YQFM immediately after surgery in addition to cardiac rehabilitation, and the results showed that YQFM effectively improved exercise tolerance and 6 MWD performance.
Collectively, YQFM exerts great efficacy in the treatment of heart failure, including improving cardiac function, inhibiting ventricular remodeling, and elevating the quality of life. However, the quality of the clinical trials of YQFM in the treatment of heart failure remains relatively low. Thus, large-sample, multi-center, randomized controlled trials and real-world research are vital to provide evidence-based data (Table 3).

Meta-analyses and systematic reviews

Meta-analyses evaluating the clinical efficacy of YQFM combined with conventional western medicine in the treatment of heart failure have revealed that the intervention of YQFM increased EF, CO and 6 MWD, shortened LVEDD and LVESD, and reduced the serum levels of NT-proBNP and BNP, suggesting that, on the basis of conventional western medicine treatment, YQFM further improved cardiac function and the quality of life (Wang XL. et al., 2016; Lian et al., 2016; Zhou et al., 2016; Xiong et al., 2017; Xie and Dai, 2019; Fan et al., 2021). However, the absence of sample-size calculations, missing details concerning the western medicine treatment and endpoint indicators, unclear randomization methods, and non-uniform dosages and treatment periods of YQFM weaken the quality of the clinical evidence (Table 4).

Safety of YiQiFuMai injection in the treatment of heart failure

Li et al. retrospectively analyzed 240 patients with coronary heart disease who received YQFM, and the results showed that none of the patients had symptoms or signs of hepatic or renal injury. Only one case of pharyngeal pain with YQFM infusion was reported, and the symptom subsided gradually after YQFM was withdrawn. An analysis involving 998 patients treated with YQFM (patients >70 years old accounting for 50.69%) showed that the incidence of untoward effects was 0.2%, and most of these symptoms and signs were transient and required no additional treatment. Wang et al. analyzed the safety of YQFM in 106 elderly patients (≥80 years old) with cardiovascular diseases, and it was found that YQFM had little impact on the levels of serum alanine aminotransferase (ALT), AST, total bilirubin (TBil), and creatinine (Cr); only one case (accounting for 0.94%) reported mild palpitation and precordial discomfort. Sun et al. retrospectively analyzed the performance of YQFM in 2,476 hospitalized patients with a prescription automatic screening system. The study reported that 31 cases had adverse reactions (accounting for 1.25%), which mainly manifested as general damage and skin lesions, such as rash and pruritus (Ma et al., 2015). In in vitro experiments, YQFM not only inhibited the autonomous contraction of the isolated intestine but also suppressed the spasm of the isolated intestine triggered by acetylcholine (ACh) and histamine (His). In in vivo experiments, the blue staining rate and the levels of His and 5-hydroxytryptamine (5-HT) in mice administered low-dose YQFM were within the normal range, without evident pulmonary injury or auricular infection. By contrast, only when the mice were given 3.43 times the clinically equivalent dose of YQFM were there mild increases in the blue staining rate, the levels of His and 5-HT, and inflammation, indicating that YQFM causes few allergic reactions within a proper dosage range (Gu et al., 2018). Clinical studies have also verified that the adverse reactions of YQFM mainly correlate with inappropriate prescription, including use beyond the approved indications and dosages, unnecessary treatment, use despite contraindications, and excessive quantities of solvent.
Though YQFM shows great safety and efficacy, it is worth noting that strictly following the instructions is the key to avoiding adverse events.

Conclusion

Heart failure, the cumulative effect and endpoint of various cardiac abnormalities, eventually leads to the decline of cardiac pump function, posing a challenge for the exploration of effective strategies for the prevention and treatment of heart failure (Committee of Experts on Rational Drug Use, National Health and Family Planning Commission of the People's Republic of China and Chinese Pharmacists Association, 2019). Long-term clinical practice has verified that Traditional Chinese Medicine exerts a composite effect in multi-target and multi-link ways (Zhang RP., 2015). At present, there is plenty of fundamental research and there are many clinical trials on YQFM, covering pharmacodynamic components, pharmacological effects, clinical application and quality markers. Research has demonstrated that YQFM improved cardiac function, inhibited ventricular remodeling, exerted marked anti-inflammatory and anti-oxidative effects, and regulated mitochondrial function, thereby improving the quality of life of patients with heart failure (Du et al., 2021). YQFM is widely used in the treatment of heart failure, with definite clinical efficacy and few adverse reactions, which provides a reference for rational clinical drug use (Figure 1). However, great gaps remain concerning the dose-effect relationship, pharmacological targets and mechanisms of YQFM in the treatment of cardiovascular diseases. The multi-component and multi-target characteristics of Traditional Chinese Medicine raise the bar for the exploration of the pharmacological mechanism of YQFM in the treatment of heart failure. At present, the pharmacological effects of the different components of YQFM on heart failure remain unclear, and research has focused mainly on ginsenosides. There are few studies on the pharmacological effects, mechanisms and targets of the other two traditional Chinese medicines, Radix of Ophiopogon japonicus (Thunb.) Ker Gawl (Liliaceae) and Fructus of Schisandra chinensis (Turcz.) Baill (Schisandraceae), or of their important active components, ophiopogonins, Ophiopogon japonicus polysaccharide and schizandrin A. Moreover, in some studies the dosage of YQFM does not correspond to clinical use, resulting in a mismatch between clinical practice and basic science. In the future, research ought to reveal the targets and related signaling pathways of YQFM and differentiate its active components, so as to provide scientific guidance for the application of YQFM in clinical practice. Besides, owing to the relatively low quality of the clinical trials on YQFM in the treatment of heart failure, large-scale, multi-center, high-quality randomized controlled clinical trials and real-world studies are badly needed. Finally, although there are few reports of adverse effects of YQFM, non-standard prescription is still widespread in clinical practice. Therefore, it is necessary to improve the legal system of drug re-evaluation and post-marketing supervision to guard the safety and efficacy of drug application, so as to improve the efficacy of YQFM in the treatment of heart failure.

Author contributions

SL wrote the manuscript. YW and WZ searched and reviewed the literature. HS conceived and designed the manuscript.
Funding

This work was supported by the China Postdoctoral Science Foundation (2022M710473) and the National Key R&D Program of China (2017YFC1700400).
Structure/Function/Dynamics of Photosystem II Plastoquinone Binding Sites

Photosystem II (PSII) continuously attracts the attention of researchers aiming to unravel the riddle of its functioning and efficiency, fundamental for all life on Earth. Besides, an increasing number of biotechnological applications have been envisaged that exploit and mimic the unique properties of this macromolecular pigment-protein complex. The PSII organization and working principles have inspired the design of electrochemical water-splitting schemes and charge-separating triads in energy storage systems, as well as biochips and sensors for environmental, agricultural and industrial screening of toxic compounds. An intriguing opportunity is the development of sensor devices exploiting native or manipulated PSII complexes, or ad hoc synthesized polypeptides mimicking the PSII reaction centre proteins, as bio-sensing elements. This review offers a concise overview of the recent improvements in the understanding of the structure and function of the PSII acceptor side, with a focus on the interactions of the plastoquinone cofactors with the surrounding environment and their operational features. Furthermore, studies focused on photosynthetic protein structure/function/dynamics and computational analyses aimed at the rational design of high-quality bio-recognition elements in biosensor devices are discussed.

STRUCTURAL OVERVIEW OF THE PSII CORE COMPLEX

Photosynthetic organisms have the unique ability of entrapping and using solar energy to oxidize water to molecular oxygen through the highly structured multi-subunit protein-chlorophyll complexes of photosystem I (PSI) and photosystem II (PSII), arranged in supra-molecular assemblies embedded in the thylakoid membranes. These structured assemblies are organized in a way that ensures supreme energy conversion efficiency and the effective diffusion of small molecules (plastoquinone) interconnecting the functions of the two photosystems in a fine-tuned electron transport chain, leading to the synthesis of ATP and reducing equivalents. PSII stands out for its remarkable ability to act as a water:plastoquinone oxidoreductase, extracting electrons from water under physiological conditions. The outstanding features of PSII have motivated numerous research efforts aimed at clarifying its structural and functional relationships, which until recently were hindered by the lack of a high-resolution X-ray crystal structure. Our current understanding of the PSII structure/function/dynamics rationale has largely been acquired through comparative studies and assumptions based on the simpler and better-known purple bacterial reaction centre (bRC) [for reviews on bRC see 1,2]. In the last decade, crystal structure resolution at 2.9-3.5 Å has exponentially improved the knowledge of the atomic details of the dimeric and monomeric forms of the PSII core complex (PSIIcc) of the thermophilic cyanobacterium Thermosynechococcus elongatus [3-7]. More recently, Umena and coworkers provided the highly resolved 1.9 Å structure from Thermosynechococcus vulcanus [8].
This structure features a rich collection of atomic details, among which a cofactor arrangement made up of 35 chlorophylls, 2 pheophytins, 11 β-carotenes, 2 plastoquinones, 2 heme irons, 1 non-heme iron, 4 Mn atoms, 3-4 Ca atoms (of which 1 is in the Mn4Ca cluster), 3 Cl- ions (of which 2 are in the vicinity of the Mn4Ca cluster), and 1 (bi)carbonate ion. From the electron density map it was possible for the first time to locate all the atoms building up the Mn4CaO5 cluster and to define the precise positions of all the metal ligands and of the substrate water molecules. In this high-resolution structure, the primary plastoquinones (PQs) Q_A and Q_B occupy positions very similar to those observed in previous crystallographic studies. However, the third plastoquinone molecule observed in the structure solved by Guskov and coworkers [5], the so-called Q_C, was not observed in this last structure, casting some doubt on the real physiological function of Q_C. These structural data, together with numerous biochemical and spectroscopic methods and computational analyses, contribute to solving the enigma of the specific PSII structure/function/dynamics correlation. PSIIcc consists of about 20 different polypeptide chains, the majority in the form of trans-membrane α-helical subunits. Among these, the PsbA (D1) and PsbD (D2) subunits, with five trans-membrane helices (TMH) each, form the architecture of the PSII RC and host the major redox-active cofactors, organized in two quasi-symmetrical branches: two main chlorophyll a (Chla), PD1 and PD2, two accessory Chla, ChlD1 and ChlD2, two peripheral Chla, ChlzD1 and ChlzD2, two pheophytin a, PheoD1 and PheoD2, and two PQs, Q_A and Q_B (Fig. 1). On the stromal (cytosolic in prokaryotes) side of the complex, between the primary and secondary PQs, a non-heme iron interacting with a bicarbonate anion has also been revealed. It is electrostatically associated with the two PQs, optimizing the charge distribution and H-bonding of the Q_A-Fe-Q_B bridge and contributing to the stability of the PQs when reduced to semiquinones (SQ) [9,10]. Two antenna proteins, PsbB (CP47) and PsbC (CP43), are located in close proximity to the D1/D2 heterodimer. Their structure consists of six TMHs each and large extrinsic loops on the lumenal side. They host the majority of the PSIIcc chlorophyll molecules (16 and 13 Chla, respectively) and, besides light harvesting, are involved in the assembly and stabilization of the active conformation of the Oxygen Evolving Complex (OEC) [8,11]. Although the resolution of the PSII crystal structure at 1.9 Å provided an improved picture of the overall configuration of the OEC Mn4CaO5 cluster and of the atomic distances of the Mn-Mn and Mn-Ca interaction network, the structural changes occurring during catalysis are still under investigation. Different X-ray absorption spectroscopy techniques and quantum mechanical/molecular mechanics calculations are providing further insights into the relative position of one of the bridging oxygens in the Mn4CaO5 cluster, which is crucial for a detailed understanding of the O-O bond formation mechanism during the water oxidation reaction [12,13]. An overview of the kinetics and thermodynamics of water oxidation and new knowledge on the biogenesis and assembly of the OEC have recently been critically addressed by Vinyard and coworkers [14].
Among the major intrinsic proteins necessary for PSII function are the TMHs of PsbE and PsbF, corresponding to the α- and β-subunits of the heme-containing protein cytochrome b559 (Cyt b559), which most probably takes an active part in PSII photoprotection mechanisms. However, its involvement in PSII forward electron transport seems unlikely, despite its location close to the secondary quinone binding site [15,16]. Important molecular players of PSIIcc are also carotenoid (Car) molecules; the positions of 12 β-carotenes per PSIIcc monomer have been assigned in the PSII structure refined at 2.9 Å resolution [5]. It has been proposed that these polyisoprenoid compounds can play crucial roles in light harvesting, regulation of excitation energy transfer, quenching of Chl triplet states, singlet oxygen scavenging, and structure stabilization [17]. Crystallographic studies have also revealed up to 25 integral lipids associated with each PSIIcc monomer, including 11 monogalactosyldiacylglycerol (MGDG), 7 digalactosyldiacylglycerol (DGDG), 5 sulfoquinovosyldiacylglycerol (SQDG), and 2 phosphatidylglycerol (PG), with almost full compositional and spatial overlap between the PSII monomeric and dimeric forms [5,7,8]. The PSII lipid content and composition reflect the typical thylakoid membrane distribution and are characterized by spatial heterogeneity: the head groups have a stromal (cytoplasmic in prokaryotes) orientation for the negatively charged PG and SQDG, lumenal for DGDG, and bilateral for MGDG. PSII integral lipids behave more as a dynamic multifunctional cofactor in the overall functioning of the complex than as a simply hydrophobic surrounding matrix. Lipid molecules, in fact, actively interact with the Mn-cluster components and stabilize its assembly; they contribute to a different extent to defining the configuration of approximately 1/3 of the Chla and 3/4 of the Car molecules in PSII and, last but not least, largely contribute to the PSII monomer-monomer interactions and complex dimerization [18,19]. The integral lipids of PSIIcc also play a special role in defining the large internal cavity for PQ/PQH2 exchange, representing the ideal flexible scaffold to host the hydrophobic PQ isoprenoid residues, including the recently identified PQ molecule, Q_C. The cavity has two openings directed towards the membrane interior, approximately in the centre of the membrane thickness, defined as channel I and channel II [5]. Channel I is bigger than channel II, lodges Q_C, and opens between the TMHs of the PsbJ protein and Cyt b559, while the recently described channel II accommodates the Q_B phytyl tail and is formed by TMH-d and TMH-e of D1, TMH-a of D2, and the TMH of PsbF (β-subunit of Cyt b559).

PSII PLASTOQUINONES

The two chemically identical PQ molecules, hosted in the Q_A and Q_B binding sites of the RC heterodimer, have the distinctive roles of stabilizing primary charge separation and conducting electrons towards the photosynthetic electron transport chain, respectively. The existence of a third plastoquinone, Q_C, in the PSII core complex was supported by Cyt b559 photoreduction and redox titration experiments and was later identified by the 2.9 Å resolution crystallographic study [5,20-22]. The role of this quinone molecule in PSII function is still under debate and will be discussed below. The binding energies for Q_A, Q_B and Q_C obtained by fragment molecular orbital (FMO) calculations based on the 2.9 Å X-ray structure are -56.1 kcal/mol, -37.9 kcal/mol and -30.1 kcal/mol, respectively [23].
The FMO calculations indicated a decrease in the relative contribution of the PQ/protein matrix interactions to the total PQ interaction energy in the order Q_A > Q_B > Q_C, with a concomitant increase in the relative contribution of the surrounding lipids and cofactors (Table 1). Thus, the FMO results drew a correlation between the increasing PQ mobility and the growing relative contribution of lipid and cofactor interactions.

Plastoquinone at the Q_A Binding Site

The plastoquinone at the Q_A site interacts tightly with the RC protein matrix, does not undergo protonation, and is not exchangeable. It functions as a single-electron recipient and mediates the double reduction of the second quinone molecule in the Q_B binding pocket. In contrast, the doubly reduced and protonated Q_B moves away from its binding site and is promptly replaced by a new PQ molecule (as well as by artificial quinones or herbicides), so that this molecule is considered more a substrate than a tightly bound cofactor. Although chemically identical, Q_A and Q_B show distinguishable redox potentials, necessary for the potentiation of PSII forward electron transfer and the fine-tuning of the backward recombination reactions [24,25]. The different functional and redox properties of the PSII plastoquinone molecules are largely determined by the interactions with the surrounding environment (H-bonding, hydrophobic interfaces, π-stacking) and by the conformation of the individual binding sites and phytyl chains [4-6]. FMO quantum chemical calculations based on the 2.9 Å X-ray structure of T. elongatus pointed out that the Q_A phytyl tail is responsible for more than 60% of the total molecule interactions, and that Q_A stabilization is mainly due to interactions with amino acid residues [23] (Table 1 and Table 2). The position of the isoprenoid tail is restrained mostly by van der Waals contacts with TMH-d of D2, TMH-a of D1, PsbL and PsbT, and the accessory ChlD1 (Fig. 2). The 2.9 Å resolution structure revealed an additional stabilization of the Q_A molecule due to the interaction of the terminal unit of the isoprenoid tail with the π-electron clouds of the phenyl rings of D1-Phe52 and PsbT-Phe10 [5]. The Q_A head is fixed by two H-bonds, formed by the keto-oxygen atoms with the D2-Phe261 backbone amide group and the side-chain N of D2-His214, and by π-stacking with the D2-Trp253 indole ring, occurring in an offset-stacked manner [4,5]. The static picture of the Q_A interactions within its binding niche obtained by X-ray analyses has been further complemented by spectroscopic studies. Simultaneous 1H and 14N two-dimensional hyperfine sublevel correlation spectroscopy shed new light on the strength and orientation asymmetry of the H-bonding of the Q_A anion radical, establishing that when Q_A is in its reduced form, the H-bond involving the D2-Phe261 amide group is significantly stronger than the one attributed to D2-His214 [26,27]. Similarly, computational studies provided stronger interaction energies of Q_A with D2-Phe261, D2-Thr217 and D2-Trp253 than with D2-His214 [23] (Fig. 2, left bottom panel). An important property of the primary PQ molecule, proven in bacteria, cyanobacteria and higher plants, is the existence of the one-electron redox couple Q_A/Q_A•- in two conformations with highly different redox potentials [28]. The well-recognized ability of herbicides binding at the Q_B site to increase or decrease the susceptibility of PSII to photodamage has been related to herbicide-induced changes in the midpoint potential of Q_A [29-31]. Recent studies have strengthened a previously found correlation [32,33] between the Q_A/Q_A•- midpoint potential changes and the integrity level of the PSII donor side [28,34].

Table 1. Distribution of the interaction energy of the PSII plastoquinones within the Q_A, Q_B and Q_C binding niches, according to Hasegawa and Noguchi [23]. The relative contributions of the protein matrix and of the lipids and cofactors, respectively, are calculated as percentages of the total interaction energy of the corresponding PQ molecule. The head/tail distribution of the total PQ interaction energy, as well as of the interaction with the protein matrix and with lipids and cofactors, is calculated in a similar manner. The relative contributions of the PQ phytyl tail and head group to the total molecule interactions are also presented. The correlation between increased PQ mobility (Q_A > Q_B > Q_C) and the reduction in the interaction with the surrounding protein matrix is underlined.
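The percentage bookkeeping summarized in Table 1 is plain arithmetic over the FMO energies; the short sketch below reproduces it from the numbers quoted in this review (total binding energies of -56.1, -37.9 and -30.1 kcal/mol, and protein-matrix shares of about 100%, 82% and 56% [23]). The derived lipid/cofactor column is simply the remainder; the per-site split shown here is for illustration only.

```python
# FMO total binding energies (kcal/mol) quoted in the text [23].
totals = {"Q_A": -56.1, "Q_B": -37.9, "Q_C": -30.1}
# Quoted relative contribution of the protein matrix (%); the remainder
# is attributed to the surrounding lipids and cofactors.
protein_pct = {"Q_A": 100.0, "Q_B": 82.0, "Q_C": 56.0}

for q, total in totals.items():
    e_protein = total * protein_pct[q] / 100.0
    e_lipids = total - e_protein
    print(f"{q}: total {total:6.1f}  protein {e_protein:6.1f} "
          f"({protein_pct[q]:.0f}%)  lipids/cofactors {e_lipids:6.1f} "
          f"({100 - protein_pct[q]:.0f}%)")
```

The output makes the trend of Table 1 explicit: the protein-matrix share shrinks from Q_A to Q_C while the lipid/cofactor share grows, tracking the increasing mobility of the quinone.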
Recent studies have strengthened a previ- Table 1. Distribution of interaction energy of the PSII plastoquinone within Q A , Q B and Q C binding niche, according to Hasagawa and Noguchi [23]. The relative contribution of the protein matrix, and lipids and cofactors, respectively, are calculated as a percentage of the total interaction energy of the corresponding PQ molecule. The head/tail distribution of the total PQ interaction energy, as well as with the protein matrix and lipids and cofactors are calculated in a similar manner. The relative contribution of the PQs phytyl tail and head group for the total molecule interactions are also presented. Correlation between the increased PQ mobility (Q A >Q B >Q C ) and reduction in the interaction with surrounding protein matrix is underlined. ously found correlation [32,33] between the Q A /Q A • midpoint potential changes and PSII donor side integrity level [28,34]. The Q A low potential form is present in functional PSII reaction centres while donor-side Ca 2+ extraction or Mn cluster disassembly is associated with an increase (of approx. 150 mV) in the Q A redox potential. Müh and coworkers [35] and Cardona and coworkers [36] have recently addressed in comprehensive reviews the physiological significance of the Q A redox shift in guiding a "safe" PSII charge recombination reactions and protecting PSII from photodamage during the assembly of the RC. In addition, an up-shift of the Q A midpoint potential by ca. +38 mV was observed in T. elongatus in photo-oxidative stress conditions, when in the PSII RC the PsbA3 (D1) isoform preferentially replaces the PsbA1 (D1) protein, expressed in optimal conditions. This finding evidenced the capability of the D1 amino acid composition to influence the Q A redox potential, most probably affecting the Q A -His214(D2)-Fe-His215(D1)-Q B interaction chain [37]. The rationale of the Q A redox potential switching in photoprotection should be viewed in light of the very recent study conducted by Boussac and coworkers [38], demonstrating faster recombination reaction in PsbA3 than in PsbA1 PSII-RC, which could thwart 1 O 2 generation initiated by triplet state of the RC primary chlorophyll molecules. Plastoquinone at Q B Binding Site The site hosting the secondary quinone, Q B , is situated in a symmetry-related manner to Q A binding spot around the non-heme iron (Fig. 2, top panel). Because of its substratelike behaviour it was not always possible to assign the electron density found in the presumed Q B binding site to a PQ molecule [6]. The problem of the Q B binding site electron density inhomogeneity was surpassed by crystallization of the PSII core complex paired with a herbicide molecule [7]. However, in the 2.9 Å structure it was possible to model seven out of the nine Q B isoprenoid units [5,11]. In contrast to the secondary quinone acceptor in bRC, found in two differential positions called "proximal" and "distal" [39], PSII Q B was located in only one position, corresponding to the "proximal" one in bRC. The Q B binding site is formed mainly by the D1 RC protein residues ( Table 2). The ketooxygen atoms of the Q B head group form H-bonds with D1-Ser264 hydroxyl group and the side chain N of D1-His215. In contrast to Q A , Q B has a possibility to form a third H-bond with the backbone amide group of D1-Phe265 (2.84 Å) [5,8], (Fig. 2, centre panel). Most probably, the latter is involved in the stabilization of the SQ state of the Q B molecule during the reduction/protonation process [36]. 
The Q_B isoprenoid tail, located in the recently described PQ/PQH2 exchange channel II, is inclined relative to that of Q_A [5]. Similarly to Q_A, the contribution of the phytyl tail to the total Q_B interaction energy is calculated to be more than 60%, but, differently from Q_A, the surrounding protein matrix contributes a smaller fraction of the binding energy [23] (Table 1). Based on the interaction strengths of Q_A and Q_B with the respective amino acid residues obtained by FMO calculations, Hasegawa and Noguchi [23] have questioned the role of the H-bonds established by the PQ keto-oxygen groups in Q_A and Q_B binding. The authors observed a more substantial contribution of the non-covalent interactions of the Q_A head group with D2-Phe261 and D2-Trp253, and of the Q_B head group with D1-Phe255 and D1-Phe265, to the total PQ binding energies than of the above-mentioned H-bonding. On the other hand, the importance of the H-bonding patterns of the PSII PQs in determining their redox potentials has been unambiguously demonstrated [40,41]. The relative contribution of the PSII integral lipids and cofactors to the total Q_B interaction energy is less than 20% and is mainly limited to the PQ tail (Table 1 and Table 2). The Q_B phytyl chain is accommodated in channel II of the PSII internal cavity, connecting the Q_B binding site with the membrane PQ pool, and is in close interaction with the phytol chain of ChlD2 and the acyl chains of MGDG18 and, to a lesser extent, of SQDG4, DGDG6 and MGDG7.

The reduction/protonation mechanism of the secondary quinone has been a long-term challenge for researchers in photosynthesis. The Q_B reduction occurs in two successive photoreactions including the formation of SQ intermediates. The first reaction is fast, occurring within 0.2-0.4 ms, and generates an SQ tightly bound in the Q_B pocket, which can accept a second electron in a somewhat slower process (0.6-0.8 ms) [42]. Experimental data suggested that Q_B•- becomes protonated before the second electron transfer occurs, thus avoiding the formation of the highly charged and unstable Q_A-Q_B(2-) species [35,36], and the protonation occurs more promptly if Q_A is in its reduced state [10]. The resulting plastoquinol is characterized by low affinity for the Q_B binding niche and is promptly released into the membrane. In bRC the protonation process has been well established: the first H+ necessary to neutralize the charge on the distal (in relation to the non-heme Fe position) keto-oxygen atom of the SQ is provided by the Ser L223 amino acid through Asp L213, while the second one is released by the Glu L212 residue. In plant and cyanobacterial PSII RCs the details of this phenomenon are still under investigation. Similarly to bRC, it has been supposed that the first protonation is accomplished by D1-Ser264 via D1-His252, while different possibilities have been explored for the mechanism of the second protonation [35,36,43]. Recently, by a large-scale quantum mechanical/molecular mechanical approach based on the 1.9 Å resolution crystal structure, D1-His215 was suggested as the sought-after H+ donor for the keto-oxygen of Q_BH proximal to the non-heme Fe [10]. The authors speculated that the resulting D1-His215 anion is, in turn, protonated in a redox process that most probably includes bicarbonate, D1-Tyr246 (Fig. 2, centre panel) and water, as well as a rearrangement of the H-bond network within the Q_B binding niche.
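As a purely illustrative numerical aside, the toy sketch below integrates the two-step Q_B reduction as a pair of consecutive first-order reactions, with rate constants taken as midpoints of the time windows quoted above (0.3 ms and 0.7 ms; these are assumed illustrative values, not fitted ones). This is a deliberate simplification — in PSII the two electrons are delivered by successive photochemical charge separations rather than by a continuous dark reaction — but it conveys the relative timescales and the transient semiquinone population.

```python
import numpy as np

k1 = 1 / 0.3   # ms^-1, first reduction (0.2-0.4 ms window, midpoint assumed)
k2 = 1 / 0.7   # ms^-1, second reduction/protonation (0.6-0.8 ms window)

dt, T = 0.001, 5.0                    # time step and horizon, in ms
t = np.arange(0, T, dt)
qb, sq, qh2 = np.zeros_like(t), np.zeros_like(t), np.zeros_like(t)
qb[0] = 1.0                           # all centres start with oxidized Q_B
for i in range(1, t.size):            # explicit Euler on Q_B -> SQ -> PQH2
    d1 = k1 * qb[i - 1] * dt
    d2 = k2 * sq[i - 1] * dt
    qb[i] = qb[i - 1] - d1
    sq[i] = sq[i - 1] + d1 - d2
    qh2[i] = qh2[i - 1] + d2
print(f"semiquinone peaks at t = {t[sq.argmax()]:.2f} ms "
      f"with population {sq.max():.2f}")
```

Even in this crude scheme, the semiquinone intermediate peaks within roughly half a millisecond, consistent with the sub-millisecond windows quoted above.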
Additional PSII Plastoquinone Binding Sites

A third plastoquinone molecule (Q_C) associated with PSIIcc was identified in the 2.9 Å resolution crystal structure of T. elongatus, and five of its isoprenoid units were successfully modelled into the PQ/PQH2 exchange channel I, between the TMHs of PsbJ and the Cyt b559 α- and β-subunits [5]. In a subsequent 3.6 Å resolution crystallographic study of the monomeric PSII core complex from T. elongatus, a trace of electron density was revealed in the same position, but it was not possible to assign it to a specific molecule [6]. No electron density was detected in the putative Q_C binding site in the most detailed 3D structure at 1.9 Å resolution from T. vulcanus [8], and this binding spot was occupied neither by PQ nor by the PSII herbicide terbutryn in the T. elongatus PSII/terbutryn crystal complex resolved at 3.2 Å resolution [7]. The uncertainty of the Q_C presence in the different structures has been ascribed to differences in the adopted purification and crystallization procedures and in the pretreatment of the photosynthetic samples, to the low binding affinity of Q_C, or to the lack of a well-defined binding pocket [5,7]. In fact, neither H-bonds nor π-interactions were observed for Q_C stabilization and, most probably, the binding of the molecule results from possible van der Waals interactions with the nearby lipid molecules, ChlD2, CarD2 and the phytyl tail of Q_B [5,23] (Table 2, PQ head attractive interactions, and Fig. 2). FMO calculations based on the 2.9 Å resolution structure pointed out that more than 90% of the Q_C interaction energy with the protein matrix and with the PSII integral lipids and cofactors is due to its isoprenoid chain (Table 1). In addition, the relative contribution of the interactions with the surrounding protein matrix to the total Q_C binding energy is about 56%, much lower than that calculated for Q_A and Q_B [23] (about 100% and 82%, respectively; Table 1). The high mobility of the Q_C molecule has been attributed to its peculiar interaction with the environment, characterized by the head group being rimmed predominantly by a belt of lipids and cofactors and the tail facing proteins (Table 2). The X-ray localization of a third PQ molecule in the 2.9 Å PSII structure supported earlier findings on the occurrence of additional quinones (different from Q_A and Q_B) located in the PQ hydrophobic cavity close to Cyt b559 [44,45]. The physiological role of Q_C is still under investigation. Guskov and coworkers [5] proposed three hypothetical mechanisms for PQ/PQH2 exchange, in which Q_C alternates with Q_B or is the PQ molecule waiting to enter the Q_B binding site, thus contributing to the rapid turnover of PQH2 with a new PQ molecule from the membrane pool. Experimental evidence has suggested a strong involvement of Q_C in a PSII photoprotective electron pathway mediated by Cyt b559 [20-22]. It was hypothesized that Cyt b559 could act as a plastoquinol oxidase (oxygen reductase) or as a superoxide oxidase/reductase enzyme; these functions have been extensively documented in other reviews [15,16,36]. A distinctive feature of this heme-containing protein is its redox heterogeneity. In PSII membrane fragments, 75% of Cyt b559 is in its high-potential form, 16% in its intermediate-potential form and 9% in its low-potential form; its redox properties and reactivity have been correlated with the occupancy of the Q_C site [22].
A biphasic pattern of Cyt b559 reduction by PQH2 has been observed in PSII membrane fragments containing predominantly oxidized Cyt b559, and the experimental results suggested that a PQ/PQH2 molecule bound at the Q_C site is responsible for the fast phase (with a rate constant of 100 ms-1) of the Cyt b559 reduction [46]. The slow phase of Cyt b559 reduction (with a rate constant of 2.4 min-1) was attributed to the formation of a redox equilibrium between a set of Cyt b559 redox forms and "free" PQ/PQH2 from the membrane pool. The most likely mechanism suggested by the authors for the fast Cyt b559 reduction requires: i) a location preferably encouraging a one-electron redox reaction; ii) a polar environment allowing accommodation of PQ/SQ/PQH2 and the occurrence of the so-called L substances in anionic form (such as ADRY reagents, dinoseb, high concentrations of DCMU, tetraphenylboron); and iii) a fine-tuned Cyt b559/quinone interaction requiring well-defined quinone-protein interactions. However, the structure of the Q_C binding site described in the 2.9 Å resolution crystallographic study does not fulfil any of these requirements. A discrepancy between the structurally identified and the experimentally suggested additional PSII PQ binding sites was also found in previous studies [20-22]. In order to distinguish between the Q_C site and the presumed polar quinone site of PSII that binds PQ/SQ/PQH2/L substances, the authors proposed to designate the latter as Q_D [46]. These experiments led to the hypothesis of two different PQ binding sites, Q_C and Q_D, which host two different PQ molecules, or of one bound quinone that can occupy two different positions.

Neutron Scattering Techniques

The determination of protein structure and the analysis of conformational changes are key to understanding the biological function and activity of a protein. The structural characterization of proteins, however, constitutes only part of the puzzle, because their function requires specific dynamics. Protein motions arise on a wide range of timescales, from femtoseconds for bond vibrations to milliseconds or seconds for more complex processes that require a structural rearrangement. The dynamics of proteins contributes to the thermodynamic stability of their functional states and plays an important role in various processes, such as allosteric processes, where correlations between structural fluctuations are able to transmit information between distant sites of the protein. The study of dynamic processes at the microscopic level has been developed using a wide range of high-resolution spectroscopies (Mössbauer spectroscopy, nuclear magnetic resonance (NMR), time-resolved fluorescence). To these we should add the elastic, quasi-elastic and inelastic neutron scattering (ENS, QENS and INS, respectively) techniques, which offer the advantage of covering length/distance (0.5-10 Å) and energy (0-200 meV) scales that correspond to thermal fluctuations, and provide information on the geometry of the movements [47,48]. Neutrons as a dynamics probe are highly sensitive to the movements of hydrogen atoms, which are uniformly distributed within biological structures and represent nearly half of the atoms in them. On the temporal and spatial scales accessible with neutron scattering, the hydrogen atoms trace the movements of the chemical groups to which they are linked, such as amino acid side chains.
In addition, it is also possible to mask part of the signal by specific deuteration, since deuterium has a weak scattering signal with respect to that of hydrogen [48]. Neutron scattering experiments, with their peculiar possibility of exploiting the isotopic substitution technique, make it possible to integrate and complement data obtained by numerical simulation and by specific molecular biology techniques. Moreover, they provide unique information useful for the characterization of the dynamical heterogeneity of biological macromolecules, from globular proteins up to membrane complexes, in the temporal range 10⁻¹²-10⁻⁹ s. Neutron scattering is an invaluable technique for tackling a plethora of important open questions in biophysics, such as the definition of correlations between structure, dynamics and functionality; the correlations between the interaction of water molecules with proteins and its role in the folding process and in enzymatic activity; and the observation of how the hydrogen-bond network between solvent and biological macromolecules is affected and modified by selective genetic mutation, and hence the local flexibility and biological efficiency of the molecules [49-51].

Neutron Scattering in Photosynthetic Research

Photosynthetic research exploring the relationship between function and dynamics is a new field of interest. However, neutron scattering has been demonstrated to be a suitable tool to investigate protein dynamics in both complex multi-subunit photosynthetic systems and whole cells [52,53]. A pioneering quasi-elastic neutron scattering study on the purple bacterium Rhodobacter sphaeroides RC proteins demonstrated the effect of genetic point mutations on the overall protein dynamics [52]. The RC dynamics over a temperature range of 280 K was probed in a wild type and in two non-functional mutants carrying alanine residues in place of Glu212 and Asp213 in the L-subunit. The Glu212 residue is highly conserved in bRCs and is involved in the secondary ubiquinol protonation (see section 2.2). The crystallographic structure of the mutant proteins showed a local configuration of the active site more open than that of the wild type. One of the parameters indicative of the internal protein motions, the mean square displacement (<u²>), calculated for the two mutants turned out to be higher than for the wild-type protein, suggesting a higher flexibility [52]. These results correlated the enhanced flexibility of the mutated RC proteins with a lack of functionality, thus suggesting the requirement of a rigid core for the accomplishment of protein function in a multi-protein complex such as the RC, which is involved in highly tuned photosynthetic reactions. This ground-breaking experiment highlighted the need to understand whether core rigidity is essential for the charge transfer process, whether this property is shared by all photosynthetic systems, and how this information can be applied to the design of bio-sensors and organic semiconductors. A novel green microalga species of the Chlorococcal order, never previously studied for its high tolerance to radiation, was isolated in the Institut Laue-Langevin research reactor storage pool. This microorganism, able to live and reproduce in extreme environments, was investigated by Farhi and coworkers by combining neutron scattering experiments on internal dynamics with results on the effects of radiation on the microalgae's physiology, metabolism and genomics [53].
The microalgae were stressed with radiation doses up to 20 kGy (2 Mrad) and studied by NMR, looking for modifications in the metabolism, and by ENS, looking for both dynamical flexibility and structural macromolecular changes in the cells. While the sugar metabolism was slightly reduced, and only at high radiation dose, it was shown that the effect of high irradiation on the microalgae does not lead to protein denaturation, but rather to a reduction of macromolecular flexibility, as shown by the reduction of the mean square displacement (<u²>) in Fig. 2 of reference [53]. Pieper and coworkers have performed exhaustive neutron scattering studies of the protein dynamics of PSII membrane fragments and of the antenna complex LHCII as a function of temperature and sample hydration level [54]. They showed that PSII membrane fragments undergo a hydration-dependent "dynamical transition" at about 240 K, underlining the crucial role of hydration water for PSII flexibility. Above this temperature, PSII membrane fragments exhibit structural flexibility due to the presence of fast (picosecond) internal protein motions, while only harmonic vibrational dynamics remains at lower temperatures. It was then speculated that diffusive protein dynamics is indispensable for enabling QA reoxidation by QB at temperatures above 240 K, which explains the strong dependence of this electron transfer step on the temperature and hydration level of the sample. A newly developed technique of time-resolved QENS offers the advantage of simultaneously monitoring protein internal dynamics and functional activity in specific functional states of the sample. This method was first applied to the dynamics-function correlation in bacteriorhodopsin, a membrane-embedded protein that functions as a light-driven proton pump in Halobacterium salinarum, and later to the more complex PSII membrane fragments with inhibited QA to QB electron transfer [54,55]. At physiological temperatures, the calculated mean square displacement of the hydrogen atoms, characterizing the flexibility of the system, revealed a significantly increased flexibility between 310 and 320 K [56]. The authors claimed that this transition is evidently correlated with the detachment of the OEC from the membranes, as suggested by its absence in samples in which the OEC had been removed prior to the QENS experiments. The transition is not accompanied by significant changes in the molecular organization of the pigment-protein complexes, but a role of monomerization of PSII dimers cannot be ruled out. Lately, preliminary results of an investigation of intracellular water collective dynamics in Chlamydomonas cells, carrying both native and mutated D1 RC proteins, have also been reported. The authors proposed a comparison between fully hydrated and partially dehydrated deuterated whole cells. A distinct sound propagation speed was observed between the two hydration levels, suggesting a more rigid structure of hydration water than of intracellular water [57].

COMPUTATIONAL ANALYSES FOR MODIFICATION OF QB BINDING POCKET AMINO ACID COMPOSITION: FUNDAMENTAL RESEARCH AND APPLICATIONS

In the last decades, several mutational studies on the D1 protein were conducted to determine the effects induced by amino acid substitutions located in the stromal loop of the D1 protein. Mutations resulted in an impairment of the electron transfer between QA and QB, leading to the identification of residues often involved in the modulation of PSII photoinhibition susceptibility and/or herbicide specificity [58-62].
Computational studies on the PSII D1 protein aimed at rational modification of the QB binding pocket have been scarce, partly because a high-resolution three-dimensional structure of PSII has become available only relatively recently and partly because of the difficulties in generating site-specific mutants of this protein. The first structure of PSII from T. elongatus was published only in 2004, and at a resolution of only 3.5 Å [PDB code 1S5L, 3]. The same year, a 3.2 Å resolution structure was published independently [PDB code 1W5C, 63] and refined the following year at 3.0 Å resolution [PDB code 2AXT, 4]. However, this last structure still had insufficient resolution to uncover all the atomic details of this macromolecular protein-cofactor complex, such as the precise architecture of the OEC. Nonetheless, the Loll et al. crystal structure was sufficiently detailed to allow the first computational studies on the QB binding pocket as well as homology modelling of PSIIs from other organisms. The first of such studies was probably that of Loll and coworkers [64], who modelled the three different copies of the T. elongatus D1 protein (PsbA1-A3) within the 3.0 Å resolution crystal structure of PSII, revealing that most of the D1 amino acid variants were located in the direct vicinity of redox-active cofactors of the electron transfer chain. However, proper computational studies aimed at selecting amino acid variants within the QB binding pocket able to modify the recognition properties of the pocket itself appeared only the following year [65]. In this study, homology modelling of the Chlamydomonas reinhardtii D1 protein was carried out based on the T. elongatus crystal structure, followed by in silico mutagenesis and energy minimization analyses to identify functional amino acids in the D1 protein interacting with herbicides. Following the results of the theoretical calculations, three mutants were produced by site-directed mutagenesis and characterized by fluorescence analysis. This study demonstrated that the D1 mutants S268C and S264K could be used in the development of biomediators suitable for the selective detection of the triazine and urea classes of herbicides, respectively. The study was further refined the same year by an in silico study of the C. reinhardtii D1-D2 proteins with the aim of designing mutants with increased affinity for atrazine [66]. The final outcome of this work was the design and selection of D1 mutants hypothetically able to increase the atrazine binding affinity by orders of magnitude, representing a useful theoretical guide for the development of high-affinity atrazine biosensors. With the increase in computing power, it has become feasible to adopt more sophisticated structural bioinformatics approaches, such as molecular dynamics (MD) simulations, for the study of the recognition properties of the QB binding pocket toward natural and exogenous ligands. The first all-atom MD simulation of the complete T. elongatus PSII structure (including all cofactors) embedded in a lipid membrane was published in 2011 [67]. The aim of this work was to uncover the fine details of the atrazine molecular interactions within the QB binding pocket using a fully atomistic (and thus realistic) description of PSIIcc. Atrazine was first docked in the vicinity of the QB binding pocket and then left free to diffuse during a 10 ns MD simulation trajectory.
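Ligand-residue contact distances of this kind are typically monitored frame by frame along the trajectory. The following minimal sketch shows how this could be done with the MDAnalysis library; the file names, the ATZ residue name, the D1 segment id and the choice of the NE2 atom are all hypothetical placeholders, not taken from [67]:

```python
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

# Hypothetical topology/trajectory names and selections: adjust them to
# the actual naming conventions of a given PSII model.
u = mda.Universe("psii_atrazine.pdb", "psii_atrazine.xtc")

atrazine = u.select_atoms("resname ATZ")                        # ligand (assumed resname)
his215 = u.select_atoms("segid D1 and resid 215 and name NE2")  # putative H-bond partner atom

for ts in u.trajectory:
    # minimum atrazine-His215 distance in this frame (Angstrom)
    d = distance_array(atrazine.positions, his215.positions).min()
    print(f"{ts.time:8.1f} ps  min(atrazine-His215) = {d:5.2f} A")
```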
During the simulations, atrazine bound deeper into the QB pocket, establishing hydrophobic interactions with the PSII D1-Phe255 phenyl ring and the D1-Met214 side chain, and a single strong hydrogen bond with D1-His215 (Fig. 3). On the other hand, no binding partner appeared to stabilize other groups of the atrazine molecule, such as the chlorine atom (bound in an energetically unfavourable position in the vicinity of the aliphatic residue D1-Val219) and the N4 and N5 protons, for which no hydrogen-bonding partner was observed. The conclusion of this study was that rational design of site-directed mutants was likely to increase the atrazine affinity for the QB binding pocket. In particular, it was suggested that, among others, mutation of the D1-Phe265 residue to a polar hydrogen-bond-acceptor residue could stabilize the atrazine N5 proton and atrazine binding within the pocket.

Fig. (3). Schematic view of the interactions of atrazine within the PSII QB binding pocket predicted through a 10 ns MD simulations run [66]. For clarity, only the D1 backbone (orange ribbon) and the residues mentioned in the text are shown. The dashed green line highlights the hydrogen bond between atrazine and His215. Atrazine carbon atoms are in green, nitrogen atoms in blue, the sulphur atom in yellow and the chlorine atom in magenta.

CONCLUSIONS AND PERSPECTIVES

The advances in the crystallographic characterization of PSIIcc have broadened the understanding of PSII function and organization, allowing identification of the atomic interactions taking place between the RC protein matrix, the electron transport cofactors and the thylakoid membrane lipids. Unravelling finer molecular details and mechanisms of PSII structure, function and dynamics is mandatory to support novel biotechnological applications exploiting its unique properties of charge separation, electron transfer and water splitting. Specifically, it is possible to exploit PSII-enriched particles or their active subcomponents to develop receptor elements suitable for photosynthesis-based biosensors for the detection of environmental pollutants. The latter, in fact, can interact with the PSII donor side, interrupting the light-induced electron transfer and generating a measurable optoelectronically detectable signal. In this context, insights into the atomic interactions and dynamics of polypeptides and cofactors provided by molecular dynamics and docking studies will help to design novel bio-recognition elements having improved selectivity and sensitivity towards different classes of compounds. Studies exploring the relationship between the function and the dynamics of photosynthetic complex systems are still in their nascent stages, as are MD simulations of these systems. However, these techniques could offer a real possibility of investigating the dynamics associated with a molecule's biological function and of complementing the progress of this technology. Besides, the extraordinary water-splitting properties could be used to design bio-inspired devices for bio-energy production, while the charge separation features could be exploited to design molecular photovoltaics.

CONFLICT OF INTEREST

The authors confirm that this article content has no conflicts of interest.

ACKNOWLEDGEMENTS

This publication is supported by a Grant of COST Action TD1102. COST (European Cooperation in Science and Technology) is Europe's longest-running intergovernmental framework for cooperation in science and technology, funding cooperative scientific projects called 'COST Actions'.
With a successful history of implementing scientific networking projects for over 40 years, COST offers scientists the opportunity to embark upon bottom-up, multidisciplinary and collaborative networks across all science and technology domains. For more information about COST, please visit www.cost.eu.
Gross Pathology, Biochemistry and Histopathology of Selected Organs of Camels Suffering from Suspected Monensin Toxicosis in Australia

The rumen modifier monensin is widely used in Australian cattle production systems. In addition to its anti-coccidial action, monensin improves energy efficiency and nitrogen metabolism in rumen bacteria, and reduces the incidence of metabolic disorders such as acidosis and bloat. While monensin is considered safe for cattle, swine and poultry, it is extremely toxic to horses, and incidents of toxicity have also been reported in camels. In this study, we report for the first time monensin toxicosis in a camel herd in South-West Queensland, Australia (~267 km west of Brisbane). The camels were fed a cattle breeder supplement containing 250 mg/kg monensin, formulated to ensure effective concentrations when the supplement is consumed by breeder cattle at levels of 200-500 g/head/d. Blood samples were collected from 13 camels with clinical signs of monensin toxicosis and from 12 healthy camels that had had no exposure to monensin. Post-mortem examinations were carried out on two camels immediately after death; these animals had marked ascites. Monensin toxicosis resulted in marked decreases in albumin and increases in ALP, LDH and CPK when compared to physiologically normal healthy camels. Other parameters in the blood profile remained within normal limits. Minor to no histopathological changes were observed in the two necropsied camels; however, death due to rapidly developing congestive heart failure is suspected. Skeletal muscle was not examined histologically; however, the biochemical changes could be consistent with muscle necrosis.

Introduction

The rumen modifier monensin is a polyether ionophore antibiotic produced by the soil bacterium Streptomyces cinnamonensis [1]. It is widely used in Australian and US cattle production systems [2]. In addition to its anti-coccidial action, monensin improves energy efficiency and nitrogen metabolism in rumen bacteria, and reduces the incidence of metabolic disorders such as acidosis and bloat [3]. Monensin induces a pH change within the cell, which can lead to a reduction in the secretion and/or transport of chemicals important for the proper functioning of the cell. Monensin also affects the processes involved in the formation of external structures on the cell surface and their growth by reducing the secretion of the necessary substances [1]. It has been demonstrated that the cellular effects of monensin depend on the body size of the animal subjected to its action, the route of administration, and the dose of this antibiotic [1]. The mode of action of monensin is well established [4]. It forms a complex with extracellular Na⁺ and dissolves into the bilayer membrane of bacteria and protozoa, causing total intracellular Na⁺ to increase and total K⁺ to decrease. In doing so, it disrupts the Na⁺-K⁺ pumps and kills bacteria and protozoa. While monensin is considered safe for cattle, swine and poultry, it is extremely toxic to horses [5], and incidents of toxicity have also been reported in several animal species: in Egyptian camels [6], in a group of bulls [7], in two sheep flocks [8], in a dairy herd [9] and in an ostrich [10]. The blood profiles and histopathology of a camel herd following suspected monensin toxicosis are presented in this study.
In conclusion, the camel appears not to share the intermediate sensitivity of cattle, and feeding camels a cattle ration supplemented with monensin is therefore likely to result in morbidity or mortality.

Materials and Methods

A group of Arabian camels (Camelus dromedarius), ranging in age from one to eight years, were co-grazed with cattle in a paddock with native grass and different types of pine and eucalyptus trees and shrubs. They had access to varying levels of monensin for a period of approximately 4 weeks as a result of receiving a cattle breeder supplement containing the ionophore antibiotic feed additive monensin (250 mg/kg). Thirteen camels out of a small herd of 40 showed clinical signs of monensin toxicity of variable severity. Blood samples were collected from these camels for blood profiling. Deaths of four camels were reported, and two of these camels were necropsied on site immediately after their sudden death. Blood samples were also collected from twelve healthy camels that had had no exposure to monensin and were used as controls. Sera were separated by centrifugation and biochemical analysis was performed using an Olympus blood analyzer (Olympus Life and Material Science Europa GmbH, Hamburg, Germany). Tissues were collected during the post mortem for histological examination. Specimens (forestomach, colon, liver, heart and spleen) were preserved in 10% neutral buffered formalin solution, embedded in paraffin, sectioned at 5 µm and routinely stained with haematoxylin and eosin (H&E). Finally, the stained sections were examined under a light microscope for further evaluation and histological studies.

Clinical observation

The toxicity signs demonstrated by several camels ranged from depression and muscular weakness to inability to stand. Among the four dead animals, a 13-month-old camel was presented with a history of "not doing well", being recumbent and unable to get up, and having cyanotic mucous membranes. This animal died with torticollis (Figure 1). Post-mortem and histological observations from the two dead camels are as follows: several litres of clear peritoneal fluid were present in the abdominal cavity (Figure 2). Furthermore, a large quantity of fluid was found within the omentum/mesentery and inside the pericardial sac, while the heart was of normal shape (Figures 3 and 4).

Histopathology findings

Myocardial necrosis, a feature of ionophore toxicity in other species, was not noted in the sections examined. Microscopically, the only change observed was occasional nuclear rowing in the cardiomyocytes. No oedema or alveolar macrophages containing hemosiderin, which might suggest congestive heart failure, were noted in the sections of the lung examined. Moderate to severe oedema was detected in the submucosa of the forestomachs. Oedema in the colon was transmural. Within the colonic lamina propria, there was an equivocal increase in lymphocytes and plasma cells and multifocal, small, dense aggregates of eosinophils. No changes of significance were noted in other tissues. Skeletal muscle was not collected.

Blood chemistry profile

The blood profile showed an increase in alkaline phosphatase (ALP), lactate dehydrogenase (LDH) and creatine phosphokinase (CPK) when compared to physiologically normal healthy camels (Table 1); reference ranges were reported in one study [11].
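The group comparisons that follow rely on standard two-sample statistics. As a minimal sketch, a Welch t-test on serum albumin could be run as below; the per-animal values are illustrative only, generated so that the group means approximate the reported averages (17.3 vs 29.83 g/L), and are not the study's raw data:

```python
import numpy as np
from scipy import stats

# Illustrative serum albumin values (g/L): exposed (n=13) vs control (n=12)
albumin_exposed = np.array([15.2, 18.1, 16.9, 17.5, 19.0, 16.2, 18.4,
                            17.0, 15.8, 18.9, 17.7, 16.5, 17.7])
albumin_control = np.array([29.1, 30.5, 28.7, 31.2, 29.9, 30.1, 29.4,
                            30.8, 28.9, 29.6, 30.2, 29.6])

# Welch's t-test (no equal-variance assumption between the two herds)
t, p = stats.ttest_ind(albumin_exposed, albumin_control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2g}")  # a p-value << 0.0001 is expected here
```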
In comparison to the normal camels and the published reference ranges, the average values from the exposed camels demonstrated marked hypoalbuminaemia (P<0.0001); associated hypocalcaemia (P<0.001) and hyperphosphataemia (P<0.01) were also detected, but the other parameters (Na, Cl, anion gap, cholesterol, creatinine, gamma-glutamyl transferase (GGT), globulin, HCO3 and urea) in the blood profile remained unchanged (P>0.05). Camels exposed to monensin had a tendency

Discussion

Feed mixing errors have been documented in the literature in diverse animal species. The suspected monensin toxicity in the camel herd reported in this article was due to this kind of error, where cattle feed with monensin additives was offered to the camels without taking into consideration the camels' unknown sensitivity to this ionophore compound. The LD50 of monensin was reported to be 1.4 mg/kg for horses [12] and 26.4 mg/kg in goats [13], while in broiler chickens it was 214 mg/kg [14]. The most sensitive animal species is the horse [1,12]. The toxicity of monensin for cattle and other species is well documented and is known to be dose dependent [15-18]. Unfortunately, no such data are available in the literature for camels. There was no clear indication of extensive myocardial necrosis, a common feature of acute ionophore toxicity, or of myocardial fibre atrophy interspersed with areas of myocardial hypertrophy and fibrosis, features that have been described in cases of chronic ionophore toxicity in other species [17,18]. The only cardiac change observed was nuclear rowing, which is an indication of early degeneration and the initiation of regeneration, as reported by Mollenhauer et al. [19] in a Holstein heifer. The heart of that animal, which showed evidence of chronic yew toxicity, contained rare myocytes characterized by rowing of nuclei, suggestive of attempted myofiber regeneration. Some cases of chronic ionophore toxicity produce very localized or subtle cardiac lesions which may not be identified on limited sectioning of the heart. The severe ascites and visceral oedema could also be consistent with right-sided congestive heart failure. Although the hypoalbuminaemia noted in the exposed animals may have been the cause of the ascites, it is possible that this hypoalbuminaemia was the result of rapidly developing congestive heart failure. This has been shown to occur in dogs and turkeys as a result of blood volume expansion, inanition and possibly loss of protein into the ascites fluid. An alternative explanation for the hypoalbuminaemia was not discovered; there was no histological or biochemical indication of extensive liver disease, and protein loss through the gut or kidney would be unlikely in multiple animals. A nutritional cause was also considered less likely, as albumin concentrations in the control animals were substantially higher (29.83 vs 17.3, P<0.0001). It is also possible that monensin toxicity in camels results in skeletal muscle necrosis while cardiac muscle cells show no clear changes of necrosis, due to a selective effect [20]. Serum CK, GLDH and AST activities were increased (P<0.05 and P<0.0001), a finding that could be consistent with skeletal muscle necrosis. Serum potassium concentration was also elevated above normal (5.85 vs 4.93, P<0.001). Myoglobinuria was not noted. Unfortunately, skeletal muscle samples were not collected for histopathological examination.
Fortunately, regardless of the exposure of meat animals to ionophores in feed, residues in the meat, milk and their by-products will have no potential effect on humans consuming these products [21]. In conclusion, the clinical signs associated with monensin toxicity in different animal species vary according to the dose and duration of exposure. In this case, no diarrhea was noted and histological evidence of extensive muscle damage was lacking. Although no histopathological changes were observed in the cardiac muscle, cardiac damage can sometimes be localized and subtle, and the severe ascites could be consistent with congestive heart failure. From our observations, the camel is not in the same category of intermediate sensitivity as cattle, and feeding camels a cattle ration supplemented with monensin is therefore likely to result in mortality. Further observations of other possible cases of monensin toxicity or intoxication in one-humped camels, to clarify the remaining unexplained changes, are very much needed if other cases arise.
Assessment of the Attitude of Primary Care Medical Staff Toward Patient Safety Culture in Primary Health-care Centers - Al-Ahsa, Saudi Arabia

Introduction: Effective leadership is critical to the development of a safety culture within an organization. Patient safety in primary health care is an emerging field of research of increasing importance.
Objective: This study was conducted to explore the safety culture attitude toward patient safety in order to improve quality and patient safety in primary health-care centers.
Methods: A cross-sectional survey involving 288 medical staff in primary health-care centers in Al-Ahsa was conducted using an Arabic-translated safety attitudes questionnaire to assess the safety attitudes of health center staff toward patient safety culture.
Results: This study showed that the attitude of medical staff in primary health-care centers is somewhat positive toward patient safety culture: the average job satisfaction score in the current study was the highest, at 80%, and the overall score for safety climate was 68%. The overall score for safety attitudes was highest in Al-Ayoun Health Center (79%) and lowest in Al Faisaliah Health Center (58%). The scores for teamwork and stress recognition were high and significantly (p<0.05) higher among females; however, staff perception toward management was significantly higher (p<0.05) among males. Staff perception toward management was significantly lower (p<0.05) among clinicians. The overall score for safety attitudes was remarkably high (p<0.05) among those with less than 10 years' experience, the overall safety culture score was significantly high (p<0.05) among administrative staff, and all correlations were significant (p<0.01) except those of stress recognition with teamwork, job satisfaction, management perception, and safety climate. In addition, attitudes toward patient safety culture differed by gender, between physicians and non-physicians, and between managerial and non-managerial staff.
Conclusion: The findings suggest that certain improvements are needed, especially in the fields of communication and stress recognition with regard to patient safety culture.

Introduction

In the current health-care setting, systems are becoming increasingly complex as caregivers are compelled to work in a fast-moving and pressurized environment, thereby elevating the possibility of clinical errors and harm to patients. 1 As a way of combating these incidents, health-care institutions are striving to improve their performance as well as to recognize the significance of developing a safety culture for enhancing the behavior and attitude of caregivers toward patients. 2 Safety attitude is also explained as freedom from any kind of injury caused by negligence in medical care. A positive safety attitude helps reduce unnecessary health-care related problems to the smallest possible extent. It is also referred to as a safety culture or safety climate, reflecting the constant concern of nurses, health-care workers, and professionals, as they are the ones who play an important role in improving and promoting a better and safer environment for staff as well as patients. 1-3 Patient safety is defined as the prevention and avoidance of adverse events or patient injuries occurring because of the procedures of health-care delivery. 3
Health-care providers working in primary health-care centers must be empowered with enough background information regarding patient safety to minimize adverse events, especially as these caregivers are in frontline contact with patients. 4 Safety culture is an integral part of health-care organizations, whereby the conceptualization of shared beliefs, attitudes, values, norms and behaviors is used to gauge a caregiver's performance toward achieving patient safety. 5 The majority of community and population health-care requirements and needs are provided for at primary health-care centers; however, the theme of patient safety culture there remains overshadowed and poorly visible. 6 The safety attitudes questionnaire (SAQ) was designed to fulfil the assessment of patient safety culture. The framework was developed by the University of Texas Center of Excellence for Patient Safety Research and Practice, 7,8 and its main part involves six main factors: perception of management, stress recognition, teamwork climate, communication, safety climate, working conditions and satisfaction. 9 The SAQ helps in identifying the major expected weaknesses in clinical settings and motivates the reduction of medical errors while suggesting possible interventions for providing quality care. 10 Najjar et al 11 explored the relationship between patient safety attitude and adverse events; they explained that a hospital with a positive safety culture had fewer adverse events. In contrast with this finding, Sorra et al 12 studied the relationship between staff attitude and patient assessment. Moreover, a systematic review performed to explore this association found evidence that an association between patient safety culture and patient outcome exists in hospital and nursing units. 13 Primary health care is an essential component of the health-care system, and patient harm and adverse events may occur at any point of care during the treatment process. Assessment of primary health-care staff attitudes toward patient safety is a preliminary step toward identifying areas of weakness related to patient quality and safety. This study was conducted to explore the culture of safety attitudes toward patient safety, as this is considered an essential step to improve the quality of patient safety in primary health-care centers: the majority of health-care provision takes place in primary health care, yet most safety attitude studies have been carried out in hospital settings.

Study Area

Primary health-care center (PHC) services in Al Ahsa include important rehabilitative, curative, preventive, and promotional services: immunization, child health, chronic disease management (such as diabetes and hypertension), dental and oral health, crucial laboratory investigation services, provision of essential medication, environmental health, disease control, and health education. Moreover, a medical imaging service (X-ray) is available in a limited number of PHCs in the region. The average annual number of visits was 2.6 per person of the Al Ahsa population. Primary health-care centers in Al Ahsa are distributed across the region in three sectors, namely the Al Hofuf sector (n=22), the Al Mubarraz sector (n=22), and the Al Omran sector (n=23), with a total of 67 PHCs. Overall, the total workforce in the PHCs was 1659, distributed among physicians, nurses, pharmacists, and allied health personnel, male and female, Saudi (n=1440) and non-Saudi (n=219).
Study Design

This study is based upon a cross-sectional survey conducted in the primary health-care centers of Al-Ahsa, Saudi Arabia, from February 2020 to May 2020 (approximately four months). The dependent variable of the study is the attitude of primary care medical staff toward patient safety culture in PHCs. The independent variables include sociodemographic characteristics, particularly the participant's gender, years of experience, job title, and position.

Source and Study Population

The study included physicians, medical managers, nurses, and other staff in PHCs in Al-Hasa, Saudi Arabia. The inclusion criteria comprised medical workers and staff members employed in the PHCs; new medical and nursing staff working for less than one year and medical and nursing trainees were excluded from the study.

Sample Size

The sample size for this research was calculated with the help of the Raosoft® software program. The calculation considered the total workforce in the primary health-care centers of Al-Ahsa (1659), with a 5% margin of error and a 95% confidence interval, which yielded an estimated sample of 313. In addition, 10% extra was added to cover incomplete answers, resulting in a final sample of 344 employees.

Instruments

The SAQ tool was selected for the evaluation of safety attitudes because of its ease of use, because it has been rigorously validated, and because it is a common tool for collecting data on health-care safety climate and attitudes. It has been widely used in different countries, including Saudi Arabia, and has been translated into seven different languages, including Arabic. A translated Arabic version was used in this study and, prior to administration of the questionnaire, permission was obtained from Dr Ayman Elsous, Israa University, Gaza. 14,15

Questionnaire Data

The SAQ measures patient safety culture along six subscales: teamwork climate, six items (items 1 to 6); safety climate, seven items (items 7 to 13); job satisfaction, five items (items 15 to 19); stress recognition, four items (items 20 to 23); perceptions of management, five items (items 24 to 28); and working conditions, four items (items 29 to 32), plus employees' perceptions of the quality of their work environment. Internal consistency was represented by Cronbach's α (cutoff = 0.70), and it exceeded the set cutoff for all subscales, ranging from 0.73 to 0.85; the overall Cronbach's α was 0.86, which indicates that each scale demonstrated a good and comparatively high level of reliability.

Ethical Consideration

This study was approved by the Imam Abdulrahman Bin Faisal University research committee with approval reference number IRB-PGS-2020-03-056 (Appendix A is a copy of the IRB approval). Permission to participate in the study was obtained from the PHC managers after giving full information about the aim and purpose of the study. The questionnaire was explained, and verbal consent was obtained from the participants (Appendix B).

Data Analysis

Data were recorded, tabulated, and analyzed with IBM SPSS software version 25. Respondents' characteristics were described using percentages.
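The sample-size figure quoted above can be reproduced with the standard Cochran formula with a finite-population correction; whether Raosoft implements exactly this formula is our assumption, but the numbers match:

```python
import math

def cochran_finite(N, z=1.96, p=0.5, e=0.05):
    """Sample size with finite-population correction for a population N,
    z for the confidence level, expected proportion p and margin of error e."""
    n0 = z**2 * p * (1 - p) / e**2          # infinite-population size (~384.2)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n = cochran_finite(N=1659)     # -> 313, as reported for this study
n_final = round(n * 1.10)      # +10% for incomplete answers -> 344
print(n, n_final)
```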
To compare the means between two groups, a two-sample t-test was used. One-way ANOVA was used to compare the means between several groups. Perception of management had the highest Cronbach's α-value and working conditions had the lowest value. The closer the Cronbach's alpha coefficient is to 1.0, the greater the internal consistency of the items in the instrument or scale. Therefore, our findings indicate that each scale demonstrated a good and comparatively high level of reliability, such that no subscale could be considered poorly constructed, as shown in Table 2. Scale-to-scale correlations were studied as the degree of linear association between pairs of scales; Pearson's correlation coefficients are shown in Table 3. The correlations ranged from 0.069 to 0.788. All the correlations were significant (p<0.01) except those of stress recognition with teamwork, job satisfaction, perception of management, and safety climate. Moreover, stress recognition was the subscale least positively correlated with teamwork, job satisfaction, perception of management, and safety climate. The total score was most positively correlated with perception of management, job satisfaction, safety climate and working conditions (0.739 to 0.788), whereas its least positive correlations were with stress recognition and teamwork (0.428 to 0.598). The minimum and maximum scores for each subscale, along with the mean, SD (standard deviation) and score on a 100-point scale, are shown in Table 4. Job satisfaction had the highest mean among all the subscales, ie 4.20 (80 on the 100-point scale), followed by teamwork (4.11, corresponding to 77.5 on the 100-point scale). This section shows the participants' responses for each item in the six subscales of the SAQ. It presents mean scores, standard deviations (SD), and the frequencies of participants' agreement (slightly agree and strongly agree) and disagreement (slightly disagree and strongly disagree) with each item in the subscales (Table 5). It also presents the comparison between participants' perceptions of the six subscales of patient safety and their characteristics, such as gender, job category, age, experience, job title, education level, CBAHI accreditation, and PHC sector. In the comparison between the sectors, the highest total score of safety attitude was recorded in the Al Omran sector (70.7), followed by the Al Mubarraz sector (67.1) and the Al Hofuf sector (66.3). Moreover, it was found that CBAHI-accredited PHCs had a lower total safety attitude score compared to non-accredited ones (67 vs 68.7); however, the result was non-significant (Table 6).

Discussion

Previous studies 16-19 conducted in various regions of Saudi Arabia have explored the safety attitudes of physicians and/or nurses in a specific area such as the ICU or emergency department, as well as at the level of the hospital. According to Alahmadi, 20 Saudi Arabian hospitals in cities like Riyadh are struggling to enhance their patient safety and quality of care by utilizing safety system applications as well as creating a safety culture. Moreover, Al-Khaldi 21 explored the attitude of physicians at primary health-care centers in the Aseer region toward patient safety.
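For reference, the reliability coefficient and the 100-point subscale scores used above can be computed as in the following minimal sketch; the respondent data shown are toy values, not the study's dataset:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def to_100pt(mean_5pt):
    """Convert a 1-5 Likert mean to a 100-point scale, (mean - 1) * 25,
    which is consistent with the scores reported here (4.20 -> 80)."""
    return (mean_5pt - 1) * 25

# Toy data: 6 respondents x 4 items on a 1-5 Likert scale
x = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
              [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]])
print(f"alpha = {cronbach_alpha(x):.2f}")
print(f"4.20 on the 5-pt scale -> {to_100pt(4.20):.0f} on the 100-pt scale")
```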
Correlation analysis in the current study indicated that stress recognition was the subscale least positively correlated with teamwork, job satisfaction, perception of management, and safety climate, although these correlations were non-significant. This is consistent with a study 22 carried out in Albanian hospitals, where the least positive and non-significant correlations were those of stress recognition with perceptions of management, teamwork climate, and job satisfaction. Similar findings were reported in a study 9 exploring safety attitudes among the staff of a primary health-care facility in Slovenia, where stress recognition was not significantly correlated with the other subscales. Nevertheless, in our study the total score was most positively correlated with perception of management, job satisfaction, safety climate and working conditions (0.739 to 0.788), while its least positive correlations were with stress recognition and teamwork (0.428 to 0.598). The stress recognition subscale had the lowest mean, which indicates that acknowledgment of how work is affected by stressors is the least recognized dimension among all the subscales; this is consistent with other studies. 22,23 Recognizing that stress arising from work demands can be a cause of sickness, can disturb usual work routines, and can subsequently reduce the quality of care is a perception that needs to be developed by health-care professionals. 23 There is a strong relationship between patient safety and fatigue, anxiety and lack of motivation, which can affect the individual as well as the collective working of the patient care team and can also increase the likelihood of adverse events. 24,25 Furthermore, our study illustrated that, after job satisfaction, the highest total score was for teamwork (77.5), followed by perception of management (68.6), safety climate (68.5), and working conditions (62.6). In an era of growing complexity, with several specialized professionals working together, the patient care process demands effective communication and teamwork to consistently produce the best patient care. 26 The second-lowest safety attitude score in the present study was recorded for the working conditions subscale, ie 62.6. Furthermore, the findings of the study revealed that many fields of the work life of nurses in PHCs need strategic reorganization, such as attitudes of the public, family needs, management and supervision, professional development opportunities, salary factors, staffing, working atmosphere, and duty hours. Concerning the gender of the study participants, the analysis revealed that teamwork and stress recognition scores were significantly higher among females than among males. These findings are consistent with other studies carried out in the PHCs of Kuwait and Egypt. 1,4 The comparison between participants by experience revealed that teamwork, job satisfaction, stress recognition, perception of management, and total safety attitude scores were significantly higher among those with less than ten years of experience compared to those with more than ten years of experience. Contrary to our findings, a study 2 from Palestinian hospitals reported that patient safety attitudes became more positive with increasing years of experience on some subscales.
Similarly, when comparing participants' age with their safety attitudes in the current study, it was observed that teamwork, job satisfaction, stress recognition, perception of management, and the total score were significantly higher among those under 40 years old compared to those over 40 years old. A possible explanation of this result is that the participants' age may be associated with their years of experience. The comparison between the scores of physicians vs non-physicians in the present study revealed that perception of management was significantly lower among physicians. Alzahrani 27 explored physicians' and nurses' attitudes toward patient safety in the Saudi Armed Forces Hospitals in the eastern region and reported that less than half of nurses and doctors had positive attitudes toward patient safety, especially on the subscales of stress recognition and perceptions of management. It has also been reported previously that health-care workers are likely to deny the effect of stress and fatigue on their performance. 14,28-30 There were some notable differences in scores between types of staff, ie managerial vs non-managerial staff: teamwork, safety climate and job satisfaction scores were significantly higher among managerial staff than among non-managerial staff. This was in accordance with a study reporting managers' more positive safety attitude compared to non-managerial staff. 31 Regarding the education level of the study participants, the analysis revealed that teamwork, safety climate and perception of management were significantly higher among those with a bachelor's degree compared to those with a diploma. Consistent with our findings, Al-Khaldi, 21 who explored the attitude of physicians at primary health-care centers in the Aseer region toward patient safety, reported that those with high qualifications had a positive attitude toward patient safety.

Limitations

Due to time and resource restrictions, this research has some limitations. The sample size of the study was too small to generalize the results to all the primary health-care centers operating in the Eastern Province of KSA, or to all the primary health-care centers of KSA. As this study was a questionnaire-based survey, it would be worthwhile to investigate more comprehensive research approaches, such as hybrid methods, for studying the safety attitude culture in PHCs.

Conclusion

Certain improvements are needed, especially in the fields of communication and stress recognition with regard to safety culture, with particular attention to older staff, those with a diploma education level, those with long working experience, and those in general staff positions. The results could help the management of the health-care centers to introduce a systematic approach to patient safety, to tackle the weak points and improve them, to initiate continuous assessment of the safety culture, and to increase awareness of a no-blame culture. There is also a strong need to investigate the knowledge and skills of health-care staff to gain deeper insight into the present situation.
Possibly, another tool allowing a more comprehensive measurement of safety culture in PHCs could be utilized to recognize other factors that might be important for patient safety.

Ethical Statement

This study was approved by the Imam Abdulrahman Bin Faisal University research committee with approval reference number IRB-PGS-2020-03-056. Permission to participate in the study was obtained from the PHC managers after giving full information about the aim and purpose of the study. All participants signed written informed consent to confirm their willingness to participate after having the purpose of the study explained.
YBCO-based non-volatile ReRAM tested in Low Earth Orbit

A YBCO-based test structure belonging to the family of ReRAM devices associated with the valence change mechanism is presented. We characterized its electrical response prior to its launch to a Low Earth Orbit (LEO) using standard electronics and also with the dedicated LabOSat-01 controller; similar results were obtained in both cases. After about 200 days at LEO on board a small satellite, electrical tests started on the memory device using the LabOSat-01 controller. We discuss the results of the first 150 tests, performed along a 433-day time interval in space. The memory device remained operational despite hostile conditions involving launch and lift-off vibrations, permanent thermal cycling and exposure to ionizing radiation, with doses 3 orders of magnitude greater than the usual ones on Earth. The device showed resistive switching and IV characteristics similar to those measured on Earth, although with changes that follow a smooth drift in time. A detailed study of the electrical transport mechanisms, based on previous models that indicate the existence of various conduction mechanisms through the metal-YBCO interface, showed that the observed drift can be associated with a local temperature drift at the LabOSat controller, with no clear evidence that would allow determining changes in the underlying microscopic factors. These results show the reliability of complex-oxide non-volatile ReRAM-based devices for operation under all the hostile conditions encountered in space-borne applications.

I. INTRODUCTION

Many efforts have been made in recent years to deepen the study of the properties of memory devices based on the resistance switching effect (called ReRAMs or memristors), in order to analyze their possible applications as memory devices as well as in logic circuits [1] or in circuits that mimic the electrical behavior of synapses [2] or even of neurons [3]. Another area of great technological importance is related to the development of radiation-resistant memories [4]. This is of particular interest for the aerospace industry, as electronic circuits in space must be protected from ionizing radiation from the Sun and/or from other radiation sources. Depending on the spacecraft's orbit, different strategies are employed to mitigate radiation effects [5,6]. For instance, in Low Earth Orbit (LEO) missions, long-term radiation effects, like Total Ionizing Dose (TID), are one of the main concerns. Ionizing radiation in LEO is mainly composed of high fluxes of energetic protons and electrons trapped in the inner and outer Van Allen belts. Typically, most of these ionizing particles are stopped by aluminum shields, although this strategy increases the spacecraft's weight and, in consequence, the mission's cost. In the case of interplanetary or deep-space missions, the main risks are transient events, like Single Event Effects, caused by high-energy particles from the Sun (coronal mass ejections) or by galactic cosmic rays. Usually, these missions use fail-safe systems that rely on radiation-hardened electronics, error correction algorithms and redundant CMOS circuits to mitigate this kind of effect. The ubiquitous flash memory technology for non-volatile storage relies on charge confinement, which is unstable against ionizing radiation [7]. Shielding, redundancy and watchdog timers are common strategies used to mitigate sporadic but profoundly disturbing problems triggered by the incidence of foreign radiation.
At the core of the alternative ReRAM technology, memristors offer either interface-based or filamentary-based mechanisms that rely on the properties of their constituent materials, thus exhibiting a hardened response against ionizing radiation [8]. Moreover, due to their simple and downscalable capacitor-like geometry, other potential problems are expected to be tackled: robustness against launch vibrations, and stability against omnipresent thermal cycling. The resistance switching (RS) of ReRAM devices based on perovskite oxide-metal interfaces is associated with the local valence change induced by the electric-field-driven migration of oxygen vacancies [9]. Their electrical conduction is mainly determined by the microscopic properties of the volume of the oxide close to the interface [10,11], rather than by the interfacial surface itself, as occurs, for example, in cases where a Schottky barrier is formed [12]. This fact can be exploited to produce radiation-resistant memory devices, considering that it will be more difficult for radiation to affect a volume property than one that depends solely on the surface. With this strategy in mind, in this work we explore the electrical response of YBa2Cu3O7−δ (YBCO)-based memristors at a Low Earth Orbit (LEO). The electrical tests were performed by a LabOSat-01 controller (LS01 hereafter) [13] at a 500 km-altitude orbit onboard a small satellite of the Satellogic company [14]. The electronic controller is powered once a day: a Standard Test (ST) is carried out and the results are stored for transmission to Earth through the satellite's communication capabilities. There is no synchronicity between these measurements and the position of the satellite; our results are thus to be analyzed as random samplings along a single LEO. Our results indicate that the YBCO-based test structure remained operational after more than 14 months in LEO, showing characteristics similar to those previously measured on Earth. A detailed analysis of the electrical transport, based on previous models, allowed us to infer that the small drift over time observed in the electrical properties can be essentially associated with the temperature changes measured onboard the satellite at LEO, and that there is no evidence of microscopic changes, such as those related to an increase of defects caused by thermal cycling or ionizing radiation, that could affect the performance of the device. In that sense, our studies serve as an initial step towards the validation of perovskite oxide-based ReRAM devices for space-borne applications.

II. EXPERIMENTAL DETAILS

Fully relaxed YBCO thin films were grown by pulsed laser deposition (PLD) on top of a (100) single-crystal STO substrate. The deposition was performed by applying 1500 pulses with a growth rate of 0.1 nm/pulse, producing a 150 nm-thick YBCO layer. This growth rate was previously confirmed by transmission electron microscopy (TEM) under the same deposition conditions [15]. The superconducting transition temperature (∼90 K) was determined by resistivity and magnetization measurements, confirming that the YBCO films are nearly optimally doped. Additional details of their synthesis and characterization can be found elsewhere [15-18]. The device under test (DUT) was built by sputtering 30 nm of 2 different metal contacts (0.7 × 0.7 mm²) on top of one YBCO film surface, arranged in a planar structure (which maximizes the exposure to external irradiation), as depicted in Fig. 1.
One of the contacts, labeled arbitrarily as "+", was made with Pt, while Au was used for the ground (−) pad. We used Pt and Au in order to produce a DUT with essentially only one active interface, as will be described later. The electrodes have a mean separation of 3 mm. Cu leads were carefully fixed over them by using silver paint, without directly contacting the surface of the YBCO sample. The YBCO-based DUT was attached inside a SOIC-16 package, where its Cu leads were bonded with conductive silver paint. Finally, the package was sealed using space-qualified epoxy resin [19]. The DUT was first characterized at room temperature at the Laboratorio de Bajas Temperaturas (LBT) by using a B2902B Agilent SMU, programmed to apply 10 ms current writing pulses (Ipulse) of increasing and decreasing amplitude between ±20 mA, establishing a hysteresis cycle by measuring the voltage during the pulses (Vpulse). In this way, the current-voltage (IV) characteristics of the DUT are measured. With a 1 s delay after each writing pulse, a small reading voltage is applied and, again by measuring the current, the remanent resistance (Rrem) is determined. By plotting Rrem as a function of Vpulse, a resistance hysteresis switching loop (RHSL) can be observed, with Vset and Vreset the voltages where the resistance switching begins, producing a low-resistance or a high-resistance state, respectively, as shown in Fig. 2. As obtained previously for ceramic [20] and thin-film [21] metal/YBCO devices, our DUT exhibits bipolar RS and hysteretic, non-linear IV characteristics. The counter-clockwise RHSL indicates that the "active" contact (ie the one that generates most of the resistance change) is, as expected, the ground Au contact, although a small change at the "+" contact of the device is also observed (marked with circles in Fig. 2), forming the shape coined as "table with legs" [22]. The voltages Vset and Vreset are also indicated. To characterize the packaged DUT in orbit, we soldered it onto a dedicated controller, the LS01 board [13]. This board was specifically designed to electrically test two- and three-terminal electronic devices in hostile environments. Its purpose is to increase the Technology Readiness Level (TRL) of electronic devices for space-borne applications. In fact, LS01 has proven successful in characterizing TiO2-based and La1/3Ca2/3MnO3-based ReRAMs in LEO [23]. It uses an SMU to test the DUT and has several sensors to monitor the hostile conditions while it is operating [13]. In particular, in this work we report data from its temperature sensor and from 3 solid-state dosimeters, which measure long-term radiation doses, ie TID, using COTS pMOS transistors [24-26]. In short, when a CMOS transistor is exposed to ionizing radiation, its threshold voltage (Vth) shifts to negative values due to charge accumulation in oxide traps under the gate structure [27]. Let us now consider how LS01 performs the measurements on the DUT while in orbit. The pulsing sequence used to measure the IV and RHSL curves embedded in LS01 starts by applying a +250 µA pulse; the amplitude of the pulses is then increased in steps of 250 µA until it reaches a maximum of 20 mA, after which the amplitude is decreased back to +250 µA, also in 250 µA steps. Once the positive part of the sequence has been swept, an analogous procedure is performed for the negative part, until a minimum of −17 mA is reached in steps of −250 µA.
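For clarity, the writing-amplitude staircase just described can be generated as in the following sketch; each amplitude corresponds to one writing pulse, and the reading pulse that follows every writing pulse is described next:

```python
import numpy as np

def ls01_write_sequence(step=0.25, i_max=20.0, i_min=-17.0):
    """Amplitude staircase (in mA) of the LS01 writing sequence:
    +0.25 -> +20 -> +0.25 mA, then -0.25 -> -17 -> -0.25 mA,
    in 0.25 mA steps, as described in the text."""
    up = np.arange(step, i_max + step / 2, step)      # +0.25 ... +20
    pos = np.concatenate([up, up[-2::-1]])            # back down to +0.25
    down = np.arange(-step, i_min - step / 2, -step)  # -0.25 ... -17
    neg = np.concatenate([down, down[-2::-1]])        # back up to -0.25
    return np.concatenate([pos, neg])

seq = ls01_write_sequence()
print(len(seq), seq[:3], seq[-3:])  # 294 writing amplitudes in total
```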
250 ms after each writing pulse is applied, a reading pulse of I_rem = 1 mA is applied. Simultaneous measurements of V_pulse (V_rem) are taken while applying every I_pulse (I_rem) pulse; these measurements are used to determine the IV and RHSL curves.

As the days in orbit went by, we observed LS01's temperature ranging mostly between −10 and 13 °C, as can be seen in Fig. 4. Although periodic variations are expected, probably related to the orbit's kinematics (∼90 min/orbit) and to the 24 h ST execution time lapse, a smooth shift toward lower temperatures is observed as the number of days at LEO increases. Fig. 4 is a partial sample of the overall thermal cycling that is reproduced with each orbit, and it may not display the maxima and minima reached, in some cases due to repositioning of the satellite while performing other tasks. This gives rise to additional stress on the DUT and may also affect all the associated measurement electronics.

During this period, the LS01 dosimeters did not show significant threshold voltage shifts; the standard deviation of each dosimeter's dataset was comparable to 1 Gy(Si). Nevertheless, we simulated the TID using the SPENVIS tool [28] to estimate the expected radiation levels at the satellite orbit (∼500 km altitude). To perform the simulations, we used the exact dates reported above and the orbital parameters of the NuSat-5 satellite (a 500.1 km-altitude circular orbit, with 97.33° inclination, 11.21° right ascension of the ascending node, 251.6° argument of perigee, and a true anomaly of 133.3°). Fluences and doses were simulated using an effective shielding equivalent to 9 mm-thick aluminum foam panes, the AP-8 and AE-8 models for trapped protons and electrons, and the ESP-PSYCHIC model for solar particles. We also considered minimum solar activity (end of Solar Cycle 24), and for the TID calculations we used the SHIELDOSE-2 model. The simulated differential fluences of ionizing particles found in the orbit of the satellite are shown in Fig. 5a for the total mission period. For energies below 10 MeV, the trapped-electron fluence is around 2 orders of magnitude higher than the proton one. Conversely, for higher energies the electron fluence is significantly reduced, so the high-energy contribution to the TID comes mainly from trapped protons. However, this contrast in the trapped-particle fluence spectra is balanced by the thick aluminum shielding and its stopping power (see Fig. 5b). As can be seen in Fig. 5c, the total TID experienced by the DUT inside the satellite is composed of similar fractions of proton and electron TID. Finally, the simulations indicate that the absorbed dose should be around 2 Gy(Si) for the total mission period. This result is slightly lower than the expected values for a 1-yr period of typical LEO missions (see references [23,24] and references therein). However, as Huston and Pfitzer pointed out [29], we should consider this result as a rough approximation, as the trapped-particle models used here returned predictions overestimated by up to a factor of 2 in those previous works. Hence, it is not surprising that the LS01 dosimeters did not sense critical levels of TID.

III. RESULTS AND DISCUSSION

Typical successive RHSLs measured by LS01 after nearly 1 year at LEO are shown in Fig. 6. The DUT still shows bipolar resistive switching, and the counter-clockwise circulation is maintained, as well as the "table with legs" shape, indicating that the Au/YBCO interface is still dominant, with a lower resistive-switching contribution of the Pt/YBCO interface.
A small change in the remanent resistance can be observed after each cycle. This can be a consequence of the temperature variation of the DUT, or it can be related to a relaxation of the final resistance, considering the 24 h delay between successive measurements. This effect is characteristic of YBCO-based interfaces, indicating the high mobility of oxygen along specific crystallographic orientations or in grain boundaries [30,31]. Additionally, it can be noticed that the remanent resistance values are 15-30% higher than those measured before lift-off.

In order to gain insight into the origin of the variations observed in the RHSLs, we plotted in Fig. 7 the evolution of the maximum and minimum R_rem as a function of the days at LEO. A noisy behavior, with an overall tendency to increase with the number of days, can be observed. A similar tendency can be observed in Fig. 8 for the voltages V_set and V_reset, although the variation of V_set with the number of days at LEO is less evident due to its noisy behavior.

In order to understand the physical origin of the observed evolution of both characteristic parameters of this memristive DUT, we can deepen our analysis by trying to determine the conduction mechanisms involved in each interface. For this, we can appeal to the analysis of the IV characteristic curves based on the power exponent $\gamma = d\ln(I)/d\ln(V)$ plotted as a function of $V^{1/2}$ [32]. Indeed, this method has been very helpful in determining the existence of different transport mechanisms present in a junction, especially when more than one is involved [33][34][35]. The γ representation of our DUT, determined before lift-off, is shown in Fig. 9. The almost linear dependence from low voltages (with a positive intercept) up to a voltage where a maximum is reached confirms that Poole-Frenkel (PF) emission is the main conduction mechanism through the interfaces of our DUT. A more detailed equivalent circuit model, which includes a leak resistor (R±) in parallel with the non-linear PF± element and a series resistor (r), representing the interface-bulk resistance plus the intrinsic resistance of the film, is presented in the inset of Fig. 9. "+" and "−" represent the Pt/YBCO and the Au/YBCO interfaces, respectively. This more elaborate model was determined in previous studies performed on the very same interfaces [21,36,37]. Within this framework, the current through each PF element ($I_{PF}^{\pm}$) as a function of the voltage $V_{PF}^{\pm}$ at a fixed temperature T can be expressed as [38]:

$$I_{PF}^{\pm} = \frac{V_{PF}^{\pm}}{R_{0}^{\pm}}\,\exp\!\left[-\frac{q\left(E_{Trap}^{\pm}-\sqrt{q\,V_{PF}^{\pm}/(\pi\,\epsilon'^{\pm}\,d^{\pm})}\right)}{k_{B}\,T}\right] \qquad (1)$$

with

$$R_{PF}^{\pm}(V_{PF}^{\pm}) \equiv \frac{V_{PF}^{\pm}}{I_{PF}^{\pm}}, \qquad (2)$$

where $R_{0}^{\pm}$ is a pre-factor associated with the geometric factor of the conducting path, the electronic drift mobility (µ) and the density of states in the conduction band; $E_{Trap}^{\pm}$ is the trap energy level, $k_B$ the Boltzmann constant, $q$ the electron charge, $\epsilon'^{\pm}$ the real part of the dielectric constant of the oxide, and $d^{\pm}$ the interfacial thickness where most of the voltage drops (for each interface ±). In this way, the voltage-dependent resistance related to the PF element of each interface can then be expressed as:

$$R_{PF}^{\pm}(V_{PF}^{\pm}) = R_{0}^{\pm}\,\exp\!\left[\frac{q\left(E_{Trap}^{\pm}-\sqrt{q\,V_{PF}^{\pm}/(\pi\,\epsilon'^{\pm}\,d^{\pm})}\right)}{k_{B}\,T}\right] \qquad (3)$$

This equation indicates that, in the low-voltage limit, the PF element behaves as a function of temperature as a semiconductor does. Despite the existence of the two interfaces and the parallel and series resistances indicated in the more detailed circuit model, if we plot the measured $R_{rem}^{max}$ as a function of the temperature of each day at LEO (see Fig. 10a), we can observe that our DUT behaves as indicated by Eq. 3.
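As an illustration of this analysis pipeline (a sketch of ours with hypothetical array inputs, not the authors' code), the γ representation and the low-voltage Arrhenius extraction of E_Trap from Eq. (3) can be computed as follows; V is assumed to be sampled monotonically.

```python
import numpy as np

def gamma_vs_sqrtV(V, I):
    """Power-exponent representation: gamma = dln(I)/dln(V) against sqrt(V).
    A linear branch with a positive intercept signals Poole-Frenkel emission."""
    lnV = np.log(np.abs(V))
    lnI = np.log(np.abs(I))
    gamma = np.gradient(lnI, lnV)   # numerical derivative on the lnV grid
    return np.sqrt(np.abs(V)), gamma

def trap_energy_from_R(T_kelvin, R_low_voltage):
    """Arrhenius fit of Eq. (3) in the low-voltage limit, R ~ R0*exp(q*E_Trap/(kB*T));
    returns E_Trap in eV (kB is expressed in eV/K so the electron charge cancels)."""
    kB = 8.617333e-5  # eV/K
    slope, _ = np.polyfit(1.0 / T_kelvin, np.log(R_low_voltage), 1)
    return slope * kB
```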
In fact, this result also indicates that the resistive behavior of the DUT is dominated by the Au/YBCO interface, and more particularly by the PF emission linked to the oxide in the interfacial zone close to the Au contact. The low resistance of both the Pt/YBCO interface (probably associated with a low R+ value) and the film's intrinsic resistance (r), as well as the low ohmic conducting leakage through the R− element, determine the simplicity of the obtained dependence. The obtained value of E_Trap ≃ 0.086 eV is in close accordance with the values already obtained for the Au/YBCO interface in its high resistance state [37]. Besides the noisy behavior of the resistance of the DUT along the days in the hostile environment at LEO, the observed drift of the remanent resistance appears to be strongly associated with the temperature variations that LS01 experiences within the satellite. This reasoning can be applied qualitatively to the high resistance state of the successive RHSLs presented in Fig. 6, where the values of R_rem^max can be ordered inversely to the temperature at which the measurement was made. In a similar way, if we plot V_reset as a function of the temperature measured by LS01, the data follow an almost linear dependence with a negative slope, as shown in Fig. 10b. This dependence was already observed and previously reported [39]. In addition, if we compare the γ curves measured by LS01 at LEO (sensing a smaller voltage range) with those obtained prior to lift-off (see Fig. 11), we also obtain a result consistent with the change in temperature. This can be observed in the reduction of the slope of the linear part and in the reduction of the maximum γ attained, which may be attributed to the increase of the limiting series resistance (R+) as a consequence of its semiconducting-like behavior (see the similarity with Fig. 9 of reference [32]). In other words, no relevant changes in the microscopic factors associated with the transport properties can be determined, other than those linked to the measured temperature variations. These results represent a milestone as we move toward a next stage of testing YBCO-based devices.
Tikrit Journal of Pure Science

The lithofacies and sedimentary structures of the Tanjero Formation (Campanian-Maastrichtian) have been studied in the field at the Heeran area, east of Shaqlawa, north of Iraq. The thickness of the formation is (120) m; the lower contact is conformable with the Shiranish Formation, while the upper contact is unconformable with the Kolosh Formation and is recognized by the presence of a basal conglomerate about (1.5) m in thickness. According to the lithofacies, the formation is divided into three parts: the lower and the upper parts are composed of clastic sandstone interbedded with thin beds of marl and carbonate, while the middle part is friable and composed of marl and shale. The sedimentary structures studied in the formation confirm that the sediment was deposited by turbidite currents under unstable tectonic conditions.

Introduction

The Tanjero Formation (Campanian-Maastrichtian) is exposed in the structures of the high folded zone of Iraq. It forms low or gently elevated areas in the Heeran area and is mostly characterized by soft clastic rocks with dark yellowish-green and olive-green colors. The Tanjero Formation is composed of sandstone, claystone, shale and conglomerate beds, with common lateral and vertical variation throughout the formation [1]. [2] indicated that the Tanjero Formation crops out in the high folded and imbricate zones of northeast Iraq. It consists of clastic rocks with a thick sequence of sandstone, claystone and beds of conglomerate with biogenic limestone. The upper part of the formation consists of silty marls, siltstone, sandstone, conglomerate and sandy or silty limestone, while the lower part is formed predominantly of globigerinal open-sea marl and sandy or silty limestone [3]. The planktonic foraminifera in the upper part indicate deep marine deposition, while the middle and the lower parts were deposited in a deep basinal environment [4][5][6][7]. According to [8], a study of the sedimentology of the Tanjero Formation in the Dukan area divided the formation into four beds: recrystallized limestone beds, sugary-texture beds, foraminiferal limestone beds, and planktonic and benthonic limestone beds. The same authors indicated that the lower part was deposited under deep marine shelf environments, while the middle and the upper parts were deposited under open marine environments. The present work addresses the sedimentological aspects, depending on the lithofacies from the field studies, in addition to recognizing the sedimentary structures in the clastic sediment succession of the Tanjero Formation section outcropping near Heeran town, northeastern Iraq (Figure 1).
Field Study

The field study of the Tanjero Formation at the Heeran area included the determination of the lower contact between the Tanjero Formation and the underlying Shiranish Formation from the physical characters of the rocks, such as the color and hardness at the lower contact: the Shiranish Formation is represented by light-colored marly limestone rocks, while the Tanjero is represented by the appearance of sandstone interbedded with thin beds of marl or mudstone. The upper contact is recognized by the unconformity, with a layer of basal conglomerate between the Tanjero and the overlying Kolosh Formation. The field study also included the determination of the different types of clastic layers of sandstone, mudstone and thin beds of carbonate. The total thickness of the Tanjero Formation is about 120 m. Many sedimentary structures were identified, especially within the sandstone beds and the overlying or underlying beds of marl, and ten samples (T1 to T10) (Figure 2) were collected from the different rock types of the Tanjero Formation, with the aim of interpreting the different clastic and non-clastic characters of the rocks.

Tectonic Setting

The Laramide orogeny in the late Maastrichtian formed a deep, wide trough in the NE of the Arabian plate, with a WNW-ESE extension, where the flysch clastic deposits that represent the Tanjero Formation accumulated [9]. The basin of the (Upper Cretaceous) Tanjero Formation is combined tectonically with that of the underlying Shiranish Formation and is named the Upper Cretaceous Zagros early foreland basin, instead of the previous miogeosyncline and trench basin [10]. [11] mentioned that the obduction and closure of the southern Neo-Tethys occurred during the late Campanian and Maastrichtian, causing a major transgression across the whole of Iraq; the same stress regime in the NE of the Arabian plate led to the formation of intraplate extensional and trans-extensional basins of NW-SE and E-W trend. The age of the unconformity recognized within the Tanjero Formation at the Chwarta area is estimated at a (1.23 m.y.) duration, depending on the planktonic foraminiferal biostratigraphic zonation [12]; the same author also recognized (26) planktonic foraminiferal species in addition to (30) benthonic species (Figure 2).

Sedimentary Structures

Sedimentary structures are the large-scale features of sedimentary rocks and are used for the interpretation of paleocurrents and environmental conditions at the time of deposition, and to study the paleogeography and tectonism of the sedimentary rocks [13]. In the field study of the Tanjero Formation, north Iraq, and according to the previous studies [10]; [7], the formation was deposited in a flysch environment under turbidity conditions. During the field study and the lithofacies description, many sedimentary structures were identified, especially in the clastic rocks such as sandstone and mudstone. The important sedimentary structures are:

1. Groove marks, which are linear ridges between the sandstone beds and the underlying mudstone or marlstone [13] (Plate 8).
2. Impact marks with different shapes in the studied section within the clastic rocks; these structures are common in sediments accumulated by turbidity currents (Plate 8).
3. Bedding and lamination, which are produced by changes in sedimentation conditions leading to changes in grain size (Plate 9).
4. Graded bedding, showing coarsening of grain size downward, as recognized in the sandstone beds of the studied section of the Tanjero Formation (Plate 10).
5. Slumping, sliding and convolute structures, which were identified in the Tanjero Formation as a result of unstable tectonism directly after the deposition of the sediment; they also formed by dewatering under the load of the overlying accumulated sediments [13]. In the present study the slumping and slide structures occur in different sizes (Plate 11).
6. Sand and clay balls, which are common in the beds of sandstone and mudstone, sometimes with elongated or elliptical shapes resulting from the activity of turbidity currents (Plate 11).

The clastic rocks of the Tanjero Formation were deposited in foreland basins, in elongated deep troughs at continental margins, and most of the clastic sediments show indications of sliding and slumping structures with internal deformations resulting from gravity flow down steep slopes. The turbidity currents also produced gradual changes in grain size and scoured bases between the sandstone and the underlying mud and carbonate beds. The thickness of the beds is variable and shows deformations resulting from dewatering, which causes the sediment to lose its original volume and deforms the bedding to different degrees [13]; [14].

Plate 8: Groove marks, impact marks and clay-ball structures. Plate 9: Bedding and lamination, convolute and sand-ball structures.

Lithofacies Stratigraphy

The Tanjero Formation is composed of clastic sediment, mostly friable, with dark to olive-green colors. The thickness of the formation is about 127 m on the NE limb of the Safien anticline near Heeran town, east of Shaqlawa city. The lower contact of the formation is conformable with the Shiranish Formation, identified by the change in the physical characters of the two formations, such as the change in the color and hardness of the rocks (Plate 1 and Plate 2). The Shiranish Formation is characterized by massive marly limestone layers followed by the dark green clastics of the Tanjero Formation.

Tectonically, the studied area is located on the NE limb of the Safien structure, which represents the largest plunging anticline, with a length of about (50) km and a height of (1970) m above sea level, disturbed by many different faults; this faulting system causes discontinuity in the axis of the anticline and causes the Qamchuqa Formation to appear above the Shiranish Formation SW of Heeran town. Other faults run along the Tanjero and Shiranish Formations near the NW plunge of the Safien structure, west of Heeran [1]. There is no indication in the study area of the unconformity within the succession of the Tanjero Formation that was recognized by [12] at Chwarta.

Figure 2: Stratigraphic column of the Tanjero Formation indicating the unconformity within the formation, southeast of Chwarta city [12]. Figure 3: Stratigraphic cross-section indicating the lithofacies of the Tanjero Formation at the Heeran area.

Discussion

The Tanjero Formation has been studied by many authors; from tectonic development and biostratigraphic investigation they concluded that the formation was deposited in deep marine conditions and was affected by tectonic conditions and turbidite currents in a foreland basin, the clastic sediments of the formation being of flysch type. The thickness and the composition of the formation differ from place to place according to the development of the basin of deposition; the formation was deposited in a narrow, elongated deep trough, and most of the clastics and carbonates were derived from the older rocks on the sides of the trough under unstable conditions, with turbidite currents and variations in sea-level conditions. The study of the lithofacies and sedimentary structures of the formation indicated that its sediments were deposited under turbidity conditions in a foreland basin. The thickness of the formation in the studied area is (120) m, and it is divided into three parts according to lithology and composition. It is found that the Tanjero Formation in the Heeran area is different from other outcrops in north and northeast Iraq, which confirms the lateral variation in depositional conditions during the Campanian-Maastrichtian age.

Conclusions

From the field study of the lithofacies and the sedimentary structures of the Tanjero Formation (Campanian-Maastrichtian) at the Heeran area east of Shaqlawa, the following is concluded:

1. The lower contact between the Tanjero and Shiranish Formations is conformable and is recognized by the lithological change from the clastics of the Tanjero to the carbonates of the Shiranish Formation. The upper contact between the Tanjero Formation and the overlying Kolosh Formation is unconformable and is identified by a layer of basal conglomerate, with a thickness of about (1-5) m, composed of carbonate and chert gravels.
2. According to the lithofacies, the Tanjero Formation is divided into three parts: the lower and upper parts are composed mainly of clastic rocks interbedded with marl and marly limestone, while the middle part is mostly formed of friable marl and shale with sandstone and mudstone beds.
Establishing and clinically validating a machine learning model for predicting unplanned reoperation risk in colorectal cancer

BACKGROUND: Colorectal cancer significantly impacts global health, with unplanned reoperations post-surgery being key determinants of patient outcomes. Existing predictive models for these reoperations lack precision in integrating complex clinical data.

AIM: To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.

METHODS: Data of patients treated for colorectal cancer (n = 2044) at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected. Patients were divided into an experimental group (n = 60) and a control group (n = 1984) according to unplanned reoperation occurrence. Patients were also divided into a training group and a validation group (7:3 ratio). We used three different machine learning methods to screen characteristic variables. A nomogram was created based on multifactor logistic regression, and the model performance was assessed using the receiver operating characteristic curve, calibration curve, Hosmer-Lemeshow test, and decision curve analysis. The risk scores of the two groups were calculated and compared to validate the model.

RESULTS: More patients in the experimental group were ≥ 60 years old, male, and had a history of hypertension, laparotomy, and hypoproteinemia, compared to the control group. Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation (P < 0.05): Prognostic Nutritional Index value; history of laparotomy, hypertension, or stroke; hypoproteinemia; age; tumor-node-metastasis staging; surgical time; gender; and American Society of Anesthesiologists classification. Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.

CONCLUSION: This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer, which can improve treatment decisions and prognosis.

INTRODUCTION

According to the World Health Organization, colorectal cancer is one of the most common malignant tumors of the digestive tract [1]. In 2018, there were more than 1.8 million cases of colorectal cancer globally, with a total of 881,000 deaths, an average of 1 death out of every 10 cases [2]. Colorectal cancer is one of the top three cancer contributors to morbidity and mortality rates in the world [3]. Colorectal cancer poses a significant threat to the physical and mental health of the Chinese population. Early diagnosis of colorectal cancer in China is generally poor, and the majority of patients are in the middle-to-late stage of disease at the time of diagnosis [4]. Postoperative recurrence and metastasis of colorectal cancer are influenced by multiple factors, such as lymph node metastasis, tumor type, growth location, and degree of infiltration. These factors are also key in determining the prognosis of patients with colorectal cancer [5].
Colorectal cancer is a serious malignant tumor, and its treatment can include surgery, radiation therapy, chemotherapy, molecular-targeted therapy, immunotherapy, endocrine therapy, and traditional Chinese medicine [6]. Currently, a combination approach based on surgery is the preferred strategy for the treatment of colorectal cancer [7]. Common surgical methods include radical surgery. However, in recent years, laparoscopy has been widely adopted due to its rapid recovery time, minimal trauma, and significant short-term efficacy [8].

Postoperative reoperation, particularly the rate of unplanned reoperation within 30 d, is an important indicator of surgical quality and has been adopted by the United States Centers for Medicare and Medicaid Services in its Physician Quality Reporting System [9]. Due to the high morbidity and mortality of colorectal cancer, patients undergoing surgery are at risk of later reoperation. The percentage of postoperative unplanned reoperation in patients with colorectal cancer ranges from 3% to 11% [10,11]. The causes of reoperation include complications such as anastomotic leakage, bowel obstruction, and postoperative bleeding. Understanding the causes of reoperation helps improve patient prognosis. Despite improvements in surgical techniques and perioperative management, postoperative unplanned reoperation is still closely associated with complications [12]. These complications not only affect the short-term prognosis of the patient but may also impose surgical stress on the immune system, affecting postoperative outcomes. Unplanned reoperation is an independent predictor of a patient's mortality within one year of surgery [13].

Machine learning has great potential for disease risk prediction and diagnosis. In colorectal cancer, machine learning models can accurately predict the risk of unplanned postoperative return to surgery by comprehensively analyzing multidimensional data on surgical approaches and on a patient's clinical characteristics and comorbidities [14]. The ability of such techniques to learn and adapt to new data means that their predictive accuracy continues to improve over time and with data accumulation, reducing unnecessary reoperations, optimizing patient prognosis, and improving quality of life.

The purpose of this study is to establish and validate a model of unplanned reoperation after colorectal cancer surgery. This model combines multidimensional data, including patient clinical characteristics, surgical modalities, and comorbidities, to improve prediction accuracy. This model will help physicians to more accurately assess patient risk.

Machine learning models

To efficiently screen feature variables associated with unplanned reoperation after colorectal cancer surgery, we used three different machine learning methods: support vector machine (SVM) [16], least absolute shrinkage and selection operator (LASSO) regression [17], and extreme gradient boosting (XGBoost) [18].

The SVM method effectively distinguishes between two classes of data points (i.e., patients with or without unplanned reoperation) by finding an optimal hyperplane in a high-dimensional space. SVM is particularly effective when dealing with large datasets because it can work with high-dimensional feature spaces and nonlinear classification problems.
LASSO regression is particularly useful for feature selection, as it shrinks the coefficients of unimportant features to zero. This method limits the complexity of the model by adding a regularization term to avoid overfitting, while still identifying the most relevant features.

XGBoost is an ensemble learning method based on decision trees, which improves prediction accuracy by constructing multiple models and combining them. It is an effective feature selection method, as it optimizes the performance of the model through a gradient-boosting framework.

Model evaluation tools

To fully evaluate our unplanned reoperation model, we used the following key statistical tools. The receiver operating characteristic (ROC) curve was used to assess the model's ability to discriminate between the two types of outcomes (i.e., occurrence and non-occurrence of unplanned reoperation); the more diagnostic the model is, the closer the area under the curve (AUC) is to 1. We also used calibration curves to test the accuracy of the model's predicted outcomes; ideally, the calibration curve should be close to 45 degrees, showing a high degree of agreement between predicted and actual values. The Hosmer-Lemeshow test (H-L test) was used to assess the fit of the model; a high P value implies good agreement between model predictions and actual observations. Decision curve analysis (DCA) was used to assess the utility of the model in clinical decision-making, as it identifies the thresholds at which the use of the model best improves patient care.

Measurement of results

(1) The differences in clinical data between the control and experimental groups were compared; (2) SVM, LASSO, and XGBoost were used to screen for unplanned reoperation feature variables, and a Venn diagram was used to identify common feature variables; (3) Independent risk factors for postoperative unplanned reoperation were screened using logistic regression; (4) A nomogram was created based on the multifactorial logistic regression; (5) The ROC curve, calibration curve, H-L test, and DCA were used to evaluate the discrimination, calibration, and clinical utility of the nomogram; and (6) Based on the risk coefficients, the risk scores of patients in the training and the validation groups were calculated, the differences in the risk scores of the patients were compared, and the predictive effect of the model was verified using the ROC curve.

Statistical analysis

Statistical analysis was carried out using SPSS 26.0 software. Normally distributed continuous data are presented as mean ± SD, and comparisons between groups were made using t-tests. The χ² test was used for count data. We screened all variables using SVM, LASSO, and XGBoost, and the common variables were identified using a Venn diagram. Multiple logistic regression analysis of the common variables was used to identify the independent risk factors. Then, we constructed a nomogram prediction model based on the selected independent risk factors using R software and the rms package. We obtained the calibration curve using the Bootstrap method and calculated the C-index. We also plotted the ROC curves of the independent risk factors and calculated the AUC to validate the performance of the nomogram prediction model.
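Although the authors worked in SPSS and R, the three-method screening step can be sketched in Python with scikit-learn and xgboost as follows; all hyperparameters and the number of retained features are illustrative assumptions, not the study's settings.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegressionCV
from sklearn.svm import LinearSVC
from xgboost import XGBClassifier

def screen_features(X: pd.DataFrame, y: np.ndarray) -> set:
    """Screen candidate predictors with SVM, LASSO and XGBoost, then keep the
    'Venn' intersection (variables retained by all three methods)."""
    # SVM: recursive feature elimination on a linear SVM
    svm_sel = RFE(LinearSVC(C=1.0, dual=False, max_iter=10000),
                  n_features_to_select=16).fit(X, y)
    svm_vars = set(X.columns[svm_sel.support_])

    # LASSO: L1-penalized logistic regression with cross-validated strength
    lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=10).fit(X, y)
    lasso_vars = set(X.columns[np.abs(lasso.coef_).ravel() > 0])

    # XGBoost: keep the top-ranked features by gain-based importance
    xgb = XGBClassifier(n_estimators=200, max_depth=3,
                        eval_metric="logloss").fit(X, y)
    xgb_vars = set(pd.Series(xgb.feature_importances_,
                             index=X.columns).nlargest(13).index)

    return svm_vars & lasso_vars & xgb_vars
```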
Comparison of clinical data

Comparison of the clinical data of the two groups showed that the numbers of patients in the experimental group aged ≥ 60 years, male, with a history of hypertension, a history of laparotomy, hypoproteinemia, and a surgical time ≥ 240 min were significantly higher than those in the control group. The PNI of patients in the experimental group was also significantly higher than that of patients in the control group (P < 0.05; Table 1). The remaining variables were not statistically different (P > 0.05).

Machine learning models screening unplanned reoperation feature variables

We screened the unplanned reoperation feature variables using the XGBoost, SVM, and LASSO methods (Figure 2). XGBoost identified a total of 13 feature variables (Figure 3A), SVM identified 16 feature variables (Figure 3B), and LASSO identified 11 feature variables (Figure 3C). Using a Venn diagram (Figure 3D), we found that the 3 methods screened 10 common characteristic variables: PNI, history of laparotomy, hypoproteinemia, age, TNM staging, history of hypertension, surgical time, gender, history of stroke, and ASA classification.

Logistic regression screening for independent risk factors for unplanned reoperation

We analyzed the 10 identified signature variables using multifactor logistic regression. The 10 signature variables were first assigned values (Supplementary Table 1). The resulting analysis revealed that age, gender, history of hypertension, history of laparotomy, hypoproteinemia, and PNI were independent risk factors impacting the likelihood of unplanned reoperation (P < 0.05; Table 2). The total score was obtained by summing the scores of each variable and finding the corresponding value on the "Total Score Axis". The value on the "Total Score Axis" was compared with the probability prediction line at the bottom of the nomogram to find the risk of postoperative unplanned reoperation (Figure 4).

Evaluation of nomogram

The discrimination, calibration, and clinical utility of the model were evaluated by four methods: ROC, calibration curve, H-L test, and DCA. The ROC analysis revealed that the AUC of the nomogram was 0.842, with 80.59% specificity, 76.67% sensitivity, and a 57.26% Youden index (Figure 5A). This indicates that the model has a good degree of discrimination and can correctly distinguish the outcome event from the non-outcome event. Calibration curve analysis found that the nomogram's calibration curve had a slightly poorer overlap, but generally followed the same direction (Figure 5B). The H-L test value was 8.588 (P = 0.378). The DCA curve indicated that the net benefit rate for unplanned reoperation was higher than the reference strategies, i.e., the blue line for the corresponding threshold probabilities was located to the upper right of the All line (red line), indicating that the model has some clinical utility (Figure 5C).
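As a small illustration (our sketch, not the authors' SPSS/R workflow), the reported discrimination metrics (AUC, sensitivity, specificity, and Youden index) can be recovered from predicted risks as follows.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_summary(y_true, y_score):
    """AUC plus sensitivity/specificity at the Youden-optimal cutoff.
    y_true: 1 = unplanned reoperation, 0 = none; y_score: predicted risk."""
    fpr, tpr, thr = roc_curve(y_true, y_score)
    j = tpr - fpr                       # Youden index J = sensitivity + specificity - 1
    k = int(np.argmax(j))
    return {"auc": roc_auc_score(y_true, y_score),
            "sensitivity": tpr[k],
            "specificity": 1.0 - fpr[k],
            "youden": j[k],
            "threshold": thr[k]}
```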
Validation of nomogram

We divided the data into a training group and a validation group. The risk scores were calculated separately for both groups and then validated using the ROC curve, calibration curve, H-L test, and DCA. As before, we compared the baseline information of patients in the training group with that of patients in the validation group. The results showed that there was no statistically significant difference between the baseline characteristics of patients in the training group and the validation group (P > 0.05; Table 3). We then calculated the risk scores of the two groups, and the results showed that the risk scores of the patients who underwent unplanned reoperation were higher than those of patients in the non-reoperation group, in both the training and the validation group (P < 0.001; Figure 6). Finally, we found that the AUCs for patients in the training group and the validation group were 0.798 and 0.846, respectively (Figure 7). This suggests that the model can correctly differentiate between the outcome and non-outcome events.

Clinical validation of predictive modeling

To validate our model, we randomly selected the clinical data of 1 patient with unplanned reoperation. This patient was aged ≥ 60 years, male, had no history of hypertension, no history of laparotomy, had hypoproteinemia, and his PNI was ≥ 43.76. The total score was calculated for this patient (45 + 30 + 0 + 0 + 39 + 100 = 216). The results showed that the probability of this patient having an unplanned reoperation was about 73% (Figure 8).

DISCUSSION

Treatment of colorectal cancer through laparoscopy allows comprehensive observation, clear peeling and resection of the lesion, as well as procedures such as hemostasis and lymph node dissection [19]. Laparoscopy has a low impact on the patient's abdominal cavity, reduces postoperative pain, and promotes recovery of gastrointestinal function [20]. However, despite the improved precision and safety of laparoscopy, unplanned reoperation remains a challenge for colorectal cancer outcomes [21]. Reoperation not only prolongs the hospital stay and increases the financial burden of the disease, but also affects the subsequent treatment plan and significantly increases perioperative morbidity and mortality [22]. Therefore, investigation of the causes and risk factors of postoperative reoperation in colorectal cancer has important clinical applications in reducing the rate of reoperation. The absence of a standardized definition for unplanned reoperation has resulted in notable variations in the reported endpoint indicators for postoperative colorectal cancer across different medical centers. In a study by Feo and colleagues, covering 92 hospitals in China, the average reoperation rate for colorectal cancer surgeries was 9.7% [23]. These discrepancies in unplanned reoperation were primarily attributed to disparities in medical resources and treatment approaches, which influence the risk of postoperative unplanned reoperations across various levels and regions of healthcare institutions. In contrast, the incidence of unplanned reoperations following laparoscopic surgery for patients with colorectal cancer was 2.94%. Our results generally align with the laparoscopic reoperation rate for bowel cancer (approximately 3.8%) reported by Speicher et al [24]. These observations reinforce the efficacy and safety of laparoscopic surgery as a preferred treatment option for colorectal surgical interventions.
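The worked example above sums per-variable points and maps the total to a probability. A generic sketch of that mapping is given below; the coefficients, variable ranges, and intercept are hypothetical placeholders for illustration only, since the actual point assignments come from the fitted model in Table 2.

```python
import numpy as np

def nomogram_points(betas, ranges):
    """Convert logistic-regression coefficients to 0-100 nomogram point scales:
    the variable with the largest |beta| * range spans 100 points."""
    span = {k: abs(betas[k]) * (ranges[k][1] - ranges[k][0]) for k in betas}
    scale = 100.0 / max(span.values())
    return {k: abs(betas[k]) * scale for k in betas}   # points per unit of each variable

def predicted_probability(intercept, betas, x):
    """Probability from the underlying logistic model itself."""
    z = intercept + sum(betas[k] * x[k] for k in betas)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical six-variable model mirroring the factors named in the text
betas = {"age_ge_60": 0.9, "male": 0.6, "hypertension": 0.7,
         "laparotomy": 0.8, "hypoproteinemia": 0.78, "pni_high": 2.0}
ranges = {k: (0, 1) for k in betas}      # all coded as binary indicators here
print(nomogram_points(betas, ranges))    # 'pni_high' spans the full 100 points
print(predicted_probability(-4.0, betas,
      {"age_ge_60": 1, "male": 1, "hypertension": 0,
       "laparotomy": 0, "hypoproteinemia": 1, "pni_high": 1}))
```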
Patients with colorectal cancer undergoing abdominal surgery have a higher incidence of unplanned postoperative reoperation compared to other general surgical procedures [25], due to their susceptibility to incisional and abdominal infections, venous thromboembolism, and perioperative complications [26,27]. In addition, the inherent necessity of reconstructing abdominal organs during colorectal surgery increases the likelihood of postoperative complications, thereby increasing the likelihood of subsequent reoperations.

This study aimed to develop a predictive nomogram model. To construct this predictive model, we first employed three advanced computational techniques: SVM [28], LASSO [29], and XGBoost [30]. These methods are known for their efficacy in managing high-dimensional datasets and their ability to identify critical variables in such datasets [18]. Specifically, SVM excels at handling a wide range of datasets, LASSO mitigates overfitting through a penalty-based approach, and XGBoost is particularly effective at dealing with nonlinear relationships between data points. This multifaceted methodological framework facilitates a robust assessment of the significance of variables from multiple analytical perspectives [31]. After identifying essential variables through these preliminary methods, we applied logistic regression analysis to investigate the identified variables. This analysis allowed us to identify independent risk factors that significantly impacted the probability of unplanned reoperation. Our findings suggest that age, gender, prior hypertension, history of laparotomy, hypoproteinemia, and PNI are key independent risk factors. These insights provide an understanding of the patient-specific risks associated with unplanned reoperation after colorectal cancer surgery and contribute to the clinical decision-making process.

Recent studies have identified male gender as an independent risk factor for unplanned reoperation [32,33]. This correlation is likely attributable to male physiology, lifestyle habits, and adherence to postoperative rehabilitation protocols. Li et al [34] also highlighted age as a determinant, positing that elderly patients are at an elevated risk of undergoing unplanned reoperations, a conclusion that aligns with our observations. While not directly causing complications, the presence of comorbidities significantly influences surgical outcomes. Therefore, comprehensive preoperative assessment and management of comorbid conditions are imperative to mitigate the likelihood of reoperation [35].

Numerous studies have substantiated the association between preoperative hypoproteinemia and the risk of unplanned reoperation. Saadat et al [36] recognized preoperative hypoalbuminemia as an independent risk factor in patients with rectal cancer, a finding corroborated by Michaels et al [37], who linked malnutrition to an increased risk of unplanned reoperation. Our study further confirms that patients with diminished preoperative albumin levels are at a heightened risk for such interventions.
The PNI is a crucial marker for evaluating a patient's preoperative nutritional and immunological status. Lower PNI values often indicate suboptimal nutritional health, which can potentially compromise wound healing through impaired collagen synthesis and fibroblast proliferation [38,39]. Improving patients' nutrition by enhancing albumin concentrations and optimizing PNI scores may significantly curtail the risk of unplanned reoperations following rectal cancer surgeries. Moreover, a history of prior abdominal surgeries is an independent risk factor for postoperative bowel obstruction following rectal resections [40]. This suggests that such historical surgical interventions may lead to extensive abdominal adhesions, thereby complicating subsequent procedures and elevating the risk of complications.

Screening patients with a high reoperation risk helps clinicians target perioperative observations and interventions, thus reducing unplanned reoperation and improving patient prognosis. In this study, we successfully predicted the incidence of unplanned reoperation through the constructed nomogram. The internal validation showed that the model was highly accurate and had good predictive efficacy.

However, there are some limitations to this study. First, the retrospective design of this study may lead to information and selection bias. Second, the lack of an external independent dataset for validation limits the generalizability and reproducibility of the model. Finally, the lack of long-term follow-up data in this study prevented assessment of the long-term outcomes of surgery and patient quality of life. In the future, we hope to use a prospective design to reduce bias, conduct external validation to enhance the generalizability of the model, and include long-term follow-up to assess the long-term impact of surgery. These improvements may allow more accurate prediction of postoperative risk in colorectal cancer and improve patient outcomes and quality of life.

CONCLUSION

This study successfully established and validated a postoperative unplanned reoperation risk model for colorectal cancer. Through comprehensive analysis, we accurately identified the independent risk factors affecting the risk of unplanned reoperation: age, gender, history of hypertension, history of laparotomy, hypoproteinemia, and PNI. The application of the model in clinical practice can help to more accurately assess the postoperative risk of patients, thus optimizing treatment decisions, reducing the occurrence of unplanned reoperation, and improving patient prognosis and quality of life.

Figure 1: Sample screening flow chart. Figure 2: Comparative analysis of model performance and complexity across feature selection and regularization techniques. A: Training performance vs model complexity; B: Error rate vs number of features; C: LASSO model selection via cross-validation. Figure 3: Signature variables of unplanned reoperation screened by machine learning. A: Extreme gradient boosting; B: Support vector machine; C: LASSO; D: Venn plot of feature variables common to all three learning models. TNM: Tumor-node-metastasis; BMI: Body mass index; ASA: American Society of Anesthesiologists; PNI: Prognostic nutritional index; SVM: Support vector machine. Figure 4: Nomogram of postoperative unplanned reoperation in colorectal cancer. PNI: Prognostic nutritional index.
Figure 5: Evaluation and validation of the nomogram. A: Receiver operating characteristic curve of postoperative unplanned reoperation in colorectal cancer; B: Calibration curve; C: Decision curve analysis curve. Figure 6: Calculation of risk scores for training-group and validation-group patients. A: Training group; B: Validation group. aP < 0.001. Figure 7: Receiver operating characteristic curves of training-group and validation-group patient risk scores in predicting patient reoperation. A: Training group; B: Validation group. Figure 8: Clinical validation of the predictive model. The green dashed line marks the patient's total score and incidence probability, and the light orange arrows mark the patient's risk factors. PNI: Prognostic nutritional index.
A New Decentralized Control Strategy of Microgrids in the Internet of Energy Paradigm

The Energy Internet paradigm is the evolution of the Internet of Things concept in the power system. Microgrids (MGs), as the essential elements in an Energy Internet, are expected to be controlled in a cooperative and flexible manner. This paper proposes a novel decentralized robust control strategy for multi-agent system (MAS)-governed MGs in the future Energy Internet. The proposed controller is based on a consensus algorithm applied to the connected distributed generators (DGs) in the MGs in the Energy Internet paradigm. The proposed controller's objectives are frequency/voltage regulation and proportional reactive/active power-sharing for the hybrid-DG-connected MGs. A proposed two-level communication system is implemented to explain the data exchange between the MG system and the cloud server. The local communication level utilizes the transmission control protocol (TCP)/internet protocol (IP), and the message queuing telemetry transport (MQTT) protocol is used for the global communication level. The proposed control strategy has been verified using a hypothetical MG with hybrid DGs, such as photovoltaics and wind turbines, in the MATLAB Simulink environment. Several scenarios based on the system load types are implemented using residential buildings and small commercial outlets. The simulation results have verified the feasibility and effectiveness of the introduced strategy under the MGs' various operating conditions.

Introduction

Many studies have reported the use of a microgrid. A microgrid usually consists of distributed generators (DGs), loads and energy storage systems. The DGs are generally connected to the microgrids through power electronic devices and can be regulated using hierarchical controllers to fulfill different objectives, such as frequency regulation and active power-sharing [1]. The Internet of Things (IoT) refers to a paradigm which connects various digital, real and virtual devices (via information networks) to smart environments. It is applicable in many domains, such as transportation, energy and cities. The Energy Internet is regarded as a revolutionary network of smart grids. It is seen as a general IoT application in the energy and power sectors. The Energy Internet consists of different techniques and components, which can be summarized into three categories: (i) power systems, (ii) communication systems and (iii) control algorithms. In one study, the researchers stated that the Energy Internet's cross-disciplinary nature presents several opportunities and challenges, which have to be investigated further and validated [2]. It was noted that MGs act as primary building blocks in an Energy Internet, since they can be operated in both grid-connected and islanded modes [3]. A droop-based primary control can be used for autonomous power-sharing among all connected DGs. The islanded MGs' secondary control allows voltage/frequency restoration while maintaining precise power-sharing among the connected DGs [4]. Furthermore, the tertiary control helps in the optimal operation of the MGs [5,6]. In the hierarchical control scheme, tertiary control helps determine the optimal dispatch values, which are based on renewable and load forecasting. Within the dispatch intervals, both the primary and the secondary controls operate to share the actual power deviation from the dispatch values.
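As a minimal illustration of the droop-based primary control mentioned above (a sketch with hypothetical coefficients, not the paper's tuned controller), each DG lowers its frequency and voltage set-points in proportion to its measured active and reactive power output:

```python
import math

def droop_setpoints(P_w, Q_var, w_n=2 * math.pi * 50, V_n=230.0,
                    m_p=1e-5, n_q=1e-4):
    """Conventional P-f / Q-V droop of an inverter-based DG:
    w = w_n - m_p * P,  V = V_n - n_q * Q.
    The droop coefficients m_p and n_q are chosen from the DG power ratings;
    the values here are placeholders."""
    w = w_n - m_p * P_w      # angular frequency set-point (rad/s)
    V = V_n - n_q * Q_var    # voltage magnitude set-point (V)
    return w, V

# Two DGs with equal droop gains settle at a common frequency in steady state
print(droop_setpoints(10e3, 2e3))   # 10 kW, 2 kvar -> (~314.06 rad/s, 229.8 V)
```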
The distributed consensus algorithm-based secondary control and the distributed optimization algorithm-based tertiary control have garnered a lot of research attention owing to their increased flexibility and resilience compared to centralized control [7,8]. Furthermore, the implementation of a distributed algorithm depends on a Multi-Agent System (MAS), wherein multiple subsystems/agents interact with one another with the help of sparse communication networks [9]. To the best of the authors' knowledge, achieving reactive and active power-sharing together with voltage and frequency regulation, while preserving the privacy of local information, is still an open question. To this end, this paper presents a distributed privacy-preserving consensus (PPC)-based method to achieve reactive and active power-sharing as well as voltage and frequency regulation in microgrids. First, the original control problem is transformed into an equivalent active power reference generation problem, which can be solved by obtaining the global active power utilization level. Further, a distributed PPC algorithm is proposed to acquire this global variable. In addition, this paper targets potential solutions for the following three scenarios: (i) The distributed controllers may neither be located at the same location as the DGs nor have a proprietary communication network; remote control of MGs via the Internet, taking communication latency into consideration, is required. (ii) For MGs governed by MASs, each agent or sub-MAS can in practice be owned by different stakeholders who could cooperate or work independently; a flexible control framework with plug-and-play capability is needed. (iii) With the advancements in IoT and renewable technology, the number of controllable units in MGs is dramatically increasing; the scalability of any distributed control framework to withstand increasing numbers of DGs is a problem worth exploring.

The rest of this paper is organized as follows. Section 1.1 introduces a description of the related works and Section 1.2 the paper contribution. Section 2 presents the proposed system description. Section 3 introduces the proposed hierarchical control: Section 3.1 presents the problem formulation, Section 3.2 introduces the primary control of inverter-based distributed generators, Section 3.3 describes the MASs' communication networks, and Section 3.4 presents the proposed secondary distributed controller. Section 4 introduces the proposed Internet of Energy communication platform, Section 5 presents the result analysis and discussion of the proposed method, and Section 6 presents the access to the internet web page. Finally, Section 7 concludes the paper.

Related Works

Traditionally, active power-sharing is achieved by droop control. A centralized controller is then utilized to compensate for the frequency deviations caused by droop control [10,11]. However, the centralized control structure lacks flexibility and is susceptible to a single point of failure. Therefore, distributed control algorithms have been reported in the literature [12]. With the information shared among the distributed controllers through a sparse communication network, both active power-sharing and frequency regulation can be attained [13]. However, the DGs' sensitive local data, such as power outputs, power capacities and utilization levels, are directly transmitted to their neighbors without privacy protection. In [14], a centralized, coordinated control was proposed to equalize the state of charge, even for different distributed energy storage systems.
However, a secure cloud-based platform for multi-agents was not investigated. A coordinated strategy for state-of-charge (SOC) balancing in AC microgrids was proposed in [15], combining communication technology and a hierarchical control structure; however, with the proposed control method the invalidation of otherwise intact high-level control functions is inevitable. In [16], the authors proposed an efficient distributed control strategy for the synchronization of several distributed generators in an islanded microgrid. A secondary control technique was developed to remove frequency deviations and ensure finite-time, efficient power sharing. The proposed finite-time controller provides frequency control and active power sharing within a limited time frame, which allows a decoupled design for the voltage control and a different time frame for reactive power sharing. However, the authors do not consider the graph network for data and information transfer between the MG-connected agents. In [17], the authors suggested a distributed iterative learning framework to address the DC microgrid's current/voltage sharing problem. The optimal control method, further determined by using an iterative value algorithm, was derived from game theory. An adaptive dynamic programming architecture and algorithm were developed to share current while simultaneously regulating the DC bus voltage to its rated value. However, active and reactive power-sharing was not investigated. In [18], the researchers analyzed an islanded MG consisting of parallel-connected voltage-source inverters. In each inverter, the primary control integrated internal voltage and current loops with PR controllers, virtual impedance and external power controllers based on voltage and frequency droops. The investigators implemented a secondary-control frequency restoration function, which helps to perform the consensus algorithm that included frequency control and a single communication network delay. However, a secure cloud-based platform for multi-agents was not investigated. In [19], the authors proposed a distributed multi-agent finite-time control strategy with delays for the balance of charge and the restoration of the voltage in a battery-deployed DC microgrid. The delays can be different and theoretically unbounded for each battery device. The feedback linearization approach was employed to transform the charging-status and voltage-recovery problems, with input time delays, into double-integrator and single-integrator systems, respectively. However, distributed control for MAS-governed MGs in the Energy Internet was not investigated. In [20], the authors created a hybrid event-driven control system based on a multi-agent system that uses online renewable energy supplies to meet load and protection demands. However, active and reactive power-sharing was not investigated. In [21], the authors suggested a new control method for a voltage/frequency restoration approach based on the consensus algorithm, and the proposed method was implemented in islanded microgrid systems (MGs). However, a secure cloud-based platform for multi-agents was not investigated. The authors of [22] proposed a distributed method for the coordination control of hybrid microgrids. The proposed method regulates accurate DC current and reactive power shares between distributed microgrid generators, maintains power sharing between the two microgrids, and restores the DC voltage and the AC frequency to their rated values.
However, the authors do not consider the graph network for data and information transfer between the MG-connected agents. In [23], the authors proposed a hierarchical and distributed cooperative control strategy for an AC microgrid cluster, including distributed-generation-layer, microgrid-layer and cluster-layer controls. The distributed-generation-layer control regulates each distributed unit's current/voltage locally. The microgrid-layer control for each microgrid is performed to positively manage the distributed generating units via several small communication networks. The cluster-layer control coordinates the microgrids on the basis of a more advanced peer-to-peer communication interface between microgrid agents. However, distributed control for MAS-governed MGs in the Energy Internet was not investigated. In [24], the researchers proposed a multi-agent, multi-layer architecture for acquiring peer-to-peer control of the MGs. Here, the control framework was entirely distributed, and it contained three control layers operated in every MG. For the primary control, the researchers adopted a droop control for every MG agent to carry out localized power-sharing. For the secondary control, the researchers proposed a distributed consensus that helped in frequency/voltage restoration and arbitrary power-sharing among the microgrids. However, a secure cloud-based platform for multi-agents was not investigated.

From the literature, two essential research gaps have been identified. First, the simultaneous regulation of voltage, frequency, active power and reactive power-sharing has not been investigated. Second, distributed control schemes for MAS-governed MGs in the Energy Internet have not been studied. This motivates us to provide a new methodology that enables the group plug-and-play feature, such that MGs with multiple MASs owned by different stakeholders can be flexibly controlled.

Paper Contribution

In this paper, the researchers developed a novel decentralized power management and control strategy for hybrid microgrids in the Energy Internet paradigm. It helps in remotely controlling the islanded MGs in the Energy Internet. The implementation and control architecture allow the MAS agents to control all MGs through the cloud services. Furthermore, the MGs/DGs ownership can be altered by denying or allowing the agents access to the cloud data. This paper contributes to the literature in the following manner:

• Firstly, the researchers investigated the MAS-controlled MGs in the Energy Internet, which has not been reported in the past.
• Secondly, the researchers proposed a distributed secondary control of the MGs, which enables the group plug-and-play feature after considering all interactions between and among the multiple, differently owned MASs.
• Thirdly, they implemented a framework for the proposed control technique using MASs and cloud servers.
• Furthermore, we proposed an IoT-based communication protocol, which includes specifications like MQTT. This improves system flexibility. The proposed system offers analytics and business intelligence (BI), which allow the researchers to gain insights from the collected data by visualizing dashboards and reports. Additionally, the use of big-data-based storage technologies enables the system's scalability at the national level. This provides energy-efficiency strategies for the household owners and the utility companies.
• We implemented a hierarchical two-layer communication architecture based on the MQTT protocol and the cloud-based server ThingSpeak. This helps customers realize the global and local communication necessary for the neighborhood appliance controllers.

Proposed System Description

Here, the researchers consider that the DGs are equipped with communication and control agents in the Internet of Energy realm, as described in Figure 1. The physical components of a general microgrid include inverter-interfaced distributed generators (e.g., photovoltaics, wind turbines and energy storage systems), dynamic and static loads, and diesel generators [25,26]. A framework controls the DGs in a microgrid, wherein one MAS agent manages every DG. The MAS agents communicate over a Local Area Network (LAN) and can access the Internet for remotely controlling the microgrid via the cloud servers. In the Energy Internet, every distributed generator/microgrid is managed by various stakeholders, and their controllers on the MAS agents differ from the MG components. It is expected that the number of distributed generator and MG agents can change online. Hence, a remote, flexible and distributed control and implementation framework is necessary. Figure 1 presents the structure of the proposed system.

The smart grid needs an effective measuring and communication system to continuously track the power and cost profile and to regularly quantify power losses. There are several stages of data processing. This work contains measurement units (MUs) for every distribution network bus; each MU is modeled in MATLAB. Power and cost information is sent to the control center regularly at a fixed time interval. The control center is designed as a virtual data management and analysis platform. One communication approach related to the proposed device topology is considered: a cloud approach in which every MU connected to the corresponding feeder bus sends its measured data directly to the cloud, as illustrated in Figure 1. The data transfer between the MATLAB software package and the open-source IoT framework ThingSpeak is used to model the proposed communication architectures. ThingSpeak was chosen for the simulation of real-time cloud communication due to the following benefits [27]:

1. Data aggregation, tracking and analysis on the ThingSpeak cloud IoT platform. In the smart grid model, the power profile is monitored on multiple ThingSpeak channels in real time and depicted graphically.
2. Security: username and password allow user authentication, while each channel has its own ID and access settings (visibility to other users). Each channel carries two keys for the application programming interface (API), a randomly generated read key and a write key. These keys allow storing or retrieving information from each channel over the Internet or LAN.
3. It facilitates a two-way flow of data between the user and the virtual device and allows data exchange and remote control in real time. The MATLAB Desktop Toolbox provides communication between the simulated feeder model and the ThingSpeak IoT platform.
4. A communication network enabling data transmission over the Internet between MATLAB and ThingSpeak.
5. It allows importing, exporting, analyzing and viewing data on multiple platforms and their fields simultaneously.
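To make the MU-to-cloud data path concrete, the following minimal Python sketch shows how a measurement unit could push a power sample to a ThingSpeak channel and read it back over ThingSpeak's public REST API. The channel ID, API keys and field assignment are hypothetical placeholders; in the paper itself this exchange is performed through the MATLAB Desktop Toolbox.

```python
import requests

WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # hypothetical write key of the channel
READ_API_KEY = "YYYYYYYYYYYYYYYY"   # hypothetical read key
CHANNEL_ID = 123456                  # hypothetical channel ID

def publish_measurement(power_kw: float, voltage_v: float) -> int:
    """Write one MU sample to ThingSpeak; returns the new entry ID (0 on failure)."""
    resp = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": WRITE_API_KEY, "field1": power_kw, "field2": voltage_v},
        timeout=10,
    )
    return int(resp.text)

def read_last_samples(n: int = 10) -> list:
    """Fetch the last n samples of field1 (power) from the channel."""
    resp = requests.get(
        f"https://api.thingspeak.com/channels/{CHANNEL_ID}/fields/1.json",
        params={"api_key": READ_API_KEY, "results": n},
        timeout=10,
    )
    return resp.json()["feeds"]

if __name__ == "__main__":
    entry = publish_measurement(power_kw=3.2, voltage_v=230.1)
    print("stored as entry", entry)
    print(read_last_samples(5))
```

Note that public ThingSpeak accounts rate-limit channel updates (on the order of one update every 15 s at the time of writing), which is compatible with the fixed reporting interval of the MUs.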
Problem Formulation

This paper considers an MG with N controllable distributed generators (indexed as i = 1, 2, ..., N). The MG's electrical network is represented by a weighted graph $T = (V_T, E_T)$, wherein the nodes $V_T = \{v_1, v_2, \ldots, v_N\}$ represent the buses (DGs) and the edges $E_T \subseteq V_T \times V_T$ represent line connections [28].

Primary Control of Inverter-Based Distributed Generators

The basic block diagram of the voltage-controlled, inverter-based distributed generator is shown in Figure 3. The microgrid consists of many such distributed generator units, synchronized to maintain a common (generally reference) voltage $V_{ref}$ and frequency $\omega_{ref}$. As stated in the introduction, the primary controller alone is not powerful enough to resolve the frequency and voltage deviations of the individual distributed generator units in islanded mode. In islanded operation, secondary control is therefore necessary to restore the frequency $\omega_i$ and voltage $v_{oi}$ of each distributed generator unit to the nominal level [29]. We consider an inverter-based distributed generator unit equipped with the primary controller derived from the full non-linear dynamic model [30]. Let $\delta_i$ denote the angle of the ith distributed generator reference frame with respect to a common reference frame; it satisfies the relation

$\dot{\delta}_i = \omega_i - \omega_{com}$, (1)

where $\omega_i$ is the angular frequency of the ith distributed generator and $\omega_{com}$ is that of the common reference frame. The frequency and voltage droop characteristics exerted by the primary controller are

$\omega_i = \omega_{ni} - m_{pi} P_i$, (2)
$v^*_{odi} = V_{ni} - n_{Qi} Q_i, \quad v^*_{oqi} = 0$, (3)

where $m_{pi}$ and $n_{Qi}$ are droop coefficients, whose selection depends on the active and reactive power ratings of each distributed generator; $P_i$ and $Q_i$ denote the active power (in kW) and reactive power (in kVAr) measured at the terminals of the ith distributed generator; and $V_{ni}$ and $\omega_{ni}$ act as the reference signals of the primary controller [30]. The power controller is characterized by

$\dot{P}_i = -\omega_{ci} P_i + \omega_{ci}\,(v_{odi} i_{odi} + v_{oqi} i_{oqi})$, (4)
$\dot{Q}_i = -\omega_{ci} Q_i + \omega_{ci}\,(v_{oqi} i_{odi} - v_{odi} i_{oqi})$, (5)

where $\omega_{ci}$ is the cutoff frequency of the low-pass filters, and $v_{odi}, v_{oqi}, i_{odi}, i_{oqi}$ are the direct and quadrature components of $v_{oi}$ and $i_{oi}$, respectively. The differential-algebraic equations of the voltage controller are

$\dot{\phi}_{di} = v^*_{odi} - v_{odi}, \quad \dot{\phi}_{qi} = v^*_{oqi} - v_{oqi}$, (6)

where $\phi_{di}$ and $\phi_{qi}$ are auxiliary state variables of the voltage PI controllers, and the nominal angular frequency is denoted by $\omega_b$. The current controller dynamics are obtained analogously as

$\dot{\gamma}_{di} = i^*_{ldi} - i_{ldi}, \quad \dot{\gamma}_{qi} = i^*_{lqi} - i_{lqi}$, (7)

where $\gamma_{di}$ and $\gamma_{qi}$ are auxiliary states of the current PI controllers. Finally, the dynamics of the output LC filter and output connector (8) follow from the circuit shown in Figure 3, where $v_{bdi}$ and $v_{bqi}$ are the direct and quadrature components of the microgrid bus voltage $v_b$. Equations (1)-(8) can now be expressed as a compact, input-affine state-space model of the ith distributed generator unit:

$\dot{x}_i = f_i(x_i) + k_i(x_i)\,D_i + g_i(x_i)\,u_i$, (9)

where the state vector is

$x_i = [\delta_i, P_i, Q_i, \phi_{di}, \phi_{qi}, \gamma_{di}, \gamma_{qi}, i_{ldi}, i_{lqi}, v_{odi}, v_{oqi}, i_{odi}, i_{oqi}]^T$,

and $D_i = [\omega_{com}, v_{bdi}, v_{bqi}]^T$ denotes the known disturbance input.
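As a numerical illustration of the droop laws (2)-(3), the sketch below evaluates the steady-state frequency and voltage offsets that droop-controlled DGs exhibit for a given power dispatch. The droop coefficients match the order of magnitude used later in the scenarios, while all power values are illustrative; the point is to make visible the deviation from nominal values that the secondary controller must remove.

```python
import numpy as np

# Illustrative droop setup for N inverter-based DGs.
N = 4
omega_n = 2 * np.pi * 50 * np.ones(N)   # primary frequency set points [rad/s]
V_n = 311.0 * np.ones(N)                # primary voltage set points [V, amplitude]
m_p = 1e-4 * np.ones(N)                 # active power droop coefficients
n_q = 1.3e-4 * np.ones(N)               # reactive power droop coefficients

P = np.array([800.0, 1200.0, 600.0, 1000.0])   # measured active power [W] (assumed)
Q = np.array([300.0, 500.0, 200.0, 400.0])     # measured reactive power [VAr] (assumed)

# Droop laws: omega_i = omega_ni - m_pi * P_i,  v_odi* = V_ni - n_Qi * Q_i
omega = omega_n - m_p * P
v_od_ref = V_n - n_q * Q

for i in range(N):
    print(f"DG{i+1}: f = {omega[i] / (2 * np.pi):.4f} Hz, "
          f"v_od* = {v_od_ref[i]:.4f} V")
```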
MASs Communication

The communication network of a microgrid with N agents is represented by a graph $G = (V_G, E_G)$. All nodes of the graph $G$ (agents) are in one-to-one correspondence with the nodes of the graph $T$ (DGs). Furthermore, the edges in $G$, which represent the communication links for data exchange, may differ from the electrical connections in $T$. The set of neighbors of the ith node of $G$ is $N_i = \{v_j : (v_j, v_i) \in E_G\}$. The adjacency matrix is $A = [a_{ij}] \in \mathbb{R}^{N \times N}$, where $a_{ij} = 1$ if agents i and j are connected by the edge $(v_i, v_j) \in E_G$ and $a_{ij} = 0$ otherwise. The Laplacian matrix is $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ with $l_{ii} = \sum_{j=1}^{N} a_{ij}$ and $l_{ij} = -a_{ij}$ for $i \neq j$. The pinning matrix is $G = \mathrm{diag}[g_i] \in \mathbb{R}^{N \times N}$, where $g_i = 1$ if the DG/agent can access the references $\omega_{ref}$ and $V_{ref}$, and $g_i = 0$ otherwise. Figure 4 presents an example of the data exchange between the controllers.

Proposed Secondary Distributed Controller

Based on the feedback linearization procedure, the secondary control of the droop-controlled distributed generators in the islanded microgrid is formulated as [2]

$\omega_{ni} = \int (u_i^{\omega} + m_{pi}\, u_i^{P})\, dt$, (16)
$V_{ni} = \int (u_i^{V} + n_{Qi}\, u_i^{Q})\, dt$. (17)

The problem of accurate power-sharing control can be formulated as requiring $m_{pi} P_i = m_{pj} P_j$ and $n_{Qi} Q_i = n_{Qj} Q_j$ for all i, j. As observed from Equations (16) and (17), the secondary control inputs $u_i^{\omega}$ and $u_i^{P}$ control $\omega_{nom}$, while the secondary control inputs $u_i^{V}$ and $u_i^{Q}$ control $V_{nom}$. Here, the researchers propose a control framework with good flexibility and scalability in the Internet of Energy. Different distributed secondary control techniques were investigated earlier [2], wherein the researchers used a popular linear control protocol for every distributed generator. In a MAS with N agents, the control protocol can be described as

$u_i^{\omega} = -k_i^{\omega} \big[ \sum_{j \in N_i} a_{ij} (\omega_i - \omega_j) + g_i (\omega_i - \omega_{ref}) \big]$, (18)
$u_i^{V} = -k_i^{V} \big[ \sum_{j \in N_i} a_{ij} (v_{odi} - v_{odj}) + g_i (v_{odi} - V_{ref}) \big]$, (19)
$u_i^{P} = -k_i^{P} \sum_{j \in N_i} a_{ij} (p_i - p_j)$, (20)
$u_i^{Q} = -k_i^{Q} \sum_{j \in N_i} a_{ij} (q_i - q_j)$, (21)

where $i, j \in \{1, 2, \ldots, n\}$, $p_i = m_{Pi} P_i$ and $q_i = m_{Qi} Q_i$ for simplicity, and the control gains $k_i^{\omega}, k_i^{V}, k_i^{P}$ and $k_i^{Q}$ are all greater than zero. Equations (18)-(21) can be written compactly using $k^{\omega} = \mathrm{diag}(k_1^{\omega}, \ldots, k_n^{\omega})$, $k^{V} = \mathrm{diag}(k_1^{V}, \ldots, k_n^{V})$, $k^{P} = \mathrm{diag}(k_1^{P}, \ldots, k_n^{P})$ and $k^{Q} = \mathrm{diag}(k_1^{Q}, \ldots, k_n^{Q})$. All control inputs of the multi-agent system can then be given as [2]

$u = -K\,(L + G)\,(x - x_{ref})$, (26)

where $u$, $x$, $x_{ref}$, $K$, $L$ and $G$ are the corresponding stacked vectors and matrices.

The researchers also consider the case where the MASs manage large-scale MGs. The cluster/hierarchical consensus algorithm offers a control solution for large-scale multi-agent systems; this control algorithm is sufficient for the scalable and flexible control of numerous MASs when the inter- and intra-MAS interactions are considered. Without loss of generality, the representation is simplified by assuming that the number of agents in every MAS is the same; the proposed technique can also be applied in a heterogeneous setting. Finally, the researchers propose a feedback control protocol (28) in which the matrix $\Delta \in \mathbb{R}^{4n \times 4n}$ defines which agents exchange data with each other; for simplicity, this paper considers $\Delta = KG$. The Laplacian matrix $\bar{L} = [\bar{l}_{ij}] \in \mathbb{R}^{m \times m}$ describes the interactions among the groups, so that the compact form (29) couples the intra-MAS consensus dynamics (26) through $\bar{L}$. The schematic diagram of the proposed method for the multi-agent MG is shown in Figure 5, and the flowchart of the proposed controller is given in Figure 6.
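A minimal sketch of the frequency branch (18) of the consensus protocol is given below, under simplifying assumptions: first-order integrator dynamics for the post-droop frequencies, unit edge weights, a fixed ring communication graph and a single pinned agent. The gains, graph and time step are illustrative and are not taken from Table 1.

```python
import numpy as np

# Ring communication graph of 4 agents; agent 0 is pinned to the reference.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)      # adjacency a_ij
G = np.diag([1.0, 0.0, 0.0, 0.0])              # pinning matrix g_i
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

k_w = 5.0                                      # consensus gain (assumed)
omega_ref = 2 * np.pi * 50                     # reference frequency [rad/s]
omega = omega_ref + np.array([-0.8, 0.5, -0.3, 0.9])  # post-droop deviations

dt = 1e-3
for _ in range(20000):
    # u_i = -k_w * (sum_j a_ij (w_i - w_j) + g_i (w_i - w_ref)),  Eq. (18)
    u = -k_w * ((L + G) @ (omega - omega_ref))
    omega = omega + dt * u                     # simplified integrator dynamics

print("final frequency deviations [rad/s]:", omega - omega_ref)
```

With the pinned, connected graph, $L + G$ is positive definite, so all deviations decay to zero; the print statement confirms restoration to the reference frequency.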
Proposed Internet of Energy Communication Platform

The decentralized controller of a smart MG helps manage the system operating conditions under disturbances. Furthermore, IoT technology can be used for communication between the appliances in smart homes, central controllers and power management centers. The researchers propose an IoT platform for collecting data and for monitoring, managing and controlling the SMG. This platform includes and connects all appliances and energy resources. The primary layers of the IoT platform are the energy supply layer, the network layer, the energy management layer, the energy appliance layer, the control system layer and the IoT service layer.

MQTT Background

Message Queuing Telemetry Transport (MQTT) is a lightweight protocol that makes effective use of network bandwidth, with a fixed header of only 2 bytes. MQTT operates on top of TCP and ensures that all messages are delivered from agent to server. Three main roles are defined by the protocol: the MQTT broker, the MQTT publisher and the MQTT subscriber. Subscriber and publisher are only indirectly linked and do not need to share an IP address. The MQTT broker is a network gateway that filters, receives, prioritizes and distributes the publishers' messages to the potentially thousands of simultaneously connected MQTT subscribers. The broker also takes care of the client authorization and initialization necessary for communication. To publish information, MQTT publishers use custom topics addressed to their clients. The MQTT protocol does not use metadata marking; instead, MQTT topic management carries the metadata for a message payload, and meaningful attributes can be attached to the topic. An MQTT topic is a string with a multi-attribute, multi-level hierarchical structure; a forward slash separates the levels of the topic tree [31]. All topics can be inspected to derive routing data. Figure 7a presents the initialization of a connection by exchanging control packets between clients and broker; control packets such as CONNECT, CONNACK, PUBLISH, PUBACK, SUBSCRIBE and SUBACK carry specific instructions regarding the topic, the transmission and the payload Quality of Service (QoS). Figure 7b presents the components of MQTT communication.

Figure 8 presents an overview of the hierarchical smart-home platform with a cyber layer, a physical layer and a control layer. Two communication layers are included in the hybrid platform. In layer one (the local layer), the appliances in a smart building transmit MQTT messages to a Building MQTT Client (BMC), reporting events/measurements, and subscribe to the MQTT messages that the BMC publishes for protection/control purposes. Layer two (the global layer) represents the interaction between the cloud and the BMC with the help of HTTP GET/POST requests. In this architecture, every appliance is equipped with a Wi-Fi module connected to the local gateway and can thus periodically publish its values under a dedicated, pre-defined topic. The BMC subscribes to the different topics and posts the received values to the cloud channel. The cloud data can be accessed by the cloud MATLAB interface, which implements the designed appliance resource allocation algorithm. The results of the algorithm are then passed from the cloud to the intelligent BMC devices, which control the appliances. The proposed architecture is resilient to a communication failure in either layer (local or global): the BMC is designed to operate as a local (backup) controller for all devices in the building whenever a communication link fails or high network latency is observed. The results section highlights this function of the BMC.
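The local layer of this two-layer exchange can be prototyped with the widely used paho-mqtt client (1.x-style callbacks); the sketch below shows a Building MQTT Client (BMC) subscribing to appliance telemetry and forwarding it to a cloud channel, with a simple illustrative backup-control rule. The broker address, topic layout, threshold and forwarding endpoint are assumptions for illustration, not specifications from the paper.

```python
import json
import paho.mqtt.client as mqtt
import requests

BROKER = "localhost"                 # local Mosquitto broker (assumed)
WRITE_API_KEY = "XXXXXXXXXXXXXXXX"   # hypothetical ThingSpeak write key

def forward_to_cloud(power_w: float) -> None:
    """Global layer: push the local measurement to the cloud channel."""
    requests.get("https://api.thingspeak.com/update",
                 params={"api_key": WRITE_API_KEY, "field1": power_w},
                 timeout=10)

def on_connect(client, userdata, flags, rc):
    # Local layer: appliances publish under home/<appliance>/telemetry (assumed).
    client.subscribe("home/+/telemetry", qos=1)

def on_message(client, userdata, msg):
    sample = json.loads(msg.payload)
    print(f"{msg.topic}: {sample}")
    forward_to_cloud(sample["power_w"])
    # Backup-controller role: issue a local command when a limit is exceeded
    # (purely illustrative rule and threshold).
    if sample["power_w"] > 2000:
        client.publish("home/hvac/cmd", json.dumps({"state": "off"}), qos=1)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```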
Result Analysis and Discussion

Proposed Method

The proposed controller is tested with the microgrid model illustrated in Figure 1. The system and control parameters are listed in Table 1. Here, the researchers describe the simulated implementation of the distributed secondary controller on a multi-agent system platform, together with its connection to the cloud server and the LAN. Figure 3 presents the MAS platform structure. The multi-agent system was implemented in a MATLAB cluster connected to the LAN via a network switch and connected to the cloud server via the Internet. Local communication was carried out over the TCP/IP protocol, and the communication between the cloud server and the MAS also used TCP. Communication between the agents was realized in a client/server format with the help of ThingSpeak and can be configured for any network topology. In the ThingSpeak-based communication system, each agent acts as a server waiting for incoming messages; since it is also a client of the neighboring servers, it can dispatch messages to the corresponding peers.

This part discusses the effect of the microgrid communication system. In the presence of the communication system, the microgrids exchange information such as load consumption and power generation. A microgrid obtains the required energy from neighboring agents to regulate active power, reactive power, voltage and frequency; conversely, the communication system provides the information required for the microgrid to transfer energy. The experimental results obtained in MATLAB for the active power, reactive power, frequency and voltage of every DG are presented in Figures 9-16. The results shown in Figures 10, 12, 14 and 16 indicate that the real power, reactive power, frequency and voltage are restored to their reference values after the proposed control is applied. All distributed generators in the microgrid autonomously alter their power output to fulfill the load demands. The results for Scenarios I, II, III and IV indicate that the distributed MAS control of a remote microgrid via a cloud server is an effective technique. Table 2 presents the scheduling and operating activities of all the loads.

The proposed method was simulated in four different scenarios in the MATLAB environment to assess its performance. Scenario I is performed with eight inverters with the same droop coefficients, whereas different droop coefficients are applied in Scenarios II, III and IV. In all cases, the proposed approach is compared with another control method. Table 1 includes the device parameters for the simulations.

Scenario I

This scenario examines the capability of the proposed method after islanding takes place at t = 3 h. At 3 h, Homes 1 and 5 connect to the MG; at 6 h, Homes 2 and 8; at 9 h, Homes 3 and 6; and at 12 h, Homes 4 and 7. Homes 1 and 5 disconnect from the MG at 15 h, Homes 2 and 8 at 18 h, and Homes 3 and 6 at 21 h. In this scenario, $m_p = n_q = 1 \times 10^{-4}$. The results of Scenario I are presented in Figure 9. It is visible from Figure 10 that the proposed control method can regulate the frequency, active power ratio, reactive power and voltage following significant disturbances, such as load changes and reconfigurations of the microgrid structure. At the same time, accurate real power sharing is guaranteed in the steady state.
Figure 9a indicates the active power of the distributed generators using the droop control method in [32], Figure 9b the reactive power, Figure 9c the frequency of the loads and Figure 9d the voltage of the distributed generators, all using the droop control method in [32]. Correspondingly, Figure 10a indicates the active power of the distributed generators using the proposed control method, Figure 10b the reactive power, Figure 10c the frequency of the loads and Figure 10d the voltage of the distributed generators using the proposed control method.

Scenario II

In this scenario, the droop coefficients are not equal to each other: $m_p = 1 \times 10^{-4}$ and $n_q = 1.3 \times 10^{-4}$. The results of Scenario II are shown in Figures 11 and 12. Figure 11a indicates the active power of the distributed generators using the droop control method in [32], Figure 11b the reactive power, Figure 11c the frequency of the loads and Figure 11d the voltage of the distributed generators using the droop control method in [32]. Correspondingly, Figure 12a indicates the active power of the distributed generators using the proposed control method, Figure 12b the reactive power, Figure 12c the frequency of the loads and Figure 12d the voltage of the distributed generators using the proposed control method.

Scenario III

In this scenario, the droop coefficients are not equal to each other: $m_p = 1 \times 10^{-4}$ and $n_q = 1.6 \times 10^{-4}$. The results of Scenario III are shown in Figures 13 and 14. Figure 13a indicates the active power of the distributed generators using the droop control method in [32], Figure 13b the reactive power, Figure 13c the frequency of the loads and Figure 13d the voltage of the distributed generators using the droop control method in [32]. Correspondingly, Figure 14a indicates the active power of the distributed generators using the proposed control method, Figure 14b the reactive power, Figure 14c the frequency of the loads and Figure 14d the voltage of the distributed generators using the proposed control method.

Scenario IV

In this scenario, the droop coefficients are not equal to each other: $m_p = 1 \times 10^{-4}$ and $n_q = 2 \times 10^{-4}$. The results of Scenario IV are shown in Figures 15 and 16. Figure 15a indicates the active power of the distributed generators using the droop control method in [32], Figure 15b the reactive power, Figure 15c the frequency of the loads and Figure 15d the voltage of the distributed generators using the droop control method in [32].
Correspondingly, Figure 16a indicates the active power of the distributed generators using the proposed control method, Figure 16b the reactive power, Figure 16c the frequency of the loads and Figure 16d the voltage of the distributed generators using the proposed control method.

Access to the Internet Web Page

In this study, the researchers carried out a simulation test in which they described and discussed the results of a decentralized power management and control strategy for the microgrid in the Energy Internet paradigm, implemented with the proposed algorithm over the cloud platform to regulate the appliances in a smart home. As noted in the software communication and architecture interface, a MATLAB program is used for the Main Command and Control Unit (MCCU), which organizes all ThingSpeak platforms. The MQTT broker (Mosquitto) bridges the gap between the home appliance subscribers and the MCCU publishers. For regulating the home appliances through the MQTT gateway, the researchers used custom code derived from the proposed MATLAB-based algorithm. Here, the researchers designed a ThingSpeak dashboard with a simple and effective user interface (UI), which allows homeowners to access and interact with the home energy management service over the cloud system. Figure 17 presents an internet web page that can be opened in any browser after the user enters the uniform resource locator (URL) and provides a username and password.

Scenario I

In this scenario, the droop coefficients are $m_p = n_q = 1 \times 10^{-4}$. Scenario I is presented in Figure 18 using the droop control method in [32] and using the proposed control method. Figure 18a indicates the active power of the distributed generators, Figure 18b the reactive power of the distributed generators, Figure 18c the frequency of the loads and Figure 18d the voltage of the distributed generators.

Scenario II

In this scenario, the droop coefficients are $m_p = 1 \times 10^{-4}$ and $n_q = 1.3 \times 10^{-4}$. Scenario II is presented in Figure 19 using the droop control method in [32] and using the proposed control method. Figure 19a indicates the active power of the distributed generators, Figure 19b the reactive power of the distributed generators, Figure 19c the frequency of the loads and Figure 19d the voltage of the distributed generators.

Scenario III

In this scenario, the droop coefficients are $m_p = 1 \times 10^{-4}$ and $n_q = 1.6 \times 10^{-4}$. The results of Scenario III are presented in Figure 20 using the droop control method and the proposed control method. Figure 20a indicates the active power of the distributed generators, Figure 20b the reactive power of the distributed generators, Figure 20c the frequency of the loads and Figure 20d the voltage of the distributed generators.

Scenario IV

In this scenario, the droop coefficients are $m_p = 1 \times 10^{-4}$ and $n_q = 2 \times 10^{-4}$. The results of Scenario IV are presented in Figure 21 using the droop control method and the proposed control method. Figure 21a indicates the active power of the distributed generators, Figure 21b the reactive power of the distributed generators, Figure 21c the frequency of the loads and Figure 21d the voltage of the distributed generators.
Lastly, to show the effectiveness of the proposed cooperative control protocol, the proposed algorithm is compared with other reported techniques [21,32]; the results are presented in Figures 9-16. When the microgrid faces load changes, the controller performs well and the voltage and frequency waveforms are stably regulated to the nominal values. The main objective of the comparison between the proposed method and the methods in [21,32] is to prove the proposed controller's efficiency. The distributed control method in [21] fails to regulate the microgrid's active and reactive power, and the protocols proposed in [21] do not provide a web page to monitor the active power of the DGs, the reactive power of the DGs, the frequency of the loads and the voltages of the DGs. In addition, compared with the well-known conventional distributed method [32], as seen from Figures 9, 11, 13 and 15, the voltage and frequency waveforms become strongly distorted when the microgrid faces load changes. The conventional distributed method [32] therefore deteriorates the synchronization of voltage magnitude, frequency and real power ratio under system faults. Moreover, the control method in [32] fails to regulate the microgrid's active and reactive power, and its protocols do not provide a web page to monitor the active power of the DGs, the reactive power of the DGs, the frequency of the loads and the voltages of the DGs. Our proposed method therefore shows better, robust, resilient and desirable performance, even when the load changes are large. Thus, the capability of the proposed protocols to meet the requirements of voltage, frequency, active power and reactive power events is verified. In addition, our proposed protocols provide a web page to monitor the active power of the DGs, the reactive power of the DGs, the frequency of the loads and the voltages of the DGs.

Conclusions

In this study, the researchers proposed a novel distributed control framework for MGs controlled by multiple multi-agent systems. The proposed control law defines the data exchange within and among the MASs to enable flexible control of the MGs in the Energy Internet. The proposed control objectives are achieved, and stability is evaluated considering network latency. The proposed controller depends on information transfer between the connected agents in the MG system. The errors in the frequency and voltage waveforms are compensated by applying the proposed consensus controller; in addition, the active and reactive power is optimally shared among the DGs. The proposed controller improves upon the primary droop control method, which cannot adjust the MG voltage and frequency to their nominal values and does not enhance power sharing among the DGs in the MG. A hypothetical multi-agent MG system is designed to prove the proposed controller's effectiveness in the MATLAB/Simulink environment in the presence of different MG scenarios. In addition, this study presents a hierarchical communication platform with a two-level structure suitable for a microgrid management system. The proposed platform uses Transmission Control Protocol/Internet Protocol (TCP/IP) for local microgrid data exchange and as a backup communication method among microgrids in case of a failure of the cloud-level communication. Message Queuing Telemetry Transport (MQTT) subscribing/publishing is adopted for cloud-level messaging, and HTTP over TCP/IP for interactions between the cloud server and the platform.
The cost analysis provided in the simulation results section shows the efficiency of the proposed distributed communication platform compared to a centralized operation of the microgrid communications. We also compared our proposed techniques with several existing methods, and the simulation results prove the efficacy of the methodologies presented in this paper. The obtained results show that the proposed controller regulates the frequency and voltage in the MG under different faults; in addition, active and reactive power is equally shared between the DGs. Finally, for accessing the data related to the power consumption of the individual loads, the researchers developed a reliable web portal integrated with the IoT environment. They provided a Graphical User Interface (GUI) that plots the power consumption so that the daily power usage of every appliance can be determined, and a database for efficient energy management that can be used for analyzing the data. The proposed method regulates the voltage and frequency well within the operational requirements. Furthermore, the flexibility and scalability of the approach are demonstrated in an MG with eight DGs.
Germline polymorphisms of circadian genes and gastric cancer predisposition. Dear Editor, Gastric cancer represents a remarkable disease burden worldwide, ranking among the top five tumor types in incidence and mortality [1]. Germline DNA variation has been extensively investigated in terms of predisposition to sporadic gastric cancer, which represents more than 90% of all cases [2]. Currently available evidence shows that the fraction of disease burden that can be attributed to known risk polymorphisms is small (< 20%) [2]. Single germline variations of circadian genes (also called clock genes) have been associated with predisposition to different tumor types [3]. The circadian clock is a time-tracking rhythmic biological system with a periodicity of about 24 hours that enables organisms to anticipate environmental changes and to modify their behavior and physiological functions in the most efficient way. Circadian rhythms are controlled by proteins encoded by circadian genes, which have been discovered in all studied species. Remarkably, the disruption of these rhythms has been linked with the risk of different diseases, including cancer. With regard to the latter, a growing wealth of evidence supports the potential tumor suppressor role of the biological clock [3,4]. As the role of circadian gene germline variants has never been explored in the field of gastric cancer susceptibility, with the present work we intended to test the hypothesis that specific single nucleotide polymorphisms (SNPs) of circadian genes, such as CLOCK, NPAS2, PER1, PER2, RORA, and TIMELESS, could significantly increase or decrease the predisposition to develop gastric cancer. We considered the 10 SNPs of the above listed 6 circadian genes that are known to be functional or associated with cancer risk or prognosis. The main features of the SNPs are described in our previous study [5]. We conducted a retrospective study based on a total of 1065 subjects comprising 455 cases of gastric cancer and 610 healthy controls, all of European ancestry. The median age of onset for gastric cancer was 67 years (range, 27-90 years). Among the gastric cancer patients, 249 (54.7%) were males and 206 (45.3%) were females. The median survival was 30.0 months, ranging from 1.0 to 293.0 months. These datasets were already employed in our previous studies [5,6], and the detailed characteristics of the subjects are summarized in Table 1 and Supplementary Table 1. Genotyping was performed by real-time PCR. Multivariate logistic regression analysis was performed to assess the associations employing four models of inheritance: allelic, recessive, dominant, and co-dominant. The detailed methods are available in the Supplementary information. All the preselected SNPs were successfully genotyped, and no departures from Hardy-Weinberg equilibrium were observed (Supplementary Table 2). The average genotyping success rate of the selected SNPs in all participants was 98.9% (range, 96.0%-100%). The mean statistical power for this analysis was 61%; the detailed statistical power for each SNP is reported in Supplementary Table 3. Associations between the selected circadian gene genetic variations and gastric cancer predisposition were tested assuming 4 models of inheritance. The results are summarized in Table 2. We used odds ratios (ORs) and their corresponding 95% confidence intervals (CIs) to measure the strength of association between each polymorphism and gastric cancer susceptibility.
Overall, the genetic variants significantly associated with gastric cancer predisposition were: NPAS2 rs895520, PER1 rs3027178, PER2 rs934945 and RORA rs339972. In particular, the present analysis suggested that the NPAS2 rs895520 minor allele (A) was associated with an increased susceptibility to gastric cancer of 24% under an additive (per allele OR, 1.24; 95% CI, 1.01-1.52; P = 0.036), recessive (OR, 1.56; 95% CI, 1.09-2.24; P = 0.016) and co-dominant (OR, 1.62; 95% CI, 1.07-2.44; P = 0.022) model of inheritance. PER1 rs3027178, a genetic variant with a synonymous functional effect, was associated with a reduced predisposition (per allele OR, 0.80; 95% CI, 0.64-0.99; P = 0.037). PER2 rs934945 (C > T) is located on the last exon of PER2. To the best of our knowledge, this is the first scientific work investigating the relations between circadian gene DNA variation and susceptibility to gastric cancer. Therefore, we could not know a priori the genotype-phenotype relation of these SNPs; as a consequence, we tested 4 genetic models of inheritance: allelic, recessive, dominant and co-dominant. When testing the allelic/recessive/dominant models, for those polymorphisms which were significantly associated with the phenotype in more than one model, the best-fitting model was considered the one with the lowest P value. Our results indicated that for NPAS2 rs895520 the best-fitting model for the association with gastric cancer was the recessive model of inheritance, while for RORA rs339972 it was the allelic model. Interestingly, we found similar results regarding NPAS2 rs895520 in our previous work on associations of circadian gene polymorphisms with soft tissue sarcoma susceptibility [5], while there was no difference in terms of P value for RORA rs339972 between the allelic and the dominant model; nevertheless, both were associated with sarcoma susceptibility, as was the case for gastric cancer. Since the maximum power is reached when the 'true' mode of inheritance of the disease susceptibility loci and the genetic model used in the analysis are concordant [7], it is worth determining the genotype-phenotype relation for each SNP. We tested the co-dominant model as well, for two reasons: its robustness [7] and its application in testing circadian gene SNP associations with different neoplasms [8,9]. Employing the co-dominant model, PER2 rs934945 heterozygotes had a 31% decreased predisposition compared to homozygotes for the common allele (C). Karantanos et al. [9] found no association of PER2 rs934945 with colorectal cancer with either the allelic or the co-dominant model. Dai and colleagues [8] found no association of PER2 rs934945 with breast cancer in the overall analysis but found a significant association in a subgroup analysis: homozygotes for the minor allele (T) had an increased risk of developing breast cancer only in a specific CLOCK rs3805151 background (homozygosity for the common allele C). This is in line with the widely shared view that genetic variations have different effects in different neoplasms. In particular, this was recently highlighted for prognosis in an interesting work performed by Chang and Lai [4]. They performed a comprehensive study of circadian genes in 21 cancer types that considered genomic, transcriptomic and phenotypic (clinical prognosis) data, and they found that circadian genes are substantially altered by somatically acquired deletions and amplifications.
Core circadian genes, PERs, CRY2, CLOCK, NR1D2, RORA and RORB, exhibited global patterns of somatic loss and downregulation across multiple tumor types, and loss of function of these genes resulted in increased death risks for patients. However, the tumor-suppressive qualities appeared to be cancer-type-specific. An opposite trend was obtained for bladder and stomach cancers, as a "low" loss of function of putative tumor-suppressive circadian genes was found to be associated with adverse survival outcomes [4]. In our previous study concerning the associations of gastric cancer prognosis with germline variation of circadian genes [6], we had a similar approach. We found that germline polymorphisms in the circadian pathway were associated with the survival of patients with gastric cancer, independently of established prognostic factors such as disease stage and patient age at diagnosis. In particular, combined information derived from two SNPs (rs3749474 and rs1801260, two variants of the CLOCK gene 3'-UTR) allowed us to classify patients into high or low CLOCK transcription, with the latter showing a significantly worse prognosis (about 70% increased risk of death). This apparent discrepancy highlights that the relation between gastric cancer prognosis and circadian genes needs further in-depth analysis. Moreover, we could not replicate the data reported by Qu and colleagues [10] on the association between PER variants and prognosis. Different ethnicity (European vs. Asian), sample size (the Asian series was more than two-fold larger) and disease stage composition (only our study included patients with advanced and metastatic gastric cancer) might partly explain this discrepancy. Nevertheless, differences were found by two groups studying PER2 expression as a prognostic factor for gastric cancer in patients of Asian ethnicity: Zhao and colleagues [11] found that PER2 expression was downregulated in most gastric cancer tissues, while Hu and colleagues [12] found that it was upregulated. To our knowledge, this is the first analysis investigating the hypothesis of an association between germline genetic variations of the circadian pathway and gastric cancer susceptibility. The power of our study is not optimal, and the present study should be considered a pilot work that warrants further validation in different datasets. Nevertheless, our results showed that four circadian clock variants were clinically and statistically associated with gastric cancer predisposition.

Acknowledgments: [...] (Italy) for organizing the sampling activity and Dr. Enrico Lion (Padova University Hospital, Padua, Italy) for organizing the informed consent retrieval.
FAST PROBABILISTIC FUSION OF 3D POINT CLOUDS VIA OCCUPANCY GRIDS FOR SCENE CLASSIFICATION

High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS), the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we therefore propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per-point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.

INTRODUCTION

Scene classification is important for a wide range of applications and an open field in research concerning runtime, scalability and accuracy. Accurate 3D point clouds are essential for robust scene classification with state-of-the-art methods. It has been shown that 3D point clouds from laser sensors are suitable for this task (Schmidt et al., 2014). Unfortunately, their acquisition is expensive, as laser sensors are relatively heavy and have a high energy consumption.

The recent progress in image-based 3D reconstruction by Multi-View Stereo (MVS) methods allows for the generation of 3D point clouds from images, also in large numbers. High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) offer a cheap acquisition of images from novel viewpoints. Unfortunately, the generation of accurate 3D point clouds requires a high computational effort.

In particular, stereo methods like Semi-Global Matching (SGM) (Hirschmüller, 2008) can generate disparity maps also from very high resolution images. Recently, the fusion of large sets of disparity maps (billions of 3D points) into accurate 3D point clouds has been demonstrated (Kuhn et al., 2013, Fuhrmann and Goesele, 2011, Fuhrmann and Goesele, 2014). Unfortunately, the processing of such large point clouds on single PCs can take a couple of days (Ummenhofer and Brox, 2015). For practical applications such as scene classification, there is basically no need for a computationally complex fusion to obtain accurate 3D point clouds. We, thus, show that a fast fusion via occupancy grids essentially speeds up the runtime and offers similar quality for semantic classification when probabilities are considered.
Scene classification is an essential and intensively studied topic in photogrammetry, remote sensing and geospatial information science. Many approaches have been reported over the last decades. Sophisticated classification algorithms, e.g., support vector machines (SVM) and random forests (RF), data modeling methods, e.g., hierarchical models, and graphical models such as conditional random fields (CRF) are well studied. Overviews are given in (Schindler, 2012) and (Vosselman, 2013). (Guo et al., 2011) present an urban scene classification on airborne LiDAR and multispectral imagery, studying the relevance of different features of multi-source data. An RF classifier is employed for feature evaluation. (Niemeyer et al., 2013) propose a contextual classification of airborne LiDAR point clouds. An RF classifier is integrated into a CRF model and multi-scale features are employed.

Recent work includes (Schmidt et al., 2014), in which full waveform LiDAR is used to classify a mixed area of land and water bodies. Again, a framework combining RF and CRF is employed for classification and feature analysis. (Hoberg et al., 2015) present a multi-scale classification of satellite imagery based also on a CRF model and extend the latter to multi-temporal classification. Concerning the use of more detailed 3D geometry, (Zhang et al., 2014) presents roof type classification based on aerial LiDAR point clouds.

In this paper we present a robust and efficient analytical pipeline for automatic urban scene classification based on point clouds from disparity maps, which is adapted to utilize the additional probability information of the points to improve the results. The paper is organized as follows: In Section 2 we describe a pipeline for the fast generation of high resolution 3D point clouds. The fusion of point clouds and the derivation of per-point probabilities are given in Section 3. Section 4 examines the use of point cloud probabilities for urban scene classification. Experiments on a large dataset (see Figure 1) are presented in Section 5. Finally, Section 6 gives conclusions and an outlook on future work.

Figure 1. Village dataset: The four images on the left each show one of 296 36-megapixel input images acquired from an Unmanned Aerial Vehicle (UAV). In the middle, the noisy point cloud derived from disparity maps accumulating half a billion 3D points is given. On the right, the classification results for buildings (red), ground (gray), grass (blue) and trees (green) are presented. In spite of the huge amount of input data, our method allows for the classification within a couple of hours on a single PC.

GENERATION OF 3D POINT CLOUDS

In this paper, we focus on the fast generation of 3D point clouds from image sets which are suitable for semantic scene classification. We demonstrate that semantic classification of buildings, vegetation and ground for a complete village, captured by hundreds of high-resolution images leading to half a billion 3D points, is possible on a single PC within a couple of hours. Hence, especially the runtime of the processing pipeline, e.g., for the generation of the point cloud, is important.
The first step in a dense point cloud generation pipeline is image registration, which can be done fast even for thousands of wide-baseline high-resolution images (Mayer, 2015). The fast processing is possible as only a fraction of the image information is needed for the estimation of camera poses. Additionally, Graphics Processing Unit (GPU) implementations can speed up the processing (Wu et al., 2011, Wu, 2013). The next step is MVS. Disparity maps from pairs of the entire set can be generated in parallel using multi-core systems. Nonetheless, this task can be of high computational complexity. Especially SGM (Hirschmüller, 2008) has been found to successfully compute high-resolution disparity images in reasonable time while still retaining small details (Hirschmüller and Scharstein, 2009). Furthermore, for SGM, publicly available (Rothermel et al., 2012), fast GPU (Ernst and Hirschmüller, 2008) and Field Programmable Gate Array (FPGA) (Hirschmüller, 2011) implementations exist. The image set from Figure 1 with 296 36-megapixel images can be processed at quarter resolution in only two and a half hours on a single PC with the FPGA. An example disparity map of an image of this set is shown in Figure 2.

The final step for the generation of accurate point clouds is disparity map fusion. Even though scalable fusion methods have recently been presented (Fuhrmann and Goesele, 2011, Kuhn et al., 2013, Fuhrmann and Goesele, 2014, Kuhn et al., 2014, Ummenhofer and Brox, 2015), they are still not able to process large amounts of data, e.g., billions of 3D points, within one day on a single PC (Ummenhofer and Brox, 2015). To overcome the problem of costly MVS-based fusion of 3D point clouds, we leverage occupancy grids (Moravec and Elfes, 1985) for the fusion, arguing that the redundancy and the high image resolution of specific datasets are highly useful for applications like scene classification. Therefore, the fusion of 3D point clouds from disparity maps, including the derivation of probabilities and their use for scene classification, is the main focus of this paper.
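For orientation, the sketch below shows the standard back-projection by which a (rectified stereo) disparity map, together with the camera calibration, yields one 3D point per valid pixel via Z = fB/d. The focal length, baseline and principal point are illustrative values, not those of the dataset.

```python
import numpy as np

def disparity_to_points(disparity, f=1400.0, baseline=0.3, cx=640.0, cy=480.0):
    """Back-project a disparity map [px] into camera-frame 3D points.

    Assumes a rectified stereo pair with focal length f [px], baseline [m]
    and principal point (cx, cy); all values here are illustrative.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # unmatched pixels are 0
    z = np.where(valid, f * baseline / np.maximum(disparity, 1e-6), np.nan)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.dstack([x, y, z])[valid]         # (n, 3) array of 3D points

d = np.zeros((960, 1280))
d[400:500, 600:700] = 20.0                     # a synthetic 100 x 100 px patch
print(disparity_to_points(d).shape)            # -> (10000, 3)
```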
Occupancy Grids

Occupancy grids are especially important for real-time applications and, hence, popular in the robotics community. They were introduced by Moravec and Elfes (Moravec and Elfes, 1985) and consist of a regular decomposition of the environment into a grid. Within this representation, a probability that a cell is occupied is derived for the individual grid cells, depending on the number of measurements assigned to the cell. This is useful for the fusion of redundant and noisy measurements and for the classification of outliers, e.g., for disparity map fusion.

Redundant measurements assigned to the same cell are merged by means of probability theory. More precisely, a Binary Bayes Filter (BBF) is used for the derivation of the probability of a cell being occupied. Initially, an inlier probability p is defined for a measurement. The measurement, e.g., a 3D point derived from disparity and camera calibration, is transformed into the grid depending on its position. To the corresponding cell the probability p is assigned, which represents the probability of the voxel being occupied.

When multiple measurements are assigned to one voxel cell, e.g., redundant 3D points from multiple disparity maps, the BBF allows for the fusion of the probabilities. To this end, the so-called logprob $l$ is defined as $l = \ln\left(\frac{p}{1-p}\right)$. The fusion is conducted incrementally, assuming uncorrelated data. Initially, $l$ can be set to zero, corresponding to an occupation probability of 0.5. The logprob at time $t$ is defined as:

$l_t = l_{t-1} + \ln\left(\frac{p}{1-p}\right). \quad (1)$

The incremental formulation is a crucial advantage when fusing larger sets of disparity maps, as never more than one input point has to be considered. The overall logprob for $n$ measurements (input points) in one cell can be formulated as:

$l_n = n \cdot \ln\left(\frac{p}{1-p}\right) \quad (2)$

and rises continuously with the number of input points. In our case, a constant inlier probability $p$ derived from the disparity maps is assigned to all 3D points. Hence, $\frac{p}{1-p}$ is constant and it is sufficient to calculate it only once. After fusion of the $n$ measurements, the logprob can be transformed back to a final inlier probability as follows:

$p_n = 1 - \frac{1}{1 + \exp(l_n)}. \quad (3)$

Figure 3 demonstrates the fusion of one to three measurements considering Equations 1 to 3. For a detailed description of occupancy grids and the BBF see, e.g., the textbook of Thrun (Thrun et al., 2005).
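Since p is constant in our setting, the BBF of Equations (1)-(3) reduces to counting measurements per voxel. The following sketch implements this with a dictionary-based sparse voxel grid as a stand-in for the octree used in the paper; the voxel size and inlier probability correspond to the values used in the experiments (Section 5), everything else is illustrative.

```python
import math
from collections import defaultdict

VOXEL = 0.2            # decomposition size [m]
P_INLIER = 0.73        # constant per-measurement inlier probability (logprob ~1.0)
L_MEAS = math.log(P_INLIER / (1.0 - P_INLIER))   # logprob of one measurement

def voxel_index(x, y, z):
    """Map a 3D point to its voxel cell (sparse stand-in for an octree node)."""
    return (int(math.floor(x / VOXEL)),
            int(math.floor(y / VOXEL)),
            int(math.floor(z / VOXEL)))

def fuse(point_clouds):
    """Incrementally fuse point clouds: l_t = l_{t-1} + ln(p/(1-p)), Eq. (1)."""
    logprob = defaultdict(float)
    for cloud in point_clouds:          # one decomposed cloud per disparity map
        for (x, y, z) in cloud:
            logprob[voxel_index(x, y, z)] += L_MEAS
    # Back-transform, Eq. (3): p = 1 - 1/(1 + exp(l))
    return {v: 1.0 - 1.0 / (1.0 + math.exp(l)) for v, l in logprob.items()}

clouds = [[(0.05, 0.05, 0.00), (1.30, 0.20, 0.10)],
          [(0.07, 0.04, 0.01)],
          [(0.06, 0.06, 0.02)]]
for voxel, p in fuse(clouds).items():
    print(voxel, f"p = {p:.3f}")
```

For three measurements in one cell the sketch yields p of about 0.95, consistent with the one-to-three-measurement example of Figure 3.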
3D Fusion of Multiple Disparity Maps

By considering the relative camera poses and the calibration of the camera(s), a 3D point cloud can be derived from the disparity maps. Hence, the input of our proposed method is a set of point clouds corresponding to the individual disparity maps of the image set. For several applications it is useful to get rid of the high point density inherent in the disparity maps. This could be done by extracting disparity maps from downscaled images. Because of varying distances to the scene and the resulting irregular loss in quality, we instead present a reduction in 3D space. As it allows parallel processing, we initially decompose the dense 3D point clouds of all disparity maps separately to reduce the amount of data. This is our first step towards a fusion in 3D space via octrees, where the space is decomposed according to a given voxel size. For georeferenced data, the voxel size can be defined by the application and the necessary accuracy, e.g., 20 cm for scene classification. Figure 4 shows an input point cloud from the disparity map shown in Figure 2 and the decomposed point cloud.

Octrees are suitable for fast decomposition as they offer logarithmic access time. In our method, the octree root represents the entire input point cloud. For it, a bounding volume has to be defined, which in our case is the bounding box of the input point cloud. Hence, as a first step, the bounding box is calculated via the minimum and maximum x-, y- and z-values of the 3D point coordinates of the input point cloud. After defining the root node, all points are traced through the octree down to the level of the decomposition size. If multiple measurements are assigned to a single voxel of the octree, the 3D coordinates are averaged. We do not use a probabilistic fusion via occupancy grids at this point, because the 3D point positions from a single disparity map are highly correlated due to the regularization terms in SGM. Hence, the geometric probabilistic fusion is only conducted on point clouds from multiple disparity maps.

In addition to the geometric position, the color of a 3D point is essential for scene classification. To determine it, we combine the multiple RGB measurements from the images in the fusion process by means of the median. In particular, for each of the three color channels the median of all measurements from one image in one cell is calculated separately. The median, in contrast to the mean, preserves sharper borders between different object classes and is hence suitable for scene classification.

For the fusion of the 3D point clouds and the derivation of a point-wise probability, especially the merging of the individual (decomposed) point clouds is of interest, as we assume them to be uncorrelated. To this end, we transform the reduced point clouds derived from the disparity maps into occupancy grids. For fast access times, again, octrees are used, whose root size can easily be derived from the set of bounding boxes of the individual disparity maps. As in the decomposition, 3D coordinates in individual voxels are averaged, while the RGB color is fused via the median. Additionally, a probability is derived depending on the number of measurements.
The incremental definition of the logprob fusion in the BBF (cf. Equation 1) allows for sequential processing of the set of input point clouds. This is an important benefit, as never more than one input point has to be processed at a time, which guarantees high scalability even for large datasets. For all point clouds, the 3D points are traced down to the given octree size, which equals the size used in the individual disparity map decompositions. If the assigned voxel is not occupied, the defined inlier probability p is assigned to the voxel in its logprob representation l. In case the voxel is occupied, the logprobs are merged by the incremental sum (see Equation 1). After the octree has been built from the entire set of point clouds, the final point cloud with inlier probabilities, derived from the logprobs by Equation 3, is used for scene classification. Figure 5 demonstrates the probabilistic derivation with an example consisting of three input point clouds. The probability of 3D points in the voxels rises continuously with the number of measurements (see Figure 4).

In summary, the fusion allows for a probabilistic decomposition of redundant data from multiple disparity maps. In addition to the benefit of data reduction, the derived point-wise probabilities are an essential prerequisite for the derivation of a stable scene classification.
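A sketch of the per-disparity-map decomposition step is given below, using a flat voxel hash instead of an octree for brevity: 3D coordinates falling into the same cell are averaged and the RGB color is fused channel-wise by the median, as described above. The data layout and values are illustrative.

```python
import math
import statistics
from collections import defaultdict

VOXEL = 0.2  # decomposition size [m]

def decompose(points):
    """points: list of (x, y, z, r, g, b). Returns one fused point per voxel."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / VOXEL)) for c in p[:3])
        cells[key].append(p)
    fused = []
    for members in cells.values():
        # Average the 3D position of all measurements in the cell.
        mean_xyz = [sum(m[i] for m in members) / len(members) for i in range(3)]
        # Median per color channel keeps class borders sharper than the mean.
        med_rgb = [statistics.median(m[i] for m in members) for i in range(3, 6)]
        fused.append((*mean_xyz, *med_rgb))
    return fused

pts = [(0.05, 0.05, 0.00, 120, 130, 90),
       (0.06, 0.04, 0.02, 122, 128, 92),
       (1.30, 0.20, 0.10, 40, 42, 38)]
print(decompose(pts))
```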
The advantage of the patch-wise scheme lies in its efficiency: both the feature calculation and the classification only need to be conducted once per patch and can be applied to all members of the same patch. Please note, however, that the key enabling the fast scheme to also achieve an acceptable classification accuracy is an appropriate oversegmentation. The improved color and elevation values, as shown in Figure 7, lead to a better segmentation with clearer boundaries and, therefore, ensure the feasibility of the patch-wise scheme.

Relative Features

"Relative" features instead of absolute ones lead to a more stable classification. Relative heights of buildings and trees in relation to the ground can be derived based on an estimated DTM (digital terrain model). The classification, however, still suffers from (1) the heterogeneous appearance of the objects in the same class and (2) the variable topography of the terrain. The challenge is to extract features from both color and geometric information that are stable within classes and discriminative between classes. As demonstrated in Figure 8, we employ the following synthetic features following (Huang and Mayer, 2015): (1) the relative height derived from the locally estimated ground level, (2) the coplanarity used to measure how well the current point and its neighbors form a plane, which is calculated as the percentage of inliers for the common plane estimated using RANSAC (Fischler and Bolles, 1981), and (3) the color coherence indicating the color difference to a reference class (vegetation), which is quantified by the distance in the L*a*b* space.

The features are furthermore extended by integrating the probabilities derived in Section 3 as the belief in the individual data points. In the proposed scheme, a patch of 3D points is the unit of processing. Since a patch can only be represented by a single color, the accuracy of this color is important. Instead of using an averaged value, we keep only the color information of points whose belief exceeds an empirically determined threshold of 0.8 and calculate the representative color with the beliefs as weights. The same idea is used for the calculation of the relative height. For the coplanarity, the only difference is that all data points are kept, because in this case a lower probability does not mean incorrect data and all points are needed for the consensus.

Classification

A standard random forest classifier (Breiman, 2001) is employed. The calculation of the features and the classification with the trained classifier are implemented aiming at parallel processing. Please note that the superpixel-based segmentation is in principle not suitable for parallelization and requires computational effort that grows exponentially with the image size. With the assumption that the proposed "relative" features are robust in various scenarios, which implies that a generally trained classifier can be directly applied to all data partitions without additional local or incremental training, the whole scene is divided into smaller partitions that can be classified independently.

A post-processing is conducted to correct trivial errors caused by data artifacts and improves the plausibility of the results. It works as a blob filter based on consistency constraints for roads and buildings.
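As an illustration of the coplanarity feature and the belief-weighted patch color described above, the following minimal sketch can be used; the iteration count, tolerance and names are our assumptions, not the authors' settings, and at least three points per patch are assumed.

import numpy as np

def coplanarity(pts, n_iter=100, tol=0.05, rng=np.random.default_rng(0)):
    """RANSAC plane fit: fraction of points within tol of the best plane."""
    best = 0.0
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                              # degenerate sample, skip
        n /= np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < tol
        best = max(best, inliers.mean())
    return best                                   # 1.0 = perfectly planar patch

def patch_color(rgb, belief, thresh=0.8):
    """Belief-weighted representative color, ignoring low-belief points."""
    keep = belief > thresh
    if not keep.any():                            # fall back to all points
        keep = np.ones_like(belief, dtype=bool)
    w = belief[keep]
    return (rgb[keep] * w[:, None]).sum(0) / w.sum()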
EXPERIMENTS

For the village dataset (see Figure 1), we acquired a set of 296 images of the village Bonnland, Germany, flying a UAV 150 m above ground. The UAV carries a Sony ILCE-7R camera with a fixed-focus lens with a focal length of 35 mm. Hence, the images with a size of 7360 × 4912 pixels have a ground resolution of approximately 2 cm. Each image overlaps with ten other images on average. The scene shows buildings with paved ground between them in its center and forest and grass around them.

The experiments are performed on a stand-alone standard PC (dual socket) with two Intel Xeon processors (135 W, 3.5 GHz) with 16 cores in total. Furthermore, the computer is equipped with an NVidia GeForce GTX 970 graphics card and a Virtex-6 board for the Field Programmable Gate Array (FPGA). The graphics card is used in the registration step for SIFT feature extraction employing the approach of (Wu, 2007). We perform the semi-global matching (SGM) on the FPGA.

First, we derived a 3D-reconstructed scene based on images downscaled to half size. Then, we repeated the test on the images with quarter resolution. This allows a faster processing with SGM, and one additionally gets rid of high-frequency noise. SGM on full-resolution images leads to less dense disparity maps. Furthermore, the full resolution is not necessary for our application.

Both tests were performed using SGM in a multi-view configuration. The images were registered by employing (Mayer et al., 2011) in 31 minutes (half size) or 36 minutes (full size). The multi-view SGM was performed in 148 minutes (quarter size) and in 851 minutes (half size).

Figure 9. 3D point clouds from the village dataset (Figure 1). Left: Dense and accurate 3D point cloud derived by volumetric fusion of truncated signed distance functions (Kuhn et al., 2014). Center: Resulting point cloud from the novel fast fusion method. Right: Coded probabilities from the occupancy grid from our fusion. It is obvious that our fusion leads to higher noise and less dense point clouds than (Kuhn et al., 2014). Nonetheless, e.g., the border of the roof is clearly visible when considering the high probabilities (red points).

The disparity maps can be transformed to 3D point clouds considering the registration of the image set. Overall, the SGM disparity maps in half resolution of our village dataset lead to 1,082,124,721 (over a billion) 3D points. The quarter-resolution maps result in 406,819,206 (nearly half a billion) 3D points. After the decomposition of the individual disparity maps, the point cloud is reduced to 45,319,881 (quarter resolution) and 32,136,740 (half resolution) 3D points. The decomposition size was set to an accuracy of 20 cm. Hence, the decomposed point clouds from half- and quarter-resolution disparity maps are quite similar, and in the following we only consider the quarter-resolution pipeline, as it is much faster and generates point clouds with fewer holes. For the occupancy grid fusion we use a logprob l = 1.0 for individual points, corresponding to an inlier probability of p = 0.73.
Because of the strong parallelization possible with 16 Central Processing Unit (CPU) cores and the fast reading and writing capabilities of a RAID 0 SSD hard disk, the point cloud decomposition of the entire set of disparity maps can be processed in only 20 seconds. The fusion of the set of point clouds into an occupancy grid takes 30 seconds on a single CPU. This part could be further parallelized; yet, because of the high implementation effort necessary, this is beyond the scope of this paper. Overall, the entire fusion process was conducted on a single PC in under one minute for the village dataset.

For the evaluation of the quality of the scene classification it is important to compare the results to state-of-the-art methods. Unfortunately, for the image-based reconstruction of point clouds there is no ground truth data available. We therefore compare our results to the results of high-quality fusion methods. Especially 3D reconstruction methods based on the fusion of Truncated Signed Distance Functions (TSDF) (Curless and Levoy, 1996) have been shown to produce highly accurate point clouds also for large datasets (Fuhrmann and Goesele, 2011, Fuhrmann and Goesele, 2014, Kuhn et al., 2013, Kuhn et al., 2014). In the field of large-scale urban environments, particularly the work of (Kuhn et al., 2014) has been used to produce results also for urban scene classification (Huang and Mayer, 2015). Hence, we use this method to produce dense and accurate point clouds, which can be employed for a comparison with our fusion method.

The fusion of TSDFs requires much memory. Hence, in (Kuhn et al., 2014) the entire space is split into subspaces, which are merged subsequently (Kuhn and Mayer, 2015). To generate an accurate 3D point cloud, the fusion is done in a probabilistic space where single measurements are represented by a couple of voxels.

Because of the complex fusion and the high memory requirements, the runtime of this component is the highest of all components when integrating it into our pipeline. E.g., for the large dataset in quarter resolution, the fusion needs more than four hours. In (Kuhn et al., 2014) an additional meshing of the resulting point cloud is proposed, which was not used in our experiments. Compared to this, our novel fusion method is about 250 times faster and, even though the result is much noisier and less dense (see Figure 9), it produces a similar scene classification (cf. Figure 10).
The Bonnland data cover about 0.12 square kilometers of undulating terrain. The classification is performed on the rasterized point cloud with a resolution of 0.2 meters. Figure 10 shows selected results of the classification. The performance on the datasets from the conventional (non-probabilistic) and the proposed probabilistic fusion method is presented for comparison. We define four object classes: ground (gray), building (red), high vegetation (trees, green), and low vegetation (grass, blue). The runtime for the whole area is about 17 minutes. 51.2% of the time is spent on the oversegmentation and the rest on the feature calculation (4.6%) as well as the classification (44.2%), which are processed in parallel with the above-mentioned hardware setup. The data have been divided into 48 tiles of 200 × 200 pixels/data points (examples are shown in Figure 10). The partitioning significantly reduces the runtime of the oversegmentation, which uses global optimization and is thus the most time-consuming part. The precondition is, as mentioned in Section 4.3, that the features are robust against variations of both ground height and color appearance, so that a generic classifier can be employed for all tiles. The latter also means that the tiles could be calculated in parallel on a computer cluster, although in this paper they are processed sequentially on the stand-alone PC, where the given runtime has been measured.

Examining Figure 10, it is obvious that the scene classification of the noisy (fast fused) point cloud is only accurate when considering appropriately determined probabilities. Without these probabilities the classification produces significant errors (e.g., middle right image). By means of the probabilities, the quality of the results is similar to the classification considering the complex fusion. In a small region in the top part of the images in the right column, the courtyard was classified correctly only based on the complex fusion results. In the middle column it is obvious that vegetation was best classified considering the fast probabilistic fusion. Furthermore, the buildings are better separated from their surroundings in all examples for the fast probabilistic fusion. This can especially be seen at the stand-alone building in the middle column.

Figure 11 presents a difficult case with results of limited quality. The proposed patch-wise scheme provides a time-efficient classification, but the performance might be affected by an incorrect oversegmentation (cf. Section 4.1). The latter is mostly caused by artifacts in the data.

In summary, we have shown that an accurate scene classification from unordered images is possible with our method for a dataset within three and a half hours on a single PC. Table 1 gives an overview of the runtime of the components of our processing pipeline for quarter- and half-resolution images.

CONCLUSIONS AND FUTURE WORK

In this paper we have presented a method for the fast fusion of disparity maps in 3D space. It complements state-of-the-art disparity map fusion, as it is much faster and yields a better quality for scene classification. The method can cope with outliers, as they are considered probabilistically. The fast fusion is very useful for applications that have no need for dense point clouds derived from high-resolution images. We have shown that occupancy grids based on octrees are suitable for this task.
We have also proposed to employ a supervised classification based on color and elevation data with (1) robustness against the heterogeneous appearance of objects and variable topography and (2) time-efficient patch-wise feature extraction and classification.

We are aware that an incorrect oversegmentation caused by data artifacts is one of the main sources of error. Besides improved data quality, a post-processing is considered to context-sensitively fill unavoidable gaps in the data. Furthermore, we consider extending the class definition with additional and/or refined classes such as cars, water bodies and different roof types.

An important benefit of our method is that already in the disparity map generation the number of points can be increased by limiting the outlier filtering. The above results show that outliers can be classified based on the probabilities estimated in the occupancy grid fusion. In MVS estimation by SGM, multiple disparity maps from image pairs are fused. Disparities with insufficient correspondences in n views are filtered. On the one hand, this leads to stable 3D points; on the other hand, important points may be filtered, leading to gaps. Keeping unstable points leads to more complete point clouds and, hence, could further improve the scene classification.

Figure 2. Example image of the urban dataset (Figure 1) and the corresponding disparity map from SGM considering twelve overlapping images showing partly the same scene. The white areas represent areas filtered by consistency checks in SGM.

Figure 4. The left image shows the dense point cloud from the disparity map shown in Figure 2. After the decomposition in 3D, the density is adapted to the application of scene classification.

Figure 5. The left three images show the reduced point clouds derived from three disparity maps. On the right, the coded point cloud representing the inlier probabilities derived by means of an occupancy grid is given. The overlapping region in the center has higher probabilities, as it has votes from three point clouds. Only the red points are contained in all three input point clouds and, hence, have the highest probability.

Figure 6. Comparison of the rasterized data based on conventional (left) and probabilistic fusion (right): point clouds presented in color (top) and as elevation (bottom).

Figure 7. Patches generated by oversegmentation employing color, elevation and normal vector information.

Figure 8. Relative features of both color and geometry.

Figure 10. The top row shows classification results on the point cloud from the complex fusion (Kuhn et al., 2014) (Figure 9 [left]). Results for the proposed fast probabilistic fusion but without considering probabilities (Figure 9 [centre]) are shown in the middle row, and results for probabilities derived by Equations 1 to 3 (Figure 9 [right]) are given in the bottom row (red - buildings, gray - ground, blue - grass, and green - trees).

Figure 11. Classification results affected by artifacts in the data (red - buildings, gray - ground, blue - grass, and green - trees).

Table 1. Runtime in minutes of Image Registration (IR), Semi-Global Matching (SGM), Point Cloud Fusion (PCF) and Scene Classification (SC). The input in the first row are the images in half resolution, while in the second row the images in quarter resolution are used. The overall runtime for the presented results (Figures 1 and 10) is 204 minutes.
2018-10-17T07:42:57.138Z
2016-06-06T00:00:00.000
{ "year": 2016, "sha1": "2a9e9a7a3febab8a29eb9ea7514e8cac3a6671ae", "oa_license": "CCBY", "oa_url": "https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/III-3/325/2016/isprs-annals-III-3-325-2016.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2a9e9a7a3febab8a29eb9ea7514e8cac3a6671ae", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
5211953
pes2o/s2orc
v3-fos-license
Hybrid magnetoresistance in Pt-based multilayers: Effect originated from strong interfacial spin-orbit coupling

The hybrid magnetoresistance (MR) behaviors in Pt/Co90Fe10/Pt, Mn1.5Ga/Pt and Mn1.5Ga/Pt/Co90Fe10/Pt multilayers have been investigated. Both the planar Hall effect (PHE) and the angle-dependent MR in Pt/Co90Fe10/Pt revealed a combination of spin Hall MR (SMR) and normal anisotropic MR (AMR), indicating a large contribution of strong spin-orbit coupling (SOC) at the interfaces. When Pt is in contact with the perpendicular magnetic anisotropy (PMA) metal Mn1.5Ga, the strong interfacial SOC modifies the effective anomalous Hall effect. The MR in Mn1.5Ga/Pt/Co90Fe10/Pt is not a simple combination of SMR and AMR, but is ascribed to complicated domain wall scattering and strong interfacial SOC when Pt is sandwiched between the in-plane magnetized Co90Fe10 and the PMA Mn1.5Ga.

Magnetoresistance (MR) is the property of a material to change the value of its electrical resistance under an external magnetic field. The dependence of the resistance on the angle between current and magnetization in metallic ferromagnets (FM) is called anisotropic magnetoresistance (AMR) [1]. On the other hand, the planar Hall effect (PHE) and the anomalous Hall effect (AHE) are both observed as a voltage transverse to the applied current, in contrast to AMR, which is measured in the longitudinal geometry [2-5]. The longitudinal resistivity ρ_xx denoting AMR and the transverse resistivity ρ_xy characterizing PHE are given by:

ρ_xx = ρ_⊥ + (ρ_∥ − ρ_⊥) cos²θ, (1)
ρ_xy = (ρ_∥ − ρ_⊥) sinθ cosθ, (2)

where ρ_∥ and ρ_⊥ are the resistivities for the magnetization parallel and perpendicular to the current, respectively, and θ is the angle between current and magnetization.

Recently, a new type of MR has been observed when a metal with strong spin-orbit coupling (SOC), such as Pt, is brought in contact with an FM, either metallic or insulating [6-15]. In these hybrid structures, spin and charge transport phenomena are interconnected, and Pt may serve as both spin current generator and detector [12-16]. The spin Hall effect (SHE) can convert a charge current into a pure spin current in the transverse direction, and the conversion is enhanced in heavy metals such as Pt due to their strong SOC. The spin current can be used to apply a torque to a magnetic moment by direct transfer of spin angular momentum [17-20]. On the other hand, it can also be detected by the inverse spin Hall effect (ISHE), which converts the pure spin current into a charge current, resulting in charge accumulation along the transverse direction. Nakayama et al. presented the unusual MR of Pt/yttrium iron garnet (YIG) in terms of a nonequilibrium proximity effect caused by the simultaneous action of SHE and ISHE and therefore called it spin Hall MR (SMR) [12]. The experiments were theoretically explained by Chen et al., who proposed an SMR theory based on the spin-diffusion approximation in a Pt layer in the presence of spin-orbit interaction and quantum mechanical boundary conditions at the Pt/YIG interface in terms of the spin-mixing conductance [13]. At the interface, the electrons in Pt interact with the localized moments in the FM. A part of the spin current is absorbed by the magnetization as a spin-transfer torque, and the spin-current reflection is thus suppressed. This absorption is zero when the magnetization M is parallel to the spin-current polarization σ and maximal when M is perpendicular to σ. By changing the magnetization direction of the FM, the polarization direction of the reflected spins, and thus the direction of the additionally created charge current, can be controlled, and a transverse voltage is also generated. In a word, the SMR is a strong interfacial SOC phenomenon.
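As a quick numerical illustration of Equations (1) and (2) as reconstructed above, the following minimal sketch evaluates both quantities over a full in-plane rotation; the resistivity values are arbitrary placeholders, not measured values from this work.

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 361)   # angle between current and M
rho_perp, rho_par = 1.000, 1.020             # placeholder resistivities

rho_xx = rho_perp + (rho_par - rho_perp) * np.cos(theta) ** 2   # AMR, Eq. (1)
rho_xy = (rho_par - rho_perp) * np.sin(theta) * np.cos(theta)   # PHE, Eq. (2)

# rho_xx oscillates with period pi, while rho_xy = (d_rho / 2) * sin(2 * theta),
# so the PHE signal is extremal at 45 and 135 degrees, which is why PHE
# measurements are typically performed at these field angles.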
However, the reports of SMR have so far mostly focused on Pt/YIG bilayers, because one can easily access the magnetotransport properties of a Pt thin film deposited on the insulating FM YIG. It remains a challenge to probe the mechanism of the strong interfacial SOC when Pt is in contact with a normal FM such as CoFe. Meanwhile, what happens when Pt is in contact with perpendicular magnetic anisotropy (PMA) metals, given that SMR is also influenced by the perpendicular magnetization component? In the past two decades, PMA MnxGa (1 < x < 1.8) alloy thin films with the L1_0 structure have gained increasing attention for possible applications in ultrahigh-density magnetic recording media, permanent magnets and spintronics [21,22]. Therefore, there is also a fundamental interest in exploring spin-current-related phenomena when Pt is in contact with a PMA MnxGa.

In this work, we have investigated the MR behaviors in Pt/Co90Fe10/Pt, Mn1.5Ga/Pt and Mn1.5Ga/Pt/Co90Fe10/Pt multilayers (Co90Fe10 and Mn1.5Ga will simply be denoted as CoFe and MnGa in the following), in which CoFe is polycrystalline and MnGa is a single-crystalline PMA metal. The magnetic and transport properties are compared with those of multilayers in which Pt is replaced by Cu, which has a weak SOC. The PHE of Pt/CoFe/Pt is much larger than that of Cu/CoFe/Cu. On the other hand, as compared with the normal AMR in Cu/CoFe/Cu, the angle-dependent MR in Pt/CoFe/Pt reveals that the longitudinal resistivity change is also related to the magnetization component perpendicular to the current direction in the film plane. This phenomenon indicates a large contribution of strong SOC at the interface. When Pt is in contact with PMA MnGa, the effective AHE becomes smaller, which also confirms the strong interfacial scattering due to SOC. The MR in MnGa/Pt/CoFe/Pt is not a simple combination of SMR and AMR but is ascribed to complicated domain wall scattering and SOC when Pt is sandwiched between the in-plane CoFe and the PMA MnGa.

All the samples were fabricated into Hall bars with a nominal length l of 2.5 mm and a width w of 0.2 mm. Figure 1d shows the resistance measurement geometry of the Hall bars in the xy plane with a current along x and the configurations for the longitudinal resistance R_XX and the transverse resistance R_XY. For the subsequent measurements, the magnetic field was applied in the xy, zy, and zx planes with angles α_xy, β_zy and γ_zx (simply denoted as α, β and γ), respectively, as shown in Fig. 1e-g.

As shown in Fig. 2a,b, the measurements of the PHE were done with the applied magnetic field forming a fixed angle (α = 45° and 135°) with the current, since the signal is maximized in this geometry, as shown in Equation (2). After subtracting the common offset, signals with opposite signs were obtained. On the other hand, the AMR measurements with maximized signals were done with the applied field kept at a fixed angle with the current (α = 0° and 90°), as shown in Fig. 2c,d. It is observed that both the resistance changes ΔR_XX and ΔR_XY of Pt/CoFe/Pt are much larger than those of Cu/CoFe/Cu. Considering the thickness and the polycrystalline structure of the CoFe layer, the MR caused by magnetic domain walls should be quite small in both multilayers. The enhancement of the resistance change in Pt/CoFe/Pt may therefore be mostly attributed to SMR. The longitudinal and transverse resistivity changes for SMR can be formulated as [13]:

ρ_xx = ρ + Δρ_0 + Δρ_1 (1 − m_y²), (3)
ρ_xy = Δρ_1 m_x m_y + Δρ_2 m_z, (4)
where ρ is the intrinsic electric resistivity, Δρ_0 is the resistivity reduction by the spin-orbit interaction, and m_z is the component of the magnetization in the z direction. Δρ_1 and Δρ_2 are the magnitudes of the resistivity terms related to the complex spin-mixing interface conductance G↑↓ = G_r + iG_i. Δρ_1 (caused mainly by G_r) contributes a conductance modulation depending on the in-plane component of the magnetization, while Δρ_2 (caused mainly by G_i) contributes only when there is a magnetization component normal to the plane. Therefore, the resistance change not only depends on m_x as in ordinary AMR but also on m_y in SMR.

Meanwhile, for both the longitudinal and transverse configurations, peaks or dips are observed around the coercivity, and they also depend on the field direction. It is proposed that the magnetization of CoFe is fully rotated in-plane towards H due to its in-plane magnetic anisotropy. This magnetic rotation results in a change of the measured resistance, passing the maximum or minimum resistance, which is observed as a peak or dip around the coercivity.

The SMR ratio can be expressed as [13]:

Δρ_1/ρ = θ_SH² (λ/d_N) Re[2λG↑↓ tanh²(d_N/2λ) / (σ + 2λG↑↓ coth(d_N/λ))], (5)

where θ_SH is the spin Hall angle, d_N the thickness of the heavy metal layer, σ = ρ⁻¹ the conductivity, and λ the spin diffusion length. By fitting the angular dependence curves in Fig. 2f, we first obtain Δρ_1 = 6.4 × 10⁻³ μΩ cm and SMR = Δρ_1/ρ ≈ 0.06%. Using the parameters θ_SH = 0.05 and λ = 1.5 nm for Pt [13], the spin-mixing conductance G↑↓ of the multilayers is deduced from Eq. (5) to be about 2.6 × 10¹⁰ Ω⁻¹ m⁻². All the results reveal a combination of SMR and normal AMR, indicating a large contribution of strong SOC at the interfaces.

Spin-current-related transport properties of the PMA MnGa/Pt bilayers

First, we measured the transport properties of a single MnGa layer. Figure 3a shows the α, β and γ dependence of R_XX. R_XX(α) shows a sin²(α) dependence, while R_XX(β) and R_XX(γ) both adopt a sin⁴ dependence on the angle. Figure 3b shows the field-dependent resistance with H along the x, y, and z directions, respectively. It is obvious from the two figures that the most dramatic resistance change happens when the magnetization is out of plane, which is caused by the special domain structure of MnGa.

Then we studied the thickness dependence of the AHE resistance R_AH in Mn1.5Ga/Pt(t) (t = 1 to 5 nm), as compared with those of MnGa and MnGa/Cu shown in Fig. 3c. R_AH was obtained by subtracting the ordinary Hall component (determined from a linear fit to the high-field region up to ±6 T). Hall effect measurements of Pt (5 nm) and Cu (5 nm) grown on Si/SiO2 substrates show only the ordinary Hall effect, with the Hall voltage linearly dependent on H, as shown in Fig. 3d. The ordinary Hall effect is relatively small and does not dramatically influence the Hall effect in MnGa/Cu and MnGa/Pt. From Fig. 3c we find that R_AH in MnGa/Cu(5) is larger than that in a single epitaxial MnGa film, while the R_AH values in all the MnGa/Pt(t) bilayers become smaller. After inserting Cu between MnGa and Pt, the R_AH values in MnGa/Cu(3)/Pt(3) and MnGa/Cu(3)/Pt(5) become larger than those in the films with direct contact, but a little smaller than that in the single MnGa film. It has been proven that Cu is very far from the Stoner instability and that the nonlocal exchange force does not reach over such a thickness. Meanwhile, Cu has a long (several hundred nanometers) spin diffusion length and a very small SHE due to its weak SOC, and can thus carry spin current over a long distance.
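Returning to the G↑↓ estimate above, Eq. (5) can be inverted numerically for a purely real spin-mixing conductance, as in the sketch below. The Pt thickness d_N and resistivity ρ are assumed values, since they are not stated in this passage, so the printed number illustrates the procedure rather than reproducing the quoted 2.6 × 10¹⁰ Ω⁻¹ m⁻².

import numpy as np
from scipy.optimize import brentq

theta_sh, lam = 0.05, 1.5e-9        # spin Hall angle, spin diffusion length (Pt)
d_n = 3.0e-9                        # assumed Pt thickness (not given here)
rho = 30e-8                         # assumed Pt resistivity in Ohm*m
sigma = 1.0 / rho
smr_meas = 6e-4                     # measured SMR ratio, about 0.06%

def smr_model(g_r):
    """SMR ratio of Eq. (5) for a real spin-mixing conductance g_r."""
    num = 2.0 * lam * g_r * np.tanh(d_n / (2.0 * lam)) ** 2
    den = sigma + 2.0 * lam * g_r / np.tanh(d_n / lam)   # coth = 1/tanh
    return theta_sh ** 2 * (lam / d_n) * num / den

g_r = brentq(lambda g: smr_model(g) - smr_meas, 1e6, 1e18)
print(f"G_r ~ {g_r:.2e} Ohm^-1 m^-2")   # illustrative output only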
Altering the interface by inserting Cu can block the interfacial SOC induced by Pt. Therefore, the observed recovery of R_AH upon Cu insertion confirms that the suppression of the effective AHE in MnGa/Pt originates from the strong interfacial SOC. The MR behaviors of the MnGa/Cu/CoFe/Cu and MnGa/Pt/CoFe/Pt multilayers are shown in Fig. 4. Before carrying out the measurements of AMR and PHE in the low-field range, a high magnetic field of 6 T was first applied along the z axis of the samples to induce a perpendicular magnetization of MnGa and then decreased to zero. As compared with Cu/CoFe/Cu, both ΔR_XX and ΔR_XY of MnGa/Cu/CoFe/Cu are very small, as shown in Fig. 4a,c, which is also consistent with Equations (1) and (2). However, the resistance changes in MnGa/Pt/CoFe/Pt become dramatic, especially for ΔR_XY. These results are not consistent with the mechanism of either SMR or AMR alone. For the low-field measurement, the strong and complex interfacial SOC decreases the PHE when Pt is sandwiched between the in-plane CoFe and the PMA MnGa, but R_XY still shows the sin 2α dependence. On the other hand, the angle-dependent MR of MnGa/Cu/CoFe/Cu measured at high field reveals a combination of the behaviors of Cu/CoFe/Cu and MnGa. R_XX(α) shows a cos²(α) dependence, while R_XX(β) and R_XX(γ) show angle dependences similar to those of MnGa, as shown in Fig. 4e. For MnGa/Pt/CoFe/Pt, R_XX(β) shows a distinctive behavior, adopting a cos²(2β) dependence. In this case, the magnetization is perpendicular to the current in the film plane throughout the measurement, which indicates the combination of both complicated domain wall scattering and strong interfacial SOC when Pt is sandwiched between the in-plane magnetized CoFe and PMA MnGa films.

High magnetic field dependent resistance

To further study the transport properties induced by domain wall scattering, we also measured the high-magnetic-field dependence of the resistance of the four multilayers with the field H along the x, y, and z directions, respectively. In Fig. 5a,b, the in-plane curves (H//x and H//y) of both Cu/CoFe/Cu and Pt/CoFe/Pt show steep resistivity changes at small fields (< 1000 Oe), while for H//z the curves indicate a coherent magnetization rotation that is completed at about 1.8 T. At large fields, the films become homogeneously magnetized, and all curves exhibit a linear decrease, which is usually referred to as spin-disorder MR caused by the suppression of spin waves with increasing field strength [1]. This indicates that the difference in domain wall scattering between Cu/CoFe/Cu and Pt/CoFe/Pt is not large. However, the field-dependent resistance of MnGa/Pt/CoFe/Pt becomes much more complicated compared with that of MnGa/Cu/CoFe/Cu. When the current is applied along the x direction, the high-magnetic-field dependence of R_XX is almost the same for the two samples. In contrast, when the magnetization is perpendicular to the current, for example for H//y and H//z, more evident resistivity changes occur at small fields.

To study the resistivity due to domain wall scattering, Levy and Zhang developed a quantum mechanical description based on the giant MR Hamiltonian, which leads to an increased resistance due to the mixing of the spin conduction channels induced by the magnetization rotation within the domain wall [23]. Notably, they were the first to derive both the CIW (current in wall) and CPW (current perpendicular to wall) resistances. Viret et al. carried out low-temperature measurements of the resistance induced by magnetic domain walls in FePd with perpendicular anisotropy in the CPW and CIW configurations, which quantitatively agreed with the model of Levy and Zhang [24].
They found that the resistance variations in these two configurations are quite different, which reflects the asymmetric domain-wall-induced increase of the resistivity. Thus, we ascribe the different resistance variations between the current parallel (H//x) and perpendicular (H//y and H//z) to the magnetic field to different domain wall rotations. Therefore, both the strong SOC and the domain wall scattering at the interfaces contribute largely to the transport properties of MnGa/Pt/CoFe/Pt, and the bottom Pt layer sandwiched by MnGa and CoFe may play a dominant role.

To further investigate this, we study the magnetic and transport properties of MnGa/Pt(1.5)/CoFe/Pt(1.5), as shown in Fig. 6. Both the out-of-plane and in-plane hysteresis loops reveal the existence of magnetic coupling but indicate different micromagnetic configurations of the domain walls as compared with MnGa/Pt(5)/CoFe/Pt(1.5) shown in Fig. 1a. Since the variations of the transport properties in the multilayers are not evident for H//x, we focus on the transport behavior when the current is perpendicular to the magnetic field. The complicated behavior of the high-magnetic-field-dependent resistance R_XX with H//y also reveals the existence of complex domain wall scattering. However, for MnGa/Pt(1.5)/CoFe/Pt(1.5), the field-dependent resistance R_XX shows a behavior similar to that of the single MnGa film for H//z, indicating a weak contribution from the magnetic coupling along the z direction. Meanwhile, the β scan of R_XX also shows a behavior similar to that of MnGa, as shown in Fig. 3a. This proves that decreasing the thickness of the bottom Pt layer reduces not only the contribution of the magnetic coupling but also that of the strong SOC. However, a more detailed understanding of the transport properties in this kind of multilayer with different magnetic anisotropies remains a challenge and needs further study.

Current dependence of the SMR

In our experiment, a current of 1 mA is applied, corresponding to a current density of about 10⁵ A cm⁻². We also carried out current-dependent measurements of R_XX(β) and R_XY(H) in the Pt/CoFe/Pt and MnGa/Pt/CoFe/Pt multilayers with currents of 0.1, 1 and 5 mA, as shown in Fig. 7. The measurements of R_XY(H) were done with α = 135°. It is found that with increasing current, R_XY is enhanced in both samples. However, R_XX(β) at a high magnetic field of 6 T is almost the same for different applied currents in Pt/CoFe/Pt. For MnGa/Pt/CoFe/Pt, when the current is 0.1 mA, R_XX(β) reveals a more evident contribution from the MnGa film, as shown in Fig. 3a, while R_XX(β) is almost the same for 1 and 5 mA. This indicates that the SHE may not be the sole origin of the SMR effect; other contributions from the NM/FM interfaces, such as a texture-induced geometrical size effect [10] or an interfacial Rashba effect [25,26], may exist, and further study is required to clarify the origin.

In summary, we have investigated the origin of the hybrid MR in Pt/CoFe/Pt, MnGa/Pt and MnGa/Pt/CoFe/Pt multilayers. Both the PHE measured at low field and the angle-dependent MR at high field in Pt/CoFe/Pt revealed a combination of SMR and normal AMR, indicating a large contribution of strong SOC at the interfaces. For MnGa/Pt, the strong interfacial SOC between Pt and PMA MnGa decreased the effective AHE.
The MR in MnGa/Pt/CoFe/Pt was not a simple combination of SMR and AMR, but was ascribed to complicated domain wall scattering and strong SOC when Pt was sandwiched between the in-plane magnetized CoFe and PMA MnGa films. Our results provide a way of modulating spin-related transport effects when strong-SOC metals are brought into contact with metals of different magnetic anisotropies.
2018-04-03T03:58:13.643Z
2016-02-04T00:00:00.000
{ "year": 2016, "sha1": "72e6f684228267162a2980a1022e3535e7a7d21d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep20522.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72e6f684228267162a2980a1022e3535e7a7d21d", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
257898113
pes2o/s2orc
v3-fos-license
Association between Genotype and the Glycemic Response to an Oral Glucose Tolerance Test: A Systematic Review

The inter-individual variability of the metabolic response to foods may be partly due to genetic variation. This systematic review aims to assess the associations between genetic variants and the glucose response to an oral glucose tolerance test (OGTT). Three databases (PubMed, Web of Science, Embase) were searched for keywords in the fields of genetics, OGTT, and metabolic response (PROSPERO: CRD42021231203). Inclusion criteria were available data on single nucleotide polymorphisms (SNPs) and the glucose area under the curve (gAUC) in a healthy study cohort. In total, 33,219 records were identified, of which 139 reports met the inclusion criteria. This narrative synthesis focused on 49 reports describing gene loci for which several reports were available. An association between SNPs and the gAUC was described for 13 gene loci with 53 different SNPs. Three gene loci were investigated most often: transcription factor 7 like 2 (TCF7L2), peroxisome proliferator-activated receptor gamma (PPARγ), and potassium inwardly rectifying channel subfamily J member 11 (KCNJ11). In most reports, the associations were not significant, or single findings were not replicated. No robust evidence for an association between SNPs and the gAUC after an OGTT in healthy persons was found across the identified studies. Future studies should investigate the effect of polygenic risk scores on postprandial glucose levels.

Genome-wide association studies (GWAS) have identified associations between single nucleotide polymorphisms (SNPs) and fasting glucose levels. For instance, the Meta-Analysis of Glucose and Insulin-related traits Consortium (MAGIC) reported several independent genetic loci associated with glucose metabolism [5]. Furthermore, a meta-analysis of nine GWAS, with 15,234 participants without type 2 diabetes mellitus (T2DM), revealed five genetic loci that are associated with the 2-hour glucose level after an oral glucose tolerance test (OGTT) [6], indicating that SNPs also affect postprandial glucose metabolism. However, Berry et al. (2020) have recently shown that genotype plays a minor role as a predictor of the postprandial response to a standardized meal challenge [1]. The postprandial 2-hour glucose level is frequently used as a clinical parameter for the classification of disturbances of glucose metabolism and is of diagnostic value for T2DM. In this study, we focus on the glucose area under the curve (gAUC) as the primary outcome, as an approximation of glucose metabolism, and evaluate the genetic contribution to its inter-individual variability.

In this narrative synthesis, gene loci were included for which at least three reports were available (49 reports) (Figure 1). This restriction of gene loci was crucial to increase the informative value and to reduce the presentation of single, non-replicated findings. Information on gene loci for which one (68 gene loci) or two reports (15 gene loci) were available is presented in Supplementary Tables S1 and S2.

Study Quality Assessment

The results of the quality assessment are shown in Figure 2. No report was rated as low quality. The quality of 23 reports was judged to be intermediate, since information on the power calculation, correction for multiple testing, adjustment, and/or ethnicity was missing. The remaining 26 studies were rated as high quality (Figure 2).

The association between the SNP rs12255372 and the gAUC was investigated in five different cohorts (Table 1) [18,20,47,58].
Homozygous carriers of the minor allele (T) showed a significantly higher gAUC compared to heterozygous carriers and the wild-type (p = 0.04) in 1697 participants from the Ely study [20]. Similar results were found in 1538 Finnish men, where homozygous and heterozygous carriers of the minor allele (T) showed a higher gAUC than the wild-type (p = 0.039) [47]. These results could not be replicated in the cohort of the Amish Family Diabetes Study [18], in non-diabetic offspring of persons with T2DM [47], or in participants without a family history of T2DM (p > 0.05) [58].

The SNP rs7903146, which is in high LD (r² > 0.8) with the SNP rs12255372, was examined in different cohorts (Table 1) [12,13,18,20,21,41-46,48,58]. While in eight cohorts no significant difference between the genotypes and the gAUC was found [12,13,18,42-44,46,58], there was a statistically significant difference between the genotypes in two cohorts. In the Ely study, homozygous carriers of the minor allele (T) showed a significantly higher gAUC compared to heterozygous carriers and the wild-type (p = 0.013) [20]. A significant difference was found between homozygous and heterozygous carriers of the minor allele (T) compared to the wild-type in 1065 participants of the TÜF cohort (p = 0.001) [21]. In two cohorts, the results for an association between this SNP and the gAUC were inconsistent, depending on the selection of participants or the calculation method of the gAUC [41,45,48]. In the first cohort of 120 persons without diabetes, homozygous carriers of the minor allele (T) had a significantly higher gAUC than the wild-type [41,45]. A similar result was found for women (p < 0.05), while no association was found for men (p > 0.05) [41]. In the second cohort, carriers of the minor allele (T) had a significantly higher gAUC compared to the wild-type (Table 1) [48].

In a cohort of Han-Chinese participants, the findings were inconsistent, depending on the included participants or the genetic model [35]. While a significant difference between the genotypes in 667 normoglycemic participants was found in the additive (p = 0.006) and dominant (p = 0.007) models, no difference was observed for the gAUC between the genotypes in the recessive model. However, the significance disappeared after correction for multiple testing. Independent of the genetic model, no significant association between SNP rs5215 and the gAUC was found in 458 participants with impaired glucose tolerance and impaired fasting glucose [35]. No association was found between the genotypes and the gAUC in 669 participants from the Quebec Family Study (Table 3) [13].

An association between SNP rs5219, which is in high LD (r² > 0.8) with SNP rs5215, and the glycemic response to glucose was investigated in five cohorts [33-36]. No significant difference in the gAUC was observed in four cohorts [34-36]. In 298 persons without diabetes, carriers of the minor allele (T) had an increased gAUC compared to the wild-type when using the dominant genetic model (p = 0.04) or when comparing homozygous carriers of the minor allele with the wild-type (p = 0.02) [33]. No significant difference was seen when using the additive model (p = 0.05) [33]. In a subgroup analysis of 75 persons who underwent an OGTT and, in addition, a hyperglycemic clamp, the dominant model resulted in a significantly increased gAUC in carriers of the minor allele (T) compared to the wild-type (p = 0.02) (Table 3) [33].
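The additive, dominant, and recessive models referred to above differ only in how the minor-allele count is coded before it enters the association test. A minimal sketch of this coding (our own illustration, not taken from any of the cited reports):

import numpy as np

def encode(genotype, model="additive"):
    """Code minor-allele counts (0, 1, 2) under the usual genetic models."""
    g = np.asarray(genotype)
    if model == "additive":        # 0/1/2 copies enter the regression linearly
        return g
    if model == "dominant":        # carriers (1 or 2 copies) vs. wild-type
        return (g >= 1).astype(int)
    if model == "recessive":       # homozygous carriers vs. the rest
        return (g == 2).astype(int)
    raise ValueError(model)

print(encode([0, 1, 2], "dominant"))   # -> [0 1 1]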
Further Genes

Findings for further genes are presented in Table 4. The association between four SNPs within the CDKAL1 gene locus and the gAUC was assessed in four cohorts (Table 4) [12,13,21,28]. For the most frequently examined SNP, rs7754840, a significant difference in the gAUC was found between homozygous and heterozygous carriers of the minor allele (C) and the wild-type in 846 participants from the EUGENE2 study (p = 0.016) [28]. Similar findings were obtained for 1065 participants from the TÜF cohort (p = 0.02) [21], while no significant difference between the genotypes and the gAUC was found for 3367 participants without diabetes from the METSIM cohort [28]. In the Quebec Family Study with 669 participants, rs7756992, which is in high LD (r² > 0.8) with the SNP rs7754840, was not associated with the gAUC [13].

An association between the HNF4α gene locus and the glucose response was studied in three cohorts (Table 4) [17,19,31]. Of six SNPs, four were investigated in one cohort each and showed no significant association [17,19,31]. SNP rs1884614 was examined in 689 participants from the Amish Family Diabetes Study [19] and 4430 participants from the Inter99 Study [31]. In both cohorts, homozygous and heterozygous carriers of the minor allele (T) showed a different gAUC than the wild-type; however, while the difference was significant in the Amish population with the additive genetic model (p = 0.022) [19], no significant difference was seen in the Danish cohort in either the additive (p = 0.05) or the recessive genetic model (p = 0.21) [31]. Associations between SNP rs1885088 and the gAUC were investigated in the Inter99 Study [31] as well as in the Quebec Family Study [17]. In both cohorts, no significant difference was observed between the genotypes. In a sub-analysis within the Quebec Family Study, homozygous carriers of the minor allele (A) with a high physical activity level showed a significantly different gAUC compared to heterozygous carriers (p = 0.01) or the wild-type (p = 0.01) [17]. No association was detected in participants with a low physical activity level (Table 4) [17].

Figure 2. Quality assessment of genetic association studies [11]. The quality rating was either high (green), intermediate (yellow), or low (red).

Most reports investigated an association between TCF7L2 SNPs (rs12255372 and rs7903146, LD r² > 0.8) and the gAUC [12,13,18,20,21,41-48,58,60]. For both SNPs, the reports based on the largest sample sizes (SNP rs12255372: Ely study, 1697 participants [20], and 1538 Finnish men [47]; SNP rs7903146: Ely study, 1697 participants [20], and TÜF cohort, 1065 participants [21]) found a significantly higher gAUC in carriers of the minor allele (T) compared to heterozygous carriers and/or the wild-type. However, for the TÜF cohort, no information about any statistical adjustment was given [21]. In contrast, no statistical significance was found in most of the smaller cohorts, with sample sizes between 18 and 721 participants [12,13,18,21,41-48,58,60].
These results indicate that the SNPs rs12255372 and rs7903146 may modify the gAUC after an OGTT. However, false-positive results cannot be excluded, since the statistical power to detect significant associations between the SNPs and the gAUC is unknown. There is some evidence from GWAS, which were excluded from this narrative synthesis, that the TCF7L2 gene locus influences glucose metabolism not only in the fasting state [6,61] but also in the post-challenge phase [6]. A meta-analysis of several GWAS, including 15,234 participants without diabetes, showed that the SNP rs7903146 was associated with fasting glucose and the 2-h glucose level after an OGTT [6]. However, no association could be found between the SNP rs7903146 and the AUC ratio of insulin to glucose [6].

Similar findings were observed for an association between the PPARγ SNP rs1801282 and the gAUC [14,15,22,24,25,37-40,59]. For example, in the Sapphire cohort with 1713 participants, significant differences were found when comparing homozygous and heterozygous carriers of the minor allele (G) with the wild-type [24]. Nevertheless, in most cohorts no significant association between rs1801282 and the gAUC was found, possibly due to small sample sizes or different ethnicities. A meta-analysis with around 32,000 participants without diabetes revealed no evidence for an association between SNP rs1801282 and the 2-h glucose level; however, data on the gAUC were not reported [62]. In addition, this meta-analysis revealed an association between the SNP and fasting glucose in participants with obesity [62]. To the best of our knowledge, there is so far no evidence for an association focusing on postprandial glucose trajectories.

All analyses investigating the association between KCNJ11 SNPs and the gAUC were based on cohorts with fewer than 1000 participants [12,13,33-36]. For the most frequently assessed SNP rs5219, one report with 298 participants stated that carriers of the minor allele (T) had an increased gAUC compared to the wild-type [33]. However, the significance disappeared in the additive genetic model. Considering other weaknesses such as low sample sizes, different ethnicities, and a missing correction for multiple testing, there is little evidence for a clinically relevant association between the SNPs rs5215 or rs5219 and differences in the gAUC after an OGTT. In addition, no data from GWAS for an association between the KCNJ11 gene locus and the gAUC are available.

The eligible articles included data on the glucose response after a standardized 75 g OGTT in participants without diabetes. Potential confounding factors, e.g., age and BMI, were not considered mandatory for inclusion in this systematic review. Nevertheless, the reports investigating the association between SNPs in the TCF7L2, PPARγ, and KCNJ11 gene loci and the gAUC were based on participants with a BMI below 30 kg/m². Furthermore, most of the identified articles considered potential confounders in the adjustment procedure. However, the following differences between the reports were obvious: the frequency of plasma glucose measurements during the OGTT (every 10 min up to every hour), the duration of the OGTT (120 min up to 300 min), the sample size, the ethnicity, and the statistical methods (genetic model, adjustment, power calculation, and correction for multiple testing). Thus, the comparability between the eligible reports might be limited not only by the high variability of the SNPs investigated but also by these confounders.
Several explanations for these negative findings exist. First, the statistical power to detect small effect differences among the genotypes may be missing. To detect small genetic effects on the metabolic response, cohorts with large sample sizes are needed. This was the reason for the establishment of large international consortia, namely, to be able to combine genetic data for the identification of SNPs with rather small effect sizes [63,64]. Out of the 39 different cohorts identified in our analysis, only 4 cohorts had a sample size above 1000 participants, which is not comparable to genetic association studies with more than, e.g., 35,000 persons [63]. Nevertheless, GWAS investigating the association between SNPs and the gAUC after an OGTT could not be identified, whereas data from GWAS regarding the association with 2-h postprandial glucose levels are frequently found [6,65].

Second, other factors with a greater effect on the gAUC might have masked any genetic effect. The Personalized Responses to Dietary Composition (PREDICT) study revealed that factors such as meal composition have a greater effect on the gAUC after a meal challenge than the genotype (15.4% vs. 9.5%) [1]. Additionally, the assessment of the association between SNPs and the gAUC after an OGTT was not the primary aim of most studies, and usually a post-hoc analysis was performed. Moreover, due to the lack of a clinical endpoint for the gAUC, the clinical relevance of the investigated association is difficult to determine.

Furthermore, the most frequently studied gene loci, TCF7L2 [66-68], PPARγ [69,70], and KCNJ11 [71,72], are candidate genes for T2DM predisposition. This hypothesis-driven approach, with identified candidate genes, turned out to be of limited value in predicting people with early disturbances in glucose metabolism. It is rather likely that other gene loci or combinations thereof also play a role in the metabolic response after an OGTT. The gastric inhibitory polypeptide receptor (GIPR) gene locus is one of the genes known to affect the metabolic response after an OGTT [6]. The GIPR SNP rs10423928 was associated with the 2-h glucose level and the AUC ratio of insulin and glucose after an OGTT in participants without diabetes [6]. However, an association between the GIPR gene locus and the gAUC could not be identified in any eligible article of this systematic review.

Finally, no single main effect of an SNP on the gAUC after an OGTT was found. Therefore, it may be worthwhile to study the effect of a combination of SNPs. In several studies, the association between a polygenic risk score and the gAUC after an OGTT was analyzed [73-78]. Depending on the gene loci chosen for the calculation of the risk score, both significant [73,76,77] and non-significant [24,74,75,78] differences were found for the gAUC per risk allele. Therefore, research on polygenic risk scores might be more meaningful to evaluate a genetic effect on the metabolic response after an OGTT. So far, mostly candidate genes for T2DM or gene loci known to interfere with glucose metabolism were used for the calculation of the genetic risk score [73-78]. Machine learning approaches and artificial intelligence measures open further possibilities for a more comprehensive understanding of the genetic contribution to metabolic responses after an OGTT. Genome-wide polygenic risk scores may be even more promising in this context [79].
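For illustration, the two quantities discussed above can be computed as follows. This is a minimal sketch with made-up example values; it uses a total (not incremental) trapezoidal gAUC and an unweighted allele count, which are only two of several definitions used across the included reports.

import numpy as np

def g_auc(times_min, glucose_mmol):
    """Total glucose AUC of an OGTT curve via the trapezoidal rule."""
    t = np.asarray(times_min, float)
    g = np.asarray(glucose_mmol, float)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))

def risk_score(genotypes):
    """Unweighted additive risk score: sum of risk-allele counts (0/1/2)."""
    return np.asarray(genotypes).sum(axis=1)   # genotypes: subjects x SNPs

# Example: one subject sampled at 0/30/60/90/120 min
t = [0, 30, 60, 90, 120]
g = [5.1, 8.4, 7.9, 6.6, 5.8]
print(g_auc(t, g))                             # -> 850.5 (mmol/L x min)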
Strengths and Limitations

This systematic review focused on the OGTT as the standard method to characterize glucose metabolism. For all included reports, the methodological quality of the genetic associations was assessed and presented. This systematic review is limited by its focus on SNPs and by excluding other genetic variants such as copy number variations and haplotypes. The findings are based on hypothesis-driven approaches including candidate genes. As the gAUC is not a clinical parameter with a defined diagnostic or clinical value, no assessment of the clinical effect can be made. Furthermore, in most of the included cohort studies, the OGTT was performed to classify participants according to their glucose metabolism, e.g., as normoglycemic or diabetic, rather than to assess the primary or secondary outcomes. This systematic review focused on persons without diabetes to address the research gap on the association between SNPs and the metabolic response to an OGTT in healthy persons, following the current discussion on the inter-individual variation of the metabolic response to a standardized meal challenge as a predictor for personalized nutritional recommendations. Therefore, the considered sample sizes are rather small, and a conclusion on gender-specific results was not possible. A narrative synthesis, as indicated in PROSPERO, was conducted, since data pooling and performing a meta-analysis were not considered appropriate.

Conclusions

In this systematic review, which is based on candidate gene analyses, heterogeneous findings for the association between SNPs and the gAUC after an OGTT in participants without diabetes were detected. The most investigated genetic loci (TCF7L2, PPARγ, and KCNJ11) are known to increase the risk of developing T2DM and have shown single findings of a significant association with the gAUC. Therefore, more robust data, including data from hypothesis-free approaches, are needed to exploit the genetic contribution to personalized nutrition.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15071695/s1. Table S1: Identified genes for an association between SNPs and gAUC after an OGTT in adults; Table S2: Associations between SNPs and gAUC after an OGTT in adults.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
2023-04-02T15:26:49.251Z
2023-03-30T00:00:00.000
{ "year": 2023, "sha1": "48bd6297b43cc6d0bc560dd4736098670966e2a2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/15/7/1695/pdf?version=1681018373", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6135c88ec771a390965ef65e1fe7e6637bf8410a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253406259
pes2o/s2orc
v3-fos-license
Intraday power trading: toward an arms race in weather forecasting?

We propose the first speculative weather-based algorithmic trading strategy on a continuous intraday power market. The strategy uses neither production assets nor power demand and generates profits purely based on superior information about the aggregate output of weather-dependent renewable production. We use an optimized parametric policy based on state-of-the-art intraday updates of renewable production forecasts and evaluate the resulting decisions out-of-sample for one year of trading based on detailed order-book-level data for the German market. Our strategies yield significant positive profits, which suggests that intraday power markets are not semi-strong efficient. Furthermore, sizable additional profits could be made using improved forecasts of renewable output, which implies that the quality of forecasts is an important factor for profitable trading strategies. This has the potential to trigger an arms race for more frequent and more accurate forecasts, which would likely lead to increased market efficiency, more reliable price signals, and more liquidity.

Introduction

In the last decades, the electricity industry in many countries has seen rapid changes. One driver of these developments was the transition from a highly vertically integrated, state-controlled sector of the economy to a largely competitive and decoupled industry (Pollitt 2019). Another reason is the climate crisis and the increasing efforts to transition to a carbon-neutral society. The electricity sector is the key to sustainable energy systems, enabling a change in the nature of energy supply by sharply increasing production from variable renewable energy sources (VRES) such as wind and photovoltaics.

In the majority of industrialized countries, electricity is traded on a range of futures markets whose products differ in their time to maturity. Recently, the weather-dependent and unpredictable nature of VRES production has increasingly shifted the focus to markets with a high temporal resolution that trade close to delivery, when production forecasts are reasonably accurate. Short-term trading is mostly organized in real-time markets or continuous intraday markets. While the former is the prevailing design in the US (Milligan et al. 2016), the latter is, for example, used in Europe. These volatile markets are attractive for firms that can quickly adapt their demand or production profiles and can thus sell their flexibility to other market participants with balancing needs driven by, for example, forecast errors in VRES production. Short-term trading thus provides incentives to invest in flexible energy sources such as gas turbines and storage, which are required to balance the intermittent production from ever-growing VRES capacities.

Due to the increasing size and sustained price variability, short-term markets are not only interesting for flexibility providers but increasingly also for speculative traders who neither own production assets nor trade their own electricity demand. The essential defining characteristic of speculative trading on electricity markets thus is that any open position has to be closed before the respective product goes into delivery. In this paper, we propose a trading strategy for speculative trading on continuous intraday markets. Our approach is motivated by algorithmic trading strategies in continuous financial markets that are triggered by signals indicating a change in the fundamental value of an asset. Since, as discussed above, VRES production is an important driver of short-term electricity trading, we use forecast errors of aggregate VRES production as signals for our strategies. The rationale for this choice is that if forecasts for VRES production are inaccurate, producers have to correct their positions taken on the day-ahead market, which, if the errors are large enough, causes a shift in intraday prices (Kiesel and Paraschiv 2017; Kremer et al. 2020a, b).

While the literature on asset-backed trading on intraday power markets is extensive (see for example Boomsma et al. 2014; Kumbartzky et al. 2017; Séguin et al. 2017; Bertrand and Papavasiliou 2019; Wozabal and Rameseder 2020; Rintamäki et al. 2020), there is virtually no research on optimal bidding strategies for speculative traders that have no assets of their own. In the following, we review those papers that come closest to our trading strategies. Skajaa et al. (2015) analyze a wind power producer participating in the continuous intraday market as well as the balancing market using detailed data from the limit order book (LOB) as well as several updated wind power forecasts. Tankov and Tinsi (2021) propose to use repeatedly updated probabilistic forecasts instead of point forecasts for weather-related variables. Sánchez de la Nieta et al. (2020) use several updated weather forecasts for bidding on the Spanish day-ahead market, intraday auctions, and the imbalance market. Engmark et al. (2018) propose a trading strategy for a hydro power producer on the day-ahead, intraday and balancing markets. Dideriksen et al. (2019) consider trading strategies for a hydropower producer on the intraday market. Koch (2021) uses the intraday market to build up a position to be cleared on the balancing market, thus arbitraging between the two markets. Kath and Ziel (2018) introduce a forecast for the volume-weighted continuous intraday price of 15-minute contracts and develop a strategy to choose between trading on the day-ahead auction market and the continuous intraday market. Monteiro et al. (2020) evaluate futures trading strategies on the Spanish Mibel market based on long-term electricity futures. Maciejowska et al. (2019) study the problem of a small VRES producer that trades on the day-ahead and the intraday market. Wozabal and Rameseder (2020) propose trading strategies for a storage that arbitrages between the Spanish day-ahead and intraday markets. Furthermore, Kath and Ziel (2020) explore optimal order execution strategies with the aim of minimizing liquidity costs, and Glas et al. (2019, 2020) study optimal VRES trading strategies on the intraday market in an optimal control setting. Finally, Bertrand and Papavasiliou (2019) use reinforcement learning to optimize a Markovian strategy for an electricity storage on the German intraday market for power.

We contribute to the literature in the following ways:

1. While there is a growing literature investigating the impact of VRES production forecast errors on intraday prices (e.g., Garnier and Madlener 2014; Kiesel and Paraschiv 2017; Kremer et al. 2020a, b; Kulakov and Ziel 2019), we are the first to propose a demonstrably profitable trading strategy based on this observation. We take great care to accurately model market mechanisms, the exact clearing algorithm, and the sequence of information. To the best of our knowledge, Skajaa et al. (2015); Martin and Otterson (2018); Engmark et al.
Since, as discussed above, VRES production is an important driver of short-term electricity trading, we use forecast errors of aggregate VRES production as signals for our strategies. The rationale for this choice is that if forecasts for VRES production are inaccurate, producers have to correct their positions taken on the day-ahead market, which, if the errors are large enough, causes a shift in intraday prices (Kiesel and Paraschiv 2017; Kremer et al. 2020a, b). While the literature on asset-backed trading on intraday power markets is extensive (see for example Boomsma et al. 2014; Kumbartzky et al. 2017; Séguin et al. 2017; Bertrand and Papavasiliou 2019; Wozabal and Rameseder 2020; Rintamäki et al. 2020), there is virtually no research on optimal bidding strategies for speculative traders that have no assets of their own. In the following, we review those papers that come closest to our trading strategies. Skajaa et al. (2015) analyze a wind power producer participating in the continuous intraday market as well as the balancing market using detailed data from the limit order book (LOB) as well as several updated wind power forecasts. Tankov and Tinsi (2021) propose to use repeatedly updated probabilistic forecasts instead of point forecasts for weather-related variables. Sánchez de la Nieta et al. (2020) use several updated weather forecasts for bidding on the Spanish day-ahead market, intraday auctions, and imbalance market. Engmark et al. (2018) propose a trading strategy for a hydro power producer on the day-ahead, intraday, and balancing market. Dideriksen et al. (2019) consider trading strategies for a hydropower producer on the intraday market. Koch (2021) uses the intraday market to build up a position to be cleared on the balancing market, thus arbitraging between the two markets. Kath and Ziel (2018) introduce a forecast for the volume-weighted continuous intraday price for 15-minute contracts and develop a strategy to choose between trading on the day-ahead auction market and the continuous intraday market. Monteiro et al. (2020) evaluate futures trading strategies on the Spanish Mibel market based on long-term electricity futures. Maciejowska et al. (2019) study the problem of a small VRES producer that trades on the day-ahead and the intraday market. Wozabal and Rameseder (2020) propose trading strategies for a storage that arbitrages between Spanish day-ahead and intraday markets. Furthermore, Kath and Ziel (2020) explore optimal order execution strategies with the aim to minimize liquidity cost, and Glas et al. (2019, 2020) study optimal VRES trading strategies on the intraday market in an optimal control setting. Finally, Bertrand and Papavasiliou (2019) use reinforcement learning to optimize a Markovian strategy for an electricity storage on the German intraday market for power. We contribute to the literature in the following ways: 1. While there is a growing literature investigating the impact of VRES production forecast errors on intraday prices (e.g., Garnier and Madlener 2014; Kiesel and Paraschiv 2017; Kremer et al. 2020a, b; Kulakov and Ziel 2019), we are the first to propose a demonstrably profitable trading strategy based on this observation. We take great care to accurately model market mechanisms, the exact clearing algorithm, and the sequence of information. To the best of our knowledge, Skajaa et al. (2015); Martin and Otterson (2018); Engmark et al.
(2018); Bertrand and Papavasiliou (2019); Kuppelwieser and Wozabal (2020); Dideriksen et al. (2019) are the only other papers that capture the realities of continuous trading in similar detail. In particular, apart from Skajaa et al. (2015); Engmark et al. (2018); Bertrand and Papavasiliou (2019); Dideriksen et al. (2019); Koch (2021), this is the first paper that evaluates a trading strategy based on detailed order book data, which is different from the extant literature that discretizes the trading to 1 min or 15 min brackets to be able to deal with the sheer amount of order data (e.g., Glas et al. 2019, 2020; Kath and Ziel 2020). The resulting trading problem is characterized by substantial uncertainties about the future state of the continuous market and a high frequency of arrival of new order information, necessitating a large number of decisions which have to be taken at random points in time. Consequently, given the complex information structure of the problem and the number of decisions to be taken, finding optimal decisions is clearly computationally intractable (Bertrand and Papavasiliou 2019). We therefore propose a non-anticipative parametric policy that yields significant positive profits in controlled out-of-sample experiments and uses sufficiently large forecast errors of renewable production as trading signals. 2. Our results show that intraday power markets are far from efficient. In particular, it is possible to capitalize on information on day-ahead forecast errors of VRES output. This fact suggests that the market disseminates information slowly and in an imperfect manner: While recent results found evidence that intraday electricity markets are weak-form efficient (e.g., Oksuz and Ugurlu 2019; Narajewski and Ziel 2020), our results illustrate that they violate the more restrictive semi-strong version of the efficient market hypothesis, which states that it is impossible to consistently generate abnormal returns using publicly available data (Malkiel and Fama 1970). 3. Next to demonstrating that strategies based on current state-of-the-art forecasting of renewable production are profitable, we quantify the value of a perfect forecast and conclude that there is potential for substantially increased profits from weather-based strategies. This finding suggests that in the future the industry might see an arms race in forecasting, similar to the arms race for speed observed in the financial markets (e.g., Budish et al. 2015). In our numerical case study, we consider the German intraday power market. We first examine the in-sample performance of our policy for 18 months of trading to identify sensible ranges for our parameters and for the timing of trading decisions. We find a trade-off between the quality of the signal that is required to trigger the strategy and the size of the traded position. Generally speaking, profits per trade rise in the quality of the signal. However, if trading is restricted to only those products with high-quality signals, trading occurs infrequently, reducing overall profits. A similar trade-off can be observed for the size of the position: while profits initially rise with larger positions, the marginal profit per additional traded MWh is diminishing due to liquidity costs that increase in order size. Furthermore, we find that one of the most important aspects of the trading strategy is how it deals with the lack of liquidity that plagues intraday power markets.
In particular, a trader that seeks to capitalize on informational advantages in forecasting would ideally want to trade as early as possible on this information. However, since there is usually very little trading activity until 2-3 h before gate closure, such a strategy runs the risk of being unprofitable due to high transaction costs. We show how patient strategies based on a sequence of limit orders can significantly reduce liquidity costs and outperform simpler impatient strategies based on market orders. In an out-of-sample study, we evaluate our strategies for one year of trading. The results show that the proposed policies yield significant positive profits for both hourly and quarter-hourly products, where the former are characterized by larger volumes, higher profits, and more volatile profits per product, while the latter yield lower profits and also trade lower volumes. These differences can mostly be explained by the higher liquidity of hourly products. We show that the potential additional earnings for a strategy which is based on a perfect intraday forecast of VRES production are significant, increasing profits by one order of magnitude or, more specifically, from €200,000 to €2 million for hourly products and from €60,000 to €300,000 for quarter-hourly products. Hence, there is a strong incentive to invest in better forecasts and more frequent updates during the day, a situation which might trigger an arms race in short-term forecasting of renewable output. As opposed to the arms race for speed observed in the share market (e.g., Budish et al. 2015), this development has the potential to increase market liquidity in the early hours of intraday trading, the accuracy of price discovery, and therefore ultimately welfare. The rest of the paper is organized as follows: In Sect. 2, we describe the relevant features of intraday power markets and discuss liquidity and the impact of VRES. Section 3 is dedicated to our trading policy. Section 4 describes the setting of our case study, while Sect. 5 discusses its results. Finally, Sect. 6 concludes the paper and discusses implications as well as avenues for further research.

Intraday markets

In this section, we first describe the typical market design of continuous intraday power markets in Sect. 2.1, focusing on the German continuous intraday market as one of the most liquid markets. Secondly, we discuss the influence of renewable generation on prices in Sect. 2.2. Finally, we investigate market liquidity and its dependency on time to delivery in Sect. 2.3.

Market design

Most spot markets for power consist of a day-ahead market that allows market participants to trade electricity one day ahead of delivery and a short-term market, which gives participants the possibility to adjust their positions until shortly before physical delivery. Short-term markets are usually either organized as real-time markets or as intraday markets. Prominent examples of the former include most US power markets (Milligan et al. 2016), while European short-term markets fall in the latter category (Shinde and Amelin 2019). In Europe, there are currently two competing types of intraday trading systems: auction markets and continuous intraday trading. In 2015, the EU decided on the long-term goal to couple all European intraday markets in a large continuous market in order to facilitate a secure energy supply, competitiveness, and fair prices (European Commission 2015).
While most European countries already transitioned to continuous intraday markets that are compatible with the joint European design, some countries such as Italy, Spain, and Portugal still use auction markets. In this paper, we are interested in continuous intraday markets and for the ease of exposition focus on the European market design and its implementation in Germany hosted by the EPEX, the largest power exchange in Europe (see Viehmann 2017 for a detailed description). However, we note that other markets are very similar in the features crucial for the analysis in this paper. With the build-up of capacities in intermittent and unpredictable production, short-term trading on intraday markets is increasingly gaining traction (EPEX 2020b). As a result, liquidity of the German intraday market has been improving in the last years with growing trading volumes, but also an increased prevalence of automated trading (EPEX 2020b). In particular, due to the short-term nature of the continuous intraday market, marketing of flexible power sources and electricity storage as well as position closing is often relegated to trading algorithms. On the German intraday market, power can be traded on a national market until 30 minutes before physical delivery. After that, the national market closes and four separate markets, one for every control area, open where market participants can trade for delivery in that area until 5 minutes before physical delivery. The intraday market opens shortly after the clearing of the day-ahead market and allows trading of hourly, half-hourly, and quarter-hourly products. Market participants submit orders to the limit order book, which are cleared continuously. If for a market participant the combined orders from spot and futures markets deviate from the actual physical production or consumption at gate closure of the intraday market, the residual quantities are settled on the balancing market. The price charged or paid for these deviations is the so-called symmetric reBAP (Bundesnetzagentur 2012). Each buy and sell order on the intraday market for a given product contains basic information about quantity, limit price, and validity time. A market order is cleared immediately against the best available order in the limit order book, while a limit order is only executed with matching orders on the other side of the market up to a certain price (the limit). If this is not possible, the order is kept in the limit order book until its end-validity time to be cleared with future orders. If the quantities of two matched orders do not agree, the order with the higher order quantity is only partially cleared and remains in the order book with a correspondingly reduced quantity (EPEX 2020a). Market participants can add the usual order qualifiers such as all-or-nothing, immediate-or-cancel, or fill-or-kill. Additionally, iceberg orders are allowed, for which only a fraction of the order quantity is visible to other market participants. As soon as the visible quantity is cleared, the next part of the order is automatically placed in the LOB (EPEX 2020a). The state of the LOB changes with the placement of a new order, with the modification of an order, and at the end-validity time of an active order. The limit price of the order with the lowest sell price is called the best-ask, while the order with the highest buy price defines the best-bid, and the difference between the two prices is the bid-ask spread.
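To make these order book mechanics concrete, the following minimal Python sketch maintains the two sides of a limit order book and reports the best-bid, best-ask, and bid-ask spread. It is an illustrative toy, not the EPEX trading system; all class and method names are our own assumptions.

```python
import heapq

class MiniOrderBook:
    """Toy limit order book tracking best-bid, best-ask, and the spread."""

    def __init__(self):
        self._bids = []  # max-heap via negated prices: (-price, quantity)
        self._asks = []  # min-heap: (price, quantity)

    def add(self, side, price, quantity):
        if side == "buy":
            heapq.heappush(self._bids, (-price, quantity))
        else:
            heapq.heappush(self._asks, (price, quantity))

    @property
    def best_bid(self):
        return -self._bids[0][0] if self._bids else None

    @property
    def best_ask(self):
        return self._asks[0][0] if self._asks else None

    @property
    def spread(self):
        if self._bids and self._asks:
            return self.best_ask - self.best_bid
        return None

book = MiniOrderBook()
book.add("buy", 41.50, 10.0)   # bid: buy up to 10 MWh at at most 41.50 EUR/MWh
book.add("sell", 44.00, 5.0)   # ask: sell 5 MWh at no less than 44.00 EUR/MWh
print(book.best_bid, book.best_ask, book.spread)  # 41.5 44.0 2.5
```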
The influence of renewable generation

Because electricity is bought by most consumers for a price that is only infrequently updated, short-term consumption is inelastic. Furthermore, due to limited storage, supply and demand have to be matched instantaneously. Consequently, supply and demand shocks can lead to massive shifts in short-term prices (Weron 2014). One frequent source of supply shocks is the deviation of produced wind and solar power from its forecasted levels. Typically, owners of VRES sell electricity on the day-ahead market one day before delivery based on forecasts of wind speeds and solar irradiation. If those forecasts turn out to be incorrect, the residual quantities have to be traded on the intraday market or resolved on the balancing markets. Since the latter offers less favorable prices, VRES producers have an incentive to balance forecast errors on the intraday market as best as they can. In particular, if a trader sold too much energy on the day-ahead market, she will try to buy back the missing energy on the continuous intraday market as soon as more accurate forecasts become available and the error becomes apparent, thereby increasing demand. An analogous situation occurs if too little energy was sold, which induces an increased supply, leading to downward pressure on the intraday prices. Due to the rapid expansion of VRES capacities in many countries and the high correlation of forecast errors for VRES production within a market zone, large unexpected aggregate deviations from production forecasts are frequently observed and significantly influence the intraday price (Karanfil and Li 2017; Kiesel and Paraschiv 2017; Goodarzi et al. 2019; Kulakov and Ziel 2019; Hu et al. 2021; Spodniak et al. 2021). Traditionally, weather forecasts are based on large, computationally expensive models that depend on satellite images and high-altitude measurements from planes and weather balloons, which are only collected every couple of hours. These forecasts are therefore updated too infrequently to be used as inputs for algorithmic trading strategies on the intraday market. However, recently, several providers have specialized in combining these traditional global weather forecasts with real-time production data and local weather models to offer frequent updates of forecasts for renewable production of single plants. Currently, there are many providers such as Enfor, ConWX, Meteologica, Gnarum, enercast, weathernews, or windsim that compete to provide more accurate VRES power production forecasts and more frequent updates.

The role of liquidity

Liquid markets are necessary for the successful implementation of the trading strategies considered in this paper. The observations in this section therefore inform the discussions in the later sections. For a more comprehensive treatment of the liquidity of the German intraday market, we refer to Kuppelwieser and Wozabal (2020). Liquid markets allow trading for fair prices at low transaction costs and with little scope for price manipulation by dominant players. While traded volumes on the German continuous intraday market have been continuously increasing in the last years, the liquidity of the market is still rather limited at times. Most orders are placed shortly before the market closes; consequently, liquidity is typically low at the beginning of the trading session, increases toward physical delivery, and decreases again shortly before the market closes. As can be seen by comparing panel 1 with panels 2 and 3 of Fig.
1, the liquidity of the intraday power market is significantly worse than that of financial markets. The comparison reveals that, relative to the price, the bid-ask spread for a share of a large company is roughly 50 times smaller than the bid-ask spread of the continuous power market during its most liquid period. Inspecting the lower two plots depicting bid and ask prices on the German intraday market for a typical trading session of an hourly product, we recognize the characteristic L-shape in the bid-ask spread with large differences between the two prices which suddenly falls to a low value close to delivery, as also observed by Balardy (2018). We note that the market for half-hourly and quarter-hourly products is even thinner than that for hourly products (e.g., Narajewski and Ziel 2020). The comparison of the two plots in panels 2 and 3 reveals evidence for an increase in liquidity between the years 2017 and 2018. Finally, the high volatility of the intraday price during the trading session makes the market attractive for speculative trading.

Trading strategy

Our trading strategy rests on the assumption that a large number of VRES plants sell their forecasted production on the day-ahead market and use the intraday market to rebalance their positions so as to take into account updated production forecasts on the day of delivery. The idea behind the strategies discussed in this section is to capitalize on early intraday updates of aggregate VRES production forecasts for the whole of Germany by anticipating the direction of the correction in prices. To get an accurate measurement of profits, we evaluate the proposed strategy based on detailed limit order book data. In particular, we do not merely rely on tick data or a discretized version of the market as, for example, in Glas et al. (2019, 2020); Kath and Ziel (2020), but take into account the exact rules of continuous intraday market clearing as well as detailed data on orders by other market participants to calculate the price at which we buy and sell electricity. We are interested in trading strategies that work without physical assets or electricity demand, implying that every product has to be traded separately and positions have to be closed before gate closure.

(Fig. 1 Financial markets vs EPEX SPOT: The three plots show the best-bid and the best-ask of one trading session. The upper plot shows the Amazon share (AMZN) traded on Nasdaq, the middle plot shows prices for the product H12, which delivers power from 11:00 to 12:00 on the 12.12.2018, as traded on EPEX, and the lower plot shows the same product one year later to highlight the increase in trading activity. The data on the Amazon share has been obtained from LobsterData, https://lobsterdata.com/.)

We base our algorithms for the product that delivers electricity in period t on the updates in the forecast of renewable production s hours before delivery,

$\delta^s_t = f^{DA}_t - f^s_t$, (1)

where $f^{DA}_t$ is the day-ahead forecast of renewable production in t while $f^s_t$ is the updated forecast at time t − s. The quantity $\delta^s_t$ is thus the best estimate of the forecast error in aggregate VRES production at time t which is available at time t − s. We adopt the convention that $f^0_t$ is the actual production, making $\delta^0_t$ the true forecast error of the day-ahead forecast.
Our algorithm takes the form of a classic algorithmic trading strategy on financial markets and uses $\delta^s_t$ as a signal that can be used to infer a change in the fundamental value of the product, i.e., electricity to be delivered in period t. This is based on the assumption that traders that first become aware of the errors in forecasts can capitalize on this knowledge by trading accordingly. For example, as a result of a positive $\delta^s_t$, a trader would buy electricity on the intraday market anticipating a rise in prices once the rest of the market becomes aware of the shortage. However, unlike signals in financial markets like earnings announcements or prices of other assets, which can be regarded as public information as soon as they are revealed, information on VRES forecast errors is gradually improved as increasingly better forecasts become available. In particular, the notion of a trader reacting first makes much less sense than for signals typically used for high-frequency trading on share markets, since orders cannot be placed as soon as information arrives and the decision when to act on updated forecasts becomes important. Traders thus face a trade-off between the reliability of the signal and the speed of the reaction. To define our strategy, we specify a traded quantity, a price for which we place orders, as well as the timing of orders. We depict the sequence of events in Fig. 2. The strategy is triggered by the arrival of a new forecast for VRES production at time $t_1$, which is a pre-defined length of time s before delivery of a product t, i.e., $t_1 = t - s$. If the forecast error $\delta^s_t$ is large enough, we build up a position in the time interval $[t_1, t_2]$. Subsequently, we hold the position until $t_3 > t_2$ and finally unwind the position in the time interval $[t_3, t_4]$, where $t_4$ is close to gate closure. Note that since we assume that the trader does not have a physical asset, we require the position to be closed at the end of trading to avoid open positions on the balancing market. More specifically, we open a position of size $V^\pm > 0$ if the signal $\delta^s_t$ observed at time $t_1$ exceeds a threshold $\theta^\pm$ depending on the sign of the deviation. We thus define the traded quantity at time $t_1$ as

$x_{t_1} = V^+$ if $\delta^s_t > \theta^+$, $x_{t_1} = -V^-$ if $\delta^s_t < -\theta^-$, and $x_{t_1} = 0$ otherwise, (2)

where positive quantities correspond to buying of electricity, i.e., we buy $V^+$ MWh of electricity if forecasts are corrected downward by more than a threshold $\theta^+$. Apart from the traded quantity $V^\pm$, we also need to specify a price to place an order. We investigate two strategies: an impatient strategy using market orders and a patient strategy based on limit orders. If market orders are used, the price is set to ±9,999 €/MWh, which is the maximum/minimum price the trading system allows, i.e., the quantity $x_{t_1}$ is always immediately cleared at time $t_1$ regardless of the price, provided the order book on the opposing side of the market is not too small to cover the full quantity $x_{t_1}$. If a market order cannot be (fully) cleared due to a lack of market depth, we remove it from the order book and the second trading phase operates with the correspondingly smaller position. Similarly, at time $t_4$ the position is closed using market orders. Choosing this impatient strategy thus makes sure that a position is opened as soon as possible and closed at the last possible moment. The downside is that if market depth is insufficient, trading might happen at unfavorable prices.
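A minimal Python sketch of the trigger rule in (1) and (2), assuming forecasts and volumes are given in MWh; the function names and the example numbers are illustrative assumptions, not values from the paper.

```python
def signal(f_da, f_s):
    """Estimated day-ahead forecast error delta_t^s = f_t^DA - f_t^s (MWh)."""
    return f_da - f_s

def opening_quantity(delta, theta_plus, theta_minus, v_plus, v_minus):
    """Signed quantity x_{t1} as in (2): positive = buy, negative = sell."""
    if delta > theta_plus:      # production corrected downward -> expected shortage -> buy
        return v_plus
    if delta < -theta_minus:    # production corrected upward -> expected surplus -> sell
        return -v_minus
    return 0.0

# Day-ahead forecast 10 GWh, 8-hour update 8.8 GWh -> 1,200 MWh shortage signal:
delta = signal(10_000.0, 8_800.0)
print(opening_quantity(delta, theta_plus=800.0, theta_minus=800.0,
                       v_plus=100.0, v_minus=100.0))  # 100.0 -> buy 100 MWh
```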
In contrast, the patient strategy places limit orders and accepts a delay in order execution in exchange for potentially more favorable prices. The strategy places an order that outbids the other orders in the system by a small margin $\epsilon > 0$. For example, if $\delta^s_t > \theta^+$, i.e., we are seeking to buy, we set the price to be the best bid plus $\epsilon$ €. If an order with a higher price is added to the order stack at time $t'$ with $t_1 < t' < t_2$ by another party, we update the price of our order to ensure that we outbid the best bid by $\epsilon$ €. We continue in this fashion until either the whole quantity is traded or time $t_2 > t_1$ is reached, at which point we remove the order from the system. We start closing the position at $t_3$ by again setting the price such that the order is on top of the respective side of the order book and update prices as new orders arrive. Finally, if the position is not closed at time $t_4 > t_3$, we place a market order to close the position. If the order cannot be fully cleared against orders in the LOB at $t_4$, the rest of the order is cancelled and the residual quantity is cleared on the balancing market. Note that, as opposed to the patient strategy, the impatient strategy incurs the full bid-ask spread. For example, if the intention is to buy, then an order on the ask side of the market is accepted instead of placing orders on the bid side as is done when using limit orders. Similarly, when closing the position with a market order, an existing bid is accepted instead of placing an ask order in the system. Hence, loosely speaking, the patient strategy avoids the bid-ask spread at the price of delayed order execution. In order to calculate the resulting profit, we denote by $T_1$ the set of time points at which the LOB changes in the period $[t_1, t_2]$, by $T_2$ the set of time points when the LOB changes after $t_3$ until the end of trading of the product at $t_4$, and by $V_\tau$ the quantity traded as a consequence of order stack changes at times $\tau \in T := T_1 \cup T_2$, where we count purchased quantities as positive. Further, for $\tau \in T$, we denote by $P_\tau$ the volume-weighted average per-MWh price at which the quantity $V_\tau$ is traded. The profit and loss of the strategy in period t can thus be calculated as

$\Pi_t = -\sum_{\tau \in T} P_\tau V_\tau + R_t \sum_{\tau \in T} V_\tau - F \sum_{\tau \in T} |V_\tau|$, (3)

where $R_t$ is the symmetric balancing market price for period t and F is the per-MWh trading fee. Note that fees on the EPEX are exclusively payable for cleared volumes while modifications of limit orders are not charged. However, the number of modifications is limited to avoid an overload of the trading system. For this purpose, the order-to-trade ratio (OTR), defined by the number of order changes divided by the number of placed orders, is limited to 100 by the EPEX. Note that the above profit does not account for the cost of the forecast. Hence, $\Pi_t$ can also be interpreted as the maximum price a speculative trader that employs the above strategy would be willing to pay for the forecast.

Case study: setup and data

In this section, we discuss the LOB data and the forecasts of renewable output that we use in the case study in Sects. 4.1 and 4.2, respectively. In Sect. 4.3, we discuss how we use the data to calibrate the parameters of our strategy.

Limit order book data

We use German LOB data for the years 2017 and 2018 as input for the clearing algorithm. The data consists of all submitted orders including information on order changes with timestamps in millisecond resolution. To test our strategies, we implement the exact EPEX clearing algorithm in Java.
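As an illustration of the continuous clearing used in the backtest, the sketch below matches an incoming buy market order against a price-sorted ask side with price-time priority and partial fills. This is a deliberately simplified stand-in for the exact EPEX algorithm; names and the data layout are our own assumptions.

```python
def clear_buy_market_order(asks, qty):
    """Clear a buy market order against a price-sorted ask side.

    asks: list of (price, quantity) tuples sorted by price and then arrival,
    i.e., price-time priority. Returns (fills, residual); a partially filled
    resting order stays in the book with its quantity reduced.
    """
    fills, i = [], 0
    while qty > 0 and i < len(asks):
        price, avail = asks[i]
        take = min(qty, avail)
        fills.append((price, take))
        qty -= take
        if take == avail:
            i += 1                           # resting order fully cleared
        else:
            asks[i] = (price, avail - take)  # partial fill remains in the book
    del asks[:i]                             # drop fully cleared orders
    return fills, qty

asks = [(42.0, 30.0), (43.5, 100.0)]
print(clear_buy_market_order(asks, 50.0))  # ([(42.0, 30.0), (43.5, 20.0)], 0.0)
print(asks)                                # [(43.5, 80.0)]
```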
To enable a concise discussion of results, we limit our attention to hourly and quarter-hourly products and do not consider half-hourly products. Since intraday markets in Europe are increasingly interconnected, some orders in our observation period are cleared against orders from neighboring countries at times when transmission capacities permit cross-border trading. We use the same idea as Martin and Otterson (2018) to deal with this issue by reconstructing the corresponding foreign orders using the clearing logs included with the limit order book data. In particular, we check for a counterpart for each executed order in the German LOB. If such a counterpart cannot be found, we add an order with the corresponding price and quantity to the German order book as described in Martin and Otterson (2018), making sure that we can reconstruct published prices with our clearing algorithm. In the considered period there are 47,000,560 orders for hourly products, 1,405,055 (2.9%) of which were cleared against foreign orders. For quarter-hourly products there are 139,169,564 orders, with 1,495,763 (1.06%) of orders cleared against orders from other markets. We identify orders for which order quantities are updated immediately after the volume was fully cleared as iceberg orders. These orders are treated as iceberg orders in our algorithm, with the overall quantity that is seen in cleared trades. The algorithm calculates a clearing at each modification of the limit order book, i.e., if a new order is added, an active order is updated, or an order reaches its end-validity time. If multiple orders with the same price arrive simultaneously, orders with lower IDs are cleared first. Similar to the results in Martin and Otterson (2018), the prices and cleared quantities computed by our clearing algorithm show a good match with the historical transaction data published by the EPEX. We thus are able to accurately evaluate how the market would have cleared additional orders added to the LOB by our trading strategies, which enables us to conduct a historical backtesting.

Forecasts of renewable output

In order to execute our strategies, we require the signals $\delta^s_t$ defined in (1), which are based on aggregated historical forecasts of solar and wind power production in Germany kindly provided by Meteologica. Our data consists of day-ahead forecasts available at 11 a.m. the day before delivery, the latest available intraday forecast before gate closure, and intraday forecasts with an offset of 8, 5, and 3 hours before the delivery of a product from July 2017 until December 2018. To assess the forecast errors, we use data on realized production of solar plants and wind parks for the four German control areas as provided by ENTSOE. Box plots of the forecast errors are provided in Fig. 3. We observe an increasing average accuracy with smaller offsets as better weather forecasts and measurements of realized production become available. Our strategy is based on the expectation that errors in day-ahead forecasts are predominantly traded on the intraday market and therefore have the potential to change intraday prices for power, i.e., can be used as a valid signal for changes in the true fundamental value of the product. Consequently, for our strategy, the most important aspect of forecasts is whether the sign of the error of the day-ahead VRES forecast can be predicted from the updated intraday forecasts.
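The sign-prediction analysis of the following paragraph (Table 1) can be reproduced along these lines; the synthetic data and the threshold grid below are assumptions for illustration only.

```python
import numpy as np

def hit_rate(signal, true_error, min_abs_signal):
    """Share of products where the signal predicts the sign of the true
    day-ahead forecast error, restricted to |signal| >= min_abs_signal."""
    signal, true_error = np.asarray(signal), np.asarray(true_error)
    mask = np.abs(signal) >= min_abs_signal
    if not mask.any():
        return float("nan")
    return float(np.mean(np.sign(signal[mask]) == np.sign(true_error[mask])))

# Synthetic stand-in for delta_t^0 and delta_t^8 (MWh); the real data would come
# from day-ahead forecasts, intraday updates, and realized production.
rng = np.random.default_rng(0)
true_err = rng.normal(0.0, 1000.0, size=10_000)
signal_8h = true_err + rng.normal(0.0, 400.0, size=10_000)
for threshold in (0, 500, 1000):
    print(threshold, round(hit_rate(signal_8h, true_err, threshold), 3))
```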
We investigate this aspect in Table 1, which displays how often the sign of the forecast error $\delta^0_t$ is correctly predicted by $\delta^s_t$ depending on the magnitude of the signal, i.e., $|\delta^s_t|$. In line with expectations, the precision of the forecast increases as the data is restricted to products with higher absolute values of $\delta^s_t$ for all s and for both types of products. It can also be observed that a shorter time to gate closure yields a consistently higher hit rate. However, the increase in accuracy is only moderate. Hence, it seems that earlier signals are not much worse, while at the same time giving traders more time to react and ensuring that the resulting trades are among the first that are based on updated information. Finally, comparing hourly with quarter-hourly products, we observe that the latter yield worse forecasts of the sign of $\delta^0_t$ in most cases, but the differences are minute.

(Fig. 3 Forecast errors of intraday forecasts for hourly and quarter-hourly products traded on the German intraday power market between July 2017 and December 2018; one panel shows the quarter-hourly forecast errors of intraday updates for wind and PV. The best forecast refers to the last forecast before delivery, whose exact timing varies slightly with the product.)

Calibration and evaluation of the policy

We generate counterfactual profits for our strategies in an as-if valuation of market clearing based on the available LOB data. To that end, we inject orders generated by the trading strategy introduced in Sect. 3 into the order book and then clear the market according to the rules of continuous trading. Note that this introduces changes relative to the historically observed traded quantities and prices and yields the profits that could have been made if the strategy had been used. Of course, a limitation of these experiments is that, by the very nature of our analysis and the available data, we cannot take into account the effect that the orders placed by the strategy would have had on the behavior of other market participants. As discussed in the previous subsection, we use data on intraday updates of day-ahead forecasts for VRES production as signals for our strategy. Based on a preliminary analysis of trading profits and in order to facilitate the discussion of results, we only use the forecast published 8 hours before delivery for our policies, i.e., consider $\delta^8_t$ as the signal. This is also supported by the results in Sect. 4.2, which show only a moderate improvement of the hit rate for later forecasts. Furthermore, the choice of $\delta^8_t$ has two additional advantages: Firstly, it allows the policy to start trading relatively early on the updated information before most other traders update their expectations on renewable production. Secondly, the long period from the arrival of the forecast until gate closure gives the patient strategy ample time to build up the position and thereby avoid excessive liquidity costs. We thus fix the time $t_1$ to start the algorithm at 8 hours before delivery and set $t_2$ such that the policy has 5 hours to build up the position. After that, the policy waits for 115 minutes and then starts closing the position at $t_3$, 65 minutes before delivery. If the position is not closed at $t_4$, 35 minutes before delivery, we place a market order to close the remaining position. Note that since the liquidity shortly before gate closure is markedly better than in the early hours of trading, we are able to choose the interval $[t_3, t_4]$ relatively short in comparison to $[t_1, t_2]$.
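To make the build-up and unwind phases concrete, the sketch below shows the price-updating rule of the patient strategy and a profit computation in the spirit of (3). The event handling is heavily simplified, and all names as well as the sign convention (buys positive) are our own assumptions.

```python
def patient_quote(side, best_bid, best_ask, eps=0.01):
    """Price for a patient limit order that sits on top of our side of the book."""
    if side == "buy":
        return best_bid + eps   # outbid the current best bid by eps
    return best_ask - eps       # undercut the current best ask by eps

def product_pnl(trades, open_position, rebap, fee=0.125):
    """Profit of one product: trades is a list of (price, signed_qty) with
    signed_qty > 0 for buys; any residual open position settles at the reBAP."""
    cash = -sum(price * qty for price, qty in trades)  # pay for buys, earn from sells
    fees = fee * sum(abs(qty) for _, qty in trades)    # fee per cleared MWh
    return cash + rebap * open_position - fees

print(round(patient_quote("buy", best_bid=39.90, best_ask=40.50), 2))  # 39.91
# Buy 50 MWh at 40, sell all 50 MWh back at 47, nothing left open:
print(product_pnl([(40.0, 50.0), (47.0, -50.0)], open_position=0.0, rebap=45.0))  # 337.5
```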
The choice of timing and the 8-hour forecast as signal remains constant for all hourly and quarter-hourly products and all variants of the strategy. Having fixed $t_1, \dots, t_4$, we optimize our strategies by choosing the remaining parameters $\theta^\pm = (\theta^+, \theta^-)$ and $V^\pm = (V^+, V^-)$ to maximize profits using historical training data on days $d \in D_1$. In particular, we define a set of possible thresholds $L = \{100 \cdot i : 0 \le i \le 20\} \subseteq \mathbb{N}$ and a set of volumes to be traded $\mathcal{V} = \{1, 5\} \cup \{10 \cdot i : 1 \le i \le 30\} \subseteq \mathbb{N}$ for hourly products and $\mathcal{V} = \{1, 2, 3, 4\} \cup \{5 \cdot i : 1 \le i \le 6\} \subseteq \mathbb{N}$ for quarter-hourly products. We then use a simple grid search separately for hourly and quarter-hourly products to solve

$\max_{\theta^\pm \in L^2,\, V^\pm \in \mathcal{V}^2} \sum_{d \in D_1} \Pi_d(\theta^\pm, V^\pm)$, (4)

where $\Pi_d(\theta^\pm, V^\pm)$ is the sum of profits $\Pi_t$ as defined in (3) for all products t that go into delivery on day d using the parameters $V^\pm$ and $\theta^\pm$. For the calculation, we set the trading fees to 0.125 €/MWh (EPEX 2020a) and use the quarter-hourly reBAP prices available from https://www.regelleistung.net/ as balancing prices. The choice of $\theta^\pm$ determines whether the algorithm acts on relatively weak signals, i.e., small values of $\delta^s_t$, or whether a strong signal is required to open a position at $t_1$. Clearly, for small $\theta^\pm$ the strategy trades products for which the forecast error might only have a small effect on prices, resulting in a high chance that prices move in the opposite direction due to the influence of other factors such as plant outages or changes in demand. Furthermore, for small estimates of the forecast error $\delta^s_t$, the probability that the actual forecast error $\delta^0_t$ has the opposing sign is significantly greater than for larger forecast errors, as illustrated in the discussion in Sect. 4.2. For example, if $\delta^8_t$ takes a small positive value 8 hours before delivery, forecasting that there will be a shortage in production, the actual day-ahead forecast error $\delta^0_t$ might still be negative, i.e., VRES producers might be long. In contrast, larger values of $\theta^\pm$ make the strategy react only to strong signals, increasing the chance that forecast errors $\delta^0_t$ have the same sign as $\delta^8_t$ and are driving prices in the anticipated direction in the time window $[t_3, t_4]$. However, if $\theta^\pm$ is chosen too large, then the strategy will rarely open a position, decreasing overall profits. The optimization in (4) thus seeks to navigate this trade-off by choosing optimal parameters $\theta^\pm$. The second set of parameters chosen in (4) are the traded volumes $V^\pm$. Large volumes will generate large profits if signals are reliable and the liquidity of the market is high, while small orders that incur less transaction costs are preferable if markets are illiquid. Note that due to the rules for building up a position, it might be that even though $V^\pm$ is large, only smaller quantities are actually traded in some hours where the market is illiquid. In the next section, we will investigate profits obtained from applying our policy calibrated using a set of training days $D_1$ to some (possibly) different set of days $D_2$, which are used as test data. If $D_1 = D_2$, then the measured profits are in-sample profits, i.e., the policy is calibrated using the same data that is used to evaluate profits. If $D_1 \cap D_2 = \emptyset$, the profits for the days $D_2$ are out-of-sample profits.
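A sketch of the calibration in (4) as a plain grid search; since products with positive and negative signals are traded independently, each side's (threshold, volume) pair can be searched with the same routine. The function `training_profit` is a toy stand-in for the full order book backtest over the days in $D_1$ and not part of the paper's code.

```python
from itertools import product

THRESHOLDS = [100 * i for i in range(21)]           # L = {0, 100, ..., 2000} MWh
VOLUMES = [1, 5] + [10 * i for i in range(1, 31)]   # candidate volumes, hourly products

def training_profit(theta, volume):
    """Toy stand-in for the summed backtest profits over the training days D1;
    the real version replays the limit order book for every product."""
    return -((theta - 600) ** 2) / 1e3 - ((volume - 100) ** 2) / 1e2

def calibrate():
    # (theta+, V+) and (theta-, V-) can each be grid-searched separately,
    # since the profits of long- and short-triggered products are additive.
    return max(product(THRESHOLDS, VOLUMES),
               key=lambda pair: training_profit(*pair))

print(calibrate())  # -> (600, 100) for the toy objective above
```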
Results and discussion

In this section, we present the results of a case study using 1.5 years of German LOB data from 01.07.2017 until 31.12.2018. In Sect. 5.1, we explore the in-sample profits made by optimally parameterized patient and impatient policies for hourly and quarter-hourly contracts using both the actual forecast error $\delta^0_t$ as well as $\delta^8_t$. In Sect. 5.2, we focus on the more profitable patient strategies and partition the data into calibration and test sets, optimizing implementable policies, which we evaluate out-of-sample for the year 2018. We consider exclusively products where the day-ahead forecast, the 8-hour-ahead forecast, as well as the actual production of renewables are available. Furthermore, we exclude the third hour on the 29.10.2017 and 28.10.2018 due to data problems connected with daylight saving time and the whole of the 27.10.2018 due to missing LOB data. Additionally, we exclude 69 hourly and 190 quarter-hourly products due to an empty LOB shortly before the market closes. This leaves us with 12,492 hourly and 50,055 quarter-hourly products for the period between 01.07.2017 and 31.12.2018, excluding in total 5% of hourly products and 4.85% of quarter-hourly products.

In-sample results

In this section, we analyze the optimal parameter choice for $V^\pm$ and $\theta^\pm$ as well as optimal profits, setting both the training data, $D_1$, and the test data, $D_2$, to the period ranging from 01.07.2017 to 31.12.2018. Since we use the same data to calibrate the parameters and calculate the profits, the resulting optimal policy violates non-anticipativity and is therefore not practically implementable. In particular, in reality, a trader is forced to choose a trading strategy ex ante, without knowing market outcomes. The results in this section can therefore be regarded as an in-sample evaluation of optimal profits. As discussed in the previous section, we start building up a position 8 hours before delivery for every hourly and quarter-hourly product in the observation period and optimize both the patient and the impatient trading strategy. To that end, we evaluate the profit separately for products with positive and negative forecast errors for the 21 × 32 = 672 (for hourly products) and 21 × 10 = 210 (for quarter-hourly products) parameter combinations in $L \times \mathcal{V}$. The parameters of the policy are kept constant for all products in the observation period. We start by analyzing the patient strategies based on actual forecast errors $\delta^0_t$. Figure 4 shows how the choice of parameters influences the profits for the patient strategy, with the red triangles marking the maximum profit. Observing results for fixed thresholds $\theta^\pm$, it can be seen that, as expected, higher volumes lead to higher overall profits, but due to limited liquidity, the increase is not linear and from a certain threshold on, there is even a decrease in profits for increasing $V^\pm$. Similarly, there is a sweet spot for the required strength of the signal: Profits are initially rising in the threshold $\theta^\pm$ and then start to fall again, illustrating the trade-off between frequent trading on weaker signals and infrequent trading on stronger signals. The profits and the optimal parameter choices for the considered policies are listed in the first panel of Table 2. The results show that, at least in-sample, a trading strategy that is based on a hypothetical 100% accurate intraday update of the day-ahead forecast of renewable output yields significant positive profits for both hourly and quarter-hourly products. Looking at the profits in detail, two observations can be made. Firstly, hourly contracts are one order of magnitude more profitable than quarter-hourly contracts, although there are 4 times more products of the latter.
(Fig. 4 Optimal profits of the patient trader for real forecast errors for hourly products (above) and quarter-hourly products (below).)

Looking at the optimal parameter choices and in particular at the low quantities traded for quarter-hourly products, it becomes clear that this is mostly due to missing liquidity for quarter-hourly products, which starts to affect profits already at much lower volumes than is the case for hourly products. Secondly, we can observe that the patient trading strategy based on limit orders performs significantly better than the impatient strategy which places market orders. In particular, the results suggest that the impatient strategy does not work at all for quarter-hourly products and only produces moderate profits for hourly products. Again, this is due to the high liquidity costs in the market, which have to be fully borne by the impatient strategy. Next, we analyze the policy for the more realistic case that the signal is based on an updated forecast instead of the actual production, i.e., we use $\delta^8_t$ instead of $\delta^0_t$ as a signal. We again plot the relationship of the parameters of the patient strategy and the profit in Fig. 5. The plot exhibits many of the same characteristics as Fig. 4, with the difference that higher volumes $V^\pm$ lead more quickly to lower profits, i.e., optimal volumes tend to be smaller. This is due to the lower quality of the signal, which in many cases leads to a lower than expected forecast error, causing losses for policies that bid too aggressively based on $\delta^8_t$. Turning to the value of the strategy in panel 2 of Table 2, we observe that, compared to the strategy based on $\delta^0_t$, profits are significantly lower for the patient trader and stagnate at low levels for the impatient trader. Again, as for $\delta^0_t$, the hourly strategies yield higher profits, but the relative gap is smaller than for the perfect forecast. Although the signal is of a lower quality, surprisingly, the optimal parameters are rather similar to those found for $\delta^0_t$, although optimal volumes tend to be slightly lower, explaining parts of the lower profits. The difference between the profits of the strategies based on $\delta^0_t$ and $\delta^8_t$ can be interpreted as a lower bound on the value of improved forecasting, which is substantial for the patient trader. To put the profits in perspective, we evaluate daily capital requirements as the sum of the cost of opening the positions for all products traded on a day, netting out positive and negative costs. The results are displayed in Table 3 and indicate that, on average, the strategy requires a negative amount of capital with low positive maximal values. The profits displayed in Table 2 can therefore be realized with a small amount of risk capital and offer a high return on investment.

Out-of-sample results

In this section, we evaluate strategies out-of-sample in the time period from 01.01.2018 until 31.12.2018. More specifically, we study non-anticipative strategies, i.e., make sure that decisions at any point in time only depend on information available at that time (Shapiro et al. 2009). Since the impatient strategy performs poorly in-sample, we exclusively focus on the patient strategy for the experiments in this section. We use a rolling window setting for the out-of-sample evaluation and re-optimize the parameters $\theta^\pm$ and $V^\pm$ every day using the last six months of data for the calibration.
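The rolling-window evaluation, detailed in the next paragraph, can be organized as in the following sketch; the date handling and the stand-in `calibrate` and `evaluate` functions are illustrative assumptions, not the actual backtest.

```python
from datetime import date, timedelta

def rolling_backtest(first_day, last_day, calibrate, evaluate, window_days=180):
    """Re-calibrate daily on the trailing window, then trade out-of-sample."""
    day, daily_profits = first_day, []
    while day <= last_day:
        train_start = day - timedelta(days=window_days)
        params = calibrate(train_start, day - timedelta(days=1))  # training window
        daily_profits.append(evaluate(day, params))               # out-of-sample day
        day += timedelta(days=1)
    return daily_profits

# Stand-in calibrate/evaluate functions; the real ones run the grid search (4)
# and the order book replay, respectively.
profits = rolling_backtest(
    date(2018, 1, 1), date(2018, 12, 31),
    calibrate=lambda start, end: {"theta": 600, "volume": 100},
    evaluate=lambda day, params: 0.0,
)
print(len(profits))  # 365 evaluated days
```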
More specifically, we start our evaluation on the 01.01.2018 using 180 days of training data spanning the period from 04.07.2017 until 30.12.2017 to calibrate $\theta^\pm$ and $V^\pm$ by grid search as in (4). We then evaluate the profits of the resulting strategy on the 01.01.2018 and proceed to the 02.01.2018 by including the 31.12.2017 in the training sample while removing the 04.07.2017 and retraining our policy to obtain out-of-sample profits for the 02.01.2018. In this manner, we build up out-of-sample profits for every product traded in the year 2018. Figure 6 shows the results of our experiment for hourly products. The first panel displays the development of cumulative profits of the strategy based on the signals $\delta^8_t$ and $\delta^0_t$. Looking at the graph for $\delta^8_t$, it becomes clear that while profits over one year of trading are significantly positive and close to €200,000, there are single days with large losses and extended time periods where the strategy did not generate profits. Comparing with the profits based on $\delta^0_t$, we see that, as in the in-sample results, a perfect intraday update of the forecast increases the profits by one order of magnitude. Furthermore, the strategy that is based on $\delta^0_t$ exhibits a much smoother increase in cumulative profits with fewer losses. This suggests that the losses for $\delta^8_t$ are mainly due to inaccurate forecasts and implies that better forecasts can not only increase the profits of the strategy but also reduce the variance of daily profits and therefore the inherent risk of trading. Turning our attention to panels 2 and 3 of Fig. 6, which display the size and the value of the open position after time $t_2$ for the strategy based on $\delta^8_t$, we see that the strategy takes long and short positions of up to 200 MWh with a roughly equal share of long and short positions. The position values suggest that the capital at risk for single products does not exceed €20,000. It can also be observed that there is a change in the strategy within the observation period: in the first few months the algorithm triggers frequently, and short positions tend to be smaller than long positions. In the summer months, there is generally less trading activity, possibly due to lower wind production, which leads to smaller forecast errors. Finally, the last panel of Fig. 6 displays netted daily payments from balancing for products for which the position cannot be closed until gate closure. As can be seen, there are only 7 days with a requirement for balancing. In most of these instances the payment is negative, i.e., the trader has to pay the grid operator for balancing. However, as balancing is rare and none of the single payments to the balancing market exceed €5,000, we conclude that balancing fees are not a major driver of profits for the chosen strategy. Figure 7 presents an analogous analysis for trading of quarter-hourly products. The plot of the cumulative profits of the strategy reveals that, consistent with the in-sample results, the strategy is less profitable for quarter-hourly products than for hourly products. As with the in-sample results and the results on hourly products, the strategy based on the perfect forecast is one order of magnitude more profitable than the strategy based on $\delta^8_t$ and at the same time is less volatile.
A closer look at the cumulative profits over time reveals that, although the trading of quarter-hourly products yields only roughly one fourth of the profits that can be earned with hourly products, individual earnings for each product fluctuate much less than in the case of hourly products. This is due to the generally smaller positions taken by the optimal strategies, which lead to less exposure to market risk as evidenced by panels 2 and 3 of Fig. 7. Observing these plots also reveals that there are fewer seasonal trends in the traded quantities for the quarter-hourly strategy. Finally, the last panel of the figure documents that, similar to the case for hourly products, balancing occurs infrequently and therefore only plays a minor role. Table 4 provides detailed figures for overall profits, balancing costs, and summary statistics for profits per product for both hourly and quarter-hourly trading. Looking at the summary statistics of profits per product confirms that trading quarter-hourly products yields profits with a lower dispersion and therefore a lower capital requirement. Furthermore, conducting t-tests, we see that all average per-product profits are significantly greater than zero at least at the 0.05% level and, due to their lower standard deviation, the significance is greatly increased for quarter-hourly products. We observe that the number of traded products is nearly twice as high for the strategies based on $\delta^0_t$ as opposed to $\delta^8_t$. Furthermore, due to the lower thresholds for trading, the relative amount of traded products is larger for the quarter-hourly products. Despite this and the fact that there are more quarter-hourly products, the number of single trades that get cleared as a result of our strategy is nearly as high for hourly products as for quarter-hourly products. This is due to the larger quantities traded for the hourly products, which often cannot be cleared at once but require trades with several counterparties dispersed over a longer time span.

(Fig. 7 Cumulative profits, traded volumes, value of traded positions, and daily balancing payments for quarter-hourly products; see Fig. 6 for a more detailed description of the panels.)

Conclusion and outlook

In this paper, we propose a simple parametric trading strategy for continuous intraday trading on power markets based on intraday updates of forecast VRES production. Our strategy generates significant out-of-sample profits for one year of trading by an arbitrage trader that owns no production assets, has no demand of its own, and operates on the German intraday market. Our results show that one of the most important factors to consider when trading on the intraday markets is the lack of liquidity and the resulting transaction costs. In particular, any algorithmic trading strategy has to cope with the limited liquidity of the market, which, on the one hand, drives price variability and may thereby favorably influence profits but, on the other hand, makes it harder to capitalize on informational advantages, as any speculative trading strategy has to overcome the bid-ask spread. We mitigate these problems by designing a patient trading strategy that uses limit orders instead of market orders and allows for an extended time to trade, waiting for favorable orders to arrive on the respective other side of the market. We show that this patience is key to making profits and that the impatient strategy incurs substantial liquidity costs that absorb most of the profit that can be generated with weather-related information.
Additionally, our results demonstrate that the German intraday market for power is not semi-strong efficient, since publicly available data on renewable power production forecasts can be used to define a trading strategy that generates significant profits while requiring a relatively small amount of risk capital. Furthermore, there would be a substantial potential for even more profitable trading if forecasts were to further improve. Clearly, since there is a finite amount of money to be earned with weather-based trading, the presented strategy is self-cannibalizing, i.e., the profits depend on how many other traders employ similar strategies. However, this is the case with any algorithmic trading policy and therefore does not make the approach obsolete but rather implies that only the most competent traders are able to capitalize on the respective signal. This implies that trading strategies similar to the one presented in this paper could be a driver for continued innovations in short-term forecasting of VRES production as traders compete in the accuracy of their forecasts. This might trigger an arms race in forecasting with market participants trying to capitalize on ever improving forecasts. Algorithmic traders would consequently help the market to process information more efficiently, thereby generating price signals of a higher quality and at the same time improving market liquidity. Additional market liquidity would in turn make weather-based trading easier and more profitable, as is demonstrated by, for example, the higher profits generated by our algorithm for the more liquid hourly products as opposed to the less liquid quarter-hourly products. Hence, such a trend could, at least for a while, feed itself and therefore has the potential to lead to a much more responsive intraday market. Therefore, as opposed to the arguably adverse welfare effects of the arms race for speed that characterizes algorithmic trading on financial markets (Budish et al. 2015), this development would likely unlock positive welfare effects. In our study, we take great care to evaluate the proposed trading strategy as realistically as possible. To that end, we use detailed limit order book data on submitted orders to calculate profits based on an exact implementation of the EPEX clearing algorithm. Furthermore, we make sure that all our policies are non-anticipative, enforcing a strict separation of training and test data. However, there are still some limitations in our study. Most importantly, we work with historical order data to compute counterfactual profits of our strategy in an as-if fashion. This analysis by design cannot take into account the reaction of other market participants to our trading strategy. A completely different experimental design simulating market outcomes, using either artificial agents or laboratory experiments with human traders, would be required to overcome this shortcoming. Another shortcoming of our analysis concerns the quality of the order book data. In particular, we only use German orders even if a small amount of orders is cleared via cross-border trades. Although we reconstruct the foreign orders that were historically cleared against German orders, we cannot completely capture the influence that cross-border trading would have had on our results. However, due to transmission line restrictions, the fraction of German orders cleared with orders from other countries is rather small (below 5%) and we therefore think that our results are robust with respect to this influence.
Furthermore, the order book data supplied by the EPEX is imperfect in many ways, impeding a fully accurate what-if analysis. In particular, the end-validity date of cleared orders is overwritten with the clearing time, which makes it impossible to reconstruct the actual end-validity dates of cleared orders. Additionally, it is hard to correctly identify iceberg orders and market orders from the data. However, since, apart from very few exceptions, our implementation of the clearing algorithm correctly reconstructs historically observed prices, we are confident that the cumulative impact of these issues on our results is negligible. This work opens some avenues for further research in weather-based automated trading algorithms on intraday power markets. In particular, it is easy to conceive of improvements in the proposed trading strategies. One obvious example is the inclusion of maximum and minimum prices to build up a position as additional parameters of the strategy, preventing trades at unfavorably high or low prices. For similar reasons, basing the strategy on probabilistic forecasts instead of point forecasts (e.g., Pinson et al. 2007; Tankov and Tinsi 2021) might help to avoid trading on noisy signals with a high variance that are at risk of being substantially off. The policy can probably also be improved by taking into account several forecasts and dynamically adapting the orders in the LOB as well as the positions to newly arriving information. Furthermore, it would be interesting to extend the strategy to a broader setting, which incorporates assets such as renewable generation or electricity storage as well as possibly other markets including balancing markets and the day-ahead market. This and other possible refinements would lead to a larger number of parameters and would therefore necessitate a more sophisticated optimization approach. Possible improvements in this direction could be based on machine learning techniques such as artificial neural networks or reinforcement learning (e.g., Bertrand and Papavasiliou 2019). Alternatively, one could employ state-of-the-art black-box solvers such as CMA-ES (see Hansen et al. 2010) to find optimal parameters. Another large area of improvement is in the use of data. Firstly, it is conceivable that the quality of the order book data will improve in the coming years, making more accurate analysis of the profits possible and mitigating most of the data-related problems described above. Furthermore, as more data becomes available, the training of strategies will become easier and the results more reliable. Secondly, a more careful selection of training data might benefit the performance of the algorithm. For the present paper, we simply use the last 180 days of data to train our strategy for all products. This implies that data from different times of the day, weekdays, and seasons is used indiscriminately to train the strategy for all products in the test data. Making sure that the training data matches the test data more closely, and thus enabling different strategies for different weekdays, seasons, and products, has the potential to increase trading profits. The limit order book data used in this study are not publicly available. Data on forecasts are, however, available from the authors upon request and with permission of Meteologica.
Special Report of the RSNA COVID-19 Task Force: Crisis Leadership of Major Health System Radiology Departments during COVID-19

Severe acute respiratory syndrome coronavirus 2 has spread across the world since December 2019, infecting 100 million and killing millions. The impact on health care institutions during the coronavirus disease 2019 pandemic has been considerable, with exhaustion of institutional and personal protective equipment resources during local outbreaks and crushing financial consequences for many institutions. Establishing adaptive principles of leadership is necessary during crises, fostering quick decision-making and workflow modifications, while a rapid review of data must determine necessary course corrections. This report describes concepts of crisis leadership teams that can help maximize their effectiveness during the current and future pandemics. © RSNA, 2021

Abbreviations: COVID-19 = coronavirus disease 2019, SARS-CoV-2 = severe acute respiratory syndrome coronavirus 2

Summary: Crisis leadership during coronavirus disease 2019 requires streamlined, communicative leadership teams that can implement rapid change while analyzing data on a continuous basis to determine the need for course correction.

Key Results:
- Multithreaded and repeated communication from leadership to radiology teams is necessary for rapid communication of deployed changes during a crisis.
- Effective and frequent departmental, hospital, and enterprise-wide communication is valuable for conveying new policies, procedures, training, and workflows, and interaction with patients regarding how these changes improve their protection is also key.
- Creating an effective, streamlined, adaptive, and diverse leadership team and fostering a positive culture within the department can help navigate crises with success.

Maintenance of comprehensive policies on the department website and specific instructions posted in reading and/or procedural rooms can also provide readily accessible information throughout the workday. Instructional videos and policy documents can guide employees on safety measures and appropriate use of protective equipment (7).

Communication with Patients
Many hospital systems have created their own materials for patient communication through institutional websites and social media, but radiology departments can also provide their own patient communication marketing materials to build or rebuild trust with patients and instill confidence in patient safety during imaging and/or procedural services (8). Different media can be used to disseminate these materials, including the department website, video monitors in waiting rooms or signs throughout the department, social media platforms, and direct patient communication portals, such as automated texting systems. Human success stories can also be published online. One institution created and shared a photo album featuring images of patients and staff alongside positive messages, which improved morale for health care workers and patients alike.

Fundraising
Communication of fundraising needs to donors is crucial to an institution's ability to raise money. Postponement of outpatient studies and increased costs for acquisition of equipment may spur institutional efforts to increase fundraising during the pandemic. Community members and private and public institutions may be willing participants in the fundraising campaign, buoyed by the desire to support health care institutions that have become exhausted while caring for the mounting number of critically ill patients with COVID-19. An example of this is community donations of tablets for communication between patients and their families, personal protective equipment, and food for health care workers. These community efforts can allow the community to come together, support health care workers, and boost hospital staff and patient morale. Another example is Hospital Clínic Barcelona, which has actively sought community donations for COVID-19 research (9). These efforts led to €5 million (approximately $6 million USD) of funding from Cellnex Telecom (Barcelona, Spain) for a project focused on better understanding of T-lymphocytes that target severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The key to appropriate communication during a crisis is the implementation of dynamic continuous updates and discussion using brief, clear, and focused messages that address the different working environments across the hospital enterprise, specifically within radiology at all levels.

Organizational Principles for Leading an Effective Radiology Department
Academic radiology departments, like their parent academic medical centers, are complex organizations with considerable management challenges. Typically, academic radiology departments will contain multiple axes of organization, including radiology subspecialties, imaging modalities that cross subspecialties, physical facilities encompassing hospitals and outpatient centers, and mission-based organizations, including research, clinical, education, faculty development, and others. These axes create the framework for department organization. An institution can create an organizational matrix across these axes, with extensive interaction between leaders. For most academic radiology departments, the fundamental faculty and/or radiologist organizational unit is the subspecialty division. A triad of a chief technologist, radiologist modality director, and technologist educator forms the leadership team in charge of individual modalities. The technologist educator trains on protocols and implements quality systems across physical locations. In addition, a dyad of a technical director and a radiologist site director leads each hospital or clinic. A vice chair leads each mission area. Supporting effective programming requires extensive collaboration among leadership across the organizational framework.

Strong leadership is critical to the effectiveness of a matrix organization. In academic medical centers, we traditionally advance individuals into leadership positions based on demonstration of technical competencies (clinical skill, impactful publications, grants, teaching scores, and operational or administrative accomplishments) and may undervalue the behavioral competencies important for effective leadership (social intelligence, team-building, and collaborative skills). We also underestimate the need to train and support leaders in their leadership positions. Leadership development programs aimed at enhancing their leadership skill set can achieve great success in raising the effectiveness of leadership teams. Programs can include faculty, technical, and administrative leadership, along with individual coaching and quarterly group sessions covering topics including communication, feedback, difficult conversations, financial management, recruitment, talent development, and more. These programs can also help bolster the behavioral competencies necessary for effective leadership. An added benefit is the esprit de corps developed across the leadership team.

Once an organizational framework is established, it is critical to develop a plan for effective communication across the department. A common mistake is to assume that a single form of communication is effective in conveying important messages. Research shows that multiple forms of communication increase the effectiveness of conveying a message across an organization (10). A multithreaded communication strategy can be most effective. This includes electronic communication through e-mail and other media, communication at virtual department meetings, use of the formal organizational tree by communicating through organizational leadership, and communication to individual influencers during department rounds. Performing daily rounds is an effective approach to target communication to the informal department leaders who will use their sphere of influence to disseminate important messages. Virtual meetings have become the standard for most meetings during the pandemic, including department meetings, due to the need for social distancing and reducing potential exposure. The virtual gatherings have improved the reach of these meetings, facilitating increased engagement from faculty and staff who may otherwise be unable to be physically present. During the COVID-19 pandemic, the importance of multithreaded and repeated communication has been further amplified to communicate rapid changes and course corrections implemented from leadership and evolving knowledge and information.

Organizational culture may be the most important asset driving department performance. Academic radiology departments are large enough that culture may vary across the organization, but it is important to have an overall approach to department culture. Schein (11) provides a framework with which to think about organizational culture in layers: organization values, behavioral norms, and tangible artifacts (rituals, physical workspace arrangement, and language). An established and healthy organizational culture can foster an organized, unified, and coherent response to a crisis, including the COVID-19 pandemic. Leadership must be intentional in developing and maintaining organizational culture. Three major leadership activities establish and promote organizational culture: message, model, and manage. Both internal and external department communications should include organizational vision and expected behavioral norms. Leadership should model expected behavioral norms. Finally, the workforce development program should include aspects of organizational culture, including yearly reviews, incentive plans, and recruitment. It is particularly important to recruit for cultural fit, which may be prioritized over technical skill. Clear external communication of a positive and inclusive culture will attract trainee, faculty, and staff applicants who resonate with the department culture, facilitating recruitment of individuals who will contribute to the established culture. In conclusion, creating the strong foundation of an effective organization includes an organizational framework, an effective leadership team, a multithreaded communication strategy, and a positive departmental culture consistent with the values of the institution. This foundation will serve the department well in navigating external pressures, including those related to the COVID-19 pandemic.

Forming a Crisis Leadership Team
We narrow our focus when encountering a threat, which may be both an advantage and a disadvantage (12). As departments and medical institutions addressed the pandemic, health care workers and administrators narrowed their focus on COVID-19, resulting in alignment of common goals across radiology departments and facilitating increased acceptance and more rapid incorporation of major changes in workflow. These changes also more closely aligned with the efforts and missions of the parent medical centers. When forming a leadership team to address the novel threat of the COVID-19 pandemic, the alignment of energy and efforts on the situation at hand facilitates nimble decision making. Yet the lure of making all decisions flow from a single central leader is a pitfall that can impede high-quality decisions. Framing the threat as both serious and surmountable will motivate individuals to be "all in." Should the crisis continue for a long period, leaders must be able to engage for the medium to long term and address the toll of continuous uncertainty on a workforce unit. Effective crisis leadership teams must be grounded within a foundation of an organizational leadership structure that fosters collaboration, empowerment, and role clarity (13). Diverse and inclusive organizations or departments will have a further advantage in that they may be less susceptible to the groupthink ensnaring more homogeneous and hierarchical entities (14). In addition, ensuring that leader expertise is appropriately represented among the crisis leadership team is of critical importance. In the setting of COVID-19, a leadership team comprised of dyads and triads of physician, nursing, and administrative leader subunits can bring a balanced view to decision making, with both clinical priorities and implementation realities considered. The internal dynamic tension of taking charge while assuming a state of humble inquiry is especially challenging to reconcile. Some decisions need a quick response, but the limited availability of data requires a keen ear to continuous input from the frontlines and the humility to reflect and correct course if necessary. The installation of a lean operating system can provide a critical framework for efficient leadership through the COVID-19 crisis. Tiered huddles facilitate rapid communication of workflow challenges, supply chain shortages, and equipment needs from the frontline workforce to the department crisis leadership team and, where appropriate, to the system-wide incident command center. Tiered huddles represent a series of brief, focused meetings (15 minutes) that take place across an enterprise, foster open bidirectional lines of communication (from executive leadership to technologists and nurses), and facilitate rapid dissemination of information. Critical issues can be escalated to senior levels of leadership within hours through the different tiers (15). Decisions often taking months to work their way through the usual committee structure, such as investing in additional home workstations for radiologists with pre-existing conditions and/or added caregiver burdens, could be swiftly authorized to shift diagnostic interpretations out of health care facilities.

Crisis leadership teams require that moral leadership, trustworthiness, integrity, and empathy guide core principles for setting the course of action. The direct threat of SARS-CoV-2 to health care workers places an enormous stress on health systems. While "the patient always comes first" is a common mantra in medicine, leaders must acknowledge that patients are not best cared for if the providers of that care are not well. Therefore, the health and safety of our workforce must be the first priority in order to provide optimal care for our patients. Unless the crisis leadership team, and especially the senior-most leader, appreciate the understandable concerns of providers and staff and commit to putting themselves in a similar high-risk situation, defection may result. This is particularly important as the second wave of rising cases and deaths is sweeping across the world. As health care workers witness their colleagues succumbing to the disease, their grief and distress rise. Leadership must communicate authentic empathy, caring, and support in a consistent and clear way. Within a fair and just culture, all should be inspired to model the behaviors that demonstrate appreciation for each other; this includes vigilance to early signs of burnout, depression, and/or addiction (16). During the COVID-19 pandemic, radiologists providing diagnostic imaging interpretation were in some circumstances permitted to work from home or from remote reading rooms, while frontline radiology workers, including procedural radiologists, technologists, and nurses, had to engage with patients, which could potentially expose them to COVID-19. At some institutions, trainees on diagnostic imaging rotations worked in the hospital while their faculty worked remotely. This split environment of on-site and off-site workers can create a sense of inequity and heighten concerns of unequal exposure risk. In this setting, it is the responsibility of leadership to encourage the department as a whole to show their appreciation and gratitude to these frontline health care workers while also providing clear descriptions of on-site and off-site responsibilities and the reasoning behind allowing some radiologists to work remotely. For on-site diagnostic imaging coverage, rotating coverage to ensure all members of a section equally provide coverage, unless there are health-related limitations for some, ensures a sense of equitable coverage and risk.

Course Corrections on the Fly
Decision making during a crisis, including the COVID-19 pandemic, must be rapid, often with incomplete information; the workflows implemented are often partial. In addition, the multiple new measures implemented concurrently would normally undergo a more gradual or sequential implementation. Therefore, it is important to have a systematic process for review after each implementation to apply course corrections when necessary as the situation evolves and/or knowledge changes. Radiology department leadership teams can be restructured into lean task forces based on a military model (17), in which a small leadership task force is divided into domain groups, with each group addressing a particular focus, specifically personnel, operations, intelligence, logistics and mid- to long-term planning, communication, financial management, and external communications. The domain groups update the leadership task force during daily meetings on new developments in their respective spaces and provide possible solutions and new workflows to overcome any potential issues. The decision-making process incorporates analysis and discussion of various proposed courses of action for a specified task before deciding on the best plan. After an initial period of implementation of the new workflow, the same task force performs a rapid review process based on data collected, newly emerging COVID-19 knowledge, and feedback from the ground (18). Data collection should include workload statistics that are as current as possible given the dynamic situation of a pandemic, along with comprehensive, multisource feedback from all stakeholders. When there is limited time for or access to comprehensive evaluations, engaging key stakeholders (including staff, radiology faculty, and/or ordering providers) in providing important feedback on new operational changes may be sufficient. New workflows are then reanalyzed with these new data, and a decision is then made as to whether course corrections are necessary and to what extent. After implementation of a course correction, the cycle of rapid review must follow, and adjustments are again made when needed. This compressed cycle of decision making followed by course corrections can be illustrated by the following three examples from Singapore General Hospital. An example of a departmental crisis leadership model is shown in the Figure.

Example 1
When the evidence for presymptomatic and asymptomatic transmission of SARS-CoV-2 emerged (19,20), it became necessary to implement physical distancing across the department. Identified vulnerable spaces were staff rest areas, where staff being unmasked during meals and social interactions could lead to high rates of virus transmission (21). The immediate decision from the task force was to reconfigure the tables and chairs in the main staff rest area so that only two staff members could sit at a single table to limit interaction and employee gatherings. However, this led to a shortage of staff eating areas, and staff started using other areas for eating. This, in turn, created issues with employee congregation in other areas, leading to other risks and inconveniences. In addition, the issue of limiting interactions and maintaining distancing still remained. The team sourced and converted additional vacant space for staff use. With this increase in space, there was then further reconfiguration of all the areas to allow only one staff member to a table, while also providing adequate space for all staff to use the common rest and/or eating areas (5). This improved staff isolation and protection during meals while also improving employee mealtime experiences and morale.

Example 2
During the COVID-19 pandemic, it has been critical to build up imaging capacity, as each procedure takes longer to perform due to strict infection control measures. The department was offered a showroom CT scan unit and, by serendipity, had a vacant lead-lined room available. A quick decision was necessary without time to perform a full assessment of the CT unit, as it was believed that this unit could be used exclusively for patients with COVID-19. From conception, it took just 41 days for this new facility to be operational. At that time, this new unit underwent a rapid review of its location and operational protocols, and the department realized that there was not a need for resources dedicated exclusively to scanning patients with COVID-19. We modified our plan and focused this new resource on scanning the backlog of patients without COVID-19. The resource reallocation allowed the department to catch up on postponed cases that otherwise would have continued to be delayed.

Example 3
The radiology department needed to provide imaging support for a multistory parking garage converted into an on-campus COVID-19 screening facility. We quickly deployed bedside radiography units from existing clinical environments to support this facility. However, with the introduction of federal community screening sites, there was no need for the screening facility or the deployed bedside radiography units. We reviewed and discussed the recent developments and decided to close the imaging spaces in the makeshift screening facility and redeploy the bedside radiography units to sites where there was a need, including a new isolation facility built in an open-air parking lot. The bedside units can be returned quickly should the COVID-19 screening facility reopen. These examples illustrate the importance of being flexible and nimble yet systematic in reviewing every process within radiology. With this disciplined approach, it is possible to apply course corrections and move all members of the imaging service in the same direction.

Conclusion
The coronavirus disease 2019 pandemic, like other crises, requires dynamic and mobilized leadership prepared to manage adversity in a swift manner, with the flexibility to correct course. Appropriate and frequent communication through multiple approaches and streamlined, diverse leadership teams can optimize the teams' performance and the department's ability to best navigate the challenges of the pandemic.

Figure: Diagram shows an example of crisis leadership team structure. The example crisis leadership team has the chair in an oversight position, with the vice chair of clinical operations and the operations director managing the leadership team. They are responsible for incorporating information and data from the leadership team and generating cohesive policies. These policies are then reviewed by the leadership team, which includes leaders in all workforce and operational spaces. Modifications are made to the new policies and workflows from the discussions and then communicated to the target audiences, including the department and clinical services that order imaging examinations, through multithreaded communication. The workforce and operational leaders gather feedback from their teams and bring this back to the crisis leadership team to discuss what is and is not working and to facilitate course correction. All communication is bidirectional, with ground-up feedback.

Disclosures: Activities not related to the present article: institution has received reimbursement for travel to meetings from the RSNA; received a consultation honorarium from The University of Texas MD Anderson Cancer Center. Other relationships: disclosed no relevant relationships. L.O. disclosed no relevant relationships. C.G.F. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: is a paid consultant for Syntactx; received or will receive grants from FASNR to study glial lymphatic flow and from the National MS Society to study multiple sclerosis; received a stipend from Topics in Magnetic Resonance Imaging for writing a review article; is a minority stockholder of Avicenna.ai. Other relationships: disclosed no relevant relationships. M.M. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: institution received reimbursement for administrative work from RSNA; received reimbursement for travel expenses from RSNA and the American College of Radiology; receives royalties from Elsevier; receives editorial board stipend from Contemporary Diagnostic Radiology. Other relationships: disclosed no relevant relationships. L.D. disclosed no relevant relationships. B.S.T. disclosed no relevant relationships.
Removal of Zn2+ from Aqueous Solution Using Biomass Ash and Its Modified Product as Biosorbent

To study the removal effect of bottom ash from biomass power plants and its modified product on zinc (Zn2+) in aqueous solution, a series of indoor experiments was carried out. The aim of this work is to explore a method to improve the ability of biomass ash to remove Zn2+ from aqueous solution and to obtain its adsorption characteristics for Zn2+; on this basis, the feasibility of its application in the treatment of Zn2+-contaminated wastewater is analyzed. A mesoporous siliceous material is used to modify the biomass ash, and the modified material is functionalized with 3-aminopropyltriethoxysilane. The results show that the specific surface area of modified biomass ash is nine times that of the material before modification. The adsorption capacity of Zn2+ on the material increases with increasing pH, and pH 6 is the optimum pH for removing Zn2+ from aqueous solution. The Langmuir model and the Freundlich model show better fits for biomass ash and the modified material, respectively. Thermodynamic analysis shows that the adsorption of Zn2+ is spontaneous and endothermic in nature. The adsorption of Zn2+ onto biomass ash and modified biomass ash follows pseudo-first-order and pseudo-second-order kinetics, respectively.

Introduction
Heavy metal pollution in water has become a common global problem. Water bodies polluted by heavy metals have the following three characteristics: (1) heavy metals can be enriched in organisms, participate in the biological cycle, and accumulate in the food chain through various channels, which leads to accumulation in humans [1]. (2) Heavy metal pollutants are not easy to degrade and so persist in the environment [2]. (3) Heavy metals are strongly toxic even at low concentrations and can be transformed by microorganisms into other valence states with stronger toxicity, which is a threat to the biosphere. Their toxic concentration range is generally 1.0-10.0 mg/L; even below 1.0 mg/L, they can still affect the ecosystem [3]. Metal pollutants in the environment mainly come from anthropogenic industrial and agricultural activities, and they can enter the water system in different ways, such as atmospheric deposition, wastewater irrigation, and slag leaching [4]. Zinc is one of the most common and widely distributed heavy metals in the environment, and as an essential element for many organisms, it is beneficial to organisms when its content does not exceed the standard. However, due to industrial activities such as smelting, electroplating, mining, plastic manufacturing, and metallurgy, a large amount of wastewater carrying Zn2+ is discharged into the environment; moreover, Zn2+ is not easy to degrade in the environment, resulting in increasing zinc content in water bodies [5]. A large amount of accumulated Zn2+ will cause a series of negative effects on the health of organisms, mainly manifested as neurological symptoms, and can even lead to brain tissue atrophy. Therefore, it is essential to reduce the concentration of Zn2+ in industrial wastewater by technical means before it is discharged into the environment, so as to reduce its impact on the environment. Traditional methods for removing heavy metals from aqueous solutions include chemical precipitation, solvent extraction, ion exchange, membrane separation, and electrolysis [6].
However, most of these methods are uneconomical, consuming a lot of energy, for example [7,8]. In addition, when the concentration of heavy metals in water is low, these methods suffer from low efficiency or high cost. Moreover, they may produce secondary waste that is more difficult to treat than the raw wastewater [9,10]. These shortcomings have prompted researchers to seek both economical and efficient technologies to treat heavy metal-contaminated water [11,12]. Recently, the use of environmentally friendly materials to remove heavy metals from large volumes of wastewater has aroused the interest of researchers. Agricultural wastes and by-products are widely regarded as cheap adsorbents for removing toxic metals from solutions. In the past decade, these materials have been widely used as adsorbents to replace existing technologies. For example, Melaleuca diosmifolia leaf [13], tomato leaf powder [14], rice husk [15], pine cone [16], olive pomace [17], pineapple stem [18], coffee husk [19], coffee waste [20], cauliflower leaf [21], rubber leaf [22], Formosa papaya seeds [23], parsley stalks [24], potato peel waste [25], cucumber peel [26], and water hyacinth [27] have been studied, and it has been found that these agricultural wastes have good remediation effects on metal-polluted wastewater. Compared with other adsorbents, they not only remediate effectively but are also, as agricultural wastes, widely obtainable and inexpensive. Through this remediation mode, not only can the problem of water contamination be solved, but the problem of agricultural environmental pollution can also be addressed. Recently, the ash of agricultural wastes and by-products has been reported to have good adsorption properties for heavy metal ions in wastewater [28,29]. The results show that biomass ash can remove heavy metal ions from wastewater, mainly due to the high proportion of unburned C and Si present in these materials [30,31]. At the same time, after agricultural wastes or by-products are burned, most of the resulting ash has a large specific surface area. These characteristics give it adsorption properties that are conducive to the adsorption of heavy metals in aqueous solution. However, the potential mechanism of metal removal from aqueous solution by this adsorbent is not fully understood. According to the current situation of biomass power plants in China, most power plants use crop straw as fuel, which produces a large amount of biomass ash in the process of power generation. As an industrial waste, biomass ash is often unused and abandoned, which leads to the accumulation of a large amount of biomass ash in the environment [32]. Biomass ash has a good adsorption capacity for heavy metal ions in aqueous solution; research has shown that its maximum adsorption capacities for Pb2+, Ni2+, Cd2+, Mn2+, Zn2+, and Cr3+ in aqueous solution reached 1.95, 2.23, 2.00, 2.49, 2.46, and 1.50 g/kg, respectively [33], so it can be used as an economical and environmentally friendly adsorbent. However, it has been reported that the adsorption capacity of natural biomass ash for specific heavy metals in aqueous solution is lower than that of some commercial or modified adsorbents [34]. This leads to low efficiency when natural biomass ash collected from biomass power plants is used as an adsorbent to remove heavy metal ions from industrial wastewater.
The adsorption capacity of biomass ash for heavy metals in wastewater can be improved through appropriate modification methods so as to improve its ability to remediate heavy metal-polluted wastewater [35]. Therefore, it is necessary to explore a suitable method for modifying biomass ash. Various mesoporous materials based on silica have been widely studied and partially commercialized. These modified materials have good adsorption properties for heavy metal ions in aqueous solution because of their large specific surface area, uniform mesoporous pore structure (2-50 nm), high thermal stability, good mechanical stability, and high functionality [36]. In addition, by combining specific organic functional groups on the surface and/or in the pores of mesoporous materials, the removal rates of mesoporous materials for Zn2+, Cu2+, Pb2+, and Cr3+ have been improved by 15.2-36.8%, 4.7-43.6%, 29.8-41.1%, and 20.5-39.4%, respectively, which gives them better application prospects [37-39]. Some researchers have synthesized a new material from coal fly ash and functional mesoporous materials; this new material has good adsorption properties for various pollutants in aqueous solution, including heavy metal ions [40]. In fact, compared with coal fly ash, biomass ash has a high silicon content, which also makes it a possible silica skeleton for mesoporous modification to improve its adsorption performance. However, no research on mesoporous modification of biomass ash has been reported. Therefore, the purpose of this study is to modify biomass ash with mesoporous silica and organosilane so as to improve its ability to remove Zn2+ from aqueous solution. On this basis, we obtained its adsorption characteristics for Zn2+ in aqueous solution and analyzed the feasibility of its application in the treatment of Zn2+-contaminated wastewater. The results will help in understanding the removal performance and operating conditions of this new material, providing a theoretical basis for its industrial use in removing Zn2+ from wastewater.

Biomass Ash
The biomass ash sample used in this study was taken from a biomass power plant burning agricultural residues in Anhui Province, China. The power plant uses a mixture of wheat straw, corn straw, peanut shell, and cotton straw as fuel for power generation. This mixture of fuels is burned in a mobile grate furnace with excess air at 850 °C.

Modification Experiment
The biomass ash was modified by co-condensation in a hexagonal mesoporous silica (HMS) matrix [41], using the synthesis steps reported by Walcarius et al. [42]. First, 1.24 g of dodecylamine was dissolved in 10 mL of alcohol, followed by the addition of a mixture of 1.24 g of biomass ash in 90 mL of ultrapure water (CN61 M-UPR-I-20L) under stirring at 1000 rpm. Next, 6.09 mL of tetraethyl orthosilicate and 0.71 mL of 10% (w/v) 3-aminopropyltriethoxysilane [APS, NH2(CH2)3Si(OC2H5)3], an organosilane, were added to the reaction mixture. After 30 s, 0.94 mL of trimethylbenzene was added, and the mixture was then stirred for 24 h. Finally, the mixture was filtered through a 0.45 µm filter membrane, and the residue was air dried at room temperature. The remaining trimethylbenzene was Soxhlet extracted with 125 mL of alcohol for 5 h, and the sample was air dried at room temperature for 24 h.
Physico-Chemical Characterization and Surface Properties
In this paper, the elemental composition of biomass ash was determined by ICP-OES (Perkin Elmer Optima 2000, Agilent Technologies Inc., Santa Clara, CA, USA), and a scanning electron microscope (SEM, Hitachi S-4800, Hitachi, Tokyo, Japan) was used to determine the surface morphology. The functional group composition of biomass ash was determined by Fourier transform infrared spectroscopy (FTIR, Spectrum Two IR Spectrometer, PerkinElmer) over 400-4000 cm−1, and the specific surface area of the samples was calculated by the BET method. The pH was measured using a METTLER TOLEDO pH meter (S40 SevenMulti, Mettler Toledo, Columbus, OH, USA) at a solid/liquid ratio of 1:5 [43].

Adsorption Experiments
The adsorption characteristics of the materials were assessed by evaluating the effects of initial Zn2+ concentration, pH, and kinetic and thermodynamic factors. A 1000 mg/L Zn(NO3)2 standard solution was used to prepare the zinc solutions used in the experiments, and 0.1 M HNO3 and 0.1 M NaOH were used to adjust the solution pH. To ensure the reliability of the experimental results, the reagents used were all G.R. (guaranteed reagent) grade. The zinc adsorption capacity in the adsorption experiments was evaluated by Formula (1):

$A_{Zn^{2+}} = \frac{(C_0 - C_e) \times V}{M}$ (1)

where $A_{Zn^{2+}}$ is the zinc adsorption capacity (mg/g), $C_e$ is the solution concentration after adsorption (mg/L), $C_0$ is the solution concentration before adsorption (mg/L), V is the volume of the solution (L), and M is the mass of the adsorption material (g).

Effect of pH
The pH greatly influences the removal of heavy metals from aqueous solution. To obtain the optimal pH for the removal of Zn2+ by biomass ash and the modified material, 0.1 g of biomass ash and of the modified material were separately added to 50 mL centrifuge tubes, and then 25 mL aliquots of solution (Zn2+ concentration: 50 mg/L; pH: 2.0-8.0) were added to the centrifuge tubes [34]. The centrifuge tubes were placed in a constant temperature shaker for 24 h (150 rpm, 25 °C).

Adsorption Equilibrium Experiment
Solutions containing Zn2+ at concentrations of 50, 60, 70, 80, 90, and 100 mg/L were prepared from the Zn2+ standard solution, and the pH was adjusted to 5 with the pH-regulating solutions. A total of 0.1 g of biomass ash or modified material was accurately weighed into a 50 mL centrifuge tube, 25 mL of the Zn2+ solutions with the different concentrations listed above was added, and the tubes were placed in a constant temperature shaker for 24 h (25 °C, 150 rpm). The centrifuge tubes were then centrifuged for 10 min (3000 rpm). After filtering the solution through a 0.45 µm microporous filter membrane, the Zn2+ concentration was measured by flame atomic absorption spectrometry (SpectrAA-220, Varian, Palo Alto, CA, USA).

Adsorption Kinetics
The adsorption kinetics of Zn2+ were determined by adding 0.2 g of material into centrifuge tubes containing 100 mL of Zn2+ solution (100 mg/L, pH 5); all centrifuge tubes were placed in a reciprocating shaker and shaken at a speed of 150 rpm, and 5 mL aliquots were collected using a pipette (Eppendorf Research Plus, 0.5-5 mL) at 0.5, 1, 2, 3, 5, 10, 15, 30, 60, 90, 120, 180, and 240 min. The determination of Zn2+ was the same as in the adsorption isotherm experiment.
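Formula (1) translates directly into code. The following is a minimal sketch; the numeric values are illustrative only, not measurements from this study:

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Zn2+ adsorption capacity per Formula (1), in mg of Zn2+ per g of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Example with illustrative numbers: 25 mL of a 50 mg/L solution, 0.1 g of
# adsorbent, and a hypothetical equilibrium concentration of 18 mg/L.
q = adsorption_capacity(50.0, 18.0, 0.025, 0.1)  # -> 8.0 mg/g
```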
Data Processing
Data processing and analysis of variance were performed using Microsoft Excel 2010 (Microsoft Corporation, Redmond, WA, USA) and SPSS 20.0 (IBM SPSS, Somers, NY, USA). Graphics were produced with SigmaPlot 12.5 (Systat, San Jose, CA, USA).

Physico-Chemical Characterization and Surface Properties
The elemental composition of the biomass ash was reported in our previous study; Si (12.0%), Ca (4.31%), and K (3.31%) were the main constituent elements [44]. SEM analysis showed that the biomass ash was mainly composed of spherical and flake particles with diameters of 10-60 µm [34]. At the same time, we found that these ash particles were fully dispersed. The characterization results of the two materials were significantly different. After modification, the specific surface area of the biomass ash was significantly increased, and its surface became smoother [34]. In addition, some weak channels were observed in the modified biomass ash, and the generation of pores helped to improve the porosity of the modified biomass ash and thus increase its adsorption sites (Table 1).

Table 1. Comparison of the Brunauer-Emmett-Teller (BET) analysis of functionalized hexagonal mesoporous silica, biomass ash, and synthesized matrix. Reproduced with permission from our previous research results [34]. HMS-NH2 is non-functionalized mesoporous silica.

FTIR analysis showed that the modified biomass ash demonstrated an intense absorption band at 3330.62 cm−1, which could be attributed to the O-H bond of the silanol groups in the modified biomass ash [34]. At the same time, we found obvious absorption peaks at 845.85 cm−1 and 1051.82 cm−1, corresponding to symmetric and asymmetric Si-O-Si vibrations, respectively. Compared with biomass ash, the spectral characteristics of the material functionalized with 10% (w/v) APS and the HMS matrix changed significantly, mainly showing a broad signal between 3000 cm−1 and 3600 cm−1, which might be due to the increase in the number of silanol groups in the synthetic material. The stretching bands could be due to the N-H group of APS, and the band at 1488.2 cm−1 might be attributed to the bending vibration of N-H groups [34,43]. The results of elemental composition analysis showed that C, O, Si, Al, Fe, and K were present in both materials [34].

Effect of pH
The adsorption of Zn2+ was found to be highly pH-dependent; moreover, over the pH range of 2-8, the modified material far exceeded biomass ash in adsorption capacity (Figure 1). When the pH of the system was less than 4, the H+ concentration in the solution was very high, and the adsorption capacity of biomass ash for Zn2+ was very low, which might be due to the competitive adsorption of Zn2+ and H+ at low pH [45-47]. When the solution pH was very low, the concentration of H3O+ exceeded that of Zn2+, and most adsorption sites on the material surface were occupied by H3O+, thereby reducing the adsorption capacity for the metal ion [48]. As the pH gradually increased, the concentration of H3O+ decreased, and H3O+ was gradually removed from the material surface. Consequently, the competition between Zn2+ and H3O+ decreased, so that metal ions could approach the active adsorption sites on the material and increase the binding between Zn2+ and the surface of the synthetic matrix through ion exchange, thereby improving the adsorption capacity [49,50].
The exchange mechanism between H+ and Zn2+ in solution can be expressed by the following equations:

$\equiv X{-}OH + M^{2+} \rightleftharpoons \;\equiv X{-}OM^{+} + H^{+}$

$2(\equiv X{-}OH) + M^{2+} \rightleftharpoons (\equiv X{-}O)_{2}M + 2H^{+}$

X: Si, Fe, and Al. M: metal. The maximum adsorption efficiency of biomass ash and the modified material for Zn2+ was found near pH 6. When pH was >6, the adsorption of Zn2+ was weak, which can be attributed to the precipitation of Zn2+ species such as carbonates or hydroxides (Figure 1) [51]. The modified material was functionalized with NH2 groups, so that the material formed an amino-Zn complex with a greater stability constant after adsorbing Zn2+ from the solution, and the stability of this complex mainly depended on the pH of the solution system, which must be close to 7 [52].

Adsorption Isotherm
We used the Langmuir and Freundlich adsorption models to fit the adsorption process of the materials. The Langmuir and Freundlich models can be written as follows [53-55]:

$q_e = \frac{q_L K_L C_e}{1 + K_L C_e}$

$\ln(q_e) = \ln(K_F) + \frac{1}{n}\ln(C_e)$

where $C_e$ represents the equilibrium concentration of the metal ions (mg/L), $q_e$ represents the amount of metal ions adsorbed by a unit mass of adsorbent (mg/g), $q_L$ represents the maximum amount of metal ions adsorbed by a unit mass of adsorbent (mg/g), and $K_L$ represents the Langmuir constant (L/mg). $K_F$ and $n$ are the Freundlich constants, which indicate the adsorption capacity and adsorption intensity of a given material, respectively.
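Before turning to the fitted results, note that both isotherms can be fit directly by nonlinear least squares. Below is a minimal sketch using SciPy; the arrays hold placeholder values standing in for the measured equilibrium data, not the study's actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qL, kL):
    # q_e = qL * kL * Ce / (1 + kL * Ce)
    return qL * kL * ce / (1.0 + kL * ce)

def freundlich(ce, kF, n):
    # q_e = kF * Ce**(1/n)
    return kF * ce ** (1.0 / n)

# Equilibrium concentrations (mg/L) and capacities (mg/g): placeholder data.
ce_eq = np.array([5.0, 12.0, 21.0, 33.0, 47.0, 62.0])
qe_obs = np.array([6.1, 8.9, 10.4, 11.3, 11.8, 12.1])

popt_l, _ = curve_fit(langmuir, ce_eq, qe_obs, p0=[12.0, 0.1])
popt_f, _ = curve_fit(freundlich, ce_eq, qe_obs, p0=[3.0, 3.0])

for name, model, popt in [("Langmuir", langmuir, popt_l),
                          ("Freundlich", freundlich, popt_f)]:
    resid = qe_obs - model(ce_eq, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((qe_obs - qe_obs.mean())**2)
    print(name, popt, "R2 =", round(r2, 3))
```

Comparing the R2 values of the two fits, as printed here, is the same criterion used to select the better model in Table 2.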
Through analysis, we found that the Langmuir model fit biomass ash better, while the Freundlich model fit the modified product better (Table 2). This might be because the adsorption of Zn2+ by biomass ash was monolayer adsorption, so the experimental data could be well simulated by the Langmuir model at all temperature levels. However, the adsorption of Zn2+ by modified biomass ash was multilayer adsorption. In addition to adsorbing Zn2+ on the surface through physical action, the functional groups on the surface of the adsorbent also existed in the form of Schiff bases (-N=CH-), and the -N=CH- groups could complex with Zn2+ in the solution. This indicated that Zn2+ underwent both adsorption and complexation reactions on the modified material surface, which might be the fundamental reason why the adsorption of Zn2+ from solution by the modified material did not conform to the Langmuir model. In the Freundlich model, the constant n represents the adsorption intensity; when the value of n is between 1 and 10, the adsorption process is favorable [56]. In our study, the value of n at each temperature was more than three, indicating that the modified biomass ash adsorbed Zn2+ well (Table 2). By analyzing the data, we found that the adsorption capacity of both materials increased slightly with increasing temperature. This might be because the adsorption of Zn2+ from the solution by the materials was an endothermic reaction, so increasing the temperature could open up the internal structure of the materials and improve the adsorption capacity [57].

Table 2. Values of the constants and fitting of the adjusted adsorption models.

Thermodynamic Studies
The thermodynamic process and parameters can be expressed according to Gupta [58]:

$\Delta G = -RT\ln K_L'$

$\ln\left(\frac{K_{L2}}{K_{L1}}\right) = -\frac{\Delta H}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)$

$\Delta S = \frac{\Delta H - \Delta G}{T}$

where $K_L'$, $K_{L1}$, and $K_{L2}$ are the Langmuir constants at T, T1, and T2, respectively, and R is the gas constant (8.314 J·mol−1·K−1). According to thermodynamics, ∆G is the adsorption driving force, reflecting its intensity, and depends on the enthalpy and entropy factors. ∆G was negative, and ∆H and ∆S were positive, indicating that the main driving force in the adsorption process was the entropy change (Table 3). At the same time, the negative value of ∆G indicated that Zn2+ tended to be adsorbed from the solution onto the modified biomass ash; in other words, the adsorption of Zn2+ on the material was spontaneous. As the temperature increased, ∆G decreased gradually, which indicated that increasing the temperature was conducive to the adsorption process; this result was also consistent with the endothermic nature of the material's adsorption of Zn2+ from the solution. In this research, ∆H for the synthetic matrix was 30.0, indicating that the adsorption forces were hydrogen bonding and ligand exchange [59]. This once again verified that the adsorption of Zn2+ from solution by modified biomass ash included both physical adsorption and chemical adsorption.
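The thermodynamic quantities above follow mechanically from the Langmuir constants at two or more temperatures. A minimal sketch follows; the constants below are placeholders, not the study's fitted values, and note that $K_L$ is commonly converted to a dimensionless form before taking the logarithm:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def delta_G(kL, T):
    # dG = -R * T * ln(K_L)
    return -R * T * np.log(kL)

def delta_H(kL1, kL2, T1, T2):
    # van't Hoff: ln(K_L2 / K_L1) = -(dH / R) * (1/T2 - 1/T1)
    return -R * np.log(kL2 / kL1) / (1.0 / T2 - 1.0 / T1)

def delta_S(dH, dG, T):
    # dG = dH - T * dS  =>  dS = (dH - dG) / T
    return (dH - dG) / T

# Placeholder dimensionless Langmuir constants at 298 K and 318 K.
kL1, kL2, T1, T2 = 1.8, 2.6, 298.0, 318.0
dH = delta_H(kL1, kL2, T1, T2)   # positive -> endothermic
dG = delta_G(kL1, T1)            # negative -> spontaneous
dS = delta_S(dH, dG, T1)         # positive -> entropy-driven
```

With these placeholder constants the signs reproduce the pattern reported in Table 3: positive ∆H and ∆S with negative ∆G.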
Kinetic Adsorption Studies
The adsorption kinetics of Zn2+ on biomass ash and the modified product in solution are shown in Figure 2. The Zn2+ concentration in aqueous solution decreased sharply in the initial 30 min and fell to nearly 10 mg/L by 120 min; however, it took a longer time for biomass ash to reach adsorption equilibrium. At the same time, when the adsorption process was close to equilibrium, the equilibrium concentration of Zn2+ for biomass ash was about five times that for the modified biomass ash, and the equilibrium removal rate of the modified biomass ash (85.0%) was much higher than that of biomass ash (63.5%). This showed that the adsorption capacity of the modified biomass ash was much higher than that of the original biomass ash, consistent with the results obtained with the Langmuir and Freundlich models. In the initial stage of the adsorption reaction, there were a large number of active adsorption sites on the material surface; these sites provided space for Zn2+, which made Zn2+ move quickly to the material surface, where it was adsorbed and fixed [60]. However, as adsorption progressed, the active adsorption sites on the material surface gradually decreased, so the adsorption rate decreased rapidly [61]. The slow diffusion of Zn2+ into the internal matrix of the modified biomass ash might explain the reduction of the adsorption rate at this stage [62]. To simulate the change in adsorption rate of these two materials, pseudo-first-order and pseudo-second-order rate equations were used. The two models can be expressed according to Zahra [63]:

$\ln(Q_e - Q_t) = \ln Q_e - k_1 t$ (10)

$\frac{t}{Q_t} = \frac{1}{k_2 Q_e^2} + \frac{t}{Q_e}$ (11)

where $Q_e$ is the adsorption capacity (mg/g) at equilibrium, $Q_t$ is the amount adsorbed (mg/g) at time t, $k_1$ is the rate constant (min−1) of the pseudo-first-order model, and $k_2$ is the rate constant (g/mg/min) of the pseudo-second-order model. For biomass ash, the R2 value from the pseudo-first-order model was much greater than that from the pseudo-second-order model; for the modified biomass ash, the result was the opposite (Table 4). For the modified biomass ash, the R2 of the pseudo-second-order model reached 1.00, showing that this model could accurately simulate the adsorption of Zn2+ by the modified biomass ash. The kinetic model results were consistent with the Langmuir and Freundlich fitting results. The chemical adsorption of Zn2+ from solution by the modified material might be caused by the reaction forces and coordination between Zn2+ and the -NH2 and -NH groups on the surface of the modified biomass ash.
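Both rate laws above can also be fit in their integrated forms by nonlinear least squares. A minimal sketch with placeholder data (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    # Integrated form of ln(Qe - Qt) = ln(Qe) - k1*t
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    # Integrated form of t/Qt = 1/(k2*Qe^2) + t/Qe
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# t in minutes, q_obs in mg/g: placeholder kinetic data for illustration.
t = np.array([0.5, 1, 2, 3, 5, 10, 15, 30, 60, 120, 240], dtype=float)
q_obs = np.array([2.1, 3.4, 5.2, 6.6, 8.5, 10.9, 12.0, 13.1, 13.6, 13.8, 13.9])

p1, _ = curve_fit(pseudo_first_order, t, q_obs, p0=[14.0, 0.1])
p2, _ = curve_fit(pseudo_second_order, t, q_obs, p0=[14.0, 0.01])
```

The model whose fitted curve yields the higher R2 against the measured series is the one retained, which is the comparison summarized in Table 4.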
Conclusions
Using biomass ash as raw material, the mesoporous material synthesized by modification has a stronger adsorption capacity for Zn2+ and a higher removal rate of Zn2+ in aqueous solution. The adsorption capacity of the material for Zn2+ is closely related to the initial Zn2+ concentration and pH of the aqueous solution. Compared with untreated biomass ash, due to the functionalization of the mesoporous material with APS, the specific surface area of modified biomass ash is nine times that of the material before modification, which gives this material more active adsorption sites to adsorb Zn2+ from the solution. The adsorption of Zn2+ by biomass ash conforms to the Langmuir model, while the adsorption of Zn2+ by modified biomass ash conforms to the Freundlich model; this is mainly due to the different adsorption mechanisms of the two materials. The adsorption of Zn2+ by both materials is endothermic. The adsorption of Zn2+ by biomass ash follows the pseudo-first-order kinetics model (R2 = 0.968), while the adsorption by the modified material follows the pseudo-second-order kinetics model (R2 = 1.00). It is worth noting that, compared with previously reported materials, this modified material shows a strong adsorption capacity for Zn2+ (its removal rate of Zn2+ in solution is 21.6% higher than that of biomass ash) and has great potential for the remediation of Zn2+ pollution in water environments. This study provides a suitable method for the resource utilization of by-products of biomass power plants; however, more research is needed on its industrialization.

Author Contributions: L.X. conducted all the experiments and wrote the manuscript. X.X. and J.P. conducted some of the experiments and revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data that support the findings of this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
The DIRECT consortium and the REST-meta-MDD project: towards neuroimaging biomarkers of major depressive disorder

Abstract
Despite a growing neuroimaging literature on the pathophysiology of major depressive disorder (MDD), reproducible findings are lacking, probably reflecting mostly small sample sizes and heterogeneity in analytic approaches. To address these issues, the Depression Imaging REsearch ConsorTium (DIRECT) was launched. The REST-meta-MDD project, pooling 2428 functional brain images processed with a standardized pipeline across all participating sites, was the first effort from DIRECT. In this review, we present an overview of the motivations, rationale, and principal findings of the studies so far from the REST-meta-MDD project. Findings from the first round of analyses of the pooled repository have included alterations in functional connectivity within the default mode network, in whole-brain topological properties, in dynamic features, and in functional lateralization. These well-powered exploratory observations have also provided the basis for future longitudinal hypothesis-driven research. Following these fruitful explorations, DIRECT has proceeded to its second stage of data sharing, which seeks to examine ethnicity in brain alterations in MDD by extending the exclusively Chinese original sample to other ethnic groups through international collaborations. A state-of-the-art, surface-based preprocessing pipeline has also been introduced to improve sensitivity. Functional images from patients with bipolar disorder and schizophrenia will be included to identify shared and unique abnormalities across diagnostic boundaries. In addition, large-scale longitudinal studies targeting brain network alterations following antidepressant treatment, aggregation of diffusion tensor images, and the development of functional magnetic resonance imaging-guided neuromodulation approaches are underway. Through these endeavours, we hope to accelerate the translation of functional neuroimaging findings to clinical use, such as evaluating longitudinal effects of antidepressant medications and developing individualized neuromodulation targets, while building an open repository for the scientific community.

Introduction
Major depressive disorder (MDD) is the second leading cause of health burden worldwide (Ferrari et al., 2013). Unfortunately, objective biomarkers to assist in diagnosis are still lacking, and current first-line treatments are only modestly effective (Borowsky et al., 2000; Williams et al., 2011), reflecting our incomplete understanding of the pathophysiology of MDD. Characterizing the neurobiological basis of MDD promises to support the development of more effective diagnostic approaches and treatments.
An increasingly used approach to reveal neurobiological substrates of clinical conditions is termed resting-state functional magnetic resonance imaging (R-fMRI) (Biswal, 2012). Despite intensive efforts to characterize the pathophysiology of MDD with R-fMRI, clinical imaging markers of diagnosis and predictors of treatment outcomes have yet to be identified. Previous reports have been inconsistent, sometimes contradictory, impeding the endeavour to translate them into clinical practice (Yan et al., 2019). One reason for inconsistent results is low statistical power from small sample size studies (Button et al., 2013). Low-powered studies are more prone to produce false positive results, reducing the reproducibility of findings in a given field (Ioannidis, 2005; Poldrack et al., 2017). Of note, one recent study demonstrated that a sample size of thousands of participants may be needed to identify reproducible brain-wide association findings (Marek et al., 2022), calling for larger datasets to boost effect size. Another reason could be the high analytic flexibility (Carp, 2012). Recently, Botvinik-Nezer and colleagues (Botvinik-Nezer et al., 2020) demonstrated the divergence in results when independent research teams applied different workflows to process an identical fMRI dataset, highlighting the effects of 'researcher degrees of freedom' (i.e., heterogeneity in (pre-)processing methods) in producing disparate fMRI findings.

To address these critical issues, we initiated the Depression Imaging REsearch ConsorTium (DIRECT) in 2017. Through a series of meetings, a group of 17 participating hospitals in China agreed to establish the first project of the DIRECT initiative, the REST-meta-MDD project, and share 25 study cohorts, including R-fMRI data from 1300 MDD patients and 1128 normal control participants. On the basis of our previous work, a standardized preprocessing pipeline adapted from the Data Processing Assistant for Resting-State fMRI (DPARSF) (Yan et al., 2016; Yan & Zang, 2010) was implemented at each local participating site to minimize heterogeneity in preprocessing methods. R-fMRI metrics can be vulnerable to physiological confounds such as head motion (Ciric et al., 2018; Ciric et al., 2017). Based on our previous work examining the impact of head motion on R-fMRI functional connectivity (FC) connectomes (Yan et al., 2013) and other recent benchmarking studies (Ciric et al., 2017; Parkes et al., 2018), DPARSF implements a nuisance regression model (the Friston-24 model) at the participant level and correction for mean framewise displacement at the group level as the default setting.

Participating groups first preprocessed R-fMRI images with a DPARSF standardized protocol at local hospitals, then shared the final R-fMRI indices along with demographic (age, sex, and education) as well as clinical information (first episode/recurrent, medication usage, illness severity, etc.). The REST-meta-MDD project was intended to boost statistical power by pooling functional data across centers, while minimizing the effects of heterogeneous analytical strategies and creating an openly available dataset for the global scientific community. As of 1 January 2020, the dataset of de-identified imaging derivatives was made available for unrestricted sharing. All researchers can obtain access to these R-fMRI indices and corresponding demographic/clinical information via http://rfmri.org/REST-meta-MDD, and perform any analyses of interest without putting participant privacy or confidentiality at risk.
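The Friston-24 motion model mentioned above expands the six rigid-body motion parameters into 24 regressors: the parameters themselves, the parameters at the preceding time point, and the squares of both. A minimal sketch of that expansion and of voxel-wise nuisance regression follows; the array shapes and random data are illustrative assumptions, and real pipelines such as DPARSF include additional nuisance terms and steps not shown here.

```python
import numpy as np

def friston24(motion):
    """Expand 6 motion parameters (T x 6) into the 24-regressor Friston model:
    current parameters, parameters at the preceding time point, and the
    squares of both."""
    prev = np.vstack([motion[:1], motion[:-1]])   # one-volume backward shift
    block = np.hstack([motion, prev])             # T x 12
    return np.hstack([block, block ** 2])         # T x 24

def regress_out(ts, confounds):
    """Remove confounds from each time-series column by ordinary least squares."""
    X = np.column_stack([np.ones(len(confounds)), confounds])  # add intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

rng = np.random.default_rng(0)
motion = rng.normal(size=(200, 6))     # 200 volumes, 6 realignment parameters
ts = rng.normal(size=(200, 1000))      # 1000 voxel/ROI time series
clean = regress_out(ts, friston24(motion))
print(clean.shape)                     # (200, 1000)
```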
Since its launch, DIRECT has encouraged independent investigations. Data sharing was conducted in two phases. In the initial coordinated sharing phase, all researchers who sought access to the dataset needed to submit a written proposal to the consortium review board. The aims and research design of proposals were evaluated to minimize conflicts with already approved research proposals. The consortium also provided technical support for participating sites regarding preprocessing and statistical analysis, as these issues can be challenging for clinical researchers. Through these practices, DIRECT sought to provide a platform that would allow all participating sites to leverage the large R-fMRI database and explore it independently. At the time of writing, DIRECT investigators have published several peer-reviewed research papers. Here, we review the principal findings from these published studies, summarized in Table 1, and discuss the implications of these results and the future directions of DIRECT.

FC abnormalities in MDD
The first DIRECT study (Yan et al., 2019) concentrated on a simple but surprisingly controversial theme: FC within the default mode network (DMN) in depression. The DMN was first recognized as a set of brain regions showing reduced haemodynamic activity during externally directed attention tasks and increased activity during resting state or internally focused tasks (Raichle, 2010; Raichle et al., 2001; Raichle & Snyder, 2007). By consensus, MDD was considered to be characterized by enhanced FC within the DMN, which was also proposed to be a neural mechanism underlying rumination (Greicius et al., 2007; Hamilton et al., 2015; Kaiser et al., 2015). However, previous findings regarding FC within the DMN in patients with MDD were inconsistent (for a review, see Yan et al., 2019). Thus, DIRECT first conducted a mega-analytic investigation, that is, pooling individual-level measures across sites and conducting regression analysis on this pooled dataset. Potentially confounding site effects were corrected with a linear mixed model with a random intercept for sites. Such an analytical approach can boost statistical power to detect subtle effects and allows for flexible control of confounders (Schmaal et al., 2020). The mega-analysis also investigated the effects of certain phenotypes: the number of episodes, medication usage, and illness duration. Contrary to initial assumptions, FC in patients with MDD was significantly lower than in healthy controls (HCs) within the DMN (t = −3.762, P = 0.0002). This effect was only observed in patients with recurrent MDD (t = −3.737, P = 0.0002), and not in first-episode drug-naïve patients (t = −0.914, P = 0.361). Overall, MDD patients were found to be characterized by a general yet subtle decrease of FC within the DMN (Fig. 1A). This contradicted some of the previous literature. However, previous studies showing increased DMN FC in MDD were primarily conducted with Caucasian samples, while the sample from REST-meta-MDD was homogeneously Chinese. Ethnic differences in MDD have been consistently reported. Compared with Caucasians, Asians have lower prevalence rates (Ferrari et al., 2013), more psychosomatic symptoms (Ryder et al., 2008), and different risk genes (Bigdeli et al., 2017). Hence, one critical future direction for DIRECT is to identify potential cultural and ethnic differences by pooling cross-cultural samples with international collaborators (see an example of neurodevelopment from Dong et al., 2020).
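The site-corrected mega-analysis described above can be approximated with a linear mixed model that includes a random intercept per site. The sketch below uses statsmodels on synthetic data; the column names, covariates, and effect sizes are illustrative assumptions, not the consortium's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_sites, n_per_site = 17, 120
rows = []
for site in range(n_sites):
    site_shift = rng.normal(scale=0.05)        # latent site-level intercept
    for _ in range(n_per_site):
        dx = int(rng.integers(0, 2))           # 1 = MDD, 0 = HC
        fc = 0.30 - 0.02 * dx + site_shift + rng.normal(scale=0.1)
        rows.append({"site": site, "dx": dx,
                     "age": rng.uniform(18, 60), "fc": fc})
df = pd.DataFrame(rows)

# DMN FC ~ diagnosis + covariate, with a random intercept for site.
model = smf.mixedlm("fc ~ dx + age", df, groups=df["site"]).fit()
print(model.summary())
```

The coefficient on dx plays the role of the group difference reported above, while the site grouping absorbs scanner- and cohort-level offsets instead of letting them inflate or mask the diagnosis effect.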
Topological abnormalities of functional brain networks in MDD
Subsequently, the topological properties of functional brain networks in patients with MDD were examined (Bullmore & Sporns, 2009; Yang et al., 2021) (Fig. 1B). The individual-level R-fMRI data from the REST-meta-MDD project allowed building a topological network with a predefined brain atlas, i.e., Dosenbach's 160-region atlas (Dosenbach et al., 2010). The effort focused on two essential features of networks, their global (E_glob) and local (E_loc) efficiencies (Rubinov & Sporns, 2010); both were found to be decreased in patients with MDD relative to HCs (Fig. 1B). Accordingly, patients with recurrent MDD may be less able to deal with distress from negative life events due to their disrupted intrinsic functional brain topology, making them vulnerable to relapse into depression. The effects of antidepressant medications may also be involved. In a longitudinal study (Li et al., 2021), FC was decreased in almost all brain networks after patients were administered escitalopram or duloxetine for 8 weeks. Other factors, such as illness duration, may also contribute. Since most recurrent MDD patients have long histories of medication use, future studies will need to collect more information on the medication usage of patients to better identify medication effects on the properties of the functional brain networks of patients with MDD.

Altered dynamic FC in MDD
Recent studies have highlighted the dynamic aspects of intrinsic brain activity and its role in the pathology of MDD (Demirtas et al., 2016; Hou et al., 2018; Wang et al., 2020). However, most of these studies are preliminary and findings have been inconsistent. Accordingly, a comprehensive study was conducted to characterize altered dynamic FC in MDD at both local and global levels (Long et al., 2020; Sizemore & Bassett, 2018). A dynamic network-based framework based on the sliding-window approach was used to estimate several spatio-temporal dynamic network features such as temporal variability, temporal clustering, and temporal efficiency. A total of 460 patients with MDD and 473 HCs were selected for statistical analysis according to their age, education, imaging quality, etc. (for details of the selection criteria, please refer to Long et al., 2020). Results showed significantly increased temporal variability (F = 10.218, P = 0.000216, FDR corrected), a decreased temporal correlation coefficient (F = 15.071, P = 0.0000333, FDR corrected), and a shorter characteristic temporal path length (F = 8.768, P = 0.000314, FDR corrected) in MDD patients (Fig. 1C). These effects were significant in both first-episode drug-naïve (FEDN) and non-FEDN patients. In addition, temporal variability (ρ = 0.111, P = 0.045) and temporal efficiency (ρ = −0.101, P = 0.045) were correlated with Hamilton depression rating scale (HAMD) scores after adjusting for age, sex, and site effects in patients with MDD. These results indicate that MDD patients fail to maintain relatively stable brain networks over periods of time and that some aberrant connections may interfere with normal interactions among brain regions (Sun et al., 2019; Zalesky et al., 2014).
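A bare-bones version of the sliding-window approach behind these dynamic metrics is sketched below: windowed correlation matrices are computed and "temporal variability" is summarized as the dispersion of each connection across windows. The window length, step size, and variability summary are illustrative assumptions; the published framework (Long et al., 2020) uses additional network-level measures such as temporal clustering and temporal efficiency.

```python
import numpy as np

def sliding_window_fc(ts, win=30, step=5):
    """ts: T x N region time series -> stack of windowed N x N correlation matrices."""
    mats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.stack(mats)                      # W x N x N

def temporal_variability(fc_stack):
    """Std of each connection across windows, averaged over the upper triangle."""
    iu = np.triu_indices(fc_stack.shape[1], k=1)
    return fc_stack.std(axis=0)[iu].mean()

rng = np.random.default_rng(2)
ts = rng.normal(size=(200, 160))               # e.g., 160 ROIs (Dosenbach atlas)
print(round(temporal_variability(sliding_window_fc(ts)), 3))
```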
Altered functional lateralization features in MDD
Brain asymmetry has been proposed to be a critical feature of the human brain in both structure and function (Toga & Thompson, 2003). Functional lateralization was characterized with a novel metric, the parameter of asymmetry (PAS) (Ding et al., 2021). The PAS was defined as the difference between the mean inter-hemispheric FC and intra-hemispheric FC for a given voxel. We found significantly increased PAS scores in patients with MDD compared with HCs, indicating decreased hemispheric lateralization (Fig. 1D). On the other hand, interhemispheric functional integration is also an important aspect of the brain's functional architecture that can be examined by a voxel-wise measurement called voxel-mirrored homotopic connectivity (VMHC) (Stark et al., 2008; Zuo et al., 2010). VMHC was compared between 1004 patients with MDD and 898 HCs from the REST-meta-MDD project (Deng et al., 2021). Decreased VMHC in MDD was revealed in a wide range of brain regions, including the posterior cingulate cortex (PCC), medial prefrontal cortex (MPFC), pre-/post-central gyrus, and inferior frontal and occipital gyri (Fig. 1E). Such reduced homotopic resting-state FC may be caused by disrupted structural connectivity, such as reduced fractional anisotropy in the corpus callosum (van Velzen et al., 2019).

MDD subgroups
MDD is a highly heterogeneous disorder, probably containing subgroups that correspond to different pathologies and treatments (Drysdale et al., 2016). Leveraging the REST-meta-MDD sample, MDD patients were categorized into subgroups according to their resting-state FC patterns using a data-driven approach (Liang et al., 2020). K-means clustering divided MDD patients into two groups depending on their within-DMN FC pattern. One group was characterized by enhanced FC within the DMN, especially FC between MPFC and PCC, while the other group featured decreased FC within the DMN (Fig. 1F). These results illustrate a complex pattern of abnormalities in DMN FC, which would be difficult to observe in traditional case-control analyses. Finally, although the REST-meta-MDD project primarily focused on the functional neuropathology associated with MDD, structural alterations in MDD were examined by analyzing the T1-weighted anatomical images collected along with the R-fMRI data (Liu et al., 2021). Specifically, MDD patients were divided into patients with gastrointestinal symptoms (GI group) and those without GI symptoms (non-GI group). GI symptoms are common in MDD and associated with poorer prognosis (Kop, 2012). Results showed significantly different grey matter volume (GMV) in temporal and occipital regions, the thalamus, and prefrontal and postfrontal gyri among GI MDD patients, non-GI MDD patients, and HCs (Fig. 1G). The GI group had increased grey matter density in the bilateral thalamus compared with the non-GI group. Larger grey matter density in the GI group was also found in the right temporal gyrus, fusiform gyrus, and lingual gyrus compared with HCs. These results demonstrated that Chinese patients with MDD who experience GI symptoms have abnormalities in grey matter structures.
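As a hedged illustration of the data-driven subgrouping just described, the sketch below clusters patients by their vectorized within-DMN FC using k-means with k = 2. The feature construction and the synthetic two-subgroup data are assumptions for demonstration only and do not reproduce the analysis of Liang et al. (2020).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_patients, n_dmn = 300, 33        # e.g., 33 DMN regions -> 528 unique edges
iu = np.triu_indices(n_dmn, k=1)

features = []
for i in range(n_patients):
    shift = 0.15 if i < n_patients // 2 else -0.15   # two latent subgroups
    fc = np.clip(rng.normal(0.3 + shift, 0.1, size=(n_dmn, n_dmn)), -1, 1)
    fc = (fc + fc.T) / 2                              # symmetrize
    features.append(fc[iu])                           # vectorized upper triangle
X = np.vstack(features)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))          # subgroup sizes
```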
Once the REST-meta-MDD project entered the unrestricted sharing phase, researchers from outside DIRECT began to conduct additional exploratory analyses. For example, Tozzi and colleagues (2021) re-analyzed within-DMN FC in terms of the three DMN subsystems (Andrews-Hanna, 2012; Andrews-Hanna et al., 2010). Recent empirical evidence showed that the DMN can be fractionated into three subsystems: a core subsystem that corresponds to self-referential thinking; a subsystem anchored in the dorsal medial PFC that corresponds to cognition-related processes; and a subsystem anchored in the medial temporal lobe (the MTL subsystem) that corresponds to autobiographical memory (Andrews-Hanna, 2012; Andrews-Hanna et al., 2010). They found that only FC within the core subsystem was significantly reduced in MDD compared with controls. These results have expanded the research scope of REST-meta-MDD and show the potential of this rich repository of clinical data.

Future Directions for the DIRECT Consortium
In this review, we have briefly described the motivation and evolution of the DIRECT consortium and the main published findings based on its first project, REST-meta-MDD. Through this endeavour, we demonstrated that pooling R-fMRI data across multiple sites with standardized processing protocols can substantially boost statistical power and detect subtle but reliable MDD-related abnormalities in the brain. Furthermore, we established an open-access data repository to make all shared functional data available to the broad scientific community. We hope this will advance discovery-based analyses seeking neuroimaging biomarkers, a deeper understanding of MDD's neuropathology, and the development of novel treatments for MDD. Despite the inspiring and unique findings emerging from the present research based on REST-meta-MDD and DIRECT, further questions can be raised. An important limitation is the exclusively Chinese sample. Thus, one critical next step of the DIRECT consortium is to extend the Chinese sample to other ethnic groups such as Caucasians through international collaborations. Other directions include: (i) improving reproducibility and sensitivity by using a surface-based, state-of-the-art pipeline (DPABISurf) (Yan et al., 2021); (ii) accumulating functional neuroimaging data from other psychiatric disorders such as bipolar disorder and schizophrenia; (iii) longitudinal research targeting the effects of antidepressant medications on brain networks in MDD; (iv) quantifying alterations in structural connections in MDD with diffusion tensor imaging (DTI); and (v) novel individualized neuromodulation approaches (e.g., transcranial magnetic stimulation, TMS) with fMRI-based anatomic targeting.
Identifying effects of different cultural groups
MDD in different ethnic groups has been reported to have different prevalence rates, heterogeneous subtypes, and varied treatment outcomes (Budhwani et al., 2015; Lee et al., 2014; Lesser et al., 2007). As the planet's largest ethnic group, the Chinese have been reported to exhibit lower rates of depression (Huang et al., 2019; Parker et al., 2007; Parker et al., 2001). The reasons for this are the focus of continuing debate. Some claim that the Chinese tend to express depression somatically and deny feelings of distress (Qiu et al., 2018). From the cultural viewpoint, some argue that Chinese beliefs and ways of responding to emotions (i.e., holistic thinking styles) make Chinese people less vulnerable to the negative effects of distress (De Vaus et al., 2018). Genetic factors may also contribute. The s/s allele of the serotonin transporter is more prevalent in East Asians (45-74%) compared to Caucasians (12-24%) (Goldman et al., 2010). Furthermore, the s/s genotype is associated with a higher risk of MDD in Caucasians but not in Asians (Kiyohara & Yoshimasu, 2010). Thus, results from an exclusively Chinese sample may not generalize to other ethnic groups. In 2012, the international Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) initiative launched its MDD consortium to identify neuroimaging alterations associated with MDD and their modulators (Schmaal et al., 2020). So far, neuroimaging data from more than 9000 HCs and 4000 MDD patients have been accumulated by the ENIGMA MDD consortium. Initial attempts have been made to pool data from DIRECT and ENIGMA MDD to identify potential ethnic and cultural factors in the neuropathology of MDD (e.g., directly comparing whole-brain FC maps between Caucasian and Chinese MDD patients). We believe direct comparison across a large cross-cultural sample can provide unprecedentedly powered evidence for the contribution of ethnic factors to the structural and functional alterations associated with MDD.

Towards surface-based analyses
Obtaining more reliable and reproducible findings from functional neuroimaging data has become a major challenge for the field (Botvinik-Nezer et al., 2020; Chen et al., 2018). Preprocessing of R-fMRI data is complex and contains numerous steps to yield clean data for further statistical analyses. Outdated and flexible ad hoc preprocessing pipelines have been shown to decrease the quality and consistency of results (Esteban et al., 2019). In the REST-meta-MDD project, this issue was addressed by adopting a standardized, volume-based preprocessing pipeline based on DPARSF. However, recent methodological benchmarks have highlighted the drawbacks of volume-based preprocessing approaches, calling for a transformation to surface-based approaches (Coalson et al., 2018; Zuo et al., 2013). One obstacle to applying state-of-the-art surface-based approaches in a multi-site consortium like DIRECT is the lack of a 'turn-key' toolbox. Accordingly, the DPABISurf pipeline was developed with a user-friendly graphical user interface that requires no scripting skills from users (Yan et al., 2021). DPABISurf is the latest upgrade of the widely used preprocessing pipeline DPARSF/DPABI (Yan et al., 2016; Yan & Zang, 2010) and follows the same design concept. On the basis of this pipeline, future pooling of preprocessed time series in DIRECT will be produced with a surface-based approach, which should enhance the reliability and sensitivity of future DIRECT studies.
Transdiagnostic investigation
Evidence from ENIGMA has implicated a shared structural abnormality pattern among MDD, bipolar disorder, and schizophrenia (Schmaal et al., 2020). Genome-wide association studies have also suggested that implicated genes may have pleiotropic effects across disorders (Huang et al., 2010). Furthermore, the differential diagnosis of bipolar disorder and MDD has long been a challenge for clinicians, indicating an overlap of clinical presentation between these disorders (Hirschfeld, 2014). Thus, investigating and characterizing shared and unique functional/structural brain alterations across these disorders may be particularly important and can help develop image-based diagnostic biomarkers to assist differential diagnosis. DIRECT is building a transdiagnostic dataset together with new participating research groups. Preliminary analyses to explore similarities as well as differences among these disorders regarding brain function and structure are anticipated.

Identifying longitudinal effects of antidepressant medications
One contribution of a pooled large-scale R-fMRI data repository is to generate hypotheses for future longitudinal studies. Effect sizes of studies using a within-subject design are larger than those using a cross-sectional between-subject design (Chen et al., 2018). However, longitudinal studies require more resources and a targeted design, so a sufficient prior knowledge base is needed to narrow the exploration scope. The present DIRECT studies have highlighted the effects of antidepressant medications on MDD patients' functional brain networks, especially the DMN. To test this, the effects of antidepressant treatment were studied in a group of 41 first-episode drug-naïve patients with MDD who were administered escitalopram or duloxetine for 8 weeks (Li et al., 2021). FC within and among brain networks was generally decreased after antidepressant treatment, confirming the findings from the original DIRECT studies. The longitudinal effects of antidepressant medications on large-scale brain networks will be the focus of a future study that is being planned.

Exploring white matter alterations in MDD
DTI is an effective in vivo technique to investigate the white matter microstructural properties of psychiatric patients (Rae et al., 2012).

Developing network-targeted neuromodulation therapy
Neuromodulation techniques, especially TMS, have the potential to treat MDD (Lefaucheur et al., 2020). Initial findings from DIRECT highlight the critical role the DMN plays in the neurophysiology of MDD. DMN abnormalities in MDD have long been associated with rumination, a passive and repetitive thinking style that is common in MDD patients (Hamilton et al., 2015; Kaiser et al., 2015). A recent hypothesis-driven study found that FCs between the core and MTL subsystems were enhanced during rumination, while FCs between the core and DMPFC subsystems were reduced (Chen et al., 2020). Further analyses showed that the dynamic stability of the DMN was also decreased during rumination (Chen & Yan, 2021). These findings indicate that it might be possible to inhibit rumination by directly modulating DMN FC patterns through novel neuromodulation approaches such as TMS. Current TMS approaches show promising antidepressant effects, but effect sizes are modest and treatment duration is long (Lefaucheur et al., 2020).
Transforming present scalp-based targeting to individualized fMRI-guided targeting may improve the efficiency of TMS (Cash et al., 2020). Indeed, one recent double-blinded randomized controlled trial (Cole et al., 2021) found that targeting an individualized left dorsolateral prefrontal cortex (DLPFC) region that is anticorrelated with the subgenual anterior cingulate cortex was highly effective (remission rate 79%), indicating the feasibility of generating individualized TMS targets in relation to specific brain networks. Future DIRECT research intends to develop target-searching algorithms according to the subsystem mechanisms underlying rumination and to set up a clinical trial to test the antidepressant effects of such neuromodulation therapy.

Conclusion
In sum, the DIRECT consortium has accumulated an unprecedentedly large functional neuroimaging repository by initiating the REST-meta-MDD project. Studies based on this dataset have provided highly powered evidence for the field of the neuropathology of MDD, which has been beset by contradictory results. Furthermore, some intriguing insights have emerged from initial analyses. The second stage of data sharing under the framework of DIRECT is underway and several longitudinal studies based on hypotheses from REST-meta-MDD have been launched. We hope these endeavours will advance the translation of neuroimaging studies to clinical practice.

Figure 1: Principal findings from DIRECT studies. (A) Reduced FC within the DMN is revealed in patients with MDD compared to HCs (Yan et al., 2019). (B) Both decreased global efficiency (E_glob) and local efficiency (E_loc) are found in the MDD vs. HC contrast (Yang et al., 2021). (C) Alterations in temporal dynamic properties (increased variability, decreased temporal correlation coefficient, and shorter characteristic temporal path length) are observed in patients with MDD as compared to HCs (Long et al., 2020). (D) Altered PAS scores are primarily observed in the DMN (red), VN (blue), FPCN (yellow), and ventral and dorsal attention networks (green) in the MDD vs. HC contrast (Ding et al., 2021). (E) Reduced VMHC was found in DMN, VN, and SMN regions in the MDD vs. HC contrast (Deng et al., 2021). (F) Patients with MDD can be clustered into two subgroups according to FCs within the DMN (Liang et al., 2020). (G) Temporal and occipital regions, the thalamus, and prefrontal and postfrontal gyri show differences in GMV among GI MDD patients, non-GI MDD patients, and HCs (Liu et al., 2021). Abbreviations: PAS, parameter of asymmetry; GI, gastrointestinal; MDD, major depressive disorder; HC, healthy control; DMN, default mode network; VN, visual network; FPCN, fronto-parietal control network; SMN, somato-motor network; FC, functional connectivity; VMHC, voxel-mirrored homotopic connectivity.
Reply on RC1

Major issues

(1) The major weakness of the paper is that it does not even mention the results of the other Zeppelin flights and the ground measurements during the two campaigns. I do appreciate the in-depth analysis of the case studies, but it is not clear if they represent what happens most of the time in these two areas or if they are very special days. I think a section summarizing the results of all flights and their similarities (or differences) with the case studies discussed is needed.

ANSWER: The detailed measurements with the NAIS and APiTOF were only available from the nucleation layout flights. These included 5 flights in Italy and 6 flights in Finland. The NPF event was fully captured on only some of those days, which leaves us with the case studies. Even though the time of the year and the meteorology represent a situation when NPF usually happens in the Po Valley and Hyytiälä, we acknowledge that the case studies may not represent the typical case of NPF. We will try to emphasize this more in the introduction and conclusions, stating that the results are from case studies. Studying the average profile from the roughly 30 flights in the Po Valley and Hyytiälä using the SMPS data and comparing it to other measurements is probably best done in a separate manuscript.

(2) Despite the presence of relatively high levels of sulfuric acid in the residual layer above the Po Valley, there was no NPF there (Figure 4c). This is an interesting observation that deserves some discussion. I understand that the Zeppelin was not measuring the concentrations of gas-phase pollutants during this flight, but my understanding is that the authors have some measurements from other flights in the campaign. What was different in the RL? They suggest that maybe there was not enough ammonia there. However, the presence or lack of VOCs is probably worth some discussion using the observations of VOCs in that region during other flights in the campaign.

ANSWER: We added the following piece of discussion about VOCs: "In addition, oxidized VOCs are important for aerosol particle growth (Ehn et al., 2014). VOCs were measured on board the Zeppelin in the Po Valley in 2012 and the results showed higher VOC concentrations close to the ground (Jäger, 2014). This may at least partly explain why we measured increased concentrations of intermediate ions in the RL but they did not grow to larger sizes in any significant quantities."

(3) I was surprised by the measured spatial extent of NPF in Hyytiälä. According to the measurements, it is taking place in a relatively narrow area of 30-40 km around the station and not over scales of hundreds of kilometers, as has sometimes been assumed. However, there is little discussion of what is happening in this relatively narrow corridor that leads to NPF and what is missing outside it, where NPF is not happening.
To be more provocative: are all of these NPF observations over the years in Hyytiälä referring to something that is quite limited in space and covers only a small fraction of the boreal forest?

ANSWER: We studied this phenomenon further in a separate paper that was published in 2020 and found that these narrow zones of NPF seem to be related to locally enhanced NPF caused by organized convection in the BL, more specifically roll vortices (Lampilahti et al., 2020).

(4) There is little discussion of the measurements of the composition of the smallest particles during these flights.

ANSWER: The composition of the particles in the sub-20 nm range could not be determined with the instruments on board. With the APiTOF we were able to detect [HSO4]- ions and used them as an estimate for sulfuric acid in the gas phase, as this only required one or two distinct peaks that were relatively easy to spot. However, due to low signal and changes in pressure, other interesting compounds like organic molecules could not be reliably detected, and these data were not included in the manuscript.

Minor points

(5) I had some difficulty with Figure 3b (SO2 in Hyytiälä) and Figure 3c (CS in Hyytiälä) until I realized that the y-axis includes negative concentrations. I strongly suggest starting these axes from zero. Also, does the N axis in Figure 3c start from zero or from another value?

ANSWER: We changed the axes to start from zero.

(6) The legend of Figure 3 should mention that these are ground measurements.

ANSWER: We added this to the caption.
Case study of a Hungarian breeding program using imported Booroola rams

The first major gene for prolificacy identified in sheep was the Booroola (FecB) gene. Since the recognition of its existence, the Booroola Merino has spread all over the world. In Hungary, a new breed, called the Hungarian Prolific Merino, had been established based on the crossing of Hungarian Merino ewes and Booroola Merino rams, and was acknowledged in 1992. For a long time, the only way to determine FecB genotypes was the measurement of the ovulation rate over an extended period. In 2001, the Booroola mutation was identified: a mutation in the bone morphogenetic protein receptor-1B gene was found to be associated with the increased ovulation rate of Booroola Merino ewes. In the Hungarian Prolific Merino population, 138 ewes and 46 rams were tested for this mutation by PCR-RFLP and their FecB genotypes were determined. One copy of the Fec^B allele increased (P<0.05) the ovulation rate by 0.89 ova and two copies increased it by an average of 2.27 ova. The effectiveness of FecB genotype estimation based on phenotype measurement was also compared to the results of direct DNA testing and was found to have up to 80% accuracy.

Introduction
Profitability of sheep breeding is mainly determined by litter size. Selection for prolificacy based on phenotype has a low genetic gain (SAFARI and FOGARTY, 2003). (Another possibility is conducting selection using breeding values, i.e., BLUP.) Beyond this, some investigations deal with the effect of the Booroola gene on the quality of the end products (KLEEMANN et al., 1988; SUESS et al., 2000). Based on the national data set of the Hungarian Merino sheep, an insignificant genetic trend was found in litter size (NAGY, 2000). Major genes for production traits provide opportunities for large and rapid increases in the efficiency of sheep production. The first major gene for prolificacy identified in sheep was the Booroola (FecB) gene in Australia, which has additive effects on ovulation rate and is dominant for litter size (DAVIS et al., 1982; PIPER and BINDON, 1982). After the recognition of the existence of this single gene, the Booroola Merino spread all over the world. In Europe, Hungary was the first country to import Booroola Merino rams and ewes (VERESS, 1983). A new breed, called the Hungarian Prolific Merino, was established based on the crossing of Booroola Merino rams and Hungarian Merino ewes, and was acknowledged in 1992. The aim of the breeders was to create and maintain a flock homozygous for the FecB locus and use it in crossbreeding programmes (VERESS et al., 1987). To date, carriers of the Fec^B allele have been identified on the basis of ovulation rate records in the case of ewes, and extensive progeny testing in the case of rams. This method is time and labour consuming, which hinders its practical application. Until the Booroola gene itself was identified, genetic markers linked to the FecB locus could assist in the introgression of the Fec^B allele into new breeds. The FecB locus has been assigned to sheep chromosome 6 (MONTGOMERY et al., 1993) and localised in a region of 10 cM between two microsatellites, BM1329 and OarAE101 (LORD et al., 1998). The suitability of these microsatellites as markers for the identification of Fec^B carriers was investigated in different countries, and they were found to represent an efficient and robust genotyping system (LEYHE-HORN et al., 1998; GOOTWINE et al., 1998; WEIMANN et al., 2001). Based on these results, the BM1329 and OarAE101 microsatellites were tested in the Hungarian Prolific
Merino sheep. Unfortunately, these markers were not suitable for identifying the carriers of the Fec^B allele (ÁRNYASI et al., 2003) because of the low level of heterozygosity and the relatively low population size. In 2001, a point mutation at position 830 of the bone morphogenetic protein receptor-1B gene (GenBank accession number AF312016) was found by different research groups to be associated with increased ovulation rate in Booroola Merino ewes (WILSON et al., 2001; SOUZA et al., 2001; MULSANT et al., 2001). This nucleotide substitution results in a change from glutamine in the wild type to arginine in the Booroola animals, which leads to a partial inactivation of BMPR-1B, resulting in the 'precocious' development of a large number of small antral follicles. The aim of this study was to demonstrate that the high ovulation rate in the Hungarian Prolific Merino is caused by the mutation in the BMPR-1B receptor gene and to compare the effectiveness of the genotyping method based on ovulation rate with the results of the direct gene test.

Materials and Methods
The Hungarian Prolific Merino population, which is bred at the research farm of the University of Debrecen, Centre of Agricultural Sciences, was involved in the investigation.

Estimation of FecB genotypes by indirect methods
Before the identification of the FecB mutation, the FecB genotypes were estimated by counting corpora lutea during laparoscopic examination of the ewes. The ovulation rate (OR) data were collected from ewes during natural oestrus cycles in autumn, as described by MAGYAR (1994). The FecB genotypes of 90 ewes (group A) were estimated and later compared to the results of the direct gene test. The animals (Table 1) were classified as homozygous non-carriers (Fec^+ Fec^+) with an OR of 2 or less, heterozygous carriers (Fec^B Fec^+) with an average OR of 3, and homozygous carriers (Fec^B Fec^B) with an average OR of 4 or more (VERESS et al., 1998). In the case of rams, different sources of information were available to estimate the FecB genotypes. In 1986, 4 Booroola rams were imported from New Zealand. These animals (group B) were genotyped based on the OR of their mothers and the estimated FecB genotypes of their fathers by the breeders in New Zealand. Three Booroola Merino and 8 Hungarian Prolific Merino rams (group C) were classified based on the OR data of their daughters from 1988-1993. In this case, the ovulation rate of the daughters was measured after PMSG treatment at 6 months of age, as described by VERESS (1991). From 1986-1996, the FecB genotypes of a further 10 rams (5 Booroola and 5 Hungarian Prolific Merino) (group D) were estimated. Their genotypes were determined based on their pedigree and the OR data of their daughters. In this case, the OR was measured in a normal oestrus cycle without PMSG treatment (MAGYAR et al., 1999). The FecB genotypes of 25 rams were estimated altogether. The indirect methods used for the estimation of FecB genotypes are summarised in Table 1. The PCR-RFLP technique using the primers and restriction enzyme (AvaII) described by WILSON et al.
(2001) was used for the detection of the mutation in the bone morphogenetic protein receptor type 1B (BMPR-1B) gene. DNA was isolated from blood and semen samples as reported by ZSOLNAI and FÉSÜS (1996). For PCR, the 10 µl reaction contained 20-100 µM genomic DNA, 10x PCR buffer, 1.5 mM MgCl2, 0.2 mM of each primer, and 0.25 U Taq DNA polymerase (Promega). The PCR profile included an initial denaturation of 1 min at 94°C followed by 35 cycles of 15 s at 94°C, 30 s at 65°C, and 30 s at 72°C. The fragments were resolved on a 4% agarose gel and scored for the mutation.

Statistical Analyses
Least squares means of the OR in the three FecB genotype groups (Fec^B Fec^B; Fec^B Fec^+; Fec^+ Fec^+) were calculated. The Mixed Model Least-Squares and Maximum Likelihood Computer Programme PC-2 was used for the ANOVA (HARVEY, 1999). The FecB genotype determined by the direct DNA test, the year, and the age of the ewes at the time of the OR measurement were included in the model as fixed effects. The ovulation rate data (115 measurements altogether) of 64 pedigree ewes (Table 2) were involved in the analysis of variance because of the availability of the fixed-effects data. The accuracy of the estimation of FecB genotypes was calculated by comparing the estimated genotypes to the genotypes determined by the direct gene test. The correspondence was expressed as a percentage.

Ovulation rate distribution
The arithmetical mean of the OR was 3.12. Of the measured ORs, 8.70% were 1, 33.91% were 2, and 57.39% were 3 or more, indicating the presence of the FecB mutation in the population (Table 3). The FecB genotype was found to have a significant effect on OR at a level of P<0.05. Differences were significant between all three genotypes at a level of P<0.05 (Table 4). In the case of the 64 Hungarian Prolific Merino ewes, the ovulation rate was increased by 0.89 by one copy of the Fec^B allele. Homozygous carriers had an ovulation rate higher than that of non-carriers by 2.27.

Effectiveness of the estimation of the FecB genotypes
The effectiveness of the genotyping methods based on phenotype measurement was calculated by comparison with the results of the direct DNA test (Table 5). In the case of ewes, the estimated FecB genotypes were determined accurately in 80% of cases and overestimated in 20%. In the case of rams, large differences were found between the genotypes determined by the direct DNA test and the results of estimation based on the progeny test or the parents' genotypes. The FecB genotypes of the 4 rams (group B) imported from New Zealand were estimated as homozygous carriers (Fec^B Fec^B) by the Australian breeders. Applying the direct DNA test, one of the four rams was proved to be a homozygous carrier and three of them were genotyped as Fec^B Fec^+. In the case of five rams out of the 11 rams of group C, the results of the direct gene test and the classified genotypes were in agreement. One heterozygous ram was misclassified as homozygous, and five Fec^B Fec^+ rams were classified as homozygous carriers. In this case, the estimation of the FecB genotypes was accurate in 45% of cases (Table 5). In the case of eight rams of group D, the estimated FecB genotypes corresponded to the results of the DNA test. The genotypes of two rams were wrongly assumed. The accuracy of the estimation of the FecB genotype (80%) was found to be similar to the results for ewes.
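A hedged sketch of the kind of fixed-effects model described under Statistical Analyses (OR modelled on genotype, year, and age) is given below. It uses Python/statsmodels on fabricated records rather than the PC-2 programme and the study data; the genotype effects are seeded with the values reported above purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
genos = ["Fec+Fec+", "FecBFec+", "FecBFecB"]
effect = {"Fec+Fec+": 0.0, "FecBFec+": 0.89, "FecBFecB": 2.27}
rows = [{"genotype": g, "year": int(y), "age": int(a),
         "OR": 1.9 + effect[g] + rng.normal(scale=0.5)}
        for g, y, a in zip(rng.choice(genos, 115),
                           rng.integers(1994, 2004, 115),
                           rng.integers(2, 8, 115))]
df = pd.DataFrame(rows)

# OR ~ genotype + year + age, all treated as fixed effects.
fit = smf.ols("OR ~ C(genotype) + C(year) + age", data=df).fit()
print(fit.params.filter(like="genotype"))   # estimated genotype contrasts
```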
Distribution of the FecB genotypes in 2003
The distribution of the present female Hungarian Prolific Merino population based on their FecB genotype determined by the direct gene test is the following: 37.41% are Fec^B Fec^B homozygous, 43.88% are Fec^B Fec^+ heterozygous, and 18.71% are Fec^+ Fec^+ homozygous. Regarding the whole flock, the proportion of animals carrying the Fec^B allele in homozygous form is 38.9% and in heterozygous form 44.8% (Table 6).

Discussion
The mutation in the BMPR-1B receptor gene is proved to result in the high ovulation rate in the Hungarian Prolific Merino population on the basis of the results of the one-way ANOVA between ovulation rate and FecB genotypes. Although PIPER et al. (1985) published that the Booroola mutation has an additive effect on ovulation rate, different results were obtained in our experiment. The ovulation rate of homozygous carriers was more than twice that of non-carriers. This phenomenon could be explained by an allele-allele interaction between the wild-type (Fec^+) and the Fec^B allele. A similar case was observed in crosses of Inverdale and Booroola Merino by DAVIS et al. (1999). When the two genes, FecI and FecB, were in combination, the OR was higher than the sum of the effects of each gene alone. The distribution of the three FecB genotypes in the female and male populations reflects that the aim of the breeders has not yet been fulfilled, since less than half of the whole population is homozygous Fec^B Fec^B. In this study, great differences were found between determination of the FecB genotype by the direct DNA test and estimation of the genotype based on ovulation rate in the female population. The FecB genotype was overestimated in 20% of cases by the breeders. This partly explains the high frequency of the Fec^B Fec^+ heterozygotes and the Fec^+ Fec^+ non-carriers in the Hungarian Prolific Merino population. Moreover, the FecB genotypes of the rams were also estimated improperly based on their progeny tests. The accuracy of the estimation of the FecB genotype differed depending on the PMSG treatment in the progeny test. Classification was found to be more accurate when it was based on the daughters' OR results measured in a normal oestrus cycle, compared to the methods in which PMSG treatment was applied. Although several research groups previously used PMSG treatment in progeny tests to estimate the FecB genotype (CLEVERDON and HART, 1981; DAVIS and KELLY, 1983; OLDHAM et al., 1984; DAVIS and JOHNSTONE, 1985), this method proved unreliable in our experiment. Rams with the genotype Fec^+ Fec^+ or Fec^B Fec^+ were used for a long time because of the misclassification. This also gives a good explanation for the high frequency of the non-carriers and the heterozygous (Fec^B Fec^+) animals in the present population. No other publications were found to report data similar to those presented here. In conclusion, the results support that the high ovulation rate is caused by the mutation in the BMPR-1B receptor gene in the Hungarian Prolific Merino. The efficiency of the breeding programme to increase the frequency of the Fec^B allele in this population was hindered by the previously applied improper genotype classification. Application of the direct DNA test will accelerate the spread of the Fec^B allele in the flock. At present, an experiment is under way to explore the connection between endocrinological processes (e.g., IGF and leptin levels) on the one hand and seasonality and litter size on the other.
Table 1: Summary of the indirect methods used for the estimation of FecB genotype (Übersicht der indirekten Methoden zur Schätzung des FecB Genotyps).
Table 2: Summary of the number of animals having FecB genotypes determined by different methods (Tierbestand mit dem FecB Genotyp, bestimmt mit verschiedenen Methoden). *OR data of 64 of the 90 ewes were involved in the ANOVA.
Table 5: Accuracy of estimation of FecB genotypes based on phenotype measurement compared with the results of the direct gene test (Genauigkeit der Schätzung der FecB Genotypen aufgrund der phänotypischen Messungen im Vergleich mit dem direkten Gen Test).
Table 6: Distribution of the breeding animals based on their FecB genotype in the Hungarian Prolific Merino population (Verteilung der Zuchttiere auf Grund ihrer FecB Genotypen in der Ungarischen Fruchtbaren Merino Population).
Quenched Disorder From Sea-Bosons

The degenerate Fermi gas coupled to a random potential is used to study metal-insulator transitions in various dimensions. We first recast the problem in the sea-boson language, which allows for an easy evaluation of important physical attributes. We evaluate the dynamical number-number correlation function and from this compute the a.c. conductivity. We find that the d.c. conductivity vanishes in one and two dimensions. For a hamiltonian that forbids scattering of an electron from within the Fermi surface to another state within the Fermi surface, we find that there is no metal-insulator transition in three dimensions either.

Introduction
In a series of published works [1][2], and a recent preprint [3], we showed how to extract the anomalous exponents in the case of the Luttinger model using sea-bosons. This paves the way for application of the amended sea-boson theory, which is now powerful enough to reproduce most of the exactly known results in 1d, to other systems such as electrons with quenched disorder, with and without Coulomb interactions, in various dimensions. The relevant literature on this subject is vast and we shall not attempt to be exhaustive in surveying it. Anderson's pioneering work on localization [4] was followed by the work of Abrahams et al. [8], and later a more rigorous formulation of the notion of disorder averaging was given by McKane and Stone [6]. This relates to a single electron in a disordered potential. The classic review of Lee and Ramakrishnan [7] includes many references on the literature concerning the degenerate electron gas in a disordered potential. A more recent review is by Abrahams et al. [8].

Number-Number Correlation Function
Eventually, we would like to compute the a.c. conductivity at absolute zero. Unfortunately, this quantity is rather difficult to compute. This is because it involves first calculating the dynamical number-number correlation function. This latter function has proved very difficult to evaluate. Before we evaluate this quantity, we would like to say a few words about how the dynamical number-number correlation enters the picture. It is defined as follows. Notice that the a.c. conductivity is related to the dynamical total momentum-momentum correlation function. This formula was derived in an earlier preprint [3]. The momentum-momentum correlation function in turn may be related to the dynamical number-number correlation function. In a recent preprint [3], we provided some hints as to how one might go about computing the number-number correlation function for the interacting system. It involves functional differentiation of the average momentum distribution with respect to sources that couple to the number operator. When this is done carefully, we find the following appealing form of the dynamical number-number correlation function. Here the various quantities are defined recursively (with $m, n = A, B$). One could take the point of view that $S^{0}_{mn}(kt; k't')$ is evaluated by assuming that the $a_{k}(q)$ are canonical bosons, dropping all the square roots, and so on. The reason is that the corrections caused by fluctuations in the momentum distributions are included in the exponential prefactors. These quantities are also defined recursively, with $m, n = A, B$.

The Toy Hamiltonian
Here we couple the free Fermi gas to a disorder potential and compute the a.c. conductivity. The diagonalization is rendered trivial in the sea-boson language.
However, the formula for the dynamical number-number correlation function in terms of the bosons is very nontrivial and can therefore be expected to lead to nontrivial results. The above hamiltonian describes electrons close to the Fermi surface interacting with the disorder potential. However, notice that no externally chosen cutoff is needed. A natural smooth cutoff emerges by not linearizing the bare fermion dispersion. In the Fermi language, Eq. (8) is equivalent to the following hamiltonian. Thus the toy hamiltonian Eq. (9) describes electrons coupling to the disorder potential near the Fermi surface in such a way that processes that take an electron below the Fermi surface and place it in another state also below the Fermi surface, or both above the Fermi surface, are forbidden. We shall see that in this case there is no metal-insulator transition in any dimension. However, we reproduce the results that in one and two dimensions the d.c. conductivity is zero. This hamiltonian may be trivially diagonalized by the following transformation. Thus we have, also for the number fluctuations, the following quantities, which we may compute. In an earlier preprint we showed that the real part of the a.c. conductivity may be written as given there.

A.C. Conductivity
The disorder-averaged a.c. conductivity for Gaussian disorder may be written down in closed form. Using Mathematica we find the following. In one dimension, further simplification is not possible. In two dimensions, the result may be approximately evaluated as follows. It can be seen that the zero-frequency limit of the resulting expression is zero, since the integral vanishes exponentially fast, $\sim e^{-c_{0}/\omega}/\omega^{2}$. Thus the d.c. conductivity of a two-dimensional system is zero and the frequency dependence is rather nontrivial. Similarly, we may expect that in one dimension the d.c. conductivity vanishes. Unfortunately, for a similar reason, we find that the d.c. conductivity in three dimensions also vanishes. This means we have to include terms beyond what Eq. (9) does. Perhaps the reader can do this or at least offer to collaborate with the author. Please contact me at gsetlur@imsc.res.in.

Some Technical Musings
It appears that the mathematical literature on the subject of quantum particles in random potentials is vast [9]. It is possible, indeed likely, that many mathematically rigorous results are known regarding this problem. But this does not prevent the author from making some remarks that more knowledgeable readers may choose to critique. In particular, the author is uncomfortable with the notion of disorder averaging. Nature chooses its potentials based on the distribution of impurities, defects and so on. This potential is fixed and well-defined for a particular distribution of these imperfections. The physicist's ignorance of the precise nature of this potential is not a license to average over these potentials. Nature does not average, people do. But are people justified in averaging? In other words, can averaging simplify the problem without washing out essential physics? In order to answer this question we have to make the following conjectures.

Defn0: Let $U_{d}$ be the set of all potentials $U(x)$ in a fixed spatial dimension $d$.

Defn1: Let $F_{d}$ be the set of all potentials $U(x)$ in a fixed spatial dimension $d$ that have the following property: they all lead to the same exponent $\delta$ for the frequency dependence of the a.c. conductivity.
In other words, each of these potentials predicts that $\mathrm{Re}[\sigma(\omega)] \sim \omega^{\delta}$ (in some region of $\omega$, possibly with some additive part independent of $\omega$) with the same $\delta$. If Conjecture1 (namely, that nearly every potential in $U_{d}$ belongs to $F_{d}$) is valid, then one may average over all these 'sufficiently erratic' potentials and expect to extract $\delta$, which is all that physicists care about. It is possible that $\delta$ may be extracted from a numerical solution of the Schrodinger equation using a specific $U$ that belongs to the set $F_{d}$. But this would involve using the computer for more than checking one's email, and not everyone likes that.

Defn2: Let $M_{3}$ be the set of all potentials $U(x)$ in spatial dimension $d = 3$ that have the following property: they all lead to the same mobility-edge exponent $\beta$. In other words, each of these potentials predicts that $\sigma_{\mathrm{d.c.}} \sim (E_{F} - E_{c})^{\beta}\,\theta(E_{F} - E_{c})$ with the same $\beta$. However, for different potentials, $E_{c}$, the mobility edge, may be different. If Conjecture2 (namely, that nearly every potential in $U_{3}$ belongs to $M_{3}$) is valid, then one may average over all these 'sufficiently erratic' potentials and expect to extract $\beta$.

Thus the validity of the process of averaging over potentials rests crucially, it seems, on all these sufficiently erratic potentials predicting the same exponents and on these sufficiently erratic potentials spanning nearly all possible potentials. If both conditions are satisfied, then one may average over all potentials and extract the exponents, or, if one is better at programming, choose a particular potential from this set, numerically solve the Schrodinger equation, and extract the exponents from there. In either case we should get the same answer. A final conjecture seems appropriate: if two such potentials yield exponents $\beta, \delta$ and $\beta', \delta'$, then $\beta = \beta'$ and $\delta = \delta'$. In other words, these exponents are unique.

With powerful computers now available, purely analytical methods such as this work may seem passé, but a closed formula for the a.c. conductivity that one can stare at (and one that is hopefully right) and admire has a charm that a cold data file on the hard disk is unable to duplicate. Besides, with Coulomb interaction the problem becomes intractable numerically; however, one may expect to combine the sea-boson method with the present one to extract the exponents analytically.
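As a quick cross-check of the small-frequency claim made in the two-dimensional case above (that an expression behaving as $e^{-c_{0}/\omega}/\omega^{2}$ vanishes as $\omega \to 0^{+}$), the following sketch confirms the limit symbolically. Only the quoted asymptotic form is assumed; the constant $c_{0}$ is left symbolic and the full conductivity formula is not reconstructed.

```python
import sympy as sp

w, c0 = sp.symbols("omega c_0", positive=True)
expr = sp.exp(-c0 / w) / w**2

# The essential singularity e^{-c0/omega} beats any inverse power of omega,
# so the d.c. (omega -> 0+) limit is zero, as claimed in the text.
print(sp.limit(expr, w, 0, dir="+"))   # -> 0
```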
Prefrontal Hemodynamics in Toddlers at Rest: A Pilot Study of Developmental Variability

Functional near infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging modality. Although it is amenable to use in infants and young children, there is a lack of fNIRS research within the toddler age range. In this study, we used fNIRS to measure cerebral hemodynamics in the prefrontal cortex (PFC) in 18- to 36-month-old toddlers (n = 29) as part of a longitudinal study that enrolled typically developing toddlers as well as those "at risk" for language and other delays based on the presence of early language delays. In these toddlers, we explored two hemodynamic response indices during periods of rest, during which time audiovisual children's programming was presented. First, we investigate the Lateralization Index, based on differences in oxy-hemoglobin saturation between the left and right prefrontal cortex. Then, we measure the oxygenation variability (OV) index, based on variability in oxygen saturation at frequencies attributed to cerebral autoregulation. Preliminary findings show that lower cognitive (including language) abilities are associated with fNIRS measures of both lower OV index and more extreme Lateralization Index values. These preliminary findings show the feasibility of using fNIRS in toddlers, including those at risk for developmental delay, and lay the groundwork for future studies.

INTRODUCTION
Functional near infrared spectroscopy (fNIRS) is a non-invasive, affordable, compact instrument for measuring functional brain activity. Compared to fMRI or PET, fNIRS is less susceptible to motion artifacts, making it an appropriate alternative for acquiring brain activity data in infants and young children. Multiple studies have used fNIRS to investigate cerebral hemodynamics in both typical infants and infants at risk for neurodevelopmental disorders (e.g., Keehn et al., 2013; Gomez et al., 2014; Lloyd-Fox et al., 2015) as well as in older preschoolers (e.g., Perlman et al., 2016; Li et al., in press). The current study focuses on the more challenging toddler period between 18 and 36 months of age, when language delay and other developmental problems are first noted and tools measuring neural correlates of potential delays are needed. fNIRS uses light in the near infrared range to measure changes in the concentration of oxy-hemoglobin (HbO) and deoxy-hemoglobin (Hb) in cortical regions. Since neural activity is associated with increased demand for HbO, changes in light absorption in the near infrared range are reflective of brain activation (Lam et al., 1997; Yodh and Boas, 2003; Boas et al., 2004; Gratton et al., 2005). Therefore, like fMRI, fNIRS infers activation from the hemodynamic response. Previous studies have indicated a strong correlation between fNIRS and fMRI signals (Huppert et al., 2006; Sassaroli et al., 2006; Amyot et al., 2012). By capitalizing on its ease of use and localizable cerebral hemodynamic signals, fNIRS can be used to characterize the neural substrates of developmental differences in toddlers. Language delays in the first three years of life indicate risk for later diagnoses of autism spectrum disorder (ASD), intellectual disability, and specific language impairment (Michelotti et al., 2002). Identifying characteristics related to brain function that accompany language or more general delay can improve our understanding of the neurobiological timeline within which such delays and their sequelae occur.
In the current study, we investigate both the feasibility of fNIRS within the toddler years as well as two potential metrics for detection of individual variation in neural activity. First, we analyze the oxygenation variability (OV) index, a measure of change in hemodynamic response in the frequency range associated with cerebral autoregulation (CA). Various physiological mechanisms can result in hemodynamic oscillations at specific frequencies (Obrig et al., 2000; Sassaroli et al., 2012). Specifically, CA maintains cerebral blood flow (CBF) by means of vasoconstriction and vasodilation (vasomotion), and is thus a necessary process for precise regulation of cerebral hemodynamics and circulation. Cerebral autoregulation is related to brain function in typical development (Chiron et al., 1992; Udomphorn et al., 2008; Cipolla, 2009; Kilroy et al., 2011; Anderson et al., 2014) and has been linked to poor developmental and cognitive outcomes in children and adults (Muizelaar et al., 1991; Lam et al., 1997; Udomphorn et al., 2008; Liu et al., 2015; Chernomordik et al., 2016). Spontaneous hemodynamic oscillations at frequencies of <0.1 Hz are known to be associated with cerebral autoregulation in children (Bassan et al., 2005; Wong et al., 2008) and are related to the strength and degree of cerebral autoregulation based on vasomotion (Sassaroli et al., 2012; Kainerstorfer et al., 2015; Liu et al., 2015). Here, using fNIRS, we assess the OV index (Anderson et al., 2014) to quantify changes in oxygen saturation oscillations in frequencies associated with cerebral autoregulation in toddlers and relate those changes to individual variability in developmental ability. Additionally, hemispheric lateralization has been a target in the search for early brain markers related to neurodevelopment due to its early presence in infant development and the association of aberrant lateralization with atypical development. Some aspects of language processing are lateralized at birth (Pena et al., 2003; Telkemeyer et al., 2009), and lateralization persists through infancy and into childhood (Sato et al., 2010; May et al., 2011). The capability that NIRS has for detecting lateralization patterns at birth makes it a promising method for potentially detecting early differences in infants who may be at risk for a neurodevelopmental disorder. In addition, functional lateralization (particularly to linguistic stimuli) varies in individuals with a range of neurodevelopmental concerns, including autism, specific language impairment, and dyslexia (e.g., Whitehouse and Bishop, 2008; Lindell and Hudry, 2013; Nielsen et al., 2014; Xu et al., 2015). Apart from language, lateralization of the prefrontal cortex and its relation to higher cognitive function has also been a focus in the literature (Van Horn et al., 1996; Dumontheil et al., 2008; Christoff, 2009; Kawakubo et al., 2011; Burgess and Wu, 2013) due to its important role in cognitive development in both children and adolescents (Miller and Cohen, 2001; Kwon et al., 2002; Wood and Grafman, 2003; Hare and Casey, 2005; Casey et al., 2008; Tsujimoto, 2008; Boschin et al., 2015). Lateralization differences in the PFC have been noted in both individuals with ASD and adults with mild cognitive impairments (Tamura et al., 2012; Kikuchi et al., 2013; Zhu et al., 2015; Yeung et al., 2016). Therefore, lateralization of the PFC could play an important role in understanding and tracking cognitive development in toddlers.
Therefore, while the fNIRS literature in infants, children, and adults frequently focuses on functional (i.e., event related) changes in oxygenation, here we focus on two alternative metrics that can be used in alert, relaxed toddlers. Specifically, we measure the Laterality Index, which captures the percent difference between left and right hemodynamic response. While positive values indicate more leftward activation and negative values more rightward activation, more extreme values (i.e., higher absolute values) indicate more unilateral vs. bilateral patterns of activation. The OV index quantifies the level of variations in the oxygen saturation signal, where higher values indicate greater variability in oxygenation levels, which is related to the dynamics of cerebral hemodynamics. Similar to other studies in young children, who are less likely to tolerate controlled laboratory stimuli, we measured activity while toddlers underwent a "vanilla baseline" period (Jennings et al., 1992), which consisted of watching and listening to an engaging children's show (Kikuchi et al., 2013; Fekete et al., 2014; Li and Yu, 2016). This method of acquiring usable data while very young children rest has been extensively used to measure brain activity through EEG techniques in various populations, including toddlers at risk for neurodevelopmental problems (Elsabbagh et al., 2009; Tierney et al., 2012). Here, we combine this commonly-used method, whereby toddlers are kept alert and calm with audiovisual presentation of children's videos, with fNIRS measurement of both the Laterality Index and the OV index. This allows for exploration of how prefrontal hemodynamics relate to development in toddlers, including those unable to tolerate an absence of stimuli or the presence of repetitive, controlled stimuli. In order to improve tolerance and data quality in toddlers, we used a non-fiber based method for optode placement, which, while restricted to measuring activation over the prefrontal cortex, is more comfortable and is associated with reliable skin contact. We compared the Laterality Index, based on the lateralization patterns of the hemodynamic response, and the OV index across toddlers, and analyzed the relation between these measures and developmental ability. In sum, the combination of recording during audiovisual presentation of a children's show and using a relatively comfortable frontal fNIRS band was selected to improve data acquisition in toddlers and to reduce motion artifact. Because neurodevelopmental and language disorders are frequently associated with rightward or bilateral distribution of language-related activation compared to leftward lateralization in controls (Whitehouse and Bishop, 2008; Lindell and Hudry, 2013; Nielsen et al., 2014; Xu et al., 2015), we hypothesized that this relation could be detectable during audiovisual presentation of children's shows. We further hypothesized that lower developmental abilities, including language, would be associated with lower OV index values. Finally, since language development is strongly correlated with more general cognitive development in toddlers (Oliver et al., 2004), we investigate whether these associations are specific to language or related more to general cognitive development in order to determine the potential value of these metrics in serving as markers of risk for developmental delays.

MATERIALS AND METHODS

This study was approved by an Institutional Review Board at the National Institutes of Health.
Parents of all participants completed informed and written consent prior to their child's participation.

Participants

Participants included 29 children (11 female) between the ages of 18 and 43 months (mean = 29.57, SD = 7.18) with varying language abilities (see Table 1). Toddlers in this pilot study were recruited from either a typically developing group (n = 21, 7 female) or a language delay group (n = 8, 4 female), with the intent to follow children until they were 3 years of age to study aspects of development (NCT01339767) and early indicators of continued delays. Inclusion criteria for the typically developing group included T-scores > 35 on all domains of the Mullen Scales of Early Learning (MSEL, Mullen, 1995) and no parent-reported history of delays. Inclusion criteria for the language delay group included expressive and receptive language scores in the "Very Low" range (T-score < 30) on the MSEL at the time of screening, which occurred between 12 and 18 months of age. As expected, some children in the language delay group also scored below average on other aspects of cognitive development (see Table 1). Exclusion criteria for both groups included prematurity at birth, motor or other medical impairment deemed responsible for delays, and known genetic disorder. All children were recruited from the community through both advertisements and referrals from providers.

Procedures

The fNIRS session occurred at one of the regularly scheduled study visits, which took place when the child was ∼18, 24, or 36 months old. We first attempted fNIRS only in children at 24 or 36 months, before trying to acquire data in children as young as 18 months. As part of a longer fNIRS session, toddlers underwent a vanilla baseline recording, during which they watched two 50-s clips from children's shows, presented in audiovisual format. The vanilla baseline is a paradigm used during physiological data collection, such as EEG and ECG, in order to better homogenize participants' experience while maintaining attention throughout the data collection period (Jennings et al., 1992). Audiovisual clips from the Elmo's World segments from Sesame Street© were chosen because they were engaging and maximized toddlers' attention while reducing motion artifact. While these videos were not designed to isolate particular functional abilities, they included Elmo interacting with children and animals through speech, gesture, and song. The videos (trials) were displayed on a 14-inch monitor placed at a distance of 40–60 cm from the participant. The video frame rate was 29 frames per second and the audio sample rate was 44 kHz. Children watched these videos after completing developmental and diagnostic assessments in a pediatric research clinic. Most of the younger toddlers (18–24 months) watched the videos while seated on a parent's lap, whereas most of the 36-month-olds watched the video while seated in a child-sized chair. Handedness was determined at 36 months by systematic observation of dominance displayed on behavioral tasks (e.g., grasping pennies, throwing a ball). Non-verbal mental age was calculated as the mean of the age equivalents from the MSEL visual reception and fine motor subscales, and verbal mental age was calculated as the mean of the age equivalents from the MSEL receptive language and expressive language subscales at the time of the fNIRS visit.
Then, Non-verbal and Verbal Developmental Quotients (DQs) were calculated by dividing each toddler's age equivalent on the MSEL by their chronological age and multiplying by 100. Using DQ as an indicator of relative developmental status (compared to using T-scores) provides a measure reflecting the variability of the sample.

fNIRS

In this study, we used a continuous wave fNIRS system (fNIRS Devices LLC, MD). The instrument consists of an array of four sources and 10 detectors, with a total of 16 source-detector pairs (see Figure 1). The source-detector separation was set at 2.5 cm. While in fiber-based systems each source and detector has a separate fiber connection, in this system all sources and detectors are molded together in a single silicon band. This non-fiber based sensor is portable and easier to apply on the forehead region (due to lack of hair), especially in the toddler population. It collects data at two wavelengths, 730 and 850 nm, with an acquisition frequency of 2 Hz. The sensor band was positioned on each child's forehead covering the prefrontal cortex (PFC). Due to the smaller head size in toddlers, only the middle channels (5–12) were used for data analysis. The sources and detectors were centered horizontally at FPZ based on the international 10-20 coordinate system (see Figure 1). This system was selected for this feasibility study because it is comfortable and easy to wear, portable, and inexpensive compared to fiber-based caps that cover the whole head, making this and similar systems potential candidates for use in early screening in infants and toddlers. NIRS light intensities at the two wavelengths were then converted to changes in oxy- and deoxy-hemoglobin. Here, the modified Beer-Lambert law (MBLL) was used for calculating changes in concentration of HbO and Hb. For MBLL, we used two differential pathlength factors (DPF) to account for the two wavelengths and each subject's age (Scholkmann and Wolf, 2013). The DPF for each age was calculated based on the following formula: DPF(λ, A) = α + βA^γ + δλ³ + ελ² + ζλ, where A = age and λ = wavelength (e.g., DPF(730, 2) = 5.4, DPF(850, 2) = 4.32). Processing of the raw NIRS signal involved detection and removal of artifacts related to subject motion as well as respiration and heart rate. We used both median filtering and the sliding window motion artifact rejection (SMAR) to detect and remove motion artifact and saturated channels. This algorithm uses the presence of sharp spikes and high standard deviations of the signal (>3% temporally) to detect motion. It should be noted that while these filters were applied with the goal of removing any signal contaminated with motion artifact, the 29 children whose data are used here did not show significant motion artifact during the resting period. Thus, all data recorded during the audiovisual resting stimuli for these children were usable. Correlation Based Signal Improvement (CBSI) was also used to remove any unidirectional changes in the Hb and HbO signals (Ayaz, 2010; Cui et al., 2010). This method is based on the assumption that there should be a negative correlation between the HbO and Hb signals. The algorithm for CBSI is based on a linear combination of these two signals, resulting in an improved HbO signal that contains information from the Hb signal. It should be noted that the processed HbO signal has been altered by Hb data patterns during the CBSI step.
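As a hedged illustration of the DPF formula above, the following Python sketch evaluates the general equation of Scholkmann and Wolf (2013); the coefficient values are taken from that reference (an assumption, since the study's text quotes only the functional form), and they reproduce the two example values given in the text.

```python
# Minimal sketch of the age- and wavelength-dependent DPF formula above.
# Coefficients are the published values from Scholkmann and Wolf (2013);
# this study's text quotes only the functional form.
ALPHA, BETA, GAMMA = 223.3, 0.05624, 0.8493
DELTA, EPS, ZETA = -5.723e-7, 0.001245, -0.9025

def dpf(wavelength_nm: float, age_years: float) -> float:
    """DPF(lambda, A) = alpha + beta*A**gamma + delta*lambda**3 + eps*lambda**2 + zeta*lambda."""
    return (ALPHA + BETA * age_years**GAMMA
            + DELTA * wavelength_nm**3
            + EPS * wavelength_nm**2
            + ZETA * wavelength_nm)

print(round(dpf(730, 2), 2))  # 5.4, as quoted in the text
print(round(dpf(850, 2), 2))  # 4.32, as quoted in the text
```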
Then, a low pass frequency filter (<0.1 Hz, Hanning window, order 20) was applied to remove high frequency contamination related to heart beat and respiration (Izzetoglu et al., 2007; Kreplin and Fairclough, 2013; Naseer and Hong, 2013). To avoid edge artifacts, filtered data included all samples from 30 s prior to the trial to 30 s after trial completion. Afterward, signals were detrended (based on linear detrending, where the best straight-line fit is removed from the signal) to eliminate slow drifts in the signal. Then, the HbO signals from the left channels and right channels were averaged separately and over the two trials, each including 50 s of data (i.e., 100 samples), to improve the signal to noise ratio. We used the HbO signal since it has been shown to be a better correlate of the BOLD fMRI signal and to have a better signal to noise ratio compared to Hb, and it has been commonly used in NIRS studies (Strangman et al., 2002b; Greve et al., 2009; Tong et al., 2011; Sato et al., 2013; Yue et al., 2013; Kawano et al., 2016). Hemodynamic response curves were detected using Matlab as an increase in the HbO signal followed by a gradual decrease. We then calculated the laterality index based on the percentage difference between the area under the curve (AUC) of the HbO signal for left and right prefrontal cortex, such that positive values indicate greater left vs. right activation, while negative values indicate higher right activation. The absolute value of the laterality index therefore provides the magnitude of the difference between left and right activation. Furthermore, we computed the OV index for each child to quantify the observed hemodynamic oscillations in frequencies related to cerebral autoregulation (<0.1 Hz). The OV index characterizes the level of variability of oxygen saturation in a given frequency band. First, the instantaneous amplitudes of the HbO and Hb data are calculated to quantify instantaneous oxygen saturation in the frequency band related to cerebral autoregulation. The instantaneous amplitude of each signal, A(t), is calculated based on the analytic signal continuation approach (Boashash, 1992): v(t) = S(t) + iH{S(t)} and A(t) = |v(t)|, where S(t) is the real signal, H{S(t)} is the Hilbert transform of the signal, and v(t) indicates the complex signal in the time domain. We then calculate instantaneous oxygen saturation (SO2) as the ratio of the instantaneous amplitude of changes in oxy-hemoglobin (HbO) to that of total hemoglobin (Hb + HbO), i.e., SO2 = HbO/(Hb + HbO). Therefore, in the calculation of the OV index, both HbO and Hb are taken into account. We defined the OV index as the coefficient of variation (σ/µ, the ratio of the standard deviation to the mean) of the instantaneous oxygen saturation signal (Anderson et al., 2014).

Statistical Analysis

Shapiro-Wilk normality tests were performed to test the normal distribution of both the OV index and lateralization quotients. This test did not indicate a non-normal distribution for either metric [Laterality index: F(29) = 0.95, p = 0.27; OV index: F(29) = 0.96, p = 0.42]. Moreover, there was no significant correlation between age and Laterality index or OV index (r = −0.076, p = 0.35 and r = −0.096, p = 0.31, respectively). Therefore, all further analyses were collapsed across age. Then, we calculated the Pearson correlation coefficient values between the fNIRS measures (i.e., OV index and Laterality index) and behavioral measures (i.e., Verbal, Non-Verbal and Composite Developmental Quotient) across all subjects.
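The Python sketch below illustrates the two metrics as described above: the instantaneous oxygen saturation via the Hilbert-transform envelope and its coefficient of variation for the OV index, and an AUC-based laterality index. The Butterworth filter design and the (L − R)/(L + R) normalisation of the laterality index are assumptions made for this sketch; the study used a Hanning-window filter of order 20 and specifies only a "percentage difference" between the left and right AUCs.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

FS = 2.0  # fNIRS acquisition frequency in Hz, as stated in the text

def lowpass(x, fs=FS, cutoff=0.1, order=4):
    # Keep only oscillations below 0.1 Hz (the CA-related band). A
    # Butterworth filter is an assumption to keep the sketch short.
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def ov_index(hbo, hb):
    # Instantaneous amplitudes via the analytic signal (Hilbert transform),
    # then instantaneous oxygen saturation and its coefficient of variation.
    a_hbo = np.abs(hilbert(lowpass(hbo)))
    a_hb = np.abs(hilbert(lowpass(hb)))
    so2 = a_hbo / (a_hbo + a_hb)
    return np.std(so2) / np.mean(so2)  # sigma/mu

def laterality_index(hbo_left, hbo_right, fs=FS):
    # Percent difference between left and right AUC of the HbO signal.
    # The (L - R) / (L + R) normalisation is an assumption made here.
    left = np.trapz(hbo_left, dx=1.0 / fs)
    right = np.trapz(hbo_right, dx=1.0 / fs)
    return 100.0 * (left - right) / (left + right)
```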
RESULTS

Toddler Tolerance

As shown in Table 2, we successfully collected data during the audiovisual stimulation from 29 out of 37 toddlers between the ages of 18 and 36 months with varying levels of language development. The percentages of successful NIRS data acquisition sessions were 80 and 76% for 18–24 months old and 36 months old toddlers, respectively. Further analysis using ANOVA showed no difference in Composite DQ for successful vs. unsuccessful fNIRS acquisitions [F(1, 35) = 0.89, p = 0.352].

OV Index

There was no significant difference between the OV index from left and right PFC [t(28) = −0.156, p = 0.87]. Therefore, the OV indices from left and right were combined. We ran a two-tailed Pearson correlation between the OV index and the Composite Developmental Quotient (Composite-DQ), as well as the verbal and non-verbal DQs. The result based on the combined OV index showed a significant correlation between OV index and Composite-DQ (r = 0.567, p = 0.001), Verbal DQ (r = 0.503, p = 0.005), and Non-Verbal DQ (r = 0.53, p = 0.003). Toddlers with lower developmental scores showed a lower OV index (Figure 2). In addition, the OV index was correlated with the T-scores on the visual reception, receptive language, and expressive language subscales of the MSEL across all toddlers, with higher scores associated with a higher OV index (r = 0.542, p = 0.002; r = 0.449, p = 0.015; and r = 0.463, p = 0.011, respectively). The OV index was not significantly correlated with the fine or gross motor subscales (r = 0.35, p = 0.06 and r = −0.16, p = 0.53, respectively).

Lateralization

In this sample, there was a non-significant trend toward a positive correlation between Composite-DQ and laterality index (r = 0.358, p = 0.056). Specifically, toddlers with a lower Composite-DQ exhibited more rightward activation (Figure 3). There was no significant correlation between laterality index and Verbal or Non-Verbal DQ (r = 0.323, p = 0.088 and r = 0.324, p = 0.086). Moreover, there was a significant negative correlation between Composite-DQ and the absolute value of the laterality index (r = −0.596, p = 0.001). Specifically, toddlers with a lower Composite-DQ showed a greater discrepancy between left and right hemisphere activity (Figure 4). A similar pattern was found between Verbal and Non-Verbal DQ and laterality index (r = −0.5, p = 0.006 and r = −0.6, p = 0.001). The absolute value of the laterality index was negatively correlated with the fine motor, receptive language and expressive language T-scores on the MSEL (r = −0.627, p < 0.001; r = −0.436, p = 0.018; r = −0.501, p = 0.006). Specifically, toddlers with a larger difference between left and right activation patterns showed lower scores.

DISCUSSION

In this study, we examined hemodynamic response (via laterality index and OV index) in toddlers with varying levels of developmental ability. First, we show feasibility for use of a NIRS frontal band with toddlers, including those with language delays. Second, we found potential for the utility of both metrics as indicators of developmental risk. Specifically, toddlers with lower developmental scores showed a lower OV index across hemispheres, as well as a pattern of greater differences in activation between hemispheres, along with a potential pattern of rightward activation. These early results suggest the feasibility of fNIRS as a potential modality to measure brain activity that may relate to neurodevelopmental differences in toddlers. Toddlers in the present study generally tolerated the fNIRS headband and produced usable data.
The percent of toddlers who produced usable data is similar to success rates seen in older children using fMRI (Yerys et al., 2009), including those toddlers with developmental delays. Future studies are necessary to determine if findings suggested in this feasibility study are indeed applicable to a larger population of toddlers. We demonstrate preliminary evidence of an association between developmental ability and OV index within this paradigm. Here, the OV index reflects the degree of oscillation in oxygen saturation within the frequency range associated with cerebral autoregulation. Our results are in accordance with previous studies of changes in cerebral autoregulation, cerebral blood flow, and OV index in children and adults (Chiron et al., 1992; Schoning and Hartig, 1996; Anderson et al., 2014; Chernomordik et al., 2016). Studies in children and adults have also indicated a relation between lower variability in autoregulatory responses and poor cognitive outcome (Vavilala et al., 2004; Silvestrini et al., 2006; Turalska et al., 2008). For example, the relative degree of these oscillations, and therefore the magnitude of the OV index, has been found to be lower in a Traumatic Brain Injury (TBI) population (Chernomordik et al., 2016). It is worth mentioning that the OV index is not a direct measure of cerebral autoregulation. Rather, it is associated with frequencies related to this mechanism and serves to quantify oscillations at those frequencies. The significance of these slower oscillations and their origin are still unknown. In addition, although we used the frequency cutoff of 0.1 Hz to reduce the effect of Mayer waves, this metric may be affected by Mayer waves because they share the spectral range with the hemodynamic response. The effect of Mayer wave oscillation on the HbO signal can also be more prominent in scalp regions, and correction of the signal using shorter distance channels in future studies can be useful (Yucel et al., 2016). While this study suggests that some features of prefrontal hemodynamics may vary in toddlers at risk for developmental delays due to early language delay, more research in this age group is required to clarify the specificity of these differences. Our finding of differential lateralization patterns as a potential correlate of general developmental delay is consistent with the extant literature. Lateralization has been shown to be a marker of abnormal development in previous studies, such as those using fMRI. Redcay and Courchesne (2008) showed that, in comparison to typically developing children, children with ASD recruited greater right hemisphere frontal lobe activity while listening to language sounds during sleep. Those authors also found a positive correlation between language ability and right hemisphere activation in children with ASD, suggesting a compensatory role of right hemisphere regions in language processing in ASD. More recent research on children with language delay suggests that, at the population level, lack of lateralization is a marker of risk for language impairment (Bishop et al., 2014). The present study was a feasibility study with a limited sample size; thus, replication and extension with longitudinal follow-up are required to clarify the role of left and right lateralization in cognitive development and language.
With a larger sample size and more developmental variability, it will be useful to explore whether NIRS studies may be able to differentiate specific developmental problems (e.g., language delays, global developmental delay, or other delays). The correlation between the absolute value of the laterality index and both verbal and non-verbal aspects of cognition indicates that group differences on this measure may be capturing the effects of general developmental delay rather than language delay specifically. As such, the results of this study may reflect brain activity differences relating to general developmental abilities.

Limitations

In addition to investigating whether prefrontal hemodynamic patterns can potentially signal the presence of developmental delay, it will be important to determine if those patterns are also related to change in developmental status. Here, we initially attempted NIRS at 24 rather than 18 months, and only attempted it at 18 months after accumulating evidence of the headwear being tolerated at 24 months. Nonetheless, simultaneous behavioral and neural measurement at 24 and 36 months allows a more comprehensive profile to be explored. In addition, if these findings are replicated in larger studies in relation to developmental trajectories, it will also be essential to measure factors that may be mediating the relation between lateralization and group status (e.g., attention, autonomic functioning). Another limitation of the present study is that we measured activation during audiovisual presentation of children's shows. Therefore, relating the findings of the study to specific cognitive functions is challenging, as participants were not engaged in a cognitive task during data collection. The results are therefore most useful in suggesting feasibility of fNIRS methodology in toddlers and for producing hypotheses for future work. For example, it is possible that toddlers with typical development processed the verbal aspects of the video very differently from toddlers with language delays who are at risk for persistent developmental delays, and that this difference alone would explain observed differences in the NIRS signal. Therefore, future research should investigate how the nature of stimuli present during acquisition (i.e., social, non-social, verbal) affects NIRS lateralization patterns and OV index in children with typical and atypical development. Specifically, it will be important to compare resting state and event-related designs to systematically determine which best captures patterns of developmental variation and how this relates to any potential trade-offs with increased motion artifact in event-related designs. This is an essential next step in determining which design would be more practical for potential clinical use in toddlers. The videos selected and the number of trials used were chosen to maximize quality data acquisition in toddlers rather than to characterize functional responses to specific stimuli. While using this video and the given number of trials was associated with very high rates of toddlers tolerating the NIRS equipment and producing usable data, understanding of task-related changes in the NIRS signal will require use of an increased number of more controlled stimuli. The field of neuroscience in the toddler age group is still new, and there are currently no published fNIRS studies in the age group of 18–36 months. As with the use of EEG in toddlers, the use of fNIRS in toddler neuroscience will likely involve shorter periods of data acquisition.
Given the high temporal resolution of fNIRS, the duration of the hemodynamic response can be captured within 12 s (Huppert et al., 2006). In this study, we designed the stimulus length to obtain a continuous measure and ensure sufficient timing to include changes in hemodynamic response. Although currently there are no resting state data in 18–36 months old toddlers using fNIRS, the length of usable data in our study is within the range of studies in infants using EEG or NIRS. It is common for EEG studies in infants and toddlers to complete analyses with <1 min of data per participant (Friedrich and Friederici, 2005; Tierney et al., 2012; Jentschke et al., 2014; Gabard-Durnam et al., 2015). Similar timing has been used in event-related fNIRS studies of infants and toddlers (e.g., Nakato et al., 2009; Wilcox et al., 2012; Lloyd-Fox et al., 2014). As one of the main goals of our study design was to reduce data loss in toddlers, the data sample used in our analysis is based on usable data that is sufficient for statistical analysis. Another limitation of this study is the lack of integration of handedness data with fNIRS lateralization patterns. This limitation is due in part to the fact that handedness is not fully established in toddlers and cannot be reliably assessed until the age of four (Bryden et al., 2000; Scharoun and Bryden, 2014). Future longitudinal studies will be able to address the effect of handedness on early lateralization patterns. The hemodynamic signal in fNIRS can also be contaminated by changes in blood flow in the skin, making it difficult to determine the strength of cerebral sources of oxygenation change. However, task-related effects on skin blood flow have been shown to be negligible (Mancini et al., 1994; Sato et al., 2013; Funane et al., 2014). Studies of hypercapnia, using a continuous wave system similar to the one used here, show that changes in the signal originate predominantly from the cerebrum rather than the skin (Themelis et al., 2007). In this study we used a source-detector separation that allows for differentiation of signals coming from the cerebrum vs. the skin (Strangman et al., 2002a). However, it is also true that group differences could be related to global neurophysiological differences (i.e., increased blood flow generally related to attention and arousal), and comparing either specific cerebral regions or matched conditions can be helpful (Aslin and Mehler, 2005). For this reason, comparing activation between hemispheres becomes useful, as individual differences in such a comparison would be more likely to be driven by local changes in cerebral hemodynamics. Finally, the present study focuses on the prefrontal cortex. We selected a small, comfortable NIRS sensor band that takes advantage of the hairless skin on the forehead to achieve improved signal quality while maintaining comfort for toddlers. The fNIRS system is comfortable and affordable, with less potential for artifact compared to fiber-based systems, especially for the toddler population, because of the larger surface area over which the optodes make consistent contact with the skin. Given that the development of the PFC and its cognitive function during early childhood is substantial and plays vital roles in cognitive and developmental abilities in children (Kwon et al., 2002; Hare and Casey, 2005; Davidson et al., 2006; Durston et al., 2006; Casey et al., 2008; Tsujimoto, 2008), studying the PFC may be useful in the toddler population.
In the context of this study, we examined whether we could detect developmental variations within the prefrontal cortex region, with possible important implications for using fNIRS in clinical settings. While the PFC plays a major role in language and social processing, other important regions of interest (e.g., temporal language areas, parietal association areas) could not be studied with the current sensor design. This study, however, is useful in suggesting that differences in activation can be captured within the prefrontal cortex, and that those differences could therefore potentially be a correlate of risk. As such, while the present study describes measures of frontal cortex hemodynamic patterns that may be useful in detecting early delays, it cannot provide a measure of functional brain activation related to language and communication.

CONCLUSION

Overall, the results of this study provide evidence for the feasibility of the use of fNIRS methodology in toddlers, including the 18–36 months age range as well as toddlers with varying levels of language development. Further, preliminary results suggest that decreases in OV index and larger lateralization differences in toddlers are associated with specific measures of lower developmental ability. Future studies with longitudinal designs, controlled stimuli, and larger and more diverse subject populations will be necessary to determine the role of prefrontal cortical hemodynamics as a potential biomarker for neurodevelopmental disorders.

AUTHOR CONTRIBUTIONS

AA wrote and revised the manuscript, prepared the figures and tables, and performed data analysis; ES revised the manuscript, performed behavioral analysis and task design, and prepared the tables; ES, FC, and AT edited the manuscript; AA and EC revised the manuscript; FA advised regarding the analytical concepts; ES and FC performed data acquisition; ES, AA, AT, and SM contributed to the study design; BS and LS performed behavioral assessment; AT, DM, and AG gave technical support and conceptual advice; AT and AG supervised the study and reviewed the manuscript; AG supervised the analysis and provided analytical insight.
2017-06-15T18:49:42.261Z
2017-05-30T00:00:00.000
{ "year": 2017, "sha1": "3b9c2ed5989477f9a625d58ae2e29974ac1ebbb3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2017.00300/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b9c2ed5989477f9a625d58ae2e29974ac1ebbb3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
34110799
pes2o/s2orc
v3-fos-license
The comprehensive model system COSMO-ART – Radiative impact of aerosol on the state of the atmosphere on the regional scale

Correspondence to: B. Vogel (bernhard.vogel@kit.edu)

Abstract. A new fully online coupled model system developed for the evaluation of the interaction of aerosol particles with the atmosphere on the regional scale is described. The model system is based on the operational weather forecast model of the Deutscher Wetterdienst. Physical processes like transport, turbulent diffusion, and dry and wet deposition are treated together with photochemistry and aerosol dynamics using the modal approach. Based on detailed calculations we have developed parameterisations to examine the impact of aerosol particles on photolysis and on radiation. Currently the model allows feedback between natural and anthropogenic aerosol particles and the atmospheric variables that are initialized by the modification of the radiative fluxes. The model system is applied to two summer episodes, each lasting five days, with a model domain covering Western Europe and adjacent regions. The first episode is characterised by almost cloud free conditions and the second one by cloudy conditions. The simulated aerosol concentrations are compared to observations made at 700 stations distributed over Western Europe. For each episode two model runs are performed; one where the feedback between the aerosol particles and the atmosphere is taken into account and a second one where the feedback is neglected. Comparing these two sets of model runs, the radiative feedback on temperature and other variables is evaluated. In the cloud free case a clear correlation between the aerosol optical depth and changes in global radiation and temperature is found. In the case of cloudy conditions the pure radiative effects are superposed by changes in the liquid water content of the clouds due to changes in the thermodynamics of the atmosphere. In this case the correlation between the aerosol optical depth and its effects on temperature is low. However, on average a decrease in the 2 m temperature is still found.
For the area of Germany we found on average for both cases a reduction in the global radiation of about 6 W m−2, a decrease of the 2 m temperature of 0.1 K, and a reduction in the daily temperature range of 0.13 K.

Introduction

Aerosol particles modify atmospheric radiative fluxes and interact with clouds. As documented in the IPCC 2007 report, the global influence of natural and anthropogenic aerosols on the atmosphere is not well understood. As shown by recently published and contradictory findings, the state of knowledge is even worse concerning the effects of aerosol particles on radiation, temperature, and cloud formation on the regional scale (Bäumer and Vogel, 2007, 2008; Franssen, 2008; Bell et al., 2008). Beside observations, numerical models are important tools to improve our current understanding of the role of natural and anthropogenic aerosol particles for the state of the atmosphere. On the global scale there exist a large number of model systems and corresponding applications of models addressing the quantification of the effect of anthropogenic aerosol particles on climate change (e.g. Lohmann, 2008; Hoose et al., 2008; Bäumer et al., 2007). Due to a lack of computer capacity, global climate models however often include simplifications and approximations (Lohmann and Schwartz, 2009). For example, Stier et al. (2005) prescribe the spatial distribution of important gaseous compounds that are involved in the formation and the dynamics of the aerosol particles.

On the cloud resolving scale several studies concerning the influence of aerosols and cloud formation can be found as well (Khain et al., 2004; Levin et al., 2005; Cui et al., 2006). These model systems do not account for the radiative interaction of aerosol particles, cloud droplets and other hydrometeors. Most often the aerosols are treated in a very general manner within these models. One example is to categorize the aerosols as continental or maritime, respectively, with prescribed size distributions that are kept constant during the simulation (Noppel et al., 2007).

On the continental to the regional scale there also exist several model systems which treat atmospheric processes, chemistry and aerosol dynamics. However, most of these model systems focus on air quality problems (Zhang, 2008 and references therein; Stern et al., 2008).

When studying feedback processes between aerosol particles and the atmosphere it is necessary to use online coupled model systems. Here, online coupled means that one identical numerical grid for the atmospheric variables and for the gaseous and particulate matter is used. In addition, identical physical parameterisations are used for atmospheric processes such as turbulence and convection. In such a fully online coupled model system all variables are available at the same time step without spatial or temporal interpolation. It allows studying feedback processes between meteorology, emissions and chemical composition. Those online coupled models have to treat the relevant physical, chemical, and aerosol dynamical processes at a comparable level of complexity. Meteorological pre- or postprocessors are not needed.
Currently only a limited number of model systems exist fulfilling these requirements (Zhang, 2008). Studying atmospheric processes with grid sizes down to a few kilometres requires a non-hydrostatic formulation of the model equations on the regional scale, where phenomena such as mountain and valley winds, land-sea breezes or lee waves become important (Wippermann, 1980). This again reduces the number of available model systems. One example of such a fully online coupled model system is the WRF/Chem model (Grell et al., 2005).

We have developed a new online coupled model system which is based on the operational weather forecast model COSMO (Consortium for Small-scale Modelling; Steppeler et al., 2002) developed at the Deutscher Wetterdienst. Processes such as gas-phase chemistry, aerosol dynamics, and the impact of natural and anthropogenic aerosol particles on the state of the atmosphere are taken into account. As the radiative fluxes are modified based on the currently simulated aerosol distribution, a quantification of several feedback processes is possible. In the first part of the paper we will give an outline of the current status of the model system. In the second part we address the effect of soot and secondary aerosols on radiation and temperature over Europe by applying the model system to two episodes in August 2005. In order to quantify the feedback mechanisms caused by the interaction of aerosol particles and radiation separately, the interactions of the aerosol particles with cloud microphysics are neglected.

Model description

Based on the mesoscale model system KAMM/DRAIS/MADEsoot/dust (Riemer et al., 2003a; Vogel et al., 2006) we have developed an enhanced model system to simulate the spatial and temporal distribution of reactive gaseous and particulate matter. The meteorological module of the former model system was replaced by the operational weather forecast model COSMO of the Deutscher Wetterdienst (DWD). The name of the new model system is COSMO-ART (ART stands for Aerosols and Reactive Trace gases). Gas phase chemistry and aerosol dynamics are online coupled with the operational version of the COSMO model. That means that in addition to the transport of a non-reactive tracer, the dispersion of chemically reactive species and aerosols can be calculated. Secondary aerosols formed from the gas phase and directly emitted components like soot, mineral dust, sea salt and biological material are all represented by log-normal distributions. Processes such as coagulation, condensation and sedimentation are taken into account. The emissions of biogenic VOCs (volatile organic compounds), dust particles, sea salt and pollen are also calculated online at each time step as functions of meteorological variables. To calculate the photolysis frequencies a new efficient method was developed using the GRAALS (General Radiative Algorithm Adapted to Linear-type Solutions) radiation scheme (Ritter and Geleyn, 1992), which is already implemented in the COSMO model. With this regional scale model system we want to quantify feedback processes between aerosols and the state of the atmosphere together with the interaction between trace gases and aerosols. The model system can be embedded by one-way nesting into individual global scale models such as the GME model (global model of the DWD) or the IFS (Integrated Forecast System) model of ECMWF (European Centre for Medium-Range Weather Forecasts). Figure 1 gives an overview of the new model system.
When developing COSMO-ART we applied the concept of using identical methods to calculate the transport of all scalars, i.e. temperature, humidity, and the concentrations of gases and aerosols. This also includes the treatment of deep convection with the Tiedtke scheme (Tiedtke, 1989). As COSMO-ART has a modular structure, specific processes such as chemistry or aerosol dynamics can easily be substituted by alternative parameterisations.

The extended version of MADEsoot

In MADEsoot (Modal Aerosol Dynamics Model for Europe extended by Soot), several overlapping modes represent the aerosol population, which are approximated by log-normal functions. Currently, we use five modes for the sub-micron particles. Two modes (if and jf) represent secondary particles consisting of sulphate, ammonium, nitrate, organic compounds (SOA), and water, one mode (s) represents pure soot, and two more modes (ic and jc) represent aged soot particles consisting of sulphate, ammonium, nitrate, organic compounds, water, and soot. The modes if and ic represent the Aitken mode particles and jf and jc the accumulation mode particles, respectively. The modes if, jf, ic and jc are assumed to be internally mixed. All modes are subject to condensation and coagulation. The growth rate of the particles due to condensation is calculated following Binkowski and Shankar (1995), depending on the available mass of the condensable species and the size distribution of the particles. In case of coagulation, the assignment to the individual modes follows the method of Whitby et al. (1991): (1) Particles formed by intramodal coagulation stay in their original modes. (2) Particles formed by intermodal coagulation are assigned to the mode with the larger median diameter. Furthermore, a thermodynamic equilibrium of gas phase and aerosol phase is applied to calculate the concentrations of sulphate, ammonium, nitrate and water (Kim et al., 1993).

The source of the secondary inorganic particles in modes if and jf is the binary nucleation of sulphuric acid and water. The nucleation rates are calculated using the parameterisation of Kerminen and Wexler (1994). The secondary organic compounds are treated according to Schell et al. (2001). The two-product approach of Odum et al. (1996) is used and eight parent organic compounds are treated, which are oxidised to form condensable species.

The soot particles in mode s are directly emitted into the atmosphere. Direct emissions of sulphate and primary organics into modes if, ic, jf, and jc are not taken into account. The particles in modes ic and jc are formed due to the aging process. Two processes can impact the transfer of soot from external into internal mixture, namely coagulation and condensation. Coagulation of soot particles in mode s with particles in modes if, jf, ic or jc transfers the mass of mode s into the modes ic or jc. As a second process, condensation of sulphuric acid on the surface of the soot particles and the subsequent formation of ammonium sulphate, as well as the condensation of organic material, can transfer soot into an internal mixture as well. Following Weingartner et al. (1997) we define that all material of mode s is moved to modes ic and jc if the soluble mass fraction of mode s rises above the threshold value ε = 5%. Thus, the ageing of the soot particles is treated explicitly (Riemer et al., 2004), which is of great importance with respect to their radiative effects (Jacobson, 2000; Riemer et al., 2003a).
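As a hedged sketch of the ε-threshold bookkeeping just described, the Python fragment below moves all material of the pure soot mode into an internally mixed mode once the soluble mass fraction exceeds 5%. The dictionary-based mode representation and the single target mode are simplifications for illustration; in COSMO-ART the transferred mass is distributed over the modes ic and jc.

```python
EPSILON = 0.05  # threshold soluble mass fraction (5%), as defined above

def age_soot(mode_s, mode_jc):
    """Move all material of the pure soot mode s into an internally mixed
    mode once the soluble mass fraction of mode s exceeds EPSILON.
    Representing a mode as a dict of species masses (ug m-3) and using a
    single target mode are simplifications made for this sketch."""
    total = sum(mode_s.values())
    soluble = total - mode_s.get("soot", 0.0)
    if total > 0.0 and soluble / total > EPSILON:
        for species, mass in mode_s.items():
            mode_jc[species] = mode_jc.get(species, 0.0) + mass
            mode_s[species] = 0.0

# Example: condensed sulphate pushes the soluble fraction of mode s to 7%
s = {"soot": 0.93, "sulphate": 0.07}
jc = {"soot": 0.10, "sulphate": 0.20}
age_soot(s, jc)
print(s, jc)  # all mode-s material has been transferred
```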
Additionally, sedimentation, advection and turbulent diffusion can modify the aerosol distributions. The coarse mode c contains additional anthropogenically emitted particles. Furthermore, COSMO-ART also treats mineral dust (Stanelle, 2008), sea salt (Lundgren, 2006), and pollen (Vogel et al., 2008).

For each mode, prognostic equations for the number density and the mass concentration are solved. The standard deviations are kept constant. The zeroth moment M_0,l gives the total number density of mode l. To be consistent with the treatment of temperature and humidity within COSMO-ART, the number density and the mass concentration are normalized with the total number density of air molecules N and with the total mass concentration of humid air ρ, respectively:

Ψ_0,l = M_0,l / N and Ψ_3,n,l = m_n,l / ρ,

where m_n,l denotes the mass concentration of the chemical compound n of the aerosol.

In COSMO-ART, balance equations for the normalized number density of each mode are solved numerically. For the modes if and ic, the modes jf and jc, the soot mode s, and the coarse mode c, these equations differ only in the source and sink terms that apply, and share the generic form

∂(N Ψ̂_0,l)/∂t = −∇·(N Ψ̂_0,l v̂) + ∂(N Ψ̂_0,l v_sed,0,l)/∂z − ∇·F_0,l + N (Ca_0,l1l2 + Nu_0,l − W_0,l).

v is the wind vector, v_sed,0,l the sedimentation velocity for the zeroth moment, and F_0,l the turbulent flux for the zeroth moment of mode l. W_0,l describes the loss of particles due to precipitation scavenging and is parameterised according to Rinke (2008). The term Ca_0,l1l2 describes the changes of the zeroth moment due to coagulation, and Nu_0,l describes the increase of the zeroth moment due to nucleation. The hat denotes the density-weighted Reynolds average, given by Ψ̂ = ⟨ρΨ⟩/⟨ρ⟩, where ⟨·⟩ denotes the Reynolds mean.

The number densities of the coarse mode are small. Therefore the inter-modal coagulation between the coarse mode and the other modes and the intra-modal coagulation of the coarse mode particles are both neglected. From the numbers we calculated for the individual modes we found that the inter-modal coagulation between the nucleation mode (which gives the highest number densities) and the coarse mode is two orders of magnitude less than the inter-modal coagulation of the nucleation mode particles and the accumulation mode particles.

In addition to the balance equations for the normalized number density given above, the balance equations for the normalized mass concentration are solved. Since a thermodynamical equilibrium is assumed for the system of sulphate, nitrate, ammonium and water, balance equations are only solved for sulphate, soot, SOA and the coarse mode. This leads to Reynolds-averaged balance equations for the respective modes (the modes if and ic, the modes jf and jc, the soot content of the soot containing modes ic and jc, the pure soot mode s, and the coarse mode c) of the analogous generic form

∂(ρ Ψ̂_3,n,l)/∂t = −∇·(ρ Ψ̂_3,n,l v̂) + ∂(ρ Ψ̂_3,n,l v_sed,n,l)/∂z − ∇·F_3,l + ρ (Ca_3,l1l2 + Co_n,l + Nu_n − W_3,l).

v_sed,n,l is the sedimentation velocity with respect to mass in mode l and F_3,l is the turbulent flux for the normalized mass concentration of mode l. In addition, no intramodal coagulation terms appear in the equations for the normalized mass densities, as they do not change this quantity. M_3,l is the third moment of mode l. Similar equations as Eqs. (10) to (13) are solved for the organic compounds of the aerosol particles, and similar equations as Eqs. (8) and (17) are solved for mineral dust and sea salt.
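To make the moment notation concrete, the short Python sketch below evaluates the k-th moment of a log-normal mode and derives a mass concentration from the third moment. The closed-form moment expression is the standard log-normal identity; the numerical values, including the particle density used for the mass conversion, are illustrative assumptions and not COSMO-ART settings.

```python
import numpy as np

def moment_k(number_density, median_diameter, sigma_g, k):
    """k-th moment of a log-normal mode:
    M_k = M_0 * d_g**k * exp(0.5 * k**2 * ln(sigma_g)**2)."""
    return number_density * median_diameter**k * np.exp(0.5 * k**2 * np.log(sigma_g)**2)

# Illustrative accumulation-mode-like values (assumptions, not model settings):
N = 1.0e9      # number density M_0 in m-3
d_g = 0.1e-6   # median diameter in m
sigma = 1.7    # geometric standard deviation (kept constant, as in the text)

M3 = moment_k(N, d_g, sigma, 3)      # third moment in m3 m-3
rho_p = 1.8e3                        # assumed particle density in kg m-3
mass = np.pi / 6.0 * rho_p * M3      # mass concentration in kg m-3
print(f"M3 = {M3:.2e} m3 m-3 -> mass = {mass * 1e9:.2f} ug m-3")
```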
The turbulent fluxes F_n,l in Eqs. (3)–(17) are parameterised in analogy to the turbulent fluxes in the diffusion equation and in the balance equation for water vapour in the COSMO model. W_n,l describes the loss of particles due to precipitation scavenging and is parameterised according to Rinke (2008). The term Ca_3,l1l2 describes the transfer rate of the third moment of mode l1 due to coagulation. Co_sulf,l is the condensational loss or gain of mass and Nu_sulf is the increase of mass due to nucleation. A detailed description of the treatment of the sedimentation and dry deposition, coagulation, condensation, nucleation and the thermodynamical equilibrium of the aerosol species is given by Riemer (2002).

Chemistry of RADMKA

The chemical reactions of the gaseous species are calculated using the chemical mechanism RADMKA (Regional Acid Deposition Model Version Karlsruhe). This mechanism is based on RADM2 (Regional Acid Deposition Model; Stockwell et al., 1990) and includes several series of improvements. We have updated the reaction rates for NO2 + OH → HNO3 (Donahue et al., 1997) and HO2 + NO → OH + NO2 (Bohn and Zetsch, 1997). The very simple treatment of the heterogeneous hydrolysis of N2O5 was also replaced by a more complete one that takes into account the actual aerosol concentration and its chemical composition (Riemer et al., 2003b). Furthermore, the rate constants for NO + OH → HONO have been updated and a heterogeneous reaction that leads to the formation of HONO at surfaces has been included (Vogel et al., 2003). As was shown in Vogel et al. (2003) and Sarwar et al. (2008), HONO is an important source of OH under certain conditions. The very simple isoprene scheme of RADM2 has been replaced by the more sophisticated one of Geiger et al. (2003). Additional biogenic and anthropogenic hydrocarbons that may serve as precursors for secondary organic components of the aerosol were added to RADMKA according to the SORGAM (Secondary Organic Aerosol Model) module by Schell et al. (2001). Currently, RADMKA does not take into account wet phase chemistry. This means that the formation of sulphate in cloud droplets is not taken into account. Therefore, the sulphate burden might be underestimated, particularly in the "HC" case.

Treatment of radiative fluxes

Several processes in the atmosphere are affected by shortwave and longwave radiation. The divergence of the radiative fluxes contributes to the diabatic heating. The incoming and outgoing shortwave and longwave fluxes at the earth surface are major components of the energy balance and therefore drive the temporal variation of the surface temperature.
The vertical profiles of the radiative fluxes depend on those of temperature, pressure, the concentrations of water, CO2, O3, and the size distributions of the aerosol particles. The GRAALS radiation scheme (Ritter and Geleyn, 1992) is used within the COSMO model to calculate vertical profiles of the short- and longwave radiative fluxes. To perform fully coupled simulations with COSMO-ART, the aerosol optical properties for each of the eight spectral bands of the GRAALS radiation scheme (Table 2) are required at every grid point and at every time step when the radiative fluxes are calculated. Those optical properties are the extinction coefficient, the single scattering albedo and the asymmetry factor. They depend on the size distributions of the aerosol particles, their chemical composition, as well as the soot and water content of the particles. The necessary Mie calculations can be performed with the detailed code of Bohren and Huffman (1983). However, the enormous amount of computer time inhibits the calculation of the optical properties at each grid point and at each time step. For this reason we have developed a parameterisation scheme for the optical properties based on a priori calculations using the scheme of Bohren and Huffman and pre-calculated aerosol distributions. The a priori calculations are based on simulated aerosol particle size distributions and their chemical composition. These distributions were calculated with COSMO-ART where the feedback between the aerosol and the radiation was switched off. Detailed Mie calculations were then carried out for each grid-box of the model system. The results were plotted as functions of the wet aerosol mass or, in case of the single scattering albedo and the asymmetry parameter, as functions of the mass fraction. Applying fit procedures we derived the parameters given in Table 3. Hence, by this procedure our parameterisation is based on typical size distributions and chemical compositions that are simulated in our model domain. Since these calculations deliver the optical properties at single wavelengths, numerous calculations are performed to determine the optical properties for the spectral wavelength bands required by GRAALS. The method we used for the weighting of the individual wavelengths is described in Bäumer et al. (2007).

Finally, we ended up with the following parameterisation for the optical properties. The extinction coefficient b_k for a specific wavelength interval is calculated as the sum over the modes of the fitted specific extinction coefficients multiplied by the modal wet masses,

b_k = Σ_l b̂_k,l m_l,

where k denotes the wavelength interval and l denotes the modes if, jf, ic, jc, and s. The coefficients b̂_k,l that were derived from the a priori calculations are given in Table 3, and m_l is the total wet aerosol mass of mode l in µg m−3. The single scattering albedo ω_k is calculated differently for the longwave range and the shortwave range: for the wavelength intervals 1–3 it is obtained as the mass-fraction weighted mean of the fitted modal values,

ω_k = Σ_l f_l ω̂_k,l,

while for the wavelength intervals 4–8 the contributions of the internally mixed modes ic and jc additionally depend on their soot fractions. The asymmetry factors g_k of the individual wavelength intervals are calculated analogously as mass-fraction weighted means of the ĝ_k,l. sf_ic and sf_jc are the soot fractions of modes ic and jc, respectively. The factor f_l gives the mass fraction of the respective mode l related to the total aerosol mass including water.
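A hedged numerical sketch of the mass-based extinction parameterisation is given below. The specific extinction coefficients b̂_k,l would come from a lookup such as Table 3, but the values used here are placeholders chosen only to make the example run; the vertical integration to an aerosol optical depth is added for illustration.

```python
# Placeholder specific extinction coefficients bhat_{k,l} for one spectral
# band, in m2 g-1; the real values come from Table 3 and are NOT reproduced
# here (illustrative numbers only).
BHAT = {"if": 1.2, "jf": 3.5, "ic": 2.0, "jc": 4.0, "s": 8.0}

def extinction(masses_ug_m3):
    """b_k = sum_l bhat_{k,l} * m_l, with wet modal masses m_l in ug m-3
    converted to g m-3; result in 1/m."""
    return sum(BHAT[l] * m * 1.0e-6 for l, m in masses_ug_m3.items())

def aerosol_optical_depth(column, dz):
    """AOD as the vertical integral of the extinction coefficient over
    model layers of constant thickness dz (in m)."""
    return sum(extinction(layer) * dz for layer in column)

# Two 500 m layers, each with 10 ug m-3 of accumulation-mode secondary aerosol
column = [{"jf": 10.0}, {"jf": 10.0}]
print(aerosol_optical_depth(column, dz=500.0))  # -> 0.035
```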
In Table 3 the b̂_k,l, ω̂_k,l, and ĝ_k,l are given for each wavelength interval and the respective mode. The method we applied here (Eqs. 21 and 22) is a simplification, as we calculated the single scattering albedo and the asymmetry parameter by weighting the values of the individual modes by their mass fraction. We also calculated the single scattering albedo by weighting the contributions of each mode with their corresponding extinction coefficient, and the asymmetry factor by weighting with its corresponding extinction coefficient and single scattering albedo. On average the deviations were of the order of a few percent.

In our case the mass fraction of the coarse mode is very low; therefore the coarse mode does not contribute remarkably to the extinction and was neglected in these simulations. We looked at AERONET data at the station Karlsruhe (Germany) and found for both episodes (LC and HC) a small contribution (<15%) of the coarse mode to the aerosol optical depth. This might be different in the southern part of the model domain, where mineral dust contributes a lot to the total aerosol load.

Photolysis frequencies

Photochemistry is influenced by the radiative fluxes in the atmosphere. The photolysis frequencies are required at each grid point due to the highly variable spatial and temporal distribution of clouds and aerosols. As the detailed calculation of the photolysis frequencies for the individual species is very time consuming, we have developed a new parameterisation combining a detailed radiation scheme with an efficient two-stream scheme. In contrast to existing procedures, which on the one hand cannot account for changes in e.g. cloud cover and on the other hand need additional time consuming radiation calculations (e.g. Wild et al., 2000; Landgraf and Crutzen, 1998), this parameterisation uses the existing efficient radiation calculations as described above. The parameterisation consists of two steps. Step one is an a priori calculation of vertical profiles of the shortwave actinic flux I*_A for wavelength band 3 and of the photolysis frequencies J*_i. J*_i is obtained with the detailed radiation scheme STAR (System for Transfer of Atmospheric Radiation; Ruggaber et al., 1994) and I*_A with the radiation code GRAALS. These calculations are carried out for a set of solar zenith angles for cloud free conditions and for standard profiles of the aerosol optical depth. Since GRAALS is a two-stream scheme, the actinic flux is calculated as

I_A = E_dir/µ_0 + 2 E_diff,

where E_dir is the direct solar irradiance, E_diff is the diffuse irradiance, and µ_0 is the cosine of the zenith angle. In the second step, the online calculation of the actual profiles of the shortwave actinic flux I_A is carried out. By dividing the actual actinic flux by the pre-calculated one, the relative change is determined. The most important factor for the vertical profile of the actinic flux are clouds. For overcast situations the impact of clouds on the actinic flux is nearly wavelength independent in the desired wavelength band (Crawford et al., 2003). Hence, the pre-calculated vertical profiles of the individual photolysis frequencies are used to calculate J_i(z).
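The two-step procedure lends itself to a very small code sketch, shown below. The scaling relation J_i(z) = J*_i(z) · I_A(z)/I*_A(z) is the direct reading of the ratio described in the text; the interpolation over solar zenith angle and aerosol optical depth that a full implementation would need is omitted here, and the numerical values are illustrative.

```python
def actinic_flux(e_dir, e_diff, mu0):
    """Shortwave actinic flux from a two-stream scheme,
    I_A = E_dir / mu0 + 2 * E_diff."""
    return e_dir / mu0 + 2.0 * e_diff

def photolysis_frequency(j_star, i_a, i_a_star):
    """Scale the pre-calculated clear-sky profile J*_i(z) by the ratio of
    the actual to the pre-calculated actinic flux."""
    return j_star * i_a / i_a_star

# Example: a cloud layer that halves the actinic flux at some level also
# halves the photolysis frequency there (8.0e-3 s-1 is an illustrative J*)
print(photolysis_frequency(8.0e-3, i_a=0.5, i_a_star=1.0))  # -> 4.0e-3
```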
Anthropogenic and biogenic emissions

The anthropogenic emissions were pre-calculated with a spatial resolution of 14×14 km² and a temporal resolution of one hour. The weekly cycle of the emissions is not taken into account. The anthropogenic emission data account for traffic emissions, emissions by large point sources, and area sources such as households and industrial areas. The method used to determine these emissions is described in Pregger et al. (2007). Emissions of the gases SO2, CO, NOx, NH3, and 32 individual classes of VOC, and the particle classes PM10 and EC1 are treated. PM denotes particulate matter and EC elemental carbon. PM10 is emitted into the anthropogenic coarse mode c with an initial median diameter of 6 µm and EC1 into the pure soot mode s with an initial diameter of 0.17 µm. While the anthropogenic emissions are pre-calculated, the biogenic VOC emissions are calculated as functions of the land use data, the modelled temperatures and the modelled radiative fluxes (Vogel et al., 1995). For the parameterisation of the NO emissions from the surface a modified scheme of Yienger and Levy (1995) is used (Ludwig et al., 2001).

Sea salt emissions

The emission of sea salt depends on the wind speed and on the sea water temperature and is calculated using a combination of three individual parameterisations for three individual size ranges (Lundgren, 2006); see the sketch at the end of this section. The parameterisation of Mårtensson et al. (2003) is chosen for particles with a dry particle diameter of 0.02 µm < Dp < 1 µm. For 1 µm < Dp < 9 µm, the parameterisation of Monahan et al. (1986) is used, and the parameterisation of Smith et al. (1993) is used to describe the flux of particles with a dry particle diameter of 9 µm < Dp < 28 µm. This is illustrated in Fig. 2. For describing the initial lognormal distribution of sea salt, the mode diameters and standard deviations given in Table 4 are used; the third mode has been modified with the assumption that large particles have a short residence time in the atmosphere.

Mineral dust emissions

The emissions of mineral dust depend on the friction velocity and on the surface conditions. Mineral dust particles are represented by three modes with initial diameters of 1.5, 6.7 and 14.2 µm. A detailed description of the dust emission module is given in Vogel et al. (2006).

Sedimentation and deposition

The sedimentation and dry deposition of the aerosol species is treated according to Binkowski and Shankar (1995) and Ackermann et al. (1998, and references therein). The washout of the aerosol particles depends on their size distribution and the size distribution of the rain droplets. A detailed description of the parameterisation that is used in COSMO-ART is given in Rinke (2008).
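The size-range dispatch for the sea salt source function can be sketched as below. Only the Monahan et al. (1986) branch is written out, in its standard whitecap form; the other two branches are left as stubs, and all function names are our own.

```python
import math

def monahan_flux(r80, u10):
    """Monahan et al. (1986) source function dF/dr (m^-2 s^-1 µm^-1)
    for radius r80 (µm) at 80% relative humidity and 10 m wind u10 (m/s)."""
    b = (0.380 - math.log10(r80)) / 0.650
    return (1.373 * u10**3.41 * r80**-3
            * (1.0 + 0.057 * r80**1.05)
            * 10.0**(1.19 * math.exp(-b * b)))

def sea_salt_flux(d_p, u10, sst):
    """Dispatch by dry particle diameter d_p (µm), combining the three
    parameterisations as in Lundgren (2006)."""
    if 0.02 < d_p < 1.0:
        raise NotImplementedError("Mårtensson et al. (2003); needs SST")
    if 1.0 <= d_p < 9.0:
        return monahan_flux(d_p, u10)  # dry-to-r80 conversion omitted here
    if 9.0 <= d_p < 28.0:
        raise NotImplementedError("Smith et al. (1993)")
    return 0.0

flux = sea_salt_flux(3.0, u10=9.0, sst=298.0)
```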
Results

COSMO-ART is applied to quantify the impact of soot and secondary aerosols, via the modification of atmospheric radiation, on the state of the atmosphere over Europe for two episodes. Direct interactions of the particles with cloud microphysics are currently excluded. The modification of the radiative fluxes caused by the aerosol particles initialises a number of feedback mechanisms, starting with changes in temperature (Fig. 3). Thereafter, the wind, turbulence, humidity, and cloud distributions are modified. Additionally there is an impact on the precipitation, which leads to a modification of the washout, resulting in another feedback mechanism. The chemical composition of the gaseous and particulate components is modified by these changes. Changes in radiation fluxes due to the aerosol particles also influence the photolysis frequencies, and thus the concentrations of the gases and the chemical composition of the aerosol are modified. An overview describing the feedback mechanisms that are treated in this application of COSMO-ART is given in Fig. 3.

The simulation domain includes a large part of Europe and northern Africa (Fig. 6). We used a horizontal grid size of 0.125° (∼14 km) in both horizontal directions and 40 vertical layers up to a height of 20 km. The meteorological initial and boundary conditions are obtained from the IFS model of ECMWF. Clean air conditions are prescribed for the gaseous and the aerosol variables. The analyses are used to update the boundary conditions every six hours. We apply the standard COSMO procedure including a buffer zone at the lateral boundaries. The gaseous and the particulate species are treated at the lateral boundaries in the same way as the atmospheric variables. For the calculation of the biogenic VOC emissions we used land use data provided by the Joint Research Centre at Ispra (http://www-tem.jrc.it/glc2000/).

We simulated two situations which especially differed in average cloud cover. The first episode (LC) lasted from 28 August until 1 September 2005 and was characterized by nearly cloud-free conditions, i.e. a low amount of clouds. Figure 4 shows the surface pressure and the 500 hPa geopotential for the last three days of the episode. A stable near-surface high-pressure system with weak gradients over Germany and a ridge over Central Europe prevailed in the period from 29 August until 1 September 2005. Temperatures over Germany reached values above 30 °C. In Fig. 5 (left) the average cloud cover for the corresponding period simulated with COSMO-ART is illustrated. The mean cloud cover was less than 20% over Germany, Poland, the Czech Republic, Austria, France, and Southern Spain during this episode.

The second episode (HC) lasted from 16 August until 20 August 2005. During that episode rather cloudy conditions prevailed. In the beginning of the episode the meteorological situation was characterized by a ridge over France and Great Britain, with few clouds and easterly winds. During the following days a high-level trough approached Europe and intensified. On the front side of the trough a depression developed over France and moved on in a north-eastern direction during the next days. This development yielded cloudy conditions and westerly winds for large parts of Europe from 18 August until the end of the episode. During the first four days the daily maximum temperature mostly exceeded 25 °C. On the last day of the episode a change of air mass from warm polluted air to cooler maritime air occurred over France and Germany due to a frontal system passage.
For each case we carried out two sets of simulations. First, a reference run (R) was carried out neglecting the feedback mechanisms between the aerosol particles and the atmosphere. In the second model run (F) the feedback mechanisms initialized by the modification of the radiative fluxes by the aerosol particles were enabled. Two days of simulation time were used for the spin-up of the model before the feedback processes were switched on.

Comparison with observations

The results of simulation F were compared with observations for each of the episodes. Daily mean PM10 concentration data were used from 731 stations classified as rural. Their respective locations are given in Fig. 6 (EEA, Copenhagen, 2008). We excluded the measurements of the stations in Spain, since the measured concentrations were much higher than the simulated ones, even for remote stations. A possible reason could be large contributions of mineral dust not only from the Saharan desert but also from local sources in Spain, or from local biomass burning. Neither of these emission sources was taken into account in our model runs, and these stations were thus excluded in the comparisons. Figure 7 (top) shows simulated daily mean dry mass concentrations and observed daily PM10 concentrations for the stations that are depicted in Fig. 6. For episode LC the average PM10 concentration of all stations is 23 µg m−3 and the simulated one is 13 µg m−3. That means that the simulated concentrations were on average 40% lower than the observed ones. For episode HC the average PM10 concentration of all stations is 21 µg m−3 and the simulated one is 12 µg m−3. The simulated concentrations were again on average 40% lower than the observed concentrations. The underestimation of the observations is comparable to the results of Grell et al. (2005). The correlation is lower than those documented by Stern et al. (2008) and Sartelet et al. (2007), but in those cases model simulations were performed for longer time periods. Reasons for the underestimation could be lacking emissions of PM10 and an underestimation of the organic particle fraction. Furthermore, we prescribed clean air at the boundaries of our model domain, which may also have contributed to the underestimated simulated concentrations compared to the observed concentrations. This shortcoming will be avoided in the future by supplying the model system with boundary conditions from global scale models. We also compared the simulated dry aerosol mass density and the observed PM2.5 concentrations (Fig. 7, bottom). Although the scatter of the data is comparable to the results for PM10, the absolute values are now in better agreement. The average values of simulated and observed mass concentrations are now 13.6 µg m−3 and 10.6 µg m−3 in case LC and 10.1 µg m−3 and 9.5 µg m−3 in case HC.

For episode HC we carried out additional simulations with the nested version of COSMO-ART. The model domain for this simulation is shown in Fig. 6. The horizontal grid size for the smaller domain was 0.0625° (∼7 km).
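The comparison statistics quoted above (episode means and a roughly 40% low bias) reduce to a short computation. The sketch below, with our own function name, shows it for paired daily means.

```python
import numpy as np

def compare(sim, obs):
    """Episode means, relative bias, and Pearson correlation for
    paired daily-mean concentrations in µg m^-3."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    rel_bias = (sim.mean() - obs.mean()) / obs.mean()
    r = np.corrcoef(sim, obs)[0, 1]
    return sim.mean(), obs.mean(), rel_bias, r

# e.g. for episode LC: means of 13 and 23 µg m^-3 give rel_bias ≈ -0.43
```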
Figure 8 shows diurnal cycles of NOx and PM10 for the monitoring station Eggenstein (Landesanstalt für Umwelt, Messungen und Naturschutz Baden-Württemberg, Germany). This station is located close to a four-lane radial highway with high traffic during the morning and the evening hours. In case of NOx the model system is able to reproduce the observations with the exception of individual peaks. The simulated and measured NOx concentrations averaged over four days are almost identical (29 µg m−3). Again, the model underestimates the PM10 concentrations by a factor of two (measured: 23.8 µg m−3, simulated: 10.4 µg m−3).

As COSMO-ART calculates the extinction coefficient at each grid point and at each time step, it is possible to compare the simulated optical depth with that obtained from satellite data (a minimal sketch of this column integral is given at the end of this subsection). Figure 9 shows the observed and the calculated aerosol optical depth on 1 September 2005 for the time period when MODIS-Terra (Acker and Leptoukh, 2007) passed over the model domain. The observed spatial pattern is in agreement in the central part of the model domain. Larger deviations are found in the south-eastern part. This is caused by the assumption of clean air at the lateral boundaries and the neglect of mineral dust.

Feedback of aerosols and the state of the atmosphere

Bäumer and Vogel (2007) have detected weekly cycles for atmospheric variables such as temperature and cloud cover over Germany. They also found distinct weekly cycles in the aerosol optical depth for several stations located in Germany and other European countries (Bäumer et al., 2008). This raises the question if the observed weekly cycles of the atmospheric variables can be attributed to the weekly cycles of the aerosol concentrations. The model results that we present here are a first step to answer this question. We used COSMO-ART to quantify the effect of the soot and natural and anthropogenic secondary aerosol particles on the state of the atmosphere. This is done by comparing the results of model runs R and F. The aerosol concentrations show a high spatial variability (Fig. 10a). Also for episode HC a high dry aerosol mass concentration is simulated for the southern part of the North Sea, Belgium, the Netherlands and the north-western part of Germany. As in episode LC, high aerosol concentrations are simulated in the Po valley. For the HC episode high mean aerosol concentrations are also simulated in the south-eastern part of Germany.

The modification of the radiative fluxes depends on the water content of the aerosol (see Sect. 2.4).
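Comparing with MODIS relies on forming the column optical depth from the model's extinction profile. A minimal sketch (array layout and function name are ours) is:

```python
import numpy as np

def aerosol_optical_depth(b_ext, dz):
    """Column AOD tau = sum_k b_ext(z_k) * dz_k, with the extinction
    coefficient b_ext in m^-1 and layer thicknesses dz in m.
    b_ext and dz have shape (nlev, ...); the sum runs over the levels."""
    return np.sum(np.asarray(b_ext) * np.asarray(dz), axis=0)

# 40 model layers up to 20 km: a uniform 1e-5 m^-1 profile gives tau = 0.2
tau = aerosol_optical_depth(np.full(40, 1e-5), np.full(40, 500.0))
```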
Figure 10b illustrates the spatial distribution of the wet aerosol mass concentration. Again, averages over three days are shown for the LC (left) and HC (right) episodes. The highest values are approximately 370 µg m−3 in case LC and 450 µg m−3 in case HC. A comparison of the dry mass concentration with the wet mass concentration shows differences in the pattern of the spatial distributions. For the LC episode, especially in the part of the Po valley where high dry aerosol mass concentration was simulated, we observe a modified distribution in the wet mass concentration with two maxima in the north and the south. This is caused by higher relative humidity in the mountainous regions (see supplemental material: http://www.atmos-chem-phys.net/9/8661/2009/acp-9-8661-2009-supplement.pdf). Larger differences in the spatial pattern are also found over England. While the highest wet mass concentrations are found over land, the highest dry mass occurs over the North Sea. The reason for this is a frontal passage during episode LC which is connected to a band of high relative humidity. This band passes over Great Britain during the episode and reaches France on 1 September. In Germany, where low relative humidity prevails between 30 August and 1 September, the spatial distributions of the dry and the wet aerosol concentrations are similar. The aerosol water content is influenced by the nitrate content, which is strongly dependent on the temperature. As the temperature was only changing slightly, the spatial distributions of the wet and the dry aerosol mass concentration look similar over Germany. This is not the case in the western part of the model domain, e.g. above Great Britain, due to temperature changes.

Figure 10c shows the difference in global radiation (ΔE_G = E_G(run F) − E_G(run R)) at the surface, averaged over the last three days of each episode (LC left, HC right). For the episode LC the spatial patterns of the wet aerosol concentration (Fig. 10b) and ΔE_G (Fig. 10c) are similar. In general, the aerosol leads to a reduction in the global radiation in the range of 10 W m−2. In some areas the reduction is even larger, with maximum values of 50 W m−2. Although averages over three days are presented here, there are areas where ΔE_G shows a spotty behaviour, e.g. in the south-eastern part of the model domain. The increase in global radiation in such cases is due to changes in cloud properties (e.g. cloud water content) that are initialized by the radiative effect of the aerosol particles on the thermodynamics of the atmosphere. These changes in cloud properties superpose the effects of the aerosol. The radiative impact of the aerosol is thus modified by feedback processes of clouds, although effects of aerosol particles on cloud microphysics are neglected in this study.

In the south-eastern part the spotty pattern can be explained by a spatial shift of clouds. When no clouds are present, as over Germany, only negative values of ΔE_G occur (Fig. 10c). Furthermore, temperature and other meteorological variables are modified through the feedback mechanism (Fig. 3).

During episode HC larger areas of the model domain are covered by clouds. Consequently, the correlation between the aerosol concentration (Fig. 10b) and ΔE_G (Fig. 10c) decreases. In the eastern part of Germany, where high aerosol concentrations are simulated, ΔE_G is very small and in some areas even positive. However, in areas with fewer clouds, such as the North Sea and the Netherlands, negative values of ΔE_G and high aerosol concentrations coincide.
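The quantities plotted in Fig. 10c and 10d are simple run differences averaged in time; a sketch of that reduction (the array layout is our assumption) is:

```python
import numpy as np

def mean_run_difference(field_f, field_r):
    """Three-day mean of a feedback-minus-reference difference,
    e.g. dE_G = E_G(run F) - E_G(run R), or dT for the 2 m temperature.
    Inputs have shape (time, lat, lon); the mean is over the time axis."""
    return np.mean(np.asarray(field_f) - np.asarray(field_r), axis=0)
```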
Figure 10d shows the corresponding averages over three days of the temperature change (ΔT = T(run F) − T(run R)) at 2 m height for both episodes. For episode LC (left) a decrease in temperature is obvious at those grid points where the global radiation is reduced by the aerosol particles. This is not the case in the areas where clouds are present. ΔT is found in the range between −1 K and +1 K. As could already be expected from the changes in global radiation (Fig. 10c), the correlation between the aerosol concentration and the changes in temperature is small for episode HC (Fig. 10d). A temperature increase is simulated in the south-eastern part of Germany, where high aerosol concentrations are simulated but the changes in global radiation are small. This behaviour is a result of several nonlinear feedback mechanisms and cannot be attributed to a single process. ΔT lies between −1 K and +1.5 K for this episode.

Aerosol effects on the 2 m temperature above Germany

In the following we concentrate on the model results for Germany. As can be seen in Fig. 5, Germany was almost cloud free for three days during episode LC and had a high cloud cover during episode HC. Figure 11 gives the aerosol wet mass concentration in the lowest model layer versus the simulated differences of the 2 m temperature for runs F and R for each episode. It is obvious that for episode LC the changes in temperature and the wet mass concentration are correlated quite well (Fig. 11, left). This is not the case for episode HC; in this case the temperature change is no longer correlated with the wet mass concentration (Fig. 11, right). The nonlinearity is mainly caused by changes in cloud properties due to changes in the dynamics that are initialised by the modification in radiation.

Several previous studies based on observations have focused on the weekly cycles of meteorological variables. Bäumer and Vogel (2007) found a weekly cycle of the mean temperature anomaly of about 0.2 K for an average over 15 years and 12 stations distributed over Germany. Other studies focused on the weekly cycle of the daily temperature range (Forster and Solomon, 2003; Gong et al., 2006). The daily temperature range is the difference between the maximum and the minimum temperature on a certain day. It has been speculated that these weekly cycles are caused by the weekly cycle of the aerosol concentration. A weekly cycle of the aerosol optical depth was found for stations distributed over Europe by Bäumer et al. (2008).
As our simulations were only carried out for two episodes and we neglected the feedback of the aerosol particles with the cloud microphysics, it is not possible to compare our temperature differences with those that were determined from observations for a period of 15 years. Nevertheless, it is quite interesting to compare the magnitude of our simulated differences in temperature and daily temperature range with the observed weekly cycles. We evaluate the model results in detail for a sub-domain that covers Germany. The results of this evaluation are summarized in Table 5. The averages over three days of the aerosol optical depth (AOD) for the model runs F and R are 0.17 for case LC and 0.59 for case HC. The higher value in case HC is caused by the high relative humidity and the high aerosol concentrations, especially on 20 August. The average change of the global radiation (run F − run R) is −5.3 W m−2 in case LC and −6.0 W m−2 in case HC. Although the aerosol optical depth in case HC is more than three times that in case LC, the average global radiation change does not differ much between cases HC and LC, as the global radiation in case HC is much lower than in case LC. The average 2 m temperature differences between runs F and R are −0.10 K (LC) and −0.08 K (HC). This means that over Germany the simulated aerosols induce a temperature reduction.

We have calculated the mean daily temperature range for Germany for each day of the two episodes (see the sketch at the end of this section). The mean daily temperature ranges TR and the differences ΔTR (run F − run R) are given in Table 6. The strong decrease of the daily temperature range on the third day of episode HC is due to a frontal system that passes over Germany (see supplemental material: http://www.atmos-chem-phys.net/9/8661/2009/acp-9-8661-2009-supplement.pdf). The radiative feedback caused by the aerosol particles produces a reduction in the daily temperature range of about 0.13 K. This decrease is of the same order as the observed weekly cycle of the temperature range (Bäumer and Vogel, 2007).

The results of our model runs cannot be used to prove or to explain the observed weekly cycles, since we have carried out our model simulations with emissions that were constant from day to day. While Bäumer and Vogel (2007) related the weekly cycles in the atmospheric variables to the weekly cycle of anthropogenic emissions, this study takes anthropogenic and biogenic emissions into account. Moreover, we have simulated only two episodes, and the interaction with cloud microphysics has been neglected. However, the results serve as a rough estimation of the effects of aerosol particles that can be expected on the regional scale. As COSMO-ART is currently being improved to treat the interaction of aerosols with clouds, a real simulation of the effects of changing emissions during a week will be carried out in future applications of the model system.
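The daily temperature range statistic behind Table 6 can be computed as follows; the array layout and names are our own.

```python
import numpy as np

def daily_temperature_range(t2m):
    """Sub-domain mean daily temperature range TR.
    t2m has shape (days, steps_per_day, lat, lon); TR is the domain
    mean of the daily maximum minus the daily minimum, one value per day."""
    t2m = np.asarray(t2m)
    tr = t2m.max(axis=1) - t2m.min(axis=1)   # (days, lat, lon)
    return tr.mean(axis=(1, 2))              # (days,)

# dTR per day, as in Table 6:
# dtr = daily_temperature_range(t2m_run_f) - daily_temperature_range(t2m_run_r)
```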
Summary and conclusions

We have built up the new model system COSMO-ART for studying the interaction of aerosol particles with the atmosphere on the regional scale. The model system is based on the operational weather forecast model COSMO of the Deutscher Wetterdienst. It is fully online coupled with detailed photochemistry and aerosol dynamics of natural and anthropogenic aerosol particles. The photochemistry includes the reactions that lead to the formation of precursors of aerosol particles. Within the model, primary particles such as mineral dust, sea salt and soot are treated together with secondary particles consisting of sulphate, ammonia, nitrate, organic compounds and water. The size distribution of the particles and their chemical composition vary in space and time depending on transport processes, emissions, and the chemical processes. Based on very detailed calculations we have developed parameterisations to treat the impact of the aerosol particles on the atmospheric short- and longwave radiation. Applying these parameterisations within COSMO-ART, we are able to quantify, in a fully coupled way, the feedback processes between the aerosols and the state of the atmosphere that are initialized by changes in radiation on the regional scale. We have applied the model system to two episodes in August 2005 that differed in cloud cover over Western Europe. The simulated PM10 concentrations have been compared to the observed ones, and an underestimation of the observations in the order of 40% was found. This is comparable with the results of other model studies. Reasons for the underestimation are missing emissions and probably an underestimation of the organic fraction of the particles.

For each episode two series of model runs were carried out, one where the interaction of the aerosol particles with radiation was switched off and one where it was taken into account. This approach allows the estimation of the direct radiative aerosol effect on the atmosphere at short timescales on the regional scale. For the case with a low amount of clouds we found a good correlation between the aerosol optical depth and the changes in 2 m temperature. Locally, the reduction of the three-day averaged temperature reached values of 0.3 K. For cloud-free conditions the atmosphere responds quite rapidly to the aerosol optical depth. In the case of cloudy conditions this correlation is weaker. When clouds are present, the aerosols cause changes in the cloud cover or in the cloud water content which amplify the pure radiative effect due to the aerosols. In most parts of the model domain this leads to a cooling effect, but under certain conditions an increase of the averaged temperature is also simulated.

We calculated the changes in the daily temperature range over Germany. For both episodes the aerosol leads to a reduction of the daily temperature range in the order of magnitude of 0.1 K. This value is of the same order of magnitude as was found from observations of weekly cycles of this parameter. As we underestimated the observed aerosol concentration, our findings represent a lower limit of the aerosol effect.
Fig. 2. The three parameterisations describing the flux of sea-salt particles per unit area and second for the chosen size intervals, for a sea surface temperature of 25 °C and a wind speed of 9 m s−1 (Lundgren, 2006).

Fig. 3. Feedback processes that are included in the model runs. The dashed lines indicate interactions that are not taken into account.

Fig. 6. Location of the surface based stations that were used for comparison of measured and simulated PM10.

Fig. 7. Top: Simulated dry aerosol mass concentration (coarse mode included) and observed PM10 concentrations for episodes LC (left) and HC (right). Bottom: Simulated dry aerosol mass concentration (coarse mode excluded) and observed PM2.5 concentrations for episodes LC (left) and HC (right).

Fig. 8. Simulated (black) and observed (red) daily cycles of NOx and PM10 at Eggenstein.

Fig. 10. Simulated averages over three days of aerosol dry mass concentration (a), wet mass concentration (b), difference in global radiation (c), and difference in 2 m temperature (d). Results are shown for episodes LC (left) and HC (right).

Fig. 11. Simulated change in 2 m temperature and wet mass concentration. Each data point gives the three day average (left: episode LC, right: episode HC).

Table 1. Mixing state, chemical composition, and standard deviation of the individual modes of the aerosol particles.

Table 2. List of the used spectral ranges with the considered components of the atmosphere in the radiation model.

Table 3. Parameters derived from detailed Mie calculations. * In these cases the single scattering albedo is calculated according to terms 4 and 5 of Eq. (20).

Table 4. Initial median diameter and standard deviation for the three sea salt modes, respectively.

Table 5. Averages over three days of AOD, ΔE_G and ΔT for the sub-domain that covers Germany for cases LC and HC.

Table 6. Mean daily values of TR and ΔTR for the sub-domain that covers Germany for cases LC and HC.
2017-11-29T19:42:52.185Z
2009-11-16T00:00:00.000
{ "year": 2009, "sha1": "c9796474df9eb8fc360f27ced958bc0de901c60a", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/9/8661/2009/acp-9-8661-2009.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "676adb4c1a9230486c4e1188e3b0544486256fb7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
12558597
pes2o/s2orc
v3-fos-license
Association of Hair Manganese Level with Symptoms in Attention-Deficit/Hyperactivity Disorder

Objective The study examined the association between hair manganese level and symptoms of attention-deficit/hyperactivity disorder (ADHD) in Korean children.

Methods Forty clinic-referred children with ADHD and 43 normal control children participated in this study. The participants were 6-15 years old and were mainly from the urban area of Seoul, Korea. ADHD was diagnosed using the Diagnostic and Statistical Manual of Mental Disorders, 4th edition and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version-Korean Version. The severity and symptoms of ADHD were evaluated according to the ADHD Diagnostic System and the parent's Korean ADHD Rating Scale (K-ARS). All participants completed an intelligence test and hair mineral analysis. We divided the data on hair Mn into two groups to determine whether a deficit or excess of Mn is associated with ADHD. Multiple logistic regression analyses were performed to identify hair manganese levels associated with ADHD, controlling for age, sex, and full scale intelligence quotient (IQ).

Results The proportion of the abnormal range Mn group was significantly higher in ADHD compared to controls. After statistical control for covariates including age and sex, the abnormal range Mn group was significantly associated with ADHD (OR=6.40, 95% CI=1.39-29.41, p=0.017).

Conclusion The result of this study suggests that excess exposure to or deficiency of Mn is associated with ADHD among children in Korea. Further investigation is needed to evaluate the effects of hair manganese levels on symptoms in ADHD.

INTRODUCTION

Attention-deficit/hyperactivity disorder (ADHD) is one of the most common psychiatric disorders of childhood and adolescence. 1 ADHD has a global prevalence of about 5%. 1 ADHD is characterized by symptoms of inattention, hyperactivity, and impulsivity. The impairment areas of childhood ADHD include academic and social dysfunction and skill deficits. 2 Even though the etiology of ADHD is not known exactly, factors believed to predict persistence of ADHD include a family history of ADHD, comorbid psychiatric disorders, and psychosocial adversity. 3 Some evidence based on neurochemical, imaging, and genetic studies suggests dysregulation of catecholaminergic systems in ADHD. 4

Necessary nutrients, such as the trace minerals manganese (Mn), iron, zinc, iodine, selenium, copper, fluoride, and chromium, are associated with changes in neuronal function that can lead to adverse effects on behavior and learning. 5 Specifically, Mn is an essential nutrient in humans and animals. Mn is needed by children to support normal brain growth and development. 6 Mn is a naturally occurring element that constitutes approximately 0.1% of the earth's crust and is present at low levels in water, food, and air. 7 The main exposure to manganese is through eating food or Mn-containing nutritional supplements. 8 Vegetarians who consume foods rich in Mn such as whole-grain cereals, green leafy vegetables and nuts, as well as heavy tea drinkers, may have a higher intake of Mn than the average person. 9,10 The amount of Mn ingested in drinking water is substantially lower than the intake from food, generally considered to be <1%, although depending on the concentration of Mn this route of intake can rise to 20%.
11 Certain occupations like welding, mining, and working in a factory where steel is made may increase the chance of being exposed to high levels of Mn. 12,13 Furthermore, people who smoke tobacco or inhale second-hand smoke are exposed to high levels of Mn; this applies particularly to children who live in houses where there are smokers. 9 Mn concentrations in the serum increase after 12 months of age, and Mn has been measured at an average of 1.4±1.25 μg/L in children aged 1 month to 18 years. 14 Concentrations of Mn in food and drinking water may vary between different countries and geographic areas. Also, few data are available that provide clear cut-offs for nontoxic levels of Mn. Based on the dietary information described by the World Health Organization, 10 Schroeder et al., 15 and the National Research Council, 16 the Environmental Protection Agency (EPA) estimated that an intake of 10 mg Mn per day in the diet is safe for lifetime exposure. 17 The US Food and Drug Administration, the EPA, and the Ministry of Environment of South Korea also recommend that the concentration of Mn in drinking water not exceed 0.05 mg/L. 7

Although Mn is an essential metal at low doses, excessive and chronic exposure to high doses has been associated with neurotoxicity. 18 Mn neurotoxicity is characterized by alterations in the dopamine neurobiology of the brain. The dopamine transporter (DAT) is affected by high Mn levels. 19 ADHD has also been linked to impaired dopaminergic functioning, so high Mn levels in children with ADHD may reflect a similar neurotoxic effect. 20 In developing children, high Mn exposure has been associated with behavioral disinhibition, 21 hyperactive behavior, 21 and diminished cognitive function, such as intelligence quotient (IQ), 18,22-24 memory, 24,25 and school grades. 25 High blood concentrations of Mn inversely affected attention 8 and IQ scores 26 in studies on children living in the community. Farias et al. assessed a group of children with ADHD and matched control children attending public school and reported elevated serum levels of Mn in treatment-naive children with ADHD compared to normal controls. 8 A cross-sectional study in a non-risk area found that in school-aged children, higher levels of Mn in blood samples were associated with significantly lower scores on tests of verbal IQ and full-scale IQ. 26 High levels of Mn have been found in the scalp hair of children with ADHD, 27 and elevated levels of Mn that influence the dopaminergic system and dopaminergic transmission are postulated to be involved in the etiology of ADHD. Presently, we investigated the association between excess or deficiency of Mn in head hair and symptoms of ADHD under non-risk environmental Mn exposure. Unlike previous studies that examined associations between ADHD and Mn using only rating scales, our study improved the accuracy of ADHD diagnosis by using a semistructured interview. We contrasted Mn levels in a group of children with ADHD and normal controls in Korea.

Participants and diagnosis

Forty clinic-referred children with ADHD and 43 normal control children participated in this study. Normal controls were recruited by advertisement. The participants were 6-15 years old and were mainly recruited from the urban area of Seoul, Korea. ADHD was diagnosed using the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version-Korean Version (K-SADS-PL-K).
K-SADS-PL-K is a comprehensive measure of a variety of past and present pathological conditions and is useful as a diagnostic interviewing tool for diagnosing major psychiatric disorders in child and adolescent psychiatry. This measure was administered to all subjects as well as their parents in an effort to evaluate psychiatric disorders comorbid with ADHD. We excluded children with a comorbid psychiatric disorder, a medical illness requiring medication, or a prior history of taking ADHD medication. All tests were performed by highly trained and supervised psychiatrists. To evaluate the severity of ADHD symptoms, the parent's Korean ADHD Rating Scale (K-ARS) was used. The ADHD diagnostic system (ADS) was also used to evaluate the severity of inattention and impulsivity. Full-scale intelligence quotient (IQ) was measured using the Wechsler Intelligence Scale for Children, 3rd edition (WISC-III), and hair minerals were analyzed. Written informed consent was obtained from the parents of the children after the purpose and process of the study were explained. The protocol of this study was approved by the Institutional Review Board at Kangbuk Samsung Hospital (Seoul, Korea).

Korean version of DuPaul's ARS, parent and teacher version

Developed by DuPaul, the ARS lists 18 symptoms of ADHD based on DSM-IV diagnostic criteria: nine for the attention-deficit and nine for the hyperactivity-impulsivity domain. 31 The scale has been translated and standardized in Korean. 32 The K-ARS parent and teacher forms are considered to have high validity and reliability. The internal consistency of the K-ARS by age is 0.77-0.89. For interrater reliability between parents and teachers, Pearson correlation coefficients are 0.31-0.97 and are statistically significant (p<0.01). Items are rated on a 4-point scale (0=never or rarely, 3=very often) checked by teacher and parent.

ADS

ADS is a computerized continuous performance test that consists of visual and auditory stimulation tests. 33 In each modality, the targets and non-targets are presented in the form of auditory or visual stimuli, which takes 15 minutes to complete. The test is available for Korean children over 5 years of age. It consists of three sessions: early, middle, and late phases. ADS has four variables: omission errors, commission errors, response time, and response time variability. An omission error indicates that the subject did not respond to a target stimulus, with high scores reflecting inattention. 34 A commission error indicates that the subject made an incorrect response to a non-target. This measures impulsivity 35 and inhibitory control. 36 The response time (RT) score measures the amount of time between presentation of the target stimulus and a correct response. The standard deviation of the RT measures variability or inconsistency of attention. 33 Scores are reported as age-adjusted T-scores. In our study, all subjects performed the ADS at baseline.

WISC-III

The WISC-III, 37 which is suitable for children ≥6 years of age, consists of five (or six, depending on administration) verbal subscales that together provide a Verbal IQ score and a similar number of performance subscales that together provide a Performance IQ.

Hair sampling and analysis

Mn exposure levels of both groups (ADHD and control) were assumed to be similar because they live in the same geographic area. Hair analysis was performed to evaluate long-term metal exposure and mineral levels.
For the analysis of hair minerals, all participants were asked not to chemically process their hair (i.e., no dyeing, perms, or frosting) for at least 3 weeks prior to hair sample acquisition. Participants also refrained from using hair gels, oils, and hair creams before sampling. Approximately 150 mg of hair was obtained from the parietal region of the scalp using stainless steel scissors so as not to contaminate the samples with any metal. Hair samples were collected from the children with their parents accompanying them to reduce the children's anxiety and to increase their cooperation. Only the proximal portion (within 3.0 cm of the root) was acquired as the sample. The hair samples were not washed for the assays. The cut hair was placed directly into a clean hair specimen envelope normally provided by the laboratory and sealed with the envelope's glue flap. Hair samples were assayed by Trace Elements, Incorporated (Addison, TX, USA). Each sample was weighed, placed in a 50 mL acid-washed polypropylene tube, and trace-metal grade HNO3 was added. After centrifuging for 5 minutes, the hair sample was transferred into a CEM Mars 5 Plus microwave digestion apparatus. Samples were kept at 70°C for 20 min and then the temperature was gradually increased to 115°C over 15 min. The content of each mineral was analyzed by inductively coupled plasma-mass spectrometry [ICP-MS, using a Sciex Elan 6100 apparatus (Perkin-Elmer, TX, USA)].

Statistical analyses

Demographic and clinical variables were compared by Student's t-test for continuous variables and the chi-square test for categorical variables. The Mn concentration was natural log transformed to achieve a normal distribution of the variable. We divided the hair Mn level into two groups, a normal range Mn group (0.10-1.30 ppm) and an abnormal range Mn group (<0.10 ppm or >1.30 ppm), to determine whether a lack or an overload of Mn is associated with ADHD. 38,39 Two multiple logistic regressions were performed to evaluate the association between Mn level and ADHD after adjusting for confounding factors. In Model I, Mn was used as a categorical variable after controlling for age and sex. Full scale IQ was additionally controlled for in Model II. Statistical analysis was performed using SPSS statistical software, version 18.0 (SPSS, Chicago, IL, USA). The cut-off for statistical significance was set at p<0.05.

Comparison of clinical characteristics and hair Mn levels between ADHD and control groups

The ADHD group had hair Mn concentrations that were slightly higher, but not statistically significantly so, than the control group (0.31±0.46, ADHD; 0.22±0.10, control; t=0.255, p=0.79) (Table 2). Total K-ARS scores were higher in children in the ADHD group compared to the control group, and the difference was statistically significant (3.63±3.63, control; 28.37±12.08, ADHD; t=-12.545, p=0.000) (Table 2).

Associations between hair Mn and ADHD

Levels of hair Mn were significantly correlated with total K-ARS (r=0.275, p=0.013) and not significantly correlated with full scale IQ (r=-0.036, p=0.755). Logistic regression analysis was performed to determine the influence of hair Mn levels on the prediction of ADHD. Hair Mn level was not significantly associated with ADHD after controlling for age, sex, and full scale IQ (OR=4.43, 95% CI=0.50-35.54, p=0.178). The odds ratio of the abnormal range Mn group was significantly high compared with the normal range Mn group after controlling for age and sex in Model I (OR=6.40, 95% CI=1.39-29.41, p=0.017) (Table 3).
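The two-model analysis described above can be reproduced with standard tools. The sketch below uses synthetic data (all values random, generated only to make the example runnable) and variable names of our own choosing; it is not the study's data set or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 83  # 40 ADHD + 43 controls, as in the study
df = pd.DataFrame({
    "adhd": rng.integers(0, 2, n),       # 1 = ADHD, 0 = control (synthetic)
    "mn":   rng.lognormal(-1.4, 0.7, n), # hair Mn in ppm (synthetic)
    "age":  rng.integers(6, 16, n),
    "sex":  rng.integers(0, 2, n),
})
# Abnormal range Mn group: <0.10 ppm or >1.30 ppm
df["mn_abnormal"] = ((df["mn"] < 0.10) | (df["mn"] > 1.30)).astype(int)

# Model I: abnormal-range Mn group, adjusted for age and sex
X = sm.add_constant(df[["mn_abnormal", "age", "sex"]])
fit = sm.Logit(df["adhd"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios; Model II would add full scale IQ
```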
The association between the abnormal range Mn group and ADHD was not significant after controlling for age, sex, and full scale IQ in Model II (OR=2.60, 95% CI=0.45-15.16, p=0.289) (Table 3).

DISCUSSION

The present study examined the association between concentrations of Mn in hair and ADHD symptoms of children. There was no difference between the ADHD group and the control group in hair Mn levels. In logistic regression analysis using Mn as a continuous variable, the odds ratio of the Mn level was elevated without a significant association with ADHD after controlling for age, sex, and full scale IQ (OR=4.43, 95% CI=0.50-35.54, p=0.178). After statistical control for covariates including age and sex, the abnormal range Mn group was significantly associated with ADHD (OR=6.40, 95% CI=1.39-29.41, p=0.017).

Although the neurotoxicity of lead (Pb) is well established, 40 relatively little is known about Mn neurotoxicity. Unlike Pb, which is a toxic metal, Mn is an essential microelement. 41 Mn is needed by infants and children to support normal growth and development of the brain. 6 There is no consensus on the optimal biomarkers of Mn exposure in children. 42 In a small case-control study, children with ADHD had significantly higher hair Mn levels than did controls. 27 In this regard, recent studies have observed the impacts of heavy metals on childhood cognition and behavior. Exposure to subtoxic levels of Mn has also been suggested to be associated with learning and attention problems, 43,44 hyperactive behavior and learning problems, 3 and with neurofunctional alterations characterized by neuromotor and cognitive deficits and mood changes. 5 Recently, a series of studies reported associations between excessive Mn exposure and neurologic disorders in children, mainly behavioral effects. 45 A number of studies reported relationships between excessive Mn exposure and neurobehavioral performance, 25 lower learning and memory test scores, 22,24 and cognitive attention deficits in children. 46

We additionally showed the results after controlling for full scale IQ. The association between the abnormal range Mn group and ADHD was not significant after adjusting for age, sex, and full scale IQ in Model II (OR=2.60, 95% CI=0.45-15.16, p=0.289). A previous study reported that ADHD is more likely to be present in the context of developmental delay, at the level of borderline-to-mild intellectual disability. 47,48 In contrast, other studies reported a major impact of ADHD on lower IQ scores and impaired verbal and visuo-spatial short-term memory. 49,50 Therefore, this finding after adjusting for full scale IQ should be viewed with caution.

Our study findings support the hypothesis that both chronic high-level exposure to Mn and Mn deficiency are associated with ADHD risk in children. This is consistent with previous findings of an association between hair Mn level and ADHD. 21,51,52 Compared with adults, infants and younger children can absorb and accumulate more Mn. Homeostatic mechanisms that limit absorption of ingested Mn are not fully developed in infants and younger children, 53 allowing Mn to more easily enter the brain. 54

The study has several limitations. First, the small sample size limited our ability to evaluate and adjust for potential confounding factors.
Because of the small sample size, we combined the groups with hair Mn levels below 0.10 ppm or above 1.30 ppm and defined them as a single abnormal range Mn group. A recent study on the relationship between blood Mn levels and children's attention, cognition, behavior, and academic performance divided Mn levels into three groups: the lower and upper 5th percentiles and the middle 90th percentile. 52 They reported that either excess or deficiency of Mn can cause harmful effects in children. Although we could not classify Mn into three groups due to the small sample size, our results were not much different from the results with three groups. Also, the small sample size reduced the likelihood of detecting real associations between hair Mn levels and childhood ADHD. However, the Mn concentration was completely dissociated from socioeconomic status, which reduces the potential for confounding. 22 Second, the lack of data on the influence of other trace elements on the symptoms of ADHD in our study makes it difficult to conclude that manganese levels are specifically associated with ADHD symptoms. Third, the concentrations of Mn were measured in hair only. Several tests are used to measure Mn in blood, hair, urine, or feces. Currently, no consensus has emerged as to the optimal biomarker of exposure to manganese. 42 Hair might not be the tissue that provides the most accurate measure of a child's exposure to the metals of interest. Also, it is unknown whether hair accurately reflects the Mn level of the brain, because the exact mechanism of Mn transport into the brain is not well understood. 55 Although there have been several discussions about the usefulness of hair analysis and its standardization for studying Mn exposure, 7 additional measures, such as serum Mn, are needed. While urine and blood tend to show current exposure or recent body status, hair reflects chronic exposure and reveals retrospective information. 56,57 Also, hair is easier and safer to collect, ship, and store for mineral analyses than blood or urine, and the analysis is less expensive. 58 Research suggests its usefulness as an early predictor of toxic exposure. 7 For these reasons, we chose trace element analysis in hair as a screening tool. However, the use of hair is problematic for several reasons. For example, exogenous contamination may yield values that do not reflect the absorbed dose, and hair growth and loss limit its usefulness to only a few months after exposure. 59 Other studies reported that manganese concentrations in hair vary with hair color. 60 Fourth, the cross-sectional nature of our data makes it difficult to infer a causal relationship from the results.

In summary, this study revealed significant associations between hair Mn levels and ADHD after statistical control for covariates. Possible foci for future research should include a prospective design, broadly representative ADHD samples, and good research ethics. Further research is needed to understand the causal relationship between Mn exposure and children's health, and to enable an improved risk assessment.
2016-05-17T12:19:34.450Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "e3179da3b802518cd36717b1663d17693666b152", "oa_license": "CCBYNC", "oa_url": "http://psychiatryinvestigation.org/upload/pdf/pi-12-66.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e3179da3b802518cd36717b1663d17693666b152", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
6643770
pes2o/s2orc
v3-fos-license
Primary breast angiosarcoma: a rare presentation of a rare tumor – case report

Background Primary breast angiosarcoma is defined as a malignant proliferation showing endothelial differentiation. It is a very rare tumour (0.05% of primary mammary cancers) whose diagnosis can be difficult.

Case presentation We report the observation of a 27-year-old patient with no previous history. The clinical examination found a right breast discretely increased in volume. The trucut biopsy was in favour of a lactating tubular adenoma. However, an immunohistochemical complement was requested. An absence of pancytokeratin labelling contrasting with strong expression of CD31 and CD34 (endothelial markers) was described. The proliferation index (Ki67) was estimated at 30%. This led to the conclusion that the phenotypic aspect was related to a vascular proliferation evoking an angiosarcoma. After a multidisciplinary assessment, the patient benefited from an enlarged excision of the tumour. The histopathological examination of the surgical specimen found an infiltrating mesenchymal proliferation made of vessels of variable sizes anastomosed to vascular slits, with lesional margins. The immunohistochemical examination of the surgical specimen showed the same phenotypic profile as on biopsy. The final diagnosis was a high-grade mammary angiosarcoma with incomplete excision. The patient refused any additional surgical management; external radiotherapy and close supervision were prescribed. After eight months of follow-up, no local or distant recurrence was reported.

Conclusion Primary breast angiosarcoma is a rare malignant mesenchymal tumour of vascular origin. Our observation is peculiar in the absence of any prior radiotherapy, its clinical presentation, its morpho-phenotypic characteristics, its management and its evolution.

Background

Primary breast sarcomas are rare entities. These malignant tumours originate from mesenchymal breast tissue and account for less than 1% of all breast cancer cases. Angiosarcomas are rare malignant tumours that arise from the endothelial cells lining vascular vessels. Most angiosarcomas are known to be induced by radiation. Primary angiosarcomas are rare and account for 0.05% of all malignant breast tumours.

Case presentation

We report the case of a 27-year-old female with no history of previous breast surgery or irradiation. After four months of breastfeeding, she reported a painless, progressively enlarging lump in her left breast. On examination the patient was in good condition. The right breast was slightly increased in volume. We found a 5 cm lump involving all quadrants of the breast, with no cutaneous involvement. The nodal areas and axilla were free. Sonography showed a hypoechoic, Doppler-vascularized nodular formation with fuzzy limits. Mammography revealed a homogeneous, dense opacity with poorly defined contours in the two internal quadrants, with a retroareolar component (labelled ACR-3). The trucut biopsy was in favour of a lactating tubular adenoma. Given the radioclinical discordance, the morphological aspect and the scanty nature of the material, an immunohistochemical complement was requested. An absence of pancytokeratin labelling contrasting with strong expression of CD31 and CD34 (endothelial markers) was described. The proliferation index (Ki67) was estimated at 30%. This led to the conclusion that the phenotypic aspect was related to a vascular proliferation evoking an angiosarcoma.
After multidisciplinary reassessment, the patient benefited from conservative treatment by an enlarged excision of the tumour (Fig. 1). The histopathological examination of the surgical specimen found an infiltrating mesenchymal proliferation made of vessels of variable sizes anastomosed to vascular slits, with lesional margins (Fig. 2). The endothelial cells were often spindle-shaped, with anisokaryotic nuclei, numerous mitoses and reduced cytoplasm (Fig. 3). The immunohistochemical examination of the surgical specimen showed the same phenotypic profile as on biopsy (Fig. 4). The final diagnosis was a high-grade mammary angiosarcoma with incomplete excision. The patient refused any additional surgical management; external radiotherapy and close supervision were prescribed. After eight months of follow-up, no local or distant recurrence was reported.

Discussion

Breast angiosarcoma is defined as a malignant proliferation showing endothelial differentiation [1]. It is divided into two distinct groups: primary, which arises in the breast parenchyma, and secondary, which develops in the skin, chest wall or breast parenchyma subsequent to surgery and postoperative radiation for breast cancer. Mammary angiosarcomas, both primary and secondary, may show mutations in the tyrosine kinase receptor gene KDR and high levels of Myc amplification [1-3]. Primary angiosarcoma has an incidence of about 0.05% of all primary malignancies of the breast. It is more frequent in young women (20 to 50 years) with no previous cancer history or other known risk factors [3,4]. Up to 12% of primary breast angiosarcomas are diagnosed during pregnancy or shortly after, suggesting hormonal involvement. However, oestrogen and progesterone receptors were reported to be negative in most cases [5]. The rapid growth of the disease during pregnancy and lactation is thought to be related to the suppressed immune system and placental growth factors, besides hormonal effects. Patients with primary angiosarcoma present with a palpable mass that may be growing rapidly, as seen in our case. Skin involvement is frequent (bluish-red discoloration, haemorrhage). Distant metastases can be found. The tumour size at presentation varies from 1 to 25 cm (average 5 cm), while in this case the patient presented with a larger lump of size 17×14×7 cm. Mammographic characteristics can establish the diagnosis but frequently, as in our case, they are nonspecific. Sonography and magnetic resonance imaging are more sensitive in characterizing these breast lesions, but again there are no distinctive features of angiosarcomas [4,6-9]. Diagnosis prior to surgery, either by fine needle aspiration or needle core biopsy, is difficult. Authors have reported a false negative rate of 37% [3,9]. Surgical excision and a sufficient sample for histopathological examination with immunohistochemistry are necessary to render a final diagnosis.

Morphologically, there is a broad spectrum of growth patterns and nuclear atypia. Well-differentiated angiosarcomas consist of anastomosing vascular channels that dissect through adipose tissue and lobular stroma. Other architectural patterns include vasoformative growth, solid growth, papillary endothelial growth, and a capillary-type pattern. Tumour cell shape may be typical endothelial, plump, spindled, or epithelioid. Nuclear atypia and mitoses may range from none to severe and numerous. Blood lakes and necrosis may be prominent.
There is no possible morphological distinction between primary and secondary breast angiosarcoma. The immunophenotype will prove the endothelial differentiation. Immunohistochemical staining for CD31, CD34 or sometimes podoplanin (D2-40) is very useful in poorly differentiated tumours. However, progressive tumour dedifferentiation can lead to a loss of those markers [9-13]. Three groups of breast angiosarcoma were proposed by Donnell et al. [12]: well-differentiated, intermediate-grade, and high-grade angiosarcoma. The classification is based on the constellation of growth patterns, atypia, and mitotic activity. The histological grade was thought to be predictive of the prognosis. Recent data, however, suggest that grade has no prognostic value in angiosarcoma: low-grade lesions can metastasize. Secondary locations occur in the lungs, liver, bone and skin. Involvement of axillary lymph nodes is rare [13-16]. The differential diagnosis varies according to the grade. It includes benign haemangioma, angiomyolipoma, melanoma, undifferentiated carcinoma, stromal sarcoma and reactive spindle cell proliferative lesions [17,18].

The management of breast angiosarcoma is based on large surgical excision. Total mastectomy is the rule. The haematogenous dissemination of angiosarcoma makes axillary lymph node dissection unnecessary. In high-grade angiosarcoma, chemotherapy has shown a better outcome (cyclophosphamide, an anthracycline, or an alkylating agent combined with a pyrimidine analogue). In case of local recurrence, radiation therapy might be indicated. There is no clear agreement on pre-operative radiotherapy in the metastatic setting [15-17]. Most authors link the outcome to the tumour size at diagnosis and the margin status at surgery. Median recurrence-free survival is less than 3 years. Five-year overall survival is 46% for primary breast angiosarcoma and 69% for secondary angiosarcoma [19-21].

Conclusion

Breast angiosarcomas are rare tumours. In young women, tumours with a highly vascular component on biopsy should be considered malignant until proven otherwise. The therapeutic outcome and the prognosis are determined by tumour size, margin status and secondary location.
A note on gauge fixing in supergravity/Kac-Moody correspondences

We explain how to achieve the traceless gauge for the spatial part of the spin connection in the framework of the recently proposed correspondence between the (appropriately truncated) bosonic sectors of maximal supergravities and the 'geodesic' sigma-model over E10/K(E10) at low levels. After making this gauge choice, the residual symmetries on both sides of this correspondence match precisely. The gauge choice also allows us to give a physical interpretation to the multiplicity of certain primitive affine null roots of E10.

Recent work has established intriguing evidence for the realization of indefinite (sometimes hyperbolic) Kac-Moody algebras in supergravity and M-theory. In particular, for maximal D = 11 supergravity [1], there are now several proposals on how to realize these symmetries. The approach of [2] seeks a covariant implementation of the 'very-extended' Kac-Moody algebra E11 via a non-linear realization directly in eleven dimensions (possibly augmented by further central charge coordinates [3]). By contrast, the approach of [4,5], based on the hyperbolic Kac-Moody algebra E10, has its roots in the classic BKL analysis of Einstein's equations in the vicinity of a spacelike (cosmological) singularity [6], according to which the theory near the singularity is effectively described by a one-dimensional reduction, in which spatial gradients are neglected in comparison with time derivatives (for a recent review with many references, see [7]). A 'hybrid' approach, combining some of the features of [2,4], has been developed in [8,9,10]. In spite of important conceptual differences between these approaches, a common feature is that they all require the tracelessness of the anholonomicity coefficients (or, equivalently, of the spin connection) in order to match the (appropriately truncated) degrees of freedom between supergravity and the Kac-Moody σ-model. For E11, the issue has been discussed in [11]. In this note, we explain how to realize this gauge in the E10-based approach of [4], by making joint use of diffeomorphisms and local Lorentz transformations in such a way that, at the end of the gauge fixing procedure, the residual symmetries on both sides of the correspondence match precisely. Our arguments underline a point already made in [12] concerning the importance of gauge fixing before making the identification between the supergravity theory and the 'geodesic' Kac-Moody σ-model, both at the kinematical and the dynamical level. The traceless gauge choice also resolves a puzzle concerning the multiplicity of the affine null root (= 8 for E10) and its images under permutations of the spatial coordinates; namely, we will show that this multiplicity indeed coincides with the number of physically relevant degrees of freedom for each choice of null root. In the final section, we comment on related issues in the context of the E11 proposal of [2], and on the extension of the present results to the fermionic sector.

Let us first summarize the basic conjecture and results of [4]. As shown in [7], the relevant equations of motion simplify near a space-like singularity in the sense that the degrees of freedom can be divided into 'active' ones (the diagonal metric components) and 'passive' ones (off-diagonal metric and various matter degrees of freedom), which freeze near the singularity. The resulting dynamics is thus described by a one-dimensional reduction of the higher-dimensional field equations (i.e.
purely time-dependent equations at a fixed, but arbitrary, spatial point x_0), which receives effective corrections from the passive degrees of freedom (in lowest order in the form of 'walls' leading to a cosmological billiard).¹ In the context of supergravity, the possible relevance of a reduction to one dimension, and the possible appearance of E10 in this reduction, had already been foreseen in [13], but one crucial difference here is that the dependence on the spatial coordinates is conjectured to re-emerge via a gradient expansion, which gets linked to a level expansion (or height expansion) on the σ-model side. More precisely, the correspondence is made between the purely t-dependent degrees of freedom of the Kac-Moody σ-model, and the time-dependent supergravity fields and their (so far only first order) spatial gradients at a fixed spatial point x_0. We now explain the successive gauge choices required for the correspondence of [4], stressing the residual symmetries at every step.

Pseudo-Gaussian gauge: The analysis of [4] proceeds from a spacetime metric in the zero shift (or pseudo-Gaussian) gauge²

$ds^2 = -n(t)^2\, g\, dt^2 + g_{mn}(t,x)\, dx^m dx^n$    (1)

where indices m, n, ... = 1, ..., 10 label the spatial coordinates, g denotes the determinant of the spatial metric, and the purely time-dependent lapse n(t) is to be identified with that of the geodesic Kac-Moody σ-model, and hence left free. The above gauge is supposed to be valid in a tubular neighborhood of the worldline parametrized by {(t, x_0) | t > 0} (in comoving coordinates). After making this choice, the metric (1) is left invariant by separate reparametrizations of the time and space coordinates, respectively, that is, t → t'(t) and x → x'(x), but coordinate changes mixing space and time coordinates are disallowed. The pure space reparametrizations are assumed to leave the point x_0 invariant (and hence the worldline).

¹ It has already been noted before that this mechanism offers new possibilities for 'emergent spacetime' scenarios, as the dependence on the spatial degrees of freedom here is thought to 'emerge out of' (or 'disappear into') the spacelike singularity.

² For clarity, we will stick mostly to D = 11 supergravity, but the argument remains the same for other models of interest in various space-time dimensions D ≤ 11.

Vielbein gauge: Next we make partial use of the local Lorentz group to bring the elfbein which gives rise to (1) into block-diagonal form. With a (1+10) split of the indices we demand the form

$E_M{}^A = \begin{pmatrix} n\sqrt{g} & 0 \\ 0 & e_m{}^a \end{pmatrix}.$    (2)

The local space-time Lorentz group SO(1,10) is thereby broken to its rotation subgroup SO(10); that is, (2) still admits space-time dependent spatial rotations $\Lambda_{ab}(t,x)$ as a residual symmetry.

Traceless spin connection gauge: We now wish to exploit this remaining rotation symmetry to set

$\Omega_{ab}{}^{b} = 0 \quad\Longleftrightarrow\quad \omega_{b}{}^{ba} = 0,$    (3)

where $\Omega_{ab}{}^{c}$ and $\omega_{a}{}^{bc}$ are the spatial components of the coefficients of anholonomicity and of the spin connection, respectively. Relation (3) is supposed to hold in the same tubular neighborhood as (1), and implies the vanishing of the trace and all its spatial gradients along the world line (t, x_0). The necessity of the tracelessness condition arises from the appearance of a representation for the magnetic dual of the graviton [14,15,16,2,17,11] at level ℓ = 3 in a level decomposition of E10 under its A9 = SL(10) subgroup [4]. The associated tensor of mixed symmetry, $P_{a_0|a_1\cdots a_8}$, is related via the correspondence of ref.
[4] to this dual graviton. However, from the level decomposition it follows that this representation is subject to the irreducibility constraint

$P_{[a_0|a_1\cdots a_8]} = 0 \;\Longleftrightarrow\; \Omega_{ab}{}^{b} = 0,$    (6)

which, as indicated, is equivalent under the dictionary to the traceless gauge (3). Inspection of the available tables of higher level representations [18] reveals the absence of such a trace representation at low levels; the relevant representation (000000001) appears only at level ℓ = 13, with outer multiplicity equal to 22. Similar comments apply to the representations corresponding to the spatial gradients of the trace.

Because both $\Omega_{ab}{}^{c}$ and $\omega_{a}{}^{bc}$ transform as scalars under coordinate transformations, it is clear that diffeomorphisms are of no further use at this point; in particular, a spatially constant $\Omega_{ab}{}^{c}$ (with or without trace, e.g. Bianchi cosmologies) remains invariant under a relabeling of the coordinates. This is analogous to the traceless gauge $\Gamma^{n}{}_{nm} = 0$ for the Christoffel symbol, which transforms as a scalar under local Lorentz transformations, whence the roles of diffeomorphisms and the local Lorentz group are interchanged. Therefore, given a spatial spin connection $\omega_{a}{}^{bc}$, the problem reduces to solving the equation

$\omega_{b}{}^{ba}(U) = 0$    (7)

in terms of the spatial rotation matrix $U_{ab}(t,x) \in SO(10)$, where $\omega(U)$ denotes the spin connection obtained after rotating the spatial vielbein by $U$. In infinitesimal form (with $V_a \equiv \omega_{b}{}^{ba}$ small, and $\partial_b U_{ab} = \partial_b \Lambda_{ab}$), this equation becomes

$\partial_b \Lambda_{ab} = V_a.$    (8)

Making the ansatz $\Lambda_{ab} = \partial_a v_b - \partial_b v_a$, and noticing that $v_a$ can be chosen divergence-free by shifting $v_a \to v_a + \partial_a v$ with a suitable $v = v(t,x)$, we arrive at a continuous set of Poisson equations (one for each t)

$\triangle v_a = -V_a,$    (9)

where $\triangle \equiv \partial_a \partial_a$ is the 10-dimensional spatial Laplacian. The set of equations (9) is to be solved in some tubular neighborhood of the worldline (t, x_0) with appropriate boundary conditions. The known local existence of solutions to the Poisson equation guarantees that the gauge (3) can be chosen (see also the numerical illustration below); moreover, the required SO(10) rotation only fixes the space-dependent part of the SO(10) transformations, since it follows from (7) that $\omega_{b}{}^{ba} = 0$ is not changed by purely time-dependent SO(10) rotations.

Summary of residual symmetries: Having achieved the gauge choices (1), (2) and (3), we are left with the following three residual symmetries on the supergravity side, which can now be directly identified with the residual symmetries of the E10/K(E10) σ-model in the level decomposition under A9:

(i) Reparametrizations of the time parameter t → t'(t), where the time-dependent lapse n(t) in (1) is identified with the lapse function of the E10/K(E10) σ-model.

(ii) Purely space-dependent coordinate transformations (leaving x_0 inert) that can be expanded around x_0 according to

$\xi^{m}(x) = \xi^{m}{}_{n}\,(x - x_0)^{n} + \tfrac{1}{2}\,\xi^{m}{}_{np}\,(x - x_0)^{n}(x - x_0)^{p} + \cdots$

The first order term $\xi^{m}{}_{n}$ realizes the GL(10) subgroup of the (global) E10. The higher order terms in this expansion are related to higher order spatial gradients of the various fields, which are expected to correspond to higher level representations in the decomposition of E10 under its A9 subalgebra.³

(iii) Eq. (3) is left invariant by purely time-dependent spatial rotations $\Lambda_{ab} = \Lambda_{ab}(t)$. The resulting group SO(10) can be identified with the subgroup of t-dependent SO(10) rotations within the local 'R symmetry' group K(E10) on the σ-model side, which is the finite-dimensional residual invariance left after fixing the triangular gauge for all fields except in the level ℓ = 0 sector.
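The local solvability claim for (9) is easy to illustrate numerically. The following Python sketch is not part of the paper: it solves $\triangle v = -V$ by FFT on a periodic 3-torus, a toy stand-in for the 10-dimensional tubular neighborhood and its boundary conditions, with an arbitrary smooth zero-mean source chosen purely for illustration.

import numpy as np

N, L = 32, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# A smooth, zero-mean stand-in for one component of the trace V_a
V = np.sin(X) * np.cos(2.0 * Y) + 0.5 * np.sin(Z)

k = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi   # angular wavenumbers
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0            # avoid 0/0 for the constant mode

v_hat = np.fft.fftn(V) / k2  # Laplacian v = -V  <=>  v_hat = V_hat / k^2
v_hat[0, 0, 0] = 0.0         # fix the undetermined constant mode
v = np.real(np.fft.ifftn(v_hat))

# Residual check: the (spectral) Laplacian of v should reproduce -V
lap_v = np.real(np.fft.ifftn(-k2 * np.fft.fftn(v)))
print("max |Laplacian v + V| =", np.abs(lap_v + V).max())

The printed residual is at machine precision, confirming that a smooth solution exists for any smooth zero-mean source on this toy domain.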
In summary, we have a precise matching not only of the degrees of freedom and equations of motion up to level ℓ = 3, but also of the residual symmetries on both sides of the correspondence.

³ The relevant E10 transformations in the σ-model will be accompanied by local (in time) compensating K(E10) transformations. This is analogous to fixing a triangular gauge of the spatial vielbein $e_m{}^a$ in (2).

Analogous results hold for the D9 and A8 × A1 decompositions [19,20] of E10: one similarly finds no trace representations at low levels. For the A8 × A1 decomposition (corresponding to IIB, see [20]) this is straightforward, since one deals with the dual of the graviton over A8 = SL(9) instead of SL(10), and the irreducibility constraint (6) still implies that one has to fix the space-dependent rotations to arrive at the traceless gauge. For D9 = SO(9,9) (related to massive IIA supergravity in [19]) the situation is slightly more involved, since the relevant tensor containing the dual of the graviton is now contained in an antisymmetric three-form representation of SO(9,9) (at D9 level ℓ = 2), which we denote by $P_{IJK}$ (with I, J, K = 1, ..., 18). Seen from the compact subgroup SO(9) × SO(9) ⊂ SO(9,9), there are four different components that need to be distinguished (i, j, k = 1, ..., 9; ī, j̄, k̄ = 10, ..., 18, cf. [19]): $P_{ijk}$, $P_{ij\bar{k}}$, $P_{i\bar{j}\bar{k}}$ and $P_{\bar{i}\bar{j}\bar{k}}$, each with a definite structure under the diagonal rotation group SO(9)_diag ⊂ SO(9) × SO(9). We see that those tensors which allow for the mixed symmetry required for (part of) the dual graviton also allow for the presence of a vector representation. The nine-dimensional trace $\sum_{b=1}^{9} \omega_{b}{}^{ba}$ transforms in a vector representation of SO(9)_diag, and it would therefore seem unnecessary to choose a gauge for it. However, this reasoning overlooks the dual field for the type IIA dilaton gradient $\partial_a \phi$, which also transforms as a vector.⁴ Now the appropriate gauge condition relates the two vectors. Interestingly, this is precisely what the original gauge condition, summed over ten space directions, translates into if one follows through the redefinitions of [19]. This will be discussed in more detail in [21]. In both cases we see that the matching between supergravity and the E10/K(E10) σ-model is possible only if (3) is satisfied and all gauges are fixed so that the residual symmetries agree.

Interpretation of root multiplicity: The significance and proper physical interpretation of the imaginary roots of E10 and their multiplicities in the present context is far from understood⁵ (recall that, generically, imaginary roots α are degenerate, with exponentially growing multiplicities mult(α) > 1). The above choice of gauge now allows us to extend the matching (and hence the 'dictionary') beyond real roots, and to give a physical interpretation at least to the fact that lightlike (null) roots are associated with root multiplicity > 1. Namely, the roots associated with the latter fall into two classes [7]. First, there are the gravitational roots (giving rise to 'gravitational walls') associated with those components $\Omega_{bc}{}^{a}$ for which the indices a, b, c are all different: these correspond to level-3 roots $\alpha_{abc}$ defined by the wall forms (cf. [7], section 6.2)

$\alpha_{abc}(\beta) = 2\beta^{a} + \sum_{e \neq a,b,c} \beta^{e}$    (14)

and are real: $\alpha_{abc}^{2} = 2$. The corresponding components of the dual field $P_{a_0|a_1\ldots a_8}$ are the ones where $a_0$ is equal to one of the indices $a_1, \ldots, a_8$.
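As a quick consistency check of the quoted norm, one can use the Lorentzian metric on the space of the $\beta^a$ employed in [7] (imported here from that review, not restated in this note), $\langle w, w \rangle = \sum_a w_a^2 - \tfrac{1}{9}\big(\sum_a w_a\big)^2$ for a wall form $w(\beta) = \sum_a w_a \beta^a$ in ten spatial dimensions. The wall form (14) has coefficient 2 on the direction a, 1 on the seven directions $e \notin \{a,b,c\}$, and 0 on b and c, so

$\langle \alpha_{abc}, \alpha_{abc} \rangle = \big(2^2 + 7\cdot 1^2\big) - \tfrac{1}{9}\,(2+7)^2 = 11 - 9 = 2.$

The same computation for the subleading wall forms introduced below, with coefficient 1 on nine directions and 0 on the tenth, gives $9 - \tfrac{81}{9} = 0$, consistent with their identification as null roots.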
In addition, [7] identified ten subleading gravitational walls associated with ten null roots, designated as $\mu_a$ for a = 1, ..., 10, cf. eqn. (6.16) there, and defined by the wall forms

$\mu_{a}(\beta) = \sum_{e \neq a} \beta^{e}.$    (15)

These ten null roots (for a = 1, ..., 10) can all be obtained by sl(10) Weyl reflections (or, equivalently, by permuting the spatial coordinates) from the primitive (i.e. lowest height) null root at height 30, which has $\delta^2 = 0$, mult(δ) = 8, and is identical to the null root of the affine subalgebra e9 ⊂ e10 (in the notation of [7], we have δ = µ_1). This null root and its images under the sl(10) Weyl group are the only imaginary roots appearing at levels ℓ ≤ 3 in the A9 decomposition. The associated components of the dual field $P_{a_0|a_1\ldots a_8}$ belonging to these null roots are the ones for which the indices $a_0, \ldots, a_8$ are all distinct. Using the correspondence, we can now give a physical interpretation to the multiplicity mult(δ). Since the indices on $P_{a_0|a_1\ldots a_8}$ are all different, two indices on the dual coefficient of anholonomicity $\Omega_{ab}{}^{c}$ must be equal, i.e. we must consider the components⁶ $\Omega_{ab}{}^{b}$. As shown in [7], these components are then all associated with the null root $\mu_a$, and it would thus appear that we have nine possible values for b. However, thanks to our gauge choice (3), there is now one linear relation $\sum_b \Omega_{ab}{}^{b} = 0$, whence the number of independent field components associated to each null root $\mu_a$ is only eight, in agreement with the root multiplicity mult(δ) = 8!

How are these statements mirrored in E11 [2,23,24,11]? At least locally, the traceless gauge $\Omega_{AB}{}^{B} = 0$ (contractions now to be taken with the Minkowski metric in eleven dimensions) can be reached by exploiting the full local Lorentz group SO(1,10) [11]. The difference is now that, after gauge fixing, the local Lorentz group has been 'used up' completely, and there remains no symmetry to identify with the SO(1,10) subgroup of the local group K(E11), while the traceless gauge is still compatible with full 11-dimensional diffeomorphism invariance. A second difference is that a counting argument analogous to the one given above would suggest that there are now nine independent components $\Omega_{AB}{}^{B}$ (no summation on B) for each A, whereas the multiplicity of the associated null root δ remains the same (= 8) when δ is considered as a root of E11. As also mentioned in [11], instead of discarding the trace (in order to retain full Lorentz invariance), one might look for a trace representation at higher levels. Inspection of the tables [18] reveals that the relevant representation (0000000010) does appear in the A10 decomposition of E11, but only at level ℓ = 14, and with outer multiplicity 491.

Supersymmetric generalization: Similar considerations apply to the supersymmetric version of the E10 σ-model [25,26]. The Kac-Moody model allows only a local supersymmetry with parameter ε(t) depending only on time.⁷ Therefore, we should require on the supergravity side a similar gauge condition on the supergravity fermions involving spatial gradients, which reduces ε(t, x) to purely time-dependent supersymmetry transformations with parameter ε(t). The precise form of this condition is presently unknown, but it will schematically be of the form $\partial^{m} \Psi_{m} = 0$ (where $\Psi_m$ denotes the spatial components of the gravitino).
We note also that one can consider a completely gauge-fixed version of the model where one chooses the lapse n(t) = 1, which is reflected in the supersymmetric partner constraint $\psi_0 - \Gamma_0 \Gamma^a \psi_a = 0$ [25].
Warming and acidification threaten glass sponge Aphrocallistes vastus pumping and reef formation

The glass sponge Aphrocallistes vastus contributes to the formation of large reefs unique to the Northeast Pacific Ocean. These habitats have tremendous filtration capacity that facilitates flow of carbon between trophic levels. Their sensitivity and resilience to climate change, and thus persistence in the Anthropocene, is unknown. Here we show that ocean acidification and warming, alone and in combination, have significant adverse effects on pumping capacity, contribute to irreversible tissue withdrawal, and weaken skeletal strength and stiffness of A. vastus. Within one month, sponges exposed to warming (including the combined treatment) ceased pumping (50-60%) and exhibited tissue withdrawal (10-25%). Thermal and acidification stress significantly reduced skeletal stiffness, and warming weakened it, potentially curtailing reef formation. Environmental data suggest that conditions causing irreversible damage are possible in the field at +0.5 °C above current conditions, indicating that ongoing climate change is a serious and immediate threat to A. vastus, reef-dependent communities, and potentially other glass sponges.

Apparent pumping arrest

A greater proportion of sponges exposed to warming and/or acidification treatments ceased pumping than the control sponges over the course of the experiment (Fig. 1a), but there was no significant effect of acidification, warming, or their interaction on apparent pumping arrest over time (Table 1a). The onset of apparent pumping arrest was seen as early as two weeks in sponges exposed to warming (including OW and OAW), and the proportion of individuals not filtering remained relatively stable in the OA and OW treatments, but fluctuations were observed in the OAW treatment combination (Fig. 1a).

Pumping capacity

Minimum residence times were similar across treatments early in the experiment, but then diverged through time in response to acidification and warming (Fig. 1b). Although minimum residence time remained constant in control tanks, it declined by 2- to 3.5-fold in sponges exposed to the OA, OAW, and OW treatment combinations. Individuals subjected to acidification and warming separately pumped the dye significantly more slowly than the control after four months (120 days) of exposure to these treatments (Table 1b). The treatment interaction dampened this negative response, but not significantly. After four months of exposure, sponges in OA, OAW, and OW tanks showed pumping strength reduced by 2- to 5.5-fold compared to the control (Fig. 1c). Strength was significantly weaker in individuals subjected to acidification and warming relative to the control (Table 1c; Fig. 1c). Warmed sponges (OW and OAW) had depressed pumping strength as early as the first sampling point, whereas sponges in the acidification-only treatment lost pumping strength more gradually (more details in Supplementary Table S1). Notably, after three months, sponges exposed to elevated temperature alone showed an increase in minimum residence time (slowed pumping) and a decrease in pumping strength, but the pumping capacity of individuals in the OAW treatment (exposed to both acidification and warming) was relatively faster and stronger, similar to the acidified treatment. However, in the final (fourth) month the pumping capacity of sponges in OAW ultimately worsened, mirroring that of individuals subjected to warming (Fig. 1).
These patterns resulted in a significant Acidification × Warming × Time interaction (Table 1c).

Tissue withdrawal

The effects of acidification and warming on tissue withdrawal were large in magnitude, but significance was not detected (Table 1d), potentially as a result of the relatively low sample size (n = 8 per treatment combination). Yet the trends are alarming and worth detailing: individuals subjected to warming (including the OW and OAW treatment combinations) had an earlier onset (by one month) of tissue withdrawal relative to the control and OA treatment combinations. By the end of the experiment, all (100%) sponges in the OA and >75% of sponges in the warmer (OW and OAW) treatment combinations had signs of tissue withdrawal, a 35-60% increase compared to control sponges (Fig. 1d). The hazard ratio estimated by the Cox proportional hazards regression model (Exp. Coeff. in Table 1d) suggests a threefold increase in the probability of acidified and warmed sponges showing signs of tissue withdrawal compared to those in the control.

Skeletal breaking force per volume and stiffness

Experimental treatment combinations (OA, OW, OAW) reduced the force per volume required to break the A. vastus skeleton (Fig. 2a), but only a significant effect of warming was detected (Table 1e). Both acidification and warming significantly reduced the skeletal modulus (stiffness; Fig. 2b), meaning the skeleton became more elastic after four months of exposure to these conditions (Table 1f). There were no significant Acidification × Warming interaction effects for these material properties.

Warming and acidification pose an immediate threat to sponge filtration and reef formation

Our results indicate that future acidification, warming, and their combination may have substantial adverse effects on the pumping capacity, tissue withdrawal, and structural integrity of the glass sponge A. vastus, a species that contributes to the formation of historically and ecologically important habitats unique to the Pacific Northwest.

[Figure 1. Apparent pumping arrest (a), minimum residence time (b), pumping strength (c), and onset of tissue withdrawal (d) in the reef-building glass sponge Aphrocallistes vastus exposed to four treatment combinations. Treatment combinations include: ambient conditions ('Control'), CO2-induced acidification ('OA'), increased seawater temperature ('OW'), and a combination of both ('OAW') for four months. (a) Colour gradient represents total apparent pumping arrest (dark shade) to strong pumping (light shade). (b) 'Minimum residence time' refers to the time (in seconds) taken to expel dye from the oscula after injection of a fixed volume; mean values exclude individuals that were not pumping (assigned a pumping strength score of zero). (c) 'Pumping strength' is a score assigned to the volume of the plume expelled from the oscula; mean values include individuals that were not pumping (score of zero). (d) Kaplan-Meier survival curve for the probability of observing tissue withdrawal in each individual. 95% confidence limits are shown (in d) and error bars represent the standard error (SE; in b, c) of the mean (n = 8 per treatment combination).]

Most worryingly, the onset of apparent pumping arrest was quick (occurring within two weeks) for sponges exposed to elevated temperatures, regardless of acidification. Subsequent degradation was observed two weeks following apparent arrest. The rigid skeleton of glass sponges, like A.
vastus, does not permit these animals to contract, like other sponges from the class Demospongiae (which possess both protein and silica spicules), in response to, for example, particle obstruction 24,25. To protect themselves, glass sponges typically go into temporary apparent pumping arrest, whereby they cease filtering particles from the surrounding water to prevent particle obstruction 24. It is possible that the treatments caused significant stress, manifested as apparent pumping arrest leading to tissue withdrawal, throughout the experiment. Warming can influence sponge feeding behavior by reducing choanocyte chamber density and size, and therefore filtration efficiency 26. Few glass sponges have successfully been kept in recirculating seawater, but for those that did survive for weeks to months, unusual changes in the structure of choanocyte chambers were noted over time 27, which may help explain the tissue withdrawal observed in control sponges. To the best of our knowledge, there have been no studies investigating the effects of acidification and warming on glass sponge filtration, but there are some examples among demosponges. Similar to our findings for A. vastus, Rhopaloeides odorabile shows drastic reductions in pumping rates and feeding efficiency in response to warming 13. In contrast, Dysidea avara filtration rates remain unaffected by natural changes in temperature 28, but in Halichondria panicea rates increased at higher temperatures 19. Overall, the available data suggest that ocean warming impacts on filtration capacity could be species-specific, and the effects of warming will depend on whether a particular species is already near or above its thermal optimum. The action potential controlling filtration in the glass sponge Rhabdocalyptus dawsoni is known to function within a narrow temperature range (7-12 °C) 29, but it is unclear if this is the case for other glass sponges. Below 7 °C, sponges are unable to resume filtration after arrest, and they cannot undergo arrest at temperatures above 12 °C, thereby making them more susceptible to starvation and clogging from sediments 25. The ambient and upper-limit temperatures examined in the present study were within the physiological tolerance limits of glass sponges, but signs of distress were still observed under the climatically realistic magnitude of warming used in our experiment. It is possible that prolonged exposure (>2 weeks, as defined by our study) to warming might further restrict the physiological limits of glass sponges and could cause a decrease in biomass of A. vastus populations as a result of starvation (marked by apparent pumping arrest). Periods of prolonged warming have already been observed in the field, at the collection site of the present study (Fig. 3) and in other Howe Sound bioherms 30. Warm periods, defined as temperatures reaching >10.4 °C with no more than 12 hrs of cooling (temperatures <10.4 °C), lasting 6-13 days, occurred six times between July and October 2016, with five brief periods of cooling in between warm periods, which corresponded to a weak temperature anomaly (La Niña year) 31. Our results suggest that irreversible tissue withdrawal could take place in A. vastus after 30 days of exposure to warming (>10.4 °C), which could have occurred had it not been for the several brief periods of cooling observed in the summer of 2016.
Warming trends pose an immediate stress to glass sponge reefs, as the addition of 0.5 °C to the 2016 pattern would result in 140 consecutive days of warming, a period longer and warmer than the sponges were exposed to in the present study. Responses of glass sponges to ocean acidification have not previously been investigated; the responses of species in other sponge classes are not well known and, similar to responses to warming, appear to be species-specific. Some species appear to be resistant to acidification: elevated temperatures caused significant adverse effects on abundant tropical sponge species, but acidification alone had little effect 32. Cliona orientalis, a demosponge, had increased bioerosion rates under acidified conditions 14; demosponge species were present near Mediterranean CO2 vent sites with pH values as low as 6.6 units 16; no effect of acidification was found on the survival rates of Crella incrustans 18; and Mycale grandis showed extraordinary resistance to acidification 17. In contrast, increased mortality in response to acidification was seen in the sponges Cliona celata 15 and Tethya bergquistae 18. Our study suggests glass sponges are less sensitive to ocean acidification than to warming, at least within the range of change expected for these variables in the coming decades, but they are not resilient to long-term exposure to either, since both elevated temperature and acidification ultimately had detrimental outcomes for the sponges. Importantly, the interactive effect of acidification and warming had mitigating effects on the pumping capacity of individuals exposed to warming for the first three months, mirroring the response of acidified sponges. The oscillations in pumping capacity and apparent arrest throughout the experiment suggest that the interactive effect of acidification may cause the sponges to intermittently start/stop pumping. Contrary to our work, the interaction between acidification and warming exacerbated the effect of temperature stress in heterotrophic sponge species 32. However, acidification may mitigate these stresses in phototrophic species, reducing mortality, necrosis and bleaching of tropical sponges 32,33. In the final month of our experiment, individuals subjected to a combination of elevated CO2 with warmer temperatures performed similarly to the temperature treatment, suggesting that acidification may have a threshold and a short-term buffering capacity. Because acidification did not dampen the presence of tissue withdrawal, it may not be able to mediate the effect of temperature and the ultimate loss of this species in the long term. It must be noted that there have been documented mass mortalities of glass sponges (including A. vastus) in Howe Sound, where the sponges were collected for this study and where several glass sponge reefs exist. These extensive glass sponge mortalities correlate with elevated temperatures reported during the 2009/2010 and 2015/2016 El Niño events 30,34, and provide some indication that these sponges are sensitive to elevated temperatures. However, this period of warming was not associated with a decrease in pH 35. Furthermore, acidification independent of warming has been documented in Howe Sound, but has not been associated with sponge mortalities 35.
From these field observations, no conclusions can be drawn regarding how temperature and acidification may interact in the field and how acidification may impact the sponges under natural circumstances. However, the patterns qualitatively match the results seen in our experiment, suggesting warming is the primary threat to glass sponges. The combination of reduced skeletal stiffness (under warming and acidification) and strength (from warming) would be expected to slow or completely curtail reef formation. The fused, three-dimensional skeletal network, comprised of biosilica and chitin and held together at the joints with low concentrations of calcite, is responsible for the sponges' rigid body, which prevents disaggregation of the skeleton long after death, allowing for reef development [36][37][38][39]. The dictyonine skeleton (fused robust scaffolding) is thought to reduce skeletal stiffness in glass sponges like A. vastus, providing natural flexibility to minimize stresses posed by hydrodynamic forces in shallower waters 40. Material stiffness values (measured as Young's modulus) from previous work on A. vastus are slightly higher (2.76-10.04 MPa) 40 than those obtained in the present study (control sponges = 1.2 ± 0.7 MPa). Discrepancies might be due to life-stage differences, as the present study was conducted on juvenile sponges (3-8 cm in height). Regardless, warmed and/or acidified sponges were half as stiff as the control sponges. Alterations to the skeleton, especially in terms of reduced stiffness (increased flexibility) as presented here, could reduce feeding efficiency, lowering the sponges' critical water flow threshold and potentially their distribution, restricting them to waters with higher food availability. Furthermore, under warming conditions the more brittle (measured as reduced force per volume) skeletons might collapse under the increasing weight of a growing sponge, which can reach 2-3 m in height 36, and/or might not be able to withstand the myriad animals walking and swimming in and among the sponges 40,41. Because we only examined material properties in living tissue, it is unclear how the dead skeleton would be altered by climate change and whether it too would succumb to the fate of the living skeleton, but it is reasonable to suspect that differences in skeletal strength apparent during life would perpetuate after death. This is critical, as dead sponges are important for reef growth: larval glass sponges and other invertebrates settle and grow on the macerated skeleton 36. The unique architecture of glass sponges vital to reef formation may be vulnerable to climate change.
The 19 documented glass sponge reefs in the Salish Sea, for example, collectively filter 1.04 × 10^11 L of water each day, representing 1% of the total water volume in the Strait of Georgia and Howe Sound combined 6. By doing so, glass sponges bring microbial food energy from marine and terrestrial sources into local food webs by feeding on and removing up to 90% of bacteria from the water 11,43. Reduction in this tremendous filtration capacity, as well as the reefs' eventual physical decimation, could alter the local and regional microbial loop and the energy supplied to the benthic community. Examples of breakdowns in bentho-pelagic coupling exist: sponge populations in Florida Bay have historically controlled phytoplankton blooms via particle removal and pumping 44, and devastation of the sponge population in the area led to increased toxic blooms in Florida Bay. Reduced skeletal strength could act as a positive feedback loop, further weakening the sponge infrastructure and making it more prone to damage from inhabitants (fish and invertebrates) moving about the reef. Habitat loss as a consequence of ocean acidification 45 and warming 46 has negative downstream effects on biodiversity in coral reefs, mussel beds, and some macroalgal habitats. Similarly, we anticipate biodiversity loss in these ancient glass sponge habitats as a result of climate change.

Methods

Collection and husbandry. Juvenile A. vastus, ranging in height from 3 to 8 cm, were randomly selected from the 'Field of A Thousand' dive site on the west side of Bowen Island (49.396, −123.397) in Howe Sound, British Columbia, Canada, under collection license XR 321 2017. Sponges were placed in plastic bags with ambient seawater (collected at depth) and stored in coolers for transportation to the laboratory at the University of British Columbia. To ensure longevity, sponges were slowly drip-acclimatized to their tank chemistry over the course of one hour by adding 100 mL of water to the collection bag (stored in a cooler) every 10 min from the respective tank in which each individual sponge was to be housed. Two sponges were placed in each of sixteen 250 L recirculating seawater aquaria, bubbled constantly with ambient air and equipped with a multistage filtration system, including a biological filter (sock filtration, protein skimmer, and bioballs) and a UV sterilizer. The source seawater was obtained locally from 16 m depth in Burrard Inlet, BC, and coarse-filtered by the Vancouver Aquarium. Sponges were held in total darkness, with red light exposure during feeding. White light exposure was kept to a minimum, 1 hr per month or less, to measure pumping activity and observe tissue withdrawal. Ammonia, nitrite, and nitrate (using the API Marine Master Test Kit) were monitored throughout the experiment. Twenty percent water changes were performed when necessary. Water changes were also conducted at least once per month during cleaning, which was kept to a minimum to avoid stressing the animals with excessive water movement. Silicon oxide was monitored with the Salifert Si Profile Test kit. To supplement the silica content of the water, two drops of Sponge Excel Marine High-purity Silica from Brightwell Aquatics were added twice to each tank throughout the experiment. The sponges were fed twice daily at fixed times (every 12 hrs) to approximate their natural exposure to tidal rhythm.
Each sponge tank received: in the morning, 0.5 mL Reef Nutrition Roti Feast + 0.5 mL Reef Nutrition Oyster Feast per tank, mixed with 10 mL seawater and fed to the sponges using a Kent Marine Sea Squirt feeder; in the evening, two drops of concentrated Sponge Power (Korallen-Zucht Sponge Power) added directly to each tank. In addition, four times weekly, 0.5 mL Fauna Marine Ultra Min S and 0.5 mL Fauna Marine Ultra Min D mixed with 10 mL seawater were added to each tank using the Kent Marine Sea Squirt feeder. All food was injected near the water's surface to prevent contact with, or movement near, the animal.

Experimental setup and water chemistry. The sponges were acclimated in their assigned tanks at 8-9 °C for five days without food; on the sixth day, the sponges were fed and the tanks were set to their experimental temperature and pH over 8 hrs. Experimental treatments were chosen based on conservative future projections (temperature +1.8 °C and ΔpH −0.2 units, based on year 2100 projections) 47. The 16 experimental aquaria were divided equally into four treatments: (1) control (ambient temperature = 8.6 °C and pH = 7.8); (2) reduced pH (present-day temperature and projected year 2100 pH = 7.6); (3) elevated temperature (projected year 2100 temperature of 10.4 °C (+1.8 °C) and present-day pH = 7.8); and (4) elevated temperature and reduced pH (projected year 2100 temperature and pH). Because of a leak in the CO2 canister, acidification took place one week later than the other treatments, but for conservative purposes its time span was treated like that of the other treatments in the analyses. Temperature was maintained using individual chillers connected to each tank. Elevated CO2 concentrations were achieved using mass flow controllers to bubble an appropriate mixture of compressed CO2 (100% CO2; Praxair) and ambient air (drawn from outside the building) from an air compressor. Control tanks were bubbled with ambient compressed air. Temperature was measured 5x per week using a combination of a YSI probe (YSI Pro 30) and a mercury thermometer. Seawater pH was measured 3-4x per week using an Oakton pH 450 (two-point calibration with saltwater buffers AMP and TRIS, pH 6.77 and 8.09, respectively, at 25 °C). The YSI was also used to monitor salinity. Water samples for carbonate system parameters were collected bimonthly and preserved with 10 μL of 5% (w/v) aqueous mercuric chloride for future analysis. Dissolved inorganic carbon (DIC) was measured using a DIC Analyzer (Model AS-C3, Apollo SciTech), according to the guidelines of Standard Operating Procedure 2 48. Three replicates of 0.75 mL were taken for each sample. Results were normalized to a Certified Reference Material (CRM Batch No. 154) supplied by Prof. Andrew Dickson (Scripps Institution of Oceanography). Full carbonate system parameters were calculated for the control and acidified treatments using CO2SYS 49 (Table 2).
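For readers who want to reproduce this kind of calculation, the sketch below uses PyCO2SYS, the Python port of CO2SYS (the study itself used CO2SYS proper, so the interface here is an assumption of this sketch). The DIC and salinity values are illustrative placeholders, not measurements from this study; the pH and temperature are the stated control targets.

import PyCO2SYS as pyco2

# Solve the carbonate system from a measured DIC/pH pair, as described above.
# DIC and salinity are illustrative placeholders, NOT data from the paper;
# pH 7.8 and 8.6 degC are the stated control conditions.
result = pyco2.sys(
    par1=2050.0,      # DIC, umol/kg (placeholder)
    par2=7.8,         # pH (control target from the text)
    par1_type=2,      # 2 = dissolved inorganic carbon
    par2_type=3,      # 3 = pH
    temperature=8.6,  # degC (control)
    salinity=29.0,    # practical salinity (placeholder)
)
print("Total alkalinity (umol/kg):", result["alkalinity"])
print("pCO2 (uatm):", result["pCO2"])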
Sponge pumping and tissue withdrawal. Pumping was monitored on days 16, 31, 50, 71, 92, and 120 from the start of the experiment. Two milliliters of freshly made fluorescent calcein dye (4 g/L; Syndel Laboratories Ltd), a fluorescent derivative of fluorescein, was injected with a pipette positioned 0.5 cm from the sponge wall and halfway down the sponge's structure to measure pumping capacity, which was quantified by calculating the time it took the dye to be expelled from the oscula (referred to as 'minimum residence time' hereafter) and by scoring the density of the plume expelled from the oscula ('pumping strength' hereafter). We calculated minimum residence time as the amount of time between dye injection and the emergence of the dye from the sponge osculum. Here, the time it took for the dye to be expelled was calculated not by noting the time the dye appeared above the oscula (as in the dye front method 50) but rather when it appeared at the edge of it (distance = 0 mm from the oscula); therefore, a flow rate could not be calculated. We consider minimum residence time a proxy measurement for pumping rate, as we were unable to calculate pumping rate directly in such small specimens. In preliminary trials we observed that sponges with similar minimum residence times differed significantly in the shape of the exhalent dye plume (see examples in Supplementary Videos S1). Consequently, we added a measurement we refer to as 'pumping strength'. Videos of the sponges pumping were recorded with a Sony Handycam so as to precisely measure the minimum residence time. The average minimum residence time for each treatment did not include those sponges that were not pumping. Pumping strength was scored over a gradient of 0-6: 'weak' = a diffuse plume of dye (score 1-3), 'strong' = a dense plume (4-6), and 'none' = apparent pumping arrest (scored 0). The term 'apparent pumping arrest' does not strictly correspond to pumping arrest, because a flowmeter was not used to confirm it; it does, however, imply that pumping was so weak that it could not be observed with the dye. Scores were determined by an unbiased observer. The quantity of dye expelled, the speed at which the dye was ejected, and whether the dye emerged continuously or in puffs were all factors considered when scoring a plume. The presence/absence of tissue withdrawal was monitored daily until the first signs of tissue withdrawal appeared. Withdrawn tissue was easily distinguished from healthy tissue by its translucent (colourless) appearance, whereas healthy tissue maintained its original beige or orange colour (Fig. 4). After onset, tissue withdrawal was monitored every two weeks for the remainder of the experiment. All sponges survived to the end of the experiment, except one sponge in the control treatment that died in the last month. The final time point of this sponge was excluded from the analyses because its death was deemed to be caused by a microbial infection: the sponge died suddenly (within 24 hrs) despite pumping strongly and showing no signs of tissue withdrawal, and it developed strings of mucus in that 24-hr period.

Mechanical properties. Skeletal breaking force per volume and modulus (stiffness) were tested using a standard compression method in a computer-interfaced tensometer (model 5500 R, Instron Corp., Canton, MA, USA). Skeleton was sampled halfway down the sponge and cut into square pieces (approx. 1 cm²). These were placed in the cross-beam of the instrument and a maximum force of 4 N was gradually applied (load rate = 0.25 N/min; strain rate = 0.35 mm/s) to the skeleton until the point of failure. The breaking force was recorded.
Because thickness differed by sponge (1.7-4.6 mm), measurements of sponge skeletal thickness and cross-sectional area were used to standardize breaking force per volume. The compression surface consisted of a 3 mm diameter puncture probe. Modulus was calculated by dividing material stress by strain (i.e. the slope of the stress-strain curve produced during the compressive test). The average of 3-5 replicates was taken for analysis.

Statistical analyses. All statistical analyses were performed in R version 3.6.0 51 for Mac OS X. For all tests, significance was determined at p < 0.05. Data were transformed when necessary (as outlined below) to meet model assumptions.

Table 2. Measured and calculated carbonate chemistry parameters for four treatment combinations: ambient conditions ('Control'), CO2-induced acidification ('OA'), increased seawater temperature ('OW'), and a combination of both ('OAW'). Parameters of carbonate seawater chemistry (total alkalinity (TA) and pCO2) were calculated from measured dissolved inorganic carbon (DIC), pH, temperature, and salinity values using CO2SYS. SW = seawater; *directly measured (n = 8 per treatment); **calculated.
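As a small illustration of the stiffness calculation described above (modulus as the slope of the stress-strain curve), the following Python sketch fits a line to synthetic compression data; none of the numbers come from the paper, and the probe area simply assumes the 3 mm diameter stated in the Methods.

import numpy as np

def youngs_modulus(force_N, displacement_mm, area_mm2, thickness_mm):
    """Modulus (MPa) as the slope of stress (N/mm^2 = MPa) vs. strain."""
    stress = force_N / area_mm2
    strain = displacement_mm / thickness_mm
    slope, _intercept = np.polyfit(strain, stress, 1)  # linear fit
    return slope

# Synthetic, near-linear loading ramp (illustrative data only)
rng = np.random.default_rng(0)
disp = np.linspace(0.0, 0.3, 50)                        # mm
force = 12.0 * disp + rng.normal(0.0, 0.02, disp.size)  # N
probe_area = np.pi * (3.0 / 2.0) ** 2                   # mm^2, 3 mm probe
print(youngs_modulus(force, disp, probe_area, thickness_mm=3.0), "MPa")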
Natural Antispasmodics: Source, Stereochemical Configuration, and Biological Activity

Natural products with antispasmodic activity have been used in traditional medicine to alleviate different illnesses since the remote past. We searched the literature and compiled the antispasmodic activity of 248 natural compounds isolated from terrestrial plants. In this review, we summarize all the natural products reported with antispasmodic activity until the end of 2017. We also provide chemical information about their extraction, as well as the models used to test their activities. The results showed that members of the Lamiaceae and Asteraceae families had the highest numbers of isolated compounds with antispasmodic activity. Moreover, monoterpenoids, flavonoids, triterpenes, and alkaloids were the chemical groups with the highest numbers of antispasmodic compounds. Lastly, a structural comparison of natural versus synthetic compounds is discussed.

Introduction

Antispasmodic compounds are currently used to reduce anxiety, emotional and musculoskeletal tension, and irritability. Although most of the available antispasmodic compounds are synthetic or semisynthetic, traditional uses of this group of compounds are still popular. We collected information about natural compounds with antispasmodic activity isolated from terrestrial plants. We searched the databases of Google Scholar, PubMed, and SciFinder and compiled information about 248 compounds published until December 2017. This review focuses on the antispasmodic activity of isolated compounds; activities of extracts without further purification are not discussed.

The Neurons

Nerve cells, or neurons, are responsible for receiving, conducting, and transmitting signals. A neuron consists of a nucleated body, a long thin extension called an axon, and several dendrites, or prolongations, extended from the cell body. Axons conduct signals from the nucleated body towards distant targets, while dendrites provide an enlarged surface area to receive signals from the axons of other neurons. Signal transmission through axons is driven by a change in the electrical potential across the plasma membrane of neurons. This plasma membrane contains voltage-gated cation channels, which are responsible for the generation of action potentials. An action potential is triggered by a depolarization of the plasma membrane, or a shift to a less negative value. In nerve and skeletal muscle cells, a stimulus can cause sufficient depolarization to open voltage-gated Na+ channels, allowing the entrance of Na+ into the cell. This influx of Na+ depolarizes the membrane further, causing the opening of more Na+ channels. To avoid a permanent influx, Na+ channels are able to reclose rapidly even when the membrane is still depolarized. This is complemented by the presence of voltage-gated K+ channels, which are responsible for the K+ efflux that restores the membrane potential even before the total inactivation of the Na+ channels. In some cases, the action potential in some muscles depends on voltage-gated Ca2+ channels.
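The excitability mechanism just described, with voltage-gated Na+ and K+ conductances converting a depolarizing stimulus into a spike, is captured quantitatively by the classic Hodgkin-Huxley equations. The Python sketch below is a textbook illustration with standard squid-axon parameters; it is not taken from this review, and forward-Euler integration is used only for brevity.

import numpy as np

C = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

# Voltage-dependent opening/closing rates of the gating variables
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                   # time step and duration, ms
V = -65.0                            # resting potential, mV
m = a_m(V) / (a_m(V) + b_m(V))       # gates start at steady state
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

for step in range(int(T / dt)):
    t = step * dt
    I_ext = 10.0 if t > 5.0 else 0.0           # step stimulus, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)      # Na+ influx depolarizes
             + g_K * n**4 * (V - E_K)          # K+ efflux repolarizes
             + g_L * (V - E_L))                # leak
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if step % 500 == 0:
        print(f"t = {t:5.1f} ms, V = {V:7.2f} mV")

Running the loop shows the membrane resting near −65 mV, then firing repetitive action potentials once the stimulus current switches on, exactly the depolarization/repolarization cycle described in the text.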
This neurotransmitter is stored in vesicles and is released by exocytosis. Upon triggering, the neurotransmitter is released into the cleft, provoking an electrical change in the postsynaptic cell by binding to transmitter-gated ion channels. To avoid a continuous electrical change and to ensure both spatial and temporal precision of signal transmission, the neurotransmitter is rapidly removed from the cleft, either by specific enzymes in the synaptic cleft or by reuptake mediated by neurotransmitter carrier proteins [1]. Neurotransmitters can open cation channels causing an influx of Na+, in which case they are called excitatory neurotransmitters (e.g., acetylcholine, glutamate, and serotonin), or they can open Cl− channels, thereby inhibiting signal transmission by maintaining the polarization of the postsynaptic membrane [e.g., γ-aminobutyric acid (GABA) and glycine].

Neuromuscular Signal Transmission. The transmission of electrical signals to muscles involves a sequence of orchestrated steps: (i) the nerve electrical signal reaches the nerve terminal; (ii) it depolarizes the plasma membrane of the terminal; (iii) voltage-gated Ca2+ channels open, causing an increase in the Ca2+ concentration in the neuron cytosol; and (iv) the release of acetylcholine into the synaptic cleft is triggered. Acetylcholine then binds to acetylcholine receptors in the muscle plasma membrane, opening Na+ channels and provoking a membrane depolarization. This depolarization enhances the opening of more Na+ channels, causing a self-propagating depolarization. The generalized depolarization of the muscle plasma membrane activates Ca2+ channels in specialized regions of the membrane, causing Ca2+ release from the sarcoplasmic reticulum (the Ca2+ store) into the cytosol. As a consequence of the increase in Ca2+ concentration, the myofibrils in the muscle cell contract. The increase of Ca2+ in the cytosol is transient, because Ca2+ is rapidly pumped back into the sarcoplasmic reticulum, causing relaxation of the myofibrils. This process is very fast, and the resting Ca2+ concentration is restored within 30 milliseconds [2].

Receptors

The autonomic nervous system controls and monitors the internal environment of the body. The input for its activity is provided by neurons associated with specific sensory receptors located in the blood vessels, muscles, and visceral organs (Table 1). According to the neurotransmitter secreted, these neurons are classified as adrenergic or cholinergic. Adrenergic neurons secrete the neurotransmitter noradrenaline, also termed norepinephrine. Adrenergic receptors include the α and β types, which are further categorized as α1, α2, β1, β2, and β3. Cholinergic neurons, on the other hand, secrete acetylcholine, which induces a postsynaptic event. There are two types of cholinergic receptors, the nicotinic receptor (abundant at the neuromuscular junction) and the muscarinic receptor (abundant on smooth and cardiac muscle and glands). There are several agonists (neurotransmitters, hormones, and others) able to bind to specific receptors and activate the contraction of smooth muscle. Upon binding of the agonist to the receptor, the mechanism of contraction is based on an increase in phospholipase C activity. This enzyme hydrolyzes phosphatidylinositol 4,5-bisphosphate located on the membrane, producing two powerful secondary messengers termed diacylglycerol (DG) and inositol 1,4,5-trisphosphate (IP3). IP3 binds to specific receptors in the sarcoplasmic reticulum, causing the release of Ca2+ within the muscle.
DG, together with Ca2+, activates protein kinase C (PKC), which phosphorylates specific proteins. In most smooth muscles, the contraction process commences when PKC phosphorylates Ca2+ channels or other proteins that regulate the cyclic process. For instance, Ca2+ binds to calmodulin (a multifunctional intermediate calcium-binding messenger protein), triggering the activation of myosin light chain (MLC) kinase, which phosphorylates the light chain of myosin; the phosphorylated myosin, together with actin, initiates the shortening of the smooth muscle cell [147]. However, the elevation of the intracellular Ca2+ concentration is transient, and the contractile response is maintained by a Ca2+-sensitization mechanism modulated by the inhibition of myosin phosphatase activity by Rho kinase. This Ca2+-sensitized mechanism is initiated at the same time that phospholipase C is activated, and it involves the activation of the small protein RhoA bound to guanosine triphosphate (GTP). Upon activation, RhoA increases the activity of Rho kinase, leading to the inhibition of myosin phosphatase. This promotes the contractile state, since the myosin light chain cannot be dephosphorylated [147]. Relaxation of smooth muscle occurs as a result of either removing the contractile stimulus or the direct action of a substance that stimulates inhibition of the contractile mechanism. In either circumstance, the relaxation process requires a decrease in the intracellular Ca2+ concentration and an increase in the activity of MLC phosphatase. The sarcoplasmic reticulum and the plasma membrane remove Ca2+ from the cytosol. Na+/Ca2+ exchangers located on the plasma membrane also help to reduce the intracellular concentration of Ca2+. During relaxation, the entry of Ca2+ into the cell is further restricted because the voltage-operated channels and Ca2+ receptors in the plasma membrane remain closed [147].

Spasmodic Compounds

The historical antecedents date from the year 1504, when South American natives inhabiting the basins of the upper Amazon and the Orinoco prepared a mixture of alkaloids termed curare. This substance was placed on the tips of arrows in order to hunt (paralyzing prey) and to fight in wars. Curare produces muscle weakness, paralysis, respiratory failure, and death [148]. In 1800, Alexander von Humboldt identified that curare was made from extracts of the species Chondrodendron tomentosum and Strychnos toxifera. In 1935, the alkaloid d-tubocurarine was isolated from curare [149], and one year later it was elucidated that this compound had the ability to inhibit acetylcholine, blocking the transmission of nerve impulses to the muscles [150]. Later, new benzylisoquinoline alkaloids were isolated from curare by Galeffi et al. in 1977 [151,152]. In 1822, the pharmacist Rudolph Brandes obtained an impure alkaloid from Atropa belladonna (Solanaceae), which after purification was named atropine. Interestingly, atropine is not produced as a natural compound by the plant; it is a derivative generated from the alkaloid hyoscyamine during the process of purification [153]. It is important to note that atropine has been found naturally, in small quantities, in other members of the Solanaceae family such as Datura stramonium, Duboisia myoporoides, and Scopolia japonica [154][155][156]. The use of the plant Papaver somniferum (opium poppy) (Papaveraceae) dates back to about 4000 BC.
At present, the plant is used only to extract base material for the manufacture of other alkaloids, such as noscapine and codeine, both discovered by the French pharmacist Pierre-Jean Robiquet in 1831 and 1832, respectively [157]. In 1848, papaverine was another substance extracted from the same plant, by the German chemist Georg Merck [158]; it is rarely used today because of the high doses needed (approximately 6 to 12 mg), but it is still used as a control in experimental models for studying the antispasmodic activity of plant extracts. In the 20th century, extracts and powders derived from A. belladonna were widely used as antispasmodics, but from the 1950s these preparations were displaced by synthetic and semisynthetic anticholinergic compounds with better responses [159], such as methocarbamol and guaifenesin. In parallel, a series of compounds such as dantrolene, glutethimide, methaqualone, chlormezanone, methyprylon, and ethchlorvynol were introduced to replace meprobamate, which had to be withdrawn from the market in 1960 owing to problems resulting from its use, such as withdrawal symptoms, addiction, and overdoses. In 1962, the Swiss chemist Heinrich Keberle synthesized baclofen, which can be obtained by reacting glutarimide with an alkaline solution [160]. Glutarimide can also be found in plants such as Croton cuneatus and C. membranaceus (Euphorbiaceae) [161,162]. The arrival of quaternary nitrogen compounds reinforced peripheral anticholinergic activity while offering, unlike atropine, the advantage of poor absorption in the gastrointestinal tract, producing a more powerful and longer-lasting sedative effect [1]. For example, ipratropium bromide, developed by the German company Boehringer Ingelheim in 1976 and used to treat asthma, was obtained by reacting atropine with isopropyl bromide [163]. Another quaternary compound is N-butylhyoscine bromide, which can be obtained by organic synthesis from scopolamine, as can cimetropium bromide; both parent alkaloids are found in A. belladonna [164]. Although preparations of plant mixtures are no longer used for therapeutic purposes, these compounds formed part of, and served as the basis for, modern pharmacology through their applicability as antispasmodics and anesthetics. Spasms are involuntary contractions of the muscles, which are normally accompanied by pain and interfere with free and effective voluntary muscular activity. Muscle spasm can originate from multiple medical conditions and is often associated with spinal injury, multiple sclerosis, and stroke. Spasticity and rigidity are caused by a disinhibition of spinal motor mechanisms. There are several scenarios in which a muscle can produce a spasm: (i) unstable depolarization of motor axons; (ii) muscular contractions that persist even though the innervation of the muscle is normal and despite attempts at relaxation (myotonia); (iii) slow decontraction after one or a series of contractions, as occurs in hypothyroidism; and (iv) muscles lacking the energy to relax.

Distribution of Spasmodic Compounds in Nature.
Spasmodic compounds are widely distributed in nature (Table 2). Frequently, these compounds are found in animals that use them to paralyze prey or for defense. Some examples include the venom of the black widow and tarantula spiders [11,165] and the venom of snakes [166].
Plants also produce spasmodic metabolites, such as strychnine, an alkaloid obtained from the tree Strychnos nux-vomica (Loganiaceae). Furthermore, microorganisms synthesize spasmodic compounds such as the neurotoxins tetanospasmin and botulinum toxin, from the Gram-positive bacteria Clostridium tetani and C. botulinum, respectively. These toxins produce a toxic disorder characterized by persistent spasms of the skeletal muscles, acting on spinal neurons in a manner similar to strychnine.

Mechanisms of Antispasmodic Activity of Natural Products.
Antispasmodic compounds exert their activity in different ways, for example through inhibition of the response to the neurotransmitters 5-hydroxytryptamine (5-HT, serotonin) and acetylcholine. Other authors attribute the antispasmodic effect to (i) capsaicin-sensitive neurons, (ii) the participation of vanilloid receptors [167], (iii) the activation of KATP channels, (iv) the blockade of Na+ channels and muscarinic receptors, (v) the reduction of extracellular Ca2+, or (vi) the blockade of Ca2+ channels [22,168,169]. This diversity merely reflects the ambiguity of the studies reporting the mechanisms of action of antispasmodic compounds [36]. For example, the hydroalcoholic extract of Marrubium vulgare showed an antispasmodic effect, being able to inhibit the responses to acetylcholine, bradykinin, prostaglandin E2, histamine, and oxytocin [170], whereas a dual antidiarrheal and laxative activity was reported for Fumaria parviflora [171].

Gastrointestinal Model.
The small intestine is characterized by its large surface area, a result of its circular folds, villi, and microvilli. It is the longest part of the GI system (approximately 5 meters); the first roughly 5% of its length corresponds to the duodenum (characterized by the absence of a mesentery), followed by the jejunum (around 40% of the intestinal length) and, finally, the ileum. It is the organ of nutrient absorption and digestion; these functions are carried out mainly in the duodenum and jejunum. The main types of bowel movement are segmentation and peristalsis. Segmentation is most frequent in the small intestine and consists of contractions of the circular muscle layer in closely spaced areas, occurring at rates of 11-12 and 8-9 contractions per minute in the duodenum and ileum, respectively. When segmentation is rhythmic, the contractions alternate with relaxation. This type of movement mixes the chyme (the acidic fluid that passes from the stomach to the small intestine) with the digestive secretions, allowing optimal contact with the intestinal mucosa. In the case of peristalsis, contractions of successive sections of the circular smooth muscle move the intestinal contents in the anterograde direction. Short peristaltic movements also take place in the small intestine, but less frequently than segmentation movements. Peristaltic waves rarely travel more than 10 cm of intestine and, owing to the low frequency of propulsion of the chyme, it is in this zone that digestion and absorption preferentially take place. Peristalsis is regulated mainly by the nervous action of the myenteric plexus (the major nerve supply to the gastrointestinal tract, controlling GI tract motility) in the intestinal wall. The diversity of experimental models used for testing antispasmodic compounds is large.
These models mainly use isolated organs or live animals. Once the organ is extracted from the animal, intestinal motility is assessed after the administration of a substance. Extracted organs can be kept functional for hours when placed in a physiological solution such as Ringer, Jalon, Tyrode, or Krebs [172]. The organs most used in these studies are the guinea pig ileum, duodenum, heart, trachea, and jejunum; the same organs can also be extracted from rabbit, mouse, rat, and hamster (Table 3). The ileum preparation is preferred for evaluating spasmolytic activity, whereas the jejunum, because it contracts spontaneously, allows spasmolytic activity to be evaluated directly, without the use of an agonist [173]. Some advantages of performing ex vivo experiments are as follows: (i) different substances can be evaluated in fresh tissue without absorption factors, metabolic excretion, or interference due to nerve reflexes; (ii) the effect produced by a precisely determined drug dose can be quantified; and (iii) it is easier to obtain dose-effect curves, for example in smooth muscle, where the contraction obtained under the influence of a spasmogen is measured, or in tissue homogenates, where enzyme activities are determined [172,174].

Guinea Pig Ileum and Rat Stomach.
The ileum is removed and cut into strips approximately 2 cm long, which are then placed in a bath filled with an isotonic solution as mentioned earlier. Electrophysiological studies are performed by graphically recording the contractions with the aid of a transducer, which is calibrated 30 min before the treatment begins. A range of 0.01 to 0.03 M is generally used to determine dose-response curves of the antispasmodic substance [175]. In rats, the stomach is removed and the corpus and fundus are cut into strips of approximately 5 mm x 15 mm and placed in a prewarmed solution as mentioned before.

Compounds Used to Elicit Spasmodic Activity.
The main compounds used are acetylcholine, atropine, BaCl2, carbachol, histamine, KCl, and serotonin. Acetylcholine is a postganglionic neurotransmitter of the parasympathetic neurons that innervate the intestine. The response to acetylcholine is regulated by activation of two types of muscarinic receptors, M2 and M3 [176]; activation of these receptors causes contractions by increasing the intracellular concentration of Ca2+ via IP3 [176]. Atropine is a competitive, reversible antagonist of the muscarinic acetylcholine receptors M1, M2, M3, M4, and M5. KCl increases voltage-operated Ca2+ channel activity, thereby increasing intracellular free Ca2+ in smooth muscle [180]. Serotonin is also an important neurotransmitter, stored mainly in the digestive tract, affecting secretory and motor activities. At high concentrations, it acts as a vasoconstrictor by contracting vascular smooth muscle directly or by potentiating the effects of other vasoconstrictors [181,182].

Antispasmodic Activity of Natural Compounds
Compounds isolated from terrestrial plants have shown the ability to act as antispasmodics. The chemical group with the largest number of antispasmodic members is the monoterpenoids (41 compounds), followed by the flavonoids (35 compounds), the alkaloids (33 compounds), and the triterpenes (31 compounds) (Figure 1). Although we summarize 248 compounds in Table 3, in most cases the mechanism behind their activity has not been elucidated.

Mutagenicity
Studies related to the mutagenicity of antispasmodics are very scarce.
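As a concrete illustration of point (iii), the following minimal MATLAB sketch fits a sigmoidal Emax/Hill model to a concentration-response curve of the kind obtained from an isolated ileum preparation (the data points and starting values are invented placeholders, not values from the source; fitting log10(EC50) keeps the optimizer well scaled):

conc     = [1e-8 3e-8 1e-7 3e-7 1e-6 3e-6 1e-5];   % agonist concentration (M), placeholder data
response = [2 8 21 45 70 88 95];                   % % of maximal contraction, placeholder data

% Emax model: E(C) = Emax * C^n / (EC50^n + C^n), with p = [Emax log10(EC50) n]
model = @(p, C) p(1) .* C.^p(3) ./ ((10.^p(2)).^p(3) + C.^p(3));
sse   = @(p) sum((response - model(p, conc)).^2);   % sum of squared errors
pfit  = fminsearch(sse, [100 -6 1]);                % base-MATLAB optimizer, no toolbox needed

semilogx(conc, response, 'o'); hold on;
Cfine = logspace(-8.5, -4.5, 100);
semilogx(Cfine, model(pfit, Cfine), '-');
xlabel('agonist concentration (M)'); ylabel('response (% of max)');
fprintf('Emax = %.1f, EC50 = %.2e M, Hill n = %.2f\n', pfit(1), 10^pfit(2), pfit(3));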
This topic has been underestimated when testing the bioactivities of ethnomedicinal plants. Probably the most useful method to determine the mutagenicity of natural products or plant extracts is the Ames test [183]. This test is based on the rate of mutations detected in genetically modified strains of Salmonella typhimurium. Moreover, the test has also been adapted to detect the mutagenicity of compounds metabolized in the liver; in this situation, a mixture of liver enzymes (the S9 microsomal fraction) is used to mimic the metabolites that would be produced in the liver [184]. Few studies have determined the mutagenicity of natural products with antispasmodic activity. For example, the flavonoids quercetin and luteolin were tested using the Ames method, and point mutations appeared in four of the tested bacterial strains [185]. In another study, extracts of the plants Brickellia veronicaefolia, Gnaphalium sp., Poliomintha longiflora, and Valeriana procera were examined; compounds isolated from these plants are listed among the antispasmodic compounds in Table 3. The mutagenicity tests indicated that Gnaphalium sp., Poliomintha longiflora (used in Mexican cuisine and as a traditional medicine), and Valeriana procera induced mutagenesis in the tested bacterial strain [186].

Chemical Similarities between Natural and Synthetic Antispasmodic Compounds
To determine whether there is an analogy between synthetic (Table 4) and natural antispasmodic compounds, the structures of both groups were compared. No similarities were found except for the alkaloids, amines, and amino acids. One of the main differences is that commercial alkaloids are methylated at the nitrogen to make it positively charged, increasing solubility through salt formation. In contrast, natural products carry no positively charged nitrogen, so the molecule is neutral and pH dependent: the compound may or may not be protonated, resulting in a change in its solubility and, consequently, in the tissues it targets. The comparison can perhaps be better focused on the distribution of charges rather than on functional groups or families of compounds, emphasizing the electron distribution. For example, physicochemical descriptors such as the heat of formation, the surface electrostatic potential, the molecular weight, the surface tension, the refractive index, and the lipophilicity have been used to characterize the structure-activity relationship of alkaloids extracted from the Amaryllidaceae family [187]. These alkaloids were selected because of their ability to inhibit the acetylcholinesterase enzyme. Of special interest is the natural compound salvinorin A, isolated from the Mexican hallucinogenic plant Salvia divinorum (Lamiaceae) and used in traditional medicine as an antidiarrheal. This compound has been reported to inhibit intestinal motility through the activation of other receptors, such as κ-opioid receptors (KORs). Upon inflammation of the gut, the cannabinoid CB1 and KOR receptors are upregulated.
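One common way to make such structural comparisons quantitative, sketched here as an illustration rather than as the method used in the studies cited above, is the Tanimoto coefficient computed on binary structural fingerprints. The two fingerprints below are random placeholders; in practice they would be generated from the actual structures by a cheminformatics package:

rng(1);                                % fixed seed for a reproducible example
nbits = 2048;
fpNatural   = rand(1, nbits) > 0.9;    % placeholder fingerprint of a natural compound
fpSynthetic = rand(1, nbits) > 0.9;    % placeholder fingerprint of a synthetic compound

% Tanimoto = |A AND B| / |A OR B| over the set bits of the two fingerprints
tanimoto = sum(fpNatural & fpSynthetic) / sum(fpNatural | fpSynthetic);
fprintf('Tanimoto similarity: %.3f\n', tanimoto);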
It appears that salvinorin A interacts in the cross-talk between these receptors, reducing inflammation, as demonstrated in murine and guinea pig models [188,189]. The analysis of similarities between synthetic and natural antispasmodic structures is depicted in Table 5.

Conclusions
A large number of natural products with antispasmodic activities have been reported. Although the use of plants in traditional medicine is still relevant, new studies are needed to elucidate the mechanisms of action of antispasmodics. Moreover, more information about cytotoxicity and mutagenesis should be gathered to ensure that these compounds are safe for consumption. The findings of this study corroborate the need for safety studies on plants extensively used for primary health care in countries such as Mexico. Such studies must be carried out before continuing the widespread use of some species, which may otherwise provoke long-term and irreversible damage.

Conflicts of Interest
The authors declare no conflicts of interest.
Characteristics of the tree shrew gut virome
The tree shrew (Tupaia belangeri) has been proposed as an alternative laboratory animal to primates in biomedical research in recent years. However, the characteristics of the tree shrew gut virome remain unclear. In this study, a metagenomic analysis method was used to identify the features of the gut virome from fecal samples of this animal. Results showed that 5.80% of sequence reads in the libraries exhibited significant similarity to sequences deposited in the viral reference databases (NCBI non-redundant nucleotide databases, viral protein databases and the ACLAME database), and these reads were further classified into three major orders: Caudovirales (58.0%), Picornavirales (16.0%), and Herpesvirales (6.0%). Siphoviridae (46.0%), Myoviridae (45.0%), and Podoviridae (8.0%) comprised most Caudovirales. Picornaviridae (99.9%) and Herpesviridae (99.0%) were the primary families of Picornavirales and Herpesvirales, respectively. According to host types and nucleic acid classifications, all of the related viruses in this study were divided into bacterial phages (61.83%), animal-specific viruses (34.50%), plant-specific viruses (0.09%), insect-specific viruses (0.08%) and other viruses (3.50%). dsDNA viruses accounted for 51.13% of the total, followed by ssRNA (33.51%) and ssDNA viruses (15.36%). This study provides an initial understanding of the community structure of the gut virome of the tree shrew and a baseline for future tree shrew virus investigation.

Introduction
The tree shrew (Tupaia belangeri) belongs to the family Tupaiidae, order Scandentia, and has a wide distribution in South Asia, Southeast Asia and Southwest China [1]. The tree shrew is a small mammal similar in appearance to squirrels that feeds on fruits, insects and small vertebrates [2]. Tupaia belangeri is the only representative in China and consists of six subspecies: T. belangeri gaoligongensis, T. belangeri modesta, T. belangeri yaoshanensis, T. belangeri tonquinia, T. belangeri yunalis and T. belangeri chinensis [3]. Previous studies [4,5] showed that the tree shrew is more closely related to humans than rodents are in terms of physiological function, biochemical metabolism and genomic signatures. Due to its unique characteristics, such as small body size, low cost of maintenance, short life span and short reproductive cycle, the tree shrew has been increasingly used in laboratory analyses in recent years. Several studies have used this animal for the construction of human disease models, such as models for hepatitis virus, influenza virus, cytomegalovirus, herpes simplex virus, and dengue virus [6-8]. Although some viruses have been isolated or detected from the tree shrew in previous reports, the gut viral diversity of this animal is still unknown. In addition, traditional methods, such as cell culture or PCR, fail to fully estimate the distribution of microorganisms and taxonomic diversity. However, the recent availability of next-generation sequencing methods has allowed thorough investigation of the complex and diverse gut virome, and a large number of gut metagenomics studies have been conducted [9-11]. Previous gut virome analyses have mainly addressed humans or other animals [12,13], not including the tree shrew. Because of its importance for zoological research, it is necessary to determine the gut virome of this animal.
Therefore, in this study, a viral metagenomic method based on next-generation sequencing was used to reveal the characteristics of the gut virome of tree shrews collected from the suburbs of Kunming, China.

Sample source and preparation
Fifty fecal samples from tree shrews were collected at the Center of Tree Shrew Germplasm Resources, Institute of Medical Biology, Chinese Academy of Medical Science and Peking Union Medical College in Kunming, China (103°40′E, 26°22′N). All the animals were housed for use in further research without any sacrifice. Fresh feces were collected and immediately stored at −70°C. The animals were healthy, without visible features of tumors or disease; 30 were male, 20 were female, and the average weight was 130.25±18.76 g. Each sample was resuspended in sterile phosphate-buffered saline (PBS), centrifuged at 8,000 rpm (15 min, 4°C), and then filtered through 0.45 μm and 0.22 μm syringe filters (Millipore, Bedford, MA). All samples were pooled and ultracentrifuged at 40,000 rpm for 4 h at 4°C. Subsequently, the supernatant was discarded, and the pellet was resuspended in 500 μl PBS and then treated with a cocktail of DNase (TaKaRa Bio Inc., Japan), benzonase (TaKaRa Bio Inc., Japan) and RNase (TaKaRa Bio Inc., Japan) [14].

Nucleic acid extraction and sequence-independent amplification
Viral nucleic acids were extracted from the nuclease-treated resuspended supernatant using a QIAamp Viral RNA Mini Kit (QIAGEN, Germany) and an OMEGA E.Z.N.A Viral DNA Kit (OMEGA, USA) following the manufacturers' instructions. A NanoDrop spectrophotometer (Thermo Scientific, USA) was used for the quantification of viral nucleic acids. The extracted viral RNA was reverse transcribed with the PrimeScript II 1st Strand cDNA Synthesis Kit (TaKaRa Bio Inc., Japan) using the primer R-6N (GCCGGAGCTCTGCAGATATCNNNNNN). Then, second-strand synthesis was run for 1 h at 37°C with Klenow fragment (3'-5' exo-, NEB, Ipswich, MA) to produce double-stranded DNA [14]. Sequence-independent amplification was performed using the primer R (GCCGGAGCTCTGCAGATATC); the amplification procedure was 94°C for 10 min, followed by 35 cycles of 94°C for 40 s, 55°C for 40 s and 72°C for 90 s, and finally 72°C for 10 min. The primer sequence was cut off with EcoRV (TaKaRa Bio Inc., Japan), and the product was purified using a QIAquick PCR Purification Kit (QIAGEN, Germany). The purified product was electrophoresed on a 1% agarose gel.

Next-generation sequencing and bioinformatics analysis
Sequencing libraries were generated using the NEBNext Ultra DNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's recommendations, and index codes were added to each sample. Briefly, the DNA sample was fragmented by sonication to a size of 300 bp; the fragmented DNA was then end-polished, A-tailed and ligated with the full-length adaptor for Illumina sequencing, followed by further PCR amplification. The libraries were analyzed on an Agilent 2100 Bioanalyzer and quantified using real-time PCR. Sequencing was performed on an Illumina HiSeq2500 platform, and paired-end reads were generated. Sequence data were deposited in the NCBI database under SRA accession SRP154022. The raw reads were quality controlled by removing low-quality sequences, adapters, primers and host sequences. Briefly, low-quality reads were trimmed using a Phred quality score of 10 as the threshold. Adaptor and primer sequences were trimmed using the default parameters of QIIME [15].
Host reads and bacterial reads were subtracted by mapping the reads to the tree shrew reference genome [16] (GenBank accession ALAR00000000) and bacterial RefSeq genomes (release 59) using bowtie2 [17]. The filtered, clean data were aligned to the NCBI non-redundant nucleotide databases (version: 2014-10-19), reference viral protein databases (RefSeq version: 2015-09-08) and the ACLAME database (Viruses) using tBLASTx, BLASTn and BLASTx to identify the reads [18-20]. BLAST hits were reported as significant using a threshold E-value of ≤10−3, provided the similarity was higher than 75% [21,22]. When several related taxonomies yielded equally high-scoring top hits, the reads were assigned to the most recent common ancestor. Reads that matched neither the genomes used for cleaning nor the viral genomes included in the database were reported as unknowns (others). The taxonomies of the aligned reads, based on the hit sequence match from all lanes, were parsed by Krona [23].

Phylogenetic analysis
Based on the sequence results mentioned above, specific primers were designed to detect adenovirus in this study. The PCR primers were designed with the Clone Manager Professional Suite 8 software (Scientific & Educational Software) (Table 1). The adenovirus 3'UTR gene was amplified using primers F1/R1 for the first round and F2/R2 for the second round. The amplified products were sent for bidirectional sequencing, merged with DNAStar software (Lasergene) and compared to the NCBI database using BLASTn. The aligned sequences were trimmed to match the genomic regions of the sequences obtained in this study and used to generate phylogenetic trees in MEGA 6.0 [24] with the neighbor-joining method and 1,000 bootstrap replicates. A perspective phylogenetic analysis of some of the top virus species was performed by extracting the best-match sequences from the total valid reads using CLC Genomics Workbench 9.5.2 (QIAGEN, Denmark). The reference genomes of Cercopithecine herpesvirus 5 (accession: NC_012783), Theilovirus (accession: NC_001366), African bat icavirus A (accession: NC_026470), and Cosavirus JMY-2014 (accession: NC_025961) were used. Contigs were assembled by joining overlapping reads from different pairs. All the reads were aligned to each reference genome with CLC Genomics Workbench 9.5.2 (QIAGEN, Denmark) using the default parameters. We selected parts of the sequence regions of each contig to build phylogenetic trees with MEGA 6.0 as mentioned above.

Ethics approval statement
The sample collection and detection protocols were carried out in accordance with relevant guidelines and regulations approved by the

Sequencing data analysis
A total of 20,260,886 raw reads were generated, and 20,195,972 reads remained valid after trimming and removal of adapter and host genomic sequences. The clean data output was 3,033.48 Mbp; 3.77 Mbp was adapter data, and the no-host data comprised 3,029.40 Mbp. The Q30 of the sequencing was 83.46%, the GC content was 42.94%, and the effective rate was 99.814%. However, only 1,177,922 reads (5.80%) were associated with viruses through comparison of the reads against the NCBI non-redundant nucleotide databases, reference viral protein databases and the ACLAME database.
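As an illustration of the filtering step described above (the file name and column names are assumptions for the sketch, not the pipeline actually used in this study), applying the E-value and similarity thresholds to a tabular BLAST output in MATLAB could look like this:

T = readtable('blast_hits.txt');               % assumed columns: evalue, identity, taxid
keep = T.evalue <= 1e-3 & T.identity > 75;     % thresholds as stated in the text
hits = T(keep, :);
fprintf('%d of %d hits retained for taxonomic assignment\n', height(hits), height(T));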
These reads were further classified into three major orders: Caudovirales (684,931 reads, composition ratio 58.0%), Picornavirales (191,092 reads, 16.0%), and Herpesvirales (71,785 reads, 6.0%) (Fig 1A). At the family level, Siphoviridae (46.0%), Myoviridae (45.0%), and Podoviridae (8.0%) comprised most of the Caudovirales. Picornaviridae (99.9%) and Herpesviridae (99.0%) were the primary families of Picornavirales and Herpesvirales, respectively, as shown in Fig 1B. The unclassified order was divided into several families, namely Microviridae (47%), unclassified (36%), Phycodnaviridae (3%), Mimiviridae (3%), Anelloviridae (3%), Inoviridae (2%), Polyomaviridae (1%) and others (5%) (Fig 1B). Unknown/other reads indicate the proportion of highly divergent and/or novel sequences with no homology to NCBI entries. Based on the relative abundance of reads related to each classification, the top 10 families, genera and species of all reads are shown in Fig 2A-2C. The most abundant family was Siphoviridae (composition ratio 27.07%), followed by Myoviridae (26.03%), Picornaviridae (16.22%), and Microviridae (9.11%) (Fig 2A). The top 5 genera were unclassified within each family, as shown in Fig 2B. Cytomegalovirus (composition ratio 5.99%), Cardiovirus (4.92%), Cosavirus (4.73%), and Tunalikevirus (3.84%) had high relative abundances at the genus level. At the species level, high relative abundances were found for Cercopithecine herpesvirus (composition ratio 6.0%) and African bat icavirus A (5.52%); all others belonged to bacterial phages (Fig 2C). All the taxonomic distributions at the different virus levels are shown by Krona in Fig 2D. According to host types and nucleic acid classifications, all of the related viruses in this study were divided into bacterial phages (61.83%), animal-specific viruses (34.50%), plant-specific viruses (0.09%), insect-specific viruses (0.08%) and other viruses (3.50%) (Table 2). Double-stranded DNA (dsDNA) viruses accounted for 51.13% of the total, followed by single-stranded RNA (ssRNA) (33.51%) and single-stranded DNA (ssDNA) viruses (15.36%), as shown in Table 2. The data revealed a wide diversity of viruses with a prevalence of bacterial phages and described a range of animal viruses, such as ssRNA viruses belonging to the order Picornavirales, ssDNA viruses (Circovirus), and dsDNA viruses (Adenovirus and Herpesvirus). In addition, some of the results were related to insect viruses of the Densoviridae and Iridoviridae and plant viruses of the Caulimoviridae and Potyviridae.

Phylogenetic analysis
To confirm the discovery of adenovirus, PCR assays were used to amplify the conserved region of the adenovirus 3'UTR gene. The results showed that the adenovirus 3'UTR in this study had 93% similarity to the newly described tree shrew adenovirus A (GenBank: AF258784.1) (Fig 3). For the perspective phylogenetic analysis, the coverage of the generated contigs of the top virus species was above 80% for each reference genome; details of the alignments are shown in S1 Fig.

Discussion
The viral metagenomics method has been employed successfully to identify both commensal viruses and viral pathogens in recent years and has the potential to detect most viruses through sequence similarity searches [11,25]. Due to considerable genetic homology with both humans and primates, the tree shrew has been considered a model for studies on viral infection and preclinical drug development [2].
This study indicates that characterizing the fecal virome further increases the value of the tree shrew as a model animal. A large number of studies have examined animal viromes, in both wild and domestic animals [13,26,27]. Ng et al. [28] conducted a wide survey of viral diversity within mosquitoes using metagenomics. Viral reads represented only 1% to 2% of total reads, and animal viruses represented not more than 10% of viral reads. Consequently, the animal viruses detected in mosquitoes may have reflected the viromes of a large variety of vertebrate hosts (e.g., humans, primates, or birds). In our study, only 5.8% of reads exhibited significant similarity to sequences deposited in the viral reference databases; 94% of sequences could not be classified, which is consistent with other metagenomic studies of fecal viromes. Several factors may lead to this phenomenon, such as the limited representation of viruses in reference sequence databases, the limitations of alignment-based classification, and the divergence or length of viral sequences [29]. We consider that this may be due to "viral dark matter": only a small fraction of the total nucleic acids is of known viral origin, and unknown sequences dominate viromes, with 63%-93% of reads often lacking functional or taxonomic annotations [30]. Phan et al. [14] performed a metagenomic analysis of fecal specimens from mice, voles and rats. Their results showed that the presence of insect (e.g., Densovirinae, Iridoviridae) and plant viral sequences (e.g., Nanoviridae, Geminiviridae) reflected the diet of the rodents. They also noted the presence of plant viruses, such as Virgaviridae, in the virome of the rodents' feces. Similar results were found in our study, where insect- and plant-specific viruses were identified, likewise reflecting the eating habits and life cycle of the tree shrew. Furthermore, the phylogenetic analysis of tree shrew contigs and genes showed high similarity with reference viruses, such as Cercopithecine herpesvirus 5, Theilovirus, African bat icavirus A, Cosavirus JMY-2014 and adenovirus, possibly reflecting real infections of this animal. Hofer et al. [31] showed that 87% of the contigs of the human gut virome had no overlap with previously identified viruses, and 13% belonged to phage families including Microviridae, Podoviridae, Myoviridae and Siphoviridae. Carding et al. [32] reviewed evidence that the human intestinal virome is personalized and stable, and dominated by phages, with Siphoviridae, Myoviridae and Podoviridae the three most widely distributed families. For non-human primates, D'arc et al. [33] evaluated the gorilla gut virome in association with natural simian immunodeficiency virus infection. Their results showed that three bacteriophage families (Siphoviridae, Myoviridae and Podoviridae) represented 67.5% and 68% of the total annotated reads in SIVgor-infected and uninfected individuals, respectively. Specifically, the Siphoviridae family was more frequent in SIVgor-infected individuals, while the two other bacteriophage families were more frequent in uninfected individuals. Liu et al. [34] performed a metagenomic analysis of the wild rhesus monkey gut virome in China; apart from bacteriophages, five vertebrate virus families, six insect virus families, eleven plant virus families and other viruses were found in their study. All these studies showed results similar to ours, whether for humans or non-human primates.
Thus, as an alternative to primates as a laboratory animal model, the tree shrew has a gut virome that is indeed comparable to those of current animal models. Bacteriophages are of biomedical importance because they can transmit genes to their bacterial hosts, conferring increased pathogenicity, antibiotic resistance, and new metabolic capacity. Previous studies have shown that siphophage fragments are the most common fragments observed in published metagenomic libraries; in particular, siphophages constituted 44% of phage sequences in a sediment library [25]. Viruses are present in many environments, which suggests that siphophages might be among the most abundant genomes on Earth. In our study, the most abundant family was Siphoviridae, reflecting this distributional character of the tree shrew gut virome. In addition, we found some specific viruses in the gut virome of the tree shrew. Approximately 15.36% of the virome was detected as ssDNA viruses; for example, Mischivirus, a pathogen that may cause human disease, has also been found in bats [35]. For dsDNA viruses, the dominant genus was Orthopoxvirus, in the family Poxviridae, which uses vertebrates (including mammals and humans) and arthropods as natural hosts [36,37]. Diseases associated with this genus include smallpox, cowpox, horsepox, and monkeypox. There are currently ten species in this genus, including the type species vaccinia virus, which was the dominant species (55%) of this category of animal viruses in the tree shrew gut virome. The second dominant genus, Alphapolyomavirus, in the family Polyomaviridae, may infect humans and other mammals [38]. Polyomaviridae is a family of viruses whose natural hosts are primarily mammals and birds; some members of the family, such as Merkel cell polyomavirus and raccoon polyomavirus, are oncoviruses known to cause tumors or cancers in their natural hosts [38]. The third dominant genus, Mastadenovirus, in the family Adenoviridae, has human, mammal, and other vertebrate natural hosts. Diseases associated with this genus include respiratory, gastrointestinal and eye infections, among others [39]. Furthermore, Singapore grouper iridovirus, in the family Iridoviridae [40] and categorized as an animal virus, was also found in the tree shrew gut virome. All these results indicate that these viruses should be the focus of future studies involving disease model construction. One major limitation of this study was the pooling of samples from 50 tree shrews to generate a single metagenome. The primary purpose of this research was to characterize the stool virome of the tree shrew, but it would have been more informative to show the variability of the detected viruses and their proportional abundances across individuals, and to test whether virome composition correlates with host parameters (e.g., gender, body weight, health status). This will be the focus of our future investigations of the tree shrew gut virome.
From ERPs to MVPA Using the Amsterdam Decoding and Modeling Toolbox (ADAM)
In recent years, time-resolved multivariate pattern analysis (MVPA) has gained much popularity in the analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data. However, MVPA may appear daunting to those who have been applying traditional analyses using event-related potentials (ERPs) or event-related fields (ERFs). To ease this transition, we recently developed the Amsterdam Decoding and Modeling (ADAM) toolbox in MATLAB. ADAM is an entry-level toolbox that allows a direct comparison of ERP/ERF results to MVPA results using any dataset in standard EEGLAB or Fieldtrip format. The toolbox performs and visualizes multiple-comparison corrected group decoding and forward encoding results in a variety of ways, such as classifier performance across time, temporal generalization (time-by-time) matrices of classifier performance, channel tuning functions (CTFs) and topographical maps of (forward-transformed) classifier weights. All analyses can be performed directly on raw data or can be preceded by a time-frequency decomposition of the data, in which case the analyses are performed separately on different frequency bands. The figures ADAM produces are publication-ready. In the current manuscript, we provide a cookbook in which we apply a decoding analysis to a publicly available MEG/EEG dataset involving the perception of famous, non-famous and scrambled faces. The manuscript covers the steps involved in single-subject analysis and shows how to perform and visualize a subsequent group-level statistical analysis. The processing pipeline covers computation and visualization of group ERPs, ERP difference waves, as well as MVPA decoding results. It ends with a comparison of the differences and similarities between EEG and MEG decoding results. The manuscript has a level of description that allows application of these analyses to any dataset in EEGLAB or Fieldtrip format.
INTRODUCTION
Since Haxby and colleagues popularized MVPA for functional magnetic resonance imaging (fMRI) (Haxby et al., 2001), multivariate approaches have gained widespread popularity. Initially, MVPA was often used as an abbreviation for multivoxel pattern analysis, but in recent years it has become more common to let the acronym denote the generally applicable term multivariate pattern analysis. MVPA can refer to a number of related multivariate analytical techniques but is typically used when referring to the practice of characterizing (decoding) the difference between experimental conditions based on the observed patterns of brain responses in those conditions. Curiously, although the multivariate nature of EEG has long been recognized (e.g., Peters et al., 1998; Mitra and Pesaran, 1999), widespread adoption of MVPA to decode experimental conditions using brain activity has been much slower in EEG and MEG research than in fMRI. In recent years, however, MVPA decoding approaches have started to gain popularity in EEG and MEG research too. Multivariate analysis in EEG and MEG offers a number of analytical advantages over univariate time-series analysis, such as the ability to look at temporal generalization to characterize neural dynamics over time (King and Dehaene, 2014), the use of representational similarity analysis to map different physiological measures or anatomical substrates onto each other (Kriegeskorte et al., 2008; Cichy et al., 2014), as well as the ability to establish a common performance measure to map behavioral onto neural data (Fahrenfort et al., 2017b). Moreover, MVPA allows one to quantify experimental effects without a-priori electrode or channel selection, potentially identifying differences between conditions that are harder to detect using conventional analyses (Fahrenfort et al., 2017a). Indeed, many researchers now prefer to use multivariate analyses over traditional ERP/ERF analyses based on signals averaged over epochs (Mostert et al., 2015; Kaiser et al., 2016; Wardle et al., 2016; Contini et al., 2017; Marti and Dehaene, 2017; Turner et al., 2017). Consequently, some who have been employing traditional univariate ERP analyses may be considering switching to MVPA or extending their analysis pipelines with MVPA. However, although a number of decoding toolboxes exist, this step can appear daunting to those who have been using software packages with graphical user interfaces (GUIs), like EEGLAB or BrainVision Analyzer. For this reason, we developed the ADAM toolbox (from here on simply referred to as ADAM) for the MATLAB platform. ADAM takes EEGLAB or Fieldtrip data formats as input, and performs multivariate analysis using a relatively simple specification of the required parameters. Although ADAM has no GUI, the toolbox requires no programming experience, only rudimentary knowledge of MATLAB such as opening and closing of text files and running commands in the Command Window. ADAM performs standard analysis of raw EEG/MEG data (both ERP averages and decoding results), but also provides a number of additional capabilities.
For example, it is able to compute temporal generalization matrices (King and Dehaene, 2014) and it can run a time-frequency analysis prior to decoding. In this case, results are plotted in a time-by-frequency matrix, or as temporal generalization matrices for particular frequency bands. Time-frequency analysis can be based either on total power or on induced power. Furthermore, ADAM can simultaneously run a forward encoding model (FEM) in addition to a backward decoding model (BDM), allowing one to reconstruct patterns of neural activity that were never present during model generation (Brouwer and Heeger, 2009; Fahrenfort et al., 2017a). The current article does not cover all of these options, but rather takes a subset of them as an entry-level introduction for those who have been doing ERP research and want to explore multivariate analysis. It covers decoding of raw EEG/MEG data and describes an analysis pipeline in which ERPs are compared to decoding results. It also shows how to compute and visualize temporal generalization matrices, which allow one to look at the stability of patterns of neural activity over time (King and Dehaene, 2014). Finally, the analysis pipeline compares decoding of EEG to decoding of MEG data. Note that this article is not primarily intended as an explanation of why to perform multivariate decoding analyses (although some advantages of MVPA over ERPs are highlighted), but rather to explain how to perform these analyses. At the end of the article, one should be able to run decoding analyses on any epoched EEG dataset in EEGLAB or Fieldtrip format. Along the way, the article briefly explains basic terminology such as decoding, classes/classification/classifier, temporal generalization, and train/test schemes (such as k-fold) in the context in which they are first introduced. For more detailed explanations we refer to introductory texts such as Blankertz et al. (2011), King and Dehaene (2014), and Grootswagers et al. (2017). The article also assumes working MATLAB knowledge. Programming experience is not required, but the reader should be able to open and close files in MATLAB and know how to execute snippets of code in the MATLAB Command Window, which is easy to learn even for those who have not used MATLAB before. The data that we analyze in this manuscript come from a publicly available MEG/EEG/fMRI dataset. This dataset contains event-related responses to famous, non-famous and scrambled face stimuli, and was acquired and made available by Daniel Wakeman and Richard Henson (Wakeman and Henson, 2015). The dataset contains the type of factorial design that is common to many experiments. The manuscript is organized as follows: the methods section describes where the sample data can be obtained, where to obtain the toolbox and its dependencies, and how to install the toolbox in MATLAB, and provides code that shows how to run the first-level (single subject) analyses. The results section provides the code to run and plot the results from the group analyses. Although somewhat unorthodox, providing these together in the results section improves coherence. This way, the code that generates the plots can be presented together with the plots themselves. The results section contains group analyses of ERPs, group analyses of decoding results, examples to plot forward-transformed classifier weights (equivalent to univariate topomaps, which are interpretable as neural sources) (Haufe et al., 2014), and shows temporal generalization matrices of the EEG and MEG results.
It also provides an example of how to plot temporal generalization for a specific time window and ends with a direct comparison of EEG to MEG. The discussion considers the degree to which MVPA analyses can provide extra information over standard univariate analysis, based on the results that were presented.

Data
The raw data are available at https://openfmri.org/dataset/ds000117. However, due to the size of the original data files in the public repository (which also include fMRI), we have created a slim-sized version of the data in standard Fieldtrip format to facilitate easy reproduction of the analyses as described in this article. These data files can be found on the Open Science Framework by following this link: https://osf.io/p2k97/files. To replicate the analyses described here, store all files under DATA (20 GB) in a local directory. No preprocessing was applied to the original data other than downsampling from 1,100 to 275 Hz, and epoching around the target stimuli with an interval between −0.5 and 1.5 s. The MEG data were obtained from an Elekta MEG system and were processed with MaxFilter 2.2 (Elekta Neuromag) by the original authors (Wakeman and Henson, 2015). To further reduce data overhead, we removed the magnetometers from the original data. Magnetometers have a more diffuse spatial profile with large overlaps between neighboring sensors when compared to planar gradiometers (Gross et al., 2013). Removal of magnetometers after application of a MaxFilter is not uncommon (e.g., see Kloosterman et al., 2015), and a pilot analysis confirmed that this did not substantially affect classification performance. The abovementioned repository includes a MATLAB script under SCRIPTS that converts the original data as supplied by Wakeman and Henson to the files we posted, but since this step is idiosyncratic to whatever system is used to acquire EEG or MEG data, we did not make it part of the analysis pipeline we describe in the remainder of the text.

Task
The task employed during the experiment was described in detail by Wakeman and Henson (2015). For ease of reference, we briefly explain the task here. Every trial started with a prestimulus period between 400 and 600 ms (randomly jittered) containing a white fixation cross on a black background. Next, the target stimulus appeared for a random period between 800 and 1,000 ms. The target stimulus was a cut-out of a photo of a face on a black background, overlaid with a white fixation cross. The face could be either a famous, non-famous, or phase-scrambled face. Each image was presented twice, with the second presentation occurring either immediately after the previous one (Immediate Repeats), or after 5-15 intervening stimuli (Delayed Repeats). Each type of repeat occurred in 50% of the trials. Face identity was not task relevant; subjects only had to indicate whether a given stimulus was more or less symmetrical than the average amount of symmetry across all photos. Participants used their left or right index finger to indicate symmetry, counterbalanced across subjects.

Participants
Data was collected from 19 participants (8 female). Further details can be found in Wakeman and Henson (2015).

Requirements
ADAM works under a relatively recent version of MATLAB (≥R2012b, older versions might or might not work) with the Signal Processing Toolbox and Statistics Toolbox installed.
Further, when running first-level (single subject) analyses, it depends on a recent version of EEGLAB (Delorme and Makeig, 2004) (≥13, older versions might or might not work) and, to perform time-frequency analysis prior to decoding, a recent install of Fieldtrip (Oostenveld et al., 2011) (≥2015, older versions might or might not work). Finally, a reasonably modern desktop or laptop computer with standard specifications is required. More is better (especially RAM), but any computer used for office work should in principle be sufficient. All analyses presented here (three EEG and three MEG comparisons) were executed on a 2013 MacBook Pro with 8 GB of memory, using MATLAB R2014b, EEGLAB v14_1_1b and Fieldtrip v20170704. The first-level analyses took about 10 h to complete. If one wants to replicate these analyses in a shorter timeframe, it is easy to shorten computation time by lowering cfg.nfolds from 5 to 2, which affects the number of folds in the analysis (reducing computation time by 60%, to about 4 h). The concept of folds is explained in section 2.9.6 below. Another way to reduce computation time is by lowering the number of subjects in cfg.filenames, e.g., from 19 to 10 (another 50% reduction). Both these changes can be made in the first-level script in section 2.9, and will have little effect on the qualitative patterns of single-subject and group-level results, although some effects may not reach significance. Group-level analyses take very little time and can be executed on the fly.

ADAM Toolbox
When replicating the analyses in this article, we recommend downloading version 1.0.4 of the ADAM toolbox from Github at https://github.com/fahrenfort/ADAM/archive/1.0.4.zip. This is the version of the toolbox that was used to perform the analyses and generate the figures in this article and is therefore guaranteed to work with the scripts that are provided herein. We also provide version 1.0.4 of the toolbox, along with versions of EEGLAB and Fieldtrip that are guaranteed to work with the toolbox, under TOOLBOXES on the Open Science Framework here: https://osf.io/8vby7/download. For regular use of the ADAM toolbox, we recommend downloading the latest version of the toolbox by going through http://www.fahrenfort.com/ADAM.htm, where users can leave their e-mail before being forwarded to the download site on Github. Keeping track of e-mail addresses allows us to contact users if major bugs come to light. A simulated validation dataset is currently being developed and will in the near future be used to continuously validate core functionality of the toolbox.

Installing
Installing the toolbox and its dependencies is easy. To replicate the analyses in this article, download the file from the repository above and unzip it. This should create a folder called 'TOOLBOXES' containing all three toolboxes (ADAM, EEGLAB and Fieldtrip). This folder can be placed anywhere (e.g., 'C:\TOOLBOXES' on a Windows PC, or '/Users/JJF/TOOLBOXES' on a Mac), but do take note of the location. Next, follow the install instructions in the text file "install_instructions.txt" contained in that directory. Following these instructions will make sure that MATLAB knows how to find the toolboxes. If all goes well, the following should be displayed in the Command Window (along with some other messages):

FIELDTRIP IS ALIVE
EEGLAB IS ALIVE
ADAM IS ALIVE

When these messages are displayed, all code provided in this article should run smoothly.
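For orientation only (the authoritative steps are those in install_instructions.txt, and the paths below are assumptions to be adjusted to your own TOOLBOXES location), the installation essentially amounts to putting the three toolboxes on the MATLAB path and initializing them:

addpath('C:\TOOLBOXES\fieldtrip'); ft_defaults;   % initialize Fieldtrip
addpath('C:\TOOLBOXES\eeglab'); eeglab;           % initialize EEGLAB
addpath(genpath('C:\TOOLBOXES\ADAM'));            % make the ADAM functions available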
ADAM Architecture and Core Functionality
The ADAM processing pipeline is depicted in Figure 1 (from top to bottom). It involves: (1) data import and pre-processing, (2) first-level single-subject analysis, (3) computing group-level statistics, and (4) visualization (plotting) of group statistics. These steps are implemented by a number of main ADAM user functions, all starting with the prefix adam_ (also mentioned in the top left corner of each box in Figure 1):
• adam_MVPA_firstlevel (computes and stores first-level / single-subject results)
• adam_compute_group_ERP (reads single-subject ERPs and computes group statistics, which can be plotted using adam_plot_MVPA)
• adam_compute_group_MVPA (reads single-subject classification performance and computes group statistics, which can be plotted using adam_plot_MVPA)
• adam_compare_MVPA_stats (compares outcomes of group analyses, which can be plotted using adam_plot_MVPA)
• adam_plot_MVPA (plots the outcome of the adam_compute_group_ or the adam_compare_MVPA_stats functions)
• adam_plot_BDM_weights (plots topomaps of the classifier weights or forward-transformed weights, the latter of which are equivalent to univariate difference maps and are interpretable as neural sources; see Haufe et al., 2014)
All ADAM user functions can be called from the MATLAB Command Window using the same syntax:

result = adam_somefunction(cfg, input);

In this expression, cfg (short for configuration) is a variable that specifies the parameters that the function needs. The concept of a cfg variable was borrowed from the Fieldtrip toolbox (Oostenveld et al., 2011), but ADAM is not part of Fieldtrip, so their functionalities should not be confused. The input variable is not always required; it can either be a variable that contains data, or it can specify a file path to the data. In the remainder of the methods section, we will outline how to use each of the main ADAM functions illustrated in Figure 1 to run a first-level analysis, and how to run and visualize a subsequent group analysis. We will use the Wakeman and Henson dataset (Wakeman and Henson, 2015) as an illustration of how to use the cfg variable to specify analysis and/or plotting parameters at each step of the way.

Data Structure
We recommend using a standard folder structure when analyzing experiments with ADAM: at the highest level, a container folder for the experiment that is analyzed. Inside that folder, there should be at least three subfolders:
(1) DATA: a folder with EEG/MEG input files, such as the epoched EEGLAB or Fieldtrip files that are downloaded from the repository, or the processed EEG/MEG data files of a different experiment.
(2) SCRIPTS: a folder that contains MATLAB scripts that perform ADAM analyses particular to the experiment. Scripts are snippets of code that tell ADAM how to analyze the data (these are distinct from the ADAM toolbox itself, so one should not put them inside the toolboxes folder). When saving analysis scripts, it is good practice to prepend their names with a canonical prefix so they can easily be recognized as scripts (e.g., prepend all scripts with "run_"), as in run_preprocessing.m or run_RAW_level1.m. It also helps to add further keywords like "RAW" to indicate that the file contains a script to perform a decoding analysis of raw EEG data, or a keyword like "TFR" to indicate it performs a decoding analysis on time-frequency data.
Adding a keyword like "level1" can be used to indicate that the script performs an analysis of the single subject data. Example scripts for the analyses that are described in this article are provided in the text but can also be found in the SCRIPTS folder located at https://osf.io/p2k97/files. (3) RESULTS: a folder that contains the outcome of the single subject analyses (these are often referred to as "first level" analyses), for example when classifying from the EEG whether subjects are viewing faces or scrambled faces, or when classifying whether they were viewing famous faces or non-famous faces. Each such analysis will be stored in a separate folder. This folder in turn will contain deeper levels created by ADAM, reflecting electrode selections and/or specific frequencies on which the analysis was performed, and finally a results file for every subject. First Level (Single Subject) Analysis In this manuscript, we describe how to run the first level analyses for three main comparisons, using ADAM:
• non-famous faces vs. scrambled faces
• famous faces vs. scrambled faces
• famous faces vs. non-famous faces
These are performed separately on the EEG and MEG dataset, so six analyses in total. The first script we provide below executes the first of these analyses: non-famous vs. scrambled faces of the EEG data. The script starts by specifying some initial variables (such as the names of the input files and the event codes that belong to the various factors/levels in the experimental design, which are needed to run first-level analyses), and subsequently specifies the cfg parameter settings that determine the settings during a first level analysis. Note that in MATLAB notation, comments are preceded by a percent (%) sign and displayed in green in the MATLAB editor. These comments provide a brief explanation of what a particular line of code does but are not actually executed by MATLAB when the code runs. The last line of the script executes the actual first level analysis using the adam_MVPA_firstlevel function, which computes single subject decoding and/or forward encoding results. ERPs are computed automatically when running adam_MVPA_firstlevel. The outcomes of the single subject analyses are stored as files inside the RESULTS folder, which are subsequently read in during group analysis (see section 2.11). This script can also be found in the SCRIPTS folder on https://osf.io/p2k97/files in the file run_firstlevels.m.
[FIGURE 1 | Schematic of the ADAM pipeline. Native EEG or MEG data are imported into EEGLAB or Fieldtrip format (not part of ADAM) and pre-processed (e.g., high-pass filtering, epoching, artefact rejection; baseline correction and muscle artefact rejection can also be applied by ADAM during first-level analysis). First-level analysis either uses k-fold cross-validation on a single data file per subject (option 1) or separate data sets for training and testing (option 2, using either separate files or separate event values for train and test data). For every time point, a backward decoding model (BDM) or forward encoding model (FEM) is built using training data and its performance metric is computed on the testing data; under k-fold cross-validation the final performance metric is obtained by averaging over test folds (e.g., K=5), and the weights of BDMs are forward transformed. Several transformations can be performed on the training and testing data (e.g., binning, whitening, computing induced power), either separately on training and testing data or indiscriminately across all stimulus classes. The output is publication-ready graphs of performance metrics and/or topographical maps of forward-transformed weights.]
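The full listing lives in run_firstlevels.m in the online SCRIPTS folder; the sketch below reconstructs its essential steps. The file names are placeholders, the call signature of file_list_restrict is inferred from its description in the next section, and the scrambled-face event codes (17, 18, 19) are an assumption based on the Wakeman and Henson (2015) trigger scheme; all other parameter values follow the settings reported in this article.
%% FIRST LEVEL ANALYSIS: NON-FAMOUS VS. SCRAMBLED FACES (EEG), SKETCH
% event codes (first presentation, immediate repeat, delayed repeat):
famous_faces = [5 6 7];        % famous faces
nonfamous_faces = [13 14 15];  % non-famous faces
scrambled_faces = [17 18 19];  % scrambled faces (assumed codes)
% input files without extensions (placeholder names, one pair per subject):
filenames = {'subject01_EEG' 'subject01_MEG' 'subject02_EEG' 'subject02_MEG'};
filenames_eeg = file_list_restrict(filenames, 'EEG');  % EEG files only
filenames_meg = file_list_restrict(filenames, 'MEG');  % MEG files only
% first-level settings:
cfg = [];
cfg.datadir = 'C:\MY_EXP\DATA';        % folder containing the epoched files
cfg.filenames = filenames_eeg;         % analyze the EEG files
cfg.class_spec{1} = cond_string(nonfamous_faces);  % class 1
cfg.class_spec{2} = cond_string(scrambled_faces);  % class 2
cfg.model = 'BDM';                     % backward decoding model (default)
cfg.raw_or_tfr = 'raw';                % decode raw amplitude over time
cfg.nfolds = 5;                        % 5-fold cross-validation
cfg.class_method = 'AUC';              % performance metric (default)
cfg.crossclass = 'yes';                % compute temporal generalization
cfg.channelpool = 'ALL_NOSELECTION';   % no channel pre-selection
cfg.resample = 55;                     % down-sample to 55 Hz
cfg.erp_baseline = [-.1 0];            % baseline window in seconds
cfg.outputdir = 'C:\MY_EXP\RESULTS\EEG_RAW\EEG_NONFAM_VS_SCRAMBLED';
adam_MVPA_firstlevel(cfg);             % run and save the first-level analysis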
Input Filenames The filenames used for the first level analyses are specified using cfg.filenames, and the path to these files is specified in cfg.datadir (see the example code above). The toolbox is able to work with two file formats: (1) standard EEGLAB format with ".set" and ".fdt" extensions (Delorme and Makeig, 2004), and (2) a standard Fieldtrip struct saved with a ".mat" extension (Oostenveld et al., 2011), either in timelock (ERP/ERF) or non-timelock epoched format. All files should be epoched, and the event code that specifies the relevant conditions for analysis needs to be numeric and placed at 0 ms in the epoch (for EEGLAB format) or be contained in a trialinfo field with an event code for each trial (for Fieldtrip format). Both EEGLAB and Fieldtrip have a large number of importing options for many available EEG/MEG data acquisition formats. The file name specification for the ADAM analysis should list all files in a cell array, as in the example code above. Do not use extensions in the filename specification. The toolbox will first attempt to locate ".set" (EEGLAB) files; if it cannot find those, it will look for ".mat" files containing a Fieldtrip struct. The function file_list_restrict selects files from the full file list based on a part of the file name. This is useful when, as here, separate EEG and MEG files exist, or when files from different experimental sessions need to be analyzed separately. The example code above creates a separate array of the EEG files and of the MEG files, to be able to run separate first level analyses for EEG and MEG. It is also possible to train the classifier on one input file and test on another input file by separating the two files using a semicolon (see sections 2.9.3 and 2.9.6 for more information about train-test schemes). That way one can train a classifier on one task and test on another, or one can even train the classifier on one subject and test on another subject, as long as both files have the same data format (same number of electrodes, and so on). Class Specifications and Balancing A decoding analysis tries to discriminate between a fixed set of experimental variables using brain data. The algorithm that performs classification is called the "classifier," and the experimental conditions it tries to discriminate are called the "classes." Table 1 shows the factorial design of the experiment that is analyzed in the current manuscript. The numbers in the table are the event codes that were used to denote the various events/conditions in the experiment. It is easy to draw a similar table for most experimental designs. Once the event codes in the levels of the factors in the design are assigned to variables (see the code above), it is easy to set up a class definition, which specifies the conditions or groups of conditions (classes) that the analysis should try to discriminate (classify). For example, to compare famous faces to non-famous faces simply write:
cfg.class_spec{1} = cond_string(famous_faces);
cfg.class_spec{2} = cond_string(nonfamous_faces);
cond_string is an ADAM function that creates string specifications from numbers, because ADAM requires string inputs.
Thus, the above class definition is effectively the same as:
cfg.class_spec{1} = '5,6,7';
cfg.class_spec{2} = '13,14,15';
By default, ADAM enforces balanced designs. A design is balanced when the trial counts in the different cells of the factorial design (as in Table 1) are equal. Unbalanced designs (asymmetrical trial counts) can have a number of unintended effects on the type of conclusion that can be drawn from the analysis. There are two types of imbalances: within class imbalances and between class imbalances. Within class imbalances occur when event counts within classes are unequally distributed. For example, this occurs if a decoding analysis compares famous faces to non-famous faces (irrespective of the factor stimulus repetition), while at the same time the design contains many more first presentations than immediate or delayed repeats. In such a case, the outcome might be driven more strongly (or even entirely) by the first presentations than by the repeated presentations. This would impact how generalizable the effect of being famous is across the experimental design. Because designs are often unbalanced, ADAM rebalances designs by applying two types of corrections: within class undersampling (throwing out trials) and between class oversampling (duplicating trials). Rebalancing unbalanced designs through under- or oversampling has been shown to convey clear performance benefits for linear discriminant analysis and area under the curve (Xue and Hall, 2015), which are the classification algorithm and default performance metric that ADAM uses (see sections 2.9.3 and 2.9.5). Between class imbalances occur when an entire class is overrepresented in the analysis. An example of such an imbalance would be when performing decoding of famous faces and non-famous faces, while many more famous faces than non-famous faces exist in the dataset. In such cases the classifier can develop a bias, classifying the majority of trials (or even all trials) as famous faces. Classification performance across trials would then be higher than chance even if the classifier has in fact no ability to discriminate famous faces from non-famous faces, due to the simple fact that the majority of trials contain famous faces. Therefore, ADAM rebalances classes by default by making use of a special case of oversampling (duplicating trials) in the training set. This is achieved by synthetically generating instances (trials) of the class that has the fewest trials (the minority class). Class instances are generated using a modification of the ADASYN algorithm, which generates instances that maximally drive learning during the training phase by generating synthetic trials that are close to the classifier decision boundary (He et al., 2008; see sections 2.9.3 and 2.9.6). In the example above, if the class of famous faces contains 150 trials and the class of non-famous faces contains 75 trials, ADAM would generate another 75 synthetic trials of the non-famous faces class, so that there are an equal number of trials of both classes in the training set. Within classes, ADAM applies event balancing by default through undersampling, so that all event types contribute equally to a stimulus class.
In the example above, if there are 200 first presentations of a famous face (event code 5), but only 50 immediate repeats (event code 6) and 50 delayed repeats (event code 7) of famous faces, ADAM lowers the trial count of the first presentations to match the others (the 200 first presentations would be reduced by randomly selecting 50 of them, to match the immediate and delayed repeats). It is important to be aware of this, as one may lose a lot of trials if the experimental design is heavily unbalanced within classes. ADAM also allows one to specify an idiosyncratic ratio of each trial type in the class definition. For example, to specify two first presentations for every immediate and delayed repeat, use:
cfg.class_spec{1} = '5,5,6,7';
To keep things simple, the analyses that are covered in this article will only classify the first presentation of each stimulus type. The cond_string function makes it easy to create such class definitions by combining levels in the factorial design, as was done in the example code provided. It is also possible to use different class definitions for training and testing, by separating the two using a semicolon (see sections 2.9.3 and 2.9.6 for more information about train-test procedures). Model Selection ADAM is able to run two basic models: a backward decoding model (BDM, default) and/or a forward encoding model (FEM, sometimes also referred to as an inverted encoding model) (Brouwer and Heeger, 2009). BDMs allow one to predict an experimental variable (condition) given an observed pattern of brain activity. The experimental variables that the model attempts to discriminate based on brain data are called the classes (see section 2.9.2). The model that makes these predictions is often referred to as the classifier. The process of making class predictions is often referred to as "classification" or "decoding," and involves a procedure in which some data is first used to train the classifier (build the model), after which a set of independent data is used to evaluate its performance (see sections 2.9.5 and 2.9.6). By default, the BDM in ADAM employs Linear Discriminant Analysis (LDA) to perform decoding, a standard decoding algorithm that has been shown to perform well compared to other algorithms (Grootswagers et al., 2017), and which is able to solve classification problems for two or more classes. All analyses described in the current manuscript use a BDM. While BDMs employ a categorical relationship between brain data and experimental variables, FEMs describe an invertible continuous relationship between experimental variables and brain data, allowing one to predict patterns of brain activity for arbitrary values of the experimental variable (and vice versa). FEMs are most useful when the relationship between the experimental variable and neural activity is continuous (e.g., color, orientation of a bar, position on a circle). A FEM determines the relationship between such a continuous experimental variable and multivariate brain signals using a Channel Tuning Function (CTF). The CTF allows one to reconstruct patterns of neural activity for stimuli that were never used during model generation, and vice versa (Brouwer and Heeger, 2009; Fahrenfort et al., 2017a). FEMs too make use of cross-validation, in which independent datasets are used for fitting the model (training) and validating the model (testing); also see section 2.9.6.
FEMs are not relevant to the experimental design of the data that are analyzed and presented here, and are therefore outside the scope of this manuscript. However, there is considerable literature available for those who want to know more (Foster et al., 2016, 2017). The cfg.model parameter allows one to specify whether ADAM should run a BDM or a FEM during analysis. Raw or Time-Frequency ADAM is able to either perform MVPA analyses on raw EEG/MEG data, or first perform a time-frequency decomposition into frequency bands prior to analysis. In the current manuscript we only analyze raw data, but ADAM is able to first compute time-frequency representations (TFRs) prior to a BDM or FEM analysis. The cfg.raw_or_tfr parameter specifies whether ADAM should analyze the raw EEG/MEG amplitude over time, or whether it should first compute TFRs, by respectively specifying "raw" or "tfr." It is good practice to store the results from analyses on raw data in a different folder from analyses that are performed on TFR data. When computing TFRs, it is important to realize that the input files for ADAM are always raw data; ADAM computes the TFRs internally during analysis using Fieldtrip. There are a number of additional options available for TFRs, such as computing induced rather than total power (Klimesch et al., 1998; Pfurtscheller and da Silva, 1999; Fahrenfort et al., 2012). When performing decoding on TFR data, ADAM computes accuracy in a time-by-frequency plot by default, but it can also compute temporal generalization matrices for specific frequency bands when cfg.crossclass is set to 'yes' (see section 2.9.7). Performance Measures The performance of a classifier quantifies how accurately it can predict class membership based on measured brain activity. There are many conceivable classifier performance metrics, depending on the research question and goal of the analysis. An often-used performance measure in the literature is "accuracy," the proportion of correct classifications across all class instances. When ADAM computes accuracy, it does so for each class separately, and then averages across classes (balanced accuracy). For example, when an analysis targets a classification of faces and scrambled faces, ADAM computes accuracy as:
accuracy = (accuracy(faces) + accuracy(scrambled)) / 2
where accuracy(faces) is the proportion of face trials classified as faces, and accuracy(scrambled) is the proportion of scrambled trials classified as scrambled. This measure should theoretically produce chance accuracy even when the classifier develops a bias and/or when a stimulus class is overrepresented in the data. A more sensitive measure to compute classifier performance is Area Under the Curve (AUC) (Bradley, 1997). AUC is the default performance measure that ADAM computes. AUC refers to the area under the receiver operating characteristic, a metric derived from signal detection theory (Wickens et al., 2002). It constitutes the total area covered when plotting the cumulative true positive rates against the cumulative false positive rates for a given classification problem and, like balanced accuracy, is insensitive to classifier bias. In a two-class decoding analysis, this is achieved by plotting the cumulative probabilities that the classifier assigns to instances as coming from the same stimulus class (true positives) against the cumulative probabilities that the classifier assigns to instances that come from the other stimulus class (false positives).
AUC takes into account the degree of confidence (distance from the decision boundary) that the classifier has about class membership of individual instances, rather than averaging across binary decisions about class membership of individual instances (as happens when computing standard accuracy). In other words, low confidence decisions contribute less to the AUC than instances about which the classifier is very confident, whereas for accuracy all classifications are treated equally. When ADAM computes AUC in multi-class problems, it uses the average AUC across all pairwise comparisons between classes (Hand and Till, 2001). Therefore, chance AUC performance is always 0.5, regardless of the number of classes that the analysis attempts to discriminate. The performance measure that ADAM should compute can be specified using the class_method parameter, e.g., cfg.class_method = 'AUC'. ADAM can also compute a number of other measures derived from signal detection theory, such as d' ('dprime'), hit rate ('hr') and false alarm rate ('far'). Train-Test Procedures, K-fold Cross Validation A classification analysis usually consists of two steps: one in which a model is fitted to the data (training), and one in which the performance of the model is evaluated (testing). These two steps are usually performed on independent data. If they were performed on the same data, the performance of the model would not only reflect true differences between stimulus classes, but also differences that occur because of coincidental (noise related) differences between stimulus classes (this is also called "overfitting"). To prevent overfitting from inflating the performance of the model, separate data are used for training the model and testing the model. There are two ways of achieving this goal in ADAM: (1) use two independent data sets, one for training and one for testing, or (2) use a single dataset for training and testing using k-fold cross validation. In k-fold cross validation, the trials are split up into k equally sized folds, training on k-1 folds, and testing on the remaining fold that was not used for training. Therefore, the training set is independent from the testing set on that iteration. This procedure is repeated k times until each fold (all data) has been tested exactly once, while on any given iteration the trials used for training are independent from the trials that were used for testing. A graphical illustration of this procedure can be found in Figure 1 in the box that says adam_MVPA_firstlevel. Next, the performance measures obtained at each iteration/fold are averaged to obtain a single performance metric per time point. In ADAM, the number of folds is specified using the cfg.nfolds parameter. For example, if nfolds is 4, the classifier will train on 75% of the data and test on the remaining 25%, repeating the process until all data have been tested once. If nfolds is higher than the number of trials in the dataset, ADAM automatically lowers nfolds to a number that implements leave-one-out testing, in which the classifier is trained on all but one trial and then tested on the remaining trial. This can be very time consuming, as the entire process is then repeated as many times as there are trials in the data set. When train and test data are already independent (for example, when using different input files for training and testing, or when using different event codes for training and testing), nfolds is disregarded.
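As a conceptual illustration of the partitioning logic (this is not ADAM's internal code; ADAM handles folds automatically), a 4-fold cross-validation scheme can be sketched as follows:
% Conceptual sketch of 4-fold cross-validation (illustrative only):
ntrials = 200;                                % hypothetical trial count
nfolds = 4;
foldid = mod(randperm(ntrials), nfolds) + 1;  % random fold assignment, 1..4
perf = zeros(1, nfolds);
for k = 1:nfolds
    testidx = (foldid == k);   % test on fold k (25% of the trials)
    trainidx = ~testidx;       % train on the remaining folds (75%)
    % ... train the classifier on trainidx trials, then compute its
    % performance (e.g., AUC) on testidx trials:
    perf(k) = 0.5;             % placeholder for the fold's performance
end
mean_perf = mean(perf);        % average across folds, one value per time point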
2.9.7. Temporal Generalization Using Classification Across Time ADAM is also able to cross-classify across time. In this case, the classifier is not only trained and tested on the same point in the trial, but every train time point is also tested on all other time points in the trial. This results in a train × test time performance matrix, also called a temporal generalization matrix. If classifier performance for any given train time point is high when testing on other time points, this means that the pattern that was used to train the classifier at that time point generalizes to these other time points. This in turn suggests that (part of) the underlying cortical signal is stable across this time interval. Distinct patterns in the temporal generalization matrix allow one to draw different conclusions about the dynamics underlying neural processing (for details see King and Dehaene, 2014). In ADAM, the cfg.crossclass parameter specifies whether to compute temporal generalization or not. If cfg.crossclass is set to 'yes', ADAM computes a train × test generalization matrix, which can subsequently be statistically analyzed and visualized at the group level. The diagonal of the train × test performance matrix is the same time series that is computed when cfg.crossclass is set to 'no' (this is because for these diagonal time points, the classifier is trained and tested on the same time points). For this reason, training and testing on the same time points is sometimes referred to as "diagonal decoding." If cfg.crossclass is set to 'yes' when computing the first level (single subject) results, this affords maximal flexibility when performing group level analysis using adam_compute_group_MVPA (see section 2.11 further below). For example, one can either compute the full train × test temporal generalization matrix at the group level, compute only the diagonal at the group level, or average over particular train or test intervals at the group level (also see section 2.12). However, computing temporal generalization does require much more computing time. To save time, one can opt to have ADAM down-sample the input signal prior to first level analysis (see section 2.9.8). If cfg.crossclass is set to 'no', computation time is relatively short, but in this case one can only compute and plot statistics at the group level for the diagonal (training and testing on the same time points). 2.9.8. Pre-processing: Channel Selection, Resampling, Baseline-Correction ADAM assumes that input files are already pre-processed (e.g., in Fieldtrip or in EEGLAB), but to make life a little easier ADAM is able to perform some basic pre-processing steps itself. For the analyses discussed here, no pre-processing was applied to the data prior to ADAM analysis other than epoching and down-sampling. ADAM provides four noteworthy internal pre-processing options: electrode/channel selection, resampling, baseline correction and muscle artifact rejection. Channel selection is done using cfg.channelpool. This option makes it possible to select electrodes/channels (the "features" in a decoding analysis) prior to computing classification performance. Although classification algorithms already intrinsically up-weight features that contribute to classification performance and down-weight features that do not, sometimes a signal is known to be contained in a particular part of the brain. For example, when using a visual task, occipital channels are likely to be most informative.
In such cases, classifier accuracy can be boosted by pre-selecting channels. To keep all channels, use 'ALL_NOSELECTION'. More information about channel selection can be found by typing help adam_MVPA_firstlevel and/or help select_channels in the MATLAB Command window. In the current analysis, no electrode/channel selection was applied. In addition, it is possible to down-sample the signal prior to running an analysis by specifying a new sampling rate using cfg.resample. The main advantage of doing this is to save computation time (at the expense of temporal resolution, of course). This is of particular importance when running cross classifications to compute temporal generalization matrices, in which the analysis is performed for every train and test time combination (see section 2.9.7). When performing decoding on TFRs, ADAM will use the original sampling rate to compute TFRs, and only perform decoding on time points that belong to the redefined sampling rate after power has been computed. In the current analysis, the data were resampled to 55 Hz. Third, a very common step in ERP analysis is to apply baseline correction. ADAM can do this automatically when cfg.erp_baseline is specified (in seconds). In the current analysis, a baseline correction between −100 and 0 ms was applied. Finally, it is possible to remove trials containing muscle artifacts in a certain window of the trial using the cfg.clean_window parameter. This step was not applied in the current analyses. One can pre-process the data in any way one chooses prior to using ADAM, as long as the data are epoched. An overview of the effect of various pre-processing steps on classification performance is given by Grootswagers et al. (2017). Running the First-Level Analyses When running adam_MVPA_firstlevel using the example code at the start of this section, it will classify the activity across all electrodes for each train-test sample in a trial as either coming from a non-famous face or from a scrambled face, and compute average classification performance for each of these samples. The result of each subject's analysis will be written to disc. The directory to which the first level results are written is specified using cfg.outputdir. The output directory should contain a name that is specific to a given analysis (see the example code). If a directory does not exist, ADAM will create it. The resulting data structure will be briefly explained in section 2.10. For the current manuscript, we ran three first level analyses for EEG and the same three for MEG, so a total of six first level analyses. The code above already ran the first analysis. Assuming that the variables from that code (containing the event specifications etc.) are still in MATLAB's workspace, it is easy to run each of the remaining five analyses using the code below.
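For reference, a sketch of these remaining analyses is given below; the canonical listing is in run_firstlevels.m in the online SCRIPTS folder, and the MEG folder names are assumptions that mirror the EEG naming used in this article.
% Sketch: remaining five first-level analyses (reuses the cfg from above).
% EEG: famous vs. scrambled faces
cfg.class_spec{1} = cond_string(famous_faces);
cfg.class_spec{2} = cond_string(scrambled_faces);
cfg.outputdir = 'C:\MY_EXP\RESULTS\EEG_RAW\EEG_FAM_VS_SCRAMBLED';
adam_MVPA_firstlevel(cfg);
% EEG: famous vs. non-famous faces
cfg.class_spec{1} = cond_string(famous_faces);
cfg.class_spec{2} = cond_string(nonfamous_faces);
cfg.outputdir = 'C:\MY_EXP\RESULTS\EEG_RAW\EEG_FAM_VS_NONFAMOUS';
adam_MVPA_firstlevel(cfg);
% MEG: the same three comparisons, now on the MEG files
cfg.filenames = filenames_meg;
cfg.class_spec{1} = cond_string(nonfamous_faces);
cfg.class_spec{2} = cond_string(scrambled_faces);
cfg.outputdir = 'C:\MY_EXP\RESULTS\MEG_RAW\MEG_NONFAM_VS_SCRAMBLED';
adam_MVPA_firstlevel(cfg);
% ...and analogously for MEG_FAM_VS_SCRAMBLED and MEG_FAM_VS_NONFAMOUS.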
First Level Folder Structure The results of a first level analysis are stored in the directory path specified in cfg.outputdir. In the example above, the first relevant directory in this path is typically called RESULTS, followed by a directory indicating whether decoding was performed on raw EEG data (EEG_RAW) or raw MEG data (MEG_RAW); for a time-frequency analysis one might instead indicate something like EEG_TFR or MEG_TFR. The last folder in the directory structure denotes the comparison in the analysis (e.g., in the example above, decoding famous vs. non-famous faces using EEG data was indicated using EEG_FAM_VS_NONFAMOUS). Inside this folder, ADAM will further automatically create separate folders for analyses based on different channel selections, as specified in cfg.channelpool (see section 2.9.8). If an analysis uses all electrodes/channels, as in the current example, this folder will be named ALL_NOSELECTION, but when specifying only occipital electrodes, ADAM will create a folder called OCCIP for that analysis. This directory will typically contain .mat data files containing the first level result for each of the individual subjects, or may contain separate folders for each frequency in case it pertains to a decoding analysis of time-frequency data. It is advisable to use a clear and unambiguous naming scheme when specifying cfg.outputdir, as in the example above. If one or more of the directories in cfg.outputdir do not exist, ADAM automatically creates the hierarchy with all the missing directories. Group Analysis Workflow Once the first level analyses have completed, the next step is to perform group analysis and visualization. ADAM has two functions that perform group analysis: adam_compute_group_ERP and adam_compute_group_MVPA. Group statistics on ERPs are computed using adam_compute_group_ERP (ADAM automatically also saves ERPs when running first level analyses), while group statistics on multivariate analysis results are obtained using adam_compute_group_MVPA. Both functions read the results from the first level single subject files that are contained in the RESULTS folder and perform a group analysis on these data. This returns a group stats variable that contains the outcome of one or more analyses (explained in section 2.12 below), which can subsequently be plotted using the adam_plot_MVPA function (explained in section 2.13 below). In keeping with the logic of the adam_MVPA_firstlevel function, both functions take a cfg variable as input. The cfg variable specifies the parameters that can be adjusted when computing group statistics and plotting results. These parameters will be treated in detail in section 3 (Results), so that they are discussed alongside the output that the functions produce. Group Statistics and Multiple Comparison Correction The adam_compute_group_ functions read in the outcome of first level analyses and compute group statistics on them. They return the result in a stats variable. When executing one of the adam_compute_group_ functions, a folder selection dialog will pop up. This dialog allows the user to select a first level directory from which to compute the group stats variable. One can either select a directory referring to a specific analysis (e.g., EEG_FAM_VS_NONFAMOUS in the current example analysis) or select one directory higher up that contains multiple first level analyses (e.g., EEG_RAW in the current example analysis). When selecting a folder that contains multiple analyses, ADAM will compute group-level results for all the analyses contained in the folder and return the group results of these analyses in a stats array. A number of examples of how this works are supplied in section 3 (Results). The group statistics are computed by applying t-tests across subjects using the metric that was specified during first level analysis (for MVPA the default performance metric is AUC, see section 2.9.5; for ERPs it is µV). The t-tests compare this metric to a reference level for each sample (this reference level is 0.5 chance performance in the case of AUC, or either 0 or a cfg-defined reference condition/channel when computing ERPs).
ADAM can constrain the range of tests by pre-selecting a train and/or test time window and/or a range of frequency bands. In addition, ADAM can average across any of these time windows or frequency ranges. This is particularly relevant when the first level analyses contain time-frequency results (see section 2.9.4) and/or temporal generalization (see section 2.9.7). Examples of how to constrain the time points that are used in a group-level analysis using the cfg variable are given in the results section, for example in section 3.8. The outcome of a group analysis yields a p-value for every sample. Because large numbers of tests result in the well-known multiple comparison problem (Bennett et al., 2009), ADAM has two ways of controlling for multiple comparisons at the group level. One option is to apply cluster-based permutation testing, in which clusters are defined as contiguously significant t-tests. The size of each observed cluster is defined as the sum of the t-values in that cluster. Next, this procedure is repeated many times (1,000 by default), each time randomizing the condition labels (e.g., the observed AUC value and its 0.5 reference value) for each subject prior to performing the t-tests. These iterations allow one to compute a null distribution of cluster sizes under random permutation against which to compare the actually observed cluster sizes, from which the p-value for each observed cluster can be directly computed (section 5 in Maris and Oostenveld, 2007). This limits the number of hypothesis-related tests to the number of observed clusters, severely reducing the number of relevant statistical comparisons. The standard p-value used to delineate whether a given sample is part of a cluster is 0.05. Alternatively, one can apply multiple comparison correction using the False Discovery Rate (FDR) under dependency (Benjamini and Yekutieli, 2001). FDR correction controls the false discovery rate q, such that no more than a fixed percentage (usually 5%) of the significant tests can reasonably be expected to be false positives (type I errors). When either correction is applied, only tests that survive the threshold under that correction are plotted as significant by adam_plot_MVPA. Examples of both correction methods are given in the results section, for example in section 3.1. It is also possible to directly compare different first level analyses to each other. This is achieved by the adam_compare_MVPA_stats function. In this case, two first level metrics from different analyses are compared against each other using t-tests. The same multiple comparison corrections can be applied as in the adam_compute_group_ functions. Note that this is usually only sensible when the data come from the same experiment and/or subjects, as different experiments may have different signal to noise ratios, hampering interpretation. An example of this analysis is given in section 3.7 of the results. Also note that some caution is in order when drawing population level inferences from statistics computed on MVPA metrics. In particular, standard statistical tests of classification performance against chance do not allow population level inference when the train and test set are drawn from the same distribution (i.e., when they both come from the same task), as is the case in a k-fold analysis (see section 2.9.6). In this case, the results should be interpreted as fixed rather than random effects (see Allefeld et al., 2016 for details).
This can be resolved by computing information prevalence across the group, but this metric has not yet been implemented in the current version (v1.0.4) of ADAM. Population level inference is not jeopardized when train and test sets are drawn from different distributions, as when the training data are obtained from a different task than the test data, when evaluating off-diagonal classifier performance in a temporal generalization matrix (see section 2.9.7), or when different first level analyses are compared to each other at a group level (as happens when using the adam_compare_MVPA_stats function). Plotting Group Results Group results are plotted using adam_plot_MVPA. This function requires two inputs: a cfg and one or more stats variables produced by the adam_compute_group_ functions. Each stats variable can contain a single analysis, but can also contain multiple analyses in an array (see section 2.12). ADAM either visualizes the outcomes of all analyses that it receives in a single figure, or plots them as separate figures. Plotting is always constrained by the settings that were applied when computing the group statistic (see section 2.12). As a result of these settings, the plotting function can visualize two types of graphs: line graphs that plot classifier performance on the y-axis and time on the x-axis, or graphs that plot classifier performance using a color scale. Color scale graphs either have train time and test time on the x- and y-axes (in the case of temporal generalization), or frequency on the y-axis and time on the x-axis (when the first level analysis was performed with the time-frequency option). Significant time windows in line graphs are indicated using a thicker line, which is placed on the line graph itself and/or near the time axis. Significant samples in color graphs are indicated using saturated colors. Unsaturated (bland) colors reflect p-values that do not survive the threshold, either uncorrected or corrected for multiple comparisons, depending on the settings applied when computing the group-level statistic (section 2.12). The cfg variable specifies the parameters to adjust the plot, such as tick-marks, y-limits, the order of the plots (in case of multiple analyses), whether to plot the results in a single graph or in multiple graphs (in case of multiple analyses), and so forth. Examples of these options are given in the results section, along with the code that produces the graphs. In addition, the help file of adam_plot_MVPA provides a detailed description of the options. ERPs and Difference Waves of ERPs In the first group-level analysis, we compute the group results from the first-level analysis of the comparison between non-famous and scrambled faces. We will compute the raw ERPs of non-famous and scrambled faces, and also their difference, and subsequently plot everything in a single plot. First, we will compute the raw ERPs. When running the code below, a selection dialog will pop up from which a folder can be selected. The first-level analyses that will be plotted are contained in the folder EEG_NONFAM_VS_SCRAMBLED (inside EEG_RAW), so that is the folder to select.
Because it is cumbersome to have to navigate to the RESULTS folder for every group computation, the user can point the function to the root folder for the first-level analyses using cfg.startdir:
%% COMPUTE GROUP ERPs FROM FIRST LEVEL RESULTS
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
cfg.electrode_def = {'P10'};
% select EEG_NONFAM_VS_SCRAMBLED in the dialog
% that appears after running the following command:
erp_stats = adam_compute_group_ERP(cfg);
Two other relevant settings are cfg.mpcompcor_method, which specifies the method used to correct for multiple comparisons (cluster-based permutation testing in this case, Maris and Oostenveld, 2007), and cfg.electrode_def, which specifies the electrode(s) to obtain ERPs for. The user can also specify the p-value cut-off (default: 0.05) and whether to use one-tailed or two-tailed testing (default: two-tailed). More information about these and other settings can be found by inspecting the help of adam_compute_group_ERP. Once the function has finished, the erp_stats variable will contain group ERPs of the classes that were specified when running the first level analysis (the first class contained initial presentations of non-famous faces, the second class contained initial presentations of scrambled faces; see the code in the beginning of section 2.9). Next, to compute the difference between these ERPs, the function needs to be executed once more, this time specifying 'subtract' in cfg.condition_method. When running the code below, the selection dialog will pop up once more, where the user should again select the EEG_NONFAM_VS_SCRAMBLED folder.
%% COMPUTE DIFFERENCES BETWEEN ERPs
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
cfg.electrode_def = {'P10'};
cfg.condition_method = 'subtract';
% select EEG_NONFAM_VS_SCRAMBLED in dialog:
erp_stats_dif = adam_compute_group_ERP(cfg);
The snippets of code above will now have produced two variables: erp_stats (containing the separate ERPs) and erp_stats_dif (containing the difference between these ERPs). ERPs and other stats variables can be plotted using adam_plot_MVPA. This function has two inputs: the first is a cfg variable specifying the parameters that are relevant to adjust the plot, the second contains the stats variable(s) with the data to plot. Two (or more) stats variables can be plotted using a single adam_plot_MVPA command by enumerating them after the cfg variable, separating them using commas:
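A sketch of this plotting call is given below; the canonical listing is in the online SCRIPTS folder, and the RGB triplets shown here are illustrative rather than the colors used for the published figures.
%% PLOT THE ERPs AND DIFFERENCES IN A SINGLE PLOT (sketch)
cfg = [];
cfg.singleplot = true;                           % all analyses in one figure
cfg.line_colors = {[0 0 .5] [.5 0 0] [0 .5 0]};  % illustrative RGB triplets
adam_plot_MVPA(cfg, erp_stats, erp_stats_dif);   % stats variables, comma separated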
Two cfg parameters of adam_plot_MVPA are noteworthy here. The first is cfg.singleplot. This setting specifies whether all the analyses in the stats variables are plotted together in a single plot, or whether the function produces a different plot for every stats variable. Try setting cfg.singleplot = false (which is the default) to see the effect. The other is the cfg.line_colors setting. ADAM uses default line colors for graphs, which can be changed using the cfg.line_colors parameter. This parameter specifies the RGB colors of the lines that are plotted, using a triplet of values between 0 and 1 for every line to denote the contribution of red, green and blue respectively (type doc ColorSpec in the MATLAB Command window for more information). In the plot presented here, the colors were changed to make them consistent with the remaining plots in the results section. The snippet of code above produces the plot shown in Figure 2A. The thick parts of the line are parts of the time series that are statistically significant after applying the correction method that was specified when producing the group stats variable; these are also plotted near the time axis at the bottom. The shaded area around each line is the standard error of the mean across participants. Note that the initial C1 and P100 components of the raw ERPs (erp1 and erp2) do not reach significance despite having a very small standard error. This is due to the fact that cluster-based permutation testing robustly determines clusters (including cluster onsets) but is less sensitive to focal regions of significant activity (as would be the case for the peaks of the C1 and P100 components). If one is interested in small windows of highly significant activity, it might be better to apply an FDR correction (Benjamini and Yekutieli, 2001). In the current example analysis this can easily be achieved by re-running the group-level script above after replacing the line that says cfg.mpcompcor_method = 'cluster_based'; with cfg.mpcompcor_method = 'fdr';. Plotting the result again indeed shows that both the C1 and P100 of erp1 and erp2 reach significance under FDR correction (see Figure 2B). However, the disadvantage of FDR correction is that it is less robust in assessing the onset of large clusters (resulting in later onsets than are observed under cluster-based permutation; a similar detrimental effect of FDR correction on cluster onsets can be seen in Grootswagers et al., 2017, Figure 14), and less robust in identifying sustained clusters (compare the continuous significance of erp1 in Figure 2A to the interrupted significance line of erp1 in Figure 2B). In the remainder of the manuscript we will consistently use cluster-based permutation testing, but alert the reader to the impact of using different types of multiple comparison correction. We also point out that the adam_plot_MVPA function has many parameter settings, allowing one to specify the tick-marks of the x- and y-axes in the graph, invert the direction of the y-axis (negative up or negative down), and so on. These parameter settings will be treated further down, or can be found in the help documentation of the adam_plot_MVPA function (type help adam_plot_MVPA in the MATLAB Command window). Difference Waves of ERPs In the second analysis, we compute the outcome of three ERP subtractions in the experiment: non-famous vs. scrambled faces (as in the first analysis), famous vs. scrambled faces, and famous vs. non-famous faces. Below is the code to compute and plot these three group analyses. When running this snippet of code, a selection dialog will pop up. This time, select the EEG_RAW folder (which contains all three contrasts, as these were computed in the first level analysis). The plotting code introduces two new parameters, discussed below the listing.
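The sketch below reconstructs the computation and plotting steps; the canonical listing is in the online SCRIPTS folder, and the acclim bounds are illustrative values rather than the ones used for the published figure.
%% COMPUTE AND PLOT ERP DIFFERENCE WAVES, ALL EEG COMPARISONS (sketch)
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
cfg.electrode_def = {'P10'};
cfg.condition_method = 'subtract';
% select EEG_RAW when the dialog appears:
erp_stats_dif = adam_compute_group_ERP(cfg);
cfg = [];
cfg.singleplot = true;
cfg.plot_order = {'EEG_NONFAM_VS_SCRAMBLED' 'EEG_FAM_VS_SCRAMBLED' 'EEG_FAM_VS_NONFAMOUS'};
cfg.acclim = [-6 3];   % illustrative y-axis bounds in microvolts
adam_plot_MVPA(cfg, erp_stats_dif);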
The first new parameter is cfg.plot_order. This parameter specifies the order in which the comparisons inside the EEG_RAW folder (which was selected when computing group results) are plotted. The plot order impacts the order in which the default colors are used for plotting, and accordingly the order of the names in the legend. When omitting this parameter, the plot function will use the order in the stats variable. The other new parameter is acclim, which sets the bounds for the y-axis in line graphs. When omitting this parameter, the function will use default bounds (which are usually fine). Here we adjusted them slightly to remove overlap between the plots and the legend. Figure 3 reveals that two out of three ERP difference waves (subtractions between raw ERPs) result in windows of activity in which the difference is significant (as indicated by thick lines).
[FIGURE 3 | ERP difference waves for the three comparisons in the experiment. Thick lines denote p < 0.05 under two-sided cluster-based permutation (Maris and Oostenveld, 2007).]
Inspecting the Stats Structure These temporal windows (their start and stop points in milliseconds and the time at which they peak) can be inspected in the stats structure. For example, to inspect the third stats variable, type:
erp_stats_dif(3)
This displays the contents of this analysis in the MATLAB Command window. The condname field shows that this is the analysis that compares non-famous faces to scrambled faces. Other analyses can be inspected by putting a different number between the parentheses. (The mapping between the number and the analysis that was performed may differ depending on how the operating system orders files. To enforce a particular order, specify cfg.plot_order when calling adam_compute_group_ERP.) The stats structure also contains a field called pStruct, which contains the values of the significant clusters in this analysis. The pStruct field can be accessed by typing:
erp_stats_dif(3).pStruct
This will show its contents in the Command window. The pStruct field contains two fields, one for positive clusters and one for negative clusters. As can be seen from Figure 3, the significant window for EEG_NONFAM_VS_SCRAMBLED is negative (by convention, negative is often plotted upwards when plotting ERPs), and indeed the posclusters field is empty, as it is followed by empty square brackets []. To inspect the negative clusters, type:
erp_stats_dif(3).pStruct.negclusters
The first field is called clusterpval. This is the cluster-based p-value after cluster-based random permutation (Maris and Oostenveld, 2007). In this case, the value is 0. By default, the cluster-based permutation test in the adam_compute_group_ functions runs 1,000 iterations. The fact that clusterpval is 0 means that a cluster of the actually observed size was never obtained under random permutation, so that the p-value under permutation is smaller than 1/1000; this p-value should thus be reported as p < 0.001. The clustersize field reflects the number of consecutive samples in the time window, the datasize field reflects the total number of samples in the time series, start_time reflects the onset time in milliseconds of the significant window, stop_time the offset time in milliseconds, and peak_time the time point at which the ERP difference was maximal. The same information can also be obtained for decoding analyses, e.g., by inspecting the stats structure that results from running adam_compute_group_MVPA instead of adam_compute_group_ERP. Training and Testing on the Same Time Points (Diagonal Decoding) Next, we cover how to apply a decoding analysis using code very similar to that used to compute ERPs. First, we will run the equivalent of the ERP analyses that were computed in the previous sections, this time using adam_compute_group_MVPA. When running the following code, a selection dialog will pop up again. Select the EEG_RAW folder, after which the group analyses will be performed.
%% COMPUTE DIAGONAL DECODING, ALL EEG COMPARISONS
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
% 'diag' means train and test on the same points:
cfg.reduce_dims = 'diag';
% select EEG_RAW when dialog appears:
mvpa_stats = adam_compute_group_MVPA(cfg);
After running the code above, the decoding results for all analyses contained in the EEG_RAW folder will be contained in the mvpa_stats variable. The only new setting here is the cfg.reduce_dims variable. For now, it is sufficient to remember that setting this to 'diag' extracts the decoding analysis in which the classifier was trained and tested on the same time points (so without looking at temporal generalization, see section 2.9.7). To plot the decoding results, very similar plotting code is used as before:
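A sketch of that plotting call follows; the y-axis bounds are illustrative, and the canonical listing is in the online SCRIPTS folder.
%% PLOT DIAGONAL DECODING RESULTS (sketch)
cfg = [];
cfg.singleplot = true;   % plot all three comparisons together
cfg.plot_order = {'EEG_NONFAM_VS_SCRAMBLED' 'EEG_FAM_VS_SCRAMBLED' 'EEG_FAM_VS_NONFAMOUS'};
cfg.acclim = [.45 .7];   % illustrative y-axis bounds (AUC)
adam_plot_MVPA(cfg, mvpa_stats);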
Running that snippet of code produces the plot contained in Figure 4. This figure looks comparable to the plot contained in Figure 3, but this time the y-axis denotes classification performance (rather than µV as in the ERP analyses). As can be seen from Figure 4, and perhaps unsurprisingly, decoding produces somewhat similar results as the ERPs in Figure 3. Two out of three decoding results show windows of activity in which accuracy is significant after correcting for multiple comparisons using cluster-based permutation (as indicated by the thick lines). However, there are also some notable differences. For example, decoding of famous vs. scrambled faces is significant for a longer period of time than the ERP subtraction of this comparison at electrode P10. Moreover, the difference between famous and non-famous faces never reaches significance in the ERP, but does reach significance in the decoding analysis. Both differences between the decoding results and the ERP at the P10 electrode must be due to the fact that there is information contained in the multivariate pattern of activity across the scalp that exceeds the information contained in the P10 electrode alone. This demonstrates one of the strengths of the decoding technique: MVPA allows one to obtain a measure for the difference between two conditions (stimulus classes) without having to specify a priori in which electrode this difference emerges, while at the same time picking up subtle differences that might not have been noticed had such an a priori electrode selection been made (also see Fahrenfort et al., 2017a). In section 3.6 further below we explain how to visualize the pattern of activity that underlies classification performance using topographical maps. Plotting Single Subject Results A nice feature of ADAM is that it allows quick visualization of group results (ERPs, classification performance, etc.). However, it is unwise to simply compute a group result without also inspecting single subject results. For example, one should typically ascertain whether the group result was caused by only a few participants or whether the effect is present in most of the participants in the sample (Allefeld et al., 2016). Moreover, it may be that some participants show irregularities, for example due to incidental equipment failure, software bugs, or bad signal to noise ratio. It requires only a single line of code to also display single subject results when computing group results, by setting cfg.plotsubjects to true. In the code below, the single subject results for an analysis are plotted. When running the code, select EEG_FAM_VS_SCRAMBLED in the EEG_RAW folder when the selection dialog pops up.
%% PLOT SINGLE SUBJECT RESULTS
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.reduce_dims = 'diag';
% splinefreq acts as an 11 Hz low-pass filter:
cfg.splinefreq = 11;
cfg.plotsubjects = true;
% select EEG_FAM_VS_SCRAMBLED in dialog:
adam_compute_group_MVPA(cfg);
The result is shown in Figure 5. This figure shows the first level decoding result when comparing famous to scrambled faces. Single subject results are displayed on a grid, with the vertical axes equalized to enable easy comparison. The tick-marks are set at half the maximum classification performance of each subject. This way, one can quickly inspect whether all subjects show approximately the same effect, or whether any subjects show large deviations. Note that the code also specifies a splinefreq parameter in the cfg variable. When specifying this parameter, the data are down-sampled to that frequency, always including the sample that contains the largest peak (or trough) in the data. Subsequently, a spline is fitted through this down-sampled signal. This procedure effectively acts as a low-pass filter that retains the maximum (or minimum) in the signal, while removing high frequency information. This parameter is particularly useful when the results contain a lot of high-frequency noise (as is typically the case for individual subjects), and is only applied as a visualization step. Statistical testing is always applied to the unaltered data. The cfg.splinefreq parameter can of course also be applied when plotting at the group level, although we chose not to do so here. Topographic Maps As mentioned before, it is often useful to know the pattern of neural activity that gives rise to classification performance. However, weight vectors (the weights that correspond to the features resulting from the training procedure in a decoding analysis, electrodes in this case) are not directly interpretable as neural sources (Haufe et al., 2014). Therefore, ADAM can transform the weight vectors from BDM analyses to the weights that would result from a forward model. The procedure for this transformation is simple, and results in activation patterns that are directly interpretable as neural sources, thus allowing one to plot an interpretable topographical map of the activity that underlies the decoding result. The transformation has previously been described by Haufe et al. (2014), and involves taking the product of the classifier weights and the data covariance matrix. The resulting activation patterns are equivalent to the topographical map one would obtain from the univariate difference between the stimulus classes that were entered into the analysis. Yet, it is slightly more elegant to derive them this way because of the direct mapping between the decoding analysis and the topographical maps (at the same time providing a sanity check of the data integrity of the analysis). Alternatively, one can visualize the correlation/class separability maps that are obtained by taking the product of the classifier weights and the data correlation (instead of covariance) matrix. Correlation/class separability maps visualize activity patterns for which the task-related signal is both strong and highly correlated with the task, while at the same time minimizing the influence of strong artifacts such as eye-blinks (Haufe et al., 2014; Fahrenfort et al., 2017b). The following code visualizes, for each of the three main analyses, the activation patterns resulting from the forward-transformed decoding weights as topographical maps.
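A sketch of this code follows; adam_plot_BDM_weights and its call convention are described above, but the cfg field names for selecting the pattern type, time window and normalization are assumptions (consult help adam_plot_BDM_weights for the authoritative options).
%% PLOT ACTIVATION PATTERNS, 250-400 ms (sketch)
cfg = [];
cfg.mpcompcor_method = 'cluster_based';     % cluster test across electrodes
cfg.plotweights_or_pattern = 'covpattern';  % hypothetical field: forward-transformed patterns
cfg.timelim = [250 400];                    % hypothetical field: average between 250 and 400 ms
cfg.normalized = true;                      % hypothetical field: spatially z-score per subject
adam_plot_BDM_weights(cfg, mvpa_stats);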
The resulting topographical maps can be found in Figure 6. Interestingly, all three comparisons show clearly significant clusters after cluster-based permutation (Maris and Oostenveld, 2007).
[FIGURE 6 | Activation patterns from 250 to 400 ms, spatially normalized (z-scored) for every subject. Thick electrodes denote p < 0.05 under two-sided cluster-based permutation (Maris and Oostenveld, 2007).]
This is especially surprising for the famous vs. non-famous comparison, as classification performance was not significantly above chance for this comparison in this time interval (see Figure 4). This could have a number of causes. For example, in Figure 6 we plot topomaps for a particular time window (the average between 250 and 400 ms), rather than looking at above-chance classification performance across time, so the pre-selection of a temporal window is likely to impact the outcome of the cluster-based test. Relatedly, Figure 6 shows a cluster-based permutation test across electrodes (looking for clusters of contiguous electrodes that remain significant after random permutation), whereas Figure 4 performs a cluster-based permutation test of classification performance across time (looking for clusters of contiguous time samples that remain significant after random permutation). Finally, it is important to realize that a given classifier may not always succeed in extracting the relevant features to achieve above-chance classification performance, even when there is potentially relevant information in the data. Selecting a subset of features (electrodes/channels), a different accuracy measure (Bradley, 1997), different pre-processing steps (Grootswagers et al., 2017), or a different train-test algorithm (Cox and Savoy, 2003; Grootswagers et al., 2017) may all impact the degree to which a decoding analysis yields above-chance classification performance. EEG and MEG Temporal Generalization Results An additional advantage of performing multivariate analysis over ERPs is the ability to compute the stability of neural activity over time by inspecting the so-called temporal generalization matrix (King and Dehaene, 2014). Temporal generalization matrices display how well classification performance for a given time sample generalizes to all other time samples. Thus, a classifier is trained for every sample, and each of these classifiers is tested on all samples in the trial. If a classifier that was trained on a given sample yields high classification performance across samples from all other time points, this shows that the neural pattern of activation is stable; otherwise classification performance would not generalize to these other samples. The ability to inspect temporal generalization matrices needs to be specified during first-level analysis by setting cfg.crossclass = 'yes' (which was indeed the case, see section 2.9.7). In this section, we compute temporal generalization matrices for all three comparisons, separately for the EEG data and for the MEG data. When running the code below, a selection dialog will appear twice. The first time it appears, one should select the EEG_RAW folder; the second time, one should select the MEG_RAW folder.
%% COMPUTE ALL TEMPORAL GENERALIZATION MATRICES
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
% reduce the number of iterations to save time:
cfg.iterations = 250;
% select RAW_EEG when dialog appears:
eeg_stats = adam_compute_group_MVPA(cfg);
% select RAW_MEG when dialog appears:
meg_stats = adam_compute_group_MVPA(cfg);

The results of the EEG and MEG temporal generalization matrices are now stored in eeg_stats and meg_stats respectively. Importantly, we did not specify cfg.reduce_dims here, as we did when we previously ran adam_compute_group_MVPA. This means that the group analysis is applied to the entire temporal generalization matrix that was computed during the first-level analyses. Another thing to note is that we specified cfg.iterations = 250. This lowers the number of iterations that the cluster-based permutation test applies to 250, rather than the default 1000 iterations. This is merely done to save some computation time, with the only implication that the obtained p-values are slightly less accurate. To obtain more accurate cluster-based p-values, keep the default at 1000 or higher. To plot all resulting group temporal generalization matrices, both for EEG and MEG, one again uses adam_plot_MVPA: the eeg_stats and meg_stats variables are passed as a comma-separated list as before, and the cfg.plot_order parameter specifies the order in which to plot the comparisons, as has also been shown previously. The result can be seen in Figure 7, which shows the temporal generalization matrices for all three EEG comparisons in the top row and for MEG in the bottom row. When eyeballing these graphs, there are three notable differences between the EEG and MEG results. The first is the fact that the EEG matrices seem to achieve higher classification performance in the faces vs. scrambled comparisons when compared to MEG, especially along the diagonal, where the result is darker red for EEG than for MEG. The second is the observation that MEG seems to show better temporal generalization than EEG, as the colored portion of the MEG graphs extends further away from the diagonal (i.e., is more "square") than that of the EEG graphs. The third notable observation is that the famous vs. non-famous graph shows significant differences in MEG, but not in EEG.

EEG and MEG Stability Over Time When Training on 250-400 ms

To understand and visualize these differences more easily, it can be advantageous to pick a training time window and investigate to what extent that window generalizes to other time samples in the trial. For illustrative purposes, we use a training window between 250 and 400 ms, and plot how well the neural pattern observed in that window generalizes to the rest of the trial. When running the code below, as in the previous section, first select the EEG_RAW folder, and then the MEG_RAW folder.

%% COMPUTE TEMPORAL GENERALIZATION FOR 250-400 ms
cfg = [];
cfg.startdir = 'C:\MY_EXP\RESULTS';
cfg.mpcompcor_method = 'cluster_based';
% specify a 250-400 ms interval in training data:
cfg.trainlim = [250 400];
% average over that training interval:
cfg.reduce_dims = 'avtrain';
% select RAW_EEG when dialog appears:
eeg_stats = adam_compute_group_MVPA(cfg);
% select RAW_MEG when dialog appears:
meg_stats = adam_compute_group_MVPA(cfg);

Two new cfg parameters are important here: cfg.trainlim and cfg.reduce_dims. The trainlim parameter specifies the temporal window in milliseconds to which the training data (vertical axis in Figure 7) should be limited.
The parameter cfg.reduce_dims = 'avtrain' averages over the training window, in this case the period between 250 and 400 ms. The resulting stats structures evaluate how that train window generalizes to all other samples in the trial. This can subsequently be plotted with the same adam_plot_MVPA call as above, which produces the line graphs in Figure 8, again showing EEG in the top row and MEG in the bottom row. If decoding stays high throughout a line graph, this shows that the neural pattern of cortical activity that occurs between 250-400 ms is stable over time, as it is able to drive classification performance at all other time points. As one can see in Figure 8, this is indeed the case for MEG, where classification performance remains above chance all the way to the end of the trial at 1,500 ms. However, this is not the case for EEG, where classification performance drops off to chance toward the end of the trial period (in the faces vs. scrambled comparisons) or is at chance altogether (in the famous vs. non-famous faces comparison). This seems to confirm the observation made in Figure 7 that face-related processing generalizes better in MEG than in EEG. Also confirmed are the observations that initial decoding seems higher for EEG than for MEG and that classification performance for famous faces vs. non-famous faces is only significant for MEG and not for EEG.

Comparing EEG and MEG Decoding Accuracies Directly

Although seemingly interesting, the differences between EEG and MEG so far have been established by observing significance in one comparison while not observing significance in another comparison, and/or by eyeballing the data. For example, the famous faces vs. non-famous faces comparison yields significance in MEG, but not in EEG. However, such observations do not allow one to infer that EEG and MEG are differentially sensitive to the famous faces vs. non-famous faces comparison. That inference would require an explicit statistical test (Nieuwenhuis et al., 2011). As long as the data come from the same experiment and the same subjects, decoding analyses provide a common dependent measure to compare the extent to which different methodologies are able to recover differences between experimental conditions. To formally evaluate differences in classification performance across time between MEG and EEG, they can be compared in a statistical test. The adam_compare_MVPA_stats function provides this functionality: it takes the eeg_stats and meg_stats variables and returns difference stats, which can then be plotted as before. Interestingly, Figure 9 confirms that initial classification performance during the encoding phase is significantly higher in EEG than in MEG during the famous vs. scrambled faces comparison (left graph, below-chance classification performance early on), while temporal generalization is significantly higher in MEG than in EEG (left graph, above-chance classification performance toward the end of the trial). The same pattern can be seen in the non-famous vs. scrambled faces comparison, although the difference in the initial encoding phase does not survive multiple comparisons correction when applying cluster-based permutation. Although the MEG comparison of famous vs. non-famous faces was selectively significant in the original analysis, the direct comparison between EEG and MEG is not significant, plausibly due to a lack of power.

DISCUSSION

In this article, we have shown how to analyze a publicly available dataset from Wakeman and Henson (2015) using ADAM.
The analysis pipeline described here can easily be ported to other datasets by replacing the input filenames in the script and modifying the class definitions using one's own event codes. In the dataset we analyzed, subjects viewed famous, non-famous and scrambled faces. Unsurprisingly, the results show that ERPs can show similar outcomes to decoding analyses, as long as one knows which electrode(s) to select. However, there are a number of notable advantages to MVPA when compared to standard ERP analysis. For example, MVPA does not require one to select electrodes, as the decoding analysis automatically extracts informational content from the distribution of activity across all electrodes. Although prior feature (electrode) selection can still be beneficial to improve classification performance (for example selecting only occipital electrodes when a given task is visual), in principle this step is covered automatically by the training phase of a decoding analysis. In the analyses described here, the superiority of this approach becomes apparent when comparing Figure 3 (ERPs) to Figure 4 (classification performance). The decoding graph uncovers a significant difference between famous and non-famous faces that the ERP analysis does not identify. A plausible reason is that the decoding analysis automatically extracts information relevant to the difference between these conditions, which in ERPs would require prior knowledge about which electrodes to select, or require some split-half procedure (Kriegeskorte et al., 2009). Of course, this information is also present in the univariate ERPs somewhere (or the classification algorithm could not pick up on it), but experimental differences can be much harder to identify or substantiate using traditional ERPs than using MVPA if the locus of the effect is unknown (also see Fahrenfort et al., 2017a). Another advantage is that decoding analyses allow one to look at the stability of neural activation patterns over time (King and Dehaene, 2014). This advantage is unique to MVPA, as only multivariate analysis allows one to statistically characterize patterns of neural activity. For example, the temporal generalization matrices in Figure 7 reveal the degree to which representations reflecting the encoding of famous and non-famous faces generalize to later time points in a trial. Given the extent of above-chance decoding in the far corners of these graphs (the "squareness" of the red-colored region showing above-chance decoding performance), these figures suggest that representations of faces during encoding generalize better to other time points when characterizing them using MEG rather than EEG activity. This suggests that EEG and MEG measurements may be differentially sensitive to stable representations (maintenance) in the face processing network. To further investigate this, we looked at temporal generalization for a specific time window (between 250 and 400 ms), and subsequently compared this temporal generalization signal between MEG and EEG directly in Figure 9. These graphs reveal that decoding accuracy is better in EEG than in MEG during an early encoding phase, but that the signal generalizes better to later time points in MEG than in EEG.
This interaction in the temporal domain suggests that EEG and MEG tap into different properties of the face processing network: EEG seems to have a higher signal-to-noise ratio during the fleeting encoding phase, whereas MEG taps into cortical activity that is stable over time, plausibly reflecting maintenance involved in evaluating faces. Together, these analyses reveal a third potential advantage of MVPA. MVPA provides a common measure to directly compare observations obtained from different methodologies, as long as the data are obtained from the same subjects, using the same tasks. In the current manuscript, this was done when comparing EEG decoding accuracies to MEG decoding accuracies, but this methodology in principle also allows one to directly compare neural decoding sensitivity to behavioral sensitivity, as long as the data come from the same subjects and/or care is taken to properly normalize different dependent measures (Fahrenfort et al., 2017b). The analysis pipeline described in this article highlights three advantages of MVPA over traditional univariate analysis. A more in-depth treatment of the differences between standard univariate approaches and multivariate analysis can be found in Hebart and Baker (2017). In addition, there are a number of advantages of using ADAM to perform these analyses. ADAM makes it easy to move from ERP-, ERF-, or TFR-centered research to MVPA analyses, as it enables an easy side-by-side comparison between univariate and multivariate methodologies. This may be particularly helpful for those who have been performing ERP analyses and want to transition to MVPA-centered approaches. ADAM takes EEGLAB or Fieldtrip as input formats, making the switch relatively easy for those who have already been using standard MATLAB analysis toolboxes. To further enable this transition, ADAM takes care of a number of potential confounds that can easily plague an analysis pipeline put together by those not aware of some of the issues. For example, ADAM trades versatility for usability by automatically enforcing balanced designs and by computing AUC rather than overall accuracy. In addition, it allows one to run a multivariate analysis on raw data or automatically perform time-frequency analysis prior to multivariate analysis (not covered in this article), and it easily applies a FEM in addition to a BDM (not covered in this article). Many options are automatically applied by default, or can easily be executed or changed by specifying just one or two parameters in the cfg variable. Using ADAM also has disadvantages. ADAM is mostly maintained by a single person (the first author of this paper), and for that reason support is limited. ADAM's core functions were initially developed to support standard analyses by the first author, and were only later converted into a toolbox to support researchers who are considering a transition from ERP to MVPA analyses. Thus, although it aligns with the growing movement to promote open source in cognitive neuroscience (Gleeson et al., 2017), it does not necessarily provide the latest and greatest in multivariate analysis. For those already comfortable with programming and/or multivariate analysis, a number of more versatile alternatives for time-series based MVPA exist which have larger development teams, notably CoSMoMVPA (http://www.cosmomvpa.org, MATLAB), the Neural Decoding Toolbox (http://www.readout.info, MATLAB) (Meyers, 2013), the Decision Decoding Toolbox (http://ddtbox.github.io/DDTBOX, MATLAB) (Bode et al., 2018), MNE (http://www.martinos.org/mne/stable/manual/decoding.html, Python) (Gramfort et al., 2014) and the PyMVPA toolbox (http://www.pymvpa.org, Python) (Hanke et al., 2009). Yet, for those wanting to dip their toes into multivariate waters for the first time, ADAM could be a great start.

ETHICS STATEMENT

The study was approved by the Cambridge University Psychological Ethics Committee. Written informed consent was obtained from each participant prior to and following each phase of the experiment. Participants also gave separate written consent for their anonymized data to be freely available on the internet.

AUTHOR CONTRIBUTIONS

JF wrote the toolbox; JvD and JF designed analyses; JvD provided import scripts for original data and improved help files for the toolbox; JF, JvD, SvG, and CO wrote the paper and/or provided editorial guidance.
Fano lines in the reflection spectrum of directly coupled systems of waveguides and cavities: measurements, modeling and manipulation of the Fano asymmetry

We measure and analyze reflection spectra of directly coupled systems of waveguides and cavities. The observed Fano lines offer insight into the reflection and coupling processes. Very differently from side-coupled systems, the observed Fano line shape is not caused by the termini of the waveguide, but by the coupling process between the measurement fiber and the waveguide. Our experimental results and analytical model show that the Fano parameter that describes the Fano line shape is very sensitive to the coupling condition. A movement of the fiber well below the Rayleigh range can lead to a drastic change of the Fano line shape.

I. INTRODUCTION

Photonic crystal (PhC) [1] cavities are of tremendous interest for device applications due to their beneficial properties, such as small mode volume and high quality factor (Q) [2-4]. Multiple-cavity systems are great platforms to study fundamental physics and build all-optical devices [5-10]. Thus, the characterization of multiple-cavity systems composed of photonic crystal cavities is of great importance. Reflection measurement is a typical way of characterizing resonant systems. From a reflection spectrum, the resonance width and frequency can be obtained. Depending on the structure of a system, resonances shown in a spectrum can have a Lorentzian shape or a Fano line shape [11]. The Fano line shape shows up when a narrow resonance interferes with a continuum. Depending on how the continuum interacts with the resonance, a Fano line can show various profiles [11,12]. In many cases, the sharp asymmetric Fano line shape is preferred to the Lorentzian shape. For example, for optical switching, Fano resonances reduce the switching thresholds and give much larger switching contrast [13,14]. Fano lines appear very often and are widely studied in side-coupled single- and multi-cavity and waveguide systems. For side-coupled single-cavity and waveguide systems, Fano lines can be created by adding extra scattering or reflecting elements in the waveguides, since transmission is open within the bandwidth of the waveguides [15-17]. For side-coupled multi-cavity and waveguide systems, Fano line shapes show up naturally due to the direct and indirect cavity-cavity couplings, even without extra scattering and reflecting elements [18,19]. The situation is different in directly coupled single- and multi-cavity and waveguide systems (Fig. 1). In such systems, transmission is only open at the cavity resonance, and indirect cavity-cavity couplings are absent. If the light frequency is far off the resonance of the PhC cavity, the light will be completely reflected, since the system is closed. Therefore, reflection off the facet of the PhC waveguide does not drastically change the line shape. Only when the frequency of the incident light is close to the resonance of the PhC cavity does the reflection of the waveguide play a role. In Fig. 2, we show the calculated spectra of directly coupled multi-cavity and waveguide systems in two cases, one taking the reflection of the waveguide into account, the other ignoring it. In both cases, we see symmetric Lorentzian shapes. In contrast, we observe sharp asymmetric line shapes on top of Fabry-Pérot fringes in our experiments.
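For reference, such asymmetric line shapes are commonly parameterized by the standard Fano profile (presumably the "Fano line formula" of [12] used later to retrieve the asymmetry parameter q):

$$F(\epsilon) = \frac{(q+\epsilon)^2}{1+\epsilon^2}, \qquad \epsilon = \frac{2(\omega-\omega_0)}{\Gamma},$$

where $\omega_0$ and $\Gamma$ are the frequency and linewidth of the narrow resonance and $q$ is the asymmetry parameter: $|q| \to \infty$ recovers a symmetric Lorentzian peak, $q = 0$ gives a symmetric dip, and intermediate values produce sharp peak-dip profiles.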
Apparently, the observed strong asymmetric Fano line shape in a directly coupled system cannot be solely attributed to the termini of the waveguide.

FIG. 1. A schematic representation of the sample and experimental setup. The green membrane structure represents the sample. In the barrier waveguide (middle line defect), three cavities have been created by shifting the holes around the waveguide in a tapered way. The shift of the green holes is S1 = 0.0124a, the shifts of the red and blue holes are 2/3 S1 and 1/3 S1, and a is the lattice constant. The red cone next to the sample represents a polarization maintaining lensed fiber (PMF) used to couple light into the membrane.

The possibility to tune the asymmetry of the Fano line shape also arouses much interest. This tuning has been achieved by tuning the cavity resonances [15,20-22]. For a given cavity, different Fano line shapes can be obtained by deliberate design of the coupling to the continuum, i.e., by fabricating different samples [17,23,24]. However, after the structure is fabricated, it is very difficult to change the Fano line shape without changing the frequency of the resonance, because this requires changing the properties of the continuum or of the broad resonances. In this work, we experimentally and theoretically investigate Fano resonances in a multiple-cavity system directly coupled to a waveguide in a PhC membrane structure. We measure the Fano lines and manipulate their shape by tuning the resonances. We create an analytical model which uncovers the origin of the Fano line shape and accurately reproduces our experimental results. With the help of our model, we propose and experimentally demonstrate a way of directly manipulating the Fano line shape without tuning the cavity resonances.

FIG. 2. Reflection spectrum of a directly coupled waveguide and multiple-cavity system. Black squares: measured spectrum. Red line: calculated spectrum taking into account the reflection of the waveguide. Blue line: calculated spectrum ignoring the reflection of the waveguide facet. The calculations were performed without taking into account the lossy Fabry-Pérot cavity between the fiber tip and the photonic crystal chip.

II. THEORETICAL MODEL

The schematic of the system we consider is shown in Fig. 3. We explicitly include the lensed fiber used to couple light into the waveguide. Light propagation in this system can be separated into three processes. The first process is the light coupling between the lensed fiber and the input waveguide. The second process is the light transport in the input waveguide. The last process is the light coupling between the waveguides and cavities. A correct description of the coupling process between the lensed fiber and the input waveguide is essential for the formation of the Fabry-Pérot fringes in Fig. 2.

FIG. 3. Schematic of the optical system including coupled cavities directly coupled to waveguides. A lensed fiber as a measurement device is also taken into account in the system.

We use transfer matrices to model all these processes. The transfer matrix [25,26] connects the fields of forward and backward propagating waves on the left side to those on the right side,

$$\begin{pmatrix} S_{R+} \\ S_{R-} \end{pmatrix} = M \begin{pmatrix} S_{L+} \\ S_{L-} \end{pmatrix}.$$

Here S_{R±} is the forward (backward) propagating wave on the right side, S_{L±} is the forward (backward) propagating wave on the left side, and M is the transfer matrix that links them. The matrices describing each process are discussed in the appendix.

III. EXPERIMENTAL SETUP AND REFLECTION SPECTRA
In Fig. 1 we show a representation of the sample. Our sample is a photonic crystal membrane structure made of InGaP [27] with a thickness of 180 nm. The lattice constant is a = 485 nm, and the radius of the holes is 0.28a. Two waveguides are shown in the sample in Fig. 1. One is the input waveguide, with a width of 1.1√3a and a length of 219a. The other is the barrier waveguide, with a width of 0.98√3a. In the barrier waveguide, there are three mode-gap cavities [2]. They are created by shifting the holes around the barrier waveguide (Fig. 1). An output waveguide in line with the input waveguide is also present in the structure; however, it is placed further away from the third cavity. A polarization maintaining lensed fiber (red cone in Fig. 1) with numerical aperture (NA) 0.55 is used to couple light from a tunable continuous wave (CW) infrared (IR) laser to the sample. A fiber circulator is used to connect the lensed fiber and the laser. The third port of the fiber circulator is connected to a photodiode to measure the reflection spectra of the sample. The reflection spectrum is shown in Fig. 4. In Fig. 4(a), we see that the cavity resonances form different line shapes on top of Fabry-Pérot fringes. The first Fano resonance (Fig. 4(b)) is between 1544 nm and 1545 nm, and is a wide and deep valley with a slight asymmetry. The second Fano resonance, between 1543.5 nm and 1544 nm, has a sharp asymmetric line shape. The peak intensity of this resonance is half the maximum intensity of the background fringes. The third resonance (Fig. 4(c)), between 1541.95 nm and 1542.10 nm, is less pronounced. In order to compensate for the disorder of the cavities [29], we use a CW diode laser (λ_pump = 405 nm) to tune the frequencies of the cavities in the sample by laser-induced heating [30]. To control the first and third cavity simultaneously, two foci are projected on the surface of the sample by an objective with NA 0.4. These foci are generated with the help of a reflective spatial light modulator (SLM) in the pump path. A digital holographic phase pattern is written on the SLM for the generation of the foci [28]. When the laser spots are focused on the cavities, their resonances are tuned to the red. Although there is no direct laser light on the second cavity, the resonance of cavity 2 also shifts due to heat diffusion. The reflection spectrum of the tuned device is shown in Fig. 5 [29]. The power of the spots on cavity 1 and cavity 3 is 9 µW and 108 µW respectively. We see in Fig. 5(a) that the three resonances occur within 1 nm in the spectrum. One has a sharp asymmetric line shape. The other two have less asymmetric line shapes and form two deep valleys in the spectrum. The modulation of the Fano dip shown in Fig. 6(b) is attributed to SLM noise.

IV. ANALYSIS OF REFLECTION SPECTRA

The analysis of the reflection spectra is done by fitting experimental results with our model. We first fit the original spectrum, then the spectrum of the tuned device. The values of the parameters in our model are listed in Table 1. The fit of our model to the original spectrum is shown in Fig. 4. The fit agrees very well with our data for resonances 1, 2 and the background fringes. It accurately characterizes the period and visibility of the fringes. Meanwhile, it describes the line shapes, widths, and heights of the peaks and the depths of the troughs for resonances 1 and 2. However, we cannot correctly reproduce the depth and width of resonance 3 at the same time.
The probable cause for this is a direct coupling term between the waveguide and cavities 2 and 3, or a second-neighbor coupling between the cavities. Such terms have been ignored in our model, as they would lead to an excessive number of free parameters. The fit of our model to the tuned reflection spectrum is shown in Fig. 5. We see that for all three resonances the fit agrees very well with the experimental data. There are slight deviations between the fit and the experimental data on the wings of the resonances. These wings are mostly determined by the Fabry-Pérot fringes. To characterize them accurately, accurate knowledge of the curvature of the PhC waveguide band is needed. In our model, we only use two parameters, m and ω_edge, to describe the band. This is an approximation only valid for a narrow frequency band. The values are obtained by fitting the reference spectrum shown in Fig. 4(a). Thus, the small deviation shown in the tuned spectrum, which is in a different wavelength range, is reasonable. The fact that the fit from our model is in excellent agreement with the experiment shows that our model describes the physical processes of the system accurately. It not only explains the physical origin of the observed Fano resonances but also provides the key parameters of the sample, such as the intrinsic loss rates of the cavities. The fitting results show that the intrinsic Q factor is larger than 10^5. After we apply the tuning, the frequency detuning between cavities 1 and 2 is reduced below their coupling rate; the same holds for cavities 2 and 3. We can describe the lensed fiber, air gap, input waveguide and the cavities together as a special Fabry-Pérot cavity. If we ignore the reflection from the facet of the input waveguide, the first "mirror" of this Fabry-Pérot cavity is the tip of the lensed fiber, and the second "mirror" is the system of photonic crystal cavities. The length of the cavity is the total length of the air gap together with the input waveguide. The phase shift of a single round trip of this Fabry-Pérot cavity consists of two parts: the first is the phase shift from propagation, and the second is the phase shift due to the reflection from the second "mirror". The specialty of the second "mirror" is that it is very dispersive around the cavity frequencies. The Fano line shape is determined by the phase shift of the round trip. Therefore, we conclude that the Fano line shape of the resonance can be tuned by changing the length of the air gap.

V. SENSITIVITY OF THE FANO LINE SHAPE

We perform an experiment to test this prediction on a new sample with the same parameters as our previous sample. Due to the inevitable disorder, the resonance positions of the cavities appear at different wavelengths. We measure reflection spectra for different sizes of the air gap. This was done by moving the sample step by step away from the lensed fiber with our precise translation stage. The measured reflection spectra are shown in Fig. 6. In Fig. 6(a), the reference spectrum is presented (the reference distance between the lensed fiber and the sample is denoted by ∆ and is the distance at which the coupling is optimized), and we see a Fano resonance in the form of a dip with a slight asymmetry. After the reference measurement, we increase the length of the air gap with a step size of 100 nm. The spectra with air gap sizes ∆ + 100 nm, ∆ + 300 nm and ∆ + 500 nm are shown in Fig. 6(b), Fig. 6(c) and Fig. 6(d) respectively. In Fig.
6(b), we see a sharp asymmetric Fano resonance with a peak at a short wavelength and a dip at a longer wavelength. In complete contrast to the reference spectrum, we see a peak with a slight asymmetry instead of a dip in Fig. 6(c). In Fig. 6(d), we again see a sharp asymmetric Fano resonance; however, it is almost a flipped version of Fig. 6(b), with a dip at a short wavelength and a peak at a longer wavelength. We also retrieve the q parameter that describes the asymmetry of the Fano lines using the Fano line formula [12]. We also plot the fits from our analytical model. In the fits, all further parameters are kept the same for the results from Fig. 6(a) to Fig. 6(d) except the coupling loss, since it increases slightly as we increase the size of the air gap. The fits agree well with our experiment, which confirms that changing the air gap size causes the drastic change of the Fano line shape. The results shown in Fig. 6 confirm our prediction that the shape of a Fano resonance can be tuned by changing the size of the air gap between the sample and the lensed fiber, and demonstrate that the origin of the Fano line shape is indeed the interference of the sharp resonances with the broad resonances defined by the reflection of the lensed fiber and the coupling loss. The maximum distance we move the fiber to manipulate the Fano line shape is only 500 nm from the optimal coupling point. Since the Rayleigh range is 1.9 µm, the coupling efficiency experiences only a very small change. In contrast, the Fano line shape, as we show in Fig. 6, experiences a drastic change.

VI. CONCLUSION

In summary, Fano resonances in the reflection spectra of a directly coupled waveguide-cavity system in a photonic crystal membrane structure have been experimentally and theoretically investigated. Our theoretical model is in excellent agreement with our experimental results and provides important information on the very low bare loss rates of the cavities. The origin of the Fano line shape is the interference between the wave reflected from the lensed fiber and the wave reflected from the photonic crystal cavities. The path length difference between these waves is a round trip over the air gap and the input waveguide. We propose and experimentally show that the Fano asymmetry parameter can be tuned drastically by changing the air gap size between the sample and the fiber by only 100 nm, a distance well below the Rayleigh range. Our model can be used to investigate other physical processes in the system, such as the dynamical tuning of the Fano asymmetry by ultrafast switching [31,32].

APPENDIX

To derive the matrix which describes the light coupling between the waveguides and cavities, we use the temporal coupled-mode equations. In the equations, we only consider the coupling between the first (last) cavity and the input (output) waveguide, with coupling rates γ_1 and γ_2 respectively. We use a_j(t) (j = 1, 2, 3) to denote the time evolution of the field in cavity j, and S_{l±} (l = 1, 2) to denote the amplitude of the mode in waveguide l; l = 1 represents the input waveguide and l = 2 represents the output waveguide. The "±" represents forward (backward) propagation. From coupled mode theory [1,35-37], we use the following equations to describe the dynamics of the system:

$$\begin{aligned} \frac{da_1}{dt} &= i\omega_1 a_1 - (\gamma_{01}+\gamma_1)\,a_1 + \sqrt{2\gamma_1}\,S_{1+} + i\frac{\Gamma_1}{2}\,a_2,\\ \frac{da_2}{dt} &= i\omega_2 a_2 - \gamma_{02}\,a_2 + i\frac{\Gamma_1}{2}\,a_1 + i\frac{\Gamma_2}{2}\,a_3,\\ \frac{da_3}{dt} &= i\omega_3 a_3 - (\gamma_{03}+\gamma_2)\,a_3 + \sqrt{2\gamma_2}\,S_{2-} + i\frac{\Gamma_2}{2}\,a_2,\\ S_{1-} &= -S_{1+} + \sqrt{2\gamma_1}\,a_1,\\ S_{2+} &= -S_{2-} + \sqrt{2\gamma_2}\,a_3. \end{aligned} \tag{A.6}$$
In Eq. (A.6), ω_j is the actual bare frequency of cavity j, defined as ω_j = ω_0 + δω_j, where ω_0 is the intended intrinsic frequency of the cavities and δω_j (j = 1, 2, 3) represents the frequency deviation of cavity j from the intended bare resonance frequency due to fabrication disorder. Γ_1 is the coupling rate between cavities 1 and 2, and Γ_2 is the coupling rate between cavities 2 and 3. We solve Eq. (A.6) in matrix form in the Fourier domain and obtain the matrix M_III, the transfer matrix that links (S_{2-}, S_{2+}) and (S_{1-}, S_{1+}). The lengthy but straightforward expression is not shown here. The matrix that describes all the processes is M_sys = M_III · M_II · M_I. In our experiment, the value of γ_2 is small enough to assume that all the elements after the output waveguide decouple from the system and do not influence the reflection spectrum.

Model without the lensed fiber

In the model without the lensed fiber, M_I = M_pr. The remaining processes, described by M_II and M_III, stay the same.
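To illustrate how such a chain of transfer matrices yields a reflection spectrum in practice, the following is a minimal numerical sketch; the matrix conventions, mirror reflectivity, and gap length are illustrative assumptions, not the matrices or parameter values of this work:

% Sketch: reflectance of a minimal transfer-matrix chain.
% Assumed convention: (S_R+; S_R-) = M * (S_L+; S_L-); with no input
% from the right (S_R- = 0), the field reflectance is r = -M(2,1)/M(2,2).
lambda = linspace(1541e-9, 1546e-9, 2000);  % wavelength sweep (m)
L  = 5e-6;                                  % assumed air-gap length
r1 = 0.2;                                   % assumed fiber-tip reflectivity
t1 = sqrt(1 - r1^2);                        % lossless mirror transmission
R  = zeros(size(lambda));
for n = 1:numel(lambda)
    k  = 2*pi/lambda(n);
    Mi = (1/t1) * [1, r1; r1, 1];           % partially reflecting interface
    Mp = [exp(1i*k*L), 0; 0, exp(-1i*k*L)]; % free propagation over L
    M  = Mp * Mi;  % further factors (waveguide, cavity "mirror") chain here
    R(n) = abs(-M(2,1)/M(2,2))^2;
end
plot(lambda*1e9, R); xlabel('\lambda (nm)'); ylabel('Reflectance');

With only one partially reflecting interface the computed reflectance is flat (R = r1^2); it is the additional, strongly dispersive cavity "mirror" in the chain that turns the Fabry-Pérot round-trip phase into the measured Fano line shapes.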
Unveiling the gaps: Hypertension control beyond the cascade of care framework

Abstract

This study examines hypertension control beyond the cascade of care framework, which assesses awareness, treatment, and control sequentially. The analysis included 52 434 hypertensive adults (blood pressure (BP) ≥140/90 mm Hg and/or treatment in the past 6 months), aged 25-69, from the French population-based CONSTANCES cohort from 2012 to 2021. The authors assessed the typical "awareness, treatment, and control" scenario and characterized other possible control patterns. The authors found that 13% achieved control. This percentage rose to 19% when considering individuals who were not aware but treated and controlled. This alternative control scenario was associated with female sex, younger age, higher education, North African origin, and reporting prior cardiovascular diseases (CVD). Sub-Saharan African origin, diabetes, and overweight/obesity were associated with the typical control scenario. This study highlights that applying a typical sequential cascade of care approach may lead to the exclusion of some specific groups of participants who do not fit into the defined categories.

INTRODUCTION

The cascade of care framework assesses hypertension awareness, treatment, and control sequentially.1-3 A fundamental principle of this framework is that each successive step is a prerequisite for the next: awareness must precede treatment, and treatment must precede control. This study focuses on the alternative paths, relevant to research using observational data, that are not considered in this "typical" hypertension care cascade and can nevertheless lead to hypertension control.

METHODS

The CONSTANCES cohort design has previously been published.5 Study participants visited health screening centers (HSC) for comprehensive health assessments, including doctor-administered questionnaires and three standardized blood pressure (BP) readings.6 The CONSTANCES study team furnished standardized measurement procedures, and 705IT devices with appropriately sized cuffs were supplied. BP measurements were conducted as follows: after a 5-min resting period in a lying position, three measurements were taken. The first measurement was on the right arm, followed by the second on the left arm after a 1-min interval. Subsequently, the third measurement was taken on the reference arm (identified as the arm with the highest value) after another 1-min interval. The average of the two reference arm measurements was utilized to determine systolic and diastolic BP. Consenting individuals were linked to the French National Health Data System (SNDS), providing data on antihypertensive treatment reimbursements.

Definitions

Hypertension was defined as having a measured BP ≥140/90 mm Hg and/or having been reimbursed for at least one antihypertensive medication in the past 6 months. Individuals with hypertension were considered aware if they reported hypertension in their medical history when answering the doctor-administered questionnaire. Hypertensive patients were considered under treatment if they were reimbursed for at least one box of antihypertensive medication within 6 months before inclusion.
Control was determined if individuals with hypertension had an average BP < 140/90 mm Hg. The "typical control scenario" followed the traditional care cascade and was defined as the sequential pattern of awareness, treatment, and control. The "alternative control scenario" was defined as the sequential pattern of unawareness, treatment, and control.

Covariates

Biological sex and age were recorded at inclusion. Highest level of education was self-reported at inclusion. Migratory status was used as a proxy for ethnoracial group7 and was assessed using the participant's declaration of their and their parents' geographical region of origin and their nationality at birth.8 First- and second-generation immigrants were grouped together, as were overseas department natives and descendants. The ethnoracial group was coded as follows: the Majority group, the Overseas France group, the North African group, the Sub-Saharan African group, the Asian group, and the Europe and others group. History of other CVD (myocardial infarction, coronary artery disease, stroke, transient ischemic attack, abdominal aortic aneurysm, peripheral artery disease, heart failure, other) was either reported when answering the doctor-administered questionnaire or identified by medical reimbursements. Diabetes was defined if participants reported type II diabetes, were receiving antidiabetic medication, or if their fasting blood glucose concentration was > 7 mmol/L. Overweight/obesity was assessed at the HSC.

Study participants

We pooled data from 196 304 adults included in the CONSTANCES cohort (2012-2021). We excluded individuals with missing data on hypertension treatment reimbursements, measures, or self-report. We also excluded pregnant women. Finally, we selected individuals aged 25-69. In our sample of 174 606 French adults, 30% (n = 52 434) had hypertension. Descriptive analyses were performed on typical and alternative control scenarios. Each characteristic was described by comparing respondents who achieved control via the alternative scenario to those in the typical scenario, using logistic regressions.

RESULTS

When considering the typical care cascade framework, among the 52 434 individuals with hypertension, 35.6% were aware, 32.2% were aware and treated, and 13.0% had control of their hypertension (Figure 1A). When considering the alternative scenario (Figure 1B), the proportion of those in the control group rose to 19.0%, because individuals with hypertension who were not aware, but who were treated and controlled (6.0%), were included. Overall, the "Treated" share in the second graph rose to 40.7% compared to the first graph. This increase was due to an additional 8.4% of individuals with hypertension who were not aware but were treated. Compared to the typical control path, the alternative path to achieve control (not aware, treated, controlled) was positively associated with being a woman, younger age, pre-existing CVD, having a postgraduate degree, and being of the North African group. It was negatively associated with having diabetes, overweight/obesity, and being part of the Sub-Saharan African group (Table 1).
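To make the two control scenarios concrete, the classification logic can be sketched as follows (a minimal illustration with toy flags, not the CONSTANCES variables or data):

% Classify hypertensive individuals into cascade-of-care control scenarios
% from three per-individual boolean flags (toy values).
aware   = logical([1 1 0 0 1]);  % self-reported hypertension
treated = logical([1 1 1 0 0]);  % antihypertensive reimbursed in past 6 months
ctrl    = logical([1 0 1 0 0]);  % mean BP < 140/90 mm Hg
typical     =  aware & treated & ctrl;  % aware -> treated -> controlled
alternative = ~aware & treated & ctrl;  % treated and controlled, yet unaware
fprintf('typical control: %.1f%%\n',     100*mean(typical));
fprintf('alternative control: %.1f%%\n', 100*mean(alternative));

In the cohort, the first combination corresponds to the 13.0% typical control figure, and adding the second combination is what raises overall control to 19.0%.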
CONCLUSIONS

In a large French cohort, we used observational data to identify that 19.0% of hypertensive patients achieved control, among whom one third were rendered invisible if the criteria of a typical cascade of care framework were applied. Different hypotheses can be made on the basis of the respondents' characteristics. One hypothesis is that drugs of antihypertensive classes are prescribed for cardiovascular indications other than hypertension.11,12 Indeed, in our study, having a history of other CVD was strongly associated with the alternative control path, which corroborates this hypothesis. When considering only individuals with a history of other CVD, the control rate rose from 18.3% in the typical path to 32.3% when considering the alternative path, while it rose from 11.5% to 15.2% among individuals without any history of other CVD (Figure S1). Some individuals might therefore be categorized as hypertensive because they receive treatment, without having elevated BP levels, potentially introducing a selection bias that creates an overestimation of hypertension prevalence and needs to be addressed. For example, in our study, 78% of individuals in the alternative path were under β-blockers versus 49% of individuals in the typical path (Table S1). β-blockers are recommended in the management of myocardial infarction, coronary heart disease, and heart failure,13 and were mainly present in individuals with these diseases in our study (Table S2). Furthermore, the molecules selected to identify an "antihypertensive treatment" in an administrative dataset such as the SNDS (Figure S2) might also contribute to a definition bias. Another hypothesis revolves around the potential ambiguity in reporting hypertension. It is possible that having a controlled BP might lead some patients to report not having hypertension, contributing to reporting bias. Furthermore, some respondents with controlled hypertension might also be undergoing treatment without a comprehensive understanding of the diagnosis, which could raise ethical issues. Although our data do not allow us to evaluate the extent to which this situation is prevalent, studies in the USA,14 Ireland,15 and Italy16 have shown that not knowing one's diagnosis or treatment plan is not unusual when patients are discharged from hospitals. This underlines the importance of physician-patient communication in a patient's treatment plan, especially when hypertension is diagnosed in the hospital, which might be more frequent in France than in other countries, due to the absence of large-scale detection campaigns in the population.17 Our study found that the typical control path was associated with overweight/obesity, diabetes, and older age. We hypothesize that respondents with these demographics and medical conditions may be more likely to be engaged in the healthcare system, potentially contributing to a better insertion in the hypertension care continuum. The alternative control scenario was more likely among women. This could partially be due to a lack of sex-sensitive approaches in the design, analysis, and interpretation of research on hypertension.18 The alternative control scenario was also associated with a postgraduate degree. This could stem from the higher hypertension unawareness rate among the most educated in the CONSTANCES cohort.19 Sub-Saharan African origin was correlated with the typical control scenario. This may be the result of clinical guidelines which recommended a targeted and tailored approach to hypertension management towards this group.20
Further research is essential to elucidate the patterns of hypertension management based on the respondents' sex and ethnoracial group.

TABLE 1 Factors associated with an alternative control within the care cascade (compared to typical).

The cascade of care framework was initially introduced in the 2000s to assess the loopholes in the sexually transmitted illness (STI) care continuum. HIV likely represents the most established and successful application of the model.21 More recently, its use has been adapted to other communicable (other STIs, hepatitis C) and noncommunicable diseases (diabetes, hypertension).21 Perlman and coworkers, while examining care continua for HIV, hepatitis C, and tuberculosis, warn that adapting the model to any disease must imply a disease-focused reflection on the definitions of each of its stages. In the case of hypertension, the traditional definition of treatment might not be well adapted, as it very often comprises medical treatment only, although lifestyle and diet adaptation are a part of the treatment, and sometimes suffice without the need for pharmacological intervention.

As a linear path to treatment, the cascade of care has shown some limitations. Regarding HIV, Hallett and Eaton22 suggested considering "side door entries," that is, entries or reentries in the cascade of care that do not occur at the front door of the cascade, in our case awareness. A scenario put forward by the authors is the "drop-out reinitiating" scenario. In the case of hypertension, where adherence to the treatment is low,23 we hypothesize that reentering the cascade through the treatment side-door is possible. Perlman and coworkers also question the scenario in which aware and treated patients relapse after having had control of their condition for a period of time. To our knowledge, this has not been examined for hypertension. Therapeutic inertia, the failure to adequately intensify or up-titrate treatment, is common in hypertension management,12 and could lead patients to reenter the cascade at a previous step.

Finally, although our methodology to assess prevalence and the different steps of the cascade of care in the general population has largely been validated and published,1,2,12 it still shows limitations. BP measurement in a single visit does not suffice to confirm a clinical diagnosis of hypertension.12 Hypertension prevalence based on data collected in multiple visits might be lower than our estimations,2 especially in populations that are not used to frequent BP measurements.24 Our study might therefore overestimate hypertension prevalence and underestimate control. Furthermore, the choice of molecules to define hypertensive treatment is rarely communicated in epidemiological studies, often due to insufficient data, although it may lead to a selection bias.

In conclusion, using the typical hypertension cascade of care framework might contribute to underestimating control, overestimating prevalence, and excluding specific social groups. The approach developed in this study is especially relevant to observational research on hypertension in the general population, although the alternative control scenario highlighted here should be tested on different study types so as to improve the understanding of hypertension control and identify more specific areas for improvement.
The use of the cascade of care framework has proved efficient in the identification of the loss of patients throughout the hypertension care continuum, and in determining hypertension control in the general population. Many recent major observational studies have used this framework,1-3 enabling a systematic approach and contributing to improved comparability between periods and countries.1,2 However, one of the main disadvantages of this framework is that a proportion of people who achieve control might not be captured because they fall between the gaps of typical classification, as shown in our study by the share of individuals with hypertension in the alternative control scenario. To our knowledge, most studies do not include them in their assessment of hypertension control.

FIGURE 1 Hypertension care cascade and paths to achieve control. (A) Hypertension care cascade and the typical path to achieve control. (B) Hypertension care cascade and an alternative path to achieve control. The blue path represents the "typical" path to hypertension control according to the cascade of care framework. The red path represents an "alternative" path to hypertension control: not aware, treated, controlled. *With the exception of respondents who were both not aware and treated.
Measuring Thematic Fit with Distributional Feature Overlap

In this paper, we introduce a new distributional method for modeling predicate-argument thematic fit judgments. We use a syntax-based DSM to build a prototypical representation of verb-specific roles: for every verb, we extract the most salient second order contexts for each of its roles (i.e. the most salient dimensions of typical role fillers), and then we compute thematic fit as a weighted overlap between the top features of candidate fillers and role prototypes. Our experiments show that our method consistently outperforms a baseline re-implementing a state-of-the-art system, and achieves better or comparable results to those reported in the literature for the other unsupervised systems. Moreover, it provides an explicit representation of the features characterizing verb-specific semantic roles.

1 Introduction

Several psycholinguistic studies in the last two decades have brought extensive evidence that humans activate a rich array of event knowledge during sentence processing: verbs (e.g. arrest) activate expectations about their typical arguments (e.g. cop, thief) (McRae et al., 1998; Altmann and Kamide, 1999; Ferretti et al., 2001; McRae et al., 2005; Hare et al., 2009; Matsuki et al., 2011), and nouns activate other nouns typically co-occurring in the same events (Kamide et al., 2003; Bicknell et al., 2010). Subjects are able to determine the plausibility of a noun for a given argument role and quickly use this knowledge to anticipate upcoming linguistic input (McRae and Matsuki, 2009). This phenomenon is referred to in the literature as thematic fit. Thematic fit estimation has been extensively used in sentence comprehension studies on constraint-based models, mainly as a predictor variable allowing one to disambiguate between possible structural analyses.1 More generally, thematic fit is considered a key factor in a variety of studies concerned with structural ambiguity (Vandekerckhove et al., 2009). Starting from the work of Erk et al. (2010), several distributional semantic methods have been proposed to compute the extent to which nouns fulfill the requirements of verb-specific thematic roles, and their performances have been evaluated against human-generated judgments (Baroni and Lenci, 2010; Lenci, 2011; Sayeed and Demberg, 2014; Sayeed et al., 2015; Greenberg et al., 2015a,b). Most research on thematic fit estimation has focused on count-based vector representations (as distinguished from prediction-based vectors).2 Indeed, in their comparison between high-dimensional explicit vectors and low-dimensional neural embeddings, Baroni et al. (2014) found that thematic fit estimation is the only benchmark on which prediction models lag behind state-of-the-art performance. This is consistent with the observation that "thematic fit modeling is particularly sensitive to linguistic detail and interpretability of the vector space". The present work sets itself among the unsupervised approaches to thematic fit estimation. By relying on explicit and interpretable count-based vector representations, we propose a simple, cognitively-inspired, and efficient thematic fit model using information extracted from dependency-parsed corpora. The key features of our proposal are a) prototypical representations of verb-specific thematic roles, based on feature weighting and filtering of second order contexts (i.e.
contexts that are salient for many of the typical fillers of a given verb-specific thematic role), and b) a similarity measure which computes the Weighted Overlap (WO) between prototypes and candidate fillers.3

3 Code: https://github.com/esantus/Thematic Fit

2 Related Work

Erk et al. (2010) were, to the best of our knowledge, the first authors to measure the correlation between human-elicited thematic fit ratings and the scores assigned by a syntax-based Distributional Semantic Model (DSM). More specifically, their gold standard consisted of the human judgments collected by McRae et al. (1998) and Padó (2007). The plausibility of each verb-filler pair was computed as the similarity between new candidate nouns and previously attested exemplars for each specific verb-role pairing (as already proposed in Erk (2007)). Baroni and Lenci (2010) evaluated their Distributional Memory (henceforth DM)4 framework on the same datasets, adopting an approach to the task that has become dominant in the literature: for each verb role, they built a prototype vector by averaging the dependency-based vectors of its most typical fillers. The higher the similarity of a noun with a role prototype, the higher its plausibility as a filler for that role. Lenci (2011) later extended the model to account for the dynamic update of the expectations on an argument, depending on how another role is filled. By using the same DM tensor, this study tested an additive and a multiplicative model (Mitchell and Lapata, 2010) to compose and update the expectations on the patient filler of the subject-verb-object triples of the Bicknell dataset (Bicknell et al., 2010).

4 In this paper, we will make reference to two different models of DM: DepDM and TypeDM. DepDM counts the frequency of dependency links between words (e.g. read, obj, book), while TypeDM uses the variety of surface forms that express the link between words, rather than the link itself.

The thematic fit models proposed by Sayeed and Demberg (2014) and Sayeed et al. (2015) are similar to Baroni and Lenci's, but their DSMs were built by using the roles assigned by the SENNA semantic role labeler (Collobert et al., 2011) to define the feature space. These authors argued that the prototype-based method with dependencies works well when applied to the agent and to the patient role (which are almost always syntactically realized as subjects and objects), but that it might be problematic to apply it to other roles, such as instruments and locations, as the construction of the prototype would have to rely on prepositional complements as typical fillers, and the meaning of prepositions can be ambiguous. Comparing their results with Baroni and Lenci (2010), the authors showed that their system outperforms the syntax-based model DepDM and almost matches the scores of the best performing TypeDM, which uses hand-crafted rules. Moreover, they were the first to evaluate thematic role plausibility for roles other than agent and patient, as they also computed the scores for the instruments and the locations of the Ferretti datasets (Ferretti et al., 2001). Greenberg et al. (2015a,b) further developed the TypeDM and the role-based models, investigating the effects of verb polysemy on human thematic fit judgments and introducing a hierarchical agglomerative clustering algorithm into the prototype creation process.
Their goal was to cluster together typical fillers into multiple prototypes, corresponding to different verb senses, and their results showed constant improvements of the performance of the DM-based model. Finally, Tilk et al. (2016) presented two neural network architectures for generating probability distributions over selectional preferences for each thematic role. Their models took advantage of supervised training on two role-labeled corpora to optimize the distributional representation for thematic fit modeling, and managed to obtain significant improvements over the other systems on almost all the evaluation datasets. They also evaluated their model on the task of composing and updating verb argument expectations, obtaining a performance comparable to Lenci (2011).

3 Methodology

As has been pointed out, most works on unsupervised thematic fit estimation vary in the method adopted for constructing the prototypes. The semantic role prototype is usually a vector, obtained by averaging the most typical fillers, and the plausibility of new fillers depends on their similarity to the prototype, assessed by means of vector cosine (the standard similarity measure for DSMs; see Turney and Pantel (2010)). Its merits notwithstanding, we argue that this method is not optimal for characterizing roles. Distributional vectors are typically built as out-of-context representations, and they conflate different senses. By building the prototype as the centroid of a cluster of vectors and then measuring the thematic fit with vector cosine, the plausibility score is inevitably affected by many contexts that are irrelevant for the specific verb-argument combination.5 This is likely to be one of the main reasons behind the difficulties of modeling roles other than agent and patient with syntax-based DSMs. We claim that improving the prototype representation might lead to a better characterization of thematic roles, and to a better treatment of polysemy. When a verb and an argument are composed, humans are intuitively able to select only the part of the potential meaning of the words that is relevant for the concept being expressed (e.g. in The player hit the ball, humans would certainly exclude from the meaning of ball semantic dimensions that are strictly related to its dancing sense). In other words, not all the features of the semantic representations are active, and the composition process makes some features more 'prominent', while moving others to the background.6 Although we are not aware of experimental works specifically dedicated to verb-argument composition, a similar idea has been supported in studies on conceptual combinations (Hampton, 1997, 2007): when a head and a modifier are combined, their interaction affects the saliency of the features in the original concepts. For example, in racing car, the most salient properties would be those related to SPEED, whereas in family car SPACE properties would probably be more prominent. Yeh and Barsalou (2006) used a property priming experiment to show how the concept features activated during language comprehension vary across the background situations described by the sentence they occur in. When concepts are combined in a sentence, the features that are relevant for the specific combination are activated and are then easier to verify for human subjects.
The same could be true for linguistically-derived properties of lexical meaning: neuroimaging studies have brought evidence of the early activation of word association areas during property generation tasks, and Santos et al. (2011) showed that word associates are often among the properties generated for a given concept. Such findings suggest that, while we combine concepts, both embodied simulations and word distributions influence property salience. Our model makes the following assumptions:

• the composition between a verb role representation and an argument shares the same cognitive mechanism underlying conceptual combinations;

• at least part of semantic representations is derived from, and/or mirrored in, linguistic data. Consistently, the process of selecting the relevant features of the concepts being composed corresponds to modifying the salience of the dimensions of distributional vectors;

• thematic fit computation is carried out on the basis of the activation and selection of salient features of a verb thematic role prototype and of the candidate argument filler vectors.

We rely on syntax-based DSMs, using dependency relations to approximate verb-specific roles and to identify their most typical fillers: for agents/patients, we extract the most frequent subjects/objects, for instruments we use the prepositional complements introduced by with, and for locations those introduced by either on, at or in. Assuming that the linguistic features of distributional vectors correspond to the properties of conceptual composition processes, a candidate filler can be represented as a sorted distributional vector of the filler term, in which the most salient contexts occupy the top positions. Similarly, the abstract representation of a verb-specific role is a sorted prototype vector, whose features derive from the sum of the most typical filler vectors for that verb-specific role. Differently from Baroni and Lenci, the core and novel aspect of our proposal, described in the following subsections, is that we do not simply measure the correlation between all the features of candidate and prototype vectors (as vector cosine would do on unsorted vectors), but rather we rank and filter the features, computing the weighted overlap with a rank-based similarity measure inspired by APSyn, a recent proposal by Santus et al. (2016a,b,c) which has shown interesting results in synonymy detection and similarity estimation. As we will show in the next sections, the new metric assigns high scores to candidate fillers sharing many salient contexts with the verb-specific role prototype.

3.1 Typical Fillers

The first step of our method consists in identifying the typical fillers of a verb-specific role. Following Baroni and Lenci (2010), we weighted the raw co-occurrences between verbs, syntactic relations and fillers in the TypeDM tensor of DM with Positive Local Mutual Information (PLMI; Evert (2004)). Given the co-occurrence count O_vrf of the verb v, a syntactic relation r and the filler f, we computed the expected count E_vrf under the assumption of statistical independence:

E_vrf = (O_v·· × O_·r· × O_··f) / (O_···)^2, with PLMI(v,r,f) = max(0, O_vrf × log(O_vrf / E_vrf))

From the ranked list of (v,r,f) tuples, for each slot, we selected as typical fillers the top k lexemes with the highest PLMI scores (see examples in Table 1, Typical Fillers column). In our experiments, we report results for k = {10, 30, 50}.

3.2 Role Prototype Vectors

To represent the typical fillers, the candidate fillers and the verb-specific role prototypes (which are obtained by summing their typical filler vectors), we built a syntax-based DSM.
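As a concrete illustration of the filler-selection step, the following minimal sketch (Python; function and variable names are our own, and it assumes the tensor is available as plain (verb, relation, filler, count) rows rather than in the actual DM release format) computes PLMI from raw counts and returns the top-k fillers for a verb-specific role:

```python
import math
from collections import defaultdict

def top_k_fillers(rows, verb, relation, k=50):
    """Select the k most typical fillers of a verb-specific role,
    ranked by Positive Local Mutual Information (PLMI).
    `rows` is an iterable of (verb, relation, filler, count) tuples."""
    total = 0.0
    v_marg = defaultdict(float)   # marginal count per verb
    r_marg = defaultdict(float)   # marginal count per relation
    f_marg = defaultdict(float)   # marginal count per filler
    obs = defaultdict(float)      # observed counts O_vrf
    for v, r, f, c in rows:
        total += c
        v_marg[v] += c
        r_marg[r] += c
        f_marg[f] += c
        obs[(v, r, f)] += c

    scored = []
    for f in {f for (v, r, f) in obs if v == verb and r == relation}:
        o = obs[(verb, relation, f)]
        # expected count under independence of verb, relation and filler
        e = (v_marg[verb] * r_marg[relation] * f_marg[f]) / total ** 2
        plmi = max(0.0, o * math.log(o / e))
        scored.append((plmi, f))
    return [f for _, f in sorted(scored, reverse=True)[:k]]
```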
This DSM includes relation:word contexts, like sbj:dog, obj:apple, etc. Contexts were weighted with Positive Pointwise Mutual Information (PPMI; Church and Hanks (1990), Bullinaria and Levy (2012), Levy et al. (2015)). Given a context c and a word w, the PPMI is defined as follows:

PPMI(w, c) = max(0, log(P(w,c) / (P(w) × P(c))))

where w is the target word, c is the given context, P(w,c) is the probability of co-occurrence, and the probabilities are estimated from D, the collection of observed word-context pairs. (We also experimented with PLMI as a weighting scheme but, given its lower performance, we will not discuss it further: Santus et al. (2016c) previously showed that their rank-based measure performs worse on PLMI-weighted vectors, as they are biased towards frequent contexts.) The context c of the prototype vector P representing a thematic role has a value corresponding to the sum of the values of c for each of the k typical fillers used to build P. The contexts of P are then sorted according to their weight. Desirably, the highest-ranking contexts for a role prototype will be those that are most strongly associated with many of its typical fillers. Such second order contexts correspond to the most salient features of the verb-specific thematic role, as they are salient for many role fillers (some examples are reported in Table 1, Top Second Order Contexts column). In summary, we built centroid vectors for our verb-specific thematic roles by means of second order contexts, which are first order dependency-based contexts of the most typical fillers of a verb-specific role. Since we are interested only in the most salient contexts, we ranked the centroid contexts according to their PPMI score, and we took the resulting rank as a distributional characterization of the thematic roles.

3.3 Filtering the Contexts

Filtering the prototype dimensions according to syntactic criteria might be useful to improve our role representations. It is, indeed, reasonable to hypothesize that predicates co-occurring with the typical patients of a verb are more relevant for the characterization of its patient role than, say, prepositional complements, as they correspond to other actions that are typically performed on the same patients. Imagine that apple, pizza, cake etc. are among the most salient fillers for the OBJ slot of to eat, and that OBJ-1:slice-v, OBJ-1:devour-v, SBJ:kid-n, INSTRUMENT:fork-n, LOCATION:table-n are some of the most salient contexts of the prototype. (Our DSM also makes use of inverse syntactic dependencies: target SYN-1 context means that target is linked to context by the dependency relation SYN; e.g. meal OBJ-1 devour means that meal is the OBJ of devour.) Things that are typically sliced and/or devoured are more likely to be good fillers for the patient role of to eat than things that are simply located on a table or that are patients of actions performed by kids. To test this hypothesis, we evaluated the performance of the system in three different settings, each of which selects:

• only predicates in a subject/object relation (SO setting);

• only prepositional complements (PREP setting);

• both of them (ALL setting).

3.4 Computing the Thematic Fit

Our hypothesis is that fillers whose salience-ranked vector has a large overlap with the prototype representation should have a high thematic fit. Such overlap should take into account not only the number of shared features, but also their respective ranks in the salience-ranked vectors.
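The following sketch (illustrative Python, assuming filler vectors are plain dicts mapping context strings such as 'obj-1:devour-v' to PPMI weights; the released code may organize this differently) shows one way to build the ranked prototype and to compute the weighted overlap defined formally in the next paragraph:

```python
from collections import defaultdict

def ranked_prototype(filler_vectors, relations=None):
    """Sum PPMI-weighted filler vectors into a role prototype and return
    its contexts sorted by decreasing salience (summed PPMI weight).
    `relations` optionally restricts contexts to a set of dependency
    relations, e.g. {'sbj', 'obj'} for the SO setting."""
    proto = defaultdict(float)
    for vec in filler_vectors:
        for ctx, weight in vec.items():
            rel = ctx.split(':')[0].replace('-1', '')  # 'obj-1' -> 'obj'
            if relations is None or rel in relations:
                proto[ctx] += weight
    return sorted(proto, key=proto.get, reverse=True)

def weighted_overlap(x_sorted, y_sorted, n=2000):
    """Rank-based Weighted Overlap between two salience-sorted context
    lists: each context shared among the top-n features of both vectors
    contributes 1 divided by the average of its two ranks."""
    rx = {c: i + 1 for i, c in enumerate(x_sorted[:n])}
    ry = {c: i + 1 for i, c in enumerate(y_sorted[:n])}
    return sum(2.0 / (rx[c] + ry[c]) for c in set(rx) & set(ry))
```

For a candidate filler, its own PPMI vector is sorted the same way, and weighted_overlap(prototype, candidate) then gives the thematic fit score.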
When the prototype has been computed and the candidate filler vector has also been sorted, we can measure the Weighted Overlap by adapting APSyn (Santus et al., 2016a,b,c) to our needs:

WO(x, y) = Σ_{f ∈ x[1:N] ∩ y[1:N]} 1 / ((r_x(f) + r_y(f)) / 2)

where, for every feature f in the intersection between the top N features of the sorted vectors x, x[1:N], and y, y[1:N], we sum 1 divided by the average rank of the shared feature in x and y, r_x(f) and r_y(f) (N is a tunable parameter). This measure assigns the maximum score to vectors sharing exactly the same dimensions, in the same salience ranking. The lower the rank of a shared context in the sorted vector, the smaller its contribution to the thematic fit score. If the feature set intersection is empty, the score will be 0. Differently from cosine similarity, which conflates multiple senses, measuring the Weighted Overlap between prototype and candidate filler can improve the estimation of the thematic fit by favoring the appropriate word senses: for example, for a verb-argument pair like (embrace-v, communism-n), communism-n is likely to intersect and to increase the saliency (through the average rank) only of the second-order features of embrace-v referring to its abstract sense.

4 Evaluation

Datasets. We evaluate on the McRae (McRae et al., 1998), Padó (Padó, 2007), Ferretti-Instruments and Ferretti-Locations (Ferretti et al., 2001) datasets of human thematic fit judgments (see Table 2 for the coverage of each system on the datasets).
Metrics. Performance is evaluated as the Spearman correlation between the scores of the systems and the human plausibility judgments.
Fillers. In order to make our results more comparable with previous studies, the typical fillers for each verb role were extracted from the TypeDM tensor of the Distributional Memory framework (see Section 3.1). Those were the same fillers used by Baroni and Lenci (2010) and Greenberg et al. (2015b).
DSM. Distributional information is derived from the concatenation of two corpora: the British National Corpus (Leech, 1992) and Ukwac (Baroni et al., 2009). Both were parsed with the Maltparser (Nivre and Hall, 2005). From this concatenation, we built a dependency-based DSM, weighted with PPMI, containing 20,145 targets (i.e. nouns and verbs with frequency above 1000) and 94,860 contexts. The syntactic relations taken into account were: sbj, sbj-1, obj, obj-1, at-1, in-1, on-1, with-1.
Settings. To prove our hypotheses and verify the consistency of the system, we tested a large range of settings, varying:

• the number of fillers used to build the prototype, with the most typical values in the literature ranging between 10 and 50; we report the results for 10, 30 and 50 fillers;

• the types of the dependency relations used for calculating the overlap: we report results for the SO, PREP and ALL settings;

• the value of N, that is the number of top contexts that we take into account when computing the weighted overlap.

Table 3 reports the scores for our best setting, while the performances for other values of N are discussed in Section 5.
Baseline and State of the Art. As a baseline, we use the thematic fit model by Baroni and Lenci (2010), with no ranking of the features of the prototypes and with vector cosine as a similarity metric. Results are reported for 10, 30 and 50 fillers. For reference, we also report the results of state-of-the-art models, both the unsupervised (Baroni and Lenci, 2010; Sayeed and Demberg, 2014; Greenberg et al., 2015b) and the supervised ones (Tilk et al., 2016).

5 Results

Table 3 describes the performance of the best setting (weight: PPMI; N=2000). In the first three rows, the table shows the scores obtained by our system varying the types of dependency contexts (i.e.
ALL, SO and PREP) and the number of fillers considered for the prototype (i.e. 10, 30 and 50). The other rows respectively show i) the scores obtained by calculating the vector cosine between the role prototype vector (i.e. the vector obtained by summing the most typical fillers, with no salience ranking of the dimensions) and the candidate filler vector, and ii) the scores reported in the literature for the best unsupervised and supervised models. At a glance, our best scores always outperform the reimplementation of Baroni and Lenci, being mostly competitive with the state-of-the-art models. More precisely, for agents and patients the performance is close to the reported scores for DM when only predicates are used in the WO calculation, as hypothesized in Section 3.3. The neural network of Tilk and colleagues retains a significant advantage over our models only for the McRae dataset. Our system, however, shows remarkable improvements on the Ferretti datasets, and specifically on Ferretti-Instruments, when only complements are used (see Section 3.3), outperforming even the supervised and more complex model by Tilk et al. (2016), which has access to semantic role information. Compared to the other unsupervised models, our system has a statistically significant advantage over Baroni and Lenci (2010) on the locations dataset and over Sayeed and Demberg (2014) on the locations and the instruments datasets (p < 0.05). To the best of our knowledge, the result for the instruments is the best reported in the literature so far.

[Table 4: Average gold values, number of items listed for both metrics, and distribution of syntactic and lexical forms among the 35 best and worst correlated items for every measure in the given datasets.]

This is particularly interesting because, as pointed out by Sayeed and Demberg (2014), instruments and locations are difficult to model for a dependency-based system, given the ambiguity of prepositional phrases (e.g. with does not only encode instruments, but it can also encode other roles, such as in I ate a pizza with Mark). We think this is the main reason behind the different trend observed for the Instruments dataset with respect to the number of fillers (see Table 3 and Figure 1). Unlike all the other datasets, instrument prototypes built with more fillers tend to be noisier and therefore to pull down both the vector cosine and the WO performance (this is partially true also for locations, where the performances, for cosine and for WO with a lower number of contexts, drop with more than 30 fillers: see Figure 1). Systems based on semantic role labeling have an advantage in this sense, as they do not have to deal with prepositional ambiguity. Our results show that, by weighting and filtering the features of the role prototype, dependency-based approaches can be successful in modeling roles other than agent and patient, eventually dealing also with the ambiguity of prepositional phrases.
Settings. Apart from the above-mentioned exceptions, the best scores are obtained by building the prototypes with a higher number of fillers, typically with 50, and calculating the WO only with a syntactically-filtered set of contexts. More specifically, Padó and McRae benefit from the calculation of WO using only second order subject-object predicates (i.e. SO), while Ferretti-Instruments and Ferretti-Locations benefit from the exclusive use of prepositional complements (i.e. PREP). On the other hand, the opposite setting (e.g.
SO for Ferretti-Instruments and Ferretti-Locations, and PREP for Padó and McRae) leads to much lower scores, whereas the full vectors (i.e. ALL) tend to have stable but not excellent performance on all datasets. As briefly mentioned above, in our experiments we tested both PPMI and PLMI as weighting measures. Table 3 only reports PPMI scores because it performs more regularly than PLMI, whose behaviour is often unpredictable. A parameter that has an impact on the performance of our system is the value of N, which is the number of second order contexts that are considered when calculating the WO. We have noticed that the performance of WO is directly related to the growth of N, and this can be seen in Figure 1, where WO is plotted for the different values of N with every combination of dataset and number of fillers. For space reasons, the plot only contains the performance for the best type of second order contexts for each dataset (i.e. SO for Padó and McRae, and PREP for Ferretti-Locations and Ferretti-Instruments). As can be seen in Figure 1, the scores of WO tend to grow with the growth of N in all datasets. Interestingly, they are largely above the competitive baseline in most cases, the only exceptions being Padó (where a large N is necessary to outperform the baseline) and Ferretti-Locations with 10 fillers (prepositional ambiguity might have caused the introduction of noisy fillers among the top ones).
Agent & Patient. In order to further evaluate our system, we have split the Padó and McRae datasets into agent and patient subsets. Figure 2 describes the performance of WO and of the vector cosine baseline while varying N and the number of fillers. The plot shows a clearly better performance of WO for the agent role (i.e. subject), especially when N is equal to or over 1000 (note that the value of N has little impact in the agent subset of the McRae dataset). This advantage, however, is reduced for the patient role (i.e. object). This is particularly interesting because we do not observe large drops in performance for the vector cosine between the agent and the patient role (except for Padó, k = 10). The drop is particularly noticeable in Padó, a dataset which has several non-constraining verbs (especially for the patient role: a similar observation was also made by Tilk et al. (2016)). As the constraints on the typical fillers of such verbs are very loose, we hypothesize that it is more difficult to find a set of salient features that are shared by many typical fillers. Therefore, estimations based on the whole vectors turn out to be more reliable. This can be confirmed by looking at the worst correlated words reported in the Lexemes column of Table 4.

Error Analysis

We performed an error analysis to verify, for the best settings of WO in each dataset, the correlation between vector cosine and WO scores (see Table 5), and the peculiarities of the entries with the strongest and the weakest correlation (see Table 4). We found that WO and vector cosine always have a high correlation (i.e. above 0.80), with the highest correlations reported for McRae and Ferretti-Instruments. Looking at Table 4 we can also observe that:

• the average gold value of the 35 most (4.65) and least (4.56) correlated items does not substantially differ from the average gold value calculated on the full datasets (4.31), meaning that the distribution of likely and unlikely fillers among the best and worst correlated items is similar to the one in the datasets (i.e.
no bias can be identified);

• both measures have difficulties on the same test items (probably because of loose semantic constraints), but report their best performances on different pairs (see the Overlap and Lexemes columns);

• syntactically, vector cosine correlates better with objects, while WO is more balanced between objects and subjects, often showing a preference for the latter (see the distribution in the Syntax column).

6 Conclusions

In this paper, we have introduced an unsupervised distributional method for modeling predicate-argument thematic fit judgments which works purely on syntactic information. The method, inspired by cognitive and psycholinguistic findings, consists of: i) extracting and filtering the most salient second order contexts for each verb-specific role, i.e. the most salient semantic dimensions of typical verb-specific role fillers; and then ii) estimating the thematic fit as a weighted overlap between the top features of the candidate fillers and of the prototypes. Once tested on some popular datasets of thematic fit judgments, our method consistently outperforms a baseline re-implementing the thematic fit model of Baroni and Lenci (2010) and proves to be competitive with state-of-the-art models. It even registered the best performance on the Ferretti-Instruments dataset and is the second best on Ferretti-Locations, which are known to be particularly hard to model for dependency-based approaches. Our method is simple, economical and efficient: it works purely on syntactic dependencies (so it does not require a role-labeled corpus) and achieves good results even with no supervised training. Finally, it offers linguistically and cognitively grounded insights on the process of prototype creation and contextual feature salience, preparing the ground for further speculations and optimizations. For example, future work might aim at identifying strategies for tuning the parameter N to account for the different degrees of selectivity of each verb-specific role. Another possible extension would be the inclusion of a mechanism for updating the role prototypes depending on how the other roles are filled, which would be the key to a more realistic and dynamic model of thematic fit expectations (Lenci, 2011).
Does Hyperglycemia Cause Oxidative Stress in the Diabetic Rat Retina?

Diabetes, being a metabolic disease, dysregulates a large number of metabolites and factors. However, among those altered metabolites, hyperglycemia is considered the major factor causing an increase in oxidative stress that initiates the pathophysiology of retinal damage leading to diabetic retinopathy. Diabetes-induced oxidative stress in the diabetic retina and its damaging effects are well known, but the exact source and mechanism of hyperglycemia-induced reactive oxygen species (ROS) generation, especially through mitochondria, remain uncertain. In this study, we analyzed precisely the generation of ROS and the antioxidant capacity of enzymes in a real-time situation under ex vivo and in vivo conditions in the control and streptozotocin-induced diabetic rat retinas. We also measured the rate of flux through the citric acid cycle by determining the oxidation of glucose to CO2 and glutamate under ex vivo conditions in the control and diabetic retinas. Measurements of H2O2 clearance from the ex vivo control and diabetic retinas indicated that the activities of mitochondrial antioxidant enzymes are intact in the diabetic retina. Short-term hyperglycemia seems to cause a decrease in ROS generation in the diabetic retina compared to controls, which is also correlated with a decreased oxidation rate of glucose in the diabetic retina. However, an increase in the formation of ROS was observed in the diabetic retinas compared to controls under in vivo conditions. Thus, our results suggest that diabetes/hyperglycemia-induced non-mitochondrial sources may serve as the major sources of ROS generation in the diabetic retina, as opposed to the widely believed hyperglycemia-induced mitochondrial sources of excess ROS. Therefore, hyperglycemia per se may not cause an increase in oxidative stress, especially through mitochondria, to damage the retina as in the case of diabetic retinopathy.

Introduction

Diabetes is an endocrinological disorder that dysregulates several metabolic processes and thereby alters the levels of a multitude of metabolites and signaling molecules, either due to lack of insulin or of insulin signaling. Apart from the altered metabolites of carbohydrates, lipids, and amino acids, an increasing number of other biomolecules and hormones including hydroxy acids, pyrimidines, arginine, proline, various peptides, and growth factors have all been found to be altered, making the pathophysiology of diabetes extremely complex [1][2][3][4][5][6]. However, over the years, an increasing amount of research has been dedicated to diabetes-induced hyperglycemia, the hallmark of diabetes, as the major factor involved in the etiology of diabetes and its complications [7][8][9][10][11]. Hyperglycemia has been widely considered the main trigger that initiates the dysregulation of various anabolic and catabolic pathways within cells, thereby inducing cellular damage that leads to various complications of diabetes, including diabetic retinopathy, a leading cause of blindness worldwide [4,5,12]. Numerous studies have reported mechanisms of hyperglycemia-stimulated biochemical abnormalities in the diabetic retina, including stimulation of protein kinase C, glycation, polyol formation, and hexosamine synthesis, that induce oxidative stress, ultimately leading to cellular damage [10,[13][14][15].
Besides, many investigators have reported that diabetes-induced hyperglycemia stimulates glycolysis and tricarboxylic acid cycle fluxes, which increase NADH/NAD+ ratios both in the cytosol and the mitochondria of cells [8,9,[16][17][18][19]. This in turn increases electron disposal at the electron transport chain, which thereby produces superoxide radicals by partial reduction of oxygen [20,21]. These findings were partly supported by an increased level of reactive oxygen species (ROS) found in the retina of diabetic animals [8,22], and also in isolated Müller and endothelial cells once exposed to hyperglycemic conditions [23]. In contrast, we and others did not find support for hyperglycemia-induced fluxes generating surplus NADH that would produce excess superoxide radicals in the diabetic retina and cultured endothelial cells [24][25][26]. We believe that the discrepancies in results might be primarily due to differences in methodology, as most investigators measured the metabolites and the generation of free radicals either in frozen tissues of diabetic animals or in isolated diabetic retinas incubated without high glucose [8,22,27]. Oxygen free radicals are extremely short-lived, and their generation in an intact retina requires adequate oxygen tension in the incubation buffer. Due to these impediments, proper techniques and physiological conditions are warranted to measure the exact level of oxygen free radicals in diabetic retinas. Moreover, several researchers have reported excess free radicals being generated in cultured retinal cells under hyperglycemic conditions [7,9,28,29], which may not correlate with results from the intact retina, since isolated cells cannot depict the exact pathophysiology of the whole retina. Although increased ROS levels and their damaging effects are well known in the diabetic retina [9,[30][31][32], their source and the mechanism of their increase are still uncertain. Therefore, in this study, we adopted a unique experimental approach and techniques to measure precisely the generation of free radicals in a real-time situation under hyperglycemic and diabetic conditions in the intact rat retina. We analyzed ROS generation in the control and diabetic retinas under in vivo and ex vivo experimental conditions, and a comparison was made between them to elucidate the basis of oxidative stress with respect to hyperglycemia in diabetic retinopathy.

Animals

Wistar albino rats were used in this study. Rats were housed under controlled conditions (25 °C; 12-h light-dark cycle) and allowed free access to food and water. Rats aged 8-9 weeks and weighing 260-290 g were rendered diabetic using streptozotocin (STZ) from Sigma (St. Louis, MO, USA). A single intraperitoneal injection of STZ (65 mg/kg body weight), freshly prepared in 50 mM citrate-buffered solution (pH 4.5), was administered to each rat. Age-matched control rats were injected with an equal amount of the citrate buffer. Diabetes was confirmed by measuring blood glucose levels of more than 250 mg/dL. Retinal experiments were carried out after either 5 or 10 weeks of STZ injection. Rats were routinely treated following the guidelines of the National Institutes of Health. All experimental procedures and protocols were in accordance with the Association for Research in Vision and Ophthalmology (ARVO) recommendations for the Care and Use of Experimental Animals.
The experimental animal protocol was approved by the Experimental Animal Care Committee (approval number KSU-SE-21-04), King Saud University, Riyadh, Saudi Arabia.

Isolation of the Retinas and Incubation Conditions for Metabolic Studies

The 10-week STZ-diabetic and age-matched control rats were anesthetized with ketamine-xylazine (53 mg ketamine, 5.3 mg xylazine/kg). Retinas were dissected from the excised eyes, and the metabolic experimental protocol was followed according to our previously published methods with a slight modification [24]. Retinas were first preincubated in Krebs bicarbonate buffer (with 5 or 20 mM glucose, equilibrated with 95% O2-5% CO2, pH 7.4) to metabolically recover the retina after removal from the animals [33]. Incubation was initiated by the addition of approximately 5 µCi [U-14C]glucose and terminated at 30 min by the addition of 20% perchloric acid (final concentration 2%). To evaluate CO2 and glutamate formation, incubation in the buffer was carried out under euglycemic (5 mM glucose) or hyperglycemic (20 mM glucose) conditions. A total of five control and diabetic rats were used in this study.

Oxidation of Glucose to CO2

To measure the oxidation of glucose to CO2, retinas from control and diabetic rats were incubated in 1 mL of Krebs buffer under euglycemic and hyperglycemic conditions at 37 °C in glass vials with the addition of [U-14C]glucose as described above, and a trap containing fluted filter paper was inserted in the vials. The vials were immediately sealed from the atmosphere. After 30 min, reactions were stopped by injecting 100 µL of 20% perchloric acid into the incubation buffer and 100 µL of 1 N NaOH into the traps. The 14CO2 formed by the glucose oxidation reaction was allowed to diffuse out of the acidified samples and was trapped in the filter paper traps soaked with NaOH. The filter paper traps were immersed in liquid scintillation fluid and counted after shaking for several hours. The disintegrations per minute of trapped 14CO2 in the filter paper were divided by the milligrams of retinal protein and by the specific activity of 14C-glucose to obtain values for 14CO2 formation per minute per milligram of protein.

Oxidation of Glucose as a Measure of Glutamate Formation

After incubating the control and diabetic retinas with [U-14C]glucose, reactions were stopped by adding perchloric acid as described above. Retinas were homogenized and centrifuged to separate the precipitated protein and the supernatant containing [14C]glutamate. The supernatant was neutralized and chromatographed on Dowex-1 acetate columns, from which glutamate was eluted with acetic acid [33]. The eluted [14C]glutamate from the column was quantitated by scintillation counting. The radiolabeled 14C-glutamate counts per minute, divided by the milligrams of retinal protein and by the glucose-specific activity, permit calculation of the glutamate formation from glucose. Protein pellets obtained after centrifugation of the retinal extract were sonicated in NaOH (0.5 mL of 1 M) and assayed for protein using the Bio-Rad reagent.

The Rate of H2O2 Clearance in the Excised Control and Diabetic Rat Retinas

The rate of intracellular H2O2 formation depends upon the pro-oxidant superoxide dismutase (SOD), while its removal depends upon antioxidant catalase and peroxidases. H2O2 formed by SOD is removed by catalase and peroxidases, which convert it into water.
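As a worked illustration of the rate calculation just described, the following minimal sketch (Python; the function and variable names are our own, and it deliberately ignores isotope stoichiometry corrections such as the six labeled carbons per [U-14C]glucose) converts trapped counts into a product-formation rate:

```python
def formation_rate(dpm_trapped, protein_mg, specific_activity_dpm_per_nmol,
                   incubation_min=30.0):
    """Convert trapped 14C counts into a product-formation rate.

    dpm_trapped: disintegrations per minute of trapped 14CO2
        (or of eluted [14C]glutamate)
    protein_mg: total retinal protein in the sample (mg)
    specific_activity_dpm_per_nmol: DPM per nmol of the [U-14C]glucose
        added to the incubation buffer
    Returns nmol of product formed per minute per mg of protein."""
    nmol_product = dpm_trapped / specific_activity_dpm_per_nmol
    return nmol_product / protein_mg / incubation_min
```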
The rates of disposal of H2O2 by antioxidant enzymes were determined under ex vivo conditions in the excised whole retinas of 5- and 10-week diabetic and age-matched control rats. Each freshly excised retina from both groups of rats was first preincubated for 3 min at 37 °C in glass vials with 600 µL Krebs bicarbonate buffer, pH 7.4, containing 20 mM HEPES, 118 mM NaCl, 4.7 mM KCl, 2.5 mM CaCl2, 1.2 mM KH2PO4, 1.17 mM MgSO4, 25 mM NaHCO3, and 5 mM glucose, equilibrated with 95% O2-5% CO2, to allow the retina to adapt to the buffer. After 3 min, the preincubation buffer was replaced with 600 µL fresh Krebs bicarbonate buffer with either 5 mM glucose (euglycemic) in the case of control retinas or 20 mM glucose (hyperglycemic) for diabetic retinas. The reaction was allowed to proceed for 30-40 min after the addition of 5 µM H2O2. At 5-10 min intervals, an aliquot of 50 µL was collected from each incubation vial to assay for H2O2 using the Fluoro H2O2™ kit (Cell Technology, Mountain View, CA, USA), following the company instructions. The H2O2 kit employs a non-fluorescent reagent that is oxidized by H2O2 to produce a fluorescent product, resorufin. The collected aliquot samples were assayed fluorometrically using excitation at 570 nm and emission at 590 nm with a plate reader (Spectra-Max Plus; Molecular Devices, Sunnyvale, CA, USA). In another set of experiments, an inhibitor of catalase, 3-aminotriazole (3-AT), was used to probe the rate of disposal of H2O2 by the excised control and diabetic retinas. Each retina was preincubated with 2 mM 3-AT in the Krebs bicarbonate buffer for 30 min before the addition of H2O2. The reaction was initiated by the addition of 5 µM H2O2 to the incubation buffer, and at 5-10 min intervals, 50 µL aliquots were collected from the buffer to measure the concentration of H2O2. After completion of the reactions, retinas were sonicated in 1 mL 50 mM phosphate buffer, pH 7.0, containing 0.1% SDS, and then centrifuged to obtain a supernatant. Total protein in the supernatant was measured using the Lowry method [34]. Disposal rates of H2O2 by the retina are expressed as % of H2O2 disposal/mg of protein.

The Measurement of the Level of H2O2 in the Excised Control and Diabetic Retinas

The level of H2O2 was determined under ex vivo conditions in the excised retinas from 10-week diabetic and control rats, under euglycemic and hyperglycemic conditions. Each freshly excised retina from both groups of rats was separately preincubated at 37 °C in glass vials with 600 µL Krebs bicarbonate buffer as described above, containing 5 or 20 mM glucose and equilibrated with 95% O2-5% CO2. In separate experiments, retinas were treated with 10 µM CuSO4. CuSO4 is known to catalyze the production of H2O2 and to lower the activity of catalase and glutathione peroxidase [35,36]. Aliquots of 100 µL from the reaction vials were collected after 15 and 30 min of incubation to measure H2O2 generation in the retinas using the Fluoro H2O2 kit. After the reactions, retinas were processed for protein estimation as described above. Results of H2O2 generation in the retina are presented as relative fluorescence units/mg of protein.

The Measurement of ROS in the Excised Control and Diabetic Rat Retina

The fluorogenic marker CM-H2DCFDA (Molecular Probes), which passively diffuses into cells, was used to measure ROS generation in the retina.
Oxidation of CM-H2DCFDA yields fluorescent adducts that are trapped inside the cell, and a fluorescent assay of the intracellular adducts provides a measure of ROS. Thus, the level of ROS was determined in the excised retinas from 10-week diabetic and age-matched control rats. Each freshly excised retina from diabetic and control rats was incubated at 37 °C in glass vials with 1 mL Krebs bicarbonate buffer equilibrated with 95% O2-5% CO2, along with 5 or 20 mM glucose and freshly made 10 µM CM-H2DCFDA. After 30 and 60 min of incubation, the retinas were removed and washed in cold 50 mM phosphate-buffered saline. The retinas were then briefly sonicated in 300 µL 20 mM HEPES buffer, pH 7.4, containing 0.1% SDS. The retinal homogenate was centrifuged, and 100 µL supernatant was immediately assayed fluorometrically at excitation and emission wavelengths of 485 and 538 nm, respectively. The level of ROS in the retina is presented as oxidized H2DCFDA fluorescence units/retina.

The Measurement of ROS under In Vivo Conditions in the Rat Retina

To measure the level of ROS in the retinas of live 10-week diabetic and age-matched control rats, a fresh stock solution (2.16 mM) of CM-H2DCFDA was made in DMSO, and 3 µL of the dye was injected intravitreally into the eye cavities of anesthetized rats according to our recently published method [37]. Six hours after the injections, rats were anesthetized, and retinas were dissected and immediately washed with cold phosphate-buffered saline. The retinas were then homogenized by sonication in 300 µL of 20 mM HEPES buffer, pH 7.4, containing 0.1% SDS. The retinal homogenate was centrifuged, and 100 µL supernatant was assayed fluorometrically. A comparison of the in vivo ROS level was made between the control and diabetic retinas. Additionally, we made three groups of control rats. In the first group, only 5 µL (2 µL saline + 3 µL of CM-H2DCFDA) was injected intravitreally. In the second group of rats, we intravitreally injected lipopolysaccharide (LPS, 1 µg/2 µL, plus 3 µL CM-H2DCFDA), and in the third group, diamide (1 mM, 2 µL/eye, plus 3 µL CM-H2DCFDA) was injected. LPS is a well-known endotoxin that causes inflammation and increases ROS levels. Diamide is also known to increase oxidative stress by oxidizing glutathione [38]. After the injections, the three groups of rats were housed overnight. Sixteen hours after injection, they were anesthetized; retinas were dissected, washed in cold phosphate-buffered saline, sonicated in the 20 mM HEPES buffer, pH 7.4, containing 0.1% SDS, and processed as described above to assay the oxidized H2DCFDA fluorescence in each retina. Total retinal protein in the supernatant of each retina was measured. The level of oxidized fluorescence reflected the level of ROS in the retina, which is presented as fluorescence units/mg of retinal protein. The fluorescence levels in the retinas of the three groups of control rats, injected with H2DCFDA alone, H2DCFDA + LPS, and H2DCFDA + diamide, were compared.

Statistical Analysis

Data are presented as means ± standard error of the mean (SEM). p-values less than 0.05 were considered significant. Statistical analyses were conducted with an unpaired, two-tailed Student t-test.

Glucose Oxidation under Ex Vivo Conditions in the Control and Diabetic Retina

We analyzed the influence of hyperglycemia and diabetes on flux through the citric acid cycle by measuring CO2 and glutamate production.
The production of 14CO2 from [U-14C]glucose was measured in the 10-week control and diabetic rat retinas incubated with 5 or 20 mM glucose, respectively (Figure 1A). The rate of 14CO2 production was significantly decreased in diabetic rat retinas compared to controls when exposed to either 5 or 20 mM glucose (p < 0.01). Interestingly, there was also no significant influence of hyperglycemia on the rates of CO2 production in the controls compared to euglycemic exposure. Similarly, no significant change was observed between hyperglycemic and euglycemic diabetic retinas. Furthermore, the rate of [14C]glutamate formation was modestly decreased in diabetic retinas compared to controls under both euglycemic and hyperglycemic conditions, as shown in Figure 1B. Besides, no significant difference in the rate of glutamate formation in the control or diabetic retinas was observed between hyperglycemic and euglycemic conditions. The rate of glutamate formation reflected the differences seen in CO2 production. The CO2 and glutamate data are related because both reflect tricarboxylic acid cycle fluxes. Therefore, despite the excess glucose in the diabetic retinas, they oxidized less glucose to CO2 and glutamate than euglycemic controls.

Rates of Clearance of H2O2 under Ex Vivo Conditions in the Control and Diabetic Retinas

We measured the rate of clearance of H2O2 in the 5- and 10-week diabetic and age-matched control rat retinas. First, we optimized the concentration of H2O2 for the clearance experiments, and 5 µM H2O2 was found to be appropriate, as this concentration did not saturate the system under our experimental conditions. After applying 5 µM H2O2 to excised retinas from 5-week control and diabetic rats, incubated under euglycemic and hyperglycemic conditions respectively, the level of H2O2 started to disappear linearly for at least 10 min in both groups. There was no significant difference in H2O2 disposal between control and diabetic retinas (Figure 2). The H2O2 clearance followed first-order kinetics with an apparent 14-min half-life, as calculated from the semi-logarithmic plot of the data (Figure 2, insert). Furthermore, to analyze the influence of the duration of diabetes on the rates of H2O2 disposal, 10-week hyperglycemic-diabetic and age-matched control rat retinas were employed. The rates of disappearance of H2O2 indicated a slight increase in the disposal rate of the 10-week diabetic retinas compared to euglycemic controls (Figure 3). The slopes of the straight lines obtained from the logarithmic plot indicated the rates of H2O2 clearance in the euglycemic control and hyperglycemic diabetic rat retinas (Figure 3, insert). In the absence of retina but under similar conditions, the concentration of H2O2 in the incubation buffer remained constant for at least 40 min. An inhibitor of catalase, 3-aminotriazole (3-AT), was used to discriminate between the involvement of the two groups of antioxidant enzymes (catalase and glutathione peroxidase) in the detoxification of H2O2 in both the 5- and 10-week control and diabetic retinas [39]. Surprisingly, no significant influence of the catalase inhibitor on the rates of H2O2 disposal was observed in any of the groups of control and diabetic rat retinas.
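The half-life calculation referenced above follows directly from first-order kinetics: plotting ln[H2O2] against time gives a straight line of slope -k, and t1/2 = ln(2)/k. A minimal sketch (Python with NumPy; the data values below are illustrative, not the measured ones):

```python
import numpy as np

def first_order_half_life(t_min, conc):
    """Fit ln(C) = ln(C0) - k*t and return the half-life t1/2 = ln(2)/k.
    t_min: sampling times (min); conc: measured H2O2 concentrations."""
    slope, _ = np.polyfit(np.asarray(t_min), np.log(np.asarray(conc)), 1)
    k = -slope  # the slope of the semi-log plot is -k for first-order decay
    return np.log(2) / k

# Illustrative values only: a synthetic decay with a 14-min half-life
t = [0, 5, 10, 15, 20]
c = [5.0 * np.exp(-np.log(2) / 14.0 * ti) for ti in t]
print(round(first_order_half_life(t, c), 1))  # -> 14.0
```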
H2O2 Levels under Ex Vivo Conditions in Control and Diabetic Rat Retinas

We were not able to detect the level of H2O2 in the control and 10-week diabetic rat retinas under ex vivo conditions, even after 30 min of incubation under hyperglycemic conditions. However, a robust increase in the level of H2O2 was detected in the incubation buffer when retinas were treated with 10 µM CuSO4. A significant increase in the level of H2O2 was observed in the hyperglycemic diabetic retinas compared to euglycemic controls as early as 15 min after CuSO4 treatment (Figure 4). Moreover, the level of H2O2 did not increase further and remained constant up to 30 min of CuSO4 treatment, indicating a complete inactivation of antioxidant enzymes in the retina within 15 min of CuSO4 treatment.

ROS Levels under Ex Vivo Conditions in the Rat Retinas

The level of ROS was measured using CM-H2DCFDA dye (10 µM) in the excised control and 10-week diabetic retinas, under euglycemic and hyperglycemic incubation conditions. We measured oxidized H2DCFDA fluorescence in the retina. As shown in Figure 5, a lower endogenous ROS level was detected in the hyperglycemic diabetic retinas after 30 and 60 min of incubation compared to euglycemic controls. The difference in fluorescence was more evident after 60 min in the hyperglycemic diabetic retinas. Contrary to several previous studies, our study shows that hyperglycemia seems to cause a decrease in the ROS level in the diabetic retina compared to euglycemic control retinas.

ROS Level under In Vivo Conditions in the Control and Diabetic Rat Retina

To measure ROS production in retinas under in vivo conditions, we employed intravitreal injection of the "precursor" dye, carboxy-H2DCFDA. The dye passively diffused into the rat retina. The intracellular ROS formed oxidized the trapped precursor dye in the rat retinas, which was measured as described in the Methods section above. The relative fluorescence unit was considered to be proportional to the level of ROS [40]. The fluorescence data from 10-week control and diabetic rat retinas are presented in Figure 6. The relative fluorescence was found to be more than 2-fold higher in the diabetic retinas than in controls. To validate this in vivo measurement of ROS in the rat retinas, we injected LPS and diamide as positive controls. Indeed, 16 h after injection, both LPS and diamide caused a significant increase in the level of ROS, as reflected by an increase in the oxidized dye fluorescence trapped inside the retina compared to dye-only injected retinas.

Discussion

The purpose of this study was to investigate oxidative stress in the rat retina under hyperglycemic and diabetic conditions that may cause long-term retinal damage leading to diabetic retinopathy. To achieve this, we first studied glucose oxidation to CO2 and glutamate under ex vivo conditions in the excised control and diabetic rat retinas using radiolabeled 14C-glucose. We measured the rates of CO2 and glutamate formation in the retinas, which give a measure of the rate of flux through the citric acid cycle. Secondly, we measured the antioxidant activity, via hydrogen peroxide disposal, and free radical generation in the excised intact retinas from control and diabetic rats under ex vivo euglycemic and hyperglycemic conditions.
Finally, we employed in vivo techniques to analyze free radical generation in both control and diabetic rats by intravitreal injection of the fluorogenic cell-permeant marker CM-H2DCFDA, as the oxidized fluorescent product of the dye gives a measure of the intracellular level of ROS generation in the retina. Several investigators have proposed that the high serum level of glucose in diabetes increases intracellular levels of glucose, which in turn increases glucose metabolism by inducing the rate of glycolysis. This is followed by an increase in citric acid cycle fluxes that consequently floods the mitochondria with excess reduced electron carriers (NADH) and increases the accumulation of ROS [7,28,29]. This mechanism of hyperglycemia-induced excess ROS generation has been widely accepted. However, our previous metabolic studies in ex vivo rat retinas, using unique radio-isotopic techniques, indicated a decreased flux of glycolytic and citric acid cycle intermediates in diabetic retinas, which did not support an increase in ROS by mitochondria under hyperglycemic conditions [24]. We and others have long used the ex vivo retina or other tissues, especially for metabolic studies, an approach that is well recognized in the field. Similarly, in this study, we measured glucose oxidation and oxidative stress parameters in the ex vivo retinas of control and diabetic rats. We found a decreased rate of glucose oxidation, as evidenced by reduced levels of CO2 and glutamate formation in the diabetic retina. This indicates that the mitochondrial electron transport chain may not be under the influence of high electron pressure that would allow electrons to escape and form excess ROS. Thus, our studies negate the generation of excess ROS through mitochondria under hyperglycemic conditions in the diabetic rat retinas, as opposed to several previous studies [8,9,11,[16][17][18][19]. Generally, mitochondria, through the electron transport system, generate a major part of cellular ROS, but the production is low under normal conditions. However, due to an excess level of NADH, some released electrons may escape the chain and only partially reduce O2 instead of fully reducing it to H2O; these escaped electrons generate superoxide and other oxygen free radicals [20,21]. Moreover, if oxygen free radicals are generated in excess, they are instantly detoxified by mitochondrial antioxidant enzymes to harmless products. Hydrogen peroxide, a powerful oxidizing agent which is generated by mitochondrial superoxide dismutase, can be detoxified by catalase and peroxidase enzymes to water molecules. In this study, we measured the antioxidant capacity of these enzymes in terms of H2O2 disposal in the retinas of two age groups (5 and 10 weeks) of control and STZ-diabetic rats under ex vivo conditions. Interestingly, we found similar activity of H2O2-detoxifying enzymes in the control and diabetic retinas of both groups of rats, even after prolonged diabetes. Thus, contrary to a few previous studies [8,16,17,[41][42][43], our study suggests that there is little to no influence of diabetes, or of the short-term duration of diabetes (5-10 weeks), on the antioxidant capacity of mitochondrial enzymes, as evident from the rate of disposal of H2O2 in the rat retinas. This is partly supported by the Obrosova group, who reported that catalase activity was high, rather than low, in the diabetic rat retina [44]. Moreover, after treating retinas with 3-aminotriazole (a specific inhibitor of catalase), no difference in the rates of H2O2 disposal was found between control and diabetic retinas of the two groups of rats.
This suggests the possibility of a major role of peroxidases other than catalase in the degradation of H2O2. A study by Makino et al. reported that glutathione peroxidase detoxifies H2O2 at concentrations below 10 µM, whereas catalase contributes at higher concentrations [45]; notably, we used only 5 µM H2O2 in our experiments. Our results further suggest the existence of a strong antioxidant system in the retina to detoxify excess H2O2, if generated, in the case of diabetic retinas. Also, the measurement of the H2O2 level in the ex vivo rat retinas indicated that the concentration of H2O2 was too low to be detected by our H2O2 kit. For this reason, we exposed the excised retinas to CuSO4, a known inhibitor of catalase and peroxidases, to induce the production of H2O2 [35,36]. Indeed, after exposing the retinas to CuSO4, a robust increase in H2O2 generation was observed. Interestingly, a significant increase in the level of H2O2 was observed in the hyperglycemic diabetic retinas compared to euglycemic controls, and the difference remained unchanged after prolonged incubation. These results suggest that the antioxidant enzymes (catalase, peroxidases) were inactivated by CuSO4, while on the other hand SOD appears to have been relatively activated in the diabetic retinas, generating an increased level of H2O2 compared to non-diabetic controls. We speculate that the increase in H2O2 is not due to hyperglycemia-induced excess pressure on mitochondria, but rather to diabetes-induced non-mitochondrial sources, such as activation of xanthine oxidase, NADPH oxidase, and peroxisomes in the cell. Next, we employed the fluorescent CM-H2DCFDA dye to analyze ROS generation in the excised control and diabetic retinas, under euglycemic and hyperglycemic incubation conditions. CM-H2DCFDA dye passively diffuses inside the cells, and the extent of the oxidized fluorescent product of the dye corresponds to the intracellular level of ROS generation [40]. Surprisingly, a significantly lower endogenous ROS level was detected in the diabetic retinas under hyperglycemic conditions compared to euglycemic controls. This is further supported by our recent in vitro studies using cultured rat retinal cells (Müller and endothelial cells), where we found a significantly lower ROS level when cells were treated with high glucose (25 mM) compared to euglycemic conditions (unpublished data). This observation is supported by a few other studies reporting that pyruvate, the glycolytic product of glucose, is a strong antioxidant and protects the retina and retinal cells under diabetic conditions [46,47]. Thus, contrary to several previous studies, hyperglycemia seems to cause a decrease in ROS generation in the excised diabetic retina compared to euglycemic controls. Our next aim was to measure ROS generation under in vivo conditions in the intact control and diabetic rat retinas by intravitreal injection of CM-H2DCFDA, for which we successfully adopted the recently published method [37]. As expected, intravitreal injection of LPS and diamide, which served as positive controls in this study, induced a significant increase in ROS generation. In agreement with most studies, the in vivo ROS level in diabetic rat retinas was significantly higher than in controls.
This increased generation of ROS in the diabetic retina indicates the possibility of either diabetes-induced activation of paracrine mediators or activation of non-mitochondrial oxidases that may drive the excess ROS generation [26,27]. Taken together, our data show that the oxidation of glucose decreased in the diabetic retina despite hyperglycemic conditions. This decreased oxidation of glucose in the diabetic retina indicates a slow rate of glycolysis and/or the citric acid cycle, thereby suggesting that excess ROS may not be generated by mitochondria. Neither the duration of diabetes nor high-glucose treatment influenced the retinal antioxidant capacity of mitochondrial enzymes in the disposal of H2O2, suggesting that mitochondria may not be a major source of oxidative stress in the diabetic retina. Nevertheless, an increased level of ROS was found under in vivo conditions in the diabetic retinas, which indicates the possibility of non-mitochondrial sources of ROS generation; these may include activation of NADPH and NADH oxidases [27,48,49], activation of endothelial cells by paracrine mediators [25], activation of microglia [50], and glutamate excitotoxicity [51,52]. Thus, metabolic abnormalities caused by hyperglycemia per se, especially through mitochondrial stress, may not be the sole basis of retinal damage in diabetic retinopathy. Besides diabetes-induced hyperglycemia, emerging evidence suggests a potential role of numerous other altered metabolites and factors that need to be considered in the pathophysiology of retinal damage through oxidative stress. In addition, further metabolic studies and possibly in vivo ROS imaging techniques are required to better elucidate the mechanism of ROS production and its major sources in the diabetic retina.

Data Availability Statement: All relevant data are included within the manuscript. The raw data supporting the findings of this manuscript will be provided by the author to any researcher on reasonable request.

Acknowledgments: I thank those who assisted me in the metabolic studies. I also thank the Department of Biochemistry, King Saud University, for providing all the facilities, and acknowledge funding support from KACST-NPST: 13 MED-1374.
Analysis of soil water movement inside a footslope and a depression in a karst catchment, Southwest China

Soil water movement is difficult to explain with event-scale approaches, especially in karst regions. This paper focuses on investigating the seasonal recharge and mean residence time (MRT) of soil water based on the temporal variation of stable isotopes (δD and δ18O) and a dispersion model (DM), and on discussing their differences between a footslope and a depression in a small karst catchment of southwest China. Temporal variations of the stable isotopes in precipitation and soil water within 0-100 cm profiles were monitored weekly for approximately 43 and 99 weeks. Results show that the seasonal recharge of soil water in the footslope and the depression was similar, but the vertical flow velocity was higher in the footslope, implying a faster hydrological process there. The MRT of soil water (2-64 weeks) increased roughly with depth, suggesting a decreasing velocity of water displacement with increasing depth. However, the MRT at 60-100 cm depths in the depression (47-64 weeks) was obviously longer than at the other sites, revealing more intensive water mixing. Furthermore, a shallower isotopic damping depth was found in the depression, indicating stronger delay and attenuation effects on baseflow recharge. These results provide new insights into research on hydrological processes in karst areas.

Water movement in unsaturated soil zones plays a complex and important role in the transformation of precipitation to groundwater [1][2][3]. In a karst environment, soils are thin and rocky, and solution-enlarged fissures, gaps, and channels in the underlying bedrock facilitate the rapid transport of surface water to groundwater [4][5][6][7][8]. The strong interaction between surface and subsurface waters in karst areas makes soil water movement more difficult to decipher than in non-karst areas 3,9. Soil water movement is influenced by multiple environmental factors, such as precipitation, evaporation, vegetation, topography, and soil properties [10][11][12][13]. However, in karst areas, the spatial distribution of soil and permeable underlying bedrock is more heterogeneous, resulting in more complex hydrological processes, and thus the effects of topography and landform may be more important than in non-karst areas 3,14,15. Therefore, a more complete understanding of soil water movement in karst regions will help us manage shallow groundwater resources and deepen our knowledge of the water balance in karst catchments. Hillslopes and depressions are fundamental landscape units. A depression is lower than the hillslopes (the parts of a hill between the top and the foot) surrounding it, and is identified as a level area with deeper soil at the bottom of the catchment, where a natural creek often appears. Depressions can be differentiated from upslope zones by their unique hydrology, vegetation, and soils 16,17. Water movement parameters such as flow path, mean residence time (MRT), recharge, and runoff generation have been frequently reported for hillslopes in many regions [18][19][20]. However, the water movement between hillslopes and depressions is particularly difficult to study in a karst environment. The high rock fragment content of soils and the heterogeneous underlying epikarst on the slopes make the soil water flow path complex and difficult to decipher [21][22][23][24].
High infiltration rates and the rare occurrence of overland flow on karst slopes indicate a rapid interaction between surface and subsurface water in the shallow soil zones [23,25-27]. In contrast, soils in the lower parts of a hillslope (footslopes) and in depressions are deeper and can store more water.

Rainfall was monitored throughout the 2-year sampling period. The dry season (from October to March) had an average weekly rainfall amount of 16.0 mm, with a maximum of less than 100 mm. The δD and δ18O levels in weekly rainwater exhibited strong seasonal variations and, when graphed, traced sinusoidal waves (two sinusoidal cycles with 128‰ and 16‰ peak-to-peak variations, respectively) during the sampling period (Fig. 1). There was an inverse pattern between rainfall amount and isotopic composition: high rainfall amounts corresponded to low δD and δ18O values. Hu et al. [45] showed that smaller amounts of stable isotopes were found in heavy rainfall, but a one-year periodicity was observed and no linear relationship with event precipitation amount was exhibited. The calculation of the seasonal recharge and MRT of soil water from input and output stable isotope data relies on these distinct isotopic signals of rainfall during the rainy and dry seasons.

The local meteoric water line (δD = 8.1δ18O + 12.7; n = 99, R² = 0.9756) was obtained from the weekly δ18O and δD values in rainwater between April 2011 and February 2013 (Fig. 2); a minimal fitting sketch is given at the end of this subsection. The local meteoric water line was very similar to the global meteoric water line (δD = 8δ18O + 10). The mean annual δ18O and δD values of soil water at different depths also plotted close to both the local and global meteoric water lines, indicating that non-equilibrium fractionation caused by evaporation was negligible in the study area. Generally, evaporation of soil water is greater near the surface than in the deep soil layers because of surface heating [46]. However, the short interaction time between water and surface heating, caused by the rapid infiltration rate and the small amount of immobile water stored in the shallow soil, appears to have kept the evaporation rate below average. This allowed us to avoid overestimating soil water recharge during periods of high isotopic levels in the input rainfall.

Seasonal variation of stable isotopes in soil water at different depths. Seasonal variations of the δD and δ18O values for all soil water samples are given in Figs 3 and 4, respectively. As a whole, the δD and δ18O values of soil water at each depth at the SD and SS sites were similar to each other, but differences existed among soil depths. The variations of δD and δ18O values of soil water tended to decrease with increasing soil depth. At a depth of 20 cm, the δD and δ18O values of soil water traced a sinusoidal wave similar to that of rainwater. At depths of 40-60 cm, the δD and δ18O values showed a variation trend similar to that at 20 cm, but with a smaller range. Soil water at depths of 80-100 cm, however, showed steady δD and δ18O levels, with very little variation around the trend, indicating that a higher proportion of older water was present.
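As a brief aside, the least-squares fit behind the local meteoric water line reported above is straightforward to reproduce. The sketch below is a minimal Python illustration; the eight weekly (δ18O, δD) pairs in it are hypothetical stand-ins for the 99 measured samples.

```python
import numpy as np

# Hypothetical weekly rainfall isotope values (per mil, V-SMOW); the study
# itself used n = 99 weekly samples from April 2011 to February 2013.
d18O = np.array([-3.1, -5.4, -8.2, -10.5, -6.7, -2.9, -9.8, -4.3])
dD   = np.array([-12.0, -30.5, -53.9, -72.4, -41.6, -10.8, -66.7, -22.1])

# Ordinary least-squares fit of the local meteoric water line dD = a*d18O + b
a, b = np.polyfit(d18O, dD, deg=1)

# Coefficient of determination R^2 of the fit
pred = a * d18O + b
r2 = 1.0 - np.sum((dD - pred) ** 2) / np.sum((dD - dD.mean()) ** 2)

print(f"LMWL: dD = {a:.2f} * d18O + {b:.2f}  (R^2 = {r2:.4f})")
# The paper reports dD = 8.1 * d18O + 12.7 (R^2 = 0.9756) for the measured data.
```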
Seasonal recharge of soil water. Mean annual δ18O and δD values of rainwater and of soil water at different depths at all six sampling sites are shown in Table 1. The mean δ18O and δD values of rainfall during the dry season were more enriched, and their coefficient of variation (CV) was higher, than during the rainy season. Generally, the much larger rainfall amount during the rainy season suggests that rainy-season rainfall should be the main recharge source for soil water. However, the mean annual δ18O and δD values of soil water at a depth of 20 cm at all six sampling sites were close to those of annual rainfall, indicating an almost year-round recharge, and the highest CV at 20 cm reflected a significant response to rainfall. Therefore, this depth of soil could not retain much rainy-season rainfall or bypass flow. Soil water samples collected at a depth of 60 cm showed more depleted isotopic values and lower CVs. The mean annual δ18O and δD values of soil water at 60-100 cm depths were near (or more depleted than) the rainy-season rainfall values, illustrating that dry-season rainwater contributed little to recharge. However, there was no regular, measurable difference between the depression and the footslopes. Soil water at depths of 40-100 cm at SD-1 and SD-2 had lower CVs than at the other sites. Because of the presence of weathered sandy soil (Table 3), however, SD-3 was an exception: it broke the above-mentioned tendency and exhibited a more significant response to rainfall than any other site at the same depth. In addition, compared with the footslopes, the mean annual δ18O and δD values of deep soil water (60-100 cm) in the depression were closer to those of baseflow; the corresponding values in the footslopes were more depleted than those in baseflow.

Table 1. Summary of stable isotopes and CV in rainfall, baseflow, and soil water. "Rainfall dry" and "Rainfall rainy" indicate rainfall samples collected during the dry and rainy seasons, respectively.

Isotopic damping depth of soils. The annual CV of the isotopic data for soil water decreased with increasing depth at each sampling site, with a logarithmic relationship between CV and depth (Fig. 5). The isotopic damping depths obtained from the δ18O and δD values showed little difference, and thus only the δD data were evaluated. A sharp decrease in CV at a 20 cm depth occurred in the depression but did not appear in the footslope. This suggests that most of the damping of the annual isotopic signal of precipitation occurred in the upper 0-20 cm soil layer in the depression. Based on the logarithmic relationship at each sampling site, the total isotopic damping depth can be calculated when the mean annual baseflow CV (0.12) is used [19,35]; the calculation is illustrated in the sketch below. The damping depths were 73.6 cm at SD-1, 73.7 cm at SD-2 and 99.2 cm at SS-3, which were much shallower than those of SS-1 (167.9 cm) and SS-2 (123.3 cm). However, an abnormally high value (237.5 cm) was found at SD-3.
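The damping-depth calculation just described reduces to fitting the logarithmic CV-depth relationship and inverting it at the baseflow CV of 0.12. The following sketch illustrates this with hypothetical CV values standing in for the site data.

```python
import numpy as np

# Hypothetical annual CVs of soil-water dD at the five sampled depths (cm)
depth = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
cv    = np.array([0.55, 0.38, 0.27, 0.21, 0.17])

# Fit the logarithmic relationship CV(z) = a * ln(z) + b used in the paper
a, b = np.polyfit(np.log(depth), cv, deg=1)

# Isotopic damping depth: the depth at which the CV decays to the mean
# annual baseflow CV of 0.12
cv_baseflow = 0.12
d_damping = np.exp((cv_baseflow - b) / a)
print(f"Isotopic damping depth: {d_damping:.1f} cm")
```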
MRT of soil water. To obtain the MRT of soil water, the seasonal variations of δ18O and δD in precipitation and in soil water at different depths were used as the input and output, respectively, of the DM simulations (Table 2). The infiltration rate, which reflects the difference in infiltration between the rainy and dry seasons, was obtained and used in calculating the MRT of soil water. The results calculated with the δ18O and δD values differed slightly. The model efficiency (ME) of the DM based on δD was higher than that based on δ18O under most conditions, indicating a better fit for δD. The MRTs of soil water ranged between 2 and 64 weeks and tended to increase with depth. Along the same sampling line, the MRTs of soil water in the depression were longer than those in the footslope at depths of 40-100 cm. Sharp increases in MRT between depths of 40 and 60 cm took place in the depression but were absent in the footslope: the MRTs of soil water at a 60 cm depth were more than three times longer than those at a 40 cm depth. However, there was very little difference between the values for SD-1 and SD-2 at 60-100 cm depths. The SIGMA values of the simulations in the footslope were higher than those in the depression, indicating that the DM fitted more poorly in the footslope. The ME values for soil water at depths of 60-100 cm were approximately zero, indicating no periodic variation of the isotopes. The MRT of soil water at SD-3, between 9 and 11 weeks (roughly increasing with depth), showed no clear pattern. Moreover, the SIGMA values of the simulations at SD-3 were clearly higher than those at SD-1 and SD-2, indicating a poorer fit.

The vertical flow velocity in soils was estimated as the ratio of soil depth to MRT. The 40-60 cm soil layer seemed to be a boundary zone through which water moved more slowly. However, opposite tendencies were observed at SD-3 and SS-3 because of the presence of sand. In addition, the vertical flow velocities at a 100 cm depth displayed obvious variations among the sampling sites: SD-1 and SD-2 had much lower values than SS-1, SS-2, and SS-3. The vertical flow velocities in the footslopes were 1.8-5.0 times faster than those in the depression. Moreover, an inverse relationship between damping depth and MRT was found: where the difference between the MRTs of surface and deep soil water was greater, a shallower damping depth was found.

Discussion

Water recharge in the soil zone. The suction lysimeter samples contained water collected from the soils; thus the preferential flow, which is difficult for plants to use, could be avoided. The seasonal recharge of soil water was mostly driven by water stored in pores with a relatively slow flow velocity [35], and the recharge pattern varied with depth. The 40-60 cm soil depth seemed to be a transitional boundary. On the one hand, in the upper layer (0-40 cm), recharge water moved faster in the rainy season than in the dry season. On the other hand, in the lower layer (60-100 cm), dry-season recharge was negligible and the water was well mixed. Lee et al. [11] found an analogous boundary between upper fine-grained soil and lower coarse-grained soil on Jeju Island, Korea, where soil water probably flows slowly through micropores rather than rapidly through macropores in the unsaturated soil zone. The surface soil layer (0-20 cm) received almost year-round recharge, indicating that the effect of rainy-season rainfall on the surface layer was not as important as on the deeper layers. This is because the shallow soil had a low water-holding capacity, and a large portion of rainy-season rainfall fed preferential flow instead of being stored in the micropores. Meanwhile, small quantities of water were stored in the upper soil layer, where a fast pathway occurred; this pathway was not present in the lower layer. Previous studies have shown that root channels and other biopores (or cracks and fissures) may provide fast pathways for water movement in sandy soil or upper shallow loam soil [47-50].
In addition, the high content of rock fragments (usually distributed in the shallow soil zone) changes the soil pore volume and structure, which modifies the size and distribution of pathways for water movement through the soil zone [23,51]. The major recharge, occurring during the rainy season, benefited from the fact that, relative to the upper layer, the lower soil layer can hold more water and release it at a slower flow velocity. Soil physical properties such as BD, SWC, and CMC influence water movement because they reflect soil pore properties. BD and pore-size distribution are two of the most important soil physical properties affecting infiltration and many aspects of the soil-water-plant-atmosphere system, and they are often used to predict soil water retention properties [52-54]. However, both high macropore and high micropore volumes can lead to low BD, although they have opposite impacts on water movement in soil zones. Hence, the relationship between BD and recharge processes is usually complicated. We tend to believe that macropores dominated the pores in the shallow soil layers, because plant roots, cracks, fissures, and other natural soil pipes, which form macropores, are likely to exist there [47].

Slope position did not have a consistent effect on soil water recharge. Spatial variability in soil characteristics and vegetation distribution likely had a greater influence on soil water than did slope position [55]. Soil water recharge showed no obvious differences between the footslope and depression sampling sites, and therefore the recharge patterns were similar. However, soil water seemed to be mixed more thoroughly in the depression, perhaps owing to the occurrence of lateral flow. In the transition from the driest to the wettest conditions, the hydrologic connection between footslope and depression has been shown to be continuous owing to lateral flow, which responds only weakly to rainfall [28,56]. The interaction between epikarst water and soil water was strong in the deep soil zones; the transit times for lateral water movement through bedrock are significantly greater than those for vertical movement of water through the soil [57]. The weathered sandy soil accounted for the anomalous soil water movement. Unlike ordinary soil, this kind of soil tends to have a poor water-holding capacity, which results in fast infiltration. A set of mean saturated hydraulic conductivity values developed according to soil texture shows that the hydraulic conductivity of sand is 15, 91, and 350 times greater than that of loam, clay loam, and clay, respectively [58].

Table 2. Parameters of the fitted DM models for the oxygen-18 and deuterium variations at each sampling site from February 21, 2012 to February 28, 2013. T, P_D, ME, SIGMA and FV are the MRT, dispersion parameter, model efficiency, accuracy of fit, and vertical flow velocity calculated from the δD data, respectively. NA means that the model fits the data no better than a horizontal line through the mean observed concentration.

Table 3. Soil physical properties within the soil profiles at the six sampling sites in the experimental area. The asterisk (*) indicates a weathered sandy soil layer; the others are ordinary soil layers, including loam and clay loam.

Horizontal flow in the soil zone. The annual average soil hydraulic diffusivities, which exhibited a positive relationship with the isotopic damping depth based on an analogy with unsteady heat flow [35], ranged from 93.25 to 485.27 cm²/d and were larger in the footslope than in the depression in the absence of a weathered sandy layer. The hydraulic diffusivities seemed to reflect water movement when the soils had lower moisture contents [35]. This implies that horizontal flow would occur more easily in the footslope than in the depression. The damping depth can be influenced by several factors, such as vegetation, soil porosity, rock fragment content, and topographic position. O'Driscoll et al. [19] argued that interception and water uptake by vegetation during the growing period removed most of the water from the shallow soil layer, leading to a reduction in isotopic variation. However, the impact of vegetation was not significant here, because sampling sites with analogous vegetation types did not have similar damping depths. On the contrary, sampling sites with analogous slope positions (footslope or depression) had similar damping depths, indicating that slope position may have a dominant impact on damping depth. The vertical component of kinetic energy, which is greater on gentle slopes, probably caused quasi-stagnant water stored in micropores (i.e., water with a slow flow velocity) to flow faster in the footslope [59]. Additionally, the weathered bedrock layer acted as a preferential flow channel as a result of the fractures distributed heterogeneously inside it, which made lateral flow easy [60]. The depression accumulated this lateral water from upslope areas; because this water had a steady isotopic composition, it reduced the CV of soil water in the deep layers. Rock fragment content and the weathered sandy soil layer seemed to have the opposite effect to vegetation: they supplied fast channels, allowing the isotopic signals of soil water to respond strongly to rainfall.

MRT of soil water. The DM was used to interpret variations in δD and δ18O because it can be applied effectively to all soil water samples. As in some published case studies, interpretation of the δD samples yielded P_D (dispersion parameter) values as high as 2.5 [18,42]. These high values reflect the high inhomogeneity and broad width of the transit time distributions in shallow soil layers, which result from the varying flow pathways provided by the high rock fragment content, root systems, and soil porosity. However, P_D values below 0.05 were unexpected, indicating that the DM matched the data poorly in the weathered sandy soil layer. Consequently, given the probably wide distribution of weathered sandy soil, an appropriate improvement of the DM is necessary for its application in karst catchments. Vertical flow-path length had a weak influence on the MRT of soil water, because MRT exhibited no obvious linear relationship with soil depth. This result contradicts the viewpoint that the MRT of soil water depends on the length of vertical infiltration [57] and reflects the complex flow paths in karst soil zones. The dominant factors controlling the MRT of soil water were likely soil porosity and pore-size distribution [53]. Soils contain mobile water, characterized by short MRTs, in fissures or large pores, and stagnant or quasi-stagnant water, characterized by long MRTs, in micropores [48,61,62]. Consequently, soil drainable porosity appears to be an important control on the new-water ratios of hillslope discharge for steep, wet hillslopes with thin soil cover [20].
Slope position (footslope or depression) has an important impact on the MRT of soil water, through its effects on slope gradient, rock fragment content, and water-contributing area. Asano et al. [57] found that the MRTs of soil water and transient groundwater were mostly described by soil depth, whereas perennial groundwater and stream water, which are strongly affected by water flow through bedrock, could be described by the upslope contributing area. McGuire et al. [42] found that simple topographic factors (such as gradient) were strongly correlated with water transfer at the catchment scale, despite the relatively complex hydrological processes involved. The slope gradient improved soil permeability and caused a high recharge coefficient [44,63]. This relationship meant that the temporal variations of the isotopic compositions in weighted rainfall were higher in the footslope than in the depression; the shorter MRTs of soil water there resulted from the large variation of isotope values in the input rainfall. Although previous studies have shown that landscape organization is a first-order control on MRT [42,64], its effect was not significant in the soil zones of the study area. The MRT of water increased with increasing upslope contributing area [57]. Footslopes maintained water tables and were almost continuously connected with the stream network, even in the dry season [28]. This observation suggests that recharge of soil water in the depression from the footslopes through horizontal flow was continuous [60]. Moreover, the horizontal recharge of deeper soil water in the depression probably came from epikarst water, which had longer MRTs [9]. In addition, Chen et al. [23] found that the mean total volumetric rock fragment content tended to have a positive relationship with slope gradient on hillslopes. Rock fragments might supply fast flow pathways and thus decrease the MRT of soil water. Ponding and runoff flow were delayed in soils with a high cover of rock fragments [22]. This finding indicates that rock fragments facilitated water infiltration and produced fast-flowing water characterized by short MRTs. Weathered sandy soil, which resembles sand in its high hydraulic conductivity and wide pore-size distribution facilitating water infiltration [53,58], seems to be the reason for the short MRTs. This weathered sandy soil clearly had less capillary water, a lower saturated water content, and a poorer water-holding capacity, suggesting that fast flow controlled water movement. Consequently, the soil hydrological function can be divided into three cases according to the analysis of MRT and vertical flow velocity: (1) soil in the footslope and at 0-40 cm depths in the depression is characterized by moderate water-holding capacity and vertical flow velocity; (2) soil at 60-100 cm depths in the depression is characterized by good water-holding capacity and slow vertical flow velocity; and (3) the weathered sandy soil layer has the poorest water-holding capacity and the fastest vertical flow velocity.

Conclusions

Soil water movement and storage in footslopes and a depression were evaluated with respect to seasonal recharge, isotopic damping depth, and MRT using temporal variations of δD and δ18O and a dispersion model (DM). Year-round recharge was found in soil water at a depth of 0-20 cm, whereas soil water at a depth of 40-100 cm was recharged seasonally; recharge was more likely to occur during the rainy season.
Water flow velocities in the shallow soil layers (0-40 cm) were faster than those in the deep soil layers (40-100 cm). The DM provided a better fit in ordinary soil than in weathered sandy soil. The MRTs of soil water in the footslopes ranged from 2 to 31 weeks, whereas an MRT longer than 1 year at depths of 60-100 cm in the depression implied that this layer was a well-mixed zone or was recharged from epikarst water. Higher vertical flow velocities and deeper isotopic damping depths were obtained in the footslopes, demonstrating that the average annual hydraulic diffusivity for water movement in the footslopes was larger than in the depression. Slope position (footslope or depression) had a significant impact on the MRT of soil water and on the damping depth, but it affected seasonal recharge only slightly. The weathered sandy soil layer had a poorer water-holding capacity and a shorter MRT than ordinary soil. This sandy layer was found underneath the ordinary soil layer and above the epikarst, and it often exhibited a strong response to rainfall. The above results indicate that soil water movement is complex and distinctive in karst areas with high heterogeneity, and the spatial distribution of soils and vegetation should be considered in future hydrological modeling at the catchment scale.

Site descriptions. The study was conducted at the Huanjiang Observation and Research Station for Karst Ecosystems of the Chinese Academy of Sciences (24°43′-24°44′N, 108°18′-108°19′E) in Huanjiang County, northwest Guangxi, southwest China (Fig. 6). The experimental site is a typical peak-cluster depression area, characterized by a relatively flat depression with an elevation lower than 280 m above sea level (about 28% of the total catchment area) surrounded by overlapping hills and ridges, except for an outlet in the northeast. The catchment has a subtropical mountainous monsoon climate and covers an area of 1.01 km². Approximately 60% of the slope gradients are larger than 25°, and elevation ranges from 272 to 647 m above sea level. The mean annual temperature is 19 °C, and the mean annual precipitation is 1389 mm, mostly falling from May to early October. Soil depths in the depression and on the hillslope are 20-160 cm and 0-50 cm, respectively. On most of the hillslope tops there is usually no soil cover, and bedrock is exposed. The shallow and discontinuous soils have developed from dolomite and contain significant amounts of rock fragments [23]. Soils are well drained, gravelly and calcareous, and have a clay to clay-loam texture (25-50% silt and 30-60% clay). Weathered sandy soil, overlying relatively impermeable rock, sometimes appears in the deep soil layers both in the footslope and in the depression. Based on tension infiltrometer measurements (20 cm in diameter), stable infiltration rates range from 0.43 to 4.25 mm/min [26]. The organic matter content is relatively high, ranging from 2.2% to 10.1%, and pH varies between 7.1 and 8.0. The average percentage of exposed bedrock ranges from 15% in the depression to 30% on the hillslope. Some rock outcrops are large (2-10 m in height) with a vegetative cover of deep-rooted trees. All residents have relocated and the cultivated lands have been abandoned since 1985. The dominant vegetation types are grass and sparse shrub; however, there are patches of zonal dense scrub and forest with a large amount of exposed bedrock, especially in the southwest.
Overland flow on the hillslopes, under the various land cover types, is low, and the corresponding runoff coefficient is often less than 5% [26]. Three seepage springs sometimes appear at the bottoms of the hillslopes in the rainy season and recharge the creek. The groundwater table changes seasonally and is often 1-3 m below the ground surface in the depression [65]. A creek originates from the southwest corner of the catchment, where vegetation cover is relatively dense. This creek is linked with an excavated channel in the middle of the catchment, and the outlet of the catchment is at the northeastern end of the channel. All of the surface water and part of the subsurface water flow into a small reservoir in the northeast (Fig. 6).

Rainfall and base flow sampling. Weekly rainfall samples were collected sequentially within the study area from April 10, 2011 to February 28, 2013. The precipitation sampling devices followed the IAEA design and consisted of a 150-mm-diameter funnel connected at the base to a 0.5 L brown bottle [9]. An air outlet tube was welded to the lower part of the funnel, and wax was applied at every junction to avoid evaporation of the water samples. Water samples of creek base flow were collected at the outlet of the study catchment weekly from February 21, 2012 to February 28, 2013. A polyethylene bottle (2 mL) was used to collect the water samples, and a loop was created in the bottle to prevent water vapor from migrating out of the sample reservoir.

Soil water sampling. In order to investigate the effects of slope position, vegetation and soil properties on water movement, three sampling lines were placed along slopes with similar gradients (11-17°), as illustrated in Fig. 6. According to the landform and the distribution of soil depth and vegetation, two sites were chosen for soil water sampling on each line, one in the footslope (SS) and the other in the depression (SD). SS-1, SS-2, and SS-3 were located in the footslopes, and the corresponding sites in the depression were SD-1, SD-2, and SD-3, respectively. Vegetation consisted of woodland at SD-2 and SS-2 and shrubland at all the other sites. The locations of the three sampling lines were selected to represent the vegetation types in the study area. Soil water was sampled using tension lysimeters (produced by the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences), which consisted of suction lysimeters, ceramic porous cups, sampling bottles, tubes, and pressure monitors [66]. A small hand pump was connected to each device, and a vacuum of approximately 80 kPa was created within the bottles. Soil water was collected at depths of 20, 40, 60, 80 and 100 cm through the ceramic porous cups and stored in the sampling bottles at each site. At SD-3, the soil depth was less than 100 cm; thus no water sample was collected at a depth of 100 cm, and the water samples at 60 and 80 cm depths were collected within the weathered sandy soil layer. Soil water was extracted at weekly intervals for 43 weeks from February 21, 2012 to February 28, 2013. At the same time, shallow groundwater levels were monitored in the depression at intervals of five and ten days during the rainy and dry seasons, respectively. The groundwater table varied seasonally (i.e., shallow during the rainy season and deep during the dry season) [65]; the depth to the water table ranged from 0.18 m to more than 5 m. This indicates that the distribution of precipitation strongly influences the groundwater table.
In addition, a groundwater level of less than 1 m was only found from May to July, under heavy rainfall, which indicates that most soil water samples were collected from unsaturated zones. Soil physical properties, namely the saturated water content (SWC), capillary moisture content (CMC) and bulk density (BD), were measured in the 0-100 cm soil profiles to evaluate their impact on soil water movement and storage at the six sites along the three lines. Near each soil water sampling site, a 1.2 m deep soil profile was dug (about 1 m deep at SD-3). Owing to the relative homogeneity of the horizontal soil distribution, two undisturbed soil samples were collected with an ordinary ring knife (100 cm³) at depths of 0-10, 10-20, 20-30, 30-40, 40-50, 50-70 and 70-100 cm. BD, SWC and CMC were measured in the laboratory for each undisturbed sample [9]. According to the vertical distribution of soil properties in the study area [15,67], the soil profile could be divided into three layers: 0-10, 10-50 and 50-70 cm for SD-3, and 0-10, 10-40 and 40-100 cm for the other sites (Table 3).

Stable isotope analysis. The deuterium and oxygen-18 contents of the water samples were analyzed with a DLT-100 liquid water isotope analyzer (Los Gatos Research (LGR), Inc., model 908-0008) at the Key Laboratory of Agro-ecological Processes in Subtropical Regions, Institute of Subtropical Agriculture, Chinese Academy of Sciences. Results are reported in δ notation relative to V-SMOW as

$$\delta = \left(\frac{R_{sample}}{R_{standard}} - 1\right) \times 1000\ \text{‰},$$

where R is the D/H or 18O/16O isotope ratio of the sample or the standard.

Damping depth estimation. The damping and lagging of the seasonal fluctuations with increasing depth or flow-path length in the subsurface soil and rock were used to compute an "isotopic damping depth", analogous to the damping depth computed for soil temperature fluctuations by sine-wave analysis [35]. The isotopic damping depth is defined as the soil depth at which the input signal is damped to a level similar to that of the mean base flow at the catchment outlet [19]. A functional relationship between input and output amplitudes can be represented as [35]

$$A_{Z_2} = A_{Z_1} \exp\left(-\frac{Z_2 - Z_1}{d_h}\right),$$

where d_h (the damping depth) is in cm, A_{Z_2} is the amplitude at depth Z_2 in ‰, and A_{Z_1} is the amplitude at depth Z_1 = 0 in ‰.

Mean water residence time analysis. To understand the MRT of soil water at different depths and positions, a lumped-parameter mathematical model based on the long-term isotope data, the dispersion model (DM), was used. The DM was adopted because it fits porous media and unsaturated soil zones better than other models [39]. The FlowPC software, version 3.1, distributed by the IAEA, was used [68-70]. The functional relationship between input and output can be represented as

$$C_{out}(t) = \int_{-\infty}^{t} C_{in}(t')\, g(t - t')\, dt',$$

where C_out and C_in are the δD and δ18O values of the soil water and rainfall samples, respectively; g(T) is the system response function, which specifies the residence time distribution of water within the system; t′ is the time of entry; and T = t − t′ is the residence time of the water, whose mean (the MRT) is obtained through calibration of the model. Given sufficient input and output data, g(T) can be determined. In the flux mode of the DM, the following one-dimensional solution of the dispersion equation for a semi-infinite medium is used as the response function:

$$g(T) = \frac{1}{T\sqrt{4\pi P_D\, T/\tau_m}} \exp\left[-\frac{(1 - T/\tau_m)^2}{4 P_D\, T/\tau_m}\right],$$

where τ_m is the mean residence time and P_D is the apparent dispersion parameter, which mainly depends on the distribution of travel times. Higher P_D values reflect greater inhomogeneity and a broader transit time distribution.
Consequently, the MRT can be obtained by determining the best fit between the simulated output and the measured data. The infiltration rate, which represents the fraction of precipitation entering the groundwater system in the observed month, is needed before the MRT can be calculated with the FlowPC software. The accuracy of fit of the simulations to the experimental data is computed using the SIGMA function,

$$\mathrm{SIGMA} = \frac{\sqrt{\sum_i (C_{m,i} - C_i)^2}}{n}, \qquad (6)$$

where C_{m,i} and C_i are the modeled and measured values and n is the number of observations; the related model efficiency is ME = 1 − Σ_i(C_i − C_{m,i})² / Σ_i(C_i − C_{mean})², where C_{mean} is the arithmetic mean of the measured values. Equation (6) is useful for testing breakthrough curves in artificial tracer tests and periodic output functions. In the case of stable isotopes, the seasonal variations are used to find a model.
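To make the lumped-parameter workflow concrete, the sketch below implements the DM response function and a brute-force calibration of the MRT and P_D against a weekly series, minimizing the SIGMA criterion. It is only a minimal stand-in for FlowPC: the input series is synthetic, the memory length and search grids are arbitrary choices, and the response function is the standard Maloszewski-Zuber form given above.

```python
import numpy as np

def dm_response(tau, T, PD):
    """Dispersion-model response function g(tau) with MRT T and dispersion PD."""
    x = tau / T
    return np.exp(-(1.0 - x) ** 2 / (4.0 * PD * x)) / (tau * np.sqrt(4.0 * np.pi * PD * x))

def dm_output(c_in, T, PD, n_mem=200):
    """Discrete convolution C_out(t) = sum_tau g(tau) * C_in(t - tau), dt = 1 week."""
    tau = np.arange(1, n_mem + 1, dtype=float)
    g = dm_response(tau, T, PD)
    g /= g.sum()                                   # normalize the discrete kernel
    c_pad = np.concatenate([np.full(n_mem, c_in.mean()), c_in])  # warm-up history
    out = np.empty(len(c_in))
    for i in range(len(c_in)):
        out[i] = np.dot(g, c_pad[i + n_mem - 1 :: -1][:n_mem])   # lags 1..n_mem
    return out

def fit_dm(c_in, c_obs):
    """Coarse grid search minimizing SIGMA = sqrt(sum((C_m - C)^2)) / n."""
    best = (np.inf, None, None)
    for T in np.arange(2.0, 81.0, 2.0):            # candidate MRTs (weeks)
        for PD in np.arange(0.1, 2.6, 0.1):        # candidate dispersion parameters
            sigma = np.sqrt(np.sum((dm_output(c_in, T, PD) - c_obs) ** 2)) / len(c_obs)
            if sigma < best[0]:
                best = (sigma, T, PD)
    return best

# Synthetic demo: the 'observed' output is generated with T = 20, PD = 0.6,
# so the search should recover those values.
weeks = np.arange(99)
c_rain = -50.0 + 30.0 * np.sin(2.0 * np.pi * weeks / 52.0)   # weekly rain dD (per mil)
sigma, T, PD = fit_dm(c_rain, dm_output(c_rain, 20.0, 0.6))
print(f"fitted MRT = {T:.0f} weeks, PD = {PD:.2f}, SIGMA = {sigma:.4f}")
```

In practice, the infiltration-rate weighting of the input series applied in FlowPC would be added before the convolution; the grid search above is simply the most transparent way to expose the trade-off between T and P_D.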
Application of Richardson extrapolation method to the CFD simulation of vertical-axis wind turbines and analysis of the flow field

There is still discrepancy regarding the verification of CFD U-RANS simulations of vertical-axis wind turbines (VAWTs). In this work, the applicability of the Richardson extrapolation method to assess mesh convergence is studied for several points in the power curve of a VAWT. A 2D domain of the rotor is simulated with three different meshes, monitoring the turbine power coefficient as the convergence parameter. This method proves to be a straightforward procedure for assessing the convergence of VAWT simulations. Guidelines regarding the required mesh and temporal discretization levels are provided. Once the simulations are validated, the flow field at three characteristic tip-speed ratio values (2.5, low; 4, nominal; and 5, high) is analyzed, studying the pressure, velocity, turbulent kinetic energy and vorticity fields. The results reveal two main vortex shedding mechanisms, blade- and rotor-related. Vortex convection develops differently depending on the rotor zone (upwind, downwind, windward or leeward). Finally, insight into the loss of performance at off-design conditions is provided. Vortex shedding at the low tip-speed ratio explains the loss of performance of the turbine, whereas at the high tip-speed ratio this loss may be ascribed to viscous effects and the rapid interaction between successive blade passings.

Introduction

Due to their higher power output, horizontal-axis wind turbines (HAWTs) have traditionally been preferred over vertical-axis wind turbines (VAWTs). Nevertheless, VAWTs are increasing in popularity, especially in urban areas, due to their particular characteristics (Tummala, Velamati, Sinha, Indraja, & Krishna, 2016). VAWTs are able to produce useful energy from lower wind speeds, and they work independently of the wind direction. As a result of their lower rotational speeds, their noise levels are lower. In addition, their installation and maintenance operations are much simpler, as the bearings and generator unit may be placed on the ground. All these advantages show the need for further research on VAWTs to overcome the main obstacles to their implementation, which are basically their difficulty in self-starting and the complexity of the flow developed through and around the turbine.

There are several methodologies for predicting the performance of VAWTs. Three main groups may be identified: analytical models, based on fluid equations developed from simplified descriptions of the physical phenomena; numerical models, based on the discretized resolution of the Navier-Stokes equations; and experimental tests, consisting of the fabrication and testing of a turbine prototype. In this work, the focus is set on numerical models, as they offer better accuracy than analytical models. In addition, although experimental tests are always desirable, the complete description of the flow field by experimental methods requires exhaustive measurement campaigns. At the turbine design stage, it would be impractical to build and test an experimental prototype of every turbine concept; at this stage, the potential of computational fluid dynamics (CFD) simulations to obtain a complete picture of the flow field represents a great advantage (Akbarian et al., 2018; Ardabili et al., 2018; Mou, He, Zhao, & Chau, 2017).
Nevertheless, when performing CFD analyses, there is sometimes a lack of verification and validation of the employed numerical codes. Some solutions are considered as 'valid' without a proper assessment of mesh and temporal numerical convergence. Besides, the literature shows different procedures to verify the validity of the simulations. Firstly, some authors compare the results of their simulations with experimental results, considering their discretizations valid if they resemble the experiments. Sui, Lee, Huque, and Kommalapati (2015) studied the transitional effect on a turbulence model for a wind turbine blade and Lee, Min, Park, and Kim (2015) and Li et al. (2016) studied the performance of a VAWT, validating their simulations with experimental results. Yang, Guo, Zhang, Jinyama, and Li (2017) validated their study of the tip vortex shedding from a VAWT with experiments as well. Lam and Peng (2016) validated their 2D and 3D simulations to study the wake of a VAWT with PIV measurements, whereas Abdalrahman, Melek, and Lien (2017) compared their results with existing CFD and experimental benchmarks to validate their model for the study of the effect of the blade pitch angle on the performance of a VAWT. The second procedure to assess the validity of the simulations is the study of numerical convergence. This procedure is typically performed by refining the mesh until the solution no longer changes. Although this refinement is performed by several authors in the literature, there are discrepancies that sometimes result in an unnecessary waste of numerical resources. Bhargav, Kishore, and Laxman (2016) used two different meshes and two time step sizes to study the influence of fluctuating wind conditions on a VAWT. Li et al. (2018) studied the convergence of their simulations for the point of maximum power coefficient of the turbine with two different meshes as well. Bianchini, Balduzzi, Bachant, Ferrara, and Ferrari (2017) also used two meshes, but refining the grid only in the rotating part of the domain. Chen, Chen, Huang, and Hwang (2017) analyzed a VAWT at its design point using three meshes and comparing their results with simulations from other authors. Chen and Lian (2015) used three meshes as well to investigate the vortex dynamics in a VAWT. Meng, He, Wu, Zhao, and Guo (2016) also employed three meshes for the simulation of an offshore VAWT. Make and Vaz (2015) analyzed the scaling effects on offshore wind turbines with five different meshes. Balduzzi, Bianchini, Ferrara, and Ferrari (2016) used five different meshes and ten different time step sizes to propose dimensionless numbers for the assessment of mesh and temporal requirements for the CFD simulation of VAWTs. Jin, Wang, Ju, He, and Xie (2018) used seven different meshes to verify grid independence. Lin, Lin, Bai, and Wang (2016) also used seven different levels of refinement near the wall region of the blades to study the effect of modifications in the blade trailing edge on the performance of a VAWT. Wang, Cot, Adolphe, and Geoffroy (2017) recently studied the capacity of wind concentration over a roof using eight different meshes. Finally, some authors combine the verification of grid independence and the validation with experimental results. Subramanian et al. (2017) used two different meshes and time step sizes and validated their results experimentally. 
Wekesa, Wang, Wei, and Zhu (2016) studied the effect of turbulence on the aerodynamic performance of a VAWT, using two meshes to test grid independence and validating the simulations with their own experiments. Lei et al. (2017) refined the grid close to the blades, generating three meshes, and used the grid with the medium refinement because its results more closely resembled the experimental ones. Qamar and Janajreh (2017) used three 2D VAWT meshes for their grid independence study and validated their simulations with experimental results. Tian, Mao, An, Zhang, and Wen (2017) used three meshes and validated experimentally the performance of a VAWT in the wake of moving vehicles. Réthoré, van der Laan, Troldborg, Zahle, and Sørensen (2014) verified and validated an actuator disk model for wind turbines using four different grid levels and validating the results experimentally. Marinić-Kragić, Vuina, and Milas (2018) performed 40 mesh modifications based on combinations of different element sizing values along the blade and near the trailing edge; then, after employing six different time step sizes, they validated their results experimentally. Finally, Rezaeiha, Kalkman, and Blocken (2017b) and Rezaeiha, Kalkman, Montazeri, and Blocken (2017) studied the effect of the shaft on the performance of a VAWT, applying the Richardson extrapolation method and validating their results experimentally.

As the previous paragraphs show, there is still a clear research gap with respect to the mesh and temporal requirements for the proper simulation of a VAWT, regardless of the availability (or not) of experimental data. As remarked by Lockard (2010), it is not unusual to find inconsistent and somewhat disappointing convergence properties in CFD codes, as shown by Vassberg and Jameson (2010). In this context, grid convergence analyses based on the Richardson extrapolation method have been used in different kinds of fluid dynamics problems (Celik & Li, 2005; Marchi et al., 2016; Roache, 1998; Tengs, Storli, & Holst, 2018). Some studies applying this methodology to the nominal working point of vertical-axis wind turbines may even be found in the literature (Almohammadi, Ingham, Ma, & Pourkashan, 2013; Rezaeiha et al., 2017b; Tingey & Ning, 2016; Zadeh, Komeili, & Paraschivoiu, 2014). Nevertheless, the wide operational range of a VAWT requires the simulation of different tip-speed ratio (λ) values, so it is difficult to accept that verifying and validating the numerical code at just the nominal λ is enough. In addition, due to the discrepancies between the numbers of mesh elements proposed by different authors in the literature, and even between the numbers of meshes and time step sizes used to assess the convergence of simulations, a straightforward procedure to assess numerical convergence in VAWT simulations becomes necessary for future research. For all these reasons, the applicability of the Richardson extrapolation method to several points in the useful range of the power curve of a VAWT is studied in this work. A 2D domain of the rotor has been simulated using three different meshes, monitoring the power coefficient of the turbine as the convergence parameter. Guidelines regarding the mesh near the airfoil boundary layers and the required temporal resolution are provided. Finally, once the simulations have been validated, the finest mesh has been used to perform an analysis of the flow field.
The results regarding the flow field reveal two main vortex shedding mechanisms in the rotor. In addition, insight is given into the loss of performance of the VAWT when it works at off-design conditions.

VAWT geometry, computational domain and mesh

A 3-bladed low-solidity VAWT with DU 06-W-200 airfoils and a radius of 0.5 m has been simulated in a 2D domain, as represented in Figure 2. The DU 06-W-200 airfoil, designed at Delft University, reportedly presents a good self-starting behavior (Claessens, 2006). For a straight-bladed turbine, the main three-dimensional aerodynamic effects are the blade tip effects, which are detrimental to the turbine performance. Nevertheless, with a sufficient turbine aspect ratio (height/radius), a 2D simulation of the turbine mid-plane is a reasonable choice. In fact, due to the relatively large number of cells required to model the full 3D turbine, most of the analyses in the literature are performed using 2D simulations. A range of tip-speed ratio values between 2 and 5.5 has been selected to compare the results at the nominal working point with two offset points, one in the dynamic stall region (λ = 2.5) and the other with the turbine producing a high flow blockage (λ = 5) (Amet, Maitre, Pellone, & Achard, 2009). The details of the turbine, in line with optimum parameters reported in the literature (Meana-Fernández, Solís-Gallego, Oro, Díaz, & Velarde-Suárez, 2018), are collected in Figure 1. A rotor tower has not been included in the design and has not been modeled, as not all VAWT design concepts include it. If desired, the grid requirements for the boundary layers of the blade airfoils may be easily extrapolated to the tower.

Figure 2 shows the different regions of the computational domain. An interface between the turbine rotating zone and the fixed zone has been placed at a distance of 5.5D (rotor diameters) from the rotor, in order to avoid distortion effects in the transfer of information between adjacent fluid regions. The total size of the circular computational domain is 12D. These values are in agreement with the values reported in the literature (Alaimo, Esposito, Messineo, Orlando, & Tumino, 2015; Almohammadi et al., 2013; Mohamed, Ali, & Hafiz, 2015; Rezaeiha, Kalkman, & Blocken, 2017a; Zadeh et al., 2014). Finally, in order to perform a mesh convergence study, three meshes with different levels of discretization have been generated using the software GAMBIT. The grids have been generated with triangular elements; however, quadrilateral elements have been employed in the near-wall regions (distance to the wall < 2 mm) for better control of the mesh growth. The position of the first mesh node has been calculated to ensure a y+ value of less than 1 (a quick estimate of the corresponding first-cell height is sketched at the end of this subsection). A view of the mesh around the airfoils is depicted in Figure 2, bottom. In summary, Mesh #1 (fine mesh) has 989,770 elements, while Mesh #2 (medium mesh) and Mesh #3 (coarse mesh) use progressively coarser discretizations.

Numerical solver

The incompressible unsteady Reynolds-Averaged Navier-Stokes equations have been solved with the commercial package ANSYS FLUENT, using the k-ω SST model (Menter, 1994) for the closure of turbulence, which combines the advantages of the k-ε and k-ω models in predicting aerodynamic flows, and in particular boundary layers under strong adverse pressure gradients (Argyropoulos & Markatos, 2015).
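The y+ < 1 requirement mentioned above translates into a first-cell height that can be estimated before meshing with a flat-plate correlation. The sketch below is such an estimate, not the authors' procedure; the skin-friction fit and the blade chord value are assumptions (the chord is not stated in this excerpt), and the relative velocity is approximated by λV∞ at the nominal point.

```python
import math

rho, mu = 1.225, 1.81e-5      # air density [kg/m^3] and dynamic viscosity [Pa s]
V_inf, lam = 9.0, 4.0         # free-stream velocity and nominal tip-speed ratio
W = lam * V_inf               # approximate blade relative speed at the nominal point
c = 0.1                       # hypothetical blade chord [m]
y_plus = 1.0                  # target wall coordinate

Re_c = rho * W * c / mu
Cf = 0.0576 * Re_c ** (-0.2)          # turbulent flat-plate skin-friction fit
tau_w = 0.5 * Cf * rho * W ** 2       # wall shear stress
u_tau = math.sqrt(tau_w / rho)        # friction velocity
y1 = y_plus * mu / (rho * u_tau)      # first-cell height for the target y+
print(f"Re_c = {Re_c:.2e}; first-cell height ~ {y1 * 1e6:.1f} micrometers")
```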
The boundary conditions applied may be identified in Figure 2. A constant inlet velocity of 9 m/s, a typical design value for VAWTs, and an outlet pressure equal to the atmospheric pressure have been set. The domain size, larger than 10 times the turbine diameter, ensures that the boundary conditions do not interfere with the flow developed inside the rotor (Alaimo et al., 2015; Almohammadi et al., 2013; Mohamed et al., 2015; Rezaeiha et al., 2017a; Zadeh et al., 2014). The sliding mesh technique has been applied thanks to the previously defined interface between the rotor and outer domain zones. Finally, the wall boundary condition has been applied to the rotor blades. The time step used to perform the simulations is discussed in the following section.

Grid and temporal convergence analysis

Before proceeding to the analysis of the simulation results, a grid convergence analysis was performed by means of the Richardson extrapolation method (Richardson & Gaunt, 1927). The Richardson extrapolation method, also known as 'h² extrapolation', 'the deferred approach to the limit' or 'iterated extrapolation', is a method for obtaining a higher-order estimate of the continuum value (the value at zero grid spacing) from a series of lower-order discrete values. As introduced by Roache (1997), a numerical simulation yields a quantity f that can be expressed as

$$f = f_{exact} + g_1 h + g_2 h^2 + g_3 h^3 + \cdots, \qquad (1)$$

where h is the grid spacing of the simulation. The functions g_i are defined in the continuum and thus are independent of the grid spacing; f_exact is the continuum value at zero grid spacing. As stated in Roache (1997), for a second-order method (g_1 = 0), by combining the results f_1 and f_2 from two different grids of spacing h_2 (coarse) and h_1 (fine) and neglecting third- and higher-order terms, an estimate for f_exact may be obtained, resulting in the original statement of Richardson and Gaunt (1927) for h² extrapolation:

$$f_{exact} \cong f_1 + \frac{(f_1 - f_2)\, h_1^2}{h_2^2 - h_1^2}. \qquad (2)$$

Defining the grid refinement ratio as

$$r = h_2 / h_1, \qquad (3)$$

Equation (2) may be rewritten as

$$f_{exact} \cong f_1 + \frac{f_1 - f_2}{r^2 - 1}. \qquad (4)$$

This equation may be generalized to pth-order methods (Roache, 1998) as

$$f_{exact} \cong f_1 + \frac{f_1 - f_2}{r^p - 1}. \qquad (5)$$

This method allows not only the estimation of the continuum value, but also provides a practical estimation of the grid refinement error due to the discretization of the simulation domain. For the practical application of the method, the guidelines proposed in ASME (2008) have been followed:

• First, a representative grid size parameter, relating the cell size and the number of cells, has been defined for the 2D domain as

$$h = \left[\frac{1}{N}\sum_{i=1}^{N} \Delta A_i\right]^{1/2}, \qquad (6)$$

where N is the number of cells and ΔA_i the area of cell i (Roache, 1998). In this work, the variable φ used to judge grid convergence is the time-mean power coefficient of the turbine, which has been monitored in the different simulations performed.

• Afterwards, the apparent order of the method has been calculated as

$$p = \frac{1}{\ln r_{21}}\left|\ln\left|\frac{\varepsilon_{32}}{\varepsilon_{21}}\right| + q(p)\right|, \qquad (7)$$

where ε_32 = φ_3 − φ_2 and ε_21 = φ_2 − φ_1 are the absolute errors of the variable of interest φ obtained with the three different meshes, r_21 = h_2/h_1 and r_32 = h_3/h_2 are the refinement ratios, and q(p) is a function depending on the refinement ratios between the meshes and the behavior of the solutions obtained as the grid is refined, defined as

$$q(p) = \ln\left(\frac{r_{21}^p - s}{r_{32}^p - s}\right), \qquad (8)$$

where s is the parameter indicating a monotonic or oscillatory behavior of the solution as the grid is refined:

$$s = \operatorname{sign}(\varepsilon_{32}/\varepsilon_{21}). \qquad (9)$$

Negative values of s indicate oscillatory convergence. In addition, if either ε_32 or ε_21 is very close to zero, the above procedure does not necessarily work. This breakdown may be ascribed to oscillatory convergence or, in some cases, it could mean that the exact solution has already been attained (Roache, 1997). As may be observed, Equations (7) and (8) must be solved iteratively.
• Once the values of p, q(p) and s have been obtained, the extrapolated value of the solution (the estimate of the exact solution) has been calculated in a similar way to Equation (5):

$$\phi_{ext}^{21} = \frac{r_{21}^p\, \phi_1 - \phi_2}{r_{21}^p - 1}. \qquad (10)$$

• Then, the estimates for the relative error and the extrapolated relative error have been obtained as

$$e_a^{21} = \left|\frac{\phi_1 - \phi_2}{\phi_1}\right| \qquad (11)$$

and

$$e_{ext}^{21} = \left|\frac{\phi_{ext}^{21} - \phi_1}{\phi_{ext}^{21}}\right|. \qquad (12)$$

• Finally, the grid convergence index (GCI) has been used as an indicator of the mesh convergence level:

$$GCI^{21} = \frac{F_S\, e_a^{21}}{r_{21}^p - 1}, \qquad (13)$$

where F_S is a safety factor for the calculation of this index, which may be set to 1.25 when three different meshes are available (Roache, 1998). As an additional step, if a certain grid convergence level GCI* were desired, the required grid refinement ratio r* with respect to the finer mesh might be obtained as

$$r^* = \left(\frac{GCI^{21}}{GCI^*}\right)^{1/p}. \qquad (14)$$

An example of the values obtained with this method for the three working points with maximum performance, λ = 3.5, 4 and 4.5, is displayed in Table 2. For every case, it was observed that the mean value of the power coefficient of the turbine attained convergence after 6 rotor revolutions, as shown in Figure 3. Thus, the values presented in this work correspond to the seventh revolution of the rotor. Although this value is in discrepancy with results from other authors (values over 20 revolutions have been reported in the literature, e.g. Rezaeiha et al., 2017b), the necessity of a practical model determined this choice (computational times are around 1 week on a 4-node Intel Core i7-5820K at 3.3 GHz with 64 GB RAM). In addition, the extrapolation towards t → ∞ of the mean C_P value using an exponential function gave practically the same value as that monitored in the last time step. This result is interesting for industrial purposes, as it is possible to save simulation time by performing just a few rotor revolutions and then extrapolating the result towards infinity if only the C_P value is sought.

The results of the Richardson extrapolation method are collected in Figure 4. This figure shows the power coefficient values obtained with the three meshes at different tip-speed ratios and the extrapolation performed towards h → 0, which corresponds to a mesh composed of an infinite number of elements. It may be observed that, at values near the nominal working point of the turbine, λ = 3.5-4.5, the mesh convergence levels are very good (below 2%, and even 0.043% at the nominal working point). Outside this region, the mesh convergence is not as good. As previously introduced, if the differences between the values of the magnitudes used to study mesh convergence are very small, the procedure might not reach a trustworthy solution (either there is oscillatory convergence and/or an exact solution has already been attained). The breakdown of the method clearly happens at the low tip-speed ratio (λ = 2.5), where oscillatory convergence is clearly observed in Figure 4. This result is consistent with the issues found by Celik and Li (2005) when extrapolating cases with non-monotonic convergence. Hence, a new simulation with a mesh one level finer than Mesh #1 was performed. The extrapolation method was applied again with this new mesh and meshes #1 and #2 from the previous analysis, as shown in Figure 4 (λ = 2.5). It may be observed that the method converges, avoiding oscillatory behavior and providing a more reasonable value for the power coefficient.
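The procedure of Equations (6)-(13) is easy to script. The sketch below is a minimal Python implementation with a fixed-point iteration for the apparent order p; the three power-coefficient values and grid sizes in it are hypothetical, and the near-zero-ε breakdown discussed above is left unguarded for brevity.

```python
import math

def gci_richardson(phi, h, Fs=1.25, tol=1e-10, max_iter=100):
    """Richardson extrapolation / GCI from three grids (fine, medium, coarse).

    phi: (phi1, phi2, phi3) monitored values; h: (h1, h2, h3) grid sizes, h1 < h2 < h3.
    Breaks down if eps21 or eps32 is close to zero (oscillatory or exact solution).
    """
    phi1, phi2, phi3 = phi
    h1, h2, h3 = h
    r21, r32 = h2 / h1, h3 / h2
    e21, e32 = phi2 - phi1, phi3 - phi2
    s = math.copysign(1.0, e32 / e21)      # s < 0 signals oscillatory convergence

    p = 2.0                                # initial guess for the apparent order
    for _ in range(max_iter):
        q = math.log((r21 ** p - s) / (r32 ** p - s))
        p_new = abs(math.log(abs(e32 / e21)) + q) / math.log(r21)
        if abs(p_new - p) < tol:
            p = p_new
            break
        p = p_new

    phi_ext = (r21 ** p * phi1 - phi2) / (r21 ** p - 1.0)  # extrapolated value
    e_a = abs((phi1 - phi2) / phi1)                        # approximate rel. error
    gci = Fs * e_a / (r21 ** p - 1.0)                      # grid convergence index
    return p, phi_ext, e_a, gci

# Hypothetical C_P values on the fine, medium and coarse meshes
p, cp_ext, e_a, gci = gci_richardson(phi=(0.390, 0.385, 0.370), h=(1.0, 1.5, 2.25))
print(f"p = {p:.2f}, extrapolated C_P = {cp_ext:.4f}, GCI = {100 * gci:.2f}%")
```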
On the other hand, the thin attached boundary layers that arise at the highest tip-speed ratios (λ = 5 and 5.5), shown in the next section, reveal the importance of refining the mesh in the boundary layer regions in order to model the flow behavior correctly. Figure 5 shows the CFD prediction of the power coefficient of the turbine with the three different meshes, alongside the power curve predicted by an analytical double-multiple streamtube model developed by the authors. Both methods predict the maximum turbine performance at the same tip-speed ratio. Despite the assumptions made by streamtube models about the nature of the flow (wake not modeled, downwind zone of the rotor assumed to be in the fully expanded wake of the upwind zone, influence of the downwind zone on the upwind zone neglected), the slopes of the power curves predicted by the streamtube model and by the finest mesh match.

In addition, some comments may be made about the results of the Richardson extrapolation method presented before. At first sight, for the lower tip-speed ratios, the difference in the power coefficient predicted by the three meshes is almost negligible. From Figure 5 and from the results of the extra analysis with a finer mesh commented on before, it may be concluded that an accurate enough solution had already been found with the coarsest mesh, which explains the breakdown of the extrapolation method at λ = 2.5. For the higher tip-speed ratio values, however, the differences between the power coefficient values of the three meshes are larger. As previously commented, this might indicate that the number of cells across the airfoil boundary layer in the coarser meshes is too low. Nevertheless, as the turbine is unlikely to operate at tip-speed ratios higher than the nominal one, the fine mesh may be considered accurate enough for a practical study of the turbine.

For further validation of the accuracy of the procedure and of the results of the finest mesh, the evolution of the streamwise and cross-streamwise forces on an airfoil during a whole turbine rotation is shown in Figure 6 for three different tip-speed ratios (λ = 2.5, low; 4, nominal; and 5, high). This figure also shows the evolution of the aerodynamic torque of the turbine. The results for all the variables show that convergence is very good for the low and nominal tip-speed ratios, where it becomes difficult to distinguish between the results of the three meshes during the whole rotation cycle. For the highest tip-speed ratio, even though it is possible to differentiate the curves of the three meshes (especially for the aerodynamic torque), the agreement between the results is also relatively good. Therefore, the discretization of the finest mesh may be considered sufficient to model the flow behavior accurately. Regarding the low tip-speed ratio λ = 2.5, the force oscillations present in the downwind zone of the turbine will certainly cause oscillations in the produced aerodynamic power. This effect must be considered when designing or selecting the generator and the electronic control system for the turbine.

Finally, in order to select an adequate temporal resolution, the finest mesh has been used to perform a convergence study of the time step size.
Three discretization levels have been tested, corresponding to rotor advancements of 1°, 0.5° and 0.25° per time step. Following the guidelines of the Richardson extrapolation method, a spatial-temporal index h_ST has been defined as a combination of N_s, the number of cells; N_t, the number of time steps per rotor revolution; A_i, the size of cell i; and ω, the rotational speed of the turbine. Figure 7 shows the results of this extrapolation, which yields a T-GCI (Temporal Grid Convergence Index) of 0.05% for the medium-to-smallest time step at λ = 4. Hence, all the calculations presented in this study have been performed using a temporal discretization corresponding to a rotor advance of 0.25° per time step, that is, 1440 time steps per rotor revolution.

Comparison with existing benchmarks

In an attempt to compare the results of this study with the existing literature, a small benchmark has been compiled with results from other authors, seeking VAWTs with similar solidity values and similar blade airfoils (Bedon, Betta, & Benini, 2016; Delafin, Nishino, Wang, & Kolios, 2016; Sabaeifard, Razzaghi, & Forouzandeh, 2012). Table 3 collects the main characteristics of each VAWT, whilst Figure 8 shows the comparison of the optimal tip-speed ratio of the turbines depending on their solidity.

Figure 8. Comparison of the optimal tip-speed ratio of the studied turbine with existing results for similar turbines.

It may be appreciated that the maximum power coefficient is attained at similar values of the tip-speed ratio for turbines with similar solidity values (Bedon et al., 2016; Delafin et al., 2016), and that the turbine studied in this work, with the same airfoil and a similar freestream Reynolds number to the turbine presented in Sabaeifard et al. (2012), fits into the trend of the values obtained by these authors. The turbine from Delafin et al. (2016) attains a higher power coefficient than the rest of the turbines compared. It employs a NACA 0015 airfoil, which has shown a higher efficiency than thinner or thicker airfoils from its family, such as the NACA 0018 (Gosselin, Dumas, & Boudreau, 2013; Meana-Fernández et al., 2018), and it was modeled using a vortex model, which may overpredict results when compared with CFD simulations. All these facts may explain why this turbine presents higher power coefficient values. The differences between the other turbines may be ascribed either to the differences in the Reynolds numbers (it has been observed that an increase in the Reynolds number shifts the power curve upwards and leftwards; Bausas & Danao, 2015; Meana-Fernández et al., 2018; Paraschivoiu, 2002) or to the airfoil used to build the blades. Despite the differences observed between the curves, the results of this study may be considered consistent with those found in the literature. In order to provide an additional source of validation, CFD results from Sabaeifard et al. for different VAWT solidities have been plotted in Figure 9 and compared with the power curve of the turbine simulated in this study. It may be appreciated that the turbine fits perfectly into the graph regarding both the shape of the power curve and the optimal tip-speed ratio at which the maximum power coefficient is attained. The smaller value of the peak coefficient may be ascribed to the lower Reynolds number of the turbine of this study, as previously commented.

Figure 9. Position of the power curve of the studied turbine with respect to its solidity.
Comparison with existing benchmarks
In an attempt to compare the results of this study with the existing literature, a small benchmark has been developed with results from other authors, trying to find VAWTs with similar solidity values and similar blade airfoils (Bedon, Betta, & Benini, 2016; Delafin, Nishino, Wang, & Kolios, 2016; Sabaeifard, Razzaghi, & Forouzandeh, 2012). Table 3 collects the main characteristics of each VAWT, whilst Figure 8 shows the comparison between the optimal tip-speed ratio of the turbines depending on their solidity. It may be appreciated that the maximum power coefficient is attained at similar values of the tip-speed ratio in the case of turbines with similar solidity values (Bedon et al., 2016; Delafin et al., 2016) and that the turbine studied in this work, with the same airfoil and a similar freestream Reynolds number to the turbine presented in Sabaeifard et al. (2012), fits into the trend of the values obtained by these authors.
Figure 8. Comparison of the optimal tip-speed ratio of the studied turbine with existing results for similar turbines.
The turbine from Delafin et al. (2016) attains a higher power coefficient than the rest of the turbines compared. It employs a NACA 0015 airfoil, which has shown a higher efficiency than thinner or thicker airfoils from its family, such as NACA 0018 (Gosselin, Dumas, & Boudreau, 2013; Meana-Fernández et al., 2018), and has been modeled using a vortex model, which might overpredict results when compared to CFD simulations. All these facts may explain why this turbine presents higher power coefficient values. The differences between the other turbines may be ascribed either to the differences in the Reynolds numbers (it has been observed that an increase in the Reynolds number shifts the power curve up- and leftwards (Bausas & Danao, 2015; Meana-Fernández et al., 2018; Paraschivoiu, 2002)) or to the airfoil used to build the blades. Despite the differences observed between the curves, it may be considered that the results of this study are consistent with the results found in the literature. In order to provide an additional source of validation, CFD results from Sabaeifard et al. (2012) for different VAWT solidities have been plotted in Figure 9 and compared with the power curve of the turbine simulated in this study. It may be appreciated that the turbine fits perfectly into the graph regarding both the shape of the power curve and the optimal tip-speed ratio at which the maximum power coefficient is attained. The smaller value of the peak coefficient may be ascribed to the lower Reynolds number of the turbine of this study, as previously commented.
Figure 9. Position of the power curve of the studied turbine with respect to its solidity.
Numerical description of the flow field
Once the CFD methodology has been verified and compared with existing results from other authors, it has been used to analyze the flow behavior around the turbine at three different tip-speed ratio values: λ = 2.5, corresponding to a working point in the region where vortex shedding is predominant; λ = 4, corresponding to the nominal working point; and λ = 5, corresponding to a working point past the nominal one, with highly attached boundary layers and predominant viscous effects. The pressure, velocity, turbulent kinetic energy and vorticity fields have been analyzed.
Pressure on the turbine blades
Figures 10-12 show the chordwise distribution of the pressure coefficient on a blade for λ = 2.5, λ = 4 and λ = 5, respectively. The strong pressure gradients observed corroborate the selection of the k-ω SST model for turbulence modeling (Alaimo et al., 2015). The maximum pressure differences in the distribution arise in the upwind part of the turbine, the first stage of power extraction from the wind. In addition, differences between the windward and leeward regions of the turbine may be identified, with the blades in the windward zones receiving greater pressure from the incoming wind. The blades have been found to exhibit greater pressures in the region between 30° and 150°, which corresponds to one third of the whole rotating path. This fact may explain why 3-bladed turbines are predominant, as with this design only one blade remains in the high-loading zone at every instant. Regarding the results, the worst position of the blade seems to be 60°. Finally, the effects of the increase of the rotational speed of the turbine are easily identified. At λ = 2.5, the pressure coefficient differences between the pressure and suction sides of the airfoil are only slightly higher in the windward region. The rest of the regions do not suffer great shifts in the pressure distribution along the blades, which may be correlated with the low power extraction capacity of the turbine at this operational point. At λ = 4, the pressure coefficient distribution varies according to the blade position, and it may be concluded that significant power is being extracted at several positions during the rotor cycle. Finally, at λ = 5, the pressure coefficient values increase with respect to the nominal working point, but the lower power extraction at this operational point suggests that this rise in pressure does not translate into effective power generation, maybe due to the higher viscous effects on the airfoil surface that arise as a consequence of the higher rotational speed.
Velocity field
Contours of normalized velocity V/V∞ for a whole rotor cycle are shown in Figure 13 for the three tip-speed ratio values studied. At first sight, a wake of size D is clearly identifiable, similar to the one shed by a cylinder with the same diameter as the turbine. The vortices shed by the blades at λ = 2.5 are also easy to identify. It may also be appreciated that the blockage produced by the rotor in the wind current is very low; hence, it is not surprising that the turbine is not capable of extracting enough power at this operational point. Comparing the contour values at λ = 4 and 5, the greater blockage effect of the rotor at high tip-speed ratio values may be confirmed. This increase in the blockage, however, does not translate into a greater power extraction; hence, the blockage level at λ = 4 seems to be the optimal one to maximize energy harvesting.
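The contrast between the windward and leeward zones, and the limited angles of attack reached at high tip-speed ratios discussed below, follow directly from the rotor kinematics. The following sketch computes the purely geometric relative velocity and angle of attack of a blade; induced velocities are neglected, so these are textbook estimates in the spirit of Paraschivoiu (2002), not the CFD solution:

```python
import numpy as np

def blade_kinematics(theta_deg, tsr):
    """Geometric blade kinematics of a VAWT at azimuth theta (degrees)
    and tip-speed ratio tsr, neglecting induced velocities.
    Returns W/V_inf and the angle of attack in degrees."""
    th = np.radians(theta_deg)
    w_rel = np.sqrt(1.0 + 2.0 * tsr * np.cos(th) + tsr**2)   # W / V_inf
    alpha = np.degrees(np.arctan2(np.sin(th), tsr + np.cos(th)))
    return w_rel, alpha

theta = np.arange(0.0, 360.0, 1.0)
for tsr in (2.5, 4.0, 5.0):
    _, alpha = blade_kinematics(theta, tsr)
    print(f"lambda = {tsr}: max geometric angle of attack = {alpha.max():.1f} deg")
```

For λ = 2.5, 4 and 5 this gives maximum geometric angles of attack of roughly 24°, 14° and 12°, respectively, consistent with deep stall and vortex shedding at the low tip-speed ratio and attached boundary layers at the high ones.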
Turbulent kinetic energy
Figure 14 shows the contours of normalized turbulent kinetic energy (TKE) for λ = 2.5, 4 and 5. This type of contour map is very useful to identify the main zones of turbulence production and its further convection downstream. The turbulent kinetic energy is defined as

$$k = \frac{1}{2}\,\overline{u_i' u_i'},$$

where $u_i'$ are the turbulent velocity fluctuations in the flow. Vortex shedding phenomena are clearly visible in the case of λ = 2.5, which start to roll up behind the airfoil before being convected downstream. Leaving aside the obvious differences between the upwind and downwind parts of the rotor (in the upwind part the wind comes "cleanly" onto the blades, whereas in the downwind part blade performance is clearly affected by the wakes shed from the upwind part), a totally different behavior is present between the windward and leeward zones of the rotor. In the windward zone, the blades move towards the wind at higher relative velocities and generate much narrower wakes. In addition, the air flowing through the turbine convects the vortices before they can start to roll up, resulting in a much cleaner flow pattern, such that it is very easy to identify the path followed by the blades during rotation. On the other hand, on the leeward side of the rotor, vortices are shed from the blades and roll up before being convected. This results in more intricate flow patterns, where the new vortices shed from the incoming blades are convected across a sea of vortices shed by the previous blades. The flow pattern becomes blurred; nevertheless, it is still possible to associate each wake with its corresponding blade. This difference between the leeward and windward regions is really interesting and can already be detected in Figure 6. In fact, it is the explanation for the oscillatory behavior of the forces on the blade that is appreciated in Figure 6, top (see angles in the downwind-leeward region). Finally, despite the apparent simplicity of the turbulent kinetic energy contours at λ = 4 and 5, some comments may still be made from the comparison between the contours of Figure 14 to give insight into why the turbine is more effective at the lower of these two tip-speed ratios. Taking a closer look at the downwind-leeward region, when a blade enters this region it must pass across the turbulent kinetic energy traces left by the previous ones: four traces in the case of λ = 4, six for λ = 5. This translates into a loss of efficiency in the performance of the airfoil in this region. This effect adds to the increasing viscous forces due to the rotational speed of the turbine, and may contribute to explaining why further increases in the tip-speed ratio are no longer beneficial to the turbine performance. In addition, on the windward side of the rotor, the 'turbulent kinetic energy wake' is much longer at λ = 5 than at λ = 4. This could be attributed to a poorer airfoil performance in that region, as the turbine rotates so quickly that there is no time for the wakes shed on this side of the turbine to drift away from the rotor before the next blade comes and sheds its own wake. In addition, the high tip-speed ratio of the turbine prevents the airfoil from reaching angles of attack high enough to ensure significant lift generation.
Vorticity
Finally, contours of normalized in-plane vorticity are shown in Figure 15. Vorticity, which is an indicator of the tendency of the fluid to rotate, has been made dimensionless with the rotational speed of the turbine $\omega_r$ and is defined by

$$\omega_z = \frac{\partial u_y}{\partial x} - \frac{\partial u_x}{\partial y},$$

where $u_i$ is the velocity in the $i$ direction. Vorticity contours show two different mechanisms of vortex shedding and wake development. The first one, of greater magnitude, is vortex shedding from the blades during rotation. Two opposite-sign high-vorticity regions may be identified after each blade trailing edge. The second one arises from the combination of the turbine rotation and the incoming wind velocity. The vortices shed from the blades during their rotation are convected downstream by the incoming wind, forming different patterns depending on the tip-speed ratio. The combination of the turbine rotation and the wind velocity thus generates a wake of a size around the turbine diameter D. This wake presents lower values of vorticity, as the vortex mixing process starts as soon as the vortices leave the blades and makes the vorticity values drop. At λ = 2.5, the turbine wake is full of unsteadiness due to the delayed dissipation of the large vortices shed from the turbine blades. Nevertheless, two different regions with opposite vorticity signs on the leeward and windward zones of the rotor may be identified. Looking at the contours for λ = 4 and 5, the distinction between leeward and windward regions becomes even more evident, with the wake divided into two halves of opposite vorticity. The size of the wake D shed by the whole rotor and its characteristics resemble the wake shed by a cylinder of the same diameter. Following this line of thinking, if VAWT farms were to be simulated, it would be possible to save computational costs by modeling the individual turbines as cylinders, provided they operate at sufficiently high rotational speeds. Finally, comparing the contours at λ = 4 and 5, the greater number of shifts in the sign of vorticity at λ = 5 may be correlated with a greater generation of vortices. Although the boundary layers are more attached to the blades, the interaction between successive blade passings contributes to generating greater levels of unsteadiness inside the rotor that finally translate into performance losses.
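For readers post-processing similar simulations, the two field quantities defined in this section can be evaluated on a uniform grid in a few lines of NumPy. This is a generic sketch (the grid layout and variable names are assumptions, not the post-processing used for Figures 14 and 15):

```python
import numpy as np

def turbulent_kinetic_energy(u_fluct, v_fluct):
    """k = 1/2 <u_i' u_i'> from in-plane velocity fluctuation samples
    stacked along axis 0 (2D simulation: no spanwise contribution)."""
    return 0.5 * (np.mean(u_fluct**2, axis=0) + np.mean(v_fluct**2, axis=0))

def normalized_vorticity(u, v, dx, dy, omega_r):
    """In-plane vorticity w_z = dv/dx - du/dy on a uniform grid with
    spacings dx, dy, made dimensionless with the rotor speed omega_r.
    Fields are indexed as field[j, i]: y along axis 0, x along axis 1."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return (dv_dx - du_dy) / omega_r
```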
Conclusions
The applicability of the Richardson extrapolation method to the CFD simulation of vertical-axis wind turbines has been verified. A 3-bladed low-solidity VAWT with DU 06-W-200 airfoils was simulated in a 2D domain using U-RANS k-ω SST modeling. A convergence study applying this method to three meshes with different levels of discretization determines the level of uncertainty of the final mesh. When convergence problems arise, an extra analysis with a finer mesh is enough to ensure convergence of the method and the adequate selection of the mesh. It has been concluded that performing the grid convergence analysis at only one point of the turbine power curve, as typically done in the literature, does not guarantee the same level of accuracy for the whole curve, especially at high tip-speed ratios. Specifically, a discretization of 40 cells in the first 2 mm in the cross-streamwise direction from the airfoil wall and 12 cells/mm in the streamwise direction along the airfoil chord is enough to capture all the relevant fluid phenomena. Additionally, a new spatial-temporal h_ST index for the assessment of the temporal discretization may be defined using the Richardson extrapolation method as well. It was concluded that a rotor advancement of 0.25° per time step is adequate. With these conditions, reasonable agreement was found between the results of this work and existing benchmarks. The flow behavior (pressure, velocity, turbulent kinetic energy and vorticity) of the turbine shows two main vortex shedding mechanisms: vortex shedding from the blades during rotation and the interaction of the turbine rotation with the incoming wind. Vortex convection develops differently depending on the rotor zone (upwind, downwind, windward or leeward), but ultimately generates a downstream wake of the size of the turbine diameter. The performance of the turbine depends on the tip-speed ratio, with strong vortex shedding at small tip-speed ratios being responsible for the poor turbine performance there. At high tip-speed ratios, the combination of viscous effects in the boundary layers and the increased interaction between successive blade passings also decreases the turbine performance. This study does not consider three-dimensional effects, so future work should include 2.5D or full 3D simulations using the discretization values from the finest mesh. Additionally, experimental tests on the prototype to determine the power curve of the turbine would be helpful to provide a deeper validation of the developed simulations.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work has been supported by the 'FPU' predoctoral research scholarship provided by the Spanish Ministry of Education, Culture and Sports. The authors also want to acknowledge the support from the projects 'Desarrollo de una herramienta de diseño optimizado de perfiles aerodinámicos para su utilización en turbinas eólicas de eje vertical' from the University Institute of Industrial Technology of Asturias, financed by the City Council of Gijón, Spain; 'Diseño optimizado de una turbina eólica de eje vertical' from the University of Oviedo Foundation, financed by the company AST Ingeniería; and 'Desarrollo y construcción de turbinas eólicas de eje vertical para entornos urbanos' (ENE2017-89965-P) from the Spanish Ministry of Economy, Industry and Competitiveness.
Impact of Hyperhomocysteinemia and Different Dietary Interventions on Cognitive Performance in a Knock-in Mouse Model for Alzheimer's Disease
Background: Hyperhomocysteinemia is considered a possible contributor to the complex pathology of Alzheimer's disease (AD). For years, researchers in this field have discussed the apparent detrimental effects of the endogenous amino acid homocysteine in the brain. In this study, the roles of hyperhomocysteinemia driven by vitamin B deficiency, as well as potentially beneficial dietary interventions, were investigated in the novel AppNL-G-F knock-in mouse model for AD, simulating an early stage of the disease. Methods: Urine and serum samples were analyzed using a validated LC-MS/MS method and the impact of different experimental diets on cognitive performance was studied in a comprehensive behavioral test battery. Finally, we analyzed brain samples immunohistochemically in order to assess amyloid-β (Aβ) plaque deposition. Results: Behavioral testing data indicated subtle cognitive deficits in AppNL-G-F compared to C57BL/6J wild type mice. Elevation of homocysteine and homocysteic acid, as well as counteracting dietary interventions, mostly did not result in significant effects on learning and memory performance, nor in a modified Aβ plaque deposition in 35-week-old AppNL-G-F mice. Conclusion: Despite prominent Aβ plaque deposition, the AppNL-G-F model merely displays a very mild AD-like phenotype at the investigated age. Older AppNL-G-F mice should be tested in order to further investigate potential effects of hyperhomocysteinemia and dietary interventions.
The present exploratory animal study concentrates on the role of hyperhomocysteinemia, driven by vitamin B deficiency, in the context of AD. Therefore, we used the novel and not yet fully characterized App NL-G-F knock-in mouse as a model of the disease. The App NL-G-F mouse is expected to display a mildly impaired phenotype, simulating the very early preclinical period of AD pathology and thus should provide the possibility of assessing preventive interventions adequately. A versatile behavioral test battery should firstly assess potential deterioration of cognitive performance by hyperhomocysteinemia. Secondly, behavioral testing should clarify whether special diets enhance cognition and potentially could serve as preventive measures for AD. Here, we compared B-vitamins and PUFAs with a more complex micronutrient mixture similar to Fortasyn ® Connect [35]. HCys and HCA levels were measured in urine and serum using a validated LC-MS/MS method (liquid chromatography-tandem mass spectrometry) and the quantity of Aβ plaques in the brains was assessed.
Materials and Methods
A detailed description of all experimental procedures including the single behavioral testing systems, analytical methodologies and quality parameters of the current study can be found in Appendix A.
Animals and Experimental Diets
All experimental procedures were carried out in compliance with the '3R' principles and in accordance with the Principles of Laboratory Animal Care (National Institutes of Health publication no. 86-23, revised 1985), the Directive 2010/63/EU and the regulations of GV-SOLAS, and were approved by the local Ethics Committee for Animal Research in Darmstadt, Germany (approval number: F152/1011; approval date: 31.07.2017). In the current study, 16 C57BL/6J wild type mice (WT) and 96 homozygous App NL-G-F knock-in (KI) mice, consisting equally of males and females, were included. AIN93M chow served as a basis for the experimental diets and was modified, defining the different groups of App NL-G-F mice (Table 1). The exact composition of the diets is summarized in Table A1. Each mouse received four grammes of diet per day, except for the period of food restriction for males during the touchscreen PAL task. Water was available ad libitum, except for the period of temporally conditioned water access for females during the IntelliCage experiment.
Table 1. Experimental groups.
Group | Genotype | Diet | Abbreviation
1 | C57BL/6J wild type | Control | WT
2 | App NL-G-F knock-in | Control | C (KI)
3 | App NL-G-F knock-in | Vitamin B deficient | B-DEF
4 | App NL-G-F knock-in | Vitamin B enriched | B-ENR
5 | App NL-G-F knock-in | PUFA supplemented | PUFA-ENR
6 | App NL-G-F knock-in | Vitamin B enriched and PUFA supplemented | B+PUFA-ENR
7 | App NL-G-F knock-in | Fortasyn ® Connect-like | FC
Behavioral Testing
The testing battery we conducted consisted of diverse behavioral tests investigating different domains of cognition in the animals (Figure 2). At the age of 15 weeks, resp. 10 weeks on diet, the mice were first tested in the open field, followed by the elevated zero maze, Barnes maze and social interaction test. Finally, males were tested in a touchscreen task and females in the IntelliCage system. Outcomes of every behavioral experiment were assessed automatically by camera or transponder detection. All experiments were performed between 8 a.m. and 3 p.m. during the light phase. After each trial, testing systems were cleaned with 70% ethanol to remove odors in the devices and to achieve comparable conditions for each animal.
Sample Collection
As illustrated in Figure 2, serum and 24-h urine of the mice were sampled after 8 and 30 weeks on experimental diets, resp. 13 and 35 weeks of age. The biological matrices were stored at −80 °C for subsequent analysis of HCys and HCA. At the end of the study, we euthanized all animals at the age of 35 weeks in order to harvest the brains. Brains were removed and post-fixed in 4% paraformaldehyde, followed by a stepwise dehydration and embedding in paraffin. Ten µm thick sections were cut and mounted on glass slides for subsequent immunohistochemical analysis.
Biochemical and Immunohistochemical Analyses
The determination of HCA was performed as previously described in detail [36], using a combination of protein precipitation and solid phase extraction for sample preparation, followed by an LC-MS/MS analysis combining a HILIC separation with tandem mass spectrometry. HCys was analyzed using protein precipitation in combination with reversed phase chromatography and tandem mass spectrometry. Brain sections were immunohistochemically stained for amyloid-β peptides (Aβ) using an ABC/DAB protocol that is described in detail in Appendix A. After digitization of the sections, we analyzed the resulting images for the area of Aβ plaques in several regions of interest (ROI; Table A2) using ImageJ software.
Statistical Analyses
All experiments were statistically analyzed using IBM SPSS Statistics 25 (Ehningen, Germany). For each test, we conducted an outlier analysis in order to exclude extreme outliers (more than three times the interquartile range). Shapiro-Wilk tests revealed whether a Gaussian distribution could be assumed or not. Because several data sets did not show a normal distribution, statistically significant differences were tested with non-parametric Mann-Whitney U tests (comparison 1: C57BL/6J (group 1) versus App NL-G-F control (group 2); comparison 2: App NL-G-F control (group 2) versus App NL-G-F on special diets (groups 3-7)). A p value lower than 0.05 was considered statistically significant. Results were expressed as median ± interquartile range (IQR). Where applicable, medians were further compared to hypothetical medians using the non-parametric one-sample Wilcoxon signed rank test. Graphical presentation was performed using GraphPad Prism 7 software (San Diego, CA, USA).
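As a minimal sketch of this analysis pipeline in code form (the study itself used SPSS; the group data here are hypothetical):

```python
import numpy as np
from scipy import stats

def drop_extreme_outliers(x):
    """Exclude extreme outliers: values more than 3x IQR beyond Q1/Q3."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x >= q1 - 3 * iqr) & (x <= q3 + 3 * iqr)]

def compare_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check, then Mann-Whitney U comparison."""
    a, b = drop_extreme_outliers(a), drop_extreme_outliers(b)
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    u_stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return {"normal": normal, "U": u_stat, "p": p, "significant": p < alpha}

# Hypothetical serum HCys values (ng/mL), WT (group 1) vs. KI control (group 2)
wt = [980, 1100, 1050, 990, 1130, 1010, 970, 1060]
ki = [1040, 1120, 990, 1150, 1080, 1005, 1095, 1030]
print(compare_groups(wt, ki))

# One-sample Wilcoxon signed rank test of a ratio against the hypothetical
# median 0.5, as used below for the habituation and social ratios
ratios = np.array([0.42, 0.47, 0.44, 0.51, 0.39, 0.46])
print(stats.wilcoxon(ratios - 0.5))
```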
Results
Homocysteine and Homocysteic Acid
LC-MS/MS analysis was performed in order to measure HCys and its oxidative metabolite HCA in serum and urine samples. Vitamin B deficiency resulted in an elevation of both HCys and HCA serum levels in males and females after 8 weeks on the experimental diet (HCys male p < 0.001, female p = 0.001; HCA (pooled) p < 0.001) (Figure 3A,C). A consistent statistically significant difference between C57BL/6J wild type (WT) and App NL-G-F knock-in (KI) mice was not observed. Dietary interventions resulted in decreased serum levels of HCys (PUFA-ENR male p = 0.001, female p = 0.005; B+PUFA-ENR male p < 0.001, female p = 0.026; FC male & female p < 0.001). Serum samples had to be pooled for an adequate analysis of HCA because of the low sample volumes obtained by vena facialis puncture (Figure 3C). Because of the resulting decreased number of observations, data are not depicted separately for males and females in this case. After 30 weeks on the diet, vitamin B deficient males remained significantly hyperhomocysteinemic (HCys p = 0.001; HCA p = 0.001), although to a lower extent compared to 8 weeks on the diet, whereas females returned to baseline level due to the maintenance chow they received during the IntelliCage tasks. Analysis of 24-h urine samples delivered data that were largely comparable to the results from the serum samples. After 8 weeks on the diets (Figure 3E,G), both urinary HCys and HCA were significantly elevated because of the vitamin B deficient chow (HCys male & female p < 0.001; HCA male p = 0.001, female p = 0.035), whereas a genotype effect was not detectable. Experimental diets resulted in decreased amounts of HCys (PUFA-ENR female p = 0.014) and HCA (B-ENR female p = 0.001; PUFA-ENR female p = 0.022; FC female p = 0.040) in the urine compared to KI control mice. After 30 weeks on the diets (Figure 3F,H), males deficient in vitamin B6, B12 and folate displayed elevated urinary amounts of HCys (p = 0.001) and HCA (p = 0.003), but to a lower extent compared to that after 8 weeks on the diets. Vitamin B deficient females showed quantities equal to the control groups due to the maintenance chow they had received during the IntelliCage tasks.
Open Field
This behavioral test aimed to evaluate the locomotion, anxiety and habituation behavior of the mice during a 30-min session in the open field boxes. The total distance moved revealed no statistically significant differences (Figure 4A). Consequently, locomotion activity was not influenced by genotype or dietary intervention. The time the animals spent in the inner zone of the box, an indicator of anxiety, was not affected by genotype or diet (Figure 4B). As a third parameter, the amount of intrasession habituation was expressed by a habituation ratio (Equation (1)):

intrasession habituation ratio = activity(final 5 min) / [activity(final 5 min) + activity(initial 5 min)]  (1)

A ratio lower than 0.5 indicates habituation; a ratio of 0.5 means no change in activity, i.e., that no habituation occurred, as in the case of groups 2-6 in males and groups 3 and 5-6 in females. Females fed with a vitamin B deficient chow displayed the least tendency to habituate; however, the effects of the experimental diets did not reach statistical significance in comparison to the KI control group. Female App NL-G-F control mice displayed a significantly lower level of habituation compared to the C57BL/6J WT control (p = 0.009), indicating an impact of the genotype (Figure 4C).
Elevated Zero Maze
We tested the anxiety behavior of each mouse for a session duration of 5 min. C57BL/6J WT and App NL-G-F KI control mice moved equal distances in the maze; only male App NL-G-F mice fed with a vitamin B and PUFA enriched diet moved less than App NL-G-F controls (p = 0.003) and thus displayed lower locomotion activity (Figure 5A).
Barnes Maze
To investigate spatial memory and learning, the Barnes maze test was implemented in this study. In the first part of the test, the acquisition phase, the mice had to learn and remember the location of the escape box at the target hole. Figure 6A shows the latencies the mice needed to reach the target hole on subsequent days of training in the acquisition phase.
The graph indicates a learning curve in every group. Tests on statistical significance were carried out for day 4 and revealed no differences at this stage of the test. In the probe trial on day 5 (Figure 6B), the reference memory of the previously learned target hole was tested. At this time, female App NL-G-F controls needed significantly longer to reach the target hole compared to the C57BL/6J WT control animals (p = 0.016). Vitamin B deficiency and the corresponding hyperhomocysteinemia did not result in a worse performance at any stage of the Barnes maze test.
Social Interaction Test
Testing social behavior proceeded in two subsequent phases. At first, we assessed sociability, describing the curiosity of the animals towards the stimulus mouse in the testing system (Equation (2)) (Figure 7A):

sociability ratio = time(social cage) / [time(social cage) + time(empty cage)]  (2)

No statistically significant difference was observed between C57BL/6J WT and App NL-G-F control animals. Experimental diets also had no impact on the social ability of the mice. Medians were statistically unequal to 0.5 except for groups 2, 4 and 6 (males) and groups 2 and 3 (females). A ratio of 0.5 means that contact times with the conspecific stimulus mouse and the empty cage were equal. In the second phase of the test, we assessed the social recognition performance of the animals (Equation (3)) (Figure 7B):

social recognition ratio = time(novel animal) / [time(novel animal) + time(familiar animal)]  (3)

As for sociability, neither genotype nor experimental diets had an influence on social recognition in the different experimental groups. In neither phase of the test did hyperhomocysteinemia aggravate the cognitive performance of the mice. Except for group 1 (males) and groups 5 and 6 (females), the medians of the other groups did not differ significantly from 0.5.
Paired Associates Learning (PAL) Task
The touchscreen PAL was used to assess potential cognitive impairment of the male mice (about five to eight months of age). Both the session duration and the number of trials completed per session, as well as the percentage of correct trials per session, were analyzed (Figure 8). The resulting learning curves revealed no statistically significant difference in these parameters between C57BL/6J WT mice and App NL-G-F KI mice in the final phase of the test (block 6). Hyperhomocysteinemic App NL-G-F mice did not perform worse than App NL-G-F control mice. Other experimental diets also conferred no benefit on the cognitive abilities of App NL-G-F mice at this age. The C57BL/6J WT group showed a smaller variability in the touchscreen chambers in comparison to the App NL-G-F KI groups. This effect was particularly observed in the parameter trials completed (Figure 8B). Vitamin B deficient animals showed a tendency to perform better at the beginning of the test (trials completed, block 1) and thus did not display a learning curve like that of App NL-G-F control mice. However, no effects reached statistical significance in block 6. Animals fed with a vitamin B and PUFA combination diet did not reach the maximum number of trials per session. Consequently, the session duration scarcely decreased over time in this group. The proportion of correct and incorrect trials was not affected.
Place Learning (PL) and Reversal Learning (RL) Task
The learning and memory performance of the females at the age of about six to eight months was finally tested using two tasks in the IntelliCage system. We detected the visits of the mice to the drinking corners and analyzed the percentage of correct visits during the drinking sessions in the place learning (PL) and the reversal learning (RL) tasks. Three points in time along the course of the tasks are illustrated in Figure 9. Statistical analysis of the late phase of this course in both (Figure 9A) PL and (Figure 9B) RL (session 31, resp. 23) revealed no significant differences between App NL-G-F and age-matched C57BL/6J mice. In comparison to the App NL-G-F KI control group, none of the groups fed with experimental diets showed improved or impaired memory abilities.
Immunohistochemical Analysis
Brain sections of all animals were immunohistochemically stained and analyzed in order to semi-quantify the amount of amyloid plaques. For this purpose, we assessed the area (percentage) occupied by plaques in images of several regions of interest (ROI). The positions of the different cortical and hippocampal ROI (Table A2) are marked in Figure 10.
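The study performed this semi-quantification with ImageJ; since the exact thresholding protocol is not reported here, the following is only a rough, hypothetical sketch of such an area-percentage measurement (a global Otsu threshold on the darker DAB-stained pixels stands in for the actual settings):

```python
from skimage import color, filters, io

def plaque_area_percent(roi_image_path):
    """Percentage of an ROI image covered by (dark) DAB-positive staining."""
    img = io.imread(roi_image_path)
    grey = color.rgb2gray(img) if img.ndim == 3 else img
    threshold = filters.threshold_otsu(grey)  # assumed stand-in for the protocol
    positive = grey < threshold               # stained pixels appear darker
    return 100.0 * positive.mean()

# Hypothetical usage: one value per ROI, later compared with Mann-Whitney U
print(f"plaque area: {plaque_area_percent('roi_ca1.tif'):.2f} %")
```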
Figure 10 illustrates examples of brain sections of a C57BL/6J WT mouse and an App NL-G-F KI mouse. Aβ plaques, indicated by the characteristic brown staining, occurred abundantly and diffusely in the brain sections of the KI animals (Figure 10B), whereas WT mice did not show any signs of Aβ deposition at all (Figure 10A). The differences in the Aβ burden between the C57BL/6J and App NL-G-F genotypes, as well as a potential impact of the experimental diets, were further analyzed using ImageJ software. Semi-quantification of the Aβ burden confirmed a significant difference between the WT and KI control groups (Figure 11) in all ROI (p < 0.001; p = 0.002; p < 0.001; p < 0.001; p < 0.001; p < 0.001; p < 0.001; p < 0.001). There was no statistically significant difference in the plaque area between the diet groups and the App NL-G-F control group, either in the single ROI or in total. However, the immunohistochemical results indicate prominent plaque formation in all App NL-G-F groups at about 8 months of age.
Figure 11. Semi-quantitative analysis of amyloid-β (Aβ) in immunohistochemically stained brain sections; results are shown for single regions of interest (ROI) and in total; 35 weeks of age, 30 weeks on experimental diet (males and females pooled); data presented as median ± IQR; outliers beyond threefold IQR removed; p < 0.05 (Mann-Whitney U test) considered statistically significant (*).
Discussion
The current preclinical study investigated the impact of an induced hyperhomocysteinemia in the App NL-G-F knock-in mouse model for AD, as well as potentially preventive benefits of different micro-nutritional interventions. In order to characterize the phenotypes of the mice, we conducted a versatile behavioral test battery, accompanied by an analysis of HCys/HCA levels and of the Aβ plaque burden. However, despite the successful induction of prominent cerebral plaque deposition and hyperhomocysteinemia, merely subtle impairments were observed in the App NL-G-F mice. C57BL/6J mice, a frequently studied mouse strain and the background strain of the App NL-G-F knock-in (KI) model, served as an age-matched wild type (WT) control group in this study. Hence, results in these mice indicated a reference behavior and enabled the subsequent assessment of the App NL-G-F genotype in the KI mice. In the open field, we focused on the intrasession habituation of the mice, which is one form of learning. Intrasession habituation describes a decreasing level of exploration of a new environment over time in a single session, which can typically be detected in C57BL/6J mice [37]. This is in accordance with our finding that the habituation ratio in C57BL/6J was significantly lower than 0.5 and therefore indicated intrasession habituation. As expected, C57BL/6J mice demonstrated spatial learning and memory ability on consecutive days of training in the Barnes maze [38]. In a test for sociability and social recognition [39], C57BL/6J mice preferred to spend time with a conspecific (sociability ratio > 0.5) [40].
However, they did not prefer the novel conspecific in the second part of the test (social recognition ratio = 0.5). In the touchscreen PAL [41], male WT animals completed the maximum of 36 trials per session, accompanied by a decreasing session duration. The increase in the percentage of correct trials is in accordance with observations in a similar study [42]. In the IntelliCage setup [43,44], learning curves indicated a constant learning effect in the female WT animals. For several reasons, we decided to use an AβPP-based KI mouse model for AD in this study. Firstly, the novel KI models provide the advantage of not overexpressing AβPP in comparison to the more established transgenic models. Consequently, artificial phenotypes due to an overproduction of AβPP fragments besides the Aβ peptide should be avoided [9]. Secondly, an increased anabolism of Aβ is primarily a hallmark of hereditary- or early-onset AD [3]. Hyperhomocysteinemia, which is especially prominent in older people [45], is supposed to be a risk factor for AD [24]. Therefore, elevated HCys and HCA have been regarded as a hallmark of sporadic- or late-onset AD. The late-onset form affects the vast majority of AD patients [3]. By combining both the increased Aβ anabolism as a feature of hereditary AD and the detrimental effects of excess HCys as a feature of sporadic AD, we attempted to simulate cognitive decline more comprehensively. Thirdly, in order to investigate preventive treatments, it is mandatory to use a model displaying subtle phenotypes corresponding to a very early stage of the disease. According to a review by Zahs and Ashe, AβPP-based mouse models simulate the early phase of AD and thus are adequate for preventive interventions [46]. In the current study, a very subtle phenotype, i.e., very mild cognitive deficits, was observed. For each analysis, we compared C57BL/6J WT animals with App NL-G-F KI control animals. Both groups received the same control diet. KI mice displayed an impaired habituation behavior in the open field. Male mice of the two control groups habituated equally to the new environment, whereas females differed significantly. Data from the probe trial in the Barnes maze confirmed this finding: App NL-G-F KI mice needed longer to locate the former target hole than WT mice. As in the open field, this effect reached statistical significance only in females. Previous clinical studies suggest that a reduced cognitive reserve in women might explain the female vulnerability to developing a more severe phenotype of AD, a disorder affecting more women than men [47,48]. Other behavioral tests did not reveal differences caused by the KI genotype. However, data from the PAL test indicated an increased variance of results (higher IQR) in the App NL-G-F versus WT mice. WT animals showed a clearer performance curve with regard to the session duration and the number of trials completed along the course of the test, meaning that WT mice did not need as long as the KI mice to fulfil the 36 trials in a 1-h session. This enhanced efficiency might be the result of a higher motivation of the WT animals. Nevertheless, effects at the final stage of the test (block 6) did not indicate a significant impact of the genotype. Other groups reported similar findings in App NL-G-F mice, indicating a very subtle phenotype. Two recent publications summarized these findings in tabular overviews, also considering the sex and age of the mice in the included studies [49,50].
Latif-Hernandez and colleagues showed that the behavior of App NL-G-F mice was largely unaffected at the age of 3-10 months [51]. Similarities with our study can also be found in a publication by Whyte et al., who observed no differences between C57BL/6J and App NL-G-F mice in different cognitive tests at the age of 6 months [52]. Sakakibara and colleagues tested App NL-G-F mice at a higher age (15-18 months) and reported an intact learning ability but also recommended App NL-G-F as an AD model for preventive studies [53]. One year later, Jacob et al. observed neither effects on cognitive performance in a touchscreen task nor age-dependent changes in a phase-amplitude coupling analysis, which was used as a measure of neurophysiological functioning, in 4.5-month-old App NL-G-F mice. In accordance with our findings in the App NL-G-F model, these mice displayed a higher variability than WT control mice [42]. The question remains whether the KI mice were too young to display clear impairments. Further investigations are required to test the combination of the App NL-G-F genotype with our experimental diets in older mice. However, other groups did detect significant cognitive deficits in the App NL-G-F model [9,49,50,54]. As summarized elsewhere [49], the majority of studies in the field investigated only male animals. Hence, a 1:1 comparison of these studies with our results comprising both sexes is difficult. Furthermore, a review of the topic described a relatively high level of variability in AβPP KI models between different laboratories [55]. Staining results of App NL-G-F brain sections showed prominent plaque deposition throughout the brain, as previously reported in similar studies [49,52], and thus indicate amyloid pathology as a central hallmark of early AD. In order to investigate the potentially detrimental effects of elevated HCys and HCA levels, one group of App NL-G-F mice received a special diet deficient in vitamin B6, B12 and folate. The resulting hyperhomocysteinemic state was confirmed in serum and urine prior to the start of the behavioral tests. Our behavioral testing data obtained in the social interaction test, PAL and the IntelliCages revealed no deficits in hyperhomocysteinemic mice and therefore do not support previous findings (e.g., [56]). The open field test and Barnes maze indicated subtle deficits in habituation behavior and spatial learning and memory, but these effects did not reach statistical significance. Only the elevated zero maze revealed an increased anxiety in hyperhomocysteinemic females. This observation might be of translational relevance, because anxious behavior is also one aspect of the AD phenotype [57]. Various preclinical studies in the field indicate a significant impact of hyperhomocysteinemia on plaque burden [58,59]. Other groups reported no such effects, which is in accordance with our immunohistochemical results in the App NL-G-F model [60,61]. In conclusion, despite severely elevated levels of HCys and HCA over a longer period of their life span, App NL-G-F mice showed neither a modified plaque burden nor significant cognitive deficits due to hyperhomocysteinemia. A majority of the preclinical data published in the field indicate behavioral deficits in animal models caused by increased HCys (e.g., [56,59,62]). However, we assume that the evidence might be biased to some extent.
On the one hand, behavioral data obtained in transgenic models based on massive AβPP overexpression might be somewhat artificial because of an overproduction of other AβPP fragments aside from Aβ [9]. On the other hand, it should be considered that negative results are often not published, although they are equally as important as positive results. This publication bias, meaning the reduced publishing of negative or null results, is not restricted to the field of AD research, but is rather a general problem [63]. Hyperhomocysteinemia is referred to as a hallmark of AD [10], but its impact on the disease is still under discussion. From a translational point of view, this experimental group simulates the portion of elderly people who are deficient in B-vitamins [64]. Preclinical evidence [65] and clinical evidence [45] confirm an age-related elevation of HCys levels. An impaired vitamin status is one reason amongst others for hyperhomocysteinemia in the elderly [66]. In the present study, the lack of vitamin B6, B12 and folate, in combination with 1% sulfathiazole sodium to inhibit bacterial folate synthesis in the gut [58], led to a "severe" hyperhomocysteinemic state according to a classification used in other publications [67]. Consequently, our vitamin B deficient mice displayed high HCys serum concentrations (45,760 ng/mL ≈ 339 µmol/L) in comparison to our App NL-G-F KI control (1054 ng/mL ≈ 8 µmol/L) and in comparison to the elevated HCys levels in similar studies (e.g., [56,62,68]). Fuso and colleagues also reached high plasma total HCys (>400 µmol/L ≈ 54,000 ng/mL) in their study with TgCRND8 mice and explained the relatively high levels by not fasting the mice before sacrifice and by inhibiting both the re-methylation and the transsulfuration pathways [58]. The vitamin B deficient chow resulted in ~50-fold higher serum and urinary HCys and ~10-20-fold higher serum and urinary HCA compared to animals fed the control diet for 8 weeks. About 0.1% of HCys molecules were oxidized to HCA in serum (42.9 ng/mL ≈ 0.23 µmol/L) and excreted in urine (1184 ng) in 24 h. Only free HCys can be oxidized to HCA, which is suggested to be the main neurotoxic species [32,33,34]. In the current study, we did not measure the free form but the levels of total HCys, by adding a reduction step (TCEP solution) to the analytical method. In vivo, most HCys molecules are protein-bound or dimerized; only about 1% are available in the free thiol form [12]. Hasegawa et al. reported cognitive impairment in transgenic 3xTg-AD mice, triggered by elevated HCA in the brain [69].
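The ng/mL and µmol/L values quoted above are related simply through the molar masses of the analytes (roughly 135.2 g/mol for homocysteine and 183.2 g/mol for homocysteic acid); a small sanity-check sketch:

```python
# ng/mL equals ug/L, so dividing by the molar mass (g/mol) gives umol/L.
M_HCYS, M_HCA = 135.2, 183.2  # approximate molar masses in g/mol

def ng_per_ml_to_umol_per_l(conc_ng_ml, molar_mass):
    return conc_ng_ml / molar_mass

print(ng_per_ml_to_umol_per_l(45760, M_HCYS))  # ~339 umol/L (B-deficient serum)
print(ng_per_ml_to_umol_per_l(1054, M_HCYS))   # ~8 umol/L (KI control serum)
print(ng_per_ml_to_umol_per_l(42.9, M_HCA))    # ~0.23 umol/L (serum HCA)
```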
We also investigated other experimental diets besides the vitamin B deficient chow discussed above. Group 4 received a vitamin B enriched diet containing a particularly high content of folate, B6 and B12 compared to both the control diet and the FC-like diet. The goal of this diet was to investigate whether an additional increase, specifically of B-vitamins, in comparison to the FC-like diet could provide further benefits in the outcome of the study. The difference in B-vitamin contents should therefore simulate a potentially different effectiveness between FC (Souvenaid ®) and existing higher-dosed vitamin B preparations as human treatment options. In accordance with a recent international consensus statement [23], PUFAs (DHA + EPA) have been suggested to be beneficial for cognitive functioning in general and might additionally be linked to AD pathology [25,70]. Because single-nutrient intervention studies often failed to show beneficial effects on cognitive function, it has been suggested that it might be important to investigate combinatory approaches [35]. For this purpose, we combined the high-content vitamin B enrichment with the supplementation of PUFAs (group 6). Finally, group 7 received the FC-like diet, a complex mixture of ingredients (Table A1), which we implemented due to positive previous findings (e.g., [35,71]). Supplementation of B-vitamins and PUFAs, as well as the combinatory approaches and the FC-like mixture, were capable of lowering HCys and HCA below the levels of the App NL-G-F control mice fed with a standard rodent chow. However, taking both sampling points (8 and 30 weeks on diet) as well as the behavioral testing data into consideration, the results appear inconsistent. In the open field, anxiety-related behavior did not differ between the groups fed with B-vitamins, PUFAs or a mixture and the App NL-G-F control animals. However, the elevated zero maze revealed increased anxiety in males fed with the combination diets. In particular, the mice supplemented with both B-vitamins and PUFAs were more anxious and stayed in the closed corridors of the zero maze, but it has to be emphasized that these mice also displayed a reduced locomotion activity during the test. In the Barnes maze, the experimental chow did not affect latencies to the target hole on day 4 of training. Other researchers likewise observed no benefits of PUFA supplementation in cognitive tasks [72]. We confirmed the lack of dietary effects on cognitive performance in the social interaction test, the IntelliCage and the PAL. Although not significant in the final block of six sessions, the session duration and trials completed indicate a worse learning curve for group 6 (B+PUFA-ENR) in the PAL test. This might be due to a lack of motivation in these mice receiving a high number of vitamins and PUFAs, which possibly lowered their affinity to the milk reward in the PAL task. One reason could be that the food restriction was not strict enough for this group. The FC-like diet did not prove beneficial in any test in comparison to the control chow. This is in accordance with some clinical studies, which do not support the benefit of the FC diet and thus indicate equivocal evidence [73,74]. In conclusion, the beneficial tendencies we observed mostly did not reach statistical significance in the behavioral tests and biochemical/immunohistochemical analyses and consequently do not suggest a clear beneficial effect of B-vitamins or PUFAs in this mouse model at the investigated age and diet duration. It is important to question here whether it is possible to observe amelioration through dietary intervention when merely a subtle behavioral deficit is induced in the KI mouse model. Overall, this mouse model, simulating amyloid pathology without AβPP overexpression, merely displays a very mild phenotype despite massive cerebral Aβ deposition at the age of 35 weeks. The amyloid hypothesis has been questioned frequently because of the disappointing track record of clinical trials of drugs that target Aβ despite decades of extensive research in the field [7,75]. In addition, in some cases, substantial plaque deposition does not even cause dementia-like symptoms [76]. However, the window for potentially preventive measures is limited to an early stage of AD, where cerebral amyloidosis remains the central hallmark of the pathology [3].
Despite all criticism of the amyloid hypothesis, beneficial effects were recently observed using the human anti-Aβ monoclonal antibody aducanumab [77], supporting a causal role of Aβ in AD pathogenesis.
Conclusions
The current study indicates only a mild hyperhomocysteinemia-driven exacerbation of the AD-like phenotype simulated in the App NL-G-F knock-in mouse model. Dietary interventions consisting of B-vitamins and/or PUFAs, as well as the FC-like diet as a complex micronutrient mixture, were unable to modify cognitive performance in this mouse model for AD. Neither the B-vitamin deficient diet, resulting in elevated HCys and HCA levels, nor the potentially beneficial diets affected the amount of plaque deposition in the brain. In comparison with the age-matched C57BL/6J wild type control group, App NL-G-F control mice displayed merely subtle behavioral deficits at the investigated age. Further investigations should clarify whether the App NL-G-F genotype and the experimental diets have an impact in older animals.
Acknowledgments: We wish to thank MEDICE Arzneimittel Pütter GmbH & Co. KG for funding this preclinical study. Furthermore, we thank the RIKEN Center for Brain Science for providing the App NL-G-F knock-in mice.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A
Animals
Wild type (WT) mice were purchased from Charles River Wiga GmbH (Sulzfeld, Germany), whereas the knock-in (KI) mice were kindly provided by the RIKEN Center for Brain Science (Saitama, Japan) on a C57BL/6J background and further bred at mfd Diagnostics GmbH (Wendelsheim, Germany). After their arrival at our facility at the age of four weeks, the animals were chipped with subcutaneous transponders to facilitate identification and to enable the IntelliCage task. Furthermore, additional genotyping via polymerase chain reaction analysis was carried out to ensure the adequate genetic background of each animal. Their allocation to the home cages was in a randomized order. All animals were housed in groups of two mice per cage (Green Line, Tecniplast, Hohenpeissenberg, Germany). In the maintenance room, constant temperature (mean: 22.7 °C) and humidity (mean: 48.6%) conditions as well as a 12/12 h dark/light cycle were provided. The pathogen-free status of the maintenance room was regularly monitored using sentinel mice. After an acclimatization phase of one week, the mice were allocated randomly to the experimental groups based on the different diets. Body condition scores were monitored, and the mice were weighed every week.
Experimental Diets
The composition of the FC-like diet was oriented towards the work of Jansen et al. [35]. All diets containing PUFAs were stored at −20 °C to minimize oxidation. Due to coprophagia (the ingestion of fecal matter) in mice, the vitamin B deficient diet additionally contained the antibiotic sulfathiazole sodium (Sigma-Aldrich, Taufkirchen, Germany) to prevent bacterial folate synthesis in the gut [58]. All experimental diets were purchased from Ssniff-Spezialdiäten GmbH (Soest, Germany).
Open Field
Besides its value as a test for locomotor activity and anxiety, the open field task provides information on habituation as a form of learning [37]. For this purpose, each mouse was placed into the center of a 28.5 × 29.8 cm box (in-house manufactured, Fraunhofer IME, Schmallenberg, Germany).
Animals were allowed to explore the new environment for 30 min. The total distance moved and the percentage of time spent in the inner zone of the box were automatically detected by camera tracking and the corresponding EthoVision XT 13 software (Noldus, Wageningen, The Netherlands). Data were additionally analyzed for time blocks of 5 min.
Elevated Zero Maze
To investigate anxiety-related behavior [78], we placed each animal into the open corridor of an elevated zero maze of 60 cm diameter (Ugo Basile SRL, Gemonio, Italy) for a duration of 5 min. The maze consisted of two open and two closed 5 cm wide corridors. Besides the time spent in the open corridors, the total distance moved by the mice was automatically detected by camera tracking and the corresponding EthoVision XT 13 software.
Barnes Maze
The Barnes maze test is a common tool to measure spatial learning and memory [38] in AD mouse models, based on the aversion of mice to bright open spaces. We particularly preferred the Barnes maze over the Morris water maze, since it presents a less aversive alternative [79]. The apparatus (Ugo Basile SRL, Gemonio, Italy) consisted of a circular surface (diameter 100 cm) with 20 holes at the edge and an escape box positioned below one of the holes. Four different visual cues were positioned around the maze. The task required the mouse to locate the escape hole and enter the box. Initially, we transported each animal to the center of the maze in an opaque vessel to prevent orientation before the start of the trial. The procedure was divided into two phases. First, in the acquisition phase, each mouse was subjected to two trials per day for four days (3-min limit per trial; inter-trial interval 15-30 min). The trials ended either when the mouse entered the escape box or when 180 s had elapsed. On day 5, the animals were subjected to a probe trial (90 s). During this phase, the escape box was no longer available. Latencies to the target hole (acquisition & probe) were automatically detected by camera tracking and the corresponding EthoVision XT 13 software.
Social Interaction Test
This method enables the assessment of sociability and social recognition in mice [39]. For this purpose, a three-chamber cage consisting of a central chamber and two lateral compartments (Noldus, Wageningen, The Netherlands) was used. The lateral compartments included sex-matched stimulus mice in separate acrylic rod cages, which allowed social interaction without direct contact. Test animals explored the setup during three consecutive phases. During the first time block of 5 min, the mice were allowed to explore only the middle chamber. As a next step, we opened the dividers to the lateral compartments and placed a stimulus mouse into one of the rod cages (social cage). The second rod cage remained empty. The experimental mouse had a period of 10 min to explore the whole three-chamber cage and to interact with the unknown stimulus mouse. For the next 10 min, we placed an additional unknown stimulus mouse into the second rod cage. The cumulative contact time with the familiar and non-familiar conspecific was automatically detected by camera tracking and the corresponding EthoVision XT 13 software.
Paired Associates Learning (PAL) Task
Visuospatial associative learning ability was tested in males in the touchscreen PAL (touchscreen and corresponding Abet II Touch 18.7.6 software: Campden Instruments, Loughborough, UK and Lafayette Instrument Company, Lafayette, IN, USA).
The task requires extensive training, but is also a valuable tool in translational cognitive research due to its similarities with the human CANTAB [41,80]. Based on the Bussey-Saksida method, animals were initially habituated to the touchscreen chambers during different pre-training phases. After completion, mice were introduced to the proper PAL task. Here, two objects were shown in two spatial locations on the screen. In each trial, only one correct association of object and location was presented, and the animal had to detect it via nose poke. As a result, a reward was delivered automatically (sugared condensed milk, 7 µL, Hochwald Foods GmbH, Thalfang, Germany). Incorrect responses were followed by an aversive light stimulus (5 s time-out period). After an inter-trial interval (20 s), the next trial was initiated by the mouse. A session ended when either 36 trials were completed or 60 min had elapsed. The animals were food restricted throughout the whole experiment with the aim of reducing body weights to about 90% of the baseline weight before the test. This was intended to enhance the motivation of the mice to collect the reward after each correct trial. Animal weights were monitored three times a week. For the assessment of the 36 sessions of the PAL task, the parameters session duration, trials completed and percentage of correct trials were analyzed. The procedure was highly standardized, and the closed touchscreen chambers reduced experimenter-induced variability to a minimum.

Place Learning (PL) and Reversal Learning (RL) Task

The start of the IntelliCage experiment (IntelliCage and IntelliCagePlus 3.2.8 software: New Behavior, TSE Systems, Bad Homburg, Germany) in female mice was scheduled around the time when male mice entered the proper PAL task. Thus, males and females were largely age-matched during the last phase of the behavioral test battery (27 weeks old, i.e. 22 weeks on diet). We chose not to test males in the IntelliCage setup, because males are more prone to show aggressive behavior and hierarchical fighting, potentially resulting in injuries due to the housing of male mice in large groups. The IntelliCage tasks of learning ability cover a broad cognitive spectrum by combining the analysis of spatial memory with operant conditioning [43] and provide the advantage of serving as both home cage and behavioral test for the duration of the experiment. Animals from all experimental groups lived together in the special cage for a period of about 7 weeks. Due to this mixed group housing, the experimental diets were substituted by standard maintenance chow (ad libitum) for the duration of this behavioral test. Each apparatus had the capacity to house and detect up to 16 mice simultaneously. The experiment started with a habituation period of 1 week, followed by a pre-training phase on nose poke behavior in corners for water access for 1-2 weeks. During the following week, the animals were habituated to the two defined drinking sessions per day (5-7 a.m.; 7-9 p.m.). In the PL, only one corner per mouse yielded water access in response to nose pokes during drinking sessions (~2 weeks). Motorized doors, controlled by radio-frequency identification (RFID) transponders, opened when a mouse was detected in its adequate corner. In the RL, a different corner was designated as correct (2 weeks). Visits to the correct corners were analyzed for PL and RL.
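For a concrete picture of how this PL/RL outcome measure can be derived from the recorded corner visits, the following sketch computes each animal's fraction of correct-corner visits from a simplified visit log. This is an illustrative reconstruction only: the record layout, field names and example IDs are hypothetical, and the actual IntelliCagePlus software exports its own log format.

```python
from collections import defaultdict

# Hypothetical visit log: (mouse_id, corner) tuples as registered by the
# RFID corner antennas during the defined drinking sessions.
visits = [("M01", 2), ("M01", 3), ("M01", 3), ("M02", 1), ("M02", 4)]

# Hypothetical per-mouse assignment of the correct (water-yielding) corner.
correct_corner = {"M01": 3, "M02": 1}

def correct_visit_fraction(visits, correct_corner):
    """Fraction of visits each mouse made to its designated correct corner."""
    counts = defaultdict(lambda: [0, 0])  # mouse_id -> [correct, total]
    for mouse, corner in visits:
        counts[mouse][1] += 1
        if corner == correct_corner[mouse]:
            counts[mouse][0] += 1
    return {mouse: c / t for mouse, (c, t) in counts.items()}

print(correct_visit_fraction(visits, correct_corner))
# {'M01': 0.666..., 'M02': 0.5}
```

The same tally can be computed per day or per session block to trace the learning curve across the PL and RL phases.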
We did not weigh the animals for the duration of the experiment to avoid interference with the automated behavior recording; instead, we visually observed the mice for any sign of deficiency. The IntelliCage enabled a high-throughput cognitive investigation of mice, while stress due to human intervention was reduced to a minimum.

Sample Collection

Blood was taken by puncture of the facial vein using 5 mm Goldenrod animal lancets (MEDIpoint, Mineola, NY, USA). A maximum volume of 170 µL per 25 g mouse, in accordance with animal welfare guidelines (GV-SOLAS), was collected in serum tubes containing a clotting factor to accelerate coagulation in the subsequent 15-30 min (Sarstedt Microvette 200 Z, Nümbrecht, Germany). The tubes were centrifuged at 3200× g for another 15 min at 4 °C and subsequently frozen on dry ice. For 24-h urine sampling, mice were placed into metabolic cages (Tecniplast, Hohenpeissenberg, Germany). Absolute urine volumes were documented for subsequent calculations. In order to harvest the brains, the animals were deeply anaesthetized by injecting a mixture of 200 mg/kg (body weight) ketamine (Vétoquinol GmbH, Ismaning, Germany) and 10 mg/kg (body weight) xylazine (Bayer Health Care, Leverkusen, Germany) intraperitoneally. After cessation of reflexes, blood was taken cardially and treated as described before. Mice were then perfused transcardially with 0.1 M phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (Medite, Burgdorf, Germany). Brains were removed and postfixed in the same fixative for another three days, followed by stepwise dehydration in increasing ethanol concentrations (Medite) and xylene steps (Medite). Brains were then embedded in paraffin (Medite) in a heated embedding station (Thermo Fisher, Frankfurt am Main, Germany) and cut with a microtome (Thermo Fisher). Sections 10 µm thick were retrieved from three different positions of the animals' brains: −1.2, −1.7 and −2.2 mm posterior to bregma [81]. Data of the three positions were pooled because of absent statistically significant differences. Finally, the sections were mounted on glass slides (Klinipath, Typograaf, The Netherlands).

Biochemical Analysis

The determination of homocysteic acid (HCA) was performed as recently described in detail [36] with minor modifications, as the method was originally validated for the analysis of human serum and urine. Briefly, HCA was determined in murine serum and urine using a combination of protein precipitation and solid phase extraction for sample preparation, followed by LC-MS/MS analysis combining HILIC separation and tandem mass spectrometry. Samples were processed as previously described [36] by adding formic acid followed by protein precipitation using cooled acetonitrile. Samples were vortexed, centrifuged and loaded onto conditioned cartridges (Strata X AW SPE columns, 33 µm, 30 mg/1 mL, Phenomenex, Aschaffenburg, Germany) using the automated sample preparation system Extrahera (Biotage, Uppsala, Sweden). After washing the cartridges using water, methanol and a mixture of acetonitrile and aqueous ammonium hydroxide solution, HCA was eluted twice with a mixture of methanol and aqueous ammonium hydroxide solution. The eluate was dried and reconstituted by adding ammonium acetate solution and acetonitrile separately. Afterwards, the samples were injected into the LC-MS/MS system.
The LC-MS/MS system consisted of a triple quadrupole mass spectrometer QTRAP 6500+ (Sciex, Darmstadt, Germany) equipped with a Turbo Ion Spray source operated in negative electrospray ionization mode and an Agilent 1290 Infinity LC system with binary HPLC pump, column oven and autosampler (Agilent, Waldbronn, Germany). The chromatographic separation was performed using a Luna 3 µm HILIC 200 Å 100 × 2 mm column in combination with a KrudKatcher in-line filter (both Phenomenex, Aschaffenburg, Germany). Data acquisition was done using Analyst Software 1.6.3 and quantification was performed with MultiQuant Software 3.0.2 (both Sciex, Darmstadt, Germany), employing the internal standard method. Calibration curves were calculated by linear regression with 1/x weighting. Acceptance criteria and quality assurance measures were applied as previously described [36]. The determination of homocysteine (HCys) was performed using protein precipitation in combination with LC-MS/MS. Briefly, 20 µL of serum or urine was pipetted into a polypropylene tube, and 20 µL of 15 mg/mL aqueous TCEP solution (tris(2-carboxyethyl)phosphine), 40 µL of IS working solution (500 ng/mL HCys-d4 in methanolic TCEP solution, 1 mg/mL) and 40 µL of methanolic TCEP solution (1 mg/mL) were added. Afterwards, samples were vortexed, centrifuged, transferred into another polypropylene tube, and dried under nitrogen. The dried samples were reconstituted using 50 µL of water containing 10 mM ammonium acetate buffer and 10 mM acetic acid, centrifuged again and injected into the LC-MS/MS system. The same LC-MS/MS system and acceptance criteria as described for HCA were used. However, positive electrospray ionization mode was applied, and a Luna Omega 1.6 µm Polar C18 100 × 2.1 mm column in combination with a respective pre-column (both Phenomenex, Aschaffenburg, Germany) was used.

Immunohistochemical Analysis

A stepwise rehydration of the brain sections was conducted, followed by heat-induced antigen retrieval in 10 mM citrate buffer (pH 6.0) including 0.05% Tween-20 (Sigma-Aldrich, Taufkirchen, Germany). After rinsing, sections were incubated for 5 min in 0.6% H2O2 (Sigma-Aldrich) in PBS (0.1 M; pH = 7.3) in order to block endogenous peroxidases. Sections were rinsed and incubated for 30 min in PBS containing 1% bovine serum albumin (PBS-B) and 5% normal goat serum (NGS, Sigma-Aldrich) to prevent unspecific binding of the antibody. After subsequent rinsing, sections were incubated overnight at 4 °C in PBS-B containing 1% NGS and the primary antibody (anti-human Aβ 82E1 mouse IgG MoAb, 1:1000, IBL International, Hamburg, Germany). Rinsing was followed by an incubation with goat anti-mouse IgG H&L Biotin (1:1000, Abcam, Berlin, Germany) in PBS-B containing 1% NGS for one hour. Sections were rinsed, followed by a 1-h incubation with avidin-biotin conjugate in PBS (ABC; Vectastain Elite ABC HRP Kit, Linaris, Dossenheim, Germany). After another rinsing step, sections were treated with 3,3′-diaminobenzidine tetrahydrochloride (DAB; Sigma-Aldrich) in water (0.2 mg/mL; pH = 7.6) for 10 min. The immunostaining was then developed by adding 50 µL H2O2 to a final concentration of 0.006% and incubating for another 10 min. The reaction was stopped by rinsing in ice-cold distilled water, followed by counterstaining with Mayer's hematoxylin (Morphisto, Frankfurt am Main, Germany). Sections were finally dehydrated and covered with Pertex (Medite).
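Returning briefly to the biochemical analysis: the 1/x-weighted linear regression used above for the calibration curves is a standard bioanalytical fitting scheme and can be reproduced generically. The sketch below is a plain weighted least-squares fit with illustrative concentration and response values, not the MultiQuant implementation used in the study.

```python
import numpy as np

def weighted_linear_calibration(conc, response):
    """Fit response = slope * conc + intercept by least squares with
    1/x weights, so that low calibration levels count more in the fit."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    w = 1.0 / conc                                   # 1/x weighting (conc > 0)
    X = np.column_stack([conc, np.ones_like(conc)])  # design matrix
    W = np.diag(w)
    slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ response)
    return slope, intercept

# Illustrative calibration levels (ng/mL) and instrument response ratios.
levels = [1, 5, 10, 50, 100]
ratios = [0.11, 0.52, 1.05, 4.90, 10.20]
slope, intercept = weighted_linear_calibration(levels, ratios)

# Back-calculate an unknown sample from its analyte/IS response ratio.
unknown_conc = (2.30 - intercept) / slope
```

The 1/x weighting down-weights the absolute residuals of the high concentration standards, which would otherwise dominate an unweighted fit across a wide calibration range.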
We digitized appropriate sections using a Nikon Eclipse Ni-E microscope (Nikon Instruments Europe BV, Amsterdam, The Netherlands). Whole brain images were taken at a final magnification of 100× and the area occupied by plaques in several regions of interest (ROI; Table A2) was analyzed using the color segmentation plugin (Daniel Sage, Biomedical Imaging Group, EPFL, http://bigwww.epfl.ch/sage/soft/colorsegmentation/) for ImageJ software (National Institute of Health, Bethesda, MD, USA). Only animals of the first cohort were immunohistochemically investigated in this study.

Preclinical Quality Parameters

Several aspects were considered to ensure the quality of the applied methodologies and resulting data. These points are in accordance with initiatives such as EQIPD ("European quality in preclinical data"; https://quality-preclinical-data.eu/). The broad aim of EQIPD is to implement various quality-improving measures in order to enhance the reproducibility of preclinical data [63]. In the present study, we performed a power calculation to estimate the needed group size (http://www.biomath.info/power/). The resulting total of 112 animals was tested in two consecutive cohorts. Nine animals were lost during the course of the whole study. In terms of translatability, we decided to include both male and female animals in the experiments, since Alzheimer's disease affects both sexes in the clinical context, with a higher rate in women than in men [48]. In general, female animals are largely underrepresented in neuroscience research [82]. Randomization was applied at several stages along the study course. Mice were initially allocated to the home cages according to a random list (https://www.random.org/) and target holes in the Barnes maze were set randomly. Besides, drinking corners in the IntelliCages as well as the stimulus mice in the social interaction test were also assigned randomly. A within-cage randomization between groups was not applicable in this case because each mouse was strictly matched to its assigned experimental diet. All animals were regularly pre-handled and transferred to the experimental rooms at least half an hour before behavioral analysis. Blinding of the experimenter in order to prevent detection bias was not performed here, because automated outcome assessment was applied in all behavioral tests (via EthoVision XT, IntelliCage and Touchscreen software). However, blinding was performed during the immunohistochemical analysis. Here, a second experimenter marked the ROI in the images without being aware of animal ID or experimental group. Furthermore, automated animal management software as well as an electronic lab-book were used throughout the study. Standard operating procedures had been written prior to the experimental procedures.
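As a rough cross-check of the group-size estimate mentioned above, an a priori power calculation for a two-group comparison can be sketched as follows. The web tool cited in the text was used for the actual study; the effect size, alpha and power below are illustrative assumptions, not the parameters reported by the authors.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for an independent two-sample t-test.
# Assumed inputs: Cohen's d = 0.8, two-sided alpha = 0.05, power = 0.8.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"Required animals per group: {n_per_group:.1f}")  # ~25.5, round up to 26
```

With these textbook inputs the calculation returns roughly 26 animals per group; the actual inputs used by the authors are not reported here, so this serves only to illustrate the procedure.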
The restricted possible worlds of depression: A stylistic analysis of Janice Galloway's The Trick is to Keep Breathing using a possible worlds framework

This article uses a theoretical framework of possible worlds to explore the ways in which Janice Galloway's novel about grief and depression, The Trick is to Keep Breathing, may elicit emotional responses in readers. I give an overview of some of the emotional responses expressed by readers by using online review data, before employing stylistic analysis to demonstrate how emotional effects may be created through the linguistic construction of degrees of possibility. Drawing on Possible Worlds Theory, I demonstrate how readers' emotional responses may be linked both to the presentation of possibility and to the restriction of possibility. The combination of the empirical methodology utilised here alongside stylistic analysis allows me to harness the capacity of Possible Worlds Theory to cast light on constructions of textual possibility and actuality and to facilitate understanding of some of the mechanisms eliciting readers' emotions.

Introduction

Janice Galloway's The Trick is to Keep Breathing (1999 [1989]) is a Scottish novel which was shortlisted for the Whitbread First Novel Award and the Irish Times International Fiction Prize and won the MIND Book of the Year Award (British Council, 2021). The text focuses on the severe depression of its narrator, a drama teacher ironically called 'Joy', who has recently lost both her partner and her mother. A possible worlds framework adapted from Ryan (1991) and Bell (2010) is used here to facilitate stylistic analysis of the ways in which the linguistic construction of depression in this novel may partly work to elicit emotional effects. Before applying the framework, I outline some of the emotional responses of readers, using data obtained from online reviews. In so doing, I acknowledge the role of the reader in co-constructing meaning in a transaction between reader and text (Rosenblatt, 1978). My stylistic analysis subsequently enables me to harness the flexibility of Possible Worlds Theory as a productive framework used here to illuminate how constructions of possibility may work to affect the emotions experienced by readers. 'Emotions' are conceptualised in this article in accordance with Hogan's (2016: 12) definition of emotions as 'events' resulting from 'the activation of systems by causes' and thus as cognitive responses to stimuli, with the stimulus in this case being the novel itself.

Possible Worlds Theory as a framework

Possible Worlds Theory originated in the philosophical field of modal logic, with the notion of 'possible worlds' used to conceptualise alternative realities existing beyond the 'actual world' of our existence (Kripke, 1959, 1963; Lewis, 1973a, 1973b, 1986). In philosophy, debate has centred on the extent to which these alternative 'possible worlds' can be viewed as concrete entities just as 'real' as our world, as espoused by Lewis (1973a, 1986), or as metaphorical constructions (Rescher, 1975; Materna, 1998) which enable us to visualise the way the world 'might have been'. These latter conceptualisations of Possible Worlds Theory have lent themselves to its adaptations for literary theory (Eco, 1984; Pavel, 1975); Possible Worlds Theory can enable us to conceptualise texts as containing a set of potentialities existing beyond the text and can thus be used as a vehicle to explore notions of possibility.
Possible worlds have been viewed as 'mental constructs' (Bell and Ryan, 2019: 6), with Ryan's influential typology for the analysis of possible worlds (1991) proposing a modal universe that comprises 'three modal systems [the actual universe, the textual universe, and the referential universe], centred around three distinct actual worlds' (24). In other words, each of these separate systems contains an 'actual world' with 'possible worlds' revolving around a central point of actuality and reality. Hence a fictional text itself will contain a textual actual world as well as textual possible worlds which exist in the minds of characters; for example, within their wishes, dreams and beliefs for the future. The actual universe refers to the same process taking place within the 'real' world, with actual possible worlds within this created in our imagination. The textual referential universe is 'the world for which the text claims facts' (Ryan, 1991: vii); in other words, the world that is established to exist by the text. Ryan's (1991: 26) own suggestion that, in fictional texts, the textual reference world and the textual actual world are 'interchangeable' led Bell (2010) to develop a simplified modal universe for the analysis of hypertext fiction in which the 'textual universe of fiction comprises a textual actual world and alternative possible worlds only' (Bell, 2010: 24). This is the framework adopted by this article, which focuses on the text as containing its own set of textual actual and possible worlds, and acknowledges the existence of the actual world of the reader, but does not use the third concept of the textual reference world. While Possible Worlds Theory has not previously been used as a specific framework for the exploration of readers' emotions, the experience of the reader is central to Possible Worlds Theory, in which possibility becomes 'recentered' around the textual actual world as constructed in a narrative, thus creating for the reader a new world of 'actuality and possibility' (Ryan, 1991: 2). Textual possible worlds are 'story-like constructs contained in the private worlds of characters' (Ryan, 1991: 156) which exist also in the minds of readers. Researchers in Possible Worlds Theory recognise, therefore, the role of readers in co-constructing meaning, such as in Raghunath's (2020) development of the concept of 'Reader-Knowledge Worlds'. Importantly, the 'text as world' metaphor is not 'indebted to PW theory' (Bell and Ryan, 2019: 8); readers' mental representations of texts have variously been called 'textworlds' (Werth, 1999), 'storyworlds' (Herman, 2002; Ryan, 2019), 'mental spaces' (Fauconnier and Turner, 2002: 89), 'narrative worlds' (Gerrig, 1993) and 'fictional worlds' (Doležel, 1998, 2019; Fort, 2016). Some criticism of Possible Worlds Theory suggests that its flexibility detracts from its usefulness: Ronen cautions against its use as a 'diffuse metaphor' (1994: 7), while Stockwell (2010: 425) points to a lack of a 'genuine cognitive discourse grammar' in Possible Worlds Theory and suggests that its 'top-down' focus on schematisation and world-building may preclude stylistic analysis. However, I aim to demonstrate here that it is possible to explore the construction of possible worlds in a text linguistically at the micro-level, alongside benefiting from the capacity of Possible Worlds Theory to facilitate exploration of constructions of possibility within narratives.
The study of readers' emotions

The study of readers' emotions is a flourishing area of research both in and beyond stylistics. In Text World Theory, Whiteley (2010) and Canning (2017) have used reading group data to enrich understanding of how fiction elicits readers' emotions, while Gerrig (1993), Oatley (1999, 2004) and Miall (2006) have extensively examined the emotional experience of literature in a broadly empirical sense. Empirical studies of readers' emotions have shown that the experience of emotions when reading is a near-universal experience (Mar et al., 2011: 828). Readers' emotions have been shown to correspond to 'appraisal patterns (objective correlatives)' in texts (Mar et al., 2011: 828); in other words, research has demonstrated that identifiable features of the language of the text can elicit readers' emotions. Oatley and Mar (2008: 173) propose that the 'simulation' of emotion and social experience through reading leads readers to engage in empathetic processes which can engender greater capacity for empathy in their own lives. Stockwell summarises that the feelings experienced when reading are 'fundamentally the same as authentic real-world emotions' (2020: 183); literary emotions can thus be viewed as genuinely felt even though they are engendered through simulation. Character identification has also been explored as one of the key factors eliciting narrative empathy (Keen, 2007: 169; Oatley and Gholamain, 1987). Stockwell suggests that the more richly developed a character, the more readers will be able to 'mind-model' them to 'a rich level of impersonation' (2020: 183), and therefore these characters are more likely to be the ones whom we feel emotions towards. Stylisticians in Text World Theory suggest that character identification takes place through a process of projection which consequently engenders emotions; Gavins (2005, 2007) suggests that we project enactors of ourselves into a text-world when we encounter it, and that such projection results in 'empathetic identification' (2007: 64). Adapting this concept, Whiteley posits that readers may project themselves simultaneously into multiple character roles when encountering a text, calling this an act of 'mindreading' (2010: 121) whereby readers imagine the thoughts and feelings of characters and thus experience emotional responses. Therefore, stylistic research into character identification centralises readers' emotional experiences. Accordingly, Ryan (2019: 74) suggests that the construction of a fictional world which is accessible to the reader, with characters 'perceived as ontologically like us' who undergo recognisable emotions or experiences, contributes to our ability to identify with characters. Ryan (1991: 32-33) also suggests a typology of accessibility relations delineating the distance a textual actual world has from, or how accessible it is from, the actual world, in order to explain how readers' sense of reality is applied in fictional worlds. Character identification, according to Stockwell (2012: 172), is partly created through focalisation patterns throughout a text which lead us to feel emotions 'at points of juncture in the evolution of the plan, as in real life'. The concept of character identification might thus be aligned with a possible worlds approach, because if we become immersed in the world of a character, we may, conceivably, begin to imagine, dream or hope for certain outcomes for that character throughout the process of reading.
Correspondingly, the conceptualisation of reading as immersion may help to shed light on some of the processes eliciting emotions. Our experience of textual actual and possible worlds may generate a sense of immersion due to the 'fictionally complete' nature of a storyworld (Ryan, 2019: 75). Linking to her notion of reading as a process of 'recentering' (Ryan, 1991: 2), Ryan (2015: 73) defines immersion as a process of 'consciousness [which] relocates itself to another world', thus suggesting a process of movement into the world of the narrative. Similarly, metaphors of transportation (Gerrig, 1993; Green and Brock, 2002; Harrison and Nuttall, 2020; Stockwell, 2009) and absorption (Braun and Cupchik, 2001; Kuijpers et al., 2014), used frequently by readers to conceptualise reading, evoke relocation into the world of the text.

Methodology

This article utilises Bell's Possible Worlds framework (2010: 24), adapted from Ryan (1991), which holds that any fictional text contains a textual actual world (henceforth TAW) with other textual possible worlds (henceforth TPWs) existing alongside or revolving around this. TPWs can be understood, therefore, as representing those imaginary or alternative worlds which exist, for example, in a character's wishes, dreams, obligations or desires. The overarching conceptual framework of TAW and TPWs enables me to shape a stylistic analysis of the ways in which possibility, actuality and lack of possibility are depicted and may work to convey the protagonist's emotional states. Within this analysis, my own concept of restricted possible worlds is used to refer to the aspects of the text in which the reader is afforded access to only a limited range of TPWs. Prior to the stylistic analysis within a possible worlds framework presented here, I outline reader response data from online reviews in order to exemplify some of the emotions reported by readers of this novel. In recent years, several researchers in stylistics have used online reviews as a source of empirical data. Giovanelli (2018) uses excerpts from online reviews to demonstrate readers' views of the narrator in The Girl on the Train, before then using Cognitive Grammar to explore the presentation of the narrator's mind style (Fowler, 1977). A different approach is taken by Harrison and Nuttall (2020), who focus on analysing metaphors for reading in online reviews of the novel Twilight. Their focus is on linguistic analysis of the reviews themselves and the ways readers conceptualise reading, rather than analysis of the novel, and thus provides illuminating insights into the way the text is experienced by readers. Meanwhile, Allington (2016) compares Amazon customer reviews with those of literary critics, using online review data to cast light on the ways in which customers' responses differ from those of professional reviewers. Online reviews can be categorised as a 'naturalistic' (Swann and Allington, 2009) form of reader response data rather than 'experimental' data gleaned within an artificial, experimental or laboratory setting; the reader reviews already exist as a form of social reading (Peplow et al., 2017) undertaken after the initial individual reading process. Six reviews were selected for this analysis from a total of 69 Amazon reviews and seven reviews were selected from a total of 225 GoodReads reviews, with both sites utilised in order to gain access to a range of reviews.
Whilst both Amazon and GoodReads invite readers to contribute reviews, there are some key differences in the format and purpose of reviews, even though both websites are owned by the Amazon company. GoodReads is a community social cataloguing website where readers share and discuss their opinions on books they have read, whilst reviewers on Amazon have often purchased books on the website. Any customer with an Amazon account can leave a review, with reviews by customers who purchased the book on the website being labelled 'Verified Purchase'. I chose to include Amazon reviews from both verified and non-verified customers for two reasons. Firstly, since there can be no external verification of whether readers have truly read a book on GoodReads, the inclusion of both verified and unverified purchases on Amazon aligned my selection of Amazon reviews with my selection of GoodReads reviews. Secondly, whilst counterfeit Amazon reviews may occasionally be written to promote sales (He et al., 2022), by carefully selecting reviews which discussed feasible emotions or feelings experienced during the process of reading the novel, I was able to ensure as far as possible that reviews were by genuine readers. A further limitation of using online reviews may be their performative element (Driscoll and Rehberg Sedo, 2019), but arguably any form of verbal or written response to literature intended for any audience, even if only a single researcher, is inherently performative, since it involves the expression of one's feelings about reading to others rather than a private emotional experience. Thus, any study of readers' emotions, whether expressed in online reviews or in any other format, is in reality a study of readers' expressions of their emotions, given the impossibility of verifying the veracity of individuals' verbal or written communication. Reviews which expressed strong emotions towards Galloway's novel were selected for analysis from the dataset. By 'strong emotions', I mean that I focused on reviews that expressed identifiable emotions such as sadness, frustration or sympathy regarding particular aspects of the novel, rather than reviews expressing a greater sense of neutrality or apathy. However, particular emotions were not pre-determined and specific linguistic markers of emotion in reviews were not pre-set, as I wanted to facilitate consideration of a wide range of emotions in my analysis, including those which I had not foreseen. For example, R8 described feeling 'trapped in [the main character's] head', an emotion which I later coded as demonstrating the reviewer feeling 'claustrophobic'. I acknowledge that the use of the overarching concept of emotion as a guide for data selection means there is an element of subjectivity in the review selection process, as the reviews identified as demonstrating emotional responses were inevitably partly informed by my own perspective as a reader, as well as by features of the language used by reviewers. The reviews chosen, therefore, should not be viewed as a comprehensive and representative sample of readers' emotional responses to the novel but rather as an exemplification of the way in which the stylistic construction of the novel may affect some readers, and also as a tool to guide me as a researcher towards the exploration and analysis of certain emotion-causing elements of the novel referred to in the review data. Once reviews had been selected and a dataset had been assembled, an inductive approach to coding was utilised.
Inductive coding enables researchers to condense raw textual data, thus rendering data a manageable tool in assessing research objectives (Thomas, 2006) and thereby facilitating analysis (Rapley, 2011: 282). In an inductive coding approach, emergent codes are identified during analysis of the dataset rather than selected from an a priori list, and coding should always be 'customized to suit the unique needs and disciplinary concerns of [the] study' (Saldaña, 2016: 64), with researchers recognising that coding is fundamentally a 'decision-making process' (Elliott, 2018: 2850) driven by the context and aims of the research. By identifying readers' emotions in an inductive rather than deductive coding process, I ensured that a broad range of emotions could be examined and identified and did not need to rely on foreseeing a predetermined set of emotions in the data. The first stage of coding involved the initial labelling of data, before these were then grouped into a smaller number of categories identified as prominent. Codes were also used to direct my attention to particular aspects of the novel, although I do not attempt to suggest that particular excerpts have elicited particular emotions in readers but rather use my stylistic analysis to exemplify the way in which the language of the novel is constructed to achieve emotional effects. Since specific phrases or words used by participants can be 'good leads' (Miles et al., 2014: 74) in indicating emerging themes, I did not exclude participants' own terminology from my categorisations, but the emergent codes were descriptive rather than in vivo, as not all readers explicitly used emotion labels to describe their feelings, and sometimes used idioms, metaphors or alternative descriptions of emotions to describe their responses to the novel. By developing my own codes, I was able to utilise coding as a form of 'data condensation' enabling discovery of the 'core content or meaning' of the data (Miles et al., 2014: 73). While informed consent is not typically sought for the use of pre-published online texts by researchers (Gao and Tao, 2016: 185; British Association for Applied Linguistics Research, 2017), researchers should be mindful of their ethical responsibilities even when data is in the public domain (Boyd and Crawford, 2012: 672). It is problematic to conflate agreement with the terms and conditions of social media platforms with informed consent (Townsend and Wallace, 2016), and therefore anonymisation of reader data can help to limit the capacity for ethical harm to participants who have not been able to give consent for participation (Townsend and Wallace, ibid.). I have therefore removed reviewers' names and have labelled them as 'R1', 'R2' and so on within this article to limit the risk of identification. In conclusion, then, my research design can be summarised as a process of data-gathering from online reviews to cast light on readers' emotions regarding aspects of the novel, followed by a subsequent process of textual analysis using the possible worlds framework outlined above. I selected excerpts for stylistic analysis in order to explore some of the ways in which these emotional effects of the novel might be elicited; a researcher's own stylistic analysis is in itself inherently empirical (Stockwell, 2021: 166; Brône and Vandaele, 2009: 7) as a researcher draws on their own personal, felt experiences of texts as a form of data.
Therefore, the triangulation of both 'rich first-person phenomenology and rigorous third-person observation' (Brône and Vandaele, 2009: 6), in other words undertaking my own stylistic analysis whilst also acknowledging the responses of other readers, may facilitate a richer understanding of the potential emotional effects of the language of the text. In the analysis below, I outline some key findings from my analysis of reader data, before applying a possible worlds framework to the language of the novel itself.

Reader reviews

As noted in the introduction to this article, The Trick is to Keep Breathing focuses on the depression of its protagonist, Joy. The novel details aspects of her daily life in the aftermath of her lover's death, including her loneliness, alcoholism, multiple affairs, bulimia and treatment for depression. Several reviews analysed for this research discuss the capacity of this novel to elicit sadness. For example, R1 states: 'This isn't a book you'd read if you're feeling down or in winter, it is bleak and even the rare moments of hope are pretty subdued.' The reader's use of the metaphor 'feeling down' suggests a belief in the novel's capacity to cause sadness and therefore potentially exacerbate existing depression, whilst their caution not to read the book 'in winter' might suggest an association of the winter months with feeling depressed, such as in individuals suffering with Seasonal Affective Disorder (NHS, 2018a). The reader, then, conceptualises the novel as one that could make readers feel depressed themselves; their warning regarding the content of the novel indicates a perception of its ability to elicit unhappiness for readers, particularly if their own context reflects the experience of the protagonist. Similarly, R11 cautions: 'If you are looking for a fun lighthearted read or just have enough of your own depression to deal with, this may not be the book for you. As for me, I'm off to eat some chocolate in hopes of cheering up a bit'. As well as this reader's comments indicating that the depiction of the protagonist's depression might exacerbate other readers' 'own depression', their comment regarding being in need of 'cheering up' indicates that the novel has elicited sadness. R12's description of the novel as 'depressing and painful' and R2's description of the novel as 'devastating' were also both coded with 'sadness', with the reviewers' words suggesting the power of the novel to incite this emotion. Two reviewers of The Trick is to Keep Breathing comment negatively on their feelings about being confined to the protagonist's worldview, an emotion which I coded as feeling 'claustrophobic': R8 refers metaphorically to being 'trapped inside Joy's bleak vision, with little sense of what life is really like outside it' and reflects that 'the main problem for me was that we were so trapped in Joy's head that a lot of the world around her remained hazy'. As well as using the metaphor 'trapped' twice to denote the claustrophobic emotions experienced when reading the novel, the reader also specifically uses the term 'claustrophobic' in the review, suggesting the novel is 'horribly claustrophobic', although suggesting that 'this is inevitable, bearing in mind how claustrophobic depression can be'. Meanwhile, the word 'claustrophobic' is also used by R7, who found the book to be 'claustrophobic and hopeless - regardless of the ending'.
While this word is used by R7, unlike R8, to describe the novel itself rather than the reader's specific emotions, R7's subsequent discussion of their dislike of the novel suggests a similar sense of feeling claustrophobic: 'This book is horrendous […] 4/5 stars for the consuming and unusual narrative style, and accurate portrayal of mental illness at its worst. 1/5 stars for how terrible this made me feel about life'. The use of 'horrendous', 'unusual' and feeling 'terrible […] about life' work together with 'claustrophobic' to suggest a sense of feeling trapped within the pain of the novel. Evidently, R7 appreciates the technical skill evident in Galloway's writing but finds it difficult to cope emotionally with the strength of the novel's impact on them, to the extent that this limits their reading enjoyment. Similarly, R11 writes that 'this woman's head is NOT a fun place to be. […] I thought I had dealt with depression before, turns out, my head is practically a Disney movie by comparison'. This reader's explanation that they felt inside 'this woman's head' when reading also suggests a sense of feeling restricted to Joy's experience, as well as evoking the conceptual metaphor READING IS TRANSPORTATION (Gerrig, 1993; Stockwell, 2009). Finally, several reviews were coded with 'empathy', including the reviews already outlined above, as empathy frequently co-existed with other emotions in readers' reviews. For example, R7's sense of feeling 'trapped' in the narrator's head was also coded with empathy, because their description of feeling inside Joy's head, like R8's, suggests an experiential transportation into Joy's mind and thus the ability both to feel Joy's pain and to experience this as uncomfortable. The word 'empathy' is also used explicitly by R1, who compares Joy's experiences to their own experiences with depression: 'I found the central character a person I have huge sympathy and empathy for, a lot of her brain numbing and frustrating conversations with medical 'experts' ring very true for me, I often think of her as I grate [sic] my teeth speaking with doctors'. As well as the explicit use of 'empathy', the reviewer's description of Joy's conversations about her depression that 'ring true' also evokes an empathetic response. Interestingly, not only does this reader's empathetic response suggest that they relate to Joy's experience, but their reference to thinking of the character during their conversations with doctors indicates that the novel has had an impact on their subsequent experiences and interactions in life. The corporeal metaphors some readers use to describe their experience of reading the novel also evoke an empathetic response to Joy's pain: R9 comments that 'Joy Stone, the main character, ripped my guts out and took my heart along the way and it was indeed hard to breathe at many points in the book'. The use of physical imagery enables this reviewer to draw parallels between their own experience of reading the book and Joy's depression as depicted within the novel; by stating that it became 'hard to breathe', the reader implies that reading the book enabled them to experience an element of Joy's feelings of depression. Furthermore, the implication of emotions being felt in the 'guts' and 'heart' reflects the way in which Joy's depression is depicted as physically all-consuming, as demonstrated in the novel when she describes her inability to eat and her difficulty in performing everyday tasks.
Similarly, R5 states that 'the fierceness of [Joy's] pain feels like a knife in the gut'. This simile evokes the physical experience of emotion normally associated with the experience of personal upset or emotional pain, whilst also implying a favourable opinion of the novel in that the ability to 'feel' the narrator's pain demonstrates its power. Likewise, R3 explains that Joy's 'mental collapse [was] intimate, and their situation familiar. It also hurts my brain', suggesting a physical identification with Joy's pain by using bodily imagery to describe the psychological pain wrought by empathising with the narrator. Readers' responses here, then, indicate an empathetic response to the ways in which the TAW is constructed as painful, to the degree that readers are able to physically 'feel', to some extent, the narrator's emotions. Even R13, who 'didn't enjoy' the novel, suggests that this is a 'heavy book to get through' in which 'you do feel as though you are wading through deep water right along with the MC' [main character]. The metaphor of 'wading through deep water' might conceivably suggest an empathetic response to the protagonist's depression, in which the reader experiences some of Joy's feelings during the process of reading. On the other hand, others respond positively to the richly detailed depiction of Joy's psychological pain: 'I was intimately involved in 'Joy's' life and I felt it would have been disrespectful to her to rush through all of her pain as though it really didn't matter' (R10). Some readers relate Joy's depression to their personal experiences: R12 suggests that 'we've all felt the way Joy feels at some time or another' while R6 explains that Joy is 'so relatable… the character is incredibly, worryingly familiar', with both readers therefore suggesting feelings of empathy elicited not only by the novel itself but also by their own personal experiences. Overall, then, notable emotions expressed in the reviews analysed for this article include sadness, empathy and a sense of claustrophobia. These emotions have been selected from a large dataset and therefore should not be interpreted as exemplifying the most frequent emotions experienced in response to the novel; a code that appears even only once or twice in a dataset may be meaningful (Saldaña, 2016). Rather, the categorisations outlined here represent an exemplification of some of the emotions readers report feeling in response to The Trick is to Keep Breathing. In the analysis below, I analyse several excerpts from the novel using a possible worlds framework in order to explore some of the ways in which these emotions may be elicited.

Analysis

The framework of the TAW and TPWs can be used to cast light on the presentation of the protagonist's hopelessness and despair in the following excerpt:

I watch myself from the corner of the room
sitting in the armchair, at the foot of the stairwell. A small white moon shows over the fencing outside. No matter how dark the room gets I can always see. It looks emptier when I put the lights on so I don't do it if I can help it. Brightness disagrees with me: it hurts my eyes, wastes electricity and encourages moths, all sorts of things. I sit in the dark for a number of reasons. (7)

Here, Galloway's use of language anchors the reader in the unhappy TAW of the narrator by presenting her life as dark, small and with limited possibilities for escape to happier TPWs.
There is an implicit reference to a TPW existing beyond the TAW in the negation 'No matter how dark the room gets', but this simply works to construct a different TPW in which the room, and therefore the narrator within it, exists in various degrees of darkness. While there is also an implicit construction of a TPW of a brighter room in the reference to a world which 'looks emptier' with the lights on, this world is depicted as even less hopeful than the present, dark TAW in its emptiness. This evokes both the TAW and any alternative TPWs as equally miserable: the TAW is dark and thus seems to be a desolate place due to the connotations of darkness with lack of hope, but a brighter room would seem even 'emptier', with both prospects suggesting loneliness and unhappiness. Joy's pain is also emphasised in the way even light itself is constructed as a harmful force for the narrator which 'hurts (her) eyes' and 'disagrees with (her)'. The construction of both the TAW and any alternative TPWs as unhappy conveys the totality of Joy's depression and inability to see beyond her experience of the present. This is reinforced, too, by the fact that as minuscule details of the protagonist's decision-making process regarding sitting in the dark are revealed, the reader becomes aware that this is a character who is spending a great deal of time reflecting and perhaps even obsessing over the smallest elements of their present existence, to the extent that even the process of leaving the light off is justified. Alongside the insight into Joy's state of mind that this level of detail creates, a sense of dissociation is also evoked here with the foregrounded verb phrase 'I watch myself'. The unusual structural placement of 'I watch myself from the corner of the room', which is followed by a line break, highlights the importance of this action of self-observation and indicates a fragmented sense of identity. In positioning the narrator's mind as separate from her body, the locative adverbial phrase constructs the TAW as a disturbing and unreal space, and, in suggesting that her perceptive faculties are operating from 'the corner' of this space, reinforces her desire to retreat from the world. The spatial detail of Joy's location in 'the corner' in the 'armchair' at 'the foot of the stairwell' also emphasises the overpowering nature of her present experience in the TAW and suggests that the mundane details of her environment are her principal focus, thus restricting our viewpoint to the present moment and environment. The repeated use of the simple present tense in 'I sit', 'I put', 'I can' and 'I watch' portrays Joy's existence as centralised around her banal actions in the TAW, thus creating an impression of the TAW as restricted in size and potential for escape. This, then, suggests the unhappiness of Joy's worldview by evoking a constrained life with limited possibilities for happiness and thus conveys the depression that she is experiencing. In the following excerpt, the narrator begins to plan for the future event of going to work, giving us a glimpse of alternative TPWs beyond this room, but this TPW is developed as a bleak and depressing contemplation:

The green numbers on the stereo flash 03.25. But it goes fast. I know perfectly well it doesn't matter what the real time is. This is all beside the point. The fact remains it's so late it's early and I have to move. I have to go upstairs. I have work tomorrow and I have to go upstairs.
(7)

The repeated use of deontic modality here, 'have to', constructs an unrealised TPW of going to work the next day. Rather than constructing multiple, potentially hopeful TPWs in the reader's imagination, as might be developed via other modal forms, there is only one future TPW repeatedly and explicitly constructed here, and it is one which is presented as unfavourable and compulsory. Thus, Galloway makes it clear that not only is Joy's present situation miserable, but so too is her contemplation of the future. By granting the reader explicit access only to the TPW of fulfilling the obligation of work, the author makes it clear that both the TAW and the narrator's conceptualisation of future TPWs are unhappy. This is reiterated by the repeated present tense construction of 'I have to' and the way in which each action in the narrator's planning for work is separately outlined, such as in her statements 'I have to go upstairs' and 'I have to move', which works to suggest that not only does work present a miserable future TPW but the preparation for bedtime in the TAW is also an unhappy, enforced experience. The unhappiness both of the TAW and any alternative TPWs is reinforced by the statement 'it doesn't matter what the real time is', which suggests that even an aspect of life as fundamental as time has become meaningless. This works to develop our sense that Joy's depression colours her interpretation of time itself, meaning all possible worlds available to her are equally dark and restricted. Galloway therefore conveys the pain and unhappiness of Joy's depression by suggesting that she has access only to a narrow spectrum of alternative TPWs. Alongside this, the TAW itself is presented as a dark and unhappy place. This is similarly demonstrated in the following extract, in which the narrator describes the experience of watching television whilst she drinks tea in the morning:

There are interviews with junior ministers while I make tea. Always tea in the morning unless I've eaten the night before: then, it's black coffee. Bad mornings, I have only hot water. But I drink something. But I drink something, as much as I can. It helps the headache and the dryness: the weight of fluid is calming. (10-11)

The present tense description of the events of Joy's morning initially seems to refer only to the immediate TAW of that day, as she listens to the radio 'while' she prepares her tea. However, the foregrounding of Joy's daily routine in the verbless phrases 'Always tea' and 'Bad mornings' suggests that this mundane morning procedure is ongoing rather than transient and therefore a monotonous aspect of her life extending to TPWs beyond the TAW. The elision of more typical subject and verb constructions removes agency from Joy as the events of the morning seem to be inflicted upon her, rather than actively chosen, as she follows the restrictive pattern set on previous days. Further to this, the use of the premodifier 'Bad' without any explicit reference to cognition or emotion suggests that these mornings are categorically bad, rather than just momentarily perceived as such, and thus evokes a sense of Joy being trapped within the TAW. The evaluative adjective 'Bad' also works to construct TPWs of 'mornings' beyond the immediate present that are even worse than the TAW, as this is a world in which Joy would only drink 'hot water'. Thus, even though the TAW itself is presented as a dark and depressing space, the reader is made aware that life is sometimes even worse than this.
Similarly, the irrealis 'unless I've eaten the night before' suggests TPWs that might exist beyond Joy's experience of the current morning, and yet this is an alternative world in which the only difference in Joy's morning routine would be the consumption of 'black coffee' rather than 'tea' and in which she would still not manage to eat anything. Thus, here we see again the linguistic construction of an unhappy TAW alongside a limited spectrum of alternative TPWs which are depicted as even more miserable than the TAW, evoking the restriction of Joy's depression. Alongside this, Joy explains she drinks as much liquid as possible to help with 'the headache' and 'the dryness'. By introducing both physical symptoms with the definite article, Galloway evokes both physical states as pre-existing, continuous aspects of Joy's existence rather than transient experiences, thus conveying the pain of the TAW as an ongoing rather than momentary discomfort. Therefore, when we are told that 'the weight of fluid is calming' we can infer that it is not only a single moment in which Joy feels in need of 'calming' but that this is a daily experience. In describing the drink as 'calming' and as helping 'the headache and the dryness', Galloway depicts Joy's psychological and physical malaise as intertwined, thus reiterating the multifaceted discomfort experienced in the TAW and implicitly emphasising the difficulty of accessing alternative TPWs. The metaphor of the calming 'weight' of fluid indicates the extent to which she feels, in contrast to the liquid, adrift and ungrounded, therefore helping to construct a sense of unreality. The notion of the tea as 'calming' also emphasises the extent of Joy's anxiety, which means she needs external entities to bring her back to reality, but it is notable that the specific features of her pain are not delineated, and her depression must be inferred from the depiction of her interaction with her surroundings. The frequent use of material process verbs (Halliday, 2013) such as 'have', 'eaten', 'make' and 'drink' to denote practical actions, alongside the infrequent use of verbs of cognition, also creates a sense of distance from cognitive and emotional processes, potentially suggesting that the narrator may be experiencing the feelings of numbness or emptiness often associated with depression (Mind, 2019). This evokes R12's comment that grief 'renders you a shell of yourself', a statement that suggests an implicit awareness of the way Joy feels emotionally empty and numb in the midst of her depression. The detailing of the banal elements of this morning routine also temporally extends this event, whilst restricting our perspective of the space beyond the house, helping to evoke the monotony of the present and therefore reinforcing the lack of opportunity for psychological escape from the darkness of the TAW, thus potentially contributing to readers' perceptions of the novel as 'claustrophobic' (R7 and R8). This creates a sense of oppression and constriction as our perspective is limited to a constrained view only of Joy's present, perhaps reflecting the way she is, in her depressed state, merely existing in the moment as she struggles to survive each day. Similarly, when Joy describes the next stage of her morning, the focus of the narrative is on the tedious actions of her daily routine:

When I'm ready, I rinse the cup, mop the sink dry and lift my coat. My mouth is still dry as I lock the back door. My mouth is always dry.
(11)

There is again a lack of mental process verbs here, with material process verbs such as 'rinse', 'mop', 'lift' and 'lock' depicting the banality of the daily routine and thus locating the narrative firmly in the monotonous processes of the TAW. The tedious tasks of the morning are foregrounded through the syntactical parallelism of 'rinse the cup, mop the sink dry and lift my coat', perhaps reflecting the concentration required to undertake ordinary tasks in a clinically depressed state alongside constructing a sense of mundanity, while the repeated use of the present tense active voice creates a sense of constriction to the TAW. The detail of Joy's 'dry' mouth is foregrounded here, both through repetition and through the preceding adverbs 'still' and 'always'. This extends understanding of the 'dryness' (11) established in the previous excerpt, with the emphasis on this element of physical discomfort again conveying her emotional state despite the lack of explicit references to cognition. A dry mouth can be a symptom of anxiety (NHS, 2018b) and a side effect of medications for depression (NHS, 2018c), but even readers without this knowledge are likely to have schematic awareness that emotions are felt and experienced physically as well as mentally, meaning that the emphasis on an uncomfortable bodily sensation implicitly suggests unhappiness. Furthermore, when considered alongside the emphasis on drinking tea in the previous excerpt as an attempt to help the 'dryness' and also for its calming effect, the fact that Joy's mouth is 'still dry' may indicate that both her physical and mental discomfort, or her depression, feel ultimately unfixable. Again, the reference to physical sensation rather than explicit linguistic references to cognition or emotion evokes numbness and an inability to process emotions, emphasising the physicality of depression as well as reinforcing that the TAW is a restricted space in which emotions cannot be easily defined or expressed, and in which happiness is out of reach. The focus on Joy's actions and routine, coupled with the descriptions of bodily sensation, thus implies a schism between her body and mind as she narrates her external experience and physical feelings and avoids explicit reference to thoughts or emotions. The TAW is therefore established as a constricted and painful place, but one in which emotions are not able to be expressed, giving us an insight into the depth of loneliness of the narrator and her imprisonment within her depression. This focus on physicality as a representation of cognition is also demonstrated in the following excerpt, in which the physical consequences of Joy crying are foregrounded in the first-person narrative, rather than her inner thoughts or feelings, and both the TAW and alternative TPWs are presented as unfavourable:

Blisters. Little moon craters on the smooth paper. I push the magazine aside and let the tears drip onto the rug until I'm ready to move to the kitchen for some paper towels. My nose fills and drips too, my face will be bloated. (27-28)

The emphasis on the narrator's physical actions here, such as the 'tears' that 'drip', implies that she is acting as witness to her own pain and has become disassociated from cognitive comprehension of her emotions.
By describing the immediate results of crying in metaphorical terms that evoke their impact ('blisters' that form 'moon craters' on the paper) rather than in terms of the narrator's feelings, Galloway again implies that Joy is observing herself from an external perspective, reinforcing our sense of her withdrawal from society and her inability to express or fully conceptualise her emotions. Furthermore, the focus on the physical details of crying, such as Joy's 'face', 'tears' and 'nose' that 'drips', works to separate Joy's body into discrete parts, thus evoking a sense of fragmented identity and dissociation, which is reiterated by the fact that the word 'crying' is not used at all here. The agency afforded, too, to the tears which create 'moon craters' and the nose which 'fills and drips' creates a contrasting impression of Joy's passivity as a narrator who withstands and endures the crying. This is reiterated when she moves the magazine so that the tears fall 'onto the rug', which suggests that while Joy can choose where the tears land, she cannot prevent them from falling. Again, then, the narrator's body and mind are presented as separate as she observes the results of her crying and her physical symptoms, indicating an inability to psychologically process her feelings.

This constructs the TAW as a deeply painful place, with a lack of possibility for mental escape. The only explicit references to alternative TPWs here are upsetting, unfavourable future events which are as unhappy as the present experience of crying; for example, the conjunction 'until' merely introduces a world in which Joy will be 'ready to move to the kitchen for some paper towels', therefore constructing a TPW in which she will respond with a practical solution to her crying, but not one in which her emotional pain will dissipate. Similarly, whilst she imagines a TPW in which 'my face will be bloated', again Joy is envisioning a future in which she will have to suffer the consequences of her crying. Not only is the TPW here constructed as even more desolate, perhaps, than the TAW, but the focus on this physical detail also develops Joy's lack of care for herself, as her principal concern is how she will appear to the outside world, rather than how she feels. Therefore, the protagonist's depression is presented as granting access only to a limited, restricted range of alternative TPWs which are as unhappy as the TAW.

Conclusion

The psychological pain that is experienced as an inherent aspect of depression is evoked convincingly in this novel through the portrayal of the TAW as a space with limited mental access to alternative TPWs. Where TPWs do exist, they are often constructed as even less favourable than, or as unhappy as, the TAW. The fact that readers are granted access, in the main, only to Joy's present experience in the TAW (a place of grief, misery and depression) alongside a narrow spectrum of alternative TPWs means that the TAW is evoked as a space of pain, darkness and mental constriction, whilst the alternative TPWs available to the protagonist are portrayed as restricted in their potential to provide happiness. Feasibly, this construction of an unhappy TAW and a restricted range of TPWs may contribute to readers' emotional responses to the text, which include feelings of claustrophobia, sadness and empathy.
While extratextual factors, as well as emotions elicited by a wide variety of other textual features, are also likely to contribute to readers' responses, within the scope of this article the application of a framework of possible worlds, using the concepts of the TAW and TPW, has helped to cast light on the stylistic construction of the novel and thus to illuminate some of the factors that may contribute to its emotional effects. Reader responses gleaned from online reviews help to ensure, too, that my assertions regarding the emotional impact of the novel are not simply intuitive and based on my own interpretations, but are also grounded in a broader set of empirical data. Elements of the online reader reviews demonstrate that readers' own personal experiences can intensify their emotional responses to the novel, with some readers explicitly referring to their knowledge or experience of depression as heightening the emotional impact of the text. The nuances of readers' experiences were not examined in depth for this article, given the nature of the reader response data gleaned from online reviews and thus the lack of contact with research participants. However, further research in this area using alternative naturalistic social reading methodologies, such as reading groups, might examine the impact of individual contextual and experiential factors in more depth in order to build an enriched comprehension of the role of the reader in co-constructing the TAWs and TPWs of fiction.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.