Regional-Scale Migrations and Habitat Use of Juvenile Lemon Sharks (Negaprion brevirostris) in the US South Atlantic

Resolving the geographic extent and timing of coastal shark migrations, as well as their environmental cues, is essential for refining shark management strategies in anticipation of increasing anthropogenic stressors to coastal ecosystems. We employed a regional-scale passive acoustic telemetry array encompassing 300 km of the east Florida coast to assess what factors influence site fidelity of juvenile lemon sharks (Negaprion brevirostris) to an exposed coastal nursery at Cape Canaveral, and to document the timing and rate of their seasonal migrations. Movements of 54 juvenile lemon sharks were monitored for three years with individuals tracked for up to 751 days. While most sharks demonstrated site fidelity to the Cape Canaveral region December through February under typical winter water temperatures, historically extreme declines in ocean temperature were accompanied by rapid, and often temporary, southward displacements of up to 190 km along the Florida east coast. From late February through April each year, most sharks initiated a northward migration at speeds of up to 64 km day−1, with several individuals then detected in compatible estuarine telemetry arrays in Georgia and South Carolina up to 472 km from release locations. Nineteen sharks returned for a second or even third consecutive winter, thus demonstrating strong seasonal philopatry to the Cape Canaveral region. The long distance movements and habitat associations of immature lemon sharks along the US southeast coast contrast sharply with the natal site fidelity observed in this species at other sites in the western Atlantic Ocean. These findings validate the existing multi-state management strategies now in place. Results also affirm the value of collaborative passive arrays for resolving seasonal movements and habitat preferences of migratory coastal shark species not easily studied with other tagging techniques.

Introduction

It is now widely recognized that as a group, sharks are unusually susceptible to overfishing, relative to most other marine fishes, due to their slow growth, late age of maturation, and low fecundity [1,2]. However, management of shark stocks is further complicated by a growing realization that many species undertake seasonal migrations spanning hundreds or thousands of kilometers in which they transit through jurisdictions with incongruous fishing regulations and enforcement strategies [3,4]. Prudent management in a given area can be largely negated by unsustainable harvest or habitat degradation in other portions of a species' range. Better understanding the geographic scale, directionality, and timing of shark migrations will help guide shark conservation efforts in coming decades as oceans are further stressed by habitat loss and ever-growing human dependence on marine resources. Specifically, migration data can be used to resolve stock boundaries, refine fishing seasons and catch quotas, limit shark bycatch, identify high value habitats (such as Habitat Areas of Particular Concern in US waters), and establish time-area closures or marine reserves [5]. The migrations of coastal shark species are often closely coupled with seasonal variations in water temperature [6][7][8]. These migrations appear to be adaptations to stay within a preferred temperature range, exploit seasonally productive foraging grounds, utilize optimal mating and parturition sites, or a combination thereof.
Along the US Atlantic and Gulf coasts, fishery landings and field surveys demonstrate that most coastal sharks become more abundant in northern and inshore portions of their range as waters warm in spring [9][10][11][12][13][14][15]. Females use nearshore waters and estuaries as pupping grounds where neonates remain through summer, presumably taking advantage of high prey availability and reduced predation [16]. By fall, individuals again shift southward and/or offshore. Yet even in this region where shark behavior has been a priority research focus for several decades, migrations have not been resolved in detail for most species due to the difficulties of following individual animals as they travel long distances through open water.

Passive acoustic telemetry is steadily gaining favor as an approach for resolving the detailed movements of fishes, including sharks, in estuarine and coastal settings [17]. Passive telemetry utilizes an array of submerged acoustic receivers deployed to autonomously record the presence of fish carrying acoustic transmitters. Individual animals can therefore be tracked for intervals much longer than is possible with manual telemetry where movements are recorded with a mobile (usually boat-based) receiver. One limitation, however, is that detections are only obtained when animals pass within a few hundred meters of a receiver. Consequently, a large percentage of passive telemetry studies of sharks to date [18][19][20][21][22][23][24] have occurred at insular locations or targeted reef-associated species where site fidelity is expected to be high. Studies of migratory shark species in continental settings [25][26][27][28] are often more challenging and generally yield data on individual animals for days to months, and encompass small sections of coastline. Theoretically, however, passive arrays are readily up-scalable so as to be suitable for resolving multi-year, regional-scale migrations and habitat associations in the coastal realm. Such efforts are arguably of greater management value since they better identify natural and anthropogenic risks facing long-lived marine species including sharks.

The life history of the lemon shark (Negaprion brevirostris) has received considerable scrutiny compared with most coastal sharks. Not only is it widely distributed throughout the western Atlantic from North Carolina to Brazil, the Gulf of Mexico, Caribbean Sea, and tropical eastern Atlantic and eastern Pacific, it is an apex predator in several habitats including turbid estuaries, seagrass beds, mangroves, and coral reefs [29,30]. Moreover, the lemon shark exhibits life history traits that leave it prone to overfishing. They grow slowly, only reaching sexual maturity at 225-240 cm total length and 11-13 years of age [31]. Fecundity is also low with females producing only 4-18 offspring every other year [32]. Like many large sharks, the species has been heavily fished throughout its range, is currently listed by the IUCN as a near-threatened species, and is the subject of growing management concern. Studies of the lemon shark using mark-recapture, acoustic telemetry, and genetic techniques in the Bahamas [33][34][35][36][37], south Florida [38], Caribbean [39], and Brazil [24,40,41] demonstrate that juveniles maintain fidelity to their natal nurseries for several years, have home ranges that expand gradually with age, and show little tendency for long distance dispersal until they approach adulthood.
However, recent findings from the US southeast coast suggest a very different strategy with young lemon sharks forming high density aggregations each winter in the surf zone at Cape Canaveral, Florida, with evidence of a northward spring migration as far as North Carolina [42]. Adult lemon sharks in the region exhibit a similar migratory behavior but with winter aggregations occurring near Jupiter, Florida [43], 170 km south of Cape Canaveral. We argue here that better understanding details of these aggregations and migration patterns is necessary to guide long-term management of the species in the US South Atlantic region. Therefore, the specific objectives of this study were to: (1) use a collaborative regional-scale passive acoustic array to resolve the degree of site fidelity of juvenile lemon sharks to Cape Canaveral, and (2) document the timing, rate, destinations, and temperatures associated with their seasonal migrations.

Materials and Methods

Ethics Statement

Lemon shark collection and handling was performed in accordance with a State of Florida Special Activity License (permit SAL-09-512-S) and the study was specifically approved by the Kennedy Space Center Institutional Animal Care & Use Committee (permit GRD-06-049).

Study Area

Tagging of juvenile lemon sharks was conducted at Cape Canaveral, east-central Florida (28.5° N; Fig. 1) from the beaches of Cape Canaveral Air Force Station and NASA's Kennedy Space Center. The shoreline here is among the most pristine of the Florida Atlantic coast with no residential or commercial development. Habitat disturbance is limited to space launch infrastructure set back from the beach several hundred meters. Due to security concerns associated with launch activities, public beach access has been prohibited along 45 km of this coast since the mid-1950s, although vessel-based activities (including fishing) are permitted. Nearshore waters are characterized by the expansive Southeast and Chester Shoals (minimum depth 1-3 m), with adjacent waters reaching 15 m. Bottom sediments are a mosaic of sand, shell, and mud with little hard-bottom substrate near the beach [44]. The shoreline exhibits longshore troughs that are partially sheltered from the surf zone by parallel sandbars. Juvenile lemon sharks up to 2 m long commonly aggregate within these troughs [42]. The Indian River Lagoon system lies directly inland of the study site; however, the nearest ocean inlets are Ponce de Leon Inlet (60 km north) and Sebastian Inlet (62 km south) as well as a small lock system in nearby Port Canaveral. Salinity remains roughly 35 psu year-round and tides have an amplitude of <1 m. The Canaveral region is a recognized climatic transition zone between warm-temperate and sub-tropical biogeographic realms [45]. Winter water temperatures remain above 15°C most years; however, periodic cold fronts can induce brief but rapid declines in coastal water temperature.

Shark Tagging

A total of 54 juvenile lemon sharks were collected from two recurring aggregation sites at Cape Canaveral (Fig. 1) over three successive fall-winter periods from 2008 to 2010. The number of sharks using each site occasionally exceeds several hundred individuals. All animals were collected from shore using a 3.7 m radius monofilament cast net. After capture, sharks were transferred to a 125-liter tank where they were placed ventral side up. The inverted position induced tonic immobility, after which a 25 mm incision was made parallel to the ventral midline and anterior to the cloaca.
A coded acoustic transmitter was inserted into the peritoneal cavity and the incision was then closed with 2-4 absorbable sutures (Look™ Polysyn) and cyanoacrylate adhesive (Vetbond™, 3M Corporation). In the first year, all sharks were fitted with Vemco V9-2H tags (5 g in air, 180 sec. nominal delay, ~270 day battery life). In subsequent years, larger Vemco V16-6H tags (34 g in air, 90 sec. nominal delay, 6.4 year battery life) were used. Sharks were also marked with external dart tags offering a reward in case of angler recapture and then released on site. Total time from capture to release was usually 10-15 minutes.

Florida Atlantic Coast Telemetry (FACT) Array

Movements of tagged sharks were monitored via the Florida Atlantic Coast Telemetry (FACT) Array, a regional-scale passive acoustic array maintained by several marine research organizations. During this study, the FACT Array consisted of 160-180 acoustic receivers (Vemco VR2 and VR2W) deployed over 300 km of the Florida east coast from West Palm Beach (26.5° N) to Ponce de Leon Inlet (29.1° N; Fig. 1). FACT monitored multiple habitats including beaches and nearshore reefs/wrecks in the open Atlantic Ocean as well as estuarine waters of the adjacent Indian River Lagoon. Special care was taken to anchor receivers at migratory chokepoints including all ocean inlets as well as natural constrictions, causeway channels, and river mouths. In addition to FACT, several other compatible passive acoustic arrays were deployed in the US South Atlantic. Most notably, an expansive array was established in estuarine and riverine waters of Georgia, South Carolina, and North Carolina by January 2011, during the third year of this study. Arrays were also located at various locations in the Florida Keys, Bahamas, and Chesapeake Bay for the duration of this study.

At Cape Canaveral, the number of FACT receivers (referred to herein as the Canaveral Array) was expanded each winter (Fig. 1). In December 2008, five "nearshore" receivers were deployed 250 m off the beach at a large lemon shark aggregation site south of Cape Canaveral. In December 2009, an additional "offshore" row of five receivers was installed 1250 m from the beach at this same site. Finally, in December 2010, four additional receivers were added just north of Cape Canaveral near a second aggregation site, bringing the total local receiver count to 14. Mean depths of nearshore and offshore stations were 3.7 and 6.7 m, respectively. All receivers were bracketed to large sand screws and downloaded using SCUBA at six-month intervals.

Daily water temperature (°C) and wave height (m) within the Canaveral Array were obtained from NOAA buoy #41113 moored 5 km east of Port Canaveral. Water temperature was also measured using temperature recorders (HOBO™ loggers, Onset Corporation) attached to receivers at Ponce de Leon Inlet, St. Lucie Inlet, and Jupiter Inlet. When sharks were detected at nearshore locations lacking loggers, surface water temperature was estimated using NOAA AVHRR satellite imagery (available at http://marine.rutgers.edu/mrs/sat_data). Air temperature data from 1901-2011, used to provide historic context as to the relative severity of winter temperatures experienced at Cape Canaveral during the study, were obtained from the nearby Titusville National Climatic Data Center Station #088942. The relationship between air and water temperature at Cape Canaveral was explored using Spearman's rank correlation for all 984 days of the study when both values were available.
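As an illustration only, the air-water temperature comparison described above can be sketched in R as follows. The data frame daily_temps and its columns air_c and water_c are hypothetical names, not variables from the study.

# Minimal sketch (not the authors' code) of the daily air vs. water
# temperature comparison. Days with either value missing are dropped,
# leaving the paired observations described in the text.
paired <- na.omit(daily_temps[, c("air_c", "water_c")])

# Spearman's rank correlation between daily air and water temperature;
# with many tied temperature values R will warn that an exact P value
# cannot be computed and will use the asymptotic approximation instead.
cor.test(paired$air_c, paired$water_c, method = "spearman")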
Acoustic Array Performance

Assessing the performance of Canaveral Array receivers over a broad spectrum of ocean conditions was important given that lemon sharks frequent the surf zone where wave action may hinder transmitter detection. The large study area made it impractical to quantify detection distances throughout the entire array. We instead deployed a range-test transmitter with a 3-min fixed interval at a single location for 162 continuous days to gauge detection rates in relation to changing habitat conditions. This transmitter, which had a signal strength (160 dB) identical to that used in most sharks, was deployed on a small rod midway between a nearshore and offshore station (depth 4.2 and 8.5 m, respectively). The transmitter was thus 750 m from the shore and 500 m away from each receiver. We tested daily detection probability of this transmitter as a function of water depth (shallow vs. deep receiver), daily wave height, and daily water temperature, using a limited set of nested generalized least squares models [46] within the nlme package [47] of R. To account for potential serial autocorrelation between successive days, we investigated models incorporating simple autoregressive correlation structures ARMA and AR1. Because variance in daily detection rate appeared to differ between depths, we also considered models which allowed for this difference in the variance structure. Once we had chosen the best correlation and variance structure, we used model selection based on adjusted Akaike Information Criteria (AICc) [48] to compare the full model with both interaction terms to all simpler models (i.e., one or more terms removed).
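A minimal sketch of this range-test analysis in R is given below, assuming a hypothetical data frame range_test; the column names (detect_rate, depth, wave_m, temp_c, day) are illustrative, not the authors' variables, and only the full model and one simpler candidate are shown rather than the complete nested set.

# Sketch (assumptions as noted above) of the receiver range-test models:
# daily detection rate vs. receiver depth, wave height, and water temperature,
# with first-order autocorrelation between successive trial days and a
# separate residual variance for each depth.
library(nlme)

full <- gls(detect_rate ~ depth * wave_m + depth * temp_c,
            data        = range_test,
            correlation = corAR1(form = ~ day | depth),
            weights     = varIdent(form = ~ 1 | depth),
            method      = "ML")  # ML so fits with different fixed effects are comparable

# One of the simpler candidates: both interaction terms dropped
main_only <- update(full, . ~ depth + wave_m + temp_c)

# Small-sample corrected AIC computed from the fitted log-likelihood
aicc <- function(m) {
  k <- attr(logLik(m), "df")
  n <- length(residuals(m))
  AIC(m) + (2 * k * (k + 1)) / (n - k - 1)
}
sapply(list(full = full, main_effects = main_only), aicc)

Ranking all candidates on the same AICc scale, rather than sequential significance tests, mirrors the model-selection approach described in the text.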
Shark Habitat Use and Movement Analyses

Analyses of shark movements were constrained to data collected from December 2008 through December 2011 (37 months). To avoid inclusion of false detections resulting from code collisions and background noise, detections at a receiver were deemed valid only if two or more occurred within a 30-min period for a given shark unless detections for that individual were also recorded at a receiver <5 km away on the same date. Traditional measures of animal home range size (e.g., kernel density estimates) derived from passive receivers in the open ocean are likely to be misleading. We instead sought to identify individual-based and environmental variables that helped predict lemon shark presence at Cape Canaveral by developing a series of 72 a priori candidate logistic regression-type generalized linear models [49]. The support for each "residency model" was measured by its AICc value [48]. Our binomial response variable was the daily presence/absence of an individual shark anywhere within the Canaveral Array (not detections at specific receivers). Individual-based explanatory variables considered were shark sex, log-transformed size at capture, size class (large vs. small), and days at liberty. We also considered days at liberty as a categorical variable with four levels to explore the scale of this effect on shark detection probability. Environmental variables considered included water temperature (°C), the magnitude of water temperature change over the previous 3, 7, 14, and 30-day intervals (termed D3temp, D7temp, D14temp, D30temp), day length (hours), wave height (m), and month of year. Water temperature and day length were highly correlated and thus never included in the same model. Individual sharks were considered a random effect to account for any individual heterogeneity. Study Year was included as a random effect since the expanding array footprint each winter resulted in growing detection probability through time. Month crossed with year was considered a random effect to account for temporal patterns not explained by any fixed effects. Sharks present at Cape Canaveral for less than one week (n = 5) provided limited information and were not included. To account for potential serial autocorrelation in daily detection probability, we included state dependence and time series approaches [50]. Specifically, we created six state dependence variables which coded for whether or not an individual shark was detected at Cape Canaveral over the previous 1-6 days. We then considered six state dependence models which included the first order through sixth order autocorrelation terms added to the full model (e.g., 1 day lag + 2 day lag). We used AICc to decide which state dependence model had the best support. We also considered time series models which incorporated the serial autocorrelation structure directly into the generalized linear mixed effects models, using function glmmPQL from the MASS package in R version 2.14.1 [51]. Because these models were fit using quasi-likelihood methods, we could not use this formulation directly in model selection; instead they were used to evaluate the use of state dependence variables to address the serial autocorrelation. Once we decided on the optimal random effects and state dependence structure, we fit all 72 candidate residency models with this structure using the lme4 package [52] in R version 2.14.1 [51].

In addition to residency, we examined depth preferences of lemon sharks in the Canaveral Array by comparing the distribution of detections on the nearshore receiver row vs. offshore receiver row using a χ2 test. Further, to explore whether shark detections varied across the day as a result of onshore-offshore movements, time of each detection was rounded to the nearest hour and the resulting distribution was also explored using a χ2 test with the null hypothesis being equal detections throughout a diel cycle. Only data collected after November 2009, after which equal numbers of receivers were deployed in each row, were included.

To provide a range for lemon shark migration speeds along the coastline, rate of movement was calculated for all occasions when sharks transitioned between our six pre-defined coastal regions (e.g., Cape Canaveral, Ponce Inlet, SE Florida). These movements exceeded 50 km in all instances. Rates were noted as km day−1, and in body lengths sec−1 for events which occurred within six months of shark tagging. Distance was measured as the straight-line distance through water between receivers. We considered movements from Cape Canaveral to either Ponce de Leon Inlet (north) or the Sebastian Inlet-West Palm Beach region (south) as providing the truest estimates of migration rates. These migrations follow a relatively linear coastline and all tidal inlets were monitored with acoustic receivers, allowing us to account for any excursions into the Indian River Lagoon. Differences in swimming speed between directions (north vs. south) and between sexes were compared using Student's t-tests.
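As a sketch only, one of the candidate residency models described above could be specified with lme4 roughly as follows, assuming a long-format data frame daily with one row per shark per day. All column names are hypothetical, and only two of the 72 candidate models are shown.

# Minimal sketch (not the authors' code) of one candidate "residency model":
# daily presence/absence of each shark in the Canaveral Array as a binomial
# GLMM with the random effects described above and a 1-day state dependence
# (lag) term. Assumed hypothetical columns: present (0/1), day_length_h,
# d3temp (water temperature change over the previous 3 d), liberty_cat
# (4-level factor for days at liberty), lag1 (0/1, detected the previous day),
# water_temp_c, shark_id, year, and month_year.
library(lme4)

m_daylength <- glmer(
  present ~ day_length_h + d3temp + liberty_cat + lag1 +
    (1 | shark_id) + (1 | year) + (1 | month_year),
  data = daily, family = binomial
)

# A competing candidate: water temperature substituted for day length, since
# the two covariates were too strongly correlated to enter the same model.
m_temp <- update(m_daylength, . ~ . - day_length_h + water_temp_c)

summary(m_daylength)

Fitting every candidate with the same random-effect and lag structure, then ranking them by AICc, keeps the comparison across the 72 models consistent, which appears to be the intent of the procedure described above.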
Results

A total of 54 juvenile lemon sharks were tagged over the three winters of the study (Table 1). Captured sharks ranged in size from 610 to 1430 mm fork length (FL) with a mean of 840 mm FL. Shark size was similar across years (ANOVA, F2,51 = 0.84, P = 0.44) and between sexes (Wilcoxon Rank Sum Test, W = 376.5, P = 0.84).

Acoustic Array Performance

The performance trial of the Canaveral Array ran for 162 days; the overall daily detection rate of the range-test transmitter deployed near the surf zone was 64.2% at a distance of 500 m. Performance varied markedly through time with daily detection rates ranging from 0.4-95.3%. The best supported model had main effects for both wave height and temperature (P < 0.001; see Table S1 for model details). As wave height increased, tag detection rates decreased, and as water temperature increased, tag detection rates increased. Water depth was not a significant factor in this setting, with the nearshore (4.2 m deep) and offshore (8.5 m deep) receivers performing similarly with daily detection rates of 64.5% and 63.9%, respectively.

Shark Residency and Habitat Use at Cape Canaveral

Juvenile lemon sharks were followed for 0.5-751 days with a mean (± 1 SD) of 217 (± 226) days (Table 1; see Table S2 for details on individual sharks). A total of 41,869 position detections were recorded from December 2008 through December 2011 and all 54 sharks were detected in the Canaveral Array at some point. With the exception of early 2010 (see migration details below), tagged sharks generally demonstrated site fidelity to the Cape Canaveral region from late November through late February with few detections elsewhere along the southeastern US coast (Fig. 2). While no shark was detected at Cape Canaveral more than 2600 times, many sharks were recorded here on a near-daily basis for several weeks at a time while others were detected more sporadically. The installation of receivers at a second (more northerly) aggregation site in late 2010 demonstrated that individual sharks regularly moved between aggregations and thus commonly spent time beyond the bounds of the initial Canaveral Array footprint.

The best-supported residency model (AICc weight = 0.87; Table 2) determined that day length, categorical days at liberty, and the magnitude of water temperature change over the previous three days (i.e., D3temp) helped predict daily detection probability of lemon sharks at Cape Canaveral. In this model, day length had the greatest (negative) effect size with individuals most likely to be present on the shortest days of the year (Table 3; Fig. 3). D3temp also had a negative effect, meaning that cooling trends resulted in higher predicted probability of shark detection, while warming trends resulted in lower predicted probability. The effect size for days at liberty was also negative, meaning that sharks were more often detected on dates nearer their release date. Neither sex nor size helped predict lemon shark presence at Canaveral. Further, an effect of wave height on detection probability, shown during range testing to reduce receiver performance, was not supported, confirming that sharks were detected at least sporadically when present in the Canaveral Array, even during periods of high seas. The state dependence variables showed a strong positive effect of the 1 day lag on the probability of detection, and weaker effects for the 2-4 day lags (Fig. 3). Measures of autocorrelation for Days 1-4 (0.23, 0.05, 0.05, 0.03) agreed well with those estimated by the time series model (0.32, 0.1, 0.03, 0.01). Both methods produced similar parameter estimates, increasing confidence in state dependence modeling for evaluating the effects of covariates on shark detections. Lemon sharks were strongly associated with the shoreline when at Cape Canaveral.
Nearly 82% of all detections were recorded by the nearshore receiver row, more than expected by chance if sharks used both depths equally (χ2 = 9820, df = 1, P < 0.001; Fig. 4). Only eight of 42 animals were more commonly detected at offshore receivers, all of which spent little time at Cape Canaveral relative to other sharks. Further, detections were not evenly distributed across the diel period with peak detections occurring at night between 1900-0600 (χ2 = 5289, df = 23, P < 0.001).

Direction, Timing, and Rate of Coastal Migrations

Of the 54 lemon sharks tagged, 41 were detected away from Cape Canaveral. These individuals were recorded on 62 additional FACT stations from Palm Beach Inlet (26.8° N) to Ponce de Leon Inlet (29.1° N) at various times during the study (Figs. 1, 2). Sharks also entered other passive arrays in Ossabaw Sound (n = 3) and Savannah River (n = 1), Georgia (32.0° N), and Charleston Harbor (n = 2), South Carolina (32.8° N). The minimum linear distance between the northernmost and southernmost detection was 663 km but over 770 km when following the coast. On average, sharks were detected on 9.1 receiver stations with individual animals visiting as many as 27 stations. Locational information was also provided via angler recaptures at Jupiter Inlet (170 km south of release site) and Ponce de Leon Inlet, Florida (88 km north of release site), and Little St. Simons Island, Georgia (323 km north of release site).

Nearshore water temperature at Cape Canaveral ranged from 11-30°C and averaged 23.3°C across the study (Fig. 2). Lemon sharks were detected throughout this range (12-30°C) but >70% of detections occurred at temperatures between 15-20°C (Fig. 5). Winter water temperature, averaged from December through March, differed across years (One-Way ANOVA, F = 17.85, P < 0.001) as a result of severe declines in January-March 2010 and again in December 2010. This atypical variability was accompanied by notable differences in shark migration patterns across the three winters of this study. While extensive records of water temperature are unavailable, local water and air temperatures were strongly correlated (Spearman's rank correlation, rs = 0.92, df = 982, P < 0.001), suggesting that air temperature serves as a good proxy for the relative severity of winters at Cape Canaveral.

The winter of 2008-2009 was moderate with air temperature averaging 17.2°C (near the long-term mean of 17.0°C) and water temperature ranging from 16-23°C. The nine lemon sharks released in December 2008 were detected locally for 3-106 days with the last two sharks recorded on 5 March (Fig. 2). Five of these sharks were later detected at Ponce de Leon Inlet between 27 February and 22 March 2009, confirming a northward spring migration for these individuals (Table 4). Sharks were not detected elsewhere until sharks #8 and 9 returned to Cape Canaveral on 24 November and 9 December 2009, respectively.

The 23 lemon sharks released in the second winter of the study were also initially detected only in the Canaveral Array (Fig. 2). By early January 2010, however, several reinforcing cold fronts swept across peninsular Florida resulting in one of the most severe cold weather events on record at Cape Canaveral. Daily air temperature from 2-13 January averaged from 2-10°C, resulting in significant cold-induced mortalities of coastal fishes with tropical affinities (Reyier, personal observation).
Moreover, temperatures remained below average for several weeks; winter air temperature averaged only 14.7°C, the sixth coldest on record since 1901. Water temperature in the Canaveral Array reached a minimum of 11°C on 11 January and generally remained below 16°C through mid-March. This rapid drop in ocean temperature was accompanied by the exodus of all 23 tagged sharks from the Canaveral Array with the last individual (#19) detected on 10 January at a water temperature of 12.0°C. Fifteen sharks made confirmed southward migrations along the coast and were recorded at multiple FACT stations from Sebastian Inlet to West Palm Beach, 62-191 km south of Cape Canaveral. Water temperature in this region was typically 3-6°C warmer than Cape Canaveral due to the moderating influence of the Florida Current, which diverges from the Florida east coast near Jupiter. Migrating sharks were always detected singly, a behavior observed consistently across the study. Sharks generally followed the coastline with 13 individuals detected at ocean inlets although one shark was detected 10 km offshore in water 22 m deep. Shark #25 reached West Palm Beach in three days, a rate of 59 km day−1, and several others moved at >40 km day−1. Five animals (#22-25, 31) passed by receivers at the south end of FACT at this time and never returned. Other sharks were simply never detected again after leaving the Canaveral Array in early January. Notably, shark #12 actually moved north to Ponce de Leon Inlet in late January (water temperature of 13.6°C) before returning to Cape Canaveral in early February.

Eight of 23 tagged sharks returned to the Canaveral Array from 29 January to 8 April. Five of these individuals (#10, 11, 13, 20, 29) were then recorded swimming north past Ponce de Leon Inlet from 3-26 April (later in the spring than observed in 2009), with shark #10 subsequently harvested nearby. Like the previous year, the location of these four remaining animals from late spring through fall was undetermined but all four returned to Cape Canaveral between 14 November and 6 December 2010. A single shark (#33) tagged in spring remained within the Canaveral Array throughout summer 2010 and summer 2011 as well, confirming that at least some juvenile lemon sharks at Cape Canaveral do not undertake northward spring migrations.

December 2010 was also unusually cold; local air temperature averaged 10.8°C, the second coldest December on record since 1901. This event also resulted in mortality of tropical fish species but water temperature was less severe than the previous winter, falling to a low of 14.1°C in late December before returning to more seasonable conditions by early January.

In cases when migrations occurred within six months of release (i.e., when shark size was known), movement rate measured as body lengths sec−1 ranged from 0.02-1.1 (mean 0.27, n = 59). And when considering only migrations along the linear Florida east coast, rates were similar, averaging 18.9 km day−1 (n = 66) and 0.27 bl sec−1 (n = 53), respectively. Southerly and northerly migrations occurred at similar speeds (t-test, t = -1.384, df = 43.6, P = 0.17), were similar across sexes (t-test, t = -0.349, df = 47.1, P = 0.72) and were not related to size at capture (Spearman's rank correlation, rs = 0.02, P = 0.88).

Regional Habitat Use

Tagged lemon sharks were detected within every major habitat monitored by the FACT Array.
While all 54 sharks (73% of all detections) were recorded in nearshore Atlantic waters, 40 sharks (19% of detections) were also recorded at tidal inlets including Ponce de Leon (n = 30), Sebastian (n = 2), Ft. Pierce (n = 9), St. Lucie (n = 7), and Jupiter Inlets (n = 6) as well as nearby Port Canaveral (n = 2). Nine animals (7% of detections) penetrated >5 km into estuarine waters of the Indian River Lagoon and one shark (1% of detections) was recorded 7 km up the Loxahatchee River near Jupiter Inlet, although salinity at this site was not available. Shark #21 spent ≥166 days in the estuary and moved 106 km north from Sebastian Inlet before returning south and offshore, spending more time and moving farther up-estuary than any other tagged individual. With the exception of Ponce de Leon Inlet, which lies along the annual migration route, use of inlet and estuarine habitats within the FACT Array occurred almost exclusively during early 2010 as sharks moved south from Cape Canaveral in association with rapidly falling water temperature.

Discussion

In this study, we utilized a collaborative passive acoustic array to document regional-scale migrations and habitat associations of juvenile lemon sharks in the US South Atlantic for the first time. Tagged sharks utilized at least 660 km of coastline from southeast Florida to South Carolina with individuals tracked for up to 751 days. Our findings clearly demonstrated that: (1) immature lemon sharks found in nearshore aggregations at Cape Canaveral exhibited site fidelity to this region from December through February under seasonally typical water temperatures; (2) temperature declines below 15°C were accompanied by a rapid but often temporary southward displacement along the Florida east coast; and (3) in contrast to other populations studied to date, most juvenile lemon sharks overwintering in east-central Florida undertook an annual northward migration starting in late winter, and spent summer in nearshore and estuarine waters of north Florida, Georgia, and the Carolinas before returning south to east-central Florida in fall.

Cape Canaveral as a Lemon Shark Nursery

The notion that many coastal shark species have discrete nurseries has been widely accepted for decades, with many adopting the definition of Bass [53] who states that primary nurseries are locations where parturition takes place and secondary nurseries are where young reside when growing to maturity. Heupel et al. [54] argue convincingly that this concept is too often applied to areas where immature sharks occur in low density or spend little time. They instead propose three testable criteria for evaluating whether a location is indeed a shark nursery: (1) young sharks of a given species are more abundant than in other areas, (2) individuals use the putative nursery for extended periods (i.e., exhibit site fidelity), and (3) the area is utilized by a species repeatedly across years. Our growing understanding of lemon shark life history in the US South Atlantic suggests that nearshore waters at Cape Canaveral merit the definition of a winter nursery for the species even under these stricter standards, and may constitute the single most valuable winter nursery for lemon sharks in US waters north of the Florida Keys-Florida Bay region. While abundance was not quantified here, tagged sharks were sampled from aggregations of several hundred individuals, and winter densities as high as 22 sharks per shoreline km have been observed locally in recent years [42].
To our knowledge, this aggregating behavior has not been noted for juveniles elsewhere along the US Atlantic coast, and immature lemon sharks are a minor component of shark surveys elsewhere in Florida [29,55], Georgia [56], South Carolina [57,58], and North Carolina [59]. Our findings directly address more challenging questions regarding site fidelity and seasonal philopatry (i.e., homing) to the Canaveral region. The FACT Array provided strong evidence that most juvenile lemon sharks arrived at Cape Canaveral beginning in late November, remained through February (often longer), and utilized coastal waters south of Cape Canaveral only when water temperature receded below 15°C. And while aggregations dissipated each spring, they reformed the ensuing winter, as they have annually since first encountered in 2003 [42]. Most notably, 19 of 54 tagged individuals returned for a second or even third successive winter. Given that mortality of young lemon sharks has been estimated at 38-65% annually [60], and that transmitters deployed the first winter had battery life <1 year, this rate of return appears high.

The reason(s) why lemon sharks aggregate at Cape Canaveral is not fully understood but our data suggest that water temperature largely underlies this phenomenon. Cape Canaveral is a climatic transition zone where winter water temperature grades rapidly from north to south and does not drop below 15°C most years [45]. This condition is partially a function of latitude; however, satellite ocean temperature imagery also suggests that the nearby shoal complex partially deflects the predominant south-flowing nearshore current eastward, allowing warmer north-flowing offshore currents to intrude near the coast. On some winter days, water temperature on either side of the shoals may differ by up to 2-3°C. In most years, therefore, the Canaveral region may simply be the highest latitude where lemon sharks can safely overwinter without serious repercussions to survival and growth. Since tagged sharks returned to Canaveral as early as November when water temperature in northeast Florida was still typically >20°C, aggregations may be an instinctive or learned behavior, not a direct response to ambient temperatures encountered during southward fall migrations. The sand shoals here may also serve as a predator refuge or productive foraging grounds. In fact, following the conclusion of this study, the Canaveral Array was further expanded with receivers deployed further offshore. To date, a total of 13 juvenile lemon sharks have been detected up to 12 km from the beach (E. Reyier, unpubl. data). Finally, it is conceivable that these juvenile aggregations were historically more widespread in east Florida during winter but now persist only at Cape Canaveral due to limits on public shore access and fishing enacted for space launch security in the 1950s.

Table 2. Ten best supported models from the 72 a priori models relating environmental and individual covariates to daily detection probability (DDP) of lemon sharks at Cape Canaveral.

Seasonal Migrations in the US South Atlantic

The historically cold water temperature during January 2010 resulted in widespread mortality of tropical fish species throughout peninsular Florida [61], but was fortuitous in the sense that it allowed us to observe a broader suite of lemon shark behavior than might be expected in a typical three year period.
Like other marine fishes, lemon sharks exposed to temperature approaching their lower lethal limit would be subject to disruption of neuroendocrine, metabolic, osmoregulatory, and immune functions, potentially culminating in death [62]. The sudden exodus of all tagged lemon sharks from Cape Canaveral once water reached 12°C in early 2010, and a rapid southern migration of at least 15 individuals to coastal waters moderated by the warm Florida Current, was clearly in direct response to this unusual meteorological event.

The near-complete exodus of sharks from Cape Canaveral from February through April in all three years of the study and the subsequent detections of 30 individuals at Ponce de Leon Inlet (northeast Florida), Georgia, and South Carolina demonstrate that lemon sharks as small as 660 mm FL commonly undertake extensive northward migrations each spring. In contrast to the southern migrations observed in early 2010, these annual migrations may not be cued directly by water temperature. Day length, not temperature, appeared as the single most important factor when predicting lemon shark presence at Cape Canaveral over the long term, and many north-migrating sharks passed Ponce de Leon Inlet when water temperature was only 16-18°C. We suggest that growing day length in spring provides the primary stimulus to initiate annual coastal migrations, as has also been suggested for sandbar sharks (Carcharhinus plumbeus) in Chesapeake Bay [11].

The extensive migrations we observed contrast with results of virtually every other study of lemon shark behavior and dispersal in the Bahamas [33,35], Caribbean [39], Brazil [24], and even south Florida [38]. Most notably, Chapman et al. [34] used genetic techniques to conclude that dispersal of lemon sharks in Bimini, Bahamas (only 320 km from Cape Canaveral), was very slow; the majority of individuals up to six years old at Bimini were locally born. Most previous studies have occurred at insular sites or lower latitudes where seasonal migrations may be less advantageous because annual temperature variability is less extreme, or because dispersal is not attempted, or not often successful, due to high juvenile mortality in the open ocean. Regular lemon shark migrations along the US southeast coast are presumably an adaptation which allows seasonal use of productive estuaries from spring through early fall as temperatures allow. These migrations also result in lower densities which may be necessary since the condition of lemon sharks in aggregations deteriorates as winter progresses [42], suggesting that Canaveral waters cannot sustain such high shark numbers year-round.

The stark regional differences in lemon shark behavior and habitat associations underscore the wisdom of tailoring management strategies to both a species' basic biology, which may vary little over broad geographic scales, and its behavior, which varies from site to site. Along the US east coast, lemon sharks are currently managed as a single stock in the large coastal shark management group [63], subject to recreational and commercial size and catch quotas. Further, in 2010, due to mounting evidence that lemon shark aggregating behavior made them especially vulnerable to overfishing, the State of Florida imposed an outright, although potentially temporary, harvest ban in state waters [43]. Given the extensive migrations we observed in individual sharks, coupled with the spatially predictable nature of their aggregations, this dual approach seems warranted.
That said, permanent protection of Florida's lemon shark aggregations in both state and federal waters (possibly through extremely stringent quotas or time-area closures) may be the single most important step for ensuring long-term conservation of the species in the US South Atlantic region.

Remaining Questions

Adult lemon sharks that overwinter off Jupiter, Florida, exhibit a similar north-south migratory pattern along the coast. Almost 60 tagged adults passed through the Canaveral Array in late spring and several remained in the region well into summer. Female lemon sharks give birth in spring but the apparent lack of neonates at Cape Canaveral or the adjacent estuary [42] suggests that parturition occurs primarily north of east-central Florida; to date these pupping areas have not been located. And while ongoing genetic sampling has demonstrated that adults in Jupiter aggregations are the parents of some Canaveral juveniles (D. Chapman, unpubl. data), it remains unclear to what extent, and at what age, the immature sharks recruit into adult aggregations down the Florida coast. That said, this study validates the use of collaborative passive arrays for the purposes of resolving regional-scale migrations for managed coastal fishes not easily tracked in detail with satellite-based techniques. As the technology becomes more widely embraced, answers to these questions will be within reach for lemon sharks and other coastal shark species.

Supporting Information

Table S1. Canaveral Array Performance. The best supported generalized least squares model for the receiver performance trial had main effects for wave height and temperature. Test distance between transmitter and receivers was 500 m.

Table 4 notes: Instances where sharks made forays to/past Ponce Inlet but quickly returned to Canaveral (n = 2) are excluded. *Burial of two receivers in fall 2011 limited the ability to detect south-migrating lemon sharks passing by this area. doi:10.1371/journal.pone.0088470.t004
Original CIN: reviewing roles for APC in chromosome instability

You may have seen the bumper sticker "Eve was framed." Thousands of years of being blamed for original sin and still many wonder, where's the evidence? Today, the tumor suppressor adenomatous polyposis coli (APC) may have the same complaint about accusations of a different type of CIN, chromosome instability. A series of recent papers, including three in this journal, propose that loss of APC function plays an important role in the CIN seen in many colon cancer cells. However, a closer look reveals a complex story that raises more questions than answers.

Does loss of APC promote CIN?

Adenomatous polyposis coli (APC) was first identified as a tumor suppressor gene mutated in familial colon cancer; it is also mutated in most sporadic cases (for review see Polakis, 2007). Its best-known role is as a negative regulator of Wnt signaling (for review see Clevers, 2006), but it also plays Wnt-independent roles in cytoskeletal regulation (for review see Näthke, 2004) through its ability to bind microtubules (MTs) and MT-associated proteins as well as associate with the actin cytoskeleton. APC has usually been implicated in chromosome instability (CIN) via its proposed cytoskeletal regulatory roles, but, as we see below, recent work also suggests possible roles for activated Wnt signaling in CIN.

When considering guilt or innocence, the first question is whether a crime actually occurred: does the loss of APC increase CIN? To answer this, we first must define CIN. In colon cancer, sequential mutations promote progression from polyp to adenoma to carcinoma. In colon and other cancers, advanced tumors exhibit CIN, with both aneuploidy (changes in chromosome number) and chromosome aberrations (translocations or other rearrangements) increasing as cancer progresses. Despite this strong correlation, it has remained uncertain whether CIN causes cancer or is a side effect of mutations in guardians of genome integrity; however, recent evidence supports a causal role (Weaver et al., 2007). Mutations in checkpoint mediators, DNA damage sensors, or kinetochore proteins help explain CIN but only account for ~10% of aneuploid tumors (Cahill et al., 1998; Wang et al., 2004). APC loss is the first step in colon carcinogenesis. If APC loss causes CIN, CIN should occur early in cancer progression. Data from a variety of laboratories suggest that many but not all APC mutant adenomas are aneuploid (e.g., 53% in Cardoso et al., 2006; also see Haigis et al., 2002; Sieber et al., 2002). Thus, although APC mutant adenomas often become aneuploid, some do not. Collectively, the data suggest that APC loss does not lead to wholesale failure in chromosome segregation in vivo but may trigger defects in the fidelity of chromosome segregation that promote cancer progression.

APC localizes to kinetochores, centrosomes, and astral MTs

We next must ask where the crime occurred and whether APC was at the scene of the crime. APC's subcellular localization is complex and somewhat contentious (Fig. 1). There is agreement about APC localization during interphase: it localizes to puncta in cell protrusions near the ends of MTs (Näthke et al., 1996) and can surf on MT plus ends (Mimori-Kiyosue et al., 2000). Surprisingly, this does not require the +tip protein EB1 (Kita et al., 2006).
+Tip localization is consistent with roles in stabilizing astral MTs, thus affecting spindle or contractile ring position during cytokinesis (Green et al., 2005; Caldwell et al., 2007). In mitosis, different groups report different localizations, each consistent with distinct roles in CIN. APC has been reported at kinetochores (Fodde et al., 2001; Kaplan et al., 2001) and at centrosomes (Banks and Heald, 2004; Louie et al., 2004). At kinetochores, it might regulate MT-kinetochore attachment or the spindle assembly checkpoint (SAC); at centrosomes, it could influence centrosome duplication or nucleation of spindle/astral MTs during mitosis. Both are consistent with roles in CIN. Truncated APC proteins like those found in tumors also have been localized to different sites, including puncta along spindle MTs (Green and Kaplan, 2003) or at centrosomes (Tighe et al., 2001). This diversity in reported sites of action raises questions about which is critical for CIN.

Figure 1. APC localization. APC has been localized to several subcellular locations, some of which are highlighted here.

Evaluating a role in CIN

The first evidence linking APC and CIN came in 2001, when two groups reported that cultured embryonic stem (ES) cells homozygous mutant for APC (the Min or 1638T truncation mutations) become aneuploid in culture and accumulate rearranged chromosomes (Fodde et al., 2001; Kaplan et al., 2001). Kaplan et al. (2001) also reported lagging chromosomes, presumably precursors of CIN, whereas Fodde et al. (2001) found elevated numbers of tetraploid cells. Both groups found APC localized at kinetochores, suggesting a possible role in either kinetochore-MT attachment or in the SAC. This was followed up by many groups who used three approaches to reduce APC function (see below). Some used colon cancer cell lines mutant for APC. Surprisingly, in tumors, there is no selection for homozygosity of null mutations; instead, one allele encodes a truncated protein retaining the N-terminal half of APC. It remains unclear whether these are dominant negative; recent work suggests that they are selected because they reduce but do not eliminate Wnt signaling (Albuquerque et al., 2002; McCartney et al., 2006). Others expressed similarly truncated APC proteins in wild-type cells, reasoning that they are dominant negative. However, caution must be used in interpreting these experiments. Because APC has many partners in both Wnt signaling and cytoskeletal regulation, high level overexpression of truncated APC may affect processes in which APC is not essential by sequestering binding partners that are essential for the process. In this paper, we focus on studies that used RNAi to reduce APC in otherwise wild-type cells, as this approach allows one to define the normal role of APC. We also consider some studies that express truncated APC, as dominant effects of these may well be relevant to tumorigenesis. If APC is guilty of CIN, by what mechanisms does it act? Surprisingly, even simple loss-of-function experiments revealed diverse phenotypes and an equally diverse set of proposed mechanisms by which APC prevents CIN. In the following sections, we consider these models in turn, evaluating the evidence for and against each model by comparing and contrasting different studies.

The spindle assembly checkpoint: APC jumps into the SAC with BUB

The importance of correct chromosome segregation drove the evolution of a self-policing SAC that assures proper segregation of the duplicated genome (Musacchio and Salmon, 2007). The kinetochore protein complex mediates MT attachment to each chromatid. It is critical to ensure that the two sisters each attach to different spindle poles. Once this occurs, sister chromatids are pulled in opposite directions, generating tension between kinetochores. The SAC monitors kinetochore MT occupancy and tension across kinetochore pairs and is inactivated only when all chromosomes are correctly attached to both spindle poles, allowing anaphase onset and chromosome segregation. The SAC is regulated by MAD (mitotic arrest defective) and BUB (budding uninhibited by benzimidazoles) proteins, which localize to kinetochores. Defects in the SAC can lead to premature anaphase onset before both kinetochores of all chromosomes are properly attached and, thus, lead to defects in chromosome segregation (i.e., CIN).

Figure 2. Models suggesting APC modulates the SAC or its response to attachment defects. (A) Proposed pathway in wild-type cells. APC promotes stable MT-kinetochore attachment through an unknown mechanism. The SAC monitors MT-kinetochore attachment and only allows mitotic exit once MT occupancy and tension are satisfactory. (a) In Sorger's model (Draviam et al., 2006), SAC function does not require APC. (b) In Näthke's model (Dikovskaya et al., 2007), APC plays a direct role in SAC function. (B) Proposed model accounting for chromosome segregation defects in the absence of APC. Disruption of APC seems to lead to defects in MT-kinetochore attachment, but models differ in what happens downstream. (a) In Sorger's model (Draviam et al., 2006), a functional SAC prolongs metaphase to attempt to correct attachment defects, but some defects remain undetected/uncorrected. This leads to mitotic exit of cells with lagging chromosomes, leading to aneuploidy. (b) In Näthke's model (Dikovskaya et al., 2007), defects in APC lead to compromised SAC function. This leads to mitotic exit without chromosome segregation, generating tetraploid cells.

One model for APC's role in CIN suggests that APC plays a key role in the SAC. To test this, one must assess whether a functional SAC is present in APC mutant cells. One way to do so is to disrupt kinetochore attachment or tension using MT poisons (nocodazole or taxol). This should result in SAC activation and arrest cells in mitosis, so an increased mitotic index suggests a functional SAC. In many wild-type cell types, nocodazole is quite effective at blocking mitotic exit, with 60-80% of the cells blocked in mitosis by 100 nM nocodazole (e.g., Draviam et al., 2006). Näthke's laboratory suggests that APC is required for a functional SAC in U2OS cells (Dikovskaya et al., 2007). They report that APC-siRNA "substantially compromises the mitotic checkpoint" after nocodazole treatment (Dikovskaya et al., 2007) because APC-siRNA reduces mitotic arrest in response to nocodazole relative to control cells. However, it is important to note that the U2OS cells used had a modest response even to high levels of nocodazole relative to other cells (mitotic index decreased from 6% in control cells to 4% after APC-siRNA at 100 nM nocodazole, a standard dose, or from 22 to 12% at 5 μM nocodazole; some sublines of U2OS cells respond more robustly to 100 nM nocodazole, with a >70% mitotic index; Sihn et al., 2003). Another assay of SAC function is the proper localization of SAC proteins at kinetochores. APC depletion in U2OS cells reduced kinetochore Bub1 and BubR1 levels during prometaphase to ~60% of normal (Dikovskaya et al., 2007). Together, these data suggest that in U2OS cells, APC loss compromises the SAC. In Näthke's model (Figs. 2 B and 3, B and C), this SAC defect leads to premature anaphase onset, which, in turn, triggers mitotic exit without cytokinesis, generating tetraploid cells. Finally, they suggest that APC loss inhibits apoptosis, which would normally be triggered by this sort of abnormal event. These interesting SAC defects may be cell type specific, however, rather than a general response to APC-siRNA. Sorger's laboratory found a functional SAC in APC-depleted HeLa cells (Draviam et al., 2006); nocodazole triggered >90% mitotic arrest in both wild-type and knockdown cells. Kaplan's laboratory found "a modest decrease in mitotic index" (60 to 45%) in APC-depleted 293 cells relative to controls and concluded that "the spindle checkpoint is functional" (Green et al., 2005). They also found a functional SAC in APC Min mutant ES cells or blastocysts (Kaplan et al., 2001). Furthermore, Mad2 and BubR1 are normally recruited to kinetochores in APC-depleted HeLa cells (Draviam et al., 2006) or Xenopus laevis egg extracts (Zhang et al., 2007).

Perhaps the most accurate assay of SAC function is to directly monitor mitosis by live cell imaging. Sorger's laboratory found that APC depletion delayed anaphase onset threefold; this delay was abolished by codepleting Mad2 (Draviam et al., 2006). This suggests that APC depletion leaves a functional SAC; they attributed subsequent CIN to the failure to fully detect/respond to defects in MT-kinetochore attachment caused by APC depletion. Cells are delayed in mitosis but ultimately proceed through it in an error-prone way. In contrast, Näthke's group saw the opposite: APC depletion reduced time to anaphase onset, suggesting a defective SAC (Dikovskaya et al., 2007). Thus, different groups using different cell lines find quite variable effects of APC loss on SAC precision. We feel this raises serious questions about whether defects in the SAC are a primary effect of APC loss in all cell types. Furthermore, all agree that there is not wholesale abrogation of this checkpoint in the absence of APC, as is seen in the absence of Mad2 (Dobles et al., 2000), suggesting that APC is not essential for the SAC.

APC may also play additional roles at kinetochores, helping to explain why APC and EB1 coimmunoprecipitate with Bub proteins and why Bub1/BubR1 phosphorylates APC in vitro (Kaplan et al., 2001; Zhang et al., 2007). In particular, given its role as a +tip protein, APC could help anchor MTs at kinetochores during mitosis. Virtually all researchers ascribe some role for defects in kinetochore-MT attachment in explaining APC's phenotype. Some, like Sorger (Draviam et al., 2006) and Mao (Zhang et al., 2007), suggest that this is the primary defect. Mao's laboratory suggests that APC is a BubR1 target in a SAC-independent function (Zhang et al., 2007). They found that depleting either APC or EB1 from Xenopus egg extracts disrupts metaphase chromosome alignment, and kinase-dead BubR1 inhibits APC recruitment to kinetochores. Thus, they suggest that APC and EB1 act together or in parallel to promote stable kinetochore-MT interactions. This is an interesting model, but it will be important to extend these studies to intact cells.

One key to high-fidelity chromosome segregation is ensuring that the right number of MTs attach to each kinetochore (Fig. 3, A and B). If APC regulates this, one might expect to see decreased kinetochore-MT density in its absence. However, kinetochore-MT bundles were similar in fluorescent intensity in APC-depleted HeLa cells chilled to remove non-kinetochore-MTs, whereas CLIP170-depleted cells had a 70% reduction (Draviam et al., 2006). Kaplan's laboratory found slightly reduced kinetochore-MTs in APC− versus APC+ colorectal tumor cell lines, but this difference was lost by anaphase (Green and Kaplan, 2003). Thus, APC loss does not dramatically disrupt kinetochore-MT attachment.

Another way of assessing whether all is right at the kinetochore-MT interface is to examine distance between the kinetochores on sister chromatids, a measure of tension. Once both kinetochores are attached, opposing pulling forces pull them apart, with cohesion between sisters preventing segregation (Pinsky and Biggins, 2005). If APC regulates either MT-kinetochore attachment or kinetochore-MT dynamics, interkinetochore distance might be altered in APC mutant cells (Fig. 3 B). Strikingly, cells expressing APC truncations or that are APC depleted have reduced interkinetochore distance (Tighe et al., 2004; Green et al., 2005; Draviam et al., 2006; Dikovskaya et al., 2007). This is one of the few phenotypes consistent among all studies. Interestingly, EB1 siRNA has similar effects (Green et al., 2005; Draviam et al., 2006). This suggests that reduced interkinetochore distance is a fundamental effect of APC loss; thus, experiments exploring mechanisms by which it occurs are needed to explain APC's role in CIN.

Sorger, Mao, and Kaplan's laboratories further found that APC depletion disrupts metaphase chromosome alignment and chromosome segregation (Figs. 2 A and 3 B). Sorger's group found that chromosome congression occurred in APC- or EB1-depleted cells, suggesting bioriented kinetochore attachment, but metaphase plates were less compact, and most kinetochore pairs were misoriented relative to the spindle axis (Draviam et al., 2006). Kaplan's laboratory found that APC mutant tumor cells also have less compact metaphase plates; the expression of truncated APC1-1450 or APC or EB1 depletion also led to failure of some chromosomes to reach the metaphase plate (Green and Kaplan, 2003; Green et al., 2005). In Xenopus extracts, Mao's laboratory found even more dramatic defects in metaphase chromosome alignment after APC or EB1 depletion, leading to chronic SAC activation and mitotic arrest (Zhang et al., 2007). Once anaphase began, 30-65% of APC- or EB1-depleted cells (Green et al., 2005; Draviam et al., 2006) and >80% of cells overexpressing truncated APC1-1450 (Green and Kaplan, 2003) contained lagging chromatin strands. Thus, the studies from Mao and Sorger and the earlier paper from Kaplan are all consistent with defects in kinetochore-MT attachment leading to highly penetrant defects in the segregation of individual chromosomes, leading to aneuploidy and CIN.

APC and cytokinesis: does the road to aneuploidy pass through tetraploidy?

Another possible place for APC action is at plus ends of astral MTs. Kaplan's laboratory (Green et al., 2005) reported that one of the most striking effects of expressing truncated APC1-1450 was the reduction of astral MTs with consequent spindle mispositioning (Fig. 4). APC or EB1 depletion led to similar but less penetrant astral MT reduction and spindle mispositioning (Green et al., 2005). Sorger's laboratory also observed spindle mispositioning after APC depletion, including spindle rotation during metaphase (Draviam et al., 2006). Caldwell et al. (2007) explored this further, finding that overexpression of the putative dominant-negative APC1-1450 leads to cytokinesis failure. They suggest that reduced MT contact with the cortex is the cause; consistent with this, they found a strong correlation between spindle rotation and failure to initiate a cytokinetic furrow (Fig. 4). The resulting tetraploidy could lead to aneuploidy after further divisions. One concern is that this occurred only after the overexpression of truncated APC1-1450 and was not reported in their earlier studies of APC knockdown (Green et al., 2005). Thus, it is possible that these effects are not strictly caused by APC loss of function. It also remains unclear how to reconcile the lagging chromosomes observed by Sorger or Kaplan after APC knockdown (Green et al., 2005; Draviam et al., 2006) and the outright cytokinesis failure leading to tetraploidy reported by Näthke (Dikovskaya et al., 2007) or the later Kaplan laboratory study (Caldwell et al., 2007). This must be resolved if we want to have a unified hypothesis for the role of APC in normal chromosome segregation and in CIN.

Does activated Wnt signaling cause CIN?

An additional critical issue with these models for APC function in CIN is that all assume that APC acts in CIN as a cytoskeletal regulator. However, APC's best-understood role is as a key negative regulator of Wnt signaling (Fig. 5; for review see Clevers, 2006). In APC's absence, Wnt signaling is inappropriately activated via stabilization of the key Wnt effector β-catenin and activation of downstream target genes by β-catenin-T cell factor (TCF) complexes. In the colon, where CIN caused by APC loss would have its greatest impact, Wnt signals regulate proliferation, maintaining stem cells (Reya and Clevers, 2005). Wnts stimulate proliferation by up-regulating the key transcription factor c-myc, which, in turn, down-regulates the cell cycle inhibitor p21. Thus, cells remain in cycle (van de Wetering et al., 2002). Strikingly, deletion of myc abrogates tumorigenic effects of the loss of APC. APC loss activates Wnt signaling, locking cells in the stem cell fate and creating a colon polyp. Thus, it is possible, although not often appreciated, that APC loss contributes to CIN via activation of transcriptional targets of the Wnt pathway in its absence rather than through effects on cytoskeletal regulation. Cells lacking functional APC have levels of Wnt activation much higher than cells seeing endogenous Wnts; thus, their expression of cell cycle regulators may not match those of any normal cell, potentially altering cell cycle transitions.
However, Kaplan ' s laboratory did fi nd elevated aneuploidy and tetraploidy in intestines of APC min /+ mice, even in crypts that are presumably heterozygous mutant ( Caldwell et al., 2007 ). Thus, they concluded that defects in astral MTs and altered spindle positioning caused by APC mutations result in cytokinesis failure and that this is critical in CIN. These data bring into focus one of the most substantial problems in comparing data from these different studies. It is difficult to reconcile the failure to properly segregate single chromo- knockdown may vary between studies, and low levels of APC may rescue some but not all functions. For example, in kinetochore assembly, one must reduce CENP-A > 10-fold to see an effect on CENP-I localization ( Liu et al., 2006 ). Second, it is critical to remember that APC has a closely related paralogue, APC2. APC2 can regulate Wnt signaling, but its function in cytoskeletal regulation is unexplored. It is possible that, as in Drosophila melanogaster ( Ahmed et al., 2002 ;Akong et al., 2002 ), the two APC family members play partially redundant roles in some tissues. This might explain different results of APC-siRNA if cells have different levels of APC2 expression. Certain other data put limits on APC ' s roles in CIN. First, many of APC ' s cytoskeletal interactions are dispensable for its tumor suppressor function -APC1638T mutant mice lacking the C-terminal half of APC, including the MT-and EB1-binding sites, are viable and not tumor prone ( Smits et al., 1999 ). Second, we think it is unlikely that truncated APC proteins seen in tumors have strong dominant effects on chromosome segregation. Null mutations in key kinetochore or SAC proteins are lethal to cells or organisms. Even less severe checkpoint defects seen in people carrying biallelic hypomorphic mutants in BubR1 result in massive aneuploidy, developmental defects, and cancer in many tissues ( Hanks et al., 2004 ). None of this is characteristic of patients heterozygous for truncating APC mutations, whose tumors are largely restricted to the gastrointestinal tract. However, even weak dominant-negative effects of truncated APC causing modest reductions in segregation fi delity might promote tumor progression; slight reductions in SAC function can do so, as is demonstrated by the haploinsuffi cient cancer-prone phenotype of BubR1 or CENP-E heterozygous mice ( Michel et al., 2001 ;Weaver et al., 2007 ). It is also important to emphasize what does not go wrong when APC is depleted. Kinetochores still attach to MTs and align at least roughly at the metaphase plate, spindles are largely normal in structure (although with defects in position and astral MTs), and most, although not all, agree that there are no strong SAC defects. The downstream effects on chromosome segregation fall into two disparate categories: several laboratories report problems with alignment of individual chromosomes at the metaphase plate and subsequent loss of individual chromosomes, whereas other laboratories report cytokinesis failure. Given the diversity of phenotypes and models, what can we conclude? First, we must seriously consider the possibility that activated Wnt signaling plays a role in CIN and that it may be the primary cause. Second, in evaluating possible cytoskeletal roles, we think it is important to focus on phenotypes seen in most cell types. All observe reduced interkinetochore distance. How might this happen? In principle, it could involve either defects in MT -kinetochore attachment or MT dynamics. 
Surprisingly, the effect seen is not predicted by the role of APC in interphase MT dynamics, where it promotes MT stability and growth ( Kita et al., 2006 ). If loss of APC reduced MT stability without decreasing kinetochore attachment, in the simplest model, this would increase interkinetochore tension. However, the consequences of changes in MT -kinetochore attachment are diffi cult to predict given current data, depending on whether they involve global reduction in MT -kinetochore attachment or Taketo ' s laboratory tested this hypothesis in cultured cells and the colon ( Aoki et al., 2007 ). To assay CIN, they assessed the frequency of anaphase chromosome bridges (the anaphase bridge index [ABI]). In the normal colon, the ABI is ‫ف‬ 1%. In contrast, in polyps homozygous for truncated APC, this increased threefold, which is consistent with APC loss promoting CIN. They next examined whether activating the Wnt pathway downstream of APC using activated ␤ -catenin affected CIN. Strikingly, this also elevated ABI fi ve-to ninefold. Similar results were seen in ES cells ( Aoki et al., 2007 ). To confi rm that anaphase bridges accurately refl ect CIN, they scored ES cell karyotypes after ‫ف‬ 10 doublings. In wild-type ES cells, 3 -5% were abnormal, whereas in APC mutant cells or those expressing activated ␤ -catenin, 15 -22% were abnormal. Finally, blocking transcriptional effects of Wnt signaling with a dominant-negative TCF transcription factor reduced the ABI to levels near those in wild-type ES cells while expressing a ␤ -catenin -independent version of TCF-induced CIN (as assessed by ABI). Thus, activating Wnt signaling downstream of APC led to CIN, and CIN caused by APC loss requires activation of Wnt target genes by TCF ( Fig. 5 ). Interestingly, CIN frequency was much higher in culture than in vivo, even for wild-type cells; thus, cell culture may represent a sensitized environment. Taketo ' s laboratory also examined possible CIN mechanisms ( Aoki et al., 2007 ). APC mutant or activated ␤ -catenin -expressing ES cells both had a robust SAC, but prolonged MT depolymerization led to highly elevated numbers of 8N cells without apoptosis, suggesting mutant cells can escape the G2/M block without mitosis. This phenotype is strikingly similar to that reported by N ä thke ( Dikovskaya et al., 2007 ) and Kaplan ( Caldwell et al., 2007 ), but in this case the proposed cause is activated Wnt signaling. Behrens ' laboratory provided further support for the hypothesis that Wnt signaling plays a role in CIN ( Hadjihannas et al., 2006 ). In primary colon tumors, they found a very strong cor relation between the level of Wnt signaling (assessed via a known Wnt target gene conductin) and the probability of CIN; 60% of CIN + and only 7% of CIN Ϫ tumors showed a greater than fi vefold activation of this Wnt target. Furthermore, they could mimic CIN by APC siRNA in cultured cells and could block this by simultaneously depleting ␤ -catenin. Both Behrens ' ( Hadjihannas et al., 2006 ) and N ä thke ' s ( Dikovskaya et al., 2007 ) laboratories also examined whether removing APC increased CIN in colon cancer cells expressing activated ␤ -catenin: if APC loss causes CIN in a cell in which Wnt signaling is already on, it would suggest that APC acts directly on chromosome segregation. In both cases, removal of APC increased CIN. 
However, it also increased activation of a Wnt reporter and an endogenous Wnt target, and, in the Behrens ' laboratory study ( Hadjihannas et al., 2006 ), codepletion of ␤ -catenin abolished the effect of APC depletion. Together, all of these data suggest that APC loss may trigger CIN in part via activated Wnt signaling and downstream transcriptional effects; furthermore, the level of Wnt signaling elevation may be critical to whether CIN occurs. How do we reconcile these disparate results? Two key caveats may help explain discrepancies in the results of different APC-siRNA experiments. First, the effi ciency of unbalanced attachment to the two kinetochores. One interesting avenue that should be carefully addressed is APC ' s possible role in regulating mitotic centromere-associated kinesin (MCAK), an MT-depolymerizing kinesin. MCAK is thought to monitor incorrect MT -kinetochore attachments, and defects in MCAK are known to lead to merotelic attachment and lagging chromosomes ( Kline-Smith et al., 2004 ). These data, in combination with a known APC -MCAK interaction ( Banks and Heald, 2004 ), suggest that APC may positively regulate MCAK, possibly by preventing aurora B ' s ability to phosphorylate and inactivate it. We suspect that APC plays a modulatory but not essential role in several different processes. For example, imagine that in its absence, there are small changes in MT dynamics, in the strength of the MT -kinetochore attachment, in accuracy of the SAC, and in astral MTs, and, on top of that, the loss of APC triggers changes in the cell cycle and apoptosis via its role in Wnt signaling. Each defect might have little or no effect in isolation; however, in combination, they may destabilize the accuracy of chromosome segregation and, in extreme cases, lead to cytokinesis failure. However, we must be cautious in extrapolating work from cultured cells into the animal, as cultured cells may not be as " happy " as those in vivo and thus may be more susceptible to these effects. In our view, the role of APC in CIN and the mechanisms by which it acts remain unclear. It is critical to continue analyzing the effects of APC loss both in cultured cells and in animal models, testing the reigning hypotheses and critically examining mechanisms.
2016-08-09T08:50:54.084Z
2008-06-02T00:00:00.000
{ "year": 2008, "sha1": "f21b94af9e266ca2dac8d5654debb51b3a25bc44", "oa_license": "CCBYNCSA", "oa_url": "http://jcb.rupress.org/content/181/5/719.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a967b4030bbd4f165007c33c9ddd5c91769e14ac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
51939790
pes2o/s2orc
v3-fos-license
Presenilin1 regulates Th1 and Th17 effector responses but is not required for experimental autoimmune encephalomyelitis Multiple Sclerosis (MS) is an inflammatory demyelinating disease of the central nervous system (CNS) where pathology is thought to be regulated by autoreactive T cells of the Th1 and Th17 phenotype. In this study we sought to understand the functions of Presenilin 1 (PSEN1) in regulating T cell effector responses in the experimental autoimmune encephalomyelitis (EAE) murine model of MS. PSEN1 is the catalytic subunit of γ-secretase a multimolecular protease that mediates intramembranous proteolysis. γ-secretase is known to regulate several pathways of immune importance. Here we examine the effects of disrupting PSEN1 functions on EAE and T effector differentiation using small molecule inhibitors of γ-secretase (GSI) and T cell-specific conditional knockout mice (PSEN1 cKO). Surprisingly, blocking PSEN1 function by GSI treatment or PSEN1 cKO had little effect on the development or course of MOG35-55-induced EAE. In vivo GSI administration reduced the number of myelin antigen-specific T cells and suppressed Th1 and Th17 differentiation following immunization. In vitro, GSI treatment inhibited Th1 differentiation in neutral but not IL-12 polarizing conditions. Th17 differentiation was also suppressed by the presence of GSI in all conditions and GSI-treated Th17 T cells failed to induce EAE following adoptive transfer. PSEN cKO T cells showed reduced Th1 and Th17 differentiation. We conclude that γ-secretase and PSEN1-dependent signals are involved in T effector responses in vivo and potently regulate T effector differentiation in vitro, however, they are dispensable for EAE. INTRODUCTION Background 3 a. Include sufficient scientific background (including relevant references to previous work) to understand the motivation and context for the study, and explain the experimental approach and rationale. b. Explain how and why the animal species and model being used can address the scientific objectives and, where appropriate, the study's relevance to human biology. Objectives 4 Clearly describe the primary and any secondary objectives of the study, or specific hypotheses being tested. METHODS Ethical statement 5 Indicate the nature of the ethical review permissions, relevant licences (e.g. Animal [Scientific Procedures] Act 1986), and national or institutional guidelines for the care and use of animals, that cover the research. Study design 6 For each experiment, give brief details of the study design including: a. The number of experimental and control groups. b. Any steps taken to minimise the effects of subjective bias when allocating animals to treatment (e.g. randomisation procedure) and when assessing results (e.g. if done, describe who was blinded and when). c. The experimental unit (e.g. a single animal, group or cage of animals). A time-line diagram or flow chart can be useful to illustrate how complex study designs were carried out. 7 For each experiment and each experimental group, including controls, provide precise details of all procedures carried out. For example: a. How (e.g. drug formulation and dose, site and route of administration, anaesthesia and analgesia used [including monitoring], surgical procedure, method of euthanasia). Provide details of any specialist equipment used, including supplier(s). b. When (e.g. time of day). c. Where (e.g. home cage, laboratory, water maze). d. Why (e.g. 
rationale for choice of specific anaesthetic, route of administration, drug dose used). Experimental animals 8 a. Provide details of the animals used, including species, strain, sex, developmental stage (e.g. mean or median age plus age range) and weight (e.g. mean or median weight plus weight range). b. Provide further relevant information such as the source of animals, international strain nomenclature, genetic modification status (e.g. knock-out or transgenic), genotype, health/immune status, drug or test naïve, previous procedures, etc. temperature, quality of water etc for fish, type of food, access to food and water, environmental enrichment). c. Welfare-related assessments and interventions that were carried out prior to, during, or after the experiment. Sample size 10 a. Specify the total number of animals used in each experiment, and the number of animals in each experimental group. b. Explain how the number of animals was arrived at. Provide details of any sample size calculation used. c. Indicate the number of independent replications of each experiment, if relevant. Allocating animals to experimental groups 11 a. Give full details of how animals were allocated to experimental groups, including randomisation or matching if done. b. Describe the order in which the animals in the different experimental groups were treated and assessed. 12 Clearly define the primary and secondary experimental outcomes assessed (e.g. cell death, molecular markers, behavioural changes). Statistical methods 13 a. Provide details of the statistical methods used for each analysis. b. Specify the unit of analysis for each dataset (e.g. single animal, group of animals, single neuron). c. Describe any methods used to assess whether the data met the assumptions of the statistical approach. Baseline data 14 For each experimental group, report relevant characteristics and health status of animals (e.g. weight, microbiological status, and drug or test naïve) prior to treatment or testing. (This information can often be tabulated). Numbers analysed 15 a. Report the number of animals in each group included in each analysis. Report absolute numbers (e.g. 10/20, not 50% 2 ). b. If any animals or data were not included in the analysis, explain why. Outcomes and estimation 16 Report the results for each analysis carried out, with a measure of precision (e.g. standard error or confidence interval). Adverse events 17 a. Give details of all important adverse events in each experimental group. b. Describe any modifications to the experimental protocols made to reduce adverse events. c. Describe any implications of your experimental methods or findings for the replacement, refinement or reduction (the 3Rs) of the use of animals in research. Generalisability/ translation 19 Comment on whether, and how, the findings of this study are likely to translate to other species or systems, including any relevance to human biology. Funding 20 List all funding sources (including grant number) and the role of the funder(s) in the study.
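For item 10b of the checklist (how the number of animals was arrived at), a minimal power calculation of the kind that could be reported is sketched below. The effect size, standard deviation, significance level, and target power are invented planning values, not numbers taken from this study, and the statsmodels package is only one possible tool for such a calculation.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: detect a 1.0-point difference in mean EAE
# clinical score between control and treated groups, assuming SD = 0.8.
effect_size = 1.0 / 0.8            # Cohen's d for the assumed difference
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"approximately {n_per_group:.1f} mice per group")
```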
2018-08-14T20:08:22.934Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "ed914275ee7b82c39fdfc91681c7dd6839cbc074", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0200752&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e31513f792a6c85a45790b0bf66605a41a556001", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
119072578
pes2o/s2orc
v3-fos-license
Simulations of the polarized radio sky and predictions on the confusion limit in polarization for future radio surveys Numerical simulations offer the unique possibility to forecast the results of surveys and targeted observations that will be performed with next generation instruments like the Square Kilometre Array. In this paper, we investigate for the first time how future radio surveys in polarization will be affected by confusion noise. To do this, we produce 1.4 GHz simulated full-Stokes images of the extra-galactic sky by modelling various discrete radio sources populations. The results of our modelling are compared to data in the literature to check the reliability of our procedure. We also estimate the number of polarized sources detectable by future surveys. Finally, from the simulated images we evaluate the confusion limits in I, Q, and U Stokes parameters, giving analytical formulas of their behaviour as a function of the angular resolution. INTRODUCTION The capabilities of forthcoming radio telescopes, such as the Square Kilometre Array 1 (SKA) and its precursors, will allow us to study the sky with an unprecedented detail and they will dramatically improve our knowledge of the radio Universe. One of the main advantages of next generation radiocontinuum surveys will be the possibility to study the faint signals coming from the most distant regions of the Universe over large field of views both in total intensity and in polarization. This is extremely important for a number of scientific applications, from the study of the physical and evolutionary properties of different classes of radio sources, to the investigation of the cosmic magnetism. Concerning the first topic, important steps forward are expected from the radio continuum surveys that will be carried out with the SKA precursors: the Evolutionary Map of the Universe (EMU, Norris et al. 2011) planned with the Australian Square Kilometre Array Pathfinder (ASKAP), the MeerKAT International GHz Tiered Extragalactic Exploration (MIGHTEE) survey (Jarvis et al. 2016), the Westerbork Synthesis Radio Telescope (WSRT) Apertif (Norris E-mail: francesca.loi@inaf.it 1 https://www.skatelescope.org/ et al. 2013), and the Very Large Array (VLA) Sky Survey (VLASS) (Lacy et al. 2016). For a detailed discussion of the scientific expectations of the SKA for continuum science we refer to Prandoni & Seymour (2015). Regarding cosmic magnetism, the origin and the evolution of large scale magnetic fields have not yet been established, despite many observational and numerical simulation-based efforts. To determine the characteristics of large scale magnetic fields in galaxy clusters, one can analyse the Faraday rotation which affects every linearly polarized signal (the one from a background radio source) passing through a magnetised plasma (the intra-cluster medium) (see the reviews on the determination of cluster magnetic fields of Carilli & Taylor 2002;. The Faraday rotation of extra-galactic radio sources can also be used to evaluate the Galactic magnetic field. Taylor et al. (2009) have used the NRAO VLA Sky Survey (NVSS, Condon et al. 1998) at 1.4 GHz to produce a rotation measure (RM) Grid which has an average of 1 polarized source per square degree. These data have been used by Oppermann et al. (2015) to produce a reconstruction of the Galactic foreground Faraday rotation. Since the sensitivity of future radio surveys will significantly improve, it will be possible to realise a denser RM Grid. 
In this framework an important step forward will be represented by the polarization Sky Survey of the Universe's Magnetism (POSSUM, Gaensler et al. 2010), that will be carried out with ASKAP. POSSUM will make use of the same full Stokes observations dedicated to EMU, and therefore will share the same observational parameters (rms noise ∼10 µJy beam −1 , 10 of resolution). While EMU will produce total intensity images, POSSUM will use the data to extract polarization and RM information producing a RM grid of approximately 25 polarized sources per square degree. In its first phase of implementation, the mid frequency element of SKA (SKA1-MID) is expected to reach an average of 230−450 RMs per square degree at the sensitivity of 4 µJy beam −1 with a resolution of 2 (Johnston-Hollitt et al. 2015). Radio observations performed with next generation radio telescopes would be sensitive enough to be potentially limited by confusion rather than thermal noise. Confusion is an additional noise term due to the presence of background unresolved sources whose signal enters into the synthesised beam of the telescope. It is therefore clear that the larger the beam, the higher the confusion noise term. In total intensity the behaviour of the confusion noise as a function of angular resolution have been extensively studied in the literature (see Condon 1974Condon , 2002Condon et al. 2012). On the other hand, confusion noise has never been investigated in polarization, as the polarized signal from background radio sources is typically a factor 10-100 lower than the total intensity signal, and it has never been an issue in existing polarization surveys. However, this may be not true for the upcoming generation of extremely deep radio surveys, that may be confusion limited also in polarization. This work aims at estimating the confusion noise in polarization at 1.4 GHz. Generally, the existing studies in the literature make use of analytical formulas to estimate confusion at a given angular resolution. Such formulas are based on extrapolations of the observed source counts, assumed to follow a power law with slope and normalisation depending on observing frequency and depth. In this work, we use a different approach, that relies on end to end simulations. We simulate I, Q, and U Stokes images of a synthetic population of discrete radio sources distributed over cosmological distances and we analyse them to evaluate the confusion limit at different angular resolutions both in total intensity and in polarization. The paper is organised as follows: in Section 2, we describe the models and the procedure adopted to produce spectropolarimetric images of a population of discrete radio source; in Section 3, we show the comparison with data at 1.4 GHz, giving our expectation on the number of polarized source that future surveys could detect; in Section 4, we present the confusion limit in I, Q, and U Stokes parameters and the analytical formulas that describe its behaviour as a function of the angular resolution; in Section 5, we discuss about the applicability of the obtained results. Finally, the conclusions are drawn in Section 6. Throughout the paper, we adopt a ΛCDM cosmology with H 0 = 71 km s −1 Mpc −1 , Ω m = 0.27, and Ω Λ = 0.73. MODELLING THE RADIO SKY For this project, we make use of the FARADAY software package (Murgia et al. 2004) which has been further developed to reproduce the polarized emission of a population of discrete radio sources. 
As a first step, we produce a simulated catalogue of radio sources, generated by implementing recent determinations of the radio luminosity function (RLF) for the two main classes of objects dominating the faint radio sky: star forming galaxies (SFG) and Active Galactic Nuclei (AGN). The resulting catalogue contains all the discrete radio sources inside the "conical" portion of Universe whose angular aperture is set by the chosen field-of-view and whose depth extends from redshift z=0 up to a given z=z max . It is worth mentioning that simulated radio source catalogues already exist in the literature. An example is the one produced by Wilman et al. (2008) which with a semiempirical approach, starting from radio luminosity functions, simulates the radio continuum (total intensity) and HI emission of several radio source populations. Assuming a luminosity dependence for the fractional polarization, O'Sullivan et al. (2008) realised a simulated polarized image based on the radio source catalogue of Wilman et al. (2008). Very recently a new simulated catalogue was produced (T-RECS; Bonaldi et al. 2019) based on cosmological dark matter simulations to reproduce the clustering of sources and it models the radio sky both in total intensity and polarization with updated information on radio sources. Our simulation, like the above simulations, aims at giving useful information for the advent of the SKA. Similarly, it is based on cosmological radio luminosity functions integrated over cosmological volumes but the models adopted to reproduce the characteristics of the radio sources and also the procedure are in general different. In addition, alternatively to the previous works, we use observed high-quality images of extended radio sources to reproduce the morphology and the spectro-polarimetric properties of the simulated radio sources. This is especially important as these simulations will be used to study magnetic fields in galaxy clusters (Loi et al. in prep.). For each simulated radio source, our catalogue lists the following parameters: • type, in principle we can classify our sources in several sub-classes, radio-loud or radio-quiet AGN, SFG, quasar etc. Following Novak et al. (2017) and Smolčić et al. (2017) we consider two main families depending on the mechanism that triggers the radio emission: SFG and AGN; • redshift, z; • size, we used the relations adopted by Wilman et al. (2008) for radio-loud AGN and SFGs. The size model are redshift dependent and in particular the SFG size depends also on luminosity; • luminosity at 1.4 GHz, we extract this information from the RLFs of Novak et al. (2017) and Smolčić et al. (2017) for the SFGs and AGN respectively, based on the results of the VLA−COSMOS 3 GHz Large Project ), extrapolated to 1.4 GHz assuming the spectral index derived in combination with the the VLA−COSMOS 1.4 GHz Large and Deep Projects (Schinnerer et al. 2010(Schinnerer et al. , 2007(Schinnerer et al. , 2004; • coordinates, (x,y); • morphology and spectro-polarimetry properties, we select a model of radio source from a dictionary depending on its luminosity and type. Each model of this dictionary consists of four 1.4 GHz images: -the surface brightness I ν in total intensity; Figure 1. Models of radio galaxies where the colour represents the total intensity surface brightness distribution (normalised to one) and the vectors the intrinsic polarization strength and orientation. 
(Figure 1, continued) On the top, from left to right, we can see models of Fanaroff-Riley (FR) type I and type II sources respectively (Fanaroff & Riley 1974), while in the bottom we show different models of SFGs.

- the spectral index distribution α, determined by assuming that the flux density S_ν at a frequency ν scales as S_ν ∝ ν^(−α);
- the fractional polarization, defined as the ratio between the polarized intensity and the total intensity, FPOL = P/I;
- the intrinsic polarization angle, which is defined with respect to the Q and U Stokes parameters as Ψ = (1/2) arctan(U/Q).

The images of the dictionary are real high-quality, high-resolution images taken at high frequency. In particular, we used VLA images at C and X bands at arcsecond resolution, so that the polarization properties can be considered very close to the intrinsic values. Some examples of models are shown in Fig. 1, where the colour represents the total intensity surface brightness (normalised to one) and the vectors the intrinsic polarization strength and orientation. For the AGN class we consider sources with two different morphologies: Fanaroff-Riley (FR) type I and type II (Fanaroff & Riley 1974). For the SFG class we use images of spiral galaxies. From an operative point of view, the generation of the catalogue is based on a Monte Carlo extraction from the corresponding cumulative distribution functions of the models. A flow chart of the adopted procedure is shown in Fig. 2. As a first step, we set the maximum redshift up to which we populate the simulated portion of the Universe. We split the slice into sub-volumes of ∆z=0.01 in width. We perform the integral of the AGN and SFG RLFs throughout the solid angle of the simulated observation, sub-volume by sub-volume: the result is the total number of "cosmological" AGN and SFGs respectively. As a maximum redshift we set z_max=6, since the adopted RLFs sample AGN and SFGs up to a redshift of z=5.7 and z=5.5 respectively. The radio source redshift is assigned through a Monte Carlo extraction from the cumulative distribution computed for each specific type from the corresponding RLF evolution. We populate each sub-volume by randomly extracting the coordinates. The luminosity is assigned from the corresponding cumulative distribution function based on the evolved RLF at the redshift of the radio source. We compute the source size taking into account the redshift and, in the case of SFGs, also the luminosity. A model for the radio source is extracted from the dictionary according to the luminosity and type. The surface brightness distribution at a given frequency is re-scaled such that the luminosity, obtained by integrating the surface brightness over the radio source area Σ, matches the one assigned to the source. Once we obtain our simulated catalogue of radio sources, we set the frequency bandwidth and channel resolution, and we use FARADAY to generate a spectro-polarimetric cube for each source and for each of the I, Q, and U Stokes parameters. In this process, the algorithm considers the correct spectral index for each pixel according to the catalogue. Indeed, the observed surface brightness at a given pixel of coordinates (x, y) depends on the redshift z, on the spectral index at the corresponding coordinates α(x, y), and on the pixel area A. By multiplying the surface brightness and the fractional polarization maps, we obtain the intrinsic polarized intensity of the selected radio galaxy.
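To make the Monte Carlo extraction step more concrete, the sketch below shows how redshifts and luminosities can be drawn from cumulative distributions built from a radio luminosity function, in the spirit of the procedure described above. It is only an illustrative outline, not the FARADAY implementation: the toy_rlf function is a made-up stand-in for the Novak et al. (2017) and Smolčić et al. (2017) RLFs, and the grids and field of view are example values.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)   # cosmology adopted in this paper
rng = np.random.default_rng(42)

def toy_rlf(logL, z):
    """Stand-in luminosity function Phi(logL, z) in Mpc^-3 dex^-1.
    Placeholder only: the actual catalogue uses observed RLFs instead."""
    logL_star = 22.5 + 1.5 * np.log10(1.0 + z)   # crude luminosity evolution
    x = 10.0 ** (logL - logL_star)
    return 1e-4 * x ** -0.6 * np.exp(-x)

def sample_from_cdf(grid, pdf, n):
    """Inverse-transform sampling from a tabulated (unnormalised) PDF."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(n), cdf, grid)

# Shells of width dz = 0.01 out to z_max = 6 for a 0.72 deg^2 field of view.
fov_frac = 0.72 / 41253.0                         # fraction of the full sky
z_edges = np.arange(0.0, 6.0 + 1e-9, 0.01)
dV = fov_frac * np.diff(cosmo.comoving_volume(z_edges).to_value(u.Mpc**3))
z_mid = 0.5 * (z_edges[:-1] + z_edges[1:])
logL_grid = np.linspace(20.0, 27.0, 500)          # log10 L_1.4GHz [W/Hz]

# Expected counts per shell: RLF integrated over luminosity, times shell volume.
n_per_shell = np.array(
    [np.trapz(toy_rlf(logL_grid, z), logL_grid) for z in z_mid]) * dV
n_tot = rng.poisson(n_per_shell.sum())

# Draw a redshift from the shell-count distribution for each source,
# then a luminosity from the RLF evaluated at that redshift.
z_src = sample_from_cdf(z_mid, n_per_shell, n_tot)
logL_src = np.array(
    [sample_from_cdf(logL_grid, toy_rlf(logL_grid, z), 1)[0] for z in z_src])
```

Sizes, coordinates, and dictionary models would then be assigned to each (z, L) pair in the same fashion.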
The radio sources which constitute our dictionary are not enough to represent the level of polarization statistically observed and reported in the literature. This is why we decided to re-scale the fractional polarization images in such a way that the AGN and the SFGs can assume values between 0−10% and 0−5% respectively, as observations of statistical samples suggest (Hales et al. 2014). The Q and U Stokes parameters are computed by combining Eq. 1 with the polarized intensity at a given frequency ν, p_ν = (Q_ν² + U_ν²)^(1/2), so that Q_ν = p_ν cos(2Ψ) and U_ν = p_ν sin(2Ψ). We neglect the effect of the Galactic Rotation Measure (RM) and we assume that no other magnetised plasma is present in the simulated portion of the Universe. Otherwise, the observed polarized intensity would not be equal to the intrinsic one and we should compute the U and Q Stokes parameters starting from the polarization angle Ψ(λ²) = Ψ_0 + φ(l)·λ², where φ(l) is the Faraday depth, defined as the integral, performed over the length l (in kpc) of the crossed magneto-ionic plasma, of the line-of-sight parallel component of the magnetic field B_∥ (in µG) times the thermal density n_e (in cm^−3): φ(l) = 812 ∫ n_e B_∥ dl rad m^−2.

COMPARISON WITH DATA: TOTAL INTENSITY AND POLARIZATION SOURCE COUNTS

To test the reliability of our simulations, we compare our results with total intensity and polarization source counts available from the literature. In Fig. 3, we plot the 1.4 GHz differential source counts of our simulated radio source population together with those estimated from surveys sampling a wide flux density range, from ∼60 µJy up to 1 Jy. The counts are Euclidean normalised and the data refer to large-scale (> few square degree) 1.4 GHz surveys (White et al. 1997; Prandoni et al. 2001; Bondi et al. 2003, 2008; Kellermann et al. 2008; Hales et al. 2014; Prandoni et al. 2018). The flux density is evaluated taking into account the k-correction, S_ν = L_ν (1+z)^(1−α) / (4π D_L²), where D_L is the luminosity distance. As shown in the plot, the simulated differential counts (green points) are in agreement with the data. This simulation can therefore be used to predict the radio sky at sub-µJy fluxes, which will be accessible with next generation radio telescopes like the SKA over large fields of view. In Fig. 4, we show the 1.4 GHz cumulative counts of polarized sources as a function of the polarized source flux density p_ν in mJy. The black points are 1.4 GHz data (Hales et al. 2014; Rudnick & Owen 2014) which cover the range between ∼16 µJy and ∼60 mJy, while the purple points are the cumulative counts obtained from our simulation. The error bars of the cumulative source counts σ_N are the Poissonian uncertainties. Even in this case, the agreement between data and simulation is remarkable. We observe that the cumulative source counts as a function of the polarized flux density can be well described by a power law, N(> p)/deg² = N_0 · p^γ, which turns into a linear function in log-log space, y = A·x + B, where y = log(N(> p)/deg²), x = log(p), B = log(N_0), and A = γ. With the least squares method, we fit the cumulative counts; the resulting relation is represented with a purple line in Fig. 4. In our fitting, we take into account the uncertainties on the measurements by weighting each point according to its Poissonian uncertainty propagated into log space. The errors associated with the parameters N_0 and γ are then σ_N0 = 10^B · ln 10 · σ_B and σ_γ = σ_A. Also in this case, our simulation can be used to investigate radio source populations with polarized flux density lower than the limit of current observations. In particular, Table 1 reports our expectations in terms of polarized source numbers and densities for several radio continuum polarization surveys.
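The log-log fit just described can be reproduced with a few lines of weighted least squares. The snippet below is a schematic example with made-up counts rather than the values measured from the simulated maps, and the back-transformation follows the σ_N0 and σ_γ expressions given above.

```python
import numpy as np

# Placeholder cumulative counts N(>p) per deg^2 at polarized flux densities p [mJy],
# with Poissonian uncertainties; the real values would come from the simulated maps.
p = np.array([0.02, 0.05, 0.1, 0.3, 1.0, 3.0, 10.0])
N = np.array([4.2e2, 1.5e2, 6.8e1, 1.9e1, 4.6e0, 1.3e0, 3.2e-1])
sigma_N = np.sqrt(N * 10.0) / 10.0          # toy Poisson errors for an assumed 10 deg^2 field

# Work in log-log space: y = A*x + B with y = log10 N, x = log10 p.
x = np.log10(p)
y = np.log10(N)
sigma_y = sigma_N / (N * np.log(10))        # standard propagation of log10

# Weighted linear least squares (weights 1/sigma_y^2) with the parameter covariance.
W = np.diag(1.0 / sigma_y**2)
X = np.vstack([x, np.ones_like(x)]).T
cov = np.linalg.inv(X.T @ W @ X)
A, B = cov @ X.T @ W @ y
sigma_A, sigma_B = np.sqrt(np.diag(cov))

# Back-transform to the power-law parameters N(>p) = N0 * p**gamma.
gamma, sigma_gamma = A, sigma_A
N0 = 10.0**B
sigma_N0 = N0 * np.log(10) * sigma_B
print(f"N0 = {N0:.2f} +/- {sigma_N0:.2f} deg^-2, gamma = {gamma:.2f} +/- {sigma_gamma:.2f}")
```

Evaluating the fitted relation at p = 3σ_p then gives the expected source densities of the kind listed in Table 1.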
From left to right, each column shows the survey name, the sensitivity level in polarization σ_p in µJy at 1.4 GHz, the expected number of sources per square degree with polarized intensity higher than 3σ_p, the field of view of the survey in square degrees, and the number of sources that each survey would detect. The number and the number density per square degree have been computed from Eq. 10.

THE CONFUSION LIMIT IN TOTAL INTENSITY, Q, AND U STOKES PARAMETERS

The possibility to simulate all the radio sources that are present in a given field-of-view lets us explore the effect of the confusion noise, which is due to the faint unresolved radio sources whose signal enters the beam of the telescope. While we can reduce thermal noise by increasing the exposure time, confusion is a physical limit that we cannot overcome for a fixed maximum baseline length, and it is important to have an accurate estimate of its statistical properties. Here, we simulate the full-Stokes parameters at 1.4 GHz of a radio source population in a computational grid corresponding to ∼0.72 deg² with a resolution of 1 arcsec. The resulting images have been convolved with different beam sizes. In particular, we consider beam Full-Width-at-Half-Maximum (FWHM) values equal to 1, 2, 6, 10, 20, 45, 60, and 120 arcsec. In Fig. 5, we show the resulting images at, from top to bottom, 1, 10, 45, and 120 arcsec beam FWHM. Columns, from left to right, show the I, Q, and U Stokes images respectively. Starting from these images, we want to determine the confusion limits at different beam FWHM. In total intensity, the spatial distribution of confusion sources over a large region of the sky forms a plateau characterised by a mean different from zero. However, this base level cannot be observed in interferometric images due to the missing short-spacing baselines in the u−v plane. Thus, what we observe in general is the fluctuating component of the confusion. The distribution of these fluctuations is highly non-Gaussian and presents a long tail at high flux densities due to bright sources. This is shown in Fig. 6, where we plot the Stokes I surface brightness distribution obtained from the image of Fig. 5 at 1 arcsec resolution. The y-axis represents the number of pixels at a given surface brightness, normalised to 1. In the top right corner, a zoomed-out view of the same histogram shows the full range of surface brightness values assumed by the distribution. The long tail towards high surface brightness values is due to the presence of real sources. In real images, the confusion is estimated from the probability distribution P(D) measured in a cold part of the sky, which corresponds to the distribution of the surface brightness image. The P(D) distribution is the convolution of the confusion due to the faint sources and the thermal noise, which are independent of each other, so that the total observed variance σ_o² is the sum of the variance due to the confusion noise σ_c² and to the thermal noise σ_n²: σ_o² = σ_c² + σ_n². To estimate the confusion, in general it is necessary to start from images where σ_c² >> σ_n². The simulated images obtained in this work are not affected by any kind of noise except the confusion. Therefore, to measure the confusion we could simply measure the rms from the simulated images. However, to be sure that we are not taking into account bright sources, which should be distinguishable from the confusion, we measure the average and the rms with an iterative procedure.
For each image at a given beam resolution, we follow these steps: (i) we cover the image with boxes whose sizes are 10 times the beam FWHM; (ii) we evaluate the rms in every box by iteratively clipping all the pixels having an intensity larger than 10×rms, until convergence, when no other pixels are excluded. In practice, we consider that the confusion noise is related only to the sources fainter than a signal-to-noise ratio S/N<10, where N is evaluated numerically by clipping the tail of the distribution as described above; (iii) we compute the confusion limit by averaging the rms values of the different boxes, and its error as the square root of the standard deviation of the obtained mean divided by the number of boxes. The computed confusion limits in total intensity at different FWHM are plotted in Fig. 7: the measurements performed on the 1.4 GHz simulated images are represented with green dots. As for the case of the cumulative counts, we assume a power law behaviour for the confusion noise as a function of the beam resolution, σ = N_0 · (FWHM)^γ, and we fit the results with the least squares method in log-log space, where y = log(σ), x = log(FWHM), B = log(N_0), and A = γ. We find the relation reported in the bottom left corner of Fig. 7. Assuming an average spectral index for the source population of α = 0.8, the previous relation can be written with an explicit (ν/GHz)^(−α) frequency dependence, where we consider this term as a constant and therefore simply divide the fitted parameter N_0 and its uncertainty by it. Our results can be compared with the confusion noise expected on the basis of the formula provided by Condon (2002), which gives the confusion rms in mJy/beam as a function of the observing frequency and of the product FWHM_min · FWHM_max in arcmin², where FWHM_min and FWHM_max are the minimum and the maximum beam FWHM. As a reference, we trace this relation with a black line in Fig. 7, where we assume α=0.8, ν=1.4 GHz, and FWHM_min = FWHM_max. We note that, as far as total intensity is concerned, there is a remarkable agreement between the predictions of our simulations and the formula by Condon (2002) widely used in the literature.

Figure 7. The plot shows the 1.4 GHz confusion noise in total intensity calculated from the convolved images with respect to the FWHM as green dots, fitted with the solid green line. The black line represents the formula proposed by Condon (2002), which is reported together with the fitted relation in the bottom left corner. We also plot in magenta the expected sensitivity of different surveys: the SKA1-MID all-sky, wide, deep, and ultra-deep surveys (Prandoni & Seymour 2015), the WSRT Apertif survey (Norris et al. 2013), the MeerKAT MIGHTEE survey (Jarvis et al. 2016), the ASKAP EMU survey (Norris et al. 2011), and the VLA VLASS (Lacy et al. 2016).

In the same figure, we show in magenta the sensitivity levels foreseen for the same future surveys of Table 1. As we can see, all the surveys are very close to the confusion limit even at very high angular resolution, for example at the 0.5 arcsec resolution of the SKA1-MID ultra-deep survey, where the confusion noise is lower and it is possible to explore the radio continuum sky in depth. No information has been reported in the literature so far about the confusion limit in the Q and U Stokes parameters. The values measured in this work at different FWHM are plotted in Fig. 8: the red and the blue solid lines are, respectively, the fits of the 1.4 GHz simulations of the Q and U confusion noise, whose equations are indicated in the bottom right corner.
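A schematic implementation of the box-wise iterative clipping used to measure the confusion rms (steps i-iii above) is sketched below. It is a simplified outline, not the actual analysis code: the clipping threshold, box size, and the synthetic test map are assumptions, and the uncertainty is taken here as the standard error of the mean over the boxes.

```python
import numpy as np

def clipped_rms(values, kappa=10.0, max_iter=50):
    """Iteratively clip pixels above kappa*rms until no further pixels are
    removed, then return the rms of what is left (the confusion fluctuations)."""
    v = np.asarray(values, dtype=float).ravel()
    for _ in range(max_iter):
        rms = v.std()
        keep = np.abs(v) <= kappa * rms
        if keep.all():
            break
        v = v[keep]
    return v.std()

def confusion_limit(image, fwhm_pix, kappa=10.0, box_factor=10):
    """Estimate the confusion noise of a (noise-free) simulated Stokes image:
    tile it with boxes of side box_factor*FWHM, compute a clipped rms per box,
    and return the mean rms with its error on the mean."""
    box = int(round(box_factor * fwhm_pix))
    ny, nx = image.shape
    rms_list = []
    for j in range(0, ny - box + 1, box):
        for i in range(0, nx - box + 1, box):
            rms_list.append(clipped_rms(image[j:j + box, i:i + box], kappa))
    rms_list = np.array(rms_list)
    return rms_list.mean(), rms_list.std() / np.sqrt(rms_list.size)

# Toy usage on a synthetic map (the real inputs are the convolved I, Q, U images).
rng = np.random.default_rng(0)
fake_map = rng.normal(0.0, 1.0, (1024, 1024))        # placeholder "confusion" field
fake_map[rng.random(fake_map.shape) < 1e-4] += 50.0  # a few bright sources
sigma_c, sigma_c_err = confusion_limit(fake_map, fwhm_pix=10)
print(f"confusion rms = {sigma_c:.3f} +/- {sigma_c_err:.3f} (map units)")
```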
We report in the following the best fit equations shown in the plot: µJy/beam (16) By assuming an average spectral index for the source popu-lation of α = 0.8, the previous relations can be written as: FWHM arcmin 2.093±0.001 µJy/beam (17) As expected the confusion limits of the U and Q Stokes parameters is lower than in total intensity, according to our simulation by a factor of ∼400. Concerning future surveys, we observe that in Q, and U Stokes parameters the confusion limit is well below their sensitivity level, which has been reported in the same plot with magenta symbols. This represents an important result since, according to the modelling presented here, it means that with next generation telescopes we could perform very deep targeted observations in polarization without being limited by confusion noise. APPLICABILITY OF THE RESULTS The simulations presented in this work aimed at determining the confusion limit in polarization as a function of angular resolution. Our approach consists in a modelling of the discrete radio sources populating the Universe starting from their observed properties at 1.4 GHz. Our investigation is based on a number of assumptions. We discuss in the following the reasons behind and the possible limitations introduced by each of them. (i) Frequency. At 1.4 GHz the radio sky has been extensively studied down to µJy flux levels, both in total intensity and in polarization. This enables us to compare our modelling with existing data in the literature and assess the reliability of our simulations. The results of the simulations at 1.4 GHz can be extrapolated to other frequencies by assuming an average spectral index for the various source populations. This approach has been followed by both Wilman et al. (2008) and Bonaldi et al. (2019), obtaining good results in reproducing observational trends, like source counts, etc. Nevertheless directly simulating the extra-galactic radio sky at lower and/or higher frequencies would certainly be the right approach to follow. (ii) Galactic foreground. We neglect the presence of a Galactic foreground. The effect of the Galactic RM is the rotation of the polarization plane of the signal, as shown in Eq. 5. If we do not correct for the right value of Galactic RM the signal will be depolarized and measurements of the Q, and U confusion limits would give values lower than what reported in this work. By applying techniques like the Rotation Measure Synthesis (Brentjens et al. 2005;Burn 1966), it is possible to infer the Galactic RM value. Our results will correspond to the de-rotated U, and Q Stokes images. (iii) Clustering. The simulated images used to estimate the confusion do not include clustering of sources. In other words, we are simulating a cold region of the sky, without galaxy clusters. The presence of source clustering would have the effect to create regions with different density of sources and likely a different distribution of the confusion. To evaluate the effect of clustering on confusion it is necessary to implement the clustering of sources along the filament of the cosmic web in our simulation and this is the goal of a future work. However, since our simulations agree with data (see Section 3), we are sure about the reliability of our results. If discontinuities in the number of sources can be clearly observed in images, our results would represent an average behaviour of the confusion between the higher and the lower density regions. It is worth noting that Wilman et al. 
(2008) include a clustering recipe in their simulations, but the results are questioned by radio source clustering analyses reported in the literature (see e.g. Hale et al. 2018). Bonaldi et al. (2019) also implemented source clustering in T-RECS, using a high-resolution cosmological simulation. An issue that can be introduced by source over-densities is the possible presence of a magneto-ionic plasma in the inter-cluster medium and, more generally, in the filaments of the cosmic web. This would have the effect of depolarising the signal of background sources, resulting in lower Q and U confusion limits. Given that the presence of magnetic fields in filaments has not yet been firmly confirmed by observations (but see Vacca et al. 2018), and that the magneto-hydro-dynamical simulations which explore this possibility suggest very weak magnetic fields in these structures (Vazza et al. 2015), the depolarization due to filaments should not have a significant impact on our estimates. However, the effects of source clustering mentioned here deserve dedicated studies and we consider them as a future prospect. (iv) Sidelobe contribution. An additional source of confusion, more important in total intensity than in the U and Q Stokes images, is due to the sidelobes of uncleaned sources lying outside the image. In the work presented here, we did not consider this contribution. This choice was made because we wanted to estimate the confusion noise due to the faint unresolved sources and compare it with the sensitivity foreseen for several surveys performed (or which are going to be performed) with different instruments. These instruments will be characterised by a different response, i.e. by a different (and sometimes still unknown) shape of the primary beam, therefore the addition of the sidelobe contribution would make the results valid only for a particular instrument in a particular configuration. With this work, we give a first estimate of the confusion noise in polarization due to the faint unresolved sources. Thanks to this, we can focus on those instruments which seem capable of reaching a thermal noise close to the confusion values reported here, and perform the analysis considering also the sidelobe contribution.

CONCLUSIONS

In this work, we presented an original numerical approach developed to generate full-Stokes images of the radio sky. We described the models and the procedure adopted to reproduce the discrete radio sources populating the Universe. After successfully comparing the results of our modelling with data from the literature, concerning the differential source counts in total intensity and the cumulative source counts of polarized sources, we identified a simple functional relation between the number of polarized sources per square degree and the polarized flux density. From this relation, we computed the number of polarized sources that future surveys will detect, information that is especially useful for cosmic magnetism investigations. Finally, we evaluated the confusion limits in the I, Q, and U Stokes parameters at different beam resolutions. In this case too, we found analytical formulas which describe the confusion limits as a function of the angular resolution. These formulas can be used as additional input for setting up observational strategies to maximise the impact of the next generation radio telescopes.
2019-02-15T19:00:01.000Z
2019-02-15T00:00:00.000
{ "year": 2019, "sha1": "b03124e50b83d8bbb819c1b7b1803ee7cb248dec", "oa_license": null, "oa_url": "https://cris.unibo.it/bitstream/11585/704373/4/11585_704373.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b03124e50b83d8bbb819c1b7b1803ee7cb248dec", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
59293038
pes2o/s2orc
v3-fos-license
Breast Cancer Characteristics in Middle Eastern Women Immigrants Compared With Non-Hispanic White Women in California Abstract Background Emerging evidence has indicated that Middle Eastern (ME) immigrants might be more likely to be diagnosed with breast cancer at advanced stage, yet have better overall survival than nonimmigrant non-Hispanic whites (NHW). This study aims to analyze the association between ME immigration status and breast cancer stage at diagnosis and survival. Methods Using the California Cancer Registry, a total of 343 876 women diagnosed with primary in situ or invasive breast cancers were identified during 1988–2013. Multinomial logistic regression models were fitted to evaluate the risk of in situ and nonlocalized breast cancer stage in comparison with localized breast cancer among first-generation ME immigrants, second- or subsequent-generation ME immigrants, and NHW. Cox proportional hazard models were applied to calculate hazard ratios (HRs) with their 95% confidence intervals (CIs) for breast cancer mortality among the three population groups with invasive primary breast cancer. Results First-generation ME immigrants had higher odds of being diagnosed with a nonlocalized stage (vs localized) than NHW (odds ratio [OR] = 1.17, 95% CI = 1.09 to 1.26). Second- or subsequent-generation ME immigrants also had higher odds of being diagnosed with a nonlocalized stage (vs localized) than NHW (OR = 1.31, 95% CI = 1.20 to 1.43). First-generation ME immigrants were 11% less likely to die from breast cancer than NHW (HR = 0.89, 95% CI = 0.82 to 0.97). Conclusions First-generation ME immigrants had higher breast cancer survival despite being diagnosed at a nonlocalized breast cancer stage at diagnosis when compared with NHW. Screening interventions tailored to this ME immigrant group need to be implemented. In the United States, breast cancer mortality has been decreasing over the past few decades. Five-year breast cancer-specific survival rates have improved from 75.2% in 1975 to 91.3% in 2009 (1). Stage at diagnosis is considered to be the strongest determinant of breast cancer survival (2). Survival rates vary by stage at diagnosis, with 100.0% for in situ, 98.5% for localized, 84.6% for regional, and 25.0% for distant breast cancers (3). Studies have shown that immigrants to the United States present with more advanced cancer stage at diagnosis and have lower survival rates compared with nonimmigrant non-Hispanic whites (NHW) (4)(5)(6)(7)(8)(9). Access to health care, lower rates of mammography screening, language barriers, genetic factors, and other sociocultural factors have been suggested to explain these disparities (10,11). Lower rates of mammography screening among immigrant women have been explained by multiple factors, including having a lower education level, being a new immigrant, and not having public insurance coverage (12). It has also been suggested that immigrants do not have a clear knowledge of the health care system, which can be a barrier in breast cancer screening (13). One of the growing immigrant populations in the United States (14), and particularly in California (15,16), is the Middle Eastern (ME) immigrant population. Studies have been conducted to compare breast cancer stage and survival in different immigrant groups in the United States (4)(5)(6)(7)(8)(9)(10)17,18). To our knowledge, only two studies have investigated stage at diagnosis and survival in the ME immigrant population (19,20). 
One of the reasons is that immigrants from the Middle East are not recognized as a separate ethnic group in the US census and are combined with NHW (21). A study conducted in Michigan has shown that ME immigrants were more likely to be diagnosed at advanced stage, yet had better overall survival than NHW (19), while a study performed in California has shown similar survival patterns for stage IIA breast cancers only (20). Cancer in different generations of immigrants has been investigated by using place of birth as an estimation for acculturation (22,23). To our knowledge, this is the first study to examine breast cancer stage at diagnosis and survival in different generations of ME immigrants in California. First-generation ME immigrants are born in the Middle East, while second-or subsequent-generation ME immigrants are born elsewhere. This study aims to analyze the association between ME immigration status and breast cancer stage at diagnosis and survival in California between 1988 and 2013. Data Source The California Cancer Registry (CCR) is California's statewide population-based cancer surveillance system. The registry monitors incidence and death from cancer among Californians since 1988 (24). CCR captures information on the patients' demographics, cancer characteristics, treatment, and follow-up information. The demographic information includes marital status, health insurance, and socioeconomic status (SES). The cancer characteristics include age at diagnosis, year at diagnosis, stage at diagnosis, estrogen and progesterone receptors (ER and PR), tumor grade, and cancer histology. Treatment options include surgery and chemotherapy. This study did not require institutional review board approval. Study Populations This study cohort consisted of all female patients from CCR who 1) were diagnosed in California, 2) between January 1, 1988, and December 31, 2013, 3) with a primary breast cancer, 4) were younger than age 100 years at diagnosis, 5) had an available social security number (SSN), 6) were part of the three population groups of interest (first-generation ME immigrants, second-or subsequent-generation ME immigrants, and NHW), and 7) had a known breast cancer stage at diagnosis. The three population groups of interest in this study were first-generation ME immigrants, second-or subsequent-generation ME immigrants, and NHW. If the patient had a Middle Eastern last name (25), did not have a Hispanic or Asian last name, and was born in one of the Middle Eastern countries, she was considered a first-generation ME immigrant. If the patient had a Middle Eastern last name (25), did not have a Hispanic or Asian last name, was not born in one of the Middle Eastern countries, and did not have a missing birth country, she was considered a second-or subsequent-generation ME immigrant. Finally, if the patient did not have an ME or Hispanic or Asian last name and was identified as white in the CCR data set, she was considered NHW in our analysis. Stage at Diagnosis and Survival Summary stage at diagnosis existing in the CCR data set (SUMSTAGE) was used for cancer stage in this study (26). Breast cancer stage at diagnosis was categorized into in situ, localized, and nonlocalized, with nonlocalized tumors including regional and distant cancers. Regional breast cancers involve cancers that have spread to nearby lymph nodes, tissues, or organs. Distant breast cancers involve cancers that have spread to distant parts of the body. 
Localized cancer at diagnosis was used as the reference stage in this study. CCR contains the patient's underlying cause of death, vital status, and follow-up time in months. The last date for follow-up observation was December 31, 2013. Breast cancer-specific deaths were classified as codes 174.0-174.9 of the International Classification of Diseases (ICD), ninth revision, for deaths that occurred between 1988 and 1998 and codes C50.0-C50.9 of the ICD, 10th revision, for deaths that occurred in 1999 and beyond. Cancer survival analysis was completed for invasive primary breast cancer cases only; hence, in situ breast cancers were excluded from survival analysis. Other Study Variables In a previous study, principal component analysis was utilized, and data from the 1990 census were used to create an SES composite score for each of the census block groups (27). These scores were sorted, categorized into quintiles, and added to the CCR data set. The lowest quintile corresponds to the lowest SES. Age at diagnosis was used as a continuous measurement, in addition to the three age categories created (<45, 45-54, and ≥55 years). Year at diagnosis, ranging from 1988 to 2013, was divided into five categories: 1988-1992, 1993-1997, 1998-2002, 2003-2007, and 2008-2013. ER and PR were categorized as positive, negative, and unknown. Surgery and chemotherapy treatment were categorized as no, yes, and unknown. Tumor grade was divided into five categories: well differentiated, moderately differentiated, poorly differentiated, undifferentiated/anaplastic, and unknown if differentiated. Lastly, cancer histology was categorized into ductal, lobular, ductal/lobular, mucinous, and other. Statistical Analysis Descriptive data were stratified and presented for the three population groups of interest and by country of birth for first-generation ME immigrants. Means ± standard deviations and medians were presented for continuous variables and numbers (percentages) for categorical variables. Multinomial logistic regression (28) models were fitted to evaluate the risk of in situ and nonlocalized breast cancer stage in comparison with localized cancer (reference stage) among the different generations of ME immigrants and NHW. We started with a model including age at diagnosis, year at diagnosis, and marital status (model 1). We then added SES to model 2, and health insurance to model 3. Ten-year overall and breast cancer-specific survival percentages with 95% confidence intervals (CIs) were calculated using life tables. The log-rank test was employed to compare survival curves among the three population groups. Cox proportional hazard models were applied to calculate hazard ratios (HRs) with their 95% confidence intervals for breast cancer-specific death among the three population groups. We began with a model including age at diagnosis, stage at diagnosis, year at diagnosis, and marital status (model 1). We then added health insurance and SES to model 2, ER and PR to model 3, and finally chemotherapy, surgery, tumor grade, and cancer histology to model 4. The proportional hazard assumption was examined by testing the interaction of time with the covariates. There was no violation of this assumption. All data analyses were completed using SAS statistical software, version 9.4 (SAS Institute Inc., Cary, NC). Results Female breast cancer patients accounted for 651 270 of the patients in the CCR data set between 1988 and 2013, of which 543 180 female patients had primary breast cancers.
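As a rough illustration of the modeling pipeline described in the Statistical Analysis section (the study itself used SAS 9.4), the two model families could be set up in Python as sketched below; the file name, column names, and covariate coding are hypothetical placeholders rather than actual CCR field names, and only the covariates of model 1 are shown.

```python
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

df = pd.read_csv("ccr_extract.csv")   # hypothetical analysis file

# Stage at diagnosis: multinomial logit with localized stage as the reference outcome
stage = pd.Categorical(df["stage"], categories=["localized", "in_situ", "nonlocalized"])
covars = ["first_gen_me", "second_gen_me", "age_dx", "year_dx_cat", "marital_status"]
X = sm.add_constant(pd.get_dummies(df[covars], drop_first=True).astype(float))
stage_fit = sm.MNLogit(stage.codes, X).fit()
print(stage_fit.summary())            # exponentiate the coefficients to obtain ORs

# Breast cancer-specific survival (invasive cases only): Cox proportional hazards
inv = df[df["stage"] != "in_situ"]
cols = ["followup_months", "bc_death", "first_gen_me", "second_gen_me", "age_dx"]
cph = CoxPHFitter().fit(inv[cols], duration_col="followup_months", event_col="bc_death")
cph.print_summary()                   # HRs with 95% CIs relative to the NHW reference
```

Additional covariates (health insurance, SES, ER/PR, treatment, grade, histology) would be appended in the same way to reproduce models 2 through 4.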
We restricted eligibility to women younger than 100 years at diagnosis (n = 542 974) who had available SSNs (n = 541 182). Of those, 3922 were first-generation ME immigrants, 2448 were second- or subsequent-generation ME immigrants, and 345 643 were NHW. After excluding breast cancer cases with unknown stage at diagnosis, our sample included 3841 first-generation ME immigrants, 2405 second- or subsequent-generation ME immigrants, and 337 630 NHW women. Survival analysis was performed on invasive breast cancers only. Therefore, the final sample used in the survival analysis was 3246 breast cancer cases for first-generation ME immigrants, 2056 for second- or subsequent-generation ME immigrants, and 285 256 for NHW (Figure 1). Table 1 shows the descriptive characteristics for breast cancer cases for all stages combined (in situ, localized, and nonlocalized). Results of the multinomial logistic regression models are presented in Table 2. First-generation ME immigrants had higher odds of being diagnosed with a nonlocalized stage (vs localized stage) when compared with NHW (odds ratio [OR] = 1.17, 95% CI = 1.09 to 1.26), after adjusting for age at diagnosis, year at diagnosis, marital status, SES, and health insurance. Second- or subsequent-generation ME immigrants also had higher odds of being diagnosed with a nonlocalized stage (vs localized stage) when compared with NHW (OR = 1.31, 95% CI = 1.20 to 1.43). No statistically significant differences were detected in the odds of being diagnosed with in situ breast cancers (vs localized stage) between ME immigrants and NHW or in the odds of being diagnosed with nonlocalized stage (vs localized stage) among the different generations of ME immigrants. The 10-year overall and breast cancer-specific survival analyses are illustrated in Table 3. Regardless of the breast cancer diagnosis stage, first-generation ME immigrants had the highest overall survival, while NHW had the lowest overall survival among the three population groups. First-generation ME immigrants also had the highest breast cancer-specific survival among the three population groups for localized and nonlocalized breast cancer stages. Breast cancer-specific survival percentages were higher than overall survival percentages. Nonlocalized breast cancer cases had lower survival when compared with localized breast cancers. The log-rank test showed a statistically significant difference among the three population groups, except for breast cancer-specific survival in the localized cancer stage. After adjusting for age at diagnosis, stage at diagnosis, year at diagnosis, marital status, health insurance, SES, ER, PR, chemotherapy, surgery, tumor grade, and cancer histology, first-generation ME immigrants were 11% less likely to die from breast cancer than NHW (HR = 0.89, 95% CI = 0.82 to 0.97). There were no statistically significant differences in breast cancer death rates between second- or subsequent-generation ME immigrants and NHW (HR = 1.03, 95% CI = 0.93 to 1.13). First-generation ME immigrants were less likely to die from breast cancer than second- or subsequent-generation immigrants; however, in the final model after full adjustment, the difference was only marginally statistically significant (HR = 0.88, 95% CI = 0.77 to 1.00) (Table 4). Discussion This study found that first-generation ME immigrants had higher breast cancer survival despite being diagnosed at a nonlocalized breast cancer stage when compared with NHW. Previous studies have shown that immigrants present with more advanced breast cancer stage at diagnosis (4,6,7,29-31).
Our results are similar, with first-generation ME immigrants having higher odds of nonlocalized breast cancer stage when compared with NHW. A comparative survey among four ME registries and the United States showed more than 45% of ME registry participants (except the Israel-Jewish registry) being diagnosed with breast cancer at a regional stage (32). Multiple factors have been reported to contribute to this advanced stage at diagnosis in immigrants, including lower mammography screening rates (33,34), lower SES, different cultural beliefs (35), and limited access to health care (36). Studies have examined predictors of mammography screening and breast examination in immigrant groups; these predictors included having health insurance, having higher income, longer duration of residency in the United States, and greater acculturation (37). Reasons for mammogram noncompliance included not having previous mammograms, fear of mammography, and lack of time to take the test (38). A report from Jordan showed that only 7% of 1549 population-based, randomly selected women aged 18 years and older had ever had a mammogram (39). Studies investigating the factors that influence breast cancer screening and examination in ME women identified the perceived importance of mammography, intent to be screened, and religious/cultural restrictions (40-46). We hypothesized that a potential reason for first-generation ME immigrants to be diagnosed with advanced breast cancer stage might be a lack of access to health care. However, our results showed that even after adjusting for SES and health insurance, first-generation ME immigrants still had higher odds of being diagnosed with nonlocalized stage compared with NHW. Cultural and immigration-related barriers might be responsible for these findings, as shown in a study conducted in the Washington, DC, area on Jordanian and Palestinian first-generation immigrants (47). ME women tend to be very busy at home, prioritize their families, and often do not see a clinician until symptoms appear. Many hold strong beliefs in Allah's will, and in some cases they face strong objections from their partners and families to being examined by a clinician (particularly a male clinician), as exposing the body may be considered forbidden by their religious beliefs. Lastly, they often do not have a habit of getting annual check-ups, are not motivated to undergo screening, and have a deep fear of cancer (47). This study also showed first-generation ME immigrants having higher breast cancer survival when compared with NHW. Our results are consistent with the limited literature on ME immigrants in the United States (19,20). This higher survival in first-generation ME immigrants may be explained by their social support and adherence to a Mediterranean diet. Studies have shown that women whose social support system increases after a breast cancer diagnosis have higher survival rates (48), and that the absence of emotional support increases the risk of dying from breast cancer (49). Family is the fundamental social unit in ME communities (50-52), and after a cancer diagnosis family members typically take on the role of the patient's caregivers.
ME families often provide emotional and social support. This can help increase the chance of survival from breast cancer for first-generation ME patients. The higher survival in first-generation ME immigrants can also be explained by their adherence to a Mediterranean diet. Studies have shown that adherence to a Mediterranean diet is associated with higher survival (53,54). The lower mortality pattern in immigrants has also been studied in the Latino community, where two different hypotheses have been suggested and tested: salmon bias and the healthy migrant effect (55). Salmon bias, where immigrants tend to return home to die when they are diagnosed with terminal cancer, has been considered as an explanation for lower mortality in different immigrant groups, including ME immigrants in Europe returning to their home countries (56). The United States is geographically close to Mexico, and so is Europe to the ME countries. We speculate that the lower mortality in ME first-generation immigrants is not due to salmon bias given the long travel distance between the United States and the countries of the Middle East. However, this lower mortality can be explained by the healthy migrant effect, whereby healthier ME people immigrate to the United States. In this study, we assessed whether acculturation is associated with breast cancer stage at diagnosis and survival by investigating place of birth and looking at different generations of ME immigrants (23). Second- or subsequent-generation ME immigrants had higher odds of being diagnosed with nonlocalized breast cancer stage when compared with NHW. We believe that the same cultural barriers preventing first-generation ME immigrants from being screened are possible explanations for the observed advanced cancer stage in second- or subsequent-generation ME immigrants. This was further demonstrated by the absence of stage differences between the different generations of ME immigrants. However, no statistically significant differences exist in breast cancer mortality between second- or subsequent-generation ME immigrants and NHW, suggesting an impact of acculturation on breast cancer survival. Second- or subsequent-generation ME immigrants tend to adopt a Westernized diet, which is positively associated with higher mortality (57). This was also shown by first-generation ME immigrants having a survival advantage (although marginal) over second- or subsequent-generation ME immigrants. This is the first study to investigate breast cancer stage and survival in different generations of ME immigrants in California. We used CCR, California's statewide cancer registry, which has captured cancer incidence and characteristics among Californians since 1988. Our study also bears a few limitations. Women who had an ME maiden name but changed their last name after marriage, as well as children born to ME women but not to ME men, were not captured in this study. In addition, we were not able to identify ME immigrants with missing ME last names or patients with missing places of birth. Data on human epidermal growth factor receptor 2 were missing in more than 60% of the cases; therefore, we did not include this variable in the analysis. Our study lacks information on reproductive factors (nulliparity, early menarche, and late menopause), which are known to increase breast cancer risk. It also lacks information on body mass index, smoking, alcohol consumption, and diet. Immigrants tend to adopt a Westernized diet after immigration or with further generations.
Data on other comorbid conditions were not available in CCR. These comorbidities could have clarified some of the survival patterns seen in this study. We could not measure time since immigration for first-generation ME immigrants. Lastly, there was a statistically significant difference in sample sizes among the three population groups, limiting the comparability of our groups. In summary, first-generation ME immigrants were diagnosed at a nonlocalized breast cancer stage at diagnosis when compared with NHW. However, they had higher breast cancer survival. Other studies are needed to confirm our results. Furthermore, screening interventions conducted in an appropriate language and tailored to this ME immigrant group, taking into consideration their specific cultural beliefs, need to be implemented. Considerations should be made to start breast cancer screening at a younger age in ME immigrants (58,59), and perhaps to screen more frequently. Funding No funding sources reported. Note Affiliation of authors: Department of Epidemiology, School of Medicine, University of California Irvine, Irvine, CA.
2019-01-28T14:07:47.558Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "5a24d55a71f09b5e00285ad4d2c343970b9a38f4", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jncics/article-pdf/2/2/pky014/28907511/pky014.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a24d55a71f09b5e00285ad4d2c343970b9a38f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7305992
pes2o/s2orc
v3-fos-license
Lattice-based Minimum Error Rate Training for Statistical Machine Translation Minimum Error Rate Training (MERT) is an effective means to estimate the feature function weights of a linear model such that an automated evaluation criterion for measuring system performance can directly be optimized in training. To accomplish this, the training procedure determines for each feature function its exact error surface on a given set of candidate translations. The feature function weights are then adjusted by traversing the error surface combined over all sentences and picking those values for which the resulting error count reaches a minimum. Typically, candidates in MERT are represented as N-best lists which contain the N most probable translation hypotheses produced by a decoder. In this paper, we present a novel algorithm that allows for efficiently constructing and representing the exact error surface of all translations that are encoded in a phrase lattice. Compared to N-best MERT, the number of candidate translations thus taken into account increases by several orders of magnitude. The proposed method is used to train the feature function weights of a phrase-based statistical machine translation system. Experiments conducted on the NIST 2008 translation tasks show significant runtime improvements and moderate BLEU score gains over N-best MERT. Introduction Many statistical methods in natural language processing aim at minimizing the probability of sentence errors. In practice, however, system quality is often measured based on error metrics that assign non-uniform costs to classification errors and thus go far beyond counting the number of wrong decisions. Examples are the mean average precision for ranked retrieval, the F-measure for parsing, and the BLEU score for statistical machine translation (SMT). A class of training criteria that provides a tighter connection between the decision rule and the final error metric is known as Minimum Error Rate Training (MERT) and has been suggested for SMT in (Och, 2003). MERT aims at estimating the model parameters such that the decision under the zero-one loss function maximizes some end-to-end performance measure on a development corpus. In combination with log-linear models, the training procedure allows for a direct optimization of the unsmoothed error count. The criterion can be derived from Bayes' decision rule as follows: Let f = f_1, ..., f_J denote a source sentence ('French') which is to be translated into a target sentence ('English') e = e_1, ..., e_I. Under the zero-one loss function, the translation which maximizes the a posteriori probability is chosen: ê = argmax_e Pr(e | f) (1). Since the true posterior distribution is unknown, Pr(e | f) is modeled via a log-linear translation model which combines some feature functions h_m(e, f) with feature function weights λ_m, m = 1, ..., M: Pr(e | f) ≈ p_{λ_1^M}(e | f) = exp[Σ_m λ_m h_m(e, f)] / Σ_{e'} exp[Σ_m λ_m h_m(e', f)] (2). The feature function weights are the parameters of the model, and the objective of the MERT criterion is to find a parameter set λ_1^M that minimizes the error count on a representative set of training sentences. More precisely, let f_1^S denote the source sentences of a training corpus with given reference translations r_1^S, and let C_s = {e_{s,1}, ..., e_{s,K}} denote a set of K candidate translations.
Assuming that the corpus-based error count for some translations e_1^S is additively decomposable into the error counts of the individual sentences, i.e., E(r_1^S, e_1^S) = Σ_{s=1}^S E(r_s, e_s), the MERT criterion is given as: λ̂_1^M = argmin_{λ_1^M} Σ_{s=1}^S E(r_s, ê(f_s; λ_1^M)) (3), where ê(f_s; λ_1^M) denotes the candidate translation that maximizes the model score under λ_1^M. In (Och, 2003), it was shown that linear models can effectively be trained under the MERT criterion using a special line optimization algorithm. This line optimization determines for each feature function h_m and sentence f_s the exact error surface on a set of candidate translations C_s. The feature function weights are then adjusted by traversing the error surface combined over all sentences in the training corpus and moving the weights to a point where the resulting error reaches a minimum. Candidate translations in MERT are typically represented as N-best lists which contain the N most probable translation hypotheses. A downside of this approach is, however, that N-best lists can only capture a very small fraction of the search space. As a consequence, the line optimization algorithm needs to repeatedly translate the development corpus and enlarge the candidate repositories with newly found hypotheses in order to avoid overfitting on C_s and to prevent the optimization procedure from stopping in a poor local optimum. In this paper, we present a novel algorithm that allows for efficiently constructing and representing the unsmoothed error surface for all translations that are encoded in a phrase lattice. The number of candidate translations thus taken into account increases by several orders of magnitude compared to N-best MERT. Lattice MERT is shown to yield significantly faster convergence rates while it explores a much larger space of candidate translations which is exponential in the lattice size. Despite this vast search space, we show that the suggested algorithm is always efficient in both running time and memory. The remainder of this paper is organized as follows. Section 2 briefly reviews N-best MERT and introduces some basic concepts that are used in order to develop the line optimization algorithm for phrase lattices in Section 3. Section 4 presents an upper bound on the complexity of the unsmoothed error surface for the translation hypotheses represented in a phrase lattice. This upper bound is used to prove the space and runtime efficiency of the suggested algorithm. Section 5 lists some best practices for MERT. Section 6 discusses related work. Section 7 reports on experiments conducted on the NIST 2008 translation tasks. The paper concludes with a summary in Section 8. Minimum Error Rate Training on N-best Lists The goal of MERT is to find a weights set that minimizes the unsmoothed error count on a representative training corpus (cf. Eq. (3)). This can be accomplished through a sequence of line minimizations along some vector directions {d_1^M}. Starting from an initial point λ_1^M, computing the most probable sentence hypothesis out of a set of K candidate translations C_s = {e_1, ..., e_K} along the line λ_1^M + γ · d_1^M results in the following optimization problem (Och, 2003): ê(f_s; γ) = argmax_{e ∈ C_s} {(λ_1^M + γ · d_1^M)^T · h_1^M(e, f_s)} = argmax_{e ∈ C_s} {a(e, f_s) + γ · b(e, f_s)} (4, 5), with a(e, f_s) = Σ_m λ_m h_m(e, f_s) and b(e, f_s) = Σ_m d_m h_m(e, f_s). Hence, the total score for any candidate translation corresponds to a line in the plane with γ as the independent variable. For any particular choice of γ, the decoder seeks that translation which yields the largest score and therefore corresponds to the topmost line segment. Overall, the candidate repository C_s defines K lines where each line may be divided into at most K line segments due to possible intersections with the other K − 1 lines.
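To make the line representation concrete, the following Python sketch (with purely illustrative feature values and names, not drawn from the paper) converts each candidate's feature vector into a slope/intercept pair for a chosen start point λ and direction d, so that rescoring at any γ reduces to evaluating K lines.

```python
def as_line(h, lam, d):
    """Map a candidate's feature vector h to the line gamma -> a + gamma * b,
    where a = lam . h is its score at gamma = 0 and b = d . h is its slope."""
    a = sum(l * x for l, x in zip(lam, h))
    b = sum(dd * x for dd, x in zip(d, h))
    return b, a   # (slope, y-intercept)

# Toy candidate set C_s with two feature functions (hypothetical values).
C_s = {"hyp A": [-2.0, -1.0], "hyp B": [-3.0, 0.5], "hyp C": [-1.5, -2.5]}
lam, d = [1.0, 1.0], [0.0, 1.0]                  # optimize only the second weight
lines = {e: as_line(h, lam, d) for e, h in C_s.items()}

def best_at(gamma):
    """The decoder's choice at lambda + gamma * d is the topmost line at gamma."""
    return max(lines, key=lambda e: lines[e][0] * gamma + lines[e][1])

print([best_at(g) for g in (-2.0, -0.5, 2.0)])   # -> ['hyp C', 'hyp A', 'hyp B']
```

The winner changes exactly at the intersection points of these lines, which is what the upper envelope construction described next exploits.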
The sequence of the topmost line segments constitutes the upper envelope, which is the pointwise maximum over all lines induced by C_s. The upper envelope is a convex hull and can be inscribed with a convex polygon whose edges are the segments of a piecewise linear function in γ (Papineni, 1999; Och, 2003): Env_s(γ) = max_{e ∈ C_s} {a(e, f_s) + γ · b(e, f_s)} (6). The importance of the upper envelope is that it provides a compact encoding of all possible outcomes that a rescoring of C_s may yield if the parameter set λ_1^M is moved along the chosen direction. Once the upper envelope has been determined, we can project its constituent line segments onto the error counts of the corresponding candidate translations (cf. Figure 1). This projection is independent of how the envelope is generated and can therefore be applied to any set of line segments. An effective means to compute the upper envelope is a sweep line algorithm which is often used in computational geometry to determine the intersection points of a sequence of lines or line segments (Bentley and Ottmann, 1979). The idea is to shift ("sweep") a vertical ray from −∞ to +∞ over the plane while keeping track of those points where two or more lines intersect. Since the upper envelope is fully specified by the topmost line segments, it suffices to store the following components for each line object ℓ: the x-intercept ℓ.x with the left-adjacent line, the slope ℓ.m, and the y-intercept ℓ.y; a fourth component, ℓ.t, is used to store the candidate translation. Algorithm 1 (SweepLine; input: an array a[0..K-1] containing lines, output: the upper envelope of a) gives the pseudo code for a sweep line algorithm which reduces the K line objects of the candidate repository C_s to its upper envelope. By construction, the upper envelope consists of at most K line segments. The endpoints of each line segment define the interval boundaries at which the decision made by the decoder will change. Hence, as γ increases from −∞ to +∞, the most probable translation hypothesis will change whenever γ passes an intersection point. The optimal γ can then be found by traversing the merged error surface and choosing a point from the interval where the total error reaches its minimum. After the parameter update, λ_1^M ← λ_1^M + γ_opt · d_1^M, the decoder may find new translation hypotheses which are merged into the candidate repositories if they are ranked among the top N candidates. The relation K = N therefore holds only in the first iteration; from the second iteration on, K is usually larger than N. The sequence of line optimizations and decodings is repeated until (1) the candidate repositories remain unchanged and (2) γ_opt = 0. Minimum Error Rate Training on Lattices In this section, the algorithm for computing the upper envelope on N-best lists is extended to phrase lattices. For a description of how to generate lattices, see (Ueffing et al., 2002). Formally, a phrase lattice for a source sentence f is defined as a connected, directed acyclic graph G_f = (V_f, E_f) with vertex set V_f and arc set E_f. Each arc ε is labeled with a phrase ϕ_ij = e_{i+1}, ..., e_j and the (local) feature function values h_1^M(ε). Each path π = (ε_1, ..., ε_n) that starts at the source node defines a partial translation e_π of f which is the concatenation of all phrases along this path. The corresponding feature function values are obtained by summing over the arc-specific feature function values: h_m(e_π, f) = Σ_{i=1}^n h_m(ε_i), m = 1, ..., M. In the following, we use the notation in(v) and out(v) to refer to the set of incoming and outgoing arcs for a node v ∈ V_f. Similarly, head(ε) and tail(ε) denote the head and tail of ε ∈ E_f.
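The sweep-line reduction that Algorithm 1 performs can be sketched in Python as follows; this is a minimal re-implementation for illustration (the field names mirror the ℓ.x, ℓ.m, ℓ.y, ℓ.t components above), not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class Line:
    m: float                   # slope, b(e, f_s)
    y: float                   # y-intercept, a(e, f_s)
    hyp: object = None         # candidate translation carried along with the line
    x: float = float("-inf")   # left boundary of the interval where the line is topmost

def upper_envelope(lines):
    """Reduce a collection of lines to its upper envelope (cf. Algorithm 1, SweepLine)."""
    a = sorted(lines, key=lambda l: (l.m, l.y))   # sweep in order of increasing slope
    env = []
    for cand in a:
        l = Line(cand.m, cand.y, cand.hyp)
        keep = True
        while env:
            top = env[-1]
            if l.m == top.m:                      # parallel lines: only the higher one survives
                if l.y <= top.y:
                    keep = False
                    break
                env.pop()
                continue
            x = (l.y - top.y) / (top.m - l.m)     # intersection with the rightmost segment
            if x <= top.x:                        # that segment is never topmost: discard it
                env.pop()
                continue
            l.x = x
            break
        if keep:
            env.append(l)
    return env   # env[i] is optimal on [env[i].x, env[i+1].x); the last segment extends to +inf
```

Projecting each surviving segment's hyp onto its error count and summing the resulting step functions over all sentences yields the merged error surface from which γ_opt is chosen.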
To develop the algorithm for computing the upper envelope of all translation hypotheses that are encoded in a phrase lattice, we first consider a node v ∈ V_f with some incoming arcs and an outgoing arc ε leading to a successor node v′. Each path that starts at the source node s and ends in v defines a partial translation hypothesis which can be represented as a line (cf. Eq. (5)). We now assume that the upper envelope for these partial translation hypotheses is known; the lines that constitute this envelope shall be denoted by f_1, ..., f_N. Next we consider continuations of these partial translation candidates by following one of the outgoing arcs ε ∈ out(v). Each such arc defines another line, denoted by g(ε). If we add the slope and y-intercept of g(ε) to each line in the set {f_1, ..., f_N}, then the upper envelope will be constituted by segments of f_1 + g(ε), ..., f_N + g(ε). This operation neither changes the number of line segments nor their relative order in the envelope, and therefore it preserves the structure of the convex hull. As a consequence, we can propagate the resulting envelope over an outgoing arc ε to a successor node v′ = head(ε). Other incoming arcs for v′ may be associated with different upper envelopes, and all that remains is to merge these envelopes into a single combined envelope. This is, however, easy to accomplish since the combined envelope is simply the convex hull of the union over the line sets which constitute the individual envelopes. Thus, by merging the arrays that store the line segments for the incoming arcs and applying Algorithm 1 to the resulting array, we obtain the combined upper envelope for all partial translation candidates that are associated with paths starting at the source node s and ending in v′. The correctness of this procedure is based on the following two observations: (1) A single translation hypothesis cannot constitute multiple line segments of the same envelope, because translations associated with different line segments are path-disjoint. (2) Once a partial translation has been discarded from an envelope because its associated line f̃ is completely covered by the topmost line segments of the convex hull, there is no path continuation that could bring f̃ back into the upper envelope again. Proof: Suppose that such a continuation exists; it can be represented as a line g, and since f̃ has been discarded from the envelope, the path associated with g must also be a valid continuation for the line segments f_1, ..., f_N that constitute the envelope. Thus it follows that f̃ + g can never exceed the pointwise maximum over f_1 + g, ..., f_N + g, so f̃ remains below the upper envelope, which contradicts the assumption. To keep track of the phrase expansions when propagating an envelope over an outgoing arc ε ∈ out(v), the phrase label ϕ_{v,head(ε)} has to be appended from the right to all partial translation hypotheses in the envelope. The complete algorithm then works as follows: First, all nodes in the phrase lattice are sorted in topological order. Starting with the source node, we combine for each node v the upper envelopes that are associated with v's incoming arcs by merging their respective line arrays and reducing the merged array into a combined upper envelope using Algorithm 1. The combined envelope is then propagated over the outgoing arcs by associating each ε ∈ out(v) with a copy of the combined envelope. This copy is modified by adding the parameters (slope and y-intercept) of the line g(ε) to the envelope's constituent line segments.
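Reusing Line and upper_envelope from the sketch above, the node-by-node propagation just described (the core of what Algorithm 2 does) might be written as follows; the lattice interface (in_arcs, arc_line) is a simplifying assumption for illustration, not the paper's data structure.

```python
def lattice_envelope(nodes_topo, in_arcs, arc_line):
    """Propagate upper envelopes through a phrase lattice given in topological order.

    nodes_topo : nodes sorted topologically; nodes_topo[0] is the source, nodes_topo[-1] the sink
    in_arcs[v] : list of (u, arc) pairs for the arcs u -> v
    arc_line   : function arc -> (slope, intercept, phrase) for the chosen search direction
    """
    env_at = {nodes_topo[0]: [Line(0.0, 0.0, hyp="")]}    # empty partial translation at the source
    for v in nodes_topo[1:]:
        merged = []
        for u, arc in in_arcs[v]:
            m, y, phrase = arc_line(arc)
            for seg in env_at[u]:
                # Adding the arc's slope and y-intercept shifts every segment identically, so the
                # hull structure of u's envelope is preserved; the phrase label is appended to the
                # partial hypothesis carried by the segment.
                merged.append(Line(seg.m + m, seg.y + y, hyp=(seg.hyp + " " + phrase).strip()))
        env_at[v] = upper_envelope(merged)                # merge and reduce with Algorithm 1
    return env_at[nodes_topo[-1]]                         # envelope over all complete paths
```

A dictionary keyed by node is kept here only for brevity; as the text notes, envelopes whose outgoing arcs have all been consumed can be freed to save memory.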
The envelopes of the incoming arcs are no longer needed and can be deleted in order to release memory. The envelope computed at the sink node is by construction the convex hull over all translation hypotheses represented in the lattice, and it compactly encodes those candidates which maximize the decision rule Eq. (1) for any point along the line λ_1^M + γ · d_1^M. Algorithm 2 shows the pseudo code. Note that the component ℓ.x does not change and therefore requires no update. It remains to verify that the suggested algorithm is efficient in both running time and memory. For this purpose, we first analyze the complexity of Algorithm 1 and derive from it the running time of Algorithm 2. After sorting, each line object in Algorithm 1 is visited at most three times. The first time is when it is picked by the outer loop. The second time is when it either gets discarded or when it terminates the inner loop. Whenever a line object is visited for the third time, it is irrevocably removed from the envelope. The runtime complexity is therefore dominated by the initial sorting and amounts to O(K log K). Topological sort on a phrase lattice G = (V, E) can be performed in time Θ(|V| + |E|). As will be shown in Section 4, the size of the upper envelope for G can never exceed the size of the arc set E. The same holds for any subgraph G_[s,v] of G which is induced by the paths that connect the source node s with v ∈ V. Since the envelopes propagated from the source to the sink node can only increase linearly in the number of previously processed arcs, the total running time amounts to a worst-case complexity of O(|V| · |E| · log |E|). Upper Bound for Size of Envelopes The memory efficiency of the suggested algorithm results from the following theorem which provides a novel upper bound for the number of cost-minimizing paths in a directed acyclic graph with arc-specific affine cost functions. The bound is not only meaningful for proving the space efficiency of lattice MERT, but it also provides deeper insight into the structure and complexity of the unsmoothed error surface induced by log-linear models. Since we are examining a special class of shortest path problems, we will invert the sign of each local feature function value in order to turn the feature scores into corresponding costs. Hence, the objective of finding the best translation hypotheses in a phrase lattice becomes the problem of finding all cost-minimizing paths in a graph with affine cost functions. Theorem: Let G = (V, E) be a connected directed acyclic graph with vertex set V, unique source and sink nodes s, t ∈ V, and an arc set E ⊆ V × V in which each arc ε ∈ E is associated with an affine cost function c_ε(γ) = a_ε · γ + b_ε, a_ε, b_ε ∈ R. Counting ties only once, the cardinality of the union over the sets of all cost-minimizing paths for all γ ∈ R is then upper-bounded by |E|: |⋃_{γ ∈ R} {π : π = π(G; γ) is a cost-minimizing path in G given γ}| ≤ |E| (7). Proof: The proposition holds for the empty graph as well as for the case that V = {s, t} with all arcs ε ∈ E joining the source and sink node. Let G therefore be a larger graph. Then we perform an s-t cut and split G into two subgraphs G_1 (left subgraph) and G_2 (right subgraph).
Arcs spanning the section boundary are duplicated (with the costs of the copied arcs in G_2 being set to zero) and connected with a newly added head or tail node. The zero-cost arcs in G_2 that emerged from the duplication process are contracted, which can be done without loss of generality because zero-cost arcs do not affect the total costs of paths in the lattice. The contraction essentially amounts to a removal of arcs and is required in order to ensure that the sum of edges in both subgraphs does not exceed the number of edges in G. All nodes in G_1 with out-degree zero are then combined into a single sink node t_1. Similarly, nodes in G_2 whose in-degree is zero are combined into a single source node s_2. Let N_1 and N_2 denote the number of arcs in G_1 and G_2, respectively. By construction, N_1 + N_2 = |E|. Both subgraphs are smaller than G and thus, due to the induction hypothesis, their lower envelopes consist of at most N_1 and N_2 line segments, respectively. We further notice that either envelope is a convex hull whose constituent line segments inscribe a convex polygon, in the following denoted by P_1 and P_2. Now, we combine both subgraphs into a single graph G′ by merging the sink node t_1 in G_1 with the source node s_2 in G_2. The merged node is an articulation point whose removal would disconnect both subgraphs, and hence all paths in G′ that start at the source node s and stop in the sink node t lead through this articulation point. The graph G′ has at least as many cost-minimizing paths as G, although these paths as well as their associated costs might be different from those in G. The additivity of the cost function and the articulation point allow us to split the costs for any path from s to t into two portions: the first portion can be attributed to G_1 and must be a line inside P_1; the remainder can be attributed to G_2 and must therefore be a line inside P_2. Hence, the total costs for any path in G′ can be bounded by the convex hull of the superposition of P_1 and P_2. This convex hull is again a convex polygon which consists of at most N_1 + N_2 edges, and therefore the number of cost-minimizing paths in G′ (and thus also in G) is upper bounded by N_1 + N_2. Corollary: The upper envelope for a phrase lattice G = (V, E) consists of at most |E| line segments. This bound can even be refined, and one obtains (proof omitted) |E| − |V| + 2. Both bounds are tight. This result may seem somewhat surprising as it states that, independent of the choice of the direction along which the line optimization is performed, the structure of the error surface is far less complex than one might expect based on the huge number of alternative translation candidates that are represented in the lattice and thus contribute to the error surface. In fact, this result is a consequence of using a log-linear model which constrains how costs (or scores, respectively) can evolve due to hypothesis expansion. If instead quadratic cost functions were used, the size of the envelopes could not be limited in the same way. The above theorem does not, however, provide any additional guidance that would help to choose more promising directions in the line optimization algorithm to find better local optima. To alleviate this problem, the following section lists some best practices that we found to be useful in the context of MERT. Practical Aspects This section addresses some techniques that we found to be beneficial in order to improve the performance of MERT.
(1) Random Starting Points: To prevent the line optimization algorithm from stopping in a poor local optimum, MERT explores additional starting points that are randomly chosen by sampling the parameter space. (2) Constrained Optimization: This technique allows for limiting the range of some or all feature function weights by defining weights restrictions. The weight restriction for a feature function h_m is specified as an interval R_m = [l_m, r_m], l_m, r_m ∈ R ∪ {−∞, +∞}, which defines the admissible region from which the feature function weight λ_m can be chosen. If the line optimization is performed under the presence of weights restrictions, γ needs to be chosen such that the following constraint holds: l_m ≤ λ_m + γ · d_m ≤ r_m for all m = 1, ..., M. (3) Weight Priors: Weight priors give a small (positive or negative) boost ω on the objective function if the new weight is chosen such that it matches a certain target value λ*_m. A zero-weights prior (λ*_m = 0) provides a means of doing feature selection, since the weight of a feature function which is not discriminative will be set to zero. An initial-weights prior (λ*_m = λ_m) can be used to confine changes in the parameter update with the consequence that the new parameter may be closer to the initial weights set. Initial-weights priors are useful in cases where the starting weights already yield a decent baseline. (4) Interval Merging: Adjacent intervals of the error surface that share the same error count can be merged into a single interval; the merged interval has a larger range, and the choice of γ_opt may be more reliable. (5) Random Directions: If the directions chosen in the line optimization algorithm are the coordinate axes of the M-dimensional parameter space, each iteration will result in the update of a single feature function only. While this update scheme provides a ranking of the feature functions according to their discriminative power (each iteration picks the feature function for which changing the corresponding weight yields the highest gain), it does not take possible correlations between the feature functions into account. As a consequence, the optimization procedure may stop in a poor local optimum. On the other hand, it is difficult to compute a direction that decorrelates two or more correlated feature functions. This problem can be alleviated by exploring a large number of random directions which update many feature weights simultaneously. The random directions are chosen as the lines which connect some randomly distributed points on the surface of an M-dimensional hypersphere with the hypersphere's center. The center of the hypersphere is defined as the initial parameter set. Related Work As suggested in (Och, 2003), an alternative method for the optimization of the unsmoothed error count is Powell's algorithm combined with a grid-based line optimization (Press et al., 2007, p. 509). In (Zens et al., 2007), the MERT criterion is optimized on N-best lists using the Downhill Simplex algorithm (Press et al., 2007, p. 503). The optimization procedure allows for optimizing other objective functions such as the expected BLEU score. A weakness of the Downhill Simplex algorithm is, however, its decreasing robustness for optimization problems in more than 10 dimensions. A different approach to minimizing the expected BLEU score is suggested in (Smith and Eisner, 2006), who use deterministic annealing to gradually turn the objective function from a convex entropy surface into the more complex risk surface. A large variety of different search strategies for MERT are investigated in (Cer et al., 2008), which provides many fruitful insights into the optimization process.
In (Duh and Kirchhoff, 2008), MERT is used to boost the BLEU score on N -best re-ranking tasks. The incorporation of a large number of sparse feature functions is described in (Watanabe et al., 2007). The paper investigates a perceptron-like online large-margin training for statistical machine translation. The described approach is reported to yield significant improvements on top of a baseline system which employs a small number of feature functions whose weights are optimized under the MERT criterion. A study which is complementary to the upper bound on the size of envelopes derived in Section 4 is provided in (Elizalde and Woods, 2006) which shows that the number of inference functions of any graphical model as, for instance, Bayesian networks and Markov random fields is polynomial in the size of the model if the number of parameters is fixed. Experiments Experiments were conducted on the NIST 2008 translation tasks under the conditions of the constrained data track for the language pairs Arabicto-English (aren), English-to-Chinese (enzh), and Chinese-to-English (zhen). The development corpora were compiled from test data used in the 2002 and 2004 NIST evaluations. Each corpus set provides 4 reference translations per source sentence. Table 1 summarizes some corpus statistics. Translation results were evaluated using the mixedcase BLEU score metric in the implementation as suggested by (Papineni et al., 2001). Translation results were produced with a state-ofthe-art phrase-based SMT system which uses EMtrained word alignment models (IBM1, HMM) and a 5-gram language model built from the Web-1T collection 2 . Translation hypotheses produced on the blind test data were reranked using the Minimum-Bayes Risk (MBR) decision rule (Kumar and Byrne, 2004;Tromble et al., 2008). Each system uses a loglinear combination of 20 to 30 feature functions. In a first experiment, we investigated the convergence speed of lattice MERT and N -best MERT. Figure 2 shows the evolution of the BLEU score in the course of the iteration index on the zhen-dev1 corpus for either method. In each iteration, the training procedure translates the development corpus using the most recent weights set and merges the top ranked candidate translations (either represented as phrase lattices or N -best lists) into the candidate repositories before the line optimization is performed. For N -best MERT, we used N 50 which yielded the best results. In contrast to lattice MERT, N -best MERT optimizes all dimensions in each iteration and, in addition, it also explores a large number of random starting points before it re-decodes and expands the hypothesis set. As is typical for N -best MERT, the first iteration causes a dramatic performance loss caused by overadapting the candidate repositories, which amounts to more than 27.3 BLEU points. Although this performance loss is recouped after the 5th iteration, the initial decline makes the line optimization under N -best MERT more fragile since the optimum found at the end of the training procedure is affected by the initial performance drop rather than by the choice of the initial start weights. Lattice MERT on the other hand results in a significantly faster convergence speed and reaches its optimum already in the 5th iteration. For lattice MERT, we used a graph density of 40 arcs per phrase which corresponds to an N -best size of more than two octillion Ô2 ¤ 10 27 Õ entries. 
This huge number of alternative candidate translations makes updating the weights under lattice MERT more reliable and robust and, compared to N -best MERT, it becomes less likely that the same feature weight needs to be picked again and adjusted in subsequent iterations. Figure 4 shows the evolution of the BLEU score on the zhen-dev1 corpus using lattice MERT with 5 weights updates per iteration. The performance drop in iteration 1 is also attributed to overfitting the candidate repository. The decline of less than 0.5% in terms of BLEU is, however, almost negligible compared to the performance drop of more than 27% in case of N -best MERT. The vast number of alternative translation hypotheses represented in a lattice also increases the number of phase transitions in the error surface, and thus prevents MERT from selecting a low performing feature weights set at early stages in the optimization procedure. This is illustrated in Figure 3, where lattice MERT and N -best MERT find different optima for the weight of the phrase penalty feature function after the first iteration. Table 2 shows the BLEU score results on the NIST 2008 blind test using the combined dev1+dev2 corpus as training data. While only the aren task shows improvements on the development data, lattice MERT provides consistent gains over N -best MERT on all three blind test sets. The reduced performance for N -best MERT is a consequence of the performance drop in the first iteration which causes the final weights to be far off from the initial parameter set. This can impair the ability of N -best MERT to generalize to unseen data if the initial weights are already capable of producing a decent baseline. Lattice MERT on the other hand can produce weights sets which are closer to the initial weights and thus more likely to retain the ability to generalize to unseen data. It could therefore be worthwhile to investigate whether a more elaborated version of an initial-weights prior allows for alleviating this effect in case of Nbest MERT. Table 3 shows the effect of optimizing the feature function weights along some randomly chosen directions in addition to the coordinate axes. The different local optima found on the development set by using random directions result in additional gains on the blind test sets and range from 0.1% to 0.6% absolute in terms of BLEU. Summary We presented a novel algorithm that allows for efficiently constructing and representing the unsmoothed error surface over all sentence hypotheses that are represented in a phrase lattice. The proposed algorithm was used to train the feature function weights of a log-linear model for a statistical machine translation system under the Minimum Error Rate Training (MERT) criterion. Lattice MERT was shown analytically and experimentally to be superior over N -best MERT, resulting in significantly faster convergence speed and a reduced number of decoding steps. While the approach was used to optimize the model parameters of a single machine translation system, there are many other applications in which this framework can be useful, too. One possible usecase is the computation of consensus translations from the outputs of multiple machine translation systems where this framework allows us to estimate the system prior weights directly on confusion networks (Rosti et al., 2007;Macherey and Och, 2007). 
It is also straightforward to extend the suggested method to hypergraphs and forests as they are used, e.g., in hierarchical and syntax-augmented systems (Chiang, 2005;Zollmann and Venugopal, 2006). Our future work will therefore focus on how much system combination and syntax-augmented machine translation can benefit from lattice MERT and to what extent feature function weights can robustly be estimated using the suggested method.
2014-07-01T00:00:00.000Z
2008-10-25T00:00:00.000
{ "year": 2008, "sha1": "2e74e29298f0f71694ac21958996d147191fe4b0", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.5555/1613715.1613807", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "b2b5295d9699a78c60e4afc26c45ad40ddcb716c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
257837615
pes2o/s2orc
v3-fos-license
Analytical review of Tiryāq-i-Wabāī – A Unani panacea for the control of COVID-19 Introduction COVID-19 has affected the whole world drastically and led to a substantial loss of human life. Relentless research is underway to identify effective treatment to control the disease. Traditional systems are also being explored to search for a potent drug. Unani formulation ‘Tiryāq-i-Wabāī’ has long been used in cholera, plague and other epidemic diseases. This review is aimed at analysing the possible role of Tiryāq-i-Wabāī in the prevention and control of COVID-19. Methodology Unani classical texts and Pharmacopoeias available in the library of Regional Research Institute of Unani Medicine, Chennai were reviewed to collect information related to epidemics, commonly prescribed drugs during epidemics, and therapeutic uses of Tiryāq-i-Wabāī ingredients. ScienceDirect, Springer, PubMed and Google Scholar were searched to collect information regarding current pandemic and pharmacological activities of ingredients and phytoconstituents present in the formulation. The collected data was analyzed and interpreted. Results Tiryāq-i-Wabāī was found to be the most recommended prophylactic and curative drug during epidemics. The formulation ingredients, Sibr (Aloe vera (L.) Burm.f.), Murr Makki (Commiphora myrrha (T.Nees) Engl.) and Zāfrān (Crocus sativus L.) are categorized under Tiryāqi Advia (literally – antidote drugs) and are considered to be very effective in SARS related conditions. These ingredients have been reported to exhibit immunomodulatory, antioxidant, antiviral, antibacterial, antitussive, smooth muscle relaxant, antipyretic and anti-inflammatory activities corroborating the traditional use of Tiryāq-i-Wabāī. Conclusion Scientific data imply great potential and utility of the formulation which could be a possible alternative approach for the prevention and control of current and future pandemics. Introduction The outbreak of coronavirus disease 'COVID-19 ′ has become a matter of great public health concern worldwide and declared as global pandemic by World Heatlth Organization (Gautret et at, 2020). This is the third coronavirus disease to occur in the 21st century, after the severe acute respiratory syndrome coronavirus (SARS-CoV) in 2002-03 and Middle-East respiratory syndrome coronavirus (MERS-CoV) in 2012, which caused disastrous outbreak of pneumonia in human beings (Nikhat and Fazil, 2020). The disease was identified first in Wuhan, China and spread to nearly 216 countries in a very short period of time. The rapid increase in cases of Covid-19 caused widespread panic among people across the globe. Despite the fact that the spread and threat of COVID-19 is currently declining, between 5 and 11 September 2022, over 3.1 million new cases and 11,000 fatalities were reported by WHO globally (World Health Organization (WHO), 2022a). As per the current statistics, as at September 16, 2022, more than 608 million confirmed cases of COVID-19 including 6.5 million deaths had been reported globally (World Health Organization (WHO), 2022b). In India, the first case of Covid-19 reported on 30 January 2020, was a student who traveled from China to India. As of 16 September 2022 there had been 44,522,777 confirmed cases with 528,273 deaths reported in the country (World Health Organization (WHO), 2022c). There have been many global initiatives to address the situation efficiently. However effective and specific therapy options for the pandemic still remain limited (Niknam et al., 2022). 
Certain drugs have been investigated and recommended for the management of COVID-19, including remdesivir, lopinavir, ritonavir, interferons, steroids, monoclonal antibodies and repurposed drugs such as chloroquine and hydroxychloroquine (Fazil and Nikhat, 2022;Niknam et al., 2022). Chloroquine 'a widely used anti-malarial drug' and hydroxychloroquine were shown to have in vitro antiviral activity. Both drugs share similar chemical characteristics and mechanism of action. Hydroxychloroquine has been reported to curb the SARS-CoV-2 replication. Reports suggest that these drugs may have efficacy in treating patients infected with Covid-19 (Meo et al., 2020). Hydroxychloroquine and azithromycin in combination have demonstrated a synergistic effect in viral load reduction and early recovery (Gautret et al., 2020). At present, a number of vaccines are available for the control of COVID-19. Mass vaccination drive has been implemented across the globe. However, the search for potent and specific treatments is still imperative, particularly in nations with low vaccination rates and where mutations can potentially threaten vaccine evasion. Further, the effectiveness of COVID-19 vaccine is limited in some individuals such as the immunocompromised, patients with malignancies and those receiving chemotherapies (Niknam et al., 2022). The scientific community across the globe is actively engaged in exploring novel therapeutics to address the current need. In a similar vein, prophylactic and curative aspects of traditional medicines are also being explored for the treatment of the disease. Significant antiviral activities against a wide range of viruses have been reported for many traditional remedies used for millennia in Ayurveda, Unani, Siddha and other traditional systems of medicine (Mukherjee, 2019). A recent study on a siddha formulation demonstrated a high binding affinity and interactions with spike protein of SARS-CoV-2 (Kiran et al., 2020). These studies support the hypothesis that traditional remedies may have a direct effect against SARS-CoV-2. Epidemics and infectious diseases had been discussed meticulously in classical literature of Unani medicine, one of the most recognized traditional systems of medicine in India, with evidence of a wide range of prophylactic and therapeutic potential. (Nikhat and Fazil, 2020). Tiryāq-i-Wabāī is one such well documented formulations in Unani classical literature for its wide use as a prophylactic during epidemics. The very nomenclature of the drug connotes "an antidote during epidemics". The ingredients of this pharmacopoeial formulation have been reported for a wide range of pharmacological activities including antiviral, antimicrobial, antioxidant and immunomodulatory activities. The formulation may prove beneficial in augmenting immune resilience and may be used for prophylactic and therapeutic purposes in the current situation as well as in future pandemics. This review will highlight the potential of Tiryāq-i-Wabāī in epidemics and its possible role in combating COVID-19. Methodology The authors reviewed Unani classical texts and Unani Pharmacopoeias available in the library of Regional Research Institute of Unani Medicine, Chennai and collected information related to epidemic diseases, commonly prescribed drugs for prophylactic and therapeutic purposes during epidemics, and ingredients of Khan (1813-1902, Qar-ābādīn-i Najm al-Ghanī and Khazāin al-Advia by Najm al- Ghanī (1859-1932. 
Other books and journals were also reviewed for further information. Major scientific databases including ScienceDirect, Springer, PubMed and Google Scholar were searched to collect information regarding the current pandemic and the pharmacological activities of the formulation ingredients. The search terms used included COVID-19, SARS-CoV-2, etiology, symptoms, traditional medicine for COVID-19, bioactive compounds, antiviral, immunomodulatory, antitussive, antipyretic and anti-inflammatory activities, along with drug names such as saffron, Aloe vera and myrrh. Selected reviews and original articles were reviewed and interpreted accordingly. Coronavirus disease 2019 (COVID-19) The current pandemic of novel coronavirus disease 2019 (COVID-19) has grossly affected the livelihoods of the world population, with dwindling GDPs in several nations. It is a highly contagious and transmittable disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Shereen et al., 2020). SARS-CoV-2 belongs to the β-coronavirus genus and, like other coronaviruses, carries a spike protein (Phan, 2020a; Shereen et al., 2020). It uses the ACE2 (angiotensin-converting enzyme 2) cell receptor as its mechanism for entry into host cells, as previously reported by Shereen et al. (2020). The primary target of antibodies and vaccines is the spike glycoprotein, which is reported to be a mixture of bat SARS-CoV and an unknown beta-CoV (Shereen et al., 2020; Walls et al., 2020). Mutations in the spike protein of SARS-CoV-2 have been reported which may enhance its binding affinity to ACE2 and hence its infectivity (Wan et al., 2020). Human-to-human transmission of the virus can occur through different modes; respiratory droplets (including those generated during speech) and fomites are considered the most significant direct modes, as 50-80% of virus transmission is reported to occur from asymptomatic carriers (Anfinrud et al., 2020). Various reports have confirmed the presence of the virus in sputum, saliva, bronchoalveolar secretions, and nasopharyngeal swabs of infected individuals (Phan, 2020b). The virus can also be transmitted through tears and other body fluids. The oro-fecal route is also possible, as viral RNA has been detected in feces (Ling et al., 2020). In most cases, the average incubation period for COVID-19 is 5-6 days, although incubation periods of up to 24 days have been reported in a few cases (Jean et al., 2020). The novel coronavirus disease affects both males and females, with a slight male predominance (Guo et al., 2020). The disease is characterized by fever, dry cough and malaise in most cases (83-98%); however, some cases present other symptoms such as shortness of breath, diarrhea, abdominal pain, headache and vomiting (Del Rio and Malani, 2020). Most patients present with mild flu-like symptoms and recover within a few days; however, some cases may develop hypoxemia leading to cardiac arrest. Elderly people and those with comorbid conditions such as cardiovascular disease, hypertension, diabetes, and chronic obstructive pulmonary disease are more vulnerable to the infection and are likely to develop complications such as acute respiratory distress syndrome, arrhythmia, shock, acute renal failure, acute cardiac injury and other conditions leading to death (Guo et al., 2020). Ta'diya and Wabāī (infection and epidemic) and its management in Unani medicine Unani scholars postulated the basic form of germ theory a millennium ago, long before the transitional period began in the late 1850s.
Various Unani scholars have narrated the contagion theory and given detailed descriptions of Ta'diya (infection) and Wabāī (epidemic) in their texts. They named the causative factors of epidemics and putrefaction Ajsām Arḍiyya Khabītha (closely resembling the modern concept of microorganisms), which may pollute water and air. After invading the body, these factors cause Ta'diya (infection) in immunocompromised individuals and can migrate from diseased to healthy individuals (Baghdādī, 2005; Rushd, 1987; Sīnā, 2010). Unani scholars have advocated several measures for preventing the spread of epidemic and pandemic diseases, including quarantine, oxygenation and purification of air, and improving host immunity (Jurjānī, 2010; Rāzī, 1991; Samarqandī, 2007). A number of single and compound drugs have been prescribed by Unani scholars for the prevention, control and symptomatic treatment of infectious diseases and epidemics (Nikhat and Fazil, 2020). Tiryāq-i-Wabāī is one such time-tested Unani formulation used during epidemics, which may have great potential in the current pandemic. Tiryāq-i-Wabāī: overview Tiryāq-i-Wabāī is a well-documented formulation in Unani medicine, widely used as a prophylactic during epidemics of cholera, plague and other epidemic diseases. It comprises three ingredients, viz. Sibr/Aloe vera (Aloe vera (L.) Burm.f.), Murr Makki/myrrh (Commiphora myrrha (T.Nees) Engl.) and Zāfrān/saffron (Crocus sativus L.). During the literature survey, the authors found three different names, i.e. Tiryāq-i-Afāī, Tiryāq-i-Af'ā and Tiryāq-i-Wabāī, containing the same ingredients; moreover, in a few classical Unani texts the formulation is not mentioned by name, rather its composition is described and claimed to be very effective as a prophylactic during epidemics (Baghdādī, 2007; Rushd, 1987; Sīnā, 2010). In some books, the composition is mentioned separately while the formulation name is mentioned elsewhere with reference to Jālīnūs's statement, which creates confusion as to whether these formulations are the same or different. Rāzī (Rhazes, 865-925 CE) recounts from an old physician that 'whoever has used a mixture of two parts of Sibr, one part Zāfrān and one part Murr Makki, remained protected during epidemics'. Further, he reported from Jālīnūs (Galen) that Tiryāq-i-Af'ā is very effective during epidemics (Rāzī, 1991). Abū al-Manṣūr ibn Nūḥ al-Qamarī (10th century CE) followed the same pattern in his treatise 'Minhāj al-Ilāj' and discussed the composition separately; later he narrated from Jālīnūs that Tiryāq-i-Afāī has a miraculous effect during epidemics (Qamarī, 2008). Unani pharmacopoeias suggest that all of these are the same formulation, as it is mentioned in different pharmacopoeias under different names but with the same composition. The formulation is claimed to have antivenom properties and has been a very effective remedy during epidemics. The formulation was used by Galen and Avicenna in healthy persons as well as in patients during epidemics. It is indicated that the use of Tiryāq-i-Wabāī twice or thrice a week in a dose of 2-2.25 g, with Arq Gulāb 60 ml or Arq Bādiyān 120 ml, may protect the individual from infection during epidemics (Anonymous, 2006; Ghanī, 2019; Hafīz, 2005; Kabīruddīn, 1935).
According to the Unani system of medicine, all three ingredients of the formulation, Aloe vera, myrrh and saffron, fall under the category of Tiryāqi Advia (literally, antidote drugs) and are considered to be very effective in SARS-like conditions, especially in respiratory distress (Baghdādī, 2007; Kabīruddīn, 2007; Khan, 2011). These drugs have been reported to possess a wide range of pharmacological activities. Aloe vera has been reported to have anti-inflammatory, hepatoprotective, antiviral, antimicrobial, anticancer, immunomodulatory and antioxidant activities (Kumar et al., 2019). Saffron has been reported to have anti-HSV, anti-HIV (Soleymani et al., 2018), immunomodulatory, antioxidant, anticancer, chemopreventive, antigenotoxic, anti-inflammatory, antihypertensive, and antihyperlipidemic activities (Kianbakht and Ghazavi, 2011). Myrrh, on the other hand, has been reported to have antioxidant, anti-inflammatory, antimicrobial and antiviral activities (Fahad and Shameem, 2018; Ghadir and Ahmed, 2014; Mohammad et al., 2014). The formulation (Tiryāq-i-Wabāī) has been reported to possess immune-stimulation activity in immunocompromised elderly persons (Nigar and Itrat, 2013). The medicinal use of Aloe vera can be traced back thousands of years, and it has been used in the Unani system of medicine for various diseases, including digestive, respiratory, nervous system and skin disorders. The drug is known to possess anti-inflammatory, antiseptic, detergent and cleansing, purgative, deobstruent, and anthelmintic properties. It prevents sepsis and bodily decay, hence it was applied to dead bodies in the past; it evacuates morbid matter from the body and cleanses mainly the head, chest, stomach and joints; and it resolves obstructions in the liver, mesentery and other organs. It is claimed to be very effective in liver disorders, splenomegaly, bronchial asthma and other respiratory distress conditions; it is useful in non-healing ulcers and prevents the spread of septic wounds (Baitār, 1999; Ghanī, 2011). It has been reported that the use of Aloe vera in any form (oral intake, fumigation or spraying) has promising effects during epidemics. Gargling with Aloe vera in combination with myrrh is claimed to be very effective in shortness of breath (Khan, 2011). Potential of Tiryāq-i-Wabāī ingredients The pharmacological activities of herbs are attributed to the presence of biologically active compounds in the plants. Aloe vera contains several bioactive compounds, including vitamins, minerals, enzymes, polysaccharides, anthraquinones or phenolics, lignin, saponins, sterols, amino acids and salicylic acid, among others. It exhibits a wide range of pharmacological activities such as anti-inflammatory, antiviral, antimicrobial, antiseptic, immune-stimulating and wound healing activities (Kumar et al., 2019; Surjushe et al., 2008). Studies suggest that Aloe vera exerts antiviral activity via a number of mechanisms on different viruses. Saoo et al. (1996) reported interference with DNA synthesis as the major mechanism involved in the inhibitory effect of Aloe vera extract against human cytomegalovirus (HCMV).
A recent study demonstrated antiviral activity of emodin, an anthraquinone compound, against influenza A virus (IAV), and suggested that emodin could inhibit viral replication and influenza viral pneumonia via activation of nuclear factor E2-related factor 2 (Nrf2) signaling and by inhibiting IAV-induced activation of the Toll-like receptor 4 (TLR4), mitogen-activated protein kinase (MAPK) and nuclear factor kappa B (NF-κB) pathways (Dai et al., 2017). Antiviral activity of Aloe vera extract has also been reported against herpes simplex type 2 virus (HSV-2) via inhibition of virus replication at both the pre- and post-attachment stages of the virus to the host cell (Zandi et al., 2007). It has been reported that Aloe polysaccharide exerted significant antiviral activity against H1N1 subtype influenza virus in vivo and in vitro via direct interaction with PR8 (H1N1) influenza virus particles to prevent their adsorption and replication (Sun et al., 2018). Aloe vera may exert antiviral activity in two ways: directly and indirectly. Directly, it acts through biologically active compounds such as emodin that have a direct effect on the virus; indirectly, it acts by stimulating the host immune system. Aloe polysaccharide is known to have an immune-stimulating effect in addition to its other activities, and has a complex mechanism of antiviral activity: it can directly interfere with the virus and limit its infectivity, and it can also improve host immunity, which in turn can promote the differentiation of immature dendritic cells (Sun et al., 2018; Surjushe et al., 2008). Six antiseptic compounds (namely lupeol, urea nitrogen, cinnamonic acid, salicylic acid, phenol and sulphur) have been identified in Aloe vera and reported to have significant inhibitory effects on bacteria, fungi and viruses (Kar and Bera, 2018). Zāfrān / Saffron (Crocus sativus L.) Saffron is among the most acclaimed herbs, extensively used as a spice and a medicine to promote human health since ancient times (Leone et al., 2018). It is known to possess a wide range of therapeutic actions such as exhilarant, deobstruent, antispasmodic, anti-inflammatory, expectorant, antitussive, anticatarrhal, aphrodisiac, diaphoretic, stomachic and stimulant. It has been used widely as a tonic for vital organs including the brain, heart, lungs, liver and kidneys (Hosseini et al., 2018; Khazdair et al., 2015). In traditional medicines, saffron is used in treating numerous diseases such as fever, cold, asthma, chest pain, smallpox, scarlet fever, atherosclerosis, coronary artery disease, hypertension, diabetes, stomach disorders, dysuria, dysmenorrhea, renal colic, cancer, insomnia and neurodegenerative disorders (Baitār, 2000; Bukhari et al., 2018; Ghanī, 2011). The value of saffron in treating respiratory diseases is well acknowledged by many Unani scholars. It is reported that saffron is very effective in all kinds of altered respiratory function (Khan, 2011). Ibn Baitār (1197-1248) reported that saffron has antiseptic properties and prevents Khilṭ (humour) from sepsis. Further, he stated that 'saffron invigorates the pneuma and respiratory organs and facilitates respiration' (Baitār, 2000). Avicenna also stated that saffron, especially its oil, facilitates respiration and strengthens the respiratory organs (Hosseinzadeh and Nassiri-Asl, 2013). Current scientific studies on saffron and its major bioactive compounds, such as crocin, crocetin and safranal, indeed substantiate the claims made by Unani scholars about the therapeutic benefits of saffron.
Several in vitro and in vivo studies have shown that saffron has numerous biological activities, including smooth muscle relaxant, antitussive, anti-allergic, antibacterial, anti-inflammatory, antinociceptive, immunomodulatory, antioxidant, antispasmodic, anticancer, anti-genotoxic, antihypertensive, antidiabetic, neuroprotective, cardioprotective, hepatoprotective, nephroprotective, anti-Alzheimer's, anticonvulsant and antidepressant activities (Bukhari et al., 2018; Hosseini et al., 2018; Hosseinzadeh and Nassiri-Asl, 2013; Khazdair et al., 2015). Despite the wide use of saffron and the exhaustive scientific work done on it, its antiviral activity has hardly been reported. To our knowledge, only one study has reported an antiviral effect of saffron and its constituents. Soleymani et al. (2018) demonstrated significant anti-HSV and anti-HIV activity of the bioactive components of saffron, crocin and picrocrocin. Both components inhibited virus replication and suppressed viral penetration into the target cells (Soleymani et al., 2018). Recent computational studies suggest great potential of the saffron bioactive molecules crocin and crocetin for inhibiting the SARS-CoV-2 spike glycoprotein and main protease and limiting the virulence of the disease (Ahmed et al., 2021; Kordzadeh et al., 2020). Various in vitro and in vivo studies suggest the potential of saffron in respiratory diseases. Studies have demonstrated the relaxant effect of saffron and its compounds on various smooth muscles, including vascular, tracheal, gastrointestinal and urogenital smooth muscle (Mokhtari-Zaer et al., 2015). It is an established fact that the ratio of type 1 (Th1) to type 2 (Th2) T helper cells plays a major role in the occurrence of asthma and airway inflammation. Th1 cells produce cytokines such as interleukin (IL)-2, interferon gamma (IFN-γ) and tumor necrosis factor (TNF)-α, whereas Th2 cells produce IL-4, IL-5, IL-6, IL-9, IL-10 and IL-13. Various studies have reported a suppressant effect of saffron and its constituents, crocin and safranal, on airway inflammation and asthma, and demonstrated a stimulatory effect on Th1 cells and a suppressive effect on Th2 cells (Zeinali et al., 2019). A number of recent studies suggest saffron as a promising immunomodulatory agent for treating various immune disorders. It is reported that saffron and its constituents, crocin, crocetin and safranal, modulate inflammatory mediators, humoral immunity, and cell-mediated immune responses. Saffron reduces serum levels of the nuclear transcription factor κB (NF-κB) p65 subunit, TNF-α, IFN-γ, IL-1β, IL-6, IL-12 and IL-17A; it suppresses key pro-inflammatory enzymes such as myeloperoxidase (MPO), cyclooxygenase-2 (COX-2), inducible nitric oxide synthase (iNOS) and phospholipase A2; and it modulates the MAPK and NF-κB pathways. It controls the expression of genes encoding pro-inflammatory cytokines, inducible enzymes and adhesion molecules, which play vital roles in controlling inflammatory processes (Boskabady and Farkhondeh, 2016; Zeinali et al., 2019). There is a close interaction between the inflammatory response, oxidative stress and the immune system. Deregulation of the normal immune response may activate inflammatory pathways, resulting in inflammation that plays a vital role in the pathogenesis of several conditions including allergy, asthma and cardiovascular disease. It could be inferred that the immunomodulatory and anti-inflammatory effects of saffron may reverse these destructive processes and protect against various diseases (Boskabady and Farkhondeh, 2016).
Murr makki / Myrrh (Commiphora myrrha (T.Nees) Engl.) Murr makki, commonly known as myrrh, is an aromatic resin produced by the C. myrrha tree. It has been used as a medicine for millennia in different cultures, such as the Egyptian, Greek, Roman and Chinese, to treat various diseases (Ghadir and Ahmed, 2014; Shen et al., 2012). Myrrh has been used in the Unani system and other traditional systems of medicine to treat a number of diseases, including fever, common cold, chronic cough, diphtheria, tonsillitis, pharyngitis, bronchitis, flu, catarrh, asthma, arthritis, tumors and cancer, gastrointestinal and urogenital disorders, infectious diseases such as leprosy and syphilis, and septic wounds and other skin disorders (Alhussaini et al., 2015; Baitār, 2003; Ghanī, 2011). It is considered to be very beneficial during epidemics due to its antiseptic properties. Ancient Unani physicians used to apply it to dead bodies to prevent sepsis. It has also been utilized as an antidote for insect and snake bites (Baitār, 2003; Ghanī, 2011). Myrrh, alone and in combination with other suitable drugs, is considered to be very effective for asthma and respiratory distress in different forms of administration, i.e. oral use, fumigation and local application on the chest. It evacuates thick phlegm and pus from the lungs and facilitates respiration (Khan, 2011). The immunomodulatory effect of myrrh has been demonstrated in various studies. It induced significant improvement in the cellular immune response via stimulation of lymphocyte transformation and phagocytic activity, and it improved levels of IL-4 in a patient suffering from fascioliasis. It exhibited a protective effect against lead acetate (PbAc)-induced hepatic oxidative damage and immunotoxicity by reducing lipid peroxidation and enhancing antioxidant and immune defense mechanisms (Ashry et al., 2010). Though some review articles have claimed antiviral activity of myrrh essential oil against influenza virus type A (H1N1) and herpes simplex virus type 1 (HSV-1), this survey could not find any such primary study upon exhaustive online literature review. To our knowledge, only one study has investigated the antiviral effect of myrrh essential oil, which showed moderate antiviral activity against Newcastle disease virus (NDV) in chicken embryos (Ghadir and Ahmed, 2014). In view of the information summarized above about all three ingredients of Tiryāq-i-Wabāī, it may be reasonable to state that the traditional use of these drugs is supported by scientific evidence. It is worth mentioning here that Unani formulations contain multiple herbs, and a single herbal drug contains several bioactive compounds with a wide range of pharmacological activities. The complex mixture of these compounds may act synergistically on multiple targets. Hence, it may be assumed that the combination of all three ingredients of Tiryāq-i-Wabāī may produce enhanced antioxidant, immunomodulatory, antiviral and other beneficial effects and may, therefore, be useful in the treatment of COVID-19. With the ever-increasing use of herbal medicine, safety has become a major concern for both health authorities and the public across the globe. There are very limited data available on the potential adverse effects of herbal medicines. Though herbal medicines are considered to be less toxic than synthetic drugs, their misuse and self-medication are issues of great concern (Zhang et al., 2015). Thus, herbal medicines should be used with caution and only on the advice of a registered practitioner.
Self-medication and over-the-counter use should always be discouraged. It is an established fact that the immune system plays a vital role in fighting infection and protecting the body from infectious diseases. Maintenance of immune fitness is the prime concern in preventive healthcare (Nigar and Itrat, 2013). This has become even more important given the periodic outbreaks of infectious diseases. Strengthening and building a more resilient immune system is considered to be a sustainable way to survive the current pandemic (Aman and Masood, 2020). Tiryāq-i-Wabāī is among the formulations widely used as a prophylactic during epidemics, and it is not generally recommended for respiratory diseases as such. However, its ingredients are highly recommended by Unani scholars in treating SARS-like conditions including respiratory distress. The formulation has been reported to possess significant immune-stimulation activity in immunocompromised elderly persons, with no adverse effects at a dose of 500 mg thrice a week for 45 days (Nigar and Itrat, 2013). There have been no major side effects reported to date despite wide use of the formulation as a prophylactic in epidemic diseases. However, establishing the safety profile of the formulation is vital for its wide and global acceptability, and for providing an equally safe and effective remedy. Tiryāq-i-Wabāī may exert its protective effect against COVID-19 via a number of possible mechanisms. It may exert its effect by modulating inflammatory mediators, humoral immunity and cell-mediated immune responses. It may also directly interfere with the virus and limit its infectivity, as the ingredients of the formulation have shown inhibitory effects against various viruses via interference with DNA synthesis, inhibition of virus adsorption and replication, and suppression of viral penetration into target cells. Recently, a preliminary molecular docking study was carried out to generate in silico evidence and evaluate the potency of Tiryāq-i-Wabāī against the SARS-CoV-2 spike glycoprotein and main protease. The study result was encouraging, as the phytoconstituents present in the formulation exhibited good binding capacity, suggesting its potential in inhibiting the SARS-CoV-2 spike glycoprotein and main protease (Ahmed et al., 2021). The formulation may reduce cough frequency and facilitate respiration by relaxing airway smooth muscles, suppressing airway inflammation and reversing pathological changes in the lung. Hence, the formulation may be used as a prophylactic during the current and future pandemics to improve host immunity. It may also be used as an adjuvant therapy for symptomatic relief in infected individuals. Conclusion The novel coronavirus disease has severely affected livelihoods and led to a substantial loss of human life with catastrophic social and economic consequences. Although the disease is currently on the decline with the roll-out of vaccines, there is still a need to identify effective remedies capable of targeting virulence, augmenting immune resilience and protecting target organs. Traditional medical systems are being explored to search for equally effective remedies alongside conventional treatment. The Unani system of medicine may play a vital role in reducing the disease burden and improving overall wellbeing.
It would be fair to state that the combination of all three ingredients of the Unani pharmacopoeial formulation 'Tiryāq-i-Wabāī' (Aloe vera, saffron, and myrrh), with their bioactive compounds, may help strengthen the immune system and protect individuals from infection during current and future pandemics. However, these claims have to be validated through rigorous evidence-based research to establish the real effect of the formulation.
Using a Surrogate with Heterogeneous Utility to Test for a Treatment Effect The primary benefit of identifying a valid surrogate marker is the ability to use it in a future trial to test for a treatment effect with shorter follow-up time or less cost. However, previous work has demonstrated potential heterogeneity in the utility of a surrogate marker. When such heterogeneity exists, existing methods that use the surrogate to test for a treatment effect while ignoring this heterogeneity may lead to inaccurate conclusions about the treatment effect, particularly when the patient population in the new study has a different mix of characteristics than the study used to evaluate the utility of the surrogate marker. In this paper, we develop a novel test for a treatment effect using surrogate marker information that accounts for heterogeneity in the utility of the surrogate. We compare our testing procedure to a test that uses primary outcome information (gold standard) and a test that uses surrogate marker information, but ignores heterogeneity. We demonstrate the validity of our approach and derive the asymptotic properties of our estimator and variance estimates. Simulation studies examine the finite sample properties of our testing procedure and demonstrate when our proposed approach can outperform the testing approach that ignores heterogeneity. We illustrate our methods using data from an AIDS clinical trial to test for a treatment effect using CD4 count as a surrogate marker for RNA. Introduction There has been substantial growth in clinical and methodological research on identifying and using valid surrogate markers in the past few decades. A valid surrogate marker is a biological measurement that can be used as a replacement for a primary outcome of interest in a clinical study. Many statistical methods have been proposed to evaluate and validate surrogate markers using a wide variety of innovative methodological approaches. [22,4,26,12,17] The primary benefit of identifying a valid surrogate marker is the ability to use it in a future trial to test for a treatment effect with less required follow-up time or less cost. For example, the U.S. Food and Drug Administration announced in 2020 that a surrogate marker that could be measured earlier than COVID-19 infection could be used to assess vaccine efficacy in preventing infection. [3] Parast et al. (2019) [19] proposed a nonparametric approach to test for a treatment effect in a time-to-event outcome setting based on a surrogate marker measured at an earlier time point, utilizing information about the relationship between the surrogate marker and the primary outcome obtained from a prior study. Chen et al. (2020) [7] suggested a model-based approach that uses surrogate information to make interim decisions about whether to drop a treatment arm or stop a trial for futility. Price et al. (2018) [23] defined an optimal surrogate that optimally predicts a primary outcome and proposed super-learner and targeted super-learner based estimation procedures. Athey et al. (2019) [2] proposed combining multiple surrogate markers to predict a long-term outcome and estimate a treatment effect, and explicitly characterized the difference between the treatment effect estimated based on the primary outcome versus the surrogate combination. Previous clinical and methodological work has demonstrated potential heterogeneity in the utility of a surrogate marker, i.e.,
that a surrogate marker may be more useful (with respect to capturing the treatment effect on the primary outcome) for some subgroups than for others. [15] Parast et al. (2021) [20] offer a nonparametric estimation procedure and formal test for heterogeneity of surrogate utility with respect to a baseline covariate. When such heterogeneity exists, existing methods that use the surrogate to test for a treatment effect while ignoring this heterogeneity may lead to inaccurate conclusions about the treatment effect, particularly when the patient population in the current study has a different mix of characteristics than the prior study (used to evaluate the utility of the surrogate marker). For example, in the simulation study in this paper, we examine a setting where the estimated treatment effect based on the primary outcome is 33.7 (standard error [SE] = 1.6); applying the testing approach of Parast et al. (2019) [19], which uses surrogate marker information but does not account for heterogeneity, the estimated treatment effect on the primary outcome is 39.2 (SE = 3.5). The approach of Parast et al. (2019) [19] guarantees that the treatment effect based on the surrogate will be a lower bound for the true treatment effect on the primary outcome under certain conditions. However, these conditions may be violated when there is heterogeneity in the utility of the surrogate, thus leading to this type of situation where the estimated treatment effect using the surrogate is much higher than that using the primary outcome. The approach we propose in this paper, which incorporates heterogeneity, produces a treatment effect estimate that retains the lower bound property, with power similar to that of the test using the primary outcome. While we focus on heterogeneity with respect to a continuous baseline covariate, we provide a motivational example in Appendix A where there is heterogeneity with respect to a discrete covariate, gender. In this example, the surrogate marker is strong among males (explaining 99% of the treatment effect on the primary outcome) but weaker among females (explaining 67%). In a new study where the distribution of gender is 95% female and 5% male and the treatment effect on the primary outcome is 38.95, using the surrogate marker and accounting for heterogeneity in surrogacy produces an estimated treatment effect on the primary outcome equal to 17.95, while ignoring heterogeneity produces an estimate of 44.5, again failing to correctly provide a lower bound on the true treatment effect. In contrast, if we consider a future study where the distribution of gender is 5% female and 95% male, the treatment effect on the primary outcome is 74.05, while the treatment effect using the surrogate is 71.05 when accounting for heterogeneity versus 44.5 when not accounting for heterogeneity, indicating a potential loss in power to detect a treatment effect when heterogeneity is ignored. In this paper, we develop a novel test for a treatment effect using surrogate marker information that accounts for heterogeneity in the utility of the surrogate. We compare our testing procedure to a test that uses primary outcome information only (gold standard) and a test that uses surrogate marker information, but ignores heterogeneity. We demonstrate the validity of our testing procedure and derive the asymptotic properties of our estimator and variance estimates.
A simulation study is used to examine the finite sample properties of our testing procedure and demonstrate when our proposed approach can outperform the testing approach that ignores heterogeneity. In particular, we demonstrate examples where the test of Parast et al. (2019) [19] provides an incorrect estimate with respect to the treatment effect. We illustrate our approach using data from an AIDS clinical trial to test for a treatment effect using CD4 count as a surrogate marker for plasma HIV-1 RNA. 2 Testing Procedure Notation and Setting We focus on a setting where we are currently conducting a study to examine the effect of a treatment on a primary outcome of interest, denoted by Y, and we additionally have data available from a prior study. We assume that this prior study was used to examine the strength of the surrogate, denoted by S, and heterogeneity in the utility of the surrogate, and has measurements of both Y and S (whereas the current study need not measure Y). Let Z denote the treatment indicator, where treatment is randomized and Z ∈ {0, 1} (i.e., treatment vs. control), and W denote a baseline covariate such that S has been shown to have heterogeneous utility with respect to this covariate. Without loss of generality, we take W to be continuous; all proposed procedures can easily accommodate a discrete W as well. We focus on a setting with heterogeneity with respect to a single baseline covariate W; in Section 3.3, we discuss an extension to multiple W. In addition, we assume we are in a setting where either S is measured earlier than Y or S is measured at the same time as Y but is less expensive, invasive or burdensome, and there is no censoring or missing data. Throughout this paper, we quantify surrogate strength/utility using the proportion of the treatment effect on the primary outcome explained by the treatment effect on the surrogate marker. [11,26,17] We use potential outcomes notation where each person has a set of potential outcomes {Y^(1), Y^(0), S^(1), S^(0)}, where Y^(g) is the outcome when Z = g and S^(g) is the surrogate when Z = g. Observed data from the current study are denoted D = {(Y_gi, S_gi, W_gi), i = 1, ..., n_g; g = 0, 1}, where n_g denotes the number of individuals in treatment group g. The goal in the current study is to test for a treatment effect on the primary outcome, quantified as ∆ = E(Y^(1)) − E(Y^(0)), i.e., to test H_0: ∆ = 0. Our aim is to leverage information from the prior study to test H_0 using surrogate marker information in order to reduce study follow-up time, costs, and/or participant burden, i.e., making inference on ∆ without using {Y_gi, i = 1, ..., n_g; g = 0, 1}. We use a superscript p to denote "prior" when referring to data or quantities from the prior study. For example, we denote observed data from the prior study by D^p = {(Y^p_gi, S^p_gi, W^p_gi), i = 1, ..., n^p_g; g = 0, 1}, where n^p_g is the sample size of treatment group g. Assumptions Given that our setting rests on the existence of a valid surrogate marker, we first define S to be a valid surrogate marker for Y if the following conditions hold: (C1)-(C3), each of which is required to hold for all s and w; and (C4) a large proportion of the treatment effect on the primary outcome can be explained by the treatment effect on the surrogate marker for all w.
Assumptions (C1)-(C3) are parallel to those required in Wang and Taylor (2002). In order to ensure that the proposed test statistic, described in Section 2.3, has a reasonable interpretation with respect to ∆, we also require: (C5) µ_0(s, w) = µ^p_0(s, w) for all s and w, where µ_g(s, w) = E(Y^(g) | S^(g) = s, W = w) in the current study and µ^p_0(s, w) = E(Y^(0p) | S^(0p) = s, W^p = w) in the prior study; and (C6) the support of (S^(0p), W^p) in the prior study covers Ω_J, where Ω_J is the common compact support for both (S^(g), W^(g)) in g = 0, 1. Assumption (C5) implies that in the control groups, the current study and the prior study share the same conditional expectation for Y given S and W. This assumption is reasonable when, for example, the control conditions in both studies are the same, such as "usual care." Importantly, such an assumption is not required to hold for the treatment groups, and it relaxes the requirement that the distribution of Y conditional on S be transportable from the prior to the current study. Even so, this assumption is admittedly very strong and needs to be carefully considered before using this approach; however, any testing procedure that attempts to borrow information from a prior study to test a hypothesis in a future study is going to require some type of strong transportability assumption. If there is reason to believe that such transportability between studies is not appropriate, then the prior study should not be considered for informing the future study. Assumption (C6) ensures that we can approximate E(Y^(0) | S^(0) = s, W = w) for all observed pairs of S^(g) and W^(g), g = 0, 1, in the current study. We discuss robustness to these assumptions as well as additional assumptions needed for a causal interpretation in Appendix B. Proposed Testing Procedure Recall that our aim is to take advantage of information from the prior study to test H_0 using surrogate marker information such that this test accounts for known heterogeneity in the utility of the surrogate marker. To achieve this goal we note that ∆ can be expressed as ∆ = ∫∫ µ_1(s, w) dF^(1)(s | w) dF_W(w) − ∫∫ µ_0(s, w) dF^(0)(s | w) dF_W(w) (1), where F^(g)(s | w) is the conditional cumulative distribution function of S^(g) given W = w, and F_W(w) is the cumulative distribution of W. In expressing ∆ as (1), we have simply used a conditional expectation to incorporate S and W into our expression. By expressing ∆ in this way, this motivates the following earlier treatment effect definition: ∫ µ_0(s, w) dF^(1)(s, w) − ∫ µ_0(s, w) dF^(0)(s, w) (2) = ∫ µ^p_0(s, w) dF^(1)(s, w) − ∫ µ^p_0(s, w) dF^(0)(s, w) (3), where F^(g)(s, w) is the cumulative distribution function of (S^(g), W) in the current study. The only change in going from (1) to (2) is that we have replaced µ_1(s, w) with µ_0(s, w) in the first term, which will ensure that this quantity provides a lower bound on the treatment effect. In the second equality, (3), we replace µ_0(s, w) with µ^p_0(s, w), which follows from Assumption (C5). The expression (3) is now a quantity that only involves µ^p_0(s, w), which is the conditional risk in the prior study, and the distribution of S and W in the current study. Importantly, the expression does not involve Y from the current study at all. In practice, µ^p_0(s, w) is unknown and must be replaced with an estimate, µ̂^p_0(s, w), which we describe in Section 3.1. Because of this, we define the following earlier average treatment effect quantity, where the notation makes the dependence on information from the prior study explicit: ∆_H = E{µ̂^p_0(S^(1), W)} − E{µ̂^p_0(S^(0), W)}. This quantity, ∆_H, measures the treatment effect on a transformation of the surrogate marker and baseline covariate, i.e., the difference between µ̂^p_0(S^(1), W) and µ̂^p_0(S^(0), W). First, due to randomization, W has the same distribution between the two treatment groups and ∆_H has an appealing causal interpretation reflecting the treatment effect on the surrogate marker.
Second, ∆_H represents the part of the treatment effect on the primary outcome explained by the surrogate marker, and an approximation to the quantity of our primary interest, ∆. Under the null hypothesis of no average treatment effect on the primary outcome, there will also be no average treatment effect in any subgroup of patients with W = w (see Appendix B). Under the null, Assumptions (C1)-(C3) imply that S^(1) | W = w has the same distribution as S^(0) | W = w for all w in the support of W, and thus, ∆_H = 0. Therefore, we may formally define our test statistic for H_0 based on the early average treatment effect as Z_H = √n ∆̂_H / σ̂_H, where ∆̂_H is a root-n consistent estimate of ∆_H and σ̂²_H is the estimated variance of √n(∆̂_H − ∆_H); we reject H_0 when |Z_H| is large. Alternative Testing Approaches We consider two alternative tests that would be reasonable options for testing H_0 in this setting. The first, quite obvious, approach is simply to assume the primary outcome is measured in the current study and use primary outcome information to estimate ∆ and conduct a t-test of H_0: ∆ = 0. This reflects the gold standard as it directly tests the hypothesis we are interested in. Importantly though, the whole point of this setting is to provide a way to not have to measure the primary outcome. We include this option so that we can compare to this gold standard. The second alternative test we examine is one which uses information from the prior study about the relationship between the surrogate and the primary outcome, but does not account for heterogeneity. This test is an extension of a test proposed in Parast et al. (2019) [19], which was developed for the time-to-event outcome setting. Our description of it here, for a non-survival setting, is new and will be useful in practice for those analyzing a non-survival study in a setting with no heterogeneity in the utility of the surrogate. Similar to our proposed test, but without regard for W, we note that ∆ = ∫ µ_1(s) dF^(1)(s) − ∫ µ_0(s) dF^(0)(s), where µ_g(s) = E(Y^(g) | S^(g) = s) and F^(g)(s) is the cumulative distribution function of S^(g) in the current study, which motivates the following earlier treatment effect definition: ∆_P = ∫ µ̂^p_0(s) dF^(1)(s) − ∫ µ̂^p_0(s) dF^(0)(s), where µ̂^p_0(s) is a consistent estimator of µ^p_0(s). As with the proposed test, this early treatment effect quantity replaces µ_g(s) with µ_0(s) for both treatment groups and will ensure it is a lower bound on ∆ under certain conditions. This test, however, requires the assumption that µ_0(s) = µ^p_0(s), that is, that this conditional expectation in the control group is the same in the current study as in the prior study. It is important to note that this assumption may not hold when there is heterogeneity in the utility of the surrogate marker. To test H_0: ∆ = 0, we instead test H_0P: ∆_P = 0 and define the test statistic for H_0P based on the early treatment effect as Z_P = √n ∆̂_P / σ̂_P, where ∆̂_P is a root-n consistent estimate of ∆_P and σ̂²_P is the estimated variance of √n(∆̂_P − ∆_P). We reject H_0P (and H_0) when |Z_P| is large. In Appendix C, we discuss estimation and testing for ∆ using the primary outcome, propose estimation procedures to obtain ∆̂_P and σ̂_P, and discuss why we do not consider directly testing the surrogate. Intuitively, we would expect that both our proposed test and this test based on ∆_P should work well when there is no heterogeneity. When there is heterogeneity, we expect that the test based on ∆̂_P (or even ∆_P) could lead to erroneous conclusions about the treatment effect and/or have less power than the proposed test.
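To make the second alternative test concrete, the sketch below computes a plug-in version of ∆̂_P using a univariate Nadaraya–Watson estimate of µ^p_0(s) fit on the prior-study control arm. It is a minimal illustration only: the Gaussian kernel, the rule-of-thumb bandwidth, and the bootstrap standard error are simplifying assumptions made here, not the estimator or variance formula proposed in Appendix C.

```python
import numpy as np

def nw_mean_1d(s_eval, s_train, y_train, h):
    """Univariate Nadaraya-Watson estimate of E(Y | S = s) with a Gaussian kernel."""
    s_eval = np.asarray(s_eval, dtype=float)
    w = np.exp(-0.5 * ((s_eval[:, None] - s_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def delta_p_hat(s1_cur, s0_cur, s0_prior, y0_prior, h=None):
    """Plug-in Delta_P: apply the prior-study control-arm mean function mu_0^p(s)
    to the surrogate values from both arms of the current study."""
    s0_prior = np.asarray(s0_prior, dtype=float)
    y0_prior = np.asarray(y0_prior, dtype=float)
    if h is None:  # rule-of-thumb bandwidth (an illustrative choice, not the paper's)
        h = 1.06 * s0_prior.std() * len(s0_prior) ** (-1 / 5)
    mu1 = nw_mean_1d(s1_cur, s0_prior, y0_prior, h)
    mu0 = nw_mean_1d(s0_cur, s0_prior, y0_prior, h)
    return mu1.mean() - mu0.mean()

def z_p_statistic(s1_cur, s0_cur, s0_prior, y0_prior, n_boot=500, seed=0):
    """Wald-type statistic for H0P: Delta_P = 0, with a bootstrap SE over the current study."""
    rng = np.random.default_rng(seed)
    s1_cur, s0_cur = np.asarray(s1_cur, float), np.asarray(s0_cur, float)
    est = delta_p_hat(s1_cur, s0_cur, s0_prior, y0_prior)
    boots = [delta_p_hat(rng.choice(s1_cur, len(s1_cur)), rng.choice(s0_cur, len(s0_cur)),
                         s0_prior, y0_prior) for _ in range(n_boot)]
    return est, est / np.std(boots)
```

One would reject H_0P when the returned Wald-type statistic exceeds the standard normal critical value in absolute value, mirroring the description of Z_P above.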
3 Estimation and Inference Estimation of Proposed ∆_H For our proposed testing procedure, we first define µ̂^p_0(s, w) and m̂_g(w; µ), g = 0, 1, as nonparametric smoothed estimators of the conditional expectation of Y^(0) given (S^(0), W) = (s, w) in the prior study, and of the conditional expectation of µ(S^(g), W) given W = w for a bivariate function µ(·, ·) in the current study, respectively. Here, K(·) is a smooth symmetric density function with finite support, h_0, h_1, h_2, h_3 are specified bandwidths which may be data dependent, and n^p_0 denotes the sample size of group Z = 0 in the prior study. We utilize undersmoothing and select all bandwidths throughout to be of order O(n^{−δ}), δ ∈ (1/4, 1/2), to eliminate the asymptotic bias, where n = n_1 + n_0, in an effort to avoid a need for bias correction in subsequent statistical inference. A very straightforward estimate of ∆_H would be n_1^{−1} Σ_i µ̂^p_0(S_1i, W_1i) − n_0^{−1} Σ_i µ̂^p_0(S_0i, W_0i) (4), which simply takes our estimated conditional mean function from the prior study and applies it to data in the current study. However, it is possible for us to improve upon this estimator in terms of efficiency. To do this, we note that ∆_H can equivalently be written as an expectation over the distribution of W of the smoothed quantities m_g(W; µ̂^p_0), and thus we now consider an estimate of ∆_H, (5), based on these smoothed quantities, which is asymptotically equivalent to (4). Note that this estimate only uses S^(g) and W data from the current study (no Y data from the current study) and µ̂^p_0(s, w), which in turn depends on S^(0p), W^p, Y^(0p) data in group Z = 0 from the previous study. While either (4) or (5) would be a consistent estimate of ∆_H, we utilize the fact that the distributions of W in the two treatment arms are identical due to randomization and construct the augmented estimator ∆̂_H given in (6). We show in Appendix D that (6) improves upon the efficiency of (5). Essentially, ∆̂_H is equivalent to an augmented version of the simple estimator (described below), taking advantage of the independence of W and treatment, since treatment was randomized. In Appendix D we show that, conditional on µ̂^p_0(·, ·), ∆̂_H is a consistent estimate of ∆_H, and that √n{∆̂_H − ∆_H} weakly converges to a mean zero normal distribution as n → ∞. A consistent estimate of the conditional variance of ∆̂_H given the prior study, σ²_H, can be obtained as σ̂²_H, based on π_g = n_g/n and the transformed observations Ŝ_gi = µ̂^p_0(S_gi, W_gi). Remark. The efficiency of the simple estimator can be improved by considering the fact that E[m(W_1i; µ^p_0)] = E[m(W_0i; µ^p_0)] for any transformation m(·), due to randomization. Specifically, one may consider a new class of consistent estimators indexed by m(·): R → R, with the optimal choice of m(·) being the one that minimizes the asymptotic variance. In practice, this optimal choice m_opt(w) can be consistently estimated by a weighted combination of m̂_1(w; µ̂^p_0) and m̂_0(w; µ̂^p_0) with weights π_0 and π_1. Denote the resulting estimator of ∆_H by ∆̂_H^AUG. In Appendix D we show that, conditional on µ̂^p_0(·, ·), √n{∆̂_H^AUG − ∆_H} weakly converges to a mean zero normal distribution as n → ∞, that its asymptotic variance, σ²_AUG, can be consistently estimated, and that ∆̂_H^AUG is asymptotically equivalent to our proposed ∆̂_H.
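As an illustration of the estimation steps just described, the sketch below computes the simple plug-in estimate (4) of ∆_H with a bivariate product-kernel Nadaraya–Watson estimate of µ^p_0(s, w) and forms a Wald-type statistic. The Gaussian product kernel, the rule-of-thumb bandwidths shrunk toward undersmoothing, and the bootstrap standard error used in place of σ̂²_H are all simplifying assumptions; the augmented estimator and closed-form variance of the paper (implemented in the authors' hettest R package mentioned in the Discussion) are not reproduced here.

```python
import numpy as np

def nw_mean_2d(s_eval, w_eval, s_train, w_train, y_train, h_s, h_w):
    """Bivariate Nadaraya-Watson estimate of E(Y | S = s, W = w), product Gaussian kernels."""
    s_eval, w_eval = np.asarray(s_eval, float), np.asarray(w_eval, float)
    ks = np.exp(-0.5 * ((s_eval[:, None] - s_train[None, :]) / h_s) ** 2)
    kw = np.exp(-0.5 * ((w_eval[:, None] - w_train[None, :]) / h_w) ** 2)
    k = ks * kw
    return (k @ y_train) / k.sum(axis=1)

def rot_bandwidth(x, power=-0.3):
    """Rule-of-thumb bandwidth scaled toward undersmoothing; the exponent -0.3 is an
    illustrative choice within the O(n^-delta), delta in (1/4, 1/2), range in the text."""
    x = np.asarray(x, float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 1.06 * min(x.std(), iqr / 1.34) * len(x) ** power

def delta_h_simple(cur, prior):
    """Simple plug-in estimate (4): fit mu_0^p(s, w) on the prior-study control arm and
    apply it to (S, W) from both arms of the current study, then difference the arm means."""
    s0p, w0p, y0p = (np.asarray(prior[k], float) for k in ("s0", "w0", "y0"))
    h_s, h_w = rot_bandwidth(s0p), rot_bandwidth(w0p)
    mu1 = nw_mean_2d(cur["s1"], cur["w1"], s0p, w0p, y0p, h_s, h_w)
    mu0 = nw_mean_2d(cur["s0"], cur["w0"], s0p, w0p, y0p, h_s, h_w)
    return mu1.mean() - mu0.mean()

def z_h_statistic(cur, prior, n_boot=500, seed=0):
    """Wald-type statistic for H0 based on Delta_H, with a bootstrap SE over the current
    study only (the prior study is held fixed, as in the paper)."""
    rng = np.random.default_rng(seed)
    cur = {k: np.asarray(v, float) for k, v in cur.items()}
    est = delta_h_simple(cur, prior)
    boots = []
    n1, n0 = len(cur["s1"]), len(cur["s0"])
    for _ in range(n_boot):
        i1, i0 = rng.integers(0, n1, n1), rng.integers(0, n0, n0)
        boot = {"s1": cur["s1"][i1], "w1": cur["w1"][i1],
                "s0": cur["s0"][i0], "w0": cur["w0"][i0]}
        boots.append(delta_h_simple(boot, prior))
    return est, est / np.std(boots)
```

With prior holding the earlier trial's control-arm surrogate, covariate and outcome values, and cur holding the current study's surrogate and covariate values by arm, z_h_statistic(cur, prior) returns the plug-in ∆̂_H and its Wald-type statistic.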
This is a reasonable assumption given that in practice, there is truly some previously conducted prior study which one is using to inform testing in the current study. However, one could argue that this prior study should be considered random and that all inference should be derived as such. In such a case, the estimation of our point estimate ∆ H would remain the same but the standard estimation and confidence interval construction would be more complex. Multiple Baseline Covariates While in this paper we focus only on heterogeneity with respect to a single baseline covariate, it may be the case that there is heterogeneity with respect to multiple baseline covariates. In such a case, one still can consider a straightforward estimator for the treatment effect using surrogate marker and baseline covariates: line covariate vector of interest (including an intercept term, with a slight abuse of notation). The difficulty is that fully nonparametric estimation of µ 0 (s, w) will likely be infeasible for practical sample sizes with a vector W of moderate dimension, e.g., ≥ 3. In such a case, one may be willing to consider a parametric or semi-parametric model. For example, an estimator can be obtained based on a simple regression model µ 0 (s, where g Y (·) is a known, strictly increasing link function and β 0 and β 1 are unknown regression coefficients to be estimated based on the prior study. Alternatively, one could consider a more flexible varying coefficient model for .., β L (s)} , and β l (s) is the unknown smooth function of s to be estimated nonparametrically. This modeling approach would allow complex interactions between S and W. Here, we use the additional subscript m in µ (p) 0m (·, ·) to emphasize the fact that this estimator of µ 0 (·, ·) will now be fully or partially dependent on model assumptions, i.e., model-based. Certainly, given this model dependence, robustness (or lack thereof) to model misspecification would need to be carefully considered when using this approach in practice. Simulation Goals and Setup The two main goals of our simulation study were: 1) to examine the finite sample properties of our estimation procedure for ∆ H in terms of bias, accuracy of our variance calculation, and coverage of constructed confidence intervals, and 2) to compare testing results based on the three different testing quantities: ∆ (using the primary outcome, gold standard) vs. ∆ P (using the surrogate marker, ignoring heterogeneity) vs. ∆ H (using the surrogate marker, accounting for heterogeneity). For the testing results, we focus on the point estimates themselves, the resulting effect sizes (point estimate/standard error estimate), and power. Importantly, when there is heterogeneity, we do not necessarily aim to demonstrate improved power with our proposed approach but rather, to demonstrate settings where the testing procedure using ∆ P (using the surrogate marker, ignoring heterogeneity) can be incorrect. To achieve these goals, we examined eight simulation settings. For all settings, results were summarized over 500 replications; we examined all settings with (n p 1 , n p 0 ) = (1000, 800) (sample sizes in prior study) and (n 1 , n 0 ) = (300, 300) (sample sizes in current study). All simulation settings were also repeated with (n p 1 , n p 0 ) = (300, 300) (sample sizes in prior study) and (n 1 , n 0 ) = (300, 300); results were similar and are not shown here. 
In setting 1, we generated data such that there was heterogeneity in the utility of the surrogate with respect to a baseline covariate and the distribution of this baseline covariate was different in the current study compared to the prior study. Specifically, in the prior study, which is treated as fixed, W^p_gi ∼ U(0, 10), S^p_1i was generated from a gamma distribution shifted toward larger values than that of the control group, and S^p_0i ∼ gamma(shape = 2.5, scale = 2.5). We then generated the outcomes from models in which, throughout, N(a, b) indicates a normal distribution with mean a and variance b. The motivation behind this setup was (a) to generate a surrogate marker where higher values are desirable and the surrogate level tends to be higher in the treated group, and (b) to generate an outcome where the surrogate marker is positively associated with the outcome but this association is stronger in magnitude in the treated group, reflecting residual treatment effect beyond the surrogate marker. In addition, to induce heterogeneity, we generated data such that the treatment effect on the primary outcome and the association between the primary outcome and the surrogate marker depend on whether the covariate is less than or greater than 5. With this setup, there was statistically significant heterogeneity in surrogacy based on the test for heterogeneity proposed by Parast et al. (2021); the estimated proportion of the treatment effect explained by the surrogate marker was 0.52 for W^p_gi < 5 and 0.95 for W^p_gi ≥ 5, g ∈ {0, 1}. In this setting, (S_gi, Y_gi) | W_gi in the current study were generated the same as in the prior study, but W_1i and W_0i were generated from a U(0, 4) distribution, which is different from the prior study. Note that for all patients in the current study, the surrogate is not very strong and thus, we would expect that using the surrogate but ignoring heterogeneity will lead to an overestimation of the treatment effect. While the variability of the primary outcome, Y_gi, is large in both treatment groups, the size of the treatment effect is large as well. For example, in this setting, our results will show that the average estimated treatment effect on the outcome in the current study is 14.10, and the empirical power of testing the treatment effect is 100% using the primary outcome only. In setting 2, W^p_gi and Y^p_gi | S^p_gi, W^p_gi in the prior study were generated exactly the same as in setting 1, but S^p_1i ∼ gamma(shape = 2.66, scale = 2.66) and S^p_0i ∼ gamma(shape = 2.5, scale = 2.5). The motivation behind this change in the distributions for the surrogate marker is that we aimed to make the treatment effect on both the primary outcome and the surrogate marker smaller than in setting 1, in order to explore how the various tests performed when less power would be expected. As in setting 1, there was significant heterogeneity in surrogacy, with the estimated proportion of the treatment effect explained by the surrogate being 0.39 for W^p_gi < 5 and 0.90 for W^p_gi ≥ 5. The current study was generated the same as the prior study except that W_1i and W_0i were generated from a U(6, 10) distribution. In contrast to setting 1, for all patients in the current study the surrogate is strong and thus, we would expect that using the surrogate but ignoring heterogeneity will lead to an underestimation of the treatment effect. With respect to the size of the treatment effect and empirical power in this setting, our results will show that the average treatment effect on the outcome in the current study is 13.34, and the empirical power of testing the treatment effect is 69% using the primary outcome only.
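The sketch below generates data with the qualitative structure described for settings 1 and 2: a uniform baseline covariate, gamma-distributed surrogates shifted upward under treatment, and an outcome model whose residual treatment effect and S–Y association change at W = 5 so that the surrogate is weaker for W < 5. The specific coefficients and the treated-arm gamma parameters are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gen_study(n1, n0, w_low, w_high):
    """One study with the qualitative structure of setting 1; coefficients are placeholders."""
    w1, w0 = rng.uniform(w_low, w_high, n1), rng.uniform(w_low, w_high, n0)
    # surrogate: gamma-distributed, shifted upward in the treated arm
    s1 = rng.gamma(shape=3.0, scale=3.0, size=n1)   # treated-arm parameters assumed
    s0 = rng.gamma(shape=2.5, scale=2.5, size=n0)   # control-arm parameters as in the text
    # outcome: residual (non-surrogate) treatment effect is large only when W < 5,
    # so the surrogate explains more of the treatment effect when W >= 5
    weak1, weak0 = (w1 < 5).astype(float), (w0 < 5).astype(float)
    y1 = 5.0 + 5.0 * s1 + 8.0 * weak1 + rng.normal(0, 4, n1)
    y0 = 3.0 + 4.0 * s0 + rng.normal(0, 4, n0)
    return {"s1": s1, "w1": w1, "y1": y1, "s0": s0, "w0": w0, "y0": y0}

prior = gen_study(1000, 800, 0, 10)    # prior study: W ~ U(0, 10)
current = gen_study(300, 300, 0, 4)    # setting-1-style current study: W ~ U(0, 4)
```

Draws like these can then be passed to the estimators sketched in Section 3.1 to explore the qualitative pattern reported in Table 2: when the current-study covariates concentrate where the surrogate is weak, the test that ignores W tends to overstate the treatment effect.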
In setting 3, (W_gi, S_gi) in the prior study were generated as in setting 2, but the outcome model for Y^p_gi (with error N(0, 16)) was modified so as to explicitly make the surrogate useless among those with W^p_gi < 5, i.e., a more extreme version of setting 2. As expected, there was significant surrogacy heterogeneity, with the treatment effect on the surrogate marker not explaining any of the treatment effect on the primary outcome among patients with W^p_gi < 5, and explaining the majority of the treatment effect on the primary outcome among patients with W^p_gi ≥ 5 (proportion explained ≈ 0.92). Similar to setting 2, the current study was generated the same as the prior study except that W_1i and W_0i were generated from a U(6, 10) distribution and thus, we expect a potentially larger gain in power using our proposed approach (though again, this is not our primary goal). With respect to the size of the treatment effect and empirical power in this setting, our results will show that the average treatment effect on the primary outcome in the current study is 13.34, and the empirical power of testing the treatment effect is 69% using the primary outcome only, parallel to setting 2. In setting 4, the prior study was generated exactly the same as in setting 1, and the current study was generated exactly the same as the prior study, i.e., W_1i and W_0i were generated from a U(0, 10) distribution. Here, even though there is heterogeneity as described above for setting 1, since the covariate distribution is the same in the prior and current studies, we expect the tests ignoring vs. accounting for heterogeneity to produce similar results. With respect to the size of the treatment effect and empirical power in this setting, our results will show that the average treatment effect on the primary outcome in the current study is 19.12, and the empirical power of testing the treatment effect is 96% using the primary outcome only. In setting 5, data were generated such that there is no heterogeneity. Specifically, in the prior study, W^p_1i ∼ U(0, 10), W^p_0i ∼ U(0, 10), S^p_1i ∼ gamma(shape = 2.78, scale = 2.78), S^p_0i ∼ gamma(shape = 2.5, scale = 2.5), Y^p_1i = 3.5 + 5 S^p_1i + N(0, 1), and Y^p_0i = 3.2 + 4 S^p_0i + N(0, 1), independent of the baseline covariate. The proportion of the treatment effect explained by the surrogate in the prior study was 0.47, which is homogeneous in the study population. Data from the current study were distributed the same as for the prior study. The purpose of this setting was to examine how the tests perform when there is no heterogeneity and no difference in distribution from the prior study to the current study. With respect to the size of the treatment effect and empirical power in this setting, our results will show that the average treatment effect on the outcome in the current study is 13.90, and the empirical power of testing the treatment effect is 100% using the primary outcome only. In setting 6, data are generated similarly to setting 1 but with lower variability in the primary outcome, resulting in a much larger effect size. In the prior study, W^p_gi ∼ U(0, 10), and the surrogate and outcome were generated as in setting 1 but with smaller error variance. There was substantial heterogeneity in the utility of the surrogate, with the proportion of the treatment effect explained by the surrogate being 0.67 for W^p_gi < 5 and 0.98 for W^p_gi ≥ 5. In the current study, S and Y were generated the same as in the prior study, but W_1i and W_0i were generated from a U(0, 4) distribution.
As in setting 1, since the surrogate is not very strong in the current study, we would expect that using the surrogate but ignoring heterogeneity will lead to an overestimation of the treatment effect. With respect to the size of the treatment effect and empirical power in this setting, our results will show that the average treatment effect on the outcome in the current study is 33.70, and the empirical power of testing the treatment effect is 100% using the primary outcome only. Settings 7 and 8 reflect a null treatment effect setting and we include them so that we may examine the empirical Type 1 error rate. In both settings, data from the prior study are generated as W^p_gi ∼ U(0, 10), S^p_gi ∼ gamma(shape = 2.5, scale = 2.5), and Y^p_gi = 3.2 + 4 S^p_gi + N(0, 16) for g = 0, 1. That is, there is neither a treatment effect on the surrogate marker nor a treatment effect on the primary outcome, and S_gi and Y_gi are positively associated. In setting 7, data in the current study are generated exactly as in the prior study. In setting 8, data in the current study are generated such that (S_gi, Y_gi) | W_gi are generated the same as in the prior study, but W_gi ∼ U(0, 4), g ∈ {0, 1}, i.e., the distribution of the baseline covariate is different in the current study. The purpose of setting 8 is to specifically examine estimation and testing when there is no treatment effect and no heterogeneity, but the current study does have a different patient population compared to the prior study. In both settings, the true treatment effect on the primary outcome is 0 and the empirical Type 1 error of the test using the primary outcome is 0.06. In both settings, there is no empirical evidence that S is an "informative" surrogate marker, and no empirical evidence of heterogeneity in surrogacy, as expected. With respect to our bandwidth selection, we let h_0 = 1.06 × min(σ_{W_0}, IQR_0/1.34) n^{−δ}, with δ chosen within the undersmoothing range described in Section 3.1, and the remaining bandwidths chosen analogously. Simulation Results Table 1 shows estimation results for ∆_H for all settings, using our proposed estimating procedure. We examine bias and coverage with respect to ∆_H (treating the prior study as fixed). These results demonstrate good performance with minimal bias, average standard error estimates that are close to the empirical standard error, and coverage of the confidence intervals close to the nominal value of 95%. Table 2 shows results from testing using ∆, ∆_P, and ∆_H. In setting 1, where there is heterogeneity and the distribution of W in the current study is different from the prior study, results show that ∆̂_P overestimates the treatment effect and thus does not retain the lower boundedness property. In contrast, our approach using ∆̂_H does not overestimate the treatment effect. The power using ∆_H is smaller than that using ∆, but this is expected since the data generation in this setting is such that the population in the current study is composed largely of individuals for whom the surrogate marker is not very strong. In setting 2, where there is again heterogeneity and the distribution of W in the current study is different from the prior study, results show that both ∆̂_P and ∆̂_H are less than ∆̂, but ∆̂_H is much closer to ∆̂ and has power equivalent to that using ∆. This, again, is what was expected since the data generation in this setting is such that the population in the current study is composed largely of individuals for whom the surrogate marker is strong.
In setting 3, which is similar to setting 2 but with more extreme data in which the surrogate is useless for those with W < 5, results show a larger departure of ∆_P from ∆ and a larger decrease in power for ∆_P compared to ∆_H. In setting 4, where there is heterogeneity but the distribution of W is the same in the prior study and the current study, we see similar point estimates for ∆_P and ∆_H but a slightly higher standard error and lower power for ∆_H. This indicates that in some settings we may pay a price in terms of power and efficiency when we use the approach that accounts for heterogeneity when it is not necessary. In setting 5, where there is no heterogeneity, we see similar performance for ∆_P and ∆_H. In setting 6, where we have a very large treatment effect on the primary outcome, there is heterogeneity, and the distribution of W in the current study is different from the prior study, results show that, as expected, ∆_P overestimates the treatment effect and does not retain the lower-boundedness property, as in setting 1. In settings 7 and 8, where there is no treatment effect, results show that all three testing procedures perform well, with an estimated treatment effect close to zero and a Type 1 error rate close to 0.05. We additionally examined the efficiency gain comparing our proposed estimator to the simple estimator in (4); indeed, we did observe efficiency gains using our proposed estimator, quantified by the ratio of the estimated standard error using our proposed estimate to that using the simple estimate, which ranged from 0.79 to 0.98 across settings. In summary, results from this simulation study show (1) good finite sample performance of our estimation and inference procedures for ∆_H, (2) a potential slight loss in power when using the proposed ∆_H compared to ∆_P when accounting for heterogeneity is not needed, and (3) a potential for inaccurate conclusions and/or loss in power when ∆_P is used instead of the proposed ∆_H when accounting for heterogeneity is needed.

Application

We apply our proposed approach to test for a treatment effect based on a heterogeneous surrogate using data from two distinct AIDS clinical trials, the AIDS Clinical Trials Group (ACTG) 320 Study and the ACTG 193A Study [14,13]. In the prior study, the utility of CD4 count as a surrogate was shown to be stronger among participants with a lower baseline CD4 count and weaker among those with a higher baseline CD4 count [20], as shown in Figure 1. We aim to use our proposed method to test for a treatment effect on RNA using CD4 count as a surrogate marker, accounting for the known heterogeneity in the utility of the surrogate which was demonstrated in the prior study. In Figure 2 we show the distribution of the baseline covariate, baseline CD4, in the prior study compared to the current study. Clearly, the current study is composed of a different participant population with lower CD4 counts due to the study eligibility criteria. In Figure 1, we also see that the surrogate is strongest in this subgroup. Using our proposed approach, we obtain a treatment effect estimate of ∆_H = −0.10 (standard error [SE] = 0.03) with a p-value < 0.001. Note that since lower plasma HIV-1 RNA is better, a negative change in RNA indicates a beneficial treatment effect for the three-drug regimen. Using the approach that does not account for heterogeneity, we obtain a treatment effect estimate closer to the null, but still significant: ∆_P = −0.07 (SE = 0.02), p < 0.001.
That is, while the overall conclusion regarding the treatment effect based on the surrogate would be significant using either test, our proposed test provides a treatment effect point estimate that is larger in magnitude. This is expected since the surrogate is stronger in the subgroup that makes up the current study, and our proposed approach takes advantage of that information.

Discussion

For settings where it is known that the strength of a surrogate marker varies by a certain baseline characteristic, we have proposed an approach and estimation procedures to appropriately test for a treatment effect using only the surrogate marker, accounting for this known heterogeneity. We demonstrated good finite sample performance of our estimation procedure and showed that our proposed testing procedure can outperform an approach that does not account for heterogeneity. An R package implementing the methods proposed here, named hettest, is available at https://github.com/laylaparast/hettest. While we largely focus, specifically in the numerical studies, on settings where the distribution of W is different in the current study as compared to the prior study, it is still possible for a test based on ∆_P, i.e., ignoring heterogeneity, to provide inaccurate results about the treatment effect when there is heterogeneity in the utility of the surrogate and W is distributed the same in the two studies; we provide an example in Appendix E. In the presence of heterogeneity, both the treatment effect and the utility of the surrogate marker may depend on W. While we focus exclusively on the average treatment effect in this paper, it may be of interest to test for a treatment effect based on alternative summaries that account for such heterogeneity. For example, one may define a hybrid quantity that relies on the surrogate marker only for individuals in a subset Ω_w of the covariate space where the surrogate is strong, and on the primary outcome otherwise. Such a hybrid approach has the potential to reduce costs if S is less costly to measure than Y and/or reduce the follow-up time needed for those in Ω_w if S is measured earlier than Y. Though not exactly within this context, previous work has explored the potential for auxiliary information (including but not limited to surrogate markers) to improve efficiency when testing for a treatment or intervention effect [10,21]. While this is beyond the scope of this paper, further work on this topic within the framework of a heterogeneous surrogate is warranted. Our proposed approach has some limitations. First, if the current study includes participants with w values outside the observed distribution in the prior study, our approach will not be able to obtain µ^p_0(s, w) for such w without extrapolation. In such a case, when there is observed heterogeneity in the prior study, use of the surrogate marker to test for a treatment effect in the current study should likely be limited to those with w contained in the support of the prior study. Second, given our use of kernel smoothing, we require a relatively large sample size. Robust nonparametric methods for surrogate markers are lacking in general for small sample size settings; future work in this area would be needed. Lastly, we require several assumptions, outlined in Section 2.2, which are generally untestable, though they may be empirically explored using the observed data.
These assumptions are needed for identifiability, to ensure the lower-boundedness property of ∆_H (i.e., ∆_H ≤ ∆), and to guard against the surrogate paradox, which occurs when the surrogate and outcome are positively associated and the treatment has a positive effect on the surrogate, but the treatment in fact has a negative effect on the outcome [25]. The surrogate paradox is especially of concern here because our primary goal is to make a conclusion about the treatment effect on the primary outcome based on information about the surrogate marker. While these assumptions are strong, they are more likely to hold than the parallel assumptions required for ∆_P [19] to be valid, due to the additional conditioning on W. Further work on methods that allow for more relaxed assumptions and/or that allow one to assess sensitivity to violations of these assumptions would be useful.

[Figure 2: Density of baseline CD4 in the current study and in the prior study.]

Appendix A Discrete Example

Let Y denote the primary outcome and S denote the surrogate marker. We use potential outcomes notation, where each person has potential outcomes {Y^(1), Y^(0), S^(1), S^(0)}, with Y^(g) and S^(g) the outcome and surrogate when the patient receives treatment g. Our main quantity of interest is the treatment effect on the primary outcome, quantified as ∆ = E(Y^(1)) − E(Y^(0)). The earlier treatment effect incorporating S information is defined in the main text as ∆_P = E{µ^p_0(S^(1))} − E{µ^p_0(S^(0))}, where µ^p_0(s) ≡ E(Y^(0p) | S^(0p) = s). In this example, we will have heterogeneity in the utility of the surrogate with respect to gender. Consider our prior study, which we refer to as Study A in this example and which is shown in Figure 3. To calculate ∆_P for a future study, consider the conditional mean that is central to this calculation, µ^p_0(s) = E(Y^(0p) | S^(0p) = s), where the superscript p indicates that this refers to the prior study, i.e., Study A. In this example, this would be µ^p_0(s) = 0.5 × (1 + 3s) + 0.5 × 14.8s = 8.9s + 0.5. Now assume our current study is Study B, shown in Figure 3, which is 95% female and 5% male. Importantly, the joint distributions of (Y^(1), Y^(0), S^(1), S^(0)) in males and females remain as described above; the only difference is the distribution of gender. The treatment effect, ∆, in this new study is 0.95 × 37 + 0.05 × 76 = 38.95. If one were to calculate ∆_P without accounting for this known heterogeneity in the utility of the surrogate, the quantity obtained would be ∆_P = (8.9 × 10 + 0.5) − (8.9 × 5 + 0.5) = 44.5, recalling that E(S^(1)) = 10 and E(S^(0)) = 5 for all individuals in both studies. However, using our proposed approach, which does account for heterogeneity, we use ∆_H as the earlier treatment effect, defined in the main text. In this example, ∆_H < ∆ < ∆_P, so ∆_P no longer retains the property of providing a lower bound on the treatment effect on Y. Now we consider a study, labeled Study C in Figure 3, which is 95% male and 5% female. In this case, ∆_H will provide a better lower bound for ∆, and the test based on ∆_H is expected to be more powerful than that based on ∆_P. The discrete case, as illustrated in this example, is relatively straightforward in terms of how to calculate the needed quantities separately by group and appropriately account for the different covariate distribution in the new study.
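For readers who want to check the arithmetic, the following short Python sketch reproduces the Study B calculation from the quantities given above. It only restates numbers from the example (the within-gender effects 37 and 76, E(S^(1)) = 10, E(S^(0)) = 5, and the pooled regression µ^p_0(s) = 8.9s + 0.5) and does not attempt to compute ∆_H, whose gender-specific inputs are not fully reproduced here.

```python
# Worked numbers for Study B (95% female, 5% male) from the discrete example above.
p_female, p_male = 0.95, 0.05
delta_female, delta_male = 37.0, 76.0      # within-gender treatment effects on Y
E_S1, E_S0 = 10.0, 5.0                     # E(S^(1)) and E(S^(0)), the same for everyone

# True average treatment effect on Y in Study B.
delta = p_female * delta_female + p_male * delta_male   # 0.95*37 + 0.05*76 = 38.95

# Pooled prior-study conditional mean that ignores gender: mu0(s) = 8.9*s + 0.5.
def mu0(s):
    return 8.9 * s + 0.5

# Delta_P applies the pooled mu0 to the current-study surrogate means.
delta_P = mu0(E_S1) - mu0(E_S0)                          # 8.9*(10 - 5) = 44.5

print(f"Delta = {delta}, Delta_P = {delta_P}")  # Delta_P exceeds Delta: the lower bound fails
```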
The continuous baseline covariate case, however, is more complex, and our Appendix C presents an example such that even if the prior and current studies have the same distribution for covariates, ∆_P may still fail to be a valid lower bound for ∆.

Appendix B

As noted in the text, Assumptions (C1)−(C3) together guarantee that E(Y^(1) | W = w) = E(Y^(0) | W = w) whenever S^(1) | W = w and S^(0) | W = w are equal in distribution, for all w in the support of W; this result follows from a short derivation using these assumptions. This relationship allows us to test the common null H_0: ∆ = 0 via testing a seemingly more restrictive null that S^(1) | W = w ∼ S^(0) | W = w for all w in the support of W. For (C2) and (C3), if the primary outcome or surrogate is such that lower values are "better," one can simply redefine the outcome/surrogate as −X, where X is the original value. Assumptions (C5)−(C6) are not required for the validity of the testing procedure proposed in the next section, in that the p-value under the null follows a uniform distribution even without them, but they allow us to estimate a lower bound on the average treatment effect, ∆, and construct the corresponding test statistic, under the following additional assumptions.

Appendix C

To estimate ∆ using the primary outcome (gold standard) we use the difference in sample means, ∆̂ = n_1^{−1} Σ_i Y_{1i} − n_0^{−1} Σ_i Y_{0i}, and conduct a t-test of H_0: ∆ = 0. To estimate ∆_P, we use the nonparametric estimation approach of [19], estimating µ^p_0(s) with a kernel-smoothed estimator and then estimating ∆_P as ∆̂_P = n_1^{−1} Σ_i µ̂^p_0(S_{1i}) − n_0^{−1} Σ_i µ̂^p_0(S_{0i}). Note that this estimate only uses S data from the current study (no Y data from the current study) and S, Y data from the previous study in group Z = 0 only. To obtain an estimate for the standard error of ∆̂_P, σ̂_P, we simply take the empirical standard deviation of the transformed surrogate, i.e., let Ỹ_{gi} = µ̂^p_0(S_{gi}), and then σ̂_P = {v̂ar(Ỹ_{1i})/n_1 + v̂ar(Ỹ_{0i})/n_0}^{1/2}, where v̂ar indicates the empirical variance. This alternative testing procedure would then use the test statistic Z_P = ∆̂_P / σ̂_P and reject the null hypothesis when |Z_P| > Φ^{−1}(1 − α/2). Importantly, one may also consider simply using the surrogate markers measured in the current study, define ∆_M = E(S^(1)) − E(S^(0)), and conduct a t-test of H_{0M}: ∆_M = 0. The disadvantage of this approach is that there is no way to relate ∆_M and ∆, i.e., the estimate of ∆_M does not give any helpful information about the magnitude of ∆. In addition, this approach does not take advantage of information from the previous study, nor does it account for heterogeneity in the utility of the surrogate marker. For these reasons, we do not compare our approach to this test.

Appendix D

Consider our proposed estimator for ∆_H. In this section, we only consider the randomness in the current study, i.e., the probability measure is conditional on µ̂^p_0(·, ·). Now consider the centered term, where π_g = n_g/n and f̂_1(w) is the nonparametric estimator for the density function of W based on observations in treatment group 1. Next, consider the expansion of m̂_1(w; µ̂^p_0) − m_1(w; µ̂^p_0), which is controlled, uniformly in w, at the usual kernel rate involving n_1, log(n_1), and h. Therefore, and similarly, the remainder terms are O_p(√(n_1) h^2 + log(n_1)/(√(n_1) h)) + o_p(1). (AUG)_H can also be consistently estimated by
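Stepping back from the asymptotic details, the Python sketch below gives a rough end-to-end illustration of the transformed-surrogate test described in Appendix C, using data generated under the null settings 7 and 8 from the simulation study (W ∼ U(0, 10), S ∼ gamma(shape 2.5, scale 2.5), Y = 3.2 + 4S + N(0, 16)). The Gaussian-kernel (Nadaraya-Watson) smoother, the bandwidth value, reading N(0, 16) as variance 16, and the per-arm sample size are all assumptions made here for illustration and are not taken from the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 500  # per-arm sample size (placeholder; not stated in this excerpt)

def generate_arm(n):
    """One arm under the null data-generating process of settings 7-8."""
    w = rng.uniform(0, 10, n)
    s = rng.gamma(shape=2.5, scale=2.5, size=n)
    y = 3.2 + 4 * s + rng.normal(0, 4, n)      # N(0, 16) read as variance 16 (sd = 4)
    return w, s, y

def smooth_mu0(s_eval, s_prior, y_prior, h=1.0):
    """Nadaraya-Watson estimate of mu0(s) = E(Y | S = s) from prior-study control data
    (one plausible choice for the nonparametric estimator of [19])."""
    k = norm.pdf((s_eval[:, None] - s_prior[None, :]) / h)
    return (k @ y_prior) / k.sum(axis=1)

# Prior study (both arms, no treatment effect) and current study (also null here).
_, s1p, y1p = generate_arm(n)
_, s0p, y0p = generate_arm(n)
_, s1c, _ = generate_arm(n)
_, s0c, _ = generate_arm(n)

# Transform the current-study surrogates using the prior-study control-arm regression.
y1_tilde = smooth_mu0(s1c, s0p, y0p)
y0_tilde = smooth_mu0(s0c, s0p, y0p)
delta_P_hat = y1_tilde.mean() - y0_tilde.mean()
se_P = np.sqrt(y1_tilde.var(ddof=1) / n + y0_tilde.var(ddof=1) / n)
z_P = delta_P_hat / se_P
reject = abs(z_P) > norm.ppf(0.975)            # should rarely reject under the null
print(delta_P_hat, se_P, z_P, reject)
```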
2022-09-20T01:16:01.152Z
2022-09-17T00:00:00.000
{ "year": 2022, "sha1": "b298b7bfbad782944b50648b3a5df1ee9a49f1e9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b298b7bfbad782944b50648b3a5df1ee9a49f1e9", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
30210906
pes2o/s2orc
v3-fos-license
Non-GPS Data Dissemination for VANET

Fast, reliable, and efficient data dissemination in VANET is a key to success for intelligent transportation systems. This requires a broadcasting protocol with efficient forwarder nodes and an efficient broadcasting mechanism. In this paper, we propose a self-decision algorithm that allows a node to determine whether or not it belongs to the connected dominating set. The algorithm is a combination of a density based algorithm and a topology based algorithm, called "DTA." The algorithm does not require any geographical knowledge; therefore, it avoids violating privacy. Moreover, the algorithm is more robust to inaccurate data than position-based algorithms, which need highly frequent beaconing to obtain accurate data. The simulation results show that our algorithm provides the highest coverage results compared to existing solutions. We also propose a new broadcasting protocol, called "NoG." NoG consists of a broadcasting mechanism, a waiting timeout mechanism, and a beaconing mechanism. The proposed protocol operates without any geographical knowledge and provides reliable and efficient data dissemination. The performance is evaluated with a realistic network simulator (NS-3). Simulation results show that NoG with DTA outperforms other existing protocols in terms of reliability and data dissemination speed.

Introduction

Vehicular ad hoc networks (VANETs) are a class of mobile ad hoc networks (MANETs). Vehicles in the network are equipped with wireless communication devices; therefore, they can communicate directly with each other without infrastructure and without centralized control, and data can be quickly delivered to applications. This can support applications of intelligent transportation systems (ITS) such as driver assistance or transport safety applications. These applications need a fast and reliable solution for data dissemination to provide accurate and reliable services [1], so efficient data dissemination is one of the keys to success for such applications. To achieve this, the unique characteristics of the vehicular environment should be considered. Vehicles' movements change frequently and rapidly. The speed of vehicles also affects the wireless signal, which leads to frequent intermittent connectivity between vehicles. Moreover, vehicles may be very densely packed in urban areas but very sparse on highway roads or in rural areas. A traditional approach to data dissemination or broadcasting in wireless ad hoc networks is simple flooding. Simple flooding does not require any information from the environment or from other nodes: a packet is rebroadcast once by every receiving node. This approach can provide very high data dissemination speed. However, simple flooding may cause contention and collisions [2] due to its redundant transmissions in dense areas, and it may cause useless broadcasting when there is no neighbor to receive data in sparse areas [3]. The epidemic protocol [4] was proposed to improve performance in sparse areas by using a store-and-forward technique: upon receiving a broadcast packet, nodes store the packet and forward it later when they meet a new neighbor. This technique has since been employed in most broadcasting protocols for VANET because it can handle the intermittent connectivity issue. As a result, reliability, or delivery ratio, is increased.
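To make the store-and-forward idea concrete, the following minimal Python sketch shows the behavior described above (a node rebroadcasts a new packet once, keeps it, and re-forwards stored packets when it meets a new neighbor); the class and method names are illustrative and are not taken from any of the cited protocol implementations.

```python
class StoreAndForwardNode:
    """Minimal sketch of epidemic-style store-and-forward rebroadcasting."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.store = set()          # identifiers of packets seen (and kept) so far

    def on_receive(self, packet_id, rebroadcast):
        # A packet is rebroadcast only the first time it is received, then stored.
        if packet_id not in self.store:
            self.store.add(packet_id)
            rebroadcast(packet_id)

    def on_new_neighbor(self, neighbor_has, send_to_neighbor):
        # Store-and-forward: pass along stored packets the new neighbor is missing.
        for packet_id in self.store - neighbor_has:
            send_to_neighbor(packet_id)
```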
In VANET, several reliable broadcasting protocols have been proposed.We can categorize these reliable broadcasting protocols into 2 groups by their main algorithm.In the first group, the protocols make their decision based on node position such as EAEP [5], APBSM [6], POCA [7], and DV-Cast [8].These protocols prefer nodes at the edge of broadcasting circular to rebroadcast the packets.All of protocols in this group rely on geographical knowledge.They use position or direction of nodes to make decision.In the second group, the protocols make their decision based on node's properties such as DECA [9].The properties of nodes that are used in these protocols are number of onehop neighbors (density) or relation between nodes and their neighbors (topology).So these protocols do not require any geographical information to make decision. However, every algorithm in every protocol has the same goal.The goal is to minimize the number of rebroadcast nodes that can cover all of their neighbors in each group.This can minimize number of retransmissions for delivering a packet to most of nodes in networks.This problem can be solved by minimum connected dominating set (CDS).This algorithm can construct graph and select the minimum number of nodes to cover 100% of their neighbor nodes in each group as shown in Figure 1, but the algorithm requires global knowledge and the CDS computation is an NP-complete problem [10].Therefore, an approximation algorithm is a practical solution that can construct CDS.Some previous works have been proposed for general mobile ad hoc networks such as [11][12][13][14].These algorithms are selfdecision algorithm.This means each node will decide by itself whether it is in CDS or not.Most of them make decision based on topology properties.However, these algorithms have high complexities and they are not specifically designed for vehicular environment. In this paper, we focus on a nongeographical knowledge based CDS forming algorithms.These methods can avoid privacy issue that most of users are concerned with.Moreover, the nongeographical knowledge based algorithms can resist inaccurate data than position-base algorithms that need high frequent beaconing for accurate position data.We propose a hybrid algorithm, that is, a combination of density based algorithm and topology based algorithm (DTA).DTA has advantage points from both density based algorithm and topology based algorithm.The density based algorithm is a simple algorithm that works well in simple connection scenarios.On the other hand, a topology based algorithm is a complex algorithm that efficiently works in complex connection scenarios.So DTA is an appropriate algorithm for vehicular environment that has such a dynamic topology.We evaluate our algorithm by simulations.The evaluation is focused on coverage results and the ratio between CDS members to total nodes.DTA can provide the highest coverage results than other algorithms. We also propose a nongeographical broadcasting protocol (NoG).It is designed to provide the fast, reliable, and efficient data dissemination in VANET.The broadcasting protocol consists of a broadcasting mechanism, a waiting timeout mechanism, and a beaconing mechanism.NoG is implemented with our proposed DTA algorithm in NS-3.The simulation results show that NoG with DTA outperforms other previous protocols in terms of reliability and data dissemination speed.NoG also operates well with other algorithms. 
The rest of this paper is organized as follows.In Section 2, the related works are discussed.Section 3 describes the overview and details of density and topology based CDS forming algorithm (DTA).Section 4 describes the overview and details of nongeographical Broadcasting Protocol (NoG).The performance evaluation is reported in Section 5. Finally, this paper is concluded in Section 6. Related Work We discuss more details on the previously mentioned protocols in Section 2.1 and the nongeographical knowledge algorithm is discussed in Section 2.2. Broadcasting Protocol in VANET. Simple flooding is a traditional approach for broadcasting.It provides very high data dissemination speed, but all nodes will participate in rebroadcasting packets.This causes the broadcast storm problem due to redundant retransmissions.Epidemic protocol [4] is the most simple store and forward protocol.It can handle an intermittent connectivity in VANET, but all nodes still rebroadcast packets the same as simple flooding.So the broadcast storm problem is still found in epidemic protocol. There are many previous broadcasting protocols for VANET that we have found in the works of the literature.These protocols use store and forward technique to handle intermittent connectivity that frequently occurs in vehicular environment.All protocols reduce the number of redundant retransmissions by self-decision algorithm.We can categorize these protocols into two groups based on their self-decision algorithm.The first group makes a decision based on position of node.These protocols prefer nodes at the edge of broadcasting circular to rebroadcast the packets.All of protocols in this group rely on geographical knowledge (GPS).The protocols use position or direction of nodes to make decision.This can cause privacy issue [15][16][17] because nodes need to broadcast their location to the others to exchange the geographical knowledge.A malicious node can track the past and current positions of these nodes.The treats that are from position tracking are discussed in [18].The protocols in this group include PGB [19], POCA [7], EAEP [5], POCA [7], DV-Cast [8], and APBSM [6]. PGB [20] (preferred group broadcast) is a broadcasting mechanism in CAR protocol.When nodes receive a packet, they calculate the waiting timeout.A node with the shortest timeout will rebroadcast the packet.Nodes at the edge of broadcasting circular have shorter waiting timeout than nodes that are closer to the source.However, PGB is used for routing information broadcasting, so it does not concern about a reliability issue. EAEP [5] (edge-aware epidemic protocol) uses both the waiting timeout and probabilistic function.The waiting timeout is calculated by distance between nodes and source nodes.While the waiting timeout does not expire, nodes will count number of redundant retransmissions.The number of redundant retransmissions is used to calculate rebroadcast probabilistic value.Nodes at the edge of broadcasting circular have higher probability value than other nodes. POCA [7] (position-aware broadcasting protocol) uses the geographical knowledge to select the next rebroadcast node.A node, that is, the furthest node to source node, will be selected by source node.The source node piggybacks the selected node's identifier to the broadcasting packet.The selected node will immediately rebroadcast once it receives the packet.This mechanism avoids the delay from waiting timeout. 
DV-Cast [8] (distributed vehicular broadcast protocol) uses the broadcast suppression mechanism.A node, that is, the furthest node to source node, has the shortest waiting timeout, but if a node meets another node in the same direction of broadcasting packets, it will immediately rebroadcast the packets.This is because a node in the same direction of packets can help source node to forward the packets while it is running. APBSM [6] (acknowledged parameterless broadcasting in static to highly mobile wireless ad hoc) is an extended version of PBSM.Nodes in APBSM use position of their neighbors to construct CDS.The CDS is calculated by Stojmenovic's algorithm [14], which is a combination of selfdecision CDS forming from Wu and Li's algorithm and rebroadcast node elimination in scalable broadcast algorithm (SBA).Both of Wu and Li's algorithm and SBA will be discussed later in Section 2.2.Stojmenovic's algorithm uses geographical knowledge to select CDS members.In the case that a node is a CDS member, it will set shorter waiting timeout than other nodes.While timeout does not expire, the algorithm uses the rebroadcast node elimination the same as in SBA. The other group of protocols makes a decision by node properties.The decision relies on comparison of node properties, so the protocols in this group can avoid using the geographical knowledge.These protocols use the density information (number of 1-hop neighbors) or the topology information, such as a list of 2-hop neighbors and relationship between neighbor nodes.The interesting protocol in this group is DECA [9]. DECA [9] (density-aware broadcasting protocol) relies on only the density information.A source node makes a decision by selecting its neighbor with the highest number of 1-hop neighbor nodes.Upon receiving the packet, the selected node will immediately rebroadcast it to avoid delay from waiting timeout.DECA also uses an adaptive beaconing mechanism to reduce overhead in dense areas. However, most of nongeographical knowledge protocols are designed for general mobile ad hoc networks.But the CDS forming algorithms for these protocols are interesting because they can operate without any geographical knowledge. Nongeographical Knowledge CDS Forming Algorithm. These CDS forming algorithms efficiently select CDS members and they also eliminate unnecessary retransmissions without any geographical knowledge. 
Wu and Li's algorithm [11] is a self-decision algorithm that determines which nodes are in the CDS, called gateway nodes. To be a CDS member, a node has to pass three conditions. The first condition is the intermediate node condition: a node has to have at least two neighbors that are not directly connected to each other. The second condition is the intergateway node condition: a node has to have at least one neighbor that is not covered by its other neighbors. Let N_A be the set of node A's neighbors and NB_A the set of neighbors of A's neighbors. If N_A ⊆ NB_A, node A is eliminated from the CDS because all of A's neighbors can be covered by its other neighbors. The final condition is the gateway condition: a gateway node has at least one neighbor that is not covered by a pair of the gateway node's neighbors, where these two neighbors are also neighbors of each other. For example, let node A be a node that considers its gateway condition. A needs to have at least one neighbor (D) that is not covered by a pair of A's connected neighbors (B and C). If A is a gateway node, the neighbor D is not covered by B or C; therefore, D is not a neighbor of B or C. Let N_A be the set of node A's neighbors, N_B the set of node B's neighbors, and N_C the set of node C's neighbors, where B and C are neighbors of node A. If {B, C} ⊆ N_A, C ∈ N_B, B ∈ N_C, and N_A ⊆ N_B ∪ N_C, node A will be eliminated from the CDS. Therefore, the nodes in the CDS are only the nodes necessary for covering the other nodes in the group.

LENWB [12] (lightweight and efficient network-wide broadcast) uses the set of 1-hop neighbors to eliminate unnecessary rebroadcast nodes. When nodes receive a packet, they estimate the neighbor list of the source node from the number of their 1-hop neighbors. If the source node has a higher number of 1-hop neighbors than a receiving node, the source node may cover all neighbors of the receiving node, so the receiving node will not rebroadcast the packet. Otherwise, the receiving node randomly sets a backoff delay and rebroadcasts the packet. If nodes have the same number of 1-hop neighbors, the algorithm compares node identifiers.

SBA [13] (scalable broadcast algorithm) has an elimination algorithm similar to that of LENWB. Upon receiving the broadcast packet, nodes calculate a waiting timeout. While the waiting timeout has not expired, nodes remove the rebroadcasting nodes' neighbors from their own neighbor list. If the neighbor list is not empty after the waiting timeout, they immediately rebroadcast the packet.

These algorithms are based on topology properties. They use the 1-hop neighbor list or the 2-hop neighbor list to select the CDS members. The advantage is that these algorithms do not require any geographical knowledge, but they are designed for general mobile ad hoc networks and may not be efficient in the vehicular environment.

New Density and Topology Based CDS Forming Algorithm

Section 3.1 presents the motivation and an overview of the new density and topology based algorithm. Section 3.2 describes the details of the proposed algorithm.

Motivation and Overview.
The unique characteristic of the vehicular environment is the speed of the nodes. A node's movement changes frequently and rapidly, so beacon messages have to be broadcast frequently to provide accurate geographical knowledge to position based protocols. This can cause a broadcast storm problem from beacon transmissions. The information from equipment such as a GPS device also does not provide accurate data, due to GPS drift. Moreover, broadcasting location data that can be tracked by unknown people may be considered a privacy violation [15][16][17]. Therefore, we propose a new algorithm for CDS forming that does not require any geographical knowledge. It uses only density information (the number of 1-hop neighbors) and the 2-hop neighbor list, which can be exchanged by beacon messages.

Another interesting characteristic of the vehicular environment is that vehicles always form groups. The vehicle distribution is nonuniform and the topologies are mixed with a very dynamic density environment; for example, the density is very sparse in highway scenarios, but nodes are very densely packed in the middle of intersections in urban areas. The algorithm needs to be adaptable to each environment. The algorithm should therefore prefer a node with the highest number of 1-hop neighbors to rebroadcast a packet, because this maximizes the number of receiving nodes while minimizing the number of rebroadcast nodes; this works well for all group sizes in every scenario. Therefore, DTA uses the number of 1-hop neighbors as the primary condition of the algorithm: a node with the highest number of 1-hop neighbors is a CDS member.

However, the nodes with the highest density alone cannot cover all nodes in high density and complex scenarios, so DTA uses a topology based decision to increase the coverage results. In the case that nodes do not satisfy the density condition, they use a topology based condition for their decision. Our topology based decision is a simplified version of Wu and Li's algorithm. DTA employs only the gateway condition, which is the most important condition, especially in the vehicular environment, because the vehicular environment (a road) consists of a narrow and long-distance topology. The standard width of a road lane in the US is 3.4 meters [18], but the maximum transmission range of 802.11p is up to 1000 meters [20]. Therefore, the width of the road is much less than the width of the transmission range. For example, a pair of connected neighbors (A and B) can cover the red area behind node C, as shown in Figure 2. If node D does not exist in this scenario, C will be at the edge of the group, so it is unnecessary for C to rebroadcast the message. Otherwise, if D exists, C is a connector between A and B (the red area) and D (the yellow area). In this case, C is considered a gateway node because C has a neighbor (D) that is not covered by the pair of C's connected neighbors (A and B). This scenario shows that the gateway condition is an important condition for CDS member selection.

Algorithm Detail. Upon receiving a new beacon, a node always updates its CDS state. There are two conditions for checking the CDS state. First, a node checks the density based condition: if a node has the highest number of neighbors compared to its neighbors, it is a CDS member. Nodes that do not have the highest density use the topology based condition; if they satisfy it, they are CDS members, and otherwise they are not. The procedure of DTA can be described as shown in Procedure 1.
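Procedure 1 itself is not reproduced in this excerpt, so the following Python sketch shows one reading of the DTA self-decision rule (the density condition first, then the simplified gateway condition); the function names, the adjacency-map representation, and the tie handling are our illustrative assumptions.

```python
def gateway_condition(node, neighbors):
    """Simplified Wu-Li gateway condition used by DTA: the node keeps its CDS role
    unless some pair of its neighbors (b, c), themselves connected, jointly covers
    all of its neighbors. Handling of nodes with fewer than two neighbors is not
    specified in the text, so such nodes fall through to True here."""
    nbrs = neighbors[node]
    for b in nbrs:
        for c in nbrs:
            if b >= c or c not in neighbors[b]:
                continue                      # consider each connected pair (b, c) once
            if nbrs <= (neighbors[b] | neighbors[c] | {b, c}):
                return False                  # (b, c) covers every neighbor of the node
    return True

def is_cds_member(node, neighbors, degree):
    """DTA self-decision: density condition first, then the gateway condition."""
    # Density condition: locally highest 1-hop degree (ties treated as highest here).
    if all(degree[node] >= degree[v] for v in neighbors[node]):
        return True
    return gateway_condition(node, neighbors)

# Example use with a small adjacency map (sets of neighbor identifiers).
neighbors = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
degree = {v: len(n) for v, n in neighbors.items()}
cds = [v for v in neighbors if is_cds_member(v, neighbors, degree)]
```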
New Nongeographical Knowledge Broadcasting Protocol for VANET

The Nongeographical Knowledge Broadcasting Protocol (NoG) consists of three main modules: (1) a broadcast mechanism that uses our DTA for CDS forming, (2) a waiting timeout mechanism that is used for collision avoidance, and (3) a beacon mechanism that helps nodes exchange their local information and detect missing packets. Section 4.1 describes the protocol mechanism overview and Section 4.2 presents the protocol details.

4.1. Protocol Overview. Our proposed broadcasting protocol is a store-and-forward protocol with adaptive beacon intervals. A node uses beacons to exchange its information with its neighbors. The beacon includes the number of 1-hop neighbors, a 1-hop neighbor list, and a received packet identifier list. From this information, a node in the protocol decides by itself whether to be a CDS member or not. If it is a CDS member, upon receiving a broadcast packet, it randomly sets a very short backoff delay (<10 ms). After the delay expires, it immediately rebroadcasts the packet. The nodes that are not CDS members set their waiting timeouts with a longer period than CDS members. While the waiting timeout has not expired, they listen for rebroadcasts from other nodes. If they hear any rebroadcast of a packet in their waiting list, they remove this packet from their waiting list to avoid redundant retransmissions.

For intermittent connectivity scenarios, NoG can detect a missing packet via an acknowledgement in the beacon. If there are missing packets, a node will set its waiting timeout. If other nodes do not rebroadcast the packet before its waiting timeout expires, it will retransmit this packet to its neighbors.

Let us show examples of the protocol behavior in a normal broadcasting scenario and in an intermittent connectivity scenario. Figure 3 shows a normal broadcasting scenario. S is the source node. Let C be the node with the highest local density, so C will be a CDS member. When S broadcasts a packet, A, B, and C receive the broadcast packet. A and B calculate their waiting timeouts and wait for a rebroadcast from a CDS member. C, which is a CDS member, randomly sets a very short backoff delay before it rebroadcasts the packet. In the case that C correctly rebroadcasts the packet, A and B will cancel their waiting timeouts to avoid redundant retransmissions. On the other hand, if C does not rebroadcast the packet, the one of A and B with the shorter waiting timeout will rebroadcast the packet. Suppose B has the shortest waiting timeout; then B rebroadcasts the packet instead of C, and A cancels its waiting timeout, so no redundant retransmission is caused. This mechanism repeats until all nodes in the group receive the packet or until the packet expires.

In the other case, there is an intermittent connectivity scenario in which a node needs to retransmit the packet between groups of nodes. The scenario is illustrated in Figure 4. Nodes A, B, and C have already received the broadcast packet from S. When a node that has not yet received the packet later comes into range, its beacon reveals the missing packet; A, B, and C then set waiting timeouts, and the node whose timeout expires first retransmits the packet.
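As a rough sketch of the receive-side behavior just described (CDS members rebroadcast after a very short random backoff, other nodes wait longer and cancel on overhearing a rebroadcast), consider the following Python fragment; the class and method names are illustrative, and the longer non-member timeout is detailed in the waiting timeout mechanism of the next subsection.

```python
import random

class NoGReceiver:
    def __init__(self):
        self.is_cds_member = False   # updated by DTA whenever a new beacon arrives
        self.received = set()        # identifiers of packets already received
        self.broadcast_list = {}     # packet_id -> scheduled rebroadcast time

    def on_packet(self, packet_id, now, long_timeout):
        if packet_id in self.received:
            # Overheard a redundant retransmission: drop our own pending rebroadcast.
            self.broadcast_list.pop(packet_id, None)
            return
        self.received.add(packet_id)
        if self.is_cds_member:
            delay = random.uniform(0, 0.010)   # very short backoff (< 10 ms), then rebroadcast
        else:
            delay = long_timeout               # waiting timeout (see the next subsection)
        self.broadcast_list[packet_id] = now + delay
```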
4.2. Protocol Detail. Each node in NoG maintains two lists: a Neighbor List and a Broadcast List. The Neighbor List maintains the identifiers of all 1-hop neighbors and their neighbor information (the number of 1-hop neighbors and the 1-hop neighbor identifier list). When nodes receive a new beacon, they update their Neighbor List and also update their CDS state. A neighbor entry is removed if a node does not receive an updated beacon from that neighbor within the following beacon intervals, so nodes avoid using stale information from neighbors that have moved out of their transmission range. The Broadcast List maintains the identifiers of broadcast packets and their waiting timeouts; it is the list of packets that are waiting to be rebroadcast. An entry of the Broadcast List is removed by one of two events: either the node rebroadcasts the packet when its waiting timeout expires, or the node overhears a redundant retransmission from its neighbors, in which case the entry is removed even though the waiting timeout has not yet expired. Pseudocode 1 describes the protocol, and the details of the main modules are explained below.

4.2.1. Waiting Timeout Mechanism. The waiting timeout is a solution to avoid broadcast collisions in a distributed system. Nodes randomly set their waiting timeout as a backoff delay before rebroadcasting. There are two events that use the waiting timeout. The first event is when nodes receive a broadcast packet but are not members of the CDS. They add the packet to the Broadcast List and set a waiting timeout. These nodes listen for a rebroadcast by their neighbors that are CDS members. If the waiting timeout expires and no CDS member has rebroadcast the packet, the node with the shortest waiting timeout rebroadcasts the packet. The second event is when nodes detect a missing packet at their neighbors. They add the packet to the Broadcast List and set a waiting timeout in the same way as in the first case. As a result, the node with the shortest waiting timeout rebroadcasts the missing packet to its neighbors. These two events are explained in Pseudocode 1.

The disadvantage of the waiting timeout is that it adds delay to the overall system. Most previous works calculate their waiting timeout as an inverse function of the number of 1-hop neighbors. The purpose is to maximize the number of receiving nodes in each retransmission by favoring a node with the highest number of 1-hop neighbors, but this leads to a contention problem. It also causes extremely many redundant retransmissions in high density scenarios. The reason is that when nodes are in dense areas, the inverse function yields a very short range of delays; most nodes in the same area therefore have nearly the same waiting timeout and rebroadcast the packet simultaneously, causing collisions. To prevent this situation, protocols should use the number of 1-hop neighbors as a direct (increasing) factor in the waiting timeout function. As reported in [21], the direct function can prevent collisions in extremely high density scenarios. This new waiting timeout also increases the data dissemination speed in sparse areas: since the direct function provides a much shorter waiting timeout than the inverse function in sparse areas, the data dissemination speed is increased.

The waiting timeout can be calculated by (1), where T represents the network delay from the time a packet is sent by the source until it is delivered to the receivers and n is the number of 1-hop neighbors.
k is a constant value used for expanding the range between the minimum waiting timeout and the maximum waiting timeout. A well-chosen k value can significantly reduce collision occurrences in dense areas while adding only a little delay. The minimum term of the waiting timeout represents the possible delay from beacon queuing in the MAC layer, so the minimum term equals the total delay of all neighbors' beacon sending times. The maximum term of the waiting timeout consists of two terms. The first term, 2T, equals two times the network delay; this is because, in the case that a node has one neighbor, it may wait for one beacon from the neighbor plus another network delay for the rebroadcast. The second term, knT, is the possible delay from beacon queuing in the MAC layer multiplied by the expanding value k, which expands the range between the minimum term and the maximum term. The configuration of k is discussed in Section 5.2. The waiting timeout value is illustrated in Figure 5 and given by

WT(n) = Random[nT, (2T + knT)].  (1)

4.2.2. Beacon Mechanism

(a) Beacon Structure. Nodes in NoG use beacon messages for discovering 1-hop neighbors and exchanging their local information. The beacon message header consists of a source identifier, the number of 1-hop neighbors, a list of 1-hop neighbor identifiers, and a list of received packets that have not yet expired. Each entry in the received packet list contains the identifier of the source and the identifier of the packet; this list is used for missing message detection. The beacon size is at least 5 bytes when there are no 1-hop neighbors and no received packets, and it increases by 4 bytes for each 1-hop neighbor and 5 bytes for each received packet. In order to reduce the number of beacons, nodes piggyback the beacon header onto the broadcast packet when they have a packet to rebroadcast, as shown in Figure 6; the next beacon is then postponed until the next beacon interval.

(b) Beacon Interval Calculation. The accuracy of the 1-hop neighbors' position information depends on the beacon frequency; in fact, frequent beaconing can cause a broadcast storm problem in dense areas. In this paper, nongeographical knowledge algorithms are the focus, and these algorithms do not require very accurate data from the beacon information. Moreover, the density of vehicles is related to the speed of vehicles [22], so vehicles in dense areas move more slowly than vehicles in sparse areas. Consequently, a short beacon interval is needed in sparse areas but is unnecessary in dense areas. NoG uses an adaptive beacon interval algorithm, called Linear Adaptive Interval (LIA) [23], which linearly increases the beacon interval based on the network density, to appropriately calculate the beacon interval for each density environment. The algorithm reduces beacon overhead without decreasing the protocol performance.
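The following Python sketch shows one way the timeout draw in (1) and the linear adaptive beacon interval could look. The symbols T, n, and k follow the reconstruction above; the uniform draw, the additive netDensity, and the linear clamp are assumptions (equations (2) and (3) are not reproduced in this excerpt); and the constants c = 0.2, minInv = 1.5 s, maxInv = 7 s, and k = 3 are the values reported in Section 5.2.

```python
import random

def waiting_timeout(n, T, k=3):
    """Backoff delay per equation (1): Random[nT, 2T + k*n*T].
    n = number of 1-hop neighbors, T = network delay, k = expansion constant
    (k = 3 per Section 5.2); a uniform draw over the interval is assumed."""
    return random.uniform(n * T, 2 * T + k * n * T)

def beacon_interval(n, p, c=0.2, min_inv=1.5, max_inv=7.0):
    """Linear Adaptive Interval (LIA) sketch: equations (2)-(3) are not shown in this
    excerpt, so the additive density and the linear clamp below are assumptions."""
    net_density = n + p               # 1-hop neighbors plus unexpired broadcast packets
    return min(min_inv + c * net_density, max_inv)
```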
As mentioned, the beacon interval is linearly increased depending on the network density. The network density (netDensity) is calculated from the number of 1-hop neighbors (n) and the number of broadcast packets (p) that have not expired; this is represented by (2). The beacon interval (beaconInv) calculation is represented by (3), where minInv is the minimum beacon interval, c is a constant value, and maxInv is the longest interval that does not affect the performance of the protocol. The parameter setting of the beacon interval calculation is explained in Section 5.2.

(c) Missing Packet Mechanism. In VANET, intermittent connections always occur. In order to provide reliable broadcasting, protocols need the ability to detect missing packets, since packets can be lost due to channel errors. The missing packet mechanism has two parts. In the first part, a node checks for its neighbors' missing packets from incoming beacons: a node can detect missing packets from the list of received packets in a beacon. If a node detects that its neighbor has missing packets, it sets a waiting timeout and adds the missing packets to its Broadcast List. In the second part, a node checks whether an incoming beacon advertises any packets that it has never received itself. If it finds that it is missing packets, it immediately broadcasts a beacon so that its neighbors can detect its missing packets. However, this mechanism could flood the network with beacons, so beaconing for missing packets is restricted to at most once within a beacon interval; that is, a node cannot rebroadcast its beacon until the next beacon schedule.

Performance Evaluation

5.1. Performance Evaluation of CDS Forming Algorithm

5.1.1. Simulation Setup. In order to evaluate the performance of our algorithm, we implemented a Java simulator. The simulator uses mobility traces from NS-2 [24]. The traces are generated via Simulation of Urban Mobility (SUMO) [25]. The vehicle traces obtained from SUMO are in XML format; they are converted to the NS-2 trace format by the Traffic Simulation Environment (TraNS) [26]. There are two traffic scenarios: (1) a highway scenario, a straight 4-kilometer road with two lanes per direction; and (2) an urban scenario, a 2 × 2 kilometer Manhattan grid. Nodes are equipped with wireless devices with a 250-meter transmission range. The simulator samples groups of nodes every 10 seconds and then analyses the CDS forming algorithm in terms of the coverage result and the ratio of CDS members to total nodes in groups. More than 2000 groups of nodes are sampled. No real broadcasting is employed in this simulation; the real broadcasting performance evaluation is presented in Section 5.2. The compared CDS forming algorithms include the following. (v) Wu and Li's algorithm (WLA): members of the CDS are the nodes that satisfy all three conditions of Wu and Li's algorithm. This represents the most efficient topology based algorithm in our literature review.
Figures 7 and 8 show the occurrences of node groups in each size for highway scenarios and those for urban scenarios, respectively.For highway scenarios, vehicles are uniformly distributed although vehicles are randomly released and vehicles have the different maximum speed.This is because the highway scenario is a simple straight road with nonstructure the same as the realistic long distance highway road.On the other hand, vehicles are nonuniform distributed in urban scenarios.There are many several sizes of group in each scenario.Therefore, both scenarios and mobility traces can represent the realistic environment of vehicles in both highway areas and urban areas. Coverage Results.All coverage results in both highway scenarios and urban scenarios are shown in Figure 9. DEN that considers only the number of 1-hop neighbors provides well coverage results on low density scenarios.The coverage results decrease in high density scenarios because there are more nodes and more complex connections in dense scenarios than in sparse scenarios.The reason is that the number of members in CDS from DEN is not enough to cover all nodes in the groups. On the other hand, WLA, that is, Wu and Li's algorithm that forms CDS by using topology information, does not operate well in sparse scenarios because the algorithm prunes too much nodes so it decreases a number of covered nodes in sparse scenarios.The advantage of Wu and Li's algorithm is it can construct the efficient members of CDS that can cover all nodes in groups in dense scenarios.WLA works well with complex connections in high density scenarios.These scenarios are similar to general mobile ad hoc scenarios that the algorithm is designed for.Therefore, we combine the advantages from both density based algorithm and topology based algorithm.We use the density based algorithm that can provide high coverage results in low density scenarios with a simple concept.Then we combine it with topology based algorithm that provides the efficient CDS members that can cover all nodes in groups in dense scenarios. The combination algorithms are DEN + IN, DEN + IG, and DTA (DEN + G).These algorithms are a combination of density based algorithm and topology based algorithm.All of them provide the highest coverage results in the simulation.The algorithms can construct CDS members with almost 100% coverage results. Ratio of CDS Member.The results are shown in Figure 10.The ratio results represent the efficiency of algorithm.A number of CDS members should be as low as possible, while the CDS members can cover all nodes in the group. DEN has the least ratio results because it considers only nodes with the highest number of 1-hop neighbors.The number of CDS members converges to about 0.07 of total nodes. WLA is the second least ratio results.It provides almost constant ratio results in every density scenario.The algorithm is very efficient, but this leads to low coverage results in sparse areas.There are many small groups of vehicles in the sparse scenarios and the distance between nodes is longer than in dense scenarios, so the ratio of CDS members should be higher. DEN+IN has extremely high ratio results.The results are almost 1.This means that the internode condition of Wu and Li's algorithm cannot efficiently prune nodes in vehicular environment.As a result, almost all nodes in the scenarios are CDS members. 
DEN+IG also provides the efficient CDS members.It has very low ratio results, but the ratio results are higher than DTA.This is because the gateway condition can significantly prune more the unnecessary nodes than the intergateway condition as described in Section 3.1. DTA is the most efficient algorithm because it can provide very low ratio of CDS members to total nodes.The ratio results converge to about 0.2 of total nodes.In low density scenarios, DTA has the high ratio results which are close to the results from DEN. DTA also has the ratio results that almost are the same as the results from WLA in high density.The reason is that DTA has the advantages from both density based algorithm and topology based algorithm so DTA will appropriately keep a number of CDS members depending on scenarios.This can maximize the coverage results while minimizing a number of CDS members. Simulation Setup. All broadcasting protocols evaluated their performance with the same road scenarios and vehicle mobility traces the same as in Section 5.1.1.There are 5 source nodes in each simulation.After the simulation has run for 100 seconds, source nodes randomly start to broadcast their packet every 10 seconds until simulation ends at 200 seconds.The last packet will be expired at 200 seconds of simulation.All protocols use IEEE802.11b with contention for MAC.We cannot use IEEE802.11p[22] because it is under development phase in NS-3.All nodes are equipped with a wireless module with Rayleigh fading.The transmission success rate is 80% at distance 250 meters.Unless stated otherwise, parameters setting for simulations is configured as indicated in Table 1. We have implemented all of the following protocols in the well-known network simulator NS-3.16 [27].All of previous works are configured following their publications. (i) DECA [9]: DECA represents a protocol that uses only density information to select the next rebroadcast node.It provides very high data dissemination speed by avoiding waiting timeout. (ii) APBSM [6]: APBSM represents a protocol that uses both density and geographical knowledge to construct members of CDS by extending Wu and Li's algorithm. ( Waiting Timeout.As mentioned in Section 4.2, the efficient value can significantly reduce collision occurrences in dense areas while increasing only a little delay.In order to select the , we performed a simulation.The simulation setup is the same setup as in Section 5.2.(Table 1).The highway scenario is used in this simulation.We evaluated the performance of NoG using the directed function with varied (1-5).From the results, 3 is the best value that provides low overhead and it introduces the lowest additional delay.According to (1) in Section 4.2.1, the maximum waiting timeout depends on value. Beacon Interval.The efficient beacon interval should help the protocol to provide the fastest data dissemination speed, while it increases the least additional overhead to each network density.In order to select the efficient beacon interval, we performed a simulation.We used the highway scenario in this simulation.The beacon interval is varied from 0.1 to 9 seconds in different density scenarios (2-80 veh/km). The other parameters such as communication setup and packet setup are set the same as those in Section 5.2 (Table 1). 
From simulation results, we observed that 1.5 seconds are the beacon interval that provides the fastest data dissemination speed with the lowest overhead in low density scenarios and 7 seconds are the longest beacon interval that provides the fastest data dissemination speed with the lowest overhead in dense scenarios.Therefore, the suitable beacon interval for NoG is between 1.5 seconds and 7 seconds.According to (3) in Section 4.2.2,(c) is equal to 0.2, minInv is 1.5, and maxInv is 7. Metrics. Five metrics are considered.All simulation results are averaged from 20 of runs with 95% confidence interval. (i) Reliability is measured as a percentage of nodes that received the packets at the end of simulation. (ii) Retransmission overhead is measured from bandwidth consumption, that is, from packet retransmission. (iii) Beacon overhead is measured from bandwidth consumption that is from beacon transmission. (iv) Source of retransmission is measured as percentages of three sources of packet retransmission that consist of retransmission by CDS members, retransmission by waiting timeout mechanism, and retransmission by neighbor's missing packet mechanism. (v) Speed of data dissemination is measured as (4), where represent number of nodes that received the packet for the first time at the time and is total number of vehicles in the scenario: Simulation Results Reliability.The reliability results in highway scenarios are shown in Figure 11(a).All protocols provide the same reliability in every scenario because these protocols are well designed to operate in vehicular environment.All of them employ store and forward technique that can handle the intermittent connectivity.The difference of CDS forming algorithm does not affect the reliability due to simple scenarios.On the other hand, the difference of algorithms affects reliability results in urban scenarios as shown in Figure 11(b).NoG+DTA provides the highest reliability results in every scenario because the rebroadcast nodes are efficiently selected to cover all of nodes in the scenarios.NoG+WLA that operates well in urban scenarios provides reliability slightly less than NoG+DTA.This is because the coverage ability of WLA is less than DTA as mentioned in Section 5.1.APBSM that uses the extended version of WLA provides reliability less than NoG with the original WLA about 1-5%.The reason is from its broadcasting mechanism and its waiting timeout mechanism.A node in APBSM has to wait for waiting timeout expiration before each rebroadcasting.Moreover, when a node detects the missing message from its neighbors, it has to wait for more than one beacon interval before each retransmission.This reduces the opportunity to increase the reliability.DECA and NoG + DEN have the lowest reliability result due to its only density based algorithm that does not perform well in high density and complex scenarios. Retransmission Overhead.The retransmission overhead results are illustrated in Figure 12. For highway scenarios, all of protocols have the same retransmission overhead except APBSM.This is because APBSM uses the inversed function to calculate their waiting timeout, so the redundant retransmissions increase in dense area.Although DECA also uses the inversed function, it avoids using waiting timeout by selecting the next rebroadcast node from source.All of algorithms on NoG protocol can efficiently operate in every highway scenario. 
For urban scenarios, APBSM still has the highest retransmission overhead due to its waiting timeout calculation.Although its CDS algorithm is extended from Wu and Li's algorithm, but Wu and Li's can work better on NoG (NoG+WLA).NoG+WLA can decrease up to 35% of redundant retransmission from APBSM.For density based algorithm, DECA and NoG+DEN have the same retransmission overhead, but the results are worse than NoG+WLA and NoG+DTA by about 23%.The reason is that density based algorithm cannot work well on complex scenarios.NoG+DTA has the most efficient operation.NoG+DTA can provide the lowest overhead in every urban scenario.DTA has the advantage from density based algorithm and topology based algorithm so it is only a protocol that has the least International Journal of Distributed Sensor Networks number of retransmissions in normal density scenarios that consist of many sizes of groups of vehicles. Beacon Overhead.The beacon overhead results are illustrated in Figure 13.For highway scenarios, DECA and NoG+DEN have the lowest beacon overhead results in the simulation.The overhead of DECA and NoG+DEN is very low because DECA and NoG+DEN use only density information so they require only a number of 1-hop neighbors.For NoG+DTA and NoG+WLA, their beacon messages need to contain 1hop neighbor list.For APBSM, its beacon needs to contain position knowledge of neighbors and it has to use the constant beacon interval for accurate neighbors' position.So APBSM has the highest overhead results. For urban scenarios, all of results are in the same trend with highway scenarios.APBSM has the highest overhead due to its constant beacon interval.DECA and NoG+DEN have the lowest beacon overhead.NoG+WLA and NoG+DTA have 55% more beacon overhead than density based algorithm.However, the difference of overhead results between density based algorithm and topology based algorithm in urban scenarios is less than the difference of results in highway scenarios.This is because the average beacon sizes in urban scenarios are larger than highway scenarios.Note that the protocol has to maintain 2-hop neighbor list for topology based algorithm.The size of beacon depends on the size of scenario.The adaptive beacon interval significantly reduces overhead in the following case.When a node is in the dense area, the size of beacon is larger, while the beacon interval is also longer, so the large beacon will be reduced. Source of Retransmission. The source of retransmission represents the efficiency of protocols and algorithms.The protocols and algorithms that have the higher retransmissions from their preferred nodes are better because these nodes are working as designed.This affects the performance in terms of data dissemination speed.The reason is that the preferred nodes can immediately rebroadcast or have the shorter waiting timeout than other nodes.The preferred node of DECA is selected by source node and the preferred node of APBSM, NoG+DEN, NoG+DTA, and NoG+WLA is a CDS member. 
The results are shown in Figure 14. For the density-based algorithms, DECA and NoG+DEN have the best results in the highway scenarios because these algorithms operate well in simple scenarios. Both protocols have very similar percentages of preferred-node retransmissions, but DECA does slightly better than NoG+DEN in the highway scenarios. The reason is that DECA selects the next rebroadcast node from the source's perspective: the selected node is the neighbor of the source with the highest density, so the number of selected nodes is higher than the number of CDS members produced by NoG+DEN. In the urban scenarios, however, the rebroadcast nodes produced by both algorithms are not sufficient to cover all nodes.

For the topology-based algorithms, the results of APBSM and NoG+WLA follow the same trend as the coverage results in Section 5.1. The topology-based approach is suited to complex scenarios, so at higher densities these algorithms achieve higher percentages of preferred-node rebroadcasting. NoG+DTA has the highest percentage of preferred-node retransmissions in every scenario because it combines a density-based algorithm, which works well in simple scenarios, with a topology-based algorithm, which works well in complex scenarios.

Speed of Data Dissemination. The speed of data dissemination results for the highway and urban scenarios are shown in Figures 15 and 16, respectively. The results at a density of 6 veh/km represent sparse scenarios (2-10 veh/km), those at 30 veh/km represent normal-density scenarios (20-40 veh/km), and those at 80 veh/km represent high-density scenarios.

For the highway scenarios, NoG+DEN is the fastest protocol in low-density scenarios but the slowest in high-density scenarios, owing to its density-based algorithm; DECA behaves the same as NoG+DEN. Overall, NoG+DTA is the fastest protocol in the simulation results. APBSM and NoG+WLA are slightly slower than NoG+DTA in all scenarios, but the difference is less than 0.1 milliseconds.

For the urban scenarios, DECA and NoG+DEN are slower than the topology-based algorithms because of the complexity of the connections. APBSM is a bit slower than NoG+DTA and NoG+WLA in sparse and medium-density areas, because rebroadcast nodes in APBSM must wait for their waiting timeout before each rebroadcast. On the other hand, APBSM provides the fastest data dissemination in high-density scenarios, owing to the large number of redundant retransmissions discussed in the retransmission overhead results. NoG+DTA and NoG+WLA provide almost the same speed of data dissemination.
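To make the metric definitions in Section 5.2 concrete, the following is a minimal post-processing sketch of how the reliability and dissemination-speed figures reported above could be computed from per-run logs. It is not the paper's simulator code: the log format (one first-reception timestamp per vehicle, with a negative value meaning the vehicle never received the packet) and all identifiers are assumptions made for illustration, and the cumulative curve is only one plausible reading of equation (4), which is not reproduced in this excerpt.

import java.util.Arrays;

/** Minimal post-processing sketch for two of the metrics in Section 5.2 (illustrative only). */
public final class DisseminationMetricsSketch {

    /** Reliability: percentage of vehicles that received the packet by the end of the run. */
    static double reliabilityPercent(double[] firstReceptionTimes) {
        long received = Arrays.stream(firstReceptionTimes).filter(t -> t >= 0).count();
        return 100.0 * received / firstReceptionTimes.length;
    }

    /** Cumulative dissemination curve: fraction of all vehicles reached by each sampling instant. */
    static double[] disseminationCurve(double[] firstReceptionTimes, double step, double endTime) {
        int samples = (int) Math.ceil(endTime / step) + 1;
        double[] curve = new double[samples];
        for (int i = 0; i < samples; i++) {
            double t = i * step;
            long reached = Arrays.stream(firstReceptionTimes).filter(x -> x >= 0 && x <= t).count();
            curve[i] = (double) reached / firstReceptionTimes.length;
        }
        return curve;
    }

    public static void main(String[] args) {
        double[] log = {0.0, 0.4, 0.9, 1.3, -1.0, 2.5}; // toy run; -1.0 marks an unreached vehicle
        System.out.printf("reliability = %.1f%%%n", reliabilityPercent(log));
        System.out.println("curve = " + Arrays.toString(disseminationCurve(log, 1.0, 3.0)));
    }
}

Averaging such per-run figures over the 20 runs, together with their 95% confidence intervals, would yield points of the kind plotted in Figures 11-16.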
Conclusion

In this paper, we proposed an approximation algorithm for constructing CDS members: a density- and topology-based algorithm called DTA. DTA combines the advantages of density-based and topology-based algorithms. A density-based algorithm can construct efficient CDS members in simple connections or low-density scenarios, whereas a topology-based algorithm can construct efficient CDS members in complex connections or high-density scenarios. The simulation results show that DTA outperforms the other algorithms in terms of coverage and the ratio of CDS members to total nodes, improving coverage over previous algorithms by up to 50%. We also proposed a non-geographical-knowledge broadcasting protocol, called NoG, consisting of a broadcast mechanism, a waiting-timeout mechanism, and a beacon mechanism, designed to achieve a high data dissemination speed while consuming as few network resources as possible. The simulation results show that NoG provides the fastest data dissemination speed and the highest reliability. Currently, the beacon size of NoG depends on the size of the 2-hop neighbor list, which can grow significantly in dense areas, and most broadcasting protocols in VANETs use beacons of variable size. Our future work is therefore to reduce the beacon overhead by using a fixed-size beacon; the solution may also be applicable to other broadcasting protocols in VANETs.

Figure 1: An example of a connected dominating set (CDS).
Figure 2: An example of the gateway condition.
Figure 8: Occurrence of each size of group in urban scenarios.
2018-04-03T01:36:40.946Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "0fd3f0ee8c375510d1e44f849bf8626e53012e5d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2014/906084", "oa_status": "CLOSED", "pdf_src": "Sage", "pdf_hash": "d68694d4eaacc63991c94d272281ab4e9e44c0bb", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
234237463
pes2o/s2orc
v3-fos-license
A values-based phenomenology for substance use disorder: a new approach for clinical decision-making

Abstract: Phenomenological psychopathology has been defined as a human science concerned with the object on which clinical psychology and psychiatry act. How psychopathological experiences are understood is an important factor determining decision-making in clinical care, and an accurate understanding of psychopathology is fundamental to the effectiveness of mental health treatments. This is even more important in a field such as substance use disorders, in which social and cultural values influence both diagnosis and decision-making. In this article, we offer a contribution to clinical decision-making in substance use disorders by proposing the association of Phenomenological Psychopathology and Values-Based Practice, constituting a Values-based Phenomenology. We present a fictitious clinical case (to preserve confidentiality), illustrating a three-step practical application of Values-based Phenomenology. We conclude that, although still a nascent discipline, Values-based Phenomenology offers a promising approach to reducing the gap between services and patients' needs in clinical decision-making, and thus to improving clinical care in substance use disorders.

Keywords: Alcoholism; Alcohol-related disorders; Phenomenology; Phenomenological psychopathology; Values.

Phenomenological Psychopathology has been defined as the human science concerned with the object of psychological and psychiatric clinical care (Messas, Tamelini, Mancini, & Stanghellini, 2018). The therapeutic strategy adopted for a given clinical case depends critically on how the disturbed experience of the person in question is understood. In this respect, psychopathology can be said to be a founding discipline for the processes of clinical decision-making and for establishing the criteria by which therapeutic success or failure is evaluated. It is therefore essential that a clear definition of the psychopathological concepts with which a clinician operates precede any discussion of strategies for mental health care. This general condition is particularly relevant in the case of Substance Use Disorders (SUD). In this field, the limits between normal and altered experiences are diffuse, favoring superficial interpretations of the phenomena, which can have harmful repercussions for the people who use the substances in question and suffer the consequential harms. These harmful consequences can extend to areas far beyond clinical insufficiency or incompetence, reaching basic principles of contemporary societies, such as human rights. A common example is the different attitude society takes towards a person who abuses substances, according to their diagnostic status. A clinician's identification of a valid psychopathological state favors a more positive societal attitude toward that person, while a moralistic interpretation, for example, accentuates the stigma that frequently plagues this population. The most recent attempts to place substance abuse-related diagnoses on a more solid and objective basis have been quite disappointing, if not counterproductive (Arria & McLellan, 2012).
Many commentators (including some of those working in the neurosciences) have argued that the broad use of neuroscientific concepts and knowledge as markers of diagnostic certainty, which might have provided this foundation, has not delivered the desired results, either in terms of improved services or in terms of reduced stigma (Cuthbert, 2014; Hall, Carter, & Forlini, 2015; Kupfer, First, & Regier, 2002). The universally accepted criteria for SUD continue to consist of a mixture of subjective elements (e.g., craving for substance use), behavioral elements (e.g., narrowing of the drinking repertoire) and biological elements (tolerance and abstinence) (American Psychiatric Association, 2013; World Health Organization, 1999). Although these criteria are able to cover, in a generic way, the majority of problematic experiences, they do not shed much light on the specific differences among the ways in which people, in their daily realities, actually experience them (Leal, Muñoz, & Serpa, 2019). Their application depends on subjective criteria that cannot be defined objectively: what a person or a social group understands by restriction of the drinking repertoire, for instance, is by no means universal, and some social groups may even hold an attitude that positively values routine alteration of the state of consciousness (Messas & Soares, 2021). It is, therefore, the specifics of each clinical case, scientifically captured through the implicit values within the scientific categories of thought, that determine mental health decision-making. In the same way, the interpretations that each clinician, each patient, and each family member gives to the substance abuse greatly influence the decision-making processes. It is not uncommon for different clinicians to propose different and even conflicting referrals in situations of substance abuse that are similar from a descriptive or behavioral point of view. Adopting a diagnostic tool capable of accurately capturing the ways in which people experience their substance abuse is therefore central to the decision-making processes in this mental health field. Phenomenological Psychopathology (PhP) is a branch of psychopathology that seeks to explore the conditions of possibility of experiences (Fuchs, Messas, & Stanghellini, 2019). Originally conceived as a descriptive psychopathology of subjective experiences (Jaspers, 1997), PhP has gradually deepened its object of interest, seeking to reach the subjective and intersubjective structures that underpin human existence (Messas, 2014a). Especially in the last two decades, the social and scientific relevance of this discipline has increased all over the world, as can be seen from the publication of the Oxford Handbook of Phenomenological Psychopathology. As a procedure that investigates the structure of existence, Phenomenological Psychopathology has been expanding its sphere of application in the field of mental health, inspiring developments in phenomenological clinical care, with its use becoming widespread in both the psychotherapeutic (Cury, 2016; Holanda, 2016; Moreira, 2013) and the psychiatric dimensions (Tamelini & Messas, 2017). Phenomenological Psychopathology's contribution to SUD is still somewhat limited, although there have been a number of relevant initiatives in that direction (Di Petta & Tittarelli, 2019; Kemp & Butler, 2014; Messas, 2014b, 2015, 2021; Messas, Fukuda, & Pienkos, 2019; Pringuey, 2005). The phenomenological exploration of SUD is thus more of a scientific frontier than consolidated ground.
An emerging and promising field for the application of Phenomenological Psychopathology as an instrument to increase clinical effectiveness in SUD care is its association with Values-Based Practice (VBP), as we will argue in the next section.

Values-Based Practice and clinical decision-making

Diagnoses in mental health are loaded with value-led decisions, most often implicit (Fulford, 1989/1999; Fulford, Broome, Stanghellini, & Thornton, 2005; Sadler, 2005). Take, for example, the classic case of the psychopathological diagnosis of homosexuality, which formerly belonged to the set of mental disorders. Nowadays, with the cultural changes that have rapidly reduced the stigma linked to sexual conducts and gender identities, it seems natural to the scientific community that this existential condition figures among the variations of normality. In the same way, decision-making is heavily charged with assumptions which, in certain clinical situations, are or may be more important than the criteria on which a psychopathological condition is defined. For example, society can be expected to interpret the alcohol abuse of an underage person differently from the same condition in, say, someone in their 40s. Although in diagnostic terms the two may be similar, they immediately imply different decision-making: one would expect the clinician or society not to tolerate the abuse (or the use) in the former case and to show greater tolerance in the latter. Only valuation differences in the interpretation of the meaning of use in each case would justify a firm prohibition in the first case and leniency in the other. This difference in attitudes would be based on the understanding, by contemporary societies, that alcohol abuse in an adolescent may lead to brain injuries, or even behavioral injuries, that the underage person is unable to choose for him or herself. In the case of adults, although societies differ considerably about the acceptable limits of substance abuse, there is certainly no absolute refusal of the idea that adults can decide, in part, on a self-inflicted conduct, as long as it does not cause harm to others. The importance of understanding the role of personal values in clinical decision-making has been increasingly recognized in the United Kingdom (UK) since the establishment in 2003 of a Values-Based Practice program in the UK's Department of Health (under the auspices of its then policy implementation section, the National Institute for Mental Health in England) (Fulford, Dewey, & King, 2015). Values-Based Practice has since been developed as a decision support tool that works as a partner to Evidence-Based Practice in all areas of medicine and the health sciences (Fulford, Peile, & Carroll, 2012). Where Evidence-Based Practice provides a process that supports the balanced use of complex and sometimes conflicting evidence in clinical decision-making, VBP does the same for values: it provides a process (based on learnable clinical skills and other key process elements) that supports the balanced use of complex and sometimes conflicting values in clinical decision-making. Specifically in relation to SUD care, this means that VBP supports the incorporation of the different (and potentially conflicting) values of the various stakeholders involved, promoting the co-responsibility of the subjects directly involved in the care process: users, caregivers, clinical professionals, and managers.
Values-Based Practice is not, therefore, a practice that presupposes judgment of the behavior of the other, but the establishment of care relationships permeated by respect between the parties. This is why VBP allows balanced decision-making (in both clinical and policy contexts) on procedures and care processes implemented in a democratic and consensual manner. Differences in values are a potential cause of conflict, so applying VBP requires the development of practical skills, grounded in theoretical concepts, that can minimize potential friction. Although VBP has been used for some time in the United Kingdom, there is no record of its application to decision-making in SUD clinical care. The early development of VBP was based on the analytic philosophy of values (Fulford & van Staden, 2013), but there has been growing recognition of the need to extend its theoretical base to incorporate insights from phenomenology and other areas of contemporary philosophy (Fulford & Stanghellini, 2019). Since its inception, phenomenological philosophy has also highlighted the importance of values for the understanding of reality (Scheler, 1980). The incorporation of phenomenological insights into VBP, and of insights from VBP into phenomenology, provides new opportunities to develop treatment approaches that benefit from accurate observation of the changes in the structure of subjectivity associated with psychiatric disorders, using this to build collaborative treatment approaches that reflect the values and experiences of patients and thus adjust the provision of mental health services to patients' subjectively perceived needs. Moreover, evidence indicates that shared decision-making increases effectiveness in the treatment of SUD (Joosten, Jong, Weert-van Oene, Sensky, & van der Staak, 2011). The objective of this article is to advance the links between PhP and VBP and to illustrate their potential applications to SUD. Building on this objective, and in line with recent publications on eating disorders (Stanghellini, Mancini, Castellini, & Ricca, 2018), schizophrenia (Stanghellini & Ballerini, 2007), and other areas of psychopathology (Stanghellini & Mancini, 2017; Stanghellini et al., 2019) that have used similar approaches, our aim is to offer initial elements for the constitution of a new discipline, Values-Based Phenomenology (VBPh). We believe that VBPh can contribute to improving clinical effectiveness in SUD, an area of public health that is still relatively neglected. We argue that attention to the aspects of experience highlighted by VBPh, and to the ways in which they are defined and treated by patients and caregivers, is essential for approaches that fully recognize the personality of mental health patients, leading to their greater empowerment, agency, and engagement in health care settings. To illustrate how this method can be used in a clinical setting, we will present in the next section the story of "Marcos". As we describe further below, Marcos is a fictitious case, although his story is based on many cases encountered by the authors in their careers. In Marcos' story, there is some controversy in clinical decision-making involving personal and family values and the interpretation of the existential meaning of substance use. We outline some of the pragmatic consequences of these controversies, including a three-step practical approach to clinical decision-making, that the use of VBPh brings to the SUD approach.
We argue that in complex clinical situations such as the one presented by Marcos, in which decision-making is difficult, VBPh shows promise as an instrument for dealing with dilemmas and existential complexities, offering conceptual and practical tools to support informed, shared clinical decision-making. In our case analysis we will present some preliminary fundamental concepts of how VBPh functions, highlighting the way in which concepts derived from phenomenology can be productively associated with those from VBP. We will also highlight the usefulness of a combined VBPh as a guide for decision-making by the clinician, in shared action with the patient and relatives.

The case of Marcos 4

Marcos is 50 years old and has a well-established professional career, which affords him a stable economic situation and a very comfortable life. Married, he is the father of two teenage daughters. Marcos has a long history of alcohol abuse, dating from when he was 18 years old. His pattern of daily intake of high doses of ethanol has varied very little throughout his life. He recognizes his abuse and describes himself as an alcoholic. He adds, however, that he only knows how to live this way and that he likes to drink. He judges that he has achieved everything in life by being like this and, therefore, would like to be kept aware of the damage caused by drink, but without interrupting its use. Since his laboratory tests show no signs of damage caused by alcohol (except for slight cortical brain atrophy, possibly attributable to alcohol use), he is proud of his body's strength. Even when alerted to the risks that cannot be seen in laboratory tests, such as the high risk of stroke, heart disease, and cancer, he does not change his position. He sought treatment in order to have someone with whom to share the difficulties of a stressful day-to-day life. He is a kind and cordial patient, sincere in the manifestation of his feelings, although intractable when it comes to alcohol abuse. His family, however, disagrees with his optimism regarding the damaging effects of alcohol on him. His wife and one of his daughters report that drinking exposes him to many situations of social wear and tear (such as conflicts at work or inappropriate behavioral jocosity) and to aggressive attitudes at home, occasionally amounting to marital violence. In addition, they say that Marcos is very repetitive in his themes and habits, and they fear that this restrictive behavior is caused by frequent intoxication. They say that Marcos' close friends share this opinion but do not know what to do about his behavior. They all fear that his professional situation may deteriorate if he continues with this lifestyle. After much insistence from his family, Marcos underwent a neuropsychological evaluation to investigate possible alterations in his cortical functions. The evaluation found important deficits in executive functions, incompatible with the demands of his work. In all probability, the deficits were not evident in his professional daily life because of his high intellectual capacity. The results of the test were presented to the patient and the family, leading to different suggestions for referral. Marcos disdains their importance, maintaining that they tell him nothing he did not already know. The family, on the other hand, receives them with extreme concern, demanding immediate measures from the patient to stop his drinking habit.
His wife says that she cannot continue in the relationship if he does not do this, but Marcos refuses to take any action. It is important to stress that at no time had the patient demonstrated any cognitive disability that would impede or compromise his ability to make decisions.

A three-step VBPh process

In the next three subsections, we outline three basic steps in the exercise of VBPh as exemplified by Marcos' story: step 1, the analysis of conflicting values; step 2, a values-enriched phenomenological psychopathological understanding; and, finally, step 3, the process of clinical decision-making using VBPh.

4. "Marcos" is a fictitious character, but his story is based on a number of personal histories from the authors' extensive clinical experience. The events and other details of his story thus bear no relation to, and are not intended to represent, any person living or deceased.

The three sections, taken together, illustrate the application of VBPh in SUD. We should emphasize, however, that any application of VBPh (like that of VBP itself) depends critically not just on what is done but on how it is done. Communication skills in particular, including skills of listening, are crucial (Fulford et al., 2012). This is why we have chosen to illustrate how the three-step process works out in Marcos' case rather than presenting it as a direct continuation of his story. Our three-step process should not be understood as a procedure to be followed blindly. It is rather an outline for a process, the proper use of which in clinical decision-making requires the skills of a fully trained and appropriately experienced practitioner, applied in a way that is sensitive to the particular values of the individual patient concerned.

Step 1 - Analysis of conflicting values

In his story, what Marcos values above all is the preservation of his personal will as a manifestation of his independence and autonomy. He supports this belief with the fact that he attained his position as a successful professional through the ability to exercise his free will without any constraints. He interprets professional success as strong evidence of his ability to assess and control his own risks and to test them to their limits. Although he cares about his family's opinions, he does not guide his decision-making by their suggestions or even by the ultimatum he received from his wife. He also places greater reliance on the objective test results, which do not indicate significant current injury, than on the findings related to his cognitive state. This highlights Marcos' hierarchy of values, in which his will prevails over both family concerns and objective findings. This is an individualistic valuation of his SUD. The family, on the other hand, structures its decision-making regarding treatment from a collectivist perspective: they understand that their intervention is necessary and will have a salutary effect, as the patient is on the verge of seriously harming himself, in addition to the damage he has already inflicted on his family relationships. On first reading Marcos' story, it may seem that his values, and the conflicts between him and his family, are obvious. But this is an illusion. In the context of everyday practice, clinicians all too often misread what is important to their patients, and this is why enhanced understanding of patients' values is central to values-based practice (Fulford et al., 2012). As in Marcos' story, the key values involved may be in part explicit and in part implicit.
As to explicit values, values-based practice offers a range of skills, including (as noted above) specific aspects of communication skills. Positive results with values-based training in these skills, aimed at enhanced understanding of patients' values in other clinical areas (Handa et al., 2016), suggest that VBP training for those working with SUDs could be similarly productive. But (as in Marcos' case) many of the key values in a given patient's values hierarchy may be implicit rather than explicit in nature. These values thus have to be made explicit if they are to be balanced in a process of shared decision-making between clinician and patient. This is where the resources of phenomenology become important alongside the established processes of VBP, for it is by way of phenomenological insights that implicit values inherent in a patient's values hierarchy may become explicit.

Step 1 (analysis of conflicting values) in our three-step process can thus be thought of as comprising two levels. The first level involves enhanced understanding of the values included in a patient's hierarchy. Where these values are explicit, this can be achieved essentially by exercising the (learnable) clinical skills of values-based practice. But analyzing the implicit values requires the additional insights of phenomenology. In other areas of clinical work, understanding the explicit values involved may be sufficient; this is true, for example, in many areas of surgical decision-making (Handa et al., 2016). But as Marcos' story illustrates, with SUD, effective decision-making may depend on understanding not only the explicit but also the implicit values involved. It is here that the enriched understanding offered by Phenomenological Psychopathology comes into play. For it is only once the conflicting values (implicit as well as explicit) are identified that the patient (sometimes with relatives and other stakeholders and sometimes on their own) is empowered to work in a process of shared decision-making with the clinician based on an understanding of his or her own hierarchy of values.

Step 2 - Values-enriched phenomenological psychopathological understanding

The values-based insights above into the values likely to be in conflict between Marcos and his family and society allow a values-enriched phenomenological psychopathological understanding of his story. Thus, the fact that the patient continues in his successful professional trajectory, despite objectively verified and important cognitive losses, shows the almost exclusive relevance that Marcos' professional identity has at this moment of his existence. The evaluation he makes of himself, with which he justifies his decisions regarding alcohol, is based on the history of his professional success. It is important to emphasize that the fact that he has been practicing his profession for many years strengthens the automatic maintenance of that identity. The very sustainability of his professional identity rests on an incorporated habit rather than on an impulse to expand professionally, whether in economic terms or in the search for social relevance. Thus, the way in which Marcos' existence is supported by his professional identity would not seem tenable if there were a negative change in his work situation brought about by his alcohol consumption, as his family and friends fear.
Marcos' refusal to change his behavior in response to any of the suggestions of his family and friends indicates high personal rigidity and a restriction of existential temporality to the present dimension. This rigidity, in turn, derives from the low relative participation of intersubjectivity in his existence. Consequently, Marcos shows an inability to consider and adapt to the perceptions of his closest peers, further reinforcing the need for hegemonic existential support in his professional identity. At this moment in his life, Marcos is increasingly just his professional identity.

Step 3 - The clinical decision-making process by VBPh

The values-enriched phenomenological psychopathological understanding of Marcos' story outlined above in Step 2 informs the process of clinical decision-making. To anticipate: phenomenology provides an insightful understanding of the hierarchy of values driving Marcos' life choices, but, as in so many similar stories of patients with SUD, the values of which Marcos' hierarchy is composed are complex and conflicting. Effective clinical decision-making therefore depends on balancing the tensions between these complex and conflicting values. This balancing of values (made possible by the exercise of learnable values-based clinical skills) is the essence of what VBP adds to a phenomenologically informed process of clinical decision-making (Fulford et al., 2012). The starting point for clinical decision-making in this case is that, as Marcos does not present any impairment in his ability to judge reality, any decisions in respect of his treatment must (according to contemporary standards of medical ethics and law) follow his wishes, established after discussion with family members. This, however, as we have seen, is precisely where conflicts arise between Marcos' values and those of his family and the wider society. The patient intends to continue drinking despite the risks pointed out by his family and the clinician. If he has to choose, he will maintain his pleasurable drinking habit and put up with the losses this brings to his relationships and to himself. Thus, he has established a hierarchy of values in favor of his personal independence and decisional autonomy, with indifference to the opinions of those closest to him. This is where, as noted above, the skills and other resources of VBP for balanced clinical decision-making come into play alongside those of phenomenology. Phenomenological psychopathological comprehension of the situation deepened the understanding of this conflict, presenting the structural bases on which Marcos' free decision took place. Using a VBPh approach, the sense of personal values is understood from its conditions of possibility. From this perspective, one can say that Marcos opts to maintain the integrity of his existence based almost exclusively on his professional identity. Hegemonic reinforcement of one of the conditions of possibility of existence favors the weakening of the other conditions of possibility, notably his closest intersubjective relations. He makes this choice even in the face of the risk of destroying his marriage, an event that would further reduce his existential support in the world and would demand further reinforcement of his professional identity.
In short, from a phenomenological point of view, Marcos' decision represents an existential movement of growing investment in the supremacy of a single personal identity, leading to an anthropological disproportion (Messas, 2021) with a corresponding increase in existential vulnerability, owing to the lack of support in other dimensions.

A balanced approach to meeting Marcos' existential needs

Once the patient is informed of the phenomenological meaning of his decisions, he can decide, together with the clinician and family members, on the least harmful way to manage his behaviors as he voluntarily moves toward a situation of growing vulnerability. With the phenomenological contribution, these decisions become informed by an expanded sense of personal agency: the patient's decision comes to be understood no longer as merely a quest for autonomy, but rather as a voluntary reinforcement of a condition of anthropological disproportion. This allows the patient, clinicians, and family members to make the important strategic decisions necessary to achieve this difficult objective by fostering the following three specific existential needs of Marcos.

(a) Increased care in the preservation of his professional identity. If his existence depends more and more on this identity, it is vital that the first objective of clinical decisions be to prevent the patient from taking risks that damage his professional image through inappropriate behaviors related to alcoholism (it is important to emphasize that the exercise of his profession does not imply immediate risks to third parties) and to support him in the best possible performance of his role, in the face of the risks of disintegration that alcoholism brings.

(b) Preservation of the functionality of his professional identity. For the above item to be carried out as effectively as possible, it is important that the automatic acts of daily working life, particularly those strictly related to Marcos' job description which no longer require creativity but arise as pre-reflexive habit, have priority among his work tasks. At work, he should try to avoid positions or functions that involve behavioral innovations requiring cognitive plasticity, as this is quite reduced, and demands of this nature may make the existing cognitive deficit visible, ultimately compromising the previous strategy.

(c) A voluntary effort on the part of Marcos to offer greater participation and openness to intersubjectivity, especially to his family, seeking the multiplicity of perspectives that this increase brings. It is characteristic of SUD psychopathology to reduce the ability to experience the dialectical complexities of situations. This existential condition restricts the power of appreciation of oneself and of the world by favoring a unilateral subjectivism enclosed within itself. The inclusion of subjectivities that bring more perspectives on his life serves as an antidote to the closure of his historical self within its own solipsistic subjectivity.

Summary of the impact of the three-step process of VBPh

Building on a phenomenological appreciation of the conflicting values that hindered his strategic treatment decisions, Marcos comes to understand the pre-reflexive dimensions involved in his decision and, consequently, is able to orient himself and the clinician by reference to them.
The final result of the exercise of the three-step process of VBPh in Marcos' story is thus an enhanced capacity for the efficient exercise of shared decision-making between patient and clinician, supported by a better understanding of the existential meaning of the conflicts inherent in Marcos' personal hierarchy of values.

Conclusion

Clinical decision-making is the fruit of a complex process in which knowledge-based strategies must be adapted to patients' personal values and social context if they are to be fully effective. Because of the complexity of this process, a gap is often observed between the supply of services and the needs and other values subjectively experienced by patients. This gap is particularly evident in SUD where, as in Marcos' story, the patient's values are in direct conflict with those of the clinician (and the patient's family). We have argued that a promising strategy for reducing this gap, and thus increasing clinical effectiveness in the field of SUD care, is the association of Phenomenological Psychopathology with values-based practice, constituting a new discipline, values-based phenomenology (VBPh). We presented a clinical case, that of Marcos, a fictitious case based on an amalgam of real cases, illustrating the basic dynamics of values and psychopathological findings. As Marcos' story indicated, VBPh follows three key steps: analysis of values, values-enriched phenomenological understanding, and clinical decision-making. In the first step, the multiple value conflicts involved in clinical decisions in SUD are analyzed using the skills of values-based practice enhanced with phenomenological insights. In the second step, the values identified and elected as central to the treatment undergo a comprehensive phenomenological analysis, in which their meanings for existence are dissected in terms of their structure; through this analysis, the patient and the family and other stakeholders can better understand the existential meaning of the decisions taken and their consequences for their whole existence. Finally, in the third and last step, this values-enriched phenomenological analysis guides the elaboration of clinical decisions, allowing them to follow the wishes and values of the patient while, at the same time, making explicit the existential dimensions of the valuation options. This is shared decision-making in action, for the association of psychopathology with the analysis of values allows the best clinical strategy to be adapted to the strategies chosen by the patient in consultation with the clinician. Although VBPh is still a nascent discipline, the clinical example above demonstrates how its application can contribute to the construction of effective clinical care in a mental health sector as multifaceted as SUD, while taking into account both the scientific evidence and the values (the desires and conflicts) of the patient and their family members. VBPh supports this effort by investigating in depth and illuminating the global existential meanings involved in clinical strategies. Using this method, we intend to develop clinical care based on a phenomenologically enhanced, values-based model of shared clinical decision-making that avoids a merely superficial treatment of the existential complexities involved in so many human actions and decisions. This new VBPh model allows clinical decision-making to offer complex solutions to necessarily complex issues.

Contributors
G. MESSAS was responsible for conceptualization, data curation, formal analysis, supervision, writing of the original draft, and review and editing. K. FULFORD was responsible for conceptualization, formal analysis, methodology, supervision, validation, and writing, review and editing.
2021-05-11T00:07:07.267Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "23c09086172f7f7cd536a2bafd22ce980083aef0", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/j/estpsi/a/Zs63bLnjbJhpB7h8WNSZXGK/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a803a47f35e296fcc6d71ccb3f157b97575e6b4e", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
246330383
pes2o/s2orc
v3-fos-license
New Trends of Culture Response in Contemporary China: A Case Study of Cultural Localization

With the development of modern China, significant cultural changes have taken place domestically, and there are different trends in the responses to Disney culture in China. This paper finds that Chinese people's response to western culture has shifted from the full acceptance of the past to a positive response in contemporary times, including a reinterpretation of Disney's classic images and critical responses. Regarding the new interpretation of Disney's classic princess image, the cultural phenomenon of "princess on the run" reveals the impact of indigenous social thought and the development of new technologies on cultural localization in contemporary China. Critiques of the 2020 release of Mulan reflect the enhancement of Chinese cultural strength, a deeper understanding of local culture, and a new warning for cross-cultural companies. The discussion of these issues shows the growing cultural confidence and the new response to western culture in modern China.

INTRODUCTION

Disney culture seems to have become a part of the new local culture of China, and people show a very high acceptance of it. Disney culture entered China with Mickey Mouse in 1930 and, as the mainstream of popular culture at that time, attracted a large number of Chinese people and had a strong appeal across all ages. The witty cartoon characters were popular not only with children but also with the parents who took their children to watch the cartoons. Lu Xun, the famous author, took his wife and children to Shanghai's Guanglu Theater in 1933 to watch the Disney animation Mickey Mouse. Today Disney remains part of popular culture in China, but people's view of it has changed: the way young Chinese interpret Disney culture has changed a great deal, and Disney culture seems to have become a new kind of local culture in China with its own local interpretation. In past research, much of the literature on cultural localization and cultural globalization has focused on the situation of cultural export in developing countries and on the advantages and disadvantages of cultural globalization in the context of globalization. However, for some rapidly developing countries, especially China, there is a lack of corresponding research on current cultural innovations. After China's reform and opening up, great changes took place in the domestic cultural environment, owing to enhanced cultural strength and new needs for cultural communication. As for the study of Disney films, most work describes the "cultural hegemony" of the early 1980s, while there is little analysis and research on cultural localization in contemporary China. Moreover, most of the recent literature on these changes merely states the phenomena and discusses them separately. This paper uses the case study method to analyze the new responses to Disney culture in China and to discuss, to a certain extent, the causes behind them. By exploring this new trend among Chinese people and discussing the relationship between native culture and the media, it situates the phenomenon within a global trend toward diversity and raises the question of how to pursue global cultural exchange and fusion while retaining local cultural characteristics.

POSITIVE RESPONSE INSTEAD OF PASSIVE ACCEPTANCE OF DISNEY CULTURE

The reform of contemporary mainstream thought is closely related to western culture.
With the wave of reform and opening up, people sought their own cultural identity by embracing western cultural products in various fields. In the initial stage, when Disney movies entered the Chinese cultural market, people fully accepted the cultural interpretation of the images, and Snow White, Mickey Mouse, and other figures were very popular household names. At that point, people were mainly appreciative and enjoyed the great novelty these images brought.

Growing cultural confidence in contemporary China

A country's local cultural strength and the development of its cultural environment have a profound impact on people's attitudes toward and views of foreign culture. The improvement in cultural strength brought about by the construction of Chinese native culture is the fundamental guarantee for the interpretation and criticism of foreign re-constructions. After China's reform and opening up, the internal culture of China underwent tremendous changes. Especially since the 18th National Congress of the Communist Party of China (CPC), the prospects for Chinese cultural development and the key ideas guiding its construction have taken the enhancement of cultural confidence, cultural responsibility, and cultural soft power as their entry point (Xi Jinping, 2011). Correspondingly, because the entertainment industry and cultural performances are accessible to Chinese people at all levels, the overall acceptance and understanding of Chinese culture has been enhanced, and the soft power of local culture has grown as well. In recent years, there has been widespread cultural awareness and confidence in China. On the basis of absorbing and digesting western literary theory, China has gradually explored a road of contemporary literary theory construction that integrates and co-exists with the West. In the contemporary context, Chinese people try to establish a fluid cultural standpoint and to look at foreign cultures critically in order to promote the development of their own culture. This is a shift from acceptance to symbiosis, accompanied by an increase in the soft power of Chinese culture. China's acceptance of western cultural products is rapidly deepening and diversifying.

New demand for Western cultural products

The Chinese public has a new demand for Western cultural products: from high praise, to reinterpretation, to criticism after reflection. The reinterpretation of Disney's cultural images in indigenous terms is not just mechanical, passive acceptance or simple pop worship; it is in keeping with China's cultural and social development, and on this basis these interpretations are creative and innovative. At the same time, people also show a trend of cultural criticism: no longer blind praise, but a critical response established from their own cultural position.

"PRINCESS ON THE RUN": LOCAL CULTURAL INTERPRETATION OF DISNEY AESTHETICS

Disney culture, as one of the best-known foreign cultures in China, has inevitably been interpreted in a new way as China has developed. "Princess on the run", a term created by Chinese netizens, became prevalent on China's main online platforms, where netizens use it to describe real-life female figures with princess-related traits. Owing to the prevalence of Disney movies and the repeated appearance of these classic images, the new meaning of "princess" has become more and more widely applicable: anyone who bears some resemblance to a Disney princess can be called a "runaway princess".
An extended interpretation of Disney princesses in contemporary China

Take the celebrity Zhang Han Yun as an example. Ms. Zhang, 32, has been described as a "Disney princess on the run", not because she has played Disney princesses or is as beautiful as one, but because of her adherence to music and her fabulous performances (Ma Xiao Ran, 2021). Nor is she alone: the term has been widely used to describe glamorous female stars, female athletes competing at the Tokyo Olympics, and even women who show admirable moral character in everyday life. The concept has changed more than the image itself and has been given a new definition in the Chinese context. In traditional Disney movies, Disney princesses are often portrayed as one in a million, born out of the ordinary; although the image has changed somewhat over time, it remains a collection of great virtues whose bearer always makes an astonishing entrance. More specifically, "princess on the run" is commonly used to describe an act of rebellion in which a lady of noble birth runs away for freedom or love, against the decisions of others, and seeks her own identity. In the present context, "on the run" does not only mean running away; it also means that the person described shares the qualities or characteristics of the central figure. In this sense, the term can easily be applied to any female image with similar traits.

The response of contemporary Chinese feminist social thought to Disney princess culture

What people pursue is not only equal rights for women and men but also the development of women's "independent personality". With the development of the economy and the influx of feminist currents of thought, people's conceptions of women have been extended to independent individuals and their self-worth. Women are seen as filled with strong self-esteem, caring for their own characteristics and needs, and expressing themselves; they are no longer dependent on their families but have their own financial resources and the ability to live independently. People's expectations of the princess role now add to the true, good, and beautiful princess of the classics an image of courage, independence, and rebellious spirit, which is more in line with real life and people's spiritual needs. The definition of the princess has become blurred and adapted to the general public, which also reflects the development of female personality consciousness in China. The multi-level interpretation of classic images reflected in the "princess on the run" culture is the epitome of the multiple and rich personality consciousness of women in present-day China.

Technological progress promotes the development of cultural response

The innovations of digital devices also play a role. Since the reform and opening up, with the development and dissemination of Chinese culture at home and abroad, Chinese local culture has reached a new climax through the spread of new media. New media technology has expanded and enriched mass cultural activities, improved people's acceptance of culture, and lowered the threshold for adapting and disseminating content. Because regional restrictions have been broken through, long-term exposure to various cultural shocks in the cultural environment has not only enhanced people's acceptance and tolerance of information but also provided, through diversified cultural experience, the soil for cultural re-creation.
The lowering of the threshold for cultural producers has also spawned a large number of online careers, with practitioners and producers from different backgrounds providing more ideas and possibilities for new interpretations of Disney culture. In this process, the masses have gradually become the subject of cultural production, and cultural discourse is increasingly used to describe the lives of the masses. In short, Disney culture, as a carrier and technology of cultural communication, is a manifestation of China's acceptance of Western civilization and values; in the process of acceptance it is transformed by Chinese people, forming a phenomenon of mixed culture, namely "cultural localization". At the same time, since film is a medium of artistic expression, the content and the core values a film conveys are also matters that people weigh when watching it.

A COLD RECEPTION FOR "MULAN": LOCAL RESPONSES TO DISNEY'S EXOTIC RECONSTRUCTION

In its first 24 hours, the live-action Mulan trailer racked up 175.1 million online views worldwide, 52 million of which came from China. However, the film's performance after its release in mainland China was much worse than abroad, and its rating was relatively low. After the Chinese premiere in 2020, its score fell to 4.9, and the largest share of raters (36.7%) gave it 2 stars on Douban, one of the biggest movie-rating websites in China. By comparison, Aladdin, another cultural-reconstruction product released in 2019, scored 7.4. Behind the low rating, apart from the disappointment of high expectations and the film's own techniques and plot choices, the most common complaint concerns the distortion of local culture and history.

Misunderstanding of Eastern Culture in Western Society

In the process of the globalization of Disney films, their cross-cultural communication also depicts stories set against different cultural backgrounds. Such products of cultural inheritance and re-creation normally involve a great deal of cultural imagination and are permeated by factors such as cultural conflict, cultural filtering, and the transformation of cultural identity [1]. Whatever the graphical and technical advances, the Mulan story Disney created may be far from the current tastes and values of Chinese people. It is akin to the Orientalism built up during the colonial period only through symbols and imagination, not reflecting the truth but full of fantasy. What received the most criticism concerns the witch character and the use of chi. One word frequently used in the film is "chi": Mulan's father says that chi is for men who are warriors, and that women have to hide it or they will be considered witches. In Chinese kung fu, chi is closer to the cultivation of internal force, which requires slow practice to deepen and to which everybody can gain access through work. The movie, however, makes Mulan gifted from birth, with a powerful chi in her body that sets her apart from ordinary people; as the film goes on she can hide it when she does not need it and release it at the press of a switch whenever she does, which sounds very magical. As presented, chi sounds more like the Force in Star Wars or chakra in Naruto, neither of which appears in the original text. The film seizes on a concept from Chinese culture and confuses it, making it sound like Mulan's own extraordinary superpower. As for the storyline, the re-created Mulan diverges from the original Chinese Mulan poem.
"Mulan" is originally the story of an ordinary girl who, through her own efforts, gradually grows into a hero finally recognized by the nation. She knows she is not as strong as a man physically, but she bears the responsibility of joining the army in her father's place, for the sake of the whole family and as an act of filial piety. These are the kernels of the Mulan story. The traditional Mulan story is one of both inheritance and rebellion: she inherits the values of ancient Chinese society, while joining the army and taking on masculine actions constitute the rebellion and the awakening of self-awareness. In the Disney film, Mulan is portrayed as a superwoman with a gift, a powerful chi hidden inside her. The characters around her, from the witch to the Governor of Dong to the emperor, all seem to serve as tools guiding her gradual awakening; the characters lack personality and their descriptions are monotonous. The film taps into the current wave of female empowerment in Hollywood, as Disney has done with other successful movies about women with superpowers. Chinese audiences might have been more receptive if Disney had created a new movie IP with a Chinese female protagonist; but in a country with growing cultural power, it is clear that many Chinese viewers do not like the idea of revising traditional content without respecting the facts. Even so, the costumes, props, architecture, and the misunderstanding of the meaning of chi that fail to respect Chinese history are not the most criticized part of the film. More importantly, Disney failed to understand the spiritual core of Mulan, and the narration of women's rights in Mulan misses the key point. Feminism here is not that a woman has superpowers (chi) with which she dominates a battlefield full of male characters, but that a woman can fight alongside men on a battlefield reserved exclusively for men. In the local Chinese story, the description of a Mulan who has a strong sense of responsibility and strives to improve herself on a battlefield full of men already expresses the core of women's rights. By contrast, the changes Disney made to provide enough drama for a film not only deviated from the original story but also failed to express real women's rights. The result reads like a self-righteous depiction of Chinese culture in the service of an exploration and realization of American-style self-worth and heroism, full of exotic reconstruction [2].

Contemporary China's cultural resistance to misunderstandings from the West

All the criticism of Mulan is evidence of cultural self-awareness. Deeper and more direct critical speech depends on the development of national cultural soft power, the prosperity of local culture, and cultural consciousness [3]. Fei Xiaotong put forward the notion of cultural consciousness, offering some provocative thoughts on the new culture: he pointed out that cultural consciousness means that people have self-knowledge of their culture in daily life and can form a systematic understanding of its development, prospects, and history. The self-examination of their own culture by Chinese researchers, reflected in the reception of Mulan, is a good confirmation of this concept. It is also a harmonious cultural concept for an environment of global integration: the Chinese need to understand their own cultural characteristics, take the essence and discard the dregs, and carry out research on foreign cultures.
Only through such comparison and research can there be space for mutual communication, allowing cultural globalization to proceed effectively while the characteristics of local culture are maintained. The reaction to Mulan in China is also a warning for Disney movies in the era of globalization. "We didn't want to make Mulan a Chinese film, because we are not Chinese; we have different sensibilities and different narrative styles," said the assistant director of the film. This shows that the cultural misreading and imagination reflected in Mulan are no accident. Disney has blurred Mulan's cultural boundaries and turned it into a carrier of American popular values dressed in traditional Chinese culture. In this context, what the Chinese see as a chaotic historical background becomes logical in Disney's world. In the form of film, it eliminates the boundaries of time and space, history and culture, dispels and filters the spirit of "loyalty and filial piety" embodied in the local culture, and endows the story with new connotations of individual heroism and feminism. The process by which such a regional culture is processed and accepted by different global cultures is the process of cultural globalization. Hollywood's film industry is arguably the best in the world, with the ability to turn any mediocre script into a blockbuster through technology. But if it loses the ability to understand and embrace other cultures, if it focuses only on the selling points of its products and ignores the culture they come from, the loss will be fatal to cross-cultural artistic creation. It leads directly to the creation of a "self-righteous" product, a system closed within its own logic, which will probably find fewer and fewer takers as globalization advances.

CONCLUSION

In the era of globalization, Chinese people have shown a new demand with respect to Western culture: they want both the affirmation of the Western mainstream and the export of their own national culture. Local cultures strive to develop and inherit the national culture and to grasp its cultural core. While exporting culture to the world, a culture should be neither humble nor overbearing, and it should not resist cultural input from other countries without reason. Telling the stories of one's own nation well is the true reflection of cultural confidence, and it is something that local countries can strive for in the process of globalization. The controversy between the two cultures also reflects a new problem behind globalization, as many local cultures gain strength. In the context of globalization, as new requirements arise from different markets, multinational cultural companies need to change the form and depth of the artistic presentation of local cultures and make it more suitable and acceptable to local audiences. After all, what is irreconcilable may not be the aesthetic difference between regions, but the natural conflict between commerce and culture.
Overhauser effect in individual InP/GaInP dots

Sizable nuclear spin polarization is pumped in individual InP/GaInP dots in a wide range of external magnetic fields B_ext = 0–5 T by circularly polarized optical excitation. We observe nuclear polarization of up to ~40% at B_ext = 1.5 T, corresponding to an Overhauser field of ~1.2 T. We find a strong feedback of the nuclear spin on the spin pumping efficiency. This feedback, produced by the Overhauser field, leads to nuclear spin bi-stability at low magnetic fields of B_ext = 0.5–1.5 T. We find that the exciton Zeeman energy increases markedly when the Overhauser field cancels the external field. This counter-intuitive result is shown to arise from the opposite contributions of the electron and hole Zeeman splittings to the total exciton Zeeman energy.

I. INTRODUCTION

Recent progress in nano-science and technology has allowed access to desirable properties of single electron and hole spin states in semiconductor nano-structures [1,2,3,4], which can be addressed both optically [3,4] and electrically [1,2]. It has been demonstrated that, due to suppression of the spin-orbit interaction in quantum dots, T_1 of the electron spin is in the ms range [5], opening potential applications in quantum information processing. Of particular importance in this context is the electron-nuclear spin interaction in quantum dots, which represents a major source of decoherence of electron-spin based qubits [2,7]. Several approaches to overcome such decoherence have been suggested, mainly focused on nuclear spin cooling methods [8]. Dynamic nuclear polarization arising under circularly polarized optical excitation has so far resulted in degrees of nuclear polarization S_N ≈ 60% and 50% for interface (GaAs/AlGaAs [10]) and self-assembled (InGaAs/GaAs [11]) GaAs-based dots, respectively. The reasons for the relatively low degrees of nuclear polarization are largely unclear. Nuclear spin pumping relies on the electron-nuclear spin flip-flop and may be slowed down by a large electron Zeeman splitting, due either to the external field (B_Z) [12,13,14] or to the nuclear (Overhauser) field B_N [15]. The pumping competes with nuclear spin diffusion into the matrix outside the dot [36], which may prevent high nuclear polarization degrees. Slowing down of the spin cooling rate is also possible due to the formation of "dark" nuclear states [9]. Currently available III-V semiconductor QDs offer access to small isolated ensembles of nuclear isotopes with spins ranging from 1/2 (for 31P) to 9/2 (for 115In). New insights into electron-nuclear spin interactions, and nuclear spin cooling in particular, are possible from the study of different types of QDs, where the whole nuclear spin ensemble as well as each individual nucleus experiences a different magnetic surrounding. In this work we study the III-V InP/GaInP quantum dot system [16,17,18,19,20,21], which, compared to the well-studied GaAs-based dots, provides electron spin states with a large g-factor [16] and, in principle, the possibility to manipulate phosphorus nuclei possessing a simple spin configuration with I_P = 1/2. In contrast, in (In)GaAs dots all isotopes possess nuclear spin I ≥ 3/2 and more complex nuclear spin pumping mechanisms may take place. This work reports on nuclear spin pumping in an individual electron-doped dot in a III-V system.
This opens up possibilities to study the influence of the hyperfine interaction on the optically controlled electron spin with a life-time not limited by interaction with the electron reservoir in the contact [22,23] or by fast electron-hole recombination [14,24]. Strong implications for the nuclear spin dynamics in the presence of the resident electron are also expected [25]. More specifically, this paper reports on optically induced Overhauser fields of up to 1.2 Tesla in individual InP/GaInP dots charged with a single electron, in a wide range of external fields B_Z = 0–5 T. A strong dependence of the spin pumping efficiency on the circular polarization of the incident light is found, a manifestation of strong feedback of the optically pumped nuclear spin on the electron-to-nuclei spin transfer efficiency. The highest degree of nuclear polarization in an InP dot is S_N^max ≈ 40% (at B_Z = 1.5 T). We find that the splitting in magnetic field between the trion recombination peaks in a dot markedly increases under the conditions of positive feedback, when the Overhauser field B_N is anti-parallel to the external field and the electron Zeeman splitting is minimized. We show that this initially counter-intuitive increase of the total splitting, E_xZ, is the consequence of the opposite contributions to E_xZ of the (smaller) electron and (larger) hole Zeeman splittings. We also find that the feedback of the nuclear field B_N on the electron-nuclear spin transfer rate results in nuclear spin bi-stability.

II. SAMPLE AND EXPERIMENTAL METHODS

The InP dots in the GaInP matrix studied in this work were grown by low-pressure metalorganic vapour phase epitaxy in a horizontal flow quartz reactor. The samples were grown on (100) GaAs substrates with a 10° misorientation towards <111>, used to suppress the CuPt-type ordering in the GaInP matrix. The growth temperature of the GaAs buffer and bottom GaInP layer was 690°C. Before the deposition of InP, the wafer was cooled to 650°C. After the deposition of InP and formation of the dot layer, the growth temperature was again raised to 690°C, and a GaInP capping layer was deposited without growth interruption. The grown GaInP layers were nearly (within 0.04%) lattice matched to GaAs, as derived from X-ray diffractometry measurements. The growth rates for the GaAs and GaInP layers were ≈0.7 nm/s and for InP ≈0.35 nm/s [26]. For such growth conditions, InP dots with a density of ≈10¹⁰ cm⁻² are formed. A typical low temperature (15 K) photoluminescence (PL) spectrum of an ensemble of InP/GaInP dots excited with a HeNe laser at 633 nm is shown in Fig. 1a. The dot PL is centered around 1.79 eV with a full width at half maximum (FWHM) of 100 meV. The as-grown wafer was then covered with a 10/90 nm Ti/Al shadow mask, with 400 nm diameter clear apertures fabricated by means of electron beam lithography to allow optical access to individual QDs. PL was excited with a semiconductor diode laser emitting at 650 nm. A standard micro-PL set-up was employed, with the sample mounted on a cold finger (at temperature 15 K) in a continuous flow helium cryostat equipped with a superconducting magnet. Both Faraday and Voigt geometries were employed and PL was measured with a double spectrometer and a liquid nitrogen cooled CCD camera.

III. PL CHARACTERIZATION

At zero magnetic field a typical PL spectrum of an individual InP/GaInP QD in our sample consists of a single line exhibiting no fine structure splitting, a signature of dot charging [27,28,29].
Under excitation with circularly polarized light, the PL exhibits negative circular polarization, the degree of which increases with excitation power and can reach up to 30%. This effect is observed at zero field and in a wide range of magnetic fields B_Z applied in the growth direction. Such behavior was previously found in both InGaAs and InP dots charged with a single electron and corresponds to optical orientation of the spin of the resident electron left behind after recombination of the optically excited trion [15,17,30]. It has been found that in negatively charged dots excitation with σ+ (σ−) leads to stronger PL in σ− (σ+) polarization, leaving a localized electron with predominantly spin down (up). Additional evidence for the dot charging is obtained from PL measurements in magnetic field. Fig. 1b shows typical exciton PL spectra of an individual InP/GaInP QD excited with linearly polarized light in a magnetic field B_Z = 5 T along the sample growth direction. A Zeeman doublet is measured, with the high and low energy peaks observed in σ+ and σ− circular polarization, respectively. In a field applied in the plane of the dot, B_X′, the emission line splits into four linearly polarized peaks with co-polarized inner and outer pairs (Fig. 1c). Fig. 1d shows a summary of peak positions measured in magnetic fields 0–5 T applied either in the growth direction (circles) or in-plane (squares). In the Voigt geometry (with the in-plane field) all four lines exhibit nearly linear energy shifts with a very small diamagnetic component and a common origin at the spectral position of the line at B = 0. The behavior observed in Fig. 1d, found previously for singly-charged dots [31,32], is in striking contrast to what is expected for a neutral exciton, whose PL spectra in the Voigt geometry may have up to four lines originating from the pairs of dark and bright exciton states split at B = 0 by the electron-hole exchange interaction [28,32]. Bright neutral exciton states at B = 0 are also split by the electron-hole exchange interaction, which is usually spectrally resolved in PL [27,28,33,34]. Identification of the four lines in Fig. 1c,d is conducted by comparison with the diagram in Fig. 1e, where the scheme of optical transitions of a negatively charged exciton in the Voigt (Faraday) geometry is shown in the left (right) part of the figure. The four lines in the Voigt configuration in Fig. 1c originate from the hole spin splitting in the initial state and the electron spin splitting in the final state. The splittings between the four lines in Fig. 1c,d are found to depend on the direction of the in-plane magnetic field. Such a dependence, a signature of a low in-plane symmetry of the dot, originates from the variation of the hole g-factor, whereas the electron g-factor is expected to be isotropic. Based on this consideration, and comparing results obtained for various in-plane directions of the B-field, we deduce from Fig. 1d that g_hX′ = 0.5 and g_e = 1.46, the latter with high accuracy being the same in other in-plane directions. We assume that this magnitude of the electron g-factor can also be used for the experiments in the Faraday geometry. In the Faraday geometry two peaks are observed that exhibit the Zeeman splitting and notable diamagnetic shifts (circles in Fig. 1d). The peak splitting in the Faraday geometry in Fig. 1b,d is well described by the expression E_xZ = g_x µ_B B_Z (see the vertical arrow in Fig. 1d), where g_x = 1.35 is an effective g-factor describing the splitting between the trion PL peaks.
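For orientation, taking the standard Bohr magneton value µ_B ≈ 57.9 µeV/T (an assumed constant, not quoted above), the measured effective g-factor implies a trion splitting at the highest field used here of roughly:

```latex
E_{xZ} = g_x \mu_B B_Z \approx 1.35 \times 57.9~\mu\mathrm{eV\,T^{-1}} \times 5~\mathrm{T} \approx 0.39~\mathrm{meV}.
```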
g_x is smaller than g_e, indicating that g_e and g_h make opposite contributions to the resulting magnitude of the trion peak splitting. Since g_h is expected to be larger than the in-plane g-factor g_hX′ [28,32], we conclude that |g_h| > |g_e| and |g_x| = |g_h| − |g_e|, with |g_h| ≈ 2.8. In order to obtain full agreement with experiment (Fig. 1b), where the line in σ+ (σ−) is observed at high (low) energy, both the electron and the hole should have positive g-factors, as depicted in the right part of Fig. 1e. The majority of spectrally isolated PL lines in our sample showed the properties described above. Based on the evidence presented, we assume that the majority of dots in the sample studied in this work are electron-charged, probably due to a low level of residual doping in the bulk material. Although hole-charging would produce similar patterns of peaks in magnetic field, negative circular polarization has never been observed for positively charged trions. Nuclear spin effects in the dots studied in this work (discussed below) further confirm our conclusions about the electron charging of the dots and the relation between the electron and hole g-factors.

IV. NUCLEAR SPIN PUMPING

The electron-nuclear hyperfine spin interaction leads to a finite probability of spin exchange (a spin "flip-flop") between the resident electron confined in the dot and a single nucleus of the large (about 10⁴) ensemble of nuclei. Re-pumping of the spin-polarized electron on the dot, occurring under circularly polarized optical excitation, leads to a build-up of sizable nuclear spin polarization on the dot, S_N. The nuclear spin pumping efficiency can be described by the probability of the electron-nuclear spin flip-flop. The efficiency of this process decreases with increasing electron Zeeman splitting E_e, which is the major energy cost of the spin flip-flop [11,12,13,24,35,38,39]. The collective effect of all nuclei on the dot can be described in terms of local nuclear magnetic fields B_N ∝ S_N, leading to a modification of the electron Zeeman splitting E_e = |g_e|µ_B(B_Z ± B_N), the effect of the nuclei on the hole splitting being negligible. The Overhauser shifts δE = ±|g_e|µ_B B_N can be evidenced in PL experiments on individual dots as a modification of the trion splitting. The opposing contributions of the hole and electron Zeeman splittings to the observed trion peak splitting deduced above imply that dynamic nuclear polarization in an external field along the Z-direction will modify E_xZ in the following way: E_xZ = |g_h|µ_B B_Z − |g_e|µ_B(B_Z ± B_N) = g_x µ_B B_Z ∓ |g_e|µ_B B_N (Eq. 1), where the upper (lower) sign corresponds to B_N parallel (anti-parallel) to B_Z. In addition, as discussed above, the spin pumping efficiency is strongly dependent on E_e, and is therefore sensitive to B_N. Fig. 2 shows the power dependence of the splitting E_xZ measured for a single dot trion at B_Z = 1.5 and 5 T for σ+ and σ− circularly polarized excitation. Data for a different dot to that described in Fig. 1 are reported in Fig. 2. In Fig. 2a the splitting changes from 85 µeV at small powers to 191 (47) µeV at high powers for σ+ (σ−) excitation. The modification of E_xZ is related to the pumping of the nuclear spin on the dot due to spin exchange with the resident spin-polarized electrons. The pumping, being a dynamical process competing with nuclear spin depolarization [22,36], becomes more efficient at higher powers as the rate of excitation of the electron spin on the dot increases.
Clearly, much more efficient pumping is observed in the case of σ+ excitation: the total splitting changes before saturation in Fig. 2a are +106 and −38 µeV for σ+ and σ− excitation, respectively. A negligible power dependence is observed for linearly polarized excitation (triangles in Fig. 2), with the splitting between the lines close to that observed for low power pumping with circularly polarized light. An important feature is observed in Fig. 2a: the more efficient nuclear spin pumping is achieved in the case where E_xZ increases. The explanation can be found if Eq. 1 is considered. For σ+ excitation, B_N anti-parallel to B_Z is expected and E_e is strongly reduced. A similar effect is observed in electron-charged InGaAs dots [38,39], where nuclear spin pumping occurs due to the spin relaxation of the extra electron tunneled into the dot from the contact. As observed in Fig. 2a for σ+ excitation, the total trion line splitting increases. On the other hand, the reduction of E_e observed for σ+ excitation results in a positive feedback on the nuclear spin pumping efficiency. A negative feedback, and as a consequence a slower rate of spin pumping, is observed for σ− excitation, for which E_e increases, leading to a decrease of E_xZ as predicted by Eq. 1. The manifestation of the positive feedback in InP dots, i.e. more efficient nuclear spin pumping leading to an increase of the trion line splitting E_xZ, is opposite to that observed for InGaAs dots [12,13,14]. However, this difference is explained by the different relation between the electron and hole g-factors in the two types of dots: g_e and g_h make contributions of the same and opposite signs to the exciton Zeeman splitting for InGaAs/GaAs and InP/GaInP dots, respectively. The observations in both cases are consistent in that the positive feedback occurs due to the decrease of E_e when B_N is anti-parallel to B_Z. From the data in Fig. 2a, we obtain a maximum B_N ≈ 1.2 T for σ+ excitation (with B_N anti-parallel to B_Z) and B_N ≈ 0.4 T for σ− excitation (with B_N parallel to B_Z). The high spin pumping efficiency in the case of σ+ excitation in Fig. 2a can thus be explained by almost complete compensation of B_Z by B_N, resulting in a negligible E_e and a high probability of the flip-flop. Note that significantly larger Overhauser fields are reported here than previously found for InP dots [17,18]. We now estimate the degree of nuclear spin polarization on the dot. For this we assume that the dot contains In and P nuclei only, with hyperfine constants A_In = 56 µeV and A_P = 44 µeV [40]. The spins of the In and P nuclei are I_In = 9/2 and I_P = 1/2, respectively. Fully polarized material will then produce an Overhauser shift of Σ_OH = I_In A_In + I_P A_P = 274 µeV. We therefore conclude that the shift of 106 µeV observed in Fig. 2a for σ+ excitation corresponds to a degree of nuclear spin polarization S_N = 39%. This value is similar to the maximum degree of polarization obtained for InGaAs dots at low T [12,14]. The importance of the feedback mechanism observed in Fig. 2a becomes less significant if a higher external field is applied, since it becomes more difficult to compensate the external field. This is demonstrated in Fig. 2b, where power dependences of the trion splitting E_xZ at 5 T are plotted. The maximum Overhauser shift observed for σ+ excitation is 68 µeV, compared to 106 µeV at B_Z = 1.5 T.
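For reference, the conversions used above follow directly from the quoted constants; the arithmetic below is a worked illustration (again taking µ_B ≈ 57.9 µeV/T) rather than additional data:

```latex
\Sigma_{\mathrm{OH}} = I_{\mathrm{In}} A_{\mathrm{In}} + I_{\mathrm{P}} A_{\mathrm{P}}
   = \tfrac{9}{2}\times 56~\mu\mathrm{eV} + \tfrac{1}{2}\times 44~\mu\mathrm{eV}
   = 274~\mu\mathrm{eV},
\qquad
S_N = \frac{106~\mu\mathrm{eV}}{274~\mu\mathrm{eV}} \approx 39\%,
\qquad
B_N = \frac{106~\mu\mathrm{eV}}{|g_e|\,\mu_B}
    = \frac{106~\mu\mathrm{eV}}{1.46 \times 57.9~\mu\mathrm{eV\,T^{-1}}} \approx 1.2~\mathrm{T}.
```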
Although the nuclear spin pumping is still more efficient for σ+ excitation, the process is markedly slowed down by the high external field, and is less sensitive to B_N since B_Z ≫ B_N. The inset in Fig. 2b also shows that a significant nuclear polarization can be optically pumped at zero external field. Trion splittings up to 28 µeV are observed, occurring solely due to the Overhauser field B_N, which can be estimated to be 0.3 T with a corresponding degree of nuclear spin polarization S_N ≈ 10%. These magnitudes of B_N and S_N are similar to those reported for electron-charged InGaAs dots in Ref. [39]. Note that at B_Z = 0 the pumping efficiencies are very similar for both σ+ and σ− excitation. Observation of nuclear polarization at zero field is another piece of evidence implying that we deal with negatively charged dots, as has been observed in Ref. [39]. We have performed similar experiments at B = 0 on positively charged and neutral self-assembled InGaAs/GaAs and interface GaAs/AlGaAs dots; nuclear spin pumping was not observed. On the other hand, in negatively charged dots the optically oriented resident electron can relax its spin due to the hyperfine interaction [38,39], thus leading to a build-up of nuclear spin polarization.

V. NUCLEAR SPIN BISTABILITY

Positive feedback of B_N on the spin flip-flop probability p_hf, as observed in Fig. 2a, has been shown to lead to nuclear spin bi-stability in InGaAs/GaAs dots [12,13,14,24,36]. We find a similar behavior in the negatively charged InP dots studied here. However, we find that such bi-stability effects can only be observed at low magnetic fields. Fig. 3 shows an example of the bi-stable behavior measured for a single InP dot at B_Z = 0.5 T. In this graph the splitting E_xZ is plotted as a function of power for σ+ polarized excitation. When the power P is scanned up from P ≈ 0, the exciton Zeeman splitting changes gradually from 80 to 118 µeV (shown in gray) up to P ≈ 500 µW, where E_xZ changes abruptly from 118 to 146 µeV. For higher powers E_xZ shows a very weak dependence on the excitation power. The abrupt switching occurs when the Overhauser field has almost compensated B_Z, which leads to a fast electron-nuclear spin transfer rate. If the power is now scanned down from P > 1000 µW to zero, E_xZ first shows a negligible dependence on power in the range 300 < P < 1000 µW. At P ≈ 300 µW a very sharp decrease of E_xZ is observed, corresponding to the reduction of the nuclear polarization. In the range of powers 200 < P < 600 µW two stable nuclear spin states differing by ∆S_N ≈ 10% are observed on the dot, constituting the observation of nuclear spin bi-stability. In general the observation of optically induced bi-stability of the nuclear spin in a quantum dot is strongly dependent on the electron spin dynamics, determined in turn by the population and spin dynamics of all charge carriers on the dot [24]. On the other hand, it is possible to predict the range of external magnetic fields where the switching behavior of Fig. 3 can be observed. Switching will occur if the optically pumped Overhauser field can completely compensate the external field [14,24]. Although this is not the only necessary condition for the observation of the nuclear spin switch [41], it is a reliable starting criterion. The largest B_N observed for the dots studied in this work is below 1.5 T, in contrast to InGaAs dots, where Overhauser fields up to 3 T have been observed [12,14].
This indicates that in InP dots the occurrence of nuclear spin bi-stability should be expected at low external magnetic fields, as found in our experiments, where bi-stability is observed in the range B_Z ≈ 0.3–1 T.

VI. SUMMARY

In conclusion, strong nuclear spin effects are reported in individual optically pumped electron-charged InP dots due to dynamic nuclear polarization. This opens up the possibility to optically manipulate a polarized system of phosphorus nuclei, possessing the simplest nuclear spin configuration with I_P = ±1/2, in a semiconductor nanostructure. The InP dots share a common element, In, with the widely studied InGaAs/GaAs system, and a comparative analysis of the two types of dots is presented here. Similarly to InGaAs dots, the nuclear spin polarization on the dot is shown to produce a strong feedback on the nuclear spin pumping efficiency. A degree of polarization of ≈40% has been pumped optically, a limit very similar to that of InGaAs dots at low temperature. This limit might be related to a finite re-excitation rate of the dot, limiting the supply of electron spin to the dot, and to the low probability of the spin flip-flop. Development of nuclear magnetic resonance techniques is likely to shed further light on the spin transfer mechanism in nano-structures built from these complex semiconductor alloys. Electron-doped III-V QDs present an interesting optically controlled system of coupled electron and nuclear spins. Due to the extremely long electron life-times on the dot (at low temperatures), it should be possible to achieve a regime where the nuclear polarization becomes frozen, with its decay suppressed due to the inhomogeneous Knight field induced by the localized electron. This work has been supported by the Sheffield EPSRC Programme grant GR/S76076, the EPSRC IRC for Quantum Information Processing, ESF-EPSRC network EP/D062918 and by the Royal Society. AIT was supported by the EPSRC (grants EP/C54563X/1 and EP/C545648/1).
Numerical solution of a non-linear conservation law applicable to the interior dynamics of partially molten planets

The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. In this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy. First we present a simplified model with semi-analytical solutions, highlighting the large dynamic range of ∂S/∂r, around 12 orders of magnitude, for physically-relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple, stable numerical scheme able to capture the large dynamic range of ∂S/∂r and provide a flexible and robust method for time-integrating the energy equation. We then consider a full model including energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. Our computational framework is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations arising in other disciplines, particularly for non-linear functional forms of the diffusion coefficient.

Introduction

Modelling diffusion has broad applications in science and engineering, but there is a tendency for the published literature to focus on diffusive systems that either possess convenient mathematical properties or are weakly non-linear. Here, we are motivated by a physically-relevant and peculiar equation, similar to a non-linear diffusion equation, which arises from consideration of energy transport in a planetary interior. Our solution method for this equation has immediate applicability to Earth and planetary sciences research for developing multi-scale and multi-physics models of the interior dynamics of planets. Furthermore, the technique that we employ provides general insights for solving non-linear conservation laws and diffusion equations that extend to other disciplines. Mixing length theory (MLT) enables energy transport by convection to be represented as a diffusive process, and this approach has been applied extensively to model the physics that governs the structure and evolution of stars (e.g., Kippenhahn et al., 2012). The theory estimates the contribution of eddy motions to bulk energy transfer by considering the distance that a parcel of material travels before it thermally equilibrates with its surroundings. Often it is convenient to encapsulate this energy transfer within an "effective" or "eddy" diffusivity. MLT was later applied to interrogate the cooling of an initially molten planet (Abe, 1993).
As a molten planet cools from liquid to solid its viscosity increases by around 19 orders of magnitude and hence its dynamics transition from inviscid flow (liquid) to viscous creep (solid). To model this process the eddy diffusivity switches between an inviscid and viscous flow scaling law based on the local Reynolds number. Large astronomical objects such as stars and planets are close to global hydrostatic equilibrium and therefore 1-D radial models are appropriate for capturing the first-order structure and evolution of the interior of these bodies. Modelling the radial structure is often the most scientifically informative approach, particularly when astrophysical data constraints are limited or model parameters poorly constrained. Using MLT, 1-D models can replicate the general results of 3-D models (e.g., Kamata et al., 2015) with significantly less calculation cost, thus permitting an extensive search of the parameter space. An MLT scheme allows physical properties to be determined locally which enables properties to vary substantially throughout the 1-D domain. In short, MLT remains a heavily utilised method for understanding planetary and stellar structure and also has applications in other fields. The discovery of planets beyond the Solar System, so-called exoplanets, has thrust planet formation and evolution modelling to the forefront of planetary science. In particular, a large fraction of detected exoplanets are rocky (i.e., akin to Earth) and may have evolved from an initially molten state due to the heat generated by accretion, core formation, short-lived radioisotopes, and late-stage impacts (e.g., Stixrude, 2014). Furthermore, some exoplanets are sufficiently close to their star that they retain a permanent magma ocean on their dayside. Hence, developing techniques and methods to model high melt fraction phenomena is crucial to advance modelling capabilities and enable us to bridge the dynamic timescale between melt (inviscid) and solid (viscous) processes. In particular, rocky exoplanets demand a flexible modelling strategy because their size-in terms of mass, radius, and internal pressure-can far exceed those of the terrestrial planets in the Solar System, and their composition may be strikingly different from Earth (e.g., Madhusudhan et al., 2012). Incorporating high melt fraction dynamics in existing 2-D geodynamic viscous flow codes is a recent development (e.g., Tackley et al., 2017). For simplicity and ease, the dynamics are typically encapsulated using a constant eddy diffusivity to model the enhanced energy transport due to the advection of melt (e.g., Neumann et al., 2014). Other energy transport mechanisms that operate during the high melt fraction regime, such as convective mixing, are often excluded. These simplifications are often a necessary compromise to limit the computational expense for models that are already teeming with physical and chemical complexity. Similarly, resolving the ultra-thin thermal boundary layer at the top of a magma ocean (∼ few cms) (e.g., Abe, 1993) is beyond the computational capabilities of 2-D models. Interestingly, even studies that use quasi 1-D dynamic interior models invoke similar simplifications, often because they are primarily concerned with the evolution of the atmosphere (e.g., Hamano et al., 2013;Lebrun et al., 2013;Hamano et al., 2015). 
Therefore, we are motivated to revisit the MLT formulation for a partially molten planet (Abe, 1993) and analyse the full functional form (without simplification) of the eddy diffusivity. The eddy diffusivity not only depends on material properties, which themselves are a function of the local thermodynamic state, but also on the local super-adiabatic temperature gradient or, equivalently, the local entropy gradient. This gradient-dependence of the diffusivity introduces numerical precision challenges that are typically absent from simpler diffusive process models. The insights that we gain about the behaviour of the system enable us to devise a fast and flexible code that is amenable to large-scale simulation of high melt fraction dynamics in planetary bodies. The code can be used to explore parameter space, test modelling intuition, and compare with results from 2-D simulations. It further provides techniques for improving the stability, speed and precision of high melt fraction dynamics in high dimension (2-D+) simulations.

Fundamental equations

The basic equations for modelling energy transport in a planetary interior are provided by Abe (1993, 1995, 1997). These chart the evolution of a planet that is initially fully molten and during subsequent cooling evolves to a partially molten state where melt and solid crystals coexist. Here, we express the non-linear conservation of thermal energy in integral form in terms of entropy (Eq. 1): ∫_V ρT (∂S/∂t) dV = −∮_A F · n dA + ∫_V ρH dV, where S is specific entropy, ρ density, T temperature, F the heat flux vector, n the outward unit normal to the bounding surface(s) A, H internal heat generation per unit mass, t time, and V volume. We formulate the problem using entropy because it is a natural coordinate for both convecting systems, which are near-adiabatic, and thermodynamic models of mantle melting (see Stolper and Asimow, 2007, for discussion). The total heat flux is (Eq. 2): F = F_conv + F_mix + F_cond + F_grav, where F_conv is the convective flux, F_mix is the flux due to the mixing of melt and solid, F_cond is the conductive flux, and F_grav is the flux due to gravitational separation of melt and solid. For a 1-D system where radius r is the spatial coordinate, the convective flux is (Eq. 3): F_conv = −ρT κ_h ∂S/∂r, where the eddy diffusivity κ_h = u l is estimated using the average mean free path of convective parcels, with u a characteristic velocity that is derived from force-balance considerations and mixing length theory, and l the mixing length. We choose the mixing length to be the distance from the nearest thermal boundary layer so that the calculated heat flux fits experimental results (Fig. 3 in Abe, 1995; Stothers and Chin, 1997). Therefore, in the simplest case of single layer convection, the mixing length at a given radius is the minimum distance from the top or bottom surface. Post-magma ocean crystallisation, viscous creep is the norm for planetary convection, where the rise and fall of convective parcels is primarily resisted by viscous drag. Therefore, the Stokes settling velocity of convective parcels is given by Eq. 5 (e.g., Sasaki and Nakazawa, 1986), where α is the thermal expansion coefficient, c specific heat capacity, g gravity (negative by convention), and ν = η/ρ kinematic viscosity. The dynamic viscosity η of the aggregate is given by Eq. 14. Here, Eq. 5b stipulates that the adiabatic temperature gradient must be exceeded for convection to occur. The Reynolds number is Re = u l / ν. In highly turbulent systems, such as a fully molten magma ocean, dynamic pressure rather than viscous drag is the dominant force resisting the vertical transport of convective parcels.
This gives rise to a characteristic inviscid flow velocity (Eq. 7a; Vitense, 1953). By defining a critical Reynolds number Re_crit = 9/8 (e.g., Abe, 1995) we can construct a piecewise function for κ_h that switches between the viscous and inviscid velocity scaling depending on the local Reynolds number. (We prefer this representation of κ_h using Re because of its physical insight, but the formulation is identical to Eq. 15 in Abe (1993) and Eq. 6 in Abe (1997); Eq. 47 in Abe (1995) also gives the same κ_h, although Eq. 47c erroneously has ρC_p inside the square root.) Note the strong non-linear sensitivity of κ_h to l: for Re_crit < Re it scales as l², and otherwise as l⁴. Convective mixing is described using Fick's law (Eq. 9) and quantifies latent heat transport as crystals form and remelt while they are displaced quasi-adiabatically by convective flow. (Eq. 9 in Abe (1997) should include ρ, as in Eq. 14 in Abe (1993) and Eq. 56 in Abe (1995).) Here ∆S_fus = S_liq − S_sol is the entropy of fusion, where S_liq and S_sol are the liquidus and solidus, respectively, and melt fraction φ is defined in Eq. 10. Conduction is determined by Fourier's law, where κ is the thermal diffusivity and (∂T/∂r)_S is the adiabatic temperature gradient. Gravitational separation occurs by permeable flow of melt in the solid matrix at small melt fraction and by crystal settling or floatation at large melt fraction, where a is the grain size, η_m the melt viscosity, and subscripts "liq" and "sol" denote that the quantity is evaluated at the liquidus and solidus, respectively. The flow mechanism factor ζ_grav depends on the flow law and is a function of melt fraction (Eq. 13). Eq. 13a is derived by considering the Stokes velocity for spherical crystals. Eqs. 13b and 13c arise from permeability flow laws given by Rumpf-Gupte (Rumpf and Gupte, 1971) and Blake-Kozeny-Carman, respectively (see Abe, 1995, for discussion). From Eq. 13 it is clear that gravitational separation only occurs in the mixed phase region, as expected, since F_grav = 0 for φ = 0 or φ = 1. If melt is less dense than solid (ρ_liq < ρ_sol) then F_grav is positive and heat is transported upwards toward the top surface. But if a melt-solid density crossover exists in the mantle then F_grav can be negative for certain radii (depths) and heat is carried down towards the core-mantle boundary. The formulation for the dynamic viscosity is designed to capture the rheological transition, where the aggregate viscosity changes fairly abruptly between the melt and solid viscosity at a critical melt fraction. Our formulation captures the trend observed in the semi-empirical model of Costa et al. (2009), although we use the end-member melt and solid viscosities of Abe (1993). The viscosity of the aggregate η is given by Eq. 14, where η_m and η_s are the melt and solid viscosity, respectively, and z is the transition function (Eq. 15a), in which φ_c is the critical melt fraction and φ_w is the transition width. We choose the planetary radius R_0 and a reference entropy S_0, temperature T_0, and density ρ_0 to non-dimensionalise the equations. For convenience we choose reference values that correspond to the maximum of the liquidus in Stixrude et al. (2009) where dS_liq/dr = 0. The primary scalings are given in Table 1 and others are straightforward to derive.

Pressure and material properties

The fundamental equations (Section 2.1) are expressed in terms of radius r, and it is necessary to relate this to hydrostatic pressure P to interface the evolution with equations of state for planetary materials.
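Before detailing the pressure structure, the regime switch for κ_h described in Section 2.1 can be summarised in a minimal sketch. The sketch is written in Python for clarity; the two velocity callables stand in for the full viscous (Eq. 5) and inviscid (Eq. 7a) expressions, which are not reproduced here, and all names are illustrative rather than taken from our code.

```python
RE_CRIT = 9.0 / 8.0  # critical Reynolds number (e.g., Abe, 1995)

def eddy_diffusivity(dSdr, l, nu, u_viscous, u_inviscid, re_crit=RE_CRIT):
    """Regime-switched MLT eddy diffusivity, kappa_h = u * l.

    dSdr       : local entropy gradient dS/dr (super-adiabatic when negative)
    l          : mixing length (distance to the nearest thermal boundary layer)
    nu         : kinematic viscosity of the aggregate, eta / rho
    u_viscous  : callable u(dSdr, l) for the viscous (Stokes drag) scaling
    u_inviscid : callable u(dSdr, l) for the inviscid (dynamic pressure) scaling
    """
    if dSdr >= 0.0:
        return 0.0  # sub-adiabatic: no convective transport
    u_vis = u_viscous(dSdr, l)
    re = u_vis * l / nu  # local Reynolds number
    u = u_inviscid(dSdr, l) if re > re_crit else u_vis
    return u * l
```

Because u scales as l in the inviscid branch and as l³ in the viscous branch, κ_h = u l recovers the l² and l⁴ sensitivities noted above.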
We determine the hydrostatic pressure within a planet as a function of depth using an equation of state (EOS). An appropriate choice is the Adams-Williamson EOS, which assumes adiabatic compression of a chemically homogeneous material and is parameterised by ρ_r, a reference surface density, β, a measure of the compressibility of the material, and depth z. For Earth, gravity is near constant throughout the mantle and we solve for ρ_r and β using a least-squares fit to the density profile of the lower mantle (Dziewonski and Anderson, 1981). We use a thermodynamic description for a mantle composed of MgSiO₃, which can exist as a melt (S > S_liq), solid (S < S_sol), or partially molten aggregate (S_sol ≤ S ≤ S_liq). Solid and melt thermophysical properties are determined from Mosenfelder et al. (2009) and Wolf and Bower (2017), respectively. The melt-solid density contrast is on average −5% in the lower mantle, and since we only consider a single component there is no density crossover and the melt is less dense than the solid everywhere. Order-of-magnitude estimates for the thermophysical properties of the aggregate are derived by considering an ideal solution that assumes linear additivity (e.g., Solomatov, 2007). It is advantageous to pre-calculate lookup tables of material properties that can subsequently be queried during the evolution of a model, although analytical expressions could also be used if available. This ensures the model retains the flexibility to incorporate any dataset of material properties and eliminates the computational overhead associated with the Gibbs free energy minimisation that would otherwise be required to compute chemical and phase equilibria. Therefore, from the perspective of the numerical scheme that we subsequently develop (Section 4), we treat thermodynamic quantities (including temperature) as lookup quantities that are a function of entropy S and pressure P. For the full model (Section 5) we use lookup tables for melt and solid properties with an approximate resolution of 23 J kg⁻¹ K⁻¹ in entropy and 2 GPa in pressure. The resolution is chosen according to the smoothness of the data, but since it is difficult to predict the influence of the input data in a non-linear model we run three additional cases with coarser lookup tables and compare the output (see supplementary material). In fact, thermodynamic considerations may result in discontinuous material properties across S_liq and S_sol, and this is typically the source of the largest gradients in material properties. For this reason we use the transition function (Eq. 15a) to ensure properties vary smoothly across the liquidus and solidus and thereby avoid numerical difficulties. It is further used to smoothly introduce the convective mixing flux F_mix as the system cools below the liquidus and enters the mixed phase region. The smoothing is formulated akin to the viscosity transition (Eq. 15), where now φ is the generalised melt fraction (Eq. 10b), φ_c is 1 and 0 for smoothing across the liquidus and solidus, respectively, and the smoothing width φ_w = 0.01, which corresponds to ≈8 J kg⁻¹ K⁻¹ in the lower mantle. See the supplementary material for discussion of the sensitivity of our results to the smoothing width.

Boundary conditions

Our objective is to devise a flexible framework to probe a variety of planetary cooling scenarios and therefore it is useful to consider both linear and non-linear boundary conditions.
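The lookup-table treatment of material properties described in the previous subsection can be illustrated with a minimal sketch; the axes, resolutions, property values, and use of SciPy's RegularGridInterpolator are assumptions made for the sketch only, not a description of our actual tables.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical axes at roughly the resolutions quoted above
# (~23 J/kg/K in entropy and ~2 GPa in pressure).
entropy_axis = np.linspace(1500.0, 3500.0, 88)    # J/kg/K
pressure_axis = np.linspace(0.0, 140.0e9, 71)     # Pa

# In the full model this table would be pre-computed from the melt and
# solid datasets and blended with the transition function (Eq. 15a);
# here it is a placeholder array of the correct shape.
rho_table = np.full((entropy_axis.size, pressure_axis.size), 4000.0)

rho_lookup = RegularGridInterpolator(
    (entropy_axis, pressure_axis), rho_table,
    bounds_error=False, fill_value=None)  # extrapolate at the table edges

def density(S, P):
    """Return density at entropy S (J/kg/K) and pressure P (Pa)."""
    return rho_lookup((S, P))
```

Blending separate melt and solid tables with the transition function before interpolation keeps the queried properties smooth across the liquidus and solidus, as described above.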
For the top surface, an isothermal boundary condition is appropriate for a planet in radiative equilibrium where the surface temperature is dictated by incoming solar radiation rather than interior heat. Another simple choice is a constant heat flux boundary condition. However, for the earliest stage of planetary cooling the surface radiates as a grey-body (Eq. 17; e.g., Elkins-Tanton, 2008): F_surf = ǫσ(T_surf⁴ − T_eq⁴), where F_surf and T_surf are the surface heat flux and temperature, respectively, ǫ is the emissivity, which is unity in the absence of atmospheric effects, and σ is the Stefan-Boltzmann constant. T_eq is the radiative equilibrium temperature of the planet, which is approximately 273 K for Earth. Both mixing length and boundary layer theory predict that a magma ocean has an ultra-thin thermal boundary layer at the top surface with a thickness of a few centimetres. The temperature drop across the boundary layer, ∆T_bl, is parameterised following Eq. 18 (e.g., Solomatov, 2000; Reese and Solomatov, 2006). For the bottom surface, which corresponds to the core-mantle boundary (CMB), we consider the energy balance for the core, neglecting the energy contribution from growth of the inner core and internal heat sources; here m_core and c_core are the mass and heat capacity of the core, respectively, T_core is the mass-weighted effective temperature of the core, r_cmb is the planetary radius at the CMB, and F_cmb is the CMB heat flux. Note that the core and mantle are thermally coupled, which is why we must consider temperature rather than entropy in the formulation of this boundary condition. It is convenient to relate the bulk temperature of the core to the temperature at the CMB via T_core = T̃_core T_cmb, where T̃_core is a constant that accounts for the thermal profile of the core, and T_cmb is the CMB temperature, which formally is the foot temperature of the core adiabat. We derive T̃_core by assuming the core is isentropic (i.e., vigorously convecting) with Grüneisen parameter γ = 1.3 (Stacey, 1994; Labrosse et al., 2001). Changes in the mass distribution of the core due to cooling are negligible compared to changes in temperature. Therefore, the core density profile (time-independent) is given by the Preliminary Reference Earth Model (Dziewonski and Anderson, 1981), which we modify to exclude the compositional variation of the inner core. These considerations result in a thermal structure correction factor of T̃_core = 1.147, and the boundary condition follows (Eq. 21).

Physical description

We present a simplified model, derived from the fundamental equations (Section 2.1), which captures key aspects of the full system and highlights the relationship between the non-linear conservation law and non-linear diffusion. The insights gained from this analysis guide our selection of the numerical method that we implement to solve the full model. The strong form of the conservation law (Eq. 1) is (Eq. 23): ρT ∂S/∂t = −∇·F + ρH. Assuming that F is a purely radial function (along with ρ and T) and excluding internal heat sources (H = 0) implies that in spherical coordinates ρT ∂S/∂t = −(1/r²) ∂(r²F)/∂r. We neglect variations in ρT and assume that the domain of interest has sufficiently little variation in radius to ignore the spherical geometric terms, such that the problem is locally 1-D (Eq. 24): ρT ∂S/∂t = −∂F/∂r. We further restrict our interest to the centre of a partially molten mantle just below the liquidus and away from thermal boundary layers. Here, gravitational separation is insignificant due to efficient mixing at high Rayleigh number, and conduction is negligible in comparison to convection. Therefore, the total flux is dominated by convection (Eq. 3) and mixing (Eq. 9).
In Eq. 25c, ∂φ/∂r reduces to a simple expression in ∆S_fus, ∂S/∂r and dS_liq/dr using Eq. 10, and we then apply the approximation that φ = 1 near the liquidus. Flow is inviscid close to the liquidus, so the eddy diffusivity κ_h takes the inviscid form (Eq. 26). For Earth, the (non-dimensional) mixing length l ≈ 1/4 since the core and mantle thickness are both approximately half of the planetary radius. This gives the prefactor of 1/64 when combined with the 1/16 constant inside the square root in Eq. 7a. The non-linear diffusion coefficient κ_h depends on ∂S/∂r via a square-root non-linearity (Eq. 26a); for small ∂S/∂r, this has the effect of strongly amplifying any error in ∂S/∂r. In the full set of equations it also depends on S and r through its dependence on material parameters (Eqs. 5a, 7a). Crucially, κ_h enforces a strong asymmetry because non-trivial solutions (defined by non-zero convective and mixing flux) are only admissible when ∂S/∂r < 0.

Steady-state analysis

Inspection of the flux (Eq. 25) reveals a solution space where F can be positive (radially outward), negative (radially inward), or zero (Eq. 27). For dS_liq/dr < 0 both negative and positive fluxes are admissible, since Eqs. 27a and 27b can each be satisfied for a different ∂S/∂r. For dS_liq/dr > 0, however, only a positive flux is permitted, since Eq. 27b can otherwise never be satisfied. This asymmetry arises because F_conv is always positive regardless of the sign of dS_liq/dr, but the total flux F can be negative if mixing overwhelms convection, with |F_mix| > |F_conv| and dS_liq/dr < 0. In this situation, mixing enables heat to be buried deep in the interior by transfer of latent heat. However, the capacity to transport energy downwards towards the CMB is restricted by the availability of latent heat. In this simplified model, latent heat is manifest through the relative difference between the entropy profile (∂S/∂r) and the liquidus (dS_liq/dr). We compute the derivative of F (Eq. 25a) with respect to ∂S/∂r and determine a minimum (i.e., largest negative) flux (Eq. 28). In essence, a small negative entropy gradient can drive a net negative flux, but once the magnitude of ∂S/∂r becomes large then convective heat transport dominates and the net flux reverts to positive. Note that there is no maximum (largest positive) flux because increasing the magnitude of ∂S/∂r can increase the (positive) flux without bound. We plot the solution space for F as a function of dS_liq/dr and ∂S/∂r and mark the regions defined by Eqs. 27 and 28 (Fig. 1). We then compute steady-state solutions to Eq. 24 (∂S/∂t = 0) for constant positive and negative fluxes. For F_min < F < 0 there are multiple solutions: for a given dS_liq/dr a negative flux can be accommodated by either a small or a large negative ∂S/∂r. However, the physical relevance of F < 0 is questionable and this region can likely be excluded from further consideration for two reasons. Firstly, a cooling molten planet radiates as a grey body and thus has a positive heat flux boundary condition at the surface. Secondly, only F > 0 provides a globally continuous steady-state solution for all dS_liq/dr. As previously mentioned, the simplified model does not inherently apply an upper limit to the (positive) flux that a magma ocean can transport. This is evident in Fig. 1, where a positive flux of any magnitude can be accommodated by a comparatively large ∂S/∂r. For dS_liq/dr < 0, positive flux is always accommodated by a ∂S/∂r that is less than half the gradient of the liquidus.
Crucially, this constrains the dynamic range of ∂S/∂r to within about 4 orders of magnitude regardless of the magnitude of F. For dS_liq/dr > 0, however, latent heat transport is sufficiently large and positive that the system can only accommodate reduced fluxes by driving ∂S/∂r toward zero to minimise κ_h (Eq. 26) and hence the total flux. This is important because the total flux is therefore limited by the efficiency of radiative heat transfer to space. An appropriate estimate is 10⁶ W m⁻², corresponding to a black body temperature of 2050 K, which is compatible with the expected surface temperature of a magma ocean. This flux predicts a large dynamic range of ∂S/∂r of about 12 orders of magnitude. To elucidate this behaviour further it is useful to restrict the subsequent analysis to a range of dS_liq/dr that is physically reasonable. The liquidus S_liq controls the pressure at which crystals first form in a magma ocean. For the standard "bottom-up" crystallisation scenario, dS_liq/dr remains everywhere negative. In contrast, a "middle-out" crystallisation scenario, recently proposed for the early Earth (Stixrude et al., 2009), results from a liquidus overturn where the sign of the gradient changes at around 75 GPa (Fig. 2d). Using this liquidus we solve the simplified model for a heat flux of 10⁶ W m⁻² (Fig. 2). In addition to again revealing the large dynamic range of ∂S/∂r (Fig. 2b), we see that heat is dominantly transported by mixing where dS_liq/dr > 0 (Fig. 2c). For dS_liq/dr < 0, however, both the convection and mixing fluxes have large magnitude (∼10¹² W m⁻²) and opposite signs that largely cancel to result in a smaller positive flux of 10⁶ W m⁻² (Fig. 2a,c). This analysis provides crucial insight for devising a suitable numerical method for the full system of equations. Importantly, we must ensure the numerical method is both accurate and stable when ∂S/∂r varies over 12 orders of magnitude. Furthermore, ∂S/∂r appears inside a non-linear term (square root) which can amplify errors when computing fluxes. This is because the square root function has a vertical tangent line at the origin, so as ∂S/∂r → 0 a proportionally small change of ∂S/∂r results in a large relative change of flux. The total error can be further compounded if component fluxes are computed individually and differenced to obtain the total flux; this is because for dS_liq/dr < 0 the total flux arises from the combination of a positive convective flux and a negative mixing flux, yielding F ∼ 10⁶ W m⁻², which is around 6 orders of magnitude less than both F_conv and F_mix (Fig. 2c,d). However, in this part of the domain the entropy gradient is comparatively large, which will help to mitigate large fluctuations in flux due to small changes of ∂S/∂r. For dS_liq/dr > 0, the total flux is dominantly carried by mixing and hence there is less concern that differencing the convective and mixing flux will introduce significant error or loss of precision. However, it is important to retain precision in the estimate of ∂S/∂r since the entropy gradient is generally small throughout convecting systems (Fig. 2b).

Auxiliary variable approach

We use the finite volume method (FVM) on a fixed grid to formulate a numerical solution to the full model because it is naturally energy conserving, intuitive, and flexible. It is straightforward to compute fluxes of arbitrary algebraic complexity in a FVM, which is important because the fluxes are non-linear and depend on material parameters.
Furthermore, a FVM can easily accommodate additional fluxes with minor modifications, ensuring our method remains customisable. There are also fully dynamic mantle convection codes that use the FVM (e.g., Tackley, 2008) and hence we can readily port components of our model to other codes. The simplified model (Section 3) reveals a large dynamic range for ∂S/∂r of around 12 orders of magnitude (Figs. 1, 2). Therefore, controlling numerical errors in ∂S/∂r is important for computing an accurate and robust numerical solution. For this reason we solve for the time-dependence of an auxiliary variable q = ∂S/∂r, rather than S as would typically be done, since this eliminates the need to compute a finite difference estimate of ∂S/∂r using S. This is advantageous because S is defined pointwise on a numerical mesh and near-identical neighbouring values are expected in regions where dS_liq/dr > 0 (Fig. 2). In these regions the finite precision representation of floating point numbers can result in catastrophic cancellation and amplify numerical noise. However, we do not eliminate finite difference operations entirely from the numerical method, as they are still implicitly used to obtain a numerical estimate of ∂q/∂t (Eq. 32a) and explicitly used to compute an approximate Jacobian (Section 4.3). Using the auxiliary variable q = ∂S/∂r and the equivalence of mixed partial derivatives (assuming S is sufficiently smooth), it follows that ∂q/∂t = ∂(∂S/∂t)/∂r (Eq. 29). Assuming radial symmetry and substituting this expression into Eq. 23 gives the continuous form of the evolution equation for q. However, we actually formulate and solve the problem using the integral form (Eq. 1) to determine ∂S/∂t and subsequently calculate q using Eq. 29. Both the continuous and integral solution approaches require an additional equation to relate q and S (Eq. 31), in which S is recovered by integrating q along the radius from a location where the entropy is known. Eq. 31 therefore requires that the entropy is known at radius r_0 along the entropy profile (S_{1/2}) in order to unambiguously recover S(r, t) using q; this is facilitated by Eq. 1. The index of 1/2 in Eq. 31 is arbitrary but has been chosen for consistency with the semi-discrete equations that are presented in Section 4.2.

Semi-discretisation

We solve the energy transport equations (Eqs. 1, 29, 31) with spherically-symmetric geometry using the FVM to compute S(r, t). It is natural to employ a staggered grid where fluxes are defined at basic nodes at cell boundaries and quantities that are integrated over a control volume (cell) are associated with staggered nodes, which are defined as equidistant between neighbouring basic nodes. We discretise the spatial coordinate (radius) using a piecewise constant reconstruction and evaluate integrals using the midpoint rule. The energy balance (Eq. 1) can then be expressed as a non-linear system of equations to solve for ∂q/∂t (Eq. 29) at the basic nodes (Eq. 32) and for ∂S/∂t at the uppermost staggered node (Eq. 33), where i is a mesh index that is zero at the top surface (a basic node) and p at the bottom surface, C_i = ρ_i T_i V_i is the capacitance term on the LHS of Eq. 1, and F̃ is the numerical flux, an approximation to the true flux F (Eq. 2). The discrete radial increment ∆r_i = r_{i+1/2} − r_{i−1/2} is negative by convention because the mesh index increases as radius decreases. Eq. 32b is recognised as a central finite difference of fluxes evaluated at cell boundaries (basic nodes), weighted by capacitances determined at cell centres (staggered nodes). Since we solve for q at the basic nodes we avoid a finite difference approximation of ∂S/∂r using S at neighbouring staggered nodes.
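To make the structure of Eqs. 32 and 33 concrete, a minimal sketch of the semi-discrete right-hand side is given below. It is a Python illustration under simplifying assumptions (boundary fluxes supplied externally, q used directly to integrate S between staggered nodes); the array names and the flux callable are hypothetical, and the actual implementation uses C with PETSc and SUNDIALS (Section 4.3).

```python
import numpy as np

def rhs(q, S_half, r_b, r_s, F_surf, F_cmb, capacitance, interior_flux):
    """Sketch of the semi-discrete system for q = dS/dr on a staggered grid.

    q             : dS/dr at the p+1 basic nodes (index 0 = top surface)
    S_half        : entropy at the uppermost staggered node, S_{1/2}
    r_b, r_s      : radii of basic and staggered nodes (decreasing with index)
    F_surf, F_cmb : boundary fluxes from the discretised boundary conditions
    capacitance   : C = rho*T*V at each of the p staggered nodes
    interior_flux : callable F(S_s, q, r_b) returning the total flux (Eq. 2)
                    at the p-1 interior basic nodes, via the lookup tables
    """
    p = len(r_s)  # number of cells (staggered nodes)

    # Recover S at the staggered nodes by integrating q inward from S_{1/2} (Eq. 31).
    S_s = np.empty(p)
    S_s[0] = S_half
    for j in range(1, p):
        S_s[j] = S_s[j - 1] + q[j] * (r_s[j] - r_s[j - 1])

    # Total flux at every basic node: boundary values plus interior evaluation.
    F = np.empty(p + 1)
    F[0], F[p] = F_surf, F_cmb
    F[1:p] = interior_flux(S_s, q, r_b)

    # Energy balance of each cell (Eq. 1 over the control volume):
    # C dS/dt = flux entering through the lower face minus flux leaving the upper face.
    area = 4.0 * np.pi * r_b ** 2
    dSdt_s = (F[1:] * area[1:] - F[:-1] * area[:-1]) / capacitance

    # dq/dt at the interior basic nodes: central difference of dS/dt divided
    # by the (negative) radial increment between staggered nodes (Eq. 32).
    dqdt = np.zeros(p + 1)
    dqdt[1:p] = (dSdt_s[1:] - dSdt_s[:-1]) / (r_s[1:] - r_s[:-1])
    # dq/dt at the two boundary basic nodes follows from the boundary
    # conditions and is omitted from this sketch.

    return dqdt, dSdt_s[0]  # dS/dt at the uppermost staggered node (Eq. 33)
```

In practice this right-hand side would be handed to a stiff implicit integrator, which is the role CVODE plays in our implementation (Section 4.3).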
This limits our scheme to just one finite difference operation acting on fluxes which also helps to retain overall numerical precision. By tracking the entropy at the uppermost staggered node (Eq. 33) we can integrate q to obtain entropy S(r, t) and therefore evaluate material properties at the basic and staggered nodes for F and C, respectively (Eq. 31). This integration is a numerically stable operation, and furthermore S is used to return material quantities from smoothly-varying lookup tables (Section 2.2). Therefore errors in S are not amplified like errors in ∂S/∂r. We implement the boundary conditions (Section 2.3) in our semi-discrete system of equations as follows. The radiative surface boundary condition (Eq. 17) is: and using Eq. 18: We can solve for T 0 by finding the root of the cubic equation (Eq. 35) since T 1 2 is known at the current time and b is constant. The CMB condition (Eq. 21) is cast into an alternative form. Recall that it is natural to formulate this condition using temperature as opposed to entropy since the core and mantle are thermally coupled. We determine the thermal energy balance of the cell that neighbours the core: The lowermost cell is at the CMB temperature (T p− 1 2 = T cmb ) so we substitute in Eq. 21 and rearrange to determine the CMB heat flux: F p−1 is evaluated using Eq. 2 since it is a basic node within the magma ocean solution domain. Note that ρ p−1/2 and c p−1/2 depend on entropy and pressure and hence are time-dependent. Cast in this form it is readily apparent that the boundary condition provides no restriction on the heat flux that the core provides the mantle. It effectively enables the (unmodelled) ultra-thin thermal boundary layer at the base of the mantle to instantaneously adjust its thickness to accommodate the magma ocean heat flux. Time integration We formulate the numerical problem using C and PETSc (Balay et al., 2016) and solve the resulting system of equations using SUNDIALS (Hindmarsh et al., 2005;Hindmarsh and Serban, 2015). SUNDIALS's and PETSc's interfaces are customised to facilitate quadruple precision ( float128 from GCC libquadmath) calculations and to use CVODE's direct sequential linear solver. We use the stiff ODE solver (CVODE) in the SUNDIALS package, based on backward differentiation formulae. This implicit timestepping approach is warranted because the system is stiff owing to the large difference in timescales between inviscid and viscous flow. We configure the ODE solver to use a 5th order method with dynamic time stepping and automatic order reduction if the solution becomes unstable. Since the problem has low dimensionality, we use a direct solve within the Newton iteration, rather than an iterative approach, thus avoiding introducing additional error in the form of a convergence tolerance. We opt to compute a finite-difference Jacobian, due to the non-trivial nature of the fluxes that depend on gradients and also material properties obtained from lookup tables; in practice these are often obtained from opaque third-party software which does not provide derivatives. Results We demonstrate our numerical method by investigating two crystallisation scenarios for Earth that are dictated by the shape of the liquidus and solidus. The liquidus in case "MO" (middle-out) is derived from Stixrude et al. (2009) and has a characteristic overturn in the middle of the domain and hence dS liq /dr changes sign from negative to positive as pressure increases (Fig. 3a). 
In contrast, case "BU" (bottom-up) uses the liquidus from Andrault et al. (2011) where crucially dS_liq/dr < 0 everywhere (Fig. 6a). These archetypal cases demonstrate the role of dS_liq/dr in dictating the necessary numerical precision to obtain a satisfactory solution. All other physical parameters are identical (Table 2) and we use a regular grid with 200 basic nodes (p = 200). For case MO we use quadruple precision calculations with a relative and absolute timestepper tolerance of 10^-18. The mantle cools along approximate adiabats (constant S) until the adiabat intercepts the liquidus around 75 GPa (Fig. 3a). Then ∂S/∂r proceeds to decrease by 12 orders of magnitude below the liquidus as a consequence of dS_liq/dr switching sign from negative to positive. Note the excellent agreement between the simplified and full models near the liquidus (compare Fig. 2b and Fig. 5c, 0.4 kyr). The large dynamic range of ∂S/∂r is the fundamental reason why quadruple precision is necessary for this case. Where dS_liq/dr > 0, ∂S/∂r is driven to a small value (≈ -10^-13) to reduce the eddy diffusivity (Fig. 5a) and hence the convective flux (Fig. 4a). This is because the majority of the total flux is accommodated by mixing (Eq. 9, Fig. 4c) since ∂φ/∂r < 0 in this region (κ_h > 0). This behaviour adheres to the insight gained from the simplified model (Section 3) where small negative ∂S/∂r is expected for dS_liq/dr > 0. Despite the large variation in convective and mixing fluxes across the mantle, the total flux is near-constant and decreases steadily with time (Fig. 4d). For case BU, ∂S/∂r remains relatively large (negative) and crucially has a reduced dynamic range in comparison to MO (Fig. 8c). As the mantle cools, the liquidus is first intercepted by the adiabat at the bottom of the domain (Fig. 6a). The simplified model reveals that ∂S/∂r is always less than half of dS_liq/dr (Eq. 27a), which restricts the dynamic range of ∂S/∂r. This ensures that the system is resolvable using double precision calculations and we set the relative and absolute tolerance of the time-stepper to 10^-10. Prior to 1.2 kyr, convection (Fig. 7a) and mixing (Fig. 7c) have comparable magnitude but opposite sign, which drives a net flux of around 10^6 W m^-2 (Fig. 7d). At later time (1.8 kyr), however, the mixing flux changes sign in the deepest part of the mantle (Fig. 7c). This is because ∂φ/∂r switches sign as a consequence of the cooling profile lying over the rheological transition. Note that this sign change does not occur for MO because ∂φ/∂r < 0 is already established in the lower mantle (Fig. 3c). The total variation of the mixing flux for BU is not as large as for MO (compare Fig. 4c and Fig. 7c). But most importantly, ∂S/∂r remains relatively large for BU yet varies over about 12 orders of magnitude for MO (compare Fig. 5c and Fig. 8c). This again emphasises the fundamental difference between the cases and the requirement for extended numerical precision for case MO.

Discussion

Our equations are based on Abe (1993, 1995, 1997; hereinafter "Abe"), which uses mixing length theory (MLT) to formulate the convective flux in terms of an eddy diffusivity that is a function of the local super-adiabatic temperature gradient (Eq. 8). This follows a classic approach in stellar structure modelling where the mixing length explicitly appears in the equations and scales the gradient of entropy or temperature (e.g., Spiegel, 1971), introducing the dependence of eddy diffusivity on ∂S/∂r.
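To make the structure of this closure concrete, the following Python sketch assumes a commonly used inviscid MLT form, kappa_h ~ l^2 sqrt((alpha g T / c_p) max(-dS/dr, 0)); the prefactor and the viscous-regime branch of the actual Eq. 8 are omitted, so the snippet is illustrative rather than a reproduction of the model closure.

    import numpy as np

    def eddy_diffusivity(dSdr, l, alpha, g, T, cp):
        # Hedged sketch of an inviscid MLT eddy diffusivity: only super-adiabatic
        # (negative) entropy gradients drive convection, and the square-root
        # dependence gives the vertical tangent at dS/dr = 0 discussed in Section 3.
        drive = np.maximum(-dSdr, 0.0)
        return l**2 * np.sqrt(alpha * g * T / cp * drive)

    def convective_flux(dSdr, rho, T, kappa_h):
        # F_conv = -rho * T * kappa_h * dS/dr: a negative entropy gradient gives
        # a positive (outward) convective flux.
        return -rho * T * kappa_h * dSdr

    # Example with round-number mantle values and a mixing length of 1000 km.
    dSdr = -1.0e-3                                        # J kg^-1 K^-1 m^-1
    kh = eddy_diffusivity(dSdr, 1.0e6, 3.0e-5, 10.0, 3000.0, 1000.0)
    F = convective_flux(dSdr, 4000.0, 3000.0, kh)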
Furthermore, Abe's are the only studies (prior to this work) that consider convection, conduction, mixing, and gravitational separation; other studies typically just model the convective flux. Abe uses a finite difference scheme with a modified Euler backward method of time stepping. However, because he investigates bottom-up crystallisation (dS_liq/dr < 0 everywhere) he is not exposed to the numerical precision challenges that we uncover when the liquidus exhibits an overturn (case MO). In fact, all previous work has focussed on bottom-up crystallisation similar to case BU where dS_liq/dr ≪ 0 everywhere in the domain (e.g., Abe, 1993, 1997; Lebrun et al., 2013; Monteux et al., 2016). An alternative approach to formulate the convective flux, as opposed to MLT, uses boundary layer theory (BLT). Formally, BLT is a simplified variant of MLT (Siggia, 1994; Kraichnan, 1962) but for clarity we distinguish it separately to avoid confusion with our formulation based on Abe. Analysis of boundary layer stability in Rayleigh-Bénard convection provides the classical result that the non-dimensional heat flux (Nusselt number) scales with the Rayleigh number to the power of one-third (e.g., Siggia, 1994). In this case the length scale of convection is encapsulated within the Rayleigh number rather than a mixing length parameter. The one-third scaling law is in reasonable agreement with experiments (e.g., Table 1 in Grossmann and Detlef, 2000) and MLT (see Fig. 3 in Abe, 1995). Solomatov and Stevenson (1993) popularised BLT for application to magma oceans. Subsequent studies that focus on understanding the energetic coupling and volatile exchange between a magma ocean and an atmosphere use BLT to quantify the convective heat transport in the interior (Lebrun et al., 2013; Hamano et al., 2013, 2015). But since these studies focus on the role of the atmosphere, they implement a simple model of interior dynamics. For example, they solve a single equation that describes the evolution of the mantle potential temperature, which involves calculating an effective heat capacity for the entire planet by using the mantle potential temperature to reconstruct the temperature profile as a function of depth. This requires additional assumptions, most commonly that the mantle is adiabatic and predefining a path of crystallisation (e.g., bottom-up in the case of Lebrun et al. (2013)). Whilst the aforementioned quasi 1-D interior models inform about the bulk cooling of a planet, they provide limited information about the depth-dependence of the evolving interior. This is where a local description of energy transfer within an interior is advantageous (this study; Abe, 1993; Monteux et al., 2016), since the evolving energy balance can be determined self-consistently with thermodynamic models of mantle materials (e.g., Wolf and Bower, 2017). For example, both case MO and BU demonstrate that the thermal structure in a crystallising magma ocean is not strictly adiabatic due to the latent heat transport associated with convective mixing. A local description also provides flexibility to include additional energy fluxes and accommodate arbitrary liquidus and solidus curves. This provides crucial infrastructure to investigate magma ocean crystallisation for rocky planets in general where cooling could be markedly different from Earth depending on planet composition, core size, atmospheric composition and dynamics, etc.
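Returning to the BLT scaling discussed earlier in this section, a minimal Python sketch of the one-third flux law is given below; the prefactor a = 0.089 is a commonly quoted soft-turbulence value and is illustrative only, not taken from the specific studies cited above.

    def blt_heat_flux(dT, d, k, kappa, nu, alpha, g, a=0.089, beta=1.0 / 3.0):
        # Nu = a * Ra**beta with beta = 1/3, so the flux is independent of the
        # layer depth d; the convective length scale enters only through Ra.
        Ra = alpha * g * dT * d**3 / (kappa * nu)     # Rayleigh number
        Nu = a * Ra**beta                             # Nusselt number
        return Nu * k * dT / d                        # heat flux, W m^-2

    # A fully molten, 3000 km deep layer with a 1000 K super-adiabatic contrast
    # (order-of-magnitude values) gives roughly 5e5 W m^-2, comparable to the
    # 10^6 W m^-2 radiative estimate used in Section 3.
    F = blt_heat_flux(dT=1.0e3, d=3.0e6, k=4.0, kappa=1.0e-6, nu=1.0e-4, alpha=3.0e-5, g=10.0)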
The convective flux determined by BLT can be used to compute an eddy diffusivity and may be formulated piecewise to account for regime changes such as a transition from soft to hard turbulence (Neumann et al., 2014;Monteux et al., 2016). Since in BLT the convective flux is a power law of the Rayleigh number, the latter of which contains a difference of two temperatures, the functional complexity of the eddy diffusivity is considerably reduced for BLT in comparison to MLT. This is because both the flux and hence eddy diffusivity do not depend on local gradients, which therefore reduces the requirement for precise numerical estimates of derivative quantities. Monteux et al. (2016) determine a local Rayleigh number based on the difference between the temperature profile of the magma ocean and a reference adiabat, the latter of which is computed with a potential temperature corresponding to the surface temperature. Hence although the flux is determined locally at each depth in the magma ocean, it is computed relative to a somewhat arbitrary reference adiabat-it is not apparent how physically reasonable this approach is. They also ignore convective mixing and consider only bottom-up crystallisation, which alleviates any potential difficulties relating to the numerical precision of their finite difference scheme. We choose the mixing length to be the distance to the nearest material boundary (e.g. Stothers and Chin, 1997), although alternative formulations exist (see supplementary material). This formally means that during magma ocean crystallisation, whether from the bottom-up or middle-out, thermal boundary layers exist at the top and bottom of the mantle and hence single layer convection ensues. The model thus presumes that crystal growth is sufficiently disrupted by turbulent flow that the magma ocean does not partition into multiple convecting domains. Future work should investigate how to relax this requirement despite several physical and numerical challenges. Imposing an abrupt change in the mixing length during crystallisation to allow formation of an additional boundary layer will likely produce artefacts that hinder physical interpretation of the model. Furthermore, boundary layers in the magma ocean may only be a few centimetres thick and will migrate as crystallisation proceeds -models with both high mesh resolution and a moving mesh may be required to address this scenario. The angular momentum of the Earth-Moon system constrains the rotation rate of the early Earth to a few hours following the formation of the Moon. Furthermore, accretion simulations predict that Earth-mass planets may initially rotate even faster, within about 30% or less of their rotational stability limit (e.g., Kokubo and Genda, 2010). This raises the question of how rotational effects can influence the subsequent cooling and crystallisation of a magma ocean. In our modelling framework, the interaction of rotation and convection can be accommodated by modifying the mixing length. Käpylä et al. (2005) show that increasing the rotation rate reduces the spatial scale of convection and the efficiency of convective energy transport. The mixing length decreases by a factor of 2 to 3 as the Rossby number decreases from 10 to 0.1. These authors also find that the superadiabaticity increases as a function of latitude. Conclusions We present an intuitive and flexible method for solving a non-linear conservation law relevant to the interior dynamics of partially molten planets. 
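A minimal sketch of the mixing-length choices discussed in this section is given below; the rotational reduction factor is an illustrative stand-in for the Rossby-number dependence reported by Käpylä et al. (2005) rather than a fitted law.

    import numpy as np

    def mixing_length(r, r_cmb, r_surf, rotation_factor=1.0):
        # Mixing length as the distance to the nearest material boundary
        # (cf. Stothers and Chin, 1997); a single convecting layer is assumed.
        l = np.minimum(r - r_cmb, r_surf - r)
        return rotation_factor * l

    def constant_mixing_length(r, r_cmb, r_surf):
        # Alternative used in case BUM of the supplementary material: a constant
        # value equal to one quarter of the mantle thickness.
        return np.full_like(r, 0.25 * (r_surf - r_cmb))

    r = np.linspace(3.5e6, 6.4e6, 5)
    l_boundary = mixing_length(r, 3.5e6, 6.4e6)
    l_rotating = mixing_length(r, 3.5e6, 6.4e6, rotation_factor=0.4)   # ~2-3x reduction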
The numerical scheme solves the mixing length theory (MLT) representation of the energy balance. In a single formulation (i.e., one solution domain with a fixed grid), our model captures both inviscid flow (high melt fraction) and viscous flow (low melt fraction) and thus can chart the cooling trajectory of an initially molten planet as it solidifies. This encapsulates a viscosity range of 19 orders of magnitude. Using the finite volume method we formulate a local description of the dynamics and consider heat transport due to conduction, convection, mixing (i.e., latent heat transport), and gravitational separation. Our models are not constrained by a priori assumptions that the magma ocean is necessarily convecting or that it will follow a particular crystallisation sequence (e.g., bottom-up). It is straightforward to introduce additional energy transfer mechanisms within the finite volume framework. The model interfaces with equations of state for melt and solid phases to ensure thermodynamic self-consistency for each phase and can additionally accommodate arbitrary melting curves. Through a simplified MLT model, we reveal the requirement for extended numerical precision where the liquidus exhibits an overturn-a scenario which may be applicable to Earth. In this case, it is necessary to resolve entropy gradients spanning 12 orders of magnitude. We subsequently demonstrate the ability of the numerical scheme to handle this scenario using a full model which includes all four of the aforementioned energy fluxes, in addition to material properties obtained from lookup tables. In contrast, the dynamic range of the entropy gradient is significantly less for a case where the liquidus monotonically increases with pressure, which explains why all previous modelling studies did not encounter the same technical challenge that we uncover. Both the simplified and full model reveal cooling profiles that depart from adiabatic as a consequence of latent heat transport associated with convective mixing. This emphasises the importance of considering latent heat transport in a magma ocean since this energy flux is frequently omitted in dynamic studies. Finally, we expect the analysis and numerical scheme we propose to be of broad interest to other fields of science and engineering due to the prevalence of systems that are modelled by non-linear conservation laws. In particular, we offer a simple and robust approach to accommodate a diffusion coefficient that is sensitive to a local gradient. Code availability SPIDER: "Simulating Planetary Interior Dynamics with Extreme Rheology" is hosted on Bitbucket at https://bitbucket.org/djbower/spider and access can be requested by contacting Dan J. Bower. for Advanced Scientific Computing (PASC) program. ASW acknowledges the financial support and scientific freedom provided by the Turner Postdoctoral Fellowship at the University of Michigan, Ann Arbor. We thank Paul J. Tackley for discussions and for providing comments on an earlier version of the manuscript. Two anonymous reviewers provided helpful critique that enhanced the clarity of the manuscript. Perceptually-uniform and colour-blindness friendly colour arrays are available at http://www.fabiocrameri.ch/visualisation.php. Demonstration of convergence Our numerical solution scheme uses the finite volume method (FVM) with an auxiliary variable to enable greater numerical precision to be achieved. 
In this regard, there is extensive discussion in the literature on the accuracy and convergence behaviour of the FVM (e.g., Droniou, 2014;LeVeque, 2002); we note that our introduction of an auxiliary variable can be seen simply as change of variables for the semi-discretised system. Nevertheless, given the unusual nature of our system, particularly the dependence of a diffusion coefficient on a gradient, we provide numerical demonstrations of the convergence behaviour of our scheme here. An estimate of the tightest upper bound of the total error incurred in a central finite difference is on the order of 10 −11 using double precision calculations (e.g., Nocedal and Wright, 2006). Hence this is the error we can expect when we evaluate ∂q i /∂t using Eq. 32 for models that use double precision such as case BU. Therefore, this constrains the tightest tolerance for the accuracy of the timestepper to also be around 10 −11 , on the basis that this is the maximum total accuracy we can expect to achieve. To demonstrate convergence, we thus rerun case BU with the absolute and relative tolerances of the timestepper ranging from 10 −1 to 10 −11 for mesh sizes (number of basic nodes) of p =25, 50, 100, and 200. The nominal case BU in the main manuscript has a tolerance of 10 −10 and p = 200. For each p we compute the Euclidean norm of the difference between the solution vector for a given tolerance and the vector for the tightest tolerance (10 −11 ) at four output times that roughly correspond to 25%, 50%, 75% and 100% of the total integration time. The results are summarised in Fig. A.9. Importantly, the norm generally decreases as the timestepper tolerance decreases for all mesh sizes considered, which is consistent with a convergent scheme. The scheme contains several complicating factors, in addition to its inherent nonlinearities, including nonlinear boundary conditions and smoothing parameters (as discussed in the following section); thus, even though exact benchmark solutions or rigorous convergence analysis are not available, we are confident in the convergence of our scheme to a physically meaningful solution. Resolution of lookup tables Melt and solid thermophysical properties are stored in lookup tables that are accessed as the model timesteps. For case BU in the main manuscript the lookup tables have a resolution of approximately 23 Jkg −1 K −1 in entropy and 2 GPa in pressure. We test the sensitivity of our result to this resolution by running an additional three cases that have the same parameters as BU and coarser lookup table resolutions: (1) Pressure data coarsened by a factor of two, (2) Entropy data coarsened by a factor of two, (3) Both pressure and entropy data coarsened by a factor of two. In all three of these cases the difference relative to BU is visually imperceptible (Fig. 6) which suggests that the lookup tables have sufficient resolution to not influence our results. Smoothing width φ w The lookup tables contain smooth data and the bilinear interpolation scheme in the code effectively provides additional smoothing. Therefore, the only smoothing parameter that we implement is to ensure that material properties vary smoothly across the liquidus and solidus; a smoothing width φ w and Eq. 15 are utilised for this cause. Smoothing is required to ensure a stable numerical solution and it is important to note that the smoothing width does not correspond to a physical parameter. 
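As an illustration of how such a smoothing enters the property evaluation, the Python sketch below blends two property values with a smoothstep of width phi_w in melt fraction; the actual functional form is Eq. 15 of the main text and may differ from this choice.

    import numpy as np

    def smoothstep(x):
        # C1-continuous ramp from 0 to 1 on [0, 1].
        x = np.clip(x, 0.0, 1.0)
        return x * x * (3.0 - 2.0 * x)

    def smooth_across_boundary(phi, value_below, value_above, phi_edge, phi_w=0.01):
        # Illustrative stand-in for Eq. 15: blend two property values across a
        # phase boundary at melt fraction phi_edge (0 at the solidus, 1 at the
        # liquidus) over a width phi_w in melt fraction, centred on the boundary.
        w = smoothstep((phi - phi_edge) / phi_w + 0.5)
        return (1.0 - w) * value_below + w * value_above

    # Example: an effective heat capacity (illustrative values, J kg^-1 K^-1)
    # passing smoothly from the solid to the mixed-phase value at the solidus.
    phi = np.linspace(0.0, 0.05, 200)
    cp = smooth_across_boundary(phi, 1200.0, 4000.0, phi_edge=0.0, phi_w=0.01)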
In cases BU and MO in the main manuscript the smoothing width φ w = 0.01 is 1% of the width of the mixed phase region in (non-dimensional) units of melt fraction; clearly, φ w should always be much smaller than the width of the mixed phase region. To test the sensitivity of our result to this choice we rerun BU using a smoothing width 5 times larger (φ w = 0.05) and 5 times smaller (φ w = 0.002) than the nominal case. The results of these two cases are near-visually identical to BU (Fig. 6) and are therefore omitted from this supplementary material. Nevertheless, this analysis shows that the model results are robust for smoothing widths ranging by more than an order of magnitude. Mixing length In the main manuscript we choose the mixing length to be the distance to the nearest material boundary (e.g. Stothers and Chin, 1997). However, to test the sensitivity of our result to this formulation we modify case BU to use a constant mixing length whilst retaining all other parameters to be identical (case BUM). In BUM, the mixing length is set to the 1/4 of the mantle thickness which corresponds to the average mixing length in BU. The results for BUM are shown in Figs. A.10, A.11, A.12 and can be directly compared with Figs. 6, 7, 8, respectively. The results for BU and BUM are qualitatively very similar. In fact, the evolution of BU and BUM during the earliest times before the rheological transition is reached (∼ 1.6 kyr) is often visually indistinguishable (compare Fig. 6 and Fig. A.10). During these times, the eddy diffusivity of the two cases is clearly different (Fig. 8a and Fig. A.12a) because the eddy diffusivity is a strong function of the mixing length (Eq. 8). This affects the partitioning of the total flux between convective and mixing fluxes (Fig. 7a,c and Fig. A.11a,c) but does not strongly dictate the overall cooling behaviour. This is because in both cases the eddy diffusivity is sufficiently large to enable efficient heat transport to the surface where the cooling rate is imposed by the ability of the planet to radiate heat. This suggests that the high melt fraction dynamic regime of our model is not sensitive to the choice of the mixing length. BU and BUM exhibit some differences in behaviour as the rheological transition is reached at 1.8 kyr. In BU, the entropy is slightly higher at the base of the mantle compared to BUM (Fig. 6a and Fig. A.10a). This is reasonable to expect given that the capacity to advect heat is partly controlled by the mixing length and the mixing length is larger at the base of the mantle in BUM than BU. This is consistent with the total heat flux which is less in the deep mantle for BU compared to BUM (Fig. 7d and Fig. A.11d). For BUM, a higher cooling rate in the deep mantle also enables the melt fraction to remain a monotonically increasing function with increasing radius throughout the evolution of the model. Therefore, because the melt fraction gradient has the same sign (positive with respect to radius), the mixing flux also retains the same sign (negative) in the mixed phase region (Fig. A.11c). In contrast, for BU the slower cooling at the base of the mantle gives rise to a melt fraction minimum around 90 GPa (Fig. 6c) and hence for pressures greater than this the mixing flux switches sign from negative to positive (Fig. 7c). This analysis reveals that the cooling behaviour once the rheological transition is reached is more sensitive to the formulation of the mixing length than the earliest phase of rapid cooling due to liquid convection.
Imbalance between endothelial damage and repair capacity in chronic obstructive pulmonary disease Background Circulating endothelial microparticles (EMPs) and progenitor cells (PCs) are biological markers of endothelial function and endogenous repair capacity. The study was aimed to investigate whether COPD patients have an imbalance between EMPs to PCs compared to controls and to evaluate the effect of cigarette smoke on these circulating markers. Methods Circulating EMPs and PCs were determined by flow cytometry in 27 nonsmokers, 20 smokers and 61 COPD patients with moderate to severe airflow obstruction. We compared total EMPs (CD31+CD42b-), apoptotic if they co-expressed Annexin-V+ or activated if they co-expressed CD62E+, circulating PCs (CD34+CD133+CD45+) and the EMPs/PCs ratio between groups. Results COPD patients presented increased levels of total and apoptotic circulating EMPs, and an increased EMPs/PCs ratio, compared with nonsmokers. Women had less circulating PCs than men through all groups and those with COPD showed lower levels of PCs than both control groups. In smokers, circulating EMPs and PCs did not differ from nonsmokers, being the EMPs/PCs ratio in an intermediate position between COPD and nonsmokers. Conclusions We conclude that COPD patients present an imbalance between endothelial damage and repair capacity that might explain the frequent concurrence of cardiovascular disorders. Factors related to the disease itself and gender, rather than cigarette smoking, may account for this imbalance. Introduction Chronic obstructive pulmonary disease (COPD) is a life-threatening lung disease with systemic impact [1]. The primary cause of COPD is cigarette smoking, which is known to also produce endothelial dysfunction [2]. Previously, we demonstrated that COPD patients show endothelial impairment in both the pulmonary and the systemic circulation that may predispose to pulmonary hypertension (PH) and/or cardiovascular events [3][4][5]. Circulating biomarkers have emerged as promising non-invasive surrogates which may provide insights on endothelial function status and reveal mechanisms of endothelial derangement. Cellular biomarkers such as endothelial microparticles and progenitor cells have recently been related to endothelial health [6]. Circulating endothelial microparticles (EMPs) are small membrane-bound vesicles (0.1-1μm diameter) shed from the endothelium in response to cell activation, injury and/ or apoptosis [6][7][8]. As they circulate in the vasculature, they not only act on their local environment but also on sites far from their origin, serving as a cell-to-cell communication network [9]. In plasma of healthy subjects, EMPs are present at low levels, reflecting normal endothelial turnover [10]. Increased levels of EMPs have been reported in a variety of vascular-related disorders [7] and their levels are inversely correlated with endothelial function [11]. Cigarette smoke may alter the levels of circulating EMPs [12][13][14][15]. In COPD, elevated EMPs levels have been reported [16,17]. Higher EMPs levels with activated phenotype may denote COPD patients susceptible to an exacerbation episode [18], and to be predictive of rapid FEV 1 decline [19]. Recently, a negative correlation between EMPs in sputum and FEV 1 has been described in COPD [20]. Circulating progenitor cells (PCs) are adult stem cells derived from the bone marrow that are mobilized into the circulation in response to vascular injury [21]. 
They are involved in maintenance of the endothelium and restoration of its normal function [21]. However, it has also been reported that PCs could participate in the progression of pre-existing lesions [22]. In a recent study, we showed that COPD patients have lower levels of circulating PCs which appear to be a consequence of the disease itself and not related to smoking habit [5]. In this line, other studies also showed lower PCs levels in COPD [23][24][25][26][27][28], while others did not find significant differences [29,30]. Decreasing number of circulating PCs has been established as an independent prognostic risk factor associated with endothelial dysfunction and higher cardiovascular risk [31]. As the magnitude of endothelial damage may result from an imbalance between injury and repair capacity, the combined assessment of circulating EMP and PC levels may be used to evaluate the status of vascular health in different disorders [32][33][34]. Based on this background, we hypothesized that in COPD, alterations of endothelial function are associated with changes in the number of circulating EMPs and PCs. Accordingly, the present study aims to investigate whether COPD patients have an imbalance between EMPs to PCs compared to nonsmokers and current smokers, and the relationship with COPD severity and the presence of PH. Study population Sixty-one patients with COPD and 47 control subjects (27 nonsmokers and 20 current smokers) with normal lung function were enrolled in the study. Patients were recruited from the outpatient clinic and nonsmokers and current smokers were volunteers. The study was approved by the internal review board and all subjects gave written informed consent. Patients with left ventricle dysfunction in echocardiography were excluded. Patients were clinically stable at the time of the study without exacerbation episodes or oral steroid treatment for the previous 4 months. All patients were on regular bronchodilator treatment and most of them received inhaled corticosteroids. In control subjects, the absence of lung disease was confirmed by clinical evaluation and lung function tests. COPD was diagnosed according to current guidelines [1]. All subjects underwent standard evaluation by means of medical history, clinical examination, lung function tests and electrocardiogram. COPD was defined by clinical history compatible and evidence of chronic airflow limitation on spirometry (postbronchodilator forced expiratory volume in the first second (FEV 1 ) / forced vital capacity (FVC) < 70%). COPD patients underwent additional diagnostics for the assessment of associated PH; defined on the basis of right heart catheterization (mean pulmonary arterial pressure !25 mmHg and pulmonary artery occlusion pressure 15 mmHg) or by Doppler echocardiography (tricuspid regurgitation velocity !2.8 m/s). Assessment of circulating endothelial microparticles. Circulating EMPs were assessed by flow cytometry by the expression of the platelet endothelium adhesion molecule-1 (PECAM-1) marker CD31 in the absence of the platelet-specific glycoprotein Ib marker CD42b (S1A-S1B Fig) [12]. To further evaluate whether EMPs were derived from apoptotic or activated endothelial cells, EMPs were also assessed by annexin V staining for the presence of phosphatidylserine, a marker linked to apoptosis (S1C Fig) [8,12,35] or assessed by CD62E (E-selectin) staining, a cell adhesion molecule expressed only on endothelial cells activated by cytokines (S1D Fig) [8,36,37]. 
Briefly, peripheral blood was collected and within 1 hour was centrifuged 10 minutes (800g, 4˚C) to prepare platelet rich plasma. Within 5 minutes, the supernatant was subsequently centrifuged 10 minutes (300g, 23˚C) to discard cells, 10 minutes (2.000g, 23˚C) to discard dead cells and finally 30 minutes (10.000g, 23˚C) to discard cell debris and to obtain platelet-poor plasma (PPP). EMP phenotype analysis was performed based on size and fluorescence (S1A Fig). Events less than 1 μm were identified in forward (size) and side (density) light scatter plots using size calibration microspheres (FluoSpheres1carboxylate-modified microspheres 1.0μm, yellow-green fluorescent (505/515), Invitrogen, Oregon, EEUU). MPs levels were assessed by comparison with calibrator beads (Perfect Count Microspheres, Cytognos, Salamanca, Spain) with a known concentration, using 2.000 events beads-PE as a stop time. Then 100.000 MPs/μL for fluorescent minus one (FMO) tubes and 500.000 MPs/μL for each phenotype were stained 45 minutes at room temperature using pre-conjugated anti-human monoclonal antibodies and isotype controls: anti-CD31-FITC (BD Pharmingen TM , San Jose, CA), anti-CD42b-PE (BD Pharmingen TM , San Jose, CA), anti-CD62E-APC (BD Pharmingen TM , San Jose, CA), anti-IgG1 k-PE isotype control and anti-IgG1k-APC isotype control both from (BD Pharmingen TM , San Jose, CA) for the activated phenotype. For the apoptotic phenotype, MPs without anti-CD62E-APC were additionally labeled using annexin V-APC (BD Pharmingen TM , San Jose, CA) in the presence of CaCl 2 (25mM) according to manufacturer's recommendation. The fluorescence minus one technique was employed to provide negative controls [38]. Samples were analyzed by two-or three-color fluorescence histograms as CD31 + CD42b -(S1B Fig Assessment of circulating progenitor cells. The number of circulating progenitor cells was evaluated by flow cytometry using antibodies against CD45 (pan-leukocyte marker), CD133 (sub-population of hematopoietic stem cells) and CD34 (mature and progenitor endothelial cells) as previously described [5]. In brief, circulating progenitor cells were isolated from fresh peripheral blood by Ficoll density gradient centrifugation, washed once with phosphate buffered saline (PBS) supplemented with 2% of fetal calf serum (FCS) and ressuspended at 2x10 6 cells for FMO tubes and at 4x10 6 cells for sample tubes. Circulating PCs were stained and analyzed by flow cytometry for phenotypic expression of surface markers using pre-conjugated anti-human monoclonal antibodies and isotype controls anti-CD45-FITC (BD Pharmingen TM , San Jose, CA), anti-CD34-PECy7 (eBiosciences, San Diego, CA), anti-CD133-PE (MACS Miltenyi Biotec, Bergisch Gladbach, Germany), anti-IgG1k-PECy7 isotype control (eBiosciences, San Diego, CA), anti-IgG1k-FITC isotype control (BD Pharmingen TM , San Jose, CA) and anti-IgG1k-PE isotype control (BD Pharmingen TM , San Jose, CA). The fluorescence minus one technique was employed to provide negative controls [38]. After 45 minutes of incubation, cells were washed, ressuspended into 500 μL of PBS + 2%FCS and proceeded to flow cytometry analysis. A total of 750.000 CD45+ events were run through the LRSFortessa (BD Bioscience, San Jose, CA). The data were analyzed using FACSDIVA (Tree Star, OR) following ISHAGE (International Society of Hematotherapy and Graft Engineering) gating strategy (S2 Fig) [39]. 
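The conversion of gated events to absolute concentrations with counting beads, described above for the EMP measurements, can be summarised by the following short Python example; the values are illustrative and the exact calculation recommended for the Perfect Count microspheres kit may differ in detail.

    def absolute_count_per_ul(target_events, bead_events, beads_per_ul):
        # Bead-based absolute counting: the concentration of the gated population
        # equals its event count relative to the counting-bead events, scaled by
        # the known bead concentration.
        return target_events / bead_events * beads_per_ul

    # Example: 8000 CD31+CD42b- events acquired alongside the 2000 bead events
    # used as the stop criterion, with a nominal 1000 beads/uL, gives 4000 EMPs/uL.
    emps_per_ul = absolute_count_per_ul(8000, 2000, 1000)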
Statistical analysis

In the non-adjusted analysis, data are expressed as mean ± SD for normally distributed data or as median with interquartile range for skewed distributions. Group comparisons were performed using one- or two-way ANOVA with post hoc pairwise comparisons using the Student-Newman-Keuls test for normally distributed variables or Dunn's test for non-normally distributed variables. Correlations between variables were analyzed with Pearson's or Spearman's coefficient depending on data distribution. In the adjusted statistical model, as EMP and PC counts were skewed in distribution, values were ln-transformed to improve normality. Linear regression models were used to adjust for potential confounders, which were selected based on biologic plausibility, such as age, gender, body mass index (BMI) and Framingham risk score [40]. A p-value <0.05 was considered statistically significant.

Population characteristics

Anthropometric, clinical and functional characteristics of subjects are shown in Table 1. Nonsmokers and current smokers were matched for age, gender and BMI. COPD patients were older and included a higher proportion of men. All healthy smokers and 26% of COPD patients were current smokers. The COPD group had a higher Framingham risk score compared with the other groups. Three patients with COPD (5%) were in spirometric GOLD stage 1, 17 (28%) in stage 2, 18 (29%) in stage 3, and 23 (38%) in stage 4. Patients with COPD had moderate to severe airflow obstruction, air trapping, reduction of CO diffusing capacity and mild to moderate hypoxemia (Table 1). Characteristics of subjects by gender are shown in S1 Table.

Circulating EMPs levels

In the non-adjusted model, the number of total circulating EMPs was significantly increased in COPD patients and current smokers compared with nonsmokers (Table 2 and Fig 1A). In the adjusted model, where levels of EMPs were corrected for age, gender, BMI and Framingham risk score, COPD patients also showed significantly increased levels of total circulating EMPs compared with nonsmokers. However, although adjusted levels of EMPs in current smokers were also higher than in nonsmokers, the differences did not reach statistical significance (Table 2). Similarly, levels of EMPs of apoptotic origin were increased in both COPD patients and smokers compared with nonsmokers in the non-adjusted analysis (Table 2 and S3A Fig). In the adjusted model, only patients with COPD showed a significantly higher number of these EMPs compared to nonsmokers. No statistical differences were found in the levels of EMPs derived from activated endothelial cells between groups, using either the non-adjusted or the adjusted models (Table 2).

Circulating PCs levels

Initial analysis of circulating levels of CD34+CD133+CD45+ PCs showed no significant differences between the different groups (Fig 1B). However, analysis of covariates revealed that gender had a significant effect on the levels of circulating PCs. Accordingly, the subsequent assessment of circulating PC levels was performed dividing all groups by gender. Among women, those with COPD showed significantly lower levels of circulating PCs than both nonsmokers and smokers in the non-adjusted analysis (Table 3 and S3C Fig). Similar findings were observed using CD34+CD133+ and CD34+CD45+ combinations of PCs (data not shown). In men, no significant differences were found in the number of PCs between groups (Table 3 and S3D Fig).
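The adjusted models referred to throughout these results follow the approach described under Statistical analysis; as a minimal illustration, the Python sketch below fits an ln-transformed EMP count against group with age, gender, BMI and Framingham score as covariates. The data frame and column names are hypothetical, standing in for the study dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data standing in for the 108 study subjects; variable names
    # are illustrative, not those of the actual study database.
    rng = np.random.default_rng(0)
    n = 108
    df = pd.DataFrame({
        "emp_count": rng.lognormal(5.0, 1.0, n),
        "group": rng.choice(["nonsmoker", "smoker", "copd"], n),
        "age": rng.normal(58.0, 8.0, n),
        "sex": rng.choice(["M", "F"], n),
        "bmi": rng.normal(27.0, 4.0, n),
        "framingham": rng.normal(8.0, 5.0, n),
    })

    # Skewed counts are ln-transformed and modelled with group as the exposure
    # and the pre-specified confounders as covariates.
    df["ln_emp"] = np.log(df["emp_count"])
    model = smf.ols(
        "ln_emp ~ C(group, Treatment(reference='nonsmoker')) + age + C(sex) + bmi + framingham",
        data=df,
    ).fit()
    print(model.summary())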
In the adjusted model, levels of circulating PCs were reduced in COPD compared to smokers in both men and women (Table 3). No statistical differences were found between the nonsmoker and smoker groups in the number of circulating PCs (Table 3). Interestingly, women showed reduced levels of PCs compared to men throughout all groups, primarily in COPD subjects (p = 0.031) (Table 3).

Table 3. Circulating progenitor cell counts: CD34+CD133+CD45+ (% of lymphomonocytes) by gender, non-adjusted model (mean, 95% CI) and adjusted model (predicted mean, 95% CI).

Assessment of EMPs/PCs ratio

We assessed the ratio of EMPs to PCs as a measure of the balance between endothelial damage and repair capacity. In our series, the EMPs/PCs ratio was increased in COPD patients compared to nonsmokers, with smokers in an intermediate position that did not differ significantly from either the nonsmoker or the COPD group (Fig 1C). Further analysis by gender revealed that women with COPD had a greater EMPs/PCs ratio compared to the control groups (S3E Fig), while no significant differences were found in men (S3F Fig).

Effect of cigarette smoking on circulating EMP and PC levels

Within the COPD group there were both current and ex-smokers. To assess whether smoking status could influence the levels of circulating EMPs and PCs, group comparisons were re-analysed according to smoking status. Our results show that levels of total and apoptotic circulating EMPs were increased in COPD compared to nonsmokers irrespective of smoking status (Fig 2A and S4A Fig). Regarding circulating PCs, COPD patients who were current smokers had fewer circulating PCs than the other groups (Fig 2B). When we divided subjects by gender, women with COPD showed a significant reduction of PC levels compared with the other groups, irrespective of smoking status (S4B Fig), whereas no differences were found in men (S4C Fig). The EMPs/PCs ratio was higher in COPD patients compared to nonsmokers irrespective of smoking status (Fig 2C). Similar results were obtained in women (S4D Fig), while in men only COPD patients who were current smokers showed a higher EMPs/PCs ratio than nonsmokers (S4E Fig).

Inflammatory and vascular markers

Compared with nonsmokers and smokers, COPD patients had higher plasma levels of hs-CRP, fibrinogen, HGF and sICAM. In the COPD group, VEGF was lower than in the nonsmokers and sTNFα higher than in the smokers (Table 4). No relationship was found between plasma levels of the different biomarkers and the severity of airflow obstruction.

Relationship of circulating EMPs and PCs with lung function and cardiovascular parameters

There was no significant correlation between the number of circulating EMPs and PCs, either in the whole population or when divided by gender (S5A-S5C Fig). Circulating EMPs, PCs and the EMPs/PCs ratio were not related to the levels of inflammatory or vascular biomarkers. In the COPD group, no differences in EMP and PC levels or the ratio were observed between those with and without PH. In women, the EMPs/PCs ratio was higher in those with CO diffusing capacity (DLco) below the median (Fig 3A), and in those with forced expiratory volume in the first second (FEV1) below the median (Fig 3B).

Discussion

In the present study we comprehensively assessed the balance between endothelial injury and repair capacity, which is critical for the maintenance of vascular homeostasis, in patients with COPD.
Our results show elevated levels of circulating EMPs, an indicator of endothelial cell damage, and reduced numbers of bone marrow-derived PCs able to maintain and repair the endothelium. The ratio between these two factors was significantly disturbed in COPD suggesting a phenotype of vascular incompetence in those patients [6,33]. Interestingly, while cigarette smoking was not related to this vascular incompetence, gender appears to play a key role on endothelial repair capacity. In our study the levels of total and apoptotic circulating EMPs were elevated in COPD patients, in agreement with previous observations [16][17][18]. Since COPD patients showed increased cardiovascular risk factors compared with the other groups, particularly systemic hypertension, and increased cardiovascular risk is associated with greater numbers of circulating EMPs [36], we analysed EMP levels by using an adjusted statistical model to compensate for potential confounding factors. EMP levels in COPD remained significantly higher in the adjusted model, suggesting that increased EMP levels are related to the disease itself rather than to other factors of cardiovascular risk. Interestingly, smokers without COPD also showed significantly higher levels of circulating EMPs than nonsmokers in the non-adjusted analysis but not in the adjusted model. Accordingly, differences in circulating EMPs between smokers and nonsmokers appear to be related to other influencers, i.e. cardiovascular risk factors, rather than to the smoking habit. In our study, the number of circulating EMPs was not related to indices of COPD severity, suggesting that endothelial damage might be associated with the presence of COPD rather than to its progression. Our results concur with those of Thomashow et al [16] who showed increased numbers of apoptotic EMPs in COPD patients that did not differ among a wide range of COPD severity. Circulating EMPs may act as signalling elements capable of producing endothelial damage in systemic vessels and explain, at least in part, the frequent association between COPD and cardiovascular disease. Bone marrow-derived circulating PCs are key to endothelial repair [21]. In a recent study, we showed that COPD patients present reduced numbers of circulating PCs [5]. In the present study, COPD patients showed numerically lower numbers of circulating PCs, but differences did not reach statistical significance. Unlike some previous studies in which there was no matching for age and/or gender between groups [24,28], we statistically corrected the data for confounder parameters. Our results revealed that gender had an effect on the levels of circulating PCs in all groups, highlighting the importance of gender matching in PCs studies. In the adjusted model, levels of circulating PCs were reduced in both men and women with COPD. Interestingly, women showed reduced levels of PCs compared to men throughout all groups, primarily in COPD patients. Recent evidence suggests gender-related differences in COPD patients [41]. Women develop more severe COPD at younger ages than men and with lower levels of cigarette smoke exposure [41]. Accordingly, it has been suggested that men and women may be phenotypically different in their response to cigarette smoke [42]. Progenitor cells are actively involved in cardiovascular homeostasis and provide in basal conditions a pool of circulating cells able to repair ongoing endothelial damage. 
We hypothesize that reduced levels of circulating PCs in women, as compared to men, may result in lower repair capacity and higher susceptibility to the effects of cigarette smoke. In recent years the concept of impaired endothelial cell survival has emerged as a relevant factor in the pathogenesis of emphysema [43]. Although we did not assess the severity of emphysema, in our study a higher Circulating endothelial microparticles and progenitor cells in COPD EMPs/PCs ratio was associated with lower DLco. Such impairment in the repair capacity of vascular endothelium might explain the more rapid progression of emphysema observed in women [44]. Levels of PCs fluctuate throughout lifetime in women and decline dramatically after menopause, coupled to hormonal mechanisms and endometrial vascular turnover [45]. It has been shown that premature menopause, due to natural or surgical causes, is associated with increased cardiovascular risk compared to non-premature menopause (around 50 years old), suggesting the presence of an estrogen protective effect accumulated during the women´s lifetime [46]. This protective effect on the cardiovascular system is thought to be carried out through the regulation of the endothelial function by the release of nitric oxide and its vasodilation effects [47] All women involved in this study were above 45 years old which could further explain the decline in circulating PCs. Other factors such as genetic predisposition, anatomic, social or hormonal differences might also explain the influence of gender in COPD development [42]. In our study, we did not find any relationship between the levels of circulating EMPs and PCs, suggesting that both markers may reflect different but complementary physiologic cellular mechanisms of action. However, both markers measured together may characterize a phenotype of vascular competence [6,33]. Our study assesses for the first time such vascular competence in COPD patients by the combined measurement of markers of vascular integrity and repair capacity in the same subjects. Our results show a phenotype of disturbed vascular competence in COPD patients, being smokers without COPD in an intermediate position between COPD and nonsmokers. Our findings in COPD are in line with those reported in hypercholesterolemia, psoriasis or Sjögren syndrome [32][33][34], suggesting a common background for the frequent development of cardiovascular disease in these different conditions. In our study, COPD patients with concomitant pulmonary hypertension did not show differences in EMPs or PCs count, or in the EMPs/PCs ratio, suggesting that such vascular incompetence is insufficient to produce pulmonary hypertension and that additional factors may concur for its development. Some studies have reported that cigarette smoke increases the levels of circulating MPs [12][13][14][15], suggesting early development of emphysema in healthy smokers. The effect of cigarette smoke on PCs is controversial [30,48]. We and others have reported similar numbers of circulating PCs in current smokers and nonsmokers [5,25]. In the present study, there were no differences between total and apoptotic circulating EMPs between former or current smokers. 
Since differences shown in the present study in COPD patients in circulating levels of EMPs and PCs and in the EMPs/PCs ratio were independent of the current smoking status, we consider that the impairment of vascular competence in COPD appears to be a consequence of the disease itself rather than to an effect of cigarette smoking. In our study we did not find any relationship between circulating EMPs and PCs and any of the plasmatic markers assessed. This is consistent with recent studies [24] and denotes the complex interactions between markers of systemic inflammation and vascular impairment. In the same line, most of the correlations of EMPs and PCs levels with conventional pulmonary function or demographic parameters were weak or absent. The study has some limitations. Firstly, as we sampled EMPs in peripheral venous blood we cannot be certain that the elevation of EMPs is from pulmonary origin. Secondly, we did not perform CT scans to assess pulmonary emphysema, therefore we were unable to relate the presence of circulating apoptotic EMPs with emphysematous destruction of lung parenchyma. And thirdly, there is no worldwide exclusive procedure to isolate and analyse circulating MPs and PCs from plasma. Conclusions COPD patients present disturbed vascular competence, as reflected by an imbalance between increased endothelial damage and reduced repair capacity, which might explain the frequent concurrence of cardiovascular disorders. Factors related to the disease itself and gender, rather than the smoking habit, may account for this imbalance. Supporting information S1 Fig. Gating strategy for endothelial microparticles (EMPs). A) MPs analysis based on size and fluorescence; B) Sample analyzed by two-color fluorescence histograms as CD31+-CD42b-(total EMPs); C) Sample analyzed by three-color fluorescence histograms as CD31+-CD42b-Annexin V+ (apoptotic EMPs) and D) Sample analyzed by three-color fluorescence histograms as CD31+CD42b-CD62E+ (activated EMPs). (EPS) Table. Clinical characteristics, lung function, cardiovascular and laboratory measurements by gender. Data are shown as mean± SD. COPD: chronic obstructive pulmonary disease; BMI: body mass index, GOLD: global initiative for chronic obstructive lung disease, FEV 1 : forced expiratory volume in the first second; FVC: forced vital capacity; TLC: total lung capacity; RV: residual volume; DLco: diffusing capacity of the lung for carbon monoxide; PaO 2 : arterial partial oxygen pressure; PaCO 2 : arterial partial carbon dioxide pressure; HDL: high-density lipoprotein; LDL: low-density lipoprotein and NA: not applicable. ‡The Framingham risk score can range from -6 to 19, with higher scores indicating greater cardiovascular risk. Ã p<0.05 compared with men nonsmokers. § p<0.05 compared with men smokers. # p<0.05 compared with women nonsmokers. ¥ p<0.05 compared with women smokers. o p<0.05 compared with men COPD. (DOC)
COMPARATIVE STUDY OF LIPOPROTEIN (a) AND LIPID PROFILE IN CHRONIC KIDNEY DISEASE PATIENTS WITH HEMODIALYSIS AND WITHOUT HEMODIALYSIS AIM: This study was under taken to compare the lipid profile pattern including Lipoprotein (a) or Lp (a) levels , Total cholesterol , Triglycerides, High density lipoprotein , Low density lipoprotein and other biochemical parameters like Bun, Creatinine, Fasting Plasma Glucose and Post prandial Plasma Glucose in Chronic kidney disease patients wtih Hemodialysis and without Hemodialysis. INTRODUCTION: Chronic kidney disease (CKD) is associated with early development of atherosclerosis and increased risk of cardiovascular morbidity and mortality which is the leading cause of death among these patients. Alterations in lipid metabolism resulting in abnormal lipoprotein composition and concentration (dyslipidemia) have been noticed in chronic renal insufficiency. Dyslipidemia is a primary risk factor in the development of a number of disease multitudes ranging from atherosclerosis to stroke. Dyslipidemia may be worsened by dialysis, especially continuous ambulatory peritoneal dialysis (CAPD). Dyslipidemia among HD patients negatively impacts cardiovascular profiles, which in turn influence the frequency and/or duration of hospitalizations. METHODS AND MATERIALS: The study was conducted among subjects attending Nephrology Department, Sri Ramachandra Medical College and Research Institute Chennai, Tamil Nadu as inpatients. The study was conducted over a period of three months. The study includes 40 CKD patients in the age group of 40 to 60 years. They were divided into 2 groups. Group 1 consisted of subjects with chronic kidney disease with hemodialysis. Group 2 consisted of subjects with chronic kidney disease without hemodialysis. Each group had 10 males and 10 females. RESULTS: The mean and standard deviation of biochemical parameters of the two groups was calculated. The biochemical parameters include Bun, Creatinine, Fasting Plasma Glucose, Post prandial Plasma Glucose, Lipoprotein (a), Total Cholesterol, Triglycerides, High density lipoprotein and Low density lipoprotein. Data evaluation was done using SPSS programme. The results were expressed as Mean with standard deviation. The P value < 0.05 was considered significant. CONCLUSION: In this study, there is no significant difference in all the biochemical parameters between chronic kidney disease patients with Hemodialysis and without Hemodialysis. MATERIALS AND METHODS: The study was conducted among subjects attending Nephrology Department, Sri Ramachandra Medical College and Research Institute Chennai, Tamil Nadu, as inpatients. The study period was from January to March 2005, for a period of three months. The study includes 40 CKD patients in the age group of 40 to 60 years. The CKD patients were classified based on GFR as per the NKFK/ DOQI guidelines. They were divided into 2 groups. Group 1 consisted of subjects with chronic kidney disease on hemodialysis. Group 2 consisted of subjects with chronic kidney disease without hemodialysis. Each group had 10 males and 10 females. Group 1 included 20 patients with CKD stage V (GFR < 15ml /min / 1.73 m 2 ) on hemodialysis. Group 2 included 20 patients in CKD stage I to IV (GFR of 15 to 59 ml /min / 1.73 m 2 ). Patients with Diabetes, obesity, liver disease and systemic illness were excluded from the study. 
STUDY DESIGN: As it is a comparative cross-sectional study, all patients underwent a full medical history that included age, family history of diabetes, hypertension, coronary artery disease, Chronic kidney disease, duration of Chronic kidney disease, type of dialysis, smoking and alcohol, Drug history and treatment history for any other disease was collected through a standard questionnaire. Blood samples were collected after 12 hours of fasting in the vacutainers for estimation of glucose, lipoprotein (a), lipid profile, Bun and creatinine. Blood samples were collected in the morning after 12 hours of overnight fasting. The samples were separated by centrifugation at 2400 rpm. BIOCHEMICAL METHODS: Lipoprotein (a)levels was determined by Agglutination reaction using Latex daiichi kit in Konelab 60 autoanalyser .Serum total cholesterol and triglycerides and HDL were estimated using Randox kits by enzymatic endpoint analysis. Serum LDL-cholesterol levels was calculated using Freidwald's formula: LDL cholesterol= Total cholesterol-HDL cholesterol-TGL/5. GFR was calculated using Modification of Diet in Renal Disease (MDRD) formula. Glucose was analyzed by enzymatic end point method in konelab 60 automated systems using commercially available kit by Accurex. Bun and creatinine was analyzed by Endpoint method in Kone lab 60 automated systems using commercially available kit by Trace. STATISTICAL ANALYSIS: Data evaluation was done using SPSS programme. The results were expressed as Mean with standard deviation. The P value < 0.05 was considered significant. RESULTS: A total number of 40 subjects were recruited for the study .Among them 20 were CKD patients with Hemodialysis and 20 were CKD patients without Hemodialysis. Data evaluation was done using SPSS programme. The mean and standard deviation of all the biochemical parameters were calculated and their results shown in Table 1. The P value was used to compare the different groups. The P value < 0.05 was considered significant. As per In this study, only a small number of Chronic kidney disease patients were included, out of which 20 were wtih Hemodialysis and the rest without Hemodialysis. According to Gruber et al, the type of therapy for renal failure does not seem to influence the elevation in Lipoprotein (a) concentration. 12 Lp (a) is an LDL-like particle having an apolipoprotein (a) which is attached to apolipoprotein B-100 by a disulfide linkage. It is synthesized in the liver, but its sites of catabolism are not clear. The increase in Lp (a) levels in CKD patients could be due to its increased synthesis by the liver or due to its decreased catabolism in kidneys. 13 A significant decrease in Lp (a) concentrations between the ascending aorta and renal vein 14 and the identification of apo (a) fragments in urine 15 indicate kidneys' active participation in the degradation of Lp (a). Recent studies have also shown a strong genetic basis for the increase in serum Lp(a) levels in chronic kidney disease. In chronic kidney disease, individuals with low molecular weight (LMW) apo(a) isoforms have been shown to have high serum Lp (a) levels and those with high molecular weight HMW apo (a) isoforms have low levels. Sechi and coworkers studied 160 patients with early impairment of renal function. They found an increase in plasma Lp (a) levels in comparison with healthy controls. In another study, Sechi and colleagues evaluated Lp (a) concentrations and apo (a) isoforms in a group of patients with moderate renal failure. 
They found an increased plasma Lp (a) concentrations in patients and a similar apo (a) isoform distribution between patients with renal disease and controls. 15 A consistent moderate elevation in plasma lipoprotein (a) concentrations has been observed in large case control studies in hemodialysis patients. Increased plasma free apolipoprotein (a) fragments have been observed in hemodialysis patients, but appear to account for only a small proportion of increased lipoprotein (a) observed in such patients. The frequency of apolipoprotein (a) isoforms in hemodialysis patients is comparable to that in healthy controls. An increase in lipoprotein (a) plasma levels has been identified specifically in hemodialysis patients exhibiting high molecular weight apolipoprotein (a) isoforms, but this has not been confirmed by others in different ethnic groups. It was found that due to unknown reasons, inflammation affected only high molecular weight apolipoprotein (a) isoforms in hemodialysis patients. 16 In hemodialysis patients, by in vivo turnover studies using stable isotope techniques, Frischmann KE et al 17 have elucidated that the fractional catabolic rate of the Apolipoprotein (a) was significantly reduced resulting in its longer residence time in plasma (9 days) compared to the controls (4.4 days). This decreased clearance could be the result of loss in kidney function, in hemodialysis patients. Malnutrition and inflammation have also been associated with high plasma Lp (a) levels in hemodialysis patients. 18 However, it still remains to be clarified through which pathophysiological mechanisms Lp (a) might contribute to the progression of glomerular disease .The underlying mechanisms responsible for the elevation of Lp (a) plasma concentrations in patients with renal insufficiency are not known. Although plasma levels of Lp (a) in healthy individuals are predominantly genetically determined, the alterations seen in conjunction with renal disorders such as advanced renal insufficiency, i.e. ESRD, and nephrotic syndrome are not primarily due to genetic factors. 12 Plasma triglycerides are predominantly present in 2 types of lipoproteins namely the chylomicrons and VLDL. Hypertriglyceridemia may be due to high production rate of these lipoproteins and a low catabolic rate 19 . Renal insufficiency can cause insulin resistance which in turn promotes hepatic VLDL production and hence elevated triglyceride levels. 20 But the predominant mechanism for increased triglyceride levels in pre dialysis patients is that of delayed catabolism and hence impaired clearance. The reduced catabolism is due to the decreased activity of 2 endothelium-associated lipoprotein lipases -hepatic lipase and lipoprotein lipase (LPL). There may be down regulation of the LPL and hepatic lipase enzyme gene expressions, contributed in part by secondary hyperparathyroidism. 21 The decrease in lipoprotein lipase activity may be due to the increase in plasma apo C-III levels resulting in a decrease in apo C-II/ apo C-III ratio. Apo C-II is an activator of lipoprotein lipase while apo C-III is a potent inhibitor of LPL and so the increase in apo C-III levels results in inactivation of lipoprotein lipase resulting in reduced triglyceride lipolysis and hence increased triglyceride levels. Another inhibitor of lipoprotein lipase has been identified as pre β -HDL, whose concentration is found to be elevated in CKD patients. 
22 Though many studies have shown hypertriglyceridemia in hemodialysis patients, we did not observe any significant change in this group of patients. This may either be due to the patients receiving carnitine injections, multivitamin supplementations or HMG-CoA inhibitors. 23,24 These factors could have marginally prevented the rise of serum triglyceride levels in hemodialysis patients. Many authors have noticed that in hemodialysis patients low serum cholesterol is associated with increased mortality. 25,26 It appears that many dialysis patients have a condition identified as malnutrition inflammation complex syndrome (MICS), which is a combination of protein-energy malnutrition and inflammation and is related to poor dialysis outcomes 27 . This MICS leads to a low body mass index, hypocholesterolemia, hypocreatininemia, and hypohomocysteinemia, increasing the risk of death. 29 The hypocholesterolemia is a strong mortality risk factor in dialysis patients and a marker of poor nutritional status. 28 The mechanism by which systemic inflammation and malnutrition may explain this hypocholesterolemia is unclear. A cytokine mediated acute-phase reaction to acute or chronic inflammation may partially account for the hypocholesterolemia (cholesterol-negative acute phase reactant), in dialysis patients by increasing catabolism and decreasing appetite. 30 The lowering of LDL in CKD may be due to the same above said reason inflammation/ malnutrition or due to reduced production of LDL resulting in its near normal levels. 31 The inflammation may change the lipoprotein structure and function by oxidatively modifying low density lipoprotein. In CKD patients there is a relative increase in IDL (intermediate density lipoproteins) and small dense LDL (sdLDL) particles which undergo further modifications like glycation, oxidation and carbamylation, making them highly atherogenic. 32 These modified lipoproteins are in turn taken up by the scavenger receptors on macrophages and vascular smooth muscle cells, which are increased in uremia, favoring the development of atherosclerotic plaques. . Though many studies have shown a reduction in HDL level with progression of renal disease, we could not find any changes. 33 STRENGTH AND LIMITATION OF THE STUDY: The main strength of this study is to reduce the early development of atherosclerosis and increased risk of cardiovascular morbidity and mortality which is the leading cause of death among CKD patients with and without hemodialysis. The number of patients recruited for the study was small which the only limitation of the study. CONCLUSION: Cardiovascular disease (CVD) is a major cause of mortality in patients with mild to moderate chronic kidney disease and end stage renal disease (ESRD). In our study there was no significant difference in Lipoprotein (a) and Lipid profile between Chronic kidney disease patients on Hemodialysis compared to chronic kidney disease patients without hemodialysis. As a smaller group was included in this study, this study can be further extended to a larger group to confirm whether early detection and treatment (diet /drug therapy) of this dyslipidemia is quite promising, in the prevention of adverse clinical outcomes in CKD patients with hemodialysis and CKD patients without hemodialysis. Further research can also be done to see the correlation of Lipoprotein (a) and the rate of progression of renal disease and the use of apolipoprotein (a) isoforms as a predictor of cardiovascular disease in hemodialysis patients.
Numerical Simulation of a Supersonic Ejector for Vacuum Generation with Explicit and Implicit Solver in Openfoam Supersonic ejectors are used extensively in all kinds of applications: compression of refrigerants in cooling systems, pumping of volatile fluids, or vacuum generation. In vacuum generation, also known as zero-secondary flow, the ejector has a transient behaviour. In this paper, a numerical and experimental study of a supersonic compressible air nozzle is performed in order to investigate and simulate its behaviour. The CFD toolbox OpenFOAM 6 was used, with two density-based solvers: the explicit solver rhoCentralFoam, which implements Kurganov central-upwind schemes, and the implicit solver HiSA, which implements the AUSM+up upwind scheme. The behaviour of the transient evacuation ranges between an adiabatic polytropic exponent at the beginning of the process and an isothermal one at the end. A model for the computation of the transient polytropic exponent is proposed. During the evacuation, two regimes are encountered in the second nozzle. In the supercritic regime, the secondary flow is choked and sonic flow is reached. In the subcritic regime, the secondary flow is subsonic. The final agreement with the two different solvers is good, although the simulations tend to slightly overestimate the flow rate in the region of large suction pressures. Introduction Supersonic ejectors use a primary supersonic flow, obtained from a pressure source, to generate a secondary flow that mixes with the former in a mixing chamber. Both mixed flows are then expelled to the atmosphere or used in a feedback cycle. A scheme of these flows is depicted in Figure 1. Vacuum ejectors are extensively used for a wide number of applications: compression of refrigerants in cooling systems, pumping of volatile fluids, or vacuum generation applications. Vacuum generators presently have many applications because of the speed with which they produce vacuum. Vacuum can also be used to manipulate objects. The mechanism is simple, since it only needs a pressure source, an ejector and a vacuum cup. Although high vacuum levels, as large as 99%, can be reached, for object manipulation in industry vacuum ejectors can easily create what is called a "useful vacuum", which is a vacuum level of about 80%. In recent years, a significant number of investigations have been performed. José Sierra-Pallares [1] and García del Valle [2] performed a shape optimization of a rectangular ejector for refrigeration, with R134a, using Ansys, and they increased the entrainment ratio to a value of 16%. Lambert et al. [3][4][5] analysed, using OpenFOAM and experimental data, the transport phenomena in supersonic ejectors; they found that the net gain in the secondary flow is maximum when the back pressure is close to its critical value, and they explained the entrainment ratio limitation in supersonic ejectors with the compound-choking theory. They used rhoCentralFoam, changing the outlet pressure instead of the secondary inlet pressure. Expósito Carrillo et al. [6] optimized a single-phase ejector geometry by means of a multi-objective evolutionary algorithm and a CFD model with Ansys, and they obtained a potential increase of 55% and 110% in the back pressure and in the entrainment ratio for air. They used another gas, CO2, as well as air. Petrovic et al. [7] evaluated the performance of 1-D models to predict variable-area supersonic gas ejector performance, although they used different gases. Arun Kumar et al.
[8] showed the physics of vacuum generation in ejectors operating at zero-secondary flow, such as the formation of bubbles in the secondary flow that obstruct air entrainment. Zhang et al. [9] studied the effect of friction on the ejector and found that it can indeed affect its performance, although the most sensitive parts are the constant-area section and the diffuser, which were considered of low relevance for the present work. Chen et al. [10] developed a theoretical model that works over all operating modes, although they worked with different gases. Croquer et al. [11] compared an ejector designed with a thermodynamic model with another designed by CFD. They found that both are in good agreement at the design conditions, with about a 2% difference. Jafarian et al. [12] studied, both numerically and experimentally, the transient phenomenon when the air is being evacuated. Although they worked mainly on the motive flow profile, they found that as the slope of the motive flow pressure profile increases, the suction increases by 40%. Kong et al. [13] studied, analytically and with CFD, the vacuum performance of a chevron ejector and found that this kind of nozzle helps drive the secondary stream into the mixing chamber. Mazzelli et al. [14] also performed numerical and experimental analyses in order to check the effectiveness of different turbulence models. They found that k − ω SST was the most suitable turbulence model in both 2D and 3D simulations. The objective of the present work is to numerically simulate an ejector and study its transient flow regimes. It is organised as follows. The next section describes both the experimental and numerical methods. Section 3 presents the main results obtained, in Section 4 these results are discussed, and finally, conclusions are drawn. Experimental Method Two types of experiment were made. In the first experiment, the secondary flow rate moved by the ejector from a vessel in steady state flow was measured, for different secondary pressures. It was performed in a test bench with a 0.5 m³ vessel. The vacuum pressure in the vessel (P3 in Figure 2) was measured with open-tube mercury manometers. In Figure 2, the experimental setup used for the present paper is depicted and, in Table 1, more details about the valves and gauges are given. The primary flow rate was obtained with a flange orifice, according to the ISO 5167 standard [15], $\dot{m}_p = \frac{C}{\sqrt{1-\beta^4}}\,\varepsilon\, A \sqrt{2\,\Delta p\,\rho_1}$, where A is the orifice surface, β is the ratio between the orifice and pipe diameters, ρ1 is the air density upstream, Δp is the pressure difference measured, ε is the expansion factor and C is the discharge coefficient for the flange orifice. The calculations were performed with the python package fluids [16]. The secondary flow is measured with an inlet nozzle, $\dot{m}_s = C_n A_n \sqrt{2\,\rho_n P_n}$, where An is the nozzle surface, ρn is the air density at atmospheric pressure and temperature, Pn is the pressure measured and Cn = 0.9975 is the discharge coefficient for the nozzle. Two flange orifice diameters and two different fluids, in open-tube manometers, were used to measure the pressure difference Δp. The secondary pressure in the vessel was normalised with the atmospheric pressure, $p_s^* = P_s/P_{atm}$. The first flange, with a diameter of 10.30 mm and water, was used to obtain Δp from p*s = 1 to p*s = 0.4. The second, with a diameter of 5.15 mm, was used from p*s = 0.4 to p*s = 0.3, with water. From p*s = 0.3 to the maximum vacuum pressure, the small flange was used with alcohol (ρ = 791.8 kg/m³) because of its higher precision.
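As an illustration of the two mass-flow relations above, the sketch below evaluates them directly with NumPy rather than through the fluids package used by the authors; all numerical inputs are placeholders, not the test-bench values.

```python
import numpy as np


def orifice_mass_flow(C, eps, A, beta, rho1, dp):
    """ISO 5167 flange-orifice mass flow: C/sqrt(1-beta^4) * eps * A * sqrt(2*dp*rho1)."""
    return C / np.sqrt(1.0 - beta**4) * eps * A * np.sqrt(2.0 * dp * rho1)


def nozzle_mass_flow(Cn, An, rho_n, Pn):
    """Inlet-nozzle mass flow: Cn * An * sqrt(2*rho_n*Pn)."""
    return Cn * An * np.sqrt(2.0 * rho_n * Pn)


# Placeholder example: 10.30 mm orifice, assumed upstream density and pressure drop.
d_orifice = 10.30e-3                       # orifice diameter, m
A = np.pi / 4.0 * d_orifice**2             # orifice area, m^2
mdot_p = orifice_mass_flow(C=0.6, eps=0.99, A=A, beta=0.5, rho1=7.0, dp=2000.0)

# Placeholder inlet-nozzle example with the quoted discharge coefficient.
mdot_s = nozzle_mass_flow(Cn=0.9975, An=np.pi / 4.0 * 0.02**2, rho_n=1.2, Pn=500.0)
print(mdot_p, mdot_s)
```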
The primary pressure source, P in Figure 2, was kept steady at a value of 6 bar (relative) for all the measurements. The ball valve, V2 in the scheme, controls the secondary flow rate when the system reaches a steady state. The second experiment consisted of the transient measurement of the pressure decay in the vessel versus time. This experiment was performed with two different vessel volumes, 0.5 m³ and 0.1 m³. For the 0.1 m³ vessel, a vacuum sensor by AR-Vacuum [17], with an error of 200 Pa, was used. The Solver The present work used the OpenFOAM [18] simulation toolbox, which solves the equations for unsteady compressible flow of mass, momentum and energy (Equations (4)-(6)), along with the equation for perfect gases, $p = \rho R T$ (7), where $E = c_v T + \tfrac{1}{2}|\mathbf{u}|^2$, $c_v$ is the specific heat at constant volume and R is the gas constant. Equations (4)-(6) can be expressed in the compact form $\partial \mathbf{W}/\partial t = -\nabla \cdot \mathbf{F}(\mathbf{W})$ (8), where $\mathbf{W}$ is the vector of conserved variables and $\mathbf{F}(\mathbf{W})$ the corresponding flux. An explicit and an implicit solver were used for the sake of comparison. The explicit solver, available in the standard distribution of OpenFOAM, rhoCentralFoam, is a transient density-based solver which uses the Kurganov and Tadmor central-upwind schemes for face interpolation [19,20]. A first-order Euler explicit time scheme was used, and the semi-discrete form of the set of equations reads $V\,\mathrm{d}\mathbf{W}/\mathrm{d}t = \mathbf{R}(\mathbf{W})$, where V is the cell volume and $\mathbf{R}(\mathbf{W})$ is the right side of Equation (8) after face integration. This integration is usually performed with some limiter, like the van Leer limiter in the present case. For details about the computational algorithm, the reader is referred to Greenshields et al. [19]. The CFL number was set to 0.5 for all the operating conditions, and the steady-state solution was established by monitoring the flow rate at the outlet. The open source transient implicit solver is HiSA, which implements the AUSM+up upwind scheme for face fluxes [21,22]. This solver permits solving unsteady flows with significantly larger Courant numbers than explicit solvers. Moreover, it was also reported that for high Mach numbers, implicit solvers are much more efficient and accurate than explicit solvers [20,23,24]. In the present case, however, the steady-state time scheme was used, so that the problem reduces to the set of non-linear equations $\mathbf{R}(\mathbf{W}) = 0$. These equations are solved with the Generalized Minimal Residual (GMRES) algorithm [22,25] with Lower-Upper Symmetric Gauss-Seidel (LU-SGS) preconditioning [26]. For the turbulence model we used k-ω SST, since it was reported to give better results for this kind of simulation in comparison with k-ε, realizable k-ε and the stress-ω Reynolds stress model [14]. The Mesh A 2D axisymmetric mesh was used for the sake of simplicity and computation time. The general view of the mesh is shown in Figure 3. The details of the mixing region of the supply flow and the secondary flow are shown in Figure 4. The mesh is composed of structured blocks, generated with blockMesh with the help of the open source Python script ofblockmeshdicthelper [27], which is useful when the geometry is complex but the meshing process has to be fast and accurate [28]. A grid convergence study was performed in order to assess the quality of the mesh. The study mesh is 2D axisymmetric with 20,300 cells. A coarser mesh, with 13,000 cells, and a finer mesh, with 29,250 cells, were tested. The study was performed following the recommendations of Celik et al. [29] and, hence, the variation in cell size is about 30% and the refinement was structured.
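A minimal sketch of the mesh-refinement analysis recommended by Celik et al., assuming the three cell counts quoted above and placeholder values of the monitored quantity (not the data of Table 2):

```python
import numpy as np


def gci_fine(n_cells, phi, dim=2, Fs=1.25):
    """Grid Convergence Index of the fine mesh following Celik et al.

    n_cells: cell counts ordered fine, medium, coarse.
    phi:     monitored quantity on the same three meshes.
    """
    n1, n2, n3 = n_cells
    f1, f2, f3 = phi
    h = [(1.0 / n) ** (1.0 / dim) for n in (n1, n2, n3)]   # representative cell sizes
    r21, r32 = h[1] / h[0], h[2] / h[1]                     # refinement ratios
    e21, e32 = f2 - f1, f3 - f2
    s = np.sign(e32 / e21)

    # Fixed-point iteration for the apparent order p.
    p = 2.0
    for _ in range(50):
        q = np.log((r21**p - s) / (r32**p - s))
        p = abs(np.log(abs(e32 / e21)) + q) / np.log(r21)

    ea21 = abs((f1 - f2) / f1)                               # approximate relative error
    return Fs * ea21 / (r21**p - 1.0), p


# Placeholder values of the secondary flow rate (kg/s), not the Table 2 data.
gci, p = gci_fine([29250, 20300, 13000], [94.2e-3, 94.6e-3, 95.8e-3])
print(f"GCI = {100 * gci:.2f} %, apparent order p = {p:.2f}")
```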
Three results were considered for the grid convergence study: the primary and secondary flow rates for atmospheric inlet pressure (and, thus, maximum secondary flow rate), and the minimum pressure at the secondary inlet. Data are presented in Table 2. The grid convergence analysis gives a relative error of 0.4% for the secondary flow rate and, hence, the result is $\dot{m}_s = (94.2 \pm 0.4) \times 10^{-3}$ kg/s. The relative error for all the numerical results for the secondary flow rate will be considered the same in the rest of the work. Boundary Conditions The boundary condition at the primary inlet is 6 bar (relative) of total pressure, with a Neumann condition for velocity, for both the HiSA and rhoCentralFoam solvers. At the secondary inlet the total pressure is also prescribed, as a function of the operating condition, and the flow rate is given by the simulation. At the outlet the standard atmospheric pressure is set but, in order to avoid reflections, the waveTransmissive [30,31] boundary condition was used for the rhoCentralFoam solver. For the implicit HiSA solver, the characteristic far-field boundary condition [32] was used at the outlet. The temperature was set to the standard atmospheric total temperature of 293 K at all the boundaries. Simulations Performance The performance of both solvers was compared. All the simulations were executed on a 64-core AMD Opteron cluster, with 64 GB of RAM, using 8 cores in each run, with the same domain decomposition for parallel processing. As an illustration, here we show the performance of the simulations for the case of atmospheric pressure at the outlet boundary, when the secondary flow rate is maximum. Simulations were considered to be converged when the secondary flow rate reaches a steady value. Figure 5 shows the convergence history for both solvers. The transient explicit rhoCentralFoam converges after about 0.003 s of flow time and the steady-state implicit HiSA needs about 15,000 iterations. The run time for the first case (rhoCentralFoam) was about two and a half hours, and for the second case (HiSA) about half an hour, that is, five times smaller. Experimental Data Evacuation of air from the vessel is a process that can be considered to follow a general polytropic relationship, $p/\rho^{k} = p_0/\rho_0^{k}$ (15), where p0 and ρ0 are pressure and density reference values. The continuity equation, combined with the perfect gas (7) and polytropic (15) expressions, leads to $\dot{m}_s = \frac{V}{kRT}\,\frac{dp_s}{dt}$ (16), where V is the volume of the vessel. The secondary flow rate is normalised with the primary mass flow rate, giving the entrainment ratio $\mu = \dot{m}_s/\dot{m}_p$ (17). Results from the second experiment are shown in Figure 6. The pressure decay is presented versus time, normalised with the characteristic time τ (Equation (18)). The secondary pressure could be estimated from Equation (16) assuming a known value of k. If the limit values related to isothermal (k = 1) and adiabatic (k = 1.4) processes are considered, the result is shown in Figure 7, compared with the actual values of the secondary mass flow rate measured in the laboratory. It is clear from Figure 7 that the air evacuation exhibits an adiabatic behaviour in the first phase of the process, tending gradually to isothermal for small secondary flow rates at the end of the test. This behaviour agrees with the intuitive argument that heat transfer is less efficient for a high flow rate and, thus, the process is adiabatic; on the contrary, the temperature has more time to equilibrate for a small flow rate and the process is more likely isothermal.
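A minimal sketch of Equation (16) applied to a pressure-decay record, evaluated for the isothermal and adiabatic bounds of k; the decay curve below is synthetic and only stands in for the measurements of Figure 6.

```python
import numpy as np

V = 0.5          # vessel volume, m^3
R = 287.0        # specific gas constant of air, J/(kg K)
T = 293.0        # assumed constant temperature, K

# Synthetic pressure decay (Pa) sampled every second; stands in for Figure 6 data.
t = np.arange(0.0, 120.0, 1.0)
p_s = 101325.0 * (0.2 + 0.8 * np.exp(-t / 30.0))

dp_dt = np.gradient(p_s, t)

# Secondary mass flow rate from Eq. (16) for the two limiting exponents.
for k in (1.0, 1.4):                      # isothermal and adiabatic bounds
    mdot_s = -V / (k * R * T) * dp_dt     # positive while the pressure decays
    print(k, mdot_s[:3])
```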
The actual value of the polytropic coefficient k can be experimentally estimated from the computation of the pressure decay rate and the secondary flow rate for each value of the pressure. Secondary mass flow rate is obtained by experimental tests for the large vessel volume. Computation of k, with Equation (19), is plotted in Figure 8, for both vessel volumes. This computation requires, besides the time derivative of the pressure, the value of the secondary flow rate for each pressure. This value is obtained by the interpolation of the flow rate in function of the vacuum pressure. This function can be considered to be a piece wise linear composition of two segments [33] where the breakpoint is the change of regime from the supercritic, with choked secondary flow, and the subcritic. The computation of the breakpoint,ṁ p = 0.00075 kg/s and p * s = 0.226, was done with the python library pwlf [34], which uses global optimization for the location of the point, as illustrated in Figure 9. The polytropic coefficient starts with the expected value of 1.4, but it quickly decreases, almost linearly, until it reaches the isothermal value of k = 1 for t ≈ τ. It seems to stabilise in this value up to t ≈ 2τ. After that, the behaviour differs depending of the vessel's volume. Since the measurements with the larger volume were higher and they were acquired in a more accurate way, it is assumed that from t ≈ 2τ the flow rate measurements with the small volume can be discarded. Then, the value of k decreases slightly down to 0.9 dropping quickly to 0.6 at the very final stage of the process. This roughly agrees with results by [35] with air discharging from a pressurised vessel. Numerical Simulations The numerically simulated entrainment ratio is shown in Figure 10 for explicit (RCF) and implicit (HiSA) solvers. In addition, experimental measurements are plotted for the sake of comparison. Figure 10 shows the ejector turns to be zero-secondary flow [8] reach maximum vacuum pressure of p * s = 0.2 on the experimental results. However simulation results differ in vacuum pressure and entrainment ratio at µ below 0.3. The maximum vacuum pressure obtained with HiSA is p * s = 0.217 (see Figures 11 and 12). Instead, Figure 13 shows an entrainment ratio µ = 0.017 at p * s = 0.2 for RCF. This entrainment ratio, for the numerical results of HiSA, is found at p * s = 0.29. Figures 11 and 12 show contours of the Mach number for the secondary flow obtained from the simulations with the implicit solver, HiSA. Normalised vacuum pressure ranges from p * s = 1, corresponding to the vessel at atmospheric pressure to the minimum secondary pressure, for zero secondary flow rate, p * s = 0.217. In the first subfigure of Figure 11, for p * s = 1 secondary flow accelerates until Ma = 1.5 at the end of the second nozzle. Downstream there is a shock wave (not shown in Figure) before air exits to atmospheric pressure. At the pressure of p * s = 0.8, the Ma = 1.5 in the entrainment ratio is not located at the end of the nozzle, like in the previous subfigure, but advanced in the nozzle. Thus, downstream there is a shock wave (not shown in Figure). In this case the shock wave is located more advanced. At the pressure of p * s = 0.6, the Ma = 1 is located in the middle of the secondary nozzle and at p * s = 0.4, the Ma = 1 is located advanced the middle, close to the entrance of the nozzle. The first subfigure of Figure 12 is the last to reach Ma = 1 and it is situated at the entrance of the second nozzle. 
At the pressure of p*s = 0.275, the flow does not reach the sonic condition (Ma = 1). Thus, the flow is not choked. In the last subfigure, at the pressure of p*s = 0.217, there is no secondary flow rate due to the expansion of the primary flow. In the experimental results, the critical point is at p*s = 0.225 and the flow rate is ṁs = 0.00075 kg/s. The critical point in the HiSA numerical results was spotted qualitatively, although it could have appeared at a slightly higher p*s. However, the HiSA numerical result for the critical point is approximately 22% higher than the experimental one, p*s = 0.275, and the flow rate is 2.5 times larger, ṁs = 0.0019 kg/s. Finally, the maximum vacuum pressure reached in the experimental results is p*s = 0.19. The maximum vacuum pressure in the HiSA numerical results is 15% higher, p*s = 0.22. Discussion A good agreement was found using the two density-based solvers: the explicit solver RCF and the implicit solver HiSA. Both solvers overestimate the flow rate at the same suction pressure for small vacuum levels (large values of the suction pressure at the secondary inlet), while they underestimate the flow rate in the low-pressure region. They also tend to slightly overestimate the value of the maximum suction pressure (zero-flow performance), especially the explicit solver, for small values of µ. The explicit RCF is not able to correctly resolve the flow rate for small values of the pressure in the vessel. On the contrary, the implicit solver HiSA, although it gives slightly high values of µ for large pressures in the vessel, gives a very good result for small values of the pressure. Whereas the implicit HiSA reaches p*s = 0.217 as a maximum vacuum level, the explicit RCF still shows the secondary flow choked at a vacuum level of p*s = 0.2. Figure 13 shows that the secondary flow is choked. According to the experimental results, the maximum vacuum level reached by the ejector is p*s = 0.19. Thus, RCF is not able to calculate the primary flow expansion after the shock wave. Two regimes of secondary flow rate are found in the second nozzle. In the supercritic regime the secondary flow is choked and the sonic condition (Ma = 1) is reached at different points of the nozzle, according to the pressure at the secondary inlet; the lower the pressure left in the vessel, the closer to the entrance of the nozzle the sonic condition is found. In the subcritic regime, the secondary flow is subsonic. Finally, the secondary flow gets stuck, creating a big bubble, and the device becomes a zero-secondary-flow ejector. Figure 12 shows the secondary flow retained before the mixing chamber, due to the expansion of the primary flow. A breakpoint between the regimes was spotted in the experimental results, and confirmed by the numerical results from the simulation. Figure 9 shows that, at the vacuum pressures p*s = [1, 0.8, 0.6, 0.4, 0.3], the entrainment ratio is in the critical regime. The presence of Ma = 1 means the flow is choked. Figure 12 shows that, at the vacuum pressures p*s = [0.275, 0.25], the regime changes to subcritical. The air evacuation from the vessel is a transitory phenomenon and depends on the polytropic coefficient. When the evacuation starts, the flow rate is maximum and heat transport is negligible. The process is practically adiabatic. Over a time of the order of the characteristic time (Equation (18)), the polytropic coefficient drops to the value corresponding to an isothermal flow. This progression of the polytropic coefficient is roughly linear.
From τ ≈ 2 to τ ≈ 3, the flow seems to behave isothermally. Experimental measurements suggest that for τ ≳ 3 the polytropic coefficient adopts a value of k ≈ 0.9. Conclusions In this study, numerical simulations with an explicit and an implicit density-based solver were used to investigate the transient phenomenon in a vacuum ejector. Validation of the numerical results was performed with a double check of laboratory experiments using two vessel volumes. The main measured parameters were the flow rate and the suction pressure in the vessel. The numerical simulations offered results in compliance with the experiments at an acceptable level of correspondence for the steady state experiments. Nevertheless, the implicit solver presents better agreement with the experimental results than the explicit solver in the low flow rate regime. The minimum suction pressure obtained in the laboratory is about 19% of the atmospheric pressure. The implicit solver presents a value of 22%, and the explicit solver tends to a much lower value. It is clear that, for the simulation of the maximum vacuum level of an ejector geometry, the implicit solver gives a better performance. For high values of the entrainment ratio both solvers agree with the experimental results. At a secondary pressure of 19% of the atmospheric value, the primary flow expands in the mixing chamber and, as a consequence of this expansion, obstructs the secondary flow, and the vacuum ejector becomes a zero-secondary-flow ejector. This secondary pressure corresponds to the ultimate vacuum capacity of the ejector. Two different regimes of flow rate profiles were found, numerically and experimentally, in the ejector performance: the supercritic and the subcritic. In the first regime the secondary flow is choked in the second nozzle of the ejector, whereas in the second regime there is no secondary choked flow. The transient pressure decay was measured and a model for the transient behaviour of the polytropic coefficient k has been proposed. It was observed that k decreases linearly from its adiabatic value k = 1.4 to the isothermal value k = 1 within the first characteristic time τ, keeps this value for about another τ, and finally drops to about k = 0.9 towards the end of the process, at τ ≈ 3. This work will be used in future investigations to infer the transient behaviour of an ejector from the performance curve of vacuum pressure versus flow rate obtained from steady state simulations. The solver chosen, HiSA, will be used for future research.
NMR spectroscopy of single sub-nL ova with inductive ultra-compact single-chip probes Nuclear magnetic resonance (NMR) spectroscopy enables non-invasive chemical studies of intact living matter. However, the use of NMR at the volume scale typical of microorganisms is hindered by sensitivity limitations, and experiments on single intact organisms have so far been limited to entities having volumes larger than 5 nL. Here we show NMR spectroscopy experiments conducted on single intact ova of 0.1 and 0.5 nL (i.e. 10 to 50 times smaller than previously achieved), thereby reaching the relevant volume scale where life development begins for a broad variety of organisms, humans included. Performing experiments with inductive ultra-compact (1 mm2) single-chip NMR probes, consisting of a low noise transceiver and a multilayer 150 μm planar microcoil, we demonstrate that the achieved limit of detection (about 5 pmol of 1H nuclei) is sufficient to detect endogenous compounds. Our findings suggest that single-chip probes are promising candidates to enable NMR-based study and selection of microscopic entities at biologically relevant volume scales. Nuclear magnetic resonance (NMR) is a well-established spectroscopic technique widely employed in physics, chemistry, medicine, and biology. It allows for experiments on living matter 1,2 , whose relevance in biology is proven by developments such as in vivo protein structure determination 3 , metabolic profiling 4 , visualization of gene expression 5 , and latent phenotype characterization 6 . Despite its advantages, NMR suffers from a significantly lower sensitivity with respect to other methods. As a result, experiments are often restricted to large ensembles of cells 1,3,4,6 . Single cell studies are necessary to investigate heterogeneous phenomena within a cell population [7][8][9] . Recently, a number of techniques were applied to intracellular metabolic profiling at single cell scale, all having different limitations and degree of invasivity. For instance, mass spectrometry and fluorescence labeling allow high sensitivities, but require cellular content extraction or selective labeling with fluorophores 7,9 . Questions concerning invasivity stimulated the coin of the biological equivalent of the so called observer effect, referring to the inability to separate a measurement from its potential influence on the observed cell 9 . In this regard, NMR is one of the most promising techniques for studies of intracellular compounds in untouched living entities (i.e., with extremely weak physical and chemical perturbations) 1,7 . The application of NMR to intact individual microscopic biological entities was previously reported down to a volume of 5 nL. The first single-cell NMR experiments were performed on Xenopus laevis ova 10 which have volumes of about 1 μ L. Later, single giant neurons of Aplysia californica, with volumes of approximately 10 nL, were studied 11 . The particularly large volumes of these cells allowed several pioneering studies such as the profiling of highly concentrated metabolites and their subcellular localization 12,13 , imaging of Xenopus laevis cleavage 14 and neurons structure 15 , and study of water diffusion properties within the cytoplasm and nucleus 10,11,[16][17][18] . Recently, also spectroscopy of a single adult C. elegans worm (about 5 nL volume) was reported 19 . 
In this work we report, for the first time, NMR-based spectroscopy of single untouched sub-nL ova, specifically describing experiments on the tardigrade Richtersius coronifer (Rc) and the nematode Heligmosomoides polygyrus bakeri (Hp). These ova are just two of the many models present at the sub-nL scale (Fig. 1a), which include numerous species of microorganisms, echinoderms, and mammals (humans included) 20 . Rc ova are spherical with conical processes on the cuticular surface of the egg shell and have a typical volume of 0.5 nL (Fig. 1b). Hp ova are ellipsoidal and have a typical volume of about 0.1 nL (Fig. 1c). NMR spectroscopy of sub-nL biological samples is both a volume and concentration limited problem, setting severe constraints on the required spin sensitivity. Here we employ a recently developed single-chip integrated inductive NMR probe 21 entirely realized with a commercially accessible complementary-metal-oxide-semiconductor (CMOS) technology, where the combination of a low noise transceiver and a multilayer microcoil allows for high spin sensitivities in sub-nL volumes (Fig. 1d). In brief, the entire NMR probe occupies an area of about 1 mm 2 , it has a sensitive region of about 200 pL (on top of the microcoil) with a spin sensitivity at 7 T of about 1.5 × 10 13 spins/Hz 1/2 , and its planar geometry allows for a relatively easy access to the sensor. In order to use the device for the spectroscopy of sub-nL ova of microorganisms, we manually place the sample in the sensitive region of the probe using a polystyrene cup filled by agarose gel (see Methods). Figure 1e describes the assembled probe where single ova are in contact with the microcoil surface and embedded in the gel. This setup systematically allows for experimental times as long as one day. Results Linewidth in Rc ova. Figure 2a shows three 1 H NMR spectra obtained at 7 T (300 MHz) from single Rc ova embedded in H 2 O-based agarose gels. Due to a measured linewidth of about 70 Hz, the strong water signal (used as internal chemical shift reference at 4.7 ppm 17 ) overlaps with nearby resonance lines. The relatively short spin-spin relaxation times typically observed in oocytes explain only partially these broad lines 13,17,18 that must be caused by susceptibility mismatches. In order to investigate the origin of the field distortions we performed measurements with an alternative setup enabling the spectroscopy of these samples in pure water and with controlled and reduced hardware-related field distortions (see S.I). Repeated experiments suggest that the linewidth measured in Rc ova is intrinsically related to the sample, probably resulting from microscopic constituents of the ovum introducing susceptibility mismatches whose typical spatial distribution impedes field shimming in the intracellular region. In line with this observation, previous studies limited to the vegetal cytoplasm of intact Xenopus laevis ova attributed similarly broad linewidths (about 0.3 ppm) to the presence of yolk platelets, or other organelles with paramagnetic components, generating local susceptibility mismatches 13 . However, despite the relatively low spectral resolution that characterizes Rc ova, the achieved limit of detection (advantageous in the setup employing the integrated single-chip probe) is sufficient for a qualitative detection of intracellular compounds (Fig. 2a). Rc ova spectroscopy. 
In presence of susceptibility mismatches enlarging the water signal it is difficult to apply water suppression techniques without introducing significant spectral artifacts 22 . As an alternative to the use of water suppression techniques we embedded the biological sample in gels based on heavy water (D 2 O), thus eliminating the water signal by replacement of water with D 2 O. In D 2 O-based agarose gels, HDO is formed by proton exchange with the OH groups in the agarose molecule. HDO resonates at about 0.03 ppm relative to the H 2 O chemical shift 23 and contributes to the only background signal that is visible in our experimental conditions and time scales (Fig. S2). The weaker background signal in D 2 O gels (about 100 times smaller than in H 2 O gels) is reproducible, allows one to better resolve the resonance lines close to water, and can be used as internal chemical shift reference (at 4.7 ppm as water). We do not exclude that, in presence of the sample, the peak at 4.7 ppm results also from leftover H 2 O within the ova. Figure 2b shows NMR spectra of eight single Rc ova in D 2 O-based gels obtained by dispersing agarose in pure heavy water. These spectra exhibit linewidths and chemical shifts compatible with the ones observed in H 2 O-based gels. A detailed comparison among spectra of different ova seem to indicate that the Rc ova exhibit a certain degree of spectral heterogeneity (Fig. 2c). In what follows we discuss the possible experimental artifacts that could lead to artificial spectral diversities and show a reproducibility study of single ova spectra. Rc ova are randomly selected from a population where there is no control over fertilization and/or development stage. Their volume is approximatively spherical, with a diameter naturally varying from 100 to 130 μ m. As shown in detail by the sensitivity maps in Fig. S3, the most sensitive region of our excitation/detection microcoil roughly corresponds to a deformed semi-ellipsoid of about 200 pl, i.e. smaller than the ova volume. Consequently, the signal amplitude does not depend linearly on the ovum volume. In order to quantitatively estimate the dependence of the signal amplitudes on the natural variability of ova volumes, we performed a numerical integration of the effective sensitivity shown in Fig. S3 over spherical volumes (representing Rc ova) having diameters of 100 and 130 μ m, placed on top of the microcoil, in which an homogeneous spin density is considered. The result of this calculation indicate that the maximum variability of signal amplitude due to different ova volumes is of about 25%. This value slightly increase to about 30% when the smaller sphere is laterally displaced by 15 μ m with the respect to center of the microcoil. From this estimation, we deduce that the variability in terms of signal amplitudes shown in Fig. 2 (as large as 350%) cannot be explained by the natural variability of ova volumes and/or the ovum-to-microcoil misalignment. Other factors that might provoke artificial heterogeneity among these NMR spectra can be: (1) the non-homogeneous coil sensitivity combined with a non-uniform intracellular chemical composition; (2) the random orientation of the ovum within the structural field inhomogeneity of the setup; (3) the presence, upon sample placing, of invisible air bubbles at the microchip-sample-gel interface (see Methods for assembly procedure). 
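The volume-averaging estimate described above can be reproduced numerically. The sketch below integrates a placeholder sensitivity profile (a simple exponential decay away from the coil plane, standing in for the simulated map of Fig. S3) over spheres of 100 and 130 μm resting on the chip surface; it illustrates the procedure only, not the actual 25-30% figures.

```python
import numpy as np

rng = np.random.default_rng(0)


def coil_sensitivity(x, y, z, decay=40e-6):
    """Placeholder sensitivity profile: unity at the coil centre, decaying with
    height above the chip surface (z) and lateral offset. The real profile is
    the simulated map of Fig. S3."""
    r = np.sqrt(x**2 + y**2)
    return np.exp(-z / decay) * np.exp(-r / (2 * decay))


def integrated_signal(diameter, lateral_offset=0.0, n=200_000):
    """Monte Carlo integral of the sensitivity over a sphere touching the chip."""
    radius = diameter / 2.0
    pts = rng.uniform(-radius, radius, size=(n, 3))
    inside = np.sum(pts**2, axis=1) <= radius**2
    x, y, z = pts[inside].T
    # Sphere resting on the chip: centre at z = radius, plus lateral offset in x.
    signal = coil_sensitivity(x + lateral_offset, y, z + radius)
    volume = 4.0 / 3.0 * np.pi * radius**3
    return signal.mean() * volume


s_small = integrated_signal(100e-6)
s_large = integrated_signal(130e-6)
print(f"relative signal difference: {100 * (s_large - s_small) / s_large:.0f} %")
```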
In order to investigate these possible sources of artifacts, we performed six additional experiments on four Rc ova, in particular on the ova which produced the spectra (d), (e), (f) and (g) shown Fig. 2b. Figure 3a shows spectra of ovum (d) and ovum (e) after three arbitrary repositioning, realized delicately rotating the ova within the respective spent gels. Although we observe some variations of the linewidths as well as of the signal amplitudes, the dominant spectral features (i.e. the ones between 0 and 4 ppm) are conserved upon sample rotation and change of local environment. Figure 3b shows the result of experiments where both ovum (f) and (g) are repositioned in a fresh gel. As we can see, the dominant spectral features were conserved also upon transfer into fresh gels. Figure 3c and d compare the averaged spectra of ova (f) and (g) to spectra of ova (a) and (h), showing that the variability in spectra of different ova can be larger than the variability of repeated experiments on the same ovum. Overall, Fig. 3 suggests that the observed diversity among spectra of Rc ova cannot be attributed only to the manipulation and positioning of the ovum but must be caused, at least partially, by its intrinsic properties. Sensitivity of the single-chip probe. Due to the planar geometry of the excitation/detection coil, our probe has an effective spin sensitivity which depends on the sample volume, shape, and distance from the coil surface. Our experimental conditions are characterized by a spectral resolution of about 0.3 ppm and a field strength of 7 T. In the case of a spherical sample of 30 μ m diameter in contact with the chip surface, the time-domain spin sensitivity of about 1.5 × 10 13 spins/Hz 1/2 corresponds to a limit of detection (LOD) in the frequency domain of about 700 pmol of 1 H nuclei per single scan (quantity of 1 H nuclei that gives a signal-to-noise ratio of three). In this example the sensing capability of the microcoil is fully exploited as the sample is contained within the most sensitive region of the detector (see S3). In the case of a spherical sample of 100 μ m diameter in contact with the chip surface, the spin sensitivity is reduced to about 4 × 10 13 spins/Hz 1/2 , corresponding to an LOD in the frequency domain of about 1900 pmol of 1 H nuclei per single scan (the spins intended to be distributed homogeneously within the whole sample). In terms of LOD, the performance of our single-chip probe are competitive with the most sensitive inductive NMR devices so far reported 24-27 . Chemical shifts in Rc and Hp ova. In this study the relatively small number of samples available (see Methods) poses significant and non-trivial technical challenges to studies of ova collections aimed at the elucidation of proton peaks assignment (see Discussion). In our experiments the peak assignment is hindered by the combination of a small number of spins with a relatively poor spectral resolution. Nevertheless, a few qualitative observations can be done by comparison to previously reported NMR spectra of intact C. elegans worms 28 and Xenopus laevis ova 13,17 . Although these NMR-based studies analyze biological entities that are different from the ones investigated in this work, they probably represent the closest term of comparison available in literature in terms of volume size and samples nature. The NMR signals in Xenopus laevis 13,17 at about 0.9, 1.3, 2.1, 2.8, 5.2 ppm were attributed to highly concentrated yolk lipids (in particular triglycerides 13,29 ). 
These results well explain the origin of the dominant features in both Rc and Hp ova spectra. In Fig. 4a and b, a peak at about 3.2 ppm seems to discriminate the intracellular content of Hp ova from the one of Rc ova. Prominent resonances at about 3.2 ppm were previously assigned to a relatively restricted group of metabolites in intact C. elegans worms, which are nematodes as Hp 28 . As shown in Fig. 2 a visible signal at 3.8 ppm is present in some Rc ova. A resonance at about 4 ppm was assigned to the glycerol backbone in Xenopus laevis, typically lower and broader with respect to the other lipid signals (to which this compound is strictly related) 13 . Hence, this resonance is hardly related to yolk lipids. The presence of a highly concentrated endogenous compound is a more likely explanation for the signal detected at this particular chemical shift. Discussion In this study we reported on the use of a state of art sub-nL NMR probe for the analysis of single sub-nL ova of microorganisms, indicating the limits of the technique for the non-invasive detection of intracellular compounds within ova as small as 0.1 nl. The results shown may be used as a starting point to extrapolate the realistic experimental possibilities offered by NMR tools for applications such as the non-invasive selection of microscopic entities based on the direct quantification of highly concentrated endogenous compounds. In terms of spin sensitivity performance that future setups may offer, a straightforward improvement is the use of a higher field. Moving from 7 T to 23.5 T (the highest field commercially available) with the same microcoil should improve the spin sensitivity by a factor of six if the linewidth originates entirely from magnetic susceptibility issues (see S.IV). In these conditions it is reasonable to achieve limits of detection on 1 H nuclei in the order of 7 pmol in 10 minutes and 0.9 pmol in 10 hours for samples having a volume below 100 pl and linewidths as large as 0.3 ppm. Further improvements are obvious for samples exhibiting typical linewidths narrower than the ones observed in this study. Improved spectral resolutions may be obtained by MAS techniques 13,19,30 . A few explorations on microscopic intact biological samples report linewidths of about 0.1 ppm in Xenopus laevis eggs at 14 T 13 and C. elegans at 23.5 T 19 . Experiments on large collections of C. elegans and bovine tissues demonstrate linewidths as narrow as 0.05 ppm 19,30 . It seems therefore reasonable to obtain significant narrowing of the line via MAS. Although MAS probes are not yet optimized for maximum sensitivity at the sub-nL scale, its application at larger volume scales (few tens of nL) may already provide tools supporting the study of sub-nL ova. In wider terms, static and/or spinning probes analyzing 10 nL collections of rare or precious sub-nL ova would allow for proton assignments (elucidating eventual heterogeneities detected among individual samples at the single ovum level) and a better characterization of the spin-spin relaxation properties without need of excessive sample accumulation. However, the realization of such tools is hindered by significant technical challenges, simultaneously requiring small sensitive volumes, high filling factors, high resolution, MAS, and sample loading and manipulation capabilities. 
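The field and averaging-time scalings quoted above can be checked with a short calculation. The 3/2 exponent for the field dependence (appropriate if the linewidth grows linearly with the field, as expected for susceptibility-dominated broadening) and the square-root dependence on measurement time are stated assumptions, consistent with the factors given in the text.

```python
B0_now, B0_new = 7.0, 23.5           # magnetic field, tesla

# Frequency-domain sensitivity gain if the linewidth scales linearly with B0.
field_gain = (B0_new / B0_now) ** 1.5
print(f"field gain ~ {field_gain:.1f}")             # ~6, as quoted

# LOD improvement from signal averaging scales with sqrt(measurement time).
lod_10min = 7.0                                      # pmol of 1H, assumed starting point
lod_10h = lod_10min / (600.0 / 10.0) ** 0.5          # 10 h = 60 x 10 min
print(f"LOD after 10 h ~ {lod_10h:.1f} pmol")        # ~0.9 pmol
```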
Our results, obtained at a relatively weak field of 7 T, suggest that a LOD of about 5 pmol of 1 H nuclei within a sub-nL region (in this study specifically corresponding to sensitivities ranging from 20 to 50 mM in terms of intracellular concentration) is sufficient for the detection of the most concentrated compounds in individual ova of microorganisms having volumes below 1 nL. Curiously, signals at chemical shifts that are not typical of yolk lipids are visible. This indication seems, at first sight, in contradiction with the previous NMR spectroscopic studies of intact Xenopus laevis ova 13 , where yolk lipids explain all the spectroscopic features, which are essentially identical to those of the yolk of an hen egg 31 . In order to detect metabolites in these samples it was indeed necessary the use of magic angle spinning probes at 14 T loaded with more than one ovum 13 . However, the Xenopus laevis ovum (the smallest previously analyzed with NMR spectroscopy) might not be the best term of comparison, as its typical volume (about 1 μ L) is larger by a factor ranging from 10 3 to 10 4 with respect to the ova studied in this work. A peculiar class of sub-nL ova that justifies the interest in approaches for the non-invasive intracellular spectroscopy of individual samples is constituted by the mammalian zygotes. Recent studies demonstrate, using Scientific RepoRts | 7:44670 | DOI: 10.1038/srep44670 techniques other than NMR, that in sheep 32 and human 33 oocytes the uptake or production rates of metabolites such as lactate, pyruvate, and glucose can reach 100 pmol/oocyte/h and change radically along the natural development. It is worth noting that these results concern exchange rates measured in the extracellular medium and, hence, do not provide a direct quantification of the intracellular content and its time evolution. Spectrophotometry of intracellular extracts, on the other hand, has shown that up to 30 pmol/oocyte of glutathione (GSH) are contained in oocytes of goat 34 and pig 35,36 and can change in reaction to environment and developmental stage 37 . Variations of a few pmol/oocyte of GSH in time scales of the order of several hours have been reported in hamster 38 and rat 39 oocytes. In these studies, the intracellular GSH content and its evolution is directly measured, but the ensemble measurements hide possible heterogeneities among single entities. These findings indicate that the sensitivity achievable with high sensitivity miniaturized inductive NMR probes should be sufficient for a non-invasive real-time intracellular monitoring of GSH in single mammalian zygotes. The application of NMR spectroscopy to the analysis of spent culture media was recently proposed to aid the selection of viable human embryos for in vitro fertilization purposes 40 . The direct application of NMR on single embryos using miniaturized high sensitivity probes is potentially advantageous for this aim. We suggest that systematic and extensive NMR studies on single cultured ova may provide new data that could shed light on cryptic processes involved in embryonic development [32][33][34][35][36][37][38][39] and provide new methodologies to estimate embryonic health 37,40 . The hardware used in this work is an ultra-compact integrated probe entirely realized with commercially accessible complementary-metal-oxide-semiconductor (CMOS) technologies that might open to the realistic possibility of implementing relatively low-cost arrayed miniaturized probes. 
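The correspondence between the quoted molar LOD and the 20 to 50 mM intracellular concentration range follows from the detection volume; a minimal sketch, assuming effective detection volumes between 0.1 and 0.25 nL:

```python
# LOD of ~5 pmol of 1H nuclei spread over sub-nL detection volumes.
lod_mol = 5e-12                       # mol of 1H nuclei
avogadro = 6.022e23

print(f"{lod_mol * avogadro:.1e} spins")             # ~3e12 spins

for volume_L in (0.1e-9, 0.25e-9):                   # assumed 0.1 and 0.25 nL volumes
    conc_mM = lod_mol / volume_L * 1e3               # mol/L -> mmol/L
    print(f"{volume_L * 1e9:.2f} nL -> {conc_mM:.0f} mM")
```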
However, improvements for what concern samples manipulation are required, especially for applications aiming at studying precious samples such as mammalian embryos. Recently, many efforts were successfully dedicated to the microfabrication of devices for manipulation and culture of individual living embryos 41,42 . Both integrated circuits and microfluidics are suitable for arrays implementation, and their combination has been demonstrated in applications such as single cell magnetic manipulation 43 and flow cytometry 44 . We believe that this combination can be extended to NMR applications for the realization of arrayed high sensitivity NMR probes, enabling simultaneous studies on a large number of single biological entities in the same magnet. Methods Experiments and protocols were approved by the SV-biosecurity unit committee of the École Polytechnique Fédérale de Lausanne and carried out in accordance with the experimentation guidelines of the institution. Single ovum probe mounting. The ova were first transferred, using a 100 μ l pipette, from the tube into a Petri dish filled by 1.5% H 2 O-based agarose gel. Often more than one ovum was found on the Petri dish, in which case the additional samples were left isolated on the gel for eventual later use and stored at 4 °C between successive experiments. Single ova were transferred into a 1.5% agarose gel-filled polystyrene cup using two eyelashes. No visible damage to the ova was provoked during this procedure. The concentration of agarose was carefully chosen, based on repeated assemblies of ova, such that the resulting gel was hard enough to allow a stable placement of the ovum but still sufficiently soft to avoid ovum rupture during the setup assembly (typically happening for gels with more than 3% of agarose). The gel matrix was providing a deformable soft surface to embed and hold the ovum. When placed on the gel, the ovum was protruding from the surface by about half of its volume, hence ensuring an initial physical contact between the ovum and the surface of the microcoil upon placing. Later, the cylindrical polystyrene cup, containing the gel with the ovum on its surface, was positioned on top of the microchip in such a way that the ovum was precisely placed over the microcoil. The local depletion of any visible air bubble was relatively easy and reproducible. The cup was fixed to the printed circuit board with candle wax. The gel keeps the sample in close contact with the coil for days without physically damaging it, whereas the wax prevents gel drying. The microchip was wire bonded to a printed circuit board, with bonding wires electrically isolated by a silicone glue. Figure 1e describes the assembled probe. Tardigrade Richtersius coronifer (Rc). Eggs of Rc were extracted from a moss sample collected in Öland (Sweden) by washing the substrate, previously submerged in water for 30 min, on sieves under tap water and then individually picking up eggs with a glass pipette under a dissecting microscope. The eggs were shipped within 24 hours in sealed tubes with water and subsequently stored at − 20 °C before use. The embryonic development of Rc ova is relatively slow, with the eggs hatching in more than 50 days 45 . All experiments were carried out within a week after tube opening. The tube was stored at 4 °C between separated experiments. The NMR experiments were performed in H 2 O, M9, and D 2 O. 
It is known that prolonged exposure to a high concentration of D 2 O affects living organisms to different extents, from lethal to marginal 46,47 . In order to test the effects of D 2 O exposure on Rc specimens, 16 eggs and 10 animals were submerged in D 2 O (at 15 °C) for 36 and 24 hours respectively and then transferred in H 2 O. A control group of 16 eggs was kept in H 2 O. The effects of the exposition to D 2 O on the survival of the specimens were not negligible but definitively not systematically lethal: all the animals survived, and a hatching of 84% in the control group and of 63% in those exposed to D 2 O was observed after a time of approximately 2 months. The total amount of Rc ova available for this study was of about 120 units. Nematode Heligmosomoides polygyrus bakeri (Hp). Eggs of Hp were collected from faeces of infected mice. Faeces were first dissolved in water and then washed with a saturated NaCl solution. Floating eggs were collected from the top layer of the solution and washed twice. Final centrifugation in water for 5 minutes at 2000 rpm sedimented clean eggs at the bottom of the tube. The amount of ova typically available at each extraction varied from tens to a few hundred depending on the host organism response to the infection. Fecundated ova of Hp develop into a fully embryonated state within 24 hours and within two days stage 1 larvae begin to emerge 48 . In H 2 O-based gels Hp ova regularly hatched after a few hours, the emerging larvae migrating far from the sensitive region of the microcoil. In D 2 O-based gels the ova never hatched within two days of observation, hence allowing for the necessary long averaging time. All experiments were carried out within two days after sample extraction. The tube was stored at 4 °C between separated experiments. NMR experimental details. NMR experiments were performed in the 54 mm room temperature bore of a Bruker 7.05 T (300 MHz) superconducting magnet. The electronic setup was identical to the one described in details in ref. 21. All experiments employing the single-chip probe were performed with a repetition time of 2 s, a π /2 pulse length of 2.5 μ s, and an acquisition time of 400 ms. The time domain data were post-processed by applying an exponential filter with decay of 50 ms. The alphabetic order in Fig. 2b corresponds to the chronologic order of the measurements.
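The post-processing described above (an exponential filter with a 50 ms decay applied to the time-domain data before Fourier transformation) corresponds to a standard line-broadening apodization. A minimal sketch on a synthetic free induction decay, with all signal parameters chosen arbitrarily:

```python
import numpy as np

acq_time = 0.4                        # s, acquisition time as in the experiments
n_points = 4096
t = np.linspace(0.0, acq_time, n_points, endpoint=False)

# Synthetic free induction decay: single resonance at 500 Hz offset, T2* = 5 ms.
fid = np.exp(2j * np.pi * 500.0 * t) * np.exp(-t / 5e-3)
fid += 0.05 * (np.random.randn(n_points) + 1j * np.random.randn(n_points))

# Exponential filter with a 50 ms decay constant, as in the processing described above.
fid_filtered = fid * np.exp(-t / 0.05)

spectrum = np.fft.fftshift(np.fft.fft(fid_filtered))
freqs = np.fft.fftshift(np.fft.fftfreq(n_points, d=t[1] - t[0]))
print(freqs[np.argmax(np.abs(spectrum))])    # peak recovered near 500 Hz
```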
2017-05-04T18:11:20.310Z
2017-03-20T00:00:00.000
{ "year": 2017, "sha1": "845b0e81b3f6056127c2c2ea51428c08dadaef27", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep44670.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "90cd09ef670b2b0efe09d6f3336ce9a1348665ad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
253116809
pes2o/s2orc
v3-fos-license
Joint Waveform and Passive Beamformer Design in Multi-IRS-Aided Radar Intelligent reflecting surface (IRS) technology has recently attracted significant interest in non-line-of-sight radar remote sensing. Prior works have largely focused on designing single-IRS beamformers for this problem. For the first time in the literature, this paper considers multi-IRS-aided multiple-input multiple-output (MIMO) radar and jointly designs the transmit unimodular waveforms and optimal IRS beamformers. To this end, we derive the Cramér-Rao lower bound (CRLB) of target direction-of-arrival (DoA) estimation as a performance metric. Unimodular transmit sequences are the preferred waveforms from a hardware perspective. We show that, through suitable transformations, the joint design problem can be reformulated as two unimodular quadratic programs (UQP). To deal with the NP-hard nature of both UQPs, we propose the unimodular waveform and beamforming design for multi-IRS radar (UBeR) algorithm, which takes advantage of low-cost power method-like iterations. Numerical experiments illustrate that the MIMO waveforms and phase shifts obtained from our UBeR algorithm are effective in improving the CRLB of DoA estimation. INTRODUCTION An intelligent reflecting surface (IRS) is composed of a large array of scattering meta-material elements, which reflect the incoming signal after introducing a pre-determined phase shift [1,2]. Recently, the benefits of IRS have been investigated for future wireless communications applications [3][4][5], including multi-beam design [6], secure parameter estimation [7] and joint sensing-communications [8][9][10]. In this paper, we focus on IRS-aided radar, where combined processing of line-of-sight (LoS) and non-LoS (NLoS) paths has shown improvement in target estimation and detection [11][12][13][14] through an optimal design of IRS phase shifts. Target detection via multiple-input multiple-output (MIMO) IRS-aided radar was studied extensively in [11]. In our earlier works on target estimation [12,15], we derived the optimal IRS phase shifts based on the mean-squared error of the best linear unbiased estimator (BLUE) for the complex target reflection factor [12] and the Cramér-Rao lower bound (CRLB) of Doppler estimation for moving targets [15]. Recent studies [13,16] focused on optimization of IRS beamforming based on the CRLB of direction-of-arrival (DoA) estimation for a single IRS-aided radar. (Footnote 1: Equal contribution. This work was sponsored in part by the National Science Foundation Grant ECCS-1809225, and in part by the Army Research Office, accomplished under Grant Number W911NF-22-1-0263. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.) More recent works [15,17] demonstrate the benefits of deploying multiple IRS platforms instead of a single IRS. Similar to a conventional radar [18], a judicious design of transmit waveforms improves the performance of IRS-aided radar. Whereas designing radar probing signals is a well-studied problem [18][19][20][21][22], it is relatively unexamined for IRS-aided radar. In this context, transmit sequences that mitigate the non-linearities of amplifiers and yield a uniform power transmission over time are of particular interest.
Unimodular sequences with the minimum peakto-average power ratio exhibit these properties and have been studied in previous non-IRS works for radar applications [21]. In this paper, we jointly design unimodular sequences and IRS beamformers. Multipath propagation through multiple IRS platforms increases the spatial diversity of the radar system [23]. To this end, we investigate the benefits of multipath processing for multi-IRS-aided target estimation. We first derive the CRLB of DoA estimation for a multi-IRS-aided radar. Then, we formulate the unimodular waveform design problem based on the CRLB minimization for IRSaided radar as a unimodular quadratic program (UQP). The unimodularity constraint makes the UQP an NP-hard problem. In general, UQP may be relaxed via a semi-definite program (SDP) formulation but the latter has a high computational complexity as well [24,25]. Inspired by the power method that has the advantage of simple matrix-vector multiplications, [22,26] proposed power method like iterations (PMLI) algorithm to approximate UQP solutions leading to a low-cost algorithm. We formulate the IRS beamforming design as a unimodular quartic programming (UQ 2 P). Prior works [19,27] on unimodular waveform design with good correlation properties also lead to UQ 2 Ps, for which they employ a more costly majorization-minimization technique. On the contrary, we use a quartic to bi-quadratic transformation to solve UQ 2 P by splitting it into two quadratic subproblems. Our unimodular waveform and beamforming design for multi-IRS radar (UBeR) algorithm is based on the cyclic application of PMLI and provides the optimized CRLB. In summary, the contributions of our work are introducing the signal model for a multi-IRS-aided radar system, derivation of the Fisher information for the DoA estimation and developing our algorithm called UBeR for joint Unimodular waveform and beamforming design in multi-IRS-aided radar. Throughout this paper, we use bold lowercase and bold uppercase letters for vectors and matrices, respectively. We represent a vector x ∈ C N in terms of its elements {xi} as x = [xi] N i=1 . The mn-th element of the matrix B is [B] mn . The sets of complex and real numbers are C and R, respectively; (·) , (·) * and (·) H are the vector/matrix transpose, conjugate and the Hermitian transpose, respectively; trace of a matrix is Tr(.); the function diag(.) returns the diagonal elements of the input matrix; and Diag(.) produces a diagonal/block-diagonal matrix with the same diagonal entries/blocks as its vector/matrices argument. The Hadamard (element-wise) and Kronecker products are and ⊗, respectively. The vectorized form of a matrix B is written as vec (B). The s-dimensional all-ones vector, all-zeros vector, and the identity matrix of size s × s are 1s, 0N , and Is, respectively. The minimum eigenvalue of B is denoted by λmin(B). The real, imaginary, and angle/phase components of a complex number are Re (·), Im (·), and arg (·), respectively. vec −1 K,L (c) reshapes the input vector c ∈ C KL into a matrix C ∈ C K×L such that vec (C) = c. MULTI-IRS-AIDED RADAR SYSTEM MODEL Consider a colocated MIMO radar with Nt transmit and Nr receive antennas, each arranged as uniform arrays (ULA) with inter-element spacing d. The M IRS platforms indexed as IRS1, IRS2,...,IRSM , are implemented at stationary and known locations, each equipped with Nm reflecting elements arranged as ULA, with element spacing of dm between the antennas/reflecting elements of IRSm. 
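Before continuing with the signal model, the array quantities just introduced can be made concrete with a short numerical sketch. The Python/NumPy snippet below (not from the paper) constructs a uniform-linear-array steering vector with half-wavelength element spacing and a diagonal IRS phase-shift matrix Φm = Diag(vm) with unit-modulus entries; the element count, angle, and zero-phase initialization are illustrative assumptions.

```python
import numpy as np

def ula_steering(n_elements, theta, d_over_lambda=0.5):
    """Steering vector of an n-element uniform linear array for angle theta
    (radians); d_over_lambda is the element spacing as a fraction of wavelength."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))

def irs_phase_matrix(phases):
    """Diagonal IRS beamforming matrix Phi_m = Diag(v_m), v_{m,k} = exp(j*phi_{m,k})."""
    return np.diag(np.exp(1j * np.asarray(phases, dtype=float)))

# Illustrative values: an 8-element IRS observing a target at 25 degrees
theta_ti = np.deg2rad(25.0)              # hypothetical target-to-IRS angle
b_m = ula_steering(8, theta_ti)          # IRS steering vector toward the target
Phi_m = irs_phase_matrix(np.zeros(8))    # zero-phase initialization of the reflecting elements
```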
The continuous-time signal transmitted from the n-th antenna at time instant t is xn(t). Denote the Nt × 1 vector of all transmit signals as The steering vectors of radar transmitter, receiver and the m-th IRS are, respectively, where λ, is the carrier wavelength and d and dm are usually assumed to be half the carrier wavelength. Each reflecting element of IRSm reflects the incident signal with a phase shift and amplitude change that is configured via a smart controller [28]. We denote the phase shift vector of IRSm by vm = [e jφ m,1 , . . . , e jφ m,Nm ] ∈ C Nm , where φ m,k ∈ [0, 2π] is the phase shift associated with the k-th passive element of IRSm. The received signal back-scattered from a single target is the superimposition of echoes from both LoS and NLoS paths as α ritir,m Hir,mΦmHti,mHit,mΦmHri,m where Φm = Diag (vm), α (·),m is the complex reflectivity which depends on the target back-scattering coefficient and the atmospheric attenuation, and w(t) ∼ CN (0, σ 2 IN t ) denotes a stationary (homoscedastic) additive white Gaussian noise (AWGN). In general, the received signal may also have an additional inter-IRS interference that should be included while accounting for the SNR. When there is some blockage or obstruction between the radar and target, we have αrtr 0, αritr,m 0 and αrtir,m 0. We replace α ritir,m by α m for notation brevity. The received signal becomes Our goal is to design a radar system for inspecting a range cell located at distance dtr with respect to (w.r.t.) the radar transmitter/receiver for a potential target. Assume that the relative time gaps between any two multipath signals are very small in comparison to the actual roundtrip delays, i.e., τritir,m ≈ τ0 = 2d tr c for m ∈ {1, . . . , M }, where c is the speed of light. We collect N slow-time samples at the rate 1/Ts from the signal, at t = nTs, n = 0, . . . , N − 1. Hence, corresponding to the range-cell of interest, the received signal vector is where , and we define Hm = Hir,mΦmHti,mHit,mΦmHri,m ∈ C Nr ×N t . The delay τ0 is aligned on-the-grid so that n0 = τ0/Ts is an integer [29]. Collecting all discrete-time samples for Nr receiver antennas, the received signal is the Nr (1), it is easily observed that y ∼ CN (µ, R), where µ =XHα and R = σ 2 I Nr N . Note that, since w(n) is a stationary process and i.i.d. with σ 2 variance, through vectorization and stacking all ensembles as one vector, the resulting process is still stationary and i.i.d with the same variance. Our goal is to show the effectiveness of placing M IRS platforms in estimating the DoA of the target in the LoS path, i.e. θtr. For simplicity, we consider a two-dimensional (2-D) scenario, where the radar, IRS platforms and the target are in the same plane. Our analysis can be easily extended to 3-D scenarios. The following remark states that the estimation of DoAs in the NLoS paths, θti,m, for m ∈ {1, . . . , M } is equivalent to an estimation of θtr. where B = √ 2 Proof. GivenX = X ⊗ I Nr , rewrite Fisher information in (5) as (8) Since the argument of real operator is a real number, we can put it out of the real operator. Using the identity (8), we immediately get (7). Using the expression in (7), we recast the unimodular waveform design objective as a unimodular quadratic objective that leads to a UQP. To proceed with IRS beamformer design, define,Ḣ m = Dmvec (Vm), D = Diag (D1, . . . , Dm), and Cm = Diag (bm(θti,m)) Hri,m, where the unimodular phase shifts for IRSm are given by vm = diag (Φm) or Vm = vec vmv m . 
In order to obtain (9), we imposed the reciprocity, Hir,m = H ri,m for a radar with collocated antennas and Nr = Nt. For the IRS beamforming, the Fisher information F θ w.r.t. phase shifts is recast in the following proposition. Proposition 2. The Fisher information is quartic in phase shifts: , and P is the commutation matrix, i.e., vec Ḣ = Pvec Ḣ . UBER ALGORITHM We resort to a task-specific alternating optimization (AO) or cyclic algorithm [22,31,32], wherein we optimize (12) for X and ν cyclically. To tackle each subproblem, we adopt power method-like iterations (PMLI) [26], which is a computationally efficient procedure to tackle the UQP. The PMLI resembles the well-studied power method for computing the dominant eigenvalue/vector pairs of matrices [26]. Given a matrix G, the following problem is a UQP [26]: If G is positive semidefinite, the PMLI iterations lead to a monotonically increasing objective value for the UQP. Unimodular Waveform Design: From Proposition 1, the Fisher information F θ for the unimodular waveform X is the unimodular quadratic objective in (7). Let s = vec (X), and G = (IN ⊗ B) H (IN ⊗ B). Therefore, the vectorized X is obtained from P1 via the iterations (t ≥ 0): vec X (t+1) = e j arg((I N ⊗B) H (I N ⊗B)vec(X (t) )) . To guarantee that the maximization of g (ν1, ν2) w.r.t. ν1 and ν2 also maximizes F θ (ν), a regularization would be helpful. Therefore, we add the norm-2 error between ν1 and ν2 as a penalty function to (18), we obtain where η is Lagrangian multiplier. Rewrite the objective of (19) as whereλM is the maximum eigenvalue of E(νi). To tackle the UQ 2 P for maximizing F θ , we solve the biquadratic program (20) using PMLI in (14). Algorithm 1 summarizes the proposed steps. The PMLI in UBeR have previously been shown to be convergent in terms of both the optimization objective and variable [21,22]. 2 , Lagrangian multiplier η, total number of iterations Γ1 and Γ2 for problems P1 and P2, respectively. Output: Optimized phase shifts ν * , unimodular waveform X * . SIMULATION RESULTS We consider a radar, equipped with Nr = Nt CN (0, 1). In Algorithm 1, we set Γ1 = 50 and Γ2 = 20 for all iterations. Throughout all our experiments the Lagrangian multiplier η is tuned to 0.1. Initially, all IRS platforms are set to impose zero phase shift ν (0) i = 0 M Nm for i ∈ {1, 2}. The number of slow-time samples is set to N = 50 and the samples in X (0) are generated from a normal distribution. Fig. 1a illustrates that the multiple IRS-aided radar outperforms the single-IRS aided radar. Further, Fig. 1b indicates that iterations of Algorithm 1 result in a monotonically decreasing CRLB. SUMMARY Waveform design for IRS-aided radar is relatively unexplored in prior works. In this context, this paper studies a new set of waveform design problems. Numerical experiments demonstrate that the deployment of multiple IRS platforms leads to a better achievable estimation performance compared to non-IRS and single-IRS systems. Some IRS model enhancements that should be accounted for in the future include the inter-IRS interference and quantization of the IRS phases.
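As a concrete illustration of the PMLI step that UBeR cycles through, the following minimal Python/NumPy sketch approximates a generic UQP, max_s s^H G s subject to |s_i| = 1, with the update s^(t+1) = exp(j·arg(G s^(t))) described above. The matrix G below is a random positive semidefinite stand-in rather than the actual waveform or beamforming matrices derived in the paper, and the problem size and iteration count are illustrative (the paper itself uses Γ1 = 50 and Γ2 = 20 iterations for its two subproblems).

```python
import numpy as np

def pmli_uqp(G, n_iter=100, s0=None):
    """Power-method-like iterations (PMLI) for the UQP
       max_s s^H G s  subject to |s_i| = 1,
    using the update s^(t+1) = exp(j * arg(G @ s^(t))).
    For positive semidefinite G the objective is monotonically non-decreasing."""
    n = G.shape[0]
    s = np.ones(n, dtype=complex) if s0 is None else np.asarray(s0, dtype=complex)
    for _ in range(n_iter):
        s = np.exp(1j * np.angle(G @ s))   # project onto the unit-modulus constraint
    objective = float(np.real(np.conj(s) @ G @ s))
    return s, objective

# Hypothetical example: a random PSD matrix standing in for the design matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
G = A.conj().T @ A                         # positive semidefinite by construction
s_opt, val = pmli_uqp(G)
```

In practice, the same iteration is applied cyclically to the waveform and phase-shift subproblems, with the quartic phase-shift objective first split into two quadratic subproblems as outlined above.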
2022-10-27T01:16:27.038Z
2022-10-26T00:00:00.000
{ "year": 2022, "sha1": "57c75c09c573722c59dc27164c441d0809eded4d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "22ff03c91eeafa4f4ff6f2f034544355d93688d3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
14377585
pes2o/s2orc
v3-fos-license
Prognostic nomogram for nonresectable pancreatic cancer treated with gemcitabine-based chemotherapy Background: A nomogram is progressively being used as a useful predictive tool for cancer prognosis. A nomogram to predict survival in nonresectable pancreatic cancer treated with chemotherapy has not been reported. Methods: Using prospectively collected data on patients with nonresectable pancreatic cancer receiving gemcitabine-based chemotherapy at five Japanese hospitals, we derived a predictive nomogram and internally validated it using a concordance index and calibration plots. Results: In total, 531 patients were included between June 2001 and February 2013. The American Joint Committee on Cancer (AJCC) TNM stages were III and IV in 204 and 327 patients, respectively. The median survival time of the total cohort was 11.3 months. A nomogram was generated to predict survival probabilities at 6, 12, and 18 months and median survival time, based on the following six variables: age; sex; performance status; tumour size; regional lymph node metastasis; and distant metastasis. The concordance index of the present nomogram was higher than that of the AJCC TNM staging system at 12 months (0.686 vs 0.612). The calibration plots demonstrated good fitness of the nomogram for survival prediction. Conclusions: The present nomogram can provide valuable information for tailored decision-making early after the diagnosis of nonresectable pancreatic cancer. Pancreatic cancer is the fourth leading cause of cancer death in the United States (Siegel et al, 2013) and the fifth in Japan. It is often nonresectable at the time of diagnosis, and is generally associated with a poor prognosis. However, recent advances in chemotherapy have prolonged the survival time of patients with nonresectable pancreatic cancer (Nakai et al, 2010;Conroy et al, 2011;Sun et al, 2012). Systemic administration of gemcitabine has been the mainstream first-line chemotherapy for nonresectable pancreatic cancer, since Burris et al (1997) demonstrated the superiority of gemcitabine over 5-fluorouracil. More recently, gemcitabine-based combination chemotherapies have been intensely investigated (Heinemann et al, 2006;Nakai et al, 2012b;Ueno et al, 2013), but few trials have shown their superiority over gemcitabine monotherapy (Moore et al, 2007;Von Hoff et al, 2013). In this setting, reliable prognostic information is desired for tailored management of individual patients with advanced pancreatic cancer receiving gemcitabine-based chemotherapy. The most widely used staging system for pancreatic cancer is the American Joint Committee on Cancer (AJCC) TNM staging system (Edge et al, 2010). However, it is relatively nondiscriminatory for survival prediction in nonresectable pancreatic cancer, which is mostly diagnosed as TNM stage III or IV. A nomogram is a simple graphical presentation of a multivariate predictive model showing the impact of each included variable on an outcome of interest that provides a numerical probability of the outcome (Iasonos et al, 2008), and is progressively being used as a useful predictive tool for cancer prognosis in the field of oncology (Kattan et al, 2002(Kattan et al, , 2003International Bladder Cancer Nomogram Consortium et al, 2006;Touijer and Scardino, 2009). Several prognostic factors for survival of nonresectable pancreatic cancer have been reported (Ishii et al, 1996;Ueno et al, 2000;Sezgin et al, 2005;Nakai et al, 2008Nakai et al, , 2011, but were evaluated separately in different cohorts. 
One of the strengths of a nomogram is the ability to integrate multiple prognostic factors into a single numerical estimate of survival in an individual patient and thus provide an individualised prediction of survival. A nomogram to predict survival after resection of pancreatic cancer was generated by Brennan et al (2004) and has been validated externally as well as internally (Ferrone et al, 2005). However, to our knowledge, a nomogram for survival prediction in nonresectable pancreatic cancer treated with chemotherapy has not been reported. The aim of this study was to generate and internally validate a nomogram to predict survival in patients with nonresectable pancreatic cancer receiving gemcitabine-based chemotherapy. MATERIALS AND METHODS Patients. From a collaborative prospective database of patients with pancreatic cancer including data from the University of Tokyo Hospital and affiliated hospitals, we identified consecutive patients who were diagnosed with nonresectable pancreatic cancer and subsequently received gemcitabine-based chemotherapy as the first-line anticancer treatment between June 2001 and February 2013. Pancreatic cancer was diagnosed by pathological examination or typical radiographic findings or by a clinical follow-up of at least 6 months. Nonresectability was confirmed via consultation with the departments of surgery and anaesthesiology in each hospital. Patients were followed up at least every 2 weeks on an outpatient basis, and the tumour responses were evaluated by computed tomography, which was performed at baseline and then after every two cycles (8 weeks), according to the Response Evaluation Criteria in Solid Tumors, version 1.0 (Therasse et al, 2000). Follow-up was performed until October 2013. The study was approved by the ethics committee at each institution. Gemcitabine-based chemotherapy. Patients received gemcitabine alone or in combination with S-1, candesartan, or erlotinib. The regimen of each chemotherapy was as follows. For gemcitabine monotherapy, gemcitabine was administered intravenously at 1000 mg m⁻² on days 1, 8, and 15 within each 4-week cycle. For gemcitabine and S-1, gemcitabine was administered intravenously at 1000 mg m⁻² on days 1 and 15, and S-1 was given orally b.i.d. from days 1 to 14 within each 4-week cycle. The doses of S-1 were determined according to the body surface area (BSA) as follows: BSA ≤ 1.25 m², 80 mg per day; 1.25 m² < BSA ≤ 1.5 m², 100 mg per day; and BSA ≥ 1.5 m², 120 mg per day. For gemcitabine and candesartan, gemcitabine monotherapy plus oral candesartan at a dose of 4-32 mg per day was administered within each 4-week cycle. For gemcitabine and erlotinib, gemcitabine monotherapy plus oral erlotinib at a dose of 100 or 150 mg per day was administered. Some of the patients were enrolled in our clinical trials: GEMSAP (Nakai et al, 2012b) and GECA1/2 (Nakai et al, 2012a, 2013). AJCC TNM staging system. In the current staging system utilising T, N, and M factors (AJCC Cancer Staging Manual, 7th Edition, 2010) (Edge et al, 2010), the pancreatic cancer stages are categorized as follows: Stage IA, T1 N0 M0; Stage IB, T2 N0 M0; Stage IIA, T3 N0 M0; Stage IIB, T1-3 N1 M0; Stage III, T4 N-any M0; and Stage IV, T-any N-any M1. Statistical analysis. Survival time was defined as the time from initiation of chemotherapy to all-cause death. Patients who were alive at the last follow-up or lost to follow-up were treated as censored at the time of last follow-up.
Survival times were estimated using the Kaplan-Meier method and compared using the log-rank test. The 95% confidence interval (CI) of the median survival time (MST) was calculated (Brookmeyer and Crowley, 1982) and that of the survival rate was also calculated by the formula of Greenwood (Kalbfleisch and Prentice, 1980). The multivariate Cox proportional hazards model was used to generate a nomogram to predict survival probabilities at 6, 12, and 18 months and MST. The following variables were included: age; sex; performance status based on the criteria of the Eastern Cooperative Oncology Group; tumour size; regional lymph node metastasis (AJCC N-factor); and distant metastasis (AJCC M-factor). Tumour size was defined as the maximum diameter of a primary tumour based on the findings of computed tomography. The performance status values of 2 and 3 were grouped because of the small numbers of patients. The proportional hazards assumption of each variable was verified by Schoenfeld residual plots. The newly generated nomogram was internally validated via two steps. First, the nomogram was subjected to bootstrapping with 1000 resamples to calculate a relatively unbiased measure of its ability to discriminate the survival times of two patients (concordance index). The concordance index quantifies the level of concordance between predicted probabilities and actual outcomes, and ranges from 0.5 (no discrimination at all) to 1.0 (perfect discrimination). In other words, it reflects the probability that a patient with a lower probability of survival predicted via the nomogram dies earlier than another patient with a higher predicted probability, when considering two patients randomly selected from the study population. Second, the predicted probability was compared with the observed frequency in the total study population, again using bootstrapping with 1000 resamples to reduce an overfit bias (calibration). The superiority of the present nomogram over the AJCC TNM staging system for survival prediction was confirmed as follows. The bias-corrected concordance indexes of the nomogram and the AJCC TNM staging system were calculated to compare their predictive abilities for survival. The heterogeneity of survival within each AJCC stage was evaluated by demonstrating histograms of nomogram-predicted survival probabilities. Stratification of actual survival via the nomogram-predicted probabilities was illustrated by categorising the total cohort according to quartiles of the probabilities and comparing survival times between the groups. All analyses were performed using R software version 2.15.2 (R Development Core Team; http://www.r-project.org) and the RMS package developed by Harrell (Harrell et al). Values of P<0.05 were considered statistically significant, and all tests were two-sided. Generation and internal validation of a prognostic nomogram. A nomogram was generated via the Cox proportional hazards model including the above-mentioned variables, and is demonstrated with brief instructions for its usage in Figure 1. The results of the underlying univariate and multivariate Cox models for survival time are shown in Table 2. The nomogram predicts the survival probabilities of 6, 12, and 18 months and the MST after initiation of chemotherapy in a given patient.
For example, a 65-year-old ('Points' = 10) male ('Points' = 5) patient with performance status of 1 ('Points' = 20), tumour size of 60 mm ('Points' = 33), regional lymph node metastasis ('Points' = 15), and absence of distant metastasis ('Points' = 0) has a 'Total Points' score of 69, which corresponds to 6-, 12-, and 18-month-predicted survival probabilities of 80%, 54%, and 34%, respectively, and to a predicted MST of 13 months. The bias-corrected concordance indexes of the present nomogram were higher than those of the AJCC TNM staging system at all time points (0.686 vs 0.612 for 6-month survival; 0.686 vs 0.612 for 12-month survival; and 0.686 vs 0.611 for 18-month survival). These results demonstrated that the discrimination via the newly generated nomogram was superior to the grouping via the AJCC TNM staging system. Calibration plots of the 6-, 12-, and 18-month survival are shown in Figure 2. The mean absolute errors between the observed and predicted probabilities were 0.019, 0.045, and 0.038 for 6-, 12-, and 18-month survival, respectively, and the errors for 90% of the study population were within 0.046, 0.027, and 0.057, respectively. AJCC TNM staging and nomogram-predicted survival probabilities. Figure 3 illustrates histograms of nomogram-predicted survival probabilities at 12 months after initiation of chemotherapy within each of the TNM stages (III and IV). There was considerable heterogeneity in the nomogram-predicted survival probabilities even for the same TNM stage. DISCUSSION The present prognostic nomogram derived from prospectively collected data on 531 patients from five hospitals was shown to provide improved ability for individualised survival prediction in patients with nonresectable pancreatic cancer receiving gemcitabine-based chemotherapy, compared with the existing TNM staging system. By using this nomogram, individualisation of patient counselling and decision-making regarding management can be promoted. Patients with nonresectable pancreatic cancer have a very high probability of ultimately dying of their primary disease. However, given the improved survival via recent advances in chemotherapy for this condition (Nakai et al, 2010; Conroy et al, 2011; Sun et al, 2012), the importance of tailored management of patients with nonresectable pancreatic cancer has increased, and clinical physicians and patients alike desire reliable prognostic information tailored to individual patients. Although the AJCC has developed and revised a TNM staging system for pancreatic neoplasms (Edge et al, 2010), it was not specifically developed for survival prediction of nonresectable cases, but instead was rather related to resectability evaluation and preoperative staging. In this staging system, most patients with nonresectable pancreatic cancer are diagnosed as stage III or IV, and are thus only dichotomised. Therefore, the TNM staging system is relatively nondiscriminatory as a means for survival prediction of nonresectable pancreatic cancer treated with chemotherapy. Actually, within the group at each TNM stage in the present study, considerable heterogeneity was found in terms of nomogram-predicted survival probabilities among the patients (Figure 3). In other words, the patients were associated with various survival times, even if they were diagnosed as the same TNM stage.
A nomogram was developed as a statistical tool to provide the overall probability of a specific outcome via a simple graphical presentation, and was shown to be more accurate than conventional staging systems for predicting prognosis in various malignancies (Kattan et al, 2002(Kattan et al, , 2003International Bladder Cancer Nomogram Consortium et al, 2006) and benign diseases (Klein et al, 2002;Sugihara et al, 2013a, b). As a nomogram is simple and understandable, it has been easily introduced into daily clinical practice. In addition, a nomogram can generate individualised predictions, and thus patients can be evaluated for their participation in clinical trials using this tool. For example, several randomised controlled trials have included expected survival time in their eligibility criteria, and a nomogram that can more accurately predict survival can provide valuable information for this consideration, potentially making such trials more sophisticated (Iasonos et al, 2008). To estimate survival in a given patient, the 'Total Points' score is calculated by summing the respective 'Points'values corresponding to each variable. Using this 'Total Points' score, the survival probabilities at 6, 12, and 18 months and the median survival time can be predicted according to the lower scales. In this study, we aimed to generate and internally validate a nomogram to overcome the above mentioned drawbacks of the TNM staging system as a means for survival prediction in patients with nonresectable pancreatic cancer receiving chemotherapy. The present nomogram included the following six variables that can be readily determined, thus providing information on predicted survival at the time of chemotherapy initiation: age; sex; performance status; tumour size; regional lymph node metastasis; and distant metastasis. Age and sex are baseline characteristics of patients. Performance status, which was reported to be a significant risk factor for survival in nonresectable pancreatic cancer (Ishii et al, 1996;Ueno et al, 2000;Sezgin et al, 2005), can be determined by physical examinations, and the remaining factors by computed tomography, which is routinely performed for patients with unresectable pancreatic cancer. In addition, no specific molecular markers (Liu et al, 2012;Lee et al, 2013) are required for this nomogram. Therefore, survival prediction via the present nomogram can be made immediately at an outpatient clinic without any additional costs, potentially providing valuable information for decision-making regarding treatments early after diagnosis of the disease. Carbohydrate antigen 19-9 (CA 19-9) is widely recognised as a prognostic factor for pancreatic cancer (Ueno et al, 2000;Ikeda et al, 2001;Nakai et al, 2008). However, patients whose red blood cell phenotyping for both Lewis A and B antigens is negative are unable to secrete CA 19-9 into their serum. Therefore, we did not include this variable in the model to secure usability for the general population, but did include tumour size, which is positively correlated with CA 19-9 (Sakahara et al, 1986;Tian et al, 1992). Overall, the internal validation demonstrated good fitness for survival prediction at several specific time points, as the predicted survival probabilities at 6, 12, and 18 months estimated via the nomogram were closely aligned with the actual survival times. Again, Figure 3 emphasizes that our nomogram provides a more differentiating prediction model compared with the AJCC TNM staging system. 
Figure 4 shows a clear risk stratification of survival times using nomogram-predicted survival probabilities. Therefore, using this nomogram, physicians can predict their patients' prognosis, provide more informative explanations to the patients, and initiate more individualised management. There are limitations to be addressed in the present study. First, all potential predictive variables could never be included in the analysis to generate a nomogram with an absolute predictive ability. However, the internal validation demonstrated good fitness of the present nomogram based on the six variables for survival prediction. Nonetheless, we should recognise that bootstrapping is a sample reuse method that is useful to mitigate an overfit bias of the data for nomogram generation, but cannot ensure applicability to an external cohort. Therefore, our nomogram should be externally validated using an independent population in the future. Second, despite our use of a large multicenter collaborative database that provides prospectively collected data on patients with nonresectable pancreatic cancer, some of the patients were not followed-up until death. In conclusion, the present nomogram can predict the prognosis of patients with unresectable pancreatic cancer receiving gemcitabine-based chemotherapy with considerable accuracy, potentially facilitating highly tailored patient management.
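Although the authors performed all analyses in R with the rms package, the concordance index that underlies the internal validation is easy to illustrate. The Python sketch below computes Harrell's C-index for right-censored survival data by counting concordant comparable pairs; the toy risk scores and follow-up times are hypothetical, and the implementation is deliberately simplified (tied times and the bootstrap bias correction used in the study are omitted).

```python
import numpy as np

def harrell_c_index(risk, time, event):
    """Harrell's concordance index for right-censored survival data.
    risk : higher value = higher predicted risk (shorter predicted survival)
    time : observed follow-up time
    event: 1 if death was observed, 0 if the observation was censored."""
    concordant, comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # the pair (i, j) is comparable if patient i died before time[j]
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical toy data: six patients with nomogram-derived risk scores
risk = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.5])
time = np.array([5.0, 14.0, 8.0, 20.0, 6.0, 12.0])   # months from chemotherapy start
event = np.array([1, 1, 1, 0, 1, 0])                  # 0 = censored
print(round(harrell_c_index(risk, time, event), 3))
```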
2016-05-12T22:15:10.714Z
2014-03-18T00:00:00.000
{ "year": 2014, "sha1": "30ba44d3bf675514c765afb09e79aa8aa5f1b83a", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/bjc2014131.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "30ba44d3bf675514c765afb09e79aa8aa5f1b83a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245445177
pes2o/s2orc
v3-fos-license
Traditional Artificial Neural Networks Versus Deep Learning in Optimization of Material Aspects of 3D Printing 3D printing of assistive devices requires optimization of material selection, raw materials formulas, and complex printing processes that have to balance a high number of variable but highly correlated variables. The performance of patient-specific 3D printed solutions is still limited by both the increasing number of available materials with different properties (including multi-material printing) and the large number of process features that need to be optimized. The main purpose of this study is to compare the optimization of 3D printing properties toward the maximum tensile force of an exoskeleton sample based on two different approaches: traditional artificial neural networks (ANNs) and a deep learning (DL) approach based on convolutional neural networks (CNNs). Compared with the results from the traditional ANN approach, optimization based on DL decreased the speed of the calculations by up to 1.5 times with the same print quality, improved the quality, decreased the MSE, and a set of printing parameters not previously determined by trial and error was also identified. The above-mentioned results show that DL is an effective tool with significant potential for wide application in the planning and optimization of material properties in the 3D printing process. Further research is needed to apply low-cost but more computationally efficient solutions to multi-tasking and multi-material additive manufacturing. Introduction Additive manufacturing (3D printing) has been widely used in clinical practice since the 1980s, including, for example, for preoperative simulation, training, and manufacturing of implants and rehabilitation supplies. Process design methodologies and models can be implemented more efficiently and faster using artificial intelligence (AI), including machine learning (ML). The following AI methods and tools are used: • Determining the structure of the technological process (sequences of technological operations and procedures): decision rules • Building models of selecting materials, semi-finished products, tooling and their parameters, and settings: artificial neural networks (ANN) and decision trees (DT) • Pre-processing (normalization, coding) of selected data used to build models: fuzzy logic, including ordered fuzzy numbers (OFN) • Implementation of models for the selection of materials, semi-finished products, tools, devices, and parameters of their processing as a prototype expert system used to design the technological process: ANN • Attempts to eliminate disturbances in the course of the planned technological process affecting product quality by means of the developed methodology and models of An artificially intelligent system is designed to support technologists, both experienced (as an opinion system) and inexperienced (as a system that complements their knowledge and experience, and teaches) in process design. Methodologies, models, and prototype expert systems are developed based on copying the activity of a human who is an expert in a given field, with the ability to gather experience and knowledge, analyze data, and draw conclusions to solve problems. Research conducted within the project will demonstrate their usefulness and effectiveness in the design and supervision of the technological process of 3D printing. In addition, application of AI methods will increase the use of data included in technological databases. 
Types, structures, and parameters of the learning and testing processes have been optimized to make efficient use of knowledge, including that extracted in real time from sensors. We expect interesting research conclusions and the emergence of models that are more effective than existing ones. The main purpose of this study was to compare the optimization of 3D printing properties toward the maximum tensile force of an exoskeleton sample based on two different approaches: traditional ANN and a DL approach based on convolutional neural networks (CNNs). The plan is to solve the same technological problems using two completely different tools and compare the results obtained and the effectiveness of both approaches: traditional ANNs and DL. In particular, the idea is to avoid, as far as possible, generating results on a "black box" basis (i.e., without an explanation of how the result was obtained)-in this context, decision trees are more intuitive and simpler than ANNs. This is particularly important in the production of individualized, one-off production, with a large number of product variants, and therefore a low degree of standardization. In this way, knowledge gained from experienced technologists and from already designed and tested technological processes, proven in production, can be effectively used to design new technological processes for new products. This labor-intensive process, requiring many consultations with technologists, can be performed more efficiently, quickly, and accurately, bringing a new quality to CAPP (Computer Aided Process Planning) systems, based in part on technologists' intuition, which is difficult to describe. The acquisition of new knowledge will also be realized through periodic learning of solutions, i.e., according to incoming new data [1][2][3]. This translates into automatic updates, based on actual data and not just catalogue data, subject to changes over many years. This avoids errors in process design and thus minimizes company losses. The new knowledge gained in the project will significantly improve the implementation of production processes, the optimization of existing technologies and the emergence of new technologies, and is the authors' contribution to research on artificial intelligence and its applications [1][2][3]. The structure of the article is as follows: we start with theoretical background regarding DL and optimization, then we present the material (analyzed data sets) and research methods/tools (ANN, DL) used in the work. We successively present the results of both approaches, discuss their advantages and disadvantages compared to the solutions of competing teams, and indicate the directions of further research. The work ends with detailed conclusions. Deep Learning DL is a ML technique in artificial intelligence that is a rapidly developing area of research and engineering practice. DL far surpasses many of its predecessors in its ability to recognize speech, computer vision, and natural language processing, as well as to implement ML or intelligent machine design. In this paper, we use the deep ML paradigm and different types of neural networks to optimize 3D printing [1][2][3]. In contrast to traditional learning methods, DL refers to ML techniques that use supervised or unsupervised strategies to automatically learn hierarchical representations in deep architectures for the purpose of classifying intelligent patterns. 
Multilayer information processing in hierarchical architectures is used here for feature learning and the subsequent classification of learned patterns. DL has been combined even more effectively in industrial components that use vast amounts of advanced information. It is at the intersection of the research areas of neural networks, model optimization, pattern recognition, and signal processing. Two main reasons for the popularity of DL are: • A significant reduction in hardware costs • Drastically increased computing capabilities of processors (e.g., graphic processing units (GPUs)). Since 2006, researchers have demonstrated the success of DL in many applications such as computer vision, speech recognition, image feature encoding, semantic classification, handwriting recognition, information retrieval, and robotics [1][2][3]. Four key attributes are used to classify a ML paradigm and place it in the context of a specific application: input representation, source and target distribution, training data, and loss function (Figure 1) [1][2][3]. DL can help push the boundaries of what has previously been possible in the field of 3D printing optimization. However, this does not automatically mean that traditional techniques that were gradually developed in the years before DL have become obsolete. It may be that in some applications, legacy solutions will prove to be more effective, and a hybrid approach, combining old and new methods and techniques, will have to be used to solve some problems-these issues still require further research, and this paper is one of the first to address this complex problem [2][3][4][5][6][7][8] (Tables S1-S3). A multilayer perceptron (MLP) is a feed-forward ANN that has a minimum of three layers: • Input layer • Hidden layer • Output layer. The neurons in MLP use a non-linear activation function (Figure 2). The main disadvantage of the MLP is that it has many parameters due to its full internal connection.
This can result in redundancy and inefficiency. CNN is also a feed-forward neural network. The core element of CNN's architecture is the convolution layer, consisting of a set of learning filters. In CNN hidden layers, the convolution and linking functions are usually used instead of the normal activation functions (Figures 2 and 3). Adaptive process control and sensor fusion can be an important part of smart manufacturing [9]. ML can be divided into: 1. Reinforcement learning: Deep Q-network 2. Supervised learning: • Regression (neural networks, decision trees, ensemble methods, linear, non-linear (GLM logistic)) • Classification (naive Bayes, k-nearest neighbors-kNN, discriminant analysis, support vector machines-SVM) 3. Unsupervised learning: Clustering (k-means, hierarchical, neural, Gaussian, hidden) [4]. Main ANN, CNN, and DBNN architectures are presented below (Figure 4). The many methods use various algorithms for implementation, but ANN and SVM are the most popular techniques to implement the ML paradigm. DL is an extended version of supervised learning. CNN and Deep Belief Network are two powerful techniques that can be used to solve various complex problems using DL. DL platforms can also leverage engineering features when learning more complex representations that engineering systems typically do not have. It is absolutely clear that there has been insufficient progress in the development of deep ML systems. One of the most common decision-making tasks in human activity is classification. This classification problem arises when an object must be assigned to a predefined class based on a number of observed attributes associated with that object. Many problems in business, science, industry, and medicine can be treated as such classification problems. Examples include bankruptcy prediction, credit scoring, medical diagnosis, quality control, handwriting recognition, and speech recognition.
Optimization of Solutions 3D printing material features, limitations in the fabrication of complex geometries, and processing parameters have significant effects on the performance of 3D-printed parts (and possibly their therapeutic effect), so it is necessary to optimize these parameters, which is a difficult task. The idea of optimizing 3D printing and its control systems is key for the development of this group of technologies by relying on new 3D printing technologies, the acquisition and processing of control signals, their classification and interpretation, novel mechanical properties of materials (including programmable strength in different directions and ease of disinfection), and automation of their use in 3D printing (including multi-material printing). AI/ML-based tools can be utilized in different simulation environments. The so-called batch production systems allow for quick product creation and easy modification through recipe amendments-the modifications are made by a technologist without the involvement of programmers. Batch production systems are suitable for all applications where there is mixing and thermal, pressure or chemical processing of many components to obtain a finished product, e.g., for the chemical, pharmaceutical, and food industries. The system itself takes care of the availability of equipment and raw materials needed for production by checking the possibility of fulfilling orders, and if this is not possible, it informs the employees. The production process simulator is a complete, virtual model of a factory with an accurately reproduced production process (a so-called digital twin)-a practical implementation of the idea of Industry 4.0. The digital process simulator makes it possible to check the correct functioning of the entire system before it is implemented in the facility. This makes it possible to control barriers, dependencies, and production processes step-by-step without the risk of losses resulting from wasted material, poor product quality or installation damage. This reduces start-up time, even from several weeks to a few days. Data Analysis and Computational Model The main objective of this study was to compare the optimization of 3D printing properties toward the maximum tensile force of an exoskeleton sample based on two different approaches: traditional ANNs and DL based on convolutional neural networks (CNNs). We want to discuss whether familiarity with classical ANN optimization techniques should be retained, and whether and how it is worthwhile to combine the two approaches to optimization (traditional ANN and DL). Analyzed Data Sets For testing of the two computational approaches presented below, five 3D-printed structures (a set of exoskeleton parts of different sizes) were prepared using the FDM technique and checked by an expert for correctness of the technology and the absence of defects. Examples of the 3D-printed parts are presented in Figure 5. The Cura 0.1.5 and SLICER software (3D Ultimaker, Utrecht, The Netherlands), and fused filament fabrication (FFF) technology were used in this research to create and 3D print the aforementioned parts of the exoskeleton.
Slicing software determined a way to decompose the digital 3D model into layers for printing by an FFF printer. This FFF printer uses a particular sequence of operations to print: • First, depending on the type of printer, the nozzle, the print bed or both move while the plastic is being extruded • Simultaneously, the heated nozzle ejects molten plastic and deposits it in thin layers, one on top of another, layer-by-layer, forming the shape of the whole 3D printed object • The aforementioned filament layers fuse together due to the thermal fusion bonding occurring between the individual layers, to create a solid part (after cooling down). To measure the maximum tensile force of the exoskeleton samples, the tests consisted of subjecting each sample mounted in the grips of an INSTRON 5966 testing machine (Instron, High Wycombe, UK) to a monotonically increasing tensile load with a travel speed of the piston of the testing machine of 0.2 mm/s. The tests were carried out at a temperature of 21-23 °C and 55% air humidity. During the test, the instantaneous values of the loading force and displacement of the grip of the testing machine were measured until the sample cracked and completely detached. Balancing the technical requirements with user safety constraints requires analysis from the initial stages of the project. The list of optimized parameters for 3D printing is shown in Table 1. Testing Procedure First, the obtained data were analyzed using the ordinary ANN algorithm, and then using DL (CNN), whose task was to enhance the contrast between changes in the 3D print as a result of the material features and identification of selected optimized parameters. It should be mentioned that a key condition for the replication of our study may be appropriate selection of the used PLA material, its storage, preparation, and then the same procedures with the 3D-printed objects. We are aware that the influence of microstructure and atomic defects on the properties of the materials used and printed objects is assessed as strong. The data served as the source of the variables for training ANN and CNN, respectively.
The above-mentioned data have been divided into two sets, training and testing, as follows: • The training set was used to identify systematic errors and network weights during learning • The testing set was used to calibrate, prevent network overtraining, and measure and compare the ANN and CNN performance. Traditional Approach To optimize the 3D printing parameters in the traditional way, we used a three-layer feed-forward artificial neural network (ANN) built and trained in the MATLAB environment with the Neural Networks Toolbox (version R2021b, MathWorks, Natick, MA, USA). A multi-layer perceptron (MLP) proved to be beneficial for optimizing the process parameters in the FFF technique [2]. We used: • The back-propagation (BP) algorithm-a popular gradient-based local search optimization technique • A naive initialization technique • Neural network weight presetting, instead of setting the aforementioned weights to small random numbers, to avoid a slow error convergence rate, being trapped at local minima, etc. • Optimization of the connection weights of the MLP set to minimize the error function (i.e., the average mean square error (MSE) between the target and actual outputs averaged over all training examples). The structure of the used ANN is shown in Figure 6 and Table 2. All of the layers of the ANN contained neurons with the same sigmoid activation function (Table 2: NS, structure of the ANN, 5-20-10; AH, activation function in the hidden layer, sigmoid; AO, activation function in the output layer, sigmoid). Deep Learning Approach To optimize the 3D printing parameters in a deep learning way, we used a four-layer convolutional neural network (CNN) built and trained in the MATLAB environment with the Deep Learning Toolbox (version R2021b, MathWorks, Natick, MA, USA). DL is used in the field of digital data processing to solve problems that are impossible or difficult to solve by traditional CI methods (e.g., detection, classification, segmentation, etc.), usually with super-human accuracy. We provide here a comparison of simulations on a traditional and a deep ANN using the same data, in an attempt to answer at what level of complexity of the system and its description the increased calculation effort of a deep ANN pays off. That is to say: when do we use DL and why? As DL methods, we have used convolutional neural networks (CNNs), which improve the prediction efficiency in most cases by using large amounts of data and abundant computational resources, and push the boundaries of what was possible before, both by humans and traditional CI systems.
This is because questions have arisen in recent years: does the greater use of DL make traditional CI techniques obsolete, or is there still a need to research and develop traditional CI techniques, or perhaps even to combine them with DL in the form of hybrid systems? There are still tasks for which traditional CI techniques with global properties are a better solution, especially when considering computing power, time, accuracy, and the characteristics and quantity of the inputs, as far as their application in Internet of Things (IoT) and mobile solutions is concerned. We compared traditional and deep simulations on the same 3D printing data in an attempt to answer the question: at what level of system complexity (and of the complexity of its description) is DL's increased computational effort repaid? The outcome of the study is intended as a suggestion of when to switch to DL in the optimization of 3D printing, and with which process-calculation parameters. The structure of the above-mentioned network is shown in Figure 7 and Table 3.
Almost all layers of the network contained neurons with the same sigmoid activation function, but the output layer contains linear neurons that provide an easy-to-compare cost function: Gaussian cross-entropy (MSE) (Table 3). The selection of the functions performed by the hidden layers of the CNN is key to the course of the learning process. Restricted functions (sigmoid and hyperbolic tangent) in the hidden layers can cause an unstable gradient, so learning can get stuck when the activation is saturated.
Results
After training and testing the ANN and CNN networks, the results, i.e., the classification accuracy and (R)MSE coefficients, showed that the traditional ANN was able to minimize the MSE for the training set to very small values (0.01) and did so more quickly than the CNN, but with lower exactness (Figures 8 and 9, Tables 4 and 5). The (R)MSE value as a function of the number of epochs decreased faster in the conventional ANN network (Figure 8).
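For comparison with the MLP sketch above, the following minimal example shows a small convolutional network with sigmoid hidden activations and a linear output layer evaluated against an MSE cost, matching the description in the text; it is an assumption-laden illustration, not the network of Table 3, and the channel counts, kernel size, and input length are invented for the example.

```python
# Minimal sketch (illustrative only; not the CNN of Table 3): sigmoid hidden
# activations, linear output neurons, and an MSE cost, as described in the text.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.Sigmoid(),
    nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.Sigmoid(),
    nn.Flatten(),
    nn.Linear(16 * 10, 10),           # linear output neurons (no activation)
)
loss_fn = nn.MSELoss()

# Stand-in data: a batch of 4 "parameter signals" of length 10 and 10 target features.
x, y = torch.randn(4, 1, 10), torch.randn(4, 10)
loss = loss_fn(cnn(x), y)
loss.backward()                        # gradients for one training step
```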
Compared with the results from the traditional ANN approach, optimization based on DL slowed the calculations by up to 1.5 times at the same print quality, while increasing quality (both learning and testing) and decreasing the MSE; unique formulas and printing parameters not found previously through trial-and-error approaches were also identified. The longer computation time is the result of the more complex CNN structure (Tables 4 and 5). Our results indicate that DL is an effective tool with the potential for broad application in planning and optimizing material features in 3D printing. The CNN has the potential to solve more complex computational tasks; thus the DL algorithm can more quickly predict the behavior of complex physical systems from sparse data sets through the integration of physical modeling. The aforementioned time and properties may become increasingly important in the future, when the most powerful computational solutions will have to be applied to optimize the most complicated 3D printing projects. The higher values of quality (learning) and quality (testing) observed for the CNN (Table 4) reflect the CNN's better ability to infer from the collected data in the training and testing sets. The resulting optimized values of the ten 3D printing features established through the CNN-based analysis are presented in Table 6, and the optimal tensile force of the selected exoskeleton part can be seen in Figure 10. The optimal tensile force, estimated at 2122.2 N, can in practice be compared only to the hand grip strength applied within the exoskeleton, estimated at 20-60 N, while the grip strength of an ill person may be 50% lower. In the 3D-printed exoskeleton, material considerations are very important to the design, safety, and usability of the device. Optimization requires balancing the many features of the exoskeleton, but AI support can play a key role here, making the process easier and faster, and increasing production efficiency and the convenience and safety of the end product.
Discussion
Our results indicate that the proposed data analysis method is highly effective for optimizing sample parameters, regardless of their shape and size (including depth). Although applications of 3D printing have increased significantly in recent years, its broad application in health care is still in progress, especially when accompanied by novel AI-based optimization. The use of DL in medical 3D printing parameter selection systems is not obvious, and in rehabilitation engineering it is not common. Applications of DL in medical science and clinical practice using 3D printing are cited below for comparison.
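The comparison above rests on (R)MSE values and computation times such as those in Tables 4 and 5; a minimal sketch of how such a comparison can be made on a common test set is shown below (an illustration with assumed model objects and data, not the authors' evaluation script).

```python
# Minimal sketch (assumed models and data; not the authors' evaluation code):
# compare two trained models by RMSE on the same test set and by wall-clock time.
import time
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def timed_eval(predict, x_test: np.ndarray, y_test: np.ndarray):
    t0 = time.perf_counter()
    y_pred = predict(x_test)
    return rmse(y_test, y_pred), time.perf_counter() - t0

# err_ann, t_ann = timed_eval(ann_model.predict, x_test, y_test)   # hypothetical models
# err_cnn, t_cnn = timed_eval(cnn_model.predict, x_test, y_test)
# A lower error at a longer runtime for the CNN would reproduce the trade-off above.
```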
Our results confirmed that 3D printing with FFF technology based on the existing PLA/PLA+ material can be optimized for the effective printing of a usable/functional part of the exoskeleton and of its strength parameters. 3D printing of exoskeleton elements, and 3D printing for biomedical purposes in general, is already a complicated issue, because the materials and ready-made elements used should be biocompatible in addition to having the specified mechanical and chemical properties. The direction of further research appears already confirmed: artificially intelligent optimization of materials for biomedical 3D printing, first by developing specialized filaments adapted to professional biomedical applications, and then filaments with designed properties corresponding to the needs of the body and to use in combination with living tissue, body fluids, etc. In addition, the coexistence of technologies and materials in 3D printing makes it possible to produce, relatively quickly and cheaply, polymer, metal, ceramic, and even composite/multi-material objects with unique mechanical, thermal, and dimensional properties, which are often impossible or too expensive to produce using conventional manufacturing technologies. The advantage of the proposed research is its holistic approach, which covers not only the selection of the material for 3D printing but also the choice of technology, and takes into account the requirements of a medical device (patient, therapy, therapist).
This approach can be a good starting point for building an entire environment that connects software for designing medical devices (MDR, ISO 13485), parameter selection based on pre-programmable templates and AI-based analysis of requirements, and then optimization of printing, fitting, and any necessary corrections. The main limitation of the study is its focus on a specific exoskeleton solution, which is itself high technology and perhaps unattainable for some scientists trying to replicate it. It seems, however, that the proposed solutions can easily be adapted to simpler 3D-printed medical devices, e.g., orthoses. The gap in the contemporary scientific and professional literature concerns not only artificial-intelligence-based optimization of material properties for 3D printing, but also the entire process of diagnostics, selection, and adjustment of 3D-printed rehabilitation equipment as part of personalized medicine. This has not yet translated into more effective healthcare, because any individual approach is difficult to apply on a mass scale, especially to non-homogeneous groups of patients. This paper contributes to the existing body of literature as the beginning of a whole series of works devoted to changing the approach to the rehabilitation supply industry as part of Industry 4.0, and perhaps even Clinic 4.0, based on the wider use of artificial intelligence, preventive medicine, and personalized medicine. The challenge is multi-screen printing and the programming of the life cycle of medical devices to best serve patients. Many different variants of processes, technologies, and materials, and their improvements, must be considered, creating many novel subtechnologies and possibilities, in order to select those that provide new, demanding product features while maintaining the accuracy and speed of production and the quality of the printed object (the final product). Materials whose parameters/functions change with the structural parameters, e.g., with depth, as in living tissue, constitute an additional challenge. Hybrid methodologies can help improve 3D printing performance and solve problems that are not suitable for DL. Combining traditional techniques with deep learning may become popular in new areas for which deep learning models have not yet been fully optimized. The ability of AI-based systems to monitor the state of knowledge and engineering practice, to search for or generate new solutions (including alternatives to existing ones), to assess progress, and to dynamically modify the characteristics/parameters of design, planning, production, and recycling (including cycle planning) is becoming increasingly important. Sustainable development requires not only monitoring the life cycle of products, but also a problem-solving approach based on accurate sensors for data collection, aggregation, inference, and prediction with greater accuracy. DL is quite often used in 3D printing optimization, including medical applications. 3D printing enables the construction of affordable, patient-specific, anatomically accurate physical models that are more convenient and realistic during simulations of complex (neuro)surgical approaches in a safe didactic environment. All stages of a surgical procedure can be simulated, from positioning and exposure to deep microdissection, taking into account the complex anatomy, working angles, and pathoanatomical relationships.
Thermoplastic polymers with different properties can be used to reflect the visual and tactile responses of bones, neurological, and vascular tissues [10]. A personalized 3D model can characterize e.g., a patient's individual thyroid lesions, not only for medical professionals, but also for the patients and their families. This model can be an effective tool to improve patient understanding and satisfaction. A U-Net-based DL architecture and a 3D mesh modeling technique were used to produce a personalized 3D model of a thyroid gland. The average 3D printing time was long: over 4 h for each patient), but the average production price was only USD 4.23 for each patient. The size, location, and anatomical relationships of the tumor and thyroid gland could be represented better and more accurately, and the group of patients receiving personalized 3D printed models showed significant improvements in all four categories: general knowledge, benefits and risks of surgery, and satisfaction. All patients who received their 3D model found it helpful in understanding the disease, surgery, and possible complications, as well as generally satisfying [11]. DL was used to automatically measure the left ventricular (LV) ejection fraction and also to automatically measure the LVEF using two-dimensional echocardiography (2DE) images for different clinical centers, ultrasound machines, and heart disease phenotypes. A U-Net-based DL algorithm (DPS-Net) was used based on 36,890 frames of 2DE taken from 340 patients, and the two-plane Simpson method was applied to calculate the LVEF. The high performance of the DPS-Net in LV detection and LVEF measurement in heart failure with several phenotypes is shown. This was observed in a large dataset, i.e., DPS-Net is highly adaptive across different echocardiographic systems [12]. Computed tomography (CT) image reconstruction of a life-sized 3D-printed chest phantom placement of tissue mimicking inserts was performed using a commercial reconstruction algorithm (HDFoV) and a novel DL-based approach (HDeepFoV). Reconstruction of images outside the field of view of the CT scanner (e.g., in patients with obesity) requires use of extrapolated data. The DL-based algorithm showed much better performance in quantitative assessments based on 3D-printed phantom data, and in qualitative assessments of patient data [13]. Using low-powered AI acceleration chips, CNN also works interactively on mobile devices (even an iPhone 11 Pro), offering real-time performance in mobile headsets, virtual and augmented reality [14]. 3D printing has emerged as a potential way to produce general and personalized IUDs. To ensure controlled release of contraceptive hormones, Monte Carlo simulation and DL models based on ANN, could prove effective in developing precise contraceptive delivery systems, improving the quality of life for women worldwide [15]. Automated face recognition technology based on DL has achieved high accuracy in diagnosing various endocrine diseases and genetic syndromes. A CNN-based facial diagnostic system achieved a high accuracy of 97%, and the results of a prospective study demonstrated the application value of this system in Turner syndrome screening are promising [16]. The recent development of 3D printing has taken hold in healthcare and has led to clinical applications from anatomical models, through devices supporting diagnosis, treatment, rehabilitation, and care, to bioink 3D printing. 
Although much research to date has focused on materials, designs, processes, and products, little attention has been paid to efforts to enable their commercialization and rapid implementation into clinical practice, including addressing important issues such as reproducibility, quality control, and meeting regulatory requirements. Increasing process uniformity, consistent design, development, and manufacturing will require automation and the use of flexible artificial intelligent information systems, standardization of facilities, equipment, and processes in therapeutic and non-therapeutic applications [17]. Automated pathology detection and 3D vertebral reconstructions based on DL-based labeling and vertebral segmentation methods for biomechanical simulation and 3D printing facilitate clinical decision-making, surgical planning, and tissue engineering [18]. The integrated approach addresses materials processing, fabrication of engineering components and structures including: 3D printing, thin-film and multi-layer structures to obtain coupled mechanical and functional properties. DL solutions are trained to extract the elastoplastic properties of metals and alloys from indentation results using multiple datasets to achieve desired levels of accuracy improvement [19]. High levels of engagement in content-intensive subjects can be difficult to achieve. The majority of students considered 3D-printed models of the skeleton and its parts to be a resource that helped them to improve their study habits, achieve greater confidence, and improve their academic performance [20]. Simulation methods are increasingly used to improve medical skills, allowing trainees/practitioners to practice in a risk-free, reproducible environment. To this end, after segmentation of anatomical features using a 3D printer, several realistic 1:1 scale anatomical models can be produced containing all of the relevant structures, including vascular [21]. Careful surgical planning can determine the success or failure of a whole surgical procedure. A full understanding of the complex spatial relationship between the boundaries of a tumor and the surrounding healthy tissues enables accurate surgical planning. The use of 3D printing to produce anatomical models can be introduced into standard clinical practice, but requires incorporation of best practice and description of a workflow and methodology used to standardize affordable, realistic preoperative virtual and physical simulation that is cost-effective [22]. The study group found the 3D-printed model of a cranial fossa significantly more useful compared to the half skull used by the control group [23]. This approach can be accelerated by optical neural networks, combining wavelet optics with DL methods, demonstrating all-optical inference and generalization to subclasses of data. Combining native or designed dispersion of different material systems with a DL-based design strategy, broadband diffractive neural networks will help to design light-matter interactions in 3D, allowing the creation of task-specific optical components (optically deterministic tasks or statistical inference) [24]. The process of using CAD (Computer Aided Design), Pro/Engineer (Pro/E) software, and 3D printing to construct physical products follows three consecutive required steps: 1. 3D construction of the implant 3. 3D printout for physical printing. 
Thanks to the integration of clinical imaging, digital templates and 3D printing, the final prints of, for example, implants can be adapted to the needs of an individual patient, both in terms of shape and material properties [25]. AI should be considered as part of a comprehensive set of solutions, linked to comprehensive specialist education, diagnosis, treatment, rehabilitation and care, 3D printing and virtual/augmented reality technologies and telemedicine, including as part of a coherent therapeutic and business model that can be brought to the healthcare market in the future [26]. Even middle school students are already able to tinker in a virtual world using 3D design software and then tinker in the real world using printed parts, fostering staff development in new specialties [27]. 3D printing allows the creation of typical cyberphysical systems for mass customization, not only in rehabilitation, but also, for example, in dentistry. Short "series" and complex shapes make it necessary to compensate for errors, and doing this manually is neither easy nor economical, hence the need for automatic error compensation. For these reasons: 1. We obtain the shape using technologies such as 3D scanning 2. We use 3D DL to train a deep neural network for a specific task (printing an orthosis or a dental crown)-the CNN can learn the deformation function owing to the large amount of data used for training. 3. We verify the performance of the neural network: The accuracy achieved is sufficient with low hardware and software costs [28]. Endoscopic navigation systems look for integration of big data with multimodal information (i.e., from CT scans, magnetic resonance imaging, ultrasound images, and even external trackers) with respect to anatomy/physiology, patient pathology, controlling the movement of medical endoscopes and surgical instruments, and guiding the surgeon's actions during intervention (including haptic coupling i.e., transferring tissue properties to the endoscope's cusps). This allows the introduction of new techniques and promising directions for endoscopic navigation, including 3D printing reconstruction and the creation of teaching aids to support medical simulation [29]. These solutions can be integrated, e.g., with microfluidic devices as a new, low-cost, and convenient platform for, e.g., bacterial cell culture, antibiotic sensitivity, using DL-based vision data regression for robust data reporting [30]. Guidelines and ideas for future research constitute an important impact. In our opinion, optimization of the materials used should be a key part of future medical 3D printing. Novel materials and their pre-projected features may be better tailored to the patient's needs. The most promising direction for further research is computational analysis and optimization of material and energy suitability (taking into account both efficiency and environmental-friendliness criteria) combined with defect detection and classification as part of quality control in line with the Industry 4.0 paradigm. A signal analysis algorithm and a multi-label classifier based on a deep convolutional neural network (DCNN) trained on the results from active infrared thermography (IRT) has already been applied to evaluate the condition of 3D-printed structures [31]. 
It should be noted that cracks and pores are also common defects in metal parts produced by 3D printing, hence the need for mass defect detection and classification by segmenting images (still and moving-as in a production line for monitoring the 3D printing process in situ) with defects. This is achieved with almost 100% accuracy using a simple CNN model [32]. A review of DL methods in defect detection highlighted: • The use of ultrasonic testing, filtering, DL, machine vision, and other technologies used to detect defects • Classification of product defects into categories in different products • Functions and characteristics of existing equipment used for defect detection, related to high precision, high positioning, fast detection, small objects, complex backgrounds, hidden object detection and object association • And only then can DL methods be used to optimize production processes to avoid these defects [33]. Research on a data-driven ML model for predicting the performance of polyhydroxyalkanoates (PHAs) yielded an ML model using a deep neural network (DNN) to predict the glass transition temperature (Tg) of PHA homo-and copolymers. The DNN model performed better here than a support vector machine (SVD), the nonlinear ML model and the least absolute shrinkage and selection operator (LASSO), a sparse linear regression model. Compared to the commonly used ML models using quantitative structure-property relationships, this model does not require an explicit descriptor selection step but shows comparable performance [34]. High defect recognition accuracies by deep networks are not uncommon: an image recognition technique based on convolutional neural networks for multiple concrete defect recognition (CMDnet, 1981 types of concrete surface defects) showed a defect detection accuracy of 98.9% [35]. Verifying the usefulness of DNN and statistical modeling in predicting the strength of bone cements with defects resulting from the introduction of contaminants (blood, saline) into the cement at the stage of its preparation may play an important role in the initial, qualitative assessment of the effects of surgery and in limiting errors resulting in the failure to maintain the required mechanical parameters and, consequently, patient dissatisfaction [36]. A concurrent neural network (ConCNN) with different image scales performs better than other approaches, offering 98.89% classification accuracy with a latency of only about 5.58 ms [37]. Deep ML models allow for material-and energy-efficient designs with a lower environmental impact, e.g., for different strength classes, including optimal recycled content with the lowest cost and environmental footprint [38]. DL-based models also perform well for nanocomposites despite their non-linear nature of processing parameters and the difficulty in predicting the desired features using the conventional regression approach [39]. No doubt it is possible to generalize the DL methodology to a more advanced, multi-material analysis [40], thus we encourage other scientists to develop this area of research and industrial practice. Increasingly many new challenges toward the support of 3D printing by AI are posed not only by predictive operations [41][42][43] and process control [44] under the Industry 4.0 paradigm, but also in terms of eco-design [45] related to the policy of sustainable development and protection of our planet's potential. 
According to the newest research and publications [46,47], the current work is heading in the proper direction.
Conclusions
Additive manufacturing of medical devices, including those made of soft materials, requires optimization of the materials themselves (sometimes printable inks), of the raw-material formulas, and of the 3D printing processes, which must balance a large number of variable but highly correlated factors. New 3D printing materials and processes may be as important in rehabilitation as technologies such as biosensors, robotic devices, myoelectric control methods, and advances in brain-machine interaction. New 3D printing materials could be the next breakthrough in patient-tailored devices, but they should be cost-effective and usable with semi-automated, AI-assisted matching and decision support.
1. Experimental practice is time- and cost-intensive, so AI-based optimization may be a quicker and cheaper solution.
2. PLA-based 3D printing can be optimized to successfully print a utility/functional part of an exoskeleton. Optimization powered by AI/ML can play a key role in the 3D printing process, increasing the efficiency and safety of the printed object (the end product).
3. The DL-based approach will become the leader in 3D printing optimization as the complexity of the printed objects increases.
4. Compared with the results from the traditional ANN approach, optimization based on DL slowed the calculations by up to 1.5 times at the same print quality, increased quality (learning: 0.9577; testing: 0.9721), decreased the MSE (to 0.001), and identified a set of printing parameters not previously determined by trial and error.
5. At the current complexity and type of computation, there is no need to combine the two optimization solutions (traditional ANN and DL).
Baxter Algebras and Shuffle Products In this paper we generalize the well-known construction of shuffle product algebras by using mixable shuffles, and prove that any free Baxter algebra is isomorphic to a mixable shuffle product algebra. This gives an explicit construction of the free Baxter algebra, extending the work of Rota and Cartier. Introduction In this paper we generalize the well-known construction of shuffle product algebras by using mixable shuffles, and prove that any free Baxter algebra is isomorphic to a mixable shuffle product algebra. This gives an explicit construction of the free Baxter algebra, extending the work of Rota [15] and Cartier [2]. In an important paper published in 1958, Ree [12] constructed algebras in which the product is expressed in terms of shuffles. He was motivated by Chen's work on iterated integrals of paths [3], where this shuffle product is derived from the integration by parts formula (1) Subsequently, shuffle product constructions have been studied extensively and have found applications in many areas of pure and applied mathematics. In another important paper published in 1960, Baxter [1] considered operators P that satisfies the identity P (x)P (y) + P (xy) = P (xP (y)) + P (yP (x)) and used this identity to study the theory of fluctuations. Rota studied Baxter's operators from an algebraic point of view and defined a Baxter algebra to be an algebra A with an operator P satisfying the identity P (x)P (y) + qP (xy) = P (xP (y)) + P (yP (x)) for some fixed q in the base ring of A. Rota [15] and Cartier [2] gave explicit constructions of the free Baxter algebra on a set X in the case when q = 1. It is easy to see that the product in Baxter algebras when q = 0 can be described by shuffle products, and the shuffle product algebras considered by Chen and Ree have a canonical Baxter operator. However, there does not appear to be an explicit and systematic study of this connection between Baxter algebras and shuffle products except a remark in a recent paper of Rota [18] implying such a connection. The motivation for the current paper came from the desire of developing a theory that is "dual" to the beautiful theory of differential algebras obtained by Ritt [13] and Kolchin [10]. Since a differential algebra is an algebra with an operator that satisfies the Leibniz product rule, it seems natural to study "integration algebras", i.e., algebras with an operator that satisfies an identity similar to the one in equation (1). Of course, these are just Baxter algebras where q = 0. Unaware of the above mentioned work on Baxter algebras, we gave a description of the free integration algebra by using shuffle products. It was Rota who pointed out to us the earlier work on free Baxter algebras and suggested that we extend our shuffle product description of free integration algebras to Baxter algebras. This is carried out in this paper, by making use of a modified shuffle product, called the mixable shuffle product. Thus we not only consider shuffles of two vectors, but also shuffles in which certain components of a shuffle will "merge". This enables us to construct the mixable shuffle product algebras, generalizing the classical construction of shuffle product algebras, and to give a more intuitive and constructive description of the free Baxter algebra. Also, our description is of free Baxter algebras on any commutative algebra with any value q. 
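The display cited as equation (1) above was lost in extraction. As background (a standard identity, not necessarily the paper's exact display), the integration-by-parts relation from which Chen's shuffle product is derived can be written for the integration operator as follows; this is precisely the Baxter identity with q = 0, which is why shuffle products govern that case.

```latex
% Standard integration-by-parts identity behind the shuffle product; a background
% reconstruction, not necessarily the paper's exact equation (1).
P(f)\,P(g) \;=\; P\bigl(f\,P(g)\bigr) + P\bigl(g\,P(f)\bigr),
\qquad P(f)(t) = \int_0^t f(s)\,\mathrm{d}s .
```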
Furthermore, the free Baxter algebra of Cartier or Rota is in the category of Baxter algebras not necessarily having an identity, while the free Baxter algebra we consider is in the category of Baxter algebras with an identity. When specialized to the case considered by Rota or Cartier, the free Baxter algebra we construct contains their free Baxter algebra as a sub-Baxter algebra in the category of Baxter algebras not necessarily having an identity. The mixable shuffle product of Baxter algebras allows us to study in more detail the properties of Baxter algebras. This will be the subject of a forthcoming paper. Shuffle products have occurred in many other fields and contexts, such as Hopf algebras, algebraic K-theory, algebraic topology and combinatorics, as well as in computational mathematics and applied mathematics; see, for example, [4,5,7,9,12,14,19]. One might hope that the mixable shuffle introduced here will provide interesting and useful generalizations to those theories. One might also hope that the mixable shuffle product will be useful for the applications of Baxter algebras, such as those considered by Baxter and Rota. In this paper, we first give a brief summary of basic definitions and properties of Baxter algebras in section 2. We then make a careful study of mixable shuffles in section 3. In section 4 we apply properties of mixable shuffles to construct mixable shuffle product algebras and to prove that any free Baxter algebra is isomorphic to such an algebra. In this section, we also briefly consider variations of the free construction by examining the free Baxter algebra on a set, on a commutative monoid and on a module. The relation between the free Baxter algebras we construct and the free Baxter algebras constructed by Rota and Cartier is described in section 5. We conclude by considering the special case of the free Baxter algebra on the empty set. Definitions and basic properties In this paper, any ring R is commutative with identity element 1 R . All notation will be standard unless otherwise noted. In particular, we write N for the additive monoid of natural numbers {0, 1, 2, . . .} and N + = {n ∈ N | n > 0} for the positive integers. Also, ( n k ) will denote the usual binomial coefficient defined for any n, k ∈ N with k ≤ n by ( n k ) = n!/(k!(n − k)!). Definition 2.1. Let C be a ring, q ∈ C, and let R be a C-algebra. A Baxter operator on R over C is a C-module endomorphism P of R satisfying P (x)P (y) + qP (xy) = P (xP (y)) + P (yP (x)), x, y ∈ R. (2) We will find it convenient for the remainder of this paper to write equation (2) in the form P (x)P (y) = P (xP (y)) + P (yP (x)) + λP (xy), so that our λ is −q. We will also say that P has weight λ. Definition 2.2. A Baxter C-algebra of weight λ is a pair (R, P ) where R is a C-algebra and P is a Baxter operator of weight λ on R over C. If the meaning of λ is clear, we will suppress λ from the notation. Note that the mapping 0 : R → R defined by 0(r) = 0 for all r ∈ R is trivially a Baxter operator on R over R, for any ring R. Hence every C-algebra can be viewed as a Baxter C-algebra. Definitions of basic concepts for C-algebras can be similarly defined for Baxter C-algebras. In particular, let (R, P ) and (S, Q) be two Baxter C-algebras of weight λ. A homomorphism of Baxter C-algebras f : Let Bax C,λ denote the category of Baxter C-algebras of weight λ. A Baxter ideal of (R, P ) is an ideal I of R such that P (I) ⊆ I. 
For a Baxter ideal I of (R, P ), the quotient Baxter C-algebra is the quotient algebra R/I, together with the C-linear endomorphismP : R/I → R/I induced from P . If f : (R, P ) → (S, Q) is in Bax C,λ , then ker f is a Baxter ideal of R, and imf , with the restriction of Q, is a Baxter sub-C-algebra of (S, Q). Mixable shuffle products This section is a preparation for the next section, where we will give a description of free Baxter algebras in terms of mixable shuffles. We start with a study of mixable shuffles in the context of permutations. We then apply this study to mixable shuffles of vectors. Finally we consider mixable shuffles of tensors, using the mixable shuffle of vectors as a "generic" form. For the purpose of providing a solid foundation for later applications, we give full details of the proofs, even though some of them might be intuitively clear to the expert. So some readers might just want to look at the definitions and results and move on to the next section. 2. Denote the set on the left hand side of the equation byS (i) (m, n). We use induction on m + n, with m, n ≥ 1. When m + n = 2, this can be verified directly. In general, defineS (4), we obtain, Here A ∼ = B means that the two sets A and B have the same cardinality. By the inductive assumption, the size of the right hand side is Applying Pascal's identity, we see that the last sum is also the value of ( m+n−i m )( m i ). 3. This follows from part 2 by summing over i for i = 0, . . . , n. 4. The proof is similar to the proof of part 3. For 0 ≤ k ≤ n + ℓ, denotē We will use induction on j = m + n + ℓ, m, n, ℓ ≥ 1 to prove When m + n + ℓ = 3, the equation can be verified directly. Now assume that the equation holds for m + n + ℓ < j and consider the case when m + n + ℓ = j. Denotē Here again ∼ = stands for a bijection between sets. Taking cardinalities and applying the induction hypothesis, we see that the left hand side of equation (6) is On the other hand, using Pascal's identity we see that the right hand side of the equation (6) equals This completes the induction, proving equation (6). Then the equation in part 4 is obtained by summing equation (6) for k = 0, . . . , n + ℓ. Mixable shuffles of vectors Let Ω be a countable infinite set. Let Ω be the set consisting of finite non-empty subsets of Ω. where, for each 1 ≤ ℓ < m, is called a shuffle of F and G. Denote for the set of shuffles of F and G. 3. Let σ ∈ S(m, n) and let T be a subset of T σ . The element is called a mixable shuffle of F and G. DenoteS for the set of mixable shuffles of F and G. If we further have H = (H 1 , . . . , H ℓ ) ∈ Ω ℓ , then we denotē For σ ∈ S(m, n, ℓ) and T ∈ T σ , define a mixable shuffle of F, G and H by for the set of mixable shuffles of F, G and H. Proof. 1. We use induction for m + n with m, n ≥ 1. When m = 1, S(X, Y ) contains the vectors So there are at least 2n + 1 elements in S(X, Y ). By construction, the set S(X, Y ) has no more elements than S(m, n) which is s(1, n) = 2n + 1 by Proposition 3.1. So the claim holds in this case. A similar argument verifies the claim in the case when n = 1. Now assume that the claim is true for m + n < k with m, n > 1, and consider the case when m + n = k. Since m, n ≥ 2, it makes sense to definē By assumption, X 1 , Y 1 and X 1 ∪ Y 1 are distinct. So we havē By induction hypothesis, the three sets on the right have cardinalities s(m−1, n), s(m, n− 1) and s(m − 1, n − 1). So which is s(m, n) by equation (4). So by Proposition 3.1, |S(X, Y ) |= s(m, n). Then from equation (7) we havē 2. 
We prove by induction on m + n, m, n ≥ 1. The statement can be directly verified for m + n ≤ 2. Assume that it is true for m + n < k and let X ∈ A m , Y ∈ A n with m + n = k. Then and similarly,S Then it follows from Equation (8) and its symmetric form forS(Y, X) thatS(X, Y ) = S(Y, X). 3. We first prove thatS is a disjoint union. By assumption, X i , Y j and Z k are disjoint subsets of Ω. Since each component of U = (U r ) ∈S(X, Y ) is either a X i , or Y j or a X i ∪ Y j , it follows that the subset U r and Z k are also disjoint. Let U = (U 1 , . . . , U r ) and U ′ = (U ′ 1 , . . . , U ′ s ) be two distinct mixable shuffles of X and Y . If r = s, then without loss of generality, we could assume that r > s. Then there is a U r 0 that is different from any U ′ j and therefore is disjoint with any U ′ j . Thus U r 0 is disjoint with any component of any W ∈S(U ′ , Z). On the other hand, U r 0 has non-trivial intersection with some component of every W ∈S(U, S). Therefore,S(U, Z) ∩S(U ′ , Z) = φ. Now assume that r = s. We use induction on r. For r = 1, U = U ′ means U 1 is different from any components of U ′ . So U 1 is disjoint from any components of W ∈S(U ′ , S), and the claim is proved. Assume that the claim is true for r, and let U and U ′ both have length r + 1. SupposeS(U, Z) ∩S(U ′ , Z) is not empty. Then there is a W ∈S(U, Z) ∩S(U ′ , Z). Write W = (W 1 , · · · , W k ), then W 1 = U 1 or Z 1 or U 1 ∪ Z 1 . If W 1 = U 1 , then since U 1 is disjoint from any Z ℓ , from W ∈S(U ′ , Z) we get W 1 = U ′ 1 . This shows that (W 2 , . . . , W k ) ∈S((U 2 , . . . , U r ), Z) ∩S((U ′ 2 , . . . , U ′ r ), Z). But since U 1 = W 1 = U ′ 1 , from U = U ′ we get (U 2 , . . . , U r ) = (U ′ 2 , . . . , U ′ r ). Then by induction assumption,S ((U 2 , . . . , U r ), Z) ∩S((U ′ 2 , . . . , U ′ r ), Z) = φ. This is a contradiction. For the same reason, W 1 = Z 1 or U 1 ∪ Z 1 also implies contradiction. Therefore, the claim is true for r + 1. This proves that To prove the second equality in part 3, definē and similarly forS(X, Y ) Yn ,S(X, Y ) Xm∪Yn . Then the same argument as above gives The rest of the proof is similar. Mixable shuffles of tensors Notation: For the rest of this paper, let λ be a fixed element of C. For any C-modules M and N, the tensor product M ⊗ N is taken over C unless otherwise indicated. Let A be a C-algebra. For n ∈ N, denote with the convention that A ⊗0 = C. 1. σ(x ⊗ y) ∈ A ⊗(m+n) is called a shuffle of x and y. Let T be a subset of T σ . The element where for each pair (k, k + 1), 1 ≤ k < m + n, is called a mixable shuffle of x and y. It follows from the universal property of the tensor product A ⊗k that σ(x ⊗ y; T ) does not depend on the choice of x 1 , . . . , x m , y 1 , . . . , y n representing the tensor x ⊗ y. Now fix λ ∈ C. Define, for x and y as above, The operation ⋄ + extends to a mapping Extending by additivity, the binary operation ⋄ + gives a C-bilinear map is the scalar multiplication. This binary operation is called the mixable shuffle product of weight λ. Theorem 3.5. The mixable shuffle product ⋄ + defines an associative, commutative binary operation on X + C (A) = k∈N A ⊗k , making it into a C-algebra with the identity 1 C ∈ C = A ⊗0 . X + C (A) will be called the mixable shuffle algebra (of weight λ) on A. In the special case when λ = 0, X + C (A) is denoted by Sh(A) in [19]. Proof. We only need to verify the commutativity and associativity of the operation ⋄ + . For this we make use of the mixable shuffle of vectors discussed in the previous section. 
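A low-degree instance of the product just defined may help fix ideas; the following worked example is an added illustration consistent with the definition of the weight-λ mixable shuffle product above, and is not part of the original proof. For a, b ∈ A = A^{⊗1}, the two shuffles are a ⊗ b and b ⊗ a, and exactly one "merged" term ab arises, weighted by λ:

```latex
% Illustrative low-degree computation (an addition, not quoted from the paper):
% the weight-\lambda mixable shuffle product of two one-component tensors.
a \diamond^{+} b \;=\; a \otimes b \;+\; b \otimes a \;+\; \lambda\, ab
\;\in\; A^{\otimes 2} \oplus A^{\otimes 1}.
```

In particular, for λ = 0 this reduces to the classical shuffle product a ⊗ b + b ⊗ a.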
Recall that Ω is an infinite set, and Ω is the set of finite non-empty subsets of Ω. where, for each 1 ≤ ℓ < m, If ϕ sends each of the capital letters to the corresponding lower case letter in the polynomial algebra A = C[x 1 , x 2 , y, z], then For any fixed m, n, ℓ and fixed x ∈ A ⊗m , y ∈ A ⊗n and z ∈ A ⊗ℓ , choose distinct elements X 1 , . . . , X m , Y 1 , . . . , Y n , Z 1 , . . . , Z ℓ from the infinite set Ω. Also use the same letters for the singletons {X i }, {Y j } and {Z k } of Ω. Let ϕ be the map sending X i to x i , Y j to y j and Z k to z k . Then ⋄ + (of weight λ) could be described as where deg(U) =| T | if U is given by (σ, T ) ∈S(m, n). The commutativity of ⋄ + follows from Proposition 3.3.2. Let a mixable shuffle W ∈S(X, Y, Z) be given by (σ, T ) ∈S(m, n, ℓ), i.e., W = σ((X, Y, Z); T ). Define deg(W ) = deg T , with deg T defined in equation (5). By Proposition 3.3.3, W could also be obtained from a mixable shuffle V inS(U, Z), given by (σ 1 , T 1 ), where U is from a mixable shuffle inS(X, Y ), given by (σ 2 , T 2 ). Thus we have V = σ 1 ((U, Z); T 1 ) and U = σ 2 ((X, Y ); T 2 ). It follows from the definition of W and deg W that the length of the vector W is m + n + ℓ − deg(W ). Since W is also given by V = σ 1 ((U, Z); T 1 ), its length is also given by Thus we have deg(W ) = deg(U) + deg(V ). Then it follows from Proposition 3.3.3 that We similarly have This proves the associativity. The free Baxter algebra We now use the mixable shuffle product from last section to describe the free Baxter algebras. We will first construct the free Baxter algebra on a C-algebra A. We will then give constructions of other types of free Baxter algebras. The basic free construction With the same notations as those in last section, we define X C (A) to be the tensor product algebra A ⊗ C X + C (A). Thus as A-modules and the product on X C (A) is defined by the augmented mixable shuffle product (of weight λ) Define a C-linear endomorphism P A on X C (A) by assigning for all x 0 ⊗ x 1 ⊗ . . . ⊗ x n ∈ A ⊗(n+1) and extending by additivity. Let j A : A → X C (A) be the canonical inclusion map. Theorem 4.1. For any C-algebra A, (X C (A), P A ), together with the natural embedding j A : A → X C (A), is a free Baxter C-algebra on A (of weight λ) in the sense that the triple (X C (A), P A , j A ) satisfies the following universal property: For any Baxter C-algebra (R, P ) and any C-algebra map ϕ : A → R, there exists a unique Baxter C-algebra homomorphismφ : (X C (A), P A ) → (R, P ) such that the diagram commutes. Remark: By the same argument used to show the uniqueness of other "universal" objects, X C (A) is the unique free Baxter C-algebra on A up to isomorphism. The Baxter C-algebra X C (A) will be called the free Baxter C-algebra (of weight λ) on A. Proof: We first show that P A is a Baxter operator on X C (A). For this we only need to verify that for any x, y ∈ X C (A), By additivity, we only need to verify this equation for any x = x 1 ⊗ . . . ⊗ x m ∈ A ⊗m and y = y 1 ⊗ . . . ⊗ y n ∈ A ⊗n . By definition, Recall that equation (4) gives us S(m, n) =S 1,0 (m, n) Therefore, This shows that P A is a Baxter operator of weight λ on X C (A), making it into a Baxter C-algebra. Before verifying that (X C (A), P A ) satisfies the universal property of a free Baxter C-algebra over A, we need some preparations. Let (R, P ) be a Baxter C-algebra. For x, y ∈ R, denote P x (y) = P (xy). For x 0 ⊗ . . . ⊗ x k ∈ R ⊗(k+1) , denote with the convention that • 0 r=1 P xr = id R . 
It follows from the universal property of the tensor product R ⊗(k+1) and the C-linearity of the Baxter operator P that the right hand side of equation (10) is well-defined, and does not depend on the choice of x 0 , . . . , x k ∈ R representing the tensor x 0 ⊗ . . . ⊗ x k . For σ ∈ S n , denote For m, n ∈ N + , denote and, for (σ, T ) ∈S(m, n), denote σ((• m r=1 P xr ) • (• n s=1 P ys ); T ) = P z 1• . . .•P z m+n where for each (k, k + 1), 1 ≤ k < m + n, Proof. It is clear that the equation in general follows from the case when x 0 = y 0 = 1, in which case the equation is with m, n ≥ 1. For this we prove by induction on k = m + n. If k = 2, then m = n = 1. The equation to be proved in this case is P (x 1 )P (y 1 ) = P (x 1 P (y 1 )) + P (y 1 P (x 1 )) + λP (x 1 y 1 ) which is part of the definition of P . Assuming that the equation holds for all x ∈ R ⊗(m+1) , y ∈ R ⊗(n+1) with m + n < k. Let x ∈ R ⊗(m+1) , y ∈ R ⊗(n+1) with m + n = k. Then +λP (x 1 y 1 (σ,T )∈S(m−1,n−1) This completes the proof of Proposition 4.2. We now continue with the proof of Theorem 4.1 and verify the universal property for X C (A). For a given Baxter C-algebra A and a C-algebra map ϕ : A → R, we extend ϕ to an Baxter C-algebra homomorphismφ : X C (A) → (R, i) as follows. For . This is a well-defined C-linear map, hence extends uniquely by additivity to a C-module homomorphismφ : It follows from the definition of the operation ⋄ and Proposition 4.2 thatφ preserves multiplication. Sinceφ is C-linear, and ϕ is a homomorphism of Baxter C-algebras. It is clear from its construction thatφ is the unique homomorphism of Baxter C-algebras extending ϕ. This proves Theorem 4.1. Let Alg C be the category of C-algebras and let U C : Bax C → Alg C be the forgetful functor. 1. The assignment A → X C (A) gives a functor where for a C-algebra homomorphism f : 2. X C is the left adjoint functor of the forgetful functor U C . 3. Any Baxter C-algebra is isomorphic to a quotient of (X C (A), P A ) for some Calgebra A. By Corollary 4.3, the study of Baxter C-algebras is reduced to studying quotients of free Baxter C-algebras. From the proof of the theorem and Proposition 4.2, we also obtain Corollary 4.4. 1. For any sub-C-algebra B of a Baxter C-algebra (R, P ), the Baxter sub-C-algebra of R generated by B is generated by as an additive group. 2. For any C-algebra homomorphism ϕ : A → R, the image ofφ : X C (A) → R is the Baxter subalgebra of (R, P ) generated by ϕ(A). Proof. LetB be the Baxter sub-C-algebra of (R, P ) generated by B. Denote and letB ′ be the additive subgroup of R generated by S. Since S is closed under scalar multiplication by C and the operator P ,B ′ is a C-module and is closed under P . By Proposition 4.2,B ′ is closed under multiplication. Therefore,B ′ is a sub-C-algebra of R, hence containsB. On the other hand, sinceB contains B and is closed under multiplication and Baxter operator, it must contain S. ThenB must containB ′ by closure under addition. This proves the first statement. By its construction, the Baxter C-algebra X C (A) is generated by j A (A). Sinceφ is an Baxter C-algebra homomorphism,φ(X C (A)) is also an Baxter C-subalgebra, and is generated byφ(j A (A)) = ϕ(A). Other free constructions The construction of the free Baxter C-algebra X C (A) on a C-algebra A in the last part could be combined with other free constructions to obtain free Baxter algebras on other structures. We now discuss the free Baxter algebra on a set. 
The free Baxter algebras on a monoid or on a C-module will also be considered. For a given set X, let C[X] be the polynomial C-algebra on X with the natural embedding X ֒→ C[X]. Let (X C (X), P X ) be the Baxter C-algebra (X C (C[X]), P C[X] ). Proposition 4.5. (X C (X), P X ), together with the set embedding is a free Baxter C-algebra on the set X, described by the following universal property: For any Baxter C-algebra (R, P ) over C and any set map ϕ : X → R, there exists a unique Baxter C-algebra homomorphismφ : (X C (X), P X ) → (R, i) such that the diagram Remark: When λ = −1, it is easy to show that X C (X) is closely related to the free Baxter algebra constructed by Cartier [2]. See section 5 for detail. Proof. We only need to verify that (F C (X), P X ) satisfies the universal property of a free Baxter C-algebra on X. Fix a given Baxter algebra (R, P ) over C, and a set map ϕ : X → R. By the universal property of the C-algebra C[X], ϕ extends uniquely by multiplicity and C-linearity to a C-algebra homomorphismφ : C[X] → R. Then by Theorem 4.1,φ extends uniquely to a homomorphism of Baxter C-algebras This proves the proposition. Note that the free Baxter C-algebra on a set X can be described as the composite of two free constructions. First we construct the free C-algebra C[X] on X, and then we construct the free Baxter C-algebra on C[X]. In a similar manner, we can construct the free Baxter C-algebra on a monoid M by first constructing the free Calgebra C <M > on M and then the free Baxter C-algebra on C <M >. Here, for a given commutative monoid M, C <M > is the free C-module ⊕ x∈M Cx, where the multiplication on C <M> is induced by the multiplication on M. Likewise, we can construct the free Baxter C-algebra on a C-module N by first constructing the tensor C-algebra T C (N) on N and then the free Baxter C-algebra on T C (N). Relation to constructions of Cartier and Rota We now give an explicit description of the relation between our construction of free Baxter algebras and the constructions of Cartier [2] and Rota [15]. As before, let C be a commutative ring with identity. Let Alg 0 C be the category in which the objects are C-algebras not necessarily having an identity and the morphisms preserve the addition and the multiplication, but do not necessarily preserve the identity. Rota and Cartier considered the category Bax 0 C whose objects are pairs (R, P ) where R is an object in Alg 0 C and P is a Baxter operator (of weight −1), and whose morphisms are morphisms in Alg 0 C that commute with the Baxter operators. For any non-empty set X, the existence of a free Baxter algebra on X in either of the two categories Bax C or Bax 0 C can be proved by general results from universal algebra [6,11]. Rota and Cartier have given explicit descriptions of the free Baxter algebra in Bax 0 C . Below we will focus on the relation of our construction of free Baxter algebras in Bax C with Cartier's construction of free Baxter algebras in Bax 0 C . This will also explain the relation with Rota's construction of free Baxter algebras in Bax 0 C , since, by the uniqueness of universal objects in Bax 0 C , the free Baxter algebras of Cartier are isomorphic to the free Baxter algebras of Rota. We first recall the free Baxter algebra in Bax 0 C constructed by Cartier. Let M be the free commutative semigroup with identity on X. Let X denote the set of symbols of the form Let B(X) be the free C-module on X. 
Cartier gave a C-bilinear multiplication ⋄ c on B(X) by defining if j is the β − th element in Q, j ∈ P ; a α b β , if j is the α − th element in P and the β − th element in Q Define a C-linear operator P c X on B(X) by Cartier proved that the pair (B(X), P c X ) is a free Baxter algebra on X in the category Bax 0 C On the other hand, since C[X] is a free C-module on M, it follows from our construction of the mixable shuffle product algebra X C (X) = X C (C[X]) that X C (X) is a free C-module on the setX of tensors The Baxter operator P X on X C (X) is defined by We define a map f : X →X by and extend it by C-linearity to a C-linear map f : B(X) → X C (X). Proposition 5.1. f is an injective morphism in Bax 0 C , identifying B(X) with the sub-Baxter algebra of X C (X) with the C-basis Cartier showed that the pair (B(X), h) is a free Baxter algebra on X in the category Bax 0 C . Define g : X → X C (X) by g(x) = x ∈ X C (X), x ∈ X, and regard X C (X) as an element in Bax 0 C (X). By the universal property of B(X), there is a unique morphism in Bax 0 C such thatg(x · [ ]) = x. It follows from the fact thatg preserves multiplication thatg(u 0 · [ ]) = u 0 for u 0 ∈ M, u 0 = 1. Thusg is the C-linear map f defined above, proving that f is a morphism in Bax 0 C . Since f is C-linear and sends the C-basisX of B(X) injectively to the C-linearly independent set {u 0 ⊗ . . . ⊗ u m | m ≥ 0, u i ∈ M, u m = 1}, we see that f is injective. Proposition 5.1 enables us to identify B(X) as a sub-Baxter algebra of X C (X) in the category Bax 0 C . We further have Proposition 5.2. The injective morphism f : B(X) → X C (X) in Bax 0 C satisfies the following universal property. For any element A in Bax C , also regarded as an element in Bax 0 C , and any morphism φ : B(X) → A in Bax 0 C , there is a unique morphism φ : X C (X) → A in Bax C such thatφ • f = φ. Proof. With the notations introduced in the proof of Proposition 5.1, we have the following diagram We only need to findφ : X C (X) → A in Bax C such that the lower right triangle commutes. By the universal property of X C (X) in Bax C , the map φ • h : X → A induces a morphism ψ : X C (X) → A in Bax C such that ψ • g = φ • h. We then have From the universal property of B(X) in Bax 0 C , we obtain So we can takeφ = ψ. 6 The free Baxter C-algebra on C As a particular example, we consider X C (C), the free Baxter C-algebra on C. X C (C) is also X C (φ), the free Baxter C-algebra on the empty set φ, defined in Corollary 4.5. This free Baxter algebra not only provides the simplest example of a free Baxter algebra, it also helps to explain an important difference between our construction of the free Baxter algebra and the construction of the free Baxter algebra of Rota or Cartier. As we see from the previous section, the free Baxter algebras of Rota and Cartier are in Bax 0 C and have no identity. In fact, the free Baxter algebra on φ in Bax 0 C is the zero algebra with the zero Baxter operator. We also have the following consequence of Theorem 4.1. Corollary 6.2. X C (C) is the initial object in the category Bax C of Baxter C-algebras. In other words, for any Baxter C-algebra (R, P ), there is a unique Baxter C-algebra homomorphism (X C (C), P C ) → (R, P ). Now we consider the special case when λ = 0. In this case this C-algebra has been studied earlier [19, §12.3], as the shuffle algebra Sh(C) over C. The following facts from there can easily be verified. 1. The free Baxter C-algebra X C (C) is a bialgebra. 2. 
For any m, n ∈ N, 1^{⊗(m+1)} ⋄ 1^{⊗(n+1)} = C(m+n, n) · 1^{⊗(m+n+1)}, where C(m+n, n) denotes the binomial coefficient. Now let HC be the ring of Hurwitz series over C [8], defined to be the set of sequences {(a_n) | a_n ∈ C, n ∈ N} in which the addition is defined componentwise and the multiplication is defined by (a_n)(b_n) = (c_n) with c_n = Σ_{k=0}^{n} C(n, k) a_k b_{n−k}. Denote by e_n the sequence (a_k) in which a_n = 1_C and a_k = 0 for k ≠ n. Since e_n e_m = C(m+n, n) e_{m+n}, from Proposition 6.3 we obtain Proposition 6.4. Let λ = 0. The assignment 1^{⊗(n+1)} ↦ e_n, n ≥ 0, defines an injective homomorphism of Baxter C-algebras from X_C(C) to HC, identifying X_C(C) with the subalgebra of "Hurwitz polynomials" {(a_n) ∈ HC | a_n = 0 for n ≫ 0} = {Σ_{n≥0} a_n e_n | a_n = 0 for n ≫ 0}.
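To make Proposition 6.4 concrete, here is a small illustrative sketch (not taken from the paper; the function names are ad hoc). It implements the Hurwitz product for finitely supported sequences over the integers and checks the identity e_n e_m = C(m+n, n) e_{m+n}, the λ = 0 counterpart of the relation 1^{⊗(m+1)} ⋄ 1^{⊗(n+1)} = C(m+n, n) 1^{⊗(m+n+1)}:

```python
from math import comb

def hurwitz_mul(a, b):
    """Multiply two finitely supported Hurwitz series.

    A series is represented as a coefficient list [a_0, a_1, ...];
    the product is (a*b)_n = sum_k C(n, k) * a_k * b_{n-k}.
    """
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += comb(i + j, i) * ai * bj
    return out

def e(n):
    """The basis sequence e_n: 1 in position n, 0 elsewhere."""
    return [0] * n + [1]

# e_n * e_m = C(m+n, n) * e_{m+n}
m, n = 2, 3
assert hurwitz_mul(e(m), e(n)) == [comb(m + n, n) * c for c in e(m + n)]
print(hurwitz_mul(e(2), e(3)))  # the coefficient 10 appears in position 5
```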
2014-10-01T00:00:00.000Z
2000-03-01T00:00:00.000
{ "year": 2004, "sha1": "db0ef4fa8404e927b986c69add58033e46ef0ee4", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1006/aima.1999.1858", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "781b6e70a20465b0c08d650d62530b85220b9b22", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
16869325
pes2o/s2orc
v3-fos-license
Is screening for colorectal cancer worthwhile? The large amount of research into screening for colorectal cancer has shown that it is feasible and that its early findings are reasonably optimistic. However, information about its effects on reducing mortality and incidence is still lacking and it cannot be recommended other than on a research basis. The well-recognised adenoma-carcinoma sequence in the natural history of large bowel cancer suggests that this is a malignancy likely to be susceptible to screening. Detection and removal of precancerous adenomas and early invasive carcinomas should lead to a reduction in incidence and mortality respectively. Moreover the disease itself and the methods of testing for it fulfil many of the criteria required before implementation of a public health screening programme (Wilson & Jungner, 1968). Firstly, colorectal cancer is a major problem in most developed countries. In the UK it is the third commonest cause of death from malignant disease, coming after lung cancer in men and breast and lung cancer in women. The number of colorectal cancer deaths in England and Wales in 1987 was 8,228 men and 8,825 women (OPCS, 1989). Survival rates have improved slightly over recent decades, 5-year relative survival of all registered cases diagnosed in 1971 being 30% (OPCS, 1981) and of cases registered in 1979-81 being 35% (OPCS, 1986). Nevertheless, recent advances in treatment have had less success in improving survival than they have in improving patient comfort, and the principal determinant of survival remains the stage of the tumour at presentation. Survival rates of patients with tumours diagnosed at Dukes' stage A are 80% or more, falling to 20% or less for those diagnosed at Dukes' stage C or who already have metastases. Unfortunately in normal clinical practice less than 10% of patients present with stage A disease (Stower & Hardcastle, 1985). Another criterion for screening is that the natural history of the disease should be understood. This requirement is not fully met in the case of colorectal cancer or, for that matter, for any other malignant disease. Nevertheless it is known that at least some carcinomas of the large bowel develop within polypoid adenomas, and that increasing size of adenomas and a villous, as opposed to tubular, histology indicate increasing likelihood of malignancy (Morson, 1976). This implies a progression of epithelial changes of increasing severity over time. The great majority of adenomas, however, do not progress to malignancy, as shown by the prevalence of adenomas found at autopsy in people who died from other causes. One study (Vatn & Stalsberg, 1982) estimated that the autopsy prevalence of large bowel adenomas was ten times higher than the cumulative lifetime incidence of colorectal cancer. Little is known about the distribution of time from onset of adenoma to invasion in those adenomas that do progress. In a recent series of patients with a polyp that was left untreated, the cumulative risk of progression to cancer was 2.5% at 5 years, 8% at 10 years and 24% at 20 years (Stryker et al., 1987).
Recent evidence on the molecular genetics of colorectal cancer also supports the adenoma-carcinoma sequence, since mutation of the ras oncogene occurs in premalignant adenomas as well as colorectal carcinomas; it has been suggested that two later events in carcinogenesis, recessive changes on chromosomes 5 and 18, mark the transition from adenoma to carcinoma (Kerr, 1989). In summary, the natural history suggests that detection and removal of adenomas may prevent some invasive cancers, although most of those detected would be non-progressive, and detection and removal of stage A cancers may prevent some deaths. There is insufficient knowledge of the distribution of the duration of the pre-invasive phase or the stage A phase to decide an appropriate interval between repeated screens. Who should be screened? The clearest risk factor for colorectal cancer is age. Both incidence (OPCS, 1988) and mortality (OPCS, 1989) rise steeply with increasing age; 94% of cases and 96% of deaths occur among people aged over 50. Family history is the only other risk factor currently identifiable in the general population, but is not nearly sensitive enough for population screening because the great majority of tumours occur in people with no affected relatives. At young ages, however, a history of familial polyposis coli or similar inherited syndromes is an indication for screening. Follow-up of such families is more akin to clinical management of patients than population screening and it will not be considered further here. Three principal tests have been used to screen for colorectal neoplasia: digital rectal examination, sigmoidoscopy and faecal occult blood. Digital examination is an easy test included as a routine part of clinical examination of a person with gastrointestinal symptoms. But it is of minimal value for screening since less than 10% of colorectal tumours are within range of the examining finger (Winawer et al., 1985). Sigmoidoscopy has been widely practised as a screening test in the USA (e.g. Gilbertson, 1974). But this too is clearly limited by the range of the instrument, the rigid sigmoidoscope only reaching the distal 15-20 cm of the bowel and the flexible fibreoptic sigmoidoscope reaching up to 60 cm. The latter range includes the whole rectosigmoid region in which 50% of colorectal neoplasia occurs. It seems to be assumed that sigmoidoscopy is 100% sensitive for detecting tumours in its range (no false negatives), and also 100% specific (no false positives). Its acceptability to a general population is unknown because its use in the US has been among volunteers. Recently much more research has been done on the use of faecal occult blood tests (UKCCCR, 1989), and their value has been comprehensively reviewed by an EC/ESO Advisory Group (Hardcastle et al., 1990). The qualitative tests, which may be chemical or immunological, are quick, easy and cheap to perform. In some test kits, e.g. Haemoccult, the person being screened places a small stool sample on a guaiac-impregnated card and sends it off to be tested; in others, e.g. Coloscreen, he performs and interprets the test himself, by observing colour change. Unlike sigmoidoscopy, faecal occult blood tests can detect blood from any part of the bowel but, because haemoglobin is degraded as it passes through the gastrointestinal tract, they are less sensitive for upper gastrointestinal lesions than for lower.
They are also less sensitive for rectal lesions than for higher left-sided lesions possibly because there has been less opportunity for blood to be diffused widely through the whole stool. Blood loss from colorectal cancers is variable from day to day (Doran & Hardcastle, 1982), and therefore the usual, if arbitrary, recommendation is that three or six successive stool specimens should be screened. Adenoma detection is related to the size of the adenoma (Macrae & St John, 1982). Test sensitivity is also inversely related to the dryness of the stool sample when tested and some authorities recommend rehydration. There is also a problem with false positives in faecal occult blood testing. Red meat and peroxidase-containing vegetables such as tomatoes may give false positive results to the chemical tests and therefore dietary restriction for three days before the test is sometimes recommended. Immunological tests are specific to human haemoglobin but detect levels within the range of normal blood loss, thus leading to many false positives. Thus faecal occult blood tests for use in screening need to balance their level of sensitivity for detection of haemoglobin against the requirement to keep false positives as low as possible. Sensitivity of screening in the epidemiological sense describes the test's ability to pick up all the neoplasia detectable at that time plus that which is likely to arise to a symptomatic stage in the interval before the next routine screen. Cancers presenting symptomatically after a negative screen are known as interval cases. Sensitivity may be expressed as the proportion of cancers which are screen-detected out of the sum of screen-detected and interval cancers. Using this definition and a two year interval, the sensitivity of Haemoccult screening in a large population based study in Nottingham was 75% (Hardcastle et al., 1989). An alternative definition expresses sensitivity as the proportion of cancers whose diagnosis was advanced by screening out of all those expected in the interval after screening. The expected incidence can be derived from that in a control group. The same study calculated sensitivity by this method to be 65%. The sensitivity for adenoma detection is unknown. Specificity means the test's ability to discard all people without neoplasia, and in the Nottingham study was 99%. The predictive value of a positive screening test in Nottingham was 58%, in Funen, Denmark was 57% (Kronborg et al., 1989) and in Dijon, France was 44% (Bedenne et al., 1990). Another major determinant of the success of any screening programme is its acceptance by the target population. Nicholls et al. (1986) compared different methods of invitation to do a Haemoccult test and found that acceptance was greatest (57%) among people offered the test during a consultation with their general practitioner, but was only 38% when the test was sent by post. Inclusion of an educational leaflet made no difference, a fact also confirmed in Nottingham (Pye et al., 1988). In Dijon, administration of the test by a doctor also achieved higher response (57%) than when it was mailed (40%). Acceptance of a posted Haemoccult test in the Nottingham study (Hardcastle et al., 1989) has been 53% with a slight variation with age and sex (greatest among men aged 55-69 and women 50-69), but higher rates of 65% have been found in Scandinavian countries (Kronborg et al., 1987;Kewenter et al., 1988). The factors influencing compliance were studied by Farrands et al. 
(1984), who found that acceptors of screening had much more positive attitudes towards preventive medicine and were more optimistic about health than non-attenders. This emphasises the need for education targetted on people with negative fatalistic views about prevention, for otherwise they will continue to decline screening and present later with advanced disease, thus lessening the potential of the screening programme to achieve its objective. From the foregoing it is clear that screening for colorectal cancer by faecal occult blood is feasible, having tolerable levels of acceptance, sensitivity and specificity. But what of its effectiveness? All of the research so far, concludes that screening can meet the initial requirements for success, namely an increased prevalence of cancer and adenomas at the first screen, and a shift towards an earlier stage distribution. The Nottingham study found a prevalence of cancer three times greater than the annual incidence in the control group and a prevalence of adenoma 40 times greater. Moreover the proportion of stage A cancers in all recent studies is over 50%, and in two (Gilbertsen, 1974;Kronborg et al., 1989) the survival of screen-detected cases is shown to be greater than that of a control population. But, although changes in prevalence, stage distribution and survival are necessary findings if screening is to succeed, they are insufficient proof that screening saves lives (Chamberlain, 1988). A reduction in the death rate from colorectal cancer in the whole target population is the only valid way of proving benefit from screening for cancer -or a reduction in incidence of invasive cancer in the case of screening for premalignant adenomas. Retrospective correlation of death rates with screening intensity is sometimes possible. In West Germany where screening has been available for people over 45 for many years mortality rates have fallen by nearly 20% in the past 10 years but there is insufficient information on screening intensity to draw any firm conclusion on cause and effect (Robra, personal communication). A case-control study to compare the screening history of people who have died of colorectal cancer and matched living controls is in progress. A preferable form of evaluation is a prospective randomised controlled trial and several of these are under way. The earliest was a trial of a multiphasic health check-up in which sigmoidoscopy was one of the tests offered. After 11 years this study reported a statistically significant reduction of 70% in colorectal cancer mortality among the study group who had been invited to be screened, but closer examination of the data revealed a number of reasons why the lower mortality could not be attributed to screening (Selby et al., 1988). A trial of faecal occult blood screening has been in progress in Minnesota for the past 11 years. A report in 1987 showed no difference in overall gastrointestinal cancer mortality (Mandel et al., 1987) between the population offered screening and the control population, but no data are yet available on colorectal cancer mortality. This study enrolled only 30,000 subjects aged 50-79 into the study group and 15,000 into the control group, and most were volunteers; both the small sample size and the self-selected population suggest that the number of colorectal cancer deaths among the control group will be too small to be able to demonstrate a statistically significant difference without many years of follow-up. 
In Denmark a trial involving 31,000 subjects in each group has reported a deficit of deaths in the group offered screening (37 deaths) compared with the control group (51 deaths), but this is not statistically significant and, again, many years of further follow-up will be required (Kronborg et al., 1989). The Nottingham trial calculated that 56,000, subsequently revised to 78,000, subjects were required in each group to be 80% certain of showing a statistically significant difference of 20% or more in colorectal cancer mortality between study and control groups after a minimum follow-up of 7 years (Moss et al., 1987). The Swedish controlled trial (Kewenter et al., 1988) has a small sample of 13,750 subjects, aged 60-64 at entry, in each group, which, even with long follow-up, will probably only be capable of showing a difference if it is dramatically large. So far only one study (Gilbertson & Nelms, 1978) has reported the effect of adenoma removal on subsequent incidence. Screening in this study was by rigid sigmoidoscopy, and 40 rectal cancers were found over a 5-year period, compared with an expectation of 90 cancers. However, Miller (1987) has cast doubt on the way in which the number of expected cancers was calculated, suggesting that a figure of 38 is nearer the truth. Hence the effect of adenoma removal on subsequent cancer incidence also remains unproven. In summary, the large amount of research into screening for colorectal cancer has shown that it is feasible and that its early findings are reasonably optimistic. However, information about its effects on reducing mortality and incidence is still lacking and it cannot be recommended other than on a research basis. It is probably unnecessary to set up any more randomised controlled trials, but further research priorities include trials of methods of improving compliance, developing screening tests of greater sensitivity without loss of specificity, and investigating the costs as well as the benefits of this public health programme.
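The sample-size arithmetic behind these trials can be made concrete with a standard two-proportion approximation. The sketch below is not from the paper: the baseline cumulative colorectal cancer mortality used in the example is a hypothetical figure chosen only to show the order of magnitude, and the function name is ad hoc. It estimates how many subjects per group are needed to detect a 20% relative reduction in mortality with 80% power at the 5% two-sided significance level.

```python
from statistics import NormalDist

def n_per_group(p_control, relative_reduction, alpha=0.05, power=0.80):
    """Approximate subjects per arm for comparing two proportions:
    n = (z_{a/2} + z_b)^2 * (p1*q1 + p2*q2) / (p1 - p2)^2  (normal approximation).
    """
    p1 = p_control
    p2 = p_control * (1 - relative_reduction)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return numerator / (p1 - p2) ** 2

# Hypothetical baseline: ~0.45% cumulative colorectal cancer mortality over the
# follow-up period in the unscreened group (an assumed illustrative figure).
print(round(n_per_group(0.0045, 0.20)))  # roughly 78,000 per group
# A rarer outcome pushes the requirement even higher:
print(round(n_per_group(0.0035, 0.20)))  # roughly 100,000 per group
```

With a rare endpoint such as cause-specific mortality, even a 20% relative reduction requires tens of thousands of subjects per arm, which is why the Minnesota and Swedish samples are described above as too small.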
2014-10-01T00:00:00.000Z
1990-07-01T00:00:00.000
{ "year": 1990, "sha1": "ec6b210a31d916bdfaba1ab6ed83d7621f069917", "oa_license": null, "oa_url": "https://www.nature.com/articles/bjc1990216.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ec6b210a31d916bdfaba1ab6ed83d7621f069917", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
158642264
pes2o/s2orc
v3-fos-license
URBAN DEVELOPMENT OF BRATISLAVA: SUBURBANIZATION IN YEARS 1995-2009 This paper deals with 14 years of urban development in Bratislava, with particular attention to suburbanization processes. Its main subject is the spatial shape of suburbanization in this case, its intensity, and the regularities under which it occurs. Furthermore, the paper's main hypothesis is that suburbanization is dynamic and changes throughout the chosen time framework. This has to be confirmed by combining theoretical and empirical knowledge. All of the above assumptions have been confirmed. Introduction Modern suburbanization became one of the characteristic phenomena of growing cities in the second half of the 19th century. During the 20th century it became an object of study for several disciplines, such as architecture, spatial science, geography, economics, sociology and ecology. There are therefore many points of view on this process and thus many definitions of what suburbanization is and what it is not. It is beyond the scope of this paper to analyze or introduce all of these definitions, but with respect to the mostly geographical literature, the relevant definitions can be aggregated into three groups according to how they treat the process. In the first meaning, suburbanization may be understood as part of the conception of the stages of urban development introduced by Klaassen (Van den Berg and Klaassen 1986). According to this model, suburbanization is the second stage of city growth and occurs after the urbanization phase. With regard to demographic changes in the city and its surroundings, the authors distinguished two sub-stages: (a) relative decentralization, when the population of the city's surroundings is growing faster than that of the city itself, and (b) absolute decentralization, when the city suffers population decline while the population of the surroundings is increasing. This model has been discussed in detail and strongly criticized by some authors because of the questionable credibility of its cyclical and some other features (Champion 2001, Storper and Manville 2006, Fishman 2005). In the second meaning, suburbanization is considered a sociological issue: urban population moves to the rural environment while the urban way of life infiltrates along with it (Boyer 2001). Sometimes suburbanization is treated as a paradoxical process of seeking a lost community and of individualization at the same time. Bauman (2004) considers the actors of suburbanization to be tired of the anonymity and uniformity of the city, so that they look for a new, out-of-the-ordinary environment, which they finally find in the city's surroundings. This appears to be most typical of young families (Rerat 2012). The third, environmental meaning treats suburbanization as one of the most important factors responsible for land cover change (Antrop 2004). Suburbanization is often the most profitable activity to be located in the closest city surroundings, frequently without the environment being taken into account.
For the purpose of our work, the first meaning of suburbanization is the most important. In spite of the criticism noted above, we regard the Klaassen model as a good tool for determining the stage of a city's development, omitting its cyclical feature and the disurbanization stage (which occurs when the population of the entire urban region, made up of the core city and the ring, is in decline). Hence, we identify the suburbanization stage as the period in which the population of the urban ring is increasing faster than that of the city (a brief illustrative sketch of this rule is given further below). The Position of Bratislava Bratislava, the capital city of Slovakia, located in Central Europe, went through different governing systems throughout its history. This is reflected in features such as its spatial form, stages of population growth and administrative boundaries. Equally, environmental conditions have affected the city's development. A full account of the environmental conditions and modern history is beyond the scope of this short paper, but a brief outline is necessary in order to understand the suburbanization processes in the case of Bratislava. The city is located at the confluence of the Danube and the Morava River, both of which represent a significant spatial barrier, especially in the past, when technology was not at its current level. Likewise, though less strongly, the Little Carpathians mountain range also acts as a barrier. The strength of these rivers as barriers increased once they became boundary rivers. The Morava separated Cisleithania from Transleithania in the Austro-Hungarian Empire until the end of World War I in 1918. Afterwards, in the interwar period, the rivers formed the boundary between Czechoslovakia and Austria and Hungary respectively. In the postwar period until 1989, the sections of these rivers within Slovakia were part of the well-known "Iron Curtain", which left Bratislava no room to expand except towards the north and east. Therefore, the ring representing the city's suggested sphere of influence is not circle-shaped but crescent-shaped. Moreover, suburbanization was considered a "purely capitalist phenomenon" and did not develop during the socialist period, for three major reasons: 1. Building restrictions in so-called "non-central municipalities", which broke the demographic balance, because ageing inhabitants did not have the resources to renovate their old houses, whereas young inhabitants were forced to live in the industrial cities (Bašovský 1995). 2. The small difference between real estate prices in cities and in the surrounding rural areas, so that, once transportation costs were taken into account, living in the city was cheaper (Musil 2001). Cities were usually better equipped than rural communes, especially in terms of apartment amenities (e.g. central heating, hot water, flushing toilets). 3. "Lame urbanization" (Węclawowicz 1998), a process typical of socialist countries, where urbanization was driven directly by the government without the negative externalities (society, geographical conditions, ethnic structure) being taken into account. Some socio-pathogenic phenomena appeared, such as countrymen locked into pre-fabricated apartment houses or different social classes living together on one floor.
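To make the criterion adopted above concrete, the following minimal sketch (illustrative only; the figures in the example are invented and the function is not part of the paper's methodology) classifies a period from the population change of the urban core and of the urban ring, in the spirit of the Klaassen model as used here:

```python
def development_stage(core_growth, ring_growth):
    """Classify a period in the spirit of the Klaassen model.

    core_growth / ring_growth: population change of the urban core and of the
    urban ring over the period (both in the same units, absolute or relative).
    """
    if core_growth < 0 and ring_growth > 0:
        return "suburbanization (absolute decentralization)"
    if 0 <= core_growth < ring_growth:
        return "suburbanization (relative decentralization)"
    if core_growth >= ring_growth and core_growth > 0:
        return "urbanization"
    return "decline of the whole urban region"

# Invented example figures: the core loses population while the ring gains,
# which is how the start of the suburbanization stage is recognised here.
print(development_stage(core_growth=-1200, ring_growth=900))
```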
Meanwhile, after the fall of socialism and since Slovakia joined the EU and the Schengen Area, the situation has changed and the influence of Bratislava is expanding even towards Hungary (e.g. the municipalities of Bezenye, Dunakiliti, Hegyeshalom and Rajka) and Austria (e.g. Berg, Hainburg an der Donau, Kitsee and Wolfsthal). The first significant indications of suburbanization beyond the territory of Slovakia appeared only a few years ago and are not the subject of this paper, owing to the expected difficulties of obtaining properly comparable data. Current state of research in the case of Bratislava As the capital and largest city, Bratislava and its surroundings have become the most frequently used example of suburbanization in the Slovak scientific literature. Moreover, the suburbanization processes in its hinterland are the most pronounced. There are therefore many papers devoted to suburbanization around Bratislava, broadly following the classification mentioned above. In the demographic approach, Bratislava has been studied individually (Slavík and Kurta 2007) or in comparison with other cities (Bezák 2011; Vigašová and Novotný 2010; Novotný 2011; Hudec and Tóth 2012). Apart from this first approach, sociological research on Bratislava and its hinterland, including areas beyond the territory of Slovakia, has also been carried out (Zubriczký 2010). Because Bratislava lies within an area of very fertile, perhaps the most fertile, soils, research devoted to land cover changes caused by suburbanization has also proved very important from the environmental point of view (Šveda 2010, Šveda and Vigašová 2010). Besides all of these works, there is still a lack of papers focusing on the development of suburbanization in different time sections. Although some definitions stressing the time aspect have been formulated for Czech and Slovak conditions (Sýkora 2001, Matlovič and Sedláková 2004), time is often underestimated in papers on suburbanization itself. The last two works mentioned (Sýkora 2001, Matlovič and Sedláková 2004) imply the existence of phases of suburbanization, probably based on the so-called trade-off theory, which has also been introduced in Slovakia (Buček 2006). According to this theory, suburbanization is driven by two major factors: 1. Transportation costs, including the price of transport whether by public or individual means. In terms of suburbanization, the theory assumes that transportation costs increase with increasing distance from the centre of the core city. 2. Land rents, meaning the average price of real estate. Through the market mechanism, the lack of space in the city, which generates greater demand for accommodation and commercial activities, increases living costs. In terms of suburbanization, the theory assumes that real estate prices decrease with increasing distance from the centre of the core city.
The sum of these two factors is called the overall cost and differs at every distance from the core city. We may assume that the best place for suburbanization lies at the distance with the lowest overall cost, because it can attract many suburbanization actors. Nevertheless, since the territory along the distance with the best conditions for suburbanization has territorial limits, or local government decisions restraining construction may appear, we may expect the most intense suburbanization to shift to different distances over time. Spatial saturation, that is the lack of space caused by suburbanization itself, may be a significant factor pushing land rents, and hence overall costs, upward. The main goal of this paper is to show how suburbanization changes its spatial form in different time periods and which municipalities can be marked as suburban leaders, using the urban region of Bratislava as an example. Methodology and data In order to fulfil the main goal of this paper, it is necessary to identify suburbanization within a spatial and temporal framework. It may be identified by empirical field research or by studying the statistical data provided by the Statistical Office of the Slovak Republic. We decided to combine both approaches: first to identify suburbanization from the statistical data and then to verify it by field research. Several additional questions arose as we applied this methodology: 1. What should be the spatial framework of this study? 2. How can the year in which the suburbanization phase started be identified? 3. Which spatial units should be used? 4. What methodology should be used to mark the studied spatial units as suburban?
Considering that, determination of year when the suburbanization in Bratislava has started seems to be the very important step.As we have defined above, the beginning of the suburbanization stage should be assigned to year, when the population of core city started to decline while the population of urban ring is increasing.According to the table 1, the suburbanization in Bratislava has started in 1996.In order to involve the pre-suburbanization period, we have chosen to extend the time framework to period 1995-2009.Regarding to the methodology, the data after 2009 are not necessary. The next step is to define the spatial units.As we have noted before, the administrative divisions of Slovakia are not considered as credible according to the geographical aspect of population activities.However, municipalities (in Slovak obce), the smallest spatial units the annual statistical data are issued for, seems to be the best way in order to study suburbanization at most highest fidelity.The last problem is how to mark these municipalities as suburban.As the suburbanization is strongly related to migrations, net migration has proved to be the best way.In respect to the proper data compatibility, normalization by number of inhabitants per each municipality at the end of exposed period is the most important.This can be expressed by simple formula: m i = I i (t; t + 1) − E i (t; t + 1) P i (t + 1) .100 % where: mi = net migration rate per i municipality Ii(t;t+1) = absolute number of in-comers to i municipality during exposed period Ei(t;t+1) = absolute number of out-comers from i municipality during exposed period Pi(t+1) = population of i municipality at the end of exposed period The usage of population at the end of exposed period instead of population at the beginning of exposed period or mid-period population respectively is concluding in better expression of value of net migration rate.Thus, the value of net migration rate reflects the proportion of population at the end of exposed period, which might be the result of recent suburbanization processes. Since the time framework of this paper is too long for annual study of statistical data in detail and for study of suburbanization, it has been disaggregated into five three-years (sub)periods.Due to basic annual statistical data for the year 2012 has not yet been issued, sixth period is impossible to be created.Therefore the time framework could not have been extended to the year 2012.The Tab. 1 shows the list of five suburban periods with its dates. Tab. 1: Suburban stages in case of Bratislava.The major problem in identifying of suburban municipalities lies on a value of net migration rate that credibly reflects and follows the suburban processes.If this value is underestimated, the non-suburban municipalities can be identified as suburban.Likewise, if is overestimated, some suburban municipalities can be identified as non-suburban.Therefore, the proper estimation of this suburban threshold should be done together with field research. Results Based on the methods mentioned above, we have made the analysis of suburbanization processes in case of Bratislava.As the spatial and time framework has been properly selected, the last task was to define the credible suburban threshold.It has been proved, that the most credible value of net migration rate in order to distinguish suburban municipalities and non-suburban municipalities in case of Bratislava is 7 % per each suburban stage.As it can be seen in Fig. 
As can be seen in Fig. 2, the intensity and spatial shape of suburbanization changed throughout the time framework of this paper. In the first stage, only two small municipalities were marked as suburban, and a significant migration-based population increase was observed only in the closest distance ring. We may regard this stage as the transition between the urbanization and suburbanization periods. In the following period, the number of identified municipalities increased, as did the average size of suburban municipalities (Tab. 2). In the third period, suburbanization became more pronounced in the eastern parts of the urban ring, while the northern parts were still not affected by the process. This trend continued in the following periods until the end of the time framework: suburbanization on the eastern side of the Bratislava urban ring became more and more pronounced, and the intensity of significantly positive net migration rates grew. Tab. 2: Municipalities of the f.u.r. Bratislava with the highest values of net migration rate in each suburban stage. An interesting point is the low intensity of suburbanization in the northern parts of the functional urban region. According to the land rent map, real estate is, and has been, considerably cheaper in the east than in the north. It is not confirmed, but we assume this may be caused by the Slovnaft oil refinery located near the eastern administrative boundary of the city, or by the nearby airport with its landing zones. Owing to the prevailing westerly winds at Bratislava's latitude, the eastern part of the functional urban region can be affected by air pollution produced by the Slovnaft refinery, which obviously reduces real estate prices in those localities. Besides this, the nearby airport with its landing zones creates a very noisy environment, which is at odds with the nature of suburbanization (quiet and stress-free living in the countryside). These two factors are not present in the northern parts of the urban ring. On the other hand, the analysis confirmed the assumptions stated in the introduction of this paper. The hypothetical place for suburbanization, represented by the distances where suburbanization is most intense, shifts throughout the time framework of this paper. As Fig. 2 and Tab. 2 show, different municipalities became the most suburban in terms of net migration rate in each stage. The size of suburban municipalities, the number of affected municipalities and the number of suburbanization actors all tend to grow. It seems that, by its strength, suburbanization in Bratislava compensates for the delayed urban development compared with western cities.
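As a practical footnote to the methodology (not part of the original analysis; the example counts below are invented, while the formula and the 7 % threshold come from the text), the classification rule used above can be written out in a few lines:

```python
def net_migration_rate(in_migrants, out_migrants, end_population):
    """m_i = (I_i - E_i) / P_i(t+1) * 100, as defined in the methodology."""
    return (in_migrants - out_migrants) / end_population * 100.0

def is_suburban(in_migrants, out_migrants, end_population, threshold=7.0):
    """Mark a municipality as suburban if its net migration rate over the
    three-year stage reaches the threshold (7 % per stage in this study)."""
    return net_migration_rate(in_migrants, out_migrants, end_population) >= threshold

# Invented example: a municipality of 2,400 inhabitants at the end of a stage
# that gained 310 in-migrants and lost 90 out-migrants during that stage.
rate = net_migration_rate(310, 90, 2400)
print(f"net migration rate: {rate:.1f} %, suburban: {is_suburban(310, 90, 2400)}")
```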
Conclusion This analysis has shown how suburbanization processes change over time. The basic hypothesis was confirmed: the distance line of the most intense suburbanization shifts within the hinterland and shows certain regularities. The number of municipalities affected by suburbanization is increasing, as is the annual number of inhabitants involved in these processes. However, a number of questions remain open for further research. First, what would the suburbanization trends in Bratislava have looked like if the net migration data for the year 2012 had been available and a sixth suburban stage had been created? Second, what would a comparison based on similar methods have looked like if other highly populated cities in Slovakia or abroad had been included? Finally, would the suburban threshold differ between cities or not? Nevertheless, we consider this paper a valuable contribution to the study of processes such as suburbanization. URBAN DEVELOPMENT OF BRATISLAVA: SUBURBANIZATION IN YEARS 1995-2009 Summary There is ample evidence in the geographical literature of the difference in urban development between European cities of the former Eastern bloc and those of the West during the postwar period. Generally, as regards housing policy, the urban development of eastern cities is perceived as delayed compared with the west. In the Eastern bloc, suburbanization was considered a purely capitalist phenomenon and was therefore prohibited by many regulations, while urbanization and the forced growth of industrial cities were preferred. Although Bratislava is located on the boundaries between Austria and Slovakia and between Hungary and Slovakia, it is no exception. This position has significantly limited the spatial development of the city. Nevertheless, suburbanization appeared after the fall of socialism and is currently the most intense in all of Slovakia. The main goal of this paper has been to verify whether the assumption concerning the dynamics of suburbanization holds. According to this hypothesis, derived from the scientific literature, suburbanization should change its spatial shape, its intensity and the municipalities it affects. The research had to combine theoretical and field research. For this purpose, basic statistical data on migration were used. In order to identify whether a municipality is suburban or not, a threshold value of the net migration rate had to be determined; if the net migration rate was above the threshold, the municipality was marked as suburban. In this respect, the determination of a credible threshold of the net migration rate was the major problem. As the suburbanization processes related to Bratislava examined in this paper are not expected to extend beyond the travel-to-work area of Bratislava, its functional urban region has been used as the spatial framework. In line with the initial appearance of suburbanization and with data availability, the time framework of 1995-2009 has been used. For better fidelity, the time framework was disaggregated into five three-year periods, the suburban stages. The approximation of the proper threshold value had to be made in accordance with the field research, which shows that the best value of the net migration rate is 7 % per suburban stage.
The results of this analysis have shown the correctness of the stated hypothesis. Suburbanization is dynamic and has been changing around Bratislava throughout the examined time period. In each stage, different municipalities were affected by suburbanization to a different degree. Moreover, the number of affected municipalities is still increasing. It is evident that suburbanization has become one of the typical urban processes in the surroundings of Bratislava. Further analyses would probably show whether this trend will continue or not. Fig. 1: Location of the functional urban region of Bratislava within Central Europe. Fig. 2: Development of suburbanization in the functional urban region of Bratislava within the suburban stages, 1995-2009.
2019-05-20T13:04:45.232Z
2012-12-31T00:00:00.000
{ "year": 2012, "sha1": "a4bbfb23ec5b67d9a95cef71ad8f67647f1690cc", "oa_license": "CCBY", "oa_url": "https://journals.um.si/index.php/geography/article/download/3879/2719", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c4e959d42f83aad81d2a41fcb793c1a1a91294af", "s2fieldsofstudy": [ "Geography" ], "extfieldsofstudy": [ "Geography" ] }
240314940
pes2o/s2orc
v3-fos-license
A Middle Byzantine silver treasure Since October 2003 a unique treasure of nine silver-gilt dishes of the Middle Byzantine period, which were offered for sale in Greece, has been on display at the Benaki Museum. The three largest Byzantine collections in the country, the Byzantine and Christian Museum in Athens, the Museum of Byzantine Culture in Thessaloniki and the Benaki Museum, have jointly launched an effort to raise the funds required to acquire the treasure so that it may remain in Greece as a whole. At the time this article was completed, the three museums' effort was still under way. The nine dishes are in the possession of the present collector as an inheritance from his father, who acquired them in 1937, for £15,000, from the Englishman A. Barry, a well-known currant merchant established in Smyrna. According to unconfirmed information from the original owner, the dishes were found by chance outside Tatar Pazarcik in present-day Bulgaria. Alloy analysis - Shapes The alloy analyses, carried out by the Institute of Nuclear Physics of "Demokritos" on five of the dishes, establish a common alloy composition with minimal deviations. The same composition is observed in silver vessels of the Roman and Early Christian periods, as well as in the corresponding Sasanian and Islamic vessels. As for shape, two dishes (nos 1-2) are footed, while the other seven have a flat base and a vertical rim. Two bear hunting scenes on the central medallion (nos 1, 3) and one the personification of the Sea riding a sea monster (no. 4). The rest are decorated with aniconic, vegetal and geometric ornament (nos 5-9). The examples with a flat base and high rim find exact parallels in late Roman silverware and in Middle Byzantine as well as Islamic ceramic vessels. The two dishes with the scalloped rim and tall foot have a shape reminiscent of a fruit dish, which occurs in Byzantine ceramic examples and also in the nearly identical silver-gilt footed dish found at Muzhi in Siberia. Iconographic analysis Huntsmen and prey: the representation of huntsmen decorating the central medallion of dishes nos 1 and 3 is related to the depictions of mounted huntsmen found on surviving works of Middle Byzantine silverware, such as the similar bowls from Vilgort and Chernigov and the cup of the former Vasilevsky Collection, which date to the 12th century and are kept in the Hermitage. These vessels share with the dishes of the treasure not only the same huntsman iconography but also a common rendering of the natural environment, in the form of stylised plants whose branches terminate in trefoils. The most striking similarities, however, are found in the engraved representation of the mounted St George that decorates another work of Middle Byzantine silverware, the bowl from Beriozovo (12th century). On footed dish no. 1 the representation of the central medallion is complemented by a band of running animals, which is interpreted as a condensed hunting scene. Similar animal representations are found on the works of Byzantine silverware mentioned above, on the cup cover from Nenetz, and on the bronze candelabrum kept in the Monastery of Sinai. The personification of the Sea: the personification of the sea on dish no. 4 as a half-naked female figure is familiar from Graeco-Roman antiquity.
In purely Christian iconography, personifications of the sea are found in scenes of the Baptism, the crossing of the Red Sea and, above all, the Last Judgement. Indeed, in an icon of the Last Judgement at Sinai, the sea is depicted as a half-naked woman riding a sea dragon, holding an oar in her right hand and a ship in her left, just as on the dish of the treasure. In Middle Byzantine metalwork, a wonderful parallel is offered by the dish found at Muzhi. Aniconic decoration and its relations with Islamic art: footed dish no. 2 of the treasure has a central medallion decorated with an ovoid net-like lattice enclosed above and below by heart-shaped motifs. Heart-shaped motifs are extremely common in both Byzantine and Islamic art. A vegetal lattice similar to that of the dish is encountered in the headpieces of Byzantine manuscripts, as well as in the mural decoration of 12th-century churches, for example in the Petritzos Monastery in Bulgaria and in the cathedral of Cefalù in Sicily. Dish no. 5 of the treasure is decorated with scrolling stems bearing palmettes, whose closest parallels are found in Byzantine and Islamic pottery. The Islamic examples bear slip-painted decoration and are attributed to eastern Iran, to the 10th-century workshops of Nishapur or Samarkand. The Byzantine parallels come from the incised pottery of the 12th century, particularly from Corinth and the Alonnesos shipwreck, where the foliage is often set off against a scale-pattern ground imitating the punched ground of silverware. The four identical dishes of the treasure (nos 6-9), with the geometric star-shaped lattice in the centre and the radiating garlands, are related to a series of copper-alloy dishes from eastern Iran (12th and early 13th century), among whose typical decorative themes are six-pointed interlaces or stars. The twelve-rayed garlands on the four dishes of the treasure find their closest parallel in a 13th-century Islamic brass bowl from northern Syria or Mesopotamia with inlaid silver decoration. Conversely, the S-shaped stems derive from the Byzantine decorative repertoire; similar ones are found on a 12th-century Constantinopolitan incense box kept in the treasury of San Marco in Venice. The incorporation of Islamic decorative motifs originating in metalwork is also observed in a category of 12th-century Byzantine incised pottery, demonstrating that works of Islamic metalwork circulated in the Byzantine territory of Greece, and not only in the border or Crusader regions. In Corinth, moreover, numerous fragments of Islamic pottery have been found, indicating trade exchanges with Egypt and Syria. The Izgirli treasure The treasure of nine dishes presented here is directly related to three silver dishes kept in the Cabinet des Médailles in Paris, known in the international literature as the treasure of Izgirli or of Tatar Pazarcik, from the place where they were found in 1903. The three Izgirli dishes are, with small variations, identical to the nine under discussion, and the two treasures undoubtedly come from a common context of production. The differences emerge in a comparison of the bands of running animals.
On the Izgirli dish the animals are depicted on a scrolling stem, rather than in file or interlaced with the stems, as they often appear in Middle Byzantine or Crusader art of the 12th century. A similar use of scrolling stems as a ground for running animals is frequently encountered in Islamic metalwork of the 11th-12th century. However, one detail in the outer band is surprising for an Islamic context, within which it would constitute a considerable discordant note. This is the naked human figure depicted as if swimming among the stems. The representation of naked figures is a familiar theme both in Crusader art, as on the lintel of the Holy Sepulchre, and in the secular iconography of Byzantium, and it is not unknown in Byzantine religious iconography. The Izgirli treasure has occupied and divided scholars, some of whom attribute it to a Byzantine and others to an Islamic milieu. The first presentation of the treasure, however, was made shortly after its discovery in 1903 by the French consul in Philippopolis, M. Degrand, who reports that the dishes originally numbered ten and that they were found together with gold coins of the three Komnenoi, Alexios I, John II and Manuel I, as well as a cross and other precious objects, which he did not manage to save from being melted down. The testimony of the Philippopolis region as the findspot coincides with the information we have for the new treasure. In the 12th century Philippopolis was a key city of Byzantium, through which passed important figures of the empire's central political scene, such as Michael Italikos and Niketas Choniates. The thirteenth dish: the identity of the owner One further dish belongs to the same private collection from which the nine dishes under examination come. It comes from the same original set, but it is not offered for sale. It is undecorated but bears a circular band with the engraved Greek inscription in capital letters: +Κ(ΥΡΙ)Ε ΒΟΗΘΕΙ ΚΩΝCΤΑΝΤΙΝΩ ΠΡΟΕΔΡΩ ΤΩ ΑΛΑΝΩ (Lord, help Konstantinos proedros the Alan). The person named in the inscription is in all probability the owner of the whole set of dishes. The palaeographic features are matched in inscriptions of the second half of the 11th and the 12th century. From the middle of the 11th century the office of proedros was conferred with greater frequency, often indeed on members of the military aristocracy, while attested references to the title do not occur after the middle of the 12th century. The presence of the Alans (Georgians) on Byzantine territory is documented throughout the 12th century, mainly as mercenary troops. In the region of Philippopolis, apart from the attested presence of Georgians at the Petritzos Monastery, we also have an important testimony of Choniates. Referring to the siege of Philippopolis in 1189 by Barbarossa during the Third Crusade, he relates that in a battle at the castle of Prousenos, outside the city, the Alans fought heroically under the command of Theodore Branas. The treasure in its historical context The comparisons made above clearly show the dishes' manifold affinities with Byzantine and Islamic works, as well as their parallels with certain creations of the Crusader East. The reception and appropriation of Islamic themes by Byzantine art is a well-known phenomenon, but in the 12th century it takes on a different character and forms part of the wider growth of exchanges between Byzantium and the Muslims, the Italian maritime cities and the Crusaders.
Islamic themes become part of a broader vocabulary of Byzantine art, used alternately and in parallel with purely Christian or Byzantine themes. In this period, moreover, the relations between the Byzantine and the Islamic worlds were determined not only by diplomatic missions and border incidents, but also by their continuous contact and cohabitation in Asia Minor. In the world of the eastern Mediterranean the impetuous entry of the Christians of the West, the maritime trading powers of Italy and the Crusader warriors of the faith undoubtedly acted as a catalyst. In Byzantium, the osmosis of cultural elements of different origins stemmed from, and was reinforced by, the official imperial policy followed by the Komnenoi; the sources and the historiography shed most light on the brilliant career of Manuel I (1143-1180). The emperor's organisation of chivalric tournaments on the Western model, together with the erection of Islamic buildings in Constantinople and the hosting of Seljuk and Frankish rulers, are evidence of the new spirit. The official imperial ideology and the secular pursuits of the imperial court are reflected in the texts of the period. Characteristic are the ekphrasis of a Western-style jousting scene with the Byzantine emperor as its central figure, and the report of the unorthodox depiction of the exploits of the Seljuk sultan on the walls of the residence of a Byzantine official. The literary text that epitomises the heroic, aristocratic spirit of the age is undoubtedly the epic of Digenes Akritas. The descriptions of banquets and hunts, the heroic exploits and the romantic scenes of Digenes with his wife Eudokia are the literary parallels of the iconography on the silver bowls now kept in the Hermitage. The thematic variety and eclectic character of the decoration of the silver vessels presented here, combined with the epigraphic evidence and the circumstances of the finding of the Izgirli treasure, lead to their attribution to 12th-century Byzantium. The inequalities of quality observed in their execution indicate that different hands or workshops were most probably responsible for their manufacture. In any case, there is no reason to believe that the modern notion of a matching set, a dinner service in the current sense of the term, characterised the aesthetics and needs of the period. ANNA BALLIAN - ANASTASIA DRANDAKI A Middle Byzantine silver treasure SINCE OCTOBER 2003 there has been on display at the Benaki Museum a unique treasure consisting of nine silver-gilt dishes dating from the Middle Byzantine era, which has been offered for sale to Greece. The three largest Byzantine collections in the country, the Byzantine and Christian Museum in Athens, the Museum of Byzantine Culture in Thessaloniki and the Benaki Museum, have jointly undertaken the task of raising the necessary funds to acquire this treasure so that it can remain in Greece, and with this aim the dishes were also exhibited for two weeks in Thessaloniki. At the time of writing this initiative by the three museums is still proceeding apace.
The nine dishes were previously unknown both to specialists and to the general public.The astonishment which greeted the appearance of such a treasure can be imagined, not only because it comprises rare and precious objects in an excellent state of preservation, but also because the material is largely unfamiliar and opens up a wide variety of new paths and horizons for the study of Middle Byzantine art.The present article takes the form of a general introduction to the subject; it would certainly not claim to cover all the issues involved, nor to do more than present the basic information and indicate the specific features which locate the dishes in their chronological and cultural context. The nine dishes were inherited by the present owner from his father, who acquired them in 1937 for £ 15,000 from A. Barry, an Englishman, who had been an exporter of currants in Smyrna until 1922, when he settled in Patras. 1 The provenance of the dishes is not known with certainty, but according to undocumented information from the original owner they were discovered accidentally outside Tatar Pazarcik in modern Bulgaria.The dishes were cleaned before being offered for sale and their condition is generally excellent, though in many places the gilding is missing.Some display marks which postdate their manufacture and are evidence of a change of owner or of tests made from time to time to establish the purity of the alloy and the commercial value of the objects. Two dishes (nos 1 and 2) are footed (figs 1-2), while the others have a flat base and low, rising sides (figs 3-6).Three display human figures on the central medallion, in representations of a hunting scene (figs 1 and 3) and of the Sea riding on a sea monster (fig.4).The others have vegetal and geometrical ornamentation.A detailed description of the dishes, with their dimensions, can be found in the Appendix to this article. Technical data -the shapes The composition of the alloy on five of the plates was analysed by the Demokritos Nuclear Physics Institute when the treasure was first examined.The results are presented in the following article by the metal conservator of the Benaki Museum, Despina Kotzamani.But at this point certain observations should be made.The plates which were analysed were manufactured from an alloy with a high silver content varying from 93.6 to 95%, while the composition also contained amounts of copper (3.11-4.72%),gold (1.26-1.49%)and lead (0.37-1.15%) which are normal for mediaeval silver vessels. 2 The five vessels have a similar alloy composition with few variations, the most notable of which is the slightly higher lead content of dish no. 5 (1.15%).The same composition can be found in silver vessels of the Early Christian era, 3 and also in those Sasanian and Islamic vessels which have been analysed. 4This continuity with earlier practices extends to the techniques of manufacture and ornamentation.The dishes were made by hammering on a lathe, and, in the case of dish no.7 (see appendix), this exploited the alloy to the full by creating from a relatively small quantity of metal a vessel with very thin walls. 
All the dishes display the same decorative layout, with a central medallion normally encircled by a peripheral band and complementary motifs, as is particularly apparent in dishes nos 6-9. An incised preliminary sketch was used for the ornamentation, which the craftsman subsequently followed with his tool, thus often giving a slightly unstable appearance to the contours (fig. 4b). In some cases the lines appear interrupted, evidence of a failure to ensure that each application of the tool follows exactly on the previous one. A variation in the execution can be observed in dishes nos 1 and 5, where the motifs are incised by drawing the tool uninterruptedly across the surface of the dish. This technique was used partially on dish no. 1 and on the entire ornamentation of dish no. 5. On all the dishes the execution of the motifs is generally schematic; detail is lacking but the motifs stand out against a ring-punched or dot-punched background. The quality of the ornamentation is not consistent: sometimes the engraving is flat (dish no. 2), at others more unstable (as noted in dish no. 4), or less attentive to detail (a comparison of dishes nos 1 and 3 shows that they share a common motif but the execution is uneven in quality). Dishes nos 1 and 5 undoubtedly display the most meticulous, indeed exemplary, execution. The variations in the ornamentation, even where the motifs are similar, suggest either that some dishes were manufactured under greater pressure, or, more probably, that different craftsmen were involved in their production.

Exact parallels for the shape of the dishes with a flat base and shallow sides can be found in late Roman silverware - in certain works from Naissus, for example, one of which has a star motif in the centre. 6 The shape frequently occurs in 12th century Byzantine ceramic vessels, and examples have been discovered in Corinth, Athens, the Alonnesos shipwreck and elsewhere, though such objects often have a rudimentary base to increase their durability. 7 The wide circulation of this form of silverware is apparent from its echoes in Islamic art, for although not many dishes made of precious metals have survived, the shape is found in ceramic imitations in 9th-century Samarra moulded ware, 10th-century Samanid slip-painted vessels and a rare Fatimid bronze alloy dish (fig. 7). 8

Conversely, the two very shallow plates with an ornamental raised rim and a tall foot do not appear to have their origin in late antique models, and the shape, which resembles a modern fruit dish, may be a mediaeval development. The flat base, gently sloping towards the rim, can be found in mid- to late 12th century Byzantine ceramics which also have a notched rim, though in these objects the foot tends to be shorter. 9 The remarkable Artukid enamelled bowl has a similar shape, and its external dimensions (diam. 27 cm, height 5 cm) are the same as those of plate no. 1. 10 But the work closest to the footed plates of the treasure is the silver-gilt dish from Muzhi in Siberia, now in the Hermitage (fig. 8). 11 It stands on a similar cylindrical foot, the sides terminate in an ornamental raised rim and it has comparable dimensions (diam. 28 cm, height 5.3-6 cm). In spite of the fact that few examples of precious utilitarian metalwork survive from the 12th and 13th centuries, the fact that their dimensions are generally similar may be evidence of a certain standardisation in the manufacture of such objects. 12
Iconographic analysis

Huntsmen and running animals: The representation on the central medallion of plates nos 1 and 3 (figs 1, 3) is part of a long tradition going back to late antiquity, when the theme of hunting, a favoured pursuit of the aristocracy, was frequently included in the decoration on mosaic pavements and portable objects of every kind. 13 In Middle Byzantine art the direct link between hunting scenes and imperial iconography is evidenced by representations of imperial hunts and by explicit literary references. 14 Middle Byzantine eulogies addressed to the figure of the emperor give constant emphasis to his prowess as a hunter in order to demonstrate his bravery and spiritual power. 15

Hunting was a theme commonly found in court iconography, but was diffused not merely on precious objects but also on works in mass circulation such as ceramics and sculptures. 16 Particularly interesting are the depictions of mounted figures on surviving works of Byzantine silverware, such as the similar bowls from Vilgort and Chernigov and the cup in the former Vasilevsky collection (fig. 9). 17 These share with our plates not only the hunting iconography but also the depiction of nature by means of stylised plants with tendrils terminating in trefoils. The use of these motifs to represent the natural world seems to have been a standard topos in all media of 12th century art (figs 9-12). 18

Mounted huntsmen are closely associated with representations of military saints, which also proliferate in the 12th century, when they are regularly depicted in the iconography of aristocratic equestrian warriors. 19 Striking resemblances can be found in wall paintings, such as the impressive mounted St George at Staraya Ladoga (1167). 20 The military gear shown on the dishes - ellipsoid shields, greaves, breastplates and short chitons - occurs in numerous portrayals of soldier saints, most notably on 11th and 12th century steatite works. 21 Yet the most remarkable likenesses are found in an engraved representation inside the bowl from Beriozovo (fig. 10). 22 The exterior of this silver-gilt bowl has rows of convex bosses depicting scenes of court banquet and a female imperial figure in the centre flanked by servants, musicians, acrobats, dancers, animals and birds. The patently secular, court iconography is complemented on the interior by a central medallion with an engraved mounted figure of St George almost identical to that of the hunters on the plates under review. 23 Indeed those hunters would be exact reproductions of the Beriozovo St George were it not for the absence of the halo and the inscription.

On footed plate no. 1, the representation on the central medallion is supplemented by the band of running animals which encircles the interior just below the rim. Depictions of running animals are common from the late Roman period onwards and they can be interpreted as condensed hunting scenes which may either complement or substitute for full depictions of the subject. 24 In the Middle Byzantine era, running animals are found in all forms of art, 25 but the closest links occur in the Byzantine silverware mentioned earlier, the cups in the former Vasilevsky collection 26 (fig. 9) and from Beriozovo 27 (fig. 10) and the cup cover from Nenetz 28 (both in Siberia) (fig. 11), while the resemblance of the band of animals on the pan of the Sinai bronze candelabrum is particularly striking (fig. 12). 29
In all these works the associations go far beyond the iconographic and extend to the style and techniques of the engraved motifs, an indication of their chronological proximity to the plates discussed in this article.

The personification of the Sea: The other interesting figure included in the group of plates is the personification of the Sea on dish no. 4 (fig. 4). Depictions of the Sea as a near-naked woman are found from Greco-Roman antiquity onwards both in literature and in representations on coins, sarcophagi and mosaic pavements, most notably perhaps in the church of the Apostles at Madaba in Jordan (578). 30 In early Christian thematology the Sea is a fundamental part of God's Creation and is normally shown in company with the Earth, the second of the two principal constituents of the Ktisis. 31 Equally close to the present representation iconographically are the portrayals of Nereids riding sea monsters found, for example, on the medallion of a silver plate in the Galleria Sabauda in Turin (AD 541). 32 In purely Christian iconography, depictions of the Sea occur in representations of the Baptism, sporadically at first in the 7th and 8th centuries, in the Cappella Palatina in the 12th century, 33 and finally more frequently from the 13th century onwards. 34 One of the finest Middle Byzantine representations is to be found in the Paris Psalter, in the depiction of the crossing of the Red Sea (10th century). 35 In the 11th century the Sea finds a regular place in representations of the Last Judgment, both in manuscripts 36 and in wall paintings (e.g. the church of Panaghia Chalkeon in Thessaloniki), 37 where, in company with the Earth, it renders up the bodies of the dead for judgment. It is similarly depicted in the restored mosaics in Torcello, 38 in St Nicholas tis Stegis in Kakopetria, Cyprus 39 and in a 12th-century icon of the Last Judgment in Sinai. 40 The last of these shows a near-naked woman astride a sea-monster, holding an oar in her right hand and a boat in her left, just as on the dish. 41 The monster on the icon is also very similar with its diminutive pointed ears, small mane and leonine paws. In depictions of the Second Coming the creature ridden by the Sea spits out the limbs of humans destined to participate in the Last Judgment. The depiction of the gaping-jawed monster on the dish suggests that the craftsman used such a scene as a model, although the features have their direct ancestry in the art of late antiquity. 42

Middle Byzantine metalware contains a remarkable parallel in the silver-gilt footed plate from Muzhi in Siberia (fig. 8). 43
The large central medallion with a depiction in relief of the Ascension of Alexander is surrounded by ten representations with cosmological-symbolic content in roundels framed by foliate scrolls. One of these displays a naked representation of the Sea, riding on a sea monster and holding a ship in her right hand and an oar in her left. The beast is similar to that on the dish, but the personification is seated with her back to its head, totally naked but with no indication of sex or other detail of her figure. Interestingly, the ship which she holds contains both rower and steersman.

Aniconic decoration and the Islamic connection: It is the series of dishes with purely aniconic decoration and obvious Islamic associations which give rise to the most ambivalent interpretations. The comparative material to be discussed here will draw on both Byzantine and Islamic art. The purpose is not so much to isolate Byzantine from Islamic stylistic features, but rather to trace the motifs they have in common, establishing the extent of their dissemination, and - in so far as this is possible - identifying the specific type of objects which formed the vehicles through which they were circulated.

The footed plate no. 2 (fig. 2) is decorated with an ogival vegetal lattice framed above and below by heart shapes. Rows of alternating heart shapes enclosing leaves with a central hatching are almost a hallmark of Byzantine decoration but are an equally common motif in Islamic art. Examples of the latter are a ceramic sgraffiato bowl, of a type dated variously to the 10th and the 11th century, 44 a Fatimid lustre-painted vase, 45 a cast bronze mortar from eastern Iran, 46 and a silver flask from the Harari treasure, attributed to 11th century Northern Iran. 47 Comparable Byzantine examples with pointed multi-lobed leaves can be seen in the heading of a manuscript of 1140 in the Escorial, 48 on the fragment of a champlevé ceramic, 49 and on the silver bowl cover from Nenetz with representations of musicians and acrobats (fig. 11). 50 The ogival layout of the decoration occurs in the headpieces of manuscripts (fig. 13), which in the 11th and especially the 12th centuries display motifs enclosed in heart shapes pointing alternately upwards and downwards. 51 This ogival design is not unknown in 12th-century wall painting, and can be found in the Petritzos monastery in Bachkovo, Bulgaria, and Cefalu cathedral in Sicily. 52

Dish no. 5 of the treasure has foliate palmettes on the central medallion and exceptionally intricate incised workmanship (fig. 5). The meticulous herringbone pattern surrounding the medallion and the ribbons tied to the stalks at points where they divide produce a striking late antique effect which is heightened by the otherwise undecorated surface of the dish. At first sight the ornamentation has no direct parallels in silverware, Byzantine or Islamic. A meticulous examination of Central Asian silverware - notably that from 8th and 9th-century Sogdia, which post-dates the Islamic conquest but could still use Sasanian silverware as its models - indicates a different use of late antique decoration, with an emphasis on richer ornamentation and on larger-scale vegetation, which is often rendered naturalistically. 53

Fig. 7. Fatimid bronze dish with a rabbit and an inscription band, 11th-12th century. Paris, Louvre Museum no. AA.275 (photo: courtesy of the Louvre Museum).
The closest examples of bowls and plates with scrolling tendrils on the base are actually found in ceramics, both Byzantine and Islamic. To begin with the Islamic versions, Samanid slip-painted pottery attributed to 10th century Nishapur and Samarkand is believed to reflect the ornamentation on now lost contemporary silverware which continues the tradition of Sogdian silver. 54 This ceramic ware displays the most striking resemblances to the silver dish, with four scrolling tendrils sprouting from a circle (fig. 14). 55 There is one important difference, in that the palmettes of dish no. 5 have a foliate design, with curved, pointed ends, while Samanid and earlier Sogdian palmettes are round, many-petalled and have a floral origin. 56

The mid-12th century Byzantine shipwreck at Alonnesos (fig. 15) and the excavations at Corinth and Athens have produced numerous examples of sgraffiato and champlevé ceramics which display a continuing use of designs with palmettes as central motifs on bowls and plates. The foliage has the same linear character and is displayed against a scaled background which imitates the punched ground of silverware. 57 All this suggests two possible interpretations for the provenance of dish no. 5. The first is that the dish predates the remainder of the treasure and is probably a 10th-century work from Eastern Iran. Alternatively, the dish is Byzantine and more or less contemporary with the rest of the treasure but reproduces models from 10th-century silverware - an instance of a return to earlier prototypes which is familiar in Byzantine art, but would be unusual in the art of Islam, which had from the 11th century introduced the arabesque in its decorative vocabulary. 58

The four dishes of the treasure, nos 6 to 9 (fig. 6), share the same design of radiating garlands and geometrical star-shaped interlace, the latter deriving from the complex geometrical interlace found in Islamic ornamentation from early times. 59 The closest parallels to our silver dishes can be found in metalwork of Eastern Iran dated to the 12th and early 13th century. The main decorative feature on a series of bronze dishes is the central roundel containing a six-pointed interlace framed by rayed garlands and inscribed bands (fig. 16). 60 Eastern Iran was the birthplace of Islamic inlaid metalwork, though production spread to Northern Mesopotamia and Syria: this form of metalwork is relevant here because, as we shall see, it seems to have been known to the Byzantines and the other Christians of the Near East.

An example of an exactly similar star-shaped interlace with the characteristic indentation in the middle of its sides occurs in the frontispiece of a 12th-13th century Syriac manuscript 61 and also in the interior of a western silver standing cup, housed in the monastery of St Maurice d'Agaune in Switzerland and attributed by Charles Oman to Norman Sicily. 62 Boris Marshak, taking the argument further, considers that the combination of the western shape and the orientalising decoration indicates a place of manufacture where western and Islamic influences could co-exist side by side, such as the Crusader states of the Near East, as well as Sicily. 63

The twelve-pointed garlands on the four dishes of the treasure (nos 6-9) are more closely associated with a large brass 13th-century bowl from Northern Syria or the Jazira with inlaid silver decoration. Here the garlands are twelve-pointed and bear large rounded palmettes in a reciprocal arrangement similar to that of the four dishes. 64
Conversely, the Byzantine decorative repertoire is responsible for the design of the band on the base of our dishes with S-shaped tendrils terminating in two half-palmettes. An identical motif is found on a 12th-century incense burner in the form of a domed building, now in St Mark's treasury and attributed to a Constantinopolitan workshop. 65

The imitation of a base metal Islamic model by a silver Byzantine vessel is theoretically improbable and the reverse of what would normally be expected, since the established hierarchical order starts with objects made of precious materials and descends to cheaper materials such as copper alloy and finally to ceramics. Yet in the Islamic world inlaid metalwork - the principal innovation of the 12th century - became a socially and aesthetically acceptable substitute for precious metal objects. 66 The impact of these novel inlaid vessels would certainly have been felt in Byzantium, where they may have arrived by way of Northern Syria and the Jazira - the provenance of the dish mentioned above - the Sultanate of Rum, or the sea routes and ports of the Syrian coast. And even if at first glance these theories appear somewhat tenuous, we must remember that they are supported by the very substantial number of surviving Byzantine sgraffiato wares which are clearly influenced by Islamic metalwork. 67 Indeed, the above mentioned group of Iranian bronze dishes (fig. 16) with star-shaped interlace in their centres is also decorated with animals, birds and concentric zones with inscriptions (fig. 17), 68 and may be considered the actual model for a certain class of Byzantine sgraffiato ceramics (figs 18, 19). 69 Corroborative features include the pseudo-Kufic inscriptions and the roundels with stylised palmettes or animals which interrupt the inscribed bands and do not occur in this form in Islamic ceramics. Corinthian sgraffiato ceramics of this type, which have the closest links with Islamic metalware, date from between the second quarter of the 12th century and 1200. 70 This suggests that 12th century Islamic metalwork was circulating in Byzantine territories - specifically mainland Greece - not merely in frontier areas or Crusader states, while the numerous Islamic ceramic fragments found in Corinth indicate the existence of commercial relationships with Egypt and Syria before the mid-12th century. 71

As is clear from the above discussion, the three dishes with human figures have particularly strong links with Byzantine art, 12th century metalwork in particular, while the aniconic ornamentation of the other six contains resemblances to Byzantine and Islamic works of the same period.

The Izgirli Treasure

The nine dishes of the treasure have direct links with three silver plates in the Cabinet des Médailles in Paris, whose dimensions, shape, technique and iconography are not merely comparable but nearly identical with the vessels studied in this article (figs 20-21). 72 They are familiar in the bibliography as the Izgirli or the Tatar Pazarcik treasure, after the Bulgarian village near where they were found in 1903 and its nearby town. This makes a comparative study of the two groups of objects highly desirable, as they not only belong to a common tradition and share the same provenance, but are probably made by the same or by closely related workshops; it is even possible that all the vessels originally formed a single group, though this cannot be proved. 73
Two of the Izgirli plates are footed and bear identical decoration (fig. 20), while the third and largest dish belongs to the type with a flat base and low rising sides (fig. 21). Although the three plates from Izgirli have not been subjected to technical analysis, visual observation indicates that there are considerable discrepancies in the quality of the execution. The decoration on all three vessels was made by engraving tool, but that on the two similar footed plates is less meticulous, indeed somewhat unsteady, most obviously on the contours and the central rosettes. The third and largest dish is much more carefully worked, and the ornamentation is supplemented by zig-zag engraved lacework around the medallion and the bands, which is not found on any other plate in the group.

On footed plate no. 1 (fig. 1) and on the three Izgirli plates the bands of running animals, though directly comparable in subject and execution, are not identical. The most obvious variation is found on the large Izgirli dish (fig. 21), where the animals are portrayed on vegetal scrolls, in a configuration known as animated or inhabited scrolls. Such scrolls are found in Middle Byzantine art both in manuscript illumination and in sculpted works and ceramics. 74 Similar motifs occur in illustrations in 12th-century Romanesque and Crusader manuscripts 75 and in works of minor arts 76 and sculptures from the same environment, such as the celebrated east lintel from the south façade of the Holy Sepulchre, which is attributed to a local workshop of the second half of the 12th century. 77 In Crusader and Romanesque works the figures of men and animals are intertwined with the scrolls as if they were struggling to escape from them, and sometimes seated on top of them. 78 In such cases figures and scrolls exist on an equal plane, but on the Izgirli dish the plant motif merely exists as a ground for the figures engraved above. [...] A similarly nude figure is represented on another Middle Byzantine silver vessel, the 11th-12th century bowl of Theodore Tourkelis, now in the Hermitage. 87

Two further details that require interpretation are the fish which fill the interstices of the star interlace on the Izgirli dish and the twelve-rayed garlands on dishes nos 6-9 of the treasure. Both point to a cosmological symbolism inherited from Late Antiquity which is found in both the Byzantine and the Islamic world, and originates in the association of circular surfaces with the Dome of Heaven. In Islamic bowls and dishes this symbolism is explicit, with the inclusion of the twelve astrological signs surrounding the sun and whorling fishes and fantastic animals. 88 In Byzantine art it may take on religious overtones, as in the representation of the Ascension at Kurbinovo (1192) in which Christ is shown at the centre of the circular glory, which is occupied by fishes and fantastic beasts. 89 In Byzantine secular art the silver bowl from Muzhi (fig. 8) with the Ascension of Alexander displays the familiar solar associations, while in literature a golden bowl depicting the feats of Manuel I Komnenos is likened to the orb of the earth. 90
The background to the Izgirli Treasure has preoccupied and divided scholars, some of whom attribute it to an Islamic and others to a Byzantine environment. The various theories which have been expressed, and which are summarised below, indicate the general problems involved in studying the common ornamental vocabulary which developed in the Eastern Mediterranean and was articulated in 11th and especially 12th-century objects.

The Izgirli Treasure was published in 1903 by the French consul in Plovdiv (Philippopolis), M. Degrand, who wrote a detailed report on these major new Byzantine finds. 91 Consequently the treasure was purchased by Gustave Schlumberger, who donated it to the Cabinet des Médailles in 1929. As Schlumberger considered that the plates did not fall within his sphere of expertise he invited Gaston Migeon, his distinguished colleague in Islamic Art, to publish them. Migeon's article in the periodical Syria for 1922 ascribes the plates to the hoards of silver [...] in connection with Byzantine secular silverware. 93 [...] the accessibility to them of virtually all the 'orientalising' comparable dishes in the group discussed in this article. In this connection the evidence of the French consul in Plovdiv, M. Degrand, may prove particularly significant in establishing their cultural background and date. 101 According to his account, he was originally shown 150 gold coins of three Komnenoi - Alexios I, John II and Manuel I - which had recently been found near Tatar Pazarcik, outside Izgirli. The local police chief subsequently confirmed the discovery of a large hoard of coins in the area, amounting to 25 kilos of gold. Many of these were melted down and sold in the markets of Tatar Pazarcik and Plovdiv, while according to a local policeman around 250 coins were dispatched to the museum in Sofia. 102 Other precious objects were found in addition to the coins: a gold cross, a small silver vase and "dix plats en argent massif", which Degrand was told had been immediately sold in Plovdiv and melted down by the buyer. Subsequently the consul visited the purchaser of the plates, who confirmed that he had melted down some, but had kept three to be traded as antiquities, and these were later acquired by Schlumberger. The consul did not fail to visit the find spot of the hoard, a hill with the ruins [...] Choniates, who served as governor of the city when Frederick Barbarossa passed through. 103

The thirteenth plate: the identity of the owner

We have thus far been examining the nine dishes of the new treasure and the three pieces of the Izgirli treasure as a potential single group, taking into consideration their common characteristics and the information as to their common provenance. The problem of identifying these twelve dishes takes on a new turn with the evidence of one further dish, which today belongs to the same private collection as the nine presented above but is not being offered for sale (fig. 22). According to the owner, the dish was bought by his father together with the others and comes from the same find. It is preserved in an excellent state, and its shape is similar to dishes nos 3 to 9, though the diameter is smaller (24 cm). The dish bears in its centre as sole decoration a circular inscription framed by pairs of engraved lines. The inscription is written in literary Greek, in capital letters, and reads: +K(YPI)E ΒΟΗΘΕΙ KONCTANTINO ΠΡΟΕΔΡΟ ΤΩ ΑΛΑΝΩ (Lord help Constantine Alanos, Proedros). Palaeographic evidence tells us that the form of the letters 104 - e.g. the Κ with the curved ends, the Ω with the closed, curving extremities and the closed shape of the C and the E - occurs in late 11th and, especially, 12th century inscriptions on works of art 105 and wall paintings. 106 Similar forms of lettering occur in manuscripts with 'epigraphische Auszeichnungsmajuskel' script, which also date from the same period. 107 The person mentioned in the inscription was probably the owner of the whole group of plates. The office of Proedros was one of the highest-ranking in the 10th century, but by the 11th it had become less exclusive. From the middle of that century the title was regularly conferred on members of the military aristocracy, but no references can be found after the mid-12th century. 108
The office was also an ecclesiastical one which is frequently mentioned on lead seals throughout the 13th century, 109 but the inscription on the plate does not suggest that Alanos was an ecclesiastic, as it lacks the conventional reference to the diocese where he was serving.

The treasure and its historical context

The information and comparative material contained in this article indicate very clearly not only the many associations between the dishes and Byzantine and Islamic works, but also the parallels with certain 12th-century items from the Crusader East. The adoption and appropriation of Islamic motifs in Byzantine art is a well-known phenomenon which is mentioned in Byzantine sources and can be observed in specific works of art. An earlier phase in the relationship between Byzantium and Islam might be characterised in terms of the diplomatic missions between the courts and the exchange of rare and precious gifts described in the sources. 116 In the words of Oleg Grabar, this is a "shared culture of objects" involving rulers and highly placed court officials. 117 The celebrated story of Constantine Porphyrogennetos gazing in admiration on an Arab bowl in the privacy of his apartments is sufficient to indicate the beginnings of the social acceptability of Islamic art. 118 By the 12th century, however, the world of the Eastern Mediterranean had expanded, resulting in a proliferation of political centres and the development of the market economy, and Byzantium was now part of the wider framework of commercial intercourse between Italian maritime cities, the Crusaders and the Muslims. 119 The direct consequence was the development of provincial urban centres, the increased level of exchange and the widespread distribution of products which were no longer restricted to the exotica destined for the emperor's Cabinet of Curiosities. 120 By now the links with Islamic art do not reflect court taste alone, and the "shared culture of objects" involves not only the imperial milieu but also the humble Corinthian potter. Islamic motifs become part of the general vocabulary of Byzantine art used by the rising local aristocracy and the middle classes, both in parallel with and as alternatives to explicitly Christian and Byzantine themes.

By this time the relationship between Byzantium and the Islamic world is not defined in terms of diplomatic missions and border incidents, but by continuous contact and co-existence within Asia Minor. The movement of an active dynamic from Byzantium to the Sultanate of Rum and back again is illustrated by the flow of claimants to the throne and of disaffected high officials, who became converts either to Islam or to Christianity according to circumstances. At a different social level large numbers of Turkish mercenaries were enticed into joining the ranks of the Byzantine army, and many of these ended up settling on Byzantine soil. 121

But relations between Byzantium and Islam are only one aspect of the mosaic of the 12th-century Eastern Mediterranean. The catalytic role was undoubtedly played by the bold thrust of Christians from the West - the Italian maritime commercial states and the Crusader champions of the faith. Western mercantile communities had a strong presence in Constantinople, 122 but it may also be significant that during their progress through the territories of the Byzantine empire the Crusaders exchanged silver vessels in their transactions with the local money-changers. 123
Unfortunately we do not know what type of silverware this was - whether it was brought from their country of origin or appropriated en route as part of the spoils of war from the regions through which they had passed. 124 In Byzantium this osmosis of multifarious cultural elements had its origin in, and was reinforced by, the official political ideology of the Komnenoi in the 12th century, best illustrated by the brilliant career of Manuel I (1143-1180). 125 Tournaments organised by the emperor according to the practices of Western chivalry, the erection of Islamic buildings in Constantinople and state visits by Seljuk and Western rulers were all notable events which bore witness to the new spirit. 126 This ideology and the secular activities of the imperial court are reflected in contemporary historical and literary texts. It has been plausibly suggested that the account in Vat. Gr. 1409 of a western-style tournament focusing on the figure of the Byzantine emperor may be an ekphrasis, a description of an actual representation of the subject. 127 The description of the heroes who take part in the tournament could equally well be applied to the hunters on the two dishes: dressed in a short thigh-length chiton, and with their himatia swirling behind them, they brandish their spears and shields. Conversely, the depiction of the exploits of the Seljuk Sultan on the walls of the residence of Alexios Axouch - instead of those of the Byzantine emperor - might later have been used to support accusations of treason against its owner, but at the time of its execution it would have been viewed as acceptable, if unprecedented, by the imperial circles in which Axouch moved. 128 As regards the depiction of the Sea, a literary parallel can be found in the celebrated romance of Hysmine and Hysminias, where Hysmine gives an eloquent description of her escape from a shipwreck by riding naked on a sea monster. 129

The literary work which epitomises the contemporary heroic, aristocratic spirit is unquestionably the poem of Digenes Akritas. 130 The hero who, as his name indicates, was himself the offspring of two races - with his prowess on the battlefield and in the chase and the luxurious magnificence and the romantic eroticism which governed his daily life - could be directly associated with the ideal portrait of the 12th-century Byzantine emperor. 131 At the same time the descriptions of banqueting and hunting scenes, of heroic exploits and romantic episodes between Digenes and his wife Eudokia are the literary parallels to the iconography on the silver cups from Beriozovo, Nenetz, Vilgort, Chernigov and the former Vasilevsky collection. 132

The dishes discussed here are primarily products of the common aesthetic and the mixed iconographic vocabulary which developed in the Eastern Mediterranean in the 12th century. Their notable thematic variety and multifarious ornamentation, when combined with the epigraphic evidence of the last dish and the information provided by the Izgirli hoard, point strongly to a provenance in the environment of 12th-century Byzantium. The inconsistencies noted in the quality of the execution of the dishes suggest that they were probably produced by different craftsmen or workshops. In any case we have no reason to believe that the modern concept of the uniform "set" or "dinner service" had any place in the aesthetics or the practices of the Byzantine era.

APPENDIX: CATALOGUE

1. Footed plate: diam. 29 cm. h. 5.5 cm. h. of foot 3.2 cm. weight 846.9 gr. Condition good. All the decorative features display traces of the original gilding. The underside of the foot has an incision in one place, which has not resulted in damage to the metal.
The plate has the shape of a flat bowl with an integral raised rim decorated with alternating crescents and half-lozenges. The central medallion shows a hunting scene with a horseman armed with a kite-shaped shield and a spear; below the horse's feet a hunting dog pursues a hare whose head is turned backwards. The figures are flanked by stylised bushes and trees. The medallion is encircled by a band of undulating stalks with trefoil offshoots. The representations have a dot-punched background. The edge of the interior is surrounded by a decorative band of six pairs of running animals interspersed with vegetal scrolls enclosing two palmettes. The animals on the band are, in order: lioness and antelope; dog and hare looking backwards; dog and fox with bushy tail looking backwards; lioness and antlered deer; dog and hare clutching a leaf in its mouth; griffin and horse. Above and below the animals are highly stylised palmette sections and droplet motifs. The cylindrical foot was formed separately by hammering and then attached to the underside of the bowl. At the centre of the underside the turning point where it was attached to the lathe is visible.

2. Footed plate: diam. 26.5 cm. h. 5.4 cm. h. of foot 3 cm. weight 757.4 gr. Condition good. The gilding is badly damaged, and is barely visible on the central medallion. Letters of the Greek alphabet are crudely scratched mainly on the underside of the plate. Cruciform marks on the main surface. The central medallion is decorated with a vegetal ogival network forming heart shapes on the upper and lower sides and containing pointed multi-lobed leaves facing in two directions. The medallion is surrounded by a band of undulating stalks with rounded offshoots enclosing trefoil leaves. The background is dot-punched. The engraving generally has a rather flat appearance, which may result from the original execution or from extensive usage. The edge of the interior is encircled by simple engraved lines. The integral ornamental raised rim and the foot resemble those of plate no. 1; here too the turning point of the lathe is visible on the underside.

3. Dish with flat base and low rising sides: diam. 25.8 cm. h. 4.3 cm. weight 613 gr. Condition excellent. The gilding is not visible to the naked eye. The decoration of the central medallion is a condensed version of the hunting scene on plate no. 1, without the dog and the hare. There are small discrepancies in the position of the spear and of the heads of the rider and the horse.

4. Dish with flat base and low rising sides: diam. 32.6 cm. h. 5 cm. weight 964.4 gr. The largest of the nine plates. The sides have cracked in places and been repaired. The central medallion contains a depiction of the Sea personified as a partly nude woman, riding on a sea monster with an oar and a boat in her hands. She is flanked by four fish. The medallion is surrounded by a three-ply chain band, and the background is ring-punched. The interior of the dish is edged with a continuous band of heart shapes enclosing trefoil palmettes. On the exterior, just below the rim, is a narrow band with undulating stalks and half-palmette offshoots. All the ornamentation is gilded.

5. Dish with flat base and low rising sides: diam. 29.2 cm. h. 4.6 cm. weight 1043.1 gr. Later engraving can be found on both sides of the dish, including an undecipherable cursive inscription. The central medallion is decorated with four undulating stalks which sprout from a circle decorated with four 'winged' split leaf palmettes. Spiky acanthus leaves and small spiral shoots grow from the stalks, and at points where the stalks divide they are encircled by a thin ribbon. Particularly noteworthy are the four split leaf palmettes attached to the outer perimeter of the circle and the four small comma-shaped leaves on the inside. The meticulous design was executed with the aid of a compass, as is indicated by the mark at the centre of the circle, though the vegetal ornamentation is two-dimensional and unshaded, and is defined only by the engraved contours. The background is covered by dense and notably precise ring punching. The medallion is surrounded by a narrow band of herringbone ornamentation. On the exterior below the rim is a plain gilded band. On the underside of the plate is a jagged mark caused by the removal of metal in a goldsmith's workshop, a standard method of testing the alloy in Ottoman times.

[...] The central medallion is ornamented with a six-pointed star formed from two interconnected triangles and with six semi-circles. The sides of the triangles have in their centre a characteristic indentation which produces rhomboid and parallelogram motifs emanating from and interwoven with the triangles. The medallion is edged with guilloche, which is in its turn edged by a series of twelve inverted semi-circular arcs, with five-lobed palmettes at their junction points. The background is dot-punched. The plate is encircled by a band of S-shaped undulating stalks terminating in two half-palmettes. This band is edged with semi-circular arcs terminating in five-lobed palmettes, which in the undecorated area of the plate alternate with the palmettes on the arcs surrounding the central medallion. On the exterior of the rim there is similar decoration with arcs terminating in trilobed palmettes. The gilding on all the motifs is well preserved; the most notable feature [...]

Fig. 1 a-b. Footed plate no. 1 with a mounted huntsman and running animals. Private collection (photos: Sp. Delivorrias).
Fig. 4 a-b. Dish no. 4 with the personification of the Sea riding on a sea monster. Private collection (photos: Sp. Delivorrias).
Fig. 10 a-b. Bowl with courtly scenes and the mounted figure of Saint George in the inside, from Beriozovo, 12th century. Saint Petersburg, The State Hermitage Museum no. ω3 (photo: courtesy of The State Hermitage Museum).
Fig. 20 a-b. Footed plate from the Izgirli Treasure with fantastic animals, 12th century. Paris, Cabinet des Médailles (photo: courtesy of the Cabinet des Médailles).
Fig. 21 a-b. Dish from the Izgirli Treasure with a star interlace, 12th century. Paris, Cabinet des Médailles (photo: courtesy of the Cabinet des Médailles).
2019-06-13T13:18:02.964Z
2018-08-10T00:00:00.000
{ "year": 2018, "sha1": "0cc60047c9cb93774012e6e0a06a07772f39e86d", "oa_license": "CCBYNCSA", "oa_url": "https://ejournals.epublishing.ekt.gr/index.php/benaki/article/download/18209/16173", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "579dc427dc92eb4a70c8c9caca3e57bd30e3ccf0", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
154609837
pes2o/s2orc
v3-fos-license
The Case for Microcredit: Does It Improve Maternal and Child Health and Wellbeing?

There are about three billion people, half of the world's population, living on an income of less than two dollars a day. Among these poor communities, one child in five does not live to see his or her fifth birthday. One study in 2006 showed that the ratio of the income between the 5% richest and 5% poorest of the population is 74 to 1 as compared to the ratio in 1960, which was 30 to 12. To enhance international development, the United Nations Organization (UNO) announced the Millennium Development Goals, aimed at eradicating poverty by 2015. According to the World Bank's (1990) definition of poverty, "A condition of life so characterized by malnutrition, illiteracy, and disease as to be beneath any reasonable definition of human decency". The complexity of poverty demands an equally multidimensional solution emphasizing reducing unemployment and infant mortality, maintaining essential healthcare, sanitation, food, nutrition and basic hygiene, and establishing gender equality. Researchers posit that it is possible to achieve the above development goals if disposable income (especially of the poor) is increased. One of the main avenues of increasing disposable income of the poor in developing nations is through the use of microfinance and microcredit.

INTRODUCTION

There are about three billion people, half of the world's population, living on an income of less than two dollars a day. Among these poor communities, one child in five does not live to see his or her fifth birthday. The ratio of the income between the 5% richest and 5% poorest of the population is 74 to 1 as compared to the ratio of 30 to 12 in 1960 [1]. To enhance international development, the United Nations Organization (UNO) announced the Millennium Development Goals, aimed at eradicating poverty by 2015 [2]. It is possible to achieve the above development goals if disposable income (especially of the poor) is increased [3]. One of the main avenues of increasing disposable income of the poor in developing nations is through the use of microfinance and microcredit [4].

Bangladesh is often viewed in most microcredit and health literature as a 'test case for development' [5]. Several dozen NGOs and international organizations operate in the country, including ICDDR,B (International Center for Diarrheal Disease Research, Bangladesh) and BRAC (Bangladesh Rural Advancement Committee), which have been collaborating for almost 25 years. ICDDR,B operates a demographic surveillance system and MCH-FP (maternal child health-family planning) programs in various districts. BRAC is an indigenous non-governmental organization involved in promoting welfare and development in response to the mass migration and resettlement of refugees in northeastern Bangladesh following the civil war [6]. The NGO has been focused on the fundamental goal of poverty alleviation since its inception in 1972, and BRAC's RDP (rural development program) is an integrated, multi-sectoral initiative involving institution building, functional education, saving and group trust funds, credit disbursement, and training in income and employment generation activities, legal literacy and non-formal primary schooling. The RDP organized the rural poor into groups who work as instruments for development of human resources and occupational skills. Group members are encouraged to take on income generating activities facilitated by BRAC's credit program [7].
A joint research project between BRAC and ICDDR,B was initiated by researchers from BRAC to (1) evaluate the extent to which socioeconomic development engineered through microcredit might enhance the MCH-FP program effectiveness and (2) draw on ICDDR,B's demographic surveillance system to determine the impact of RDP on community well-being [8].

Underlying socioeconomic development policies and programs are assumptions about their presumed benefits for raising health status and human well-being [8]. Marked gradients in socioeconomic differentials have been noted in life expectancy by income, education and occupational class, for many different diseases and in diverse populations [9]. However, the majority of studies investigating the relationship between socioeconomic development and health are either cross-sectional or conducted as trend analyses, making it difficult to explore the intervening pathways and mechanisms that link socioeconomic development, health and well-being. Some research suggests that income tends to be related to health through a direct effect on the material conditions necessary for biological survival and through an effect on social participation and the opportunity to control life circumstances [10]. A twenty-five year follow-up from the Whitehall studies [11] found that while there is no evidence of a threshold, there seems to be a clear gradient in mortality for the general population that runs from the least to the most deprived. A framework developed by UNICEF identifies poverty as a key element in a decreasing quality of life [2]. Additionally, pathways between increasing economic development and health status have been hypothesized by a number of researchers. Sen's capability approach [12], Grossman's health production theory [13], and Mohindra and Haddad's conceptual framework all explore the linkages through which increased economic and microcredit activities impact health outcomes, especially for women in developing countries [14].

This background paper is a rapid synthesis of some current evidence on linkages between microcredit and women's health, with a centralized focus on reviewing the BRAC working paper series from Bangladesh. It will first review the linkages between household income and microcredit, then synthesize existing literature, including literature from BRAC, on the relationship between income and health with a focus on women, and finally look at the ways that microcredit might have a positive effect on health outcomes for women. Table 1 provides a synthesis of selected papers from BRAC and assesses their methodology and results. The papers explore a number of themes crosscutting the gamut of research on microcredit and examine the collection of data/baseline information on the demographic surveillance system (DSS) variables, gaining insights into concepts of illness and their causes from women's perspective and corresponding social and family attitudes, identification of factors/inputs (such as microcredit) and institutions responsible for creating health/women's health outcomes, and testing of hypotheses on the better health status of members of RDP programs which can justify continuity of the BRAC initiatives. These multifaceted objectives would enable investigators/researchers to take a holistic view on the importance/justification of the continuation of BRAC-ICDDR,B linkages and to assess the impact of economic development programs such as microcredit on income and health outcomes.

RESEARCH METHOD

We conducted a systematic review of the BRAC-ICDDR,B Joint Research Project Working Paper Series.
The series contained 32 working papers, out of which we selected only papers that examined or had references to maternal and child health (n=13). Criteria for evaluating the studies were determined before reviewing the articles. We developed a checklist based on the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) criteria [15]. In contrast to the CONSORT guidelines for reporting randomized trials, TREND guidelines emphasize more detailed reporting of theory use, descriptions of interventions and possible comparison conditions. Reviewers completed a TREND checklist for each article, and the analysis for selected TREND criteria is provided in Table 1.

Linkages between microcredit and income

Ever since the inception of Grameen Bank, microfinance programs have been used to target and increase disposable incomes among the poor. In the past decade, microcredit has been a development stalwart in underserved countries. In general, microcredit is a term used to describe programs that offer access to small loans, financial literacy, and social support. The concept of microcredit has evolved, and terms like microfinance, microenterprise, and micro-lending all represent some level of access to financial and/or social resources. Anecdotal evidence exists to suggest that microfinance can make a difference in the lives of those served; however, rigorous quantitative evidence on the nature and magnitude of microfinance is still lacking [16]. A systematic review by Duvendack et al. found that a vast majority of studies on microfinance are methodologically weak and have insufficient data [17], and Stewart et al. further found little evidence to suggest that microfinance has a large impact on poverty [18]. Both these reviews focused on studies that relied heavily on RCT (randomized controlled trial) designs. It can be argued, however, that RCTs may not be the best approach to determine complex relationships in an interconnected system, and for a broader picture researchers need to embrace other methodologies [19]. Economists have long posited that participation in microcredit programs improves economic wellbeing (of the poor) by increasing income, building assets, decreasing economic inequalities and enhancing capacity for success, but these variables might not have been measured in the RCTs. The TREND reviews from Table 1 demonstrate strong correlations between microcredit programs and a general increase in disposable income and savings, especially among women [5,8,20]. The women in the BRAC program often save money in the traditional way and 'know the value of savings' [21]. In addition, according to female BRAC members, RDP savings, credit and training programs provided the means to engage in and diversify remunerative activities and support their husbands' income generating activities [22]. Most women also perceived related increases in their influence over household decision making. In addition, group interviews among participating men elicited that men are often humiliated at the prospect of borrowing money from friends, neighbors or the local Mahajans (money lenders). Becoming BRAC members not only saves them from approaching others, but many times the wives borrow money from the program and the men do not have to approach anyone at all [21].
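As an aside on the checklist step described under Research Method above, the following is a minimal, purely illustrative sketch of how per-article TREND-style checklist judgements could be tallied; the criterion names, paper identifiers and values are hypothetical placeholders, not the actual items or results of this review.

from collections import Counter

# Hypothetical subset of TREND-style reporting criteria (placeholders).
CRITERIA = [
    "theory_described",
    "intervention_described",
    "comparison_condition_described",
    "outcomes_defined",
]

# Hypothetical reviewer judgements: True means the criterion was reported.
reviews = {
    "working_paper_A": {"theory_described": True, "intervention_described": True,
                        "comparison_condition_described": False, "outcomes_defined": True},
    "working_paper_B": {"theory_described": False, "intervention_described": True,
                        "comparison_condition_described": False, "outcomes_defined": True},
}

def summarise(reviews):
    """Count how many papers satisfy each criterion."""
    tally = Counter()
    for checklist in reviews.values():
        for criterion in CRITERIA:
            if checklist.get(criterion, False):
                tally[criterion] += 1
    return tally

counts = summarise(reviews)
for criterion in CRITERIA:
    print(criterion, ":", counts[criterion], "/", len(reviews), "papers")

In practice each reviewer would complete one such checklist per working paper and the tallies would feed a summary table of the kind described above; the sketch only illustrates the bookkeeping, not the review's actual scoring.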
Linkages between income and health outcomes

The TREND analysis of the BRAC working papers from Table 1 further found instances of relationships between economic health and health outcomes. Economic health is one of the many inputs that determine health output and status (others include biological, psychological, cultural and social inputs) and has to be modeled with other inputs to have a significant effect on health [23]. Others suggest that while some linkages exist between income and poverty alleviation from an economic perspective, the all-encompassing nature of poverty demands that we understand how improvements in income also improve the lesser measured or quantifiable psychosocial relationships such as health status, social inferiority, isolation, powerlessness, humiliation and accepting low status work [7]. Other suggested mediators between income and health were (a) functional education, (b) health literacy, (c) increasing child education, and (d) establishing a primary healthcare program [5].

Linkages between microcredit, income and women's health outcomes

While microcredit interventions are not explicitly designed to have an impact on health, a few practical microcredit/microfinance models, such as the Grameen Bank model of microfinance, posit that economic and social poverty (which includes poverty of health) go hand in hand and should thus be tackled simultaneously [14]. The relationship between poverty and ill health has been characterized as synergistic and bidirectional: poverty confines the capacity to produce health, and ill health leads to further impoverishment, diminishing the potential of individuals and households to improve their economic status. There is a growing recognition that poor health is a dimension of poverty; therefore, one potential result of poverty reduction is progress in the health of the poor. An increase in microcredit activity has been linked to improvements in socioeconomic status, poverty alleviation and increased empowerment for women through an increase in individual income levels [5]. Previous empirical evidence from developed countries exists to suggest that women tend to allocate a larger share of their income to meet the health and nutritional needs of household members, especially children [24,25]. Nevertheless, there seems to be a conceptual 'black box' [23] surrounding the pathways through which increases in income produce health change, and researchers need to continue to 'unpack' how an improvement in SES leads to an improvement in health status. This is an especially important question in developing countries where microcredit programs have been flourishing. Mahbub, Mayeed and Roy presented some self-reported evidence that suggests strong linkages between microcredit activities and increases in income for women [26]. The key question is whether the women are using the extra disposable income to augment their own and their children's health status. If strong pathways and positive relationships exist between increasing microcredit lending and health status, the microcredit model may be used to reduce health inequities, beginning with maternal and child health.
The 1995 study conducted by Scott, Evans and Cash [23] to examine the impact of BRAC's socioeconomic interventions, including microfinance activities, on the wellbeing of the rural poor uncovered that, although a wide variety of scales and measures exist in the BRAC interventions that measure 'ill-health' such as morbidity and mortality, there are no indicators that measure 'health' outputs. Chowdhury and Bhuiya hypothesized several pathways linking the various BRAC rural development programs to improvements in health status [8]. Specifically, they hypothesized that increases in household credit would lead to an income increase and a secure household livelihood with decreased vulnerability, equitable intra-household food distribution and greater coping capacity. A second pathway linked credit programs and other income generating activities to an overall improvement in household socioeconomic status. Greater available household income may contribute to better environmental conditions within the household, permit greater spending on curative illness episodes and preventive health, improve food supply and nutrition, and increase access to and use of good quality health care services provided by BRAC and other agencies. These income effects may enable earlier illness detection and management, timely referral to healthcare facilities, improved nutritional status and higher coverage of preventive health care services. In addition to physical health, there might be pathways linking successful microcredit activities to mental health [8]. Bhuiya and Chowdhury further hypothesize that participation in RDP will benefit households by increasing women's ability to respond to illness episodes and manage severe illness within the family, and suggest that this process will be mediated through a reduction in gender disparity, improved husband-wife communication and greater female participation in household decision making processes [5]. Some anecdotal evidence from the BRAC working papers suggests that in Bangladesh, an increase in microcredit lending in rural sectors of the country has led to an increase in social capital among women [22,27]. One of the confounding factors in determining the association between increases in microcredit loans among women and positive health outcomes is the role and depth of engagement in public participation. Substantial research exists to show that participation in the public sphere, with or without access to microcredit, may improve quality of life for women. Some examples of the positive outcomes associated with participation for women in developing countries include increased levels of contraceptive use and knowledge of family planning, based on survey data from three development agencies in rural Bangladesh [28]; an increase in women's feelings of empowerment, based on eight indicators related to women's roles and status within the family and community using a multi-cluster design in four locations in Bangladesh with women participating in two development agencies [29]; a reduction in domestic violence, suggestive of increased public visibility and social support, in Bangladesh [30]; and improved health literacy related to media exposure and education, and a positive impact on the nutritional status of participants and their families [31]. However, summary data from all BRAC studies suggest that while being female raises the odds of becoming a BRAC member by 24%, women in general borrow much less than men and are not engaged as actively as their male counterparts [7].
Therefore, the income that women generate may not be enough to invest in healthcare, especially preventive healthcare. Additionally, Adams et al. also discovered through participatory research that, unlike in most countries, in Bangladesh men tend to be primarily responsible for major health decision-making in the household [22]. Women's involvement in health decision-making tends to be restricted to minor illnesses, or to times when their male counterpart is absent. Further understanding of women's health status in Bangladesh also needs to take into account women's perceptions of illness for themselves and their children. An exploratory study of women's perceptions of illness found that women describe themselves as ill only when they can no longer work and are bedridden [32]. This perception might pose substantial problems for preventive health education. Along with the structural and institutional availability of medical services, these factors serve as major barriers to improving women's and children's health. Even if a woman's income increases as a result of microcredit interventions, she might not use that income for preventive healthcare, and her health status might therefore remain as before [32].

Description of research: The paper describes how the study aims will be accomplished. (1) Program process development begins with identifying households of the target group; a program organizer (PO) discusses problems and initiates the formation of village organizations; members begin a savings program, are gradually encouraged to take on income-generating activities facilitated by BRAC's credit program, and elect a management committee for the village. (2) It creates a health status model in which the production of health/illness is considered to be based on simplified health inputs giving rise to health outputs. (3) It explores methods for examining the mechanisms through which health interventions produce health outcomes. (4) It proposes further research to understand the mechanisms by which health interventions produce health outcomes. The proposal contains a number of hypotheses related to health, healthcare access and women's health. The researchers propose a number of small-scale studies to gather in-depth information explaining the mechanisms of the impact of RDP on women's lives. They argue for considering health output measures of 'health' in addition to measures of ill health such as morbidity and mortality: a particular intervention may show no association with morbidity or mortality even though people consider themselves healthier, so health indicators need to include self-reports based on individuals' perceptions of their health status; measuring morbidity and mortality alone often provides no specific information for assessing the effectiveness of interventions.

Comment/qualities: There is a rich empirical data source that can be mined from the DSS database to help researchers assess variables that mediate and moderate the linkage between income and health. The paper provides an interesting conceptual framework for considering health inputs: the moderating variable arising from predisposing and responding factors is hypothesized to be the interpretation of health outputs through an individual's socio-cultural lens, which determines where the individual falls on the spectrum of health and illness. The paper lays out the hypothesized linkages between microcredit and health.
CONCLUSION AND IMPLICATIONS FOR THE FUTURE

While these and other moderators and mediators of the association between income and health have been hypothesized, more studies using rigorous methodology need to be conducted. To understand the relationships between income and health in developing countries, we need to focus on the simultaneity, as well as the two-pronged relationship, between the country-level income-generation process (through programs like microcredit), especially among the poor, and their health status, and to identify the factors and control variables that promote or inhibit the strength of this two-step relationship. Such findings will help to formulate policies and ascertain the overall availability of the material and social resources that can enable the poor to enjoy quality healthcare. These findings, based on rigorous research, can further extend to non-governmental activities such as the introduction of microcredit and microfinance by organizations other than BRAC. A key element in decreasing social poverty and ill health among the poor is to improve maternal and child health (MCH) outcomes within underserved countries. One of the largest differences in health indicators between developed and developing countries lies in maternal mortality and morbidity rates: the vast majority of the 529,000 women who die each year from complications of childbirth live in developing countries [2]. Poor maternal and child health has remained pervasive and damaging to overall improvements in quality of life in low- and middle-income countries [3]. The health of mothers and children is closely related to the general health of the community, and measures that bring about improvements in general health also tend to produce improved maternal and child health. In addition, rapid population increases stemming from early marriage and lack of family planning can have further negative effects on health and development; however, these can be mitigated by spurring economic development, especially among women. A vital component in defining women's empowerment has been assessing women's influence over household spending on family well-being. The 2001 Nepal Demographic and Health Survey found that "[w]omen who are employed and earn cash have more say in household decision making than women who do not work and women who work but do not earn cash income" (p. 47); this included decisions about their own health care [33]. The assumption that increasing maternal empowerment through income and education leads to improvements in child health and survival is widespread and has been incorporated into many policy documents. However, this assumption has not been tested in well-controlled intervention studies, and further independent research needs to be conducted to test the hypotheses set out by the BRAC papers. It is also conceivable that BRAC-facilitated socioeconomic development (especially microcredit) may have negative effects on the health status of young children. Women's participation in employment and other activities may involve leaving the supervision of small children to other caretakers less able to respond to their particular health needs, such as breast-feeding or the preparation of energy-dense weaning foods [34,35]. Therefore, interventions tackling women's empowerment also need to focus on 'collective empowerment' and not just individual empowerment.
This can be accomplished through a number of viable and low-cost methods, such as the establishment of community centers or the provision of microcredit loans to women to start low-cost day care for other women. There has also been critique of the myopic focus on the positive outcomes of participation in microcredit while minimizing issues such as loan control and misuse by male members of households (Goetz & Gupta, 1996); concern about the best interests of the participants, including increased workloads and responsibilities and financial sustainability over time [36]; criticism that the programs have difficulty reaching the most vulnerable populations, whether related to choice or exclusion [37]; apprehension about gender and power relations and the social and cultural constraints placed on women in and outside the home, which can lead to poor outcomes [34]; an association between health decline and business failure [38]; concern about the overuse of 'empowerment' for women related to participation [39]; and difficulty in discerning the aspects of the programs that lead to positive outcomes [40]. Future interventions need to be developed in a way that addresses these legitimate issues and concerns. In addition, quite apart from BRAC's socioeconomic development interventions, other background factors can also influence the direction, velocity and nature of possible pathways of change in well-being, and these confounding variables need to be accounted for when discussing the impacts of microcredit on the health and wellbeing of any community, not just maternal and child health. For example, urbanization (Islam 1990), modernization and the diffusion of new ideas, sectoral transformation [41], and increasing poverty [42], as well as regional differences, are key variables that can affect population health. Further studies of BRAC data need to rigorously control for these factors to understand which pathways are the most significant. Furthermore, based on the current literature, microcredit/health research could utilize several existing theories and engage in additional theory development. For example, critical social theory, which addresses power and privilege from a historical and social perspective, would support an upstream-thinking approach to discover systems and behaviors that limit opportunities and create barriers for women to receive and use microcredit [43]. Chaos theory, which posits that small changes during a sequence of events can alter outcomes in a system and that order can be found within seemingly chaotic patterns [44], would support a social-ecological approach to identifying pathways and evaluating changes related to health and low-income women. Extricating the influence of individual pathways in a mechanism as complex as health status is a daunting task. Nonetheless, a determination of the inputs and variables that increase health and wellbeing, especially maternal and child health and wellbeing, should be undertaken. While BRAC has undertaken substantive research on microcredit, a key question remains: what are the pathways through which microcredit can influence health outcomes, so that microcredit can be used as an effective instrument for improving health status? The following concept may be helpful in logically formulating a 'model' for undertaking rigorous policy research. A 'demonstrative' econometric framework can establish the relationship between microcredit and health outcomes and assist in the identification of instruments for strengthening that relationship.
There can be a four-step relationship between microcredit and women's/children's health outcomes, which can be conceptualized by the following system of functional forms:

(1) Income = f(Microcredit, education and skill, health, other relevant local variables);
(2) Consumption of health goods and services = f(Income, availability of health goods and services / state of health infrastructure, cost of health services);
(3) Consumption of health goods and services by women/children = f(Total consumption of health goods and services, appropriate variables representing women's empowerment);
(4) Appropriate status indicator of women's/children's health = f(Consumption of health goods and services by women/children, food consumption/nutrition of women/children, sanitation, time spent by women on work that keeps them away from children).

There are two notable features of this system. First, there is simultaneity between health and income (equations 1 and 2). Second, in addition to the primary independent variables (microcredit in equation 1, income in equation 2, total consumption of health goods and services in equation 3, and consumption of health goods and services by women/children in equation 4), there are a number of auxiliary variables (education in 1; health infrastructure and the cost of health services in 2; women's empowerment in 3; nutrition and the time women spend on work that keeps them away from children in 4). The auxiliary variables modify, positively or negatively, the strength of the relationship, i.e. the elasticity, between the health of women and children and the primary variables such as income or microcredit. These elasticities, when estimated properly, will give very useful policy guidance if microcredit is to be used as a potent instrument for improving the health status of women and children. The data from BRAC research offer a unique opportunity to examine the impact of microcredit before and after intervention, and such data sets provide researchers with the prospect of conducting continuous rigorous research in the country.

Muhiuddin Haider, PhD is a Research Associate Professor at the University of Maryland. He is a highly skilled public health professional who has managed and led diverse public health projects and research studies in more than a dozen countries worldwide over thirty years, on behalf of several international agencies and universities. He has research expertise in the areas of health communications, health promotion, health education, and social marketing.
Assessment of anthropometric measurements and body composition of selected beginner South West Ethiopian soccer players

This study attempts to determine the anthropometric measurements and body composition of beginner soccer players from south west Ethiopia, taking playing position into account. Three soccer teams were selected using a cluster sampling technique, giving a total of forty-eight players, all under seventeen years of age. Depending on their playing position, players were classified into four categories: goalkeeper (GK), defense (DF), midfielder (MF) and striker (SK). To achieve the stated purpose, a cross-sectional descriptive research design was employed. The International Society for the Advancement of Kinanthropometry (ISAK) protocol was used to measure the following anthropometric variables: weight measurements, n = 1; girths, n = 10; lengths, n = 6; skinfolds, n = 2 (body fat% and lean mass); and body mass index, n = 1. The data were analyzed with SPSS version 19, and the level of significance was set at P < 0.05. The findings show significant differences in anthropometric measurements among players based on their playing position. DF players possess larger lower-limb, anterior-body and upper-limb girths, whereas SK players exhibited smaller lower-limb, anterior-body and upper-limb girths. In lower-limb length, GK and SK have longer lower limbs, while in the upper limb GK have the greatest length. GK (176.6 cm) are taller than all other players of the team, while MF (170 cm) are shortest in overall body height. There is no significant difference in BMI (GK 19, DF 20.4, MF 18 and SK 18), lean mass or body fat%, but GK (64.2 kg) are heavier and SK (58 kg) lighter, whereas DF (60.7 kg) and MF (60 kg) players are intermediate in weight. The mean anthropometric measurements of south west Ethiopian youth soccer players were slightly lower than those of top world-class players of a similar age group. The lack of significant differences among playing positions in BMI, lean mass and body fat% suggests that coaches are not giving players position-specific training. In this study, within-position variation was quite large in some cases, which could indicate that a team that does not have the opportunity to hand-pick players based on anthropometric characteristics may be at a disadvantage; soccer coaches and sport science professionals should therefore take into account the principle of morphological optimization when detecting, identifying and selecting talented soccer players.

INTRODUCTION

Soccer is one of the most popular team sports and is characterized by high-intensity, short-term actions and pauses of varying length (26). To succeed in soccer, players need the optimal combination of physical quality, technical and tactical ability, and mental motivation (4). Because of the financial benefit of being able to promote talented players from the youth ranks into the senior first team, sports scientists play a crucial role at professional soccer clubs in identifying and developing future players. Information concerning the anthropometric and performance characteristics of players of varying age therefore has application to a large population, particularly coaches and sports scientists (22).
Anthropometry is the branch of anthropology concerned with measurement of the human body. Here the definition is confined to the kinds of measurements commonly used to associate physical performance with body build. Anthropometry involves the measurement of external parts of the body, including body diameters, circumferences, heights and breadths (11). Indeed, many experts in the field, such as soccer coaches, managers and scientists, believe that success in this sport can be associated with the anthropometric characteristics of players, and some studies have focused on the relationship between the anthropometric profiles of players and their standard positions (12). The anthropometric quality of soccer players is a major determinant of their success, and it is associated with playing position: for example, a taller player is most suitable for central defensive positions, goalkeeping, and central attack (21,22). Moreover, specific anthropometric characteristics are needed to be successful in particular sporting events, and there are differences in body structure and composition among athletes involved in different sports. The process whereby the physical demands of a sport lead to selection of the body types best suited to that sport is known as "morphological optimization" (6).

In soccer, the importance of body composition for performance remains unclear; however, it is a primary concern in conditioning programs throughout a season at all levels of competition. Body composition measures are widely used to prescribe desirable body weights, to optimize competitive performance, and to assess the effects of training (24). A lower relative body fat is desirable for successful competition in most ball games, because additional body fat adds to the weight of the body without contributing to its force production or energy-producing capabilities, which means a decrease in relative strength (1,24).

In measuring this aspect of body composition, total body weight is divided into two components: lean body weight and fat body weight. Lean body weight includes muscle, bone and vital organs and is estimated from skinfold measurements (27). Physical characteristics and body composition are known to be fundamental to excellence in athletic performance (16), and specific athletic events require different body types and weights for maximal performance (2). Today it is widely accepted that top performance in soccer is achieved if a player possesses the basic body composition and anthropometric characteristics suitable for his or her position (goalkeeper, defender, midfielder or striker) (15). At present, athletes are selected for superior performance in any sport partly on the basis of physical structure and body size.

Developing good conditioning programs based on the specific morphological and physiological requirements of each sport is considered a key factor for success (3). Both anthropometry and body composition are related to soccer performance, but more clarity is still required on the anthropometric and body composition qualities of Ethiopian beginner soccer players, and this research was conducted to fill that gap. It is a topic of interest for exercise scientists, coaches, athletes, exercise physiologists and other specialists in sports and exercise science.
In fact, while there are many studies analyzing exercise and fitness, there has been little research on the anthropometric and body composition qualities and requirements of soccer, especially in Ethiopia. Currently, the enhancement of athletic performance is built upon critical study of human anatomy and physiology, modern approaches to nutrition, and scientific training based upon new findings and principles of investigation. This study concentrated on the assessment of the anthropometric measurements and body composition of beginner South West Ethiopian soccer players in relation to their playing position, although the outcomes of the research are not restricted to south west Ethiopia.

The research contributes to describing the anthropometric and body composition characteristics of soccer players so that coaches, players and managers can understand, formulate and implement effective coaching strategies. The study can also help the country's soccer federation to recruit skillful players based on their anthropometric qualities. In addition, it can serve as a basis for further in-depth studies of the problem, and the findings may serve as a reference for researchers conducting more advanced work.

The general objective of this study was to determine the anthropometric and body composition qualities of South West Ethiopian beginner soccer players in relation to their playing position; specifically, to examine the anthropometric qualities of south west Ethiopian beginner soccer players based on their playing position, and to assess the body composition of school-level soccer players of south west Ethiopia in relation to their playing position.

MATERIAL & METHOD

The source population for this study was male youths of the Bench Maji, Kaffa and Shaka Zones who represented their zones in the 2015 Southern Nations, Nationalities and Peoples' Region school soccer tournament. Healthy male school soccer players of these zones aged under seventeen years were the subjects of the study. The research was conducted in Mizan, Bonga and Masha towns over three months, from July 2015 to September 2015.

The researchers collected the primary data from body mass index measurements; three-site and four-site skinfold measurements; heights, including standing height, sitting height, arm, forearm and leg; girths, including forearm, thigh, calf, waist, chest, neck, wrist and ankle; and body weight. Secondary data were drawn from journals, books and magazines relevant to the research.

A descriptive research design with a cross-sectional method was used for this study. The research was conducted with forty-eight (48) under-seventeen male soccer players from the Kaffa, Bench Maji and Shaka districts (south west Ethiopia). The research focused on the anthropometric and body composition qualities of the best soccer players, selected from different towns to represent their zones, in relation to their playing position.
This research was carried out in accordance with the regulations governing research on human beings. The privacy of the participants was protected, permission was obtained from the authorized administrators of the zone sport offices, and signed consent was obtained from participants in advance by written letter. Ethical considerations included giving all participants clear information about the purpose of the study, the procedures to be used, and the potential benefits and possible risks of participation. Results were kept confidential, and no information was disclosed to anyone except the researchers and the assisting technicians.

Sample and sampling techniques

A cluster sampling technique was used to select three soccer teams, each consisting of sixteen to twenty-five beginner soccer players under seventeen years of age. The sample size for this study was forty-eight. Permission was first obtained from the authorized administrators of the zone sport offices. All players were then asked to complete a medical history questionnaire, prepared to identify whether they were free from acute and chronic sports injuries and any anatomical impairment.

Variables

Heights: standing, sitting, arm, forearm, medial and lateral tibia. Girths: ankle, calf, thigh, hip, waist, chest, arm, forearm, neck and wrist. Body composition: three-site and four-site skinfold measurements and body mass index. Body weight.

Anthropometric measures

On the day of testing, the standing and sitting heights, upper-limb lengths (forearm and arm), lower-limb lengths (tibia medial and tibia lateral), and body mass were determined using a standard weighing machine, a stadiometer, and calibrated small and large sliding calipers, following the manufacturers' guidelines for accuracy. The height and weight of subjects were measured to the nearest 0.5 cm and 0.1 kg, respectively, with the participants in athletic shorts and shirt and in bare feet.

Girth (circumference) measurements were taken according to previously described and validated methods (8) using a self-retracting, inelastic, non-metallic anthropometric measuring tape. If swelling was present, the measurement was excluded. Two technicians each performed a single measurement at each point, and these values were averaged for further calculations.

Body composition

Four-site and three-site skinfold measurements (14) were taken following the techniques described by Harrison et al. (13), using standard calibrated skinfold calipers that maintained constant pressure. The caliper was held in the right hand, the skinfold was elevated with the left hand, and the measurement was recorded four seconds after the pressure was released. Three-site skinfold fat was obtained at the chest, triceps and subscapular locations in accordance with previously accepted procedures (Jackson and Pollock) (14,20), whereas four-site skinfold fat was obtained with Rosscraft calipers (British Indicators, UK) at four sites (biceps, triceps, subscapular and suprailiac) as recommended by (9). Two technicians performed a measurement at each site; the two measurements at each site were recorded and later averaged for further calculations. The body mass index (kg/m2) was calculated for each subject. Body weight classification for subjects was determined as described by (7).
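As an illustration of how the derived body composition variables follow from these raw measurements, the short Python sketch below computes BMI and a skinfold-based body fat estimate. It is only a sketch under stated assumptions: the study does not report the regression constants it used, so the body density coefficients are passed in as parameters (the values shown in the example are the commonly published Jackson-Pollock constants for males), the Siri conversion (495/D - 450) is the standard one, and all example input values are hypothetical.

# Illustrative sketch only: BMI and skinfold-based body fat estimation.
# The regression coefficients must be taken from the published equation that
# matches the skinfold sites used; those shown below are example values.

def body_mass_index(weight_kg, height_m):
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def body_density(skinfold_sum_mm, age_years, c0, c1, c2, c3):
    """Generalized skinfold regression: D = c0 - c1*S + c2*S^2 - c3*age (g/cm^3)."""
    s = skinfold_sum_mm
    return c0 - c1 * s + c2 * s ** 2 - c3 * age_years

def body_fat_percent(density):
    """Siri equation: %BF = 495 / D - 450."""
    return 495.0 / density - 450.0

# Hypothetical example for one player (all numbers are made up):
bmi = body_mass_index(64.2, 1.766)
d = body_density(6.0 + 7.5 + 8.0, 16,           # sum of three skinfolds (mm), age
                 1.10938, 0.0008267, 0.0000016, 0.0002574)  # example coefficients
print(round(bmi, 1), round(body_fat_percent(d), 1))

The same structure applies to the four-site method; only the sites summed and the coefficients change.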
Protocol

Using ISAK-accredited methods, a total profile of 24 measurements was collected from each player. All data collection followed the ISAK protocol. The players were informed about the measurements to be taken, and the different positions required for measurement were explained and demonstrated before the start, to ensure that the procedure was quick and efficient.

All players were measured in a private consulting area to ensure privacy during data collection. The measurements were taken at a temperature held constant at approximately 22°C. Players wore minimal clothing (training shorts and bare chest) to allow access to all measurement sites. Where possible, a recorder was present to assist the anthropometrist and enter data into a software program. The equipment used for taking measurements included a stadiometer (Lester, UK), electronic weighing scales (SECA, UK), a small sliding caliper (Rosscraft, Canada), a large sliding caliper, and a skinfold caliper (Harpenden, British Indicators Ltd., Luton, UK). A complete data set was obtained before repeating the measurements a second time, to help minimize the effects of skin compressibility. All measurements were taken in the same order and, to avoid exercise-induced hypertrophy of muscles, all players refrained from exercise for at least 1 hour before the measurement session.

The researchers collected the data with the help of three trained assistant technicians (BSc holders). To avoid errors, five days of training on the data collection instruments and measurements were given to the assistant data collectors. Only standardized materials were used to maintain data quality. Additionally, all of the aforementioned tests were recorded on video. Finally, the data were coded and entered into the software twice, by different persons, to avoid data entry errors.

Data Analysis

Data analysis was performed with the SPSS statistical software package, version 19. After the data on weight, height, girth and body composition were collected, they were analyzed with descriptive statistics, and the level of significance was set at P < 0.05.

Overview

In this study, anthropometric assessment covered three major categories (girth, height and weight), and body composition was derived from skinfold measurements and body mass index. Only players with full data sets were used in the statistical analysis (SPSS, version 19.0). Means and standard deviations (M ± SD) were calculated for further interpretation. There is no significant difference in the age of players across playing positions (P > 0.21), since all of the subjects are students in secondary and preparatory classes. Defensive players of the three teams have larger lower-limb circumferences than the rest of the players, which allows them to withstand physical contact with opponents. A statistically significant difference was noted in the calf girth measurement among players (p = 0.04), whereas the goalkeepers show slim lower limbs relative to their teammates.
In the hip, waist and chest girth measurements, defensive players exhibited a larger body structure, as the mean results show. The mean results for hip, waist and chest in each category are, respectively: GK 88.7 ± 6.2 cm, 74.3 ± 8 cm, 87.5 ± 7.2 cm; DF 91.9 ± 5.5 cm, 79.9 ± 6.7 cm, 92.3 ± 6.3 cm; MF 89.6 ± 6.8 cm, 76 ± 7.5 cm, 89.0 ± 7.6 cm; SK 87.1 ± 2.6 cm, 68.5 ± 1.3 cm, 79.5 ± 1.6 cm. Statistically significant differences were noted in all of the hip, waist and chest circumferences (p < 0.05). The strikers have a very trim upper-body structure relative to the other players, which may place them at a disadvantage in playing soccer.

Overall there is heterogeneity in the circumference results among the subjects, but a significant dominance is observed for defensive players in the thickness of the whole body structure, whereas for strikers the inverse is true. The length measurements were obtained from the subjects' arm, forearm, sitting height and standing height. The mean results indicate significant heterogeneity based on playing position. The tibia medial, tibia lateral, arm length and forearm length of the players are as follows (mean ± SD, in that order): GK 48.6 ± 2.6 cm, 51.3 ± 4.4 cm, 35.5 ± 2.6 cm, 28.6 ± 0.8 cm; DF 41.8 ± 1.9 cm, 48.0 ± 0.5 cm, 33.3 ± 2.8 cm, 27.5 ± 1.6 cm; MF 44.4 ± 12.7 cm, 46.2 ± 4.5 cm, 34.3 ± 1.5 cm, 27.1 ± 1.6 cm; SK 49.0 ± 1.4 cm, 48.5 ± 3 cm, 35.5 ± 0.54 cm, 27.8 ± 1.4 cm. As these mean results reveal, goalkeepers and strikers have longer lower-limb lengths than the rest of the players. This gives them an advantage in increasing stride length during the game, which is essential for covering a large distance with few strides. In the upper limb, goalkeepers have significantly greater length than the others, which helps them handle the ball when keeping goal.

The sitting and standing heights of the subjects are, in the same order: GK 85.1 ± 5.4 cm, 176.6 ± 6.1 cm; DF 88 ± 4.6 cm, 172.6 ± 7.7 cm; MF 85 ± 2.9 cm, 170.0 ± 6.1 cm; SK 87.5 ± 1.04 cm, 170.8 ± 5.1 cm. As the mean results show, goalkeepers were taller than the rest of the players; these characteristics would help GKs in aerial duels, allowing them to defend their goals. Strikers are relatively shorter than their teammates (P < 0.05).

The heterogeneity in the height of players is consistent with the study conducted in Tunisia in 2013 by Mehdi (19), entitled 'Anthropometric and Physical Characteristics of Tunisian Young Soccer Players', which pointed out that goalkeepers were taller and heavier than the other groups of soccer players.

The results of the present study also showed significant differences between playing positions in the anthropometric measures, especially body weight and height. This result is in agreement with Gill et al. (12), who found that GKs were heavier and taller than other playing position groups. Other anthropometric differences according to playing position were also identified. Our results are partly consistent with the findings of Malina et al. (17), in soccer players aged 11 to 16 years, that FWs were taller than DFs and GKs were heavier than MFs. Our data are similar to those reported by Slavko et al. (23) in amateur German football players, but differ from those reported by Wong et al.
(28) in under-14 soccer players. This discrepancy could be explained in part by the sample size, the different measurement methods, and the performance level. The results of the current study showed substantial variation in stature and body mass, suggesting that there are different physical demands in each playing position.

Body composition variables

The goalkeepers were also significantly heavier than the midfielders and strikers. Despite a mean difference of 5.5 kg, differences between the midfielders and defensive players were not statistically significant. The defenders were significantly heavier than the strikers but did not differ significantly from the midfielders. Research suggests that midfielders have a lighter body mass so that they can move through space more efficiently, enabling them to cover greater distances, while defenders tend to be heavier and taller with less body fat, as their position requires them to be robust and strong in the tackle (5).

The breakdown of body compartments for the subjects in their different playing positions is shown in Table 5. There were no statistically significant differences in BMI or lean mass between the different positions, but defensive players exhibited a higher BMI, whereas strikers had the lowest of all players. The percent body fat analysis revealed that differences occurred between the defensive players and each of the outfield groups, with no statistically significant differences evident between the remaining playing positions.

The heterogeneity in body compartments of players depending on playing position is tolerable and acceptable in soccer, and the present study confirms this quality in these players. The finding of this study partially agrees with that of Sutton et al. (25), 'Body composition of English Premier League soccer players: Influence of playing position, international status, and ethnicity', whose results show that goalkeepers are heavier than the rest of the players in body mass, while their lean mass is almost similar to that of defenders, midfielders and strikers. Comparing the mean results of English Premier League players and south west Ethiopian players: the Ethiopian players weigh GK 64.2 kg, DF 60.7 kg, MF 60 kg and SK 58 kg, whereas the English Premier League players weigh GK 90 kg, DF 86 kg, MF 78 kg and SK 82.7 kg; despite the age difference between the two groups, there is a mean body mass difference of about 23 kg. This study is also partially consistent with the study conducted in the UK by Mark Russell and Edward Tooley (18), 'Anthropometric and performance characteristics of young male soccer players competing in the UK', although the mean body mass differs from that study by about 11 kg. We conclude that south west Ethiopian soccer players have similar lean mass and BMI but lower body mass than beginner players elsewhere in the world.

Our findings concur with those of previous studies that focused on the anthropometric characteristics of elite soccer teams. Reilly et al. (22) found that relative heterogeneity in body size is a characteristic of elite soccer teams, so anthropometric differences were expected between playing positions. Previous studies have reported significant differences in a variety of anthropometric characteristics, most notably stature and body mass, perhaps suggesting that these variables reflect a morphological optimization within soccer.
In conclusion, based on the results of this study, the following conclusions are drawn about south west Ethiopian beginner soccer players:

• Generally, the goalkeepers are heavier and taller than the other subjects of the study, while strikers possess a relatively slim body and are average in height.
• Defensive players possess larger girths in almost all measurements, but this has no influence on their game performance since they are not required to cover large distances during the game.
• The lack of inter-positional differences in the players' body fat%, lean mass% and BMI could imply that there is no position-specific training in the teams; all players are given a similar training prescription during practice.
• The anthropometric values of south west Ethiopian soccer players are comparably lower than those of players elsewhere in the world.

Table 1. Age of subjects.

Table 2. Lower-limb and anterior-body girth measurements of the subjects (cm). Values are mean ± SD of ankle, calf, thigh, hip, waist and chest girth. GK = goalkeeper, DF = defender, MF = midfielder, SK = striker, MD = mean difference.

Table 3. Upper-limb girth measurements of the subjects (cm). Values are mean ± SD of wrist, arm and forearm girth. GK = goalkeeper, DF = defender, MF = midfielder, SK = striker, MD = mean difference.

Table 5. Body mass index, percent lean mass, and percent body fat of subjects grouped by playing position. Values are mean ± SD of mass, BMI (body mass index), lean mass% and body fat% for goalkeeper, defender, midfielder and striker players.

Recommendations: based on the results, discussion and findings of this research, the following recommendations are made. Ethiopian sport scientists, researchers, sport administrators and coaches should understand the role of anthropometry in soccer performance and give it emphasis during talent selection, detection and recruitment of players for a team. As Ethiopia competes in soccer with the rest of the world, all concerned bodies should use the anthropometric and body composition qualities of top world-class players as a benchmark for optimizing the physical qualities of our soccer players.
The EAGLE simulations of galaxy formation: Public release of particle data

This manual accompanies the release of the particle data for 24 simulations of the EAGLE suite of cosmological hydrodynamical simulations of galaxy formation by the virgo consortium. It describes how to download these snapshots and how to extract datasets from them, emphasising the meaning of variables and their units. We provide examples for extracting the particle data in python. This data release complements our earlier release of numerous integrated properties of the galaxies in EAGLE through an SQL relational database. This database has been updated to include the additional simulations that are part of the present data release. Scientists wanting to use EAGLE may find it useful to first investigate whether their analysis can be performed using the database, before accessing the particle data. The particles in the snapshot files are indexed by a peano-hilbert key. This eases the extraction of simply connected spatial volumes without needing to read the entire snapshot, and makes it possible to analyse many aspects of galaxies using modest computing resources, even when using EAGLE simulations with large numbers of particles. A reading routine is provided to simplify this process.

The eagle simulations

The Virgo consortium's Evolution and Assembly of GaLaxies and their Environments (hereafter eagle) project consists of a suite of cosmological hydrodynamical simulations designed to enable the study of the formation and evolution of a population of galaxies in a cosmologically representative volume [1,2], adopting the cosmological parameters advocated by the Planck Collaboration [3] (Ωm = 0.307, ΩΛ = 0.693, Ωb = 0.04825, h = 0.6777, σ8 = 0.8288, ns = 0.9611, Y = 0.248). All simulations were performed with the gadget-3 tree-SPH code, which is based on the gadget-2 code described by [4], but extensively modified as described in detail by [1] and references therein. Modifications include changes to the hydrodynamics and time-stepping schemes, referred to as anarchy, and the implementation of a large number of 'subgrid' modules that account for physical processes below the resolution scale (radiative cooling and heating in the presence of an imposed optically thin radiation background, star formation, stellar evolution, metal enrichment, feedback from stars, seeding and growth by accretion and merging of supermassive black holes, and feedback from accreting black holes). These subgrid modules are described by [1] and references therein. Numerical parameters of the eagle reference model associated with the subgrid modules were calibrated to a limited subset of z = 0 observations of galaxies, namely the galaxy stellar mass function, the sizes of galaxies, and the black hole mass - stellar mass relation. The motivation for doing so is detailed by [1], with the calibration strategy described in detail by [2]. Simulations were performed in cubic volumes with lengths of L = 12, 25, 50 and 100 co-moving mega-parsec (cMpc) on a side, at a range of resolutions. Simulations with an initial baryonic particle mass (SPH mass) of m_g = 2.26 × 10^5 M⊙ are referred to as 'high resolution' and those with an initial baryonic particle mass of m_g = 1.81 × 10^6 M⊙ as 'intermediate resolution'.
The corresponding ('Plummer equivalent') maximum gravitational softening lengths are ε_prop = 0.35 pkpc (proper kilo-parsec) and 0.70 pkpc respectively, with the code switching to a softening that is a constant, ε_com, in co-moving coordinates at z ≥ 2.8. Table 1 summarises these properties for key runs, and illustrates the naming convention; for example, L0025N0376 refers to a simulation with L = 25 cMpc that starts with 2 × 376^3 (dark matter and gas) particles.

Table 1: Box sizes and resolutions of the main eagle simulations. From left to right the columns show: simulation name suffix; comoving box size; number of dark matter particles (there is initially an equal number of baryonic particles); initial baryonic particle mass; dark matter particle mass; comoving Plummer-equivalent gravitational softening length; maximum proper softening length.

In addition to the reference simulations, the eagle suite comprises runs where one or more of the subgrid modules, or parameters of these modules, were changed. The recal model, recalL0025N0752, is calibrated to the same z = 0 galaxy properties as reference, with the (relatively small) changes to the subgrid parameters a consequence of the higher resolution compared to the default 'intermediate resolution' runs. Other models, e.g. WeakFBL0025N0376 or StrongFBL0025N0376, are not re-calibrated. These variations to subgrid parameters are aimed at understanding the effect of changing parameters one at a time (in the example, the strength of stellar feedback). The model variations are listed in Table 2, with a short description of how they differ from reference.

eagle has generated several spin-off projects that use the same (or very similar) simulation code and models. These include zooms of Local Group-like regions (the apostle project described by [5,6]); zooms of galaxy clusters (the C-eagle and hydrangea projects described by [7,8]); simulations with warm dark matter [9]; and zooms of Milky Way-like galaxies that include non-equilibrium chemistry [10]. Currently, the data from those spin-off projects are not part of this data release. The Virgo consortium releases these particle data in the hope that they will be useful to the community. We are keen to receive feedback and suggestions concerning how this release could be made more useful.

The eagle database - FoF and subfind groups

Haloes are identified in the simulations using the friends-of-friends (FoF) and spherical over-density algorithms. Baryonic particles (gas, stars and black holes) are assigned to the same halo as the nearest dark matter particle, if that particle belongs to a halo. Galaxies are identified as self-bound substructures using the subfind algorithm of [11,12]. Particles in a FoF halo are tagged with a GroupNumber (particles with GroupNumber=2^30 do not belong to any group). This integer runs from 1 (first group) to N (total number of groups). Particles in a self-bound substructure are tagged with a SubGroupNumber, which ranges from 0 (the main galaxy in this FoF group) to N − 1 (where N is the number of subgroups in this group). Particles with SubGroupNumber=2^30 do not belong to any subgroup. It is important to realise that GroupNumber and SubGroupNumber refer to a given snapshot: groups with the same value of GroupNumber in different snapshots are generally not the same physical structure. Many properties of galaxies and haloes, such as their masses, positions, velocities and spins, can be easily accessed using the sql database documented by [13].
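Before turning to the database, note that once particle arrays have been read from a snapshot (for example with the routine described in Section 4), the GroupNumber and SubGroupNumber tags can be used directly to pick out the particles bound to a particular galaxy. The snippet below is only a minimal sketch, not one of the released routines; it assumes that the arrays gns, sgns and coords have already been loaded into numpy arrays for a single particle type.

# Minimal sketch: select the particles tagged with a given GroupNumber and
# SubGroupNumber, e.g. the central galaxy (SubGroupNumber=0) of the first FoF group.
# Assumes gns, sgns and coords are numpy arrays already read from the snapshot.
import numpy as np

def select_subhalo(gns, sgns, coords, group_number=1, subgroup_number=0):
    """Return the coordinates of particles with the requested group/subgroup tags."""
    mask = (gns == group_number) & (sgns == subgroup_number)
    return coords[mask]

# Example use (ignoring particle masses and the periodic box for simplicity):
# galaxy_pos = select_subhalo(gns, sgns, coords)
# print(np.median(galaxy_pos, axis=0))   # rough estimate of the galaxy position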
Since the original database release, the galaxy data have been extended to include mock observables, including intrinsic broad-band colours and colours computed using a dust screen model, as described by [14], and broad-band colours computed using dust radiative transfer, as described by [15,16]. Images of galaxies are available as well, and can be downloaded using sql queries. We recommend the database as a first approach to analysis of the eagle simulations.

2 The eagle particle data

2.1 Downloading the data

The particle data for the runs described by [1] and [2], summarised in Table 2, can be downloaded from http://icc.dur.ac.uk/Eagle/database.php, after registration. This document merely serves as a pointer to, and brief description of, the data; it is not meant as a reference for the eagle simulations (for that see [1,2]). Below we use the Courier font to denote snapshot variables.

The snapshot format

Particle data are output in snapshots - the state of the system at a given redshift - with different redshifts, z, corresponding to different snapshots (29 snapshots from z = 20 to z = 0). Each snapshot is distributed over several files, and to extract all particles from a given snapshot one must read all files, even when reading a single variable such as, for example, the coordinates of dark matter particles (Section 4 contains an example in Python of how to read snapshot datasets in this manner). Readers unfamiliar with gadget may want to read Volker Springel's description of the format from the gadget manual.

Table 2: Simulations included in this data release, with the resolution of each run and a short description of how it differs from the reference model.
RecalL0025N0752 (High): higher resolution version recalibrated to the same data as the Reference model
RefL0050N0752 (Intermediate): Reference model
AGNdT9L0050N0752 (Intermediate): higher AGN heating temperature and lower subgrid black hole accretion disc viscosity
RefL0100N1504 (Intermediate): Reference model
Models calibrated to the z = 0 galaxy stellar mass function from [2]:
FBconstL0050N0752 (Intermediate): constant stellar feedback efficiency
FBsigmaL0050N0752 (Intermediate): stellar feedback dependent on dark matter velocity dispersion
FBZL0050N0752 (Intermediate): stellar feedback dependent only on metallicity (not on density)
Model variations from [2]:
eos1L0025N0376 (Intermediate): slope of equation of state imposed on the ISM equals 1.0 (i.e. isothermal)
eos53L0025N0376 (Intermediate): slope of equation of state imposed on the ISM equals 5/3 (i.e. adiabatic)
FixedSfThreshL0025N0376 (Intermediate): star formation threshold of n_H = 0.1 cm^-3, independent of metallicity
WeakFBL0025N0376 (Intermediate): half as much energy feedback from star formation
StrongFBL0025N0376 (Intermediate): twice as much energy feedback from star formation
ViscLoL0050N0752 (Intermediate): 10^2 times lower subgrid black hole accretion disc viscosity
ViscHiL0050N0752 (Intermediate): 10^2 times higher subgrid black hole accretion disc viscosity
C15AGNdT8L0050N0752 (Intermediate): 10^0.5 times lower AGN heating temperature
C15AGNdT9L0050N0752 (Intermediate): 10^0.5 times higher AGN heating temperature
Models without black holes:
NoAGNL0025N0376 (Intermediate): same as Reference model, but without black holes
NoAGNL0050N0752 (Intermediate): same as Reference model, but without black holes
Collisionless simulations with only dark matter particles (with particle mass a factor Ωm/(Ωm − Ωb) higher than in the reference model):
DMONLYL0025N0376 (Intermediate)
DMONLYL0025N0752 (High)
DMONLYL0100N1504 (Intermediate)

Individual snapshot files are written in the binary hdf5 format.
Users interact with this platform-independent format through libraries, with most high-level analysis languages such as idl and python able to read variables from such files directly by name. We provide examples of how to do this in python in Section 4. Files can also be queried in compiled languages such as c or fortran, once the hdf5 libraries are installed. The hdf5 files can also be directly visualised with an hdf5 viewer, for example hdfview.

hdf5 groups in the snapshots

Each snapshot file contains a set of groups. gadget allows for 6 different particle types (labelled 0-5). Properties of these particles are written in groups PartType0 to PartType5. In eagle, type 0 are gas particles, type 1 are dark matter particles, type 4 are stellar particles, and type 5 are supermassive black holes. Types 2 and 3 are not used. We briefly describe the contents of each group next, see also Table 3.

Config: The svn subversion revision number of the code that wrote this snapshot, and a list of all the gadget configuration options set when this code was compiled.

Constants: Values of physical constants used in the calculation.

HashTable: Particles of each type are distributed across the different files of a snapshot in such a way that it is easy to retrieve those that are in a simply connected region - for example all particles within a given distance from a given location, say the centre of mass of a halo. This is done by dividing the computational volume into cubic cells, 2^6 cells on a side, calculating which 3D cell each particle is in (referred to as the hash key), and sorting particles based on this hash key. Cells are then distributed over the individual files that make up a single snapshot. The hash tables allow one to determine which files need reading to retrieve all particles in a spherical region around a given centre. We provide a reading routine to do this, and an example of its use in Section 4. Because particles are arranged in cubic cells, using the hash tables returns particles in cells; limiting the list to particles in a spherical region is left to the user. Hash tables are constructed separately for each particle type.

Header: This contains the standard simulation parameters from gadget, with some eagle-specific additions. As with standard gadget, the arrays NumPart_ThisFile and NumPart_Total contain the numbers of particles of each type (0-5) in the current file, and in all of the files that constitute the snapshot, respectively. MassTable contains the particle masses for those particle types that all have the same mass - in this case PartType1. BoxSize is the linear extent of the simulation cube; its units of mass and length are the same as those of the mass variables of all particle types (described below) and the coordinates, respectively. Omega0 (total matter density in units of the critical density, Ωm), OmegaLambda (density parameter corresponding to the cosmological constant, ΩΛ), OmegaBaryon (mean baryon density in units of the critical density, Ωb), and HubbleParam (H0/(100 km s^-1 Mpc^-1) ≡ h) are taken from [3]. ExpansionFactor is the current value of the expansion factor a, and Redshift is z ≡ (1/a) − 1. As in gadget, the variable Time is also the expansion factor in these cosmological runs; it is not the age of the Universe. The variable E(z) ≡ (Ωm/a^3 + ΩΛ)^(1/2).

Units: Assumed code units of length, time, and mass, and those derived from them, in cgs (centimetres, grams, and seconds). Cosmological variables may in addition depend on powers of h and a, as detailed below.
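To make the group structure concrete, the sketch below opens a single snapshot file with h5py and prints a few Header attributes together with the contents of the Units group. It is only an illustration: the file name is a placeholder, and the attribute names used (BoxSize, ExpansionFactor, HubbleParam, NumPart_Total) are the ones described above; they should be checked against the files themselves.

# Illustrative sketch: inspect the Header and Units groups of one snapshot file.
# Replace the placeholder file name with an actual EAGLE snapshot file.
import h5py

fname = "snapshot_file.0.hdf5"   # placeholder name
with h5py.File(fname, "r") as f:
    header = dict(f["Header"].attrs)
    print("BoxSize         :", header["BoxSize"])
    print("ExpansionFactor :", header["ExpansionFactor"])
    print("HubbleParam     :", header["HubbleParam"])
    print("NumPart_Total   :", header["NumPart_Total"])
    # List the unit definitions stored in the Units group
    for name, value in f["Units"].attrs.items():
        print(name, "=", value)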
Readers may recognise these units as Mpc for length, 10^10 M⊙ for mass and km s^-1 for velocity.

Parameters: A list of the 9 species (chemical elements) tracked individually in the simulation (H, He, C, N, O, Ne, Mg, Si, Fe), their assumed primordial and solar abundances, and the assumed metallicity of the Sun. Note that the solar abundances and the metallicity of the Sun are not used in the code. The radiative cooling and heating interpolation tables used are described by [17]; these also use Ca and S, with ratios provided in this group. The values of the abundances are collected from the literature and summarised in Table 1 of [17] - most eagle papers use these values to convert metal mass fractions into abundances in units of the 'solar' abundance.

RuntimePars: This group contains all parameters used by the simulation, from directories for input and output, over cosmological parameters, to assumed units (these are written in single precision, which is why the mass unit appears as infinite). This list also contains the (Plummer equivalent) co-moving and maximum physical values of the softening length. As per the gadget convention, particle types 0-5 are referred to here as PartType0 = gas, PartType1 = dark matter = 'halo', PartType4 = stars = 'Stars', and PartType5 = black holes = 'Bndry'. PartType2 ('disk') and PartType3 ('bulge') are not used.

PartType0-5: Type 0 = gas, Type 1 = dark matter, Types 2 and 3 are not used, Type 4 = stars and Type 5 = black holes. All particles have a mass, position, velocity, and a unique particle identifier (snapshot variables Mass, Coordinates, Velocity and ParticleIDs), but different types may in addition have a large number of other variables, some of which are described below.

Each variable in the hdf5 file consists of an array of numerical values and 4 attributes that describe the variable. Taking as an example the coordinates of a particle (variable Coordinates), these attributes are CGSConversionFactor=3.08 × 10^24, h-scale-exponent=-1, aexp-scale-exponent=1, and VarDescription='Co-moving coordinates. Physical position: ...'. The variable description is a text string that clarifies what this variable represents. In the case of the Coordinates, the numerical values stored are co-moving coordinates, in units of h^-1 Mpc. The proper position of a particle is therefore

r = Coordinates × a^(aexp-scale-exponent) × h^(h-scale-exponent) = Coordinates × a h^-1 [Mpc],

or, in cgs units, this value multiplied by CGSConversionFactor. The convention of specifying the cgs unit, and how the proper variable depends on its co-moving counterpart in terms of powers of a and h, is used for all variables. As another example, the peculiar velocity and particle mass are obtained by multiplying the stored Velocity and Mass values by the powers of a and h given by their own aexp-scale-exponent and h-scale-exponent attributes (and by their CGSConversionFactor for cgs units).

3 Description of all variables

The tables below list descriptions for the particle properties appearing in the snapshot output. Most time-dependent variables are predicted to the current snapshot time, so for example the density variable in the snapshot file is ρ(t0) + (dρ/dt)(t − t0), where ρ(t0) is the density at time t0, the last time the density was computed using SPH, and dρ/dt is an estimate of its rate of change. Because this prediction is not perfect, computing the SPH density for a particle, given the positions, smoothing lengths and masses of all other particles in the snapshot, will in general yield a different value for ρ. For most particles, these two estimates of the density should be close. Note that the SPH smoothing lengths are also predicted.
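Before describing how individual variables are defined, here is a minimal sketch of the unit convention in practice. It loops over the files of a snapshot, concatenates the gas Coordinates, and converts them to proper Mpc by reading the a- and h-exponents from the dataset attributes rather than hard-coding them. The function read_dataset and the file-name pattern are placeholders of our own, not part of the released reading routines; for spatially selective reads, the routine that uses the hash tables (Section 4) is preferable.

# Minimal sketch (not the released reading routine): read a dataset from every file
# of a snapshot and convert it to proper (physical) units using its attributes.
# Adapt the file-name pattern to the snapshot being read.
import glob
import numpy as np
import h5py

def read_dataset(snapshot_pattern, parttype, name):
    """Concatenate PartType<parttype>/<name> over all files of a snapshot and
    convert to proper units using the aexp- and h-scale-exponent attributes."""
    chunks = []
    for fname in sorted(glob.glob(snapshot_pattern)):
        with h5py.File(fname, "r") as f:
            dset = f["PartType%d/%s" % (parttype, name)]
            a_exp = dset.attrs["aexp-scale-exponent"]
            h_exp = dset.attrs["h-scale-exponent"]
            a = f["Header"].attrs["ExpansionFactor"]
            h = f["Header"].attrs["HubbleParam"]
            chunks.append(dset[...] * a**a_exp * h**h_exp)
    return np.concatenate(chunks)

# Example: proper gas-particle positions in Mpc (the pattern is a placeholder)
# pos = read_dataset("snap_028_z000p000.*.hdf5", 0, "Coordinates")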
Smoothing lengths (h) of gas particles are predicted as per the method of [18], whereby the SPH particle density

ρ_i = Σ_j m_j W_ij(h_i),

where m_j is the mass of each other particle and W_ij(h_i) is the value of the kernel at that location, yields a proportionality to the smoothing length of h_i ∝ ρ_i^(-1/3), such that the relationship (4π/3) h_i^3 ρ_i = m_i N_ngb holds true for a given choice of N_ngb, referred to as the 'effective neighbour number' (see [18] and Appendix A1 of [1] for details). N_ngb is chosen to be 58 for gas particles. The smoothing lengths for star and black hole particles are also predicted from the neighbouring gas particles. However, as they are not gas particles themselves, the smoothing length is instead computed by ensuring that the relation (4π/3) h_i^3 Σ_{j=1}^{N} W_ij(h_i) = N_ngb holds true for a given choice of N_ngb. For stars N_ngb is chosen to be 48, and for black holes 58.

A short description of all variables is given in the tables below, together with a reference to an equation clarifying the meaning of the variable taken from [1], see also [13]. We begin by giving some more information about those variables whose meaning is difficult to convey in a single sentence.

Gas particle variables - PartType0

3.1.1 Thermodynamic variables

eagle uses a variety of thermodynamic variables, and it may be important to realise how they are related, and which version appears in the particle files. In the anarchy version of pressure-entropy SPH, each gas particle i carries its (pseudo) entropy A_i; this is the particle variable Entropy. The entropy variable appears in the expression for the hydrodynamical acceleration (the pressure-entropy SPH equation of motion, given in full in Appendix A of [1]), which is written in terms of an entropy-weighted pressure, P̄, and density, ρ̄, computed for each particle; f_ij are 'grad-h' terms, and γ = 5/3 is the adiabatic index. For gas in hydrostatic equilibrium, the acceleration computed from this equation of motion is balanced by gravity. Shocks will change A_i at a rate consistent with the equation of motion. Radiative cooling and heating, feedback, and the imposed pressure floor may also change A_i, as described next.

The radiative rates depend on the particle temperature, T, and the SPH density ρ given above, which are computed for each particle. The conversion from thermal energy per unit mass, u, to temperature T depends on the mean molecular weight (in units of the proton mass, m_H), µ. µ is computed using the interpolation tables described by [17], which account for the element abundances of the particle, the thermal energy per unit mass, u_i, the density, ρ_i, and the radiation field J(ν); symbolically, µ_i = µ(u_i, ρ_i, abundances, J(ν)). Note that the density ρ used in the radiative calculations is the usual SPH density, which differs from the entropy-weighted density ρ̄ that appears in the equation of motion and in the pressure-entropy estimates of P̄ and u. In the feedback routines, particle temperatures may be increased by a fixed increment (∆T = 10^7.5 K in the case of stellar feedback for the reference model). This is implemented by increasing the entropy of the particle by the corresponding amount, using the relation between u, A and ρ̄. eagle also imposes a pressure floor and a minimum temperature floor (T > 100 K). After feedback (but before cooling), we calculate T_i for every (active) gas particle. We use this to compute the maximum temperature a particle had throughout its history, as well as the expansion factor that corresponds to this event. The radiative cooling/heating equation then calculates the new value of u, evaluating the rate at constant ρ.
The new value of u is used to update S and Ṡ ≡ (S(t + dt) − S(t))/dt, where dt is the current time step. The pressure floor is of the form p ≥ p_lim (ρ/ρ_lim)^γ_lim, where p_lim, ρ_lim and γ_lim are constants, and is applied if the density is above a given proper density threshold, ρ_lim, as well as above an overdensity threshold, ∆_lim. Two thresholds are imposed, which are expressed in terms of the corresponding temperature thresholds, T_lim, and hydrogen number density thresholds, n_H,lim, related by n_H,lim = X ρ_lim/m_H and p_lim = ρ_lim T_lim/(µ m_H), where µ = 4/(4 − 3Y) ≈ 1.23 is the mean molecular weight for neutral gas with the helium abundance by mass Y = 0.248 of [3]. For the reference model, these values are
• A Jeans threshold, for which T_lim = 8000 K, n_H,lim = 0.1 cm^-3, γ_lim = 4/3, ∆_lim = 10.
In the snapshot files, the variable Density ≡ ρ, the variable InternalEnergy ≡ u, and the variable Temperature ≡ T, all of which are predicted to the snapshot time. The variables MaximumTemperature and AExpMaximumTemperature correspond to the maximum temperature this gas particle ever had, and the value of the expansion factor when this occurred, respectively.

Abundances
The abundance group stores the fraction of a particle's mass in each of the explicitly tracked elements (H, He, C, N, O, Ne, Mg, Si, Fe). Stellar particles enrich gas particles using the SPH scheme. During this enrichment step, the particle abundance of an element, for example C, increases by a kernel-weighted sum over neighbouring (active) star particles j of the contributions dm_j,C, where m_i,C is the C mass of gas particle i. dm_j,C is the amount of C released by the three stellar evolutionary channels followed (i.e. AGB stars, type Ia SNe, and winds from massive stars and their core collapse SNe) over the current time step of star particle j (i.e. not using the instantaneous recycling approximation). The variable Carbon in the group ElementAbundance is then the ratio m_i,C/m_i of the particle's mass in C to its total mass. The simulation also tracks a 'total metallicity' - the mass fraction in all elements heavier than Helium over total mass - in the variable Metallicity (note that this includes contributions from elements that are not tracked individually), the metal mass fractions from each of the three channels separately (variables MetalMassFracFromAGB, MetalMassFracFromSNIa and MetalMassFracFromSNII), as well as the total mass received through these channels. To study the contributions of type Ia and type II SNe to Fe enrichment separately, eagle stores the variable IronMassFracFromSNIa - the ratio of the mass in Fe received through type Ia SNe only, over the mass of the particle. eagle also computes 'SPH-smoothed' versions of these metal masses by calculating a 'metal density' by summing over gas neighbours. Taking again C as an example, the metal mass density is ρ_i,C = Σ_j m_j,C W_ij(h_i). The smoothed C abundance is then X_C = ρ_i,C/ρ_i, and the variable Carbon in group SmoothedElementAbundance is this smoothed abundance (and similarly for other elements and for other smoothed metallicities). The motivation for using a smoothed metallicity is explained in [20]. Smoothed abundances are used to calculate the radiative rates in Eq. (8), to set the metallicity dependence of feedback, Eq. (12), and to compute stellar evolution. However, the particle metallicity is used to set the star formation threshold, see below.

Star formation variables
The star formation rate is computed using the method of [21].
For a gas particle i, the star formation rate is computed in eagle as a pressure law of the form ṁ_* = m_i A (1 M_⊙ pc^-2)^-n (γ/G f_g P)^((n−1)/2), where A and n are constants (A = 1.515 × 10^-4 and n = 1.4 in the reference model), provided it is eligible for star formation. eagle uses the metallicity-dependent star formation threshold from [22], which is a fit to the transition from the warm, atomic phase to the cold, molecular phase, and also requires the particle to be cold enough (see section 4.3 in [1]). The particle's star formation rate is stored in the variable StarFormationRate. The slightly misnamed variable OnEquationOfState is a star formation flag. Its value is 0 if a gas particle has never crossed the star formation threshold. A positive non-zero value is the value of the expansion factor a when the particle last became star forming; a negative value is −a, with a the expansion factor when the particle last failed to meet the star formation threshold.

Dark matter particles - PartType1
This particle group does not include the Mass variable. All dark matter particles have the same particle mass, found as the second entry in the array MassTable in the group Header, and with the same units as all other mass variables. For an example of using this to create an array of dark matter particle masses in Python, see Section 4.3.

Star particle variables - PartType4
In eagle, a gas particle may be wholly converted into a star particle. That star particle inherits all element abundances of its parent gas particle. In addition, eagle stores the density (ρ - the SPH density) of the gas particle when it was converted (variable BirthDensity) and the value of the expansion factor, a, when the conversion happened (variable StellarFormationTime). Note that all these variables are constants once a gas particle has been converted into a star: they will never change. The variable FeedbackEnergyFraction is the instantaneous value of f_th from Eq. (7) of [1], f_th = f_th,min + (f_th,max − f_th,min)/(1 + X), where the term X depends on the particle's metallicity and on the density at which the star formed; f_th is the expectation value of the fraction of the energy released by type II SNe used to heat gas particles in the stellar feedback implementation. Note that this is the expectation value of the energy used in the stochastic implementation of thermal feedback of [23]. The normalisation of the (particle metallicity) dependence is 10 per cent of 0.02 - an approximation to the solar metallicity. The density dependence is calculated based on the SPH density, ρ. As a star particle ages, eagle evolves the single stellar population with stellar life-times and evolutionary tracks as described by [20]. As stars evolve, mass and metals are transferred from the star particle to neighbouring gas particles. The variable Mass is the current particle mass, whereas InitialMass is the star particle's birth mass.

Black hole particle variables - PartType5
Black hole (BH) particles are seeded with a given mass in each FoF halo above a given mass that does not already contain a BH. The expansion factor a at which the BH was seeded is stored in the snapshot variable BH_FormationTime. BHs may then grow in mass through mergers (with other BHs) and the accretion of neighbouring gas. BH_CumNumSeeds is the total number of seeds the BH merged with, BH_MostMassiveProgenitorID is the ParticleID of the most massive progenitor of any of the BHs this BH merged with, and BH_TimeLastMerger is the value of a when the last merger occurred. Following [24], eagle uses a subgrid model for BH particles. The mass of the black hole, m_BH, which sets its accretion rate, is allowed to differ from the particle mass, m, which is used only for gravitational calculations.
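As a quick illustration of this distinction, the hedged sketch below compares the subgrid mass with the particle mass for all black holes. The dataset name BH_Mass is taken from the variable tables at the end of this document, and read_dataset is the helper described in the Python examples below; both are assumptions of this sketch rather than part of the subgrid description above.

import numpy as np
from read_dataset import read_dataset  # returns physical cgs values

m_particle = read_dataset(5, 'Mass')     # BH particle mass [g]
m_subgrid  = read_dataset(5, 'BH_Mass')  # BH subgrid mass [g]

# The two masses agree at seeding but can differ once the subgrid BH grows.
print('number of BH particles:', len(m_particle))
print('median m_subgrid / m_particle =', np.median(m_subgrid / m_particle))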
A short summary of the relevant equations, taken from [1], clarifies the meaning of the variables that describe accretion. The accretion rate is the minimum of the Eddington rate, ṁ_Edd = 4π G m_BH m_H/(ε_r σ_T c) (G is Newton's constant, σ_T Thomson's cross section and c the speed of light; ε_r is the variable BlackHoleRadiativeEfficiency in the RunTimePars group, ε_r = 0.1 in the reference runs), and a rate based on the Bondi-Hoyle rate for spherically symmetric accretion, ṁ_Bondi = 4π G^2 m_BH^2 ρ/(c_s^2 + v^2)^(3/2). The mass of the BH grows in a time dt by ∆m_BH = (1 − ε_r) ṁ_accr dt. When a BH accretes mass, it stores energy E in a reservoir, which increases in time step dt by ∆E = ε_f ε_r ṁ_accr c^2 dt, where ε_f = BlackHoleFeedbackFactor, and is 0.15 in the reference model. The sound speed, c_s, the speed of the gas relative to the BH, v ≡ |v|, and the weighted pressure near the BH, p_BH, are computed as kernel-weighted sums over those gas neighbours of the BH that are within its smoothing length, h. Some of the variables that appear in the snapshot (for example the gas pressure, sound speed and velocity at the location of the BH) refer to the last time the BH particle was active. When a BH heats surrounding gas, its energy reservoir is correspondingly decreased, as described in Section 4.6 of [1].

Python code examples
Below we provide some simple example Python scripts that read, process and display data from the eagle snapshots at z = 0 (Snapnum 28). Each example is available to download at http://icc.dur.ac.uk/Eagle/database.php. These examples assume by default that the snapshot data are located in a folder named 'data', and in order for these routines to work the user must have Numpy (http://www.numpy.org/), MatPlotLib (https://matplotlib.org/) and AstroPy (http://www.astropy.org/) installed.

Reading datasets
read_dataset loops over each hdf5 part (see Section 2 for how the hdf5 files are structured) of snapshot 28 and extracts a chosen dataset for all particles of a particular type. It then converts the data into physical CGS units using information from the dataset's attributes (see Section 2.3.8 for details on unit conversion). For example, if we wished to extract the physical Density for all gas particles (PartType = 0), we would input read_dataset(0, 'Density'). Note that the number of hdf5 part files (nfiles) may be different depending on the simulation.

Reading dark matter mass
As all dark matter particles share the same mass, there exists no PartType1/Mass dataset in the snapshot files. Instead, the dark matter mass is stored in the MassTable attribute (index 1) in the Header group. Below is an example function that uses this information to create a mass array for dark matter particles. The array length is determined by the NumPart_Total attribute in the Header and conversion factors are taken from the PartType0/Mass attributes. This is the only dataset that needs special treatment; all other datasets can be read using the previous example.

Plotting the rotation curve of a galaxy
Here we use the three functions defined above to plot the rotation curve of the largest central galaxy (GroupNumber 1, SubGroupNumber 0) from the small test volume (RefL0012N0188). We find the centre (CentreOfMass x = 12.08808994, y = 4.47437191, z = 1.41333473 Mpc) using the public database. The script loads the Coordinates and Mass for all gas, dark matter, star and black hole particles from the selected galaxy using the read_galaxy function in conjunction with the functions described above and plots the rotation curve for each component. Note that we must wrap all coordinates around the centre in order to account for the simulation's spatial periodicity.
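Before the rotation-curve script itself, a minimal sketch of what the read_dataset helper described above might look like is given below. The snapshot part naming pattern data/snap_028_z000p000.%i.hdf5, the default nfiles=16, and the Header attribute names ExpansionFactor and HubbleParam are assumptions that will differ between simulations, so treat this as an illustration of the attribute-based unit conversion rather than the distributed reference implementation.

import numpy as np
import h5py

def read_dataset(itype, att, nfiles=16):
    # Read dataset 'att' for particle type 'itype' from all snapshot parts
    # and convert it to physical CGS units using the stored attributes.
    data = []
    for i in range(nfiles):
        # Placeholder file pattern; adjust to the snapshot actually used.
        f = h5py.File('./data/snap_028_z000p000.%i.hdf5' % i, 'r')
        dset = f['PartType%i/%s' % (itype, att)]
        data.append(dset[...])

        # Conversion factors (identical in every part, so the last read suffices).
        cgs  = dset.attrs.get('CGSConversionFactor')
        aexp = dset.attrs.get('aexp-scale-exponent')
        hexp = dset.attrs.get('h-scale-exponent')
        a    = f['Header'].attrs.get('ExpansionFactor')   # assumed attribute name
        h    = f['Header'].attrs.get('HubbleParam')       # assumed attribute name
        f.close()

    # Combine the parts and convert floating-point data to proper CGS values
    # (integer datasets such as ParticleIDs are left untouched).
    data = np.concatenate(data)
    if not np.issubdtype(data.dtype, np.integer):
        data = np.multiply(data, cgs * a**aexp * h**hexp, dtype='f8')
    return data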
import h5py
import numpy as np
import astropy.units as u
from astropy.constants import G
import matplotlib.pyplot as plt
from read_dataset import read_dataset
from read_header import read_header
from read_dataset_dm_mass import read_dataset_dm_mass

class RotationCurve:

    def __init__(self, gn, sgn, centre):
        self.a, self.h, self.boxsize = read_header()
        self.centre = centre
        # Load data.

Plotting the temperature-density relation for a galaxy
This example uses a function called read_galaxy to read the Temperature, Density and StarFormationRate for each gas particle in the largest central galaxy (GroupNumber 1, SubGroupNumber 0) from the small test volume (RefL0012N0188). It then plots Temperature vs Density with data points coloured red for a non-zero StarFormationRate and blue otherwise.

import h5py
import numpy as np
import matplotlib.pyplot as plt
from read_dataset import read_dataset

The read_eagle routine
In the previous examples we read the entire particle data from the small test volume (RefL0012N0188) and masked to only particles contained within a single galaxy. For much larger volumes, particularly the RefL0100N1504 simulation, this becomes increasingly impractical and memory intensive. For this reason, it is more manageable to read these simulations utilizing the HashTable (see Section 2.3.3 for details). To aid with this, we provide a reading routine, named read_eagle, that reads simulation datasets efficiently using the HashTable. read_eagle is publicly available via a git repository, located at https://github.com/jchelly/read_eagle. The module must first be installed; for instructions on how to do this for Python (instructions for use with C and Fortran languages are also available) please refer to the README documentation provided within the repository.

read_eagle example
Below we provide an example that is a repeat of a previous example, where we create a temperature-density relation for a single galaxy, however this time using the read_eagle routine. First, read_eagle must read information from the HashTable and Header. This is done by initializing the EagleSnapshot class with the location of any hdf5 part of the snapshot data (it does not have to be part 0). You must then select a region of interest. This is a cubic region outlined by the minimum and maximum extents in the x, y and z directions (in cMpc/h units). For our example, we extract a 2 cMpc/h cube that is centred on the galaxy's centre of potential (taken from the database). Datasets can then be read in a similar fashion to the examples above that did not use read_eagle, using the read_dataset routine. Note there are additional examples within the repository, including C and Fortran examples, that explain the further functionality of read_eagle beyond simply reading datasets (for example reading files in parallel). Any queries or bugs discovered using this module can be reported via the repository.

import numpy as np
import matplotlib.pyplot as plt
from read_eagle import EagleSnapshot
from read_header import read_header
import h5py

class PhaseDiagram_ReadEagle:

    def __init__(self, gn, sgn, centre, load_region_length=2):
        # Load information from the header.
        self.a, self.h, self.boxsize = read_header()
        # Load data.

Acknowledging these data
This document is not intended to serve as a reference for eagle.
Users of eagle data are kindly requested to acknowledge and cite the original sources following the instructions listed in section 4.2 of [13], which we repeat here for completeness: To recognise the effort of the individuals involved in the design and execution of these simulations, in their post processing and in the construction of the database, we kindly request the following:
• Publications making use of the eagle data extracted from the public database or particle data are kindly requested to cite the original papers introducing the project [1,2] as well as the paper describing the public release of the galaxy data [13].
• Publications making use of the database should add the following line in their acknowledgement section: "We acknowledge the Virgo Consortium for making their simulation data available. The eagle simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyères-le-Châtel."
• Furthermore, publications referring to specific aspects of the subgrid models, hydrodynamics solver, or post-processing steps (such as the construction of images or photometric quantities, and the construction of merger trees) are kindly requested to not only cite the above papers, but also the original papers describing these aspects. The appropriate references can be found in section 2 of this paper and in [1].

Excerpts from the variable-description tables (variable name where given, section reference, description):
• ParticleIDs (§2.3): Unique particle identifier. Index encodes the particle's position in the initial conditions (see [1] for details).
• SubGroupNumber (§2.3): Subgroup number (as defined by subfind) this particle belongs to. Values range from 0 to (N − 1), where N is the total number of subgroups for this particular FoF group. Values of 2^30 indicate this particle does not belong to any subgroup. Subgroup number 0 refers to the central subgroup; subgroup numbers greater than 0 refer to satellites.
• Velocity: The peculiar velocity, a dx/dt (see the Appendix of [13] for more details).
• MetalMassFracFromAGB (§3.1.2): Mass of metals received from AGB divided by particle mass.
• MetalMassFracFromSNII (§3.1.2): Mass of metals received from SNII divided by particle mass.
• MetalMassFracFromSNIa (§3.1.2): Mass of metals received from SNIa divided by particle mass.
• Metallicity (§3.1.2): Mass of elements heavier than Helium, including those not tracked individually, divided by particle mass.
• ParticleIDs (§2.3.8): Unique particle identifier. ID is inherited from parent gas particle.
• §3.3: Expansion factor when this star particle last enriched its neighbours.
• BH_MostMassiveProgenitorID (§3.4): At the time of the last BH-BH merger, this is the ParticleID of the most massive member of the pair.
• §3.4: Gas pressure at the location of the black hole.
• §3.4: Gas sound speed at the location of the black hole.
• §3.4: Peculiar velocity of the gas at the location of the black hole.
• BH_TimeLastMerger (§3.4): Expansion factor when the black hole particle last accreted another black hole; 0 if the particle has never accreted another black hole.
• GroupNumber (§2.3): Friends of Friends (FoF) group number this particle belongs to in this snapshot. Values range from 1 to N, where N is the total number of FoF groups. Values of 2^30 indicate this particle does not belong to any group.
• HostHalo_TVir_Mass: Estimate of the host FoF group's virial temperature, calculated from the local velocity dispersion.
• Mass (§2.3.8): BH particle mass. Users should use the black hole subgrid mass (BH_Mass) for the actual black hole subgrid mass.
• ParticleIDs (§2.3.8): Unique particle identifier. ID is inherited from parent gas particle.
• SubGroupNumber (§2.3): Subgroup number (as defined by subfind) this particle belongs to. Values range from 0 to (N − 1), where N is the total number of subgroups for this particular FoF group. Values of 2^30 indicate this particle does not belong to any subgroup. Subgroup number 0 refers to the central subgroup; subgroup numbers greater than 0 refer to satellites.
• Velocity (§2.3.8): The peculiar velocity, a dx/dt (see the Appendix of [13] for more details).
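As a final worked illustration of the GroupNumber and SubGroupNumber entries above, the hedged sketch below selects the gas particles of a single subhalo, in the spirit of the read_galaxy function used in the earlier examples (whose exact implementation is not reproduced here). The galaxy identifiers are those quoted for the small test volume, and read_dataset is the helper described in the Python examples.

import numpy as np
from read_dataset import read_dataset  # helper described in the Python examples above

def select_galaxy_gas(gn, sgn):
    # Boolean mask picking the gas particles of one subhalo,
    # identified by its GroupNumber (gn) and SubGroupNumber (sgn).
    group_number    = read_dataset(0, 'GroupNumber')
    subgroup_number = read_dataset(0, 'SubGroupNumber')
    return (group_number == gn) & (subgroup_number == sgn)

# Largest central galaxy of the small test volume used in the examples above.
mask = select_galaxy_gas(1, 0)
print('gas particles in this galaxy:', np.sum(mask))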
Method of Constructing the Innovation Service Platform of Colleges and Universities based on Artificial Intelligence

With the elevation of AI to a national strategy, it will have a profound impact on various industries in China. Colleges and universities should follow the trend of the times, keep pace with the "AI" era, and apply AI to their service platforms. The purpose of this paper is to study the application of artificial intelligence in service platforms. First, the paper analyzes artificial intelligence technology, proposes ways in which artificial intelligence can be used to build university innovation service platforms, and defines a construction method for a university WeChat innovation service platform. Using a questionnaire method, based on theoretical analysis and a network survey, a questionnaire was designed; users of regional university libraries were selected as the survey objects, and the questionnaire was distributed within the universities. The experimental results show that, in the investigation of the reasons why readers do not frequently use the library's personalized services, 48.2% of readers stated that they did not understand the library's information resources, and 38.85% of readers thought that the library's services lacked personalization and pertinence. Opening up communication with new media platforms is also the inevitable direction for the development of university libraries and their service systems in China.

Introduction
In recent years, with the rapid development of artificial intelligence, deep learning, mobile Internet and other technologies, products related to artificial intelligence have been applied to our work, life, learning and many other areas, making them more convenient [1][2][3]. In this era of rapid development of "artificial intelligence", in order for university libraries to develop with the times, service is one of the areas that university libraries should strive to explore [4][5][6]. The university library is an important part of China's library community, a main distribution center of literature and an important information service organization. Its development level represents the frontier of the library industry in China, and its development trend guides the direction of the industry [7]. "Artificial intelligence" brings new ideas and technologies for information services, and provides support and guarantees for the personalized service mode of university libraries [8]. Research on the current use of library mobile services can enhance the core competitiveness of the library in future service fields, and improve the library's mobile service level and service image [9]. Research on WeChat services can not only reflect the advantages and disadvantages of library information services, but also reflect the efficiency of the whole library information service system, which will ultimately promote the improvement of information service quality. Ningning Kong adopted the method of action research to develop and perfect a service model.
The results show that library GIS services can provide support for the humanities and social sciences from the perspectives of research collaboration, learning support and outreach, with different emphases at different stages of learning and research. The research framework adopted can not only be used as an effective tool for the development of GIS services, but can also be extended to other library services [10]. Lesley S. focuses on the factors unique to online electronic resources. The school library setting raises professional issues such as planning and management, selection and access, organization, guidance and proper use. Through examples addressing these factors, and through best practices, some basic considerations are provided to help librarians use these materials to deliver effective services [11]. The innovations of this paper mainly include the following two aspects: (1) It proposes reforming the service platform of colleges and universities with artificial intelligence technology, and analyzes the current situation of colleges and universities in detail. At present, there is little research on the application of AI technology in university service platforms; most scholars focus on the changes AI brings to education and its impact on teaching, and few have conducted in-depth research from the perspective of the service platform. Based on the background of AI development, this paper systematically discusses how to apply AI technology to the service platform and how to use it. (2) This study can stimulate further thinking. Through literature analysis and case analysis, it examines the application of artificial intelligence technology in the university service platform, which can inspire the innovation and reform of AI-based university service platforms.

Main Features of Artificial Intelligence
(1) Designed by human beings to serve human beings. Artificial intelligence is, in essence, a system based on computer hardware that operates according to human-written programs, following certain logic and algorithms. Artificial intelligence is designed and manufactured by human beings and must serve human beings. (2) According to its human design, it can perceive the surrounding environment, generate corresponding responsive behaviors, and interact with human beings. An artificial intelligence system should be designed to sense information from the environment through the senses of sight, hearing, smell, touch and taste, and then respond to the external world through language, text, expression, action and other behaviors. (3) It has adaptability, learning ability, evolutionary iteration and connection extension. An artificial intelligence system should have the ability to adapt to its environment and to learn independently, that is, the ability to adjust the corresponding parameters, data and tasks in real time according to changes in the environment.

Construction of an Innovative Service Platform of the University Library from the Perspective of "Artificial Intelligence"
The WeChat service platform, through the WeChat public account, provides a platform for service functions. As an open platform, any organization or individual can apply for registration and obtain a corresponding service account. In the development technology of the platform, the API of the service platform is open to the outside world.
The university library can develop and build a third-party service platform based on the open interface to realize seamless docking with other departments of the university, and to ensure the diversification and individuality of the service platform's functions. WeChat officially provides developers with detailed development documents and code samples, which ensures that the service platform is convenient for innovation and customization. In the construction of the third-party service platform, a basic platform that can be used free of charge is provided as a template, and a project database that meets specific service needs is built by users according to their own requirements. In addition, the operators of the WeChat service platform also need to prepare the third-party interface: set up and connect the basic network, obtain a separate domain name and IP address, deploy the web server, design the user-defined interface and menu of the third-party message interface, design the third-party business service scheme, and optimize the third-party message interface and the information interaction with the business system. The specific service principle is as follows: a user on the WeChat mobile client sends a message to the WeChat service platform, which reaches the WeChat back end through the network. After the back end receives the message, it forwards it to the server of the service platform. After the server receives the request, it parses the message format, determines the interface to match according to the user's content and its own server logic, and transmits the information to the corresponding third-party service interface. The third-party interface parses the information type and delivers it to the corresponding third-party service department. The third-party service department computes the message to be returned to the user, then encapsulates the message and returns it to the service platform. Finally, the WeChat service platform forwards the returned message to the user's client.

Investigation Purpose and Contents
This questionnaire survey aims to understand the status quo of the development of personalized services in regional college libraries from the "artificial intelligence" perspective. It focuses on the use of the service platform technology of college libraries to investigate the development of personalized services in college libraries and the acceptance of this new service model by college users; the future development of library services is then determined according to the individual needs of the readers. The purpose of this survey is to analyze readers' personalized requirements through surveys on satisfaction with the current library, daily life information and user feedback provided by university libraries from the perspective of "artificial intelligence", to discuss the problems existing in the development of personalized services in college libraries, and to provide suggestions for the improvement of college library services.

Sample Selection
This questionnaire survey selected several representative college library user groups to comprehensively understand the satisfaction of users at different levels with the daily learning and living information and the user feedback provided by the current college library, as well as their understanding and cognition of personalized services.

Questionnaire Issuance and Recycling
The questionnaire survey was conducted from March 2019 to October 2019.
The survey participants were mainly college undergraduates, graduate students, teachers and scientific research personnel. The survey areas mainly covered regional colleges, of which five representative universities were selected: University A, University B, University C, University D and University E. A total of 154 questionnaires were distributed. After careful inspection, 15 unqualified questionnaires were eliminated and 139 valid questionnaires were recovered, a recovery rate of 90.25%.

Analysis of the Causes of Infrequent Use of Library Services
In the survey of the reasons why readers do not frequently use library services, according to a multiple-choice survey of all issues that may lead to infrequent use, 48.2% of readers stated that they did not understand the library's information resources; 38.85% of readers thought that library services lack personalization and pertinence; 36.69% of readers thought that library information services are not convenient; 34.53% of readers thought that the content and form of the library's personalized services are monotonous; and 26.62% of readers indicated that there is no unified entry for database retrieval, making operation inconvenient, as shown in Figure 1. Judging from the selection results, readers of different levels and types chose similar reasons for not frequently using the library's personalized services. These mainly concerned readers' lack of knowledge of library information resources, library information services that are not targeted or personalized, and the inconvenience of library information services. In the survey of readers' suggested measures to improve the library from the perspective of "artificial intelligence", a high proportion of readers chose providing interactive and personalized services, strengthening subject services, building a one-stop platform, and integrating network information resources. From the percentages of the voting results, many readers also paid attention to the options of "increasing linkage services between various functional departments in universities" and "increasing the content and frequency of information literacy training". This shows that the personalized services of university libraries can no longer simply adhere to old rules; they must keep pace with the times and use new technologies and new concepts to serve and meet the potential needs and calls of all readers, and open up a new path.

Measures for Improving Library Personalized Services from the Perspective of "Artificial Intelligence"
According to the research on the status of library personalized service usage, it was found that, in the current context of "artificial intelligence", libraries have strengthened the promotion of usage and coverage. With the continuous updating and iteration of current Internet technologies, and the continuous emergence of new mobile terminals and functions, the needs of college student readers and young teacher readers are changing accordingly (the main reasons for infrequent use are summarized in Figure 1).

Conclusions
This article mainly investigates the personalized services of the existing platform of the university library. It mainly introduces the service mode of the university library using the new media platform network in the university.
Based on the results of theoretical analysis and questionnaires, and using the content analysis method, it identifies the shortcomings of library personalized services at the current stage, and combines "artificial intelligence" technical means with traditional library personalized services to propose a new personalized service framework for college libraries.
Targeting key RNA methylation enzymes to improve the outcome of colorectal cancer chemotherapy (Review)

RNA methylation modifications are closely linked to tumor development, migration, invasion and responses to various therapies. Recent studies have shown notable advancements regarding the roles of RNA methylation in tumor immunotherapy, the tumor microenvironment and metabolic reprogramming. However, research on the association between tumor chemoresistance and N6-methyladenosine (m6A) methyltransferases in specific cancer types is still scarce. Colorectal cancer (CRC) is among the most common gastrointestinal cancers worldwide. Conventional chemotherapy remains the predominant treatment modality for CRC and chemotherapy resistance is the primary cause of treatment failure. The expression levels of m6A methyltransferases, including methyltransferase-like 3 (METTL3), METTL14 and METTL16, in CRC tissue samples are associated with patients' clinical outcomes and chemotherapy efficacy. Natural pharmaceutical ingredients, such as quercetin, have the potential to act as METTL3 inhibitors to combat chemotherapy resistance in patients with CRC. The present review discusses the various roles of different types of key RNA methylation enzymes in the development of CRC, focusing on the mechanisms associated with chemotherapy resistance. The progress in the development of certain inhibitors is also listed. The potential of using natural remedies to develop antitumor medications that target m6A methylation is also outlined.

Contents
1. Introduction
2. The m6A key enzyme in the development of CRC
3. Mechanisms of chemoresistance in CRC involving key m6A methylation enzymes
4. Therapeutic exploration of targeting key m6A methylation enzymes
5. Discussion

Introduction
Since 1960, when chemical modifications of RNA were first documented in detail as forms of epistatic modifications, >170 forms of RNA modifications, which maintain mRNA stability and are involved in precursor splicing, transport and translation initiation of mRNA (1), have been identified (2). Of these modifications, the methylation of the nitrogen atom at position 6 of the RNA adenine base, i.e., N6-methyladenosine (m6A) modification, is the most widespread in eukaryotes (3). Although RNA methylation was discovered >60 years ago, owing to technical limitations, previous studies on epigenetic modifications in tumors mostly focused on DNA methylation and histone modifications (4-7). It was not until 2012, when Meyer et al (8) first applied m6A-sequencing (seq) technology to determine the overall m6A levels of human and mouse genes at the transcriptional level on a large scale, and 2015, when Linder et al (9) first used m6A individual-nucleotide-resolution cross-linking and immunoprecipitation-sequencing technology to achieve single-base m6A level detection, that m6A research gradually developed. An increasing number of studies have found that abnormal m6A methylation is closely associated with the development, metastatic recurrence and treatment failure of various tumors (10,11).
Due to the relevance of epigenetic modifications to the extracellular environment, the complex heterogeneity of tumors and the potential of immunotherapy, research on m6A and tumors has predominantly focused on the immune microenvironment and immunotherapy (12,13). Indeed, only a few studies have really focused on the relationship between chemoresistance and m6A methylation, and even fewer on a specific single cancer entity. Colorectal cancer (CRC) is the third most common cancer type worldwide (14), and significant amounts of research have been devoted to the development of novel antitumor agents in the form of immune agents, oncolytic viruses and vaccines; however, chemotherapy is still the dominant treatment strategy (15). By contrast, primary or secondary chemoresistance is the main cause of treatment failure in patients with advanced CRC, and another limitation of conventional chemotherapy is the lack of specific targets (16). Primary drug resistance refers to a poor response of the tumor tissue to conventional doses of drugs at the initial treatment, while secondary drug resistance refers to the situation in which the tumor tissue shrinks or becomes necrotic under the action of the drug at the initial treatment, but efficacy decreases, or metastatic recurrence may even occur, after long-term use of the drug; the difference lies in the response to the initial tumor treatment. Epigenetic modifications of RNA are associated with sensitivity to multiple chemotherapeutic agents (17). Therefore, the present study focuses on the relationship between aberrant m6A methylation modifications and chemotherapeutic response mechanisms in CRC (18-20) and on various potential therapeutic measures targeting the m6A methylation process, in order to seek novel strategies to improve the therapeutic effect in CRC. Numerous scholars posit that the reversible and protein-regulated nature of m6A methylation offers a promising approach to overcoming the current shortcomings of multiple cancer therapies (8). However, the real application of m6A methylation and its clinical translation must also solve a myriad of issues, including the following: i) m6A methylation is the most extensive RNA modification - how can the key molecules be focused on and targeted? ii) How can the most accurately targeted drug candidates be screened out for each cancer type? iii) How can the regulatory axes involved in m6A methylation be specifically targeted to reverse drug resistance in tumors (21)?

The m6A key enzyme in the development of CRC

Methyltransferases. The key m6A methylation enzymes can be classified into methyltransferases, methyl recognition enzymes and demethylases. m6A methyltransferases mostly function as complexes [the m6A methyltransferase complex (MTC)]. The MTC mainly comprises methyltransferase-like 3 (METTL3), METTL14, vir like m6A methyltransferase associated (VIRMA), WT1 associated protein (WTAP), zinc finger CCCH-type containing 13 (ZC3H13), RNA binding motif protein 15 and Cbl proto-oncogene like 1 (HAKAI) (22,23). METTL3 was the first m6A methyltransferase identified and is the only catalytic subunit of the MTC, indicating that the presence and activation of METTL3 are the basis for m6A methylation. Although METTL3 can act independently of other m6A methyltransferases, its catalytic activity is much weaker than that of the MTC formed by wrapping it with other transferases (24). METTL14, as the primary RNA binding platform, forms a complex with METTL3 through 10 positively charged binding sites (Fig.
1), activating and enhancing METTL3 activity and promoting the recognition of RNA substrates, and thus enhancing MTC methylation efficiency (25,26). By contrast, WTAP is essential for promoting the enrichment of METTL3, METTL14 and other methyltransferases, transporting the MTC into the nucleus and stabilizing MTC activity in the organism (27,28). VIRMA and ZC3H13 are relatively newly discovered m6A methyltransferases. VIRMA, also known as KIAA1429, is the MTC component with the largest molecular weight known to date. VIRMA may act as a scaffold for the MTC structure, connecting the immobilized WTAP, HAKAI and ZC3H13 to form an envelope structure capable of accommodating the METTL3-METTL14 complex (29). ZC3H13 is also a linking protein that bridges WTAP to the METTL3-METTL14 complex and facilitates the recognition of RNA substrates. It has been discovered that m6A methyltransferases are extensively involved in various stages of CRC, including tumor stemness, microenvironmental remodeling, drug resistance, metastasis and recurrence. Some of these roles and the related pathway molecules are shown in Table I. For a long time, the methyltransferase METTL3 has been regarded as a pro-oncogene (43,44). However, in recent years, a limited number of studies have found that, under specific conditions such as tumor starvation, METTL3 may also inhibit tumor development and progression through activation of the p38 signaling pathway and interference with the cell cycle, bringing tumor cells into a dormant state (38,45). More interestingly, it has been suggested that METTL3 can function not only as a methyltransferase, but also as a methyl recognition enzyme independent of YTH N6-methyladenosine RNA binding protein F (YTHDF)1, recognizing mRNAs undergoing m6A methylation modification in the cytoplasm, promoting the recruitment of E74 like ETS transcription factor 3 (elF3) (46), driving β-catenin transactivation and upregulating c-Myc, VEGF, cyclin D7, MMP-3, c-Jun and other key genes of the malignant phenotypes of intestinal cancer (47). Of note, m6A methylation is not only related to modern medical concepts, but is also similar to certain Traditional Chinese Medicine (TCM) concepts (48,49). In TCM, the function of the various parts of the body is interrelated and influenced by the external environment; this is similar to epigenetics. If the external evil is more severe than the physical weakness, it is classified as excessive syndrome (ES) according to TCM concepts; if the two are of the same magnitude, it is classified as deficiency and excessive syndrome (DES); and if the physical weakness is more pronounced, it is defined as deficiency syndrome (DS). The early and early-middle stages of CRC often appear as ES or DES, while the late stages are DS (50,51). Under the guidance of these theories, TCM treatments need to be selected according to the different manifestations of different syndrome types in patients, so as to be used correctly (52). It has been found that patients with CRC with different syndrome types show differences in gene expression, and elF3 and the downstream factors it regulates are typical representatives. High expression of the keratin 19, keratin 18, keratin 8, ELF3 and serpin family E member 1 genes is a potential marker for identifying the TCM evidence type in CRC, and the ELF3 gene is highly expressed in CRC with DES or ES. High expression of mucin 2 and regenerating family member 4 in DES is mainly related to cell growth, as well as to the MAPK and cyclic adenosine 3',5'-monophosphate signaling pathways, while high expression of the collagen type I alpha 2 chain and periostin genes in ES is mainly related to angiogenesis and the PI3K/AKT pathway, and the caveolae-associated protein 2 and glutathione peroxidase 1 genes are highly expressed in DS and are mainly related to vomiting, platelet catabolism and endocytosis (50,53). Whether m6A methyltransferases can function as methylation recognition enzymes or other epigenetic factors remains to be elucidated. The combination of classical medical theories, including TCM, with epigenetic modifications, which are emerging molecular biology concepts, provides new perspectives for antitumor therapy and warrants further exploration (54,55).

Methyl recognition enzymes. The main function of methyl recognition enzymes is to recognize bases that undergo modification, thus activating downstream pathways and participating in biological processes including mRNA translation, transcription, splicing and degradation. The core members include YTHDF1-3 and YTHDC1-3 (56-58). YTHDF2 and YTHDF3 accelerate the degradation and splicing of modified mRNAs by recruiting the C-C chemokine receptor 4-negative regulator of transcription deadenylation complex (59) and upregulating forkhead box O3 (FOXO3) expression (60), respectively. Wang et al (61) found that YTHDF1 was significantly amplified and upregulated in CRC tissues, a phenomenon closely associated with the inflammation-to-cancer transformation of intestinal lesions and with liver and lung metastases in patients with this disease. There are also some small molecule drugs targeting methylation recognition enzymes such as YTHDF1, which will be helpful for subsequent research. The group of the above-mentioned study also developed a lipid nanoparticle-encapsulated Rho guanine nucleotide exchange factor 1 small interfering RNA drug for in vivo tumor therapy. YTHDF1 also upregulates the transcription factor glucocorticoid modulatory element binding protein 2 of the adhesion-regulating molecule-1/nuclear factor kappa pathway, thereby activating the pathway to resist apoptosis and drive CRC progression (62). Ni et al (63) demonstrated that YTHDF3 is a novel target of the Yes-associated protein 1 (YAP) signaling pathway. A long noncoding RNA called growth arrest-specific transcript 5 (GAS5) binds directly to YAP, promoting YAP phosphorylation and attenuating YAP-mediated YTHDF3 transcription, while allowing YTHDF3 to reversibly and selectively bind to GAS5 that has undergone m6A methylation and trigger its decay, forming a negative feedback loop. Although methylation recognition enzymes have been relatively poorly studied, they are indispensable for the proper binding of methyltransferases to the modified site. Insulin-like growth factor 2 mRNA binding proteins (IGF2BPs) are also significant methyl recognition enzymes. They can specifically recognize and then bind directly to the m6A modification site, subsequently upregulating SOX2 to activate CRC stem cells (CSCs) (64). IGF2BP2 can also induce chemoresistance in CRC cells by activating the PI3K/AKT signaling pathway and enhancing aerobic glycolysis (65). However, these functions remain inseparable from the upstream regulation of METTL3.

Demethylases. Thus far, knowledge of demethylases is limited and the only known m6A demethylases are fat mass and obesity associated (FTO) and alkylation repair homolog 5 (ALKBH5). Demethylases are key to the reversibility of m6A methylation (66,67) and are able to complete the demethylation process by oxidizing m6A to N6-hydroxymethyladenosine and N6-formyladenosine (68-70). Conventionally, the reversibility of methylation modifications creates a window for reversing chemoresistance; increasing demethylase activity tilts the reaction rate toward demethylation, thus facilitating the re-sensitization of drug-resistant cells to chemotherapy. Relier et al (71) found that, although overall FTO expression did not change significantly at different CRC stages, the subcellular localization of FTO shifted from strictly nuclear to cytoplasmic in the mucosa during metastatic CRC infiltration, which may be related to the tumor metastatic process. Meanwhile, FTO knockdown performed in several different colon cancer cell lines, patient-derived cells and patient-derived tumor xenograft animal models enhanced aldehyde dehydrogenase (ALDH) activity and promoted the CSC phenotype. Mice subjected to FTO silencing exhibited significant resistance to FOLFOX [50 µM 5-fluorouracil (5-FU) + 1 µM oxaliplatin (OXA)] as a first-line treatment, suggesting that FTO is an important marker for predicting CRC metastasis and chemoresistance. Furthermore, Ruan et al (72) illustrated that, under hypoxic conditions in the tumor microenvironment, FTO leads to increased E3 ligase-mediated ubiquitination and degradation of serine/threonine kinase receptor-associated protein, which promotes CRC metastasis. Further studies are warranted to elucidate how FTO translocates from the nucleus to the cytoplasm as CRC progresses, how it identifies and specifically binds to its downstream targets, and how tumor microenvironment features, including hypoxia, link FTO to CRC metastasis and to the response to therapeutic measures such as chemotherapy. High expression and methylation degeneration of ALKBH5 are strongly associated with Lynch syndrome (73), and silencing ALKBH5 facilitates an enhanced immune response, reduces lactate accumulation in the tumor microenvironment and increases infiltration of T-regulatory cells and myeloid-derived suppressor cell production (74). These findings are of great significance for slowing down the trend of CRC rejuvenation (75) and for inducing the conversion of microsatellite stable-type CRC to 'hot tumors' that benefit from immunotherapy. In brief, the key m6A methylation enzymes are widely involved in all aspects of CRC development and have considerable interventional value. Accordingly, it is reasonable to ask what the role of these enzymes is in the benefit derived from current mainstream chemotherapy for CRC, and what the potential opportunities for intervention are.
Mechanisms of chemoresistance in CRC involving key m6A methylation enzymes

The mechanisms by which chemoresistance occurs in CRC are complex and there is no shortage of links involving m6A methylation enzymes. In particular, the mechanism of chemotherapy resistance due to an altered tumor microenvironment regulated by key m6A methylation enzymes has gained attention. The known molecular mechanisms by which key m6A methylation enzymes regulate resistance to common chemotherapeutic drugs across cancers are presented in Fig. 2. Li et al (64) found, using The Cancer Genome Atlas database analysis, that METTL3 expression was elevated in both primary and metastatic CRC foci compared with normal tissue, and that patients with high METTL3 expression benefited less from chemotherapy when XELOX (OXA + capecitabine) and FOLFOX (5-FU + OXA) were used as first-line regimens. Chen et al (76) found that overexpression of IGF2BP1 promoted colony-forming ability and resistance to 5-FU and etoposide in CRC cells. As the study of m6A methyltransferases is the most extensive, and because 5-FU and OXA are the most commonly used chemotherapeutic agents in the treatment of CRC, the resistance mechanisms associated with m6A methylation were found to be highly dependent on the regulation of METTL3 (54) (as shown in Fig. 2). Therefore, this study further focuses on the 5-FU and OXA resistance mechanisms of m6A methyltransferases, particularly METTL3, in CRC. Due to the high complexity of the many genes and proteins involved and the associated mechanisms, the present review categorizes these mechanistic processes into drug transporter protein-related processes, stem cell activity, the EMT process and cellular autophagy, as illustrated in Fig. 3 and below.

Mechanisms associated with the microenvironment surrounding CRC cells. The tumor microenvironment refers to the special survival environment jointly formed by tumor cells and their surrounding chemokines, apoptotic factors, immune cells and adhesion proteins, and characterized by metabolic disorders, defective apoptosis, lack of oxygen and acidification. Such a special environment is conducive to the evasion of therapy by tumor cells, and epigenetic alterations such as m6A methylation drive the inevitable adaptation of tumor cells to this special environment.
Figure 2. Reasons for the present design: i) METTL3 is the catalytic center of the m6A methylation process and was identified as the most widely studied key m6A methylation enzyme; accordingly, the m6A-associated mechanisms of common chemotherapeutic drugs were grouped according to whether they depend on METTL3 regulation. ii) The mechanisms were further classified and some of the latest findings were added. Specifically, overexpression of TBB5 upregulates RAD51AP1 expression and increases the m6A methylation modification of this gene, resulting in improved resistance to 5-FU in CRC cells; knocking down METTL3 decreases the expression of RAD51AP1 and TBB5 while reducing the level of m6A methylation of RAD51AP1, so that resistant cells are once again sensitized to 5-FU. Upregulated by METTL3 and recognized by IGF2BP1, the preprotein translocator Sec62 is overexpressed, activating the Wnt/β-catenin pathway and leading to enhanced 5-FU resistance in CRC cells. Due to the dysregulated glycolipid metabolism in CRC, glycolipid complexes are abundant within CRC cells; complexes containing Gb3 can upregulate R273H, leading to p53 mutation and induction of METTL3-mediated m6A methylation, ultimately resulting in resistance to 5-FU and oxaliplatin in CRC cells. iii) Methyltransferases, recognition enzymes and demethylases are distinguished by different shapes, and enzymes of the same family are filled with the same color system (e.g., IGF2BP1-3 belong to the IGF2BP family and YTHDF1-3 to the YTHDF family). Through this classification, it was found that the mechanism of 5-FU resistance is closely related to the regulation of METTL3 and the recognition function of the IGF2BP family, whereas cisplatin resistance is more closely related to the function of recognition proteins and also involves the YTHDF family. METTL3, methyltransferase-like 3; m6A, N6-methyladenosine; 5-FU, 5-fluorouracil; RAD51AP1, RAD51-associated protein 1; YTHDF, YTH m6A RNA binding protein F; IGF2BP, insulin-like growth factor 2 mRNA binding protein; CRC, colorectal cancer; TBB5, tubulin beta class I; Sec62, SEC62 homolog, preprotein translocation factor.

Metabolic remodeling in CRC. Metabolic remodeling is one of the 14 features of the tumor environment (77), and CRC is characterized by abnormally active glycolipid metabolic rearrangement (54). Interference with metabolic pathways by inhibiting the activity of key enzymes of glycolysis and blocking oxidative phosphorylation pathways may improve the sensitivity of resistant CRC cells to cisplatin and vincristine (78-80). Overexpression of METTL3 recruits YTHDF1 to trigger the translation of LDHA mRNA, catalyzing glycolysis and increasing the resistance of CRC cells to 5-FU (81). By contrast, knockdown of METTL3 reduces the protein translation efficiency of hypoxia-inducible factor-1α, inhibits the activity of lactate dehydrogenase A, hexokinase 2, GAPDH and other key enzymes of glycolysis, and blocks the occurrence of the Warburg effect in intestinal cancer cells (82), thus re-sensitizing drug-resistant cells to chemotherapy. In addition, YTHDF1 and 2 are involved in the activation of transcription factor 4-mediated glutamine metabolism, and deletion of YTHDF1 and 2 enhances cisplatin tolerance in colorectal cancer (83,84).

Autophagy in CRC cells. Autophagy refers to the formation of autophagic lysosomes, stimulated by various adhesion and apoptotic factors, which transfer intracellular material into lysosomes for degradation. When the drug-tolerant persister (DTP) state is activated, key autophagy genes such as unc-51 like autophagy activating kinase 1 and autophagy related 2A are upregulated, and when autophagy inhibitors are administered, cells exit the DTP state and cannot survive chemotherapy (85,86). Hao et al (60) found that the m6A methylation recognition protein YTHDF3 is required for the maintenance of autophagy and that YTHDF3 depletion interrupts the formation of autophagosomes. Although YTHDF3 promotes and recognizes the translation of the autophagy gene FOXO3, it does not maintain its stability. Silencing METTL3, without interfering with YTHDF3, can still impair autophagy-lysosome synthesis and destabilize FOXO3 mRNA. Thus, the complete regulation of autophagy in CRC cells is influenced by METTL3 and the METTL3/YTHDF3 complex. Lin et al (87) demonstrated that the expression of light chain (LC)3B, a marker of autophagy activation, was positively correlated with the expression of FTO in normal and 5-FU-resistant CRC tissues, and that FTO knockdown resulted in a significant increase in apoptosis, inhibition of autophagy in CRC cells and re-sensitization of drug-resistant cell lines to 5-FU.

Figure 3. Key m6A methylation enzymes may mediate the chemotherapy resistance process of colorectal cancer (using methyltransferases as an example). Created with BioRender.com. Several studies have indicated that METTL3 can impact the effectiveness of chemotherapy drugs such as cisplatin, oxaliplatin and 5-fluorouracil. These drugs are commonly used to treat various types of cancer by influencing different enzymes involved in m6A methylation. Understanding this mechanism is crucial, as it sheds light on the role of METTL3. However, the specific impact and mechanism of METTL3 on chemotherapy treatment for individual cancers have not received sufficient attention or been adequately summarized; this article aims to address this matter and enlighten fellow researchers. METTL3, methyltransferase-like 3; m6A, N6-methyladenosine; miRNA, microRNA; EMT, epithelial to mesenchymal transition; ABC, ATP-binding cassette transporter; APC, adenomatosis polyposis coli; LGR5, leucine rich repeat containing G protein-coupled receptor 5; BCL-2, B-cell CLL/lymphoma 2, apoptosis regulator; FOXO3, forkhead box O3; CSN8, COP9 signalosome subunit 8; RUNX1, runt-related transcription factor 1; USP43, ubiquitin specific peptidase 43; TCA, tricarboxylic acid.

Epithelial-mesenchymal transition (EMT). A study from China published in 2020 found that METTL3 regulates the TGF-β and Snail pathways to affect EMT in intestinal cancer cells (88). EMT is also an important mechanism for the development of chemoresistance in CRC, marked by a decrease in adhesion molecules such as E-cadherin, detachment from basement membrane connections, and cytoskeletal and morphological convergence toward mesenchymal features. Various molecules, such as COP9 signalosome subunit 8, runt-related transcription factor 1, ubiquitin specific peptidase 43, histone deacetylase 2 and tumor microenvironment-associated fibroblasts, can regulate EMT in CRC cells (89). When EMT occurs, the malignant phenotype of intestinal cancer becomes more prominent and resistant to OXA and 5-FU (90).
Mechanisms associated with the function of CRC cells themselves

Stem cell activity in CRC. CSCs are a special class of cells with self-renewal and multi-differentiation potential. The most clinically relevant feature of CSCs is their ability to metastasize and evade standard chemotherapy (91). The differentiation homeostasis of this class of cells is usually regulated by the Wnt/β-catenin signaling pathway. Sec62 is a key protein in this pathway, and its increased expression enhances the sphere-forming ability of CSCs. Liu et al (92) found that Sec62 expression was positively correlated with METTL3 expression and that the METTL3-mediated accumulation of m6A methylation, which upregulates Sec62 levels, competitively disrupted the binding of β-catenin and adenomatosis polyposis coli, leading to 5-FU resistance. METTL3-mediated m6A methylation accumulation also drives the recruitment of a histone lysine (H4K3) methyltransferase to the promoter of leucine rich repeat containing G protein-coupled receptor 5 (LGR5), a colon cancer stem cell marker, leading to irinotecan and OXA resistance (93). Bai et al (94) found that silencing YTHDF1 downregulated CSC markers, including CD133, CD44, ALDH1, octamer-binding transcription factor 4 and LGR5, and inhibited Wnt/β-catenin pathway activity in ex vivo experiments. Accordingly, it has been hypothesized that YTHDF1 recognizes and promotes the translation of m6A-modified frizzled class receptor 9 and Wnt6 mRNAs, leading to aberrant activation of Wnt/β-catenin signaling and ultimately affecting tumorigenicity, stem cell-like activity and the response to chemical agents in CRC.

In other words, several groups have found that m6A methylation key enzymes can regulate the expression of various CSC markers and, consequently, affect CSC activity and sensitivity to chemotherapeutic agents. However, the specific mechanisms involved remain to be further elucidated.

DNA damage repair (DDR). Defects in the DDR pathway are a hallmark of genomically unstable tumors, and up to 15-20% of CRCs have DDR pathway defects (95,96). In the past, defects in the DDR pathway were thought to be mainly associated with the efficacy of poly(ADP-ribose) polymerase (PARP) inhibitors. However, the mechanism of action of common chemotherapeutic agents, including 5-FU and topoisomerase inhibitors, is also related to the DDR pathway. In the physiological state, METTL3 can promote cell repair after physical damage, including that caused by UV light. By contrast, METTL3 is pathologically upregulated in tumors, leading to an excessive rate of homologous recombination repair (HR) and non-homologous end joining, and to the failure of chemotherapeutic drugs (97). Li et al (98) found that after METTL3 silencing treatment of HCT-8/5-FU resistant colon cancer cells, the resistant cells were re-sensitized to 5-FU, while RAD51-associated protein 1, a key factor in the HR pathway, was downregulated. Zhang et al (99) discovered that METTL3 knockdown in OXA-resistant colon cancer cells also improved the chemosensitivity of resistant cells, while METTL3 overexpression restored the drug-resistant phenotype. Further sequencing suggested that the differentially expressed genes were mainly enriched in classical drug resistance pathways, including the Hippo and DDR pathways.
Therapeutic exploration of targeting key m6A methylation enzymes

Currently, the exploration of METTL3 inhibitors is being conducted mainly from the following three perspectives: application of natural drug ingredients; small-molecule drug synthesis, development and clinical trials; and the combination of big data with computer model predictions and the screening of drug targets and pathways (as shown in Fig. 4).

Although m6A methylase inhibitors are currently less used in the treatment of CRC, the METTL3 inhibitor STM2457 has been reported to exhibit significant therapeutic effects in acute myeloid leukemia (100) and was able to reverse chemoresistance in small cell lung cancer (101). However, its derivative STC-15, the first clinical candidate for an oral agent targeting METTL3, is in phase I clinical trials for patients with advanced solid tumors (NCT05584111). Therefore, researchers have begun to explore the application of key m6A methylation enzyme modulators in solid tumors from multiple perspectives, including natural drug components, small-molecule targeted drugs and programmed analysis of potential drug components using computerized big data. CRC has also received attention as a highly prevalent solid tumor, and an urgent clinical need to improve the efficacy of its treatments has emerged.

It has been suggested that most of the natural drug components that can regulate the action of m6A methylation key enzymes are polyphenols, alkaloids, flavonoids, anthraquinones and terpenoids (102). For instance, curcumin is a phenolic compound extracted from turmeric root that can reduce the expression of ALKBH5 and enhance the translation of tumor necrosis factor receptor associated factor 4 (TRAF4), prompting TRAF4 to bind to YTHDF6, an m6A methyl recognition enzyme, and improving the efficiency of m6A methylation modification (103). Curcumin was also able to drive the conversion of microtubule-associated protein LC3-I to LC3-II or upregulate Beclin-1 to induce autophagy in CRC cells, reduce CSC generation and re-sensitize drug-resistant cells to 5-FU and OXA (104,105). The combination of curcumin with another polyphenol, resveratrol, could alter the distribution of key m6A methylation enzymes such as METTL3 and YTHDF2, reduce the overall m6A methylation level in the intestine and improve intestinal mucosal integrity (106).
An increasing number of studies support the anti-tumor effects of herbal medicines as epigenetic modification modulators, including turmeric, tannin, yam and Kalanchoe pinnata (107), which can target DNA (cytosine-5-)-methyltransferase 1 to inhibit P65 gene methylation and interfere with CRC cell infiltration and migration (108). Although the relationship between Chinese medicine and m6A methylation has not yet been fully elucidated, some studies have found that herbal extracts can regulate DNA methylation, including chaihu saponin (109), quercetin (110) and catechins (111). Components such as chaihu saponin, quercetin and catechin can also increase the expression of METTL3 and METTL14 and decrease the expression of the demethylases FTO and ALKBH5 (109,112).

As for small-molecule drug development, besides STM2457 and its derivative STC-15, modulators targeting methylation recognition enzymes are also under active development. An inhibitor of FTO called CS1 inhibited the proliferation of six CRC cell lines, including HT-29, COLO, HCT-116 and 5-FU-resistant cell lines (HCT-116/5FU). It also induced G2/M phase cell cycle arrest and promoted apoptosis of HCT-116 cells by downregulating doublecortin domain containing 2C (113).

Virtual screening and in vitro assays of 1,042 commercially available natural products by Du et al (114) identified quercetin as a natural inhibitor of METTL3, which binds to METTL3 to form stable protein-ligand complexes. Manna et al (115) identified hesperidin as a potent inhibitor of METTL3 by computer screening and molecular dynamics simulation. Deng et al (102) are also exploring the development of traditional drugs as novel and effective therapeutic agents to inhibit m6A modification-mediated tumor progression, using the TCM Systematic Pharmacology Database and Analysis Platform and the Indian Medicinal Plants Phytochemistry and Therapeutics database combined with artificial intelligence to build a framework for traditional or natural drug-based m6A-targeting agents.
Discussion

M6A methylation modifications are a bridge between the tumor microenvironment and phenotypic alterations, including chemoresistance; the underlying mechanisms are complex, and the knowledge of epigenetics is yet to be refined. Of note, it has been found that various epigenetic modifications are not completely isolated from each other and that there is a strong correlation between m6A methylation and DNA methylation; depletion of METTL3 can specifically lead to increased DNA methylation at proximal sites, resulting in downregulated chromatin binding levels of fragile X mental retardation autosomal homolog 1 and tet methylcytosine dioxygenase 1 (116). Although current studies on m6A methylation have started from a combination of various high-throughput screens, molecular deconstruction techniques and metabolic alteration assays, the specific mechanisms of the interactions between the various aspects of m6A methylation, how to accurately achieve a homeostatic balance between methylation and demethylation, the specific mechanisms by which key m6A methylation enzymes induce migration, invasion, resistance and other malignant phenotypes of tumors, and the regulation of the tumor microenvironment remain unresolved. More importantly, some of the currently developed m6A modification inhibitors and activators have poor target specificity and therapeutic efficacy, as well as safety and pharmacokinetic limitations. The large-scale development and application of artificial intelligence provide new opportunities to assist in the preclinical screening of more efficient drug components. More m6A methylation key enzyme modulators may enter clinical trials in the near future, providing an effective way to improve the treatment of CRC.

Of note, the present study had certain limitations. First, the specific expression of each key m6A methylase was not summarized and discussed. The expression of these methylation key enzymes in different cancer types and their close relation to clinical characteristics and prognosis was not focused on, and may be discussed in the future. The present review focused on summarizing some directions of the basic findings for each methylation key enzyme in CRC chemoresistance and the subsequent clinical translation. Furthermore, the relationship between the complex immune microenvironment of CRC and key m6A methylases was not summarized and discussed. Considerable studies have focused on the relationship of the immune microenvironment with m6A modification; however, this topic is not closely related to the focus of the present review and the associated knowledge system is huge, so it was not discussed. In the future, a focus will be placed on the role of m6A in the immune microenvironment.

HX directed and participated in information gathering, image conception and design, drawing figures and subsequent revision of the manuscript. All authors have read and approved the final manuscript. Data authentication is not applicable.
Figure 1.Schematic diagram of the downstream protein interactions of the human METTL3-METTL14 complex and related resistance mechanisms.(I) Left: Structure of the human METTL3:METTL14 complex.Right: Protein structure of the human RAD51AP1 (one of the downstream proteins of METTL3-mediated 5-FU resistance in colorectal cancer).(II) Two-dimensional interaction between the METTL3-METTL14 complex and the RAD51AP1: The red dotted line indicates a salt bridge, the green dotted line hydrogen bonds, (A) indicates METTL3, (B) METTL14 and (R) RAD51AP1, where (A) chain ARG451 and GLN15 of the R chain, (hydrogen bond chain A) mainly have a hydrophobic interaction and R chain, including R chain positive LYS230 with (B) protein negatively charged GLU220 have a salt bridge interaction.(III) Three-dimensional interaction between the METTL3-METTL14 complex and RAD51AP1: Sky blue represents METTL14, green stands for METTL3 and purple indicates METTL14.The dark green dashed line shows the hydrogen-bonding interactions, e.g., GLU239, LYS278 and GLU220 of METTL14 form hydrogen bond interactions with ASN12, SER14 and ASP178 of METTL3 proteins; its hydrogen bonds are 2.8, 2.6 and 2.7 Angstroms.METTL3, methyltransferase-like 3; RAD51AP1, RAD51-associated protein 1. Figure 2 . Figure 2. Role and molecular mechanisms of m6A regulators in chemotherapy resistance.Adapted from Liu et al(54).The data and figures of the article can be freely quoted and edited, in accordance with the CC0 protocol.Copyright link: Rightslink ® by Copyright Clearance Center.Reasons for the present design: i) METTL3 is the catalytic center in the m6A methylation process and it was identified as the most widely available species of m6A methylation key enzyme.Accordingly, the m6A-associated mechanisms of common chemotherapeutic drugs were categorized into combinations based on whether they are dependent on METTL3 regulation.ii) The mechanisms were further classified and some of the latest findings were updated.Specifically, overexpression of TBB5 upregulated RAD51AP1 expression and induced an increase in m6A methylation modification of this gene, resulting in improved resistance to 5-FU in CRC cells.Knocking down METTL3 decreased the expression of RAD51AP1 and TBB5, while reducing the level of m6A methylation modification of RAD51AP1.As a result, resistant cells were once again sensitized to 5-FU.Upregulated by METTL3 and recognized by IGF2BP1, expression of the preprotein translocator known as Sec62 is upregulated, activating the Wnt/β-linker pathway and leading to enhanced 5-FU resistance in CRC cells.Due to the dysregulated glycolipid metabolism in CRC, there is an abundance of glycolipid complexes within CRC cells.These complexes containing Gb3 can upregulate R273H, leading to p53 mutation and induction of METTL3 for m6A methylation, ultimately resulting in resistance to 5-FU and oxaliplatin in CRC cells.iii) Methylases, recognition enzymes and demethylases were distinguished by different shapes.Enzymes of the same family are filled using the same color system (e.g., IGF2BP1-3 belong to the IGF2BP family and YTHDF1-3 belong to the YTHDF family).Through 3 or 4 steps of classification, it was found that the mechanism of 5-FU resistance is closely related to the regulation of METTL3 and the recognition function of the IGF2BP family.Cisplatin resistance is more closely related to the function of recognition proteins, and also involves the YTHDF family.METTL3, methyltransferase-like 3; m6A, N6-methyladenosine; 5-FU, 5-fluorouracil; RAD51AP1, 
RAD51-associated protein 1; YTHDF, YTH m6A RNA binding protein F; IGF2BP, insulin-like growth factor 2 mRNA binding protein; CRC, colorectal cancer; TBB5, tubulin beta class I; Sec62, SEC62 homolog, preprotein translocation factor. Figure 4 . Figure 4. Exploration of inhibitors.The names and definitions of Chinese medicinal herbs in the figure are based on the Chinese Pharmacopoeia 2020 Edition.The drawings of the human body and drugs in the figure are created with BioRender.com.Ginseng Radix is the dried root and rhizome of the plant Panax ginseng (C. A. Mey.) of the Acanthopanax family.Astragalus is the dried root of the legume Astragalus membranaceus (Fisch.)Bge.var.mongholicus (Bge.)Hsiao or Astragalus membranaceus (Fisch.)Bge.Cuscutae semen is the dried mature seed of the species Cuscuta australis R.Br. or Cuscuta chinensis Lam. of the composite family.Visci Herba is the dry leafy stem branch of the plant Viscum coloratum (Komar.)Nakai of the family Viscaceae.Ginkgo Folium is the dried leaves of the Ginkgo biloba L. plant of the Ginkgo family.Corydalis Rhizoma is the dried tuber of Corydalis yanhusuo W.T. Wang, a member of the Papaveraceae family.Scutellariae Radix is the dried root of Scutellaria baicalensis Georgi, a plant in the family Lablabaceae.Coptis Radix is the dried rhizome of Coptis chinensis Franch., Coptis deltoidea C.Y. Cheng et Hsiao, or Coptis teeta Wall., all of the buttercup family.Aloe is a concentrated dried juice of the leaves of the lily plant Aloe barbadensis Miller, Aloe ferox Miller or other related plants of the same genus.Rhei Radix et Rhizoma refers to the dried roots and rhizomes of Rheum palmatum L., Rheum tanguticum Maxim.ex Balf.orRheum officinale Baill.Muskmelon pedicel is the fruiting stalk of cucumber in Cucurbitaceae.UZH2 is a 1,4,9-triazaspiro[5.5]undecan-2-one derivative that can act as a potent inhibitor of METTL3.However, UZH2 does not singularly target METTL3; it also partially inhibits the activity of METTL1 and METTL16.STM2120 is one of the very few METTL3 inhibitors that belong to the non-S-adenosylmethionine-dependent class.STM2457 is a METTL3 inhibitor with higher activity and proven efficacy in cells and in vitro and in vivo, discovered on the basis of the studies of STM2120.STC-15 is the first STM2457 derivative to enter clinical trials.Test number: NCT05584111.Dac51 is a rationally designed FTO inhibitor based on structural similarity screening.Dac85 is a derivative of Dac51.FB23 is an FTO competitive inhibitor that selectively inhibits the N6-methyladenosine demethylase activity of FTO.FB23-2 is a more efficient derivative of FB23.CS1, Bisantrene, an anthracene derivative with antitumor activity.CS2 is a derivative of CS1 and is in clinical trials, trial no: NCT03820908.IOX1, a potent broad-spectrum inhibitor of 2OG oxygenases.2,4-PDCA, lutidinic acid, 2,4-dicarboxypyridine, a potent broad-spectrum inhibitor of 2OG oxygenases.
2023-12-21T16:02:23.243Z
2023-12-19T00:00:00.000
{ "year": 2023, "sha1": "01a4d037c147d344d5ac6368d59342be8a4c8335", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "9310b2b4046c56e9c22456c30a64ebb5d895e665", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125698851
pes2o/s2orc
v3-fos-license
Geometric Framework for Unified Field Theory Using Finsler Gauge Transformation Thecommon Finsler idea used by the physicists Beil andHolland is the existence of a nonholonomic frame on the vertical subbundle VTM of the tangent bundle of a base manifold M. This nonholonomic frame relates a semi-Riemannian metric (the Minkowski or the Lorentz metric) with an induced Finsler metric. In 2001, Antonelli and Bucataru have determined such a nonholonomic frame for two important classes of Finsler spaces that are dual in the sense of Randers and Kropina spaces [1, 2]. Recently, Bucataru and Miron have studied Finsler-Lagrange geometry and its applications to dynamical systems [3]. In this paper, the fundamental tensor field might be thought of as the result of two Finsler deformations.Thenwe can determine a corresponding frame for each of these two Finsler deformations. Consequently, a nonholonomic Finsler frame for a Finsler space with infinite series of (α, β)-metric, that is, F = β/(β − α), will appear as a product of two Finsler frames formerly determined. We study the Finsler space with (α, β)-metrics which have nonholonomic frames as an application for classical mechanics and dynamics in physics using gauge transformation which helps to derive unified field theory. 2. Preliminaries Introduction The common Finsler idea used by the physicists Beil and Holland is the existence of a nonholonomic frame on the vertical subbundle of the tangent bundle of a base manifold . This nonholonomic frame relates a semi-Riemannian metric (the Minkowski or the Lorentz metric) with an induced Finsler metric. In 2001, Antonelli and Bucataru have determined such a nonholonomic frame for two important classes of Finsler spaces that are dual in the sense of Randers and Kropina spaces [1,2]. Recently, Bucataru and Miron have studied Finsler-Lagrange geometry and its applications to dynamical systems [3]. In this paper, the fundamental tensor field might be thought of as the result of two Finsler deformations. Then we can determine a corresponding frame for each of these two Finsler deformations. Consequently, a nonholonomic Finsler frame for a Finsler space with infinite series of ( , )-metric, that is, = 2 /( − ), will appear as a product of two Finsler frames formerly determined. We study the Finsler space with ( , )-metrics which have nonholonomic frames as an application for classical mechanics and dynamics in physics using gauge transformation which helps to derive unified field theory. Preliminaries We denote the tangent space at ∈ by and the tangent bundle of by . Each element of has the form ( , ), where ∈ and ∈ . The natural projection : → is given by ( , ) ≡ . A Finsler structure of is a function : → [0, ∞), with the following properties: Throughout the project, the lowering and raising of indices are carried out by the fundamental tensor defined above and its inverse matric tensor . It is obvious that the Finsler structure is a function of ( , ). In this case, depends on only; then Finsler manifold reduces to a Riemannian manifold. The symmetric Cartan tensor can be defined as The Cartan tensor vanishes if and only if has no dependence. So the Cartan tensor is a measurement of the deviation from the Riemannian manifold. Using Euler's theorem on homogeneous function, we can get useful property of the fundamental tensor and Cartan tensor : where = / . Definition 1. 
The Finsler space = ( , ) is said to have an ( , )-metric if is positively homogeneous function of degree one in two variables = √ ( ) and = ( ) , where is a Riemannian metric and is differential 1-form. An ( , )-metric is expressed in the following form: In order to define , must satisfy the condition ‖ ‖ < 0 for all ∈ . Thus the normalized element of support =̇is given by where = . The angular metric tensor ℎ =̇̇is given by where The fundamental tensor = (1/2)̇̇2 is given by where Moreover, the reciprocal tensor of is given by The ℎ]-torsion tensor = (1/2)̇is given by Nonholonomic Frames for Beil Metric We start with a real -dimensional manifold of ∞class. Denote by ( , , ) the tangent bundle of the base manifold and by (̃, , ) the tangent bundle with the null cross section removed. Local coordinates on are denoted by ( ), while the induced local coordinates on are denoted by ( , ). Denote by ⋆ the linear map induced by the canonical submersion : → . As for every ∈ , ⋆, : → ( ) is an epimorphism; then its kernel determines -dimensional distribution : . We call it the vertical distribution of the tangent bundle. If the natural basis of is denoted by {( / )| , ( / )| }, then {( / )| } is a basis of . Definition 2. A Generalized Lagrange metric (GL-metric) is a metric on the vertical subbundle of the tangent space ; that is, for every ∈ , : × → R is bilinear, symmetric, of rank , and of constant signature. A pair GL = ( , ) with GL-metric is called Generalized Lagrange space (GL-space). In local coordinates, we denote ( ) = (( / )| , ( / )| ) for every ∈ . Then a GLmetric may be given by a collection of functions ( , ) such that we have the following: is a GL-metric [4], called the Beil metric. We say also that the metric tensor is a Finsler deformation of the Riemannian metric . It has been studied and applied by R. Miron and R. K. Tavakol in General Relativity for ⋆ ( , ) = exp (2 ( , )) and ⋆ = 0. The case ⋆ ( , ) = 1 with various choices of ⋆ and was introduced and studied by Beil for constructing a new unified field theory in [5]. Throughout this paper, we shall rise and lower indices only with the Riemannian metric we call a nonholonomic Finsler frame. is a nonholonomic Finsler frame. The Beil metric (16) and the Riemannian metric ( ) are related by Proof. Consider alsõ It is a direct calculation to check that̃is the inverse of ; that is, is a nonholonomic frame. Next, we have that = ⋆ + ⋆ = , so the formula (21) holds true. Nonholonomic Frames for Finsler Spaces with ( , )-Metrics Definition 6. A Finsler space = ( , ( , )) is called with ( , )-metric if there exists a 2-homogeneous function of two variables such that the Finsler metric : → R is given by where ( , ) = √ ( ) is a Riemannian metric and For a Finsler space with ( , )-metric we have With respect to these notations, we have that the metric tensor of a Finsler space with ( , )-metric is given by [8] ( , ) = ( ) + 0 ( ) ( ) The metric tensor of a Lagrange space with ( , )-metric can be arranged into the following form: From (26) we can see that is the result of two Finsler deformations: The nonholonomic Finsler frame that corresponds to the first deformation of (27) is, according to Theorem 4, given by The metric tensors and ℎ are related by ℎ = . (29) According to Theorem 4, the nonholonomic Finsler frame that corresponds to the second deformation of (27) is given bỹ= The metric tensors ℎ and are related by the following formula: =̃̃ℎ . 
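For reference, writing L = F^2 and using the symbols conventional in the (α,β)-metric literature, the fundamental tensor quoted at the end of the preceding paragraph is usually written in full as below; the letters a_ij, b_i, y_i and the ρ-coefficients are our notational choices for this recap, not a verbatim restatement of the paper's own displays.

\[ \alpha(x,y) = \sqrt{a_{ij}(x)\,y^{i}y^{j}}, \qquad \beta(x,y) = b_{i}(x)\,y^{i}, \qquad L = F^{2}(\alpha,\beta), \]
\[ g_{ij} = \rho\,a_{ij} + \rho_{0}\,b_{i}b_{j} + \rho_{-1}\,(b_{i}y_{j} + b_{j}y_{i}) + \rho_{-2}\,y_{i}y_{j}, \qquad y_{i} := a_{ij}\,y^{j}, \]
\[ \rho = \frac{1}{2\alpha}\frac{\partial L}{\partial \alpha}, \quad \rho_{0} = \frac{1}{2}\frac{\partial^{2} L}{\partial \beta^{2}}, \quad \rho_{-1} = \frac{1}{2\alpha}\frac{\partial^{2} L}{\partial \alpha\,\partial \beta}, \quad \rho_{-2} = \frac{1}{2\alpha^{2}}\left(\frac{\partial^{2} L}{\partial \alpha^{2}} - \frac{1}{\alpha}\frac{\partial L}{\partial \alpha}\right). \]

In the same spirit, the Beil-type deformation discussed earlier is, schematically, a Generalized Lagrange metric of the form g_ij = σ*(x,y) a_ij + σ(x,y) B_i B_j, with the Miron-Tavakol conformal case and Beil's unified-field case as particular choices of σ* , σ and B; this schematic form is stated here only as an orienting assumption.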
(31) From (29) and (31), we have that =̃with given by (28) and̃given by (30) is a nonholonomic Finsler frame of the Finsler space with ( , )-metric. Nonholonomic Frame for Infinite Series of ( , )-Metric. Now we will consider particular Finsler ( , )-metric; that is, ⋆ = 2 = ( 2 /( − )) 2 ; by (23), we have the Finsler invariants: The nonholonomic Finsler frame that corresponds to the first deformation of (27) is, according to Theorem 4, given by Chinese Journal of Mathematics 5 According to Theorem 4, the nonholonomic Finsler frame that corresponds to the second deformation of (27) is given bỹ Finsler Gauge Transformation If a particle in a space time moves along a curved, nongeodesic path, then it is said that the particle is under the influence of some external force. In such a case, an external force term is added to the equation of motions to explain the path of motion. Alternative point of view is that motion can be explained by a new metric, which would result from a gauge transformation. In this way, physical force fields can be geometrized, and general relativistic idea of space time curvature determining the path of the particle will also include fields other than gravitation. For this purpose, a class of gauge transformations which act on tangent space is considered. There are actually several ways to introduce Finsler geometry. Probably the most common way is just to assume a certain form for the metric function . It would be nice, however, to have a more physical picture of where the metric comes from. It is proposed to show that nontrivial Finsler metrics can be obtained from a certain type of Gauge transformation. This transformation takes a Lorentz space, of the sort we have been discussing, into another kind of space where the metrics and other geometrical quantities are dependent not only on but also on the tangent coordinate . The Gauge transformation is defined as follows: wherẽis a nonsingular matrix with inverse : This transformation acts on coordinates of the fiber. The action on the vertical basis is This gives a new internal or fiber metric: So the gauge transformation is a diffeomorphism acting on the vertical (fiber) subspace of . The matrices could be a representation of any subgroup of GL(4; ). This transformation is sometimes called a pure gauge transformation. It is also comparable to the metric group of Beil [5]. It does not act directly on coordinates of the horizontal subspace of . That is, = . Even so, it does produce a change of the base space metric. One way to infer the metric change is to require that the length of the tangent vector in the original Lorentz space, be invariant under the transformation. The expression 2 is used here since this is, indeed, the form of the Finsler metric function in the Lorentz space. So the transformed metric function, the length of the new tangent vector, is where ] is the new base space metric. If the components of a vector are changed under a transformation, then the metric is changed to preserve invariance. One way to see how the metric of changes is to consider that, following the soldering, are just orthonormal tetrads. This actually produces what is called soldering of the two parts of . One result of this soldering is that the components of a certain vector in are identified with the coordinates of the vertical part of . In other words, the base metric is related to the fiber metric by the same Lorentz transformation. This can also be written in the form is a new tetrad which is not necessarily orthonormal. 
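Before moving on, it is worth recording the particular metric this subsection concerns. In conventional notation the infinite-series (α,β)-metric named in the title reads as follows; the explicit value of ρ is our own evaluation under the conventions recalled above and is offered only as an illustrative check.

\[ F = \frac{\beta^{2}}{\beta - \alpha}, \qquad L = F^{2} = \frac{\beta^{4}}{(\beta - \alpha)^{2}}, \qquad \rho = \frac{1}{2\alpha}\frac{\partial L}{\partial \alpha} = \frac{\beta^{4}}{\alpha\,(\beta - \alpha)^{3}}. \]

Note that F is positively homogeneous of degree one in y, since β^2 is of degree two and β − α of degree one, which is the homogeneity required of a Finsler metric function.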
It is possible to construct unified theories in Finsler manifold using these tetrads for fiber coordinates. So the gauge transformation acting on the vertical part of gives not only a new fiber metric but also a new metric on the horizontal part of . This is a new metric on the base space . It should be emphasized, though, that there is no -coordinate transformation involved here. It is easy to see that this type of gauge transformation generates a way to get Finsler spaces. The transformation matrix just has to be not only a function of but also a function of the tangent coordinates : and the new metric is -dependent: As will be seen, the matrix can also include a general vector field which is not necessarily the tangent vector. This vector can be the gauge potential itself, or it can be related to the potential vector by a "gauge" phase transformation. Note that (46) results from the assumption that the orthonormal tetrad is just . This is a matter of convenience without consequence to the main argument. The metric is not, in general, itself the Finsler metric. In order to get the Finsler metric, we takẽ2 This should be compared with (41). The function (47) should be used since we will be concerned with the effect of the change of metric with respect to rather than . This is actually the canonical Finsler approach and is used by Bao et al. [9][10][11]. A new metric ] is then computed in the standard way: We say that this is a Finsler metric if As discussed above, we do not insist that the metric be positive definite. Another point of curiosity is the difference between the metrics and . Of course, if is notdependent, we have = and the new metric is only Riemannian. Actually, = under more general conditions. One only needs the "metric condition" of Asanov [10]: It is of interest that (50) is satisfied by the Randers and Weyl metrics but not, in general, by the metric (46). For physical reasons we want̃2 to be of second-degree homogeneity in . For example, if is the Lagrangian, then the energy is Ordinarily, so if ⋆ is of second-degree homogeneity, then = ⋆ which is good for mechanical systems. For another choice, with ⋆ of first-degree homogeneity, as is possible in the Lagrangian theories of Miron and Anastasiei [12], = 0 and there is a problem of how to explain a system with zero energy. There is a way around this homogeneity problem [12], which involves an energy function, but it is simpler just to choosẽ2 to be of second-degree homogeneity. This also relates naturally to the original metric function 2 = ] ] . This means that the transformation matrix is of zero-degree homogeneity in : It is of interest to ask how many of the known Finsler metrics can be obtained by this sort of gauge transformation? At this point, one can only list those for which a specific matrix is known: Randers, Kropina, Beil, Weyl, and metrics where gives a conformal transformation. Obviously, nonlinear metrics are not included. What does this gauge transformation mean physically? It can be interpreted as what happens when a nongravitational field is turned on in a region of space. For example, the field could be electromagnetic. A metric has also been given for the electroweak field (2) × (1) [13]. The gauge transformation could also be interpreted as a distortion or deformation of the original Lorentz space. In other words, the gauge field twists or distorts the space. The relative effect is, by the way, a torsion rather than a curvature. Although, remarkably, the final outcome is a curved space. 
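To fix the bookkeeping of this section, the vertical gauge transformation and the induced Finsler metric described above can be summarized schematically in index notation; the letters X (frame matrix), its inverse, and the tetrads ẽ are our own labels for this recap rather than the paper's original symbols.

\[ \tilde{y}^{\mu} = X^{\mu}{}_{\nu}(x,y)\,y^{\nu}, \qquad \bar{X}^{\mu}{}_{\lambda}\,X^{\lambda}{}_{\nu} = \delta^{\mu}_{\nu}, \qquad \tilde{g}_{\mu\nu} = \eta_{ab}\,\tilde{e}^{a}{}_{\mu}\,\tilde{e}^{b}{}_{\nu}, \]
\[ g_{\mu\nu}(x,y) = \frac{1}{2}\,\frac{\partial^{2}\tilde{F}^{2}}{\partial y^{\mu}\,\partial y^{\nu}}, \qquad \det\!\big(g_{\mu\nu}\big) \neq 0, \]
\[ E = y^{\mu}\,\frac{\partial L}{\partial y^{\mu}} - L, \qquad E = L \ \text{whenever } L \text{ is positively homogeneous of degree two in } y, \]

the last line being the Euler-theorem argument used above to motivate taking the transformed metric function of second-degree homogeneity.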
The torsion interpretation has been advocated by Holland [14] who relates the transformation to nonholonomic frames. The nonholonomic frame viewpoint is explained in a very useful new paper by Bucataru [15]. There is a teleparallel relation between the original Lorentz space and the resulting Finsler space. The change in local connections between the two spaces is zero in suitable coordinates. This implies a generalized equivalence principle which will be discussed below. One can write, in the natural basis ( , ] ), defining the horizontal ⋆ and vertical components of the connection. Given the metric condition (49), (48) reduces to From another point of view, A comparison of (56) and (57) gives These are just the horizontal and vertical components of ] . ⋆ and have in general no index symmetry. The net result of the work so far is a nontrivial Finsler metric and a Finsler metric function. In other words, this is the point where most Finsler theories begin. So what is the use of all these preliminaries? The main benefit is a physical understanding of how a Finsler space might describe a space which contains a nongravitational field. That is, it has been shown how a gauge transformation takes a Lorentz space to a space which is Finslerian. Note that the inverse transformation takes a metric from a Finsler metric back to the Lorentz metric. This demonstrates a generalized equivalence, whereby a transformation exists, which produces a local inertial frame along the world line of a particle. This means that the motion of a particle along a curved path not only might be due to a gravitational field derived from a metric but also might be due to other metric produced fields. It will be shown shortly exactly how this occurs. First, though, some standard Finsler results are presented. A significant point is that these results are developed in terms of a coordinate transformation of the base space . This contrasts with the gauge transformation just depicted which is a vertical diffeomorphism, a transformation in the fiber space. The gauge transformation is used to get the Finsler space. The connections given so far describe the transition to that space. The coordinate transformation deals with the properties of the resulting Finsler space. It describes the translation (sometimes called the transplantation) as one moves from one point to another in the space. The coordinate basis of does not transform covariantly under a coordinate transformation on . One has to introduce the local adapted basis where is the nonlinear connection. The adapted basis on is and the dual basis is which do transform covariantly under a coordinate transformation. The behavior of the metric under the coordinate transformation is, in the adapted basis, or it is, in the natural basis, This leads immediately to the usual connections. Consider is the adapted horizontal connection. This is symmetric in the second and third indices. Consider is the vertical connection. It is symmetric in all indices and by (58) is related to the vertical connection obtained from the gauge transformation by One can also consider the Finsler-Christoffel connection One can then form the canonical Finsler connections, for example, the Cartan connection ( , , ). Also, the various Finsler torsions and curvatures can be obtained. A highly recommended source for this standard theory is the book by Miron and Anastasiei [12]. For the Riemannian case, it is well known that there is curvature but no torsion. 
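For readers who want the coordinate expressions behind the objects listed in this passage, the adapted basis, its dual, and the generalized Christoffel symbols take the standard forms below; the symbol N for the nonlinear connection follows the Miron-Anastasiei convention and is assumed rather than copied from the original displays.

\[ \frac{\delta}{\delta x^{i}} = \frac{\partial}{\partial x^{i}} - N^{j}{}_{i}(x,y)\,\frac{\partial}{\partial y^{j}}, \qquad \delta y^{i} = dy^{i} + N^{i}{}_{j}(x,y)\,dx^{j}, \]
\[ \gamma^{i}{}_{jk}(x,y) = \frac{1}{2}\,g^{ih}\left(\frac{\partial g_{hj}}{\partial x^{k}} + \frac{\partial g_{hk}}{\partial x^{j}} - \frac{\partial g_{jk}}{\partial x^{h}}\right), \]

from which the canonical Finsler connections, for example the Cartan connection, are assembled in the usual way.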
There is an interesting result which can be obtained using (58): This shows that the Finsler-Christoffel connection can be computed from the horizontal gauge connection. The Finsler-Christoffel connection is probably the most interesting to physicists, since it appears in the geodesic equation, the equation of motion, ] / + ] ] = 0. It is assumed that there is a time like path with parameter such that = / and = 2 holds. It is noted in passing that the equation of motion can also be written as which is the usual Euler-Lagrange equation. Application It is now time to get some specific physics using the above developments. There are several gauge transformations which might give useful results. One of them is now examined and compared. They are given by ] and̃] as (28) and (30), respectively. It will be assumed that the vector is related to the electromagnetic potential vector by This shows how the potential is included in a gauge transformation. Equation (70) has the form of what is commonly called a "gauge" transformation but which more properly should be called a phase transformation. Equation (28) corresponds to the actual mathematical diffeomorphism [12] which is a pure gauge transformation. Again, is not directly associated with the nonlinear connection in (30). It is not difficult to show how a transformation like (28) is directly derived from (1) group [6]; also 2 = ] ] . The potential is known to be given by If is changed by (28), then the potential is also changed. For example, it can be transformed from zero to a nonzero vector. This means that the electromagnetic potential can be "turned on" by the transformation (28). The metric which is associated with this transformation is This has the form of a Kaluza-Klein metric except that the vector potential appears instead of . Also, the space is four-dimensional, not five-dimensional. Furthermore, in Kaluza-Klein theories, the vector is a special case of a connection. Here, is specially not associated with a connection. It will be seen that the field ] which is derived from is a part of the connection. In general, is a function of both and . There is a variety of possible Finsler geometries. Metrics of this type have been labeled "Beil" metrics [4,15,16]. The metric function is̃2 where = ( ] ] ) 1/2 , = K 1/2 , and K is a constant which turns out to be just a factor times the universal gravitational constant. In order to illustrate the usefulness of this metric, we take the simplest case, which is the case where is a function of only. This means that the resulting Riemann space is actually the osculating space to this class of Finsler spaces. The transformation connection is easy to derive: and, from (68), The condition is imposed, which implies In [17], Beil has proved this. The geodesic equation becomes One can identify and (78) is the Lorentz equation of motion for a charged particle, recalling (70). Note that condition (76) does not restrict the gauge freedom of the electromagnetic potential . The field ] can be identified as the electromagnetic field. This means that a purely geometric derivation of electromagnetism has been developed. (1) symmetry determines the gauge transformation which in turn produces the metric and the rest of the structure. In the present theory, all potentials are included in a metric which transforms like a metric. All fields are included in a connection which transforms like a connection. The equations of motion are geodesic equations. The energymomentum for all fields is derived from a curvature. 
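Since the equations of motion in this section are described mostly in words, it may help to record the conventional forms to which the text refers: the geodesic equation, its Euler-Lagrange rewriting, and the Lorentz-force equation obtained once F_{μν} = ∂_μ A_ν − ∂_ν A_μ is identified with the electromagnetic field. These are the textbook expressions, quoted here for orientation; the precise constants used in the original derivation may differ.

\[ \frac{d^{2}x^{\mu}}{ds^{2}} + \gamma^{\mu}{}_{\nu\lambda}\,\frac{dx^{\nu}}{ds}\frac{dx^{\lambda}}{ds} = 0, \qquad \frac{d}{ds}\!\left(\frac{\partial F^{2}}{\partial \dot{x}^{\mu}}\right) - \frac{\partial F^{2}}{\partial x^{\mu}} = 0, \]
\[ \frac{d^{2}x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\nu\lambda}\,\frac{dx^{\nu}}{ds}\frac{dx^{\lambda}}{ds} = \frac{q}{m}\,F^{\mu}{}_{\nu}\,\frac{dx^{\nu}}{ds}, \]

with Γ the Christoffel symbols of the gravitational part of the metric and q/m the charge-to-mass ratio of the particle.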
By way of comparison, consider three other gauge transformations which produce Finsler metrics which can be related to the one just studied. One transformation is The resulting metric is (46). The metric ] computed according to (48) is identical to (46), which approaches Finsler metric function: this is the special Finsler ( , )-metric function. The connections for this symmetric are well known and also complicated; hence they will not be repeated here. This is to be compared with Lorentz equation (see [18, eq (3)]) for charged particles to the equation of motion of a charged particle as geodesic equation (78), where one can see that is undetermined. Conclusion In 1982, Holland studied a unified formalism that uses nonholonomic frames on space time arising from consideration of a charged particle moving in an external electromagnetic field [19,20]. In 1987, Ingarden was first to point out that the Lorentz force law can be written in this case as geodesic equation on a Finsler space called Randers space [21]. In 1995, Beil viewed a gauge transformation as nonholonomic frame on the tangent bundle of a four-dimensional manifold [12,22]. The geometry that follows from these considerations gives a unified approach to gravitation and gauge symmetries. Considering the above concepts, we have presented a geometric setup that allows us to obtain necessary and sufficient conditions for the existence of invariants for certain types of nonholonomic systems for Finsler ( , )-metrics. Our methods have been successfully applied to prove the existence and nonexistence of invariants for concrete problems. Moreover, our geometric framework generates a new setup that might be useful to determine conditions that generate the existence of invariants for systems with a particular class of ( , )-metrics that we plan to study concerning nonholonomic systems for which some interesting results concern the existence. Finally, in this paper, we set up the application of Finsler geometry to geometrize the electromagnetic field completely. First Finsler gauge transformations are considered; thus, by a specific transformation, Finsler metric function is calculated and properties of this metric function are studied. Finally, general forms of Finsler metric functions, resulting from this transformation, are considered.
2019-04-22T13:02:15.476Z
2016-08-22T00:00:00.000
{ "year": 2016, "sha1": "fbd983309fe3f274f4d158cb1e3d35a1686d18c1", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/2016/3081840.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d5f30a786cfa53afd713a7954d2edfdadf0f5994", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
267033739
pes2o/s2orc
v3-fos-license
A Self‐Designed Endobutton Installation Device for Coracoclavicular Stabilization in Acute Rockwood Type III Acromioclavicular Joint Dislocation Objective Endobutton technique could provide flexible coracoclavicular (CC) stabilization for acromioclavicular joint (ACJ) dislocation and achieved good clinical outcomes. However, the difficult part of this technique was placement of the Endobutton to the coracoid base. In this study, we designed an Endobutton installation device to place the Endobutton at the coracoid base. And we examined the clinical and radiographic outcomes of patients with acute Rockwood type III ACJ dislocation repaired with Endobutton using this device. Methods We designed an Endobutton installation device to place the Endobutton at the coracoid base to achieve CC stabilization. We retrospectively reviewed 42 patients with acute Rockwood type III ACJ dislocation who underwent CC stabilization with Endobuttons placed either using this novel device (group I, n = 19) or the traditional technique (CC stabilization without using special device, group II, n = 23) from January 2015 to April 2020. The two groups were compared regarding the operative time, intraoperative blood loss, and clinical and radiologic outcomes at final follow‐up. The operation‐related complications were also evaluated. The Student's t test and the Mann–Whitney U‐test were used to compare differences in continuous variables. Differences in categorical variables were assessed with either the Pearson's chi‐squared test or Fisher's exact test. Results Forty‐two patients were clinically followed up for a minimum of 12 months. Compared with group II, group I had a significantly shorter mean operative time (56.05 ± 7.82 min vs. 65.87 ± 7.43 min, p < 0.01) and significantly lesser mean intraoperative blood loss (67.89 ± 14.75 mL vs. 94.78 ± 25.01 mL, p < 0.01). At final follow‐up, there were no significant differences between the two groups in the visual analog scale score for pain, Oxford Shoulder Score, Disabilities of the Arm, Shoulder, and Hand score, and postoperative CC distance of the affected side. Loss of reduction occurred in four patients in group I and three patients in group II (p = 0.68); there were no other operation‐related complications in either group. Conclusions The Endobutton installation device makes placement of the Endobutton at the coracoid base easier and achieves satisfactory clinical and radiologic outcomes without additional complications in acute Rockwood type III ACJ dislocation. Introduction A cromioclavicular joint (ACJ) dislocation is a common injury among the active population. 1,2In acute Rockwood type III ACJ dislocation, the coracoclavicular (CC) ligament is completely torn; this disruption of the CC ligament leads to vertical instability because of the downward pull of the weight of the arm and the superior pull of the trapezius muscle. 3,4As a result, nonsurgical treatment of acute Rockwood type III ACJ dislocation may result in high rates of pain and shoulder dysfunction. 5,6Therefore, surgery is recommended for acute Rockwood type III ACJ dislocation, especially in young active patients. 7][10] However, each surgical technique has advantages and disadvantages, and there is no consensus regarding the "gold standard" of fixation for acute Rockwood type III ACJ dislocation. 11CC ligament reconstruction is currently recommended for ACJ dislocation because the CC ligament plays a crucial role in the physiological function of the ACJ. 
12,13 CC stabilization using Endobuttons placed either via open surgery or arthroscopically reportedly achieves good clinical and radiological outcomes.14,15 However, the difficult part of the Endobutton technique for CC stabilization is placement of the Endobutton at the coracoid base,16 which often requires repeated attempts. Therefore, more soft tissue stripping, a longer operative time and greater intraoperative blood loss are needed to complete the CC stabilization. Herein, we present a self-designed Endobutton installation device for placing the Endobutton at the coracoid base. The purpose of the present study is to examine the clinical and radiographic outcomes of patients with acute Rockwood type III ACJ dislocation repaired with Endobuttons using this device.

Self-Designed Endobutton Installation Device

The difficult part of the Endobutton technique for CC stabilization is placement of the Endobutton at the coracoid base. Therefore, we designed an Endobutton installation device to easily place the Endobutton at the coracoid base for CC stabilization. The self-designed Endobutton installation device is composed of a cannula and a pushing rod (Patent number: ZL201821603597.9). The cannula is 100 mm long and has outer and inner diameters of 4.3 mm and 4.1 mm, respectively, through which the Endobutton (12 mm long, 3.75 mm wide, 1.5 mm thick; Johnson & Johnson, Piscataway, NJ, USA) can pass. The pushing rod is 150 mm long and has a rectangular handle (30 mm long, 10 mm wide, 2.0 mm thick), a cylindrical part (110 mm long, 2.0 mm diameter), and a rectangular end (10 mm long, 2.5 mm wide, 1.5 mm thick) (Figure 1). The Endobutton installation device is made of medical stainless steel and was produced by Double Medical Technology Inc., Xiamen, China (Figure 2).

Study Population

This retrospective study included patients with acute Rockwood type III ACJ dislocation who received CC stabilization using either the Endobutton installation device (group I) or the traditional Endobutton placement technique (group II) from January 2015 to April 2020. The inclusion criteria were: (1) acute Rockwood type III ACJ dislocation that occurred within 2 weeks of surgery; (2) consent for surgical treatment; and (3) postoperative follow-up of at least 12 months. The exclusion criteria were: (1) age younger than 18 years; (2) history of surgery on the affected shoulder; (3) concomitant fracture around the affected shoulder; (4) chronic ACJ dislocation; and (5) severe osteoporosis. Surgery was recommended for patients with high activity levels. Forty-two patients with acute Rockwood type III ACJ dislocation consented to surgical treatment and were included in this study. All surgeries were performed by two surgeons (J.M. and X.W.) at a single institution. This study was approved by the Ethics Committee of the hospital (approval number: SH9H-2023-T89-1).

Surgical Technique

After the induction of general anesthesia, the patients were placed in the beach-chair position. A 5-cm transverse incision was made over the lateral third of the clavicle extending toward the ACJ. The ACJ reduction was achieved and maintained by temporary K-wire fixation across the ACJ. The anterior deltoid muscles were bluntly dissected along the deltoid fibers from the clavicle to the tip of the coracoid. The soft tissue under the coracoid base was then pushed away with the surgeon's fingers. A director was placed from the middle of the clavicle (20-30 mm from the distal end of the clavicle) to the middle of the coracoid base under X-ray guidance.
In group I, a 4.6-mm bony tunnel was drilled through the director. The cannula was inserted into the bony tunnel from the clavicle surface to the coracoid base. A suture was placed through the Endobutton loop, and the Endobutton was then put into the cannula. Under X-ray guidance, the Endobutton was placed at the coracoid base by the pushing rod. The Endobutton loop was brought to the clavicle surface by pulling the suture. Another Endobutton was inserted through the Endobutton loop to reconstruct the CC ligament (Figure 3). In group II, a 4.2-mm bony tunnel was drilled through the director. A stainless steel suture (M649G, Ethicon Inc., Somerville, NJ, USA) was folded into double strands. The middle fold of the steel suture was passed through the bony tunnel to the coracoid base and was taken to the incision site by vascular forceps. A No. 1 Ethibond suture (Ethicon Inc.) was then passed through the loop of the Endobutton and through the middle fold of the steel suture. The Endobutton was placed at the coracoid base and the loops were brought to the surface of the clavicle by pulling the middle fold of the steel suture and the Ethibond suture. Another Endobutton was inserted through the Endobutton loop to reconstruct the CC ligament (Figure 4).

Postoperative Rehabilitation

Patients were encouraged to begin passive movement of the shoulder, including pendulum exercises, self-assisted circumduction exercises, and gradual passive range of motion (ROM) exercises, on the first day postoperatively. Active ROM of the shoulder was encouraged at 1 week postoperatively when the pain was sufficiently relieved. A shoulder sling was used for 4 weeks postoperatively. Patients were instructed to avoid lifting, carrying, pushing, and pulling with strong force for 8 weeks postoperatively.

Clinical and Radiographic Evaluation

The operative time (minutes) was recorded as the time from skin incision to closure of the wound. Intraoperative blood loss (mL) was also recorded. The patients were followed up at a minimum of 12 months postoperatively to evaluate the clinical and radiologic outcomes. The clinical outcomes were evaluated using the visual analog scale (VAS) score for pain, the Oxford Shoulder Score, and the Disabilities of the Arm, Shoulder, and Hand (DASH) score. For the evaluation of radiologic outcomes, the CC distances (CCD) of the affected and contralateral sides were measured on anteroposterior radiographs as described previously.17 Briefly, the CCD was measured between the uppermost border of the coracoid process and the opposing clavicular surface. CCD measurements were performed by two clinicians (J.M. and Y.T.). Operation-related complications such as incision infection, loss of reduction, re-dislocation, implant loosening, pleural injury, neurovascular injury, and iatrogenic fracture were recorded. Loss of reduction and re-dislocation were defined as increases in the CCD with respect to the contralateral side of 50%-100% and >100%, respectively, at final follow-up.18,19

Statistical Analysis

SPSS software (version 23.0; IBM Corp., Armonk, NY, USA) was used for statistical evaluations. The Student's t test and the Mann-Whitney U-test were used to compare differences in continuous variables. Differences in categorical variables were assessed with either the Pearson's chi-squared test or Fisher's exact test. The level of significance was p < 0.05 for all tests.
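As a quick illustration of how the radiographic classification defined above can be applied, the following Python sketch computes the percentage CCD increase relative to the contralateral side and assigns the corresponding category. The function name and the example values are hypothetical and merely restate the thresholds given in this section; this is not the authors' analysis code.

def classify_ccd(affected_ccd_mm, contralateral_ccd_mm):
    """Classify reduction status from CC distances, per the thresholds above."""
    increase_pct = (affected_ccd_mm - contralateral_ccd_mm) / contralateral_ccd_mm * 100.0
    if increase_pct > 100.0:
        category = "re-dislocation"        # CCD increase > 100%
    elif increase_pct >= 50.0:
        category = "loss of reduction"     # CCD increase 50%-100%
    else:
        category = "maintained reduction"  # CCD increase < 50%
    return increase_pct, category

# Hypothetical example (not patient data): affected side 13.0 mm vs. contralateral 8.5 mm
print(classify_ccd(13.0, 8.5))  # -> (52.94..., 'loss of reduction')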
Patient Demographics

The patient demographics were detailed in Table 1. There were no significant differences between the two groups in terms of age, sex, affected side, injury type, duration from injury to operation, and duration of follow-up (all p > 0.05). Of the 42 patients with acute Rockwood type III ACJ dislocation, 19 patients (13 men and six women) were treated using the Endobutton installation device to place the Endobutton at the coracoid base for CC stabilization (group I). In group I, the mean age was 43.26 years (range 28-56 years); the dislocation occurred on the right side in 10 patients and the left side in nine; seven, eight, and four injuries were caused by motor vehicle accidents, falls from a height, and direct traumatic injuries, respectively; the mean time from injury to

Clinical and Radiographic Outcomes

The operative details were summarized in Table 2. The clinical outcomes were shown in Table 3. At a minimum of 12 months postoperatively, all 42 patients had satisfactory clinical outcomes. At final follow-up in groups I and II, the average VAS scores for pain were 1.1 ± 0.9 (range, 0-3) and 1.3 ± 0.9 (range, 0-3), respectively (p = 0.57), the

The radiologic outcomes were shown in Table 4. All patients achieved satisfactory radiologic outcomes at final follow-up. There were significant differences between the pre- and postoperative CCD on the injured side in both groups I and II (p < 0.01). The mean postoperative CCD of the injured side did not significantly differ between groups I and II (p = 0.30).

Postoperative Complications

At final follow-up, a loss of reduction was radiographically confirmed in four patients in group I and three patients in group II (p = 0.68). In group I, the mean preoperative CCD of the affected side was 17.55 ± 0.91 mm (range, 17.1-18.5 mm), the mean postoperative CCD of the affected side was 13.85 ± 0.89 mm (range, 12.5-14.6 mm), and the mean CCD of the contralateral side was 8.55 ± 0.55 mm (range, 8-9 mm). In group II, the mean preoperative CCD of the affected side was 17.87 ± 0.90 mm (range, 17.2-18.9 mm), the mean postoperative CCD of the affected side was 13.87 ± 0.81 mm (range, 13.3-14.8 mm), and the mean CCD of the contralateral side was 8.4 ± 0.20 mm (range, 8.2-8.6 mm). The VAS score for pain in the injured shoulder was 2-3 (indicating mild pain) in seven patients, but no patients required additional analgesic medication. There was no radiological evidence of re-dislocation or implant loosening at final follow-up in either of the two groups. No patients had incision infection, pleural injury, neurovascular injury, or iatrogenic fracture.

Discussion

Technique Advantages

To our knowledge, this is the first report of a self-designed Endobutton installation device for placing the Endobutton at the coracoid base to achieve CC stabilization in acute Rockwood type III ACJ dislocation. In our study, this technique enabled us to easily and effectively place the Endobutton at the coracoid base and achieve satisfactory clinical and radiologic outcomes without severe complications.
Technical Characteristic

The following three surgical techniques are frequently used for CC stabilization in clinical practice8,20-22: (1) ACJ fixation (static techniques: K-wires [no longer used because of complications]; dynamic techniques: hook plate); (2) CC fixation (static techniques: CC screws; dynamic techniques: Endobuttons); and (3) CC ligament reconstruction. However, each of these surgical techniques has complications, and the best treatment strategy for CC stabilization remains controversial. The most commonly used surgical procedure is internal fixation with the Endobutton technique,23,24 which can be performed either via open surgery, mini-open surgery, or arthroscopically.15,16,21 The Endobutton technique provides flexible CC stabilization for ACJ dislocation.25,26 However, as described above in the Surgical Technique section (Figure 3), the difficult part of the traditional Endobutton technique for CC stabilization is placement of the Endobutton at the coracoid base,16 which often requires repeated attempts. The use of the traditional Endobutton technique may therefore be associated with problems such as more soft tissue stripping, a longer operative time, and greater intraoperative blood loss. However, to our knowledge, no studies have reported the use of special devices for Endobutton placement. We designed an Endobutton installation device to place the Endobutton at the coracoid base for CC stabilization. During the operation, instead of the 4.2-mm bony tunnel used in the traditional Endobutton technique, a 4.6-mm bony tunnel from the clavicle surface to the coracoid base was needed for the cannula to pass through. The Endobutton was then easily placed at the coracoid base through the cannula.

Application of the Technique

In our study, we retrospectively analyzed 19 and 23 patients with acute Rockwood type III ACJ dislocation treated with Endobutton placement using the Endobutton installation device (group I) and the traditional technique (group II), respectively. During the operation, a mini-open technique was used in both groups, with a finger used to protect the neurovascular structures under the coracoid base when the director was placed and the bony tunnel was drilled. This procedure seemed to be safe and simple compared with arthroscopy or surgery under extensive radiographic guidance. Additionally, our results suggest that the Endobutton installation device easily and effectively placed the Endobutton at the coracoid base, and the operative time and intraoperative blood loss were significantly reduced in group I compared with group II. At final follow-up, all 42 patients achieved satisfactory clinical outcomes, with no significant differences between groups I and II in the mean VAS score for pain, mean Oxford Shoulder Score, and mean DASH score. Moreover, on radiographs obtained at final follow-up, the postoperative CCD was significantly decreased compared with the preoperative CCD of the injured side in both groups, and the postoperative CCD of the injured side was similar in the two groups. There was a similar incidence of loss of reduction in both groups. In both groups, we only reconstructed the conoid ligament, which lacks horizontal stability.27 Therefore, the loss of reduction might have occurred mainly because reconstruction of the conoid ligament alone was not able to completely stabilize the ACJ.18 However, as previously reported, this loss of reduction was not significantly associated with clinical outcomes.26,28
26,28 The patients with loss of reduction still achieved satisfactory clinical outcomes, and no further complications occurred in either group in the present study.

Technique Risks and Strategies

Despite the good outcomes achieved in the present study, the Endobutton installation device has some disadvantages. Firstly, the bony tunnels are drilled using a 4.6-mm drill instead of a 4.2-mm drill. Although there were no clavicle or coracoid fractures in our study, the use of a drill with a larger diameter carries a risk of this kind of complication. We think that these complications were prevented because the bony tunnels were drilled through the middle of the distal clavicle and the coracoid base, which are wide enough for 4.5- to 5.0-mm bony tunnels. 29 Secondly, there is a theoretical risk that an Endobutton is more likely to migrate into a 4.6-mm bony tunnel than a 4.2-mm bony tunnel. However, no such complications occurred in our study. We consider that the incidence of implant loosening may have been reduced by the following two procedures. (1) The Endobutton loop was kept under appropriate tension. With the aid of direct vision and radiographic monitoring, we confirmed that the ACJ dislocation was completely reduced and maintained by temporary K-wires placed across the ACJ. The bony tunnel was then drilled and its length was carefully measured. Based on this measurement, we chose a loop that was the same size as or one size larger than the tunnel length. After the loop was brought to the clavicle surface, another Endobutton was inserted through the loop. (2) The Endobutton was placed horizontally above or below the center of the tunnels in the distal clavicle and the coracoid base. Thirdly, the drilling of a bony tunnel through the coracoid base carries the risk of pleural injury and neurovascular injury. To prevent such injuries, the surgeons used their fingers to push away the soft tissue under the coracoid base. The surgeons then placed a finger at the coracoid base when the director was placed and the bony tunnels were drilled. These procedures may avoid pleural and neurovascular injuries.

Limitations

Our study has several limitations. It was a single-center retrospective study with a low level of evidence and a small number of patients. Thus, the present results require confirmation in a multicenter study with a large sample size. Additionally, there may have been biases related to coding that influenced the identification of potentially eligible patients. To control for this bias, ACJ dislocations will be carefully coded according to the Rockwood classification in future studies.

Prospects for Clinical Application

The Endobutton installation device may be a good option for placing the Endobutton at the coracoid base to achieve CC stabilization, for instance in ACJ dislocations and Neer type IIB lateral clavicle fractures. It makes placement of the Endobutton at the coracoid base easier, with less soft tissue stripping, a shorter operative time, and less intraoperative blood loss. However, the larger-diameter drill carries a risk of clavicle and coracoid fractures in this surgical procedure. To avoid intraoperative clavicle and coracoid fractures, the surgeon must take care to ensure that the bony tunnels are drilled through the middle of the distal clavicle and the coracoid base.
Conclusion

In conclusion, the Endobutton installation device makes placement of the Endobutton at the coracoid base easier and achieves satisfactory clinical and radiologic outcomes without additional complications in acute Rockwood type III ACJ dislocation.

FIGURE 1 Illustration of the self-designed Endobutton installation device. (A) The cannula is 100 mm long and its outer and inner diameters are 4.3 mm and 4.1 mm, respectively. (B) The pushing rod is 150 mm long and is composed of a 30 mm long, 10 mm wide, and 2.0 mm thick rectangular handle; a 110 mm long cylindrical part with a 2.0 mm diameter; and a 10 mm long, 2.5 mm wide, and 1.5 mm thick rectangular end.

FIGURE 2 The self-designed Endobutton installation device used to place an Endobutton at the coracoid base. (A) Lateral view of the cannula, Endobutton, and pushing rod. (B) Transverse view of the cannula and Endobutton. (C) Lateral view of the Endobutton being passed through the cannula using the pushing rod.

FIGURE 4 A 46-year-old man with Rockwood type III ACJ dislocation. (A) Preoperative radiographs. (B) The ACJ reduction is achieved and maintained by the placement of temporary K-wires across the ACJ. (C) A director is placed from the middle of the clavicle to the middle of the coracoid base. (D) Illustration showing the stainless steel suture folded into double strands. A 4.2-mm bony tunnel is drilled through the director and the middle fold of the steel suture is passed through the bony tunnel to the coracoid base. (E) Illustration of the middle fold of the steel suture being taken to the incision site by the vascular forceps. (F) Illustration showing the No. 1 Ethibond suture passing through the loop of the Endobutton and through the middle fold of the steel suture. (G) Illustration of the Endobutton placed at the coracoid base and the loop brought to the surface of the clavicle by pulling the middle fold of the steel suture and the Ethibond suture. (H) The second Endobutton plate without loops is placed in the loop above the clavicle. (I) Postoperative radiograph obtained to assess the reduction.

TABLE 2 Operation-related factors. Abbreviations: DASH, Disabilities of the Arm, Shoulder and Hand; SD, standard deviation; VAS, visual analog scale.
2024-01-19T06:18:03.308Z
2024-01-17T00:00:00.000
{ "year": 2024, "sha1": "bddfd99da9e15bb1017bb9bae7a00e9189ee72d3", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/os.13995", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f4b25bccc63465be13005b7914f74c5fe8441b95", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
235485586
pes2o/s2orc
v3-fos-license
Inconsistency thresholds for incomplete pairwise comparison matrices

Pairwise comparison matrices are increasingly used in settings where some pairs are missing. However, there exist few inconsistency indices for such incomplete data sets and no reasonable measure has an associated threshold. This paper generalises the famous rule of thumb for the acceptable level of inconsistency, proposed by Saaty, to incomplete pairwise comparison matrices. The extension is based on choosing the missing elements such that the maximal eigenvalue of the incomplete matrix is minimised. Consequently, the well-established values of the random index cannot be adopted: the inconsistency of random matrices is found to be a function of matrix size and the number of missing elements, with a nearly linear dependence in the case of the latter variable. Our results can be directly built into decision-making software and used by practitioners as a statistical criterion for accepting or rejecting an incomplete pairwise comparison matrix.

Introduction

Pairwise comparisons form an essential part of many decision-making techniques, especially since the appearance of the popular Analytic Hierarchy Process (AHP) methodology (Saaty, 1977, 1980). Despite simplifying the issue to evaluating objects pair by pair, the tool of pairwise comparisons presents some challenges due to the possible lack of consistency: if alternative A is two times better than alternative B and alternative B is three times better than alternative C, then alternative A is not necessarily six times better than alternative C. The origin of such inconsistencies resides in asking seemingly "redundant" questions. Nonetheless, additional information is often required to increase robustness (Szádoczki et al., 2022), and inconsistency usually does not cause a serious problem as long as it remains at a moderate level.

Inconsistent preferences call for quantifying the level of inconsistency. The first and by far the most extensively used index has been proposed by the founder of the AHP, Thomas L. Saaty (Saaty, 1977). He has also provided a sharp threshold to decide whether a pairwise comparison matrix has an acceptable level of inconsistency or not. This widely accepted rule of inconsistency has been constructed for the case when all comparisons are known. However, there are at least three arguments why incomplete pairwise comparisons should be considered in decision-making models (Harker, 1987):
• in the case of a large number of alternatives, completing all n(n − 1)/2 pairwise comparisons is resource-intensive and might require much effort from experts suffering from a lack of time;
• unwillingness to make a direct comparison between two alternatives for ethical, moral, or psychological reasons;
• the decision-makers may be unsure of some of the comparisons, for instance, due to limited knowledge on the particular issue.

In certain settings, both incompleteness and inconsistency are inherent features of the data. The beating relation in sports is rarely transitive and some players/teams have never played against each other (Bozóki et al., 2016; Csató, 2013, 2017; Petróczy and Csató, 2021; Chao et al., 2018). Analogously, there exists no guarantee for consistency when the pairwise comparisons are given by the bilateral remittances between countries (Petróczy, 2021), or by the preferences of students between universities (Csató and Tóth, 2020). Finally, note that pairwise comparison matrices are usually filled sequentially by the decision-makers, see e.g.
the empirical research conducted by Bozóki et al. (2013). If the degree of inconsistency is monitored continuously during this process, the decision-maker might be warned immediately after the appearance of an unexpected value (Bozóki et al., 2011). Consequently, there is a higher chance that the problem can be solved easily compared to the usual case when the supervision of the comparisons is asked only after all pairwise comparisons are given. This is especially important as these values are often provided by experts who suffer from a lack of time.

Let us see an example, where the missing elements are denoted by *. Pairwise comparison matrix A is inconsistent because a_12 × a_23 × a_34 = 2 × 2 × 2 = 8 ≠ 4 = a_14. But it remains unknown whether this deviation can be tolerated or not.

The current paper aims to provide thresholds of acceptability for pairwise comparison matrices with missing entries. We want to follow the concept of Saaty as closely as possible. Therefore, the unknown elements are considered as variables to be chosen to reduce the inconsistency of the parametric complete pairwise comparison matrix, that is, to minimise its maximal eigenvalue as suggested by Shiraishi et al. (1998) and Shiraishi and Obata (2002). The main challenge resides in the calculation of the random index, a key component of Saaty's threshold: the optimal completion of each randomly generated incomplete pairwise comparison matrix should be found separately in order to obtain the minimal value of the Perron root of the completed matrix (Bozóki et al., 2010).

On the other hand, the study of inconsistency indices for incomplete pairwise comparisons has been started only recently. Szybowski et al. (2020) introduce two new inconsistency measures based on spanning trees. Kułakowski and Talaga (2020) adapt several existing indices to analyse incomplete data sets but do not provide any threshold. To conclude, without the present contribution, one cannot decide whether the inconsistency of the above incomplete pairwise comparison matrix A is excessive or not. Thus our work fills a substantial research gap. Even though Forman (1990) computes random indices for incomplete pairwise comparison matrices, his solution is based on the proposal of Harker (1987), which introduces an auxiliary matrix for any incomplete pairwise comparison matrix instead of filling it by optimising an objective function as we do. Our approach is probably closer to Saaty's concept since the auxiliary matrix of Harker (1987) is not a pairwise comparison matrix.

The paper is structured as follows. Section 2 presents the fundamentals of pairwise comparison matrices and inconsistency measures. Incomplete pairwise comparison matrices and the eigenvalue minimisation problem are introduced in Section 3. Section 4 discusses the details of computing the random index. The inconsistency thresholds are reported in Section 5. A numerical example is provided in Section 6, and a real life application in Section 7. Finally, Section 8 offers a summary and directions for future research.

Pairwise comparison matrices and inconsistency

The pairwise comparisons of the alternatives are collected into a matrix A = [a_ij] such that the entry a_ij is the numerical answer to the question "How many times is alternative i better than alternative j?" Let R_+ denote the set of positive numbers, R^n_+ the set of positive vectors of size n, and R^{n×n}_+ the set of positive square matrices of size n with all elements greater than zero, respectively.
Let PCM denote the set of pairwise comparison matrices and PCM^{n×n} the set of pairwise comparison matrices of size n, respectively. According to the famous Perron-Frobenius theorem, for any pairwise comparison matrix A ∈ PCM, there exists a unique positive weight vector w satisfying Aw = λ_max(A) w and Σ_{i=1}^{n} w_i = 1, where λ_max(A) is the maximal or Perron eigenvalue of matrix A. Saaty has considered an affine transformation of this eigenvalue.

Definition 2.3. Consistency index: Let A ∈ PCM^{n×n} be any pairwise comparison matrix of size n. Its consistency index is CI(A) = (λ_max(A) − n) / (n − 1).

Since CI(A) = 0 ⇐⇒ λ_max(A) = n, the consistency index is a reasonable measure of how far a pairwise comparison matrix is from a consistent one (Saaty, 1977, 1980). Aupetit and Genest (1993) provide a tight upper bound for the value of CI when the entries of the pairwise comparison matrix are expressed on a bounded scale.

Definition 2.4. Random index: Consider the set PCM^{n×n} of pairwise comparison matrices of size n. The corresponding random index RI_n is provided by the following algorithm (Alonso and Lamata, 2006):
• Generating a large number of pairwise comparison matrices such that each entry above the diagonal is drawn independently and uniformly from the Saaty scale (1).
• Calculating the consistency index for each random pairwise comparison matrix.
• Computing the mean of these values.

Several authors have published slightly different random indices depending on the simulation method and the number of generated matrices involved, see Alonso and Lamata (2006, Table 1). The random indices are reported in Table 1 for 4 ≤ n ≤ 10 as provided by Bozóki and Rapcsák (2008) and validated by Csató and Petróczy (2021). These estimates are close to the ones given in previous works (Alonso and Lamata, 2006; Ozdemir, 2005). Bozóki and Rapcsák (2008, Table 3) uncover how RI_n depends on the largest element of the ratio scale.

Definition 2.5. Consistency ratio: Let A ∈ PCM^{n×n} be any pairwise comparison matrix of size n. Its consistency ratio is CR(A) = CI(A) / RI_n.

Saaty has proposed a threshold for the acceptability of inconsistency, too.

Definition 2.6. Acceptable level of inconsistency: Let A ∈ PCM^{n×n} be any pairwise comparison matrix of size n. It is sufficiently close to a consistent matrix and therefore can be accepted if CR(A) ≤ 0.1.

Even though applying a crisp decision rule on the fuzzy concept of "large inconsistency" is strange (Brunelli, 2018) and there exist sophisticated statistical studies to test consistency (Lin et al., 2013, 2014), it is assumed throughout the paper that the 10% rule is a well-established standard worth generalising to incomplete pairwise comparison matrices.

The eigenvalue minimisation problem for incomplete pairwise comparison matrices

Certain entries of a pairwise comparison matrix are sometimes missing.

Definition 3.1. Incomplete pairwise comparison matrix: a matrix in which some entries above the diagonal are unknown (denoted by *), while the known entries satisfy reciprocity. Let PCM^{n×n}_* denote the set of incomplete pairwise comparison matrices of size n.

The graph representation of incomplete pairwise comparison matrices is a convenient tool to visualise the structure of known elements: the nodes correspond to the alternatives, and two nodes are connected by an edge if the comparison between them is known. To summarise, there are no edges for the missing elements (a_ij = *) or for the entries of the diagonal (a_ii). In the case of an incomplete pairwise comparison matrix A, Shiraishi et al.
(1998) and Shiraishi and Obata (2002) consider an eigenvalue optimisation problem by substituting the missing elements of matrix A above the diagonal with positive values arranged in the vector x ∈ R^m_+ (m being the number of missing entries above the diagonal), while the reciprocity condition is preserved:

min_{x ∈ R^m_+} λ_max(A(x)).   (2)

The motivation is clear: all missing entries should be chosen to get a matrix that is as close to a consistent one as possible in terms of the consistency index CI. According to Bozóki et al. (2010, Section 3), (2) can be transformed into a convex optimisation problem. The authors also give the necessary and sufficient condition for the uniqueness of the solution: the graph representing the incomplete pairwise comparison matrix A should be connected. This is an intuitive and almost obvious requirement since the relation of two alternatives cannot be established if they are not compared at least indirectly, through other alternatives.

The calculation of the random index for incomplete pairwise comparison matrices

Consider an incomplete pairwise comparison matrix A ∈ PCM^{n×n}_* and the complete pairwise comparison matrix given by its optimal filling. The value of the random index RI_n, calculated for complete pairwise comparison matrices, cannot be applied in the case of an incomplete pairwise comparison matrix because its consistency index is obtained through optimising (i.e. minimising) its level of inconsistency. Consequently, by adopting the numbers from Table 1, the proportion of incomplete pairwise comparison matrices judged to have an acceptable level of inconsistency will exceed the level intended by Saaty's concept, and this discrepancy increases as the number of missing elements grows. In the extreme case when the graph of known comparisons is a spanning tree of a complete graph with n nodes (thus it is a connected graph consisting of exactly n − 1 edges without cycles), the corresponding incomplete matrix can be filled out such that consistency is achieved. Therefore, the random index needs to be recomputed for incomplete pairwise comparison matrices, and its value will supposedly be a monotonically decreasing function of m, the number of missing elements.

Let us illustrate the three approaches listed in Remark 1. Among the three ideas in Remark 1, Method 1 always leads to the smallest dominant eigenvalue, followed by Method 2, whereas Method 3 provides the greatest optimum of problem (2), as can be seen from the restrictions in Remark 1. We implement Method 2 to calculate the random indices RI_{n,m}. The first reason is that the algorithm for the λ_max-optimal completion (Bozóki et al., 2010, Section 5) involves an exogenously given tolerance level, determining how accurate the coordinates of the eigenvector associated with the dominant eigenvalue are, as a stopping criterion. Consequently, it cannot be chosen appropriately if the matrix entries and the elements of the weight vector can differ substantially: the consistent completion of an incomplete pairwise comparison matrix with n alternatives may contain (1/9)^(n−1) or 9^(n−1) as an element if the corresponding graph is a chain. Furthermore, it remains questionable why elements below or above the Saaty scale (1) are allowed for the missing entries if they are prohibited in the case of known elements. On the other hand, Method 3 presents a discrete optimisation problem that is more difficult to handle than its continuous analogue of Method 2. To summarise, since the process is based on generating a large number of random incomplete pairwise comparison matrices to be filled out optimally, it is necessary to reduce the complexity of optimisation problem (2) by using Method 2.
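To make the λ_max-optimal completion concrete, the following sketch computes CI and performs a Method 2 completion (missing entries bounded to [1/9, 9]) with a general-purpose optimiser. It is not the authors' implementation, which relies on the tailored algorithm of Bozóki et al. (2010); the helper names, the use of NumPy/SciPy, and the reconstruction of the Introduction's example matrix (a_12 = a_23 = a_34 = 2, a_14 = 4, with a_13 and a_24 taken to be the two missing entries) are editorial assumptions.

# Sketch only: CI and a Method 2 lambda_max-minimising completion, using SciPy
# instead of the cyclic-coordinates algorithm of Bozóki et al. (2010).
import numpy as np
from scipy.optimize import minimize

def consistency_index(A):
    # CI(A) = (lambda_max(A) - n) / (n - 1); the Perron eigenvalue of a positive matrix is real.
    n = A.shape[0]
    lam = max(np.linalg.eigvals(A).real)
    return (lam - n) / (n - 1)

def optimal_completion(A, missing, lower=1/9, upper=9):
    # Fill the (i, j) positions in `missing` (with i < j) so that lambda_max is minimal,
    # parameterising each free entry as exp(t) and enforcing reciprocity a_ji = 1 / a_ij.
    A = np.array(A, dtype=float)

    def lam_max(t):
        for (i, j), t_ij in zip(missing, t):
            A[i, j] = np.exp(t_ij)
            A[j, i] = 1.0 / A[i, j]
        return max(np.linalg.eigvals(A).real)

    bounds = [(np.log(lower), np.log(upper))] * len(missing)
    res = minimize(lam_max, x0=np.zeros(len(missing)), bounds=bounds, method="L-BFGS-B")
    lam_max(res.x)          # write the optimal entries back into A
    return A

# The incomplete matrix of the Introduction; a13 and a24 are treated as the missing entries,
# so the placeholder 1.0 values at those positions are overwritten by the optimiser.
A = [[1,   2,   1,   4],
     [1/2, 1,   2,   1],
     [1,   1/2, 1,   2],
     [1/4, 1,   1/2, 1]]
missing = [(0, 2), (1, 3)]
A_hat = optimal_completion(A, missing)
print(consistency_index(A_hat))   # to be compared against 0.1 * RI_{4,2}, not 0.1 * RI_{4,0}

The final comparison is exactly the point of the paper: because the completion already minimises inconsistency, the resulting CI must be judged against a random index computed for matrices with the same number of missing entries.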
A complete pairwise comparison matrix of size n can be represented by a complete graph where the degree of each node is n − 1. Hence, the graph corresponding to an incomplete pairwise comparison matrix is certainly connected if m ≤ n − 2, implying that the solution of the λ_max-optimal completion is unique. However, the graph might be disconnected if m ≥ n − 1, in which case it makes no sense to calculate the consistency index of the incomplete pairwise comparison matrix. Furthermore, if m > n(n − 1)/2 − (n − 1), then there are fewer than n − 1 known elements, and the graph is always disconnected. If the number of missing entries is exactly m = n(n − 1)/2 − (n − 1) = (n − 1)(n − 2)/2, then the graph is connected if and only if it is a spanning tree. Even though these incomplete pairwise comparison matrices certainly have a consistent completion under Method 1, this does not necessarily hold under Method 2 when the missing entries cannot be arbitrarily large/small.

Generalised thresholds for the consistency ratio

As we have argued in Section 4, the value of the random index RI_{n,m} probably depends not only on the size n of the incomplete pairwise comparison matrix but on the number of its missing elements m, too. Thus the random index is computed according to the following procedure (cf. Definition 2.4):
1. Generating an incomplete pairwise comparison matrix A of size n with m missing entries above the diagonal such that each element above the diagonal is drawn independently and uniformly from the Saaty scale (1), while the place of the unknown elements above the diagonal is chosen randomly.
2. Checking whether the graph representing the incomplete pairwise comparison matrix A is connected or disconnected.
3. If the graph is connected, solving optimisation problem (2) by the algorithm for the λ_max-optimal completion (Bozóki et al., 2010, Section 5), with all entries in x ∈ R^m_+ restricted according to Method 2 in Remark 1, to obtain the minimum of λ_max(A(x)) and the corresponding complete pairwise comparison matrix Â.
4. Computing and saving the consistency index CI(Â) based on Definition 2.3.
5. Repeating Steps 1-4 to get 1 million random matrices with a connected graph representation, and calculating the mean of the consistency indices from Step 4.

Our central result is reported in Table 2, which is an extension of Table 1 to the case when some pairwise comparisons are unknown. The values in the first row, which coincide with the ones from Table 1, confirm the integrity of the proposed technique to compute the thresholds for the consistency index CI. The role of missing elements cannot be ignored at all commonly used significance levels as reinforced by the t-test: for any given n, the values of RI_{n,m} are statistically different from each other. Recall that the maximal number of missing elements is at most n(n − 1)/2 − (n − 1) = (n − 1)(n − 2)/2 if connectedness is not violated, and this value is 3 if n = 4 and 6 if n = 5. Some values are missing from Table 2—for example, the pair n = 7 and m = 4—due to excessive computation time (> 48 hours). However, RI_{n,m} can be easily predicted as follows. Figure 2 reveals that the random index is monotonically decreasing as a function of the number of missing values, according to common intuition. Furthermore, the dependence is nearly linear, thus a plausible estimation is provided by the following formula, which requires only the "omnipresent" Table 1:

RI_{n,m} ≈ (1 − m / [(n − 1)(n − 2)/2]) × RI_{n,0}.   (3)

Obviously, (3) returns RI_{n,0} if there are no missing elements (m = 0).
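The five-step procedure and the approximation (3) translate almost directly into code. The sketch below is illustrative rather than a reproduction of the authors' computations: it reuses consistency_index and optimal_completion from the previous sketch, leans on networkx for the connectedness check, draws far fewer matrices than the 1 million used per (n, m) pair for Table 2, and random_index_linear_estimate encodes this editor's reading of formula (3).

# Illustrative sketch of Steps 1-5 and of the linear estimate (3); the sample size,
# helper names, and the networkx dependency are assumptions, not the paper's code.
import itertools
import random
import numpy as np
import networkx as nx

SAATY_SCALE = [1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5, 6, 7, 8, 9]

def random_incomplete_matrix(n, m, rng):
    # Step 1: random upper-triangular entries from the Saaty scale, m of them left out.
    pairs = list(itertools.combinations(range(n), 2))
    missing = rng.sample(pairs, m)
    A = np.ones((n, n))
    for i, j in pairs:
        if (i, j) not in missing:
            A[i, j] = rng.choice(SAATY_SCALE)
            A[j, i] = 1.0 / A[i, j]
    return A, missing

def is_connected(n, missing):
    # Step 2: the graph of known comparisons must be connected.
    G = nx.complete_graph(n)
    G.remove_edges_from(missing)
    return nx.is_connected(G)

def random_index(n, m, samples=200, seed=0):
    # Steps 3-5: average CI of the lambda_max-optimally completed random matrices.
    rng = random.Random(seed)
    values = []
    while len(values) < samples:
        A, missing = random_incomplete_matrix(n, m, rng)
        if not is_connected(n, missing):
            continue
        values.append(consistency_index(optimal_completion(A, missing)))
    return float(np.mean(values))

def random_index_linear_estimate(n, m, ri_n0):
    # One reading of formula (3): linear in m, equal to RI_{n,0} at m = 0 and to zero at the spanning tree.
    return ri_n0 * (1 - m / ((n - 1) * (n - 2) / 2))

print(random_index_linear_estimate(4, 2, ri_n0=0.884))   # ~0.29, below the simulated RI_{4,2} ~ 0.356

As the text notes, the linear estimate tends to underestimate the simulated value, because a spanning-tree structure cannot always be completed consistently when the missing entries are confined to the interval [1/9, 9].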
On the other hand, m = (n − 1)(n − 2)/2 means that the graph representing the incomplete pairwise comparison matrix is either unconnected, or it is a spanning tree, thus the matrix can be filled consistently if there is no restriction on its elements. Formula (3) immediately follows by assuming a linear function for intermediate values of m. According to the "case studies" in Table 3, (3) gives at least a reasonable guess of RI_{n,m} without much effort, even though it somewhat underestimates the true value. The discrepancy is mainly caused by RI_{n,(n−1)(n−2)/2} being larger than zero (see Table 2), as incomplete pairwise comparison matrices represented by a spanning tree can be made consistent only if the missing elements can be arbitrary, but not if they are bounded to the interval [1/9, 9].

Definition 2.5 can be modified straightforwardly to derive the consistency ratio for any incomplete pairwise comparison matrix.

Definition 5.1. Consistency ratio: Let A ∈ PCM^{n×n}_* be any incomplete pairwise comparison matrix of size n with m missing entries above the diagonal and  be the complete pairwise comparison matrix given by the optimal filling of A. The consistency ratio of the incomplete matrix A is CR(A) = CI(Â) / RI_{n,m}.

The popular 10% threshold of Definition 2.6 can be adopted without any changes. In the applications of the AHP methodology, the optimal number of alternatives does not exceed nine (Saaty and Ozdemir, 2003). Random indices for complete pairwise comparison matrices have been determined for n ≤ 16 in Aguarón and Moreno-Jiménez (2003) and for n ≤ 15 in Alonso and Lamata (2006). The corresponding thresholds for incomplete pairwise comparison matrices can be calculated offline by a supercomputer and built into any software used by practitioners. If these are not available, formula (3) provides a good approximation for any number of alternatives n and missing elements m, see Table 3.

An illustrative example

In this section, we highlight the implications of the calculated thresholds for the random index by a numerical illustration. It has been chosen to be simple but expressive. With three alternatives and one missing entry, the matrix can be filled out consistently; therefore, the number of alternatives is four. Again, there exists a consistent filling if there are three missing elements, hence their number is two. Furthermore, they are in different rows, which is the more likely case.

Notes to Table 4: bold numbers indicate that the consistency ratio CR(A(α, β)) = CI(Â(α, β)) / RI_{4,2} is below the 10% threshold; italic numbers indicate that CI(Â(α, β)) / RI_{4,0} is below the 10% threshold but the consistency ratio CR(A(α, β)) = CI(Â(α, β)) / RI_{4,2} is above it.

Example 6.1. Take the following parametric incomplete pairwise comparison matrix of size n = 4 with m = 2 missing elements. Now RI_{4,0} ≈ 0.884 and RI_{4,2} ≈ 0.356 from Table 2. There are three instances where the optimal filling of matrix A(α, β) results in a consistent pairwise comparison matrix. Furthermore, CI(Â(1, 4)) ≈ 0.0404 < 0.1 × RI_{4,0}, thus the optimally filled out incomplete pairwise comparison matrix might be accepted according to the "standard" threshold for complete matrices, because the latter does not take into account the automatic reduction of inconsistency due to the optimisation procedure. Table 4 reports the consistency index of matrix A(α, β) for some parameters α and β. One of the parameters is restricted between 1/5 and 5 because a_12(α, β) × a_23(α, β) × a_34(α, β) = 3, while a_14(α, β) is equal to this parameter.
Bold numbers correspond to the cases when inconsistency can be tolerated based on the approximated thresholds of Table 2, while italic numbers show instances that can be accepted only if the optimal solution A(x) of (2) is considered as a (complete) pairwise comparison matrix and the threshold of 10% is used for CI(A(x)) / RI_{4,0}.

A real life application: continuous monitoring of inconsistency

Bozóki et al. (2013) carried out a controlled experiment, where university students were divided into subgroups to make pairwise comparisons from different types of problems, with different numbers of alternatives in different questioning orders. Consequently, not only the complete pairwise comparison matrices are known but also their incomplete submatrices obtained after a given number of comparisons was asked. We have picked one interesting matrix from this dataset.

Example 7.1. The following pairwise comparison matrix reflects the opinion of a decision-maker on how much more a summer house is liked compared to another summer house on a numerical scale. The order of the comparisons, following Ross (1934), optimises two objective functions: it maximises the distances for the same alternatives to reappear and aims to balance the number of the first and second positions in the comparison for every alternative. Figure 3 shows how inconsistency changes as more and more comparisons are given by the decision-maker. Following Bozóki et al. (2013, Figure 2), the solid red line uses the random index associated with a complete 6 × 6 pairwise comparison matrix, which is not a valid approach according to Section 4. On the other hand, the dashed blue line is obtained with the values of the random index RI_{6,m} according to our computations, see Table 2. The naïve approach indicates no problem with inconsistency: its level remains below the 10% threshold during the filling-in process. However, accounting for the number of missing elements reveals that inconsistency is substantially increased when the seventh comparison (a_24) is made. Even though the complete pairwise comparison matrix can be accepted with respect to inconsistency, continuous monitoring warns the decision-maker that this particular comparison is worth reconsidering (a minimal sketch of such a monitoring loop is given after the Conclusions).

Conclusions

The paper reports approximated thresholds for the most popular measure of inconsistency, proposed by Saaty, in the case of incomplete pairwise comparison matrices. They are determined by the value of the random index, that is, the average consistency index of a large number of random pairwise comparison matrices with missing elements. The calculation is far from trivial since a separate convex optimisation problem should be solved for each matrix to find the optimal filling of unknown entries. Numerical results uncover that the threshold depends not only on the size of the pairwise comparison matrix but on the number of missing entries, too. A plausible linear estimation of the random index has also been provided.

According to Table 2 and two examples, the extended values of the random index become indispensable in order to generalise Saaty's concept to incomplete comparisons. The associated thresholds can be directly programmed into decision-making software. With the suggested rule of acceptability, the decision-maker can decide for any incomplete pairwise comparison matrix whether there is a need to revise earlier assessments or not. It allows the level of inconsistency to be monitored even before all comparisons are given, which may immediately indicate possible mistakes and suspicious entries.
Therefore, the preference revision process can be launched as early as possible. Future studies will examine how this opportunity can be built into the known inconsistency reduction methods (Abel et al., 2018; Bozóki et al., 2015; Ergu et al., 2011; Xu and Xu, 2020).
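As a coda to Section 7, the following sketch shows what continuous monitoring might look like in code. It is an editorial illustration only: the answer sequence it expects is hypothetical, ri_table stands for the RI_{n,m} values of Table 2 (or the estimate of formula (3)), and it reuses is_connected, optimal_completion, and consistency_index from the earlier sketches.

# Editorial sketch of continuous inconsistency monitoring (cf. Section 7); ri_table
# must be supplied from Table 2 or formula (3), and the helpers come from the sketches above.
import itertools
import numpy as np

def monitor(n, answers, ri_table, threshold=0.1):
    # `answers` is an ordered list of ((i, j), value) comparisons with i < j.
    A = np.ones((n, n))
    known = set()
    for (i, j), value in answers:
        A[i, j], A[j, i] = value, 1.0 / value
        known.add((i, j))
        missing = [p for p in itertools.combinations(range(n), 2) if p not in known]
        m = len(missing)
        if m > 0 and not is_connected(n, missing):
            continue                              # CR is undefined while the graph is disconnected
        A_hat = optimal_completion(A, missing) if m > 0 else A
        cr = consistency_index(A_hat) / ri_table[(n, m)]
        if cr > threshold:
            print(f"After {len(known)} answers CR = {cr:.3f} > {threshold}: reconsider ({i}, {j})")

Because the denominator shrinks as m decreases toward zero, a jump like the one caused by the seventh comparison in Example 7.1 is flagged immediately, whereas dividing by the complete-matrix random index would mask it until all comparisons had been entered.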
2021-02-23T02:15:35.808Z
2021-02-21T00:00:00.000
{ "year": 2021, "sha1": "e4846a165efae6d60739490c9e524796b862fd9e", "oa_license": "CCBY", "oa_url": "http://unipub.lib.uni-corvinus.hu/7058/1/omega_2022.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e4846a165efae6d60739490c9e524796b862fd9e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
18974772
pes2o/s2orc
v3-fos-license
IL-17A Promotes the Migration and Invasiveness of Cervical Cancer Cells by Coordinately Activating MMPs Expression via the p38/NF-κB Signal Pathway Objective IL-17A plays an important role in many inflammatory diseases and cancers. We aimed to examine the effect of IL-17A on the invasion of cervical cancer cells and study its related mechanisms. Methods Wound healing and matrigel transwell assays were used to examine the effect of IL-17A on cervical cancer cell migration and invasion by a panel of cervical cancer cell lines. The levels of matrix metalloproteinases (MMPs) and tissue inhibitor of metalloproteinases (TIMPs) were investigated using western blotting. The activity of p38 and nuclear factor-kappa B (NF-κB) signal pathway was detected too. Results Here, we showed that IL-17A could promote the migration and invasion of cervical cancer cells. Further molecular analysis showed that IL-17A could up-regulate the expressions and activities of MMP2 and MMP9, and down-regulate the expressions of TIMP-1 and TIMP-2. Furthermore, IL-17A also activates p38 signal pathway and increased p50 and p65 nuclear expression. In addition, treatment of cervical cancer cells with the pharmacological p38/NF-κB signal pathway inhibitors, SB203580 and PDTC, potently restored the roles of invasion and upregulation of MMPs induced by IL-17A. Conclusion IL-17A could promote the migration and invasion of cervical cancer cell via up-regulating MMP2 and MMP9 expression, and down-regulating TIMP-1 and TIMP-2 expression via p38/NF-κB signal pathway. IL-17A may be a potential target to improve the prognosis for patients with cervical cancer. Introduction Cervical cancer is the second most common cause of cancerrelated mortality in women worldwide [1] and is one of the only known cancers caused by a virus that can be sexually transmitted. Recent researches find that immune cells and their secreted cytokines can not only contribute to the elimination of cancer cells, but also provide a proper microenvironment for tumor development as well as promote tumor progression [2], during which the local tumor microenvironment and the function state of immune cells play important roles [3]. Interleukin 17A (IL-17A) is a pro-inflammatory cytokine, and has been found contributed to many chronic diseases. Recently, IL-17A has been also frequently found in many cancers such as ovarian cancer [4], breast cancer [5], gastric cancer [6], and hepatocellular carcinoma [3]. The role of IL-17A in the development and progression of these cancers remains controversial. Using animal model, some studies find that IL-17A inhibited tumor growth and metastasis through IFN-c producing NK and T cells [6,7]. Other studies show that IL-17A promoted tumor growth and metastasis [8,9]. The effect may be correlated with the induction of tumor promoting microenvironment at tumor site [10]. Tumor metastasis is the leading cause of mortality associated with cancer [11]. Cancer cells need to degrade the ECM and invade into the lymphatic and vascular systems for dissemination to distant sites [12]. In this process, proteases such as matrix metalloproteinase(MMPs), play important roles [12]. Production and activation of MMPs is dependent on various cytokines, including TNF-a and IL-1 secreted by tumor cells [13,14], fibroblasts [15,16] and macrophages [16]. 
Previous studies found that IL-17A could regulate MMPs, IL-1 and TNF in periodontitis [17], and found that IL-17 receptor deficiency results in impaired expression of IL-1 and MMP3/MMP9/MMP13 in rheumatoid arthritis [18], indicating that IL-17A also plays an important role in the regulation of MMPs. MAPK signal pathway and NF-kB play important roles in the regulation of the production and activity of MMPs [10,19]. And many effects of IL-17A are correlated with MAPK signal pathway and NF-kB [10,20,21]. In our study, we found that IL-17A could increase cell motility and invasion by the up-regulation of MMP2 and MMP9 via activating p38-NF-kB signal pathway. Ethics statement This research was approved by the Ethics Committee of the Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University. All cervical cancer patient participants with tissue examination provided their written informed consent to participate in this study. Cervical cancer samples Cervical cancer specimens for mRNA were obtained from 50 cervical cancer patients in the department of Obstetrics and Gynecology, Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University. All patients had consented to tissue collection at the time of surgery. And all of the specimens were diagnosed as the cervical squamous cancer by the department of pathology. Among them, 11 were from cervical biopsy and were not included in the statistical analysis of invasion depth and lymphatic metastasis. None of the patients had received chemotherapy, immunotherapy or radiotherapy prior to specimen collection. Clinical stage and histological classifications were based on the International Federation of Gynecology and Obstetrics (FIGO) classification system. Tissue samples were divided into two portions: one part was frozen in 280uC for RNA isolation and the left was used for pathological diagnosis. Cell migration and invasion assay Cell migration and invasion abilities were tested by wound healing and invasion assays. Cell migration was assessed by a wound healing assay. Cells were cultured in 6-well plate until confluent rate reach 70-80% and then treated with or without IL-17A (50 ng/mL). The cell layer was wounded using a sterile tip and the spread of wound closure was observed and photographed. Invasion assay was performed with 24-well BioCoat Matrigel Invasion Chambers (Becton Dicknson, Bedford, MA) according to the manufacturer's instructions. After cultured in medium with or without IL-17A (50 ng/mL), cells were seeded onto inner well and number of cells that invaded through the Matrigel was counted. RNA isolation and real-time PCR analysis Total RNA was extracted from cultured cells with TRIzol reagent (Invitrogen), and mRNA expression levels were measured by qRT-PCR using an iQ5 multicolor real-time PCR Detection System (Bio-Rad) with SYBR Premix EX. Reverse transcription was performed with the PrimeScript RT reagent Kit (Perfect Real Time; TaKaRa) according to the manufacturer's instructions. For mRNA analysis, GAPDH mRNA levels were used as internal normalization control. Fold changes were calculated and normalized using the CT method. Primers used were as follows: GAPDH (GCACCGTCAAGGCTGAGAAC and TGGTGAAGACGC Zymography Cells were treated with IL-17A at 37uC for 24 h, and samples of conditioned media were collected. Appropriate volumes of the unboiled samples were separated by 0.1% gelatin-8% SDS-PAGE electrophoresis. 
After electrophoresis, the gels were washed twice in 2.5% Triton X-100 at room temperature for 30 min and then incubated in reaction buffer (10 mM CaCl2, 40 mM Tris-HCl and 0.01% NaN3, pH 8.0) at 37uC for 12 h. Coomassie brilliant blue R-250 gel stain was then used to stain the gel. The intensities of bands on the gels were calculated using an image analysis system (Bio-Rad Laboratories, Richmond, CA). Quantification of MMP-2 and MMP-9 proteins Cells were seeded at a density of 1610 5 cells/ml into 6-well plates a day before the experiment. The cells were cultured in fresh DMEM medium supplemented with 1% FBS and parental cells were cultured in fresh DMEM medium supplemented with 1% FBS with or without IL-17A (50 ng/mL). After 48 h of incubation, cell supernatants were collected, and then MMP2 and MMP9 concentrations were quantified using the ELISA kits (Shanghai Westang Bio-Tech Co., Ltd., Shanghai, China). Statistical analysis All data are shown as the mean 6 standard deviation and analyzed using SPSS 13.0 software (SPSS Inc., IL). Statistical significance was analyzed using the student's t-test. P,0.05 was considered statistically significant. Results Expression of IL-17A is positively associated with metastasis of cervical cancer IL-17A mRNA expression was measured in 50 cervical cancer tissues by real-time PCR. Association study was further applied to investigate the clinical significance of IL-17A expression in 50 cervical cancer specimens. The result showed that IL-17A expression did not correlate to patients' age, FIGO stage, and tumor size, while IL-17A expression was significantly correlated to patients' invasion depth and lymphatic metastasis status (P,0.01, students t-test, Table 1). These results indicated that IL-17A might play an important role in cervical cancer metastasis. IL-17A increased motility of cervical cancer cells Wound healing and matrigel invasion assays were conducted to further test the role of IL-17A on cervical cell motility. The results of wound healing show that migrations of HeLa, C33A and Caski cells were enhanced by IL-17A ( Fig. 1A and B). Furthermore, transwell assay showed that treatment with IL-17A promotes cell invasion through the matrigel (Fig. 1C and D). IL-17A up-regulated MMP2 and MMP9 expression and down-regulated TIMP-1 and TIMP-2 expression in cervical cancer cells As overexpression of MMPs play an important role in cancer metastasis [22], the role of IL-17A on MMPs expression in cervical cancer cell lines (C33A and Caski) was investigated. Expressions of MMP1, MMP2, MMP3, MMP9, MMP10, and MMP13 were detected by real-time PCR analysis between IL-17A treated and untreated cells. As shown in Fig. 2A, IL-17A increased the expression of both MMP2 and MMP9, indicating that the motility promoting role of IL-17A might be involved with extracellular matrix (ECM) remodeling. MMP2 and MMP9 play important roles for cancer metastasis [12], and IL-17A can affect the expression of MMP2 and MMP9 [10]. TIMPs, the endogenous natural inhibitors of MMPs, can regulate the activity and expression of MMPs [23]. After treating with IL-17A, the expression of MMP2 and MMP9 proteins in cervical cancer cell lines (C33A and Caski) was increased, meanwhile the expression of TIMP-1 and TIMP-2 proteins was decreased( Fig. 2B and C). IL-17A increased the secretion and activity of MMP2 and MMP9 MMPs have the role of ECM degradation, and are strongly implicated in invasion and metastasis of malignant tumor cells [22]. 
In light of this, the expression of MMP2 and MMP9 in cell supernatants was analyzed by ELISA. As shown in Fig. 2D Furthermore, the activity of MMP2 and MMP9 secreted by cervical cancer cells was examined by zymography assay. The results showed that IL-17A could significantly increase the degradation activity of MMP2 and MMP9 in C33A and Caski cell lines (Fig. 2F and G). IL-17A regulated MMPs expression and invasion of cervical cancer cells via activating p38/NF-kB signal pathway The p38 signal pathway plays important role in the invasion of cervical cancer cells. After treatment with IL-17A, the phosphorylation level of p38 was increased ( Fig. 3A-B, E-F). However, the expression of total p38 was not affected. As NF-kB signaling pathway was reported to be a downstream target of p38 signaling, and was able to upregulate MMP2 andMMP9 expression, NF-kB/p50 and p65/RelA expression were also detected. We observed that treatment with IL-17A significantly increased the nuclear expression of both NF-kB/p50 and p65/RelA (Fig. 3 C-D, G-H). To further define the point in the p38/NF-kB signal pathway at which IL-17A regulates the invasion of cervical cancer cells, we treated cervical cancer cells with SB203580(a p38 inhibitor) and PDTC(a NF-kB inhibitor), and analyzed the invasive ability. Both SB203580 and PDTC can reverse the invasion increased by IL-17A (Fig. 4A-B, E-F). In addition, western blot analysis revealed that the pre-treatment of SB203580 and PDTC abrogated the upregulation of MMP2 and MMP9 induced by IL-17A (Fig. 4C-D, G-H), further demonstrating that IL-17A regulated MMPs expression and invasion of cervical cancer cells via activating p38/NF-kB signal pathway. Discussion Substantial evidence indicates that certain cancer patients exhibit a generalized immunosuppressive status, but the inflammatory reaction at tumor site can foster tumor growth and progression [24,25]. Persistent infection with human papillomavirus (HPV) is a necessary cause of cervical cancer [26]. HPV infections are common, and cervical cancer can be regarded as a rare complication of this common infection [27]. IL-17A is an important inflammatory cytokine in the development of many inflammatory diseases and it is also frequently detected in tumor microenvironment [8,28,29]. But up to now, little is known about the effect of IL-17A on cervical cancer progression. Souza and his co-workers have studied the correlation of the concentration of IL-17 on serum from patients and different grades of squamous intraepithelial lesions and invasive cervical carcinoma [30], not to mention the pro-metastatic and invasive effect of IL-17A on cervical cancer as well as its underling mechanism. Here, we revealed that IL-17A significantly promoted the invasive and metastatic ability of cervical cancer cells by regulating MMP/ TIMP balance via activating the p38/NF-kB signal pathway. In the present study, we found that IL-17A could enhance the migration and invasion abilities of cervical cancer cells. Previous research found that IL-17A could promote the migration and invasion abilities of human breast cancer and hepatocellular carcinoma cells [8,10]. These results suggest that IL-17A is closely correlated with the invasion of cervical cancer cells. In order to During the process of metastasis, cancer cells need to degrade the ECM and invade into blood or lymph vessels, and reach other tissues and organs, then generate new tumor. MMPs and TIMPs play important roles in degrading ECM [31]. 
MMP2 and MMP9 have been frequently detected to be over-expressed in solid tumors and associated with tumor invasion and metastasis [22,32]. So we investigated the effect of IL-17A on the expression of MMP2 and MMP9. Results show that IL-17A can up-regulate the expression of MMP2 and MMP9. TIMPs act through the formation of a tight and noncovalent complex with their cognate enzymes and are able to affect the biological activities of MMPs [33,34]. In present study, we found IL-17A could down-regualte the expression of TIMP-1 and TIMP-2. These results indicate that the promoting effect of IL-17A is correlated with the MMPs and their inhibitors. The p38 signal pathway also plays important roles in the regulation of expression and activity of MMPs and TIMPs [35,36]. Activity of p38 signal pathway can up-regulate the expression of MMP2 and MMP9 [10]. Previous study found that IL-17A can activate p38 signal pathway [37,38]. To further clarify possible mechanism(s) of IL-17A in the promotion of cervical cancer cell invasion, we investigate the effect of IL-17A on the phosphorylation of p38. The results showed that IL-17A could up-regulate the phosphorylation level of p38. Results also showed that treatment of inhibitor of p38 reduced cell invasion significantly, accompanied by increased MMP2 and MMP9 protein expression, and decreased TIMP-1 and TIMP-2 protein expression, suggesting that the up-regulation role of IL-17A in MMP2 and MMP9 expression might be through the activation of p38 signal pathway. NF-kB has been found to be a key transcription factor in the regulation of MMP2 and MMP9 expression [10,39] and IL-17A has been reported to be able to activate NF-kB signal pathway [40,41], we next studied whether IL-17A could activate NF-kB signal pathway. The results showed that IL-17A could activate NF-kB, suggesting that the up-regulation role of IL-17A in MMP2 and MMP9 expression might be through the activation of NF-kB signal pathway. In conclusion, the effect of IL-17A on cervical cancer cell invasion and metastasis may lead to the identification of new diagnostic markers and therapeutic targets.
2016-05-12T22:15:10.714Z
2014-09-24T00:00:00.000
{ "year": 2014, "sha1": "e242aa72fb925fc2dff78b8c4b11aa551aad8675", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0108502", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e242aa72fb925fc2dff78b8c4b11aa551aad8675", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
21299687
pes2o/s2orc
v3-fos-license
Calmodulin interacts with angiotensin‐converting enzyme‐2 (ACE2) and inhibits shedding of its ectodomain Angiotensin‐converting enzyme‐2 (ACE2) is a regulatory protein of the renin–angiotensin system (RAS) and a receptor for the causative agent of severe‐acute respiratory syndrome (SARS), the SARS‐coronavirus. We have previously shown that ACE2 can be shed from the cell surface in response to phorbol esters by a process involving TNF‐α converting enzyme (TACE; ADAM17). In this study, we demonstrate that inhibitors of calmodulin also stimulate shedding of the ACE2 ectodomain, a process at least partially mediated by a metalloproteinase. We also show that calmodulin associates with ACE2 and that this interaction is decreased by calmodulin inhibitors. Introduction Angiotensin-converting enzyme-2 (ACE2) is rapidly emerging from the shadow of its better-known homologue angiotensin-converting enzyme (ACE) as an important co-regulator of the renin-angiotensin system (RAS). Whilst the primary physiological role of ACE in the RAS is to hydrolyse angiotensin I (Ang I) to the potent vasoconstrictor angiotensin II (Ang II) [1], ACE2 is able to cleave Ang II to produce Ang (1-7), a peptide which has opposing effects [2,3]. The physiological significance of ACE2 in the RAS has been demonstrated in a variety of tissues including the heart, liver, kidney and lung [4][5][6][7]. In addition, ACE2 is the cellular receptor for the SARS coronavirus, the causative agent of severe-acute respiratory syndrome (SARS) [8]. ACE2, like ACE, is a type I transmembrane metallopeptidase with an extracellular ectodomain containing its zinc-coordinating catalytic site [9,10]. Here, it is positioned to hydrolyse circulating substrates and serve as a viral receptor. Regulation of its expression at the cell surface is therefore of prime importance to its physiological and pathophysiological functions. We have previously shown that the ACE2 ectodomain can be cleaved from the cell membrane and be released into the extracellular milieu [11,12]. This ÔsheddingÕ event is stimulated by phorbol esters and involves a member of the ADAM (a disintegrin and metalloproteinase) family, ADAM17 (also known as TNF-a-converting enzyme, TACE) [12]. The function of diverse cell surface proteins is regulated by such shedding events, including enzymes (ACE, beta-site amyloid precursor protein cleaving enzyme), cytokines and growth factors (TNF-a, heparin binding epithelial growth factor) and neurodegenerative proteins (amyloid precursor protein, cellular prion protein) [13][14][15][16][17][18]. Whilst a great deal is known about the proteases which mediate these shedding events, the factors regulating this process still remain unclear. Ectodomain shedding is a complex event responding to a variety of stimuli (phorbol esters, calcium ionophores, growth factors) and involving a variety of interacting cellular proteins (protein kinase C, Eve), depending on the substrate [19][20][21]. In this study, we have identified the involvement of calmodulin in the regulation of ACE2 ectodomain shedding. Calmodulin is an ubiquitous calcium binding protein which is known to bind other transmembrane proteins, including L L-selectin and ACE, and regulate their cell surface expression [22,23]. Here, we show that calmodulin interacts with ACE2, both in cells expressing ACE2 heterologously and endogenously, and inhibitors of calmodulin increase the release of the ACE2 ectodomain in a dose-and time-dependent manner. 
This stimulation of shedding is only partially abrogated by metalloproteinase inhibitors, suggesting the involvement of disparate sheddases. Furthermore, treatment with calmodulin inhibitors decreases the association between the two proteins, suggesting the interaction of ACE2 with calmodulin serves to retain catalytically active enzyme in the plasma membrane. Materials All standard laboratory reagents were purchased from Sigma (UK) unless indicated otherwise. Anti-ACE2 polyclonal antibody was purchased from R&D Systems (UK), anti-calmodulin antibody and donkey anti-goat horseradish-peroxidase conjugated secondary antibodies were purchased from Sigma (UK). GM6001 was purchased from Chemicon (UK). The ACE2-specific fluorescent substrate Mca-APK(Dnp) was synthesized by Dr. G. Knight (Cambridge University, UK). Treatment of cells and protein extraction Cells were grown to confluence in 80 cm 3 flasks and rinsed twice with OptiMem prior to experimentation. All pharmacological treatments were diluted in OptiMem (5 ml) and all incubations carried out at 37°C. Medium was harvested and concentrated by centrifugation in 10 kDa cut-off Centricon tubes (VivaScience, UK) to a final volume of 200 ll. Cells were scraped into ice-cold phosphate-buffered saline, pelleted by centrifugation and solubilised in 500 ll radio-immunoprecipitation assay (RIPA) buffer (0.1 M Tris-HCl, pH 7.4, 0.15 M NaCl, 1% (v/v) Triton X-100, 0.1% (v/v) Nonidet P-40). Protein concentration was determined using bicinchoninic acid with bovine serum albumin as a standard [21]. Immunoprecipitation of cell lysates Cell lysates (500 lg, prepared as described) were incubated with rotation for 3 h at 4°C with 50 ll protein-A Sepharose beads (Sigma). Following centrifugation, the supernatant was incubated overnight at 4°C with rotation in the presence of 10 ll anti-calmodulin monoclonal antibody (Sigma). Subsequently, 50 ll protein-A sepharose beads (1 g resuspended in 3 ml phosphate buffered saline) were added and incubation continued for a further 2 h. The conjugated beads were pelleted by centrifugation and rinsed three times in ice-cold RIPA buffer. The beads were then heated in SDS-PAGE sample buffer containing denaturing reagent (Invitrogen) for 10 min at 95°C. SDS-PAGE and immunoblotting Immunoprecipitated proteins were separated by SDS-PAGE and proteins electrotransferred to nitrocellulose membranes (Invitrogen). Non-specific protein binding sites were blocked using 5% (w/v) dried milk, 3% (w/v) bovine serum albumin in Tris-buffered saline containing 0.5% (v/v) Tween-20 (TBS-T), and the membranes subsequently incu-bated in anti-ACE2 antibody (1:1000 for HEK-ACE2, 1:100 for Huh7) in the same solution for 3 h. Donkey anti-goat horseradish peroxidaseconjugated secondary antibody was used at a dilution of 1:5000 for 1 h in TBS-T. Immunoreactive bands were visualised using enhanced chemiluminescence (ECL; Pierce, UK) according to the manufacturerÕs instructions. Statistical analyses Statistical significance of data were tested using Mann-Whitney U-test. The cytoplasmic domain of ACE2 contains a conserved predicted calmodulin binding motif Analysis of the cytoplasmic domain of ACE2 using the Calmodulin Target Database (http://calcium.uhnres.utoronto.ca; [24]) revealed the presence of a region strongly indicative of a potential calmodulin binding domain (Fig. 1A). This 10 amino acid region, encompassing residues 763-772, is evolutionarily conserved in both rat and mouse (Fig. 1A), suggesting this domain may be functionally significant. 
Calmodulin associates with ACE2

In order to ascertain whether the predicted calmodulin binding domain of ACE2 was indeed functional, we next performed immunoprecipitation using an anti-calmodulin monoclonal antibody in cellular lysates collected from HEK293 cells stably transfected with full-length ACE2 [12]. Subsequent immunoblotting of immunoprecipitates for ACE2 revealed a band of the expected size (120 kDa) in HEK-ACE2 cells but not in untransfected HEK cells (Fig. 1B) or in HEK-ACE2 cell lysates subjected to immunoprecipitation with mouse IgG. Similar results were obtained for Huh7 cells, which endogenously express ACE2 ([12]; data not shown). Incubation of HEK-ACE2 cells with W-7, a specific calmodulin antagonist, reduced the ACE2:calmodulin association (Fig. 1C).

Calmodulin inhibitors stimulate ACE2 ectodomain shedding

Given that the interaction of membrane proteins with calmodulin has been shown to influence their retention in the plasma membrane, we next analysed the effect of inhibiting the association of ACE2 with calmodulin on the shedding of its ectodomain. Incubation of HEK-ACE2 or Huh7 cells with the calmodulin antagonist (CaMi) calmidazolium resulted in increased ACE2 activity in the media (Fig. 2A and B, respectively). This CaMi-mediated stimulation of ACE2 shedding was time- and dose-dependent (Fig. 2C and D, respectively). Similar results were obtained with other calmodulin inhibitors (trifluoperazine, W-7; data not shown).

CaMi-stimulated shedding is reduced by the metalloproteinase inhibitor GM6001

We have previously demonstrated that ADAM17 (also known as TACE, tumour necrosis factor-α converting enzyme) is involved in mediating stimulated ACE2 shedding in response to phorbol ester [12]. We therefore next analysed whether a metalloproteinase is also involved in CaMi-stimulated ACE2 shedding. Pre-incubation of HEK-ACE2 cells with the hydroxamate-derived metalloproteinase inhibitor GM6001 reduced the increased ACE2 shedding in response to CaMi (Fig. 3A). GM6001 treatment had little effect on basal shedding of ACE2 from HEK-ACE2 cells, as shown previously [12]. Inhibitors of other classes of proteinases (serine, aspartic and cysteine) had no observable effect on shedding (Fig. 3B). GM6001 did not have any effect on the activity of ACE2 itself (Fig. 3C).

Discussion

ACE2 is increasingly recognised to have a pivotal role in regulating the local levels of the hypertensive and mitogenic peptide angiotensin II [25][26][27]. In addition, it is also the cellular receptor of the causative agent of SARS, SARS-CoV [8]. ACE2 is a type I transmembrane protein which is subject to a juxtamembrane cleavage event releasing a catalytically active ectodomain [12]. While the precise physiological role of this 'shedding' event is unknown, it is clear, given the well documented pathophysiological roles of Ang II, that the mechanisms regulating the cell surface expression of ACE2 are of critical importance. We have previously demonstrated that the shedding of ACE2 can be stimulated by phorbol ester, a process mediated at least in part by the promiscuous metalloproteinase 'sheddase' ADAM 17 [12]. Here, we show that the shedding of ACE2 can also be upregulated by inhibitors of the ubiquitous calcium binding protein, calmodulin, both in cells endogenously expressing ACE2 (Huh7) and in those heterologously over-expressing it (HEK-ACE2). The CaMi-mediated increase in ACE2 shedding was reduced by the hydroxamate-derived metalloproteinase inhibitor GM6001, suggesting the involvement of an ADAM (a disintegrin and metalloproteinase). Inhibitors of other classes of proteinases (serine, aspartic, and cysteine) had no effect on cleavage. Further studies are required to determine whether the CaMi-stimulated shedding of ACE2 is also mediated by ADAM17, or whether phorbol esters and calmodulin inhibitors invoke distinct sheddases. Computational analysis of the cytoplasmic domain of ACE2 revealed a conserved consensus calmodulin binding motif. Immunoprecipitation experiments revealed that calmodulin associates with ACE2, suggesting that this motif may be functional. While further studies are required to elucidate the precise binding site of calmodulin on ACE2, these results clearly indicate that the association of calmodulin with ACE2 has a regulatory role in its cleavage-secretion from the plasma membrane. It is noteworthy that the cytoplasmic tail of ACE2 is quite distinct from that of ACE, the shedding of which is also modulated by association with calmodulin [23]; however, the ACE2 tail bears significant sequence identity to collectrin, a developmentally regulated protein recently shown to have a critical role in amino acid retrieval in the kidney [28,29] (Fig. 4). Analysis of the sequence of the cytoplasmic domain of collectrin also identifies a putative calmodulin-binding motif (Fig. 4). Recent studies have revealed that collectrin is also subject to ectodomain shedding [30], suggesting a possible role for calmodulin in regulating the cell surface expression and release of this protein. In summary, we have demonstrated that ACE2 interacts with calmodulin and that this association down-regulates shedding of the ACE2 ectodomain. This is the first study to identify a regulatory binding protein for ACE2 in vitro, revealing a hitherto unknown mechanism for regulating its cell surface and circulating activity.

Fig. 3 (legend fragment). Media from these and from untreated control cells was collected, concentrated and assayed for ACE2 activity as described in Section 2. As an additional control, media were collected from untreated cells and incubated in the presence or absence of 50 µM GM6001 before being assayed for ACE2 activity as described. The results are presented relative to untreated control flasks and represent data collected from four (A), two (B) or three (C) independent experiments. *P < 0.1.

Fig. 4. The ACE2 homologue collectrin contains a putative calmodulin binding motif. Alignment of human somatic ACE, ACE2 and collectrin functional domains. Stars indicate catalytic sites, black rectangles the transmembrane domains and white rectangles denote observed (*) or predicted calmodulin binding domains. The ACE2 and collectrin peptide sequence alignment shows conserved residues (highlighted) and the predicted calmodulin binding domain (outlined). There is no homology between ACE and ACE2 or collectrin in this region. The transmembrane hydrophobic regions of all three proteins are in italics.
Targeted mapping and utilization of the perihepatic surface for therapeutic beta cell replacement and retrieval in diabetic non-human primates Introduction Successful diabetes reversal using pancreatic islet transplantation by various groups illustrates the significant achievements made in cell-based diabetes therapy. While clinically, intraportal islet delivery is almost exclusively used, it is not without obstacles, including instant blood-mediated inflammatory reaction (IBMIR), relative hypoxia, and loss of function over time, therefore hindering long-term success. Here we demonstrate the perihepatic surface of non-human primates (NHPs) as a potential islet delivery site maximizing favorable characteristics, including proximity to a dense vascular network for adequate oxygenation while avoiding IBMIR exposure, maintenance of portal insulin delivery, and relative ease of accessibility through minimally invasive surgery or percutaneous means. In addition, we demonstrate a targeted mapping technique of the perihepatic surface, allowing for the testing of multiple experimental conditions, including a semi-synthetic hydrogel as a possible three-dimensional framework to improve islet viability. Methods Perihepatic allo-islet cell transplants were performed in immunosuppressed cynomolgus macaques using a targeted mapping technique to test multiple conditions for biocompatibility. Transplant conditions included islets or carriers (including hydrogel, autologous plasma, and media) alone or in various combinations. Necropsy was performed at day 30, and histopathology was performed to assess biocompatibility, immune response, and islet viability. Subsequently, single-injection perihepatic allo-islet transplant was performed in immunosuppressed diabetic cynomolgus macaques. Metabolic assessments were measured frequently (i.e., blood glucose, insulin, C-peptide) until final graft retrieval for histopathology. Results Targeted mapping biocompatibility studies demonstrated mild inflammatory changes with islet-plasma constructs; however, significant inflammatory cell infiltration and fibrosis were seen surrounding sites with the hydrogel carrier affecting islet viability. In diabetic NHPs, perihepatic islet transplant using an autologous plasma carrier demonstrated prolonged function up to 6 months with improvements in blood glucose, exogenous insulin requirements, and HbA1c. Histopathology of these islets was associated with mild peri-islet mononuclear cell infiltration without evidence of rejection. Discussion The perihepatic surface serves as a viable site for islet cell transplantation demonstrating sustained islet function through 6 months. The targeted mapping approach allows for the testing of multiple conditions simultaneously to evaluate immune response to biomaterials at this site. Compared to traditional intraportal injection, the perihepatic site is a minimally invasive approach that allows the possibility for graft recovery and avoids IBMIR. 
Introduction

In the United States, over 11% of the population has been diagnosed with diabetes (1), with the incidence and prevalence of the disease continuing to grow on a national and global level (2). While only representing approximately 10% of diabetes diagnoses, type 1 diabetes mellitus (T1D) is typically diagnosed earlier than type 2 diabetes mellitus (T2D), often at approximately 4-5 years of age or in the teenage years (3,4); this leads to a longer duration living with the disease and a greater risk for the long-term complications associated with diabetes (5,6). The standard of care for T1D includes frequent blood glucose monitoring along with exogenous insulin administration, a nonphysiologic treatment often associated with a greater burden of disease and reduced quality of life for patients (7)(8)(9)(10).
For some patients with T1D, pancreatic islet transplantation is an option to replace lost β-cells and recapitulate endogenous insulin secretion (11,12).Allogeneic islet cell transplantation has been shown to result in near-normoglycemia (13), decreased hypoglycemic episodes in brittle diabetics (14-16), and improvement or slowing of the micro-and macrovascular complications of T1D (17)(18)(19)(20)(21)(22).Despite this success, the widespread adoption of this cell-based therapy has been hindered by an insufficient supply of donor organs, deterioration of graft function over time, and side effects from life-long immunosuppression (10,(23)(24)(25)(26).These obstacles are in part due to the injection of islets into the portal vein, the site of choice since the beginning of clinical islet transplantation (11,27). Though used as the clinical transplantation site, the portal vein is far from ideal.Islet cells are particularly susceptible to hypoxemia and the portal vein oxygen tension is well below that of the pancreas (28)(29)(30).Furthermore, intravascular injection subjects the transplanted islets to instant blood-mediated inflammatory reaction (IBMIR), resulting in inflammation, further hypoxemia, exposure to inflammatory cells and blood components, and the complement cascade (11,12,30,31).It has been estimated that between 50% and 70% of islet grafts are immediately destroyed as a result of IBMIR (32), which explains, in part, the need for significant amounts of islets for each recipient.When introduced intraportally, the islets embolize the liver, further amplifying the hypoxia while also contributing to liver steatosis, another factor implicated in graft loss (33,34).Portal vein implantation also exposes islets to higher immunosuppressive drug concentrations than in the periphery (35,36), reaching levels known to cause destruction of islets (37) or inhibit angiogenesis and healing (38), which is of great consequence during islet engraftment.While all these features impact long-term outcomes, thus far, no other site has demonstrated consistent successful engraftment and metabolic benefit in large animals or the clinic.As a result, specific attention has been turned to possible alternative sites to the portal vein for pancreatic islet transplantation. An optimal site would not only offer the efficient engraftment of islets but also capture the physiologic secretion to maximize metabolic benefit without increasing the number of islets needed to reverse diabetes.The site should have a rich vascular supply to boost the oxygen tension for the islets, create an ideal microenvironment to prevent early loss to promote engraftment, protect from rejection and IBMIR, and recapitulate portal venous drainage to avoid systemic hyperinsulinemia (39,40).Ideally, the site would be relatively easy to access to minimize significant surgery and allow for biopsy or access for functional assessment (30,41,42).To this end, others have investigated various means to address some of these issues including different cell encapsulation techniques using semi-permeable barriers to immunoisolate cells or other anatomic sites, including the subcapsular kidney, gonadal fat pad, peritoneum, gastrointestinal wall, spleen, pancreas, and intramuscular and subcutaneous space (10,30,39,43,44).Most of these have been studied in experimental rodent models and have never been tested in large animal studies related to a lack of clinical relevance or scalability. 
Given this, our experimental goal was to utilize a site distinct from the traditional intraportal site and functionalize it to support transplanted islet cells.The perihepatic (PH) liver surface was chosen for multiple reasons: (1) the liver is a highly perfused and well-vascularized organ receiving approximately 25% of cardiac output (45) with the perihepatic surface creating a prevascularized bed; (2) islets will have close proximity to a dense vascular network but will be protected against IBMIR; (3) presumed engraftment in the PH will allow for physiologic portal drainage of insulin via the sinusoids and reinnervation for potential normal pulsatile secretion; (4) the PH surface is easily accessible for transplantation, biopsy, or graft retrieval via minimally invasive surgery or percutaneous ultrasound guidance with little detriment to liver functionality (46,47); and (5) the large surface area of the human liver (approximately 1,000 cm 2 (48)) would provide plenty of area for a superficial graft. We also demonstrate a unique method of targeted mapping of the PH surface to quickly and efficiently test various transplantation conditions to determine the most suitable environment for the islet grafts.In these experiments, we used small volumes of islets and attempted to further functionalize the PH surface with carrier constructs, including a three-dimensional capillary alginate hydrogel.Carriers can serve as mechanical support and provide spatial distribution to the islets.Hydrogel scaffolds can support islets that are particularly vulnerable after the isolation and purification process, having damaged or lost extracellular matrix (ECM) or basement membranes (49).These hydrogels are similar to natural tissue ECM and can facilitate rapid revascularization (50).Moreover, hydrogel-islet constructs that are injectable, allow for extremely precise injection and localization of grafts.By utilizing the PH surface, a prevascularized area with relative ease of access and potential for physiologic insulin secretion, we aim to improve islet cell engraftment to establish long-term graft function while providing a means for monitoring and biopsy. Animal subjects All animal procedures were approved by the University of Minnesota Institutional Animal Care and Use Committee, conducted in compliance with the Animal Welfare Act, adhered to principles stated in the NIH Guide for Care and Use of Laboratory Animals (51), and were performed and reported in compliance with the ARRIVE guidelines.All animals were purpose-bred and purchased from institutionally approved commercial vendors.Animals used in this study were assigned to study group/experimental conditions based on appropriateness for study; due to the study's purpose and exploratory nature, no animals were assigned to a conventionally defined control group.Due to clinical care requirements, experimenters could not be blinded to an animal's experimental condition for certain aspects of the experiment, including metabolic characterization.Blinding occurred during data analysis when feasible. 
Non-human primates A total of five Mauritian-origin cynomolgus macaques (Macaca fascicularis) (four female, one male) were enrolled for testing.All enrolled animals were healthy and confirmed to be tuberculosis (TB) negative and viral negative (macaque herpes B virus, simian retrovirus D, simian immunodeficiency virus, and simian T-cell leukemia virus-1).The mean age of the animals was 6.1 ± 2.0 years and their mean weight was 5.1 ± 1.9 kg.For this exploratory study, each individual animal was used to model a combination of conditions of interest, enrolling one of the commonly used species of macaques used in transplantation modeling.These studies were not designed to achieve statistical significance or detect rare adverse events.Animals are presented individually for clarity and, where appropriate, grouping by similar experimental condition has been performed to evaluate trends and define expected variability for future modeling. To realize the need for frequent blood draws while avoiding confounding effects from restraint, sedation, and pain, all animals were implanted with single-incision, peripherally inserted vascular access ports (VAPs) as previously described (52).All animals were trained to cooperate with examination, blood collection, and general husbandry activities as part of the behavioral management program (53,54). Animals were fed a standardized diet of either 2055C Certified Teklad Global 25% Protein Primate Diet or 7195 Teklad High Fiber Primate Diet (Envigo, Madison, WI, USA).A standardized enrichment program was used for the duration of the study, including fresh fruits and vegetables, grains, beans, and nuts, as well as a children's multivitamin. Animal behavior and clinical status were evaluated at least twice daily.Scheduled physical examinations per protocol and semi-annual comprehensive veterinary examinations were performed on all animals.Animals were continuously housed in same-sex pairs, except in rare cases of demonstrated social incompatibility, in which singly housed animals remained in close proximity with social conspecifics maintaining visual, auditory, and olfactory contact at all times until re-pairing.An environmental enrichment program including social play, toys, music, and regularly scheduled access to a large exercise and swimming area was provided to encourage sensory engagement, enhance foraging behavior and novelty seeking, promote mental stimulation, increase exploration, play, and activity levels, and strengthen social behaviors, increasing the proportion of time animals spent on species-typical behaviors.All animals enrolled in this study were offered equal access and time for exercise and identical enrichment activities. Diabetes induction Diabetes was induced in three animals using pharmaceutical grade STZ (streptozotocin, Zanosar; Sicor Pharmaceutics, Irvine, CA, USA) using methods previously described by this laboratory (55, 56).After verifying appropriate hydration, a single dose of 100 mg/kg STZ was infused IV.Diabetes was confirmed by persistent hyperglycemia (>300 mg/dl on at least two consecutive readings), the need for exogenous insulin to maintain target blood glucose levels, and the absence of a C-peptide response to metabolic challenge.Non-human primates (NHPs) with diabetes were treated using glargine and lispro in combination on a sliding scale to target preprandial blood glucose levels between 50 and 200 mg/dl. 
Hydrogels Capillary alginate gel (Capgel TM ) is a self-assembled hydrogel comprising alginate and optionally other biopolymers, such as gelatin (57-64), with unique microstructures of packed parallel capillary channels running the length of the material (Figure 1A).Capgel TM was synthesized as has been extensively described in previous publications (57 -64).Specifically, the formulation of all parent gel solutions was 2% w/v alginate (Pronova ® , NovaMatrix ® ; Sandvika, Norway) and 2.6% w/v gelatin (Sigma-Aldrich, St Louis, MO, USA), and parent gels were grown with 0.5 M copper (II) sulfate pentahydrate (CuSO4 5H2O; Acros Organics, Fisher Scientific, Thermo Fisher Scientific Inc., Waltham, MA, USA).Once self-assembly was completed, the parent gels were rinsed extensively, sectioned, crosslinked in the cold using carbodiimide chemistry (Sigma-Aldrich, St Louis, MO, USA), processed, sterilized via autoclave, and the final Capgel TM product stored at 4°C until used. Islet isolation and quality control Adult cynomolgus macaque islets were isolated and cultured as previously described and evaluated for conventional quality control (purity, sterility, and viability assessed by oxygen consumption rate normalized for DNA) (65). Anesthesia and analgesia For surgical procedures and euthanasia, anesthesia was induced with 10-12 mg/kg ketamine IM with or without 0.1 mg/kg midazolam IM and 0.5%-3% isoflurane inhaled for maintenance anesthesia.Post-operative analgesia was administered for at least 72 h with 0.01-0.03mg/kg buprenorphine IM BID and 1.0 mg/kg ketoprofen IM daily for pain management. Islet transplantation and biopsy 2.7.1 Islet-hydrogel mapping surgery After the induction of anesthesia, NHPs were intubated and positioned supine.The intended incision sites were clipped of hair and the sites were widely prepped with chlorhexidine gluconate/ isopropyl alcohol solution and draped with sterile towels.The incision sites were infiltrated with 1% lidocaine (1:5 dilution).A 6 cm midline incision was made caudal to the xiphoid process.A gentle blunt dissection was used to expose the linea alba, which was then incised, and the peritoneum entered.The liver was immediately visualized, and a padded Babcock clamp was placed on the edge of the left lateral liver lobe.The lobe was gently externalized and then held by hand to expose the capsule for injection.Various islet constructs with or without carriers were injected into the left lateral lobe of the liver (injection volume per site: 100-250 µl) using a 25 g needle just under the capsule.A notable wheal was formed under the capsule for each injection.Each wheal was made equidistant from one another, and gentle pressure was held after each injection to ensure no leakage of islet product.For a given NHP, the islet product was equally divided across each injection site.There was minimal to no bleeding visualized at each injection site.The Babcock clamp was then removed, and the liver gently replaced into the abdomen.The incision was closed in five layers using 5-0 absorbable monofilament suture and sealed with topical skin adhesive. 
Islet transplantation in NHPs with diabetes After the induction of anesthesia, NHPs were intubated and positioned supine.The intended incision sites were clipped of hair and the sites were widely prepped with chlorhexidine gluconate/ isopropyl alcohol solution and draped with sterile towels.The incision sites were infiltrated with 1% lidocaine (1:5 dilution).A 2-3 cm midline incision was made caudal to the xiphoid process.The liver was immediately visualized, and a padded Babcock clamp was placed on the edge of the left lateral liver lobe.The lobe was gently externalized and then held by hand to expose the capsule for injection.Using a 22 g angiocatheter, saline was used to hydrodissect the liver capsule from the parenchyma and was then subsequently drawn back into the syringe.Islets in autologous plasma were injected using the same angiocatheter just under the capsule where it had been hydrodissected (injected volume 250-900 µl).A notable wheal was formed under the capsule.Gentle pressure was held after the injection to ensure no leakage of islet product and skin adhesive was used, if needed, to seal the puncture site.There was minimal to no bleeding visualized at each injection site.The Babcock clamp was then removed, and the liver gently replaced into the abdomen.The incision was closed in five layers using 5-0 absorbable monofilament suture and sealed with topical skin adhesive. Euthanasia Anesthesia was induced as described in Section 2.6 and the animals were euthanized using a barbiturate overdose consisting of 87 mg/kg pentobarbital +11 mg/kg phenytoin (Beuthanasia) IV. Laboratory testing For complete blood counts, venous blood samples were collected into EDTA-treated microtainers and analyzed using the Advia 2120 hematology analyzer (Siemens Healthineers USA, Malvern, PA, USA).For chemistry panels, venous blood samples were collected into serum separator tubes and centrifuged to obtain serum.Chemistry panels were analyzed using an AU480 chemistry analyzer (Beckman Coulter, Brea, CA, USA). Graft assessment 2.10.1 Laboratory testing Point-of-care glucose measurements were made using a standard glucometer (Nova Biomedical, Waltham, MA, USA).HgbA1c was measured from whole blood using a point-of-care DCA Vantage Analyzer (Siemens Healthineers USA, Malvern, PA, USA).For Cpeptide assays, venous blood was collected into serum separator tubes treated with bovine lung aprotinin (Millipore-Sigma, Darmstadt, Germany) at a ratio of a minimum of 500 kU to 1 ml of sample.C-peptide was measured via radioimmunoassay (Millipore-Sigma, Darmstadt, Germany) using the Genesys Genii instrument (Laboratory Technologies, Elburn, IL, USA). The glucose disappearance rate (K glucose ) was calculated as the slope of the decline of the log-transformed blood glucose between 10 and 30 min. 
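The glucose disappearance rate described above is a simple regression on log-transformed glucose values over the 10-30 min window. The following is a minimal sketch of that calculation; the sample times, glucose values, use of the natural logarithm and the %/min reporting convention are illustrative assumptions, not data or conventions taken from the study.

```python
import numpy as np

def k_glucose(times_min, glucose_mg_dl, t_start=10, t_end=30):
    """Glucose disappearance rate: slope of log-transformed blood glucose
    between t_start and t_end minutes (illustrative sketch only)."""
    t = np.asarray(times_min, dtype=float)
    g = np.asarray(glucose_mg_dl, dtype=float)
    mask = (t >= t_start) & (t <= t_end)
    # Linear fit of ln(glucose) versus time; the (negative) slope gives K_glucose
    slope, _ = np.polyfit(t[mask], np.log(g[mask]), 1)
    return -slope * 100  # reported here as %/min, an assumed convention

# Hypothetical IVGTT-style samples (minutes, mg/dl)
times = [0, 5, 10, 15, 20, 25, 30]
glucose = [90, 350, 310, 270, 240, 215, 190]
print(f"K_glucose = {k_glucose(times, glucose):.2f} %/min")
```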
Histological processing Islet cells were fixed in 10% formalin, paraffin-embedded, and processed for routine histology.Immunohistochemistry was performed on retrieved islet graft sites taken from the PH surface.Sections of tissue with a thickness of 4 µm were cut and slides were loaded onto the Biocare Intellipath IHC staining instrument (Biocare Medical, Pacheco, CA, USA).Slides were deparaffinized through xylene and rehydrated through graded alcohol to water.If needed, heat retrieval was performed.Endogenous peroxide was quenched with 3% hydrogen peroxide followed by a protein serum block.Antibodies were applied followed by detection, each for 30 min at room temperature.Slides were developed with DAB and counterstained with Mayer's hematoxylin, insulin, CD3, CD20, IBA-1, CD31, and β3 tubulin IHC staining. After staining, biopsies were imaged using a Nikon Eclipse-800M bright-field/fluorescence/dark-field microscope equipped with a Nikon DXM1200 high-resolution digital camera and NIS Elements-D 5.02.00Imaging software. Histological assessment All islet graft sites from the PH surface were reviewed by a boardcertified veterinary pathologist and scored to assess the degree of insulin immunoreactivity, infiltration of the graft constructs by immune and inflammatory cells, vascularization, and innervation. Data analysis The statistical analysis and graphical representation of data were performed using Prism version 10.0.2 (GraphPad Software, San Diego, CA, USA).A reverse Kaplan-Meier time-to-event was used to present differences in time-to-islet engraftment between diabetic NHP recipients.All histopathological scoring was performed by a board-certified veterinary pathologist with graft assessment including viability, islet fragmentation, insulin production, and inflammatory infiltration of cell product. Targeted mapping technique of left lateral liver lobe The targeted mapping technique was applied in two nondiabetic NHPs with the intent to functionalize the PH surface to improve islet survival while testing multiple carriers and conditions in a single animal, thereby reducing the overall number of NHPs needed by maximizing conditions that can be studied exposed to the same immune response for direct comparisons.An anatomic map of the left lateral lobe (66) is created depicting the spatial orientation for each islet-carrier construct facilitating graft retrieval later (Figure 2A).Using a small upper-midline laparotomy, the left lateral liver lobe is extracorporeally delivered and the islet-carrier constructs are injected under the liver capsule, forming discrete wheals spaced approximately 5 mm apart in a planned grid pattern.Each wheal contains a different experimental condition including islets and carrier (isolation media, autologous plasma, capillary alginate hydrogel) either together or alone (Figure 2B) and these wheals are identifiable at the time of retrieval (Figure 2C).Islet purity was 95% for both recipients and the total islet dose was equally divided across conditions. Biocompatibility and safety of the PH surface for islet transplantation Biocompatibility and safety were assessed after 30 days in immunosuppressed non-diabetic NHPs (Figure 2D).The procedure was well tolerated, no adverse events associated with the transplantation were experienced, and the NHPS' weight remained stable throughout the 30 days (Figures 1B,C). 
Histopathology and immunohistochemistry were performed on the PH surface after the graft retrieval at 30 days.A histopathologic evaluation demonstrated a thin layer of fibrosis surrounding the graft site, with mild to moderate macrophages and a few lymphocytes present in the islet-only and islet-autologous plasma constructs.In the conditions utilizing capillary alginate hydrogel (Figure 1A), the hydrogel was evident as homogenous material within the graft site extending into the hepatic parenchyma surrounded by multinucleated inflammatory giant cells (Figure 2B).Around this, a zone of fibrosis was seen with a moderate amount of macrophages and lymphocytes as well as a few polymorphonuclear leukocytes and eosinophils (Figures 3C-F).CD31 immunohistochemistry identified endothelial lined vessels and demonstrated prominent microvascularity at the graft sites (Figure 3B).Overall, the PH surface demonstrated significant vascularization at the graft sites with some inflammatory cells present in the conditions without hydrogel whereas a more robust immune response was seen in the hydrogel constructs as evidenced by the number and diversity of inflammatory cells at the site.Interestingly, there were no identifiable islet cells on histology (Figure 3A) across conditions, despite adequate immunosuppression. Graft survival and function after PH surface islet transplantation in diabetic NHPs Following feasibility testing with targeted mapping in nondiabetic NHPs, three streptozotocin-induced diabetic NHPs were chosen to undergo PH surface islet transplantation to evaluate long-term graft survival and function.These were similarly immunosuppressed using an anti-inflammatory induction regimen with rapamycin maintenance and co-stimulatory blockade for both induction and maintenance therapy (Figure 2D).Islets with an autologous plasma carrier were used for the PH surface transplantation based on the histological evaluation during targeted mapping demonstrating a greater inflammatory response and fibrosis with the use of hydrogel.Furthermore, autologous plasma proved easy to handle and inject with more control over spatial distribution in comparison to naked islets in media.The dosage and purity of islets transplanted can be found in Table 1. One recipient (16JP14) received a low purity islet product (20%) and did not have meaningful function throughout the post-transplant period; therefore, the evaluation and testing were only carried out through day 55.Indeed, C-peptide was <0.5 ng/ ml through the entire post-transplant period (Figure 2H).No meaningful improvements in pre-or postprandial glucose, exogenous insulin requirements, glucose disposal, or HbA1c were demonstrated either (Figures 2H, 4C,F,I). 
Histopathological findings in diabetic recipients revealed intact, engrafted islets that were organized in loose clusters (Figure 5A).Interestingly, despite perihepatic, subcapsular injection, many islet clusters were located deeper around portal tracts and zones.In all diabetic recipients, histology revealed mild fibrosis with a few inflammatory cells present and little evidence of immune rejection around the islet grafts (Figures 5B-D).Moderately to strongly positive insulin staining was seen in recipients 16JP3 (Figure 2I) and 16JP11 (Figure 2J).Conversely, while recipient 16JP14 showed relatively intact islets, there were overall smaller numbers and weaker positive insulin staining compared to the other recipients (Figure 2K).Prominent microvascularity was detected in the graft sites for all recipients and there was evidence of innervation within the islets or in the surrounding tissues (Figures 5E,F). Discussion The purpose of this study was to evaluate the capability of the PH surface to support transplanted islet cells in the translationally relevant NHP model.While intraportal injection remains the gold standard for islet transplantation, issues related to significant immediate graft loss, relative hypoxia, IBMIR as well as portal vein thrombosis and hypertension have led investigators to seek other potential sites for islet transplant.While some of these extraportal sites have demonstrated some advantages over the traditional transplant site, at this time, none have been shown consistent superiority to portal vein delivery.These reasons led our studies to investigate the PH surface as a potential extraportal site.Our results indicate that the PH surface is able to support islet cell survival through 180 days with detectable improvements in metabolic parameters using conventional, commercially available immunosuppression. We chose the PH liver surface for several reasons but primarily because of the superior vascularization of the liver, an issue for many of the previously studied extraportal sites (28-30, 39, 67-69).The PH surface places the graft adjacent to the liver parenchyma, which has a dual blood supply, receiving arterial blood from the hepatic artery and deoxygenated blood from the portal vein.This Representative histology of graft site in a non-diabetic, targeted mapping NHP recipient.Sections taken from the site injected with islet-capillary alginate hydrogel construct at 4× (top) and 10× (bottom) magnification with various stains including (A) insulin staining, (B) CD31 IHC staining for endothelial lined blood vessels, (C) Iba-1 IHC staining for macrophages, with black arrows pointing to the areas of background staining by capillary alginate hydrogel, (D) CD3 IHC staining for T-cells, (E) CD20 IHC, and (F) CD79a staining for B-cells.Scale bar: 500 µm at 4× magnification; 100 µm at 10× magnification.utilizes an intrinsic vascular bed, avoiding the need to prevascularize the space before transplant, such as for subcutaneous sites (67, 68, 70).Studies evaluating liver parenchymal oxygen tension have shown pO2 in the range of 42-57 mmHg (44, 71, 72), which is similar to, if not slightly better than, portal vein oxygen partial pressure.This supports islet survival during revascularization while avoiding direct contact with blood, protecting the graft from IBMIR. 
As insulin is normally secreted in a pulsatile manner from the pancreas into the portal vein and then to the liver, the PH site allows for physiologic insulin secretion given the proximity to the portal drainage and reinnervation of the site could recapitulate the pulsatility of secretion. In addition to the oxygenation and vascularization advantages, the PH liver surface allows for easier access to the islet graft.In our study, we were able to access the left lateral liver lobe (Figure 2A) through a small upper midline incision of approximately 6 cm in comparison to a large laparotomy or bilateral subcostal incision as seen in total pancreatectomy with islet auto transplantation (TPIAT) (73).The PH site also lends itself to percutaneous ultrasound-guided access, allowing for a potential percutaneous PH surface islet injection as in the case of a clinical allogeneic islet portal transplant (19).This ease of access simplifies transplantation but also allows for graft biopsy or retrieval, which is not possible in an intraportal islet transplant.Furthermore, if a graft retrieval or biopsy requires a more extensive liver resection, this can be done without significant detriment to the liver, which requires only a 20% functional liver remnant in order to regenerate (47,74). In our study, we first attempted to functionalize and improve the conditions of the PH liver surface for islet cells.Native islet survival, in part, relies on the ECM to create a particular spatial distribution in the pancreas, allowing for autocrine and paracrine signaling with neighboring cells (75); isolation of the cells and a loss of ECM leads to a form of cell death (76,77).As the isolation process removes much of the ECM and structure, we hypothesized that the use of a capillary alginate hydrogel may function as a scaffold for the islets and improve their survival and functionality.Some studies have also demonstrated increased growth factor release, wound healing, and vascularization using autologous plasma positioned this as an important carrier to study (78)(79)(80).Capillary alginate hydrogels were chosen as a potential carrier as they have been investigated in a wide array of biomedical contexts, including the 3D culture of embryonic stem cells (57), as an injectable neural stem cell delivery and scaffolding system (60), as an injectable post-myocardial infarction therapeutic (63), as a treatment for full-thickness skin wounds (62), to engineering in vitro functional 3D nerve tissue models (57), as new bioinks for 3D printing (61), and, recently, to engineering human tissues for direct arthropod biting and blood feeding (62).As inter-donor and recipient variability may confound results, particularly in the small group sizes characteristic in pilot NHP studies, we utilized the targeted mapping technique allowing us to simultaneously test multiple islet-carrier constructs in the same recipient with islets from the same donor. 
We evaluated the hydrogel, plasma, and naked islet constructs via targeted mapping technique in two non-diabetic NHPs with planned graft retrieval at day 30.The naked islet and autologous plasma constructs showed minimal fibrosis and immune cell infiltration whereas the hydrogel constructs demonstrated greater and more diverse inflammatory cell infiltration at the site and around the hydrogel with more significant fibrosis.This finding is interesting given that multiple studies have demonstrated the utility of alginate hydrogel for encapsulation and subsequent implantation of islets in various models (44,64,81).In studies using alginate hydrogel in the kidney subcapsular space, these use alginate hydrogel as a means for microencapsulation of islets (81-84), whereas in this study, islets and hydrogel were mixed before injection under the liver capsule.In contrast to microencapsulated islets, the histology shows large, homogenous areas of aggregated hydrogel; these areas are surrounded by zones of fibrosis and inflammation (Figure 2C).Though the mechanism by which hydrogel stimulates this inflammatory response is unclear, it has a similar histologic appearance of a foreign body reaction and perhaps the large, accumulated areas of hydrogel in the PH space are being treated as such.Other studies have demonstrated how alginate hydrogel stimulates an inflammatory response leading to fibrosis and islet death (44,85).Though spatially separated by a few millimeters, we hypothesize that the capillary alginate hydrogel may have a systemic adjuvant effect on the immune system and in combination with relatively low islet doses per injection site (1,169-2,085 IE/kg), likely explained the lack of islets seen across conditions. Building on these results, we wanted to assess the impact of the PH liver surface on long-term survival and the metabolic effects in diabetic recipients using the best condition, autologous plasma as a carrier for the islets.In a similar procedure to the targeted mapping studies, three diabetic, immunosuppressed NHPs underwent PH transplant with allogenic islet-plasma singlesite injection with planned graft retrieval at day 180.While no recipients achieved insulin independence, two recipients (16JP11 and 16JP3) had positive C-peptide levels at day 180 with improved HbA1c at the time of graft retrieval compared to the day of transplant (Figures 2F,G, 4A,B).16JP3 demonstrated islet engraftment soon after the transplant while 16JP11 demonstrated engraftment at day 14 (Figure 2E).Both also had improved glucose disposal (Figures 4D,E) though 16JP11 eventually had loss of graft function despite graft survival through day 180.On histology, loose clusters of intact islets were seen with moderate to strongly positive insulin staining observed (Figures 2F,J, 5A).Overall, there was minimal fibrosis or inflammation detected with minimal evidence of rejection.IHC staining also showed microvascular formation and evidence for innervation of the sites (Figures 5E,F). 
Conversely, recipient 16JP14 did not demonstrate significant C-peptide levels and was unable to achieve a metabolic benefit after transplant (Figure 2H); graft retrieval occurred on day 55 as a result.Interestingly, 16JP14 had the highest islet dose of the three recipients (14,788 IE/kg); however, islet purity was only 20% compared to the 90% purity of the other two recipients.Similar to the other recipients with diabetes, IHC showed vascularization and innervation at the site with evidence of intact islets.However, in comparison, there were relatively few islets seen with minimal positive insulin staining (Figure 2K).We suspect that the significantly poor purity of the islets resulted in the overall lack of function and benefit after transplant. In all three recipients with diabetes, there was evidence of the graft extending deeper into the parenchyma despite injection under the liver capsule; the reason for this is not entirely clear.At the time of transplant, it could be that the initial puncture of the liver capsule was deeper into the parenchyma and could have created a tract for islets to migrate after injection.Regardless, it is likely of little clinical consequence in terms of graft survival and function or in terms of safety as there were no adverse events related to this.Indeed, in a mouse model, one group has demonstrated the efficacy of islet transplant within a hepatic sinus tract (HST) created in the liver parenchyma (86, 87). As the portal vein is used in both allogenic islet transplants (17) and autologous islet transplants, such as in TPIAT (88), the surgeon must continuously monitor and account for changes in portal venous pressure (PVP).A rise in PVP is a known consequence of islet transplantation, with the potential for portal hypertension, bleeding complications, and portal vein thrombosis (39,(89)(90)(91)(92). Impurities in islet preparations are a known risk factor for increased PVP (93), a particular concern in TPIAT where islet cell yield is typically low due to chronic pancreatitis and thus, purification is often not performed to maximize the islet dose (93,94).As a result, this rise in PVP often limits the amount of islet product able to be infused, with the remaining preparation typically dispersed freely into the intra-abdominal cavity (95,96), an inferior site from a functional and histological standpoint compared to others (30,97). 
In this regard, the PH surface may represent a viable site to maximize the functional result from an autologous islet transplant in particular, compared to free dispersal into the peritoneal cavity, though the site is not capable of "rescuing" highly impure products, as seen in recipient 16JP14.Previously, the kidney subcapsular space was considered to be a potential site for islet transplantation but was unsuccessful in NHPs even using doses that were approximately twofold higher than those routinely successful in intraportal transplant (69).Similarly, in humans, the renal subcapsular site was inferior to intraportal transplant and resulted in only marginal C-peptide secretion with no appreciable metabolic benefit (98,99).Both the surgical invasiveness necessary to expose the kidney and the prevalence of diabetic nephropathy in potential recipients continues to limit the feasibility of this site (12,30).In contrast, the PH surface advantages the dense vascular network of the dual blood-supplied liver while the close proximity to the portal system preserves physiologic insulin kinetics, as demonstrated in the response to glucose challenge, given the islet dose is optimized.Furthermore, as there is a direct injection into the portal system when the PH surface is being utilized, there is no increase or change in PVP.During TPIAT, the PH surface is readily accessible whereas the retroperitoneum must be entered to access the kidney.For these reasons, as well as the demonstrated long-term function, survival, and vascularization of the PHtransplanted islet grafts, the PH may be an easy and advantageous site for the transplantation of the remaining islet preparation when the PVP prohibits further portal vein infusion during TPIAT.By harnessing all available islet products for transplantation, this could not only improve the metabolic benefits gained from transplant, but also increase the possibility of insulin independence. Limitations Given the exploratory nature of this study with the use of NHPs, the number of subjects was relatively small.While we were able to test new techniques and various conditions, the adjuvant effect that may present with certain hydrogels limits the evaluation of immune response to the immediate local reaction in the targeted mapping technique.Inflammation and rejection were only evaluated through histology at the time of graft retrieval; therefore, the immune response throughout the study period or soon after transplant is unclear.In the future, serologic markers of inflammation and immunoactivity in combination with potential serial graft biopsies would help shed light on the dynamic immunologic landscape after PH transplants. 
Conclusions We demonstrate the ability of the perihepatic liver surface to support the long-term function and survival of transplanted allogenic islet cells in NHPs on a conventional, clinically relevant immunosuppression regimen.Initial targeted mapping studies allowed for the simultaneous testing of multiple conditions to rule out islet-carrier constructs for additional testing in the more stringent STZ-induced NHP model and demonstrated the safety of PH surface islet transplantations.In diabetic recipients receiving standard purity islet products, PH islet transplants demonstrated islet survival through the day 180 endpoint, with improvements seen in blood glucose, exogenous insulin needs, HbA1c, and glucose disposal.Unlike intraportal islet transplants, the PH surface is accessible, allowing for graft biopsy or retrieval.While further work is still necessary, the PH surface may be a clinically relevant site for transplanting remaining islets after the portal venous pressure limit is reached during traditional portal islet transplants. FIGURE 2 FIGURE 2 Overview of PH surface islet transplantation and targeted mapping technique.(A) Representative schematic of targeted mapping technique using the left lateral liver lobe.Numbers correspond to different map site constructs spatially distributed by grid.(B) Islet-carrier constructs for each map site for each recipient.Islet dose presented in islet equivalents per kilogram.CAG, capillary alignate hydrogel; Plasma, autologous plasma (C) Left.Liver at necropsy showing islet grafts on PH surface.Grafts are circled in black.Right.H&E staining of representative islet-capillary alginate hydrogel map site, black arrows point to areas of aggregated hydrogel.(D) Study design overview for NHP recipients including immunosuppression protocol and time of graft retrieval.(E) Reverse Kaplan-Meier estimate of time to islet engraftment in NHPs calculated from the date of transplantation to the date of engraftment as measured by C-peptide ≥0.5 ng/ml.(F) Daily measures of preprandial (solid line) and postprandial glucose (dashed line) in mg/dl and exogenous insulin requirements (gray) in U/kg with inset showing human C-peptide (ng/ml) measured randomly (green), under fasting (blue), or stimulated (yellow) conditions by days post-transplant in recipient 16JP3 with a dose of 6,400 IE/kg, (G) in recipient 16JP11 with a dose of 6,687 IE/kg, and (H) in recipient 16JP14 with a dose of 14,788 IE/kg, low purity.(I) Insulin immunohistochemistry staining for islets in recipients 16JP3, (J) 16JP11, and (K) 16JP14. FIGURE 5 FIGURE 5Representative histology of graft site in a diabetic NHP recipient.Sections taken from the site injected with islet-autologous plasma at 10× magnification with various stains including (A) H&E, (B) Iba-1 IHC staining for macrophages, (C) CD3 IHC staining for T-cells, (D) CD20 IHC staining for B-cells, (E) CD31 IHC staining for endothelial lined blood vessels, and (F) β3 tubulin IHC staining for neurons.The dashed line highlights the cluster of islets.Scale bar 100 µm. TABLE 1 Diabetic cynomolgus macaque demographics and transplant characterization.
Upper bound on the Guessing probability using Machine Learning

The estimation of the guessing probability is of paramount importance in quantum cryptographic processes. It can also be used as a witness for nonlocal correlations. In most of the studied scenarios, estimating the guessing probability amounts to solving a semi-definite programme, for which potent algorithms exist. However, the size of those programs grows exponentially with the system size, becoming infeasible even for small numbers of inputs and outputs. We have implemented deep learning approaches for some relevant Bell scenarios to confront this problem. Our results show the capabilities of machine learning for estimating the guessing probability and for understanding nonlocality.

I. INTRODUCTION

Whenever the statistics of a measurement on a composite quantum state contradict the assumptions of local realism, thus violating a Bell-type inequality, the correlations are referred to as nonlocal [1]. These nonlocal correlations are used to certify private randomness in device-independent quantum key distribution (DIQKD) [2][3][4][5][6][7][8][9][10][11][12][13] and device-independent randomness generation (DIRNG) [14][15][16][17][18][19][20][21][22]. For quantifying randomness, estimating the guessing probability is often an important task. The guessing probability is the probability with which an adversary can guess an outcome of another party's measurement. If the guessing probability is less than 1, the adversary cannot predict the outcome with certainty, which implies the presence of intrinsic randomness in the system. However, bounding the guessing probability is not an easy task. Typically it is not possible to compute the guessing probability explicitly; one can only provide an upper bound by solving a semi-definite optimization problem. Usually, one bounds the guessing probability from a given Bell inequality and the corresponding quantum violation [7,14]. Here, one needs to use the hierarchical structure of the quantum correlations [23,24] to solve the semi-definite optimization problem. The complexity of this optimization problem grows rapidly with the number of settings and outcomes and quickly becomes computationally demanding. In this paper, motivated by the outstanding recent progress in utilizing machine learning in the field of quantum information [25][26][27][28][29][30][31][32][33][34], we develop deep learning (DL) models that predict the guessing probability, along with the optimal Bell inequalities used to upper bound it, from an observed probability distribution using supervised machine learning. A crucial element of supervised machine learning is to generate sample input and output data to train the model. Here, we sample random quantum probability distributions and use them as the input of the training data. With this data, using the two-step method of Ref. [35], we estimate the upper bound of the guessing probability and the optimal Bell inequality, and use it as the output of the training data. After sufficient training, our DL approach can recognize the pattern and predict the guessing probability and the optimal Bell inequality with high accuracy and low average statistical error. We organize this work as follows. We start in Sec. II by explaining the generalized Bell set-up, the types of correlations and Bell inequalities. We introduce the guessing probability and show how to estimate it by solving a semi-definite programme in Sec. III.
We introduce our deep learning approach in Sec. IV. We discuss how to sample quantum probability distributions from the quantum correlation space, which are then used as input for supervised learning. We build several deep learning models for predicting the guessing probability and the Bell inequality for various Bell scenarios and measure their efficiency to show the model's utility.

II. GENERALIZED BELL SET-UP

In this section, we introduce a generalized Bell setup. In each measurement round, two parties, Alice and Bob, share a quantum state ρ_AB acting on H_A ⊗ H_B. In the presence of an eavesdropper Eve, her side information E is described via the purification of the joint system ρ_ABE acting on H_A ⊗ H_B ⊗ H_E. Each party selects locally an input (a measurement setting) which produces an output (a measurement outcome). We refer to this scenario as a Bell scenario. Alice performs measurements specified by her input x ∈ X = {1, · · · , m}, where each input has k possible outcomes a ∈ A = {1, · · · , k}. Similarly, Bob performs measurements specified by his input y ∈ Y = {1, · · · , m} and produces the outputs b ∈ B = {1, · · · , k}. We denote this scenario as the [m, k] Bell scenario, i.e. m measurement settings with k outcomes each; see Fig. 1 for visualization. After many repetitions, the conditional probability P(ab|xy) can be estimated. The Bell scenario is completely characterized by the set P := {P(ab|xy)} ⊂ ℝ^(m²k²) of all joint conditional probabilities, which we refer to as a behavior [36]. Thus, the following constraints are imposed: positivity, P(ab|xy) ≥ 0 ∀ a, b, x, y, and normalization, Σ_{a,b=1}^{k} P(ab|xy) = 1 for all x and y. We say the behavior is no-signaling if the input-output correlation obeys Σ_{b=1}^{k} P(ab|xy) = P(a|x) ∀ a, x, y and Σ_{a=1}^{k} P(ab|xy) = P(b|y) ∀ b, x, y. The set of all correlations satisfying the no-signaling constraints forms a convex polytope NS. A behavior is said to be local if it can be written as a convex mixture of deterministic strategies [37,38]. The set of all local correlations forms a convex polytope P. There exist inequalities of the form [36]

Σ_{a,b,x,y} C_{abxy} P(ab|xy) ≤ I_L,   (2)

which separate the set of all local correlations (in other words, the convex polytope P) from the nonlocal behaviors. These inequalities are called Bell inequalities. A Bell inequality is specified by the coefficients C_{abxy} ∈ ℝ. We denote a Bell inequality as B, and Σ_{a,b,x,y} C_{abxy} P(ab|xy) as the Bell value B[P] in this paper. Here, I_L is the classical bound, which is the maximal value over all local behaviors. Thus, a behavior with a classical origin, i.e. {P(ab|xy)} ∈ P, cannot violate this inequality. The Born rule of quantum theory postulates that a behavior is quantum if there exists a quantum state ρ_AB acting on a joint Hilbert space H_A ⊗ H_B of arbitrary dimension and measurement operators (POVM elements) {M_{a|x}} with M_{a|x} ≥ 0 and Σ_a M_{a|x} = 1 ∀ x, and {M_{b|y}} with analogous properties, such that

P(ab|xy) = Tr(ρ_AB M_{a|x} ⊗ M_{b|y}).   (3)

The set of all quantum correlations forms a convex set Q. If a behavior {P(ab|xy)} ∈ Q \ P, it violates at least one Bell inequality of the form in Eq. (2). The sets P, Q and NS obey the following relation: P ⊊ Q ⊊ NS; see Fig. 2 for a pictorial representation.

FIG. 2: A pictorial representation for the set of correlations. All classical probabilities form a convex polytope P, which is embedded in the set Q of quantum correlations, which in turn is a subset of the no-signaling polytope NS. v_1 and v_2 are vertices of the local polytope. B (blue dashed line) represents the Bell inequality which separates the classical polytope from the quantum and no-signaling set.
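The positivity, normalization and no-signaling conditions above are straightforward to verify numerically for a candidate behavior. The following minimal NumPy sketch assumes the behavior is stored as an array indexed P[a, b, x, y]; this indexing convention and the numerical tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def is_valid_behavior(P, tol=1e-9):
    """Check positivity, normalization and no-signaling for a behavior
    P[a, b, x, y] = P(ab|xy). The array layout is an assumed convention."""
    if np.any(P < -tol):
        return False
    # Normalization: sum over a, b of P(ab|xy) equals 1 for every (x, y)
    if not np.allclose(P.sum(axis=(0, 1)), 1.0, atol=tol):
        return False
    # No-signaling: Alice's marginal P(a|x) must not depend on Bob's setting y,
    # and Bob's marginal P(b|y) must not depend on Alice's setting x
    pa = P.sum(axis=1)   # shape (k, m, m), indexed [a, x, y]
    pb = P.sum(axis=0)   # shape (k, m, m), indexed [b, x, y]
    return (np.allclose(pa, pa[:, :, :1], atol=tol) and
            np.allclose(pb, pb[:, :1, :], atol=tol))

# Example: uniformly random outcomes trivially satisfy all three conditions
k, m = 2, 2
P_uniform = np.full((k, k, m, m), 1.0 / k**2)
print(is_valid_behavior(P_uniform))  # True
```

Membership in the local polytope P or in the quantum set Q requires more than these checks, e.g. a linear program over the deterministic vertices or an NPA relaxation, respectively.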
III. GUESSING PROBABILITY

In an adversarial black-box scenario framework, the adversary Eve tries to guess some outcomes obtained by Alice and Bob. The probability that Eve can correctly guess the outcome is called the guessing probability. Here, we denote the guessing probability as P_g(a|x, E), which is the probability that Eve guesses Alice's outcome a corresponding to her measurement setting x. In Ref. [7], it is shown that P_g(a|x, E) can be upper bounded by a function G_x of the observed Bell value B[P] of a particular Bell inequality B by semi-definite programming, i.e. P_g(a|x, E) ≤ G_x(B[P]). One crucial element in bounding the guessing probability P_g(a|x, E) is to choose a suitable Bell inequality. We follow the two-step procedure of [35], where the Bell inequality is constructed from the input-output probability distribution P such that it yields the maximum Bell violation for that particular measurement statistics. This is achieved by solving the linear program of Eq. (4). Here h is the hyperplane specifying the Bell inequality B, P denotes the measurement data, v_p denotes the vertices of the classical polytope P, and c is the classical bound. Thus the Bell inequality B found by the optimization of Eq. (4) and specified by the hyperplane vector h is given as Σ_{a,b,x,y} h_{abxy} P(ab|xy) ≤ c, where a ∈ A, b ∈ B, x ∈ X, y ∈ Y. We will use the Bell inequality B and the corresponding Bell value B[P] = Σ_{a,b,x,y} h_{abxy} P(ab|xy) to upper bound the guessing probability P_g(a|x, E) by solving the semidefinite program of Eq. (6) [7]. In the optimization problem of Eq. (6), the guessing probability is bounded using the NPA hierarchy [23,24] up to level 2. The optimization is performed using the standard tools YALMIP [39], CVX [40,41] and QETLAB [42]. Note that A(a|x) and B(b|y) are the measurement operators of Alice and Bob, respectively, and ρ_AB is the state shared between them. G is the Bell operator built from the coefficients h_{abxy} and these measurement operators. Let us denote by P*_g(a|x, E) the upper bound on the guessing probability, which is the solution of the optimization problem of Eq. (6).

IV. MACHINE LEARNING APPROACH

Providing an upper bound for the guessing probability by solving a semi-definite program is a computationally demanding task. It becomes arduous as the Bell scenario grows in complexity, i.e. for an increased number of measurement inputs and/or outputs. Thus, in this paper, we approach the problem via machine learning (ML) (see Ref. [43] for detailed discussions of the concepts of machine learning), so that the trained model can estimate the guessing probability P*_g(a|x, E) from the input-output probability distribution {P(ab|xy)}. We use the supervised learning technique. In a supervised ML approach, the first step is generating the training points. We use random bipartite quantum probability distributions as the supervised ML model's input (features), after generating them from facet Bell inequalities using the weighted vertex sampling method [44]. Since the guessing probability for local behaviors is always 1 (i.e. Eve can guess the right outcome with probability 1), we do not need to train the machine to perform well on those. Thus we only take samples from the nonlocal part of the no-signaling set, i.e. NS \ P.
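The training labels for these samples are obtained with the two-step method of Sec. III, described further below. Since the explicit linear program of Eq. (4) is not reproduced above, the following sketch only illustrates its general idea: search for coefficients h that maximize the Bell value of the observed behavior while respecting the classical bound on every local deterministic vertex. The vertex matrix V, the box constraint on h used for normalization, and the value of the classical bound are assumptions of this illustration, not necessarily the formulation of Ref. [35].

```python
import numpy as np
from scipy.optimize import linprog

def best_bell_inequality(p, V, classical_bound=1.0, h_box=1.0):
    """Sketch of the first step of the two-step method: find coefficients h
    maximizing the Bell value h·p of the observed behavior p, subject to
    h·v <= classical_bound for every local deterministic vertex v (the rows
    of V). Bounding each coefficient to [-h_box, h_box] keeps the LP finite;
    this normalization is an assumption of the sketch."""
    p = np.asarray(p, dtype=float)
    V = np.asarray(V, dtype=float)
    res = linprog(-p,                       # linprog minimizes, so negate the objective
                  A_ub=V,
                  b_ub=np.full(V.shape[0], classical_bound),
                  bounds=[(-h_box, h_box)] * p.size,
                  method="highs")
    h = res.x
    return h, float(h @ p)                  # inequality coefficients and Bell value B[p]
```

The inequality returned this way would then enter the semidefinite program of Eq. (6); that second step, which requires an NPA relaxation, is not shown here.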
To single out the input-output correlations with a quantum realization, we reduce the samples using the NPA hierarchy, which approximates the set of quantum-realizable probability distributions. Explicitly, we generate samples from the quantum set Q as follows. For the [m, k] Bell scenario (i.e. m measurements with k outcomes each), the classical polytope P is specified by k^{2m} local vertices. The classical polytope can also be described by its facets, which represent the hyperplanes (or Bell inequalities) that separate any nonclassical (quantum and no-signaling) behavior from the classical ones. These facets are called facet Bell inequalities or tight Bell inequalities [36]; see Fig. 2 for a pictorial representation. For the [2,2] scenario, eight facet Bell inequalities exist, all equivalent to the CHSH inequality [45]. For the [3,2] Bell scenario, there are 648 facet Bell inequalities. These facet Bell inequalities are found using the formulation of Ref. [46], which computes all the facets of a convex polytope from its vertices; this transformation from the vertex representation to the facet representation of a polytope is known as the facet enumeration or convex hull problem and relies on Gaussian and Fourier-Motzkin elimination. Note that all 648 facet Bell inequalities belong to two classes of independent facet Bell inequalities, namely the CHSH inequalities and the I3322 inequalities [47,48]. We consider all facet Bell inequalities for the [2,2] and [3,2] Bell scenarios while generating training points for the supervised machine learning problem. For the [4,2] Bell scenario, there are 174 independent facet Bell inequalities [49]. Since there are many (>10000) equivalent facets [50], we will only consider the independent ones. These facet Bell inequalities are spanned by some of the local vertices of the classical polytope, and these vertices provide the maximum classical bound of the corresponding facet Bell inequality. Consider the case that n local vertices span a facet Bell inequality, and denote the PR-box of the corresponding facet Bell inequality as P_PR(ab|xy); see Fig. 2 for visualization. The PR-box P_PR(ab|xy) can be defined as the probability distribution that provides the maximal no-signaling bound of the corresponding facet Bell inequality [51,52]. We take uniformly random weighted mixtures of the n + 1 vertices (the n vertices that span the facet Bell inequality and the corresponding PR-box) with an n-fold weight on the PR-box. Formally, a sample behavior from the set NS \ P is generated as such a convex mixture, where the mixing weights are uniformly drawn random numbers. This process is done for all facet inequalities. From this set of samples, we only select the ones with a Q_2 realization (the second level of the NPA hierarchy [23,24]). Here we work under the assumption that Q_2 provides a good approximation of the original quantum set Q. We store the probability distribution {P(ab|xy)} and use it as the input (features) of the supervised machine learning problem, i.e. X := {P(ab|xy)}. We calculate the guessing probability of each input P using the two-step method (see Sec. III for details) and use it as the output (target), i.e. y = P*_g(a|x, E). Without loss of generality, we always calculate the guessing probability of Alice's first measurement setting. We use a deep neural network to assess the dataset and make predictions. We feed the input-output pairs {X, y} (see Eq. (9) and Eq. (10)) into an artificial neural network (ANN) to learn the best possible fit.
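As an illustration of the weighted vertex sampling just described, the following sketch (Python/NumPy; function names and the fixed seed are illustrative, and the NPA-level-2 filtering is only indicated by a closing comment because it requires an SDP solver) draws behaviors for the [2,2] scenario by mixing the eight local deterministic vertices that lie on the CHSH facet with the corresponding PR-box, giving the PR-box an n-fold enhanced weight as in the recipe above.

```python
import numpy as np

rng = np.random.default_rng(0)

def chsh_value(P):
    val = 0.0
    for a in range(2):
        for b in range(2):
            for x in range(2):
                for y in range(2):
                    val += (-1) ** (a + b + x * y) * P[a, b, x, y]
    return val

def deterministic_vertex(f, g):
    # Local deterministic strategy: a = f[x], b = g[y].
    P = np.zeros((2, 2, 2, 2))
    for x in range(2):
        for y in range(2):
            P[f[x], g[y], x, y] = 1.0
    return P

def pr_box():
    # PR-box: a XOR b = x*y with uniform marginals; CHSH value 4.
    P = np.zeros((2, 2, 2, 2))
    for a in range(2):
        for b in range(2):
            for x in range(2):
                for y in range(2):
                    if (a ^ b) == x * y:
                        P[a, b, x, y] = 0.5
    return P

# The n local vertices lying on the CHSH facet (Bell value exactly 2).
facet_vertices = []
for f in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for g in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        V = deterministic_vertex(f, g)
        if np.isclose(chsh_value(V), 2.0):
            facet_vertices.append(V)
n = len(facet_vertices)          # 8 for the CHSH facet

def sample_behavior():
    # Uniform random weights on the n facet vertices plus an n-fold enhanced
    # weight on the PR-box, then normalize to a convex mixture.
    w = rng.uniform(size=n + 1)
    w[-1] *= n
    w /= w.sum()
    return sum(wi * Vi for wi, Vi in zip(w[:-1], facet_vertices)) + w[-1] * pr_box()

P = sample_behavior()
print("CHSH value of sample:", chsh_value(P))
# A further step (omitted) would keep only samples compatible with NPA level 2,
# e.g. via an SDP solver, to approximate membership in the quantum set Q.
```

Each such sample has a CHSH value between the classical bound 2 and the no-signaling bound 4; only the samples compatible with NPA level 2 would then be kept as (approximately) quantum training points.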
For an elaborate explanation of artificial neural networks, see Ref. [43]. Following the standard approach, we divide the dataset into two parts: the first part is for training and validation (80%), and the second is for testing (20%). We choose a 'linear' ANN with several layers as our model; see Fig. 3 for visualization. The input layer has m^2 k^2 neurons corresponding to the elements of {P(ab|xy)}. The output (last) layer has only one neuron, since we only have to predict one quantity: the guessing probability P*_g(a|x, E). We perform 100 rounds of training using the optimizer ADAM [53], of which the first 50 rounds have a fixed learning rate of 0.001. For the next 50 rounds, we reduce the learning rate by 90% in every tenth round. We choose the activation function ReLU (rectified linear unit) in the input and the hidden layers, while the sigmoid activation function is used in the output layer, which generates the predicted value of the guessing probability P^pred_g(a|x, E). To check the efficiency of our approach, we use the mean absolute error (MAE, Eq. (11)) and the mean squared error (MSE, Eq. (12)) as performance measures, where N_test is the number of data points in the test set. We analyze the results for different bipartite Bell scenarios and list the errors in Table I. The average error is of the order of 10^{-4} to 10^{-2}. Such high accuracy and small errors without knowing the Bell inequality are truly remarkable. We also compare the runtime performance of the neural network model with the frequently used SDP solver Mosek [54] (which can be used to upper bound the guessing probability by solving the optimization problem of Eq. (6)) in Table II. The Mosek task is generated and solved using Ncpol2sdpa [55]. The results are evaluated over 10000 unknown samples and obtained on a personal computer (Intel(R) Core(TM) i7-10510U processor, 2.30 GHz, 16.0 GB RAM) under comparable conditions. Once the neural network is trained, we get a speed-up of 10^3 - 10^5 for obtaining a prediction about a new instance, compared to the runtime of the usual method for solving the optimization problem; see Table II. This follows from the fact that the number of variables in the optimization process of Eq. (6) increases exponentially with the number of measurement settings (or outcomes per measurement) in the Bell scenario. Thus, it takes more computational time to perform the SDP using a classical solver like Mosek. A trained neural network only calculates the functional output using the optimized weights and biases. Only the size of the neural network affects the computational time needed to complete the prediction task. However, the upper bound on the guessing probability calculated from a trained machine learning model only provides an estimate of its real value. Thus, we cannot use this estimate to bound the secret key rate. The predicted Bell inequality, on the other hand, if it generates a non-zero Bell violation for the given measurement statistics, can be used to bound the guessing probability (see Sec. III for details) and hence the secret key rate. Therefore, in the next step, we use deep learning to predict the associated optimal Bell inequality B, which is then used to upper bound the guessing probability (see Sec. III for details). For this purpose, we again use a neural network architecture trained with supervised learning. We start by preparing the dataset, where our input features are X := {P(ab|xy)}, a, b = 1, ..., k; x, y = 1, ..., m (Eq. (13)).
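A minimal sketch of this first regression network, written with Keras (the library the authors use for their second architecture below); the hidden-layer widths, the placeholder training data and the exact reading of the learning-rate schedule are assumptions on our part, while the m^2 k^2-dimensional input, the single sigmoid output, the ReLU hidden activations, the Adam optimizer, the 80/20 split and the 100 training rounds follow the description above.

```python
import numpy as np
from tensorflow import keras

m, k = 2, 2                       # [m, k] Bell scenario
input_dim = m * m * k * k         # one neuron per element of {P(ab|xy)}

# Hidden-layer widths are hypothetical; the paper does not specify them.
model = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # predicted guessing probability
])

def lr_schedule(epoch, lr):
    # Rounds 0-49: fixed 0.001; afterwards, cut the rate by 90% every tenth round
    # (one plausible reading of the schedule described in the text).
    if epoch < 50:
        return 1e-3
    return lr * 0.1 if epoch % 10 == 0 else lr

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse", metrics=["mae"])

# Placeholder data; in practice X holds flattened behaviors, y the SDP results P*_g.
X_train = np.random.rand(1000, input_dim)
y_train = np.random.rand(1000, 1)
model.fit(X_train, y_train, epochs=100, validation_split=0.2,
          callbacks=[keras.callbacks.LearningRateScheduler(lr_schedule)], verbose=0)
```

In practice, X_train would hold the flattened sampled behaviors and y_train the corresponding SDP values P*_g(a|x, E) obtained with the two-step method.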
Here we use two types of neural network architectures. The first neural network is a usual 'linear' feed-forward neural network (see [43] for details; schematically represented in Fig. 3). For the [m, k] Bell scenario, the input layer has m^2 k^2 neurons (corresponding to the elements of {P(ab|xy)}). The input layer is followed by several hidden layers. Unlike in the previous scenario, the output layer has m^2 k^2 + 1 neurons in this case, where m^2 k^2 neurons correspond to the coefficients of the Bell inequality h_{abxy}, and one neuron corresponds to the guessing probability P*_g(a|x, E). In this paper, we denote this construction of the 'linear' deep neural network as NN_1. Following the standard approach, we divide the dataset {X, y} (see Eq. (13) and Eq. (14)) into two sets; the first part of the dataset is for training and validation (80%), and the second part is for testing (20%). Similar to the training of the previous network, we perform 100 rounds of training using the gradient solver ADAM (the first 50 rounds with a 0.001 learning rate, then reducing the learning rate by 90% in every tenth round). Similar to the previous scenario, we use the activation function ReLU in the input and the hidden layers. In the output layer, the linear activation function is used for the m^2 k^2 neurons that correspond to the optimal Bell inequality, and the sigmoid activation function is used for the neuron that corresponds to the guessing probability. As the cost function, we use the mean squared error (MSE), which is minimized during the training process. In addition, we use another neural network architecture with two parallel sub-models (by using branching) to interpret parts of the output that share the same input. In this construction, the input layer has m^2 k^2 neurons corresponding to the elements of the probability distribution {P(ab|xy)} of the [m, k] Bell scenario. The input layer is followed by hidden layers consisting of multiple neurons. Then we bifurcate one hidden layer to create two branches. Several hidden layers then follow in both branches; see Fig. 4 for visualization. The first branch of the network is for predicting the coefficients {h_{abxy}}, a, b = 1, ..., k; x, y = 1, ..., m, of the optimal Bell inequality and thus has m^2 k^2 neurons. The second branch of the network is for predicting the guessing probability; thus, its output layer has only one neuron, corresponding to P*_g(a|x, E). In this paper, we refer to this neural network as NN_2; it is built using the Keras functional API [56]. In NN_2, we use the ReLU activation function in the input and all the hidden layers. The linear activation function is used in the output layer of the first branch (which predicts the coefficients of the Bell inequality), while the sigmoid activation function is used in the second branch (which predicts the guessing probability). The other details of the training steps are the same as for the NN_1 neural network stated previously. Both NN_1 and NN_2 predict the Bell inequality B^pred (specified by the predicted coefficients {h^pred_{abxy}}) and the guessing probability P^pred_g(a|x, E). Since the neural networks NN_1 and NN_2 predict two separate entities (the optimal Bell inequality and the guessing probability), we evaluate their performance separately. We use the mean absolute error (see Eq. (11)) and the mean squared error (see Eq. (12)) as our performance measures for predicting the guessing probability.
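Since NN_2 is built with the Keras functional API, a compact sketch of the branched construction may be helpful. The trunk and branch widths below are illustrative (the text does not fix them); the m^2 k^2-dimensional input, the linear m^2 k^2-dimensional head for the Bell coefficients and the single sigmoid head for the guessing probability follow the description above.

```python
from tensorflow import keras

m, k = 2, 2
dim = m * m * k * k              # = m^2 k^2, size of a behavior {P(ab|xy)}

inputs = keras.Input(shape=(dim,))
# Shared trunk (widths are hypothetical).
h = keras.layers.Dense(128, activation="relu")(inputs)
h = keras.layers.Dense(128, activation="relu")(h)

# Branch 1: coefficients h_abxy of the optimal Bell inequality (linear output).
b1 = keras.layers.Dense(64, activation="relu")(h)
bell_out = keras.layers.Dense(dim, activation="linear", name="bell_coeffs")(b1)

# Branch 2: the guessing probability P_g (sigmoid output in [0, 1]).
b2 = keras.layers.Dense(64, activation="relu")(h)
pg_out = keras.layers.Dense(1, activation="sigmoid", name="guessing_prob")(b2)

nn2 = keras.Model(inputs=inputs, outputs=[bell_out, pg_out])
nn2.compile(optimizer=keras.optimizers.Adam(1e-3),
            loss={"bell_coeffs": "mse", "guessing_prob": "mse"})
nn2.summary()
```

Training such a model with an MSE loss on both heads mirrors the procedure described for NN_1.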
The errors for different bipartite Bell scenarios are listed in Table III, which reports the errors of the predicted guessing probability P^pred_g(a|x, E) with respect to the guessing probability P*_g(a|x, E) for NN_1 and NN_2; here the neural networks are trained to predict both the guessing probability P*_g(a|x, E) and the Bell inequality B from the probability distribution P(ab|xy). Note that, for estimating the guessing probability, NN_2 yields lower statistical errors than NN_1. The reason lies in the structure of the neural network architectures: since we create a branch in the neural network dedicated to estimating the guessing probability, NN_2 assigns more nodes to this task than NN_1. In the case of predicting the optimal Bell inequality B (characterized by its coefficients {h_{abxy}}, a, b = 1, ..., k; x, y = 1, ..., m), we again use MSE and MAE as performance measures, now computed between the predicted and the optimal Bell-inequality coefficients. Another way to evaluate the quality of the predicted Bell inequality is to use it for upper bounding the guessing probability (see Eq. (6)). First, we estimate the probability of obtaining P*_g(a|x, E) < 1, where P*_g(a|x, E) is calculated from the predicted Bell inequality B^pred and the input-output probability distributions {P(ab|xy)} of the test set. We present the results in Table V. We also look into the statistical errors between the original guessing probability P*_g(a|x, E) from the test set and the guessing probability calculated from the predicted Bell inequality B^pred; we use MAE and MSE as the performance measures, listed in Table VI. The high probability of generating P*_g(a|x, E) < 1 with the predicted Bell inequalities (see Table V) and the small statistical errors (see Table VI) demonstrate the quality and accuracy of the predicted Bell inequality. We again compare the computational runtime of predicting the optimal Bell inequality using the standard linear optimization of Eq. (4) with that of the neural networks; the runtimes are evaluated over 10000 unknown samples and obtained on the same personal computer under the same conditions. The linear programming of Eq. (4) is performed with the Mosek solver using the PICOS [57] Python interface. We notice a significant speed-up when using the trained neural network models compared to the Mosek solver. This again follows from the fact that the number of variables in the optimization process of Eq. (4) increases with the number of measurement settings (or outcomes per measurement) in the Bell scenario, while the computational time of the neural networks depends only on their size.
But it will not provide a certification. Thus, additionally, our DL model provides an estimation of the optimal Bell inequality for which the Bell violation using the measurement statistics certifies the nonlocality of input-output correlations and guarantees that the guessing probability will be less than one. Our trained deep learning models, which significantly speed up the prediction of the Bell inequality compared to a conventional linear program solver, predict a Bell inequality that can generate P g (a|x, E) < 1 with a very high probability. The mean average error between the guessing probability calculated from the predicted Bell inequality and the optimal Bell inequality (calculated using Eq. (4)) is in the order of 10 −5 − 10 −2 (mean squared error is in the order of 10 −8 − 10 −3 ) which shows the quality of this approach such that it can efficiently be used in a DIQKD or DIRNG protocol. We also demonstrate a method for sampling random quantum correlations (correlations which have a realization of NPA hierarchy level of 2) using the facet Bell inequalities, which is then used as input in the supervised machine learning process. Note that, while generating probability distributions, we consider all facet Bell inequalities for the [2,2] and [3,2] Bell scenario. However, since there are more than 10000 facet Bell inequalities for the [4,2] Bell scenario, we only restrict ourselves to generating probability distributions using the independent facet Bell inequalities. To illustrate the benefits of our method, we have applied it to several relevant Bell scenarios. Note that we design and train our neural networks to minimize statistical errors. However, we do not claim that our choice of the trained neural network is optimal for estimating the guessing probability and the associated optimal Bell inequality from the measurement statistics. Other constructions of neural networks will lead to different results. We observed that the statistical errors in the estimation of the guessing probability and the optimal Bell inequality increase with the complexity of the Bell scenario (i.e., the increase in the number of measurements per party). Since there are more inputs and outputs, our neural network architecture might not be able to generalize the extensive system with a limited number of hidden layers and nodes in each layer. To decrease the errors, one can take two steps. First, one can generate a larger dataset to train the model. Second, one can build a more extensive neural network architecture (i.e., more hidden layers or nodes in every layer). However, using a larger dataset for training or/and training a more extensive neural network will result in significantly more computational time. There is also the possibility of overfitting in an extensive network. A larger neural network architecture will also take more time to predict new instances. Therefore, one has to change the network architecture to optimize the speed and precision of a specific scenario. Note that while comparing the runtime for the Mosek optimization solver with the trained neural network for the estimation of the guessing probability (see Table II), we implement the NPA hierarchy of level 2. The difference in computational runtime between the methods will be much more pronounced with increasing hierarchy. Our research demonstrates the applicability of deep learning techniques for Bell nonlocality and upper bounding the guessing probability. We believe that this strategy will create several research lines. 
The logical next step is to apply our approach to Bell scenarios with a higher number of measurement settings and outcomes. It is also possible to expand our framework to multipartite scenarios. Another direction worth exploring in future work is investigating other neural network constructions. Beyond the advantage in speed, one could use neural network architectures to search for new Bell inequalities. Also, recall that our methodology does not account for uncertainty or offer certification of the output. It remains for future work to use techniques like probabilistic modeling [59] that can certify the correctness of the model's output.
Observation of Partial U_A(1) Restoration from Two-Pion Bose-Einstein Correlations The effective intercept parameter of the two-pion Bose-Einstein correlation function, lambda_*, is found to be sensitive to the partial restoration of U_A(1) symmetry in ultra-relativistic nuclear collisions. An increase in the yield of the eta' meson, proposed earlier as a signal of partial U_A(1) restoration, is shown to create a ``hole'' in the low p_t region of lambda_*. A comparison with NA44 data from central S+Pb collisions at 200 AGeV is made and implications for future heavy ion experiments are discussed. Introduction: Intensity interferometry is a useful method for studying the space-time geometry of high energy nucleus-nucleus collisions and elementary particle reactions (for recent reviews, see Ref. [1,2]). In particular, pion interferometry has proved useful in studying the space-time dependence of pion emission as was first shown experimentally by Goldhaber, Goldhaber, Lee and Pais [3]. The method of intensity interferometry, known also as Hanbury-Brown-Twiss (HBT) correlations, was introduced by Hanbury-Brown and Twiss [4] for measuring the angular diameters of main sequence distant stars. The purpose of this Letter is to show that pion interferometry can be used to detect the axial U A (1) restoration and the related increase of the η ′ production. As was shown in several papers [5,6], at incident beam energies of 200 AGeV at the CERN SPS, the space-time structure of pion emission in high energy nucleus-nucleus collisions can be separated into two regions: the core and the halo. The pions which are emitted from the core or central region consist of two types. The first type is produced from a direct production mechanism such as the hadronization of wounded stringlike nucleons in the collision region. These pions rescatter as they flow outward with a rescattering time on the order of 1 fm/c. The second type is produced from the decays of short-lived hadronic resonances such as the ρ, N * , ∆ and K * , whose decay time is also on the order of 1-2 fm/c. This core region is resolvable by Bose-Einstein correlation (BEC) measurements. The halo region, however, consists of the decay of long-lived hadronic resonances such as the ω, η, η ′ and K 0 S whose lifetime is greater than 20 fm/c. This halo region is not resolvable by BEC measurements. However, as will be summarized below, this region still affects the Bose-Einstein correlation function. In recent papers [7,8], it was argued that the partial restoration of U A (1) symmetry of QCD and related decrease of the η ′ mass [9][10][11][12] in regions of sufficiently hot and dense matter should manifest itself in the increased production of η ′ mesons. Estimates of Ref. [7] show that the corresponding production cross section of the η ′ should be enhanced by a factor of 3 up to 50 relative to that for p + p collisions. The effective intercept parameter, λ * , can be written in terms of the one-particle invariant momentum distributions [5,6] of the core and halo pions and thus is sensitive to the abundance of the long-lived hadronic resonances such as the η ′ . To see this, consider the two-particle Bose-Einstein correlation function which is defined as; where the inclusive n-particle invariant momentum distribution is given as ..E n dσ dp 1 ...dp n , the relative and the mean four-momenta are given by and p = (E p , p). 
From the four assumptions made in the core-halo model [13], the Bose-Einstein correlation function is found to be where the effective intercept parameter and the correlator of the core are defined, respectively, as and Here,S c (∆k, K) is the Fourier transform of the one-boson emission function, S c (x, p), and the subscripts c and h indicate the contributions from the core and the halo, respectively. In this form, λ * (K = p, Q min ) is simply related to the momentum distributions of the core and halo pions. The Q min dependence of λ * which essentially indicates the separation of the core and the halo is actually defined by the experimental two-track resolution Q min . Axial Symmetry Restoration and the η ′ production: In the chiral limit (m u = m d = m s = 0), QCD possesses a U(3) chiral symmetry. When broken spontaneously, U(3) implies the existence of nine massless Goldstone bosons. In nature, however, there are only eight light pseudoscalar mesons, a discrepancy which is resolved by the Adler-Bell-Jackiw U A (1) anomaly; the ninth would-be Goldstone boson gets a mass as a consequence of the nonzero density of topological charges in the QCD vacuum [14,15]. In Refs. [7,8], it is argued that the ninth ("prodigal" [7]) Goldstone boson, the η ′ , would be abundantly produced if sufficiently hot and dense hadronic matter is formed in nucleus-nucleus collisions. It was also observed, however, that the η ′ decays are characterized by a small signalto-background ratio in the direct two-photon decay mode. This may make the observation of η ′ in this mode difficult, especially at small transverse momenta, where the increase is predicted to be the strongest. However, we now show that the momentum dependence of λ * from pion correlations provides a good observable for partial U A (1) restoration. If the η ′ mass is decreased, a large fraction of the η ′ s will not be able to leave the hot and dense region through thermal fluctuation since they need to compensate for the missing mass by large momentum [7,8,11]. These η ′ s will thus be trapped in the hot and dense region until it disappears, after which their mass becomes normal again and as a consequence of this mechanism, they will have small p t . The η ′ s then decay to pions via Assuming a symmetric decay configuration (|p t | π + ≃ |p t | π − ≃ |p t | η ) and letting m η ′ = 958 MeV, m η = 547 MeV and m π + = 140 MeV, the average p t of the pions from the η ′ decay is found to be p t ≃ 138 MeV. In the core-halo picture the η ′ , η decays contribute to the halo due to their large decay time (1/Γ η ′ ,η > 20 fm/c). Thus, we expect a hole in the low p t region of the effective intercept parameter, λ * = [N core (p)/N total (p)] 2 , centered around p t ≃ 138 MeV. We note that the shape of λ * will also be effected if the masses of the ω and η mesons decrease. However, due to the large inelastic cross sections for ω -meson scattering, the ω are expected to rapidly reach chemical equilibrium when the hadronic fireball cools and their mass returns to its "normal" value. In this case, we do not expect a sizeable increase in the overall production of the ω mesons. In addition, any enhanced production of the η mesons should only increase the depth of the hole primarily in the p t ≃ 117 MeV region. In the case of equal production of the η and η ′ , there will be on the order of twice as many π + coming from the decay of the η ′ s than from the decay of the ηs. 
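As a quick numerical check of the symmetric-decay estimate quoted above (m_η′ = 958 MeV, m_η = 547 MeV, m_π = 140 MeV, with |p_t|_π+ ≃ |p_t|_π− ≃ |p_t|_η), one can solve the energy balance m_η′ = sqrt(m_η² + p²) + 2·sqrt(m_π² + p²) for the common momentum p of the decay products, treating the η′ as decaying essentially at rest. The short Python sketch below is only an illustration (not code from the paper) and finds p ≈ 138 MeV by bisection, in agreement with the value used in the text.

```python
import math

M_ETA_PRIME = 958.0   # MeV
M_ETA = 547.0         # MeV
M_PION = 140.0        # MeV

def energy_balance(p):
    # Total energy of eta + pi+ + pi- sharing the same momentum magnitude p,
    # minus the eta' rest mass (eta' assumed at rest, symmetric configuration).
    return (math.sqrt(M_ETA**2 + p**2)
            + 2.0 * math.sqrt(M_PION**2 + p**2)
            - M_ETA_PRIME)

# Simple bisection on p in [0, 200] MeV (the balance is monotonic in p).
lo, hi = 0.0, 200.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if energy_balance(mid) < 0.0:
        lo = mid
    else:
        hi = mid

print(f"common |p_t| of decay products: {0.5*(lo+hi):.1f} MeV")  # ~138 MeV
```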
Thus, we concentrate on the dynamics of the η ′ mesons giving an estimated lower bound on the depth of the produced hole. Description of the Simulation: In the following calculation of λ * , we suppress the rapidity dependence by considering the central rapidity region, (−0.2 < y < 0.2). As a function of m t = p 2 t + m 2 , λ * (m t ) is given by where the numerator represents the invariant m t distribution of π + emitted from the core and where the denominator represents the invariant m t distribution of the total number of π + emitted. The denominator may be explicitly written as A detailed analysis [6] has shown that the ω does not to contribute to the core in the S+Pb reaction and in the NA44 acceptance. To calculate the π + contribution from the halo region, the bosons (ω, η ′ , η and K 0 S ) are given both a rapidity (−1.0 < y < 1.0) and an m t and then are decayed using Jetset 7.4 [16]. The m t distribution [5,17] of the bosons is given by where C is a normalization constant, where α = 1 − d/2 and where [17,18] In the above expression, d = 3 is the dimension of expansion, T f o = 140 MeV is the freeze-out temperature and u t is the average transverse flow velocity. The m t distribution of the core pions is also obtained from Eqs. (10) and (11). The contributions from the decay products of the different regions (halo and core) are then added together according to their respective fractions, allowing for the determination of λ * (m t ). The respective fractions of pions are estimated from both the Fritiof [19] and the Relativistic Quantum Molecular Dynamics (RQMD) [20] models as summarized in Ref. [21]. The calculation using Fritiof abundances is shown in Fig 2 (solid line). A similar m t dependence but with a slightly higher value of λ * (m t ) is obtained when using RQMD abundances. Simulating the presence of the hot and dense region involves including an additional relative fraction of η ′ with a medium modified p t spectrum. The p t spectrum of these η ′ is obtained by assuming energy conservation and zero longitudinal motion at the boundary between the two phases. This conservation of transverse mass at the boundary implies, where the ( * ) denotes the η ′ in the hot dense region. The p t distribution then becomes a twofold distribution. The first part of the distribution is from the η ′ which have p * t ≤ m 2 η ′ − m * 2 η ′ . These particles are given a p t = 0. The second part of the distribution comes from the rest of the η ′ 's which have big enough p t to leave the hot and dense region. These have the same, flow-motivated p t distribution as the other produced resonances and are given a p t according to the m t distribution where C is a normalization constant and where T ′ = 200 MeV and m * η ′ is the effective temperature and mass, respectively, of the hot and dense region. Assuming m * η ′ = 500 MeV in the above scenario, the m t distribution of the π + from the decay of these η ′ (η ′ → η + π + + π − ) is shown in Fig. 1. Also shown is the m t distribution of the π + from η ′ assuming no hot and dense matter (Eqs. (10) and (11) with T f o = 140 MeV, u t = 0.5 and m η ′ = 958 MeV). Comparison of the two distributions shows the enhancement of the π + in the low m t region which results from the presence of the hot and dense region. Using three different effective masses for the η ′ in the hot and dense region, calculations of λ * (m t ) including the hot and dense regions are compared to those assuming the standard abundances in Fig 2. 
The effective mass, m * η ′ = 738 MeV, corresponds to an enhancement of the production cross section of the η ′ by a factor of 3, while m * η ′ = 403 MeV and m * η ′ = 176 MeV correspond to factors of 16 and 50, respectively. The two data points shown are taken from NA44 data on central S + P b reactions at the CERN SPS with incident beam energy of 200 AGeV [22]. The lowering of the η ′ mass and the partial chiral restoration result in a hole in the effective intercept parameter at low m t . This happens even for a modest enhancement of a factor of 3 in the η ′ production. Similar results are obtained when using RQMD abundances. In addition, λ * (m t ) is calculated using Fritiof abundances with different average flow velocities in Fig 3. Here it is shown that λ * (m t ) can also be a measure of the average collective flow. In our calculations, an average flow velocity of u t = 0.50 results in an approximately flat, m t -independent shape for the effective intercept parameter λ * (m t ), if the value of α = 1 − d/2 = −1/2 is kept fixed in Eq. (10). Calculations using RQMD abundances result in a similar dependence on u t , but with slightly higher values of λ * (m t ). A limitation of our study is that we did not include the effects of possible partial coherence in the λ * (m t ) function. This is motivated by the success of completely chaotic Monte Carlo simulations in describing the measured two-particle correlation functions at the CERN SPS. However, a recent study [23] indicates that higher order BE symmetrization effects may also result in a decrease of λ * (m t ) at low p t . For the present system, this effect seems to be negligible, about a 1 % decrease, where the typical momentum scale of this effect is m t − m = T ef f and where the typical decrease is estimated [23] from the measured radius and slope parameters. The flat shape of our λ * (m t ) distribution results from the inclusion of the flow motivated temperature, T ef f , along with the effective, m t dependent volume factor [5,17], V * ∝ m −d/2 t , in Eq. (10). This flat shape reproduces the published NA44 data and differs from earlier theoretical calculations where λ * is found only to increase with increasing m t . Summary: Our results reveal an important relationship between partial U A (1) symmetry restoration and the shape (hole) of the λ * (m t ) parameter of the Bose-Einstein correlation function. We stress that this proposed signal is observed from the transverse mass dependence of the strength of the two-particle correlations, correlations which are presently being measured for fixed target Pb+Pb collisions at the CERN SPS. Measurements of twoparticle correlations are also being planned for nuclear collisions at the Relativistic Heavy Ion Collider (RHIC) at BNL as well as at the CERN Large Hadron Collider (LHC). A qualitative analysis of NA44 S+Pb data suggests no visible sign of U A (1) restoration at SPS energies. In addition, we deduce a mean transverse flow of u t ≈ 0.50 in S+Pb reactions. Let us note that the suggested λ * -hole signal of partial U A (1) restoration cannot be faked in a conventional thermalized hadron gas scenario, as it is not possible to create significant fraction of the η and η ′ mesons with p t ≃ 0 in such a case. Acknowledgments: One of the authors, T. Csörgő, would like to express his thanks to Miklós and Györgyi Gyulassy for their kind hospitality while at Columbia University. D. Kharzeev is grateful to J. Kapusta
C*- Algebras and Thermodynamic Formalism We present a detailed exposition (for a Dynamical System audience) of the content of the paper: R. Exel and A. Lopes, $C^*$ Algebras, approximately proper equivalence relations and Thermodynamic Formalism, {\it Erg. Theo. and Dyn. Syst.}, Vol 24, pp 1051-1082 (2004). We show only the uniqueness of the \beta-KMS (in a certain C*-Algebra obtained from the operators acting in $L^2$ of a Gibbs invariant probability $\mu$) and its relation with the eigen-probability $\nu_\beta$ for the dual of a certain Ruele operator. We consider an example for a case of Hofbauer type where there exist a Phase transition for the Gibbs state. There is no Phase transition for the KMS state. Introduction In this paper we show a relation of the KMS state of a certain C * -Algebra U [BR] [P] [EL2] with the Gibbs state of Thermodynamic Formalism [PP] [Bo] Section 1 -KMS and Gibbs states We denote C(X) the space of continuous functions on X taking values on the complex numbers where (X, d) is a compact metric space. Consider the Borel sigma-algebra B over X and a continuous transformation T : X → X. Denote by M(T ) the set of invariant probabilities for T . We assume that T is an expanding map. Tipical examples of such transformations (for which that are a lot of nice results [R2]) are the shift in the Bernoully space and also C 1+α -tranformations of the circle such that |T ′ (x)| > c > 1, where | | is the usual norm (one can associate the circle to the interval [0, 1) in a standard way) and c is a constan. The geodesic flow in compact constant negative curvature surfaces induces in the boundary of Poincaré disk a Markov transformation G such that for some n, we have G n = T , and where T is continuous expanding and acts on the circle (see [BS]). Our results can be applied for such T . We denote by H = H α the set of α-Holder functions taking complex values, where α is fixed 0 < α ≤ 1. For each ν ∈ M(T ), the real non-negative value h(ν) denotes the the Shanon-Kolmogorov entropy of ν and h(T ) = sup{h(ν)|ν ∈ M(T )}. h(T ) is called the topological entropy of T . Given a continuous function A : X → R we denote the Ruelle operator by L A (which acts on continuous function f ). More precisely if g = L A (f ), then g(x) = L A (f )(x) = T (z)=x e A (z)f (z). We say that the potential A is normalized if L A (1) = 1. Given A, the dual operator L * A acts on probabilities on M(X). We say that L * A (ν) = ρ if for any continuous function f We denote by µ a fixed Gibbs state for a real Holder potential log p : X → R. We suppose log p is already normalized [Bo][R3], in the sense that, if L log p (for short L p ) denotes the Ruelle-Perron-Frobenius operator for log p, that is for any f : X → C, and all x ∈ X, we have (L p (f ))(x) = T (z)=x p(z)f (z), then we assume that L p (1)(x) = T (z)=x p(z) = 1 and L * p (µ) = µ. We will show later that the index λ(x) = p(x) −1 for the C * -algebra associated to µ. As an interesting example we mention the case where T has degree k, that is, for each x ∈ X there exists exactly k different solutions z for T (z) = x. We call each such z a pre-image of x. If T has degree k and in the particular case where µ is the maximal entropy measure (that is, h(µ) = h(T ) = log k), then p = 1/k. In order to simplify the arguments in our proofs we will assume from now on that T has degree k. One can consider alternatively in Thermodynamic Formalism L p acting on C(X) or on H α . 
Different spectral properties for L p ocurr in each one of these two cases (see [Bo][R2]). We will consider in the sequel a fixed real Holder-continuous positive potential H : X → R and L H,β , β ∈ R the Ruelle-Perron-Frobenius operator for −β log H, that is, for each continuous f we have by definition We denote by λ H,β ∈ R the largest eigenvalue of L H,β . We also denote ν H,β the unique probability such that L * H,β (ν H,β ) = λ H,β ν H,β , and h H,β the unique function h ∈ C(X) such that hdν H,β = 1 and L H,β As H is fixed for good in order to simplify the notation we will sometimes write L β , L * β , λ β , ν β , h β . h β is a real positive Holder function. The hypothesis about H and p being Holder in the Statistical Mechanics setting means that in the Bernoulli space the interactions between spins in neighborhoods positions decrease very fast [L2] [L3]. In section 2.3 we will consider a non-Holder potential H where in this case it will appear a phasetransition phenomena. This model is known as the Fisher-Felderhof model [FF], [L2], [L3], [FL]. In this case the interactions do not decrease so fast. We return now to the Holder case. It is well known the variational principle for such potential −β log H, The probability µ H,β = h H,β ν H,β ∈ M(T ) and satisfies The probability µ H,β is unique for the variational problem and ν H,β is unique for the the eigenmeasure problem associated to the value λ H,β , if p and H are Holder. If we do not assume p and H Holder then there exist counterexamples for uniqueness in both cases [L2] [L3]. We will return to this point later. For some reason the eigen-probabilities have a distinguished role here, but not the equilibrium states. P H (β) is called the pressure of −β log H (or sometimes Free-Energy) and is a convex analytic function of β. If T has degree k and in the particular case where µ is the maximal entropy measure (that is, h(µ) = h(T ) = log k), then p = 1/k. In Thermodynamic Formalism it is usual to consider the Koopman operator acting on L 2 (µ) (the space of complex square integrable functions over L 2 (µ)), and it is well known that its adjoint (over L 2 (µ)) is the operator L p = S * acting on L 2 (µ). As we assume X is compact, any continuous function f is in L 2 (µ). Definition 1.4: Another important class of linear operators is M f : L 2 (µ) → L 2 (µ), for a given fixed f ∈ C(X), and defined by M f (η)(x) = f (x)η(x), for any η in L 2 (µ). In order to simplify the notation, sometimes we denote by f the linear operator M f . Note that for M f and M g , f, g ∈ C(X), the product operation satisfies M f • M g = M f.g , where . means multiplication over the complex field C. Note that the * operation applied on M f , f ∈ C(X), is given by M * f = M f , where z is the complex conjugated of z ∈ C. In this sense, M * f is the adjoint operator of M f over L 2 (µ). The main point for our choice of µ as eigen-probability for L * p , is that in L 2 (µ), the dual of the Koopman operator S is the operator L p = S * acting on L 2 (µ). Indeed, for any f, g we have It is important not confuse the dual of the Ruelle operator L p in the Hilbert structure sense with the dual of L p as a linear functional on continuous functions. L(L 2 (µ)), the set of linear operators over L 2 (µ), is a very important C * -Algebra. We will analyze here a sub-C * -Algebra of such C * -Algebra (defined with the above operations . and * ), more precisely the C * -Algebra U. 
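To make the preceding adjointness statement concrete, here is a small numerical illustration (not taken from the paper; the map, the test functions and the quadrature grid are our own choices) for the degree-2 expanding map T(x) = 2x mod 1 with the normalized potential p ≡ 1/2, for which the maximal-entropy Gibbs measure µ is Lebesgue measure on the circle. The script checks that L_p(1) = 1 and that ⟨f∘T, g⟩_{L²(µ)} = ⟨f, L_p g⟩_{L²(µ)}, i.e. that the Ruelle operator L_p acts as the L²(µ)-adjoint of the Koopman operator S(f) = f∘T; the last line also illustrates L*_p(µ) = µ.

```python
import numpy as np

# Degree-2 expanding map on the circle and its two inverse branches.
def T(x):
    return (2.0 * x) % 1.0

def ruelle(g, x):
    # (L_p g)(x) = sum over preimages z of x of p(z) * g(z), with p = 1/2.
    return 0.5 * g(x / 2.0) + 0.5 * g((x + 1.0) / 2.0)

# Quadrature grid for mu = Lebesgue measure on [0, 1).
N = 200000
x = (np.arange(N) + 0.5) / N

f = lambda t: np.cos(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)
g = lambda t: np.exp(np.cos(2 * np.pi * t))

# L_p(1) = 1 (the potential is normalized).
print(np.allclose(ruelle(lambda t: np.ones_like(t), x), 1.0))

# Duality: <S f, g> = <f, L_p g> in L^2(mu), i.e. L_p is the adjoint of f -> f o T.
lhs = np.mean(f(T(x)) * g(x))
rhs = np.mean(f(x) * ruelle(g, x))
print(lhs, rhs)   # the two numbers agree up to quadrature error

# Invariance of mu: the integral of L_p(f) d(mu) equals the integral of f d(mu).
print(np.mean(ruelle(f, x)), np.mean(f(x)))
```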
Definition 1.5: We denote by α : C(X) → C(X) the linear operator such that for any f , we have α(f ) = f • T . We have to show how the operators S and M f acting on L 2 (µ) interact with the operators L p and α acting on C(X). One can easily see that α(M f ) = M f •T . This is the first relation. In the simplified notation (we identify M f with f ), one can read last expression as α(f ) = f • T . In this way α n (f ) = f • T n . If B is the Borel sigma-algebra then we denote by F n the Sigma-algebra It is know that if we consider the probability µ, then the conditional expected value As F m ⊂ F n for m ≥ n, we have Definition 1.6: Consider the C * -Algebra contained in the set of bounded operators L(L 2 (µ)) generated by the elements of the form M f S n (S * ) n M g , where n ∈ N and f, g ∈ C(X). We denote such C * -Algebra by U = U(µ, T ). We call U the C * -Algebra associated to µ. Each element a in U is the limit of finite sums Note that f → M f defines a linear injective function of C(X) on U. We denote e n = S n (S n ) * = E(f | F n ) ∈ U. Important Properties: We have basic relations in such C * -Algebra U: a) (S * ) n S n = 1, for all n ∈ N (it follows from S * S = 1) . proof: for any η ∈ L 2 (µ), we have proof: for any η ∈ L 2 (µ), we have proof: note first that taking adjoint with respect to the L 2 (µ) structure Then, and we can apply item i) to get Now taking adjoint once more we get If u is F m measurable and m > n, then u is F n measurable. Then, By the other hand Remark 0: If we consider the C * -algebra generated M f S m (S * ) n M g , where n, m ∈ N and f, g ∈ C(X), we have a different setting (which is usually called a Vershik C * -algebra) which was consider in another paper by R. Exel [E3]. In this case, the KMS state exists only for one value of β. We now return to our setting. An extremely important result will be shown in expression (*1) and (*2) in Lemma 2.1 which claims that there exists functions A bijective linear transformation K : U → U which preserves the composition and the * operation is called an automorphism of U We denote by Aut(U) the set of automorphism of the C * -Algebra U. Definition 1.7: Given a positive function H we define the group homo- The value t above is related to temperature and not time, more precisely we are going to consider bellow t = βi where β is related to the inverse of temperature in Thermodynamic Formalism (or Statistical Mechanics). It can be shown that for each t fixed, we just have to define σ t over the generators of U in order to define σ t uniquely on U. In this way a) and b) above define σ t . We will assume in this section from now on that H is Holder in order we can use the strong results of Thermodynamic Formalism. Remark 1: Note that for η ∈ L 2 (µ), we have It follows easily by induction that Taking dual in both sides of the above expression we get other important relation In terms of the formalism of C * -dynamical systems, the positive function H defines the dynamics of the evolution with time t ∈ R of a C * -dynamical system. Our purpose is to analyze such system for each pair (H, β). Definition 1.9: By definition a "C * -dynamical system state" is a linear functional ψ : U → C such that a) ψ(M 1 ) = 1 b) ψ(a) is a positive real number for each positive element a on the C * -Algebra U. A "C * -dynamical system state" ψ in C * -dynamical systems plays the role of a probability ν in Thermodynamic Formalism. For a fixed H, we have a dynamic temporal evolution defined by σ t where t ∈ R. 
Definition 1.10: An element a ∈ U is called analytic for σ if σ t (a) has an analytic extension from t ∈ R to all t ∈ C. Definition 1.11: For a fixed β ∈ R and H, by definition, ψ is a KMS state associated to H and β in the C * -Algebra U(µ, T ), if ψ is a C * -dynamical system state, such that for any b ∈ U and any analytic a ∈ U we have ψ(a.b) = ψ(b.σ βi (a)). For H and β fixed, we denote a KMS state by ψ H,β = ψ β and we leave ψ for a general C * -dynamical system state. It follows from section 8.12 in [P] that if ψ β is a KMS state for H, β, then for any analytic a ∈ U, we have that τ → ψ β (σ τ (a)) is a bounded entire function and therefore constant. In this sense ψ is stationary for the continuous time evolution defined by the flow σ t . Note that the KMS state, in principle, could depend of the initially chossen µ because we are considering L 2 (µ) when defining U, but in the end it will be defined by a measure that depends only in β and H We point out that it can be shown that in order to characterize ψ as a KMS state we just have to check the condition ψ(a.b) = ψ(b.σ βi (a)) for a, b the linear generators of U, that is, a of the form M f1 S n (S * ) n M g1 and b of the form A natural question is: for a given β and H, when the KMS state ψ H,β exist and when it is unique? We are interested mainly in uniqueness and explicitly. We will explain this point more carefully later. Our purpose here is to show how to associate in a unique way each KMS state ψ H,β = ψ β to the eigenmeasure ν H,β = ν β defined before. We denote H β [n] (x) = Π n−1 i=0 H(T i (x)) β and Λ n = H −β [n] λ [n] . From this follows that for any continuous function f we have L n β (f ) = L n p (Λ n f ). Remember that for any continuous function k we have L n p (k • T n ) (x) = k(x) because L n p (1) = 1. Lemma 1.1 For any any β and continuous function f Proof: Note that Section 2 -The main result We define G : U → C(X) by G(M f e n M g ) = f λ −[n] g where e n = S n (S * ) n . Moreover, G(M f M g ) = f g Note that we define G in the elements of the form M f e n M g , n ≥ 0, and then we define G in U by linear combinations and limits. Suppose There is a canonical way to define a C * -dynamical system state ψ ν : U → C by In this way if n ≤ m (by item i) ) In this way if n ≥ m (by item j) ) Theorem 2.1: Given φ ν and ψ ν = φ ν • G we get that ψ ν is KMS for temperature β, if and only if, φ ν satisfies which is the same that to say that ν satisfies Proof: In order to simplify the notation we call E n (f ) = E µ (f | F n ). Suppose that ψ is a KMS state. Then for all a, b, c, d ∈ C(X) and all n we have ψ((ae n b)σ iβ (ce n d)) = ψ((ce n d)(ae n b)). ( * 3) The left hand side is equals to The right hand side of (*3) is equals to Now, we want to prove the other implication. Note that φ ν (ab) = φ ν (ba) for continuous functions a and b. We would like to prove that ψ((ae n b)σ iβ (ce m d)) = ψ((ce m d)(ae n b)), ( * 4) for all a, b, c, d ∈ A and n, m ∈ N. Suppose first the case n ≤ m. By the important property i) we get that the left hand side of (*4) is equals to where in the last equality we use the fact that By the other hand the right hand side of (*4) is equals to where in the last equality we use once more the fact that In this way we showed the KMS condition in the case n ≤ m. For the case n ≥ m, using the important property j) we note that the left hand side of (*4) is The right hand side of (*4) equals The conclusion follows at once because λ [m] λ −[n] ∈ F m . Corollary 2.1. 
Suppose ν β is an eigenprobability for the Ruelle operator of the potential −β log H. If the C * -dynamical system state ψ ν β : U → C is defined by then, ψ ν β is a KMS state for temperature β. Proof: This follows from last theorem and Lemma 1.1 Note that when H is constant then µ is an eigenprobability for the associated Ruelle operator for any β > 0. From expression (*5) we can see that σ t in this case is the identity for any t. Moreover, by the KMS relation ψ µ (a b) = ψ µ (b a). We can ask about uniqueness of the KMS state. To address this question is the purpose of the next results. We need a preliminary estimate before proving the lemma. For the transformation T , consider a partition A 1 , ..., A k of X such that T is injective in each A i . Our proof bellow is for the shift in the Bernouilli space. In the case of the Bernoulli space with k symbols A i is the cylinder i with first coordinate i. Now we consider a partition of unity given by k non-negative functions v 1 , ..., v k such that each v i (x) = I i (x) (the indicator function of the cilinder i) which has support on A i and k i=1 v i (x) = 1 for all x ∈ X. In the case X is the unitary circle and T is expansive, using a conjugacy with the shift, we obtain similar results. Denote now the functions u i given by: if x is in the cylinder i Indeed, for x in the cylinder i, take u i (x) = p −1/2 (x). This is so because for x = (j, x 2 , x 3 , ...) we get Now, we continue the argument: Now we use the relations S n M g = M α n (g) S n and M g (S * ) n = (S * ) n M α n (g) in last expression and we get Now we will prove the lemma. Using last expression and then Remark 2 for g = α n (u i ) ∈ C(X) and a = S n+1 (S * ) n+1 we get This shows the claim of the lemma. We denote E m (f ) = E µ (f | F m ). Corollary 2.1 If ψ is Gibbs for H at temperature zero, and ν is such that for any continuous function f we have ψ(f ) = f dν, then which is the same that to say that ν satisfies Proof: We get from last lemma that ψ(f e n ) = φ ν (G(f e n )) where φ ν (f ) = f dν = ψ(f ). Now, from Theorem 2.1 we get that (*6) is true. Now we will show the uniqueness of the KMS state: Theorem 2.3: Given any KMS ψ, then ψ = ψ β where ψ β is the KMS state associated to the Gibbs probability ν β . Proof: In order to do that we will show that any possible ν as defined above from the KMS ψ is equal to ν β . Take ν a probability associated to ψ, then for each n, and f ∈ C(X) we have We claim that and this shows that ν = ν β , and therefore ψ = ψ β . Now we show the claim. Note that where λ β is the eigenvalue associated to L β . Applying the above expression to f = h β (we can assume h β is such that h β dν β = 1) and using the fact that L n β (h β ) = λ n β h β we get As h β is continuous and positive, there exists c > 0 such for all x ∈ X we have h β (x) > c. It is known (see [Bo]) that uniformly in z ∈ X, we have Therefore, given ǫ > 0, we can find N > 0 such that for all n > N we have for all z ∈ X The conclusion from (*7) is that for any f ∈ C(X) Consider now f = 1 and we get lim n→∞ α n (h β )Λ n λ n β dν = 1. From this we conclude that f dν = I = f dν β for all f ∈ C(X). This shows the uniqueness and that ν = ν β . The final conclusion is that any KMS ψ for H, β is equal to the ψ β associated to ν β . Section 3 -no phase transitions We consider here an interesting example of a KMS state associated with the reference measure µ given by the maximal entropy measure for the shift in 2 symbols {0, 1}. In this case p = 1/2 is contant. 
We will define a special potential H and we will consider specifically the special value β = 1 We refer the reader to [H] [L2] [L3] [FL] [Y] [L] for references and results about the topics discussed in this section. We are going to introduce the Fisher-Fedenhorf model of Statistical Mechanics in the therminology of Bernoulii spaces and Thermodynamic Formalism [H]. We denote by M k ⊂ Σ + , for k > 1, the cylinder set [111 . . . 11 . The ordered collection (M k ) ∞ k=0 is a partition of Σ + ; in other words these sets are disjoint and their union is the whole space (minus the point (11 . . . )). Note that T maps M k bijectively onto M k−1 for k ≥ 1, and onto Σ + for k = 0. The point (1111...) is fixed for T . For γ > 1 a fixed real constant, we consider the potential g(x) such that g(111111 . . . .) = 0, for x ∈ M k , for k = 0, and a 0 = − log(ζ(γ)), for x ∈ M 0 , where ζ is the Riemann zeta function. By definition, and so the reason for defining a 0 in such way is that, if we define s k = a 0 + a 1 + · · · + a k , then Σe s k = 1. From now on we assume γ > 2, otherwise we have to consider sigma-finite measures and not probabilities in our problem. The potential 1 < ( k+1 , for x ∈ M k , is not Hölder and in fact is not of summable variation. Note that H(1111...) = 1, The pressure P (− log H) = P (g) = P (log p+ log 2 − 1 log H) = 0 and one can show that there exist two equilibrium states for such a potential g (in the sense of minimizing measures for the variational problem): a point mass (the Dirac delta δ(111...)) at (1111 . . . ), and a second measure which we shall denote byμ (see [H]) The existence of two probabilitiesμ and δ (1111...) for the variational problem of pressure defines what is called a phase transition in the sense of Statistical Mechanics [H] [L3]. We will describe bellow how to define this measureμ. Consider as in [H] L * g , the dual of the Ruelle-Perron-Frobenius operator L g associated to g, where the action of L g on continuous functions is given by L β=1 (ψ)(y) = T (x)=y e g(x) ψ(x). We claim that there is a unique probability measure ν on Σ + which satisfies L * g ν = ν [FL] [H]. To prove this, note first that ν cannot have any mass at (11 . . . ); it follows that M 0 has positive mass, and the stipulation that ν be an eigenmeasure then gives a recurrence relation for the masses of M k . Since T (M k ) = M k−1 for k ≥ 1, we have that the masses of the sets in this partition are ν(k) = ν(M k ) = e s k = (k + 1) −γ ζ(γ) , k ≥ 0; in particular, ν(0) = ν(M 0 ) = e s0 = e a0 = 1 ζ(γ) . By the same reasoning, ν is determined on all higher cylinder sets for the partition (M k ) ∞ k=0 . Hence ν exists and is unique. The measure ν defined above is the unique eigenmeasure for L * β=1 and denoted by ν 1 . The measure defined by the delta-Dirac on (111...) is invariant but is not a fixed eigenmeasure for L * g . This measure ν 1 defines a KMS state ψ ν1 for such H, β = 1 and U(µ). We conjecture that there is another KMS state ψ different from ψ ν1 but not associated to a measure. Note that such H assumes the value 1 in just one point. The functionh satisfies L g (h) =h. The integral h (x)dν 1 (x) is finite if and only if γ > 2. One can normalizẽ h, multiplying by a constant u to get h = uh with hdν 1 = 1. This constant is . The probabilityμ has positive entropy and its support is all Σ + (see [H] or [L3] [FL]). The probabilityμ has positive entropy and its support is all Σ + (see [H] or [L3] [FL]). 
We can conclude from the above considerations that, without the hypothesis that H and p are Hölder, an equilibrium probability ρ for the pressure is not always associated to a KMS state ψ_ρ. In the present example, this happens because ρ = δ_(1111...) is not an eigenmeasure of the dual of the Ruelle-Perron-Frobenius operator L_β, although it is an equilibrium measure for β = 1. In [L2] and [L3] the lack of differentiability of the free energy is analyzed, and in [L3] [FL] [Y] it is shown that such systems present polynomial decay of correlations. In [L1] a dynamical model with three equilibrium states is presented.
Hepatitis C Virus Translation Regulation. Translation of the hepatitis C virus (HCV) RNA genome is regulated by the internal ribosome entry site (IRES), located in the 5’-untranslated region (5′UTR) and part of the core protein coding sequence, and by the 3′UTR. The 5′UTR has some highly conserved structural regions, while others can assume different conformations. The IRES can bind to the ribosomal 40S subunit with high affinity without any other factors. Nevertheless, IRES activity is modulated by additional cis sequences in the viral genome, including the 3′UTR and the cis-acting replication element (CRE). Canonical translation initiation factors (eIFs) are involved in HCV translation initiation, including eIF3, eIF2, eIF1A, eIF5, and eIF5B. Alternatively, under stress conditions and limited eIF2-Met-tRNAiMet availability, alternative initiation factors such as eIF2D, eIF2A, and eIF5B can substitute for eIF2 to allow HCV translation even when cellular mRNA translation is downregulated. In addition, several IRES trans-acting factors (ITAFs) modulate IRES activity by building large networks of RNA-protein and protein–protein interactions, also connecting 5′- and 3′-ends of the viral RNA. Moreover, some ITAFs can act as RNA chaperones that help to position the viral AUG start codon in the ribosomal 40S subunit entry channel. Finally, the liver-specific microRNA-122 (miR-122) stimulates HCV IRES-dependent translation, most likely by stabilizing a certain structure of the IRES that is required for initiation. Introduction Hepatitis C virus (HCV) is an enveloped positive strand RNA virus that preferentially replicates in the liver [1], and it is classified in the genus Hepacivirus in the family Flaviviridae. Worldwide, about 71 million people are infected with HCV [2]. The infection is usually noticed only when coincidentally diagnosed by routine testing, for example, during hospitalization, or when the liver disease becomes acute. In the latter case, liver damage by virus replication and the resulting immune responses can lead to impaired bilirubin conjugation in the liver, and unconjugated bilirubin deposits can then be noticed as a yellowish color (called jaundice), often first in the sclera in the eyes and when more severe also in the skin. An acute infection can result in severe liver damage, in rare cases even resulting in death [3,4]. However, most HCV infections remain inapparent [5,6], and the virus infection can become chronic in about 60% to 70 % of all infections [7], often without being noticed. Chronic infection can, in the long run, result in liver cirrhosis and liver cancer (hepatocellular carcinoma, HCC) [8][9][10], while a metabolic reprogramming of the infected cells according to the "Warburg effect" like in cancer cells can be observed only a few days after the onset of HCV replication [11]. Moreover, inapparent replication of the virus usually results in unnoticed spread of the virus to other individuals, a fact that is a major challenge for surveillance, health care, and treatment [12]. Meanwhile, very effective treatment regimens using direct acting antivirals (DAAs) are available, although they are still very expensive [13,14]. Although the error rate of the viral replicase is high and can, in principle, easily give rise to resistance mutations, the conserved nature of the replicase active center and the often occurring reduced fitness of mutants result in the rare appearance of resistance mutations against nucleoside inhibitors such as sofosbuvir [15,16]. 
An effective vaccination is not yet available, also partially due to the high variability of the viral RNA genome. Thus, further research on HCV is urgently required to combat HCV infections, and despite much progress in the understanding of HCV replication, the molecular mechanisms of HCV replication are still far from being completely understood [12]. HCV molecular biology research essentially started with the first cloning of the HCV genome ("non-A non-B hepatitis") [17] (Figure 1). Further important landmarks were the development of a subgenomic replicon system [18] that first allowed analysis of intracellular virus replication, and the development of infectious full-length clones that were capable of going through a complete viral replication cycle including infection and new virus production [19][20][21].

Figure 1. cis-Elements in the hepatitis C virus (HCV) RNA genome that are involved in translation regulation. The HCV plus strand RNA genome. The internal ribosome entry site (IRES) in the HCV 5′-untranslated region (5′UTR), the entire 3′UTR, and the cis-acting replication element (CRE) in the NS5B coding region are involved in translation regulation [22][23][24]. Those regions of the 5′UTR, 3′UTR, and CRE that bind to the ribosomal 40S subunit are underlaid in light yellow. Stem-loops (SLs) in the 5′UTR are numbered by roman numerals. The region of the IRES is surrounded by a dotted line. The IRES includes SLs II-IV of the 5′UTR but spans into the core protein coding region. The canonical AUG start codon in SL IV of the 5′UTR (black circle) gives rise to translation of the polyprotein, which is cleaved to yield structural proteins and non-structural (NS) proteins, including the viral replicase NS5B. The 3′UTR contains the variable region, a poly(U/C) tract (U/C), and the so-called 3′X region including SLs 1, 2, and 3. The stem-loop 5BSL3.2 in the 3′-region of the NS5B coding region is the CRE, flanked by the upstream stem-loop 5BSL3.1 and the downstream 5BSL3.3. The polyprotein stop codon is located in the stem-loop 5BSL3.4 (asterisk). Some other start codons which give rise to the alternative reading frame (ARF) in the core+1 reading frame [25][26][27][28] are shown in grey. Positively and negatively acting long-range RNA-RNA interactions (LRIs) are shown in green or red, respectively, with the sequence "9170" shown as a green circle. Selected binding sites for microRNA-122 (miR-122) are shown in blue.

The HCV RNA genome is about 9600 nucleotides (nts) long and codes for one long polyprotein that is co- and post-translationally processed into the mature gene products [29][30][31]. The viral structural proteins include the core protein, which is contained in the virus particle, as well as the envelope glycoproteins E1 and E2. The non-structural (NS) proteins are involved in HCV RNA replication and particle assembly. The first NS protein is p7, a viroporin, which is involved in assembly. NS2 is a protease and acts as a cofactor involved in assembly. NS3 has protease and helicase functions, and the protease activity of NS3 is responsible for cleavage at the downstream NS protein precursor cleavage sites. NS4A is an NS3 cofactor, whereas NS4B is involved in the membrane reorganization of replication complex formation. NS5A is involved in RNA replication and assembly, and NS5B is the viral replicase (RNA-dependent RNA polymerase, RdRp) [29][30][31].

The HCV virus particle comes as a lipo-viro-particle [32,33]. After binding to a variety of surface receptors including the low-density lipoprotein (LDL) receptor [33][34][35] and entry into the cytoplasm, the viral NS proteins produced in the first ("pilot") round of translation recruit cellular membranes that originate from the endoplasmic reticulum (ER) and form a so-called membranous web in which the viral proteins and genomes form spatially coordinated replication complexes [31,36,37].
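For orientation, the translation-relevant cis-elements summarized in Figure 1 can be collected in a small feature table; the minimal Python sketch below simply restates the element names and roles given in the figure legend and text, with no coordinates, since exact boundaries are genotype-dependent (the code itself is purely illustrative and not part of any HCV analysis package).

```python
# Minimal sketch: translation-relevant cis-elements of the HCV genome (cf. Figure 1),
# collected as a small feature table. Names and roles restate the text/figure legend;
# no coordinates are given because exact boundaries are genotype-dependent.
from dataclasses import dataclass

@dataclass
class CisElement:
    name: str
    location: str   # region of the ~9600-nt genome
    role: str       # role in translation regulation

ELEMENTS = [
    CisElement("SL I", "very 5' end of the 5'UTR",
               "replication control; not part of the IRES"),
    CisElement("miR-122 sites 1 and 2", "5'UTR, between SL I and SL II",
               "miR-122 binding; genome stabilization and stimulation of translation"),
    CisElement("IRES (SLs II-IV plus ~30 nts of core)", "5'UTR / start of core coding region",
               "factor-independent 40S recruitment; cap-independent initiation"),
    CisElement("CRE (5BSL3.2)", "3'-terminal NS5B coding region",
               "cis-acting replication element; also implicated in translation regulation"),
    CisElement("3'UTR (variable region, poly(U/C) tract, 3'X)", "3'UTR",
               "involved in translation regulation; communicates with the 5' end"),
]

if __name__ == "__main__":
    for e in ELEMENTS:
        print(f"{e.name:45s} | {e.location:35s} | {e.role}")
```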
Each incoming plus strand which survived cellular immune responses [5,38,39] and degradation is replicated by the NS5B replicase and gives rise to one antigenome minus strand copy, which in turn generates about 10 progeny plus strands [30]. HCV RNA synthesis is regulated by a variety of RNA signals that reside close to the very 3 -and 5 -ends of the genome, but also sequence and RNA secondary structure elements in the coding region (largely in the 3 -terminal NS5B coding sequence) contribute to the regulation of RNA synthesis [23,30,[40][41][42][43][44][45]. Bulk translation from the progeny plus strand genomes, then, yields a vast excess of viral proteins over the number of genomes [30,46]. Then, progeny plus strand RNA genomes together with viral proteins are packaged into newly assembled virions [32,47], together with some cellular proteins such as apolipoproteins (Apo) A-I, B, C-II, and E, which contribute to liver tropism of the virus by binding to the next infected hepatocyte´s surface receptors [48]. Unlike most cellular mRNAs, the 5 -end of the HCV genomic RNA has no cap nucleotide attached which would govern efficient cap-dependent translation initiation [49][50][51]. Instead, HCV translation is mediated by virtue of an internal ribosome entry site (IRES) [22,[52][53][54][55], which is located largely in the 5 UTR but also slightly spans into the coding region ( Figure 1). While the resulting low efficient translation coincides with the "undercover" strategy of HCV replication that often leads to chronic infection and further unnoticed spread of the virus to uninfected individuals, the use of such IRES elements has two more big advantages. The first advantage is that in particular the very ends of the RNA genome do not need to serve functions in translation control such as in capped and polyadenylated cellular mRNAs. Instead, RNA signals that are involved in genome replication can be directly placed at the very genome ends [23,30,45]. The second benefit of cap-independent translation is that the virus escapes antiviral countermeasures of the cell in terms of the downregulation of cap-dependent translation, which is largely conferred by phosphorylation of eIF2 and the resulting inhibition of cap-dependent translation initiation [56]. In addition to the hepatocyte surface receptors mentioned above, another determinant of the liver tropism of HCV is the microRNA-122 (miR-122). microRNAs are small single-stranded guide RNAs that direct effector complexes involving Argonaute (Ago) proteins to cellular mRNAs and usually negatively influence the mRNA´s translation efficiency and induce degradation of the mRNA [57,58]. miR-122 is expressed almost exclusively in the liver and constitutes about 70% of all microRNAs in hepatocytes [59], whereas it is nearly not expressed in other tissues [60]. In contrast to the negative influence of microRNAs on cellular mRNAs, HCV utilizes miR-122 to promote its own replication [61], thereby making the liver-specific miR-122 another determinant for the liver tropism of HCV. There are five to six target sequences conserved in the HCV genome, two in the 5 UTR [62], one in the 3 UTR, and two to three in the NS5B coding region (depending on genotype) [63]. Cooperative binding of two miR-122 molecules to the two adjacent target sites in the HCV 5 UTR contributes to RNA stability by protecting against cellular nucleases [64,65] and has a positive effect on the efficiency of HCV translation [66][67][68][69][70][71][72]. 
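As a quick illustration of how such target sites can be located, the short Python sketch below scans an RNA sequence for perfect matches to the miR-122 seed (assuming the canonical hsa-miR-122-5p sequence); the target fragment used in the example is an arbitrary placeholder, not an authentic HCV isolate sequence, and the number of hits it reports has no biological meaning.

```python
# Minimal sketch: scan an RNA sequence for miR-122 seed matches (nts 2-8 of the miRNA).
# MIR122 is the assumed canonical hsa-miR-122-5p sequence; the target fragment below
# is a made-up placeholder used only to demonstrate the scan.

MIR122 = "UGGAGUGUGACAAUGGUGUUUG"

def revcomp_rna(seq: str) -> str:
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_matches(target: str, mirna: str = MIR122, seed_len: int = 7):
    """Return the seed-match motif and its 0-based positions in the target."""
    seed = mirna[1:1 + seed_len]      # nts 2-8 of the miRNA
    site = revcomp_rna(seed)          # what a perfect seed match looks like on the target
    hits, start = [], target.find(site)
    while start != -1:
        hits.append(start)
        start = target.find(site, start + 1)
    return site, hits

if __name__ == "__main__":
    placeholder_fragment = "GGCCAGCCCCCUGAUGGGGGCGACACUCCACCAUGAAUCACUCCCCUGUGAGGAACU"
    site, hits = seed_matches(placeholder_fragment)
    print(f"seed-match motif: {site}; hits at positions: {hits}")
```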
Although some studies investigated the possible roles of the conserved potential miR-122 binding sites in the NS5B coding region and in the 3 UTR, it is not yet clear if physical binding of miR-122 to these sites results in effector functions [73][74][75][76], leaving some doubts why these sequences are conserved among HCV isolates. In this review, we focus on the sequences, cellular factors, and molecular mechanisms involved in the regulation of translation by the HCV IRES. Thereby, we touch the functions of miR-122 specifically only with regard to translation regulation, while another review by Joyce Wilson in this review series thoroughly covers miR-122 action in all aspects of HCV replication. An Overview over HCV Genome Regions Involved in Translation Regulation The RNA cis-elements that control HCV RNA genome translation largely reside in the 5 and 3 regions of the genome (Figure 1). The IRES element is located in the 5 UTR plus some 30 nts of the core coding region (dotted line in Figure 1) [52,53,77,78]. It contains the large branched domain (Dom) III (or stem-loop (SL) III, respectively) with several small subdomain stem-loops, the SL IV and a double pseudoknot, and the upstream SL II. In contrast, the SL I located at the very 5-end of the genome is not involved in translation regulation but in replication control. The sequence between the SLs I and II, which contains the two miR-122 binding sites located directly upstream of the IRES, is formally not counted as belonging to the IRES, but this sequence can influence translation activity upon binding of miR-122 [66]. The IRES binds the small ribosomal 40S subunit with high affinity of about 2 nM [79] (underlayed in yellow in Figure 1). At the 3-end of the HCV genome, the 3 UTR [80], as well as the cis-acting replication element (CRE) with its stem-loops 5BSL3.2 and 5BSL3.3 in the 3 -terminal region of the NS5B coding region [81,82] are also involved in translation regulation ( Figure 1). The function of the 3 UTR and the possible function of the CRE in translation regulation are related to two types of long-range interactions (LRIs), first RNA-RNA LRIs, and second those long-range interactions that are mediated by RNA-binding proteins or by the ribosomal 40S subunit. This 3 -to 5 -end communication is reminiscent of that occurring with cellular mRNAs. There, the polyA tract at the 3 -end stimulates translation at the 5 -end. In principle, this interaction in cis can only indicate that the RNA to be translated is not degraded but intact, and only then it makes sense to translate this RNA efficiently. Reports about a contribution of the HCV 3 UTR to the regulation of translation at the IRES have been quite different in the past. Some previous reports have shown a positive influence of the 3 UTR on IRES-directed translation [80,[83][84][85][86][87]. One report showed a negative influence of the 3 UTR [88], and other studies reported no influence [89][90][91][92]. These discrepancies could have been caused by the use of different reporter systems and artificial extensions at the reporter RNA´s 3 -end (discussed in [86]). Currently, it is rather clear that the 3 UTR actually stimulates translation, and it is recognized that suitable reporter assays should use RNA transfection (not DNA) and a precise authentic 3 -end of the HCV 3 UTR [86]. The CRE (see Figure 1, also called SL5B3.2) was first described to be involved in control of overall HCV genome replication [93] and binds physically to the NS5B replicase [94][95][96]. 
Moreover, the CRE 5BSL3.2 plus the flanking downstream 5BSL3.3 [82], as well as the poly(U/C) tract of the 3′UTR [82,97], were shown to bind to the small 40S ribosomal subunit. Thereby, the CRE binds with a KD of about 9 nM to the 40S subunit, while the poly(U/C) tract of the 3′UTR binds even more strongly (KD = 1 nM); together, the CRE and 3′UTR bind 40S subunits with an affinity comparable to that of the IRES [82,97].

Long-range RNA-RNA interactions are very important in HCV replication. Several such putative interactions have been described [24,45,63,82,[98][99][100][101][102][103][104][105][106][107][108]. By far the most important interactions (see Figure 1) that have been demonstrated to be functionally relevant by several studies are the following. First, the interaction between the apical loop of the CRE 5BSL3.2 and the apical loop of SL 2 in the 3′UTR (also named the "kissing loop" interaction) [98,99] is important for HCV replication. Second, the interaction of the bulge of the CRE 5BSL3.2 (GCCCG) with a sequence about 200 nts upstream (CGGGC) ("9170" in Figure 1 and in [23]) was also shown to be important for replication [100,103], and third, the internal CRE 5BSL3.2 bulge can alternatively interact with the apical loop of the SL IIId (UGGGU) in the IRES [101].

Regarding translation regulation, conflicting results have been reported with respect to a possible role of the CRE. One study showed that deletion of the CRE 5BSL3.2 conferred an increase of translation efficiency from HCV-luciferase translation reporter constructs 18 h after transfection (but not after 6 h) [102], suggesting that the CRE 5BSL3.2 is involved in inhibiting HCV translation and, together with the flanking upstream 5BSL3.1 and the downstream 5BSL3.3, inhibits translation [102]. This implies that this LRI could have something to do with translation regulation or with a possible switch between translation and replication. In contrast, another study showed that the inhibition of the CRE by hybridization with locked nucleic acid (LNA) oligonucleotides impairs translation of luciferase reporter RNAs or replication-incompetent replicons 6 h after transfection [81], leading to the conclusion that the CRE rather stimulates translation. This discrepancy could be due to differences in the reporter assay systems and yet needs to be further elucidated.

In addition, a hybridization between sequences in the core-coding region and the region between 5′UTR SLs I and II (red in Figure 1) has been described, which has a negative effect on translation efficiency. This interaction is important in terms of IRES structure, function, and miR-122 action and is discussed in the next section.
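These long-range pairings are simple antiparallel complementarities between short sequence stretches, so their base-pairing potential can be checked mechanically. The tiny Python helper below does this for the two motifs quoted above (GCCCG/CGGGC and the 5BSL3.2 bulge versus the SL IIId apical loop); it is only a sanity-check sketch on Watson-Crick and G-U pairing, not a thermodynamic structure prediction.

```python
# Minimal sketch: check antiparallel complementarity between two short RNA motifs,
# e.g., the CRE 5BSL3.2 bulge (GCCCG) and its partner sequences quoted in the text.
# This only classifies opposing bases; it is not an energy-based prediction.

WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
WOBBLE = {("G", "U"), ("U", "G")}

def pairing_profile(seq5to3: str, partner5to3: str, allow_wobble: bool = True):
    """Align seq (5'->3') against partner read 3'->5' and classify each opposing pair."""
    partner3to5 = partner5to3[::-1]
    profile = []
    for a, b in zip(seq5to3, partner3to5):
        if (a, b) in WATSON_CRICK:
            profile.append("WC")
        elif allow_wobble and (a, b) in WOBBLE:
            profile.append("GU")
        else:
            profile.append("--")
    return profile

if __name__ == "__main__":
    # Motifs quoted in the text (see Figure 1 and references therein):
    print(pairing_profile("GCCCG", "CGGGC"))   # 5BSL3.2 bulge vs. upstream "9170" sequence
    print(pairing_profile("GCCCG", "UGGGU"))   # 5BSL3.2 bulge vs. SL IIId apical loop
```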
Structure of the HCV 5′UTR and the Internal Ribosome Entry Site

The HCV 5′UTR is about 340 nts long and is predicted to fold into characteristic RNA secondary structures (Figure 2). These sequences and the predicted secondary structures are highly conserved among HCV genotypes and subtypes [63].

Figure 2. Secondary structure of the HCV 5′UTR. (A) The canonical 5′UTR structure [63,[109][110][111][112][113][114]. The sequence shown is from genotype 2a (J6 isolate), with length variations among HCV isolates indicated by short horizontal dashes at nucleotide positions. Nucleotide numbering is according to the MAFFT alignment in the supplement of [63] (http://www.rna.uni-jena.de/supplements/hcv/). For comparison, the first nucleotide of the core coding sequence (AUG, in bold) in the SL IV is nucleotide No. 342 in genotype 1b (Con1) and No. 341 in genotype 2a (J6 and JFH1 isolates). IRES SL domains are indicated by roman numerals. microRNA-122 (miR-122, red) binding is indicated, and miR-122 target sequences are boxed. Pseudoknot (PK) base pairing and the interaction between SL II and SL IV are indicated; (B) Secondary structure model of the 5′UTR with alternative folding of SL IIalt. The structure is largely according to [72,77,115], with minor modifications according to our RNAalifold outputs using several genotypes (not shown).

The small SL I forms very close to the very 5′-end of the HCV 5′UTR, leaving only three or four unpaired nts at the very 5′-end. In the canonical representation of the 5′UTR secondary structure (Figure 2A) [42,63,109,111], this SL I is followed by a single-stranded stretch of about 42 nts which binds two molecules of miR-122 [62], with one of the miR-122 molecules also hybridizing to the very 5′ nts upstream of SL I. This entire sequence is often named domain I of the 5′UTR. Downstream of that, the canonical representation depicts the SL II (or domain II), followed by a large branched domain that is called either SL III or domain III, which contains the subdomains or SLs IIIa, b, c, d, e, and f. This domain III is then followed by a single-stranded stretch that can form a pseudoknot 1 (PK1) with the SL IIIf [116]. In addition, a U residue (position 300 in Figure 2) in the SL IIIe can form one base pair with the upstream A (position 291), creating a second pseudoknot (PK2). This region with the two pseudoknots forms a compact and unique tertiary structure [117]. The downstream SL IV contains the AUG start codon [118].
These IRES RNA secondary structures have been predicted in silico and have been experimentally validated by chemical structure and nuclease protection mapping [52,63,77,113]. Moreover, the ribosome-bound HCV IRES structure has been validated by cryo electron microscopy (EM) [55,[119][120][121]]. An alternative predicted 5 UTR structure shows a refolded, alternative SL II (SL II alt ) ( Figure 2B), which is as consistent with the reported experimental structure mapping results as is the canonical structure [72,77,113,115]. In this alternative fold of the 5 UTR sequences between SL I and SL III ( Figure 2B), the two miR-122 binding sites are largely hidden within double-stranded RNA structures, and therefore are less accessible as compared with the canonical structure ( Figure 2A). In the presence of miR-122, the alternative SL II alt reforms to adopt the canonical structure, allowing efficient translation and stabilization of the genome [72,115,122]. Interestingly, also mutations that render HCV replication miR-122 independent favor the canonical structure [72,122]. Despite the seemingly clearly defined 5 UTR structure, we can distinguish some central structure-driving regions from others, which are more flexible and can change structure, also according to functional requirements. We have performed our own in silico predictions using LocARNA [123] and RNAalifold [124] with a representative number of different isolates from different genotypes (selected from [63]) (data not shown). On the one hand, some IRES regions appear to have a quite strong preference to robustly fold into a definite structure, likely driven by the intrinsic properties of the conserved primary sequence. Formation of these RNA secondary structures takes place regardless of subtle variations in their sequence that lead to covariations in their secondary structure, and regardless of the availability of varying flanking RNA regions that could interfere with the secondary structures by providing options for alternative interactions. On the other hand, other sequences appear to be able to dynamically assume different secondary structures, depending on the extent of flanking RNA that is available for interaction, depending on the binding of miR-122, and depending on the binding of cellular proteins as well as of the ribosomal 40S subunit. In the first group of sequence elements, i.e., those which appear to robustly form a defined and invariant secondary structure, we would list essentially three regions (drawn in bold in the 5 UTR structures in Figure 3). In brief, these are the SLI, the upper part of domain III, and the base of the domain III with the double pseudoknot. In part, the structure of these regions can also be stabilized by binding to cellular factors or to the ribosomal 40S subunit. The first region is the G/C-rich stem-loop I near the very 5 -end of the HCV 5 UTR. The SL I region has a strong preference to form under almost any conditions such as varying length of flanking sequences or subtle sequence variations among genotypes and subtypes, with only very few exceptions. The second sequence region that appears to always robustly form the same structure in in silico predictions is the upper part of the domain III, including the four-way junction with the stem-loops IIIa, IIIb, and IIIc [125]. This structure binds eIF3 [79] and contains conserved elements in its central bulge (around position 180 and 220 in Figure 2A) as well as in the stem above which are required for eIF3 binding [126][127][128]. 
The third steadily forming region includes the base of the domain III with SL IIIe, IIIf, and the double pseudoknots [116,117], which can drive binding to and be stabilized by the ribosomal 40S subunit [129]. These three robustly forming 5′UTR regions can also influence the biased folding of flanking sequences, and small-angle X-ray scattering of the HCV IRES in solution as well as in silico structure flexibility simulations [114] are consistent with an overall IRES structure as predicted. In addition, the first three RNA secondary structures in the core coding region also appear to have a conserved tendency to robustly fold in silico to form the SLs V, VI, and the following SL 588. The formation of these distinct structures is required to leave only the SL IV sequence to form either a stem-loop or to unfold and bind in the ribosome entry channel, without being disturbed by intruding flanking sequences (Figure 3E).

Figure 3. Conformational dynamics of HCV 5′UTR structure. The conserved canonical 5′UTR structure [63,[109][110][111][112][113][114] is shown in the middle (structure A). 5′UTR sequences which have a strong intrinsic tendency to fold into a distinct secondary and tertiary structure are depicted in bold. The constitutive double pseudoknot (PK) interactions involving SL IIIe (PK2) and IIIf (PK1) and the possible interaction between SL II and SL IV are indicated. The 5′UTR structure B [72,77,115] has the region between SLs I and III refolded and forms an alternative version of SL II (SL IIalt). This SL IIalt can form in the absence of miR-122, then hiding both miR-122 binding sites. Binding of miR-122 promotes refolding of structure B to structure A [72,115,122]. The 5′UTR structure C can form by fold-back of nucleotides 428-442 in the base of SL VI in the core protein coding region to hybridize with sequences downstream of SL I. In this state, the translation activity of the IRES is decreased [130][131][132][133][134]. This hybridization also interferes with binding of miR-122 to the 5′UTR. In turn, binding of miR-122 induces refolding of structure C to structure A [135]. Structure D is a yet completely hypothetical structure that was predicted to be the energetically slightly favored conserved structure [63]. In this structure D, the SL IIId of the IRES is refolded to form SL IIId*. However, a possible biological relevance of this hypothetical structure is unclear, in particular since several interactions of the canonical SL IIId have been described (see main text). Structure E is formed upon binding to the ribosomal 40S subunit. Then, SL IV sequences containing the AUG start codon are unfolded and positioned in the mRNA entry channel of the 40S subunit, aided by interaction with SL II.

In contrast, other sequences are more flexible. This is particularly true for the region between SLs I and III. In the canonical 5′UTR structure (Figure 3A), the region between SLs I and II with the two miR-122 binding sites is predicted to be single stranded in RNAalifold structure predictions, and the canonical SL II structure forms. This structure is predicted to occur by default also in the absence of miR-122 (i.e., in the absence of constraints to keep the miR-122 target sites as single-stranded sites in the in silico foldings). Thus, the region between SL I and SL II is available to easily bind miR-122. In turn, miR-122 binding stabilizes this region in single-stranded form (Figure 3A), then supporting the stable formation of SL II and its important interactions with the apical loop of SL IV and with the ribosomal 40S subunit [72,115]. These structure changes of SL II induced (or stabilized, respectively) by miR-122 can account for the stimulation of translation by miR-122 [66]. However, an only very slight variation of folding parameters (e.g., by not running RNAalifold but instead running LocARNA with default parameters) changes the prediction dramatically, with the sequences downstream of SL I largely rearranging and partially covering the miR-122 target sites. Then, the alternative SL IIalt structure forms (Figures 2B and 3B), and the miR-122 target sites are largely hidden. This indicates that this region is very flexible in structure and can easily refold in response to the absence or presence of miR-122, thereby changing the structure and by that the function of the SL II sequence region which is involved in regulating translation.
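This refolding behavior can also be explored computationally. The sketch below (assuming the ViennaRNA package with its Python bindings, imported as RNA, is installed) compares an unconstrained minimum free energy (MFE) prediction of a 5′-terminal fragment with a prediction in which an assumed miR-122 target region is forced to remain single-stranded, loosely mimicking occupancy by miR-122; both the fragment and the constrained positions are placeholders rather than an authentic isolate sequence or validated coordinates.

```python
# Minimal sketch: compare an unconstrained MFE fold with a fold in which the assumed
# miR-122 site region is constrained to stay single-stranded (as if occupied by miR-122).
# Requires the ViennaRNA Python bindings ("import RNA"); sequence and constrained
# positions below are illustrative placeholders, not validated HCV coordinates.
import RNA

fragment = ("GGCCAGCCCCCUGAUGGGGGCGACACUCCACCAUGAAUCACUCCCCUGUGAGGAACU"
            "ACUGUCUUCACGCAGAAAGCGUCUAGCCAUGGCGUUAGUAUGAGUGUCGUGCAGCCUCCAGGA")

# Unconstrained MFE prediction
ss_free, mfe_free = RNA.fold(fragment)

# Constrained prediction: force an assumed target region (nts 20-45, 0-based; placeholder
# positions) to remain unpaired ('x' in ViennaRNA hard-constraint syntax)
constraint = "".join("x" if 20 <= i < 45 else "." for i in range(len(fragment)))
fc = RNA.fold_compound(fragment)
fc.hc_add_from_db(constraint)
ss_constrained, mfe_constrained = fc.mfe()

print(f"unconstrained: {mfe_free:6.2f} kcal/mol\n{ss_free}")
print(f"constrained:   {mfe_constrained:6.2f} kcal/mol\n{ss_constrained}")
```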
In addition, a sequence in the core coding region (nts 428 to 442) can fold back to the miR-122 binding region in a long-range interaction and cover the miR-122 target sites, forming a more compact structure ( Figure 3C, and also see the red LRI in Figure 1) and reducing translation efficiency [130][131][132][133]. This inhibitory hybridization can be relieved by miR-122 binding [135,136], while the physiological relevance for controlling HCV translation in cells appears not to be dramatic but can more easily be detected in an in vitro translation system [134]. A yet completely hypothetical structure is the structure shown in Figure 3D. In this structure, the lower part of the domain III is refolded largely to loop the SL IIId and form the alternative SL IIId* [63]. This prediction was yielded as the most stable consensus structure among the 106 HCV isolates from all available genotypes and subtypes [63]. However, its minimum fold energy (MFE) is only marginally lower than that of the canonical form. Since all RNA structure probing experiments and cryo EM structures, as mentioned above, are not consistent with 5 UTR structure D but rather with structure A, we can only speculate if this conserved structure D could have possible functions. The predicted secondary structure of the SL IV with the AUG start codon contains covariations among isolates, which implies that this small secondary structure must be of some importance. However, the position of the start codon within the predicted SL IV secondary obviously suggests that there must be changes in RNA secondary structure when the start codon is positioned in the ribosome. When the IRES binds the 40S subunit, the SL IV unfolds and is placed in the entry channel of the 40S subunit ( Figure 3E) [117,137,138], preparing the IRES-40S complex for translation initiation. With respect to the previously described interchangeable 5 UTR/IRES structures and their possible biological relevance (Figure 3), as well as with regard to the required detachment of the IRES RNA from the ribosome upon completion of the translation initiation cycle, we need to take into account that under natural conditions in the cell the secondary and tertiary structure of the IRES RNA could be much more flexible than suggested by the seemingly rigid, fixed canonical structure, as shown in Figures 2A and 3 [144]. In contrast, the concentration of free Mg 2+ in the cytosol in different cells is 0.86 mM on average over many studies, and in rat liver cells, the free intracellular Mg 2+ concentration is approximately 0.7 mM (reviewed in [145]). The high Mg 2+ concentrations used in the studies mentioned above have demonstrated the IRES and ribosome structures previously reported (see above). However, at such high Mg 2+ concentrations, IRES binding to the 40S subunit is strongly favored [79], leaving no options for an equilibrium of different IRES structures in solution. Moreover, in the presence of 60S subunits, both ribosomal subunits would be largely associated at 2.5 mM Mg 2+ [146], and by that they would even not allow binding of the HCV IRES to individual 40S subunits, followed by the induction of 60S subunit association by the HCV RNA to be translated. In contrast, at the physiological concentration of 0.7 mM free Mg 2+ , only about half of the IRES molecules are actually completely folded [79], while the other half assume more flexible structure intermediates. 
In addition, another study showed that the IRES has a rather extended conformation at low Mg 2+ concentrations, whereas the pseudoknots form only in the presence of Mg 2+ [136]. Concurrently, HCV translation is optimal at Mg 2+ concentrations lower than 1 mM [147]. Thus, we must be aware that the functional HCV IRES structure is much more flexible in terms of opening and closing secondary structures than most studies on IRES structure suggest. The above considerations particularly apply to those IRES regions that must dynamically interact with the 40S subunit (drawn thin in the IRES structures in Figure 3). Although the HCV IRES is completely unfolded in the absence of Mg 2+ , strongly folding IRES core structures such as the upper portion of SL III (shown in bold in Figure 3) form properly into the canonically shown form at Mg 2+ concentration of only 0.25 mM and above [110,136] (i.e., are formed under intracellular conditions). In contrast, the G residue 135 which is the second nucleotide in the left part of the base of domain III (a region that is routinely drawn as double-stranded in the IRES secondary structure predictions [63]) is largely protected (i.e., double-stranded) at Mg 2+ concentrations of 2.5 mM, whereas it appears only 50% protected at the physiological Mg 2+ concentration of 0.7 mM [110]. These results imply that the HCV IRES structure can be (and perhaps must be) much more flexible during productive complete translation initiation cycles than many studies suggest. This gives room to the speculation that the previously mentioned highly conserved predicted structure D ( Figure 3) could have some biological relevance yet to be shown. One of the most intriguing questions for future research is how the 40S subunit, which initially binds very strongly to the IRES manages to routinely detach from the IRES in order to commence the transition from translation initiation to elongation and synthesis of the polyprotein. This likely needs to be investigated at magnesium concentrations that are lower than those used in previous studies (best 0.7 mM). Contacts of the HCV IRES with the Small Ribosomal 40S Subunit and with eIF3 When the protein-free HCV IRES is allowed to bind in vitro to isolated small ribosomal 40S subunits, the IRES makes several close contacts to the 40S subunit. We must assume that under conditions of intracellular magnesium concentrations (as discussed above) there should be more flexibility of the IRES [110]. However, we are not aware of studies that analyzed the sequential arrival of different IRES regions at the 40S subunit with high time resolution (in the sub-second scale), and thereby demonstrated structural "induced fit" changes of the IRES structure during binding. The only study that methodically comes close [148] shows that there is a long lag phase until the IRES binds the 40S subunit. Nevertheless, from the IRES structure as it appears when bound to the isolated 40S subunit (Figure 4) [55,119,121,138,149], we can assume that there are three distinct regions of the IRES that are different in binding, as well as in function. The first region is the "core" of the IRES, including the two pseudoknots PK1 and PK2 together with the base of domain III and SLs IIId, IIIe, and IIIf, and considered in an extended version also including the four-way junction SL IIIabc which also binds closely to the 40S subunit (but not including the actual SL IIIb). 
This "core" IRES region essentially serves to "anchor" the body of the IRES with high affinity in a fixed position (irrespective of possible induced fit changes during binding) on the 40S subunit and provides a platform for the flexible connection of the other two functional IRES modules, SL II and SL IIIb.

Figure 4. (A) The small ribosomal 40S subunit (yellow) with bound eIF3 (red). eIF3 makes multiple contacts to the 40S subunit using several subunits, including eIF3a and eIF3c; (B) The IRES of classical swine fever virus (CSFV) (pink) without IRES domain II, binding to the 40S subunit (yellow) and to eIF3 (red). The IRES has displaced eIF3 completely from its binding to the 40S subunit (compare panel A) but keeps it connected to the 40S subunit only indirectly by contacts between IRES SL IIIabc and eIF3. Figures A and B were reprinted from [150] (Figure 2F) and slightly modified. Reprinted with permission from Elsevier (licence no. 4761840166773); (C) The HCV IRES (pink) binding to the 40S subunit (yellow) in the complete 80S ribosome (60S subunit in blue). The IRES SL IIIdef/PK is in close contact with the 40S subunit, the SLs II and IV are positioned in the mRNA entry channel, and the SL IIIb is pointing to the solvent side for binding eIF3. The IRES is not touching the 60S subunit [121,138]. Figure C was modified from the left panel of Figure 1C in [121]. Reproduced with permission from EMBO; (D) Schematic illustration of the HCV IRES binding to the 40S subunit (yellow) and to eIF3 (red) when eIF3 is largely displaced from the 40S subunit by the IRES, similar as in (B). The orientation of the 40S subunit is shown essentially top-down as compared with (A-C), viewed approximately from the solvent side.

In contrast, the second functional IRES region, the SL II region in its canonical form (see Figures 2 and 3A,E), appears to be rather flexible and fulfills important tasks in reorganizing the IRES structure and the ribosome in order to unwind the SL IV, place the contained AUG start codon in the ribosomal mRNA entry channel, and manipulate the 40S subunit to undergo initiation (see below). The third region is the SL IIIb. It appears not to bind to the 40S subunit at all [55,119,121,138,149], but it is connected to the rest of the IRES by the very flexible SL IIIabc four-way junction [151], which is anchored on ribosomal protein eS27 [138]. This SL IIIb is used by the IRES solely to bind eIF3 [126,127] after the IRES has displaced eIF3 from its binding to the 40S subunit [143]. The reasons for this are not yet fully understood. However, it appears likely that the IRES needs to keep eIF3 on hold in close vicinity in cis, for using its contacts to the HCV 3′UTR [97], to avoid premature subunit joining [152,153], and to use it later in subsequent initiation steps to recruit eIF2, eIF5B, and the 60S subunit to the initiation complex [141,154,155]. In addition, the AUG start codon of the HCV IRES must be properly positioned in the 40S mRNA entry channel in order to fully accommodate its contacts on the 40S subunit for proper initiation [137]. However, it has been reported that eIF3 is not absolutely essential for subsequent stages in initiation, since 48S complexes formed in the absence of eIF3 on both wild-type and SL IIIb mutant HCV IRES elements readily underwent subunit joining, forming elongation-competent 80S ribosomes [143]. Nevertheless, the HCV IRES can still bind eIF3 also in complete 80S ribosomes [156], likely in the same way via the IRES SL IIIb.

The IRES core domain with the double pseudoknot, SL IIId, IIIe, and IIIf tightly contacts the 40S subunit on the solvent side [117,129] by binding to the ribosomal 18S rRNA, as well as to several ribosomal proteins. The HCV IRES SL IIId apical loop GGG sequence contacts a CCC sequence in the 18S rRNA helix 26 ES (expansion segment) 7 apical loop (close to the 40S subunit mRNA exit tunnel) [119,138,143,150,157], while also SL IIIe makes a contact to this helix expansion segment [138,150]. Several specific ribosomal proteins are also known to be contacted by the IRES core domain and SL III, listed in the following (for the new nomenclature of ribosomal proteins see [158], where the prefix "u" stands for ribosomal proteins universal to bacterial, archaeal, and eukaryotic ribosomes, and "e" stands for ribosomal proteins unique to eukaryotes). The ribosomal protein eS27 contacts the upper IRES four-way junction with SLs IIIa and IIIc and the SL IIId [138,143,144,159]. Proteins eS1, uS7, uS9, and uS11 contact the lower part of the SL III with SLs IIId and IIIe [138,159,160], and proteins eS10, eS26, and eS28 (near the ribosomal exit channel) contact the IRES more downstream at the double pseudoknot and at the beginning of the core coding region [138,143,159,161]. In contrast, the IRES SL II contacts ribosomal proteins uS7, uS9, uS11, and uS25 [119,138,144,[160][161][162].
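For quick reference, the contacts listed above can be collected in a small lookup table; the sketch below is purely a restatement of the interactions named in the text (using the ribosomal protein nomenclature of [158]) and makes no claims beyond them.

```python
# Summary lookup table of reported HCV IRES contacts on the 40S subunit,
# restating the contacts listed in the text above (nomenclature as in [158]).
IRES_40S_CONTACTS = {
    "SL IIId apical loop (GGG)": ["18S rRNA helix 26 ES7 apical loop (CCC)"],
    "SL IIIe": ["18S rRNA helix 26 ES7"],
    "four-way junction SLs IIIa/IIIc + SL IIId": ["eS27"],
    "lower SL III (SLs IIId/IIIe)": ["eS1", "uS7", "uS9", "uS11"],
    "double pseudoknot / start of core coding region": ["eS10", "eS26", "eS28"],
    "SL II": ["uS7", "uS9", "uS11", "uS25"],
    "SL IIIb": [],  # no direct 40S contact reported; binds eIF3 instead
}

def partners(region: str):
    """Return the reported contact partners for a given IRES region."""
    return IRES_40S_CONTACTS.get(region, [])

if __name__ == "__main__":
    for region, contacts in IRES_40S_CONTACTS.items():
        label = ", ".join(contacts) if contacts else "eIF3 (no direct 40S contact)"
        print(f"{region:50s} -> {label}")
```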
The IRES SL II functions in preparing the ribosomal 40S subunit for initiation. Whereas the contacts of IRES domain III and the pseudoknots to the 40S subunit appear to function primarily in tight binding, the domain II of the IRES exerts important functions in manipulating the ribosome and in facilitating the positioning of the AUG start codon region in the 40S entry channel and subsequent 60S subunit joining [121,141,150]. Therefore, the SL II needs to be flexible, a feature that is mainly conferred by included bulges that allow flexible changes [112,163]. The apical part of SL II is essential for ribosomal subunit joining [154]. This part contacts the 40S subunit in the region of the head and the edge of the platform, near the mRNA entry channel [55,138,149], and causes a slight rotation of the 40S head and changes in the structure of the platform [55]. Thereby, SL II occupies binding sites similar to those of the E-site tRNA and eIF2 [138]. SL II helps to unfold the SL IV with the AUG start codon and to position it in the mRNA entry channel in the 40S-bound conformation [137,149], perhaps also by base-pairing with the start codon region in a tRNA-like manner [121,138] (see Figures 2 and 3). Finally, the SL II stimulates eIF5-mediated hydrolysis of eIF2-bound GTP and joining of a 60S subunit [141,164,165]. Consistently, a deletion of three nucleotides (GCC) from the apical loop of SL II completely abolishes SL II function on the 40S subunit and results in an accumulation of 80S ribosomes after 15 min [162], suggesting that assembled 80S ribosomes are arrested on such a defective IRES and are unable to undergo the transition from initiation to elongation, which normally takes place after about 6 min [66].

Interestingly, the HCV IRES is able to bind to translating ribosomes (which translate regular cap-dependent cellular mRNAs). Thereby, the IRES binds with its "core" described above (SL IIIacdef plus PK1 and PK2), but without SL IIIb and SL II, to the solvent side of the 40S subunit in the translating 80S ribosome. In this way, the IRES "hitchhikes" with an actively translating ribosome until regular termination of the cap-dependent mRNA occurs. After termination, the HCV IRES is already present on the 40S subunit in cis and efficiently usurps the post-termination 40S subunit [166].

Steps Involved in HCV Translation Initiation

When we consider the order of binding events taking place during translation initiation at the HCV IRES RNA (Figure 5), it is important to note that the affinity of the HCV IRES to the isolated small ribosomal 40S subunit is much higher than its affinity to isolated eIF3. The HCV IRES can bind to the small ribosomal 40S subunit independently of any initiation factors [167]. The dissociation constant (KD) of IRES binding to isolated 40S subunits is about 2 nM, whereas the KD of IRES binding to isolated eIF3 is only about 35 nM [79,141,154]. The SL II contributes only very little to the overall IRES-40S affinity [79,141]. The high affinity of the IRES-40S interaction is caused by multiple and very close contacts of several IRES regions with the 40S subunit [55,119,121,138,149], whereas binding to eIF3 includes only the apical region of the SL III, in particular the SL IIIb [79,126,127,141,143].
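To put these dissociation constants into perspective, the short sketch below evaluates a simple single-site binding isotherm, fraction bound = [L] / ([L] + KD), for the two KD values quoted above over a range of assumed free 40S or eIF3 concentrations; the concentrations are arbitrary illustrative values, not measured cellular concentrations.

```python
# Back-of-envelope sketch: fraction of IRES bound as a function of free ligand
# concentration for the two dissociation constants quoted in the text
# (KD ~2 nM for 40S subunits, ~35 nM for eIF3). Concentrations below are
# arbitrary illustrative values, not measured intracellular concentrations.

KD_40S_NM = 2.0    # IRES-40S, as quoted in the text
KD_EIF3_NM = 35.0  # IRES-eIF3, as quoted in the text

def fraction_bound(free_ligand_nm: float, kd_nm: float) -> float:
    """Simple single-site binding isotherm: f = [L] / ([L] + KD)."""
    return free_ligand_nm / (free_ligand_nm + kd_nm)

if __name__ == "__main__":
    for conc in (1, 10, 100):  # nM, illustrative
        f40s = fraction_bound(conc, KD_40S_NM)
        feif3 = fraction_bound(conc, KD_EIF3_NM)
        print(f"[L] = {conc:4d} nM: bound to 40S {f40s:5.2f}, bound to eIF3 {feif3:5.2f}")
```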
Figure 5. (A) In vivo, the IRES most likely binds to 40S-eIF3 post-termination complexes (top right), while in vitro studies also suggest that binding of 40S and eIF3 can occur subsequently (top left). Then, the ternary 40S-IRES-eIF3 complex acquires eIF2 charged with tRNAiMet and GTP, as well as eIF1A and eIF5. After locating the HCV AUG start codon, eIF5 catalyzes release of decharged eIF2-GDP from the ribosome. eIF5B then causes subunit joining, and eIF5B-GDP, eIF1A, and eIF3 leave the complex, which is then ready for the first translation elongation step. "Hitchhiking" of the IRES on translating 80S ribosomes is not shown (see main text); (B and C) Alternative translation initiation pathways for the HCV IRES under stress conditions leading to eIF2α phosphorylation, i.e., under limited eIF2 availability. (B) Binary IRES-40S complexes, which can or cannot also bind eIF3, bind either eIF2A (left), eIF5B (right), or both eIF2A and eIF5B in combination (middle). In the cases when eIF2A is present, it delivers the tRNAiMet. In the absence of eIF2A, eIF5B delivers the initiator tRNA. (C) The function of tRNAiMet delivery can also be taken over by either eIF2D or by two proteins which together are structured similarly to eIF2D, namely MCT-1 and DENR [168].

These affinities could suggest that the HCV IRES first binds to naked 40S subunits, and only after that, the IRES can additionally acquire eIF3 (as shown in Figure 5A, upper left arrows). This idea emerges from several studies that analyzed the binding of the HCV IRES to purified 40S subunits [55,79,119,121,141,149,154,169]. However, eIF3 is known to bind to 40S subunits obtained from post-termination complexes, and then remains routinely bound to the 40S subunits in order to facilitate the next initiation round [170][171][172]. Thus, we need to consider that most 40S subunits are available as 40S-eIF3 complexes. Moreover, eIF3 wraps around nearly the entire 40S subunit, including the 60S subunit interface [173], and largely covers exactly those regions on the surface of the 40S subunit that are supposed to also bind the HCV IRES, or the very similar IRES of CSFV (classical swine fever virus) [143]. In this respect, it should be noted that in [120], the binding positions of HCV IRES and eIF3 were just artificially overlaid in silico and shown in the same figure, leading to the possible misunderstanding that HCV IRES and eIF3 could simultaneously bind to essentially the same position on the 40S subunit.
However, the CSFV IRES (which functionally acts in the same way as the HCV IRES) appears to displace eIF3 from its binding position on the 40S subunit. Thereby, the IRES effectively usurps ribosomal contacts of eIF3 [174]. Then, large parts of the IRES bind closely to the 40S subunit, and eIF3 is only indirectly kept bound in the complex solely by contacting the IRES RNA, but not any more by contacting the 40S subunit [143] (compare Figure 4B with A). Surprisingly, binding of the HCV IRES to the preformed 40S-eIF3 complex is essentially not impaired by the presence of eIF3 on the 40S ribosomes, but the presence of eIF3 appears to facilitate IRES binding to the 40S subunit [79]. Thus, although the position of eIF3 on the 40S subunit could be considered to sterically hinder IRES binding, eIF3 somehow facilitates IRES binding instead of competing with it. Taken together, we can assume that under the in vivo conditions in the cell, the natural substrate for binding of the HCV IRES is the 40S-eIF3 complex ( Figure 5A, upper right arrow). eIF2 is the standard factor that routinely delivers the charged initiator tRNA (Met-tRNA i Met ) to the initiation complex that binds to AUG start codons of most cellular mRNAs [175][176][177][178]. The addition of eIF2 largely facilitates formation of ribosomal initiation complexes with the CSFV IRES [167]. The charged initiator Met-tRNA i Met is also required for efficient complex formation with the HCV IRES [167]. Binding of eIF2-Met-tRNA i Met to preinitiation complexes is facilitated by the eIF3 subunit eIF3a [155]. Efficient 48S initiation complex formation also requires eIF1A, whereas eIF1 interferes with 48S initiation complex formation [165,179]. Domain II of HCV-like IRESs stimulates eIF5-mediated hydrolysis of eIF2-bound GTP and joining of the 60S subunit [141,164,165]. eIF5 serves to remove discharged eIF2 from the initiation complex ( Figure 5A, lower part). Subsequent formation of 80S complexes with 60S subunit joining additionally requires eIF5B [165], which is recruited to the IRES-40S complex by the eIF3 subunit eIF3c [155]. From a kinetic point of view, the association of the HCV IRES with ribosomes is a rather slow process. For comparison, with the highly efficiently translated cap-dependent β-globin mRNA, formation of complete 80S ribosomes was detected to be maximal after 15 s (the first time point that was analyzed in that study), and even the second, third, and fourth wave of 80S ribosomes had already been loaded to the mRNA after 15 s [180], giving rise to corresponding polysomes at that early time point. In contrast, the association of the HCV IRES with the 40S subunit is rather slow. Low amounts of the resulting 48S initiation complexes could be detected after 1 min [66,154], but the formation of maximal amounts of the first wave of 48S initiation complexes required 3 to 6 min [66,154]. Thereby, the association of some molecules of the HCV IRES RNA with some 40S subunits can occur within seconds, but saturating binding of most molecules of IRES RNA and 40S subunits in the populations takes about 40 to 80 s [148]. The presence of miR-122 can greatly accelerate and enhance this process [66]. Formation of the first wave of complete 80S ribosomes requires 4 to 6 min [66,154]. Between 6 and 10 min, the first wave of 80S complexes leaves the initiation site, and the second wave of 48S complexes forms [66]. 
These kinetic differences, despite high affinity binding to the 40S subunit, could contribute to the relatively low translation efficiency of the HCV IRES as compared with cellular mRNAs [181]. Use of Alternative Initiation Factors under Stress Conditions eIF2 is one of the main targets of general translation regulation in the cell. Such regulation takes place at the initiation step of translation and occurs during different stress conditions such as starvation, ER stress, or after activation of innate immune responses during a viral infection. Under such conditions, the α-subunit of eIF2 is phosphorylated by eIF2 kinases, and eIF2 tightly associates with eIF2B, resulting in inactivation of eIF2 [56]. As a consequence, translation of most cap-dependent cellular mRNAs is downregulated [49][50][51]175]. However, under such conditions the HCV RNA can still be translated with sufficient efficiency to allow viral protein synthesis [182]. When eIF2-GTP-Met-tRNA i Met ternary complex availability is reduced, translation initiation at the HCV IRES switches to eIF2-independent modes of translation initiation ( Figure 5B,C), and in some cases also independent of the initiator tRNA i Met [178,183]. Then, translation initiation at the HCV IRES is mediated by alternative initiation factors. Several protein factors have been proposed for this role including eIF2A [184], eIF2D [183], eIF5B [185], a combination of eIF2A and eIF5B [186], and the complex of the proteins MCT-1 and DENR [187]. The 139 kDa eIF5B (the eukaryotic homolog of bacterial IF2) [188,189] can promote formation of 80S complexes with the HCV IRES initiator-tRNA binding to the ribosomal P site in the presence of only additional eIF3 [51,149,185] (Figure 5B). Thereby, eIF5B substitutes for eIF2 and eIF5, and 80S complexes formed without eIF2 are competent for translational elongation [185]. In addition, for the closely related CSFV IRES, translation initiation can occur using the same mechanism [165]. eIF2A (a protein of about 65 kDa) was described in 1975 and was characterized as a factor that is capable of GTP-independent binding of Met-tRNA i Met to the 40S subunit of eukaryotic ribosome [190]. Meanwhile, there are conflicting reports about the possible role of eIF2A. Cloned eIF2A (NCBI nucleotide database entry NM_032025) has been described to have Met-tRNA binding properties and is also able to deliver Met-tRNA i Met in HCV translation initiation [184,186]. This eIF2A interacts with the HCV IRES core domain including SLs IIId, IIIe, IIIf, and the pseudoknots, with specific binding determinants present in the SL IIId, and eIF2A relocates from the nucleus to the cytoplasm in HCV-infected cells [184], suggesting its importance to function as an eIF2 surrogate during HCV infection under stress conditions. In contrast, in another study it was discovered that the activity formerly attributed to eIF2A was performed by another protein that copurified along with the previously described eIF2A over almost the entire purification procedure, while after final separation steps the purified eIF2A did not show any such activity. The new protein was, then, named eIF2D (NM_006893) (formerly named "ligatin" by mistake) and facilitated the delivery of Met-tRNA i Met to the P-site of the 40S subunit independent of GTP (see below) [183]. The same eIF2D was described to facilitate HCV translation initiation [187]. According to a recent study [186], eIF2A can also function synergistically with eIF5B. 
Both eIF2A and eIF5B can bind to the 40S subunit during stress conditions [186]. eIF5B interacts both with eIF2A and with the tRNA, and eIF5B augments the activity of eIF2A in loading Met-tRNA i Met onto a 40S ribosome associated with the HCV IRES [186]. Alternatively, eIF2D or a set of two other related proteins, MCT-1 and DENR, can facilitate translation initiation at the HCV IRES (Figure 5C). eIF2D also has a molecular mass of about 65 kDa, a fact that led to the previously mentioned confusion with eIF2A when the activity of eIF2D at the HCV IRES was first described [183]. It should be noted that, unlike eIF2A, eIF2D resembles other initiation factors since it contains a domain similar to translation initiation factor eIF1 [183]. Recent work performed with ribosomal profiling indicates that eIF2D is involved in post-termination events, where it promotes ribosome recycling [191]. eIF2D can confer HCV translation initiation as an alternative to eIF2. eIF2D, but not eIF2, can utilize non-AUG codons in the HCV IRES [183]. On other RNAs, eIF2D can also bind to non-AUG codons in the P-site [178], suggesting that in the case of non-AUG ARF translation (see below), eIF2D can substitute for eIF2. Interestingly, eIF2D does not need to be loaded with initiator Met-tRNA i Met or even with any tRNA to bind to the ribosomal P-site [183] and causes 40S and 60S subunit joining in the absence of eIF5B [187,192]. The interaction of eIF1 and eIF1A with the 40S subunit interferes with the binding of eIF2D to the 40S subunit, whereas eIF3 binding does not [187]. As an alternative to eIF2D, the heterodimeric complex of MCT-1 (the product of the malignant T cell-amplified sequence 1 oncogene) and DENR (density regulated protein) [192] can substitute for eIF2D in delivering the charged tRNA i Met to the ribosome [168,187,192]. However, a very recent study reported that the HCV IRES still works efficiently under conditions of suppressed eIF2 activity in double knockout cells lacking both eIF2A and eIF2D [191]. If so, this result is more consistent with the option to use only eIF5B for HCV translation initiation under stress conditions (Figure 5B, left). It also makes the diagrams in Figure 5B (middle and right) and Figure 5C involving the alternative factors eIF2A, eIF2D and, possibly, DENR and MCT-1, somewhat less attractive. In any case, HCV translation is able to escape the suppression of general cellular translation under conditions of stress or viral infection when eIF2 activity is downregulated by phosphorylation. Nevertheless, it is worthwhile to note that even under conditions using eIF2 for initiation, the efficiency of translation directed by the HCV IRES is much lower than that of average cap-dependent cellular mRNAs [181]. This leaves HCV translation at a level that does not completely perturb cellular gene expression and does not flood the cell with viral products, and thus allows for ongoing long-term low-level HCV replication during chronic infection.

IRES Trans-Acting Factors (ITAFs) Several cellular proteins, which are not routinely involved in translation initiation of cellular mRNAs, are recruited by the HCV RNA and modulate its activity, either in translation regulation or in replication. Some of these proteins are also involved in regulating the activity of picornavirus IRES elements [54,193]. Here, we focus on those cellular proteins that are involved in translation regulation.
These proteins (Figure 6) have been called IRES trans-acting factors (ITAFs), a term which is also used here, but this term should not be regarded too strictly, since some of these factors also interact with the HCV 3′ UTR or with other RNA genome regions.

Figure 6. IRES trans-acting factors (ITAFs) that modulate HCV IRES activity. Important functional domains are shown in dark grey. In most cases, these domains are RNA-binding domains, similar to the RNA-recognition motif domain (RRM) [194] or the K-homology (KH) domains of hnRNP proteins [195]. Most of these proteins bind directly to the RNA, whereas a few others stimulate HCV translation indirectly, or we do not yet know how exactly they modulate IRES activity.

The roles of many of these proteins have been reviewed in detail before (please see [54,193]). Most of these proteins have multiple RNA-binding domains (shown in Figure 6) and form homo- and heterodimers, or they even act as multidomain protein complex organizers such as Gemin5 [196]. Moreover, some of the proteins bind not only to the 5′ UTR/IRES but also to the 3′ UTR.
By that, these proteins can be assumed to build a large network of RNA-protein and protein-protein interactions that connects the HCV RNA genome 5′ and 3′ ends, supported by direct binding of the 40S subunit and eIF3 to both the 5′ and 3′ regions (see above, and Figure 7). Proteins involved in this network which directly bind to the HCV RNA are La [197], NSAP1 [198], hnRNP L [199] and D [200], IMP1 [156], PCBP2 [201], the Lsm1-7 complex, and the negatively acting Gemin5 [196,202], and perhaps also PTB [203,204] and RBM24 [205]. Interestingly, the above proteins bind to many sites on the HCV plus strand RNA, but the very 3′ end of the genomic RNA is not covered, suggesting that the 3′ end is left available for the initiation of RNA minus strand synthesis by the NS5B replicase, likely supported by the NFAR proteins [206]. In contrast, the NFAR protein complex (NF90, NF45, and RHA) appears not to be involved in HCV translation regulation [207], even though it binds to both the HCV 5′ and 3′ UTRs and is involved in replication [208]. Network components which do not directly bind to the HCV RNA but participate by protein-protein interactions are HuR (ELAVL1) [209,210], the proteasome subunit α7 (PSMA7) [209,211], and perhaps PatL1, a P-body component involved in mRNA degradation that decreases HCV translation reporter gene expression [212].

In addition to building a large interaction network, a second possible function of these ITAFs could be an RNA chaperone function. For example, for a picornavirus IRES we have shown that PTB, using its different RNA-binding domains, connects different parts of the large IRES structure, and thereby stimulates IRES activity [213]. Such a function appears not so likely for the HCV IRES, which has a rather compact structure. However, we could speculate that some of these ITAFs support the unfolding of the HCV IRES SL IV and by that facilitate the entrance of this sequence in the 40S subunit mRNA entry channel.
Specifically, La protein and IMP1 bind to the HCV sequence region involving the PK1 and the SL IV [156,197,214,215], and La protein can probably support melting of the SL IV [216]. Perhaps these proteins induce a single-stranded conformation of this IRES region and thereby help to position the AUG start codon in the mRNA entry channel of the 40S subunit. In addition, NSAP1 [198,217] and hnRNP L [199,218] bind the HCV RNA downstream of the AUG and the SL IV (Figure 7, and also refer to Figure 4). Thereby, NSAP1 was reported to promote the correct positioning of the 40S ribosomal subunit at the initiation codon [217]. Thus, we can speculate that NSAP1 and hnRNP L bind to the HCV RNA downstream of the 40S entry channel, similar to pulling hands on a rope, to keep the unfolded single-stranded RNA in the 40S entry channel and thereby block snap-back folding of the SL IV, which could otherwise slip out of the entry channel.

The third category of ITAFs involved in HCV translation can alter RNA structure. The representative of this category is the RNA helicase DDX6 (also called RCK or p54). There are conflicting reports regarding the role of DDX6 in overall HCV replication. Two reports showed that DDX6 has a positive role in HCV translation [69,212], and one of these reports claimed that the positive effect of DDX6 on HCV translation was independent of miR-122 [69]. In contrast, another report claimed that DDX6 has no influence on HCV translation, while HCV RNA stability depended on DDX6 (mediated by DDX6-dependent binding of miR-122 to the second 5′ UTR binding site) [219]. This is consistent with a previous study that claimed a positive effect of DDX6 on HCV replication but not translation [220].

The fourth category of proteins has only an indirect regulatory influence on HCV translation; it is assumed that these proteins are not components of protein complexes directly acting on the HCV RNA. In a siRNA screen using an HCV IRES reporter RNA, MAP kinase interacting serine/threonine kinase 1 (MKNK1) and phosphatidylinositol 4-kinase catalytic subunit beta (PI4K-beta) were identified as stimulators of HCV translation [221], consistent with a previous report that had described a general positive effect of PI4K-beta on overall HCV replication [222].
Expression of the Alternative Reading Frame ARF/core+1 For about two decades, a number of reports have claimed that, in addition to the canonical polyprotein ORF, another protein is produced from the core ORF region in the core+1 frame, the "alternative reading frame" (ARF) or "core+1" (originally also called "F") protein (see Figures 1 and 8). The perception of the possible importance of this protein has been hampered for a long time by the following three circumstances: (1) the reading frame for this putative protein is only moderately conserved among HCV genotypes and isolates and has a variety of start and stop codons at different positions; (2) the mechanism of initiation of its translation was difficult to elucidate, and its expression is relatively weak; and (3) the possible function of these putative proteins appeared enigmatic for a long time, while only recently some progress has been achieved in this direction. The genetic structure of the ARF/core+1 ORF appears to be very variable among HCV genotypes and isolates (Figure 8). The start codons for initiation of these proteins were identified to be essentially at codons 26, 85, and 87 of the core+1 reading frame, when the nucleotide No. 5 (AUGAG) of the core ORF is counted as the first nucleotide of the core+1 ORF [223-229], but codon 58 was also reported to give rise to another version of the ARF protein [228]. We have analyzed the ARF/core+1 ORF sequences of two isolates each of HCV genotypes 1, 2, 3, 4, and 6 (10 isolates in total, selected from the MAFFT alignment in [63]), including the often used genotype isolates 1b "Con1", 2a "J6", and 2a "JFH" sequences (Figure 2B). The result is that the ARF/core+1 ORF is open from codon 26 to at least codon 124 of the ARF/core+1 frame, i.e., for at least 99 amino acids, confirming previous results obtained with many isolates among virtually all HCV genotypes [25-27,223,228,230]. From codon 125, stop codons are interspersed, except in genotype 1a, which has a stop only after codon 161. The start point at codon 26 gives rise to the core+1/L protein [229], whereas the start at codon 85 yields the core+1/S protein [223], thus producing proteins with two different conserved N-termini but heterogeneous C-termini due to genotype- or isolate-specific stop codons.

Figure 8. Translation of the HCV alternative reading frame (ARF) or core+1 frame. (A) Overview of starts and stops that result in expression of a variety of different ARF or core+1 proteins among HCV isolates. The HCV 5′ UTR and the downstream core region are shown essentially as in Figure 1. The start AUG for the canonical core protein is shown in SL IV as a black filled circle. The start codons of the core+1 protein are shown as grey filled circles at the core+1 frame for positions 26, 58, 85, and 87. The canonical core protein and the various core+1 ARF products are shown as boxes below. The major products produced in genotype 2a are shown in grey; other products produced by initiation at codon 58 or by termination at other, more downstream stops in other genotypes are underlaid in white. Start codon usage is depicted. (B) Amino acid sequences of representative HCV genotypes and subtypes (selected from [63]); genotypes and NCBI nucleotide database accession numbers are given on the left, as well as abbreviations of some well-known isolates. The AA sequence starts with codon 26 of the core+1 frame, when the nucleotide No. 5 of the canonical core frame, AUGAG, is nucleotide No. 1 of the core+1 frame.
The amino acids encoded by the main start codons 26, 58, 85, and 87 are in bold, and stop codons are shown by asterisks in the sequence. Conservation is shown under the alignment, with (*) indicating absolute conservation, (:) indicating strongly similar, and (.) indicating weakly similar AA properties. The dot indicating similarity between AAs at position 7 in terms of being charged but neglecting charge reversal has been removed, since charge reversal can have serious consequences for proteins [231].

Moreover, the ARF/core+1 ORF appears much less conserved than the overlapping canonical core ORF. Analyzing the above-mentioned 10 selected isolates, we find that the codons 26 to 124 of the core+1 frame (i.e., the core+1/L protein) have 99 AAs, of which 27 AAs (27.3%) are identical at their positions and an additional 20 AAs (20.2%) are similar, resulting in an overall similarity of 47.5%. The codons 26 to 161 of the core+1 frame (i.e., the core+1/L protein) have 136 AAs, 34 AAs (25%) identical, plus 25 (18.4%) similar, resulting in 43.4% overall similarity. The codons 85 to 161 of the core+1 frame (i.e., the core+1/S protein) have 77 AAs, 21 AAs (27.3%) identical, plus 10 (13%) similar, resulting in 40.3% overall similarity. Thereby, the conservation of the ARF/core+1 protein is somewhat stronger near its N-terminus and directly at its shortest consensus C-terminus (compare Figure 8B). However, the ARF/core+1 frame is by far less conserved than the regular core ORF (71.7% identical, plus 18.8% similar AAs, overall similarity 90.5%).
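The identity and similarity tallies above are straightforward to reproduce from an alignment; the sketch below shows one way to do the column-wise bookkeeping. The sequences and the residue similarity groups used here are illustrative placeholders, not the actual Figure 8B alignment or the exact similarity scheme behind the (*), (:), and (.) annotations.

```python
# Column-wise conservation over a window of a multiple alignment, mirroring the
# core+1 tally above (e.g., codons 26-124 = 99 positions).
# The sequences and similarity groups are illustrative placeholders.

SIMILAR_GROUPS = [set("ILVM"), set("FWY"), set("KRH"), set("DE"),
                  set("ST"), set("NQ"), set("AG")]

def column_state(column):
    """Classify one alignment column as 'identical', 'similar', or 'different'."""
    residues = set(column)
    if len(residues) == 1:
        return "identical"
    if any(residues <= group for group in SIMILAR_GROUPS):
        return "similar"
    return "different"

def conservation(aligned_seqs, start, end):
    """Tally conservation over 1-based, inclusive positions [start, end]."""
    counts = {"identical": 0, "similar": 0, "different": 0}
    for i in range(start - 1, end):
        counts[column_state([s[i] for s in aligned_seqs])] += 1
    n = end - start + 1
    pct = lambda key: round(100 * counts[key] / n, 1)
    return {"positions": n,
            "identical_pct": pct("identical"),
            "similar_pct": pct("similar"),
            "overall_similarity_pct": round(
                100 * (counts["identical"] + counts["similar"]) / n, 1)}

# Toy check with three short placeholder sequences (positions 1-10):
toy = ["MKTLIVDEST", "MKSLLVDEST", "MKALIVEEST"]
print(conservation(toy, 1, 10))
```

Applied to the real 10-isolate alignment with the window set to core+1 codons 26-124, the same bookkeeping yields the 27.3% identical and 20.2% similar figures quoted above.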
Regarding the mechanism by which translation of the ARF/core+1 frame is conferred, initially some evidence was presented for ribosomal frameshifting to produce the ARF/core+1 protein, named "F" [26]. The authors fused an HA tag to the N-terminus of the canonical core ORF and found a shift into the core+1 ORF, indicating expression of the ARF. The efficiency of expression was very high in rabbit reticulocyte lysate (RRL) but was found to be only about 1% in Huh-7 hepatoma cells. The authors speculated that an A/C-rich sequence around core+1 codons 8 to 14 could be responsible for this frameshifting [26]. However, this comparison of translation systems shows that the RRL translation machinery (derived from reticulocytes that develop into erythrocytes, a cell system with very reduced protein complexity) is quite flexible in using RNA templates, while more complex cells exhibit more stringent translation control. The activity in RRL can now be regarded as collateral background expression caused by unspecific RNA binding by the translation machinery [232]. In contrast, introduction of stop codons between the putative frameshifting site around codons 8 to 14 and codon 85 did not abrogate ARF protein production in cells, strongly arguing against frameshifting, but for alternative internal use of downstream codons such as codon 85 or 87 [223,225-228]. In addition to core+1 codon 85, codon 26 can also be used for initiation [224,229]. Moreover, reduction of canonical core protein ORF expression increased ARF expression, whereas an increase of canonical core frame expression reduced ARF expression, again arguing against the frameshift hypothesis [223,225,233]. The core+1 start codons 85 and 87 often have AUG (but also ACG or ACC), whereas codons 26 and 58 only have non-AUG codons (GUG, GCG, and GAG); all codons No. 26, 58, 85, and 87 are located in a moderately strong Kozak context [234]. The possible function of the ARF/core+1 protein was unclear for a long time, and only recently some functions were described. Short-term in vivo experiments in Huh-7 cells, in SCID mice carrying primary human hepatocytes, and even in chimpanzees did not reveal evidence for a functional role of a core+1 frame product in JFH1 (HCV genotype 2a) virus production [235,236]. However, antibodies against the core+1 product could be detected in patient sera ([25,27,237] and references in [230]). During acute HCV infection, seroconversion to anti-core+1 antibodies can be observed [238]. Patients with chronic HCV infection, liver cirrhosis, and hepatocellular carcinoma have many more antibodies against core+1 products than other patients, and the antibody titers correlate with the extent of liver cirrhosis [239-241]. These findings show that ARF/core+1 actually is expressed in chronic HCV infection of the liver, and that the extent of ARF/core+1 expression (likely reflecting overall HCV expression) in the liver correlates with the extent of liver damage. Some rather mechanistic studies shed light on how the HCV ARF/core+1 protein can support HCV replication and thereby confer a long-term advantage for the virus. However, the emerging picture appears not yet fully consistent. An early study showed that the p53 and p21 promoters were activated when the ARF protein was overexpressed [28], a finding that would argue for ARF proteins acting as tumor suppressors. In contrast, activation of the hepcidin promoter by the transcription factor AP-1 was inhibited by the ARF/core+1 protein [242], which is supposed to result in reduced inhibition of iron export from enterocytes, leading to iron overload and increased oxidative stress in the body [243]; however, the possible advantage for HCV has yet to be shown. Two further described functions indicate how expression of the ARF/core+1 protein could provide evident advantages for HCV replication in the body.
Overexpression of the ARF/core+1 protein suppresses the expression of interferon-stimulated genes (ISGs), including the pattern recognition receptor retinoic-acid-inducible gene-I (RIG-I) [244]. This indicates that the ARF/core+1 protein contributes to establishing long-term HCV replication in the liver. Further clear evidence for a role of the ARF/core+1 protein comes from the analysis of cell cycle control. Overexpression of the HCV genotype 1a (isolate H) ARF/core+1 protein in two isoforms resulted in slightly higher expression of cyclin D1 and in strongly enhanced phosphorylation of the retinoblastoma (Rb) protein, correlating with higher cell proliferation rates in Huh-7.5 cells [245]. Concomitantly, expression of cellular proto-oncogenes including hras, c-fos, c-jun, c-myc, and vav1 was elevated under ARF/core+1 overexpression, and the number of tumors in mice overexpressing the ARF/core+1 protein during chemically induced tumorigenesis was significantly increased [245]. This indicates that HCV not only rapidly reprograms the hepatocyte metabolism to promote the Warburg effect that is characteristic of tumor cells [11,246] but also establishes proto-oncogene expression changes that lead to cancer in the long run. Future research should also systematically search for cellular binding partners of the ARF/core+1 protein by global-scale interactome studies and for targets of gene expression regulation by crosslinking immunoprecipitation (CLIP) studies. Even though non-AUG start codons can be used with high efficiency under normal conditions [247], it could also be interesting to find out whether the largely non-canonical initiation codons used for ARF/core+1 expression could also be used by eIF2A and eIF2D, which are known to support non-canonical initiation under stress conditions [178,183]. Taken together, after decades of research, more and more facets of possible ARF/core+1 protein functions in the infected body are emerging, but the view of its functions is far from complete, a situation that is reminiscent of another small "accessory" protein in another virus infecting the liver, hepatitis B virus (HBV) [248].

Future Directions In the past, we have learned a lot about the function of the IRES and the factors involved. In part, this has been based on technical progress in cryo-EM visualization techniques. On this basis, as well as on classical molecular biological and biochemical methods, it would be interesting to further investigate IRES structure and the transition from initiation to elongation, analyzed at intracellular magnesium concentrations. This includes the visualization of the IRES structure mediated by miR-122/Ago complexes and the actual implications for SL II action, as well as the visualization of ITAFs (e.g., NSAP1, hnRNP L, La, and hnRNP D) associated with the IRES on the 40S subunit in order to elucidate their function. Further questions for future directions could aim at the functional implications of the "eIF3 holding" function of the ribosome-bound IRES, as well as the further elucidation of the diversity, structure(s), and functions of the ARF proteins, in particular the molecular details of the regulation of gene expression and the long-term implications for HCC.
Mammalian Wax Biosynthesis The conversion of fatty acids to fatty alcohols is required for the synthesis of wax monoesters and ether lipids. The mammalian enzymes that synthesize fatty alcohols have not been identified. Here, an in silico approach was used to discern two putative reductase enzymes designated FAR1 and FAR2. Expression studies in intact cells showed that FAR1 and FAR2 cDNAs encoded isozymes that reduced fatty acids to fatty alcohols. Fatty acyl-CoA esters were the substrate of FAR1, and the enzyme required NADPH as a cofactor. FAR1 preferred saturated and unsaturated fatty acids of 16 or 18 carbons as substrates, whereas FAR2 preferred saturated fatty acids of 16 or 18 carbons. Confocal light microscopy indicated that FAR1 and FAR2 were localized in the peroxisome. The FAR1 mRNA was detected in many mouse tissues with the highest level found in the preputial gland, a modified sebaceous gland. The FAR2 mRNA was more restricted in distribution and most abundant in the eyelid, which contains wax-laden meibomian glands. Both FAR mRNAs were present in the brain, a tissue rich in ether lipids. The data suggest that fatty alcohol synthesis in mammals is accomplished by two fatty acyl-CoA reductase isozymes that are expressed at high levels in tissues known to synthesize wax monoesters and ether lipids. Wax esters are abundant neutral lipids that coat the surfaces of plants, insects, and mammals. They are composed of long chain alcohols esterified to fatty acids and have the chemical property of being solid at room temperature and liquid at higher temperatures. Waxes play essential biological roles in preventing water loss, abrasion, and infection and are produced commercially at levels approaching 3 billion pounds per year for use in polishes, cosmetics, and packaging. In some mammals, wax esters constitute ϳ30% of sebum and meibum, the oils secreted by the sebaceous and meibomian glands onto the surfaces of the skin and eye, respectively (1,2). Although the enzymes of wax biosynthesis in mammals have not been isolated, the components of the pathway can be inferred from work in plants (3,4) and mammalian tissue ex-tracts (5). As indicated in Scheme 1, two catalytic steps are required to produce a wax monoester, including reduction of a fatty acid to a fatty alcohol and subsequently, the trans-esterification of the fatty alcohol to a fatty acid. The first step is catalyzed by the enzyme fatty acyl-CoA reductase (FAR), 1 which uses the reducing equivalents of NAD(P)H to convert a fatty acyl-CoA into a fatty alcohol and CoASH. cDNAs specifying fatty acyl-CoA reductases have been identified in the jojoba plant (6), the silkworm moth (7), wheat (8), and in a microorganism (9); however, the biochemical properties and subcellular localizations of these enzymes have not been reported. Fatty alcohols have two metabolic fates in mammals: incorporation into ether lipids or incorporation into waxes. Ether lipids account for ϳ20% of phospholipids in the human body and are synthesized in membranes by a pathway involving at least seven enzymes (10). The second step of this pathway is catalyzed by the enzyme alkyl-dihydroxyacetone phosphate synthase, which exchanges an sn-1 fatty acid in ester linkage to dihydroxyacetone phosphate with a long chain fatty alcohol to form an alkyl ether intermediate. 
Once produced, ether lipids are precursors for platelet activating factor, for cannabinoid receptor ligands, and for essential membrane components in cells of the reproductive and nervous systems (10,11). In the current study a bioinformatics approach was used to identify mouse (m) and human (h) cDNAs encoding two fatty acyl-CoA reductase isozymes designated FAR1 and FAR2. The biochemical properties and subcellular localizations of recombinant FAR enzymes expressed in cultured mammalian and insect cells were defined, and tissue distributions were delineated. An accompanying paper reports the isolation and characterization of a wax synthase enzyme that catalyzes the second step of the mammalian wax biosynthetic pathway (12). EXPERIMENTAL PROCEDURES Bioinformatics and cDNA Cloning-Fatty acyl-CoA reductase protein sequences from the jojoba plant (Simmondsia chinensis (6)) and the silkworm moth (Bombyx mori (7)) were compared with those in the protein data base using the program BLASTP (13) to identify potential mouse and human reductase sequences. Two proteins with ϳ30% sequence identity to the plant and insect enzymes were identified, and cDNAs for each were obtained. A mouse fatty acyl-CoA reductase 1 cDNA (mFAR1, IMAGE clone 3495305, GenBank TM /EBI Data Bank accession number BC007178) in the pCMV⅐SPORT6 vector was obtained from the Mammalian Gene Collection (IRAV) (Invitrogen). A mouse fatty acyl-CoA reductase 2 cDNA (mFAR2, IMAGE clone 6809131, GenBank TM /EBI Data Bank accession number BC055759) in the pYX-Asc vector was obtained from Open Biosystems (Huntsville, AL). The cDNA insert in this plasmid was released by digestion with the restriction enzymes EcoRI and NotI, purified by agarose gel elec-trophoresis and extraction (QIAquik gel extraction kit, Qiagen, Valencia, CA), and ligated into the pCMV6 vector (GenBank/EBI Data Bank accession number AF239250). A human fatty acyl-CoA reductase 1 cDNA (hFAR1, GenBank TM /EBI Data Bank accession number AY600449) in the pCMV6-XL6 vector was purchased from Origene Technologies (Rockville, MD). A human fatty acyl-CoA reductase 2 cDNA (hFAR2, IMAGE clone 4732586) in the pDNR-LIB vector was obtained from Invitrogen. The cDNA insert in this plasmid was released by digestion with the restriction enzymes EcoRI and XhoI and then treated with the Klenow fragment of Escherichia coli DNA polymerase I and the four deoxynucleoside triphosphates to generate blunt ends. The engineered hFAR2 cDNA insert (nucleotides 62-2235 of GenBank TM /EBI Data Bank accession number BC022267) was purified by gel electrophoresis and extraction (QIAquik gel extraction kit) and ligated into the SmaI site of the pCMV6 vector. DNA sequence analysis confirmed the identity and structure of the hFAR2 cDNA. Construction of FLAG Epitope-tagged Expression Plasmids-An expression plasmid (pCMV⅐SPORT6-FLAG-mFAR1) encoding a FLAG epitope-tagged version of the mouse FAR1 protein was assembled as follows. The FAR1 cDNA was amplified from the pCMV⅐SPORT6-mFAR1 template described above by PCR using the oligonucleotide primers 5Ј-GTACCTGTCGACCCACCATGGATTACAAGGATGACGA-CGATAAGAGACAAGTCTGGATGGTTTCAATCC-3Ј and 5Ј-ATTATG-CGGCCGCGGTCTTCAGTATCTCATAGTGCTG-3Ј. The DNA was digested with the restriction enzymes SalI and NotI and ligated into the pCMV⅐SPORT6 vector (Invitrogen). The mFAR1 encoded by the resulting plasmid has the FLAG epitope (amino acid sequence DYKDDDDK) at the amino terminus. 
An expression plasmid (pCMV⅐SPORT6-FLAG-mFAR2) encoding a FLAG epitope-tagged version of the mouse FAR2 protein was assembled as follows. The mFAR2 cDNA was amplified from the pCM-V6-mFAR2 template described above by the PCR using the oligonucleotide primers 5Ј-GTACCTGTCGACCCACCATGGATTACAAGGATG-ACGACGATAAGATGTCCATGATCGCAGCTTTCTAC-3Ј and 5Ј-ATTATGCGGCCGCTGTTCTTAGACCTTGAGTGTGCTG-3Ј. The DNA product was digested with the restriction enzymes SalI and NotI and ligated into the pCMV⅐SPORT6 vector (Invitrogen). This engineering placed the FLAG epitope at the amino terminus of the encoded mouse FAR2 protein. Baculovirus Expression Vectors-A baculovirus recombinant donor plasmid with a mouse FAR1 cDNA in the pFastBAC HTC vector (Invitrogen) was constructed by amplifying the FAR1 cDNA insert in the vector pCMV⅐SPORT6-mFAR1 described above using the oligonucleotide primers 5Ј-GCCTATGTCGACGGACAAGTCAGGATGGTTTCAAT-CC-3Ј and 5Ј-ATTATGCGGCCGCGGTCTTCAGTATCTCATAGTGCT-G-3Ј. The modified cDNA was digested with the restriction enzymes SalI and NotI, purified by agarose gel electrophoresis and QIAquik gel extraction, and ligated into pFastBAC HTC. A mouse FAR2 cDNA baculovirus recombinant donor plasmid was constructed by amplifying the cDNA from the pCMV6-mFAR2 plasmid using the oligonucleotide primers 5Ј-GCCTATGTCGACGAACC-ATACAGGAACGGAGGAATC-3Ј and 5Ј-ATTATGCGGCCGCGTATC-TGAGGTTCCAGATGATGGG-3Ј. The resulting DNA product was digested with the restriction enzymes SalI and NotI and ligated into pFastBAC HTC. A baculovirus donor plasmid containing the mouse ⌬ 4 -3-oxosteroid 5␤-reductase cDNA (nucleotides 48 -1053 of GenBank TM /EBI Data Bank accession number BC018333) was constructed as follows. Hepatic cDNA was amplified via PCR using the oligonucleotide primers 5Ј-CA-GAAGCTTCAGATTCTTCTCTACG-3Ј and 5Ј-TGTTCAGTATTCGTCA-TGAAATGGG-3Ј. The resulting cDNA product was ligated into the TOPO TA vector (Invitrogen) and propagated in E. coli. The cDNA insert in this plasmid was excised by digestion with the restriction enzyme EcoRI, purified by agarose gel electrophoresis and the QIAquik gel extraction kit, and then ligated into the pFastBAC HTC vector. After construction of the donor plasmids, infectious Autographica californica nuclear polyhidrosis baculovirus stocks were generated and titrated in Spodoptera frugiperda (Sf9) cells using the Bac-to-Bac baculovirus expression system kit (Invitrogen). Preparation of Bovine Serum Albumin (BSA)-conjugated Fatty Acids-Capric acid (decanoic acid, C10:0), lauric acid (C12:0), myristic acid (C14:0), palmitic acid (C16:0), stearic acid (C18:0), oleic acid (C18: 1), linoleic acid (C18:2), homo-␥-linolenic acid (C20:3), and arachidonic acid (C20:4) were purchased from Sigma Chemical Co., dissolved in ethanol at a concentration of 62 mM, and then precipitated by the addition of 5 M NaOH to a final concentration of 0.25 M. The ethanol was evaporated under a stream of N 2 gas, and precipitated fatty acids were resuspended in 4 ml of 0.9% (w/v) NaCl by stirring and heating at 80°C. An aliquot (4.16 ml) of 24% (w/v) BSA dissolved in H 2 O was added and the solution stirred at room temperature for 10 min. Thereafter, 0.9% NaCl was added to bring the total volume to 10 ml. The resulting stocks contained 5 mM fatty acid, 0.5% (w/v) NaCl, and 10% (w/v) BSA. 
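The numbers in the stock preparation above are internally consistent; as a quick check, the short sketch below recomputes the nominal final composition from the stated volumes. The assumption of quantitative recovery of the precipitated fatty acid into the 10 ml final volume is ours, not stated in the protocol.

```python
# Back-calculate the nominal composition of the BSA-conjugated fatty acid stock
# described above (10 ml final volume). Assumes quantitative recovery of the
# precipitated fatty acid, which the protocol implies but does not state.

final_volume_ml = 10.0
target_fa_mM = 5.0                                  # stated final fatty acid concentration

# Fatty acid: volume of the 62 mM ethanol stock needed to supply 50 umol
fa_stock_mM = 62.0
fa_needed_umol = target_fa_mM * final_volume_ml     # 50 umol
ethanol_stock_ml = fa_needed_umol / fa_stock_mM     # ~0.81 ml

# BSA: 4.16 ml of a 24% (w/v) solution diluted into 10 ml
bsa_g = 4.16 * 0.24                                 # ~1.0 g
bsa_pct_final = 100 * bsa_g / final_volume_ml       # ~10% (w/v)

# NaCl: 4 ml of 0.9% saline plus ~1.84 ml of 0.9% saline used to top up
saline_ml = 4.0 + (final_volume_ml - 4.0 - 4.16)    # ~5.84 ml
nacl_g = saline_ml * 0.009                          # 0.9% (w/v) = 9 mg/ml
nacl_pct_final = 100 * nacl_g / final_volume_ml     # ~0.5% (w/v)

print(f"62 mM ethanol stock required: {ethanol_stock_ml:.2f} ml")
print(f"final BSA: {bsa_pct_final:.1f}% (w/v)")
print(f"final NaCl: {nacl_pct_final:.2f}% (w/v)")
```

Running the check gives ~0.81 ml of the 62 mM stock, ~10% (w/v) BSA, and ~0.53% (w/v) NaCl, in line with the stated 5 mM fatty acid, 10% BSA, 0.5% NaCl stock.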
FAR Enzyme Assay in Transfected Cells-On day 0, human embryonic kidney (HEK) 293 cells (American Type Culture Collection) were plated at a density of 4 ϫ 10 5 cells/60-mm dish in low glucose Dulbecco's modified Eagle's medium supplemented with 10% (v/v) fetal calf serum, 100 units/ml penicillin, and 100 g/ml streptomycin sulfate. On day 2, cells were transfected with 3.5 g of a plasmid mixture containing 0.5 g of pVA1 and 3 g of pCMV⅐SPORT6-mFAR1, pCMV6-XL6-hFAR1, pCMV6-mFAR2, or pCMV6-hFAR2 expression plasmid using the FuGENE 6 reagent. Approximately 23 h after transfection, the cell medium was aspirated and replaced with 2.25 ml of fresh Dulbecco's modified Eagle's medium supplemented with 33.3 M BSA-conjugated palmitic acid and 2.4 M BSA-conjugated [1-14 C]palmitic acid. After a further 24 h of incubation, cells were washed once with phosphate-buffered saline (PBS), harvested with a rubber policeman into 2 ml of PBS, and utilized for thin layer chromatography (TLC) as described below. FAR Enzyme Assay in Baculovirus-infected Sf9 Cells-Sf9 cells were plated on day 0 at a density of 2.5 ϫ 10 6 cells/60-mm dish in Sf-900 II SFM medium (Invitrogen). Approximately 4 h after plating, the medium was replaced with Sf-900 medium supplemented with 50 units/ml penicillin and 50 g/ml streptomycin sulfate, and the cells were infected with recombinant baculovirus at a multiplicity of infection of 5-10 for 26 h. Thereafter, the medium was replaced with 2 ml of plating medium supplemented with 37.5 M BSA-conjugated fatty acids and 2.7-3 M BSA-conjugated 1-14 C-labeled fatty acids, and the infected cells were returned to the incubator for an additional 28 h. The cells were washed with 2 ml of PBS and then scraped into 2 ml of PBS using a rubber policeman prior to analysis by TLC. TLC-Fatty acid metabolites in transfected or infected cells harvested into 2 ml of PBS were extracted into 8 ml of chloroform:methanol (2:1, v/v). The organic layer was dried under a stream of nitrogen, and the lipid residue was resuspended in 50 l of hexane and spotted on prescored 150 Å silica gel plates (Whatman). Metabolites were resolved SCHEME 1. Catalytic steps required to produce a wax monoester. by chromatography in one of two solvent systems. Solvent system 1 involved development for 30 min in hexane:ether:formic acid (65:35:2, v/v/v). Solvent system 2 employed development for 30 min in hexane followed by drying of the plate in air for 15 min and a second development for 40 min in toluene. Radiolabeled metabolites on the plates were detected either by exposure to Biomax MR film (Eastman Kodak) or phosphorimaging using Fuji BAS-TR2040 screens (Fuji Medical Systems, Tokyo, Japan) and the Storm 820 imaging system (Amersham Biosciences). Lipid standards were purchased from Sigma Chemical Co., dissolved in ethanol (palmitic acid, stearic acid, hexadecanol, octadecanol, 1-oleoyl-racemic glycerol, (S)1,2-diolein, and glyceryl trioleate) or chloroform (dipalmitin and glyceryl tripalmitate) at final concentrations of 10 mM, and aliquots of 5 l were chromatographed on the plates in lanes adjacent to those containing radiolabeled lipids. Standards were visualized by spraying the TLC plates with 0.1% (w/v) 2Ј,7Јdichlorofluorescein in ethanol followed by examination under ultraviolet light (14). Preparation of Sf9 Cell Membranes-On day 0 of an experiment, Sf9 cells were inoculated at a density of 500,000 cells/ml in 120 ml of Sf-900 II SFM medium. 
On day 1, the cultures were infected at a multiplicity of infection of 2-4 with the indicated recombinant baculovirus and cultured an additional 48 h. Cells were collected by centrifugation at 1,000 ϫ g for 5 min at 4°C in a desktop centrifuge, and the cell pellets were washed once with 30 ml of PBS. Cell pellets were resuspended in 3 ml of hypotonic lysis buffer (10 mM Hepes-KOH, pH 7.6, 1.5 mM MgCl 2 , 10 mM KCl, 1 mM EDTA, pH 8.0, 1 mM EGTA, pH 8.0, supplemented with one minicomplete protease inhibitor mixture tablet (Roche Applied Science)/10 ml), incubated on ice for 10 min, and then lysed by passage through a 23-gauge needle 20 times. The nuclei were removed by centrifugation at 1,000 ϫ g for 5 min at 4°C, and the resulting supernatant was centrifuged at 130,000 ϫ g for 30 min at 4°C in a TLA120.2 rotor in a TL-100 ultracentrifuge (both from Beckman Coulter, Inc., Fullerton, CA). The membrane pellets were resuspended in assay buffer (0.3 M sucrose, 0.1 M Tris-HCl, pH 7.4, 1 mM EDTA, and protease inhibitors as described above). Assay of Cofactor Preference-FAR enzyme activity was measured in a volume of 500 l of 0.3 M sucrose, 0.1 M Tris-HCl, pH 7.4, 1 mM EDTA, 2.5 mM dithiothreitol, 5 mM MgCl 2 , one minicomplete protease inhibitor mixture tablet (Roche Applied Science)/10 ml, 0.8 mg/ml BSA, 98 M palmitoyl-CoA, 7 M [1-14 C]palmitoyl-CoA (PerkinElmer Life Sciences), and 2.5 mM ␤-NADPH or ␤-NADH. Aliquots (75 g) of Sf9 cell membrane protein isolated from cells infected with either baculovirus expressing the steroid 5␤-reductase or the mFAR1 enzyme were added, and the mixture was incubated at 37°C for 30 min. The reaction was stopped by the addition of 100 l of 6 N HCl and 0.9 ml of PBS, and lipids were extracted into 6 ml of chloroform:methanol (2:1, v/v). TLC on dried and resuspended lipids was performed as described above. Assay of Palmitoyl-CoA versus Palmitic Acid Preference-These experiments were done as described under "Assay for Cofactor Preference" except that the reaction mixtures contained 2.5 mM ␤-NADPH and either 98 M palmitoyl-CoA and 7 M [1-14 C]palmitoyl-CoA (obtained from PerkinElmer Life Sciences) or 2.9 M BSA-conjugated [1-14 C]palmitic acid and 40 M BSA-conjugated palmitic acid with or without 1 mM ATP and 100 M CoA. Aliquots (75 g) of Sf9 cell membrane protein isolated from cells infected with either baculovirus expressing the steroid 5␤-reductase or the mFAR1 enzyme were added to the mixture, and the tube was incubated at 37°C for 30 min. The reaction was stopped by the addition of 100 l of 6 N HCl, and the lipids were separated by TLC as described above. Immunocytochemistry-On day 0 of an experiment, Chinese hamster ovary-K1 cells (American Type Culture Collection) were plated at a density of 1 ϫ 10 5 cells in 6-well dishes containing 22-mm 2 glass coverslips in Dulbecco's modified Eagle's medium and Ham's F-12 (50:50 mix) medium supplemented with 5% fetal calf serum (v/v), 100 units/ml penicillin, and 100 g/ml streptomycin sulfate. On day 1, cells were transfected with 1 g of plasmid DNA (pCMV6, pCMV⅐SPORT6-FLAG-mFAR1, or pCMV⅐SPORT6-FLAG-mFAR2) using FuGENE 6 reagent. After 18 h, cells were washed twice with ice-cold Dulbecco's PBS and then fixed in methanol at Ϫ20°C for 10 min. Cells were washed three times with ice-cold PBS and then incubated in a blocking solution (PBS containing 1% (w/v) BSA (Sigma)) for 1 h at room temperature. 
Cells were incubated with primary antibodies (SKL rabbit polyclonal antibody (15) at 1:1,000 dilution and/or FLAG M2 mouse monoclonal antibody (Sigma) at 1:2,500 dilution) for 8 h at 4°C. The cells were then rinsed three times for 5 min each in PBS containing 0.1% (w/v) BSA and incubated for 1 h with secondary antibodies (Alexa Fluor 568 goat anti-rabbit IgG (Molecular Probes, Inc., Eugene, OR) and/or Alexa Fluor 488 goat anti-mouse IgG (Molecular Probes)), both at 1:1,000 dilution in PBS containing 0.1% (w/v) BSA. Cells were rinsed three times with PBS containing 0.1% (w/v) BSA, twice with PBS, and then twice with deionized distilled H2O. The coverslips were mounted on a glass slide using a ProLong Antifade Kit (Molecular Probes) and then examined using a 63 × 1.3 NA PlanApo objective on a model 510 Laser Scanning Confocal microscope (Carl Zeiss, Inc., Göttingen, Germany).

RESULTS The sequences of two eukaryotic fatty acyl-CoA reductase enzymes were used as probes of the mammalian data base. The first, from the jojoba plant S. chinensis (6), identified two mouse and human proteins with ~30% sequence identity. The second fatty acyl-CoA reductase sequence was from the silkworm moth B. mori (7). A search for orthologs of this insect enzyme in the mammalian data base revealed the same two mouse and human proteins as did searches with the plant enzyme sequence, again with low sequence identity. The fact that the plant, insect, mouse, and human sequences shared the same short regions of amino acid identity (Fig. 1) suggested that the mammalian enzymes were potential fatty acyl-CoA reductases. The two enzymes were tentatively named mouse and human fatty acyl-CoA reductase 1 and 2 (FAR1 and FAR2). Comparisons between cDNA sequences and genomic DNA revealed that the mouse FAR1 enzyme was encoded by a gene on chromosome 7 F1 and contained at least 13 exons and 12 introns. The mouse FAR2 gene on chromosome 6 G3 contained at least 12 exons and 11 introns. The structures of the two genes were similar, both in overall size and in the positions of the introns, 10 of which interrupted the coding portions of the enzymes at the same positions (arrowheads, Fig. 1). Two introns in the FAR1 gene were located in the 5′-noncoding portion of the transcribed mRNA, and the FAR2 gene contained one intron in this region. The human FAR1 and FAR2 genes contained at least 12 exons and 11 introns and were present on autosomes (FAR1 = chromosome 11p15.2; FAR2 = chromosome 12p11.23). The amounts of FAR1 and FAR2 mRNAs were determined in different tissues of the mouse by real time PCR (Fig. 2). The FAR1 mRNA had the broadest distribution, being present at readily detectable levels (CT = 16.8-28) in the 20 tissues or cell lines examined (Fig. 2, top panel). The tissue with the highest level of FAR1 mRNA was the preputial gland (CT = 16.8), a specialized sebaceous gland located near the tail of the animal. In contrast to the FAR1 mRNA, the FAR2 mRNA was present at generally lower levels in a smaller number of tissues. The two highest expressing tissues were the eyelid (CT = 21.3) and skin (CT = 23.2) (Fig. 2, bottom panel). An abundance of FAR2 mRNA in these tissues is consistent with a role for this reductase isozyme in lipid synthesis within the meibomian glands of the eyelid and the sebaceous glands of the skin.
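To put the CT values above into more intuitive terms, the short sketch below converts them into approximate relative template abundances, assuming ideal amplification efficiency (a doubling per cycle) and no normalization to a reference gene; both assumptions are ours, and the numbers are only order-of-magnitude illustrations.

```python
# Approximate relative template abundances implied by the CT values quoted
# above, assuming ideal doubling per PCR cycle (abundance ~ 2 ** -CT up to a
# constant) and no reference-gene normalization. Illustrative only.

ct_values = {
    "FAR1, preputial gland": 16.8,
    "FAR2, eyelid": 21.3,
    "FAR2, skin": 23.2,
    "weakest detectable FAR1 signal": 28.0,
}

reference = ct_values["FAR1, preputial gland"]   # most abundant transcript here

for tissue, ct in ct_values.items():
    rel = 2 ** (reference - ct)   # abundance relative to the preputial FAR1 signal
    print(f"{tissue}: CT = {ct:>4.1f}, relative abundance ~{rel:.2g}")
```

Under these assumptions, each additional ~3.3 cycles corresponds to roughly a 10-fold lower starting template, so the eyelid FAR2 signal is on the order of 20-fold below the preputial FAR1 signal, and the weakest FAR1 tissues are several thousand-fold below it.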
Both complementary DNAs encoding mouse and human FAR1 and FAR2 were cloned into mammalian expression vectors as described under "Experimental Procedures." Expression of FAR1 and FAR2 cDNAs in HEK 293 cells resulted in the conversion of BSA-conjugated [14C]palmitate into [14C]hexadecanol (Fig. 3, lanes 2-5), whereas transfection of a vector lacking a cDNA insert did not produce hexadecanol (lane 1), indicating that HEK 293 cells have negligible levels of endogenous fatty acyl-CoA reductase activity. Several conclusions were reached from these results. First, the putative mammalian FAR cDNAs identified by bioinformatics encoded bona fide fatty acyl-CoA reductases. Second, there are two fatty acyl-CoA reductase isozymes in the mouse and human as well as other mammalian genomes (see "Discussion"). Third, comparisons of the results in Fig. 3, lanes 2 and 3, with those in lanes 4 and 5 indicated that the expressed mouse and human FAR1 enzymes were more active than the corresponding FAR2 enzymes in transfected HEK 293 cells. In an effort to increase expression of FAR1 and FAR2, the mouse cDNAs were recombined into baculovirus expression vectors as described under "Experimental Procedures." As expected, FAR1- and FAR2-expressing viruses produced higher levels of enzyme activity in intact insect cells compared with those obtained in HEK 293 cells, but the levels of FAR1 enzyme activity in the Sf9 cells remained ~5-10-fold higher than those of FAR2 despite equivalent levels of expression (data not shown). These results suggested that the difference in activity between the recombinant FAR1 and FAR2 enzymes reflected the intrinsic properties of the proteins and was not the result of an anomaly between the mammalian and insect cell expression systems. For these reasons the two expression systems were used interchangeably in subsequent experiments to characterize FAR1 and FAR2. Centrifugation experiments with cell lysates from FAR1 baculovirus-infected Sf9 cells showed that reductase enzyme activity sedimented with the membrane fraction. Assays with membrane preparations showed that enzyme activity with palmitoyl-CoA substrate was optimal over a range of magnesium (3-32 mM) and KCl (0-200 mM) concentrations and between pH 7.0 and 8.0. When the assay buffer had a pH greater than 8.0, considerable nonenzymatic conversion of fatty acyl-CoA to fatty alcohol was observed. No inhibition of FAR1 activity was detected in the presence of excess product (0.5 mM hexadecanol). Based on these findings, the standard FAR1 enzyme assay was performed in a buffer of pH 7.4 containing 3.0 mM MgCl2 and 0 mM KCl for 30 min at 37°C. In contrast to the results obtained with membranes from FAR1-expressing cells, FAR2 enzyme activity was lost upon lysis of either baculovirus-infected Sf9 cells or transfected HEK 293 cells (data not shown). FAR2 enzyme activity was not detected with the inclusion of different fatty acid substrates, NAD(P)H cofactors, or detergents in the assay buffer, and variations in the ionic conditions or in the incubation time failed to resuscitate FAR2 enzyme activity. Thus, we were only able to analyze FAR2 enzyme activity in whole cells. Fatty acyl-CoA reductases catalyze a concerted reaction in which the thioester bond of the fatty acyl-CoA substrate is cleaved, and the resulting fatty acid is reduced to an alcohol by transfer of electrons from an NAD(P)H cofactor (16). A series of experiments was performed to determine the substrate and cofactor utilized by mouse FAR1. As shown by the data in Fig.
4A, conversion of palmitate to hexadecanol by membranes from FAR1 baculovirus-infected Sf9 cells required the presence of ATP and CoA in the reaction mixture (lane 4). If the preparations were incubated with palmitoyl-CoA, the reaction proceeded in the absence of ATP and CoA (lane 6). These results suggested that FAR1 utilized fatty acyl-CoA esters as substrates instead of free fatty acids. Furthermore, because the synthesis of hexadecanol was approximately the same when palmitoyl-CoA or palmitate plus ATP and CoA was used as substrate, the data indicated that the Sf9 cell extracts were saturating for acyl-CoA synthetase enzyme(s) that form the CoA derivatives of fatty acids. Recombinant mouse FAR1 enzyme required NADPH as a cofactor (Fig. 4B, lane 7). No activity was measured when 2.5 mM NADH was substituted in the reaction mixture (lane 6), and the inclusion of NADH at this concentration did not inhibit the ability of NADPH to serve as a cofactor (lane 8). Membranes prepared from Sf9 cells infected with a baculovirus expressing an unrelated enzyme, steroid 5β-reductase, contained no fatty acyl-CoA reductase activity when these cofactors were present alone or in combination (lanes 1-4).

(Fig. 4 legend, continued) ...5 and 6). The reaction was stopped by the addition of 100 μl of 6 N HCl, lipids were extracted and separated by TLC using solvent system 1, and radioactivity was detected by exposing the plate to x-ray film. The positions to which palmitate (substrate) and hexadecanol (product) migrated are shown on the right. B, Sf9 insect cells were infected with the indicated baculovirus vectors, and cell membranes were prepared. Aliquots (75 μg of protein) were incubated in reactions containing 98 μM palmitoyl-CoA, 7 μM [1-14C]palmitoyl-CoA, and 2.5 mM β-NADPH, 2.5 mM β-NADH, or 2.5 mM β-NADPH and 2.5 mM β-NADH, for 30 min at 37°C. Lipids were extracted and analyzed as described in A. The experiments of A and B were repeated at least twice on separate days.

To determine the fatty acid substrate preferences of the FAR enzymes, Sf9 cells were infected with mouse FAR1 or FAR2 cDNA-containing baculoviruses or a control virus expressing the mouse steroid 5β-reductase cDNA, and the cells were incubated with BSA-conjugated 14C-labeled fatty acids of different carbon chain length or saturation (Fig. 5). The FAR1 enzyme preferred C16, C18, C18:1, and C18:2 fatty acids and was less active against other lipids (top panel). With longer exposures of the TLC plate to x-ray film, activity was observed when C10-C14 substrates were added to the medium. All fatty acids tested were incorporated into lipids having the same mobility as monoacylglycerols by an endogenous insect activity and to a lesser extent into diacylglycerol products, indicating that they gained access to biosynthetic enzymes in the infected cells. Experiments done with microsomal membranes from FAR1-expressing Sf9 cells produced similar results with respect to fatty acid substrate preference (data not shown). In contrast to the substrate preference of the FAR1 enzyme, the FAR2 enzyme showed a more narrow specificity for fatty acids, acting with partiality toward saturated C16 and C18 lipids (Fig. 5, middle panel). Longer exposures of the TLC plate to x-ray film showed weak activity against the shorter saturated fatty acids. The control-infected cells expressing steroid 5β-reductase did not reduce any of the fatty acids tested, although all substrates were incorporated into other lipid products by endogenous enzymes (bottom panel).
The subcellular localizations of the FAR1 and FAR2 enzymes were determined by immunocytochemistry (Fig. 6). The mouse cDNAs were engineered to contain FLAG epitopes at the amino termini of the encoded proteins and the resulting modified cDNAs cloned into pCMV expression vectors. The introduction of the FLAG epitope did not affect the fatty acyl-CoA reductase activities of the two modified enzymes when the constructs were expressed in HEK 293 cells (data not shown). Transfection of the DNAs into Chinese hamster ovary-K1 cells followed by staining with anti-FLAG monoclonal antibody showed that both the FAR1 (Fig. 6E) and FAR2 enzymes (Fig. 6H) localized to peroxisomes distributed throughout the cytoplasm of expressing cells. The identification of these vesicular bodies as peroxisomes was confirmed by costaining with a polyclonal antiserum directed against a targeting sequence (serine-lysineleucine, SKL) present in many peroxisomal enzymes (15). As seen in Fig. 6, A, D, and G, this antiserum recognized the same type of subcellular organelle in all cells on the coverslip, and when these rhodamine images were merged with the fluorescein images generated with the anti-FLAG antibody, many peroxisomes in transfected cells were observed to express both antigens (Fig. 6, F and I). DISCUSSION In the current study, we identify two mammalian fatty acyl-CoA reductase enzymes that convert a variety of fatty acids to fatty alcohols. The two FAR isozymes are ϳ58% identical in sequence and are encoded by genes with similar exon-intron structures located on different chromosomes. The mouse FAR1 mRNA is most abundant in the preputial gland and present at lower levels in many other organs and cells. The highest levels of FAR2 mRNA are detected in tissues that are rich in sebaceous glands (eyelid and skin). FAR1 acts on fatty acids of different chain lengths and degrees of saturation, whereas FAR2 prefers saturated C16 and C18 fatty acids as substrates. The FAR enzymes are localized to the peroxisome as judged by immunocytochemistry in transfected cells. The distinct biochemical properties and tissue distributions of the two fatty acyl-CoA reductases suggest that these isozymes perform different functions in lipid metabolism. Pairwise sequence comparisons between the mammalian FAR enzymes and the previously defined plant and insect orthologs reveal ϳ30% amino acid identity (Fig. 1). Searches with the mouse protein sequences indicate putative fatty acyl-CoA reductases in many species, including the toad (Xenopus laevis, GI28277293), mosquito (Anopheles gambiae, e.g. GI31197903, and many others), rat (GI34859004), zebra fish (GI28278322), fruit fly (Drosophila melanogaster, e.g. GI24654209 and many others), and nematode (Caenorhabditis elegans, GI17570463). The prospective reductases are 33% (mosquito) to 96% (rat) identical in sequence to the mouse FAR enzymes, with identity extending over the complete length of the compared proteins. Homologous sequences include socalled "male sterility proteins" that are implicated in lipid synthesis and the formation of the pollen cell wall in plants (17), and in the case of wheat, they have been shown to have fatty acyl-CoA reductase enzyme activity (8). Comparisons between the known and presumed reductases show that only a small number of amino acids are conserved across species. For example, among mouse, plant, and insect proteins, only 61 amino acids (ϳ13%) are identical (Fig. 1). 
Thirteen of the highly conserved residues are either glycines or prolines, which may play structural roles in these proteins. No obvious NADPH cofactor binding or catalysis sites were found among the conserved sequences. In mice and humans, the FAR1 and FAR2 isozymes share ϳ58% sequence identity and are encoded by genes of similar structure, suggesting that they arose from a common evolutionary precursor via duplication. This genetic event is presumably ancient as apparent orthologs for each isozyme are present in several species for which complete genome sequences are available, including the puffer fish (Fugu rubripes, FRUP00000132990 and FRUP00000130769) and the rat (XP_215022.2 and NW_047696.1). The conservation of two isozymes in distantly related species represents one line of evidence that each FAR protein has a different biological function. This idea is supported further by their different fatty acid substrate preferences (Fig. 5) and their differential expression in tissues (Fig. 2). FAR1 is distributed broadly and acts on fatty acids that vary in size and saturation, suggesting that this isozyme plays a general role in the synthesis of fatty alcohols. In contrast, the narrow distribution and substrate preference of the FAR2 isozyme are indicative of a more specialized function. Some support for this division of labor is to be found in the ether lipids (plasmalogens) of tissues expressing the FAR1 enzyme, which have diverse structures consistent with the production and incorporation of a variety of fatty alcohols into this class of lipids (10). Furthermore, the fatty alcohol composition of waxes secreted by the sebaceous glands of mouse skin is different from those of the preputial gland (2), which may reflect the differential expression and substrate specificities of the FAR1 and FAR2 enzymes in these tissues. In the experiments reported here, the FAR1 enzyme was consistently more active in reducing fatty acids than the FAR2 enzyme when assayed in intact cells (e.g. Fig. 3). The reason for this difference was not ascertained but did not appear to be the result of discrepancies in expression level as judged by immunoblotting (data not shown), substrate preference (Fig. 5), or differences in subcellular localization (Fig. 6). Furthermore, FAR2 enzyme activity was lost upon lysis of the cells and could not be preserved or restored by several different treatments. The FAR2 enzyme may require a protein cofactor for activity which is not present in HEK 293 or Sf9 cells. In support of this possibility, a soluble protein identified as a member of the fatty acid-binding protein family was reported to enhance reductase activity in extracts of mouse preputial glands (18). This effect was shown to be caused by the ability of the protein to bind fatty acyl-CoAs, thereby decreasing the effective concentration of the lipid below the critical micelle value and lessening the detergent effects of the substrate. Although BSA was included in all reactions to buffer fatty acyl-CoA concentrations, and no feedback inhibition by substrate was observed with the FAR1 enzyme, we cannot rule out the possibility that FAR2 requires a unique accessory protein for full activity. Future expression cloning experiments with cDNA libraries from tissues such as the eyelid that express high levels of the FAR2 enzyme may identify a stimulatory factor. Both the FAR1 and FAR2 enzymes are localized to the peroxisome (Fig. 
6) and are found in the pellet fraction of high speed centrifugations, suggesting that they are bound to the membrane of this organelle. Hydropathy plots and other sequence analysis programs do not reveal classical transmembrane domain profiles within the reductases; thus, we do not know whether they are integral membrane proteins or tightly bound to the phospholipid bilayer of the peroxisome. Similarly, it is not immediately evident how the FAR proteins are imported into this organelle, because sequence prediction programs that identify peroxisomal proteins (mendel.imp.univie.ac.at) do not reveal a type 1 targeting sequence, and visual scanning fails to uncover a conserved type 2 consensus sequence (19). The presence of FAR enzymes in the peroxisome is consistent with a central role in the production of fatty alcohols for ether lipid biosynthesis. Three of the seven enzymes involved in synthesis of ether lipids are found in this organelle, including alkyl-dihydroxyacetone phosphate synthase, which replaces an sn-1 fatty acid in ester linkage to dihydroxyacetone phosphate with a long chain fatty alcohol to form an alkyl ether intermediate (10). Colocalization of the reductase and synthase within the peroxisome obviates the need for interorganellar transport of the fatty alcohol and presumably facilitates the synthesis of ether lipids. In contrast, the synthesis of wax monoesters by the wax synthase enzyme (Scheme 1), which is localized in the endoplasmic reticulum (see accompanying paper (12)), requires transport of the fatty alcohol across two lipid bilayers. Whether the movement of fatty alcohols represents a controlling step in the synthesis of these and other classes of lipids, and how transport is accomplished, are questions to be answered in future studies.
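As a rough illustration of the kind of screen mentioned above, a minimal check for a canonical C-terminal PTS1 tripeptide can be written in a few lines. This is a deliberately simplified sketch: real predictors weigh an extended C-terminal context, and the tripeptide sets and example sequences used here are assumptions for demonstration only, so a negative result from this toy check does not by itself rule out peroxisomal import.

```python
# Simplified PTS1 screen: looks only at the last three residues.
# Canonical-like tripeptides assumed here: [S/A/C][K/R/H][L/M].
POS1, POS2, POS3 = set("SAC"), set("KRH"), set("LM")

def has_canonical_pts1(protein_seq: str) -> bool:
    tail = protein_seq[-3:].upper()
    if len(tail) < 3:
        return False
    return tail[0] in POS1 and tail[1] in POS2 and tail[2] in POS3

# Hypothetical C-terminal fragments, not the real FAR1/FAR2 sequences.
examples = {
    "catalase-like": "LTNLAGSKL",   # ends in SKL -> canonical PTS1
    "FAR-like":      "RQPTLVRPA",   # no canonical tripeptide
}
for name, frag in examples.items():
    print(name, has_canonical_pts1(frag))
```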
2017-10-10T22:08:17.554Z
2004-09-03T00:00:00.000
{ "year": 2004, "sha1": "f8644a0da94c2ffcaa1d790f81656f23f39ee8fd", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/279/36/37789.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "16dc6461b2f76f9f19868be0796c9b1d7a292621", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14189762
pes2o/s2orc
v3-fos-license
Quantization of Infinitely Reducible Generalized Chern-Simons Actions in Two Dimensions We investigate the quantization of two-dimensional version of the generalized Chern-Simons actions which were proposed previously. The models turn out to be infinitely reducible and thus we need infinite number of ghosts, antighosts and the corresponding antifields. The quantized minimal actions which satisfy the master equation of Batalin and Vilkovisky have the same Chern-Simons form. The infinite fields and antifields are successfully controlled by the unified treatment of generalized fields with quaternion algebra. This is a universal feature of generalized Chern-Simons theory and thus the quantization procedure can be naturally extended to arbitrary even dimensions. Introduction The Chern-Simons action has many applications for physical mechanisms and formalisms. In particular it was used to formulate three-dimensional Einstein gravity [1]. Two of possible reasons why three-dimensional Einstein gravity was successfully formulated by the Chern-Simons action are based on the facts that the action is formulated by differential forms on the one hand and the three-dimensional Einstein gravity has no dynamical degrees of freedom on the other hand. One of the authors (N.K.) and Watabiki have proposed a new type of topological actions in arbitrary dimensions which have the Chern-Simons form [2,3,4]. The actions have the same algebraic structure as the ordinary Chern-Simons action and are formulated by differential forms. It was shown that two-dimensional topological gravities [3] and a four-dimensional topological conformal gravity [4] were formulated by the even-dimensional version of the generalized Chern-Simons actions. It is interesting to ask if the models defined by the generalized Chern-Simons actions are well-defined in the quantum level and thus lead to the quantization of topological gravity. It turns out that the quantization of the generalized Chern-Simons action is highly nontrivial. The reasons are two folds: Firstly the action has a zero form square term multiplied by the highest form and thus breaks regularity condition. Secondly the theory is highly reducible, in fact infinitely reducible, as we show in this paper. Thus the models formulated by the generalized Chern-Simons actions provide its own interesting problems for the known quantization procedures such as Batalin and Vilkovisky formulation of the master equation [5], Batalin, Fradkin and Vilkovisky Hamiltonian formulation [6] and the quantization procedure of cohomological perturbation [7]. It was shown in the quantization of the simplest abelian version of generalized Chern-Simons action that the particular type of regularity violation does not cause serious problems for the quantization [8]. In this paper we investigate nonabelian version of Chern-Simons actions which turn out to be infinitely reducible. We show that the quantization of this infinitely reducible system can be treated successfully by the unified treatment of fields and antifields of the generalized Chern-Simons theory. It is interesting to note that the nonabelian version of the generalized Chern-Simons actions provide the most fruitful examples for the quantization of infinitely reducible systems among the known examples such as Brink-Schwarz superparticle [9], Green-Schwarz superstring [10] and covariant string field theories [11]. 
Generalized Chern-Simons theory The generalized Chern-Simons theory is a generalization of the ordinary three-dimensional Chern-Simons theory to arbitrary dimensions [2,3,4]. The main point of the generalization is to extend the one-form gauge field to a quaternion-valued generalized gauge field A which contains forms of all possible degrees. Correspondingly, the gauge symmetry is extended and is described by a quaternion-valued gauge parameter V. It was shown that this formulation can naturally incorporate fermionic gauge fields and parameters as well. In the most general form, a generalized gauge field A and a gauge parameter V are defined by the component expansions (2.1) and (2.2), where (ψ, α), (ψ̂, α̂), (A, a) and (Â, â) are direct sums of fermionic odd forms, fermionic even forms, bosonic odd forms and bosonic even forms, respectively, and they take values in a gauge algebra. The boldface symbols 1, i, j and k are the elements of the quaternion algebra. The two types of component expansions (2.1) and (2.2), which belong to the Λ− and Λ+ classes, can be regarded as generalizations of odd forms and even forms, respectively. In the even-dimensional formulation a gauge algebra can simply be chosen as an algebra that is closed within commutators and anticommutators. In this case the elements of the Λ− and Λ+ classes fulfill a Z_2 grading structure, with λ+ ∈ Λ+ and λ− ∈ Λ−. In general, a graded Lie algebra is necessary to accommodate the odd-dimensional formulation. The even-dimensional version of the actions proposed by Kawamoto and Watabiki possesses the Chern-Simons form (2.4) [2], where Q = jd ∈ Λ− is the exterior derivative and Tr_k(···) is defined so as to pick up only the coefficient of k from (···) and take the trace over the gauge algebra. The k component of an element of the Λ− class includes only bosonic even forms, and thus the action (2.4) leads to an even-dimensional one. We then need to pick up the d-form terms corresponding to the d-dimensional manifold M. Since this action has the same structure as the ordinary three-dimensional Chern-Simons action, it is invariant under the gauge transformation (2.5). It should be noted that this symmetry is much larger than the usual gauge symmetry, since the gauge parameter V contains many parameters of various form degrees. Since anticommutators as well as commutators of elements of the gauge algebra appear in the explicit form of the gauge transformations, we need to use an algebra which is closed within commutators and anticommutators. A specific example of such an algebra is realized by a Clifford algebra. In general a generalized gauge theory can be formulated for a graded Lie algebra, which includes the supersymmetry algebra as a special example [2]. The equation of motion of this theory is (2.6), where F is the generalized curvature given by (2.7). Infinite reducibility of two-dimensional models Hereafter we consider the action (2.4) in two dimensions with a nonabelian gauge algebra as a concrete example, although we will see that models in arbitrary even dimensions can be treated in a similar way. A simple example of a nonabelian gauge algebra is given by the Clifford algebra c(0, 3) generated by {T_a} = {1, iσ_k ; k = 1, 2, 3}, where the σ_k are the Pauli matrices [3]. For simplicity we omit fermionic gauge fields and parameters in the starting action and gauge transformations. It is, however, easy to recover them in the subsequent formulation.
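For reference, the key formulas referred to above can be written, up to normalization conventions, in the standard generalized Chern-Simons notation. This is a hedged reconstruction only; the precise coefficients and orderings in (2.4)-(2.7) of the original references may differ.

```latex
% Hedged reconstruction of the formulas described in the text; normalization
% conventions of the original (2.4)-(2.7) may differ.
\begin{align*}
  S &= \int_{M}\mathrm{Tr}_{k}\Bigl(\mathcal{A}\,Q\,\mathcal{A}
        + \tfrac{2}{3}\,\mathcal{A}^{3}\Bigr),
        \qquad Q = \mathbf{j}\,d \in \Lambda_{-}, && \text{(cf. 2.4)}\\
  \delta\mathcal{A} &= \bigl[\,Q + \mathcal{A}\,,\;\mathcal{V}\,\bigr], && \text{(cf. 2.5)}\\
  \mathcal{F} &= 0, \qquad
  \mathcal{F} \equiv (Q+\mathcal{A})^{2} = Q\mathcal{A} + \mathcal{A}^{2}. && \text{(cf. 2.6, 2.7)}
\end{align*}
```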
Then the action expanded into components is given by (3.1), where φ, ω_µ and B_{µν} are scalar, vector and antisymmetric tensor fields, respectively, and ǫ^{01} = 1. This Lagrangian possesses gauge symmetries corresponding to (2.5), given by (3.2)−(3.4), where B is defined by B ≡ (1/2) ǫ^{µν} B_{µν} and b_1 by b_1 ≡ (1/2) ǫ^{µν} b_{1µν}. The equations of motion of this theory follow from (3.1). This system is on-shell reducible, since the gauge transformations (3.2)−(3.4) possess zero modes once the on-shell conditions are imposed. However, this is not the end of the story. Indeed this system is infinitely on-shell reducible, i.e., successive reducibilities are given by the relations (3.8)−(3.10), where [ , ]_{(−)^n} is a commutator for odd n and an anticommutator for even n. This fact is more easily understood by using compact notations such as the generalized gauge field A and parameter V. We define V_n from v_n, u_{nµ} and b_n, where v_0 = φ, u_{0µ} = ω_µ and b_0 = B, and thus V_0 = A. Then eqs. (3.2)−(3.4) and (3.8)−(3.10) can be described in the compact form (3.13). Using these notations, it is easy to see the on-shell reducibility (3.14), where we used the equation of motion (2.6). Actually the infinite on-shell reducibility is a common feature of generalized Chern-Simons theories with nonabelian gauge algebras in arbitrary dimensions, which can be understood from the fact that (3.14) is a relation among the generalized gauge fields and parameters. Thus generalized Chern-Simons theories add another category of infinitely reducible systems to known examples like the Brink-Schwarz superparticle [9], the Green-Schwarz superstring [10] and covariant string field theories [11]. Before closing this section, we compare the generalized Chern-Simons theory of the abelian gl(1, R) algebra, which was investigated previously [8], with the model of a nonabelian algebra. In the abelian case commutators in the gauge algebra vanish while only anticommutators remain. Then we can consistently put all transformation parameters to zero except for v_1, u_{1µ} and v_2. This leads to the previous analysis, in which the abelian version was quantized as a first-stage reducible system. In nonabelian cases, however, infinite reducibility is a universal and inevitable feature of the generalized Chern-Simons theories. Minimal sector In this section we present a construction of the minimal part of the quantized action based on the Lagrangian formulation given by Batalin and Vilkovisky [5]. In the construction of Batalin and Vilkovisky, ghosts and ghosts for ghosts and the corresponding antifields are introduced according to the reducibility of the theory. We denote a minimal set of fields by Φ^A, which includes the classical fields and ghost fields, and the corresponding antifields by Φ*_A. If a field has ghost number n, its antifield has ghost number −n − 1. Then a minimal action is obtained by solving the master equation (4.1) with the boundary conditions (4.3) and (4.4), where S_0 is the classical action and Z^{a_n}_{a_{n+1}} Φ^{a_{n+1}} represents the n-th reducibility transformation in which the reducibility parameters are replaced by the corresponding ghost fields. In this notation, the relation with n = 0 in eq. (4.4) corresponds to the gauge transformation. The BRST transformations of Φ^A and Φ*_A are given by (4.5). Eqs. (4.1) and (4.5) assure that the BRST transformation is nilpotent and that the minimal action is invariant under the transformation.
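For orientation, the relations just referred to can be written in the generic Batalin-Vilkovisky form; the concrete content of (4.1) and (4.5) in the paper is assumed to be of this standard type, up to sign conventions, with Φ^A here standing for the ghost-extended field set described above.

```latex
% Standard Batalin-Vilkovisky relations assumed to underlie (4.1) and (4.5).
\begin{align*}
  (X,Y) &= \frac{\partial_{r}X}{\partial\Phi^{A}}
           \frac{\partial_{l}Y}{\partial\Phi^{*}_{A}}
         - \frac{\partial_{r}X}{\partial\Phi^{*}_{A}}
           \frac{\partial_{l}Y}{\partial\Phi^{A}},
  \qquad (S_{\min},S_{\min}) = 0, && \text{(cf. 4.1)}\\
  s\,\Phi^{A} &= (\Phi^{A},S_{\min}),
  \qquad s\,\Phi^{*}_{A} = (\Phi^{*}_{A},S_{\min}). && \text{(cf. 4.5)}
\end{align*}
```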
In the present case it is difficult to solve the master equation (4.1) order by order with respect to the ghost number because the theory we consider is infinitely reducible. We would need to solve an infinite set of equations following the introduction of an infinite set of ghost fields: ghosts, ghosts for ghosts, ..., and the corresponding antifields. There is, however, a way to circumvent the difficulties by using the characteristics of the generalized Chern-Simons theory, in which fermionic and bosonic fields, and odd and even forms, can be treated in a unified manner. First we introduce the infinite set of fields C_n, C_{nµ}, C_n = (1/2) ǫ^{µν} C_{nµν}, with n = 0, ±1, ±2, ..., ±∞ (4.6), where the index n indicates the ghost number of the field. The fields with ghost number 0 are the classical fields. The fields with even (odd) ghost numbers are bosonic (fermionic). It is seen from eqs. (3.2)−(3.4) and (3.8)−(3.10) that the field content for ghosts and ghosts for ghosts in the minimal set is completed in the sector with n > 0, while the necessary degrees of freedom for antifields are saturated for n < 0. We will later identify the fields with negative ghost numbers as antifields. We now define a generalized gauge field A of the form (2.1) that contains these infinite fields according to their Grassmann parities and form degrees. We then introduce a generalized action for A as in (4.12), where the upper index 0 on Tr indicates that only the part with ghost number 0 is picked up. This action is invariant under the transformation (4.14), where F is the generalized curvature (2.7) constructed from A and λ is a fermionic scalar parameter with ghost number −1. It should be understood that the same ghost number sectors must be equated in eq. (4.14). Since F and iλ belong to Λ+ and Λ−, respectively, their product on the right-hand side of eq. (4.14) belongs to the same Λ− class as A. The invariance of the action S under the transformation (4.14) can be checked by a short manipulation in which the subscript j plays a role similar to that of the subscript k, i.e., it picks up only the coefficient of j in the trace. The change of the subscript k to j is necessary to take i into account in the trace, in accordance with ji = −k. Here we have simply ignored the boundary term, and thus the invariance is valid up to the surface term. We now show that the right variation s defined by δ_λ A = s A λ is the BRST transformation. First of all, this transformation is nilpotent, as follows from the generalized Bianchi identity. Next we need to show that the transformation s is realized in the antibracket form of (4.5). The invariance of S under (4.14) implies that S is indeed the minimal action if we make a proper identification of the fields of negative ghost number with antifields. It is straightforward to see that the BRST transformations (4.5), both for fields and antifields, are realized under the identifications (4.18) with S_min = S, where ǫ^{−1}_{µν} is the inverse of ǫ^{µν}, ǫ^{µρ} ǫ^{−1}_{ρν} = δ^µ_ν (†). This shows that we have obtained a solution of the master equation (4.1), δ_λ S_min = (S_min, S_min) · λ = 0 (4.19). († To be precise, the antifields are defined as C*_n = C*^a_n η^{−1}_{ab} T^b, ..., with Tr T_a T_b = η_{ab}.)
It is easy to see that this solution satisfies the boundary conditions (4.3) and (4.4) by comparing the gauge transformations (3.2)−(3.4) and the reducibilities (3.8)−(3.10) with the ghost-number expansion of S_min. Thus the action S_min = S with the identification (4.18) is the correct solution of the master equation for the generalized Chern-Simons theory. It is easy to see that this minimal action also satisfies the quantum master equation. For completeness we give the explicit forms of the BRST transformations of the minimal fields (up to (4.25)), where the identification (4.18) should be understood. It is critical in our construction of the minimal action that the action of the generalized theory possesses the same structure as the Chern-Simons action and that fermionic and bosonic fields are treated in a unified manner. It is interesting to note that the starting classical action, which includes only bosonic fields, and the quantized minimal action, which includes the infinite series of bosonic and fermionic fields, have the same form (2.4), with the classical generalized gauge field replaced by its ghost-extended counterpart A containing the infinite tower of ghosts introduced above. This is reminiscent of string field theories, whose actions have the Chern-Simons form: a string field contains an infinite series of ghost fields and antifields, and the quantized minimal action also takes the same Chern-Simons form [11]. It is also worth mentioning that there are other examples where classical fields and ghost fields are treated in a unified way [12]. It is obvious that the minimal action for the generalized Chern-Simons theory in arbitrary even dimensions can be constructed in the same way as in the two-dimensional case, because the classical action (2.4), the symmetries (2.5), the reducibilities (3.13), the minimal action (4.12) and the BRST transformation sA = −Fi are all described by using generalized fields and parameters. Gauge fixed action The gauge degrees of freedom are fixed by introducing a nonminimal action, which must be added to the minimal one, and choosing a suitable gauge fermion. Though the number of gauge-fixing conditions is determined in accordance with the "real" gauge degrees of freedom, we can prepare a redundant set of gauge-fixing conditions and then compensate for the redundancy by introducing extraghosts. Indeed Batalin and Vilkovisky gave a general prescription to construct a nonminimal sector by this procedure [5]. This prescription is, however, inconvenient in the present case since it leads to a doubly infinite number of fields (antighosts, extraghosts, ...), where "doubly infinite" means infinities in both the vertical and the horizontal direction of the triangular tableau of ghosts. We can instead adopt gauge-fixing conditions such that no such extra infinite series appear while the propagators of all fields remain well-defined. This type of gauge-fixing prescription, which is unconventional for the Batalin-Vilkovisky formulation, is known, for example, in the quantization of topological Yang-Mills theory [13]. In the present case, we found that in each ghost number sector the standard Landau-type gauge fixing for the vector and antisymmetric tensor fields is sufficient to achieve a complete gauge fixing. After taking the above points into account, we introduce the nonminimal action (5.1), whose integrand has the form Tr( C̄*_n b_{n−1} + C̄*_{nµ} b^µ_{n−1} + η*_{n−1} π_n ), where the index n indicates a ghost number, except that the ghost number of b_n is −n, and even (odd) ghost number fields are bosonic (fermionic), as usual.
The BRST transformations of these fields are defined by this nonminimal action. Next we adopt the gauge fermion Ψ of (5.3), which leads to a Landau-type gauge fixing, where we assume a flat metric for simplicity. The antifields can then be eliminated in the standard way, by setting each antifield equal to the derivative of Ψ with respect to the corresponding field, which yields the complete gauge-fixed action S_tot. Thus the gauge fermion (5.3) is a correct choice and the gauge degrees of freedom are fixed completely. We can consistently determine the hermiticity of the fields with the convention λ† = −λ in eq. (4.14). An important comment is in order here. There is a universal feature of models of infinitely reducible systems with finite degrees of freedom: the number of "real" gauge degrees of freedom is half of the original degrees of freedom [9,10]. The known examples of infinitely reducible systems share this characteristic. In the present two-dimensional model, there are four parameters v_n, u_{nµ} and b_n for each stage of the reducibility. The "real" number of gauge-fixing conditions is 3 − 1 = 2, where the three gauge-fixing conditions ∂^µ C_{n−1 µ} = 0 and ǫ^{−1}_{µν} ∂^ν C_{n−1} = 0 (the latter carrying a free index and thus counting as two conditions) are linearly dependent owing to ∂^µ (ǫ^{−1}_{µν} ∂^ν C_{n−1}) = 0, and thus we needed to impose an extra condition ∂_µ C̄^µ_n = 0. Conclusions and discussions We have investigated the quantization of the two-dimensional version of the generalized Chern-Simons theory with a nonabelian gauge algebra by the Lagrangian formalism [5]. We have found that models formulated by the generalized Chern-Simons theory are in general infinitely reducible, and thus their quantization is highly nontrivial. We have derived the on-shell nilpotent BRST transformation and the BRST-invariant gauge-fixed action for this infinitely reducible system. We have confirmed that the propagators of all fields are well-defined in the gauge-fixed action. It is important to recognize that the starting classical action includes only bosonic fields, while the quantized minimal action includes infinite series of both bosonic and fermionic ghost fields, which are treated in a unified way by the generalized Chern-Simons formulation. It is a characteristic of the generalized Chern-Simons theory that the quantized minimal action has the same Chern-Simons form as the classical action. The quantization is successfully carried out, although other possible problems appear in connection with the introduction of the infinitely many fields. It is then an important question whether we can treat the quantum effects of the infinitely many ghost fields consistently. We have obtained some evidence that the quantum effects of the infinitely many ghost fields can be treated in a systematic way and lead to a finite contribution. To be specific, as a related example, the classical action is independent of the space-time metric, but it is not obvious that the quantized theory is topological because of the on-shell reducibility. A similar situation occurs in the nonabelian BF theories [14]. We can, however, prove the metric independence of the partition function by regularizing the quantum effects of the infinitely many ghost contributions in a specific but natural way. It is also important to analyze the quantum effects in correlation functions of physical operators. The details of these points will be given in a subsequent publication [15]. It is interesting to consider the physical aspects of the introduction of the infinite number of ghost fields.
An immediate consequence is a democracy of ghosts and classical fields, i.e., the classical fields are simply the zero ghost number sector among infinitely many ghost fields. The classical gauge fields and ghost fields have no essential difference in the quantized minimal action. In the present paper we have not introduced fermionic gauge fields in the starting action but it is straightforward to introduce fermionic gauge fields [2] and carry out quantization. The classical fermionic fields are just zero ghost number sector among infinitely many ghost fields in a quantized action, just the same as in the bosonic sector. It is tempting to speculate that fermionic matter fields may be identified as a special and possibly infinite combination of ghost fields because the fermionic and bosonic sectors couple in the standard covariant form in the quantized minimal action of the generalized Chern-Simons theory. In the analyses of the quantization of the generalized Chern-Simons theory with abelian gl(1, R) algebra, it was pointed out that a physical degree of freedom which did not exist at the classical level appeared in the constant part of the zero form field φ at the quantum level due to the violation of the regularity [8]. We know that a zero form field plays an important role in the generalized Chern-Simons theories as emphasized in the classical discussion [3,4]. In particular a constant component of the zero form field played a role of physical order parameter between the gravity and nongravity phases. We find it is important to clarify the mechanism how the physical constant mode of the zero form field plays the role of possible order parameter in the quantum level. This question is essentially related to the regularity violation in the nonabelian version of the generalized Chern-Simons theory. It is, however, expected that this question will be better clarified in the Hamiltonian formalism quantization. We have already found that the BRST invariant gauge-fixed action obtained from the Hamiltonian formalism coincides with that of the Lagrangian formulation. These points will also be discussed in a subsequent publication [15]. Finally we point out that the quantization procedures of the generalized Chern-Simons theories given in this paper is universal and thus naturally extended to arbitrary even dimensions. To derive nonminimal action, however, we need to count the genuine independent degrees of freedom in the gauge transformation and impose a gauge-fixing by choosing an adequate gauge fermion. It seems to be a general feature that the independent gauge degrees of freedom is just a half of the original degrees of freedom. In the Hamiltonian formalism we found a reasoning that this should be the case.
2014-10-01T00:00:00.000Z
1997-02-25T00:00:00.000
{ "year": 1997, "sha1": "817bcf9b6b9ca47dd8eec689b02683f81745f0aa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9702172", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5ae3e639924fd594239734a089d5bc9e02b80dfa", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
258476673
pes2o/s2orc
v3-fos-license
Ecological Factors of Telemental Healthcare Utilization Among Adolescents with Increased Substance Use During the COVID-19 Pandemic: The Moderating Effect of Gender Background Adolescent substance use is often associated with concurrent mental health problems and related risk factors (e.g., depression, suicide attempts, parental emotional and physical abuse, not feeling close to people at school, and lower virtual connectedness) at multiple ecological levels. Objective This study examined whether such risk factors among adolescents were associated with the use of telemental healthcare (TMHC) and whether gender moderated these associations. Methods Data were drawn from the Adolescent Behaviors and Experiences Survey, collected by the U.S. Centers for Disease Control and Prevention from January to June 2021. A hierarchical multiple logistic regression analysis was conducted using a national sample of 1,460 students in Grades 9–12 in the United States who reported having used more alcohol and/or drugs during the pandemic than before it started. Results The results showed that only 15.3% of students sought TMHC. Students reporting increased substance use during the pandemic were more likely to use TMHC if they experienced more severe mental health problems (e.g., suicide attempts) compared to other ecological factors, such as issues with their family, school, or community. Analysis of the moderating effect showed that the closer male students felt to people at school, the more likely they were to seek TMHC, whereas the opposite was true for female students. Conclusions The findings highlighted that feeling close to people at school is an important aspect of understanding the help-seeking behavior of female and male adolescent substance users. Introduction The outbreak of coronavirus disease 2019 (COVID-19), caused by the novel coronavirus SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), created unprecedented challenges for adolescents due to abrupt and drastic government measures, such as lockdowns and school closures, upending the protective factors for mental health (e.g., schooling, social support, and community support; Lee 2020). Moreover, the panic surrounding the disease caused a heightened sense of isolation, contributing to profound mental health consequences, including suicide (Gunnell et al., 2020). Extant data assessing substance use changes among Canadian adolescents from before to during the COVID-19 pandemic showed that the percentage of users of most substances decreased; however, the frequency of both alcohol and cannabis use increased (Dumas et al., 2020). Among a nationally representative sample of students in grades 9-12 (N = 7,705) in the United States from January to June 2021, 33.7% reported past illicit drug (marijuana, synthetic marijuana, cocaine, other illegal drug, or prescription opioid misuse) usage, and 31.4% reported they used more drugs during the COVID-19 pandemic (Brener et al., 2022). Dumas et al. (2020) revealed that most adolescents engaged in solitary use (49.3%) but also used substances with peers via technology (31.6%) or face-to-face (23.6%) during COVID-19. The COVID-19 pandemic had a significant impact on adolescents' mental health, likely affecting their substance use behaviors. According to Dumas et al. (2020), high rates of depression and anxiety among adolescents are linked to COVID-19.
Among at-risk groups, including those with preexisting mental health conditions or those living in abusive settings, the COVID-19 environment of isolation significantly impaired adolescents' mental health conditions (Thorisdottir et al., 2021). Consequently, studies documented a high demand for mental health services for adolescents with substance use problems during the pandemic (Czeisler et al., 2020;Racine et al., 2020). The COVID-19 pandemic caused severe economic issues and social and educational crises (Barron et al., 2021), disrupting existing mental health systems. The National Institute of Mental Health (2022) found that only 40% of adolescents with diagnosed conditions sought help. Lu et al.'s (2021) survey reported that less than 12% of adolescents with co-occurring conditions of major depression and substance use utilized mental health services. Some challenges surrounding adolescents in seeking mental healthcare include mental health literacy (Radez et al., 2021), fear, lack of family support (Al Omari et al., 2022), self and social stigma (Fante-Coleman & Jackson-Best, 2020), and past negative experiences (Gulliver et al., 2010). Although the need for telemental healthcare (TMHC) services (mental healthcare services via telecommunication or videoconferencing; Gulliver et al., 2010;Toscos et al., 2019) has been documented (Tausch et al., 2022) for college students or emerging adults (e.g., Crumb et al., 2021;Toscos et al., 2019), few studies have investigated TMHC frequency among at-risk adolescents who use alcohol and/or drugs and have concurrent mental health issues. What motivates adolescents with increased substance use to seek TMHC is also unclear. Only a few studies have examined TMHC use among high school adolescents, and none have considered the effect of gender. How gender affected adolescents' outcomes and their motivation to seek TMHC or other mental health services during the pandemic remains unclear. Equally, the role gender plays in the association between concurrent mental health problems and TMHC utilization among adolescents with increased substance use is uncertain. Although Mackenzie et al. (2006) found that females were more likely than males to seek professional mental health services, no study has hitherto analyzed the role of gender in adolescents seeking TMHC services. We examined how different mental health problems at multiple ecological levels increased adolescents' willingness to seek TMHC during the COVID-19 pandemic and whether these associations varied by gender. Theoretical Frameworks Guided by the ecological systems theory (Bronfenbrenner, 1986) and the health belief model (HBM; Rosenstock et al., 1988), we examined adolescents' lived environment during the COVID-19 pandemic and their utilization of TMHC to address mental health and substance use. The ecological systems perspective of a person in the environment focuses on the interactions between the individual and their social environment. Bronfenbrenner's ecological systems theory emphasizes the dependency of living creatures on their surroundings. Healthcare utilization among adolescents, therefore, is the influence of the interaction between the inherent qualities of an adolescent's perception of the quality of TMHC during COVID-19. This interaction provides synergy with the HBM by predicting adolescents' beliefs about their mental health conditions and the key factors influencing health-seeking behaviors. 
Bronfenbrenner (1979, 1986) divided the person's environment into five different systems (i.e., the microsystem, mesosystem, exosystem, macrosystem, and chronosystem). Adolescents' immediate environmental risk factors at the individual (e.g., depression and suicide attempts), family (parental physical or emotional abuse), school (not feeling close to people at school), and community (virtual disconnectedness from family, friends, and other groups) levels are interactions that may heavily influence their well-being, including mental health, and their beliefs and behaviors (Addison, 1992). However, individuals' willingness to seek help is a function of the value they assign to the problem (Hutchison, 2019) and their beliefs about the benefits of taking action (Murno et al., 2007; Solanki et al., 2022). The HBM, therefore, outlines six constructs that predict health behavior: risk susceptibility, risk severity, benefits to action, barriers to action, self-efficacy, and cues to action (Becker, 1974; Murno et al., 2007; Solanki et al., 2022). The theory assumes that people are likely to take action to avoid a health threat if they believe they have a specific risk of contracting the disease or condition (perceived susceptibility) and consider the situation to pose a serious threat (perceived severity; Rosenstock et al., 1988). Bronfenbrenner's ecological theory is an effective behavioral model for guiding adolescents' mental health interventions (Eriksson et al., 2018). It considers the individual, family, school, and community in the improvement and sustainability of healthy habits in people (Israel, 1985). However, it does not address health beliefs and associated benefits. Toscos et al. (2019) found significant and positive associations between adolescents' symptoms of depression and suicidality and TMHC help-seeking behaviors. Adolescents reporting elevated symptoms of depression and suicidality had greater use of mental health technology resources (Toscos et al., 2019). Although TMHC, conceptually referred to as teletherapy, telepsychology, or telepsychiatry, has existed for several decades (Hilty et al., 2013; National Institute of Mental Health, 2022), it currently encompasses a broad range of mental health treatment modalities, often offered via online forums, text messaging, or web-based platforms. TMHC has greater accessibility (Fairchild et al., 2020) and applicability (Orsolini et al., 2021) and has been demonstrated to produce better mental health outcomes for adolescents dealing with depression and suicidality than traditional modalities (Calear et al., 2009). Individual Factors (Depression and Suicidality) and TMHC The utility of TMHC was highlighted in a pilot study conducted among children and adolescents in rural emergency departments, which found TMHC to be accessible for youth dealing with depression and suicidality and to reduce long emergency department wait times (Fairchild et al., 2020). Likewise, a systematic review of 29 studies focusing on the applicability of TMHC found it was an efficacious alternative for resolving barriers to healthcare delivery among adolescents with depression and suicidality (Orsolini et al., 2021). Another study conducted among students in Grades 9 to 12 (N = 2,789) found slightly over 16% reporting prior experience using TMHC resources, including talking to an online counselor, anonymous chat, using a self-help mobile app, and/or texting a crisis counseling text line (Toscos et al., 2019).
Findings reported by Toscos and colleagues (2019) indicated that adolescents with symptoms of depression preferred to have anonymous chats and counseling through crisis text lines, whereas those with suicidality preferred to use all available types of TMHC resources. Thus, the use of TMHC services was contingent on the availability of resources during the COVID-19 pandemic (Racine et al., 2020). Family Factors (Parental Abuse) and TMHC Recent research suggests that internet-delivered treatment modalities are viable for addressing child maltreatment, including psychological and physical abuse, neglect, and sexual abuse, among adolescents who experience or witness childhood adversity (Stewart et al., 2021). TMHC removes barriers to treatment access for child maltreatment and expands treatment services to diverse populations (Racine et al., 2020). It enhances client treatment adherence (i.e., attendance and engagement), leads to shorter treatment durations, and improves mental health outcomes (Racine et al., 2020). Technology-based trauma treatment (i.e., trauma-focused cognitive behavioral therapy) has been found efficacious in reducing behavioral problems (Comer et al., 2015; Stewart et al., 2017). However, TMHC child maltreatment research remains sparse, with mixed evidence of its implementation or efficacy. Some studies showed a significant but negative relationship between child maltreatment and TMHC (Comer et al., 2015; Stewart et al., 2017). A significant finding, however, is that TMHC treatment modalities for child maltreatment have known limitations, such as ensuring children's and adolescents' safety, reliable technologies, and the lack of a confidential or private space for processing trauma (Comer et al., 2015; Racine et al., 2020). School and Community Factors, and TMHC Research on the association between TMHC and school connectedness (i.e., students feeling valued and having a sense of acceptance), virtual connectedness, and symptoms of mental health is mixed. Some research has found significant associations between social connectedness and feelings of loneliness, depression, social anxiety, and mental health challenges (Wu et al., 2016). For example, Jones and colleagues (2022) found that adolescents' virtual connectedness with others (i.e., family, friends, or peer groups via electronic devices) improved their mental health difficulties and lowered their suicidality risk during the COVID-19 pandemic. However, despite a high prevalence of mental health problems among adolescents, many do not seek professional help (Sheffield et al., 2004). Some prefer to manage their mental health independently (Kuhl et al., 1997) or seek help from their friends and families (Sheffield et al., 2004). A majority of adolescents do not want to talk to unfamiliar adults, such as teachers and counselors, about mental health issues; instead, they are more likely to speak to informal sources, such as their friends or family members, to solve their problems (Raviv et al., 2009). Gender Differences in Ecological Factors and TMHC The literature demonstrates that adolescent females are more likely to seek mental health services from friends and informal relationships than their male counterparts (Raviv et al., 2009). The literature suggests that the high level of help-seeking among females may be because they perceive their mental health situation to be more severe (Doherty & Kartalova-O'Doherty, 2020).
Further, they may have experienced fewer negative experiences with mental health help-seeking (e.g., stigmatization, fear or adverse outcomes, or difficulty accessing help). Female adolescents are also slightly more likely to have prior TMHC utilization services and knowledge of web-based technologies, anonymous online chats, and self-help resources compared to their male counterparts (Toscos et al., 2019). Yet, some research suggested that male adolescents may have more positive mental health outcomes utilizing TMHC, including reducing depressive symptoms, compared to females (Calear et al., 2009). Current Study The circumstances identified in the literature point to various internal and external adolescent risk factors at multiple ecological levels (e.g., individual, relationship, community, and societal) that converge to impact adolescents' perceptions of mental health and their help-seeking behavior. However, little is known about how these risk factors are associated with high school adolescents' uptake of TMHC during the COVID-19 pandemic. Further research is warranted to examine individual factors (e.g., depression and suicidality), family factors (e.g., physical and psychological maltreatment), school factors (e.g., feeling connected to others), and community factors (e.g., virtual access to resources and connectedness) that might influence adolescents to seek help-particularly those who perceive that they are susceptible to substance use and mental health issues. Utilizing the ecological systems theory and the HBM, we hypothesized that adolescents would be more likely to seek and experience TMHC if they perceived that they were susceptible to mental health problems or substance misuse. This study examined (a) whether adolescents' ecological risk factors (depression, suicidal attempts, parental emotional abuse, parental physical abuse, low virtual connectedness, and not feeling close to people at school) and substance use during the COVID-19 pandemic would drive adolescents to use TMHC services and (b) whether these associations would be moderated by gender. Specifically, based on the literature indicating higher rates of help-seeking behaviors among females (Mackenzie et al., 2006;Raviv et al., 2009;Toscos et al., 2019), this study hypothesized that the effects of these ecological risk factors on TMHC use would be greater among female adolescent substance users than their male counterparts during the COVID-19 pandemic. Data Source and Sample The Adolescent Behaviors and Experiences Survey (ABES) conducted among U.S. high school students from January-June 2021 was the data source. The ABES study, an adapted version of the Youth Risk Behavior Surveillance System (YRBS) survey (Underwood et al., 2020), aimed to assess student behaviors and experiences during the COVID-19 pandemic. The survey included 97 questions from the 2021 YRBS questionnaire; six were modified to allow students who were attending only virtually. The ABES survey contains 12 questions on COVID-19-related behaviors that are not included in the YRBS questionnaire. To obtain a nationally representative sample of public and private school students in Grades 9-12 in all 50 U.S. states and the District of Columbia, a three-stage cluster sampling approach was used. Classes were randomly selected, and data collection was facilitated by teachers who provided the students access to the anonymous online survey. 
Given the different instructional models used across the nation during the pandemic (i.e., in-person only, virtual-only, and hybrid), the ABES was designed as a self-administered, anonymous 110-item questionnaire that took 30 min on average to complete. A total of 7,998 students submitted surveys, and 7,705 had valid data (i.e., ≥ 20 questions answered). The school response rate was 38%, the student response rate was 48%, and the overall response rate was 18%. The current study used a sample of 1,460 students who reported having used more alcohol and/or drugs during the COVID-19 pandemic than before it started. Specifically, we included students who answered "strongly agree" or "agree" to the following two questions: (a) Do you agree or disagree that you drank more alcohol during the COVID-19 pandemic than before it started? (b) Do you agree or disagree that you used drugs more during the COVID-19 pandemic than before it started? (Count using marijuana, synthetic marijuana, cocaine, prescription pain medicine without a doctor's prescription, and other illegal drugs.) After deleting cases with missing values, the final sample size for this study was 1,216. Dependent Variable TMHC was measured using the following question: "During the COVID-19 pandemic, did you receive mental healthcare, including treatment or counseling for your use of alcohol or drugs, using a computer, phone, or other device?" We coded these responses in binary form (1 = no and 2 = yes). Independent Variables In this study, we included six ecological factors as independent variables: depression (individual factor), suicide attempts (individual factor), parental emotional abuse (family factor), parental physical abuse (family factor), feeling close to people at school (school factor), and virtual connectedness (community factor). Depression was assessed using the following question: "During the past 12 months, did you ever feel so sad or hopeless almost every day for 2 weeks or more in a row that you stopped doing some usual activities?" The response options were yes and no. These symptoms have frequently been described as depressive (Kim et al., 2018). Suicide attempts were measured with the question: "During the past 12 months, how many times did you actually attempt suicide?" This measure was assessed on a 5-point Likert scale (1 = 0 times, 2 = 1 time, 3 = 2 or 3 times, 4 = 4 or 5 times, and 5 = 6 or more times), with higher scores indicating more suicide attempts. Parental emotional and physical abuse were measured using the following questions: (a) "During the COVID-19 pandemic, how often did a parent or another adult in your home swear at you, insult you, or put you down?" (b) "During the COVID-19 pandemic, how often did a parent or other adult in your home hit, beat, kick, or physically hurt you in any way?" The response categories to the questions ranged from 1 (never) to 5 (always). For analysis purposes, the variables were binary coded, reflecting never and rarely as 0 (no) and sometimes, most of the time, and always as 1 (yes). Virtual connectedness during the pandemic was measured by participants' responses to the following question: "During the COVID-19 pandemic, how often were you able to spend time with family, friends, or other groups, such as clubs or religious groups, using a computer, phone, or other device?" The response options ranged from 1 (never) to 5 (always).
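To make the binary recoding just described concrete, the following is a small illustrative sketch, not the authors' actual code; the column names are invented stand-ins for the ABES variables.

```python
import pandas as pd

# Hypothetical column names; the real ABES variable names differ.
df = pd.DataFrame({
    "emotional_abuse_freq": [1, 2, 3, 4, 5],   # 1 = never ... 5 = always
    "physical_abuse_freq":  [1, 1, 2, 3, 5],
    "telemental_care":      ["no", "yes", "no", "yes", "no"],
})

# Never/rarely (1-2) -> 0 (no); sometimes/most of the time/always (3-5) -> 1 (yes),
# matching the recoding rule described in the Measures section.
for col in ["emotional_abuse_freq", "physical_abuse_freq"]:
    df[col.replace("_freq", "")] = (df[col] >= 3).astype(int)

# Outcome coded 1 = no, 2 = yes in the survey; recode to 0/1 for modeling.
df["tmhc"] = (df["telemental_care"] == "yes").astype(int)
print(df[["emotional_abuse", "physical_abuse", "tmhc"]])
```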
The feeling of closeness to people at school was assessed with the question, "Do you agree or disagree that you feel close to people at your school?" This measure was assessed on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicated stronger feelings of closeness to people at school. Moderator Gender was the moderator in this study, which was a dichotomous variable (0 = male and 1 = female). Data Analysis Descriptive, bivariate (chi-square tests and independent samples t-tests), and multivariate (hierarchical multiple logistic regression) analyses were conducted using SPSS version 27. The percentages of missing data varied from 0.1% for age to 13.4% for suicide attempts. We omitted cases that had missing data from the analysis. With a large sample size (N = 1,216), list-wise deletion is unlikely to influence the results (Hair et al., 2014). Additionally, before the hierarchical multiple logistic regression was conducted, an ordinary least squares (OLS) regression was performed to identify multicollinearity issues among the independent variables. The outcomes of the OLS regression revealed no multicollinearity issues. Specifically, tolerance was > 0.1 and the variance inflation factor was < 10 (Hair et al., 2014). Finally, we ran a three-step hierarchical multiple logistic regression. Sociodemographic control variables, including age and race, were entered into Model 1. The six ecological factors (depression, suicide attempts, parental emotional abuse, parental physical abuse, feeling close to people at school, virtual connectedness) and the moderator (gender) were then entered into Model 2. The interaction variables between each ecological factor and the moderator were entered into Model 3. The −2 log-likelihood and Nagelkerke R² values were reported at each step. Descriptive and Bivariate Results As reported in Table 1, more than half of the sample (55.1%) self-identified as female. Regarding age, the largest group (28.5%) was 17 years, followed by 16 years (27%), 15 years (21.6%), 18 years or older (15.9%), and 14 years or younger (7.1%). In terms of race, just over half of the sample (52.5%) was White, 12.7% were Black or African American, 4.8% were Hispanic or Latino, and 30.0% of the students were from other racial groups. Ecological Factors and TMHC by Gender Table 1 also shows the bivariate results of differences in ecological factors between male and female students. Among the students who used more alcohol and/or drugs during the pandemic, 67.2% reported they had ever been depressed and 20.5% had attempted suicide. Depression and suicide attempts were significantly different between male and female students, χ² = 84.46, p < .001 vs. χ² = 30.34, p < .001, respectively, with more female than male students reporting depression and suicide attempts. Regarding parental abuse, 53.2% of the respondents reported they had experienced parental emotional abuse and 9.0% had experienced parental physical abuse during the COVID-19 pandemic. Furthermore, parental emotional abuse and physical abuse were significantly associated with gender, χ² = 62.67, p < .001, and χ² = 13.89, p < .001, respectively. More females than males experienced parental emotional abuse (63.4% vs. 40.7%), and females and males experienced similar levels of parental physical abuse (9.1% vs. 8.8%). Among students, 33% reported they were not virtually connected with family, friends, or other groups, and this was significantly associated with gender, χ² = 13.15, p < .05.
Specifically, 36.8% of the male students and 29.8% of the female students reported they never or rarely had a virtual connection. Overall, 33.4% reported they did not feel close to people at school, and this was also significantly associated with gender, χ² = 20.33, p < .001. Just over a quarter (27.8%) of the male students and 38.1% of female students reported not feeling close to people at school during the pandemic. In terms of mental healthcare use, 15.3% reported they had ever received TMHC during the pandemic, and 13.4% of the male students and 16.9% of the female students reported they had ever received TMHC during the pandemic. TMHC use did not significantly differ by gender. Ecological Factors by TMHC As shown in Table 2, depression was significantly associated with TMHC, χ² = 18.04, p < .001. Among students who experienced depression, 18.4% reported they had ever received TMHC during the pandemic and 81.6% reported they had never received TMHC. Likewise, suicide attempts were significantly associated with TMHC. According to the independent samples t-test, students who had never received TMHC during the COVID-19 pandemic showed lower suicide attempts (M = 1.27) than students who had ever received TMHC during the pandemic (M = 1.74), t = -5.79, p < .001. The chi-square test also showed a significant association between suicide attempts and TMHC, χ² = 68.79, p < .001. Among students who attempted suicide only one time, 32.1% reported they ever received telemental health. Among those who attempted suicide two or three times, 28% reported they had ever received TMHC during the pandemic. Among those who attempted suicide four or five times, 42.9% reported they had ever used telemental health. Among students who attempted suicide six or more times, 42.1% reported they had ever received TMHC. Parental emotional abuse and physical abuse were also significantly associated with TMHC, χ² = 15.98, p < .001, χ² = 13.82, p < .001, respectively. Among students who had experienced parental emotional abuse, 19.2% reported they had utilized TMHC. Among students who had experienced parental physical abuse, 27.5% reported they had received TMHC. However, virtual connectedness and TMHC were not significantly associated. Feeling close to people at school was inversely associated with TMHC. According to the independent samples t-test, students who had never received TMHC during the COVID-19 pandemic showed higher levels of feeling close to people at school (M = 3.15) than students who had ever received TMHC during the pandemic (M = 2.80), t = 3.08, p < .001. The chisquare test also showed a significant association between feeling close to people at school and TMHC, χ² = 20.72, p < .001. Among students who had never felt close to people at school, 24.4% reported they had received TMHC and among those who rarely felt close to people at school, 18.1% reported they had used TMHC during the pandemic. Table 3 provides the correlations between ecological factors and TMHC. Depression, r = .12, p < .001, suicide attempts, r = .21, p < .001, parental emotional abuse, r = .12, p < .001, and parental physical abuse, r = .11, p < .001, were all positively correlated with TMHC. However, feeling close to people at school was negatively correlated with TMHC, r = − .10, p < .001. First, we entered the demographic control variables into Model 1 but found none to be significant. 
Next, the six ecological variables (depression, suicide attempts, parental emotional abuse, parental physical abuse, feeling close to people at school, virtual connectedness) and the moderator (gender) were entered into Model 2 to examine the association between ecological factors and TMHC. All demographic control variables remained nonsignificant in Model 2, except for suicide attempts and feeling close to people at school, which were significantly associated with TMHC. For every additional category of suicide attempt, the odds of TMHC usage were increased by 59% (OR = 1.59, 95% CI [1.32, 1.92]). Likewise, for every additional category of feeling close to people at school, the odds of TMHC usage were decreased by 13% (OR = 0.87, 95% CI [0.77, 0.99]). Hierarchical Multiple Logistic Regression Results Lastly, the interaction variables were entered into Model 3, as shown in Table 4, to examine the moderating effect of gender on the associations between ecological factors and TMHC. In Model 3, only suicide attempts (OR = 1.96, 95% CI [1.45, 2.66]) remained significant. In Model 3, the moderating effect of gender on the association between feeling close to people at school and TMHC was significant (OR = 0.74, 95% CI [0.57, 0.96]). As illustrated in Fig. 1, the association between feeling close to people at school and TMHC was positively associated with being male, but it was negatively associated with being a female student. When male students felt close to people at school, they were more likely to access TMHC. On the other hand, when female students felt close to people at school, they were less likely to access TMHC. Discussion The present study focused on adolescents with increased substance use during the COVID-19 pandemic and the ecological factors (i.e., depression, suicide attempts, parental emotional and physical abuse, virtual connectedness, and feeling close to people at school) related to their use of TMHC and examined gender as a moderating factor. The descriptive results showed that only 15.3% of adolescents sought TMHC (84.7% did not) and that 13.4% of males and 16.9% of females sought TMHC. These results support those of previous studies and signify a potentially low level of perceived susceptibility to mental health risks among the sample and a low level of perceived benefit of TMHC use (Toscos et al., 2019). The TMHC use rates were low among those who had further identified ecological risk factors; 67.2% reported experiencing symptoms of depression, but only 18.4% reported receiving TMHC. More than one in five adolescents in our sample reported having attempted suicide at least once but less than half sought TMHC. However, these adolescents were more likely to seek TMHC than those who did not experience these factors, supporting the results of previous studies and the proposed theoretical framework, which posits that intrapersonal and interpersonal factors in the lives of adolescents influence their likelihood of seeking healthcare (Pater et al., 2020;Toscos et al., 2019). Adolescents who experienced parental emotional and physical abuse were also more likely to receive TMHC, with 19.2% of those who experienced emotional abuse and 27.5% of those experiencing physical abuse receiving TMHC (Racine et al., 2020). Overall, the results of our study partially support our first hypothesis; suicide attempts were significantly associated with TMHC use, with the likelihood of seeking care increasing with the number of suicide attempts made. 
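As a sketch of how the Model 3 moderation analysis might be reproduced, the snippet below fits a logistic regression with a gender-by-school-closeness interaction using statsmodels. The variable names and toy data are hypothetical; the actual ABES variables, survey design adjustments, and the full set of Model 1-3 covariates are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Toy data standing in for the analytic sample; not the real ABES records.
df = pd.DataFrame({
    "tmhc": rng.integers(0, 2, n),              # 1 = used telemental healthcare
    "female": rng.integers(0, 2, n),            # 0 = male, 1 = female
    "suicide_attempts": rng.integers(1, 6, n),  # 1 = 0 times ... 5 = 6+ times
    "school_close": rng.integers(1, 6, n),      # 1 = strongly disagree ... 5 = strongly agree
})

# Model 3-style specification: main effects plus the gender x closeness interaction.
model = smf.logit("tmhc ~ suicide_attempts + school_close * female", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)   # e.g., an OR of 1.59 means 59% higher odds per category
print(pd.DataFrame({"OR": odds_ratios, "p": model.pvalues}).round(3))
```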
Adolescents with increased substance use during the pandemic showed more health-seeking behaviors when they had more severe mental health problems (e.g., suicide attempts) than when they faced other family, school, or community risk factors. This reflects perceived severity in the HBM, whereby the experience of having made suicide attempts strengthens one's belief about the severity of the existing threat (Rosenstock et al., 1988), the same threat that may have driven the attempts in the first place. Thus, it must be ensured that crisis services, such as TMHC, are available whenever adolescents need them. Suicide screening and assessment should also be implemented for adolescents with substance use problems. Feeling close to people at school was also associated with TMHC use. The closer adolescents felt to people at their school, the less likely they were to seek TMHC. Adolescents with increased substance use during the COVID-19 pandemic were less likely to use TMHC when they felt close to their peers at school. This is an interesting finding because previous studies have found that an increase in school connectedness among adolescents results in lower susceptibility to mental health issues, suggesting, based on the HBM, that such adolescents are less likely to seek TMHC. Generally, adolescents are less likely to talk to authoritative figures, such as counselors and teachers, and more likely to speak to informal sources, such as their friends or family members (Raviv et al., 2009). Adolescents tend to be uncomfortable speaking to unfamiliar adults (Helms, 2003). When adolescents have close peers to talk to about their substance or mental health problems, they may be less likely to seek TMHC. Our study contributes to the literature by including the moderating effect of gender. Our results partially support our second hypothesis concerning the moderating effect of gender on the associations between ecological factors and TMHC use. Specifically, we found that the association between feeling close to people at school and TMHC varied by gender: the closer male students felt to people at school, the more likely they were to seek TMHC, whereas the closer female students felt to people at school, the less likely they were to seek TMHC. Positive support from significant relationships has been found to increase formal help-seeking for mental health in males (Harding & Fox, 2015). This could be because men have attributed close relationships to cultivating confidence and trust in seeking formal mental healthcare (Burke et al., 2022; Vogel et al., 2007). According to Raviv et al. (2009), female adolescents are less likely to seek counseling because they are more likely than male adolescents to open up to peers about their feelings and mental health problems, such as depression or anxiety. However, previous studies have shown that female adolescents are slightly more likely than male adolescents to have previous experience with TMHC resources and prefer to use various types of web-based technologies, with a higher preference for anonymous online chats and self-help resources, when seeking help for their mental health symptoms (Toscos et al., 2019). Our study findings highlight that feeling close to people at school is an important factor in understanding the help-seeking behaviors of female and male adolescent substance users.

Limitations

Despite its important findings, the present study had several limitations.
First, while the study examined a nationally representative sample of high school adolescents, the data were cross-sectional, which restricted our ability to draw causal interpretations of the associations between the independent variables (depression, suicide attempts, parental emotional and physical abuse, virtual connectedness, and feeling close to people at school) and the dependent variable (TMHC use). A further limitation concerns the use of self-reported data on substance use. Some participants may not have accurately reported their current substance use due to the social stigma associated with it (Kim et al., 2018). Additionally, rather than examining all adolescents who reported current substance use, the present study examined only adolescents who reported increased substance use during the COVID-19 pandemic. Future research should consider including all current substance users as a sample. Next, depression was operationalized as a single variable from the secondary dataset, which may have weakened the content validity of the measure or omitted some depressive symptoms. Some previous studies have also equated "sadness and hopelessness" with depression (Messias et al., 2014; Reed et al., 2015). Furthermore, more nuanced information regarding the type of TMHC used and the frequency of TMHC use would have provided a deeper understanding of perceived barriers, a construct of the HBM that was not explored in depth in this study (Solanki et al., 2022). Lastly, this study focused on more micro-level ecological factors that have immediate impacts on TMHC use. Future research should also consider mezzo- and macro-level ecological factors associated with TMHC use by adolescents with substance use issues. Other important factors that may influence TMHC use, such as family economic status, family structure, health insurance, and community characteristics (rural or urban), could be considered in future research.

Implications and Conclusions

Given that only 15.3% of students who reported increasing their substance use during the COVID-19 pandemic also sought TMHC during the same period, strategies to increase the utilization of mental healthcare must be considered. Schools are the largest source of mental healthcare for adolescents (Merikangas et al., 2011); this gives school personnel, including teachers, administrators, and mental health practitioners, an opportunity to reduce stigma and increase access to mental health services within the school environment (Wolpert & Cortina, 2018). Importantly, norms can be established regarding substance use and mental healthcare through conversation, education, and service dissemination (Probst et al., 2021). Incorporating mental health discussions into social-emotional curricula and health services can promote service utilization by normalizing mental healthcare. Additionally, facilitating dialogue and regularly sharing knowledge of educational resources for mental health can reduce stigma and increase help-seeking behaviors, cultivating a climate of acceptance (Boydell et al., 2013). With a recent increase in adolescent usage of mobile apps for mental health and substance use recovery, implementing practices that expand TMHC may also increase mental healthcare usage among adolescents. Adams et al. (2021) conducted a mixed-methods study which found that adolescents who had both substance use and mental health needs preferred using an mHealth app over paper documentation as part of treatment.
Adolescents were drawn to these platforms because of their convenience, accessibility, and capacity to deliver immediate feedback (Adams et al., 2021). Additionally, many resources supporting clinical practice were developed amid the COVID-19 pandemic that may help clinicians identify TMHC offerings provided by professional organizations (Briere et al., 2020). Prominent policies (e.g., the Every Student Succeeds Act and No Child Left Behind) did increase funding for mental healthcare to be provided in schools for adolescents (ESSA, 2015; Flaherty & Orsher, 2002). States have also implemented laws that increase funding to hire more school mental health professionals and reimburse TMHC services. However, statistics on the youth mental health crisis during the COVID-19 pandemic have made it clear that many schools still experience barriers in meeting the mental health needs of their students, with the most significant barriers being inadequate funding and lack of access to licensed mental health professionals (Padgett et al., 2020). Most recently, the American Rescue Plan and the Bipartisan Safer Communities Act provided billions of dollars for states to implement and expand school-based mental health and substance use services (U.S. Department of Education, 2022). Future policies increasing access to TMHC could include strategies for reducing barriers to care, such as expanding health insurance policies to reach more adolescents and increasing internet access in resource-scarce areas (Schwarz & Aratani, 2011). Future studies should seek to identify barriers that need to be remediated to increase adolescent usage of TMHC. Since mental health issues and substance abuse often co-occur, this is a critical issue that must be urgently addressed. Due to differences between male and female adolescents in help-seeking behaviors and social issues, it is important to consider differentiated treatment strategies and screening tools (Lu et al., 2021). For male adolescents, mentoring, group interactions, and shared activities in informal spaces with a focus on expressing emotions and experiences of mental health may promote seeking mental healthcare (Burke et al., 2022; Calear et al., 2009). To mitigate feelings associated with the stigma of mental health or substance use, self-administered screening tools may be more beneficial for female adolescents (Substance Abuse and Mental Health Services Administration, 2009). In sum, our study provides novel insight into potential gender differences concerning how feeling close to people at school relates to TMHC use among adolescents. Our findings highlight the need for future research that explores these gender differences in greater detail and helps to inform an evidence base that will impact treatment efforts for adolescents with substance and mental health problems.

Conflict of Interest

This study was not supported by any funding, and the authors have no conflicts of interest to declare.

Access to Data

ABES is readily available and furnished through the U.S. Centers for Disease Control and Prevention (CDC). All authors equally take responsibility for the integrity and accuracy of the data analysis.
Keyhole anesthesia—Perioperative management of subglottic stenosis: A case report

ABSTRACT
Any narrowing in the airway presents as obstruction and with features of noisy breathing. The presence of subglottic stenosis poses a great challenge to the anesthesiologist. Diagnostic and corrective procedures by the otolaryngologist require rigid endoscopy, which demands apneic ventilation. Hence, the goal of general anesthesia in the presence of subglottic stenosis is to maintain a patent airway for oxygenation and ventilation and to avoid hypoxia. We present an interesting case of a preterm neonate with subglottic stenosis who was managed successfully with endoscopic release.

Introduction
The subglottis is the narrowest, nonexpandable, and nonpliable part of the airway, extending from below the true vocal cords to the lower surface of the cricoid cartilage. [1] The incidence of subglottic stenosis in neonates is less than 2%. [2] Neonatal subglottic stenosis (SGS) can be either congenital or acquired. Congenital stenosis usually requires a conservative approach, but if severe and symptomatic it needs an intervention. We present an interesting case of a preterm neonate with SGS who was managed successfully with endoscopic release. Informed and written consent was obtained from the parents.

Case Report
A preterm neonate born at 33 + 4 weeks was intubated because of respiratory distress and kept on mechanical ventilation immediately after birth. There was a history of difficult intubation, with multiple attempts with smaller-size endotracheal tubes. The neonate had difficulty weaning. Post extubation, the neonate had noisy breathing and was referred to a higher center for bronchoscopic assessment with suspicion of congenital SGS. A flexible bronchoscopic assessment under sedation revealed SGS through which the bronchoscope could not be negotiated. A formal plan for bronchoscopic assessment with plasma ablation release was made under general anesthesia. At the time of presentation, the neonate was at 38 completed weeks and weighed 2.2 kg; baseline investigations and echocardiography were all within normal limits. Preanesthetic assessment a day before surgery suggested no significant abnormality, with no history of cyanosis, seizures, or failure to thrive. The anesthesia plan was to keep the newborn on spontaneous ventilation using incremental doses of an inhalational agent. Informed and written consent was obtained from the parents, along with consent for tracheostomy. The surgical plan was to do a bronchoscopic assessment and proceed. Hence, it was decided to induce the neonate and hand over to the surgeon in the apneic stage. Premedication included injection glycopyrrolate 10 µg/kg, injection hydrocortisone 2 mg/kg, and injection dexamethasone 0.1 mg/kg, with American Society of Anesthesiologists standard monitoring. Nasal prongs with 3 L/min of 100% oxygen were applied to the neonate before induction of anesthesia, and preoxygenation was done using 100% oxygen.
The neonate was induced with incremental doses of sevoflurane in 100% oxygen, maintaining spontaneous ventilation. Gentle bag-mask ventilation was done to assist breathing and, once the plane was deep enough, a Miller blade size 1 was inserted and used as a guide for a zero-degree 4 mm endoscope. After negotiating the vocal cords, the subglottic area was visualized, revealing a grade III Myer-Cotton subglottic stenosis [Figure 1]. [3] Anesthesia was maintained using a bolus dose of injection propofol 0.5 mg/kg. Bag and mask ventilation with 100% oxygen with sevoflurane was resumed. Since we were able to ventilate the neonate, injection succinylcholine was administered to provide a motionless field for plasma ablation of the stenotic segment. Plasma ablation release was done with serial bougie dilatation. Post procedure, the endoscope was easily negotiated below the level of the stenosis to visualize the trachea [Figure 2]. During the procedure, 100% oxygen was given using a nasal cannula (apneic oxygenation). The procedure lasted less than 5 min without the need for bag-mask ventilation, and there were no desaturations in between. Assisted bag-mask ventilation with 100% oxygen was resumed and, once active, the newborn was shifted to the post-anesthesia care unit.

Discussion
Subglottic stenosis is the second most common cause of stridor in infants, with the reported incidence of congenital stenosis up to 5% in children and less than 1% if very low birth weight neonates are excluded. [4,5] To classify the extent of luminal obstruction in subglottic stenosis, the most widely followed grading system is that of Myer and Cotton: grade 1, 0-50% luminal obstruction; grade 2, 51%-70% luminal obstruction; grade 3, 71%-99% luminal obstruction; and grade 4, 100% obstruction of the lumen. [3] In a full-term and a preterm neonate, the diameter of the normal subglottic lumen is 4.5-5.5 mm and 3.5 mm, respectively. Any value below 4 mm in a full-term and below 3 mm in a preterm neonate is considered narrowing and labeled as subglottic stenosis. [4] In cases of congenital subglottic stenosis, antenatal diagnosis is difficult, and the presentation can range from respiratory distress at birth to recurrent or persistent croup in children below 6 months of age. Pediatric patients can present with suprasternal or subglottic retractions, dyspnea, tachypnea, stridor, and respiratory distress. Subglottic stenosis can be part of a wider range of malformations in the form of syndromes or associated with other pathologies, such as vertebral defects, anal atresia, cardiac defects, tracheo-esophageal fistula, renal and limb anomalies (VACTERL), Down syndrome, duodenal or esophageal atresia, and Fraser syndrome. [6][7][8] Such cases require meticulous planning for anesthesia as well as surgery. Spontaneous ventilation was preferred because of the unknown grade of the stenosis and the need to maintain ventilation. During the procedure, the neonate was handed over to the surgeon in the apneic phase. To increase the safe apnea time, 100% oxygen was provided with a nasal cannula. Maximizing the duration of safe apnea in pediatric patients is vital during airway interventions to provide sufficient time to secure the airway and perform airway procedures without a critical drop in oxygen saturation. [9] Factors contributing to the critically shorter duration of safe apnea in pediatric patients are physiological differences, including a lower functional residual capacity and an increased oxygen consumption rate. [10]
Setting up laryngoscopes with suspension and other attachments requires more time and hence increases the duration of the apneic period. This leads to more desaturations; hence, we used the Miller blade as an aid for rigid endoscopy. This technique has been studied by one of the authors. [11] This report highlights the role of paraoxygenation, and the use of the Miller blade as an aid, to increase the safe apnea time in children undergoing procedures under apneic ventilation. To conclude, meticulous planning and coordination are required when dealing with neonates with subglottic stenosis. Since the airway is shared, there should be good communication between the anesthetist and the otolaryngologist for better outcomes.

Declaration of patient consent
The authors certify that they have obtained all appropriate guardian consent forms. In the form, the guardians have given their consent for the patient's images and other clinical information to be reported in the journal. The guardians understand that the patient's name and initials will not be published and due efforts will be made to conceal identity, but anonymity cannot be guaranteed.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Seeking Synchrony Between Family Planning and Immunization: A Week-10 DMPA Start Option for Breastfeeding Mothers

Many mothers initiate DMPA injectables at 6 weeks postpartum, at the time of their baby's first immunization visit. Offering an optional delayed DMPA start at the next (10-week) immunization visit has potential advantages including a reduced follow-up schedule with DMPA visits synchronized with other immunization visits, and, possibly, improved contraceptive and immunization outcomes.

The single most popular moment in Africa (and many other regions) to initiate family planning may well be the 6-week postpartum clinic visit, when mothers also bring infants to begin the crucial primary immunization series. 1 A 6-week start does indeed work well for mothers accepting long-acting, reversible contraceptives (LARCs) such as implants and intrauterine devices (IUDs), because these methods have no negative impacts on breastfeeding and, once inserted, remain effective for years. However, although there are no medical restrictions on starting injectables at 6 weeks, the 6-week postpartum visit may not be the optimal timing for initiating injectables, the most popular method in sub-Saharan Africa, comprising nearly half of modern method use in the region. 2 What is the potential downside of initiating depot medroxyprogesterone acetate (DMPA) injectables at 6 weeks? Beyond the redundant use of contraceptives during lactational infertility 3,4 is the problem of high discontinuation. In a review of Demographic and Health Survey (DHS) data from 19 countries, Ali et al. 5 noted that more than 40% of new injectable clients discontinued within the first year of use. When such early discontinuation occurs among postpartum women, during the time infants are weaned and fertility is reestablished, the stakes are even higher, because these mothers need effective contraception for optimal birth spacing. Although high injectable discontinuation has proven a particularly challenging problem to solve, 6 several partial solutions present themselves for better protection during the first postpartum year. For example, more intensive counseling, particularly on the side effects that users can expect, has been shown to increase continuation rates among injectable users. 7,8 Also, for the many women who use DMPA because more effective methods are not available, programs must continue to improve access to LARCs, particularly in rural areas where the poorest and most vulnerable live. Finally, we should do a better job promoting exclusive breastfeeding during the first 6 months postpartum and ensuring that those using the Lactational Amenorrhea Method (LAM) can smoothly transition to another effective method when desired. There is another option that merits investigation. Fully or nearly fully breastfeeding mothers desiring the most popular injectable, DMPA, at 6 weeks could be offered the option of delaying their injection for 1 month, until the second visit of the scheduled 6-, 10-, and 14-week primary immunization series. Week-10 DMPA initiation has much to recommend it. For example, given existing discontinuation patterns, the delayed start time will translate into a delayed discontinuation time, meaning that mothers will have an extra month of contraceptive protection, more likely to fall at a time without redundant protection from lactational infertility.
Furthermore, well-counseled clients who want to limit births or who want a highly effective spacing method will have an extra month to consider their family planning options and to discuss these options with their providers and partners. Upon return, they may be more likely to accept a more effective method and/or one with less chance of early discontinuation. Finally, when DMPA initiation is delayed until the second well-baby visit at 10 weeks, mothers benefit from better synchronization of clinic visits during the first year postpartum.

SYNCHRONIZED SERVICES
This last advantage, better synchrony, may be the most important, but the benefits of synchronized visits have never been promoted or evaluated, in spite of much recent attention focused on the benefits of integrating family planning with other health services such as immunization. 9 Intuitively, initiating contraception during the 6-week visit makes good sense. Throughout the developing world, maternal and child health programs have made enormous investments to ensure that mothers come to clinics at this time for their infants' immunizations and growth monitoring, so it is no surprise that family planning programs have tried to "piggyback" onto this all-important visit. However, given the 3-month (13-week) cycle of DMPA, subsequent resupply visits after a week-6 initiation do not align well with scheduled immunization visits, resulting in a total of 6 scheduled family planning and immunization revisits in the following 10 months (Figure 1). In contrast, if new mothers are counseled about family planning at week 6 and choose DMPA, they can be offered the option of a more "mother-friendly" schedule if they delay their first injection until their baby's second immunization visit at week 10. This voluntary alternative schedule (Figure 2), which takes modest advantage of the 1-month "grace period" for DMPA resupply, decreases the number of follow-up visits (after the initial 6-week visit) over the next 10 months by a full third, from 6 visits to 4 visits. (If the full 12 months postpartum period is considered, a final family planning visit would be due at week 52.) Furthermore, it is possible that women will better adhere to a reduced schedule of dual-purpose visits than to more numerous single-purpose visits. This could improve contraceptive continuation if scheduled immunization visits help overcome any hesitation to seek DMPA reinjection. Similarly, a synchronized schedule could boost immunization timeliness and coverage if the family planning appointment provides an extra cue to action for the 10-week booster or the often-neglected measles vaccination at 39 weeks. Finally, a scheduled mid-year family planning visit at around 6 months (25 weeks) (Figure 2) coincides with the baby's recommended first dose of vitamin A, and also presents a good opportunity for growth monitoring and counseling on complementary feeding at the World Health Organization's recommended timing for weaning. Of course, there are several potential arguments against the 10-week start for DMPA. The first, that mothers may become pregnant between 6 and 10 weeks postpartum, should be negligible if providers are careful to offer this option only to fully (or nearly fully) breastfeeding mothers who plan to continue breastfeeding for several months. A more serious argument is that a mother who presents at 6 weeks postpartum might never return.
This is certainly possible, but drop-out between the first and third visits in the primary immunization series is already low, averaging 6% in Gavi-supported countries, 10 and having 2 reasons to return at 10 weeks, instead of only 1 reason, could further reduce that drop-out rate. It is much more likely that some mothers will be late for the 10-week visit. Indeed, a 2009 Lancet review of the timing of children's vaccinations in 45 low- and middle-income countries, while not explicitly addressing the 10-week visit, found that tardiness was common. 11 However, such delays might actually be reduced by virtue of the dual-purpose nature of the visit: mothers not sufficiently motivated to be on time for their baby's vaccination boosters may be more motivated by their own contraceptive needs. Furthermore, even if mothers are late, fully or nearly fully breastfeeding should protect them from pregnancy for at least 6 months, if they experience no bleeding episodes.

A CALL FOR RESEARCH
Compelling practical and theoretical arguments exist for giving breastfeeding mothers the option of delaying DMPA initiation from 6 weeks to 10 weeks postpartum. Research should be undertaken to test hypotheses related to the potential benefits of "synchronizing" the DMPA schedule with that for infant immunizations. Will new mothers accept a 1-month delay in initiating DMPA use? Will providers offer women this option, given the modest extra effort required? And, most importantly, will the benefits of synchronized, dual-purpose visits translate into better contraceptive continuation, immunization coverage, and other outcomes, compared with possible risks, such as unintended pregnancies among mothers who stop breastfeeding or fail to return? Synchronized scheduling for new mothers, with fewer, more integrated revisits, not only reflects the tenets of integrated services and patient-centered, mother-friendly care, but could also improve important outcomes in vulnerable populations.
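As a rough, back-of-the-envelope illustration of the visit-count arithmetic behind Figures 1 and 2, the sketch below counts scheduled revisits after the initial 6-week visit under the two options. The week numbers, and the assumption that the grace period is used to align later injections with the roughly 6-month and measles contacts, are drawn from the description above; they are illustrative assumptions, not a prescriptive schedule.

```python
# Rough sketch of the revisit arithmetic behind the two schedules discussed
# above. Assumed week numbers: immunization contacts at weeks 10 and 14 and
# measles at week 39 (after the initial week-6 visit); DMPA reinjection every
# 13 weeks, with a grace period allowing later visits to be shifted onto an
# existing immunization or vitamin A contact.

IMMUNIZATION_WEEKS = {10, 14, 39}          # revisits after the week-6 visit
HORIZON_WEEK = 50                          # roughly 10 months after week 6

def dmpa_weeks(start, interval=13, horizon=HORIZON_WEEK):
    """Scheduled injection weeks from 'start' up to the horizon."""
    week, weeks = start, []
    while week <= horizon:
        weeks.append(week)
        week += interval
    return weeks

# Option A: start DMPA at the week-6 visit (reinjections at weeks 19, 32, 45).
option_a = set(dmpa_weeks(6)) - {6}        # revisits only, excluding week 6
revisits_a = option_a | IMMUNIZATION_WEEKS
print(sorted(revisits_a), len(revisits_a))  # 6 revisits: 10, 14, 19, 32, 39, 45

# Option B: delay DMPA to week 10 and, using the grace period, align later
# injections with the ~6-month (week 25) and measles (week 39) contacts.
option_b = {10, 25, 39}                    # assumed synchronized FP visits
revisits_b = option_b | IMMUNIZATION_WEEKS
print(sorted(revisits_b), len(revisits_b))  # 4 revisits: 10, 14, 25, 39
```

Under these assumptions, the week-6 start yields revisits at weeks 10, 14, 19, 32, 39, and 45 (six in total), while the week-10 start yields weeks 10, 14, 25, and 39 (four in total), which matches the one-third reduction described above.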
Inner Engineering Practices and Advanced 4-day Isha Yoga Retreat Are Associated with Cannabimimetic Effects with Increased Endocannabinoids and Short-Term and Sustained Improvement in Mental Health: A Prospective Observational Study of Meditators

Background: Anxiety and depression are common in the modern world, and there is growing demand for alternative therapies such as meditation. Meditation can decrease perceived stress and increase general well-being, although the physiological mechanism is not well-characterized. Endocannabinoids (eCBs), lipid mediators associated with enhanced mood and reduced anxiety/depression, have not been previously studied as biomarkers of meditation effects. Our aim was to assess biomarkers (eCBs and brain-derived neurotrophic factor [BDNF]) and psychological parameters after a meditation retreat. Methods: This was an observational pilot study of adults before and after the 4-day Isha Yoga Bhava Spandana Program retreat. Participants completed online surveys (before and after the retreat, and 1 month later) to assess anxiety, depression, focus, well-being, and happiness through validated psychological scales. Voluntary blood sampling for biomarker studies was done before and within a day after the retreat. The biomarkers anandamide, 2-arachidonoylglycerol (2-AG), 1-arachidonoylglycerol (1-AG), docosatetraenoylethanolamide (DEA), oleoylethanolamide (OLA), and BDNF were evaluated. Primary outcomes were changes in psychological scales, as well as changes in eCBs and BDNF. Results: Depression and anxiety scores decreased while focus, happiness, and positive well-being scores increased immediately after the retreat from their baseline values (P < 0.001). All improvements were sustained 1 month after BSP. All major eCBs, including anandamide, 2-AG, 1-AG, and DEA, as well as BDNF, increased after meditation by >70% (P < 0.001). Increases of ≥20% in anandamide, 2-AG, 1-AG, and total AG levels after meditation from the baseline had weak correlations with changes in happiness and well-being. Conclusions: A short meditation experience improved focus, happiness, and positive well-being and reduced depression and anxiety in participants for at least 1 month. Participants had increased blood eCBs and BDNF, suggesting a role for these biomarkers in the underlying mechanism of meditation. Meditation is a simple, organic, and effective way to improve well-being and reduce depression and anxiety.

Introduction
In the modern world, stress and anxiety have become synonymous with success and productivity. Over the course of a lifetime, almost half of Americans are estimated to have a mental disorder such as anxiety (>25% incidence) or mood disorder (>20% incidence) [1,2]. There is increasing interest in, and accessibility of, nonpharmaceutical treatment options for these disorders, such as meditation, counseling, and lifestyle changes (e.g., diet, regular exercise). Meditation is increasingly being recognized as a simple and effective tool to decrease perceived stress and increase general well-being [3]. Although certain benefits of meditation have been widely acknowledged, the physiological basis and underlying mechanisms have not been well-characterized. Cortisol levels may be reduced [4][5][6][7], and the neuroregulator brain-derived neurotrophic factor (BDNF) has been shown to increase with meditative practices [6,[8][9][10]. Following a 3-month Isha yoga and meditation retreat, participants had nearly a 3-fold increase in mean BDNF.
Increased BDNF signaling, and thus enhanced neurogenesis and/or neuroplasticity, was associated with improved resilience and well-being. Decreases in inflammatory biomarkers, including interleukin-(IL-) 6, C-reactive protein, and tumor necrosis factor-(TNF-) α, and increases in β-endorphins have also been shown with various meditation/yoga practices [11]. Endocannabinoids (eCBs) are lipid mediators found in the brain and peripheral tissues that mimic the action of Δ9-tetrahydrocannabinol (THC) [12]. The eCBs N-arachidonoylethanolamine (AEA or anandamide) and 2-arachidonoylglycerol (2-AG) have been associated with enhanced mood and feelings of blissfulness. The term "anandamide" was even derived from Sanskrit ("ananda" meaning bliss) for its cannabimimetic effects. Other eCBs include 1-arachidonoylglycerol (1-AG), docosatetraenoylethanolamide (DEA), and oleoylethanolamide (OLA) [12]. Serum levels of both anandamide and 2-AG may be reduced in patients with depression and anxiety, and the eCB system plays a fundamental role in emotional homeostasis [13]. We hypothesized that eCBs may be involved in the underlying mechanisms leading to the beneficial effects of meditation. To gain greater insights into the psychological and physiological effects of meditation, we evaluated participants at the Bhava Spandana Program (BSP), an experiential program leading to deep states of meditativeness in just a short period of time. In Sanskrit, "Bhava" means "sensation" and "Spandana" can be loosely translated as "resonance." Essentially, it means "resonance with life." The program [14] is a 4-day advanced yoga and meditation retreat held at the Isha Institute of Inner Sciences, located in Tennessee in the USA. The prerequisite for participating in the BSP is completion of the Inner Engineering program offered worldwide. The Inner Engineering program teaches participants a 21-minute yoga and meditation practice called Shambhavi Mahamudra [15], an Inner Engineering Level 1 program, which reduces stress and improves well-being [3]. Other Isha Yoga meditation programs have shown benefits including increased gamma brainwave amplitude, heart rate variability, sympathovagal balance, BDNF, and cortisol awakening response [16][17][18]. The BSP is a 4-day, 3-night, residential program offered to those who have been initiated into Shambhavi Mahamudra. This advanced meditation program is designed to provide the opportunity to go beyond the limitations of body and mind and experience higher levels of consciousness. Bhava Spandana offers the experience of a world of unbounded love and joy. Through powerful processes and meditations, BSP creates an intensely energetic situation, where individuality and the limitations of the five sense organs can be transcended, creating an experience of oneness and resonance with the rest of existence. Humans have lived within the limitations of the human senses. Bhava Spandana is like giving one a lift or jump over the wall to have a peep at life beyond the limitations of the five sense perceptions. Past BSP participants have described feelings of blissfulness, ecstasy, intense happiness, inclusive perception, and higher states of consciousness during and after the program. Our hypothesis was that BSP meditation would significantly reduce depression and anxiety and improve happiness and positive well-being in the short term as well as in the long term. Furthermore, we hypothesized that these changes in psychological well-being would be associated with objective changes in blood eCB and BDNF levels.
This study specifically aimed (a) to assess the impact of this 4-day guided, experiential meditation retreat on mental health and well-being (happiness, awareness, well-being, anxiety, and depression), (b) to correlate reported psychological changes with objective blood biomarkers (eCBs and BDNF) immediately after the retreat, and (c) to assess any persistent impact on participants' psychological well-being one month after the retreat.

Study Population. In October 2017, adult participants (≥18 years) were registered to participate in the 4-day BSP meditation program at the Isha Institute of Inner-Sciences, McMinnville, Tennessee, USA. All registrants received an e-mail which invited them to participate in an online survey 2 weeks before the meditation program. Participation in the study was voluntary. Individuals who were unable to read and/or comprehend the consent forms were excluded. For the blood biomarker subset of the study, individuals were excluded if they reported use of active marijuana, opioid, or other illegal drugs, or if they were taking antidepressant medication. The protocol was reviewed and approved by the Institutional Review Board of Indiana University School of Medicine. All participants gave electronic informed consent.

Meditation Program. The BSP is a 4-day advanced yoga and meditation program designed to enhance participants' perception and sensitivity to life by going beyond the limitations of body and mind and experiencing higher levels of consciousness [14]. Participants are required to complete an online or in-person prerequisite program (Inner Engineering) that includes the Shambhavi Mahamudra kriya yoga practice [15].

Surveys. Participants completed the preprogram ("baseline") survey within 2 weeks before the retreat, the postprogram ("immediately after retreat") survey within 2 weeks after the retreat, and a 1-month follow-up survey. These surveys included scales that have well-established reliability and validity in the literature. Depression is measured by the 20-item Center for Epidemiologic Studies Depression Scale [19] (CES-D). A sample item is, "During the past week, I was bothered by things that usually do not bother me." The response is coded from 0 (rarely) to 3 (most of the time) [20]. The CES-D composite score is the sum of the 20 scores. The possible range is 0 to 60. If more than 4 questions are unanswered, no score is assigned. A score of 16 points or higher is considered depression. Anxiety is measured by the 8-item Patient-Reported Outcomes Measurement Information System (PROMIS) Emotional Distress-Anxiety (short form) [21]. The scale uses a 7-day time frame and a sample item is, "I felt fearful." The response is coded on a 5-point scale from 1 (never) to 5 (always) [21]. Mindfulness is measured by the 15-item Mindful Attention Awareness-Trait Scale (MAAS) [22]. The MAAS is designed to assess awareness and observation of what is occurring in the present moment in participants' everyday experience. A sample statement is, "I could be experiencing some emotion and not be conscious of it until sometime later." The response is coded on a 6-point scale from 1 (almost always) to 6 (almost never). The MAAS score is computed as the average of the 15 items. Well-being is measured by Ryff's 42-item Psychological Well-Being (PWB) Scale [23]. Ryff's PWB is a theoretically grounded instrument that encompasses multiple facets of psychological well-being: autonomy, environmental mastery, personal growth, positive relations, purpose of life, and self-acceptance.
Since the inception of the scale thirty years ago, the scale has been translated into 30 languages. A sample item for autonomy is, "I am not afraid to voice my opinions, even when they are in opposition to the opinions of most people." A sample item for environmental mastery is, "In general, I feel I am in charge of the situation in which I live." A sample item for personal growth is, "I am not interested in activities that will expand my horizons." A sample item for positive relations is, "Most people see me as loving and affectionate." A sample item for purpose of life is, "I live life one day at a time and do not really think about the future." A sample item for self-acceptance is, "When I look at the story of my life, I am pleased with how things have turned out." For each item, the response is coded on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). The composite score of each subscale is the sum of its own 7 items, and the composite score of the global scale is the sum of the 42 items. The scale has been used in a longitudinal follow-up of a US national sample [23]. Happiness is a global 0-10 happiness scale. Only those who completed the first survey were asked to complete the postintervention surveys. Those who did not complete the survey within a week of the first invitation were sent e-mail reminders.

Blood Sampling. Survey participants were given the option for blood biomarker evaluation. Informed consent for blood draws was obtained, and participants' samples were collected at the following time points: (1) preprogram (up to 2 days before the program began) and (2) postprogram (within 2 days after the program ended). Standard sterile venipuncture technique was used to collect 10 mL of blood, divided into different sampling tubes for measuring anandamide, 1-AG, 2-AG, DEA, OLA, and BDNF. For participant confidentiality, samples were deidentified with a code for sharing with individuals outside of the study team. The biomarker samples were sent immediately to the analysis lab in Denver, Colorado. As anandamide is quickly catabolized by Fatty Acid Amide Hydrolase (FAAH), the fresh blood samples were centrifuged and the extracted plasma was frozen immediately to preserve and accurately measure this biomarker.

Data Analysis. Repeated measures analysis of variance (ANOVA) was used to compare preprogram (baseline), immediately-after-program, and 1-month follow-up survey responses; an unstructured variance-covariance matrix was used. In the subset of subjects with biomarkers assessed, paired t-tests were used to compare biomarkers for significant change from before meditation to after meditation. Pearson correlation coefficients were used to evaluate the associations between the changes in biomarkers and changes in psychological states. In addition, the change in biomarkers was categorized into two groups: one group where there was at least a 20% increase and another where there was less than a 20% increase. The responses to the psychological scales were separately analyzed for both of these groups. A 5% significance level was used for all tests.

Study Participation. Three hundred forty-eight participants completed the preprogram survey and 323 participants completed the postprogram survey, resulting in a 93% response rate. The 1-month follow-up survey was completed by 308 participants. One hundred and forty-two participants volunteered for blood sampling for biomarker assays before and after the meditation retreat.
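As a brief aside before the detailed results, the paired pre/post comparison, effect-size calculation, correlation, and ≥20% responder categorization described under Data Analysis can be sketched as follows. This is a minimal sketch under assumed column names (e.g., pre_2ag, post_2ag), not the study's actual analysis code, and it omits the repeated measures ANOVA used for the three survey time points.

```python
# Minimal sketch (assumed column names, not the study's actual code) of the
# paired pre/post comparison and the >=20% responder split described above.
import pandas as pd
from scipy import stats

df = pd.read_csv("bsp_biomarkers.csv")  # hypothetical file

# Paired t-test for the change in 2-AG from before to after the retreat.
t_stat, p_val = stats.ttest_rel(df["post_2ag"], df["pre_2ag"])

# Effect size expressed as mean change divided by the SD of the change
# (a paired Cohen's d); e.g. a mean increase of 2.0 ng/mL with SD 2.8
# gives d of roughly 0.71.
change = df["post_2ag"] - df["pre_2ag"]
effect_size = change.mean() / change.std(ddof=1)

# Pearson correlation between the biomarker change and the change in a
# psychological score (here a hypothetical happiness column).
r, p_r = stats.pearsonr(change, df["post_happy"] - df["pre_happy"])

# Responder categorization: did 2-AG rise by at least 20% from baseline?
df["responder_20pct"] = change >= 0.20 * df["pre_2ag"]
print(df.groupby("responder_20pct")[["pre_happy", "post_happy"]].mean())
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, d = {effect_size:.2f}, r = {r:.2f}")
```

Taking the effect size as the mean change divided by the standard deviation of the change is consistent with the figures reported below, for example 2.0/2.8 ≈ 0.71 for 2-AG and 2.7/8.6 ≈ 0.31 for the CES-D decrease.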
Subject characteristics are shown in Table 1.

Markers of Well-Being. Scores for depression (CES-D) and anxiety significantly decreased from baseline to immediately after the retreat (P < 0.001; Table 2). The mean CES-D score was relatively low at baseline (10.3) and declined by 26%, with a mean decrease of 2.7 (SD = 8.6) and an effect size of 0.31. The mean standardized anxiety score was also low at baseline and declined by 23% at the immediate postretreat survey, with a mean decrease of 0.46 (SD = 0.76) and an effect size of 0.60. Scores for mindfulness (MAAS), happiness, and psychological well-being (PWB) increased significantly from baseline to immediately after the retreat (P < 0.001; Table 2, Figure 1). Mean (SE) increases and effect sizes were 0.63 (0.85) and 0.75 for mindfulness, 2.0 (1.8) and 1.1 for happiness, and 15.7 (20.8) and 0.75 for well-being. These improvements were maintained at the 1-month follow-up. The autonomy measure of the PWB showed a continuous improvement, as PWB scores increased even more on the 1-month survey compared with immediately after the retreat (P = 0.003; Table 2). All of these primary psychological changes were statistically significant even after adjusting for multiple testing.

Biomarkers. Endocannabinoids (anandamide, 2-AG, 1-AG, DEA, and OLA) and BDNF were analyzed in 142 participants. Compared with baseline levels, all immediately-after-retreat biomarker levels were higher (Table 3, Figure 2). Some individuals had multiple-fold increases in anandamide, and the effect sizes for biomarker changes ranged from 0.17 to 0.92. High interindividual variability was observed among participants. Specifically, 2-AG increased significantly (P < 0.001) by 2.0 ng/mL (SD = 2.8) with an effect size of 0.71, and 71% of participants had an increase of at least 20%. Similarly, BDNF increased (P < 0.001) by 5945 pg/mL (SD = 8414) with an effect size of 0.71, and 53% of participants had an increase of at least 20%. Participants with increases in 2-AG levels larger than 20% after meditation from the baseline had larger increases in mindfulness, happiness, and positive well-being (total, autonomy, environmental mastery, positive relations, and self-acceptance) scores (Supplementary Table). Similarly, participants with increases in 1-AG or total AG levels larger than 20% had larger increases in happiness and positive well-being (total and self-acceptance for 1-AG, positive relations for total AG; Supplementary Table).

Discussion
In this large, prospective study, we showed that a short, intense, guided experiential meditation (BSP) significantly decreases anxiety and depression and improves psychological well-being, happiness, and mindfulness, and that these improvements are sustained for at least a month. For the first time, we showed that objective blood biomarkers, eCBs (anandamide, 1-AG, 2-AG, DEA, and OLA) and BDNF, all increased following an advanced meditation, BSP, suggesting a role for these mediators in the underlying mechanisms of meditation. The specific mechanisms linking meditation to positive psychological effects have not been well-characterized in the literature, and this research provides early evidence that the eCB system and BDNF are significantly increased after advanced meditation. Low blood levels of anandamide and 2-AG have been reported in patients with depression [24]. Deficiencies in anandamide can be associated with acute stress [25] and increased pain [26].
Studies have related depression and anxiety to the expression and/or functionality of cannabinoid type 1 (CB1) receptors and FAAH in brain areas belonging to the amygdala-hippocampal-corticostriatal neural circuit, especially the frontal cortex in depression and the amygdala in anxiety disorders [13]. Increasing patients' anandamide levels is a potential solution for many ailments including depression [27,28], fibromyalgia [29,30], inflammatory bowel disease [31][32][33], and cancer [34,35]. Our findings that meditation can increase blood eCB levels and improve well-being suggest that meditation might be further explored as a treatment modality for anxiety and depression and possibly even other diseases. The level of increase in eCBs following meditation was large enough to be deemed clinically significant, as effect sizes ranged from 0.77 to 0.90, and some participants had 2-3-fold increases. This is more than the reported eCB increases associated with moderate-intensity aerobic physical exercise [36] and sexual orgasm [37]. Modest increases (<40%) in eCBs explain the pleasurable effects of singing and exercise and ultimately some of the long-term beneficial effects on mental health, cognition, and memory [38]. Moreover, increases in eCB levels with moderate exercise have been associated with mood improvements in patients with major depressive disorders, explaining the biological mechanism behind mood enhancement with exercise in major depressive disorder [39] and posttraumatic stress disorder [40]. Activation of CB1 receptors has been shown to increase BDNF expression [41]. We demonstrated increased BDNF levels following meditation, and our findings validate similar observations from a previous Isha meditation program [18]. This is notable as BDNF is associated with neuronal regeneration [42][43][44][45]. BDNF deficiencies have been linked to mental disorders such as depression [46,47], bipolar disorder [46], and schizophrenia [48,49]. While THC has been shown to increase BDNF levels, this effect is mitigated by even light chronic cannabis use [50]. Therefore, endogenous activation of CB1 receptors through other means, such as meditation, would be necessary to have a long-lasting impact on BDNF expression. Future studies might examine the efficacy of meditation as a therapy for bipolar disorder, schizophrenia, and other mental health conditions. The sustained positive impacts from meditation, even one month after BSP, are encouraging. The BSP participants experienced persistent positive benefits from the short and intensive BSP meditation, extending beyond an immediate "high" or bliss feeling from increased eCBs. An intense short-term negative experience can lead to diseases like posttraumatic stress disorder; thus an intense short-term positive experience might similarly lead to lasting positive psychological consequences. Completion of the Inner Engineering program (level 1) and daily practice of Shambhavi kriya are requirements for BSP, and participants were encouraged to continue this daily practice after the BSP to maximize potential benefits. Continued daily meditation may have contributed to some of the sustained positive results at 1 month after BSP in our study. Given certain logistical constraints (some BSP participants were from different states or other countries), we did not collect blood samples to reassess biomarkers at 1 month. This could be evaluated in future studies.
Pre-meditation (baseline) values varied between individuals in the objective biomarkers, similar to the variation in experiences and happiness levels. Because of the disparity in subjective experience and physiological responses, the benefits and expression of biomarkers from meditation may vary between individuals. This needs further study to explore the interindividual variations in subjective and biomarker responses in people simultaneously participating in a meditation program. Factors such as genetics/epigenetics, gene expression, and corresponding psychological health may explain these interindividual variations. Strengths of this study are its prospective design, the relatively large number of meditators simultaneously participating in validated psychological surveys and blood biomarker sampling before and after meditation, the hypothesis-driven approach to mechanistically associate the beneficial effects of meditation with simultaneous objective biomarker assays, the use of validated scales showing sustained psychological benefits of meditation for up to 1 month, and the generalizability of the findings, as the study meditators had diversity in terms of age, sex, race, and ethnicity. Limitations included the lack of a 1-month postretreat biomarker assessment, as discussed above. Additionally, the 4-day BSP retreat had multiple daily meditation sessions, and the "immediately-after-meditation" blood samples were actually collected after the entire program. Because eCBs (particularly anandamide) are highly labile, biomarker expression would likely have been even higher if samples had been collected during the retreat. The study population had low baseline levels of anxiety and depression; therefore, differing results might be obtained by analyzing the effects of meditation on patients with psychological disorders. Finally, participants were not surveyed on postretreat meditation or other activities that may have contributed to the sustained long-term benefits. Since the meditators in this study were prospectively and objectively followed over 3 time points, each individual essentially served as his/her own control. To rule out a placebo effect, we analyzed objective biomarkers in addition to self-reported psychological changes. The lack of a separate, matched control group may be viewed as a study limitation. However, the significant changes in objective biomarkers and the sustained psychological benefits (despite only a brief meditation retreat) strongly support that this is a true effect rather than a placebo. This pilot study provides evidence to support future larger studies (including a control group) with longer-term follow-up to assess the persistent benefits of meditation. More research is needed to understand the role of the eCB system in the mechanisms of meditation. In the future, the effects of meditation may be replicated through pharmacologic intervention, as FAAH inhibitors can reduce anandamide degradation. Inhibition of FAAH has been considered as a treatment option for anxiety and other psychological disorders [25,[51][52][53][54]; however, a clinical trial of one compound in 2016 caused severe adverse effects including one death [55]. Meditation remains a simple and safe option to improve well-being and benefit individuals with depression, anxiety, and other disorders.

Conclusion
An intense 4-day guided Isha meditation retreat significantly decreased depression and anxiety while improving happiness, mindfulness, and psychological well-being.
Increased blood endocannabinoids and BDNF were also observed immediately after the intense meditation, potentially explaining the underlying mechanism behind the improvements in happiness and other psychological benefits. The psychological effects of the meditation retreat were sustained for at least 1 month. Meditation might serve as a simple, organic, nonpharmacological, and effective low-risk therapy or prevention strategy for depression and anxiety, while improving happiness and well-being. Future studies are needed to investigate the role of the eCB system as a mediator of the positive effects of meditation, the sustained benefits of meditation over the longer term, and the reasons for interindividual variability in meditation response.

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
ON THE TRANSMISSION OF THE AESTHETIC FEATURES FROM EZRA POUND'S CREATIVE TRANSLATION OF "CHANG GAN XING"

The translation of poetry has always been challenging work. Pound's creative translation of Chinese classical poetry has been a hot topic that has attracted increasing attention. His translation has been criticized by some scholars for its deviations from the original, but it was canonized in American literature in the mid-20th century and has become a classic work of American poetry. Whether through merit or controversy, Pound has stayed in the history of American modern poetry. By comparing Pound's translated English version with the original Chinese ancient poem, we can gain some insight into the transmission of the aesthetic features in the process of translation and reception in cross-cultural communication.

INTRODUCTION
The process of translation is a process of contending of ideas in the choice of form and meaning, especially in the translation of poetry. The challenge for the translation of a poem lies in the fact that it is, on the one hand, loaded with subtle artistic conceptions and whimsical expressions, while, on the other hand, the poetic forms, such as rhymes, rhythms, and cadences, have to be weighed and considered again and again; otherwise it is not a poem [8, P. 100]. Chang Gan Xing (《长干行》), a famous poem by Li Bai of the Tang Dynasty in ancient China, was translated by the American poet Ezra Pound as The River-Merchant's Wife: A Letter. By studying Pound's translation of this ancient Chinese poem, which embodies Pound's poetic achievement, we can learn how the aesthetic features contained in a classic Chinese poem are transmitted, from an ancient classic Chinese poem into American modern poetry. And it has gained the status of a classic poem and exerted an enduring influence on the imagist movement.

MAIN PART
1. Chang Gan Xing: the Original Chinese Classic Poem
Chang Gan Xing was written by Li Bai, a very famous poet of the Tang Dynasty. The poem consists of 150 Chinese characters and narrates a touching love story. Chang Gan Li is the place where the story happened, which provides the setting of the love story. It is located in today's Nanjing, where once lived many river-merchants who would travel a long way upstream along the Chang Jiang River to Sichuan Province on business voyages in the Tang Dynasty. The poem tells of the river-merchant's wife's life to express her complicated, mixed feelings of love and concern for her husband far away from home. The poem begins with her memories of her childhood:

妾发初覆额,折花门前剧。
郎骑竹马来,绕床弄青梅。
同居长干里,两小无嫌猜。

Here the river-merchant's wife recalls her innocent childhood playfulness when she and her prospective husband first met. They were innocent playmates and attached to each other. The description is vivid, lovely, and picturesque. The vivid expression "青梅竹马,两小无猜" has become a Chinese household set phrase for idealized innocent childhood sweethearts. Then she proceeds to describe her marriage:

十四为君妇,羞颜未尝开。
低头向暗壁,千唤不一回。
十五始展眉,愿同尘与灰。
常存抱柱信,岂上望夫台!

She married at 14; she was shy and bashful, and then, as they grew familiar with each other, she expressed their feelings and devotion towards each other. Here the poet employed two allusions to ancient Chinese love stories to express their devotion and faithfulness to one another that even death cannot alter. In the following part the poem begins to describe her husband's going away on business:

十六君远行,瞿塘滟滪堆。
五月不可触,猿鸣天上哀。

She recalled the departing scene when her husband was leaving.
Her care and her worries about her husband on his risky voyage are conveyed by the mention of "Qu Tang Xia", a very dangerous rock for sailors in the rocky Three Gorges.
门前迟行迹,一一生绿苔。 苔深不能扫,落叶秋风早。 八月蝴蝶黄,双飞西园草。 感此伤妾心,坐愁红颜老。
In this part, her inner feelings and sentiments are revealed through descriptions of the scenery changing with the seasons, reminding us of the passage of time. Everything she witnesses reminds her of her husband far away from home. The images of "moss", "falling leaves", "paired butterflies", and other bleak, lonely aspects of the scenery describe subtle changes in nature, but at the same time they describe her inner feelings and sentiments, illustrating how hard her life is with her husband away from home. She even imagines herself an old woman worn down by months of missing her husband. In the end, she asks her husband to write to her in advance before he comes home, so that she can meet him at Chang Feng Sha, a place very far from home.
早晚下三巴,预将书报家。 相迎不道远,直至长风沙。
The poem was written more than a thousand years ago in the strict rhyme scheme of ancient Chinese poetic form, with five characters in each line and rhymes in alternating lines.
"The River Merchant's Wife: A Letter": A Classic American Poem
Pound creatively translated Li Bai's Chang Gan Xing (《长干行》) as "The River-Merchant's Wife: A Letter", and it has been a classic American poem since the mid-twentieth century. Pound did not understand Chinese himself, and he never visited China. His access to classical Chinese poetry came through the Orientalist scholar and poet Ernest Fenollosa, who studied Chinese poetry through Japanese translations and paraphrased it into English. Unfortunately, Fenollosa did not finish his work, and his manuscripts passed into Pound's hands. Pound, drawing on his own insight into and understanding of Chinese poetry, translated the poems creatively and collected them in English as "Cathay". Pound's English version of Li Bai's Chang Gan Xing (《长干行》) deviates from the original poem in many ways. Li Bai's original is a typical Tang Dynasty poem with a very strict rhyme scheme, each line containing five Chinese characters; Pound translated it in free verse. The narrative is changed into a sort of dramatic monologue, or a letter, written in the first person in the voice of the river-merchant's wife. Another remarkable change appears in the title. Chang Gan Xing carries not only the information of the place and its geographical location but also the river merchants' living conditions, their way of life, and the social and environmental atmosphere, all of which are so familiar to Chinese readers that they are seldom conscious of them. Foreign readers may have great difficulty understanding the heroine's situation and the development of the story, and may even fail to engage with it. By translating the title in this way, Pound directly tells the readers who the speaker is (the river-merchant's wife) and whom she is addressing (the river merchant), indicating their way of life and why there is such a story of love and complaint. Here also lies a difference between the two cultures: Chinese people tend to be implicit and introspective, while Western people tend to be more direct and extroverted. The Chinese tend to say less to convey more [12, P. 65]. 
In the poem, more details of their life and feelings are only hinted at between the lines, and these hints may be lost on foreign readers. Also implied here is a hidden but deep-rooted moral value of the historical background: a man should take up responsibility for the country and for the family and should have great prospects. Whether holding office for the country, serving as a soldier, or going away on business to make a fortune for the family, men were often expected to be absent from home for long periods, and a great many poems of this kind are written in the voice of the women at home, complaining about and missing husbands far away. Therefore, Pound's creative shift in the title of the poem is of great functional value, not only helping to bridge the gap between the two cultures but also helping foreign readers grasp the atmosphere, mood, and tone of the poem.
The Aesthetic Features Transmitted in Pound's Creative Translation
Pound creatively translated Li Bai's "Chang Gan Xing" into "The River-Merchant's Wife: A Letter" with the intention of transmitting the aesthetic features of Chinese poetry. He presented the beauty of the poem in beautiful English by conveying his own subjective perception of this Chinese poem [10, P. 149]. Pound's English version has held its classic position in American literary history since the mid-twentieth century. His mastery of cultural knowledge and of the English language should not be underestimated. Indeed, he transformed the ancient Chinese poem into touching and masterly English verse. His translations are canonized and collected, as original creative works, alongside other American masterpieces in anthologies of English and American poetry, such as the various editions of the Norton Anthology of American Literature [11, P. 35]. It may be that his position and influence as a leading poet contributed to the recognition of his translations, or vice versa; although his translation works are not great in number, they occupy a unique and important position in American literary history.
Li Bai, famous for his romantic and imaginative poems, here tells the story of a shy, timid, bashful, sentimental, and passionate girl of sixteen, who speaks her mind in an innocent and complaining tone. Pound used modern colloquial language as the diction of the poem. The merchant's wife is created as a lively and innocent young girl, a vivid and lifelike image; her mood and intonation are natural, with a sentential stop at the end of each line and a familiar tone, as if she had come to us from the neighborhood rather than from more than a thousand years ago. Her haircut and the way the two children played are fresh and endearing, ending with "two small people, without dislike or suspicion": simple language, vivid and sharp images, and fresh expressions. The story then proceeds to their married life: "At fourteen I married My Lord you. I never laughed, being bashful. Lowering my head, I looked at the wall. Called to, a thousand times, I never looked back" [6, P. 132]. These lines show how Pound can use ordinary diction to create special effects. Expressions such as "being bashful", "lowering my head", "called to, a thousand times", and "looked at the wall" are short, crisp oral expressions, clauses, and phrases, free and flexible in structure, yet they manage to create deeply impressive effects. 
The next part describes her married life, when she and her husband lived together: "At fifteen I stopped scowling, I desired my dust to be mingled with yours Forever and forever and forever." Here the source text employs two allusions to ancient Chinese love stories to express a loyalty and devotion to one another that even death cannot alter. Pound did not translate the classical allusions, which are often very complex and difficult to understand, but simply expressed the couple's devotion to each other: "I desired my dust to be mingled with yours,/ Forever and forever and forever!" How impressive! "At sixteen you departed, You went into far Tu-Ku-en, by the river of swirling eddies And you have gone for five months. The monkeys make sorrowful noise overhead." [6, P. 132] These four lines describe the husband's departure and the heroine's feelings after he leaves. She directly expresses her worries for his safety by mentioning "Tu-Ku-en" (the Japanese spelling of "Qu Tang Yan Yu Dui"), a gigantic rock in the Three Gorges of the Chang Jiang River, "the river of swirling eddies", and she shows her worries, her concerns, her sorrow, and her loneliness through the scene she imagines: "The monkeys make sorrowful noise overhead." "You dragged your feet when you went out. By the gate now, the moss is grown, the different mosses, Too deep to clear them away! The leaves fall early this autumn, in wind. The paired butterflies are already yellow with August Over the Grass in the West garden; They hurt me. I grow older." [6, P. 133] This part presents her observations of the world around her, the natural scenery and the change of the seasons, yet it is dramatic enough to express her suffering from her husband's absence through the cycle of the seasons, as if this sad state of mind would last a whole lifetime, although he has been gone only five months. The images of natural scenery remain: "the mosses", the leaves that "fall early this autumn", and "the paired butterflies". All these images are typical of Chinese sensibilities in describing loneliness, sadness, and solitude, but they may not evoke the same feelings in foreign readers, so Pound rendered the lines directly: "They hurt me. I grow older." He has in fact simplified the original Chinese connotations, but he made the images telling and the sensitivity deeply felt, with dramatic effect. The last part is her eager expectation and her outburst of feeling: "If you are coming down through the narrows of the river Kiang, Please let me know beforehand, And I will come out to meet you As far as Cho-fu-sa." Cho-fu-sa comes from the Japanese spelling of "Chang Feng Sha", a beach several hundred miles from Nanjing; even today this is quite a long distance, yet the river-merchant's wife of a thousand years ago says: I will come out to meet you there, at Chang Feng Sha, and I do not think it is far. The reason Pound's creative translations of Chinese poems are taken as classic works of American literature may be traced to cultural or even political causes, but the fact remains that if any literary work is to be regarded as a masterpiece, it must meet certain prerequisites; that is, it has to come up to the aesthetic standards of its own culture. In addition to its freshness of language, the tone and mood are also expressed vividly and eloquently. 
Evidently, his success in translation here lies not in faithfully transmitting the meaning from one culture to another but in his skillful transformation of heterogeneous elements such as the aesthetic features of Chinese poetics. We can sum up his creativity in the change of poetic forms in the following aspects: 1. He changed the title, from the name of a place to the identity of the heroine; 2. He shifted the narrative perspective, from third-person narrative to first-person narration; 3. He changed the form of the poem, from a narrative poem into a dramatic monologue, or a letter; 4. He changed the poetic diction, from classical Chinese poetic diction to everyday oral speech; 5. He omitted classical allusions and metaphorical expressions and simplified some details of the content. In spite of all this, he managed to translate the poem in such a way that it was well received and appreciated as a masterpiece of English writing. How did he achieve it? The answer lies in his handling of the aesthetic features, discussed below.
CONCLUSION: The Transmission of the Aesthetic Features into Imagist Poetry
In translation there are often heterogeneous elements, including aesthetic features, which cannot be introduced directly into another culture but may be transmitted in a creative translated version. Pound's creative translation of classical Chinese poetry offers some insight into his creativity in handling these aesthetic features. Ezra Pound is now generally recognized as the leading poet of the Imagist movement, with his rejection of traditional English poetic form and meter and of Victorian diction. He steered American poetry toward greater density, difficulty, and opacity, and opened it up to diverse influences, including the ancient classical poetry of China. That is to say, he not only transmitted the aesthetic features of Chinese poetry into his translations but also transferred them into his own Imagist poems. Central to his Imagist idea was clarity of expression through the use of precise visual images. He advocated concision and directness, building short poems around single images; these are precisely some of the features of classical Chinese poems. And there is no doubt that one of the greatest cultural influences on Pound came from ancient China. "In a Station of the Metro", published in 1913, serves as the classic specimen of the Imagist poetry in which Pound led the way. The poem contains only two lines: "The apparition of these faces in the crowd; Petals on a wet, black bough." [5, P. 383-385] This famous poem is regarded as the masterpiece of Pound's Imagist work. It reflects the poet's observation of human faces in a Paris subway station, where Pound was once struck by the pretty faces of people hurrying out of the dim, damp, and gloomy metro station. The faces he observed reflect variously against light and darkness, like flower petals on a wet, dark bough [5, P. 384]. This was something new in American literary history, for no American poem before had been so short and concise, with such juxtaposition of vivid images and density of meaning; yet many examples can be found among Chinese classical poems, some even shorter than this and possessing deep aesthetic and philosophical depth. His "Imagist faith" was set out in his 1913 list of "tenets": 1) Direct treatment of the "thing", whether subjective or objective. 2) To use absolutely no word that does not contribute to the presentation. 
3) As regarding rhythm: to compose in the sequence of the musical phrase, not in sequence of the metronome [1, P. 301-302]. If this Imagist poetry is read in relation to his creative translation, it is not difficult to find clues tracing back to the influence of the Chinese aesthetic features he absorbed through that translation [12, P. 146]. That is to say, not only did he translate Chinese poems, but he was also strongly influenced by their aesthetic features, especially in his adoption and application of Chinese poetics [10, P. 149]. Whether through merit or controversy, Pound has stayed in print; both his creative translations and his Imagist poems have achieved classic status in modern American literary history. We are often conscious of the influence of Western countries in the shaping of modern Chinese literature, but the discussion above suggests that the opposite is also true: ancient Chinese poetry has had a strong and enduring influence on modern American poetry. The two cultures interact, each absorbing strength and vitality from the other. Conflict of interest statement: the authors have no conflicts of interest to declare.
Learning Analytics in the Era of Large Language Models : Learning analytics (LA) has the potential to significantly improve teaching and learning, but there are still many areas for improvement in LA research and practice. The literature highlights limitations in every stage of the LA life cycle, including scarce pedagogical grounding and poor design choices in the development of LA, challenges in the implementation of LA with respect to the interpretability of insights, prediction, and actionability of feedback, and lack of generalizability and strong practices in LA evaluation. In this position paper, we advocate for empowering teachers in developing LA solutions. We argue that this would enhance the theoretical basis of LA tools and make them more understandable and practical. We present some instances where process data can be utilized to comprehend learning processes and generate more interpretable LA insights. Additionally, we investigate the potential implementation of large language models (LLMs) in LA to produce comprehensible insights, provide timely and actionable feedback, enhance personalization, and support teachers’ tasks more extensively. Introduction Rapid technological advancements are bringing about significant transformations in every aspect of the education system.The advent of digital learning environments (DLEs) has made large volumes of novel data available.As students interact in the DLEs, digital traces about learning, performance, and engagement are recorded [1].To exploit these new forms of information and make use of computational analysis techniques, learning analytics (LA) has emerged as a new research field at the intersection of student learning, data analytics, and human-centered design.LA is defined as the "measurement, collection, analysis and reporting of data about learners and their contexts, for understanding and optimizing learning and the environment in which it occurs" [2] (p.4).To date, many efforts in LA have been devoted to information visualization or predicting students' academic performance.The essential utilities of LA, as listed by Society for Learning Analytics Research [SoLAR] [1], include (1) promoting the development of learning skills and strategies; (2) offering personalized and timely feedback; (3) increasing student awareness by supporting self-reflection; and (4) generating empirical evidence on the success of pedagogical innovations.With the growing number of published studies focused on LA each year [3], the field of LA has been recognized for its potential to improve learning outcomes for students and educators. Researchers have generated a vast literature on LA over the past decade.Systematic reviews of these studies identify various benefits attributed to LA systems, related not only to teaching and learning but also to management aspects and educational research [4].For example, LA could enhance students' engagement and performance by predicting performance and identifying students at risk of failing, providing personalized feedback and intervention strategies, personalization of learning, curriculum improvement, and course offering suggestions.In turn, these would favor better management of educational resources, improving enrollment and expense allocation.Furthermore, LA can increase our understanding of learning processes and foster the development of innovative methods for analyzing educational data [4][5][6]. 
However, there are still some areas for improvement in the development of LA systems relative to their theoretical grounding and design choices [7][8][9], challenges in their implementation [10], and issues in the evaluation of their effectiveness [11].Jivet et al. [12] recognize these as critical moments in the LA life cycle, which should always be informed by learning theories to produce effective LA tools.More recently, scholars have shifted their attention to raising awareness of the importance of making teachers an integral part of the LA design process and improving the usability of LA systems based on learning theories [13].Moreover, with the release of advanced artificial intelligence (AI) systems that can complete a variety of tasks, from memorizing basic concepts to generating narratives and ideas using human-like language, technology is revolutionizing the way we think about learning and opening up new standards for teaching practices [14].Considering these areas for improvement in LA practices, this paper offers an overview of the current challenges and limitations and proposes directions for its future development.In particular, we encourage teacher empowerment in developing LA systems and using LA to aid teaching practices.To this end, we reflect on how process data and large language models (LLMs) can be harnessed to improve the development of LA systems and support instructional tasks. This position paper begins by introducing various types of LA and their applications.Then, we present the challenges that modern LA practices face.Figure 1 provides a visual overview of these limitations, situating them within the LA life cycle, together with their proposed solutions.Insufficient grounding in learning sciences and poor design choices during the development of LA systems exacerbate issues in the interpretability of LA insights, which add to further challenges in their implementation related to prediction and actionability of feedback.Lastly, the evaluation of LA solutions brings forward issues related to their generalizability and scarce evidence of their effectiveness.From the issues presented, we put forward our recommendations based on the existing literature to involve teachers as LA designers for interpretable pedagogy-based LA systems.We also recommend using process data and natural language processing (NLP) to enhance the interpretability of LA.After that, we discuss how natural language models and their larger variants, like ChatGPT, can increase LA personalization and support teaching practices.We conclude the paper by discussing how the posited recommendations can enhance LA practice as a whole. LA: Limitations and Ongoing Challenges This section briefly presents the different scopes of existing LA systems and illustrates the weaknesses of current research and practices in this field.It is essential to acknowledge that the limitations discussed in this section, while inherently challenging and may sound detrimental, represent invaluable opportunities for investigation and potential influence on the advancement of LA and the broader landscape of modern education. 
2.1.Descriptive, Predictive, and Prescriptive LA Insights from LA systems are often communicated to stakeholders through LA dashboards (LADs), which is "a single display that aggregates different indicators about learners, learning processes and learning contexts into one or multiple visualizations" [15] (p.37).LADs can display multiple types of information.Descriptive analytics show trends and relationships among learning indicators (e.g., grades and engagement compared to peers).Descriptive dashboards typically provide performance visualizations and outcome-focused feedback [16].Researchers use modern computational techniques to analyze educational data, not only to determine student performance but also to understand why they performed as they did, what their expected performance is, and what they should do next.Predictive LA systems utilize machine learning algorithms to analyze current and past data patterns to predict future outcomes.These systems, or LADs, are mainly used to forecast academic outcomes, such as grades in upcoming assignments and final exams, and the likelihood of non-submission, course failure, or similar results.As further explored below, predictive analytics come with their own set of technical limitations and ethical challenges.More recently, there has been a shift towards creating prescriptive dashboards offering process-oriented feedback: actionable recommendations pointing students to what they should be doing next to reach their learning goals [16][17][18][19].Examples of similar systems can be found in the "call to action" emails employed by Iraj et al. [20], or in a LAD providing students with content recommendations and skill-building activities [11]. Insufficient Grounding in Learning Sciences Researchers have criticized existing LA systems for their insufficient grounding in the learning sciences and called for a better balance between theory and data-driven approaches [7,8].Most studies took a data-driven approach at the beginning of LA investigations without utilizing specific learning theories to guide their analysis.While this approach allowed for identifying behavioral patterns, interpreting and understanding them remained problematic [8].The exact definition of LA identifies measurement and analytics not as the goal itself but as a "means to an end" [21], which is the understanding and optimization of learning and educational environments.This implies that the data are meaningful only to the extent that they support interpretation and guide future actions. 
Of the 49 articles included in their literature review, Algayres and Triantafyllou [22] found that only 28 presented a theoretical framework, primarily referring to theories of self-regulated learning.Similarly, a scoping review of LA articles published from 2016 to 2020 revealed that 37 studies utilized the most common theories, namely self-regulated learning and social constructivism [8].The authors invite researchers to explore behavioral and cognitive theories, going beyond observable behavioral log data and investigating information processing strategies (e.g., problem-solving, memory).Also, they discuss how learning theories should be used to interpret LA data and promote pedagogical advancement by validating learning designs.DLEs make an astounding amount of data available to researchers and educators.Still, without a theory, they are left astray in interpreting them and deciding which variables are valuable and should be selected for their models [23].Furthermore, sometimes aggregate measures derived from simple indicators from process data are more informative for learning, as they better represent learning behaviors studied by educational theories [24].Therefore, it is essential to understand the meaning of these new measures generated in the DLE and to remember that engagement does not necessarily equate to learning [25]. Interpretability Challenges The interpretability of insights derived from LA is not only related to the theory underlying the data but also to the choices being made related to communication and design.LADs are located at the intersection between educational data science and information visualization.Recently, scholars have been reminding LAD researchers and developers that these instruments should not merely display data and ask students and teachers to assume the role of data scientists; instead, their main goal should be communicating the most essential information [26]. Research shows that learners' ability to interpret data may be limited, and to best support cognition, design choices should be founded on the principles of cognitive psychology and information visualization [9].For example, coherent displays and colors can reduce visual clutter and direct attention to the essential elements for correctly interpreting the data.The usability and interpretability of LA tools are critical.In general, educators feel that LA fosters their professional development [27]; however, even if LA tools are perceived as valuable, teachers sometimes struggle to translate data into actions [28].According to the Technology-Acceptance Model [29], the intention to use technology is influenced by its perceived usefulness and ease of use of the instrument.Therefore, even though teachers recognize the potential benefits of LA, they might avoid using dashboards if they do not feel comfortable navigating or interpreting them. 
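To illustrate what such an aggregate measure might look like in practice, the sketch below (in Python) derives a simple regularity-of-effort indicator, that is, how evenly a student's activity is spread across days, from raw DLE log events instead of reporting a bare click count. The event log, timestamps, and the choice of indicator are purely illustrative and are not drawn from any particular LA system or study.

```python
# Illustrative aggregation of raw DLE log events into a theory-informed
# indicator: regularity of study activity (how evenly effort is spread
# across days), which is easier to interpret than a raw click count.
from collections import Counter
from datetime import datetime
import math

# Hypothetical (timestamp, student_id, event) tuples from a DLE log.
events = [
    ("2024-03-01T10:05", "s1", "video_play"),
    ("2024-03-01T10:20", "s1", "quiz_submit"),
    ("2024-03-03T09:10", "s1", "video_play"),
    ("2024-03-07T22:40", "s1", "quiz_submit"),
]

def regularity(student_events):
    """Shannon entropy of activity over days, normalised to [0, 1]."""
    days = Counter(datetime.fromisoformat(t).date() for t, _, _ in student_events)
    total = sum(days.values())
    probs = [count / total for count in days.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(days)) if len(days) > 1 else 0.0

print(round(regularity(events), 2))  # closer to 1 = effort spread more evenly
```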
Prediction Issues While they provide richer information than purely descriptive LA systems, predictive models come with their own set of limitations and ethical challenges, such as the risk of stereotyping and biased forecasts.Evidence on teacher use of predictive LA tools is also mixed.Some studies find that teachers who make more intense and consistent use of LA tools can better identify students who need additional support [30], while others do not corroborate these findings [31].Furthermore, predictions are often generated by black-box models, lacking transparency, interpretability, and explicability [32].These characteristics favor actionability [33], and in their absence, the utility of the system and users' trust are reduced [34].Researchers highlight the need to improve prediction accuracy, together with its validity and generalizability [35], and advise that predictions need to be followed by appropriate actions and effective interventions to influence learning outcomes [36]. In the second edition of The Handbook of Learning Analytics, SoLAR provides directions for using measurement to transition from predictive models to explanatory models.The goal of LA is optimization, which goes a step further than prediction.At the same time, explanation is neither necessary nor sufficient for optimization; there has to be a causal mechanism on which students and teachers base their decisions if these actions are expected to produce specific desired outcomes [37]. Beyond Prediction: Actionability Issue for Automatically Generated Feedback Researchers are now advocating for the development of LADs that inform students about how they have performed so far and how they can do better [18,32].As it is widely recognized, feedback supports learning and academic achievement [38].Earlier studies on feedback adopted an information paradigm, focusing on the type of information provided to learners, its precision, and the level of cognitive complexity [20,38].More recently, the focus has shifted to feedback as a dialogic process and its actionability: students (and teachers) are not passive recipients of information.However, it is crucial to develop their abilities to understand feedback and take action [39].For feedback to be effective, learners need to understand the information, evaluate their own work, manage their emotions related to the feedback, and take appropriate actions [39,40]. Another important characteristic of good feedback is timeliness.Research shows that the effectiveness of feedback is more significant when it is received quickly [41].LA tools can offer instructors and learners constant access to automatic-generated feedback and real-time performance monitoring.Iraj et al. [20] found early engagement with feedback to be positively associated with student outcomes when instructors used an LA tool to monitor students' progress and send personalized weekly emails that provided learners with feedback on their activity and highlighted the actions required next in their learning through "call to action" links to task materials.Prescriptive information is appreciated by students [16,17] and seems to support student motivation [42].However, emerging prescriptive dashboards often rely on human intervention or employ automated algorithms based on hard-core heuristics and thresholds, so some researchers call for developing more sophisticated systems [18]. 
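To make the preceding points concrete, the following minimal sketch trains a predictive model of the kind discussed above on synthetic engagement data and adds one modest transparency aid, permutation importance, to show which inputs drive the prediction. The feature names and data are invented for the example, and a feature-importance ranking is of course not the causal explanation called for above; it is only a first step away from a fully opaque prediction.

```python
# Minimal sketch of a predictive LA model plus a simple transparency aid:
# a classifier estimates each student's risk of course failure from
# engagement indicators, and permutation importance reports which inputs
# drive the prediction. All features and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.poisson(20, n),        # logins per month
    rng.normal(120, 40, n),    # minutes on task per week
    rng.integers(0, 10, n),    # forum posts
])
feature_names = ["logins_per_month", "minutes_on_task", "forum_posts"]
# Synthetic outcome: lower engagement -> higher probability of failing.
logit = 2.0 - 0.05 * X[:, 0] - 0.01 * X[:, 1] - 0.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = failed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]            # per-student failure risk

imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: mean AUC drop when permuted = {score:.3f}")
```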
Moreover, the effectiveness of feedback is influenced by student characteristics [38,43], and it is enhanced when feedback is personalized [20].LA tools can offer instructors and learners continuous access to automatically generated feedback and real-time monitoring of their performance, and they offer new opportunities to provide individualized feedback to students.Technology-mediated feedback systems have been found to increase students' engagement, satisfaction, and outcomes [44,45].By favoring personalization and timeliness of feedback, together with the display of adequate and actionable information, student-facing LADs could help reduce the "feedback gap" [20], the difference between the potential and actual use of feedback [44,46].However, an extensive literature review from Matcha et al. [7] suggests that existing dashboards, with their scarce grounding in theory, are unlikely to follow literature recommendations for best feedback. Generalizability Issue The development and adoption of LA tools are complex and require intense efforts in terms of time and expertise.Therefore, learning institutions often assume a "one-size-fitsall" approach, creating a single tool applied across every course, discipline, and level.There has been an increase in the offering of LA tool packages that use the same off-the-shelf algorithms for all modules, disciplines, and levels [47].However, "trace data reflects the instructional context that generated it and validity and reliability in one context is unlikely to generalize to other contexts" [37] (p.22).The Gašević et al. [48] study demonstrates that LA predictive models must "account for instructional conditions", as generalized models are far less powerful than course-specific models to guide practice and research.The literature review by Joksimović et al. [49] on LA approaches in massive open online courses highlights the lack of generalizability of these studies, as they adopt a widely different range of metrics to model learning.They suggest that a shared conceptualization of engagement by finding generalizable predictors could make results from future research more comparable across different contexts, and they invite a shift from observation to experimental approaches. Insufficient Evidence of Effectiveness Reviews of the literature highlight the lack of rigorous evaluations of the effects of LA tools [50].The literature review by Bodily and Verbert [50] on student-facing LADs shows that more research is needed to understand the impact of LADs on student behavior, achievement, and skills, as the studies conducted are few and yielded mixed results.They encourage the adoption of more robust research methodologies, such as quasi-experimental studies and propensity score matching, and the investigation of underdeveloped topics, such as how students engage and interact with LADs and the evaluation of their effectiveness.Quantitative findings supporting the positive effects of LADs on learning outcomes are starting to emerge [18]; however, most of the literature consists of studies that tend to consider few outcome measures and to evaluate usability aspects, using small samples and adopting mainly qualitative strategies of inquiry [18,50].Jivet et al. 
[12] advise that, in evaluating LA solutions, usability studies should investigate the tool's perceived ease of use and utility and how users interpret and understand the outputs they receive.However, these aims must remain secondary to assessing whether the intended outcomes were achieved by LA and to evaluating their affective and motivational effects.The authors suggest strengthening the evaluation of LA by triangulating data from validated self-reported measures, assessments, and tracked data.When assessing the effectiveness of LA systems, it is essential to consider not only the outcomes but also the learning process itself.As explored below, researchers may use diversified data types collected by the LA system to offer valuable insights into learner activities, such as video logs, fine-grained click streams, eye-tracking data, and log files.These types of data allow for extracting meaningful patterns and features that can help understand learners' intermediate states of learning and how they are related to the learning outcomes [51]. Insufficient Teacher Involvement Teachers are among the most critical stakeholders in integrating LA systems in schools.Thus, the effectiveness of LA systems is very much dependent on the acceptance and involvement of teachers [52].As mentioned above, although teachers usually hold positive attitudes toward LA [27], they are also identified as a potential source of resistance to the adoption of these new systems [53,54].Surveys reveal that in 2016, LA initiatives were primarily driven by IT experts and a few dedicated faculty members in Australia and the UK.Still, for the most part, teachers were left "out of the loop" of these novel initiatives [47].However, teachers may develop a negative attitude toward LA systems and be reluctant to utilize them if they perceive them as lacking usefulness or ease of use [55].Therefore, it is vital to understand and address teachers' needs and tolerance for complex systems.Moreover, teachers represent not only the end users of LA systems but also content experts in their subjects and classrooms.As educators orchestrate the teaching and learning process, they should be called to take part in designing the learning tools they will be expected to adopt.Involving teachers as designers in the development of LA systems would help create a bridge between data and theory by integrating the teachers' learning design and aid design choices that support the usability and readability of dashboards. Moving Forward in LA The previous sections highlighted the most critical gaps in present LA research.Although LA offers excellent potential to the educational field, clear guidelines for LAD development and robust evaluation procedures are still lacking.Scarce grounding in learning theories, lack of generalizability, and subsequent scalability challenges have generated a rather large body of literature from which it is hard to draw interpretations and conclusions on the effectiveness of LA tools.Moreover, an excessive focus on data and insufficient involvement of teachers and students in the design of these systems created dashboards that are too disconnected from the instructors' learning designs, users' needs, and data literacy abilities, leading to usability and interpretability challenges. 
Such limitations must be acknowledged as they lead to venues for improvement that could enhance LA practice in several aspects.This section presents some approaches that could offer valuable guidelines for the future developments of LA and enhance their implementation.Some of these approaches have started to be adopted in the literature; however, as acknowledged as a limitation of our paper, some of the ideas must still be developed and tested to verify their educational effectiveness and their actual value in improving LA. Involving Teachers as Co-Designers in LA Human-centered learning analytics (HCLA) [13] proposes to overcome some limitations of LA through the participatory design of LA tools.Engaging stakeholders as co-creators holds the potential to develop more effective tools by transforming LA from something done to learners into something done with learners.This shift could lower ethical concerns and lead to the development of tools that better fit the needs of their users.For example, this perspective is switching the focus from relying entirely on users in data interpretation to giving them the answers they are interested in. Dimitriadis et al. [56] identified three fundamental principles of HCLA: (1) theoretical grounding for the design and implementation of LA; (2) intensive inter-stakeholder communication in the design process; and (3) the integration of LA into every phase of the learning design cycle to "support teacher inquiry into student learning and evidencebased decision-making".During LA design, the target of the LA tools should be derived from the learning design; then, the implementation of the LA tools can provide valuable insights to inform the orchestration of learning and the evaluation of the learning design itself.Finding a way to hear all stakeholders' voices can be challenging; to facilitate the orchestration Prestigiacomo et al. [57] introduce OrLA, which provides a roadmap to guide communication.Through the participatory design and the active involvement of teachers not only as end users but as designers and content experts, HCLA could favor a scalable implementation of LA and lead to the development of instruments that fit teachers' data abilities and needs [58].Similar principles remain valid when broadening the discussion from LA to educational AI in general.Cardona et al. [59] identify three instructional loops in which cooperation between AI and teachers should always center on educators: the act of teaching, the planning and evaluation of teaching, and the design and evaluation of tools for teaching and learning. An increasing number of studies have started to implement methods of participatory and co-design in the development of LA dashboards [60].Examples of LA developed in cooperation with teachers can be found in the work of Pardo et al. [61] and Martinez-Maldonado et al. [62].The tools developed for these studies allow educators to set "if-then" rules that reflect their learning design and influence the output returned from the analysis of the various data sources used by the system (i.e., semi-automated emails for processoriented feedback and data stories, respectively).Interviews with educators revealed that they liked to be able to see the rules and modify them, and some proposed showing them to students during in-class debriefings so that they could understand the difference between their performance and the learning expectations [62].Conijn et al. 
[63] present the iterative procedure they used to develop a dashboard that provides interpretable and actionable feedback about students' writing process.The steps included the cooperation of writing researchers and teachers for the design of the tool and usability tests with new teachers, which pointed to the effectiveness of the approach. Using Natural Language to Increase Interpretability To reduce reliance on users' data literacy for LADs interpretation and support the inference process, Alhadad [9] suggests integrating textual elements into visualizations, for example, through narrative and storytelling aspects.The incorporation of storytelling in LA visualization was introduced by Echeverria et al. [64].The authors advocate for the explanatory instead of the exploratory purpose of LA: dashboards should not invite the exploration of data, but rather explain insights.They propose a learning design-driven data storytelling approach, which builds on principles from information visualization and data storytelling and, in accordance with HCLA, connects them to teachers' intentions (i.e., learning design).Contrary to traditional "one-size-fits-all" data-driven visual analytics approaches, the new method derives rules from the learning design and uses them to construct storytelling visual analytics.Data storytelling principles determine which visual elements should be emphasized, while the learning design determines which events should be the focus of communication. Fernandez Nieto et al. [65] explored the effectiveness of three visual-narrative interfaces built on three different communication methods: visual data slices, tabular visualizations, and written reports.From interviews with educators, it emerged that different methods are more helpful for different purposes.For example, written reports were perceived as beneficial for teachers' reflection but not as much to be used in students' debriefings, for which tabular visualizations were thought more appropriate.Therefore, defining the purpose of the LAD and involving stakeholders in this process seems to be fundamental for developing effective dashboards. To incorporate textual elements into LADs, Ramos-Soto et al. [66] developed a service that uses natural language templates and data extracted from the DLE to automatically generate written reports about students' activity.According to the evaluation of an expert teacher, the system was able to generate practical and overall truthful insights, albeit with small divergences and not as complete as those that would have been derived from the data by human experts. 
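As a simplified illustration of the teacher-authored "if-then" rules and template-based messages described in the systems above, the sketch below renders a short natural-language feedback message per student from a small rule set. The indicators, thresholds, and wording are hypothetical; in a real deployment they would be derived from the teacher's learning design.

```python
# Illustrative sketch of teacher-authored "if-then" feedback rules combined
# with natural-language templates, in the spirit of the systems described
# above. Thresholds, indicators, and message wording are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # teacher-defined condition on indicators
    template: str                       # message template rendered per student

rules = [
    Rule(lambda s: s["quiz_attempts"] == 0,
         "Hi {name}, you haven't attempted this week's quiz yet - try it before Friday."),
    Rule(lambda s: s["video_minutes"] < 30,
         "Hi {name}, you watched only {video_minutes} min of lecture video; the worked examples in video 3 may help."),
    Rule(lambda s: True,
         "Hi {name}, you're on track - keep up the regular study sessions."),
]

def feedback(student: dict) -> str:
    for rule in rules:                  # first matching rule wins
        if rule.condition(student):
            return rule.template.format(**student)

print(feedback({"name": "Alex", "quiz_attempts": 0, "video_minutes": 55}))
```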
Natural language generation could automate the production of verbal descriptions and data stories to facilitate and guide the interpretation of charts and infographics generated by LA systems.While no system is mature enough to be trusted on its own, research in the field is moving fast.Sultanum and Srinivasan [67] recently developed DataTales, an LLMpowered system to support authoring narratives about any given chart.The system does not simply tell users what is conveyed by the data but also helps them read the chart: when the user hovers over a particular portion of text, an interactive visualization highlights the relevant elements of the graph.The prototype was evaluated through interviews with data experts.Participants found the tool effective in assisting both data explanation ("what to talk about and how" (p.3)) and exploration (including to "get a high-level summary of the data in natural language form" (p.4)) and extracting insights ("the why's" (p.3)) from the data.Although responses were mostly positive, some issues were identified, including style, lengthiness, and wrong or inaccurate interpretations.Even though the technology is not perfect, it can offer us a glimpse into the future; or, even in its flawed state, it could be used in an expert-led environment to support the development of data literacy abilities of teachers and students. Using Process Data to Increase Interpretability In recent years, with the popularity of learning systems, researchers have been interested in the process data; that is, the data generated while students interact with the learning systems.In a learning system, students' interactions with the user interface, including their duration on each screen and actions such as clicking, are often logged and commonly referred to as process data [68].However, process data encompasses more than log data; it broadly includes empirical data that indicates the process of working on a test item based on cognitive and non-cognitive constructs [69].This encompasses various data types, such as action sequences, frequency of actions, conversations or interactions within the learning system, and even eye-tracking movements and think-aloud data.In recent years, process data have received extensive research attention within the context of educational data mining, learning analytics, and artificial intelligence.Process data serves as a valuable source of detailed information regarding students' learning process within a learning system, enabling interpretation of both cognitive and behavioral aspects of learning. 
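To show what such process data typically look like before any modelling, the sketch below converts raw clickstream records into per-student action sequences, with each action paired with the time spent before the next one. The student identifiers, timestamps, and event names are invented for the example.

```python
# Sketch of turning raw clickstream records into per-student action
# sequences with durations, the basic "process data" representation used
# in the analyses discussed above. All records are illustrative.
from itertools import groupby
from datetime import datetime

log = [  # (student, timestamp, action) as a DLE might record them
    ("s1", "2024-03-01T10:00:00", "open_item"),
    ("s1", "2024-03-01T10:00:40", "use_calculator"),
    ("s1", "2024-03-01T10:02:10", "submit_answer"),
    ("s2", "2024-03-01T10:00:00", "open_item"),
    ("s2", "2024-03-01T10:00:05", "submit_answer"),
]

def to_sequences(log):
    sequences = {}
    for student, records in groupby(sorted(log), key=lambda r: r[0]):
        records = list(records)
        times = [datetime.fromisoformat(t) for _, t, _ in records]
        actions = [a for _, _, a in records]
        # pair each action with the time spent until the next action
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])] + [0.0]
        sequences[student] = list(zip(actions, gaps))
    return sequences

for student, seq in to_sequences(log).items():
    print(student, seq)   # e.g. s2 looks like a rapid guess (5 s to submit)
```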
As an important aspect of process data, response time has been extensively studied and is commonly regarded as an indicator of students' behaviors and cognitive processes.For example, the response time has been used to identify students presenting abnormal behaviors in assessments.Wise and Ma [70] proposed a normative threshold method that compares an examinee's response time with that of their peers to determine rapid guessers or disengaged test-takers.In addition, Rios and Guo [71] developed a mixture log-normal approach which assumes that, in the presence of low effort, a bimodal response time distribution should be observed, with the lower mode representing non-effortful responding and the upper mode indicating effortful responding.This approach employs an empirical response time distribution, fits a mixed log-normal distribution, and identifies the lowest point between the two modes as the threshold.A more straightforward yet effective method is to visually inspect bimodal response time distributions for a distinctive gap, which can differentiate rapid guessers from other test-takers [72].These methods can also be extended beyond assessment environments to infer students' motivation, engagement, and learning experiences by analyzing their time spent navigating learning systems. In addition to response time, clickstream data recorded during test-taking experiences can provide valuable insights into behavior patterns.For example, Su and Chen [73] utilized clustering techniques to group students' clickstream data with similar behavior usage patterns.Ulitzsch et al. [74] considered both action sequences and timing, employing cluster edge deletion to identify distinct groups of action patterns that represent common response processes.Each pattern describes a typical response process observed among testtakers.Furthermore, Tang et al. [75] introduced the model agreement index as a measure to quantify the typicality or atypicality of an examinee's clickstream behaviors compared to a sequence model of behavior.To achieve this goal, they trained a Long Short-Term Memory network to model student behaviors.This approach allows the model to incorporate various behavior patterns and acquire knowledge about normal behavior patterns across different test-taker archetypes and styles.Gao et al. [76] used fine-grained log data to capture students' progress in a programming class.Using differential sequence mining on data from the first assignment, they could predict the final course outcome with 79% accuracy and capture interpretable behavioral patterns that reflect effective and ineffective strategies that students enact to learn.For example, specific coding patterns frequent among low performers were interpreted by all researchers as indicative of unsystematic actions performed without taking time to think and of uncertainty. 
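Returning to the response-time methods above, the following rough sketch captures the spirit of the mixture approach: a two-component Gaussian mixture is fitted to log response times, and responses faster than the boundary between the two components are flagged as likely rapid guesses. The response times are simulated, and this is an approximation for illustration rather than the exact procedures proposed by the cited authors.

```python
# Rough sketch in the spirit of the mixture approach: fit a two-component
# Gaussian mixture to log response times and use the point between the two
# component means as a rapid-guessing threshold. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated item response times (seconds): rapid guesses around ~2 s,
# effortful responses around ~45 s.
rt = np.concatenate([rng.lognormal(np.log(2), 0.4, 80),
                     rng.lognormal(np.log(45), 0.5, 420)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(np.log(rt).reshape(-1, 1))
lo, hi = sorted(gmm.means_.ravel())
threshold = float(np.exp((lo + hi) / 2))        # crude boundary between modes
flagged = int((rt < threshold).sum())
print(f"threshold ~ {threshold:.1f} s, {flagged} responses flagged as rapid guesses")
```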
Biometric measures, such as analyzing eye movements, can also offer valuable insights into students' learning and test-taking behaviors.The duration of eye fixation can reflect the level of attention a test-taker pays to specific words in test items, with more challenging items generally requiring longer fixation periods [77].Pupil size can indicate fatigue levels, interest in specific learning content, and the cognitive workload associated with a particular task [78].Moreover, blink rates tend to decrease when there is a higher visual demand, indicating the reallocation of cognitive resources [79].For instance, research studies have demonstrated that when individuals encounter unfamiliar, ambiguous, or complex items, they tend to increase their regression rate, which means they look back at previous parts of the text to reinstate or confirm their cognitive effort [80,81].Furthermore, such regression has been strongly linked to the level of effort and attention a reader devotes to a reading task.Thus, increased regression is often associated with improved accuracy in processing the content information [82]. The intermediate states of students' problem-solving or writing processes within the learning system can also be analyzed.For example, Adhikari [83] proposed several process visualization practices for writing and coding tasks in learning systems, such as the playback of typing and tracking changes in paragraphs, sentences, or lines over time.By employing these visualization practices, educators can directly see (1) the specific points in the process where students spent the majority of their time, (2) the distribution of time between creating the initial draft and revising and editing it, (3) the paragraphs that underwent editing and revision, and (4) the paragraphs that remained unedited.These visualizations allow educators to explore, review, and analyze students' learning processes and their approach to writing or programming.In addition, students themselves can leverage these visualizations for self-reflection, direction, and improvement.Furthermore, the temporal analysis of keystrokes and backspaces provides insights into learners' engagement [84] and affective states [85].Allen et al. [86] encourage the exploration of additional aspects of the online language production process, such as pausing typing to check syntax or research the vocabulary. Another example of where process data proves useful is in identifying and interpreting the patterns of action sequences associated with different learning or testing outcomes.For instance, He and von Davier [87] combined sequence mining with n-gram techniques to pinpoint common patterns leading to either successful or unsuccessful action sequences.Their findings revealed that the patterns of action sequences linked to correct responses are more consistent across countries than those linked to incorrect responses.Extending this line of research, Ulitzsch et al. [88] incorporated graph-based data clustering to identify how, and in which aspects, the patterns of action sequences related to correct responses differ from those related to incorrect responses. 
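A toy version of the n-gram idea is sketched below: action bigrams are counted separately for correct and incorrect responses, making visible which sub-sequences distinguish the two groups. The action names and sequences are invented for illustration.

```python
# Toy sketch of the n-gram idea: count frequent action bigrams separately
# for correct and incorrect responses to see which sub-sequences
# distinguish them. Action names and sequences are hypothetical.
from collections import Counter

sequences = {
    "correct":   [["open", "read_stimulus", "use_tool", "check", "submit"],
                  ["open", "read_stimulus", "use_tool", "submit"]],
    "incorrect": [["open", "submit"],
                  ["open", "use_tool", "submit"]],
}

def ngrams(seq, n=2):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

for outcome, seqs in sequences.items():
    counts = Counter(gram for s in seqs for gram in ngrams(s))
    print(outcome, counts.most_common(3))
```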
Moreover, NLP techniques can be employed to analyze process data.For example, Guthrie and Chen [89] analyzed log data from an online learning platform and introduced a novel approach to modeling student interactions.They incorporated information about logged event duration to differentiate between abnormally brief events and normal or extra-long events.These new event records were treated as a form of language, where each word represented a student's interaction with a specific learning module, and each sentence captured the entire sequence of interactions.The authors used second-order Markov chains to identify patterns in this new language of student interactions.By visualizing these Markov chains, the authors found the interaction states associated with either disengagement or high levels of engagement.However, LLMs have been rarely applied for log analysis.To address this gap, Chhabra [90] experimented with several BERT models to establish a system for automatically extracting information (i.e., the events occurring within a system) from log files.In contrast to traditional log parsing approaches that heavily relied on humans constructing regular expressions, rules, or grammars for information extraction, the proposed system significantly reduced the time and human effort required for log analysis.This work demonstrates the potential of using LLMs to extract and analyze the logged events collected through LA systems, thereby improving the ease of interpreting students' learning process. According to the showcased examples above, process data can increase the interpretability of students' learning process, and including this type of data in LA systems could lead to the generation of more interpretable insights.Identifying the concrete behavioral patterns that underlie learning processes can bring to light the strategies students adopt and prompt teachers and learners to reflect on their effectiveness and what they might need to do differently to improve their performance. Using Language Models to Increase Personalization Our review of the literature identifies timeliness [41], personalization [20], and actionability [39] as attributes of effective feedback, which would support effective implementation of LA.A thematic analysis of learners' attitudes toward LADs reveals that students are interested in features that support learning opportunities: they express a wish for systems that provide everyone with the same opportunities and, at the same time, a desire for customization to deliver meaningful information.They demonstrate awareness of privacy concerns and prefer automated alerts over personalized messages from teachers.This might be because the latter elicits feelings of surveillance [91].Automatically generated personalized feedback could provide the benefits of customized messages without making students feel monitored by their teachers. A literature review on automatic feedback generation (AFG) in online learning environments [92] points to the usefulness of this technology, with about half of the studies indicating that AFG enhances student performance (50.79%) and reduces teacher effort (53.96%).The main techniques used in generating feedback were comparison with a desired answer, dashboards, and NLP. 
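The first of those techniques, comparison with a desired answer, can be sketched in a few lines: a student response is compared with a reference answer using TF-IDF cosine similarity, and a canned feedback message is chosen from the result. The texts, thresholds, and messages are illustrative, and a production system would need a far more robust measure of semantic overlap.

```python
# Minimal sketch of the "comparison with a desired answer" feedback
# technique: a student answer is scored against a reference answer with
# TF-IDF cosine similarity and mapped to a canned message. All texts,
# thresholds, and messages are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student = "Plants use light to make sugar, storing the energy in glucose."

tfidf = TfidfVectorizer().fit_transform([reference, student])
similarity = float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

if similarity > 0.6:
    message = "Good - your answer covers the key idea of energy conversion."
elif similarity > 0.3:
    message = "You are partly there; revisit how light energy is stored chemically."
else:
    message = "Your answer misses the main concept; review the section on photosynthesis."
print(f"similarity = {similarity:.2f}: {message}")
```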
NLP analyzes language in its multi-dimensionality and delivers insights about both texts and learners.Descriptive features of language (e.g., number or frequency of textual elements) can inform about student engagement or be used for predicting task completion or identifying comparable texts.Characteristics of the lexicon employed in a text can be used to classify genres or estimate readability.The syntactical structure of sentences informs about the readability, quality, and complexity of the utterance, and can be used to evaluate linguistic development.Semantic analyses can identify the central message of the text and its affective connotations, or detect overlap between two texts (e.g., original text and summary).NLP analyses can also estimate cohesion and coherence, which inform how learners process and elaborate knowledge.Moreover, NLP can communicate with teachers and learners through natural language, for example, through the generation of reports or personalized feedback [86]. Cavalcanti et al. [92] notice how existing studies on AFG are plagued by two of the limitations that have already been highlighted in this paper: insufficient grounding in educational theories for effective feedback and a lack of consideration for the role of teachers in the provision of feedback.Therefore, they encourage further research to evaluate feedback quality and develop tools focused on instructors.Moreover, they call for studies on the generalizability of systems for AFG, identifying a possible solution in natural language generation. LLMs are advanced NLP models that use deep learning techniques to learn patterns and associations between the elements of natural language and capture statistical and contextual information from the training data.The models are usually trained on vast databases encompassing various textual data sources, such as books, articles, and web pages.LLMs are not only able to understand language but also to produce coherent humanlike utterances in response to any user-generated prompt.LLMs can translate, summarize and paraphrase a given text, and generate new ones.With the release of ChatGPT in November 2022, LLMs gained huge traction in society and across numerous fields, from medicine to education, as scholars explore the applications of these new systems and warn about their pitfalls.In fact, even though the training corpora is massive, it is not always accurate or up to date, which means that sometimes outputs generated by ChatGPT can be inaccurate or outdated.For example, there are records of LLMs providing links to unrelated sources or citing nonexistent literature [93].These are examples of hallucinations, which are only one of the unresolved challenges in LLM research [94].Moreover, LLMs are not (yet) great at solving math problems [95]. Lim et al. [96] invite researchers to develop LA systems that can make feedback more dialogue-based; personalized feedback messages should go a step further and include comments on learning strategies (i.e., metacognitive prompts) to support sense-making, as understanding feedback and interpreting it in relation to one's own learning process is necessary to plan appropriate action in response to the feedback. Dai et al. 
[97] provided ChatGPT with a rubric and asked it to produce feedback on student assignments to compare it against instructor-generated feedback.The AI tool produced fluent and coherent feedback, which received a higher average readability rating than the ones written by the teacher.Agreement between the instructor and ChatGPT was high on the evaluation of the topic of the assignment; however, precision was not as satisfactory on the evaluation of other aspects of the rubric (goal and benefit).ChatGPT generated task-focused feedback for all the students and provided process-focused feedback for just over half of the assignments.On the other hand, the AI never gave feedback on self-regulation and self, while the instructor provided similar feedback in 11% and 24% of cases, respectively. Similarly, Matelsky et al. [98] developed FreeText, a model-agnostic framework that can leverage any specific LLM to provide students with timely and individualized feedback on short answers to open-ended questions.The system does not assign grades but offers textual feedback on the overall answer and specific snippets of the response that might contain errors or inaccuracies.Teachers have the option to set evaluation criteria, which the tool can also use to present them with improved versions of the question prompt.FreeText is intended to support both teachers and students, not to fully automate assessment, and it should soon be tested in a large-scale context. Yildirim-Erbasli and Bulut [99] discussed the potential of conversational agents in improving students' learning and assessment experiences through continuous and interactive conversations.The authors argue that conversational agents can create an interactive and dynamic learning and assessment system by administering tasks or items and offering feedback to students.The use of NLP enables conversational agents to provide real-time feedback that adapts to students' responses and needs, fostering a more effective and engaging learning environment.Consequently, students' motivation and engagement levels in learning and test-taking can be continuously boosted through personalized conversations and directed feedback. Hasan et al. [100] recently introduced SAPIEN, a highly customizable, high-fidelity virtual agent powered by LLMs, able to engage in dynamic video-call conversations in 13 languages and to adapt vocal and facial expressions across the range of seven basic emotions. Users can set the demographic characteristics of the avatar, choose the topic and goal of the conversation, and obtain feedback at the end of the video call.The authors suggest a wide range of applications for the tool, including language learning.They demonstrate awareness about the ethical risks linked to a virtual agent so highly humanized, and in response, they set short limits to the length of the call and the information retention capabilities of the tool.SAPIEN offers an example of what can be achieved when LLMs are coupled with other technologies (i.e., animations, speech-to-text, and text-to-speech models).While conversational agents for education do not necessarily need to be realistic, longer attention spans would likely be more beneficial than customizable humanoid avatars.Educational researchers should explore what could be achieved when LLMs are integrated into educational systems such as LA tools or intelligent tutoring systems. 
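As a minimal sketch of the rubric-grounded feedback generation discussed above (in the spirit of Dai et al. [97] and FreeText [98]), the snippet below only assembles a prompt and delegates to a placeholder client; the rubric text and the call_llm helper are hypothetical and are not part of either published system.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client (commercial or open-source)."""
    raise NotImplementedError("wire this to your LLM provider")

RUBRIC = "Criteria: (1) topic relevance, (2) clarity of the goal, (3) stated benefit."

def feedback_prompt(assignment_text: str) -> str:
    # Ask for task- and process-focused comments plus one self-regulation prompt,
    # without assigning a grade, mirroring the feedback levels discussed above.
    return (
        "You are a teaching assistant. Using the rubric below, give short, "
        "constructive feedback on the student's assignment. Comment on the task, "
        "on the learning process, and suggest one self-regulation strategy. "
        "Do not assign a grade.\n\n"
        f"Rubric:\n{RUBRIC}\n\nAssignment:\n{assignment_text}\n"
    )

print(feedback_prompt("My project aims to reduce food waste in the cafeteria by ..."))
# Example usage once a real backend is connected:
# print(call_llm(feedback_prompt("My project aims to ...")))
```

Keeping the rubric and the requested feedback levels explicit in the prompt is one way to ground the generated feedback in the teacher's learning design rather than leaving it to the model's defaults.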
Another aspect of the personalization potential of LLMs is that they can be utilized to generate learning tasks or assessment items that are optimally tailored to individual student abilities.For example, LLMs have been employed to automatically generate a variety of learning and assessment materials, including reading passages [101,102], programming exercises [103], question stems [104,105], and distractors [106,107].These examples demonstrate the potential of LLMs to create large item banks.The automatically generated assessment items then can be integrated into the existing framework of computerized adaptive tests, a testing methodology that adapts the selection of the following item based on the student's ability level inferred from their previous responses [108].As a result, students can engage in a personalized and adaptive learning experience, thereby enhancing their engagement and improving learning outcomes [109]. With a large item bank or a bank of learning tasks created, LLMs can be further used to build recommender systems.Recommender systems in education aim to offer personalized items that match individual student preferences, needs, or ability levels, helping them navigate through educational materials and optimize their learning outcomes.Typically, there are two popular approaches for building recommender systems: collaborative filtering and content-based filtering.The underlying idea of collaborative filtering is to analyze students' past behavior and preferences to generate recommendations, identifying patterns and similarities between users or items.It assumes that students who have exhibited similar interests will continue to do so in the future.On the other hand, content-based filtering examines the content of the items and compares them to students' profiles or past interactions.By identifying similarities between the content of items and students' preferences, needs, or ability levels, the system can generate recommendations that match students.In the era of LLMs, language model recommender systems have been proposed to increase transparency and control for students by enabling them to interact with the learning system using natural language [110].LLMs can interpret natural language user profiles and use them to modulate learning materials for each session [111].For example, Zhang et al. 
[112] proposed a language model recommender system leveraging several language models, including GPT2 and BERT.They converted the user-system interaction logs (items watched: 1193, 661, 914) to text inquiry ("the user watched <item name> of 1193, <item name> of 661, and <item name> of 914") and then used language models to fill in the masks for recommendation ("now the next item the user wants to watch is ").Therefore, integrating LLMs into LA systems could generate more effective tools by affording students a highly personalized learning experience and providing detailed and timely verbal feedback on their performance and progress.Moreover, by automating feedback generation, LLMs can relieve teachers from demanding and time-consuming tasks, allowing them to devote more time to other aspects of teaching.However, it is always important to keep teachers in the loop in the process of the generation and provision of automatic feedback, as these systems are not (yet) able to touch upon all the dimensions of learning and might not integrate the learning design or take the student history into account.Some existing LA systems offer an instructor-mediated approach to personalized feedback, which offers teachers greater control over the metrics and messages returned to students, for example, by allowing them to set up "if-then" rules for message delivery based on their specific learning design.A focus group exploring students' perception of a similar system reveals that, even if they knew that the messages were, to some extent, automated, pupils perceived that their instructor cared about their learning.The authors argue that the perception of interpersonal communication favored proactive recipients of feedback and increased motivation for learning [113].Cardona et al. [59] support the use of AI for AFG but recommend always keeping educators at the center of the feedback loops and invite researchers to create feedback that is not solely deficit-focused but also asset-oriented, able to help students recognize their strengths and build onto them. Using Language Models to Support Teachers Feedback generation is only one of the many possible applications of LLMs to support educational practices.Allen et al. [86] suggest that when applying NLP to LA, we should consider both the multi-dimensional nature of language and the multiple ways in which language is part of the learning process.Language permeates every aspect of learning: it is through processing natural language that learners are asked to understand course materials and tasks (input), explain their reasoning (process), and formulate their responses (output).NLP can be leveraged to analyze the learning process in all its different phases.At the input level, NLP can inform teachers about how their communications and the materials they select impact students and also identify the most appropriate materials for each student based on their reading abilities and vocabulary skills.To understand cognitive processes underlying learning, NLP techniques can automate the analysis of think-aloud protocols and open-ended questions in which respondents describe their reasoning.Lastly, NLP can analyze textual outputs produced by students with different objectives, such as automated essay scoring (AES), assessing students' abilities (e.g., vocabulary skills) and understanding of the course content, and providing highly personalized feedback. Bonner et al. 
[114] provide examples of practical uses of LLMs to alleviate teachers' workload and free up time to focus on learners while creating engaging lessons and personalized materials. LLMs such as ChatGPT can correct grammar and evaluate cohesion in student-generated texts, summarize texts, generate presentation notes from a script, offer ideas for lessons and classroom activities, create prompts for writing exercises, generate test items, write new texts or modify existing texts into suitable assessment materials based on skill level, and guide teachers in the development of teaching objectives and rubrics. By crafting well-thought-out and specific inputs, teachers can receive outputs that best fit their intent and meet their needs. For example, teachers can specify how many distractors should be included in the multiple-choice questions generated by the AI, what writing style should be used, or how difficult the text should be. When asked to provide ideas for classroom activities to introduce a topic, ChatGPT proposed tasks that span the taxonomy of learning, from analyzing to applying, depending on the student skill level the activity was designed for. Through LLMs, teachers can create personalized materials for each student in a fraction of the time it would take them to do so themselves.

AI technologies could enhance the practices of formative assessment by capturing complex competencies, such as teamwork and self-regulation, by promoting accessibility for neurodivergent learners, or by offering students constant support whenever needed, even outside of class times [59]. For example, LLMs can be used to build virtual tutors that can help learners understand concepts, test their knowledge, improve their writing, or solve assignments. Khanmigo is a virtual tutor developed by Khan Academy that uses GPT-4 to support both students and teachers in many of the ways presented above. The system was instructed to tutor students based on the best practices identified by the literature, which means it supports and guides student reasoning processes without doing the assignment for them, even when asked to do so. Chat logs are made available for teachers to access, and inappropriate requests (e.g., cheating) are automatically flagged by the system and brought to the educator's attention [115].

All these applications are anticipated to reduce teachers' workload, either by taking tasks on themselves (e.g., modifying a text so that it meets the appropriate difficulty level for learners) or by offering educators guidance and ideas (e.g., planning classroom activities). Users are encouraged to be specific when providing prompts and to keep interacting with the LLMs, giving them further instructions if they are not satisfied with the answer they received, as these systems retain a more or less extensive memory of the conversation (the context window). Increasing efforts have recently been focused on enlarging the memory capabilities of LLMs, which would be useful for approaching complex tasks, such as summarizing entire books or keeping track of each student's background and interests.
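As a hedged illustration of the prompt-crafting practices described by Bonner et al. [114], the function below assembles a structured request for multiple-choice items with a teacher-specified number of distractors and difficulty level; the parameter names and requested output format are invented for the example, not prescribed by the cited work.

```python
def mcq_prompt(passage: str, n_items: int = 3, n_distractors: int = 3,
               difficulty: str = "intermediate") -> str:
    """Build a prompt asking an LLM for multiple-choice items with set constraints."""
    return (
        f"Read the passage and write {n_items} multiple-choice questions at "
        f"{difficulty} difficulty. Each question must have 1 correct answer and "
        f"{n_distractors} plausible distractors. Return the result as JSON with "
        "fields: question, options, answer.\n\n"
        f"Passage:\n{passage}"
    )

print(mcq_prompt("Photosynthesis converts light energy into chemical energy ..."))
```

Requesting a machine-readable format such as JSON also makes it easier to feed the generated items into an existing item bank or adaptive testing pipeline.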
Discussion Although the literature has received numerous contributions over the last few years, there are still limitations in the development and design of LA tools and challenges in their implementation.Existing LA applications still suffer from an insufficient grounding in pedagogical theories, leading to difficulties in the valid interpretation and use of the learner data.Moreover, the generalizability of LA models is still sub-optimal in terms of performance, and substantial evidence of LA effectiveness is lacking due to mixed results and the paucity of evaluation studies making use of strong research methodologies.Educators' overall attitude towards LA tends to be positive, but they still face challenges in adopting LA tools.An excessive focus on data, detached from learning theories and the teacher's learning design, together with poor design choices, can create tools that do not meet their end users' needs and data literacy abilities. Making teachers co-designers in the development of LA seems to be a promising route to integrate pedagogical theories and the teachers' own learning design with the behavioral data collected in the DLE.The collaborative design process proposed by HCLA should yield tools that better meet the context need, better enable teachers to interpret insights, and better meet their data literacy skills. Another promising way to aid users in interpreting LA data is to integrate visual information with written text.Natural language is central to communication, permeating every aspect of teaching and learning.With the recent and fast evolution of language models, a plethora of new opportunities are opening up in the educational field.LLMs can be used to evaluate assignments, provide personalized feedback on students' essays and their progress, or offer support as an ever-accessible tutor.Furthermore, LLMs can support teachers in various other tasks, from adapting learning materials to their students' language proficiency levels to developing creative activities, learning plans, essay prompts, or questions for testing.LLMs should not be embraced as the solution to all problems in education and LA; they could be effective in increasing interpretability and personalization of LA insights but cannot address the foundational issues related to the development of LA systems and the investigation of their effects. Integrating LLMs into LA could make insights more interpretable for users, and integrating LA into LLMs could give the language model the context necessary to offer each student highly personalized and better-rounded feedback that takes their history, progress, and interests into account when providing reports and recommendations.Moreover, LLMs can serve educators as support tools to approach complex and time-consuming teaching tasks.The intent is not to use AI to replace teachers but to put technology at the service of teachers.Educators should use LLMs as a resource to reduce workload, stimulate creativity, and offer students tailored materials and more feedback while retaining their role as reference figures and decision-makers in the planning and evaluation of learning.AI is not supposed to strip teachers of the value of their expertise but rather to support it and allow them to focus on tasks in which the human factor cannot be replaced. 
Contrary to the interpretation of LA data, LLM outputs are generally as straightforward as possible since the systems communicate directly through natural language.In this regard, one of the barriers to acceptance and usability is removed.However, integrating LLM systems into teaching practices still requires trust in the technology and an adjustment in the ways teachers have been operating until now.As applications of LLMs increasingly take hold in the educational world, we should provide educators with guidelines on how to interact with these systems, including how to phrase their prompts to obtain the answer that best fits their needs, understanding the limitations of these tools, and being aware of risks.When used responsibly, LLMs such as ChatGPT present opportunities to enhance students' learning experience and mitigate a considerable amount of workload for teachers, for example, through assistance in the formulation of test item writing [116].However, educators should be aware of potential issues that LLMs entail, such as over-reliance on the LLM, copyright, and cheating [116,117].In this era of rapid technological development, a new approach to teaching practices may be necessary to revolutionize modern education and reconcile the tension between human teachers and artificial intelligence [117].In particular, to successfully establish a safe and prolific cooperation with AI in education, we need to find a balance between the contrasting forces of human control and delegation to technology, between collecting more data to better represent students and respecting their privacy, and strive for personalization that does not cross over the line of teacher surveillance [59]. Limitations and Directions for Future Research This study has several limitations worth noting.First, we recognize that this paper does not offer any AI-or LA-based solutions to overcome the limitations in the evaluation stage of the LA life cycle.Specifically, the issue of generalizability is an ongoing challenge for LA researchers.Inadequate feature representation, inadequate sample size, and imbalanced class are primary causes that hinder the generalizability of LA models [118].However, such problems are commonly encountered in real-world datasets.The achievement of a shared conceptualization is hampered by patterns in the population, as both individual factors (e.g., the shift in interest) and societal factors (e.g., trends in education) could change at the sub-group level and, therefore, hinder a common feature representation.The mentioned sample size and imbalanced class issues are also hardly avoidable, as in predictive tasks that target low-occurrence but high-impact situations, such as school dropout, the discrepancy between the minority and the majority class is usually high [119].Thus, these limitations can usually be addressed only after the fact. Furthermore, the challenge of insufficient evidence of effectiveness cannot be addressed solely by using AI-or LA-based solutions, but it calls for purposeful choices in the development of LA tools and evaluation studies.To guide the planning of LA evaluations, we encourage future research to follow Jivet et al. [12]'s recommendations outlined above.However, future evaluation studies might employ NLP techniques and LLMs to support the qualitative analysis of teachers' and students' responses to open-ended questions about LA usability and perceived utility. 
Lastly, we want to note that the effectiveness of the LLM-based solutions proposed in this paper to improve LA has not been tested yet, as the LLM-based educational tools discussed above are still under development.Furthermore, despite the potential benefits of LLM-based solutions, technology readiness remains a significant challenge.Yan et al. [120] conducted a scoping review on the applications of LLMs in educational tasks, focusing on the practical and ethical limitations of LLM applications.The authors asserted that there was little evidence for the successful implementation of LLM-based innovations in real educational practices.In addition, they noted that existing LLMs applications are still in the early stages of technology readiness and struggle to handle complex educational tasks effectively, despite showing high performance in simple tasks like sentiment analysis of student feedback [121].Furthermore, the authors pointed out that many reviewed studies lacked sufficient details about their methodologies (e.g., not open-sourcing the data and codes used for analysis), making it challenging for other researchers and practitioners to replicate their proposed LLMs-based innovations.Based on the results, Yan et al. [120] suggest future studies to validate LLM-based education technologies through their deployment and integration in real classrooms and educational settings.Real-world studies would allow researchers to test the models' performance in authentic scenarios, particularly for tasks of prediction and generation, and to evaluate their generalizability.The authors warn researchers that studies in educational technology tend to suffer from limited replicability.Therefore, they encourage them to open-source their models and share enough details about their datasets. From an ethical perspective, adopting LLMs and AI-powered learning technologies in education should carefully consider their accountability, explainability, fairness, interpretability, and safety [122].Data privacy is a primary concern for ethical AI, and information security standards should be followed at all stages of data management.Informed consent for data collection, usage, sharing, and disposal is the first essential step to ensure ethical data treatment [123]; however, often users are not aware of the extent of personal information they agree to share [124], and more concerns about individual freedom of choice arise when the use of AI-based technologies is required by the school [123].Scholars warn that excessive surveillance can diminish learner agency, and predictive models based on student characteristics can put self-freedom at risk and perpetuate systematic biases embedded in the algorithms [125].The majority of existing LLMs-based innovations are considered transparent and understandable only by AI researchers and practitioners.At the same time, none are perceived as sufficiently transparent by educational stakeholders, such as teachers and students [120].To address this issue, future research should incorporate a human-in-the-loop component, actively involving educational stakeholders in the development and evaluation process.This also ensures that the educational stakeholders gain insights into how LLMs and AI-powered learning technologies function and how they can be harnessed effectively for improved learning outcomes. 
Future studies could further explore the application of NLP techniques to analyze process data and generate written reports from students' data.Although LLMs have only recently been developed and numerous challenges remain to be solved, researchers both inside and outside of academia are hastily at work to address them and improve these models, and as the capabilities of LLMs expand, so will their applications [94].For example, expanding LLM context windows would support the provision of feedback that takes into account students' background information, such as individual interests and level of language proficiency [126].Further, LLM could also be used with an intelligent tutoring system to enhance the quality of feedback provided to students [127].Moreover, as LLMs will find their way into teaching and learning practices, further consideration should be given to the ethical implications of AI in education.Data privacy and transparency concerns call for higher model explainability and greater involvement of stakeholders in developing and evaluating educational technologies.Moreover, while the high level of personalization that LLMs could offer students might increase equity, the costs currently associated with developing and adopting these technologies raise issues about equality.Additional concerns involve model accuracy, discrimination, and bias [120].Researchers, policymakers, and other educational stakeholders should consider what they can do to mitigate these threats to fairness and ensure that educational AI will not broaden inequalities instead of reducing them. Conclusions While LA holds many promises to enhance teaching and learning, there is still work to be done to bring them to full fruition.The present paper highlighted the areas for improvement in the development, implementation, and evaluation of LA and offered guidelines and ideas that could be tested to overcome some of these challenges.In particular, there is a need for incorporating data and learning theories, as these would provide a lens to make sense of LA insights.HCLA offers principles to reach this integration through intensive cooperation with educators as co-designers of LA solutions.In addition, using process data in LA systems can enhance our understanding of students' learning processes and increase the interpretability of insights.Furthermore, we explored numerous ways in which LLMs can be deployed to make LA insights more interpretable and customizable, to increase personalization through feedback generation and content recommendation, and to support teachers' tasks more broadly while always maintaining a human-centered approach. 
By raising awareness about areas for improvement and highlighting the tools that the literature and recent technological innovations are providing, we hope this paper can inspire further efforts to bring LA closer to fulfilling its potential. Future research should strive to implement human-centered frameworks in LA development, from identifying users' needs to ensuring that design choices support usability. Indicators of engagement and learning should not be defined solely by obscure algorithms but be based on shared conceptualizations grounded in pedagogical theories and fitted to the instructors' learning design. Only then does it become possible to strike a balance between generalizability and context-specificity, and between prediction accuracy and interpretability. Moreover, as SoLAR reminds researchers, the utility of LA extends beyond prediction, encompassing the development of complex skills and learning strategies and the personalization of feedback. To tap into these potentials, future studies could incorporate process data into LA systems to identify concrete behavioral patterns and the underlying learning processes, giving teachers and students concrete elements to reflect upon to understand their performance, as well as insights that could be linked to learning theories. Examples of valuable data sources are response times and action sequences, such as writing and editing processes. In the wake of the recent innovations in language models, we invite researchers to explore how LLMs can be integrated into LA to support interpretability and personalization: the limited but expanding capabilities of the existing LLM-based tools presented in this paper offer a promising starting point for researchers to improve upon and test in realistic settings. Evaluation studies are crucial to assess the effectiveness of LA and inform the community whether we are moving in the right direction.

Figure 1. Limitations in the LA life cycle and proposed solutions. Note: Circles represent the proposed solutions. HCLA stands for human-centered learning analytics. InfoVis stands for information visualization. LLM stands for large language models.
Review of human–robot coordination control for rehabilitation based on motor function evaluation As a wearable and intelligent system, a lower limb exoskeleton rehabilitation robot can provide auxiliary rehabilitation training for patients with lower limb walking impairment/loss and address the existing problem of insufficient medical resources. One of the main elements of such a human–robot coupling system is a control system to ensure human–robot coordination. This review aims to summarise the development of human–robot coordination control and the associated research achievements and provide insight into the research challenges in promoting innovative design in such control systems. The patients’ functional disorders and clinical rehabilitation needs regarding lower limbs are analysed in detail, forming the basis for the human–robot coordination of lower limb rehabilitation robots. Then, human–robot coordination is discussed in terms of three aspects: modelling, perception and control. Based on the reviewed research, the demand for robotic rehabilitation, modelling for human–robot coupling systems with new structures and assessment methods with different etiologies based on multi-mode sensors are discussed in detail, suggesting development directions of human–robot coordination and providing a reference for relevant research. Introduction As a wearable robot, a lower limb rehabilitation exoskeleton can provide limb support to restore human locomotion [1] and address the current shortage of medical resources. Therefore, research on this topic has gradually gained importance [2]. In the 21st century, with rapid developments in wearable exoskeleton robots for power and rehabilitation, commercial applications have begun [3]. Several theories and techniques have been formulated for lower limb rehabilitation exoskeleton robots for various types of patients [4][5][6][7][8][9][10]. The typical feature of a lower limb exoskeleton rehabilitation robot is that it is worn by a patient; thus, human-robot coordination control is extremely important. To this end, scholars have proposed human-in-loop systems [11,12] that aim to solve the tri-co (coexistingcooperative-cognitive) problem [13]. Reviews on robotassisted lower limb rehabilitation have also been published. A review by Meng et al. [14] focused on the progress of mechanisms, training modes and control strategies for lower limb rehabilitation robots. Lower limb orthoses and exoskeleton devices are broadly reviewed according to joint types, actuation modes and control strategies [15]. Shi et al. [2] reviewed and critically evaluated the research progress in human gait analysis and systematically summarised developments in the mechanical design and control of lower limb rehabilitation exoskeleton robots. The advantages and disadvantages of the theory and technology used in prototypes and products have also been compared and summarised [16]. These reviews focused on the design and control of the systems; however, they did not provide much detail on human-robot coordinate control. A systematic overview by Yan et al. [17] outlined the assistive strategies utilised by active locomotionaugmentation orthoses and exoskeletons. Control strategies have been reviewed and classified to determine how these devices interact with users [18]. 
These reviews have mainly been carried out from an engineering perspective without an in-depth analysis of the clinical rehabilitation needs of lower limbs or patient movement disorders, without considering the relationship with clinical rehabilitation in the analysis of modelling, perception and control nor the effect of the different etiologies of lower limb motor dysfunction on the robotic rehabilitation. In addition, these reviews do not provide much detail regarding the modelling of the human-robot coupling system, which is an important component of humanrobot coordination. Different fields such as robotics, biomechanics and human motor control must converge for the development of lower limb rehabilitation exoskeleton robots [19]. Therefore, patients' functional disorders and clinical rehabilitation needs of the lower limb must be analysed, forming the basis for human-robot coordinate control of lower limb rehabilitation robots. When coupling the robot and the human body, the system must be analysed and modelled, also forming the basis of human-robot coordinated control. Accordingly, a perception system is designed to process multi-fusion information and provide the necessary feedback for the control of the robot. In the cases of demand, model and feedback, a control strategy is designed to achieve human-robot coordinated control. Therefore, to provide a reference for related research, this paper reviews human-robot coordination control of lower-limb rehabilitation robots from four aspects, including demand analysis, system modelling, sensing and control strategies. Based on the reviewed research, the demand for robotic rehabilitation, modelling for human-robot coupling systems with new structures and assessment methods with different etiologies based on multi-mode sensors mechanism of rehabilitation and the needs of patients in rehabilitation are discussed in detail, suggesting development directions of human-robot coordination and providing a reference for relevant research. Motor function assessment for rehabilitation Motor function assessments are essential during rehabilitation, which can help us understand the functional state of patients. Many different methods and tools are used to evaluate motor functions, including traditional tools and objective-evaluation-based and biological-signal-based methods (Fig. 1). Traditional tools Traditional tools (such as scales) can evaluate many motor functions, including walking ability, balance, endurance, strength, and gait. The roles of different tools overlap. For example, the timed up and go (TUG) test is widely used to evaluate balance and walking ability [20,21], while the Berg balance scale and the short physical performance battery are also used for balance assessment [22,23]. In another study, TUG was used to predict the risk of falls in the elderly [24]. The 6-min walk test (6MWT) is a submaximal exercise test that measures the distance in meter (m) traversed over 6 min and provides cardiopulmonary and musculoskeletal functional capacity information. Therefore, different tools have been used for the same motor function in different studies. Conversely, the same tools may be used for different motor functions. The interpretation of each tool may vary from one study to another. Traditional tools can provide a global description of the functional state but cannot quantify real-time movement information in motor function assessment. 
Objective-evaluation-based method The progress of new technologies has given rise to devices including inertial measurement units (IMUs), motion capture systems, force plates, and foot pressure sensors. These devices allow an objective evaluation of human movements, providing us with the movement information of patients. This information provides a better understanding of how humans control their movements. Motor control strategies are essential for understanding the patient's motor dysfunction and finding new rehabilitation techniques. Human movements include static and dynamic characteristics. Static characteristics are also referred to as spatiotemporal parameters, and they include step length, step width, distance, and time [25]. Stroke patients have an asymmetric gait pattern with a large difference in step length [26] or swing time on both sides [27]. Moreover, dynamic characteristics are time series parameters [25], which commonly include kinematic and kinetic parameters. Kinematic parameters include joint trajectories, joint angles, joint velocities, joint accelerations, joint range of motion, and trajectory of centre of mass (COM). Pickle et al. [28] evaluated the balance ability of patients with Parkinson's using angular momentum calculated by the angular acceleration and velocity of segments. The sway distance between the COM and centre of pressure (COP) and the sway speed of COP are also used to assess balance ability [29]. A previous study used the correlation coefficient of the left and right joint angle curves to evaluate the asymmetry of hemiplegic gait [26]. The maximum stretching speed of the elbow joint was found to be related to the modified Ashworth scale used to assess spasticity [30]. A systematic review showed that almost half of the current exoskeleton performance evaluation studies used kinematic parameters [31]. Kinematic parameters can be obtained using tools such as inertial measurement units, infrared motion capture systems and image processing systems (Figs. 2(a) and 2(b)). Kinetic parameters include torque, force, power, ground reaction force (GRF), and heel-contact force. The GRF and heel-contact force can be used to identify the gait events of heel strike and toe-off in patients with hemiplegia and spinal cord injury [32,33]. The maximum joint resistance can be used to evaluate muscle tension in patients with spasms [30]. Kinetic parameters can be used to quantify weakness in patients. For example, Neckel et al. [34] compared active joint torques between patients with chronic stroke and a control group and found that patients who suffered from a stroke were significantly weaker in six of the eight measures tested. Another study by this team comparing gait patterns of subjects wearing Lokomat showed that the kinematic patterns of the chronic stroke and control groups were similar. However, the kinetic parameters were different, with the hip extension torque and knee flexion torque of the uninjured side being significantly greater in patients with stroke than in the control group [35]. This suggests that although Lokomat uses symmetrical kinematic features to guide walking, the torque pattern remains asymmetric. Thus, further investigating ways to appropriately combine kinematic and kinetic parameters is necessary to better represent patients' needs. The GRF and heel contact force can be obtained through measurements using a three-dimensional force plate system or a plantar pressure system. 
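The correlation-based asymmetry measure mentioned above [26] and a simple spatiotemporal symmetry index can be computed directly from time-normalised gait data; the knee-angle curves and swing times below are synthetic values used only for illustration.

```python
import numpy as np

# Synthetic knee-angle curves over one gait cycle (0-100 %), in degrees.
t = np.linspace(0, 1, 101)
left_knee = 30 * np.sin(2 * np.pi * t) + 20          # illustrative "unaffected" side
right_knee = 18 * np.sin(2 * np.pi * t - 0.4) + 20   # illustrative "affected" side

# Pearson correlation between left and right curves: values near 1 indicate a
# symmetric joint-angle pattern, lower values indicate asymmetry.
r = np.corrcoef(left_knee, right_knee)[0, 1]

# A simple symmetry index on a spatiotemporal parameter, here swing time (s).
swing_left, swing_right = 0.42, 0.31
symmetry_index = abs(swing_left - swing_right) / (0.5 * (swing_left + swing_right))

print(f"joint-angle correlation: {r:.2f}, swing-time symmetry index: {symmetry_index:.2f}")
```

Either measure can be tracked over repeated sessions to quantify whether robot-assisted training reduces gait asymmetry.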
The pressure sensor in the insole can be placed in the shoe to measure the vertical force. A series of kinetic parameters, such as joint torque, can be calculated using the inverse dynamics method (Figs. 2(c) and 2(d)). Biological-signal-based method Surface electromyography (EMG) records muscle activity ( Fig. 2(e)). The biological signals of the brain can also be recorded using electroencephalography (EEG). Muscle activity can be used to evaluate the effort required by patients to complete motor tasks [36] as well as abnormal muscle coactivation patterns [37]. A previous study has shown that the contraction of antagonistic muscles in stroke patients is very strong during ankle flexion and extension and knee extension [34]. Shestakov [38] used EMG to evaluate astronauts' ability to maintain body balance in the presence of external disturbances. At present, EMG is being used to identify motor intentions of healthy people [39]. However, as mentioned, patients with disorders, such as strokes or spasms, may induce abnormal muscle contractions, which also produce EMG signals. Therefore, identifying a patient's motor intention through EMG signals directly has limitations. EMG varies greatly among individuals when evaluating human movements as the muscle activity of maximum voluntary contraction is used for standardised processing. In addition, EMG is also affected by fatigue. These problems may limit the use of EMG in rehabilitation robots. 3 Modelling and perception of human-robot-coupled system 3.1 System modelling for three levels Humans wear lower limb exoskeleton rehabilitation robots for rehabilitation training, forming a human-robot coupling system. Models of the coupling system can provide a basis for the design and control of the system. The modelling of such a system can be divided into three levels based on its characteristics, including robot, human and human-robot interaction, as shown in Fig. 3. The first level involves modelling the actuators. Lower limb rehabilitation robots often use electric motors and hydraulics. For the motor drive, the servo system is a three-closed-loop control including a position control loop, speed control loop and current control loop, which is generally simplified to a second-order differential link [40]. For hydraulic systems, the corresponding drive system model is usually set up according to the hydraulic components adopted, such as the general valve-controlled asymmetrical cylinder system [41]. Both these drives are rigid drives. To improve human-robot collaboration, studies have been conducted on the design of the drive system. On the one hand, pneumatic [42] drives with stronger flexibility are adopted, and on the other hand, serial elastic actuators (SEAs) [43] and cable-driven actuators [12] are adopted. At the level of the drive system, the elastic actuator is modelled. The drive is connected to the load via a compliant element. The drive dynamics are represented by the inertia and motor torque. According to the structure designed by SEA, the corresponding actuator modelling can be obtained by considering friction and other links [44]. The distribution of driven cables is various, potentially satisfying different requirements of the robot and obtaining better performance. More emphasis should be put on the unidirectional characteristics and the coupling relationship of cables [5]. 
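A minimal sketch of the series elastic actuator principle discussed above: the output torque is inferred from the deflection of the elastic element between the motor side and the joint side. The spring constant and angle samples are assumed values, not parameters of the cited designs.

```python
import numpy as np

k_spring = 150.0  # N*m/rad, assumed stiffness of the elastic element

def sea_torque(theta_motor: np.ndarray, theta_joint: np.ndarray) -> np.ndarray:
    """Series elastic actuator: output torque is proportional to spring deflection."""
    return k_spring * (theta_motor - theta_joint)

# Illustrative motor-side and joint-side angles (rad).
theta_m = np.array([0.10, 0.15, 0.20])
theta_j = np.array([0.08, 0.11, 0.14])
print(sea_torque(theta_m, theta_j))  # estimated joint torques in N*m
```

Because the same deflection measurement doubles as a torque sensor, this arrangement is often used to realise compliant, force-controlled joints without a separate load cell.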
In the process of drive modelling, the most important problems are the identification of the system parameters and the accuracy of the model, both of which affect the control performance. Another aspect is modelling at the robot system level. Lower limb exoskeleton rehabilitation robots generally adopt serial structures after simplifying the joints of the human lower limbs [2]. In recent years, parallel structures have also been reported based on further studies of real human movement [8,45,46]. Both the series and parallel structures are rigid structures. The Lagrange method [47] and the Newton-Euler method [48], which are widely used in industrial robots, can be used to establish rigid-body dynamic models. For a lower limb exoskeleton robot in a series configuration, the robot is simplified into a multi-link model, and dynamic modelling of the exoskeleton is carried out on this basis [49]. For robots with parallel configurations, ideas similar to those used for industrial robots can be applied for dynamic modelling [9]. For a lower limb exoskeleton rehabilitation robot, however, it is not sufficient to model the robot alone; the human body must also be included to model the human-robot coupling system. The human lower limb is simplified into a multi-link model by simplifying each of its joints [50]. Existing multi-link models are mainly derived by simplifying the sagittal plane movement of the human body. The lower limb is generally simplified to two- [51] or three-DOF link models [49], and five-bar [52] and seven-bar [53] human dynamics models have been established without and with consideration of the ankle joint, respectively. Different dynamic models have also been established for the support and swing phases [54]. The main function of these models is to calculate the corresponding joint torques of the human body. However, owing to the musculoskeletal system of the human body, a rigid body cannot simply be used to represent the dynamics of the human body. Therefore, muscle models have also been used for dynamic modelling of the human lower limbs [55,56], and a human reflex-based musculoskeletal model [57,58] has been used to obtain the driving torque. Alternatively, from the control perspective, the human body can be treated as an equivalent impedance model comprising stiffness and damping, so that explicit dynamic modelling of the lower limbs is not performed [59]; this impedance model is combined with the robot model to form the human-robot coupling system. None of the above models can accurately describe the rigid-flexible coupling characteristics of human lower limbs caused by the musculoskeletal system. The models are also inaccurate because the inertial parameters of the human body differ greatly between individuals and are difficult to measure accurately. Such mapping relations differ from person to person, and the large individual differences make application difficult. Furthermore, for patients who need to carry out passive movements, the capacity for voluntary movement is weak: they cannot generate the torque calculated from a musculoskeletal model, and their EMG signals are difficult to collect. The human body and the robot are not rigidly connected but are usually connected through flexible links such as straps [9], through which human-robot interaction (HRI) forces act. These interaction forces realise the transfer of force and energy between the robot and the human body.
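The multi-link models discussed above are usually written in the standard rigid-body form; the decomposition of the right-hand side into exoskeleton, human and interaction torques shown here is a common generic convention rather than the specific formulation of any single cited work:

\[ M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau_{\mathrm{exo}} + \tau_{\mathrm{hum}} + \tau_{\mathrm{int}} \]

where \(q\) is the vector of joint angles, \(M(q)\) the inertia matrix, \(C(q,\dot{q})\) the Coriolis/centrifugal matrix, \(G(q)\) the gravity vector, \(\tau_{\mathrm{exo}}\) the actuator torques, \(\tau_{\mathrm{hum}}\) the torques generated by the wearer, and \(\tau_{\mathrm{int}}\) the torques transmitted through the flexible attachments.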
For such an interaction force model, a K-B model with a single degree of freedom (DOF) was proposed [60]. To expand the model, a K model with three DOFs was proposed [9]. The main problems of these models are as follows: The parameters of the impedance model are inaccurate. These models also assume that the joint centre of the human body and the joint centre of the robot coincide, but the influence caused by the joint centre mismatch is not considered. Furthermore, when using the multi-link model for human-robot coupling dynamic modelling, we tend to assume no deviation occurs in the movement between the human and robot, thus assuming a rigid connection between the human and robot. However, particularly in the early stage of the rehabilitation, for the patient, the interaction force owing to the motion deviation between the human and robot drives the human body to move. If no deviation occurs, no interaction force occurs, creating contradictions in the multi-link model. Perception for rehabilitation As a human-centred intelligent system [61], the lower limb exoskeleton rehabilitation robot needs to fully perceive the information of the human-robot coupling system through a sensing system and identify the motion state and intended motion of the patient to realise effective human-robot coordination, ensure a smooth and effective control strategy and achieve the effect of rehabilitation. Thus, a perception system is a key component of the system to realise human-robot coordination control. Two types of information are obtained by the perception system. One is the information from physical and biological sensors, which reflects the motion and state of the human-robot coupling system. The other is HRI information, which accurately predicts the intended movements of patients (Fig. 4). For robot systems, perception can be achieved using physical sensors. Linear and rotary potentiometers and force sensors can be set at the joint to measure the output angle and torque of the joint [5]. The plantar pressure information can be detected using plantar pressure sensors and ground reaction force sensors [62]. The collection of this physical information and input into the control system can serve as a feedback link and provide a basis for the design of the controller. A lower limb exoskeleton rehabilitation robot is a typical human-robot coupling intelligent system, which also needs to perceive the relevant information of the human body. However, biological signals are collected to identify the intended human movement. The sensor mode based on EMG serves as the input signal of the controller [63] to identify muscle strength or gait for corresponding control or the method of electromyographic fusion [64]. Through mechanism analysis of brain signals [65] and intention recognition [66], great progress has been made in the research on intention perception of exoskeletons based on EEG signals [67,68]. Bioelectric sensing information can directly reflect the movement intention of the wearer, due to its advantages of strong global stability, fast response and being the most natural HRI. However, individual differences exist in bioelectrical signals, and the mapping mechanism between bioelectrical signals and human motion intention needs to be further studied. By contrast, physical sensors can be used to detect the movement information of the human body, and the IMU can be used to detect the movement information of the body for the identification of the movement intention [69] and gait phase estimation [70]. 
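The single-DOF K-B (stiffness-damping) interaction model introduced at the start of this subsection can be written in a few lines; the stiffness and damping values here are placeholders rather than identified strap parameters.

```python
def interaction_force(x_h, x_r, v_h, v_r, K=1200.0, B=40.0):
    """Single-DOF K-B model: strap force arising from the human-robot position
    and velocity mismatch. K (N/m) and B (N*s/m) are assumed values."""
    return K * (x_h - x_r) + B * (v_h - v_r)

# Illustrative human and robot cuff positions (m) and velocities (m/s).
print(interaction_force(x_h=0.32, x_r=0.30, v_h=0.10, v_r=0.05))  # force in N
```

In practice the same expression is evaluated per attachment point (and per direction in the three-DOF K variant), and the difficulty lies in identifying K and B for each wearer and in accounting for joint-centre misalignment.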
The IMU can also be used to detect the joint angles of the lower limbs [55,71]. For such an attitude detection algorithm, a precision problem arises [72,73]. A vision sensor can also be used to detect the posture of the human body [11]. Detection of HRI forces is also an important method to realise human-robot coordinated control. Such forces are mainly measured in the following ways: The force/torque sensor is installed between the robot and the binding joint is used to detect the interaction forces [74,75]. The direction and magnitude of the multiple interaction forces were used to comprehensively determine the motion intention. Force information is collected using twodimensional interaction force sensors installed between the robot and the cuff, and the interaction torque is determined using the product relationship and the installation position. A uniform sensor is used to measure the interaction force information and identify the motion intention of the human body [76]. Series elastic actuators [44] are used to detect and identify interaction forces to realise human-robot coordinated control. This approach increases the complexity of the structure. The measurement of interaction forces is also affected by the number and arrangement of the sensors. Human-robot coordinate control strategies The recovery cycle is divided into three stages. Phase I is considered an inpatient program, with an average duration of 7 to 10 days with the objective to maintain the patient's muscular tone by performing passive movements, low-intensity exercise and education to reduce risks. Phase II is a twice-weekly outpatient program, with an average duration of three months that consists of a combination of physical exercise on a treadmill, an education program oriented to the prevention of risk factors and adoption of healthy habits (e.g., controlling blood pressure, cholesterol, weight and stress management). Finally, Phase III is defined as a long-term maintenance period, with the objective to provide reinforcement to the already-acquired routines in previous phases and to provide advice concerning secondary prevention [77]. Therefore, for the initial stages of I and II, the robot can be used to fully drive the affected limb to move to achieve passive rehabilitation training. In the middle and late stages of II, the patient's motor ability is recovered, so the robot cannot be simply controlled passively. Evaluating the patient's motor ability and state and adopting different rehabilitation training methods are necessary to realise active rehabilitation training. In Phase III, after a period of training, the patient recovers their motor ability. At this time, the robot is needed to assist the patient in daily life to perform routine exercises. At present, lower limb rehabilitation robots pay more attention to Phases I and II, whereas Phase III is classified as assistance. Starting from the entire cycle of I-III, this study considers that part of III also belongs to lower limb rehabilitation robots. Rehabilitation goals may vary at different stages of a disease. A physical activity and exercise plan must be formulated according to the patient's tolerance, recovery stage, environment, social support, physical activity preference, and activity, and participation restrictions. According to the American Heart Association and Stroke Association, bed rest needs to be minimised and a patient must sit or stand intermittently to maintain endurance during acute recovery. 
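One common way to address the IMU attitude-estimation precision problem noted earlier is a complementary filter that fuses gyroscope integration (smooth but drifting) with the accelerometer-derived inclination (noisy but drift-free); the weighting factor and samples below are illustrative and are not taken from the cited studies.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope integration with the accelerometer angle estimate."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
dt = 0.01  # 100 Hz sampling
# Illustrative samples: (gyro rate in rad/s, accelerometer-derived angle in rad).
samples = [(0.5, 0.004), (0.5, 0.010), (0.4, 0.014)]
for gyro, acc in samples:
    angle = complementary_filter(angle, gyro, acc, dt)
print(f"estimated segment angle: {angle:.4f} rad")
```

Segment angles estimated this way on the thigh and shank can then be differenced to obtain joint angles for gait-phase estimation or controller feedback.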
When the patient is stable, physical and occupational therapy are used to promote motor recovery, for example of gait, muscle strength or balance. The goal of the third phase of stroke rehabilitation is to promote the completion of recommended physical activities and exercise to prevent cardiac problems and recurrent strokes [78]. According to the patient's condition and the rehabilitation training stage, different rehabilitation training modes need to be adopted, and corresponding control methods should also be used (Table 1 [5,51,57,62,79-85]). Robot-assisted rehabilitation training can be divided into passive and active training. Generally, lower limb rehabilitation robots adopt mixed active and passive control [86,87]. In the early stages of rehabilitation training, owing to the reduced strength of the patient's limbs, passive control is needed; that is, the robot drives the patient's limbs through continuous passive training to achieve continuous passive movement. The passive control mode is aimed at patients with severe conditions and weak muscle strength; here, the affected limbs are driven by the robot along a predetermined trajectory. From the perspective of robot control, the robot performs a trajectory tracking task in passive training, which can be achieved through trajectory tracking control methods such as proportional-derivative (PD) control [79], computed torque control [62], variable structure control [80] and impedance control [81]. The controllers mentioned above do not take the human into account; that is, the trajectory tracking control focuses on the movement of the robot and the tracking of the expected motion. In the design of such controllers, the torque that the robot needs to apply to the human body is generally put into the dynamics equation as a disturbance term [88,89], and the dynamic control of the robot is then carried out. Structured and unstructured uncertainties exist in the robot system, and a multiple-input multiple-output (MIMO) decoupling control method has been used to compensate for the unstructured uncertainty [51]. For patients who can actively exert force in the middle and later stages of rehabilitation, the robot provides the necessary assistance according to the patient's motion intention. Owing to the high degree of active participation of patients in active training and the good stimulation of the nervous system, the clinical rehabilitation effect at this stage is better than that of passive training [90]. In active training, the robot needs to provide corresponding assistance according to the motion intention and state of the patient [91]. Using methods based on impedance control, environments with different impedance characteristics are simulated to ensure compliance during the interaction; on this basis, an assist-as-needed procedure has been proposed [92]. For such on-demand assistive control, an important problem is how to assess the patient's motion intention and state and then apply the corresponding assistive force. One way is to use physical sensors for measurement and evaluation; that is, the position or attitude deviation measured by the sensors is converted into a corresponding corrective force/torque through force-field control (FFC) [5], moment-field control (MFC) [82] or three-dimensional force-field control (3D-FFC) [83], achieving impedance control based on the attitude deviation.
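A minimal sketch of the assist-as-needed idea behind such force-field controllers, combining a tolerance deadband around the reference trajectory with a term that gradually withdraws assistance when tracking errors stay small (cf. [5,93]); all constants are invented for illustration and gravity compensation is omitted.

```python
import numpy as np

K_field = 300.0   # N/m, assumed force-field stiffness
deadband = 0.02   # m, tolerance "tunnel" around the reference path
decay = 0.95      # per-step forgetting factor that gradually withdraws assistance

def assist_force(error, assist_gain):
    """Force-field assistance: zero inside the deadband, proportional outside it.
    Returns the commanded force and the updated assistance gain."""
    if abs(error) <= deadband:
        return 0.0, assist_gain * decay                      # performing well: reduce support
    force = assist_gain * K_field * (abs(error) - deadband) * np.sign(error)
    return force, min(assist_gain / decay, 1.0)              # struggling: restore support

gain = 1.0
for err in [0.01, 0.05, 0.03, 0.015]:                        # illustrative trajectory errors (m)
    f, gain = assist_force(err, gain)
    print(f"error={err:.3f} m -> assist force={f:.1f} N, gain={gain:.2f}")
```

The deadband leaves the patient free to move near the reference without robot intervention, while the decaying gain is one simple way to encourage active participation as performance improves.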
To increase the flexibility of the system, a novel force-decaying term has been added to the adaptive control law, reducing the force output from the robot when errors in task execution are small [93]. This type of controller has two drawbacks that limit its application. First, the motion intentions and status of the patients are evaluated based on the position and attitude information of the robot. However, as discussed, deviation occurs between the motion of the patient and that of the robot, implying that the feedback information in the controllers cannot accurately reflect the motion of the patient. Second, the parameters in the controllers are fixed and cannot be changed according to different patients, indicating that the controllers have inadequate adaptability. Another approach is to capture EMG signals; neuromuscular control then reacts to the movements of the thighs, resulting in a synchronised and natural gait [57]. The motion intention of the human body is detected through the neuromuscular model to achieve human-robot coordinated control.

When rehabilitation training reaches a certain stage, lower limb exoskeletons can provide assistance to patients in their daily life. At this stage, lower limb exoskeletons tend to be used over ground. In this case, a preprogrammed method can still be used to drive the human body [94], but this control mode cannot easily adapt to the complex and changing situations of actual walking. In this process, more attention is paid to HRI and human-robot coordinated control. On the one hand, the interaction force is used to identify the motion intention of the human body to assist the lower limbs during walking [54,95]. On the other hand, unlike in the rehabilitation training stage [96], the trajectory is no longer pre-set but is obtained from real-time reference changes [97,98]. A finite-state machine defines different motion scenarios and logic to provide the desired assistance for patients [84].

Table 1. Overview of control methods
Passive control. Methods: proportional-derivative (PD) control [79], computed torque control [62], variable structure control [80], impedance control [81], multiple-input multiple-output (MIMO) decoupling control [51]. Features: after a walk mode is selected based on the sensors, the participant initiates and propagates the programmed motions; the torque that the robot needs to apply to the human body is generally put into the dynamics equation as a disturbance term.
Assist-as-needed control. Methods: force-field control (FFC) [5], moment-field control (MFC) [82], three-dimensional force-field control (3D-FFC) [83]. Features: physical sensors are used for measurement and evaluation; the position or attitude deviation measured by the sensors yields a corresponding corrective force/torque, achieving impedance control based on the attitude deviation. Methods: neuromuscular control [57]. Features: EMG signals are captured to generate a synchronised and natural gait and achieve human-robot coordinated control.
Force control. Methods: finite state machine [84]. Features: a finite state machine indicates the intended option among a series of manoeuvres; the intended manoeuvre of the user is determined from the provided inputs, and each state is defined by a set of joint angle trajectories enforced by position control loops. Methods: EMG-based control [85]. Features: human joint torque is estimated from EMG signals to generate a virtual torque for the control of the motors.
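A minimal sketch of such a finite-state machine: gait states are switched on plantar-pressure events, and each state selects its own reference posture for the position control loops. The states, threshold and reference values are illustrative and are not taken from [84].

```python
from enum import Enum, auto

class GaitState(Enum):
    STANCE = auto()
    SWING = auto()

# Hypothetical per-state reference postures (hip, knee angles in rad) that the
# position control loops would track while the state is active.
REFERENCE = {GaitState.STANCE: (0.10, 0.15), GaitState.SWING: (0.45, 0.90)}

def next_state(state: GaitState, heel_force: float, threshold: float = 50.0) -> GaitState:
    """Switch state on plantar-pressure events: heel strike enters stance, toe-off enters swing."""
    if state == GaitState.SWING and heel_force > threshold:
        return GaitState.STANCE
    if state == GaitState.STANCE and heel_force < threshold:
        return GaitState.SWING
    return state

state = GaitState.STANCE
for force in [120.0, 80.0, 10.0, 5.0, 140.0]:   # illustrative heel-contact forces in N
    state = next_state(state, force)
    print(force, state.name, "reference:", REFERENCE[state])
```

Richer implementations add more states (e.g., sit-to-stand, stair ascent) and combine several inputs, such as crutch buttons, joint angles and trunk inclination, to infer the intended manoeuvre.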
Alternatively, biological signals can be used to identify the motion intention of the human body to assist the walking process: an EMG-driven model can be used to calculate the driving torque in real time [55], or the mapping relationship between the EMG signal and joint torque [85] can be used to realise human-robot coordinated control. The robot system can detect robot motion information, human motion information and HRI information for human motion perception and state evaluation. Information on joint angle, torque and plantar pressure can only reflect the motion state of the robot. Individual differences exist in surface EMG, EEG and other biological signals, and the mapping mechanism between them and human motor intention is insufficiently understood. Interaction force information can be detected by force/moment sensors and distributed sensors between the human and robot; however, this increases the complexity of the structure, and the measurement results are affected by the number and layout of the sensors. Biological signal information and interaction force information are mainly used to perceive the intention of human movement and are generally used as trigger quantities during use. Therefore, achieving accurate perception of human movement is difficult. Physical sensors are used for measurement and evaluation; that is, the position or attitude deviation measured by the sensors can be adjusted by force/torque controllers, such as force-field, screw-field and torque-field controllers, to achieve impedance control based on position and attitude deviation. These methods use the position deviation of the robot as the evaluation basis. However, because of the flexible connection between the human and the robot, the resulting position deviation may not accurately reflect the patient's movement state and intention, and the controller parameters are usually fixed and cannot adapt to the needs of different patients at different stages of illness, leading to insufficient human-robot coordination. Control methods based on biological signals such as EMG and EEG directly measure human signals but are insufficiently accurate, which also limits human-robot coordination.

Discussion

Recovery from disease is a complex process, possibly achieved through a combination of spontaneous recovery and learning [99]. Task-specific and context-specific training are widely accepted principles in motor learning, indicating that rehabilitation training should be targeted at goals related to patients' needs [100]. The results of a systematic review showed that, at present, almost half of the studies on the evaluation of lower limb exoskeleton performance focus on flat ground or treadmill walking, indicating that the exoskeleton field mainly addresses basic motor skills, while other motor tasks, such as standing, balance, walking on irregular terrain, turning and lateral movement, have been largely ignored [31]. Solving several other important functional tasks simultaneously may also be a problem for rehabilitation robots.

Demand for robotic rehabilitation

Dysfunction is a major disease-related problem. Rehabilitation should focus mainly on improving activity and functional limitations; thus, exercise and functional recovery play an important role in modern rehabilitation [101]. Different diseases may lead to different dysfunctions. Decreased muscle strength is the most significant impairment after stroke, further reducing walking speed and endurance [102].
Incomplete spinal cord injury (SCI) can cause motor dysfunction below the injury level, and walking is one of the most desired goals for many patients with SCI [103]. Dysfunctions with different causes may share the same mechanism and consequences, leading to the same clinical syndrome and responding to the same interventions [104]. For example, walking disorders may be caused by decreased muscle strength or impaired balance, which may result from stroke, SCI or other diseases. Therefore, the focus in the rehabilitation process must be on the dysfunction and not only on the disease causing the disorder. At present, rehabilitation robots mainly focus on training different types of patients to walk, mainly because walking is one of the main goals of patients with motor dysfunctions. Walking is also one of the main means through which patients can perform and participate in other activities. Fewer rehabilitation robots focus on other functional disorders, such as balance, muscle weakness, and body transfer. Therefore, to adapt to different conditions and functional disorders, lower limb exoskeleton rehabilitation robots need to have personalised characteristics and be able to fulfil new requirements for human-robot coordination and control. Many studies on human-robot coordination in different stages of robot-assisted rehabilitation training have been published, but research on different conditions is insufficient. The pathological characteristics of different diseases are not the same, and even when the same rehabilitation robot is used, the way rehabilitation training is carried out differs between conditions. Therefore, developing programs suitable for robot-assisted rehabilitation training under different conditions is necessary, because different conditions present different human-robot coordination problems. At present, some studies link an index that can be measured by the robot to a particular dysfunction, and human-robot coordination methods can then be designed by sensing these kinematic or dynamic indicators of patients. However, no comprehensive evaluation model has been developed for a disease or dysfunction, and such a model is urgently needed in the future.

Modelling for human-robot coupling system with new structures

To adapt to different tasks, environments and rehabilitation needs, prototypes need to be lightweight and miniaturised. The presently used drive and transmission mechanisms occupy a large proportion of overall prototypes, resulting in a heavy system. At the same time, flat joints, small volume and high energy density are important for realising a lightweight and comfortable design of the lower limb exoskeleton rehabilitation robot. To obtain a high power/thrust density [105], innovative designs of the drive system are available; the corresponding kinematic modelling of the drive is required to achieve accurate control, thereby placing new requirements on the construction of the drive model. In addition, the idea of modularisation is to select corresponding modules according to different stages of illness to achieve reconfiguration of the structure [43,45]. Lightweight and miniaturised drive and transmission mechanisms can also make modular design easier to achieve. The human-in-the-loop design is an important method for modular design and is based on the human-robot coupling dynamics model. The lower limbs are commonly simplified into a multi-link model, and rigid-body dynamics modelling is used to model the human-robot coupling system.
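As a minimal illustration of this rigid-body modelling route, the sketch below evaluates the mass matrix, Coriolis and gravity terms of a planar two-link (thigh-shank) leg and adds a spring-damper term for the strap coupling between limb and exoskeleton, anticipating the interaction-force model discussed in the next paragraph. All segment parameters, stiffness and damping values, and the planar simplification are illustrative assumptions rather than identified human data.

```python
import numpy as np

# Illustrative segment parameters for a planar thigh-shank model (not subject-specific)
m1, m2 = 8.0, 4.0          # segment masses [kg]
l1, lc1 = 0.45, 0.20       # thigh length and thigh centre-of-mass offset [m]
lc2 = 0.22                 # shank centre-of-mass offset [m]
I1, I2 = 0.15, 0.06        # segment inertias about their centres of mass [kg·m^2]
GRAV = 9.81

def leg_dynamics(q, dq):
    """Return M(q), C(q,dq)·dq and g(q) for a planar two-link leg.

    q = [hip, knee] angles measured from the horizontal [rad]."""
    q1, q2 = q
    dq1, dq2 = dq
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([
        [m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2) + I1 + I2,
         m2 * (lc2**2 + l1 * lc2 * c2) + I2],
        [m2 * (lc2**2 + l1 * lc2 * c2) + I2,
         m2 * lc2**2 + I2],
    ])
    h = -m2 * l1 * lc2 * s2
    c_dq = np.array([h * dq2 * dq1 + h * (dq1 + dq2) * dq2,
                     -h * dq1 * dq1])
    g_vec = np.array([(m1 * lc1 + m2 * l1) * GRAV * np.cos(q1) + m2 * lc2 * GRAV * np.cos(q1 + q2),
                      m2 * lc2 * GRAV * np.cos(q1 + q2)])
    return M, c_dq, g_vec

def strap_torque(q_h, q_r, dq_h, dq_r, k=300.0, b=5.0):
    """Spring-damper model of the binding between limb and exoskeleton (assumed k, b)."""
    return k * (np.asarray(q_r) - q_h) + b * (np.asarray(dq_r) - dq_h)

M, c_dq, g_vec = leg_dynamics(np.array([0.3, 0.5]), np.array([0.1, -0.2]))
tau_int = strap_torque(np.array([0.3, 0.5]), np.array([0.32, 0.48]),
                       np.array([0.1, -0.2]), np.array([0.1, -0.2]))
print("M =\n", np.round(M, 3))
print("C(q,dq)dq =", np.round(c_dq, 3), " g(q) =", np.round(g_vec, 3))
print("strap torque =", np.round(tau_int, 2), "N·m")
```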
However, the rigid-flexible coupling characteristics of the human lower limbs caused by the musculoskeletal system are difficult to describe accurately, and the inertia parameters of the human body vary greatly and are difficult to measure accurately, resulting in uncertainty. Human-robot energy transfer is realised through the interaction force generated by flexible links, such as the bindings. The interaction force modelling method based on the spring-damping model suffers from model parameter uncertainty, and the model does not consider the effect of the human and robot joint centres not coinciding, thereby influencing the accuracy of the model. Therefore, a research hotspot in realising human-in-the-loop control is the establishment of a rigid-flexible coupling model in line with the characteristics of the human musculoskeletal system and of a human-robot coupling dynamics model in conjunction with the robot dynamics model. The rigid-soft coupling structure is closer to the actual musculoskeletal structure of the human body [106]. How to carry out structural modelling in this respect and realise the modelling of a human-robot coupling dynamics system is also an important problem.

Assessment methods with different etiologies based on multi-mode sensors

Traditional assessment methods can be used to comprehensively assess human locomotion ability, but these methods are mostly qualitative or applied after the fact and cannot meet the real-time assessment needs of lower limb exoskeleton rehabilitation robots. A perception system is therefore required for real-time perception and evaluation of human motor functions and for adaptive adjustment of the corresponding rehabilitation training. Contemporary research on flexible sensors and electronic skin [107] will facilitate the design of perception systems. First, designing comprehensive models based on the mapping relationship between motor functions and multi-sensor information is necessary, and the redundancy of information should be considered to simplify the systems. Perception systems should also be designed to dynamically sense human motion, accurately understand human intentions and evaluate motor functions in real time. Compared with single-sensor data, processing multi-modal information through multi-source fusion can ensure the speed and accuracy of perception. Such processing can balance the advantages and disadvantages of the various sensors mentioned in this paper to construct a perception mode based on mixed biological-mechanical signals and to design data fusion algorithms, which is also a research hotspot. For example, IMU and EMG signals have been used for hybrid detection, with a neural network used to perceive and predict the motion of the knee joint [108]; a minimal sketch of this kind of fusion is given below. By making full use of learning algorithms, multi-mode multi-sensor information analysis, processing and data fusion algorithms can be constructed. Thus, the perception system can sense more complete information and higher-level features of the robot and the human body from the acquired multi-mode information and realise adaptive sensing. At the same time, through the in-depth combination of machine learning [109] and other technologies, the customisation and parameterisation of data-driven diagnosis and treatment can be realised, and autonomous learning can be better integrated with the person [77]. The combination of virtual reality and augmented reality technology [110] enables realistic scenes to stimulate the brain and significantly promote motor function [111].
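The IMU-plus-EMG fusion referenced above ([108]) can be illustrated with a small regression sketch: per-window features from both sensor streams are concatenated and a learned model maps them to the knee angle. The window length, feature set, synthetic signals and the use of closed-form ridge regression in place of a neural network are simplifying assumptions made here so that the example stays self-contained.

```python
import numpy as np

WIN = 100  # samples per window at an assumed 1 kHz rate (0.1 s windows)

def window_features(imu_gyro, emg):
    """Concatenate simple per-window features from both modalities."""
    feats = []
    for i in range(0, emg.size - WIN, WIN):
        g, e = imu_gyro[i:i + WIN], emg[i:i + WIN]
        feats.append([g.mean(), g.std(),                            # IMU: mean and variability of angular rate
                      np.abs(e).mean(), np.sqrt((e ** 2).mean())])  # EMG: mean absolute value and RMS
    return np.asarray(feats)

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression standing in for the neural network used in [108]."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Synthetic signals standing in for a real recording session
rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.001)
knee = 0.6 * (1.0 - np.cos(2 * np.pi * 1.0 * t))                   # "true" knee angle [rad]
gyro = np.gradient(knee, 0.001) + 0.1 * rng.standard_normal(t.size)
emg = 0.3 * knee + 0.05 * rng.standard_normal(t.size)

X = window_features(gyro, emg)
y = np.array([knee[i:i + WIN].mean() for i in range(0, emg.size - WIN, WIN)])
w = fit_ridge(X[:150], y[:150])
err = np.abs(predict(w, X[150:]) - y[150:]).mean()
print(f"mean absolute knee-angle error on held-out windows: {err:.3f} rad")
```

The same structure carries over when the regressor is replaced by a neural network; only the fitting step changes, while the windowing and feature fusion remain the front end of the perception system.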
Using visual interaction and virtual reality technology is necessary [112]. At the same time, current perception systems are mostly used to sense the motion intention of the human body and then adjust the output force/moment of the robot accordingly to achieve human-robot coordinated motion. However, comprehensive evaluation of human motion ability is not yet considered in these perception systems, even though such evaluation is the basis for diagnosis and for realising human-robot coordination of movement according to different conditions and phases.

Conclusions

Human-robot coordination, which is crucial to lower limb exoskeleton rehabilitation robots used as human-robot coupling systems, is reviewed in this paper. First, patients' functional disorders and clinical rehabilitation needs regarding the lower limbs are analysed, forming the basis for the human-robot coordination of lower limb rehabilitation robots. Then, human-robot coordination is discussed in three aspects: modelling, perception, and control. The modelling of such a human-robot coupling system is described at three levels: robots, humans, and HRI. Two types of information, namely, information from physical and biological sensors and HRI information, are discussed, and the design method for the perception system is analysed. Control strategies for different stages throughout the recovery cycle are illustrated and analysed. The demand for robotic rehabilitation, modelling for human-robot coupling systems with new structures, and assessment methods with different etiologies based on multi-mode sensors are discussed in detail, suggesting development directions for human-robot coordination and providing a reference for relevant research.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format as long as appropriate credit is given to the original author(s) and source, a link to the Creative Commons license is provided, and the changes made are indicated. The images or other third-party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
2022-12-31T14:17:19.040Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "ce0da3ba66d4bf75030ca78624b50d3eda813c9b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11465-022-0684-4.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ce0da3ba66d4bf75030ca78624b50d3eda813c9b", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [] }
240353512
pes2o/s2orc
v3-fos-license
Efficacy of Natural Formulation Containing Activated Charcoal, Calcium Sennosides, Peppermint Oil, Fennel Oil, Rhubarb Extract, and Purified Sulfur (Nucarb®) in Relieving Constipation

Introduction Long-term use of laxatives may have side effects such as bloating, allergic reaction, abdominal pain, metabolic disturbances, and hepatotoxicity. In this study, we have compared the efficacy of the herbal medicine Nucarb, a combination of activated charcoal, calcium sennosides, peppermint oil, fennel oil, rhubarb extract, and purified sulfur, in relieving constipation. Methods This longitudinal study was conducted in multiple cities of Pakistan from April 2021 to June 2021. A total of 1000 patients, of either gender, between the ages of 18 and 75 years, with complete spontaneous bowel movements of two or fewer times per week, were enrolled in the study. Participants were prescribed two tablets of Nucarb once daily (OD) at bedtime for the first seven days, followed by one tablet of Nucarb OD at bedtime for the following seven days. They were asked to return for follow-up after 14 days. Results There was a statistically significant improvement in all six components of constipation. After 14 days, the severity of constipation was reduced by 80.70%, the sensation of straining was reduced by 72.69%, and the feeling of incomplete evacuation was reduced by 71.87%. There were no adverse events reported. Conclusion Nucarb is efficacious in reducing the severity of constipation, sensation of straining, bloating and abdominal pain, feeling of incomplete evacuation, and difficulty in passing gas. Since it is a herbal product, it can be safely used in all populations.

Introduction

Constipation is defined as a disorder of motility of the gastrointestinal tract, which includes symptoms such as infrequent stools and difficult stool passage with pain and stiffness. In cases of acute constipation, the intestine may become obstructed, which may even require surgery [1]. Constipation is a global issue, with prevalence reported as high as 80%, resulting in substantial costs to the community [2]. There are various factors responsible for constipation. These factors include genetic predisposition, socioeconomic status, low fiber consumption, lack of adequate fluid intake, lack of mobility, disturbance in hormone balance, side effects of medications, and anatomy of the body [1]. Primary management of constipation should consist of lifestyle modifications, reassurance regarding the patient's concept of a healthy or "regular" bowel movement, and biofeedback [3]. In cases where symptoms do not improve with non-pharmacologic management, laxatives should be the next line of treatment for the management of constipation [3]. Various adverse effects are reported with long-term use of laxatives. These adverse events can be mild, such as bloating, allergic reaction, and abdominal pain, or severe, such as metabolic disturbances and hepatotoxicity [4]. Furthermore, the multifactorial causes of constipation restrict the clinical efficacy of current conventional Western treatments as these drugs act through a single pathway [5]. To help overcome these shortcomings and deliver a complete holistic approach, herbal medication with the ability to target multiple organ sites may be used [6]. The purpose of our study is to compare the efficacy of the herbal medicine Nucarb (a combination of activated charcoal, calcium sennosides, peppermint oil, fennel oil, rhubarb extract, and purified sulfur) in relieving constipation.
Materials And Methods This longitudinal study was conducted in multiple cities of Pakistan from April 2021 to June 2021. A total of 1000 patients, of either gender between the age group 18 and 75 years, with complete spontaneous bowel movement of less than or equal to two times per week, were enrolled in the study. Patients with druginduced constipation or constipation due to secondary causes were excluded from the study. After informed consent, detailed history of the patients, including the number of stools in the last week and drug history, was taken, and they were asked to fill a self-structured questionnaire. The questionnaire had a visual analog scale (VAS) where participants were asked to rank the severity of constipation, feeling of incomplete evacuation, the sensation of straining, bloating, and abdominal pain, and difficulty in passing gas from zero to four (four being worse and zero being no symptoms). The questionnaire was explained to the participants to ensure that they completely understood each question. Participants were prescribed two tablets of Nucarb once daily (OD) at bedtime for the first seven days, followed by one tablet of Nucarb OD at bedtime for the next seven days. They were asked to return for follow-up after 14 days. In the follow-up, patients were asked to fill VAS scale again for all six elements. Any adverse event was also reported in the questionnaire. We lost 99 participants to follow-up. Only participants who completed the follow-up period of 14 days were included in the study. Data were analyzed using Statistical Package for Social Sciences® software (version 23.0; SPSS; IBM Corp., Armonk, NY, USA). Numerical variables and data were presented as mean and standard deviations. Categorical values were tabulated as frequencies and percentages. The t-test was used to compare the participant's responses on day 0 and day 14. A p-value of less than 0.05 means there is a significant difference between the two groups. Results A total of 901 participants completed the study. Most participants were between 46 and 60 years (44.6%). Male participants (67.9%) were more common in our study ( Table 1). Discussion In our study, most of the patients with constipation were included in the age group 46-60 (44.6%) and were male (67.9%). After intake of Nucarb OD for a span of 14 days, difficulty in passing gas was significantly reduced (80.7%), followed by reduced abdominal pain (72.69%) and bloating (71.87%). Moreover, considerable relief in the intensity of constipation (68.80%), incomplete evacuation (54.46%), and sensation of straining (68.83%) was also observed. The possible explanation for the relief in constipation is due to the laxative property of the components included in Nucarb. Calcium sennoside, an anthraquinone drug, contains inactive glycosides. When glycosides are taken, they do not undergo any change in the small intestine. They are broken down by the bacterial glycosidases present in the colon to form active molecules. These molecules allow the entry of electrolytes into the colon and trigger myenteric plexuses to initiate peristalsis in the bowel [3]. After the oral intake of anthraquinones, the passage of stool is observed within 6-8 hours. Controlled trials have proved that senna can soften stools and stimulate bowel movements to induce frequent defecation [3]. Senna and magnesium oxide have been shown to significantly improve the frequency of bowel movements and quality-of-life score and seem to be effective in the treatment of constipation [7]. 
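As a brief aside on the analysis described in the Methods: a before-and-after comparison of this kind is typically run as a paired t-test, since the same participants are scored on day 0 and day 14. The sketch below uses invented VAS scores purely for illustration; they are not the study data, and the paper does not state whether the paired or unpaired form of the test was applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical VAS scores (0-4) for one symptom in 901 completers; NOT the study data
day0 = rng.integers(2, 5, size=901).astype(float)                  # moderate to severe at baseline
day14 = np.clip(day0 - rng.normal(2.0, 0.8, size=901), 0.0, 4.0)   # simulated improvement

t_stat, p_value = stats.ttest_rel(day0, day14)                     # paired comparison, day 0 vs day 14
pct_reduction = 100.0 * (day0.mean() - day14.mean()) / day0.mean()

print(f"mean VAS day 0 = {day0.mean():.2f}, day 14 = {day14.mean():.2f}")
print(f"paired t = {t_stat:.1f}, p = {p_value:.1e}, mean reduction = {pct_reduction:.1f}%")
```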
Rhubarb extract has also been shown to cause beneficial effects in constipation. It contains tannin that can impose antidiarrheal effects [8]. Other agents used for relieving constipation are magnesium and sulfate ions. Magnesium sulfate is a strong laxative that is used to distend the stomach and cause a large amount of liquid stool [9]. Sodium sulfate is also used for colonic irrigation for diagnosis and before surgery [10,11]. Peppermint oil has several mechanisms of action including smooth muscle relaxation via calcium channel blockade or direct enteric nervous system effects, visceral sensitivity modulation via transient receptor potential cation channels, and psychosocial distress modulation. It also has antimicrobial and anti-inflammatory activity [12]. The use of fennel oil is associated with a decrease in colic pain [13]. Various studies have proven the safety of herbal medicines for constipation [8,12,14,15]. No serious adverse event was reported in our study. To the best of our knowledge, this is the first study that targets the efficacy of the combination of herbal medicine for constipation. Since the study was conducted in various cities and included both genders and participants of all age groups, data generated can be considered reliable and provide a basis for further investigation regarding the role of herbal medicine in constipation. Since the study has only single arm, care should be taken while comparing the result against other molecules or formulations. Conclusions Our study indicates that herbal medicines, such as Nucarb used in this study, are an effective way of managing constipation. Herbal medicine can be used as first line for constipation or alternative in patients experiencing adverse events on pharmacological management. Further large-scale studies are needed to validate the findings of our result. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Allied Hospital Faisalabad issued approval Allied/IRB/2019-12-02. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-10-18T17:30:27.028Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "2e18d7ac82f5ecb025bca69b68a1bbaa03ccda7a", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/64731-efficacy-of-natural-formulation-containing-activated-charcoal-calcium-sennosides-peppermint-oil-fennel-oil-rhubarb-extract-and-purified-sulfur-nucarb-in-relieving-constipation.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "916bdc9ec8e255f0d509974e879d7e3f2dcf7fd3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
35413156
pes2o/s2orc
v3-fos-license
Thymosin β4 Up-regulation of MicroRNA-146a Promotes Oligodendrocyte Differentiation and Suppression of the Toll-like Proinflammatory Pathway*

Background: Thymosin β4 (Tβ4) promotes differentiation of oligoprogenitor cells (OPCs) to oligodendrocytes in animal models of neurological injury. Results: Tβ4 increased expression of microRNA-146a and suppressed expression of TLR (Toll-like) proinflammatory cytokines. Conclusion: Tβ4 suppresses the TLR proinflammatory pathway by up-regulating miR-146a to promote OPC differentiation. Significance: Learning how Tβ4 promotes oligodendrogenesis supports its development for clinical studies.

Thymosin β4 (Tβ4), a G-actin-sequestering peptide, improves neurological outcome in rat models of neurological injury. Tissue inflammation results from neurological injury, and regulation of the inflammatory response is vital for neurological recovery. The innate immune response system, which includes the Toll-like receptor (TLR) proinflammatory signaling pathway, regulates tissue injury. We hypothesized that Tβ4 regulates the TLR proinflammatory signaling pathway. Because oligodendrogenesis plays an important role in neurological recovery, we employed an in vitro primary rat embryonic cell model of oligodendrocyte progenitor cells (OPCs) and a mouse N20.1 OPC cell line to measure the effects of Tβ4 on the TLR pathway. Cells were grown in the presence of Tβ4, ranging from 25 to 100 ng/ml (RegeneRx Biopharmaceuticals Inc., Rockville, MD), for 4 days. Quantitative real-time PCR data demonstrated that Tβ4 treatment increased expression of microRNA-146a (miR-146a), a negative regulator of the TLR signaling pathway, in these two cell models. Western blot analysis showed that Tβ4 treatment suppressed expression of IL-1 receptor-associated kinase 1 (IRAK1) and tumor necrosis factor receptor-associated factor 6 (TRAF6), two proinflammatory cytokines of the TLR signaling pathway. Transfection of miR-146a into both primary rat embryonic OPCs and mouse N20.1 OPCs treated with Tβ4 demonstrated an amplification of myelin basic protein (MBP) expression and differentiation of OPC into mature MBP-expressing oligodendrocytes. Transfection of anti-miR-146a nucleotides reversed the inhibitory effect of Tβ4 on IRAK1 and TRAF6 and decreased expression of MBP. These data suggest that Tβ4 suppresses the TLR proinflammatory pathway by up-regulating miR-146a.
Thymosin β4 (Tβ4) is a 5-kDa, 43-amino acid peptide originally isolated from the thymus gland (1). Tβ4 regulates the cellular actin-cytoskeleton and cellular migration by sequestering G-actin (2,3). Most mammalian cells express Tβ4, and its observed actions in dermal wound and cardiac ischemia models are anti-inflammatory and proangiogenic (4). In addition, Tβ4 promotes cardiomyocyte and keratinocyte migration in these models. Tβ4 improves functional outcome after experimental induction of multiple sclerosis, embolic stroke, and traumatic brain injury (5-7). In all three models, improvement in neurological outcome is associated with oligodendrogenesis (i.e. differentiation of oligoprogenitor cells (OPCs) into mature myelin-secreting oligodendrocytes (OLs)). Oligodendrogenesis contributes to remyelination after neurological injury by differentiation of OPCs into mature myelin-expressing OLs. Neurorestorative agents act on intact parenchymal cells to promote neurogenesis, angiogenesis, oligodendrogenesis, and axonal remodeling during the recovery phase of neurological injury and thereby enhance neurological recovery (8). Therefore, Tβ4 is a candidate neurorestorative agent when administered in animal models of multiple sclerosis, stroke, and traumatic brain injury (9). However, its mechanisms of action are unclear and require investigation. Toll-like receptors (TLRs) are pattern recognition receptors that recognize conserved molecular patterns of pathogens. In addition to pathogens, TLRs also recognize damage-associated molecular patterns, which are molecular patterns of endogenous host debris released during cellular injury or death (10,11). This debris can be extracellular matrix protein, oxidized proteins, RNA, or DNA. Once recognition occurs, the TLRs are stimulated, resulting in activation of many signaling pathways, including those pathways involving the mitogen-activated protein kinases (MAPKs) and the nuclear factor NF-κB transcription factors. The MAPKs activate OL differentiation; therefore, TLR signaling may be involved in oligodendrogenesis as well as in regulating the inflammatory response (12,13). In addition, the TLR pathways are affected by miR-146, which down-regulates proinflammatory cytokine production and activation of inflammatory pathways (14-17). TLR4 is a well-studied TLR that mediates its proinflammatory response through three proteins, IRAK1 (IL-1 receptor-associated kinase 1), IRAK4, and TRAF6 (tumor necrosis factor receptor-associated factor 6) (18). By targeting IRAK1 and TRAF6, miR-146 inhibits NF-κB activation. We therefore hypothesized that Tβ4 regulates the TLR proinflammatory signaling pathway by specifically regulating miR-146a to promote differentiation of OPCs to mature myelin basic protein (MBP)-expressing OLs.
MATERIALS AND METHODS All animal experiments were performed according to protocols approved by the Henry Ford Hospital Institutional Animal Care and Use Committee. Isolation of Primary Rat Embryonic OPCs-Primary rat embryonic OPCs were isolated and prepared according to the method of Chen et al. (19). Briefly, on embryonic day 17, the rat embryos were removed from a pregnant Wistar rat in a laminar flow hood. The cortices were dissected out by using microdissecting scissors, rinsed twice in Hanks' buffered salt solution, and dissociated after digesting with 0.01% trypsin and DNase at 37°C for 15 min. The digested cells were washed twice, filtered through a 70-mm nylon cell strainer, and plated with DMEM containing 20% fetal bovine serum (FBS) in poly-D-lysinecoated T75 cell culture flasks (ϳ10 million cells/flask). The cells grew to confluence for 10 days and then were placed on the shaker at 200 rpm at 37°C for 1 h to remove microglial cells. Subsequently, the cells were left on the shaker for an additional 18 -20 h to collect OPCs. The collected OPCs were plated in untreated Petri dishes for 1 h to remove contaminated microglia and astrocytes, which attach to the Petri dish more efficiently than OPCs. The unattached OPCs were transferred onto poly-DL-ornithine-coated Petri dishes at a cell density of 10 4 /cm 2 with a basal chemically defined medium containing 10 ng/ml platelet-derived growth factor-␣ and 10 ng/ml basic fibroblast growth factor for 7-10 days. Cell Culture, Transfection, and Treatment with T␤4-The mouse primary cultures of OPCs were conditionally immortalized by transformation with a temperature-sensitive large T-antigen into a mouse OPC cell line, N20.1 (20). N20.1 cells were provided by Dr. Anthony Campagnoni (UCLA). N20.1 cells were grown and maintained in Dulbecco's modified Eagle's medium (DMEM)/F-12 with 1% FBS and G418 (100 g/ml) at 37°C. For N20.1 cells, transient transfections were performed with the Nucleofector kit according to the manufacturer's protocol (Amaxa, Germany). The cells (10 6 ) were mixed with 1 g of plasmid DNA or 100 pmol of siRNA/ oligonucleotides and pulsed according to the manufacturer's instruction. The transfected cells were immediately plated into Petri dishes with DMEM containing 1% FBS and incubated at 37°C for 2 days. Primary rat embryonic OPCs were transiently transfected with Lipofectamine (Invitrogen) overnight, according to the manufacturer's protocol. Amounts of DNA and siRNA/oligonucleotides were used as recommended by manu-facturer. The control plasmid (pcDNA3) was used as a mocktransfected control for miR-146a expression vector transfection, and control siRNA (Ambion; a random mixture of oligonucleotides) was used as a mock-transfected control for transfections with T␤4 siRNA, Krox-20/EGR2 siRNA (Santa Cruz Biotechnology, Inc., Dallas, TX), and anti-miR-146a inhibitor nucleotides (21). Oligodendrocyte Differentiation Assay-To investigate the effect of T␤4 on oligodendrocyte differentiation, primary rat embryonic OPCs and mouse N20.1 cells (10 4 cells/cm 2 ) were incubated at 37°C with media containing 0, 25, 50, or 100 ng/ml T␤4 (RegeneRx Biopharmaceuticals Inc., Rockville, MD) without any growth or differentiation factors. Cells were fed every 2 days for 4 days. Basal defined medium without FBS for primary rat embryonic OPCs and DMEM containing 1% FBS for N20.1 cells were employed. 
After the treatment with T␤4, we examined the oligodendrocyte differentiation by measuring the expression of its marker, MBP, with Western blot and quantitative real-time (qrt-PCR), as described below. The samples that showed the elevation of MBP expression after T␤4 treatment as a positive response to oligodendrocyte differentiation were utilized for all experiments involved in oligodendrocyte differentiation. For the treatment with kinase inhibitors, the cells were pretreated with p38 MAPKspecific inhibitor (SB 203580) and JNK-specific inhibitor II (SP600125) (Calbiochem) at the dose of 1 M for 20 -30 min before the addition of T␤4 into the medium. LPS Contamination Assay-To test for LPS contamination in T␤4, the cells were cultured in the presence of LPS inhibitor polymyxin B (50 g/ml), followed by treatment with T␤4. T␤4 (100 ng/ml) was boiled for 10 min in order to denature T␤4 protein and used as a negative control. Transfected cells (2 ϫ 10 4 cells/cm 2 ), including mock-transfected controls, were treated with and without 100 ng/ml T␤4 (RegeneRx Biopharmaceuticals Inc.) for 4 days, and fresh medium was provided at day 2 with/without T␤4. qrtPCR-The extraction of total RNA and preparation of cDNA were performed as described previously (22). The qrt-PCR amplification was done for 40 cycles in the following thermal cycle using SYBR Green (Invitrogen): 95°C for 30 s, 60°C for 30 s, and 72°C for 45 s. The sequences for each primer were used, as reported previously (23). After qrtPCR, agarose gel electrophoresis was performed to verify the quality of the qrt-PCR products. There were no secondary products in our data. Each sample was tested in triplicate, and all values were normalized to GAPDH. Values obtained from three independent experiments were analyzed relative to gene expression data using the 2 Ϫ⌬⌬CT method (24). Quantification of Mature MicroRNAs by Real-time qrtPCR-The cDNA for each microRNA and TaqMan assay were performed in triplicate according to the manufacturer's protocol specified in the Applied Biosystems ViiA TM 7 real-time PCR system (Applied Biosystems). Briefly, total RNA was isolated with TRIzol (Qiagen). The reverse transcription reaction mixture contained 1-10 ng of total RNA, 5 units of MultiScribe reverse transcriptase, 0.5 mM each dNTP, 1ϫ reverse transcription buffer, 4 units of RNase inhibitor, and nuclease-free water. The microRNA cDNA was performed by individual reverse transcription in the following thermal cycle: 16°C for 30 min, 42°C for 30 min, 85°C for 5 min. The TaqMan assay was performed in 20-l TaqMan real-time PCRs containing 1ϫ Taq-Man Universal PCR Master Mix No AmpErase UNG, 1ϫ Taq-Man microRNA assay buffer, 1.33 l of undiluted cDNA, and nuclease-free water. All values were normalized to a U6 snRNA TaqMan microRNA control assay (Applied Biosystems) as the endogenous control. Values obtained from three independent experiments were analyzed relative to gene expression data using the 2 Ϫ⌬⌬CT method (24). Immunochemistry-Immunofluorescence staining was performed in N20.1 and primary rat embryonic OPC cells. These cells were fixed with 4% paraformaldehyde for 1 h, washed with PBS, blocked with 1% serum for 1 h, incubated with monoclonal antibodies of OPC marker, O4 (1:1000, Chemicon, Billerica, MA), and a polyclonal antibody against mature OL marker MBP (1:200; Dako, Carpinteria, CA) at room temperature for 1 h, and rinsed with PBS. Secondary antibodies were labeled with cyanine fluorophore (Cy3, red fluorescence) for 1 h. 
The slides were counterstained with DAPI (blue fluorescence) and examined under a fluorescent illumination microscope (Olympus IX71/IX51, Tokyo, Japan). O4-and MBP-positive cells were quantified by counting in at least three slides per experiment for at least three independent experiments. DAPI-positive cells were considered as the total number of cells. Statistical Analysis-Data were summarized using mean and S.D. values. To compare the differences between cell cultures with T␤4 treatment and without, a one-sample t test or a twosample t test was used. For the comparisons of qrtPCR of mRNA/GAPDH and qrtPCR of miR-146a/U6, controls were normalized to 1, so that a one-sample t test was used for analysis. To compare the percentage of positive stained cells of the total number of cells between T␤4 treatment and control, a two-sample t test was used. A p value of Ͻ0.05 was considered significant. RESULTS T␤4 Increases Expression of miR-146a in OPCs-We investigated the effect of T␤4 treatment on the expression of miR-146a and miR-146b in primary rat embryonic OPCs (n ϭ 5) and in a mouse OPC cell line, N20.1 (n ϭ 5), by qrtPCR. The purity of rat primary OPCs used in the experiments was confirmed by immunostaining for O4 and was quantified by cell counting. The cell counting data showed that Ͼ90% of these cells were O4-positive ( Fig. 1). We found that T␤4 treatment induced the expression of miR-146a in rat primary embryonic OPCs and mouse N20.1 cells in a dose-dependent manner ( Fig. 2A). In contrast, T␤4 treatment had no effect on miR-146b expression in rat primary embryonic OPCs and mouse N20.1 cells (Fig. 2B). Transfection with miR-146a plasmid enhanced miR-146a expression ϳ30to ϳ50-fold but had no effect on miR-146b expression in rat primary embryonic OPCs and mouse N20.1 cells (Fig. 2). T␤4 Down-regulates the Intracellular TLR Signaling Pathway in OPCs-miR-146a targets two proinflammatory cytokines, IRAK1 and TRAF6, in the intracellular TLR signaling pathway (25). We investigated the effect of T␤4 treatment on the TLR signaling pathway in rat primary embryonic OPCs and mouse N20.1 cells. These cell cultures, which demonstrated induction of miR-146a expression after T␤4 treatment (Fig. 2), were utilized to analyze the expression levels of IRAK1, TRAF6, and MBP, the mature OL marker, by Western blot. T␤4 treatment markedly reduced the expression levels of IRAK1 and TRAF6 and increased the expression level of MBP in a dose-dependent manner in rat primary embryonic OPCs (n ϭ 3) and mouse N20.1 OPCs (n ϭ 3) (Fig. 3). These data indicate that the TLR signaling pathway may be involved in T␤4-mediated OPC differentiation in primary rat embryonic OPCs and mouse N20.1 cells. Downstream Signaling of the MAPKs in T␤4-mediated Oligodendrocyte Differentiation-We investigated the effect of T␤4 on MAPKs involved in downstream signaling of the TLR pathway. Expression of TLR2 and TLR4 was confirmed by Western blot analysis (Fig. 3). However, treatment with T␤4 had no effect on expression of TLR2 and TLR4 (Figs. 3 and 4). Western blot was performed to measure expression and phosphorylation of p38 MAPK, ERK1, JNK1, and c-Jun after T␤4 treatment (Figs. 3 and 4). T␤4 treatment induced expression and phosphorylation of p38 MAPK, a known regulator of oligodendrocyte differentiation, in a dose-dependent manner. In contrast, T␤4 dose-dependently inhibited the phosphorylation of ERK1/2, JNK1, and c-Jun in primary rat embryonic OPCs and mouse N20.1 cells (Figs. 3 and 4). 
During Schwann cell myelination, a similar opposing effect of MAPKs, p38 MAPK, ERK1, and JNK1, has been reported for the expression of a key tran- scription factor of the MBP promoter, Krox-20, which is also known as EGR2 (early growth response-2) transcription factor (23,26,27). To determine whether T␤4 treatment affected Krox-20 expression in OPCs, we reprobed the Western blots with Krox-20/EGR2 antibodies in OPCs after T␤4 treatment. Effect of T␤4 on Oligodendrocyte Differentiation Marker, MBP, Is Independent of LPS Contamination in T␤4-To avoid confounding data because of any LPS contamination in T␤4, the cells were cultured in the presence of polymyxin B (50 g/ml), followed by T␤4 treatment at a dose of 50 and 100 ng/ml for 4 days. The qrtPCR data indicate that T␤4 treatment induced the expression of MBP in a dose-dependent manner even in the presence of polymyxin B (50 g/ml) in rat OPC and N20.1 cells in both mRNA and protein levels (Fig. 5, A-C). In contrast, the boiled denatured T␤4 (100 ng/ml) treatment had FIGURE 2. MicroRNA analysis of miR-146a and miR-146b in OPCs after T␤4 treatment by qrtPCR. The total RNA samples were extracted from primary rat embryonic OPCs (left) and mouse OPC cell line N20.1 (right) after treatment with T␤4 at doses ranging from 0 to 100 ng/ml (shown at the bottom) and after transfection with miR-146a for microRNA analysis of miR-146a (A) and miR-146b (B) by qrtPCR. Note that expression of miR-146a was increased in a dose-dependent manner in both OPCs. In contrast, expression of miR-146b remained unchanged. p Ͻ 0.05 was considered as significant. Thymosin ␤4 Up-regulates miR-146a no effect on MBP expression (Fig. 5, A-C). These data suggested that induction of MBP was solely dependent on natural T␤4 and independent of LPS contamination. Effect of miR-146a and Anti-miR-146a on Downstream Signaling Mediators of TLR and MAPKs-We measured protein expression of IRAK1, TRAF6, and MAPKs in miR-146a-overexpressing and miR-146a knock-out primary rat embryonic OPCs (n ϭ 3) and mouse N20.1 cells (n ϭ 3) (Fig. 6). Overexpression and knock-out of miR-146a were determined by quantitative analysis of miR-146a. The efficacy of transfection was an increase of miR-146a of 51 Ϯ 5.3-fold in N20.1 cells and 33.5 Ϯ 4.1-fold in rat OPCs for miR-146a overexpression and a decrease of 73.1 Ϯ 8.3-fold in N20.1 cells and 46.7 Ϯ 5.2-fold in rat OPCs for miR-146a knock-out. Western blot analysis revealed that the miR-146a transfection inhibited expression of IRAK1 and TRAF6 and increased expression and activation of p38 MAPK. In contrast, transfection with anti-miR-146a inhibitor nucleotides significantly inhibited the expression of MBP and phosphorylation of p38 MAPK (Fig. 6). Expression of IRAK1, TRAF6, phospho-ERK1, phospho-JNK, and phosphoc-Jun remained unchanged or slightly elevated. These data indicate that miR-146a may be directly involved in OL differentiation by activation of the p38 MAPK signaling pathway in rat primary embryonic OPCs and mouse N20.1 cells. To determine whether miR-146a transfection regulates Krox-20 expression in OPCs, we performed Western blot analysis in rat primary embryonic OPCs and mouse N20.1 cells. These data demonstrate that miR-146a transfection markedly up-regulated Krox-20 expression. (Fig. 6). 
T␤4 Regulates miR-146a Expression-To investigate the mechanistic link between T␤4 and miR-146a upon MBP expression, we further investigated the effect of both T␤4 and miR-146a on the TLR signaling pathways using primary rat embryonic OPCs (n ϭ 3) and the mouse OPC cell line N20.1 (n ϭ 3). Fig. 7 demonstrates a 2-fold increase in mRNA MBP expression in the miR-146a transfection and T␤4 group in rat primary embryonic OPCs and mouse N20.1 cells. However, a 10-fold increase in mRNA MBP expression is observed when miR-146a-transfected cells are grown in the presence of T␤4, suggesting that T␤4 amplifies miR-146a-induced MBP expression. A similar but less robust result is observed when measuring p38 MAPK. Western blot demonstrated similar results at the protein level, as shown in Fig. 8 (primary rat OPCs) and Fig. 9 (mouse N20.1 cells). Furthermore, knock-out of miR-146a or silencing T␤4 using T␤4 siRNA (transfection efficiency of T␤4 siRNA was 58.3 Ϯ 6.2-fold in rat OPCs and 75.1 Ϯ 7.9-fold in N20.1 cells) inhibited MBP expression with no effect on the proinflammatory expression of IRAK1 and TRAF6 or the MAPKs, phospho-ERK1, phospho-JNK1, phospho-c-Jun, and Krox-20 when compared with control ( Figs. 8 and 9). Interestingly, silencing T␤4 using T␤4 siRNA in miR-146a-overexpressing cells showed inhibition of IRAK1 and TRAF6 without an increase of MBP expression, suggesting that T␤4 may be necessary for MBP expression. In contrast, using knock-out miR-146a cells treated with T␤4 showed no change in the expression of MBP, IRAK1, TRAF6, p38 MAPK phospho-ERK1, phospho-JNK1, phospho-c-Jun, and Krox-20. These data indicate that miR-146a is a necessary component for T␤4mediated MBP expression. Relative protein expression is quantified and shown in Fig. 10. Collectively, these results suggest that T␤4 promotes the expression of MBP and Krox-20 through up-regulation of miR-146a. To determine whether T␤4 treatment and miR-146a transfection affect NF-B activation, we investigated a specific endogenous inhibitor of NF-B, IB␣, which sequesters NF-B dimers and keeps NF-B complexes as inactive forms in the cytoplasm (28). We therefore performed Western blot analysis by reprobing the blots from T␤4-treated and miR-146a/anti-miR-146a-transfected primary rat embryonic OPCs (n ϭ 3) and the mouse OPC cell line N20.1 (n ϭ 3) with IB␣ antibodies. These data indicate that T␤4 treatment and miR-146a transfection induced IB␣. Silencing miR-146a reversed the effect of T␤4 and miR-146a on IB␣ induction. Knock-out of T␤4 neutralized the effect of T␤4 treatment, but it failed to reverse the effect of miR-146a transfection on IB induction (Fig. 8 -10). These data suggest that blockage of the TLR4 signaling pathway induced IB␣, leading to NF-B activation, because TLR4 signaling mediators, IRAK1 and TRAF6, are targets of miR-146a. T␤4 treatment therefore inhibited NF-B activation by inducing IB␣ through blocking the proinflammatory TLR4 signaling pathway. Role of TLR4 Signaling Mediators, IRAK1, TRAF6, p38 MAPK, and JNK1, in Regulation of MBP Synthesis-To determine whether p38 MAPK and JNK1 regulated MBP expression after T␤4 treatment, these OPCs were pre-exposed with specific pharmaceutical inhibitors, SB203580 for p38 MAPK and SP600125 for JNK1, followed by treatment with T␤4 (100 ng/ml). The p38 MAPK-specific inhibitor, SB203580, reversed the T␤4 effect on up-regulation of MBP and Krox-20 expression at the protein and mRNA levels but induced phosphorylation of c-Jun in both rat and mouse OPCs (Figs. 11 and 12). 
In 1 (shown at the bottom). These cells were transfected with control plasmid (plasmid control) and miR-146a vector (miR-146a transfection), followed by treatment without and with T␤4 (100 ng/ml) (miR-146a ϩ T␤4). These OPCs were also transfected with anti-miR-146a and T␤4 siRNA. p Ͻ 0.05 was considered as significant. FIGURE 8. Effect of T␤4 treatment and transfection with miR-146a, anti-miR-146a, and T␤4 siRNA on MBP expression and downstream signaling mediators of TLR in the primary rat embryonic OPCs. In the left panel, the primary rat embryonic OPCs were transfected with control pcDNA3 vector (Control vector), miR-146a expression vector (miR-146a vector), control pcDNA3 vector followed by T␤4 treatment (T␤4 (100 ng/ml)), and miR-146a expression vector followed by T␤4 (100 ng/ml) treatment (miR-146a ϩ T␤4) (shown at the top). In the right panel, the primary rat embryonic OPCs were transfected with control siRNA, anti-miR-146a, T␤4 siRNA, T␤4 siRNA ϩ miR-146a, and anti-miR-146a followed by T␤4 (100 ng/ml) treatment (anti-miR-146a ϩ T␤4) (shown at the top). These cells were lysed for protein extraction and Western blot analysis. The loading of the samples was normalized with ␣-tubulin. Migrations of proteins are shown at the right. Molecular mass markers are shown at the left in kDa. Note that miR-146a transfection combined with T␤4 treatment markedly induced MBP expression in the OPCs. Note that T␤4 treatment fails to induce MBP expression in the absence of miR-146a and that miR-146a transfection has no effect on MBP expression in T␤4-negative OPCs. P-, phosphorylated. Thymosin ␤4 Up-regulates miR-146a contrast, Western blot data showed that the JNK1-specific inhibitor, SP600125, increased phosphorylation of p38 MAPK and augmented MBP and Krox-20 expression but abolished phosphorylation of c-Jun in these OPCs (Fig. 11). Transfection either with IRAK1 siRNA or TRAF6 siRNA reduced phosphorylation of JNK1 and c-Jun but increased phosphorylation of p38 MAPK and enhanced the expression of MBP and Krox-20 in both OPCs in Western blot analysis (Fig. 11). These data suggest that inhibition of JNK1 is necessary for MBP synthesis because JNK1 phophorylates and activates the transcription factor c-Jun, which negatively regulates MBP synthesis. On the other hand, activation/phosphorylation of p38 MAPK was required for the expression of Krox-20 and MBP. Thus, blocking TLR4 signaling after T␤4 treatment induces the expression of Krox-20, the transcription factor for the MBP promoter, which may positively regulate MBP synthesis in these OPCs. Underlying Signaling Mechanism on the Opposite Effect of Two MAPKs, p38 MAPK and JNK1, on MBP Synthesis-To investigate the underlying signaling mechanism on the effect of p38 MAPK and JNK1 on MBP synthesis, we analyzed the expression of a key transcription factor of the MBP promoter, Krox-20 (26,27,29). Reduction or deficiency of Krox-20/Egr2 in Schwann cells resulted in the failure of MBP synthesis and myelination of axons (30 -32). Among these three MAPKs, p38 MAPK shows effects opposite to those of ERK and JNK on the expression of Krox-20/EGR2 and MBP synthesis in Schwann cells (26,27,29). ERK and JNK activate c-Jun, which inhibits the expression of Krox-20 and MBP synthesis. In contrast, p38 MAPK induces the expression of Krox-20/EGR2 and MBP synthesis in Schwann cells (26,27,29). Krox-20/EGR2 is expressed in the brain and also induces MBP synthesis in glial and olfactory ensheathing cells in mice (33,34). 
To examine the expression of Krox-20/EGR2, we performed Western blot and qrtPCR analysis in rat OPC (n ϭ 3) and N20.1 cells (n ϭ 3). These OPCs were pre-exposed with/without pharmaceutical specific inhibitors of p38 MAPK and JNK1 followed by T␤4 treatment. Data demonstrated that expression of the transcription factor Krox-20/EGR2 was required for MBP synthesis because knocking down Krox-20/EGR2 with its siRNA transfection completely reversed the effect of T␤4 on MBP synthesis. In contrast, p38 MAPK inhibitor partially reversed the effect of T␤4 on MBP synthesis at the protein and mRNA levels in rat OPC and N20.1 cells (Fig. 12). The inhibitors of p38 MAPK and JNK1 showed an opposing effect for the expression of Krox-20 at the protein and mRNA levels in rat OPC and N20.1 cells (Fig. 12). These data illustrate that the transcription factor Krox-20/EGR2 regulates the underlying signaling mechanism of the opposite effect of two MAPKs, p38 MAPK and JNK1, on MBP synthesis. T␤4 Treatment and miR-146a Transfection Induce Differentiation of OPC to Mature Oligodendrocytes-Rat primary embryonic OPCs and mouse N20.1 cells (n ϭ 3) were transfected with control (mock) and miR-146a vector and treated with and without T␤4 (100 ng/ml). The OPCs were stained with immunofluorescence antibodies for mature OL markers (MBP) and counterstained with DAPI. These data were quantified by counting the number of MBP-positive cells. DAPI-positive cells were considered as the total number of cells. The number of MBP-positive OPCs was significantly increased after treatment with T␤4 or transfection with miR-146a in rat primary embryonic OPCs and mouse N20.1 cells (Figs. 13 and 14), respectively. The miR-146a transfection amplified the effect of T␤4 -146a vector), control pcDNA3 vector followed by T␤4 treatment (T␤4 (100 ng/ml)), and miR-146a expression vector followed by T␤4 (100 ng/ml) treatment (miR-146a ϩ T␤4) (shown at the top). The right panel indicates N20.1 cells transfected with control siRNA, anti-miR-146a, T␤4 siRNA, T␤4 siRNA ϩ miR-146a, and anti-miR-146a followed by T␤4 (100 ng/ml) treatment (anti-miR-146a ϩ T␤4) (shown at the top). The loading of the samples was normalized with ␣-tubulin. Migrations of proteins are shown at the right. Molecular mass markers are shown at the left in kDa. Note that marked induction of MBP was observed after miR-146a transfection combined with T␤4 treatment in N20.1. Note that neither T␤4 treatment nor miR-146a transfection had any effect on MBP expression in the absence of miR-146a or T␤4 in N20.1 cells. P-, phosphorylated. Thymosin ␤4 Up-regulates miR-146a treatment on MBP immunostaining of both sets of OPCs. These data suggest that T␤4 treatment and miR-146a transfection induced OL differentiation in both rat primary embryonic OPCs and mouse N20.1 cells. DISCUSSION In this study, we discovered that the pleiotropic peptide, T␤4, regulates miR-146a. We previously demonstrated a strong association of T␤4 treatment with OL differentiation in in vivo and in in vitro models (5)(6)(7)23). The results of this study further support our central hypothesis of T␤4-mediated oligodendrogenesis. Our data demonstrate that T␤4 increases expression of miR-146a in rat primary OPCs and mouse N20.1 OPCs; attenuates expression of IRAK and TRAF6; and reduces expression of phosphorylation/activation of ERK1, JNK1, and c-Jun, a negative regulator of MBP. 
Therefore, these data suggest that Tβ4-mediated oligodendrogenesis results from miR-146a suppression of the TLR proinflammatory pathway and modulation of the p38 MAPK pathway.

Tβ4 is present in high concentrations (up to 0.4 mM) in various tissues, including the rat brain (35). The expression of Tβ4 in the brain is increased in neurodegenerative disease, such as Huntington disease (36), as well as in various experimental conditions, such as brain ischemia (37, 38), kainic acid-induced seizure (39), and hippocampal denervation (40). Intracerebroventricular administration of Tβ4 (10 μl of a 10 μM solution twice a day over 5 days, starting from the day of kainic acid injection) prevented kainic acid-induced hippocampal neuronal loss or neurotoxicity (41). Based on this information, our maximal dose of 100 ng/ml (20.4 nM) is not toxic and is a physiological dose for the treatment of OPCs.

Innate immune signaling pathways are activated in the brain not only in response to infectious disease but also to injury and chronic disease (42, 43). Inflammation initiates tissue repair after injury; however, it must be tightly regulated so as not to harm the healing or recovering tissue. Negative regulation of the innate immune system is achieved by several proteins and microRNAs. miR-146a is an important negative regulator of the innate immune system, and it is also highly expressed in developing oligodendrocytes during differentiation (15, 17, 18). Therefore, our finding that Tβ4 up-regulates miR-146a in our in vitro OPC models, in conjunction with previous observations that Tβ4 promotes recovery after neurological injury, suggests a multipurpose role of Tβ4 in promoting oligodendrocyte differentiation as well as modulating the inflammatory response of the innate immune system by down-regulating two components of the pathway, IRAK1 and TRAF6.

The functional role of miR-146a in cellular differentiation has been studied in many different systems. After transfection, the levels of miR-146a in our cells were increased up to 50-fold. This observation is consistent with cultured human THP-1 cells, in which miR-146a was elevated up to 1850-fold in endotoxin tolerance experiments; overexpression of miR-146a of at least 35-fold was required for endotoxin tolerance (44). Transfection of miR-146a was also employed previously for tumor suppression in glioma (45): lentiviral miR-146a transfection produced a 26-fold increase of wild-type miR-146a and attenuated the proliferation, migration, and tumorigenic potential of Ink4a/Arf-/- Pten-/- EGFRvIII murine astrocytes. Expression of miR-146a in the hematopoietic system promotes macrophage development from hematopoietic stem cells, and down-regulation of miR-146a influences megakaryocytopoiesis (46). Forced expression of miR-146a in breast tumor cells inhibits endogenous NF-κB expression and reduces the metastatic activity of the tumor cells (47). Recent work by Zhao et al. (48) has demonstrated that miR-146a is a critical regulator of inflammation. Using miR-146a knockout mice exposed to chronic LPS stimulation, they showed that hematopoietic stem cells are reduced in number and give rise to dysfunctional, miR-146a-deficient lymphocytes and myeloid cells, which produce elevated levels of TRAF6 and NF-κB, resulting in enhanced production of IL-6. Up-regulation of these factors resulted in hematopoietic stem cell depletion, bone marrow failure, and myeloproliferative disease, suggesting that chronic inflammation leads to accelerated aging and cancer risk. Therefore, miR-146a may be a pivotal component in regulating inflammation, and its absence may contribute to the detrimental effects of aging.

The observation that miR-146a is highly expressed in oligodendrocyte lineage cells suggests that maturation of oligodendrocytes occurs in an environment in which chronic inflammation is down-regulated. Our results showing that Tβ4 increases expression of miR-146a while promoting differentiation of OPCs to MBP-positive oligodendrocytes support this hypothesis. Inhibiting miR-146a in Tβ4-treated cells removed the inhibitory effect on the expression of IRAK and TRAF6, with no increase in MBP expression, suggesting that miR-146a is a necessary component for MBP expression and for down-regulation of the TLR proinflammatory pathway. Moreover, overexpression of miR-146a in Tβ4-treated cells produced amplified MBP expression as well as suppression of IRAK and TRAF6.

TLRs activate each of the three major mitogen-activated protein kinases, ERK, JNK, and p38 MAPK (12). A complex series of MAPK modules is triggered after TLR activation, leading to eventual activation of ERK, JNK, and p38 MAPK, which in turn phosphorylate numerous transcription factors and cytoskeletal proteins, influencing cell survival and controlling the expression of immune mediators. Our observation that Tβ4 modulates two key proinflammatory signaling molecules, IRAK and TRAF6, with corresponding down-regulation of the phosphorylation/activation of ERK1, JNK1, and c-Jun, suggests that Tβ4 reduces inflammation, modulates the MAPKs, and creates an environment favorable for oligodendrocyte differentiation. Our previous study using SVZ cells showed that Tβ4 treatment induced p38 MAPK while suppressing ERK1 and JNK activity and phosphorylated c-Jun, which negatively regulates myelin gene promoter activity (23). Data from this study demonstrate similar results in rat primary OPCs, suggesting that Tβ4 regulation of the MAPKs promotes oligodendrocyte differentiation. Furthermore, our data suggest that up-regulation of miR-146a influences activation of p38 MAPK with corresponding suppression of ERK1 and JNK1, and thus promotes differentiation of OPCs to mature MBP-positive oligodendrocytes.
A similar antagonistic effect of p38 MAPK against JNK1 on MBP synthesis was also found for Krox-20 expression. Expression of Krox-20 was required for MBP synthesis in rat and mouse OPCs. Thus, Krox-20 regulates MBP synthesis and underlies the opposing effects of the two TLR-signaling MAPKs, p38 MAPK and JNK1, on this process. Another transcription factor, c-Jun, which inhibits MBP synthesis, is a downstream target of JNK1, a serine-threonine kinase that directly phosphorylates c-Jun and increases its activity (23, 27). In contrast, the transcription factor Krox-20, which induces MBP synthesis, is a downstream target of p38 MAPK, which up-regulates MBP expression. Thus, these two transcription factors, c-Jun and Krox-20, act antagonistically on MBP synthesis and oligodendrocyte differentiation. In summary, Tβ4 treatment up-regulates miR-146a expression in rat primary embryonic OPCs and mouse N20.1 cells. Tβ4 treatment induced miR-146a-mediated suppression of the proinflammatory signaling molecules IRAK1 and TRAF6, leading to up-regulation of p38 MAPK and inhibition of phospho-c-Jun, a negative regulator of the MBP promoter. Tβ4-regulated miR-146a may thus be required for MBP expression. Furthermore, Tβ4 treatment and miR-146a transfection induced morphological changes indicative of OL differentiation. These results provide further support for the hypothesis that Tβ4 mediates oligodendrogenesis and support its development as a treatment for neurological injury.
2018-04-03T02:53:36.727Z
2014-05-14T00:00:00.000
{ "year": 2014, "sha1": "ccf49f4c3fb8d115d2118b0c419a12b2a5be3373", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/289/28/19508.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "d55af136e8ad120286eccb3d539e1e81e8af7cb1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1757628
pes2o/s2orc
v3-fos-license
Hippocampal volume correlates with attenuated negative psychotic symptoms irrespective of antidepressant medication Background Individuals with at-risk mental state for psychosis (ARMS) often suffer from depressive and anxiety symptoms, which are clinically similar to the negative symptomatology described for psychosis. Thus, many ARMS individuals are already being treated with antidepressant medication. Objectives To investigate clinical and structural differences between psychosis high-risk individuals with or without antidepressants. Methods We compared ARMS individuals currently receiving antidepressants (ARMS-AD; n = 18), ARMS individuals not receiving antidepressants (ARMS-nonAD; n = 31) and healthy subjects (HC; n = 24), in terms of brain structure abnormalities, using voxel-based morphometry. We also performed region of interest analysis for the hippocampus, anterior cingulate cortex, amygdala and precuneus. Results The ARMS-AD had higher ‘depression’ and lower ‘motor hyperactivity’ scores than the ARMS-nonAD. Compared to HC, there was significantly less GMV in the middle frontal gyrus in the whole ARMS cohort and in the superior frontal gyrus in the ARMS-AD subgroup. Compared to ARMS-nonAD, the ARMS-AD group showed more gray matter volume (GMV) in the left superior parietal lobe, but less GMV in the left hippocampus and the right precuneus. We found a significant negative correlation between attenuated negative symptoms and hippocampal volume in the whole ARMS cohort. Conclusion Reduced GMV in the hippocampus and precuneus is associated with short-term antidepressant medication and more severe depressive symptoms. Hippocampal volume is further negatively correlated with attenuated negative psychotic symptoms. Longitudinal studies are needed to distinguish whether hippocampal volume deficits in the ARMS are related to attenuated negative psychotic symptoms or to antidepressant action. Introduction The clinical high-risk state of psychosis (at-risk mental state, hereafter ARMS) is defined by attenuated positive psychotic symptoms, genetic liability and functional deterioration or brief and self-remitting psychotic symptoms (Fusar-Poli et al., 2013;Yung et al., 1998). However, affective symptoms, including depressive and anxiety symptoms, are also highly prevalent in these individuals (Salokangas et al., 2012). A recent meta-analysis, conducted in 1683 high-risk subjects, confirmed that the baseline prevalence of comorbid depressive and anxiety disorder is 41% and 15%, respectively (Fusar-Poli et al., 2014a). Depressive and anxiety symptoms can precede the onset of attenuated positive psychotic symptoms (Fusar-Poli et al., 2013). Some studies indicate that co-occurrence of depressive disorders can predict subsequent transition to psychosis in ARMS individuals (Salokangas et al., 2012). However, other studies have not confirmed this finding (Fusar-Poli et al., 2014a). Additionally, a large study on 3349 twins suggests an association between depressive and/or anxiety symptoms and psychosis-like traits (schizotypy) and emphasizes a major role for genetics, especially as regards positive symptoms (Macare et al., 2012). The comorbidity of psychotic and depressive disorders in the ARMS population is associated with specific psychopathological features at the time of the presentation to high risk services and with low functional level (Fusar-Poli often receive antidepressant medication (e.g. 42% of ARMS individuals in our previous study (Smieskova et al., 2012a)). 
Negative psychotic symptoms are a major source of disability in the psychosis spectrum and are refractory to any effective treatment (Fusar-Poli et al., 2014b). Negative symptoms group into two factors, one involving diminished expression of affect and alogia and the second involving avolition, including anhedonia and asociality (Fusar-Poli et al., 2014b). Antidepressants may have a potential benefit for ARMS individuals, as they may target their negative attenuated psychotic symptoms (Cornblatt et al., 2007;Fusar-Poli et al., 2007). These studies indicate that antidepressant treatments in ARMS individuals can impact their longitudinal outcomes. However, it is not clear if these improvements are associated with underlying neurobiological changes . Similar alterations were found in depressive disorders. Reductions in gray matter volume (GMV) in the anterior cingulate gyrus, hippocampus, amygdala (Koolschijn et al., 2009) and prefrontal cortex (Lorenzetti et al., 2009) were associated with major depression. The only available study directly testing the effect of comorbid depressive disorders on the neurobiology of ARMS uncovered a significant impact on the anterior cingulate region (Modinos et al., 2014). On the other hand, long-term antidepressant medication can be neuroprotective and some studies have linked the use of antidepressants to an increase in hippocampal volume in patients with major depressive disorder (Amico et al., 2011;Malykhin et al., 2010). It has been shown that antidepressants increase hippocampal neurogenesis (Anacker et al., 2011). Thus, both affective symptoms (Baynes et al., 2000) and antidepressant medication (Kraus et al., 2014) are known to impact brain structure. In the present study, we addressed for the first time the effect of antidepressant treatment and attenuated negative psychotic symptoms on the neurobiology of ARMS. Firstly, we hypothesized that ARMS individuals without current antidepressant treatment (ARMS-nonAD) would manifest more severe attenuated negative symptoms than ARMS subjects currently receiving antidepressants (ARMS-AD). Secondly, we hypothesized that ARMS-AD individuals would have increased GMV in regions associated with depressive symptoms and/or antidepressant medication (hippocampus, anterior cingulate gyrus, amygdala and precuneus) compared to the ARMS-nonAD individuals. Thirdly, we hypothesized that the volumetric abnormalities in gray matter between ARMS-AD and ARMS-nonAD would be associated with attenuated negative symptoms. Subjects MRI data were collected within the framework of a research program on the early detection of psychosis. The subjects were recruited in our specialized clinic for the early detection of psychosis (FEPSY) at the Psychiatric Outpatient Department, Psychiatric University Clinics Basel, Switzerland . The entire group of ARMS individuals (n = 49) conforms to Yung3s criteria (Yung et al., 1998) and overlaps with previously published data (Borgwardt et al., 2007a;Borgwardt et al., 2007b;Smieskova et al., 2012a;Smieskova et al., 2012b). All the ARMS individuals were antipsychotic-free and were assessed prior to the neuroimaging session. 
ARMS inclusion required one or more of the following: (a) attenuated psychotic symptoms that do not reach full psychosis threshold (b) brief limited intermittent psychotic symptoms (lasting less than a week with spontaneous remission) (c) a first degree relative with a psychotic disorder plus at least two indicators of a clinical change, such as a marked decline in social or occupational functioning. We assessed the subjects using the Basel Screening Instrument for Psychosis (BSIP) , the Brief Psychiatric Rating Scale (BPRS) (Lukoff et al., 1986), the Scale for the Assessment of Negative Symptoms (SANS) (Andreasen, 1989) and the Global Assessment of Functioning (GAF) (Endicott et al., 1976). Attenuated negative psychotic symptom severity was investigated with the cluster 'negative symptoms', calculated from the BPRS as a sum of blunted affect, emotional withdrawal, and motor retardation (BPRS16, BPRS17 and BPRS18) (Fusar-Poli et al., 2014b;Velligan et al., 2005). Additionally, we calculated 'mood disturbance' BPRS cluster as a sum of anxiety, depression, suicidality and guilt (BPRS02, BPRS03, BPRS04 and BPRS05) (Thomas et al., 2004), as well as depression (BPRS03) and motor retardation (BPRS18) scores alone. We used these scores for stepwise regression analysis with backward elimination. Participants were excluded from the study if they presented with a history of previous psychotic disorder, psychotic symptomatology secondary to an organic disorder, substance abuse, affective psychosis, borderline personality disorder, age under 18 or over 40, inadequate knowledge of the German language or IQ less than 70 (assessed by multiple-choice vocabulary intelligence test) (Lehrl et al., 1995). Healthy controls (n = 24) were from the same geographical area as the other groups (Table 1). All participants provided written informed consent. The study was approved by the local ethics committee. Magnetic resonance imaging acquisition For structural imaging, a whole brain 3D T 1 -weighted MPRAGE (magnetization prepared rapid acquisition gradient) sequence was applied using a 3 T magnetic resonance imaging scanner (Magnetom Verio, Siemens Healthcare, Erlangen, Germany) and a 12-channel radio frequency head coil. The acquisition was based on a sagittal matrix of 256 × 256 × 176 and 1 × 1 × 1 mm 3 isotropic spatial resolution, with an inversion time of 1000 ms, repetition time of 2 s, echo time of 3.4 ms, flip angle of 8°and bandwidth of 200 Hz/pixel. All images were reviewed by trained neuroradiologists for radiological abnormalities. Image analysis Structural MRI data were analyzed using the voxel-based morphometry toolbox (VBM8, http://dbm.neuro.uni-jena.de/vbm8/), implemented within SPM8 (Wellcome Department of Cognitive Neurology, London, UK) and running on Matlab 7.11 (MathWorks, USA). All T 1 -weighted images were first checked for scanner artifacts and anatomical abnormalities. Images were then segmented into gray matter, white matter and cerebrospinal fluid using the adaptive maximum a posteriori technique (in contrast to the classical use of a priori Tissue Probability Maps), where local variations in the parameters are modeled by means of slowly varying spatial functions (Rajapakse et al., 1997). More accurate segmentation can be achieved with partial volume estimation of additional mixed tissue classes in every voxel. All images were DARTEL-normalized (Diffeomorphic Anatomical Registration using Exponentiated Lie algebra; Ashburner, 2007). 
The DARTEL template was derived from 550 healthy controls as provided in MNI space. This method produces more accurate results for registration and additional registration in MNI space was unnecessary (Ashburner, 2007). Finally, sample homogeneity was reviewed and all images were smoothed using an isotropic 8 mm full-width-at-halfmaximum (FWHM) Gaussian kernel (Shen and Sterr, 2013). Group differences were explored using a one-way ANOVA. Since our groups differed significantly in gender and age, we introduced these two variables as additional covariates of interest. Group comparisons included ARMS-AD versus ARMS-nonAD versus healthy controls. Brain region labeling was achieved using the Harvard-Oxford structural atlas (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases (Desikan et al., 2006)), incorporated within the FMRIB Software Library (FSL). Statistical significance was assessed at a cluster level using a threshold of p b 0.005 uncorrected (cluster-forming threshold); statistical inference was then made at p b 0.05, adjusted to provide a family-wise error (FWE) correction at the peak and cluster levels. ROI analysis On the basis of the previous evidence, we defined 4 specific regions of interest (ROIs) to test for differences in GMV between our two ARMS groups: the bilateral hippocampus, (associated with greater risk of depressive disorder; Amico et al., 2011;Vasconcelos et al., 2011), the anterior cingulate gyrus (reported to be reduced during ongoing depression; Amico et al., 2011;Koolschijn et al., 2009) and in co-morbid depression in ARMS (Modinos et al., 2014); the amygdala (inconsistent changes in depression; Koolschijn et al., 2009) and the precuneus (associated with increased gray matter density after short antidepressant application in HC; Kraus et al., 2014). All the regions were individually defined using the Wake Forest University PickAtlas Toolbox (http://fmri.wfubmc.edu/software/PickAtlas). For each region, a small volume correction was conducted using a 5 mm radius for the hippocampus (Amico et al., 2011;Vasconcelos et al., 2011) and amygdala or a 10 mm radius for the precuneus and anterior cingulate cortex (ACC) (Abutalebi et al., 2012;Amico et al., 2011). Mean gray matter volume indices were extracted from these regions using the Rex Toolbox (http://web.mit.edu/swg/software.htm) implemented in Matlab 7.11. The analysis was performed on region of interest basis, with no conjunction mask, no scaling and extraction of the mean within the predefined ROI. The extracted values were used for a stepwise backward regression analysis (see Supplementary table). Only corrected family-wise error values were taken into consideration, in order to avoid a type I error. Statistical analysis of clinical variables Clinical and socio-demographic differences were assessed using one-way ANOVA and χ 2 -test. For post hoc analysis, the Bonferroni correction was conducted. In addition, a stepwise regression analysis with backward elimination was applied, to restrict correlating variables. We included the BPRS total score and SANS total score; as well as the BPRS clusters for 'negative symptoms' as a sum of blunted affect, emotional withdrawal, and motor retardation (BPRS16, BPRS17 and BPRS18); and 'mood disturbance' as a sum of anxiety, depression, suicidality and guilt (BPRS02, BPRS03, BPRS04 and BPRS05); and the single scores 'depression' (BPRS 3) and 'motor hyperactivity' (BPRS 23) (see Supplementary table). 
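As a supplementary illustration (not part of the original methods, which used SPSS), a backward-elimination regression of an extracted ROI volume on the clinical scores listed above can be sketched in Python as follows; the column names, the removal threshold, and the choice of statsmodels/SciPy are assumptions for illustration only, not the study's actual implementation.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

def backward_eliminate(df, outcome, predictors, p_remove=0.10):
    # Ordinary least squares with backward elimination: repeatedly drop the
    # predictor with the largest p-value until all remaining p-values < p_remove.
    keep = list(predictors)
    while True:
        model = sm.OLS(df[outcome], sm.add_constant(df[keep]), missing="drop").fit()
        pvals = model.pvalues.drop("const")
        if pvals.max() < p_remove or len(keep) == 1:
            return model
        keep.remove(pvals.idxmax())

# Hypothetical table of extracted ROI volumes and BPRS/SANS scores
# (column names are placeholders, not the study's variable names):
# df = pd.read_csv("arms_roi_clinical.csv")
# final_model = backward_eliminate(df, "hippocampus_gmv",
#                                  ["bprs_total", "sans_total", "bprs_negative",
#                                   "bprs_mood", "bprs_depression", "bprs_motor"])
# print(final_model.summary())
# Two-tailed Pearson correlation between hippocampal GMV and the
# 'negative symptoms' cluster, analogous to the correlation reported below:
# r, p = pearsonr(df["hippocampus_gmv"], df["bprs_negative"])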
We applied outlier detection using Cook's distance, and no subject had to be excluded from the regression analysis. Still, we excluded one ARMS-nonAD individual due to missing data for BPRS items 16, 17, and 18, which are necessary for calculating the 'negative symptoms' cluster. We then performed correlation analyses with the clinical parameters that remained significant in the stepwise regression. From the regions of interest with significant differences between ARMS-AD and ARMS-nonAD (hippocampus and precuneus), we extracted the volumes and used them in our correlation calculations. The data were normally distributed and we performed a series of two-tailed Pearson's correlation analyses with the statistical threshold set at p < 0.05. All analyses were performed using the Statistical Package for the Social Sciences (SPSS, Version 22).

Clinical and demographic characteristics The ARMS-AD, ARMS-nonAD and HC showed significant differences in age at MRI scan (p = 0.02), gender (p = 0.01), years of education (p = 0.002), smoking (p = 0.001), and cannabis use (p = 0.006), but no differences in alcohol consumption or handedness. Post hoc analysis showed that smoking (p = 0.006) and cannabis consumption (p = 0.014) were significantly more common in the ARMS-AD group than in the ARMS-nonAD group (Table 1). We observed significant clinical differences between ARMS-AD, ARMS-nonAD and HC in the total BPRS score (p < 0.0001), total SANS score (p < 0.0001), GAF total score (p < 0.0001), BPRS cluster for 'negative symptoms' (p < 0.0001), BPRS 'mood disturbance' (p < 0.0001), BPRS 'depression' score (p < 0.0001), and BPRS 'motor hyperactivity' score (p = 0.005). Post hoc analysis showed that the ARMS-AD had a higher BPRS 'depression' score (p = 0.011) and less 'motor hyperactivity' (p = 0.029) than the ARMS-nonAD (Table 1). The test of our a priori defined contrast in attenuated negative symptoms (BPRS cluster for 'negative symptoms') between ARMS-nonAD and ARMS-AD found no significant difference (p = 0.220).

Whole brain analysis Compared to the HC, there was significantly less GMV in the middle frontal gyrus in the whole ARMS cohort, and in the superior frontal gyrus in the ARMS-AD group (p uncorr. < 0.05, Table 2). The ARMS-nonAD group showed reduced GMV in the left superior parietal lobe compared with the ARMS-AD group (p uncorr. < 0.05, Table 2).

Region of interest analyses The ARMS-AD group had less GMV in the left hippocampus (Fig. 1) and right precuneus than the ARMS-nonAD (p FWE-corr. < 0.05 after small volume correction, Table 2). However, these results did not survive correction for multiple comparisons (p < 0.0125). No significant differences were found for the ACC and amygdala.

Correlation between ROI volumes and clinical parameters We found a negative correlation between hippocampal volume and the BPRS 'negative symptoms' cluster, both in ARMS subjects (r = −0.314, p = 0.030, Fig. 2) and in all our subjects, including the HC (r = −0.293, p = 0.013). There was no significant correlation between the BPRS 'negative symptoms' cluster and precuneus volume.

Discussion In the present study, we addressed for the first time the effects of antidepressant treatment and of attenuated negative psychotic symptoms on the neurobiology of ARMS individuals. Firstly, we found no evidence for our first hypothesis, that ARMS individuals suffer more pronounced attenuated negative psychotic symptoms if they have not been treated with antidepressants.
We found that ARMS-AD individuals had a higher depression score, lower motor hyperactivity, and smoked more cigarettes and cannabis than the ARMS-nonAD individuals. The antidepressants that ARMS-AD individuals were receiving differed in their mode of action: some inhibited the reuptake of serotonin and/or noradrenaline, while others enhanced the release of these monoamines (Andrade and Rao, 2010). Moreover, antidepressants may take weeks or longer after the start of treatment to take effect (Penn and Tracy, 2012). The duration of antidepressant therapy in the ARMS-AD group varied from 4 to 170 days, and thus the observed effect could be indicative of the predominant depressive and/or attenuated negative psychotic symptoms, or of an already developed antidepressant effect of the medication. Hence, we cannot clearly distinguish the extent to which each of the two components contributes to the current clinical state. Secondly, we did not find increased GMV in the regions associated with depressive symptoms in those ARMS individuals who were receiving antidepressants, compared to those without this medication. Thus we could not confirm our second hypothesis; instead, we found GMV deficits in the hippocampus and precuneus only in the ARMS individuals currently receiving antidepressant medication. Our third hypothesis related to volumetric abnormalities in ARMS and their association with attenuated negative symptoms; we confirmed this relationship in the hippocampus. We found a clear negative correlation between bilateral hippocampal volume and attenuated negative symptoms in all ARMS individuals. This corresponds to studies linking the hippocampus with psychosis (Jun et al., 2012). Previous studies similarly either found negative correlations between left hippocampal volume and negative symptoms in patients with schizophrenia (Rajarethinam et al., 2001) or at least a strong trend in this direction (Brambilla et al., 2013). These findings underline the role of the hippocampus in the pathophysiology of schizophrenia and suggest specific associations between individual structures and both the positive and negative symptoms of the illness (Kühn et al., 2012; Rajarethinam et al., 2001). Thus, our findings support the importance of hippocampal structures as a region of interest in the early stage of psychosis (Benetti et al., 2009; Fusar-Poli et al., 2012; Walter et al., 2012). Our ROI analysis demonstrated smaller GMV in the left hippocampus in the ARMS-AD group than in the ARMS-nonAD group. The hippocampus is involved in various psychiatric conditions, including major depression and psychosis (Videbech and Ravnkilde, 2004; Walter et al., 2012). Three meta-analyses have confirmed significant reductions in hippocampal volume in depression (Campbell et al., 2004; Cole et al., 2011; Videbech and Ravnkilde, 2004). Furthermore, the total number of depressive episodes was significantly correlated with the reduction in right hippocampal volume (Videbech and Ravnkilde, 2004). The left-hemispheric deficits in hippocampal volume may reflect brain degeneration as a consequence of chronic stress (Schmidt and Duman, 2010). Recent data on antipsychotic-free ARMS have confirmed that vulnerability to psychosis may be associated with a significant decrease in hippocampal volume (Fusar-Poli et al., 2012; Wood et al., 2010).
In the right precuneus, we found less GMV in the ARMS-AD group than in the ARMS-nonAD group. This is consistent with Grieve et al. (Grieve et al., 2013), who found significant reductions in the precuneus volumes, along with several other structural changes in depression. However, the direction of the effect is controversial. For example, a positive association was described between the volume of the precuneus and the severity of depression (Kroes et al., 2011). It is well established that intrusive imagery and increased self-focus, common in patients suffering from depression, are regularly associated with higher depression scores (Kroes et al., 2011). Since the precuneus is involved in visuospatial processing, imagery and self-related processing (Kjaer et al., 2002;Wenderoth et al., 2005), depression could in principle enhance its GMV. We acknowledge the limitations of a cross-sectional design, which precludes studying clinical and structural abnormalities within the same group of ARMS before and after antidepressant medication. In order to decide whether brain volumetric deficits are related to distinct depressive symptoms or to the antidepressant effect, longitudinal study designs are needed. Secondly, other confounders, such as nicotine and cannabis consumption, may have influenced our findings (higher consumption in ARMS-AD individuals). Furthermore, the ARMS-nonAD group shows more motor hyperactivity than the ARMS-AD group. This could result from the early effect of the antidepressant medication or from the sedative effect of cannabis or nicotine self-medication, which was higher in the ARMS-AD group (Warburton, 1985). Cigarette use may serve as an instrument to alleviate depressive symptoms, although the role of cannabis consumption is unclear. We can only speculate that the attenuated negative symptoms may drive cannabis consumption, although this abuse may exacerbate the positive symptoms observed (Gill et al., 2013). We are also aware that not all negative symptoms are of hippocampal origin. In addition, the prescribed antidepressant drugs have different affinities to various synaptic receptors and therefore their effects on macroscopic structures and neurogenesis may vary and also be associated with other brain regions. Likewise, differences in the duration of antidepressant therapy may affect their impact on brain structure. Finally, relatively small sample groups are included, which reduces the statistical power to detect any significant effects. Conclusion Hippocampal volume was negatively associated with attenuated negative psychotic symptoms in ARMS individuals. Surprisingly, ARMS individuals without antidepressant medication did not suffer more pronounced attenuated negative psychotic symptoms. The short-term antidepressant medication in this study is more likely to be an indicator of a more serious depressed state than to have a direct effect on attenuated negative psychotic symptoms. These findings emphasize the importance of comorbidity issues, especially in the context of depressive and attenuated negative psychotic symptoms in clinical high-risk individuals and their functional outcomes.
2016-10-25T01:43:02.243Z
2015-04-29T00:00:00.000
{ "year": 2015, "sha1": "76e2b09a4b1b154bfdf34a18c9cde9510a8057b1", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.nicl.2015.04.016", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7435294644f9a1d22d4d9c4aa49dc8cdcb58f007", "s2fieldsofstudy": [ "Psychology", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
254218150
pes2o/s2orc
v3-fos-license
A Comparison Between Tone-Based and Code-Based Cell Search Schemes for Multipath Division Multiple Access Cell search procedure is an essential and critical process in an early stage when the user equipment (UE) is powered on. It mainly comprises symbol and frame timing synchronization, frequency offset compensation, and base station (BS) identification. Among the current exiting 3G, 4G and 5G mobile networks, the UE completes the initial cell search based on different code sequences. In this paper, we investigate and compare two kinds of cell search methodology. Tone-based and code-based methods are explored for massive antenna systems. The detailed description and analysis are offered for two approaches. Simulation results indicate that the tone-based cell search not only possesses stable performance with respect to path numbers but also outperforms the code-based one in general channel realizations in terms of cell search error probability. The results suggest that the tone-based cell search could be used for 5G communication systems. I. INTRODUCTION During mobile communication setup, initial cell search is a necessary process for link establishment between a UE and a serving BS. With the help of control signals, the procedure deals with time and frequency synchronization and performs home BS selection for the UE. Time synchronization consists of symbol and frame timing synchronization [1] for both BS and UE to be properly time aligned. On the other hand, frequency synchronization is to estimate and compensate for the integer carrier frequency offset (ICFO) and the fractional carrier frequency offset (FCFO) [2]. The frequency impairment originates from the oscillator mismatch between the transmitter and the receiver, and the Doppler shift due to mobility of the UE. Therefore, the initial cell search plays an essential role for the successful connection between BS and UE. Of the original 1G to current 5G mobile networks, the initial cell search is done in either frequency or time domains. The associate editor coordinating the review of this manuscript and approving it for publication was Stefan Schwarz . A. 1G TO 5G CELL SEARCH METHODS The Advanced Mobile Phone System (AMPS) [3] is the first generation (1G) mobile communication systems based on analog signals developed by Bell Labs in the 1980s. The frequency reuse factor is 7, and each cell is divided into 3 sectors. Therefore, the number of Physical Cell IDs (PCI) is 21. In AMPS system, each sector is assigned a control channel. The initial cell search is based on finding the control channel with the highest power to determine the home BS sector, which can be divided into the following two steps. (1) The first step is frequency scanning. After the UE is powered on, it scans those 21 control channels. The UE next sorts them in the order of decreasing power and selects the strongest one as the home BS. (2) The second step is Forward Control Channel (FCC) detection. This is for time synchronization, including bit synchronization and frame synchronization. We call this search method Frequency Based Cell Search. The Global System for Mobile Communications (GSM) [3], [4] is the second generation (2G) mobile communication VOLUME 10, 2022 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ system, which was widely used in the 1990s. Since then, both data signals and control channels are digitized. 
The frequency reuse factor is 4 (with frequency hopping) and each cell is divided into 3 sectors. Therefore, the number of PCI is 12. The initial cell search of the GSM system can be divided into four steps. (1) The first step is frequency scanning. The UE scans all the control channels and arranges them in decreasing order according to their signal strength. (2) The second step is Frequency Correction CHannel (FCCH) check. The UE tunes to the strongest carrier frequency, and then confirms whether it is the Broadcast CHannel (BCH) through decoding an FCCH burst. (3) The third step is Synchronization CHannel (SCH) detection. This is for time synchronization. (4) The fourth step is Broadcast Control CHannel (BCCH) detection to acquire the system information. After completing the frequency and time synchronization, the UE can accurately read the home BS ID and other system information from the BCCH. So far, the initial cell search has been completed. This initial cell search method also belongs to Frequency Based Cell Search, like the AMPS system. Wideband Code Division Multiple Access (WCDMA) [5], [6] is a third generation (3G) mobile communication system. The WCDMA system was widely used in the 2000s. The frequency reuse factor is 1, and each cell is divided into 3 sectors. The number of PCI is 512. The initial cell search of the WCDMA system can be divided into three steps below [7], [8]. (1) The first step is Primary Synchronization CHannel (P-SCH) detection. The UE correlates the received signal with a known and unique Primary Synchronization Code (PSC) to obtain the exact slot timing. Typically, this can be done through a PSC matched filter. (2) The second step is Secondary Synchronization CHannel (S-SCH) detection. This is for frame timing synchronization and Scrambling Code Group (SCG) detection. 3GPP uses a total of 512 Scrambling Codes (SCs), divided into 64 groups, for 512 cell sectors. Each group contains 8 scrambling codes. The information of the SCG is carried in the S-SCH. After this step, we can determine which group it belongs to. (3) The third step is Common Pilot CHannel (CPICH) detection. This step is to select 1 SC from the SCG determined in Step 2. CPICH carries a SC of length 38400 chips. Each BS sector is assigned a SC. Once the scrambling code is detected, the PCI is also known. So far, the initial cell search has been completed. We call this initial cell search method Code Based Cell Search. Long Term Evolution Advanced (LTE-A) [9], [10], [11] is the fourth generation (4G) mobile communication system, which was widely used in 2010s. The frequency reuse factor is 1 (with partial frequency reuse), and each cell is divided into 3 sectors. The number of PCI is 504. The initial cell search of the LTE-A system is executed with the following three steps [12], [13], [14]. (1) The first step is initial synchronization. It is performed in the time domain. From the fact that the cyclic prefix (CP) is a copy of the tail part of OFDM symbols, we can estimate the symbol timing and fractional carrier frequency offset. (2) The second step is Primary Synchronization Signal (PSS) detection. This is performed in the frequency domain for slot timing synchronization, integer carrier frequency offset detection, and sector ID detection. The PCI index is determined from N cell ID = 3N (1) ID + N (2) ID , where N (1) ID and N (2) ID are Cell-Identity Group (CIG) and sector ID, respectively. The PCIs are divided into 168 groups. Note that N (1) ID ∈{0∼167} and N (2) ID ∈{0,1,2}. 
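As a small concrete example of this PCI arithmetic (the value ranges are those stated above; the helper functions are illustrative, not part of any standard API):

def compose_pci(group_id: int, sector_id: int) -> int:
    # Combine the cell-identity group (carried in SSS) and the sector ID
    # (carried in PSS) into a physical cell ID: N_ID_cell = 3*N1 + N2.
    assert 0 <= group_id <= 167 and 0 <= sector_id <= 2
    return 3 * group_id + sector_id

def decompose_pci(pci: int) -> tuple[int, int]:
    # Recover (group_id, sector_id) from a detected PCI.
    assert 0 <= pci <= 503
    return pci // 3, pci % 3

assert compose_pci(167, 2) == 503          # 504 PCIs in total: 168 groups x 3 sectors
assert decompose_pci(301) == (100, 1)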
N (1) ID is carried in the Secondary Synchronization Signal (SSS) and N (2) ID is transmitted in PSS. Thus, 504 (= 168 × 3) PCIs are assumed in LTE-A. (3) The third step is SSS detection. This step is for frame timing synchronization and Cell-Identity Group detection. After the above three steps, we have completed the process of initial cell search. Each BS sector of the LTE-A system corresponds to a set of codes (PSS+SSS). Since the initial cell search is based on detecting these two codes to find a home BS, this method belongs to Code Based Cell Search. 5G New Radio (5G-NR) is a fifth generation (5G) mobile communication system, which been commercially used since 2020s [15], [16], [17]. The frequency reuse factor is 1, and each cell is divided into 3 sectors. The PCI index is determined from N cell ID = 3N (1) ID ∈{0,1,2} are Cell-Identity Group and sector ID, respectively. Thus, 1008 (= 336 × 3) PCIs are assumed in 5G-NR. Likewise, 5G-NR uses the similar cell search method as in the LTE-A, which is also a kind of Code Based Cell Search. A PSS based timing synchronization algorithm with anti-frequency offset and anti-noise is developed in [18]. It proposed improved coarse and fine timing synchronization algorithms based on Fourier theory and a triple autocorrelation algorithm. In [19], it presented a deep-learning based initial access method on mmWave bands. It used probability function for detection statistics, which is different from conventional ones that exploit energy detection. Reference [20] offered a physical-layer cell ID detection algorithm that employs a joint estimation for both the frequency offset and the SSS sequence. It adopted 5G-NR beamforming technique for the initial access at BSs. In [21], it proposed a network resolved and mobile assisted cell search, which lets the BSs be the main performers for deciding the appropriate home BS and UEs only be the role of assistants in the cell search process. As compared to the conventional cell search that requires the UE to detect the cell ID and decode the control data, a significant computation can be offloaded to the BSs that relieves the computational efforts and enhances power efficiency for the UE. B. MOTIVATION AND CONTRIBUTIONS To sum up, 1G and 2G use Frequency Based Cell Search, 3G, 4G, and 5G use Code Based Cell Search [22]. In this article, we compared tone-based cell search method with code-based method. We will introduce and analyze both cell search methods in detail, and the performance is evaluated on a 5G multipath division multiple access (MDMA) system. Contributions of this paper are described as follows. 1. Current and previous use of initial cell search methods are listed and compared, which shows that the 3G, 4G and 5G mobile networks all rely on code-based cell search. 2. The detailed description and analysis are offered for both code and tone-based cell search. 3. Simulation results indicate that the tone-based cell search outperforms the code-based one in general channel realizations in terms of cell search error probability. 4. It is found that the tone-based cell search possesses stable performance which is nearly invariant to path numbers. The paper is organized as follows. Sec. II shows the MDMA system architecture, including the channel model and frame structure. Sec. III gives the detail of the tonebased cell search. Sec. IV then provides the code-based cell search. Sec. V shows simulation results to evaluate cell search performance in terms of cell search error probability. 
Finally, the paper concludes in Sec. VI. The notations and the meanings thereof in the whole paragraphs are described in Table 1. The bold face in lower and upper cases are used respectively for vectors and matrices. II. MDMA SYSTEM OVERVIEW MDMA is one of the multiple access techniques for 5G mmWave communication systems [23]. It greatly simplifies computation burden at UE terminals whereas it endows the BS with powerful processing capabilities. The BS acquires multipath diversity by Pre-RAKE precoding and RAKE equalizer at transmitters and receivers, respectively. Moreover, MDMA obtains large processing gain to suppress interference by deploying massive antennas at BSs, which is feasible for mmWave communications. Antennas at BSs are separately placed tens of wavelengths apart to be of low correlation. Thus, MDMA exploits both time and spatial degrees of freedom to separate users from each other. Each user owns equivalently M uncorrelated multipath channels at both time and frequency domains as shown in Fig. 1, where M is the number of BS antennas. The details are revealed in [24] and [25] for readers of interest. In brief, the benefits of using MDMA are channel hardening, uniform data rate, high cellular capacity, spatial focus beamforming, and hybrid multiple access. Fig. 2 is the frame structure for MDMA at mmWave band of 30 GHz. The smallest transmission unit contains four time slots, with two uplink (UL) slots followed by two downlink (DL) slots. 25 units composes of one frame of 1 ms. Since the coherence time, which is (5f d ) −1 , at 30 GHz band is roughly 20 µs for a vehicular speed of 300 km/hr, the consecutive two time slots of 20 µs experience nearly the same channel response. Thus, the channel estimate at the first UL slot can be used not only for BS equalizer but also for the Pre-Rake precoding in the first DL slot. The second UL slot is used in the same manner for the second DL slot. Note that the channel estimate in MDMA is completed with the aid of different pilots sent from each user in the uplink [26]. The channel bandwidth considered is 200 MHz allocated for each user at the same piece of spectrum. OFDM modulation is used for the purpose of cell search. FFT size and subcarrier spacing are 2048 and 100kHz, respectively. The sampling time is about 5ns. On the other hand, single carrier modulation, say BPSK, is employed for data transmission. The processing gain derived from massive BS antennas is used to suppress both intra-cell and inter-cell interferences. Besides, the MDMA is interference limited with power control [25]. In addition, control channels are designed in both time and frequency domains. For the following discussion, tone-based and code-based cell search methods are corresponding to frequency-domain and time-domain control channel designs, respectively. III. TONE-BASED CELL SEARCH The procedure for the tone-base cell search is presented in Fig. 3. It is composed of four essential steps. In the first step, the initial synchronization detects symbol timing and also estimates and compensates for FCFO. The following primary control tone (PCT) detection estimates ICFO and compensates for it accordingly. Next, secondary control tone (SCT) detection identifies the cell ID. Finally, preamble detection finds the exact frame timing. The subcarriers for control signaling shown in Fig. 4 are developed for tone-based cell search, which consists of one PCT and eight SCTs. PCT is at the central subcarrier while 8 SCTs are separated at equal distance with one another. 
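Before detailing the SCT position mapping, a quick numeric check of the system parameters quoted above (carrier frequency, UE speed, FFT size, subcarrier spacing, and the eight equally spaced SCTs) may be helpful; all values are taken from the text and the script is purely illustrative:

C = 3e8                      # speed of light [m/s]
fc = 30e9                    # carrier frequency [Hz]
v = 300 / 3.6                # UE speed [m/s]

f_d = v * fc / C             # maximum Doppler shift, about 8.3 kHz
t_coh = 1.0 / (5 * f_d)      # coherence time (5*f_d)^-1, about 24 us (on the order of 20 us)

n_fft, df = 2048, 100e3
fs = n_fft * df              # sampling rate ~204.8 MHz
ts = 1.0 / fs                # sample time ~4.9 ns (roughly 5 ns)

sct_spacing_bins = n_fft // 8            # 256 subcarriers between adjacent SCTs
sct_spacing_hz = sct_spacing_bins * df   # ~25.6 MHz, i.e. roughly 200 MHz / 8

print(f"Doppler {f_d/1e3:.1f} kHz, coherence time {t_coh*1e6:.1f} us")
print(f"sample time {ts*1e9:.2f} ns, SCT spacing {sct_spacing_hz/1e6:.1f} MHz")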
The first SCT starts with a position index ρ, which is its cell ID. Consider a typical cellular system with a home cell and four tiers of cochannel cells. Each cell is composed of three sectors. There are totally 61 (= 1 + 6 + 12 + 18 + 24) cochannel cells. Hence, at least 183 (= 61 × 3) cell IDs are needed. In the MDMA system, the position of SCTs is designed to be distinct for each cell. It is achievable to distinguish all cells from each other since 183 × 8 + 1 < 2048, where 1 accounts for the PCT. The SCTs are equally spaced over the entire transmission bandwidth, say 200 MHz with FFT size of 2048 for the system under consideration. Equal spacing assignment helps exploit the frequency diversity. Since there are eight SCTs for each sector, the frequency spacing for two adjacent SCTs is thus 200MHz/8 = 25 MHz. The allocation of SCTs is denotes the control tone spacing between two nearest SCTs, i.e., = 256 due to 2048/8. The mapping result is also shown in Table 2 for clarity. An example of SCTs for three sectors in the central cell is plotted in Fig. 6. The downlink transmitter for tone-based cell search is given in Fig. 7. The PCT is a single tone on the fixed subcarrier. On the other hand, the SCTs are modulated through DPSK on specific subcarriers. The mapping rule is corresponding to its own cell ID as mentioned before. An example can be referred to in [27]. The transmit power for the PCT is allocated half of a user power while the transmit power for the whole SCTs is equal to 1.5 times of a user power. After subcarrier mapping, the frequency domain control data is sent to the conventional OFDM transmitter. In contrast, the user data is transmitted with single carrier modulation. Each active user data is first pre-equalized with Pre-RAKE precoding and then summed over all users. For the purpose of initial symbol timing synchronization, CP is inserted after the Pre-RAKE precoder. Finally, the time-domain control signals (w.r.t. PCT and SCT) and user data are transmitted by BS antennas. To obtain antenna hopping diversity, one out of M antennas are randomly selected for time-domain control data transmission on a slot by slot basis. On the selected antenna, the time-domain control data is sample-wise combined with user data for transmission. M antennas are preserved in advance, where M < M and M is the total number of BS antennas. The transmit signal at the m-th antenna from BS ρ to user j, having SCT, can be generally written as where l, p, N , , and δ[·] denote path index, control data index, FFT size, and control tone spacing between two nearest SCTs, and Kronecker delta function, respectively. d and x [·] are control data (SCT) and user data. h is the channel impulse response at the m-th antenna between BS ρ to user j. d ∈{±1} is due to DPSK modulation. For convenience, ρ,m j is defined as data of the user j after pre-RAKE precoding at antenna m. The factor of 3N /16 in the SCT results from the individual power of 8 SCTs having totally 1.5 times of a user power. The factor of N /2 in the PCT corresponds to half of a user power. The first and the second terms in (1), i.e., PCT and SCT, only appear at one of M antennas as described in the last paragraph. The detail steps for the tone-base cell search are introduced as follows. A. INITIAL SYNCHRONIZATION The received signal without timing offset (TO) and carrier frequency offset (CFO) is denoted as z(n). The amount of TO and CFO are represented as θ and ε, respectively. 
The received signal with additive white Gaussian noise w(n) is thus written as where N is the FFT size. Since CP is the circular repetition from the last several samples of an OFDM symbol, we can estimate θ and ε by comparing the CP and the tail of the OFDM symbol, as shown in Fig. 8. To alleviate the noise effect, the minimum mean-square error (MMSE) criterion is used to estimate θ and ε as follows [21]: where N CP is the number of samples of the CP and γ (θ) = θ +N CP −1 n=θ r(n)r * (n + N ). Thus, where (4) and (5) are the estimated TO and FCFO, respectively. (4) is found by one-dimensional sequential line search. The symbol timing is thus obtained and the FCFO can be compensated afterwards [28]. The received timing is said to be correct, if the estimated symbol timing is located in the ISI free region. B. PCT DETECTION Based on (1), the received signal of the user j is expressed as in (6), shown at the bottom of the next page, where β ρ , ζ ρ , and * represent path loss, shadow fading, and convolution operator, respectively. Converting into frequency domain via OFDM demodulation, we have, as in (7), shown at the bottom of the next page, where FFT{.} is the FFT operation, γ ρ = (β ρ ) 1/2 ς ρ e j2πkθ/N , θ is the symbol time offset, and ε I is the ICFO. The detected PCT subcarrier is the one with the maximum subcarrier power. In order to obtain more accurate PCT location index, we can accumulate over several slots. That is,k where i denotes slot index. The ICFO is thus derived aŝ C. SCT DETECTION Since the position of the SCTs is related to the cell ID, we can identify the cell ID bŷ Similar to (8), accumulating over more slots leads to better detection results. D. PREAMBLE DETECTION After the cell ID is identified from (10), the control data can be detected on the corresponding SCTs through DPSK demodulation. The preamble sequence used in MDMA is the 8-bit Hadamard Walsh code placed in the downlink slots of the first transmission unit in a frame. Thus, the frame timing synchronization is achieved by recognizing the preamble sequence. Until now, the tone-based cell search has been completely introduced. For hardware design point of view, please refer to [27] for implementation purpose. IV. CODE-BASED CELL SEARCH The procedure for the code-base cell search is given in Fig.9. The main steps include initial synchronization, PCT detection, preamble detection, and cell ID detection. Different from the tone-based cell search, the preamble is detected before the cell ID is identified in the code-base cell search. The code adopted here is the popular Zadoff-Chu (ZC) sequence [29] used in current 4G systems. The generation of a ZC sequence, c q , follows the formula where q and N ZC are root index and the length of the sequence, respectively. Besides, q and N ZC are mutually prime, i.e., gcd(q, N ZC ) = 1. In addition, distinct root indices yield different ZC sequences satisfying gcd(q 1 , N ZC ) = gcd(q 2 , N ZC ) = 1 and gcd(q 1 −q 2 , N ZC ) = 1 for q 1 = q 2 . For multiple root indices, N ZC must be a prime (and odd) number. Given two sequences a and b with equal length N , their autocorrelation function (ACF) and the cross-correlation function (CCF) at a phase shift τ are defined respectively as In many practical cases, the length of a ZC sequence may not be a prime number. We can first generate the ZC sequence having the prime length that is closest to and greater than the desired size. Next, truncate the ZC sequence from the last to the wanted size. 
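As an aside, the generate-then-truncate ZC construction just described can be sketched in a few lines of NumPy; the root indices and lengths below are illustrative, and the correlation check simply verifies the near-ideal ACF/CCF behavior claimed above:

import numpy as np

def zc_sequence(q: int, n_zc: int) -> np.ndarray:
    # Zadoff-Chu sequence of odd prime length n_zc with root index q:
    # c_q[n] = exp(-j*pi*q*n*(n+1)/n_zc), n = 0, ..., n_zc-1.
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * q * n * (n + 1) / n_zc)

def circ_xcorr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Cyclic cross-correlation of equal-length sequences via the FFT.
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))

# Generate a ZC sequence of prime length 2053 and truncate it to the FFT size 2048.
c = zc_sequence(q=1, n_zc=2053)[:2048]
acf = np.abs(circ_xcorr(c, c))
print("ACF peak-to-sidelobe ratio:", acf[0] / acf[1:].max())   # near-ideal autocorrelation

c2 = zc_sequence(q=2, n_zc=2053)[:2048]
ccf = np.abs(circ_xcorr(c, c2))
print("max CCF (different roots) relative to ACF peak:", ccf.max() / acf[0])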
For example, if the preferred length is 2048, one can first generate the ZC sequence of a prime length 2053 and then truncate the last five symbols to yield the desired length [30]. For the truncated sequence, denoted as c q , the ACF and CCF turn to be where δ[.] is the Kronecker delta function. The proof is shown in the appendix. The downlink transmitter for code-based cell search is given in Fig.10. Denote M as the total number of BS antennas. The PCT signal on the fixed subcarrier is a single tone, which broadcasts on every downlink slot. The control data is DPSK modulated and code spread by a ZC sequence. For antenna hopping diversity, one out of M antennas are randomly selected for control signals (PCT and control data), where M is fixed and selected in advance. For user data, it is processed in the same way as described in Sec III. Note that CP is added for initial synchronization as in the tone-based cell search. The transmit signal at the m-th antenna from BS ρ to user j can be generally written as Control data where c ρ [n] is the ZC sequence and N is the FFT size. d ρ j ∈{±1} due to DPSK modulation. ρ,m j is defined as data of the user j after pre-RAKE precoding at antenna m. The factor of 3/2 in (18) is equal to 1.5 times of a single user power for control data, which is the same as the total SCT power for fair comparison. The downlink slots in the first unit of each frame are related to the preamble for the code-based cell search. Below we describe each step in the code-base cell search. The initial cell search and PCT detection follow the same approaches as in the tone-based search. A. INITIAL SYNCHRONIZATION Since the CP is inserted for the code-based cell search, the initial synchronization is the same as in the tone-based cell search to obtain symbol timing and FCFO. Please refer to the detail in Sec. III-A. B. PCT DETECTION Since the PCT is transmitted with identical parameters and settings for both tone-and code-based cell search, the same detection method, i.e., (6) to (9), is applied to complement for ICFO. Please refer to the detail in Sec. III-B. C. PREAMBLE DETECTION The length of ZC sequences is equal to the FFT size N . N = 2048 is used in simulations later. For the need of 183 cell VOLUME 10, 2022 IDs, we can choose 8 root indices, say q = 1, 2, . . . 8, and cyclically shift each sequence 23 times to obtain 8 × 23 = 184 sequences. The choice of the root indices and shift amount will be explained in Sec. V. The first sequence, denoted as c 0 , is selected to be the preamble sequence for all cells. The rest c 1 , c 2 , . . . , c 183 are used as code sequences for 183 cochannel cells. The received signal in the first and the second downlink slot can thus be expressed as in (19), shown at the bottom of the next page. Without loss of generality, assume user j is served by cell 1 and the first antenna is used for control signals. Then (19) can be further written as where γ 1 = (β 1 ) 1/2 ς 1 . The interference combines PCT signal, other cell preamble signal, noise, and user data from all cells. The vector expression for N received time samples is where I is the corresponding vectorization of the interference and c (l) 0 is the l-tap cyclic shift of the vector c 0 . Note that the exact mathematical expression is given as follows: As demonstrated in Fig. 11, the preamble sequence is used to match the received signal r j [n]. This can be put in the matrix form as where The path selection is then executed that selects N p paths from R j . 
This thus gives: where τ k is the delay of selected path index, f (·) represents the path selection process as in [31]. The detected preamble is the one with the maximum power. The frame timing is identified accordingly. That is, where i denotes slot index for frame header. D. CELL ID DETECTION The cell ID is detected in the similar way as preamble detection. The receiver needs to identify which code sequence c ρ , ρ = 1, 2, . . . 183, is transmitted for the received signal. Likewise, assume the first antenna is used for control signals. The received signal in the i-th downlink slot (i ≥ 3) for user j served by cell ρ is vectorized as where γ ρ = (β ρ ) 1/2 ς ρ , I is the corresponding vectorization of the interference and c (l) ρ is the l-tap cyclic shift of the vector c ρ . Note that r i j [n] is of the similar form as (20), i.e., The subscript i in (27) and (28) refers to the i-th downlink slot. As the same for the preamble detection, the code sequence is used to match the received signal r i j [n]. This can also be expressed as for ρ = 1, 2, . . . 183, where The path selection is then executed that selects N p paths from R i ρ . This thus gives: where τ k is the delay of selected path index, f (·) represents the path selection process. The detected sequence is the one with the maximum power. The cell ID is identified accordingly. That is, whereρ is the detected cell ID. V. SIMULATION A. SIMULATION SETUP For computer simulations, we consider the MDMA system with the cellular structure and the transceiver architecture as illustrated in [24]. The system operates in the 30 GHz mmWave band with 300 BS antennas. In addition, channel bandwidth of 200 MHz is utilized in our system that leads to the bit time of 5 ns. The radius of the cellular system is 50 m due to severe propagation loss in the mmWave band. Further, the maximum delay spread is set as 400 ns from the measurement results given by Sun and Rappaport [32]. Thus, the CP should be at least 400/5 = 80 taps to combat channel delays. Besides, the FFT size adopted is 2048 and the original un-truncated ZC sequence length is 2053. The CP length used here is 160 taps and the duration of a slot is thus 2208 taps. Moreover, the cellular system is interference limited, i.e., the background noise is neglected. Since the system is interference-limited, the exact value and unit of the transmit power are not critical. Thus the user power in one slot is set to be 1 unit. Likewise, the PCT power and SCT power in one slot are 0.5 unit and 1.5 units, respectively. The basic simulation parameters are summarized in Table 3. We adopt a mmWave S-V channel model according to the spatial parameters given in [33]. The S-V model is a widely-used cluster-based channel model which considers path index, path amplitude, phase shift, and arrival time for both clusters and rays therein. The detail description of the model can be referred in [34]. Recall that the channel duration is 80 taps for the maximum delay spread of 400ns. As long as the cyclic shift is larger than 80 taps for the ZC sequence, the resulted shifted sequence has the helpful ACF and CCF properties, (16) to (17), that can be used to distinguish each other within the maximum channel delay spread. Thus, we choose 8 root indices and 23 cyclic shifts. Each shift interval is 89 samples such that 23 × 89 < 2048. There are totally 8 × 23 = 184 sequences, where the first sequence, say c 0 , is selected for the preamble. The remaining c 1 , c 2 , . . . 
, c 183 are used as code sequences for the 183 cochannel cells. Indeed, there are many possible combinations of root indices and shift intervals, provided that the total number of generated sequences is sufficient and each shift interval is greater than the maximum channel delay spread. Fig. 12 shows the FCFO result. The vertical axis denotes the mean square error (MSE) of the FCFO, defined as E[(ε̂ − ε)²]. It is clear that the MMSE-based detection is better than the traditional ML approach for both code- and tone-based cell search schemes. The same results are obtained for both schemes since FCFO detection is based on the CP structure only, which is independent of the cell search scheme. Fig. 13 shows the error probability of ICFO estimation with respect to the number of users simultaneously served by the BS. The error probability of ICFO detection is the probability that the PCT is incorrectly detected. The number of slots for each curve refers to the number of slots used for combining when detecting the PCT. Results show that the ICFO detection is effective and the accuracy can be better than 10^−2 when the number of slots is 14. B. SIMULATION RESULT In addition, the results reveal that increasing the number of slots for combining decreases the error probability, as expected. The next simulation results for both methods are evaluated in terms of cell search error probability; that is, the error rates of the cell ID are plotted and investigated. Fig. 14 shows the error rate of cell ID detection. Here we accumulate over multiple slots to enhance the detection performance. 10^4 trials are executed for the simulation and an independent channel is generated for each trial. The dashed lines and solid lines refer respectively to code-based and tone-based cell search. It is clear that the tone-based cell search always outperforms the other under the same number of slots. Recall that a transmission unit contains two uplink and two downlink slots. Hence, slot = 2 in the figure legend corresponds to one DL transmission unit, slot = 4 corresponds to two DL units, and so on. The tone-based method performs better than the code-based method even with a small number of slots. As expected, the more slots used in the accumulation, the lower the error probability of the detection. On the other hand, as the number of users increases, the error probability also increases for all scenarios. The previous simulation considers channels with a random number of paths (channel taps); that is, we obtained the performance averaged over all possible numbers of paths. However, one can also observe the effects with a fixed path number. For a fixed path number, we first set a positive integer Lp and generate a mmWave S-V channel. Then we select the Lp largest taps and normalize the profile to unit power for later simulations (a small numerical sketch of this construction is given below). To examine the two methods more closely, we present a single-cell scenario, since other-cell interference leads to the same noise level for both methods. Fig. 15 presents the results. For a small path number, say Lp = 4, the code-based method is better than the tone-based method. However, the latter shows stable performance with respect to the path number and is superior to the former for Lp > 4. This phenomenon is explained as follows. Due to accumulation over multiple slots, the channel of each SCT subcarrier suffers nearly the same noise level. That is, each SCT subcarrier has nearly the same channel power to noise ratio, which is invariant to the path number.
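The fixed-path construction referred to above (keep the Lp largest taps of a generated channel and renormalize to unit power) can be sketched in a few lines. This is an illustrative stand-in only: a simple exponentially decaying Rayleigh profile replaces the actual S-V cluster model of [33]-[34], and all names are ours.

```python
import numpy as np

def keep_largest_taps(h, Lp):
    """Retain the Lp largest-magnitude taps and normalize to unit total power."""
    h_out = np.zeros_like(h)
    idx = np.argsort(np.abs(h))[-Lp:]      # indices of the Lp strongest taps
    h_out[idx] = h[idx]
    return h_out / np.linalg.norm(h_out)   # unit-power profile

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = 80                                 # maximum delay spread in taps (400 ns / 5 ns)
    # Placeholder channel: exponentially decaying Rayleigh taps (an assumption,
    # not the S-V model used in the paper).
    h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20.0)
    h4 = keep_largest_taps(h, Lp=4)
    print("number of nonzero taps:", np.count_nonzero(h4))
    print("total power:", np.sum(np.abs(h4) ** 2))   # ~1.0 after normalization
```

With this construction in hand, we return to the discussion of how the two methods respond to the number of paths.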
However, time-domain paths would have different channel power to noise ratios for different path powers. For channels with more paths, it is easier for each tap to be hidden below the noise level since the channel power gets more dispersed. For schematic explanation, it is shown in Fig.17 for both cases. Therefore, the tone-based cell search is better than the other for more channel taps. Since the typical 3G, 4G and 5G mobile networks all use different codes to identify different BSs, they can be seen as a kind of code-based approach. Thus, it is representative to compare the tone-based with code-based cell search methods. VI. CONCLUSION We investigated and compared the performance of the tonebased and the code-based cell search methods. Tone-based cell search showed superior performance over the other for general channel realizations. The main reason is that SCT subcarriers experience nearly the same channel power to noise ratio, which is invariant to the number of channel taps. It is possible that the code-based method performs better only when the tap number is always small. Simulations showed the consistent results in terms of cell search error probability. It is thus suggested to use the tone-based cell search method for general-purposed mobile communications.
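To make the decision logic of Secs. IV-C and IV-D concrete, the sketch below builds the 184-sequence codebook (8 roots × 23 cyclic shifts, shift interval 89) and detects a sequence by circular correlation, path selection over the delay-spread window, and a maximum-power decision. It is deliberately simplified and should not be read as the paper's receiver: a single antenna, no CP or CFO handling, and a one-path toy channel with additive noise stand in for the full system described above.

```python
import numpy as np

def truncated_zc(q, N_zc=2053, N=2048):
    n = np.arange(N_zc)
    return np.exp(-1j * np.pi * q * n * (n + 1) / N_zc)[:N]

def build_codebook(N=2048, n_roots=8, n_shifts=23, shift=89):
    """184 candidate sequences: the preamble c0 plus 183 cell-ID codes."""
    return [np.roll(truncated_zc(q + 1, N=N), s * shift)
            for q in range(n_roots) for s in range(n_shifts)]

def detect(r, codebook, Np=8, max_delay=80):
    """Correlate r with every candidate, keep the Np strongest taps inside the
    delay-spread window (path selection), and pick the max-power candidate."""
    powers = []
    for c in codebook:
        corr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(c)))
        taps = np.sort(np.abs(corr[:max_delay]))[-Np:]
        powers.append(np.sum(taps ** 2))
    return int(np.argmax(powers))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    book = build_codebook()
    true_idx = 57
    r = np.roll(book[true_idx], 3)                     # toy channel: one delayed path
    r = r + 0.3 * (rng.normal(size=r.size) + 1j * rng.normal(size=r.size))
    print("detected:", detect(r, book), "true:", true_idx)
```

One simple way to accumulate over several downlink slots, as in the simulations above, is to sum the per-slot power metrics before taking the argmax.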
2022-12-04T17:48:20.933Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "a11c2f7e7224e7c29449e6914549fbe730a27bac", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09967999.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "151ebdc4536544404977e57fbc39afe4087e7528", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [] }
40373985
pes2o/s2orc
v3-fos-license
Effect of Infant Formula Containing a Low Dose of the Probiotic Bifidobacterium lactis CNCM I-3446 on Immune and Gut Functions in C-Section Delivered Babies: A Pilot Study BACKGROUND In the absence of breast-feeding and its immunomodulatory factors, supplementation of starter infant formula (IF) with probiotics is currently used to support immune functions and gut development. AIM To assess whether immune-related beneficial effects of regular dose (107 CFU/g of powder) of the probiotic Bifidobacterium lactis CNCM I-3446 (hereafter named B. lactis) in starter IF supplementation can be maintained with starter IF containing a low dose (104 CFU/g of powder) of B. lactis. METHOD This trial was designed as a pilot, prospective, double-blind, randomized, single-center clinical trial of two parallel groups (n = 77 infants/group) of C-section delivered infants receiving a starter IF containing either low dose or regular dose of the probiotic B. lactis from birth to six months of age. In addition, a reference group of infants breast-fed for a minimum of four months (n = 44 infants), also born by C-section, were included. All groups were then provided follow-up formula without B. lactis up to 12 months of age. Occurrence of diarrhea, immune and gut maturation, responses to vaccinations, and growth were assessed from birth to 12 months. The effect of low-dose B. lactis formula was compared to regular-dose B. lactis formula, considered as reference for IF with probiotics, and both were further compared to breast-feeding as a physiological reference. RESULTS Data showed that feeding low-dose B. lactis IF provides similar effects as feeding regular-dose B. lactis IF or breast milk. No consistent statistical differences regarding early life protection against gastrointestinal infections, immune and gut maturation, microbiota establishment, and growth were observed between randomized formula-fed groups as well as with the breast-fed reference group. CONCLUSION This pilot study suggests that supplementing C-section born neonates with low-dose B. lactis-containing starter formula may impact immune as well as gut maturation similarly to regular-dose B. lactis, close to the breast-feeding reference. Introduction Neonates represent a particularly vulnerable population suscep tible to infections due to the immaturity of their immune system. 1 At the time of birth, they move from an almost sterile environment within the maternal uterus into a world teeming with bacteria. Within the first days of life, mucosal surfaces of the host, including the gastrointestinal tract as well as the respiratory tract, become colonized with different bacterial communities, 2,3 comprising a large spectrum of commensal and potentially pathogenic microorganisms. This complex environment contributes to the maturation of the immune system, which is able to later on fight against many potential life-threatening infections. [4][5][6] In addition to microbial colonization, it has been demon strated that other postnatal factors, such as breast-feeding, are extremely important for the maturation of the immune system allowing its full functionality. [7][8][9] Interestingly, even if compounds of breast milk (BM) such as antibodies, cytokines, and growth factors can directly act on the developing gut-associated lymphoid tissues, 1 the impact of BM on immune maturation is also closely linked to its effects on the establishing microbiota. 
BM shapes the microbiota profile via a prebiotic effect of oligosaccharides or specific proteins that are able to favor beneficial gut colonization by lactobacilli and bifidobacteria. 10,11 Indeed, recent findings have demonstrated that BM naturally contains small amounts of living bacteria that are transmitted to the infants. 12,13 It is hypothesized that this postnatal natural bacterial inoculum is also a key for the programming of the neonatal immune system to establish oral tolerance and protection early in life. 14 In the absence of breast-feeding, supplementation of infant formula (IF) with probiotics is one of the strategies commonly considered to improve early life immunity. Series of publications have shown that administration of the probiotic Bifidobacterium lactis CNCM I-3446 (hereafter named B. lactis) at regular average doses of 10 9 CFU per day to newborns is able to promote early life immune development and improve gastrointestinal health. Indeed, a three-week supplementation with B. lactis in breast-fed (BF) preterm infants, mostly born by C-section, was shown to increase fecal IgA and reduce calprotectin production. 15 Moreover, the same intervention also modulated microbiota composition with an increase of bifidobacteria and a decrease of clostridia as well as enterobacteria. 16 In another study, feeding of C-section delivered full-term infants with IF containing B. lactis enhanced responses to polio and rotavirus vaccines over the six-week intervention period. 17 These effects of B. lactis feeding on immune and microbiota markers reflect a reinforcement of defenses that may lead to a beneficial impact on the outcomes of infection, such as lowering the risk of developing diarrhea. 18,19 Considering the low amount of bacteria observed in human milk as described earlier, the present trial aims at exploring whether beneficial effects of B. lactis IF supplementation can be maintained in infants fed with IF containing a lowered dose of B. lactis, from regular dose 10 7 CFU/g of powder to low dose 10 4 CFU/g of powder. In order to provide optimal exploratory conditions to address the objectives of the study, C-section delivered newborns have been selected. This target population was chosen for two main reasons: (i) C-section born babies present a defect in the development of immune defenses leading to increased susceptibility to infections in the first months of life 20 and (ii) cesarean delivery induces an alteration in the early life microbiota composition, including a diminished and delayed bifidobacteria colonization in comparison to vaginally delivered babies. 21,22 Thus, these newborns are expected to be more sensitive to nutritional intervention with different doses of bifidobacteria probiotics. methods clinical trial design. This trial was designed as a prospective, double-blind, randomized, single-center clinical trial of two parallel groups (low dose and regular dose of B. lactis). In addition, there was an observational reference group of BF infants followed from birth to 12 months. We considered the B. lactis regular-dose group as a reference for IF with probiotics in this study as compared to the exploratory low-dose IF group. Both groups were compared to the physiological BF reference group. This study was conducted by the team of Prof. C. Costalos in the Alexandra General Hospital, Athens, Greece, between June 2009 and March 2011. 
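As a rough, back-of-the-envelope illustration of what the two supplementation levels mean in absolute terms, one can convert CFU per gram of powder into CFU per day for an assumed daily powder intake; the intake figure below is an assumption for illustration only, not a study measurement.

```python
def daily_cfu(cfu_per_g_powder, powder_g_per_day):
    """Daily probiotic load = dose per gram of powder x grams of powder consumed."""
    return cfu_per_g_powder * powder_g_per_day

if __name__ == "__main__":
    powder_g_per_day = 100.0   # assumed intake, for illustration only
    for label, dose in [("regular dose", 3.1e7), ("low dose", 3.7e4)]:
        print(label, f"{daily_cfu(dose, powder_g_per_day):.1e} CFU/day")
```

Under this assumption, the regular-dose formula delivers on the order of 10^9 CFU per day, consistent with the doses cited above for earlier B. lactis studies, while the low-dose formula delivers roughly three orders of magnitude less.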
The trial was performed in accordance with the Declaration of Helsinki and compiled with good clinical practices as laid out in the International Conference on Harmonization guidelines. It was approved by the institutional ethics committees (the Board of Directors and the Scientific Council of the Alexandra General Hospital). Parents/legal guardians and investigators signed the informed consent. All randomized infants received a starter IF (67 kcal/ 100 mL of reconstituted formula, 1.8 g of protein/100 kcal, developed at Nestlé Product Technology Center) which contains sufficient amounts of proteins, carbohydrates, fats, vitamins, and minerals for their normal growth from birth to six months. The study formulas contained either a low dose (3.7 ± 2.1 10 4 CFU/g of powder) or a regular dose (3.1 ± 1.4 10 7 CFU/g of powder) of probiotic B. lactis, depending on the allocated dose group, from birth to six months (Fig. 1, upper part). The two formulas were indistinguishable and were supplied in similar cans that were coded with letters and colors by the study sponsor (Nestlé). The B. lactis CFU counts were monitored in both products throughout the study. The IFs were provided to the parents during each study visit. Parents, investigators, support staff, and clinical project manager were blinded to the identity of the formulas. Then, from 6 to 12 months of age, these infants were given a follow-up formula without B. lactis (67 kcal/100 mL of reconstituted formula, 2.0 g of protein/100 kcal, developed at Nestlé Product Technology Center). For the BF reference group, breast-feeding was recommended for a minimum of four months. Those infants who stopped breast-feeding before four months received a starter formula without B. lactis. At weaning, the same follow-up formula without probiotics, as for randomized groups, was given up to 12 months. study population. The protocol was planned to recruit a total of 160 infants (80 per formulation group). Healthy full-term C-section delivered newborns, infants who had a birth weight between 2500 and 4500 g, infants whose mothers had anticipated not to breast-feed or decided to stop breast-feeding within 24 hours after delivery, and those infants with written informed consent obtained from his/ her legal representative were enrolled within a maximum of 96 hours after birth. Infants whose mothers intended to breast-feed from birth to at least four months were enrolled in a nonrandomized reference group. Enrolled infants were vaccinated for diphtheria, Bordetella pertussis, polio, tetanus, and Heamophilus influenzae type B (HiB) (Pentavac, Sanofi Pasteur MSD, France) following the guidelines set by the Greek National Council for vaccinations of the Ministry of Health. Infants were not enrolled in the study if they received a Rotarix® vaccine, were still BF beyond 24 hours (except for BF group), were expected to have problems with compliance, and were already participating in, or were from a mother currently participating or had participated in another clinical trial during the preceding three months prior to the inclusion in this study. measured outcomes. The primary outcome measure was prevalence of diarrhea, incidence of diarrhea, and total number of days with diarrhea over the study period (12 months). Diarrhea was defined as one day (24-hour period) with at least two to three watery stools. An episode of diarrhea was defined as at least one day of diarrhea followed by at least 48 hours without diarrhea. 
Secondary outcomes were grouped as follows: Immune maturation: fecal Immunoglobulin A (IgA) at one week, one month, and four months after birth. Gut maturation: fecal calprotectin and 1-antitrypsin at one week, one month, and four months after birth, adjusted for the baseline value. Microbiota: total counts of Bifidobacteria and the presence of B. lactis in feces at four months after birth. Immune responses to vaccines: Antibody responses at 7 and 12 months after birth to diphtheria, B. pertussis, polio, tetanus, and HiB. In addition, for HiB, the percentage of protective response, which was estimated as the proportion of subjects who reached the protective level, ie, HiB .1 µg/mL, 23 was also calculated. Anthropometry: change in weight, length, BMI, and head circumference, during the first 4 months (1 week-4 months) and during the first year (1 week-12 months). Based on these data, z-scores were calculated for each subject and visit based on the EuroGrowth database. [24][25][26] Serious and nonserious adverse events (System Organ Class), as well as concomitant medication, were collected through the 12-month follow-up period. Adverse events were defined as any untoward occurrence in a patient or clinical investigation subject administered an investigational product and which does not necessarily have to have a causal relationship with this treatment. Adverse events are illnesses, signs or symptoms occurring or worsening, and/or abnormal laboratory findings during the course of the study. Adverse events include occasions when the subjects contact the investigator or their private physician and are examined or given medical direction. They may or may not lead to the withdrawal of the subject from the study. Total counts of bifidobacteria and the presence of B. lactis in feces were obtained from aliquots of ∼1 g of stool transferred into a cryotube of 5 mL and frozen ideally at −80 °C after addition of 10% glycerol. Measurements were performed following the AAT internal protocol for B. lactis detection (Advanced Analytical Technologies Srl) that consisted in plating of the samples on Bifidobacterium spp. selective medium, counting CFU before scraping of plates surface and recovery of grown colonies, cells disruption of the plates triplicate by means of Maxwell protocol_AAT procedure. This later allowed assessment of the presence of the probiotic B. lactis by strain-specific polymerase chain reaction (detection limit 10 3 CFU/g feces). Specific primers used were (Sequence 5′-3′): sense GAGCTGATCGACGACCTGAC and antisense CCGAGAAAATCTGGGATGAG. statistics. Sample size. A total number of 160 infants (80 per randomized group) were planned to be recruited into the study. In addition, 30 infants were to be recruited in the BF reference group. The sample size was not determined by a formal power calculation given the exploratory nature of the study. Randomization. Randomization was done by using an electronic program (TrialSys, developed by Nestlé) ensuring dynamic randomization via Internet with minimization technique. Statistical methods. Primary outcome was analyzed in both the intention-to-treat (ITT) and per protocol populations, and secondary outcomes were analyzed in the ITT population. The ITT population consisted of all infants who were randomized and received any formula intake. The statistical significance level was set at 0.05, and no adjustment was applied due to the exploratory nature of the study. 
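For the anthropometric secondary outcomes above, the z-score computation can be illustrated with a small sketch. The reference values below are placeholders rather than EuroGrowth entries, and a plain (value − median)/SD standardization is used, which is a simplification of the LMS-type transformations growth references commonly employ.

```python
def z_score(value, ref_median, ref_sd):
    """Standardize a measurement against an age- and sex-specific reference."""
    return (value - ref_median) / ref_sd

if __name__ == "__main__":
    # Hypothetical reference for weight-for-age at 4 months (kg): median 6.7, SD 0.75.
    for weight_kg in (6.0, 6.7, 7.8):
        print(weight_kg, "kg ->", round(z_score(weight_kg, 6.7, 0.75), 2))
```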
The effects of low dose versus regular dose of probiotic on diarrhea incidence, episode, and duration were analyzed using generalized linear Binomial model, Poisson model, and ANOVA, respectively. Outcomes of fecal IgA, gut maturation, vaccinations, and anthropometry parameters were compared between the two probiotic doses utilizing mixed models. Microbiota and morbidity data were analyzed using Fisher's exact test. Only descriptive statistics (mean ± SD or 25th-75th percentile) were used to compare data from randomized groups versus those from physiological BF reference group (no P values were calculated, comparison was made on numerical trends). results disposition of subjects. In total, 208 infants were recruited in the study. One hundred sixty-four infants were randomly allocated to either the low-dose (n = 84 infants) or regular-dose (n = 80 infants) B. lactis starter formula groups. In both groups, 77 infants actually started consumption of study product (Fig. 1). All 44 infants recruited in the reference BF group started the study (Fig. 1). demographics and baseline data. Gender was equally distributed between the two randomized groups with just over 50% of males in each group (Table 1). In the BF group, 64% were males. At enrollment in the study (ie, randomization), the mean age was two days for the randomized groups with a range from zero to four days. The mean age at enrollment for the BF group was three days with a range from one to four days. All infants were in good health at birth with a median APGAR score $9, at 1, 5, and 10 minutes after birth. The median body weight at enrollment was the same for infants randomized in the low-dose and the regular-dose (2.9 kg) groups. The mean birth weight for the BF group (3.0 kg) was similar to the randomized group infants. The majority of randomized infants were not BF at all (87% and 74% for low-and regular-dose groups, respectively). Infants from the BF reference group were exclusively BF for an average of 5.3 ± 4.1 (SD) months. Primary outcome: diarrhea prevention. During the 12-month follow-up of infants, no statistically significant difference could be observed between the low and regular probiotic dose groups with respect to prevalence of diarrhea (20.8% vs. 23.4%, respectively, P = 0.70). Incidence (0.26 ± 0.57 episodes vs. 0.25 ± 0.46 episodes) or mean number of days with diarrhea per infant (0.72 ± 1.84 days vs. 1.17 ± 2.72 days) were also similar in both groups (P = 0.83 and P = 0.58, respectively; Table 2). No major difference could be seen as well between the randomized groups and the BF reference group regarding diarrhea status (prevalence: 18.2%; incidence: 0.30 ± 0.70 episodes; mean number of days with diarrhea: 1.09 ± 3.08 days). secondary outcomes. Immune and gut maturation. There were no statistically significant differences between the low-dose group and the regular-dose group with respect to fecal IgA, calprotectin, and α1-antitrypsin levels for any of the defined time points (one week, one month and four months after birth; Table 3). As expected, fecal IgA level was numerically higher in the BF reference group compared to IF groups. However, there was no difference between the randomized groups and the BF reference group for the two other markers. Microbiota -total bifidobacteria counts and B. lactis detection in feces. 
There was no statistically significant difference between the low-and regular-dose groups with respect to the total bifidobacteria counts in feces (median log CFU/g [with 25th/75th percentile] of 6.6 [5.8/7.5] and 6.7 [5.6/7.6], respectively, P = 0.78). There was also no substantial difference between the randomized groups and the BF group having a total bifidobacteria count of 7.1 (6.2/7.9). Approximately 85% of infants randomized to the regulardose group were colonized with B. lactis, ie, with positive detection of B. lactis in their feces, while only 47% of them were positive in the low-dose group. This difference was statistically significant (P , 0.0001). Noteworthy, a background of 16% of positive detection was observed in the BF reference group. Immune responses to vaccinations. There were no statistically significant differences between the low-dose and regular-dose groups, as well as between the two randomized and BF groups, with respect to the response to diphtheria and B. pertussis at any of the specified time points (7 and 12 months after birth; Table 4). Immune response to the tetanus vaccine was significantly higher in the regular-dose group compared to the lowdose group only at 12 months after birth, while no difference could be observed between the randomized groups and the BF group (Table 4). Regarding response to polio vaccination, no statistically significant difference could be observed between the low-and regular-dose groups at any of the specified time points (Table 4). Absolute titer values of Ig response to polio vaccine in both IF groups appeared substantially higher than in BF infants. Finally, immune response to the HiB vaccine appeared to be higher in the low-dose group than in the regular-dose group at 12 months after birth (Table 4). When compared to BF infants, immune response to HiB was found to be noticeably higher in the regular-dose group at 7 months after birth, but this difference was not seen at 12 months. In the low-dose group, this difference was higher only when the infants were 12 months old, as it was the case for response to polio vaccine. Besides, 79.6% and 81% of the infants in low-dose and regular-dose probiotic, respectively, reached the protective level of HiB antibodies (ie, anti-HiB .1 µg/mL 23 ) at any time during the 12-month follow-up (P . 0.05; Table 4). Noteworthy, this protective level of antibodies against HiB was only reached by 59.1% of the BF infants. Anthropometrics. No consistent significant difference in growth parameters could be observed between the low-and regular-dose groups (Fig. 2). Compared with EuroGrowth database standards, infants in all groups grew normally throughout the study. Mean values for all growth measures through age four months were within 0.5 SD of the median value. 27 Serious adverse events. Serious adverse events reported throughout the study were ,5% in all groups (3.9%, 1.3%, and 2.3% for low-dose, regular-dose, and BF groups, respectively). No statistically significant difference was observed between the low-dose and regular-dose groups, as well as between the two randomized and BF groups. discussion We hypothesized that the beneficial effects on neonatal immune maturation might be achieved with a low dose of the probiotic B. lactis. At this preliminary exploratory stage, we emphasized the comparison of low dose with regular dose, considered as reference for IF with probiotics, for which several earlier studies already support a functional effect on diarrhea and immune functions. 
15,[17][18][19]28,29 Both formula groups were further compared to a BF physiological reference. In that regard, a group that was fed a formula without B. lactis was not considered in the study design. We recognize that this could represent a weakness in our study. However, we still believe that this approach will be useful to pave the way toward further research in this area. The number of diarrhea episodes, as well as total number of days with diarrhea, per infant per year, were comparable in the three arms of the present study. The observed low incidence of ∼0.25-0.3 reflects a discrepancy between data from observational studies (∼2.8 in Western Europe) 30 and the ones from interventional studies (0.2-0.5). 19 Moreover, the fact that the present study population was not attending day care centers may also account for the low incidence of diarrhea. Nevertheless, bringing together the recognized evidence that breast-feeding protects against diarrhea 31 and the previously documented beneficial effect of regular dose of B. lactis in reducing incidence/duration of diarrhea in infants, 18,19,28,29 it may be postulated that low dose of B. lactis in starter IF may also provide benefit in such a population of neonates. Note that full assessment of noninferiority between both formulas would have required a sample size of 5421 infants per group as retrospectively calculated from the results of this pilot study with a statistical power of 80% and a noninferiority margin of 10%. Regarding immune maturation, no difference could be observed in intestinal IgA production, measured as fecal IgA, between both randomized groups at any of the defined time points. As the effect of a regular dose of B. lactis in increasing fecal IgA production has been previously reported, 15 one can hypothesize that low-dose B. lactis might be as efficient as regular dose in promoting neonatal gut immune maturation. The relatively poor gut microbial environment in C-section born babies may have offered a favorable niche for low amount of bacteria to exert their function. Moreover, plausibility of the effect of low-dose B. lactis could be supported by the fact that, in reality, the small intestine microbiota is far less dense (10 3 -10 7 CFU/g of intestinal content) and diverse when compared to the colon (10 11 -10 12 CFU/g of intestinal content). 32 As most of the gut-associated immune system is located in the small intestine, such scarce bacterial population is still sufficient to interact with the mucosa and trigger immune functions. 33 Moreover, recent advances in human milk analysis and understanding of its property show that it contains living microorganisms, including bifidobacteria, in small amounts (10 2 -10 4 CFU/mL). 12,13 These bacteria and/or bacterial signatures likely contribute to postnatal immune education. 14,34 These indications jointly support a rational for using a low amount of physiologically relevant bacterial inoculum with probiotics during the first weeks of life. The absence of statistically significant difference between the three groups with respect to the total bifidobacteria counts further favors our initial hypothesis of a positive effect of low dose of B. lactis. Indeed, this observation can be brought together with the recognized bifidogenic effect of BM 35 and the reported capacity of regular dose of B. lactis in IF to restore BM-like levels of bifidobacteria in the gut of infants. 
36 Interestingly, it was recently reported that the relative abundance of commensal bifidobacteria and lactobacilli correlated with a reduced risk of diarrhea, further suggesting that a low dose of B. lactis may beneficially impact this latter outcome. 37 Notably, this similar effect can be observed despite a lower rate of infants positive for fecal B. lactis in the low-dose group compared with the rate of the regular-dose group, which was here comparable to previous studies with regular dose. This lower rate reflects the difference in B. lactis feeding load between both groups, which may lead to fecal B. lactis levels below the detection limit. Protection against infections early in life may be related to multifactorial parameters, such as quality of the microbiota, as already mentioned, and/or normal immune maturation. [Figure 2. Median weight-, length-, BMI-, and head circumference-for-age z-scores at 1, 4, and 12 months after birth (error bars = 25th and 75th percentiles).] Response to vaccination is also currently accepted by expert panels (ILSI, 38 EFSA 39 ) as a valuable marker reflecting the evolution of the immune system responsiveness to foreign antigens. Demonstration of the efficacy of a regular dose of B. lactis supplementing IF on vaccine responses in a C-section population has been recently reported in a placebo-controlled study. 17 Fecal anti-rotavirus-specific and anti-poliovirus-specific IgA levels postvaccination were both definitely increased in the B. lactis-supplemented group in comparison to IF without B. lactis. In the present study, no consistent differences could be observed between the three groups regarding antibody responses to the five different vaccines. Interestingly, when sparse statistically significant differences could be observed between the randomized groups and the BF reference group, they were always in favor of a better vaccine response in the formula groups. In the particular case of the HiB vaccine, this latter observation can have clinical relevance, as a protective antibody level threshold has been defined by the WHO (>1 µg/mL). 23 Indeed, the percentage of infants who reached anti-HiB protective antibody titers was substantially higher in infants fed with either IF containing regular or low doses of B. lactis in comparison to those in the BF physiological reference group (81.0% or 79.6% versus 51.9%, respectively). Finally, besides the already discussed defense-related outcomes of the host, we also investigated parameters addressing more physiological read-outs, such as gut maturation (fecal calprotectin and α1-antitrypsin) and growth (anthropometric z-scores). Both B. lactis-supplemented IF and BF reference groups behaved similarly. No safety concern is to be mentioned here. In conclusion, the present study suggests that the effects observed with regular-dose B. lactis-containing starter formula on diarrhea outcomes and immune responsiveness could be reached by feeding C-section born infants early in life with IF containing a lower dose of B. lactis.
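To make the statistical models named in the methods concrete, the sketch below fits a binomial GLM to diarrhea prevalence and a Poisson GLM to episode counts, and applies Fisher's exact test to the fecal B. lactis detection rates. The data are simulated placeholders loosely matching the reported summary figures, not the trial dataset, and the code is only one plausible way to implement the stated analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n = 77  # infants per randomized arm

# Simulated per-infant episode counts (placeholder data, not the trial's).
df = pd.DataFrame({
    "regular": np.repeat([0.0, 1.0], n),
    "episodes": np.concatenate([rng.poisson(0.26, n), rng.poisson(0.25, n)]),
})
df["any_diarrhea"] = (df["episodes"] > 0).astype(int)
X = sm.add_constant(df["regular"])

prevalence_fit = sm.GLM(df["any_diarrhea"], X, family=sm.families.Binomial()).fit()
incidence_fit = sm.GLM(df["episodes"], X, family=sm.families.Poisson()).fit()
print("prevalence p-value:", round(float(prevalence_fit.pvalues["regular"]), 3))
print("incidence p-value:", round(float(incidence_fit.pvalues["regular"]), 3))

# Fisher's exact test on B. lactis colonization; counts are derived from the
# ~47% vs ~85% positivity rates reported above (rounded, for illustration).
table = [[36, 41],   # low dose: positive, negative
         [65, 12]]   # regular dose: positive, negative
print("Fisher's exact test:", fisher_exact(table))
```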
2018-04-03T04:48:28.040Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "8a60b943a9d6f3a15acd3290cba5b66ca27f8b38", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.4137/CMPed.S33096", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a60b943a9d6f3a15acd3290cba5b66ca27f8b38", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28587735
pes2o/s2orc
v3-fos-license
Light-front interpretation of Proton Generalized Polarizabilities We extend the recently developed formalism to extract light-front quark charge densities from nucleon form factor data to the deformations of these quark charge densities when applying an external electric field. We show that the resulting induced polarizations can be extracted from proton generalized polarizabilities. The available data for the generalized electric polarizability of the proton yield a pronounced structure in its induced polarization at large transverse distances, which will be pinned down by forthcoming high precision virtual Compton scattering experiments. The distribution of charge is a basic quantity which characterizes a many-body system. In the case of relativistic many-body systems such as hadrons, composed of near massless quarks, a field-theoretically consistent charge density can be formulated by considering the system in a light-front frame. In such a frame, the pair creation by the probing photon is suppressed, and the photon only couples to forward moving quarks, allowing for a density interpretation. Such a charge density interpretation, based on elastic form factor data, was recently given for the nucleon [1,2], for spin-1 systems [3], spin-3/2 systems [4], and extended to higher spin systems in [5]. Any charge density will deform when subjected to an external electric field and develop an induced polarization. The quantity describing the "ease" by which such a distribution deforms is referred to as the electric polarizability. In this work we will extend the formalism of light-front charge densities to obtain the spatial deformations of these densities. We will show that the resulting induced polarizations can be obtained from nucleon generalized polarizabilities (GPs) [6,7], which have been measured in recent years by precision virtual Compton scattering (VCS) experiments; see Refs. [8,9] for reviews. We consider the VCS process on the nucleon γ*(q) + N(p) → γ(q′) + N(p′). Its kinematics are described in terms of Lorentz scalars: Q² = −q², ν ≡ q · P/M, with P = (p + p′)/2, and t = (p − p′)². The dynamical information which is accessed in the VCS process is described by the matrix element of a time-ordered product of two electromagnetic (e.m.) current operators (the VCS tensor H^{μν}), with λ_N (λ′_N) the helicities of the initial (final) nucleons. In this work, we consider the VCS tensor in the low-energy limit, q′ → 0. In such a limit, the final soft photon plays the role of an applied quasi-static electromagnetic field, and the VCS process measures the linear response of the nucleon to this applied field [6,8]. This linear response can be parameterized through six Q²-dependent GPs, denoted by P^(ρ′l′,ρl)S [6,7].
In this notation, ρ (ρ ′ ) refers to the Coulomb/electric (L), or magnetic (M ) nature of the initial (final) photon, l (l ′ = 1) is the angular momentum of the initial (final) photon, and S differentiates between the spin-flip (S = 1) and non spin-flip (S = 0) transition at the nucleon side. To arrive at a spatial representation of the information contained in the GPs, we consider the process in a symmetric light-front frame, denoting the average direction of the fast moving protons as the z-axis. We indicate the (large) light-front + component by P + (defining a ± ≡ a 0 ± a 3 ), and choose the symmetric frame by requiring that ∆ = p ′ − p is purely transversal, i.e. ∆ + = 0. To access the GPs, we can restrict ourselves to the terms in the VCS tensor that are linear in the outgoing photon energy (proportional to ν), along the line t = −Q 2 . In this limit the light-front kinematics is given by : with light-like vectorsn = (1, 0, 0, 1), n = (1, 0, 0, −1). Furthermore, the two transverse components of the virtual photon momentum are denoted by q ⊥ with Q 2 = q 2 ⊥ , and τ ≡ Q 2 /(4M 2 ), with M the nucleon mass. The (small) momentum fraction η is obtained as η = ν/(M (1 + τ )). In the light-front frame, the + component of the current J µ in (1) is a positive definite operator for each quark flavor, allowing for a light-front charge density interpretation. The VCS light-front helicity amplitudes can then be obtained from the VCS tensor H µν as : with transverse outgoing photon polarization vector denoted by ǫ ′ ⊥ , and λ ′ γ = ±1 denoting its helicity. In the following, we will consider the polarization component of the outgoing photon corresponding with an electric field, E = −∂ A/∂t, which can be expressed as : Any system of charges will respond to such an applied electric field, resulting in an induced polarization P 0 , which will be forced to align with the applied electric field such as to minimize its energy − E · P 0 . The linear response in q ′0 of the helicity averaged VCS amplitude therefore allows to define an induced polarization P 0 as : The induced polarization P 0 for the helicity averaged case can be worked out from Eq. (2) as : where A can be expressed in terms of the GPs as : (6) which depends on the scalar GPs, as well as those spin GPs which enter the unpolarized VCS response functions. In an analogous way, we can define the linear response to an external quasi-static e.m. dipole field when the nucleon is in an eigenstate of transverse spin, S ⊥ ≡ cos φ Sêx + sin φ Sêy , with φ S the angle indicating the spin vector direction. Analogously to Eq. (4), the induced polarization P T for a state of transverse spin can be worked out from the sum of contributions from spin-averaged and spin-flip light-front helicity amplitudes as : The functions B and C entering the induced polarization P T can be expressed in terms of the GPs as : To evaluate the induced polarizations, we use the available empirical information on the GPs. The four spin GPs are described, following [10], by a dispersive part, and a π 0 pole part. The dispersive part is saturared by πN intermediate states, using empirical information from pion photo-and electroproduction as encoded in the MAID2007 parameterization [11]. The electric and magnetic GPs are decomposed as a sum of a dispersive πN part and an asymptotic part. 
The asymptotic part of the magnetic GP is described by a dipole : (10) To describe the available data for the electric GP, we allow for an asymptotic part consisting of a sum of a dipole and a gaussian, in the same vein as the parameterization proposed in [12] for the nucleon form factors : The values at the real photon point have been fixed as the difference between the empirical information for the proton electric and magnetic polarizabilities, obtained from real Compton scattering (RCS) experiments [13], and the dispersive πN contribution, yielding P (L1,L1)0 asy (0) = −14.37 GeV −3 , and P (M1,M1)0 asy (0) = 21.82 GeV −3 . The remaining three parameters Λ β , Λ α , and C α , describing the Q 2 dependence of the asymptotic parts of the spin independent GPs can be determined by a fit to available VCS data. In Fig. 1, we show the comparison with the experimentally measured unpolarized structure functions P LL − P T T /ε (P LT ), proportional to the electric (magnetic) GPs respectively, up to a small spin GP contribution (dashed curves). For an exhaustive description of VCS observables, we refer to Refs. [8,9]. For the magnetic GP, one sees from P LT on Fig. 1 that a good fit to all data is obtained for Λ β = 0.5 GeV. For the electric GP, a fit to the MIT-Bates and JLab data is obtained for C α = 0, and Λ α = 0.7 GeV (denoted by parameterization GP I). However, this does not describe the MAMI data at intermediate Q 2 , which require an additional structure, parameterized through the gaussian term in Eq. (11). A good description of all available data is obtained for Λ α = 0.7 GeV, and C α = −150 GeV −7 (denoted by parameterization GP II). The above empirical parameterizations for the GPs, allow to evaluate the Q 2 dependence of A, B, and C, as displayed in Fig. 2. One clearly notices the enhancement in A as well as the structure at intermediate Q 2 values in B in GP II. The light-front frame allows us to use this empirical information to visualize the deformation of the charge densities in an external e.m. field and map out the transverse position space dependence of the induced polarization. For the case of a nucleon in a state of definite helicity, the transverse position space dependence of the induced polarization P 0 is given by : [14], squares [17]), MIT-Bates (up triangles [16]), and JLab (stars [15]). The RCS data [13] are shown by the (black) down triangles (slightly displaced in Q 2 ). The curves are based on the parameterizations of Eqs. which can be worked out using Eq. (5) as where b is the transverse position, b = | b|, andb = b/b. The dipole pattern described by Eq. (13) is shown in Fig. 3. One clearly sees that the enhancement at intermediate Q 2 in the electric GP (upper panel in Fig. 1) in GP II, as compared with GP I, yields a spatial distribution of the induced polarization that extends noticeably to larger transverse distances. Forthcoming VCS experiments, that are conceived to pin down more precisely the behavior of the GPs at intermediate Q 2 values, will therefore be able to verify this large distance structure. For the case of a nucleon in an eigenstate of transverse spin, the transverse position dependence of the induced polarization P T can likewise be worked out as : displaying dipole, quadrupole and monopole patterns. In Fig. 4 we show the spatial distributions in the induced polarization for a proton of transverse spin (chosen along the x-axis) for parameterization GP II. 
The component P x T − P x 0 displays a quadrupole pattern with pronounced strength around 0.5 fm due to the electric GP, whereas the component P y T − P y 0 shows in addition a monopole pattern, dominated by the π 0 pole contribution. In summary, in this work we have used recent data on proton GPs to map out the spatial dependence of the induced polarizations in an external e.m. field. The formalism to extract in a field theoretic consistent way light-front densities from nucleon form factor data has been extended in this work to the deformations of these quark charge densities when applying an external e.m. field. It has been shown that the available proton electric GP data yield a pronounced structure in its induced polarization at large transverse distances of 0.5 − 1 fm. At Q 2 values smaller than 0.1 GeV 2 , chiral effective field theory was found to well describe the VCS data, highlighting the role of pions in the nucleon structure. Such description can however not be applied at intermediate and large Q 2 values. This transition region is dominated by nucleon resonance structure, which can be described by dispersion relations. Forthcoming VCS precision experiments at MAMI in this intermediate Q 2 region, will be able to better determine this structure, thus complementing our picture of the distribution of quark charges in the nucleon as obtained through elastic form factors.
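As a purely illustrative aid for the impact-parameter discussion above, the snippet below turns a Q²-dependent response into a transverse-distance profile through a Bessel-weighted integral. Both the dipole ansatz standing in for the asymptotic electric GP and the J₁ kernel assumed for the dipole pattern of Eq. (13) are our assumptions (the explicit forms of Eqs. (10)-(13) are not reproduced in this extracted text), and the overall normalization is arbitrary; only the qualitative fall-off with b is meaningful here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

HBARC = 0.1973  # GeV*fm, converts 1/GeV to fm

def A_dipole(Q2, P0=-14.37, Lam=0.7):
    """Dipole ansatz (GeV^-3) using the Lambda_alpha = 0.7 GeV scale quoted above."""
    return P0 / (1.0 + Q2 / Lam**2) ** 2

def induced_polarization(b_fm, A=A_dipole, Q_max=5.0):
    """Schematic Bessel (J1) transform of A(Q^2) to transverse distance b."""
    b = b_fm / HBARC  # fm -> GeV^-1
    integrand = lambda Q: Q**2 * j1(Q * b) * A(Q**2) / (2.0 * np.pi)
    return quad(integrand, 0.0, Q_max, limit=200)[0]

if __name__ == "__main__":
    for b in (0.2, 0.5, 1.0):                        # transverse distance in fm
        print(b, "fm ->", induced_polarization(b))   # arbitrary units
```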
2010-03-15T11:35:18.000Z
2009-11-15T00:00:00.000
{ "year": 2009, "sha1": "db3389486eb153ed3ba8c0db82ce925db9136e18", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0911.2882", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "db3389486eb153ed3ba8c0db82ce925db9136e18", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
252842684
pes2o/s2orc
v3-fos-license
Layered Double Hydroxides for Photo(electro)catalytic Applications: A Mini Review Chemical energy conversion strategies by photocatalysis and electrocatalysis are promising approaches to alleviating our energy shortages and environmental issues. Due to the 2D layer structure, adjustable composition, unique thermal decomposition and memory properties, abundant surface hydroxyls, and low cost, layered double hydroxides (LDHs) have attracted extensive attention in electrocatalysis, photocatalysis, and photoelectrocatalysis. This review summarizes the main structural characteristics of LDHs, including tunable composition, thermal decomposition and memory properties, delaminated layers, and surface hydroxyls. Next, the influences of the structural characteristics on the photo(electro)catalytic process are briefly introduced to understand the structure–performance correlations of LDH materials. Recent progress and advances of LDHs in photocatalysis and photoelectrocatalysis applications are summarized. Finally, the challenges and future development of LDHs are discussed from the aspect of structural design and of exploring structure-activity relationships in photo(electro)catalysis applications. Introduction Globalization and industrialization have accelerated population increase and economic development, which greatly increase the demand for fossil fuels [1]. As of 2018, fossil energy still accounted for 80% of the world's primary energy [2]. The huge consumption of fossil energy brings numerous ecological and social problems, such as the greenhouse effect [3,4], environmental pollution [5,6], and the depletion of fossil energy [7][8][9]. Solar and hydrogen energy attract attention as promising green and clean energy sources to address our energy shortages and environmental problems. However, it is necessary to convert solar energy and excess electrical energy into chemical energy stored in chemical molecules, due to limits of time and space [10]. Among various chemical energy conversion strategies, photocatalysis and electrocatalysis are attractive approaches for converting solar energy and producing hydrogen or hydrocarbon fuels, such as water splitting and CO 2 reduction by photo(electro)catalysis [11][12][13][14][15][16]. A variety of materials have been exploited for photo(electro)catalytic energy conversion. Among these materials, two-dimensional (2D) materials have attracted tremendous interest due to their high charge mobility and large specific surface area [17][18][19]. [Figure from [40]. Copyright 2007 Elsevier.] Adjustable Composition The most significant structural property of LDHs is their compositional flexibility, including tunable metal cations in the host layer and guest anions in the interlayer. The tunability of composition significantly affects the physicochemical properties of LDHs. We will discuss the influences of tunable composition on the physicochemical properties and photoelectrocatalytic performance of LDHs. Regulating Energy Band Structure The varied metal cation species and ratios modulate the composition of LDHs, and their physicochemical properties change significantly.
The band structure of the LDH is usually regulated by the changed types and ratios of metal cations in the host layer, which change the range of light absorption and oxidation-reduction potential of LDH. Xu et al. [41] found band gaps of Mg and Zn-based LDH were greater than 3.1 eV, whereas the Co and Ni-based LDH samples absorbed visible light with a band gap lower than 3.1 eV (Figure 2). Guo et al. [42] loaded TiO 2 to three different cobalt-based LDHs (CoAl-LDH, CoCr-LDH, and CoFe-LDH). The Ti-TiO 2 @CoCr-LDH had the optimal photoelectrocatalytic (PEC) performance with a 43% increase in photocurrent in those samples. This is because the band structure of CoCr-LDH has the best matching with reduced TiO 2 , resulting in the best water oxidation performance. The changed ratio of metal cations can also adjust the band gap and light absorption of LDHs. Han and Yang et al. [43] reported that BiVO 4 /NiFe-LDH core/shell heterostructure films had four times higher photocurrent intensity than that of pure BiVO 4 at 1.23 V vs. reversible hydrogen electrode (RHE). The higher content of Fe 3+ in NiFe-LDH resulted in a smaller band gap and stronger light absorbance and conductivity. Parida et al. [44] fabricated the ternary series of Mg/Al + Fe-CO 3 LDHs by adjusting the rate of Al/Fe. The Fe 3+ doping increased the visible-light absorption of MgAl-LDHs, resulting in the better H 2 evolution performance. Promoting Electron-Hole Pairs Separation The variable valence state of metal cations of LDHs directly promotes the transfer and separation of charge carriers. Low-valence metal cations are oxidized to high-valence metal cations by the photogenerated holes, which improve the transfer and separation of photogenerated charge carriers. For example, Bai et al. [45] synthesized the NiFe-LDH/Mo-BiVO 4 heterostructure by an electrodeposition method.
The photogenerated holes transferred from BiVO 4 nanoparticles to NiFe-LDH due to a type II staggered band structure of the heterostructures. At the same time, the photogenerated holes oxidized Ni 2+ from NiFe-LDH to Ni 3+ and Ni 4+ . The Ni 3+ and Ni 4+ take part in the oxygen evolution reaction (OER) and improve the performance of PEC water splitting (Figure 3a). In the work of Shao et al. [14], a ZnO@CoNi-LDH core−shell nanoarray was prepared by an electrosynthesis method. The Co 2+ was oxidized to Co 3+ /Co 4+ by the photogenerated holes, which enhanced the efficiency of photogenerated charge carrier separation. Moreover, the Co 3+ /Co 4+ served as co-catalysts to improve water splitting ability. Suitable interlayer anions facilitate the transport and separation of charge carriers. Hunter et al. [46] synthesized NiFe-LDH samples with different intercalated interlayer anions. The experimental results indicated that all interlayer anions were replaced by CO 2 in the air to give CO 3 2− , which had the highest catalytic activity. For non-CO 3 2− interlayer anions, the catalytic activity is a function of the alkalinity of the interlayer anion. Interlayer anions with more negative charges act as stronger proton acceptors and electron donors than interlayer anions with a single negative charge. Zheng et al. [47] obtained 4,4-diaminostilbene-2,2-disulfonate (DAS) and 4,4-dinitro-stilbene-2,2-disulfonate (DNS) co-intercalated Zn 2 Al-LDH nanosheets. Due to the matched HOMO/LUMO energy levels of DAS and DNS, the photogenerated electrons of DAS efficiently migrate to DNS under UV-visible-light illumination (Figure 3b). When the percentage of DAS is 50%, the DAS (50%)-DNS/LDHs exhibit excellent photogenerated charge separation ability and stability. Photogenerated electron transfer between the intercalated anions was thus achieved together with water splitting. Adjusting Selectivity of Reactions The different types of metal cations of LDHs lead to different active sites of the reaction and thus different products. The different positions of the conduction bands of the photocatalysts determine the different reduction capabilities, leading to the different selectivity in photo(electro)catalytic reactions [48]. Xiong et al.
[49] prepared a series of Zn-based layered ZnM-LDH (M = Ti 4+ , Fe 3+ , Co 3+ , Ga 3+ , Al 3+ ) by a co-precipitation method. The varied M 3+ or M 4+ in the ZnM-LDH could precisely adjust the product selectivity of the CO 2 reduction. The experimental and computational results revealed that d-band center positions of the metal cations dominated the adsorption strength of CO 2 and, ultimately, product selectivity. The d-band centers of intralayer metal ions of ZnTi-LDH, ZnGa-LDH, and ZnAl-LDH were relatively adjacent to the Fermi level, which facilitated the reduction of CO 2 to CH 4 (ZnTi-LDH) and CO (ZnGa-LDH and ZnAl-LDH). ZnFe-LDH and ZnCo-LDH cannot reduce CO 2 but induce water desorption and hydrogenation due to the d-band centers of Fe 3+ and Co 3+ further away from the Fermi level. Zhao et al. [50] investigated the electronic properties, reaction path, and reaction kinetics of CO 2 PR in 10 M II 2 M III/IV -NO 3 -LDHs (M II = Mg 2+ , Co 2+ , Ni 2+ , Zn 2+ ; M III = Al 3+ , In 3+ , Cr 3+ , Fe 3+ ; M IV = Ti 4+ ) by Hubbard-corrected density functional theory. The calculation showed that all LDHs might exhibit CO 2 PR except Ni 2 Al-LDH and Ni 2 FeNO 3 -LDH. Among the remaining eight LDHs, the favorable products of the others were CH 4 , except for the product of Co 2 Fe-NO 3 LDH, which was HCOOH. According to the relationship between the effective driving force (∆∆Gb) of CO 2 reduction to CH 4 or CO and the adsorption energy of CO 2 , which resembled the relationship between ∆∆Gb and valence band maximum (VBM) of LDH, Mg 2 In-LDH was most likely to photocatalytically reduce CO 2 to CH 4 , whereas Mg 2 Al-NO 3 -LDH was most likely to reduce CO 2 to CO. Improving Absorption Capacity LDHs with exchangeable interlayer anions are widely used to adsorb harmful anions or contaminants of wastewater and polluting soil. The type of interlayer anion affects the adsorption capacity of LDH for the anions in solution. HONGO et al. [51] prepared MgAl-LDH with Cl − , NO − 3 , or SO 2− 4 as the interlayer anion to adsorb harmful anions (F − , CrO 2− 4 , HAsO 2− 4 , and HSeO − 3 ) using a co-precipitation method. The LDHs with different interlayer anions showed excellent attraction for harmful anions and display adsorption capacity in the order NO − 3 > Cl − > SO 2− 4 . They concluded that the two adsorption mechanisms of LDH for anions were fast adsorption on the surface and slow interlayer anion exchange. The exchange rate of interlayer anions depends on the strength of the interaction between interlayer anions and LDH. The stronger the interaction of interlayer anion and LDH, the weaker the ion exchange capacity of the LDH, resulting in a poorer adsorption capacity of anion. In the first minute, the fast adsorption process is caused by the synergy of two adsorption mechanisms. However, the surface anion adsorption by anion exchange is usually relatively slow. Based on the experimental results and analysis, they believe that the nanocrystallization and highly Al substituted phase of NO 3 -formed Mg-Al LDH obviously improve the anion adsorption ability, resulting in fast surface adsorption. The fast surface adsorption dominates the adsorption ability for NO 3 -formed Mg-Al LDH. The interaction of interlayer anion and metal ions dominates the adsorption capacity of LDH and selectivity for different metal ions. Jawad et al. [52] synthesized MoS 2− 4 intercalated FeMgAl-LDH as an absorber to removal heavy metals. 
The interaction between the interlayer anion and the metal ions dominates the adsorption capacity of the LDH and its selectivity for different metal ions. Jawad et al. [52] synthesized MoS 4 2− -intercalated FeMgAl-LDH as an adsorbent to remove heavy metals. The results showed the following order of selectivity for adsorption: Hg 2+ ∼ Ag + > Pb 2+ > Cu 2+ > Cr 6+ > As 3+ > Ni 2+ ∼ Zn 2+ ∼ Co 2+ . The adsorbed metal cations can form coordination complexes in the interlayer channels. At the same time, the layered structure of the LDH provides a protective space for Fe-MoS 4 that prevents its oxidation. The adsorption capacity of the samples for metal ions was determined by the strength of the soft-soft acid-base bonding interactions between Fe-MoS 4 and the metal ions.

Thermal Decomposition and Memory Property

The calcination process causes significant changes in the structure and properties of LDHs. The thermal decomposition of LDHs generally includes three stages [17]: First, when the calcination temperature is below 300 °C, adsorbed water in the interlayer and on the surface is removed, and the layer structure of the LDH is well maintained. Second, during calcination at 300-450 °C, the intralayer hydroxyl groups and water are gradually removed. Third, when the calcination temperature is above 450 °C, the layer structure of the LDH gradually collapses and a composite oxide (M 2+ M 3+ O) is formed. Calcined LDHs decompose into complex metal oxides and thus form in-situ heterojunctions between the metal oxides, resulting in improved photo/electrocatalytic performance. Suárez-Quezada et al. [53] synthesized a series of ZnAl-LDH samples calcined at different temperatures. They found that Zn was present as hexagonal ZnO in all samples, whereas Al was present as ZnAl 2 O 4 and Zn 6 Al 2 O 9 depending on the calcination temperature. Both ZnAl 2 O 4 and Zn 6 Al 2 O 9 can form heterojunctions with ZnO. As the temperature increased, the higher crystallinity led to higher hydrogen production efficiency, reaching a peak at 600 °C. Mostafa et al. [54] prepared novel 1D CoBiTi-LDH with a bandgap of 2.4 eV and 2D CoBiTi layered double oxides (LDO) with high infrared (IR) responsivity. After drying at 150 °C, the 1D CoBiTi-LDH and the in-situ formed 2D CoBiTi-LDO formed a novel 3D heterojunction. The hydrogen evolution reaction (HER) rate of the CoBiTi-LDH/CoBiTi-LDO heterojunction increased nearly four times (∼1255 µmol g −1 h −1 ) compared with the 1D CoBiTi-LDH. The increased HER of the heterojunction was attributed to the enhancement of light absorbance in the IR region (53% of sunlight) and the trapping of photoexcited species by the functional groups of CBT-LDH. Calcined LDHs have a stronger adsorption capacity for anionic dyes than pristine LDHs due to their higher specific surface areas and better reconstruction ability [17]. Li et al. [55] prepared hierarchical ZnAl-LDH by reacting ZnAl-LDOs with carbonate solution. The adsorption capacity of the ZnAl-LDH for methyl orange (MO) is far less than that of the LDOs due to the decreased specific surface area, adsorption sites, or positive surface charge. Kim et al. [56] reported that the calcination of MgAl-LDHs induced crystal deformation and the formation of an interlayer structure of layered double oxides, leading to the development of mesopores and an increased specific surface area. When the LDHs were calcined at 500 °C for 10 h and transformed into LDOs, the specific surface areas of the LDHs obtained by hydrothermal reaction for 1 day (H1-LDH) and 3 days (H3-LDH) increased from 18.4 and 11.3 m 2 /g to 206 and 187 m 2 /g, respectively. The enhanced specific surface area originated from the developed mesopores of the LDO and the larger pore volume.
Interestingly, after calcination at a certain temperature, the disordered lamellar structure of a calcined LDH can be restored to its original layered structure by immersing it in water or an aqueous solution containing anions [57]. This unique property of LDHs is known as the "memory effect". Peng et al. [58] obtained MgAl-LDH intercalated with 5-fluorouracil anions by exploiting the memory effect of the LDH. The as-prepared samples not only showed improved corrosion resistance but also inhibited human bile duct cancer cells. Thus, intercalation of anions into the interlayer via the memory effect of LDHs is an efficient approach for designing functionalized LDHs [18]. However, some LDHs, such as Ni-Cr, Ca-Al, and Co-Al, show irreversible thermal decomposition behavior, and their lamellar structure cannot be recovered [59].

Delamination of LDH

LDHs possess a typical layered structure whose layers are connected by strong interlayer electrostatic interactions and interlamellar hydrogen bonding [18]. Although delamination of LDHs remains a big challenge (especially for monolayer LDH) [60], it is still an attractive way to improve photo(electro)chemical activity and expand the applications of LDH nanomaterials, because delaminated LDHs have a larger specific surface area, more active sites, and higher electron transport efficiency. Hu et al. [61] delaminated CoCo-LDH, NiCo-LDH, and NiFe-LDH using a liquid-phase exfoliation method. The delaminated nanosheets have lower overpotentials: at η = 300 mV, the current densities of the CoCo-LDH, NiCo-LDH, and NiFe-LDH nanosheets were 2.6, 3.4, and 4.5 times those of their bulk LDHs, respectively. Delamination of LDHs also introduces more vacancy defects and thus increases the number of reactive sites [19]. For example, Wang et al. [62] prepared ultrathin CoFe-LDH nanosheets by exfoliation of bulk CoFe-LDHs with nitrogen plasma. The exfoliation process induces the formation of defects in the ultrathin CoFe-LDH nanosheets. The defects increase the dangling bonds near reactive sites and decrease the coordination number of the reactive sites, resulting in improved electrocatalytic activity.

Hydroxyl Groups on the LDH Surface

The LDH surface has abundant hydroxyl groups oriented nearly perpendicular to the host layer [18]. The hydroxyl groups not only effectively adsorb reactants [63] but also form interfacial chemical bonds with other semiconductor surfaces, thereby facilitating the transport of interfacial charge carriers [15]. For example, Liu et al. [13] deposited NiFe-LDH onto Co-intercalated TiO 2 by electrodeposition. The hydroxyl groups of NiFe-LDH form hydrogen bonds with TiO 2 (Figure 4). Therefore, under illumination, the holes in the Co-TiO 2 VB can be transferred in time through these hydrogen bonds to the VB of NiFe-LDH to participate in water decomposition, which improves the transfer and separation of interfacial photogenerated charges.
Water Splitting

When the energy of the harvested light is greater than the bandgap energy of the LDH, electrons in the valence band of the LDH are injected into the conduction band, leaving photogenerated holes in the valence band. The photogenerated electrons and holes then migrate to the LDH surfaces to participate in the hydrogen evolution reaction and the oxygen evolution reaction. In photoelectrocatalysis, however, the photogenerated electrons drift to the cathode to participate in the hydrogen evolution reaction, whereas the photogenerated holes drift to the anode to participate in the oxygen evolution reaction. Water splitting by photochemistry can be divided into three steps [17]: (i) Water adsorption: LDHs and LDH composites directly contact water without a concentration gradient, and the water adsorption ability is determined by the specific surface area of the LDH. (ii) Separation and migration of charge carriers: a high separation and migration rate of carriers greatly improves the performance of LDH photochemical water splitting [64]. (iii) Surface redox reactions: the valence band maximum should be greater than the potential of O 2 /H 2 O (1.23 V vs. the normal hydrogen electrode (NHE)), and the conduction band minimum should be less than the potential of H + /H 2 (0 V vs. NHE) [65]. Hydrogen is an excellent clean energy carrier with many potential applications [66], such as hydrogen electric vehicles [67], the reduction of iron in industry [68], clinical treatments [69], and so on. Water splitting by photochemistry is one effective method to evolve hydrogen; however, the high charge carrier recombination and the low efficiency of hydrogen evolution limit commercial-scale production. Among many photo/electrocatalysts, LDHs have attracted wide attention in photo(electro)catalytic water splitting due to their high specific surface areas, highly dispersed metal active sites, adjustable composition, and low cost [17]. However, many drawbacks of LDHs, such as low conductivity, low carrier mobility, and a high electron-hole recombination rate, greatly hinder their photo(electro)catalytic applications [17,70]. Thus, many modification methods have been used to improve the photo(electro)catalytic performance of LDHs.
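Before turning to specific strategies, the step-(iii) band-edge condition above can be written compactly. This is a sketch of the thermodynamic bookkeeping only; real systems also need kinetic overpotentials, so practically useful bandgaps are usually quoted as somewhat larger than the 1.23 eV minimum.

```latex
\begin{aligned}
\text{HER:}\quad & E_{\mathrm{CBM}} < E\!\left(\mathrm{H^{+}/H_{2}}\right) = 0\ \text{V vs. NHE},\\
\text{OER:}\quad & E_{\mathrm{VBM}} > E\!\left(\mathrm{O_{2}/H_{2}O}\right) = 1.23\ \text{V vs. NHE},\\
\Rightarrow\quad & E_{g} \;=\; e\left(E_{\mathrm{VBM}} - E_{\mathrm{CBM}}\right) \;>\; 1.23\ \text{eV}.
\end{aligned}
```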
Constructing LDH-based heterostructures is an effective strategy to enhance the photochemical hydrogen evolution performance of LDHs. Chen et al. [71] successfully prepared hierarchical CoNi-LDH-modified TiO 2 nanotube arrays (NTAs) by a quick electrochemical deposition method. The photocurrent density of the TiO 2 @CoNi-LDH NTAs was 4.4 mA·cm −2 (vs. RHE), which was 3.3 times higher than that of pure TiO 2 . The band gap of the TiO 2 @CoNi-LDH NTAs was smaller than that of pristine TiO 2 . When light irradiation is introduced during the synthesis of the heterostructure, the heterostructure interface becomes more compact, leading to better charge carrier separation. Zhang et al. [72] obtained two types of ZnFe-LDH/TiO 2 nanoarrays (NAs) by a photo-assisted electrodeposition method (TiO 2 /ZnFe-LDH-PE) and an electrochemical deposition method (TiO 2 /ZnFe-LDH-E), respectively. The photocurrent density of TiO 2 /ZnFe-LDH-PE was 2.29 and 1.31 times that of pure TiO 2 and TiO 2 /ZnFe-LDH-E, respectively. Compared with pristine TiO 2 , the interface formed between TiO 2 and ZnFe-LDH reduced the recombination of photogenerated electrons and holes (Figure 5a). At the same time, Fe species captured photogenerated holes and served as active sites for the oxygen evolution reaction. Compared with TiO 2 /ZnFe-LDH-E, the light irradiation during deposition resulted in a stronger interaction between Zn 2p 3/2 and Ti 2p 3/2 , and thus in enhanced separation and transfer of photogenerated charges (Figure 5b,c). Carbon nanodots (CDs) enable rapid charge separation due to their unique structure [73]. Lv et al. [74] reported that the introduction of CDs further improved the carrier mobility and reduced the overpotential for oxygen evolution of a CDs/NiFe-LDH/BiVO 4 photoanode, leading to enhanced water splitting ability. Yang et al. [75] constructed a novel CoFe-LDH/NiFe-LDH core-shell architecture supported on nickel foam by a hydrothermal and electrodeposition strategy. The heterostructure showed the lowest Tafel slope of 88.88 mV dec −1 , indicating excellent HER kinetics. The outstanding HER kinetics were attributed to the strong synergistic effect as well as the typical 3D interconnected architecture. The HER activity of the core-shell architecture electrode is similar to or better than that of many state-of-the-art HER electrocatalysts. Zhang et al. [76] fabricated hierarchical NiFe-LDH@NiCoP nanowires on nickel foam as electrodes by a hydrothermal-phosphorization-hydrothermal strategy. The 3D heterostructure NiFe-LDH@NiCoP/NF electrodes require low overpotentials of 120 and 220 mV to deliver 10 mA cm −2 for the HER and OER, respectively. The overall water splitting of the heterostructure electrodes showed a cell voltage of 1.57 V at 10 mA cm −2 and excellent stability. Due to the strong electronic interaction between the NiFe-LDH and NiCoP, the synthetic strategy and interface engineering of the heterostructure facilitated charge transfer and improved the reaction kinetics. The formation of positive-negative (P-N) junctions is a common and effective method to improve photochemical water splitting performance. Yang et al. [77] used NiV-LDH and CdS to form P-N heterojunctions by physically mixing them together in a mass ratio of 1:10 (Figure 6a). The formed NiV-LDH/CdS heterostructure had excellent electron-hole separation ability, and its hydrogen evolution efficiency is significantly greater than that of pure NiV-LDH and CdS (Figure 6b). Sahoo et al. [78] constructed a heterojunction between Co(OH) 2 and ZnCr-LDH by an ultrasonication method. The H 2 and O 2 evolution apparent conversions of the optimized Co(OH) 2 -modified ZnCr-LDH sample reached 13.12% and 6.25% in 2 h, respectively.
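The Tafel slope quoted for ref. [75] is the fitted coefficient b in η = a + b·log10(j). A minimal sketch of that fit is shown below; the polarization data are synthetic and only illustrate the procedure, they are not measurements from the cited work.

```python
# Sketch: extracting a Tafel slope b (mV per decade) from overpotential vs.
# current-density data via eta = a + b*log10(j). Synthetic data for illustration.
import numpy as np

j = np.array([1, 2, 5, 10, 20, 50, 100.0])              # current density, mA cm^-2
eta = np.array([245, 272, 308, 334, 361, 397, 423.0])   # overpotential, mV

b, a = np.polyfit(np.log10(j), eta, 1)   # slope b (Tafel slope), intercept a
print(f"Tafel slope ~ {b:.1f} mV/dec (ref. [75] reports 88.88 mV/dec for CoFe-LDH/NiFe-LDH)")
```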
CO 2 Reduction

Currently, the greatest threat to ecosystems is climate change.
In order to achieve the plan specified in the Conference of the Parties 21, energy and industrial processes need to reduce carbon emissions by 60% to limit global temperature rise to 2 °C [79]. There are a number of ways to reduce the environmental impact of CO 2 : carbon capture and storage (CCS), chemical cycle capture, thermal decarbonization, photo(electro)chemical reduction, and so on [80]. While reducing CO 2 , it is highly anticipated that CO 2 can be used to generate electricity and be converted into more valuable compounds [81,82]. However, the traditional CO 2 absorption method requires high temperature and pressure. A fresh LDH is not able to capture CO 2 , but LDHs transformed into a metal oxide mixture gain the ability to capture CO 2 [83]. At present, photochemical CO 2 reduction attracts much attention due to its mild reaction conditions. For photochemical CO 2 reduction, when the energy of the absorbed light is greater than the band gap energy of the LDH, electron-hole pairs are produced. The photochemical CO 2 reduction is roughly divided into three steps: (1) CO 2 adsorption, through hydroxyl groups on the surface [84] and interlayer anions [85]; (2) separation and migration of photogenerated charges [86]; and (3) the CO 2 reduction reaction, in which CO 2 is reduced to hydrocarbons or CO by electrons [87]. The difference between PEC and photocatalytic (PC) CO 2 reduction is that photoelectrocatalysis uses both light and a bias voltage to reduce CO 2 : light acts as the drive, and the bias voltage improves the catalytic efficiency. The photosemiconductor structure, intrinsic properties, and active centers on the surface affect the efficiency of CO 2 reduction in PEC [88]. In photocatalytic CO 2 reduction by LDHs, the amount of CO 2 absorbed depends on the type of divalent metal cation. Wang et al. [89] reported that the bond strength between CO 2 and MAl-LDH was related to the position of the d-band center: the higher the position of the d-band center, the higher the photocatalytic activity for CO 2 reduction (Figure 7). The CO 2 reduction capacity followed the order NiAl-LDHs > CuAl-LDHs > ZnAl-LDHs > MgAl-LDHs.
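The product slate in step (3), and the competition with proton reduction discussed next, reflect how closely spaced the relevant equilibrium potentials are. The values below are commonly quoted approximate potentials at pH 7 versus NHE from the general CO 2 photoreduction literature; they are included for orientation only and are not taken from the cited works.

```latex
\begin{aligned}
\mathrm{CO_2 + 2H^{+} + 2e^{-} \rightarrow CO + H_2O} \qquad & E^{0\prime} \approx -0.53\ \mathrm{V}\\
\mathrm{CO_2 + 2H^{+} + 2e^{-} \rightarrow HCOOH} \qquad & E^{0\prime} \approx -0.61\ \mathrm{V}\\
\mathrm{CO_2 + 8H^{+} + 8e^{-} \rightarrow CH_4 + 2H_2O} \qquad & E^{0\prime} \approx -0.24\ \mathrm{V}\\
\mathrm{2H^{+} + 2e^{-} \rightarrow H_2} \qquad & E^{0\prime} \approx -0.41\ \mathrm{V}
\end{aligned}
```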
In the photochemical reduction of CO 2 , the reduction of H 2 O tends to compete with the reduction of CO 2 for electrons [90]. Therefore, much effort has been made to improve the selective reduction of CO 2 by LDHs. Tan et al. [91] successfully obtained a composite photocatalyst of ruthenium and NiAl-LDH. The experimental results confirmed that a monolayer NiAl-LDH (m-NiAl-LDH) could completely suppress the hydrogen evolution reaction under longer-wavelength irradiation (λ > 600 nm). This phenomenon was attributed to metal-induced defect states in the forbidden zone of m-NiAl-LDH: photogenerated electrons localized only at the defect states, and the driving force of the defect state (0.313 eV) could reduce CO 2 to CH 4 rather than reduce H 2 O. Wang et al. [92] successfully prepared NiO samples with different vacancy amounts by calcining NiAl-LDH. The vacancy concentrations of Ni and O determine the selectivity of CO 2 reduction under visible light irradiation; the NiAl-275 sample, with the highest defect concentration, has the highest selectivity for CH 4 (22.8%). Constructing heterostructures also effectively improves the photo(electro)chemical performance of CO 2 reduction. Lin et al. [93] prepared a FeWO 4 /NiAl-LDH (FWLDH) heterostructure using NiAl-LDH flower-like spheres and FeWO 4 nanoflakes. The NiAl-LDH and FeWO 4 formed a direct Z-scheme heterostructure. Tight binding of the heterostructure interface resulted in a larger specific surface area and thus more active sites, and the internal electric field enhanced the separation and transport of photogenerated electrons, leading to the markedly improved photoelectron reduction ability of the NiAl-LDH (Figure 8). The photocatalytic CO yield of 10%FWLDH was 2.4 times that of the original NiAl-LDH. Song et al. [94] fabricated a 2D heterostructure of MgAl-LDO and nitrogen-defective carbon nitride (MgAl LDO/N v -CN). The photocatalytic activity of 10% MgAl LDO/N v -CN for CO 2 reduction was seven times that of pure g-C 3 N 4 under visible light illumination. Liu et al. [95] synthesized ultrathin Cu 2 O/CuCoCr-LDH p-n type heterojunction nanosheets (U-Cu 2 O/CuCoCr-LDH) as the cathodes of a PEC cell. The photogenerated electrons at the photocathode reduced CO 2 to CO and CH 4 .
The maximum CO product yield of photoelectrocatalysis was 1167.6 mmol g −1 h −1 , which was approximately four times higher than that of electrocatalysis. The obvious improvement of the photoelectrocatalytic performance was attributed to the internal electric field constructed by Cu 2 O and CuCr-LDH, which accelerates the separation of carriers. LDHs have also been used in the electrocatalytic reduction of CO 2 . Fu et al. [96] prepared a monolayer NiFe-LDH catalyst using a solid-phase exfoliation method as an electrode for CO 2 electroreduction. The optimized NiFe-CN-1 catalyst (1 wt% NiFe-LDH) exhibited a faradaic efficiency for CO generation of 93.5% at 0.8 V (vs. RHE). The excellent electrocatalytic performance originates from the effective exposure of Ni and Fe active sites doped on the carbon material and the efficient proton transfer channels of NiFe-LDH. Iwase et al. [97] prepared 2D CuAl-LDH as an electrocatalyst for electrochemical CO 2 reduction (CO 2 RR). The optimized CuAl-LDH exhibited a faradaic efficiency of 42% for CO 2 reduction to CO and 22% for formate generation. It was found that the size of the LDH sheet was a key factor in the CO 2 RR activity.
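Faradaic efficiency, the figure of merit quoted for refs. [96,97], is the fraction of the charge passed that ends up in a given product. A minimal sketch of the calculation is given below; the product amount and charge are illustrative placeholders, not raw data from those studies.

```python
# Sketch: Faradaic efficiency FE = n * F * moles_product / total_charge.
# The product amount and charge below are illustrative, not data from refs. [96,97].
F = 96485.0          # C per mol of electrons (Faraday constant)
n_CO = 2             # electrons per CO molecule (CO2 + 2H+ + 2e- -> CO + H2O)

moles_CO = 4.85e-6   # mol of CO detected (e.g., by gas chromatography)
charge_passed = 1.0  # total charge passed, in coulombs

fe_CO = n_CO * F * moles_CO / charge_passed
print(f"Faradaic efficiency for CO: {fe_CO:.1%}")   # ~93.6% with these placeholders
```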
Contaminant Degradation

In 2015, about 9 million people died from environmental pollution, of which 1.8 million died from diseases caused by water pollution [81]. Organic contaminants, an important source of water pollution, are difficult to biodegrade because of their stability [98]. Photo(electro)catalysis has attracted extensive attention for solving environmental pollution problems due to its low cost, lack of secondary pollution, and mild conditions [99,100]. Organic contaminant degradation by photo(electro)catalysis can be broadly divided into three steps: (1) adsorption of the organic contaminant [101]; (2) separation and transfer of photogenerated charges, which is the key to improving photochemical activity [102]; and (3) redox reactions, in which the organic contaminants are converted to carbon dioxide, water, and inorganic acids [103]. Recently, LDHs, particularly transition metal-based LDHs (TLDHs), have emerged as promising candidates for contaminant degradation by photo(electro)catalysis [104]. Baliarsingh et al. [105] investigated the effect of M 2+ (Co, Ni, Cu, and Zn) in M II /Cr-LDH on the photodegradation of methyl orange (MO). Among them, CoCr-LDH showed the highest photoactivity for MO (90% MO removal in 3 h). The improved photocatalytic activity of CoCr-LDH is mainly attributed to the excitation of M 2+ -O-Cr 3+ bridge bonds under visible light irradiation and the effective transfer of photogenerated charge through these bridge bonds, which leads to the production of hydroxyl radicals and superoxide radicals. Zhao et al. [106] synthesized a series of MCr-LDH (M = Cu, Ni, Zn) samples with a visible light response. The MCr-LDH samples have excellent photocatalytic activity for the degradation of Sulforhodamine B, Congo red, chlorinated phenol, and sodium salicylate. Experimental and computational results indicate that the excellent visible-light photocatalytic activity of the MCr-NO 3 -LDHs is attributed to the low band gap and the abundant surface OH groups; the visible light response was induced by a d-d transition of the CrO 6 octahedra. Construction of heterostructures is a common and effective approach to address the low photogenerated charge transport efficiency of LDHs. Megala et al. [107] obtained NiAl-LDH/CuWO 4 heterostructures by a one-pot hydrothermal method. The photodegradation rate of the LDH with 5% CuWO 4 for methylene blue (MB) dye reached 87.5% in 5 h. The enhanced photocatalytic ability of the NiAl-LDH/CuWO 4 nanocomposite mainly originates from the heterojunction, which effectively promotes the separation of photogenerated charges. Ma et al. [108] synthesized BiOCl-NiFe-LDH composites using NiFe-Cl-LDH and Bi(NO 3 ) 3 as precursors. The photocatalytic activity of the BiOCl-NiFe-LDH composites for Rhodamine B (RhB) degradation was 4.11 times higher than that of BiOCl. The heterostructure formed by BiOCl and NiFe-LDH can transfer photogenerated electrons and holes in time (Figure 9a). At the same time, the highly dispersed BiOCl on the NiFe-LDH surface facilitates the formation of ·OH. Abd-Ellatif et al. [109] prepared ZnCo-LDH by a co-precipitation method and then obtained an LDO (ZnO/CoO composite) by calcination. The ZnO/CoO composite formed S-scheme heterojunctions, as shown in Figure 9b. The removal rates of the LDH calcined at 300 °C for ponceau 4R (E124) and tartrazine (E102) were 90% and 80%, respectively. Pirkarami et al. [110] prepared CdS/NiCo-LDH heterojunctions by a hydrothermal method to construct photoelectrodes (Figure 9c). The degradation efficiency of Allura Red in an alkaline environment was over 90%. During the degradation process, the N=N bond of Allura Red breaks first, and the dye is eventually converted to H 2 O, NO 3 , NO 2 , CO 2 , SO 3 , and Na + .
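Removal figures such as the 90% MO removal in 3 h (ref. [105]) and 87.5% MB removal in 5 h (ref. [107]) are often condensed into an apparent pseudo-first-order rate constant. The sketch below does that under the assumption of pseudo-first-order kinetics; the cited studies may have analyzed their data differently.

```python
# Sketch: apparent pseudo-first-order rate constant k from fractional removal,
# assuming C(t) = C0 * exp(-k t). The kinetic model is an assumption here,
# not necessarily the analysis performed in refs. [105,107].
import math

def k_pseudo_first_order(removal_fraction, time_h):
    return -math.log(1.0 - removal_fraction) / time_h

print(f"CoCr-LDH / MO     : k ~ {k_pseudo_first_order(0.90, 3.0):.2f} 1/h")   # ~0.77 1/h
print(f"NiAl-LDH/CuWO4 / MB: k ~ {k_pseudo_first_order(0.875, 5.0):.2f} 1/h")  # ~0.42 1/h
```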
Lu et al. [111] prepared Ni foam@ZnO@ZnFe-LDH photoelectrodes through electrodeposition of ZnO followed by hydrothermal growth of ZnFe-LDH. Ni foam@ZnO@ZnFe-LDH acted as the photoelectrode in the PEC process and effectively removed Cr(VI) and Acid Red 1 through the synergistic effect of photoelectrocatalysis. Experimental results indicated that the 2D/2D core-shell heterojunction formed by ZnO and ZnFe-LDH not only narrowed the bandgap of ZnO and increased visible light absorption, but also promoted electron-hole separation. Argote-Fuentes et al. [112] synthesized activated MgAl-LDH through the co-precipitation method as a heterogeneous catalyst for the degradation of Congo red dye. In the photoelectrocatalysis process under a 0.5 V bias, the photoelectrocatalytic degradation rate of the MgAl-LDH/Cu electrode reached 95%, the highest among the degradation processes compared. The synergistic effect of the Cu 2+ ions induced by the electric current and the photogenerated electrons suppressed the recombination of electron-hole pairs in the catalyst, resulting in excellent catalytic activity for Congo red degradation.

Figure 9. (a) The photocatalytic mechanism of the BiOCl/NiFe−LDH heterostructure [108]. Copyright 2015 Elsevier. (b) The possible photodegradation mechanism of ZnO/CoO [109]. Copyright 2022 Elsevier. (c) Proposed photoelectrocatalytic degradation mechanism of CdS/NiCo−LDH heterojunctions [110]. Copyright 2022 Elsevier.

Conclusion and Outlook

LDHs are promising 2D photo(electro)catalysts with the advantages of low cost, tunable composition, unique thermal decomposition and memory properties, delaminable layers, and abundant surface hydroxyls. The compositional flexibility of LDHs can tune the band structure, improve the adsorption capacity and the separation of charge carriers, and change the selectivity of the reaction. Calcined LDHs form in-situ heterojunctions between the resulting metal oxides, leading to improved photo/electrocatalytic performance. Delamination and calcination of LDHs introduce more vacancy defects and larger specific surface areas, leading to an increased number of reactive sites. These insights into the structure-activity relationships of LDHs provide a theoretical basis for the function-oriented design of LDH-based photo(electro)catalytic materials. Although a great deal of exciting research has appeared on improving the photo(electro)catalytic performance and realizing practical applications, two major drawbacks still need to be addressed for LDHs: structural instability in low-pH environments and low quantum efficiency caused by their low conductivity. Structural regulation of LDHs should be an ideal strategy to overcome these drawbacks. Calcining an LDH at a suitable temperature and then recovering it through the memory effect improves its structural stability in acidic environments and its photo(electro)catalytic performance; however, the corrosion resistance mechanism of calcined LDHs has not yet been fully explained. Taking advantage of the tunable composition of LDHs, doping and defect introduction can effectively improve the conductivity of LDHs, resulting in enhanced quantum efficiency.
How to precisely control the composition and structure of LDHs is still a huge challenge, for example, precisely controlling the thickness of the LDH and adjusting the ratio between metal cations in electrodeposition methods. The structure-performance correlations of LDH-based photo(electro)catalytic materials need to be understood more deeply to provide theoretical guidance for the design of efficient LDH photo(electro)catalysts. In order to better explore the structure-activity relationships of LDHs, advanced and effective characterization methods should be vigorously developed and applied. In situ characterization techniques can more precisely probe the structural changes of LDHs under reaction conditions. Transient spectroscopic techniques facilitate the study of photogenerated electron-hole separation and transfer dynamics and should be widely utilized. In addition, theoretical simulations, especially density functional theory calculations, are powerful tools for studying the relationship between the structure and properties of LDHs. The combination of advanced and valid characterization technologies with theoretical simulations is necessary to reveal the complex charge dynamics and supply a more detailed understanding of photo(electro)catalytic mechanisms. Further creative investigations will overcome the challenges in the photochemistry of LDHs and will continue to advance their photo(electro)catalytic applications.
The Clinical Application of Robot-Assisted Ventriculoperitoneal Shunting in the Treatment of Hydrocephalus Background This work aims to assess the effectiveness and safety of robotic assistance in ventriculoperitoneal shunting and to compare the results with data from traditional surgery. Methods We retrospectively analyzed 60 patients who had undergone ventriculoperitoneal shunting, of which shunts were implanted using a robot in 20 patients and using traditional surgical methods in the other 40 patients. Data related to surgery were compared between the two groups, and the accuracy of the drainage tube in the robot-assisted group was assessed. Results In the robot-assisted surgery group, the operation duration was 29.75 ± 6.38 min, intraoperative blood loss was 10.0 ± 3.98 ml, the success rate of a single puncture was 100%, and the bone hole diameter was 4.0 ± 0.3 mm. On the other hand, the operation duration was 48.63 ± 6.60 min, intraoperative blood loss was 22.25 ± 4.52 ml, the success rate of a single puncture was 77.5%, and the bone hole diameter was 11.0 ± 0.2 mm in the traditional surgery group. The above are statistically different between the two groups (P < 0.05). Only one case of surgery-related complications occurred in the robot-assisted group, while 13 cases occurred in the traditional surgery group. There was no significant difference in the hospitalization time. In the robot-assisted surgery group, the average radial error was 2.4 ± 1.5 mm and the average axial error was 1.9 ± 2.1 mm. Conclusion In summary, robot-assisted implantation is accurate, simple to operate, and practical; the duration of surgery is short; trauma to the patient is reduced; and fewer postoperative complications related to surgery are reported. INTRODUCTION Hydrocephalus is common in all types of craniocerebral trauma and in the case of an intracranial mass, which leads to the progressive dilatation of the ventricular system and/or subarachnoid space due to the disturbance of absorption, circulation, or excessive secretion of cerebrospinal fluid. Surgical treatments include insertion of a ventriculoperitoneal shunt (VPS) and endoscopic third ventriculostomy (ETV), of which VPS is the most commonly used method for the treatment of all types of clinical hydrocephalus. VPS involves the insertion of a ventricle-end drainage tube into the ventricle through a skull drill. Several commonly used lateral ventricle puncture methods are lateral ventricle frontal puncture, traditional occipital puncture, and triangle puncture. The drainage tube is connected to a shunt valve (to control the flow rate of cerebrospinal fluid), then the abdominal cavity-end drainage tube is placed under the skin into the abdominal cavity through a tunnel. Traditional surgery requires marking the body surface anatomy of the patient and determining the trajectory of the drainage tube into the skull based on these marks. The precise placement of the drainage tube in traditional surgery is vital, but it is difficult to control the position and length of the drainage tube in the ventricle due to the different sizes of the ventricles of each patient. Various complications such as incorrect placement, infection, bleeding, and obstruction of the shunt system may occur after traditional VPS surgery, which leads to poor surgical results. In neurosurgery, there are often space-constrained, highprecision, intensive, and tedious tasks, and the emergence of robots has the potential to simplify procedures and improve accuracy (Guo et al., 2018). 
Neurosurgery robots overcome the problems of poor accuracy, long operation times and fatigue of the surgeon, and lack of 3D precise vision in traditional surgical operations. The Remebot robotic system is a frameless stereotactic product and neurosurgery assistance tool. The robot includes a computer software system, a six-axis robotic arm, and a camera. The surgeon can use the computer software system to observe multimodal images of the head and plan the best surgical puncture path. The arm can help the surgeon accurately locate the puncture site for the operation, and can act as a multifunctional operation platform. The camera can perform spatial mapping and real-time tracking and ensure that the robotic arm moves along the planned path to the preoperative planned position. The videometric tracker integrated by the Remebot robotic system is a commercially available third-generation stereoscopic optical tracking product. The product is fully passive and uses available visible light illumination to detect and track objects of interest, much as humans do, by triangulating 3D poses between two video cameras with overlapping projections (Choi et al., 2019). This device is intended for the spatial positioning and orientation of neurosurgical instruments and is potentially applicable to any neurosurgical condition in which the use of stereotactic surgery may be appropriate, such as the implantation of DBS electrodes, the implantation of intracerebral electrodes for SEEG, biopsies of intracerebral lesions, puncturing of cysts, and evacuation of hemorrhages, as well as navigation for open neurosurgeries. For example, in deep brain stimulation surgery (von Langsdorff et al., 2015;Goia et al., 2019;VanSickle et al., 2019), the accuracy, safety, and stability of robot-assisted electrode implantation have been proven. Based on compelling evidence of their accuracy, steadiness, and endurance, robotic systems are promising for use in drainage tube placement. In other words, robot assistance could help to place the shunt tube in the optimal position in the ventricle, thereby reducing the incidence of postoperative complications. In this study, 60 patients who underwent ventriculoperitoneal shunting surgery in Beijing Tiantan Hospital from June 2018 to September 2020 were selected, of which shunts were implanted with robot assistance in 20 patients and by traditional surgical methods in 40 patients. After surgery, the precise position of the intracranial shunt, the depth of implantation, and surgical trauma were analyzed. General Data Sixty patients who underwent ventriculoperitoneal shunting surgery in Beijing Tiantan Hospital from June 2018 to September 2020 were selected, 20 of which had shunts implanted with robot assistance and 40 received shunts via traditional surgical methods. The average age of group A was 24 ± 19.59 years, and this group comprised 11 males and 9 females; the average age of group B was 30.65 ± 19.46 years, and this group comprised 22 males and 18 females. There was no significant difference in age or sex between the two groups of surgical patients. The patients and their family members voluntarily chose the operation method before surgery. All the patients in this study provided informed consent and signed the operation informed consent form. This study was approved by the Ethics Committee of Beijing Tiantan Hospital (Grant No. QX201600-706). 
Robot-Assisted Ventriculoperitoneal Shunting: Preoperative Planning and the Operative Procedure

All patients underwent magnetic resonance imaging (MRI) (3.0 Tesla, Siemens, Munich, Germany) before surgery. To guarantee visualization of the anatomical structures of interest, sagittal and axial volumetric T1-weighted MRI (slice thickness 1.0 mm, TR 6.4 ms, TE 3.0 ms, interslice gap 0 mm, flip angle 8°) was performed. The images were then compiled to plan the targets and trajectories. On the day of surgery, a dedicated videometric-tracked marker, referred to as the optical frame marker, which was capable of automatic patient-to-image registration, was adhered to the scalp, avoiding injury (Figure 1). Following this, axial volumetric computed tomography (CT) (slice thickness 0.625 mm, interslice gap 0 mm, 120 kVp) was performed. All images were loaded into the Remebot software, and the MR images were fused to the CT images, with CT as the reference examination due to MRI distortions (Benabid et al., 2009; Guo et al., 2018). After segmenting the 3D objects of interest, surgical planning was performed. The robot working system can automatically calculate the 3D ventricle segmentation of the patient; that is, by selecting an appropriate ventricular region threshold, the ventricular region and other tissues can be distinguished by image gray level and divided at the pixel level, so that the ventricular region and some cerebrospinal fluid can be segmented. Then, the area, shape, and other characteristics of the connected domains are used to distinguish and exclude non-ventricular parts, yielding a high-precision ventricle segmentation result. The trajectory required to reach that location was planned on the 3D objects or any available view, avoiding vessels and nerves (Figure 2).

FIGURE 1 | Videometric-tracked marker: capable of automatic patient-to-image registration, it was adhered to the scalp preoperatively and accompanied the patient for the CT scan; the optical frame marker was detected during surgery.

FIGURE 2 | Preoperative planning on the robotic work platform: the 3D ventricle segmentation of the patient is calculated automatically, and the target and cranial path are planned before surgery (the orange line represents the implanted drainage tube, the green line represents the safe distance for drainage tube implantation, and the blue shape represents the ventricle of the patient).

In the operating room, the patient was placed in the supine position and the Mayfield headholder was positioned to avoid any interference. The mobile trolley was stabilized on the left side of the patient, and the Mayfield headholder was secured to the trolley with a mechanical support arm to establish rigid immobilization between the head of the patient and the robotic arm (Figure 3A). The videometric tracker (MicronTracker, ClaroNav, Toronto, Canada), with three stereotactic cameras held by an independent stand, was installed above the head of the patient, where the optical marker could be detected (Figure 1). Next, correlations of the different spaces were carried out, involving two steps, namely, (1) tracker-to-image registration and (2) tracker-to-robot registration. The Remebot robotic system features paired point-based, automatic registration. At the end of the tracker-to-image registration, the registration error was validated as less than 0.3 mm. The tracker-to-robot registration was achieved by correlating two sets of spatial positions from the robotic arm space and the tracker space.
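Paired point-based registration of the kind just described amounts to finding the rigid transform that best maps one set of corresponding points onto another and then checking the residual error. The sketch below shows the standard SVD-based (Kabsch/Horn) solution with synthetic points; it illustrates the general principle only and is not the Remebot or MicronTracker implementation.

```python
# Sketch: rigid paired-point registration (Kabsch/Horn, SVD-based) and its
# residual registration error. Synthetic points; generic method only,
# not the vendor's implementation.
import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_error(src, dst, R, t):
    """Root-mean-square distance between transformed source and destination points."""
    residuals = dst - (src @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Synthetic fiducials in "image" space and their (noisy) "tracker" observations
rng = np.random.default_rng(1)
image_pts = rng.uniform(-50, 50, size=(6, 3))            # mm
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))        # random orthogonal matrix
R_true[:, 0] *= np.sign(np.linalg.det(R_true))           # force a proper rotation
tracker_pts = image_pts @ R_true.T + np.array([10.0, -5.0, 30.0])
tracker_pts += rng.normal(scale=0.1, size=tracker_pts.shape)   # 0.1 mm noise

R, t = rigid_register(image_pts, tracker_pts)
print(f"registration error ~ {registration_error(image_pts, tracker_pts, R, t):.2f} mm")
```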
A fiducial point was defined on the videometric-tracked target pattern engraved on the end effector attached to the robotic arm. During registration, the robotic arm automatically moved to certain poses surrounding the head of the patient, and the coordinates of that fiducial point in the separate spaces were automatically obtained from the robot forward calculation and from the tracker. At the end of the tracker-to-robot registration, the registration error was validated as less than 0.08 mm. Subsequently, the robot-to-image registration was accomplished by relying on the above correlations, and data could be transferred between the images and the robotic arm. Following the registration, the robotic arm was oriented on command to the trajectories, and the scalp entry points were marked. After draping and local anesthesia, scalp incisions and burr-hole drillings were performed under the guidance of the robotic arm, but the dura was not opened, to prevent untimely cerebrospinal fluid loss and subsequent brain shift (Figure 3B). The dura mater was penetrated with unipolar electrocautery (Figure 3C). In case of any possible displacement of the head of the patient, the automatic registration was efficiently repeated. Once completed, the accuracy of the registration was visually inspected to be less than 0.5 mm by commanding the robotic arm to guide a tooltip of 1 mm in diameter into two holes of 2 mm in diameter on the optical frame marker, according to the preoperative planning. Afterward, the robotic arm moved to a target point and was oriented to the trajectory with a microdrive device. The dura was perforated and cannulas were advanced to the defined depth. The drainage tube entered the ventricle along the cannula, and the vital signs and symptoms of the patient were constantly observed during the process (Figure 3D). When necessary, the drainage tube position was adjusted through micromovements of the robotic arm in submillimeter steps as small as 0.1 mm. When the physiological and clinical criteria for successful tube placement were fulfilled, the drainage tube was anchored to the skull. The subsequent surgical procedure of placing the drainage tube through a subcutaneous tunnel into the abdominal cavity was similar to that of traditional surgery (Figure 3E). All patients had a postoperative CT scan (slice thickness 0.625 mm, interslice gap 0 mm, 120 kVp). The CT images were matched with the preoperative planning to assess the drainage tube placement accuracy (Figure 4). The drainage tube accuracy was the deviation between the actual center of the implanted tube and the intended target point and was assessed using two types of measurements (Starr et al., 2010; VanSickle et al., 2019): the "radial error," defined as the scalar distance measured perpendicular to the planned trajectory, and the "axial error," defined as the scalar distance measured along the planned trajectory. The distance between the ventricular tip of the drainage tube and the interventricular foramen was also calculated.
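Given the planned entry point, the planned target, and the actual tip position from the postoperative CT, the two error measures defined above reduce to a vector projection. A minimal sketch follows; the coordinates are illustrative, not patient data.

```python
# Sketch: radial and axial placement error of a catheter tip relative to a
# planned trajectory, following the definitions used above (Starr et al., 2010).
# Coordinates below are illustrative, not patient data.
import numpy as np

def placement_errors(entry, target, actual_tip):
    """Axial error: component of (tip - target) along the planned trajectory.
    Radial error: component perpendicular to the planned trajectory."""
    axis = (target - entry) / np.linalg.norm(target - entry)  # unit trajectory vector
    d = actual_tip - target
    axial = abs(d @ axis)
    radial = np.linalg.norm(d - (d @ axis) * axis)
    return radial, axial

entry = np.array([30.0, 80.0, 60.0])     # scalp entry point (mm)
target = np.array([10.0, 20.0, 25.0])    # planned tip target (mm)
tip = np.array([12.0, 21.5, 24.0])       # actual tip centre from post-op CT (mm)
radial, axial = placement_errors(entry, target, tip)
print(f"radial error {radial:.1f} mm, axial error {axial:.1f} mm")
```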
All patients were followed up to verify associated complications, such as hemorrhage, infection, or poor incision healing.

Traditional Ventriculoperitoneal Shunting

In the operating room, the patient was placed in the supine position and the Mayfield headholder was positioned. The surgical site was sterilized with iodine and alcohol. The intersection of a point 2 cm anterior to the coronal suture (or within the hairline) and 2.5 cm from the median sagittal line was marked. After local anesthesia, a skull cone or a skull drill was used to drill a hole down to the dura. A puncture needle was then passed through the drill hole. The puncture direction was parallel to the sagittal plane, with the needle tip directed posteriorly and downward, aligned with the line connecting the external auditory canals on both sides. After advancing 3-4 cm and feeling a sense of breakthrough, the needle core was pulled out. When cerebrospinal fluid began to flow, the drainage tube was implanted 1-2 cm deep to ensure its location in the ventricle. The puncture point for the occipital angle puncture was 6 cm above the external occipital protuberance and 2.5-3 cm lateral to the midline, and the puncture direction pointed to the midpoint of the ipsilateral brow arch. All patients had a postoperative CT scan (slice thickness 0.625 mm, interslice gap 0 mm, 120 kVp).

Statistical Analysis

SPSS 23.0 software (IBM SPSS Statistics Inc., Chicago, IL, United States) was used for statistical analysis. The experimental results are expressed as the mean ± standard deviation (x ± s). The normality and homoscedasticity of the two groups of data were tested. If the variances were homogeneous, a one-way analysis of variance was performed; if the variances were not homogeneous, the Wilcoxon test was performed. P < 0.05 was considered statistically significant.

Comparison of the Characteristics of Patients in the Robot-Assisted and Traditional Surgery Groups

A total of 60 drainage tubes were implanted in 60 patients. The mean age of the patients in the robot-assisted surgery group was 24 ± 19.59 years (range, 1-66 years), whereas the mean age of the patients in the traditional surgery group was 30.65 ± 19.46 years (range, 1-66 years). In this study, there was no significant difference in age or sex between the two groups (Table 1).

Comparison of Clinical Characteristics in the Robot-Assisted and Traditional Surgery Groups

In the robot-assisted surgery group, the operation duration (from the completion of disinfection and draping to the fixing of the drainage tube) was 29.75 ± 6.38 min, the intraoperative blood loss was 10.0 ± 3.98 ml, the success rate of a single puncture was 100%, and the diameter of the bone hole for robot-assisted implant surgery was 4.0 ± 0.3 mm, all of which differed statistically from the data obtained in the traditional surgery group (Table 2) (P < 0.05). In the robot-assisted surgery group, the average radial error was 2.4 ± 1.5 mm and the average axial error was 1.9 ± 2.1 mm. In the traditional ventriculoperitoneal shunting group, the average operation time was 48.63 ± 6.60 min, the intraoperative blood loss was 22.25 ± 4.52 ml, the success rate of a single puncture was 77.5%, and the average bone hole diameter was 11.0 ± 0.2 mm. In the robot-assisted surgery group, only one surgery-related complication occurred: a small amount of bleeding from the puncture tract that required only clinical observation. In addition, the puncture process completely avoided the choroid plexus, and none of the patients had contact with the choroid plexus during the drainage tube puncture. In the traditional ventriculoperitoneal shunting group, 13 surgery-related complications occurred, namely, 10 cases of bleeding and 3 cases of infection. In 15 of the 40 patients (37.5%), the drainage tube came into contact with the choroid plexus during the puncture.
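The group comparisons reported above follow the workflow laid out in the statistical analysis subsection (normality and variance checks, then ANOVA or a rank-based test). The sketch below mirrors that workflow in SciPy; the study itself used SPSS, the data here are simulated from the reported means, and the rank-based branch uses the Mann-Whitney (Wilcoxon rank-sum) test for independent groups.

```python
# Sketch: normality/homogeneity checks, then one-way ANOVA or a rank-based test,
# mirroring the analysis described above. SPSS was used in the study; this SciPy
# version and the simulated data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
robot = rng.normal(29.75, 6.38, 20)        # operation time (min), robot group
traditional = rng.normal(48.63, 6.60, 40)  # operation time (min), traditional group

normal = all(stats.shapiro(g).pvalue > 0.05 for g in (robot, traditional))
equal_var = stats.levene(robot, traditional).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.f_oneway(robot, traditional)       # one-way ANOVA
else:
    stat, p = stats.mannwhitneyu(robot, traditional)   # rank-based alternative
print(f"p = {p:.4g}")
```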
DISCUSSION

Traditional surgical methods are prone to complications such as bleeding, infection, and shunt obstruction. Infection is one of the most serious complications, with an incidence of 4-11% (Kestle et al., 2011; von der Brelie et al., 2012). Infections are mainly related to intraoperative handling; preventive use of antibiotics, strict aseptic technique, and delicate, skilled surgical manipulation can reduce their occurrence. Improper placement of the drainage tube also has a high incidence, mainly resulting in the tube needing to be removed or repositioned (Nesvick et al., 2015). According to statistics, about 30% of all patients with VPS have improper placement of the drainage tube tip, and about 20% require repositioning. Therefore, accurate placement of the tip-end drainage tube is a critical clinical procedure in need of improvement, and a fast and accurate ventricular puncture guidance method is urgently needed. Robotic assistance is one promising approach. The robot working system can automatically calculate the 3D ventricle segmentation of the patient; that is, by selecting an appropriate ventricular region threshold, the ventricular region and other tissues can be distinguished by image gray level and divided at the pixel level, so that the ventricular region and some cerebrospinal fluid can be segmented. Then, the area, shape, and other characteristics of the connected domains are used to distinguish and exclude non-ventricular parts, obtaining a high-precision ventricle segmentation result. The trajectory required to reach that location was planned on the 3D objects or any available view, avoiding vessels and nerves. The clinical data of 60 patients who underwent ventriculoperitoneal shunting in the Department of Neurosurgery of Beijing Tiantan Hospital affiliated to Capital Medical University from June 2018 to September 2020 were analyzed, 20 of whom had shunts implanted with robot assistance and 40 by traditional surgical methods. First, robot-assisted surgery requires smaller bone holes. The DGR-I drill (ACRA-CUT, Acton, MA, United States), used in the conventional operation, had a diameter of 11 mm, whereas the specialized bit used in the robot-guided stereotactic operation had a diameter of only 4 mm and could drill directly through the scalp and skull, so that the orientation was consistent with the preoperatively planned direction. The small diameter and precision of the specialized bit not only reduce the exposure of brain tissue but also improve patient tolerance. Second, smaller incisions and bone holes mean less bleeding: intraoperative blood loss in the robot-assisted surgery group was 10.0 ± 3.98 ml, versus 22.25 ± 4.52 ml in the traditional surgery group (P < 0.01). Third, using the robot, the drainage tube can be implanted into the ventricle in a single attempt, especially in patients with small ventricles. In our data, the success rate of a single puncture was 100% and the rate of contact with the choroid plexus was 0%, which means that the use of robots can not only improve the efficiency of surgery but also reduce the risk of trauma caused by multiple punctures. As the number of puncture attempts increases, the risk of complications such as bleeding and infection also increases, with the risk increasing exponentially with each successive puncture. One study reported that the desired target was hit in only 39.9% of cases during the free-hand insertion of an external ventricular drain (Toma et al., 2009).
In another review, only almost half (47.9%) of the catheters were placed with the entire tip located in the cerebrospinal fluid (Wan et al., 2011). What is more, fewer puncture times can effectively reduce complications. In our cohort, only one case of surgery-related complications occurred in the robot-assisted group, while 13 cases occurred in the traditional surgery group. To compare the operation times between the two groups, we calculated the time from the patient entering the operating room to the end of the operation. The robot-assisted group showed a significantly shorter average operation time than the conventional operation group. The surgical robot can automatically locate the path and target after registration, and then the surgeon can drill a hole, electrocoagulation penetrates the dura mater, and a limited length drainage tube is safely implanted. For the traditional surgery group, skin incision and hand drill are required before drilling, then the dura mater is cut, and finally a drainage tube is implanted. There is no doubt that the procedures have been simplified, and the success rate of puncture has been improved, which has significantly reduced the operation time. In addition, there was no significant difference in the average length of hospitalization between the two groups. This also means that robot-assisted surgery will not increase the burden on patients. In our study, we compared the error between the planned drainage tube tip target and the actual position after surgery. The average radial error was 2.4 ± 1.5 mm, and the average axial error was 1.9 ± 2.1 mm with robot-assisted implantation. There were no cases of passing through the ventricle into the brain parenchyma and no cases of insufficient drainage tube implantation depth. Robot-assisted drainage tube implantation can effectively control the position of the drainage tube. Implantation into the ventricle occurs in one attempt and the implantation depth of the drainage tube can be controlled, effectively avoiding the choroid plexus and reducing the risk of shunt tube obstruction. Precise disposable catheter placement can minimize the risk of complications from intubation through the brain tissue. Inaccurate catheterization not only fails to achieve drainage function but can also cause accidental brain damage, such as damage to the corticospinal tract, basal ganglia, limbic system, optic nerve, optic tract, posterior commissure, and other tissues, causing dysfunction (Shults et al., 1993;Gold et al., 2008;Torrez-Corzo et al., 2009). For traditional techniques, under visual inspection, it is sometimes difficult by manual operation to ensure that the puncture needle does not bend and deviate from the puncture direction due to the incorrect posture of the patient or a difference in the visual angle of the binocular vision of the surgeon; if the puncture is too deep, it may reach the third ventricle or the contralateral ventricle and damage the choroid plexus, causing severe complications. By the way, robot-assisted implantation surgery offers unique adaptability to patients with small cerebral ventricles because of its accuracy and stability. A common feature of idiopathic intracranial hypertension, shunt-dependent syndrome, and slit ventricular syndrome is that the ventricular system does not enlarge and may be smaller than average, but intracranial pressure is increased (Kim et al., 2002;Bateman, 2013;Thurtell and Kawasaki, 2021). The key to treatment is to rebuild the cerebrospinal fluid circulation pathway. 
At present, most neurosurgeons choose to treat such conditions with a lumbar cisternal-abdominal shunt. Although this operation can relieve the symptoms of intracranial hypertension, long-term follow-up has found that some patients may develop complications such as chronic subtonsillar hernia in the later stages. The advantages of neurosurgery robot-assisted stereotactic puncture are clear in such cases. For patients with small ventricles, traditional ventriculoperitoneal shunting has a higher intraoperative risk and is more likely to result in complications such as improper drainage tube position and multiple punctures. By contrast, the size of the ventricle does not affect the accuracy of robotassisted implantation. During the process of implantation, the drainage tube should only be implanted once, avoiding the neurovasculature and choroid plexus (Figure 5). To our knowledge, no research has been published on the correlation between the size of the ventricle and the number of punctures. However, robotic intervention means that the number of punctures is not affected by the volume of the ventricle. With continuous advances in medical technology, clinicians have begun to use neuroendoscopy and neuronavigation to assist descending lateral ventricular shunt placement. However, a multicenter randomized trial found that endoscopic insertion of the initial VPS did not reduce the incidence of shunt failure (Kestle et al., 2003). Another prospective multicenter study found that neuronavigation in shunt surgery reduces the incidence of poor shunt placement, resulting in a significant decrease in the early shunt revision rate. However, neuronavigation is not widely used in clinical practice due to its high cost and complicated operation (Hayhurst et al., 2010). This study has some limitations. First, the number of patients that underwent robot-assisted implantation was small, and the sample size needs to be expanded in the future. Second, this study was not a randomized trial. In future studies, patients should be randomly assigned. CONCLUSION In summary, robot-assisted implantation is accurate, simple to operate, and practical and involves a short operation time, less trauma to the patient, and fewer postoperative complications related to surgery. In addition, the cortical puncture point and puncture channel can be adjusted according to the head CT scan of the patient, which effectively improves the success rate of puncture. In short, this technique can be widely used to improve clinical practice. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Beijing Tiantan Hospital (Grant No. QX201600-706). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
2021-08-05T13:24:46.905Z
2021-08-05T00:00:00.000
{ "year": 2021, "sha1": "159160165cc4964a4991a9298d4ac5271311a539", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2021.685142/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "159160165cc4964a4991a9298d4ac5271311a539", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220604188
pes2o/s2orc
v3-fos-license
A new sex-specific underlying mechanism for female schizophrenia: accelerated skewed X chromosome inactivation Background X chromosome inactivation (XCI) is the mechanism by which the X-linked gene dosage is adjusted between the sexes. Evidence shows that many sex-specific diseases have their basis in X chromosome biology. While female schizophrenia patients often have a delayed age of disease onset and clinical phenotypes that are different from those of males, it is unknown whether the sex differences in schizophrenia are associated with X-linked gene dosage and the choice of X chromosome silencing in female cells. Previous studies demonstrated that sex chromosome aneuploidies may be related to the pathogeneses of some psychiatric diseases. Here, we examined the changes in skewed XCI in patients with schizophrenia. Methods A total of 109 female schizophrenia (SCZ) patients and 80 age- and sex-matched healthy controls (CNTLs) were included in this study. We evaluated clinical features including disease onset age, disease duration, clinical symptoms by the Positive and Negative Syndrome Scale (PANSS) and antipsychotic treatment dosages. The XCI skewing patterns were analyzed by the methylation profile of the HUMARA gene found in DNA isolated from SCZ patient and CNTL leukocytes in the three age groups. Results First, we found that the frequency of skewed XCI in SCZ patients was 4 times more than that in the age- and sex-matched CNTLs (p < 0.01). Second, we found an earlier onset of severe XCI skewing in the SCZ patients than in CNTLs. Third, we demonstrated a close relationship between the severity of skewed XCI and schizophrenic symptoms (PANSS score ≥ 90) as well as the age of disease onset. Fourth, we demonstrated that the skewed XCI in SCZ patients was not transmitted from the patients’ mothers. Limitations The XCI skewing pattern might differ depending on tissues or organs. Although this is the first study to explore skewed XCI in SCZ, in the future, samples from different tissues or cells in SCZ patients might be important for understanding the impact of skewed XCI in this disease. Conclusion Our study, for the first time, investigated skewed XCI in female SCZ patients and presented a potential mechanism for the sex differences in SCZ. Our data also suggested that XCI might be a potential target for the development of female-specific interventions for SCZ. Introduction Schizophrenia (SCZ) is a chronic brain disorder with great physical morbidity and high mortality [1]. Substantial evidence shows that SCZ occurs more frequently in men than in women and that there is a sex difference in SCZ in clinical symptoms, cognitive function, onset age, and even treatment response [2,3]. Whether the X chromosome plays any role in the sex-specific differences in SCZ is unknown. Many genes associated with psychiatric diseases, including SCZ, are located on the X chromosome [4][5][6]. These genes are proven participants in the basic differentiation process of neurons, encoding proteins involved in synaptic transmission [7][8][9]. In particular, the importance of X-linked genes is their specific impacts on the development of the amygdala, which is associated with SCZ [10,11]. Therefore, X chromosome abnormality may be a new target for sex-specific risk of schizophrenia [4]. There are two major aspects regarding X chromosome function: X chromosome aneuploidies and X chromosome inactivation (XCI) [12,13]. 
The relationship between X chromosome aneuploidies and SCZ has been widely reported by different groups: for example, a higher frequency of extra copies of the X chromosome or an XO karyotype has been found in female SCZ patients than in the general female population [14][15][16], and some reports have described abnormal transmission of the Y chromosome in male SCZ patients [17,18]. However, studies of XCI and psychiatric diseases are limited. Females have two X chromosomes; one remains active while the other is inactivated to maintain a dosage of X-linked genes equal to that of males. XCI is initiated by the transcription of XIST, a 17 kb, alternatively spliced long noncoding RNA mapped to Xq13.2 and exclusively expressed on the inactive X (Xi) chromosome [19]. Once transcribed, XIST molecules spread in cis along the X chromosome [20], inducing progressive epigenetic silencing through the recruitment of chromatin remodeling enzymatic complexes, which impose repressive histone and DNA changes on the Xi chromosome [21,22]. Within each cell, the parental X chromosome selected for inactivation appears to be chosen at random, and the Xi chromosome is mitotically inherited by future somatic daughter cells. In normal situations, the initial choice of X chromosome (maternal or paternal) silencing is random but stably inherited. Skewing is defined as a deviation from equal (50%) XCI of the two parental alleles [23]. The most common criterion for "skewed" XCI is inactivation of the same allele in 75% or 80% of cells [24][25][26], while very skewed XCI is defined as inactivation of the same allele in 90% of cells [27]. While skewed XCI has been found to be associated with immune disease, thyroid disease and cancer [28][29][30], recent studies have found that the frequency of skewed XCI is related to aging, with even higher skewing in older patients with Alzheimer's disease or Parkinson's disease [27,31]. In psychiatric disease studies, X-linked intellectual disability has been reported in association with skewed XCI as well as with X chromosome gene variants in these patients, for example in MECP2, DDX3X, and SMC1A [5,32]. A higher frequency of skewed XCI was also reported in children with autism than in age-matched healthy controls [33]. Together, these findings suggest that skewed XCI may have an important impact on the sex-specific pathogenesis of psychiatric diseases. To explore the link between XCI skewing and SCZ, the aims of the present study were as follows: (1) to investigate XCI skewing in patients with SCZ or major depressive disorder (MDD) and age-matched controls (CNTLs); (2) to identify the association between clinical symptom severity and the degree of skewed XCI in SCZ patients; (3) to study age-related changes in XCI skewing in young, middle-aged and elderly female SCZ patients and compare them with those of matched CNTLs; and (4) to explore whether skewed XCI in SCZ children is transmitted from their parents. Subjects A total of 227 female subjects, aged 8 to 77 years, were enrolled in this project. The study population consisted of 109 SCZ patients, 38 MDD patients, and 80 CNTLs. To investigate the relationship between age (age at sampling) and XCI pattern, all SCZ patients and age-matched CNTLs were subgrouped by age as children (age ≤ 18), adults (age 18-35), and elderly individuals (age ≥ 50), as shown in Table 1.
The MDD patients, who served as psychiatric disease controls for the SCZ patients, were also subgrouped by age as adults (age 18-35) and elderly individuals (age ≥ 50). The SCZ and MDD patients were diagnosed by at least two experienced psychiatrists according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, fourth or fifth edition (DSM-IV or DSM-V, respectively). CNTL subjects were in general good health and without a history of psychiatric disorders or neurological disease. This study was approved by the ethics committee of Beijing Anding Hospital, Capital Medical University. Psychiatric symptoms in SCZ patients were evaluated by the Positive and Negative Syndrome Scale (PANSS) [34]. Based on the severity of clinical symptoms, the SCZ patients were further subdivided into more severe (PANSS scores ≥ 90) and less severe (PANSS scores < 90) phenotypes in some experiments. Depressive symptoms were assessed with the 17-item Hamilton Depression Rating Scale (HAMD-17) [35]. Antipsychotic treatment dosages were converted to olanzapine (mg) by "DDD" (dose equivalents based on defined daily doses) in SCZ (13.42 ± 0.84 mg/day) and MDD patients only treated by escitalopram (14.21 ± 0.87 ma/day). To examine the genetic impact of XCI, parents of 14 pediatric SCZ patients were also enrolled in this study. DNA extraction Whole-blood samples from subjects were centrifuged at a speed of 3000 rpm for 10 min to extract the intermediate layer and separate the leukocytes. Genomic DNA was extracted from leukocytes using a genomic DNA purification kit (Promega, Madison, WI, USA). Leukocytes were processed for DNA isolation as follows: 200 μl of PBMCs was digested with proteinase K in cell lysis buffer at 56°C for 10 min, and then the genomic DNA was extracted by a salting-out procedure and dissolved in nuclease-free water according to the standard manufacturer's protocol. Analysis of skewed XCI The identification of the two X chromosomes depends on the polymorphism display. The most commonly used polymorphism site is the short-tandem repeat (STR) of CAG at exon 1 of the Xq13 HUMARA gene, the human androgen receptor locus for XCI pattern experiments, as previously described [26,28]. The methylation profile of the HUMARA gene, located on the X chromosome, was used to determine XCI ratios as previously described [24]. Duplicated DNA aliquots from each sample were digested with restriction enzymes: HpaII, a methylation-sensitive enzyme, and the methylation-insensitive enzyme RsaI, which was used as a control for input DNA [36]. Fifty nanograms of DNA was digested at 37°C for 2 h with 1 μl of HpaII (New England Biolabs) and 1 μl of RsaI (New England Biolabs). For each sample, a control sample (no HpaII) was similarly prepared using only 1 μl of RsaI enzyme. Male DNA was used as a negative-control sample. The sequences of the primers used were as follows: forward 5′-TCCAGAATCTGTTCCAG AGCGTGC-3′, labeled with 5′-FAM, and reverse 5′-GCTG TGAAGGTTGCTGTTCC TCAT-3′ [24]. The samples were amplified for 35 cycles, including 30 s at 95°C, 60 s at 55°C, and 60 s at 72°C with an initial denaturation at 95°C for 10 min. The PCR products were separated by an ABI3500 Genetic Analyzer (ABI, Thermo Fisher Scientific). The size of the PCR product from each allele was analyzed by GeneMapper v4.1 for the quantification of peak height. The evaluation of XCI skewing was performed as previously described [25,37]. 
The P sup score indicates the proportion of cells with the longer HUMARA allele on the active X chromosome. A and A′ are the peak heights of the longer HUMARA allele from the digested and undigested samples, respectively, and a and a′ are the peak heights of the shorter HUMARA allele from the digested and undigested samples, respectively. For the prevalence of skewing, that is, the proportion of subjects with skewed XCI in each group, the most commonly used cut-offs are ≥ 75:25% [25] and ≥ 80:20% [38]. The degree of skewing (DS) designates the percentage of the preferentially active allele; it does not take into account the direction of skewing but only the degree of deviation from a 50% XCI pattern. DS is calculated using the formula |P sup − 0.5| and is a continuous variable that ranges between 0 and 50%, where 0% indicates a random X inactivation pattern and 50% indicates a completely skewed inactivation pattern (Table 1: genotypic frequencies of subjects enrolled in the study and the prevalence of skewed X chromosome inactivation, by group). The relationship between the XCI skewing of the mother of an SCZ patient and that of the patient herself was assessed by quantitative analysis of XCI ratio transmission as previously described [25]. P mat and P trans are calculated in the same way as P sup , except that A indicates the HUMARA allele shared between mother and daughter. Statistical analysis Continuous variables are described as the mean ± standard error of the mean (SEM), and categorical variables are presented as the number (percentage). The distribution of continuous variables was assessed by the Kolmogorov-Smirnov test; nonparametric data were compared with the Mann-Whitney U test, and categorical data with Fisher's exact test or the chi-squared test. P values below 0.05 were considered statistically significant, with Bonferroni's correction applied for the analyses within each age group (p < 0.0167, i.e., 0.05 divided by 3 age groups). The associations between patient age and skewed XCI in schizophrenia were evaluated using Spearman's correlation and simple linear regression analysis. Univariate and multivariate logistic regression analyses were performed to calculate the odds ratio (OR) per 10-year increase in patient age, and the corresponding 95% CI, in discriminating the presence of severely skewed XCI (≥ 80%) in schizophrenia before and after adjusting for confounding factors, including the PANSS sum score and medication. Patient age and the age of onset were compared by paired-sample tests. Data analyses were performed using SPSS version 23.0 (IBM Corp, Armonk, NY, USA). Results Higher frequency of severe XCI skewing in schizophrenia patients (Fig. 1b). Patient age matters in the relationship between XCI skewing and SCZ As already reported, the frequency of XCI skewing increases with age, particularly in populations over 50-60 years of age [39]. In our study, we found that XCI skewing was significantly correlated with patient age in SCZ patients (r = 0.31, p = 0.002) but not in CNTLs (r = 0.167, p = 0.138) (Fig. 2a). Simple linear regression analysis revealed an increase of 0.62 in the degree of skewed XCI with each 10-year increase in patient age in the SCZ group. Logistic regression analysis showed that, in predicting the presence of severely skewed XCI in schizophrenia, the OR for a 10-year increase in patient age was 1.377 (95% CI: 1.019-1.860, p < 0.05).
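The skewing metrics used throughout these analyses (the corrected allele proportion, the degree of skewing DS, and the 75:25 / 80:20 classification) can be computed from the HUMARA peak heights as in the sketch below. The authors' exact P sup formula is not reproduced in the text, so the normalization shown is the commonly used digestion-corrected ratio rather than a verified reproduction of their equation; note that DS and the skewing classification are symmetric in the two alleles, so they do not depend on whether the digested product is read as the active or the inactive allele.

```python
# Hedged sketch of HUMARA-based XCI skewing metrics (standard corrected-ratio
# form, not necessarily the authors' exact Psup definition). Peak heights are
# hypothetical GeneMapper values.

def xci_metrics(A, A_prime, a, a_prime, cutoff=0.75):
    """A, A_prime: digested / undigested peak heights of the longer allele.
    a, a_prime: digested / undigested peak heights of the shorter allele."""
    d_long = A / A_prime             # corrected representation of the longer allele
    d_short = a / a_prime            # corrected representation of the shorter allele
    p = d_long / (d_long + d_short)  # proportion attributed to the longer allele
    ds = abs(p - 0.5)                # degree of skewing: 0 (random) to 0.5 (complete)
    skewed = max(p, 1 - p) >= cutoff # skewing classification (e.g. 75:25 or 80:20)
    return p, ds, skewed

p, ds, skewed = xci_metrics(A=5200, A_prime=6100, a=1400, a_prime=5900)
print(f"allele ratio = {p:.2f}, degree of skewing = {ds:.2f}, skewed (>=75:25): {skewed}")
```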
After adjusting for confounding factors, the association in schizophrenia patients (OR = 1.369, 95% CI: 1.012-1.853, p < 0.05) remained statistically significant. To investigate the relationship between skewed XCI and patient age, the psychiatric patients and controls were divided into a child group (≤ 18 years), an adult group (19-35 years), and an elderly group (≥ 50 years) (shown in Table 1). Using a cut-off of 75% for XCI skewing, the frequency of skewed XCI in adult SCZ patients (39%) was higher than that of adult CNTLs (12.2%, p = 0.005), whereas no significant differences were found between SCZ patients and CNTLs in the child and elderly groups, as shown in Table 1. For SCZ groups, the frequency of skewed XCI in the adult SCZ patients (39%) was higher than that of the child SCZ patients (15.3%) but not of the elderly SCZ patients (31%). For the CNTL group, a significant higher frequency of severely skewed XCI was found in the elderly CNTLs (23%) than in child (5%) and adult CNTLs (12.2%). Similar results were obtained using 80% as cut-off value for the definition of skewed XCI. However, for MDD groups, no difference in severely skewed XCI was found between adult and elderly MDD patients (Table 1). In addition, the degrees of skewed XCI in SCZ patients in each age group were higher than those in age-matched CNTLs, especially in the adult group (p = 0.002). There was a shift in the curve for the severity of skewed XCI with advanced age in the SCZ group compared with the age-matched CNTL group (Fig. 2c). Association between XCI skewing and clinical symptoms in SCZ We investigated the association between the degree of skewing and clinical features of SCZ and MDD, such as disease onset, duration, clinical symptoms, and medication. In patients with SCZ, we found a higher frequency of severely skewed XCI in the group with a PANSS score of ≥ 90 (Fig. 1b), but we did not observe a significant difference in the correlation between the degree of skewing and PANSS scores, as shown in Table 2. Psychiatric treatment had no significant effect on the XCI pattern in our study, as shown in Table 2. The relationship between patient age and the age of disease onset and the degree of skewing in SCZ showed a significant correlation, as shown in Table 2. We found that XCI skewing was significantly correlated with the age of onset in SCZ patients (r = 0.3, p = 0.002) (Fig. 3a). Logistic regression analysis showed, after adjustments for confounding factors, that the association in Figure 2). To exclude the influence of patient age, we compared the age of onset in the different SCZ age groups. Interestingly, in the child and adult groups, the age of disease onset was older in SCZ patients with skewed XCI ≥ 80:20 than in SCZ patients with skewed XCI < 80:20 (adult group p = 0.014) (Fig. 3c). No correlation was observed between the clinical characteristics and the degree of skewed XCI (Tables 2 and 3). Together, our data suggested that the degree of XCI skewing might play important roles in patient age and onset age, and SCZ patients with severely skewed XCI may have more severe psychiatric symptoms. No genetic transmission of XCI skewing from mothers to daughters with SCZ To investigate whether XCI skewing was genetically transmitted from the mother to the daughter with SCZ, we analyzed whether the SCZ patient and the patient's mother shared a similar XCI skewing pattern. Using qualitative analysis, we examined the incidence of skewed XCI in young patients and their parents. 
Our data showed no difference in the frequency of skewed XCI between the SCZ patients with mothers with skewed XCI and those with mothers with nonskewed XCI (p = 1.0). Using a quantitative analysis by comparing the degree of skewing of the shared X chromosome in mothers and patients, we further confirmed no correlation between the groups, as shown in Fig. 4 (r = 0.214, p = 0.355). These results indicated no genetic transmission of the skewing trait. To investigate whether the XCI skewing in SCZ patients is influenced by mother or father X chromosome, we examined the genetic connect of active X chromosome between the SCZ patients with the patient's parents. Eight of 14 (57%) SCZ patients had the maternal X and 6 of 14 patients (43%) had the paternal X as the predominating active X chromosome. These results imply that the XCI skewing in young SCZ patients was unlikely to be inherited from their parents. Discussion There are many differences in clinical symptoms, the age of disease onset, and antipsychotic treatment response between male and female SCZ patients [3,[40][41][42]. Abnormal X chromosome function may play an essential role in the sex-related phenotype of SCZ [4]. It is unknown whether skewed XCI, as one X chromosome abnormality, might affect female SCZ patients. In this present study, we first showed that SCZ patients had a higher frequency of severely skewed XCI than age-matched CNTLs, and the SCZ patients with severe clinical symptoms showed a higher frequency of severely skewed XCI than those who had less severe clinical symptoms (Fig. 1b). Although XCI skewing has been reported in mammalian cells for decades, the mechanisms of the XCI skewing still remain unknown. There are several hypothesized mechanisms that may result in selection against deleterious alleles [27]. For example, a study found that 7.6% of female intellectual disability patients had extreme skewing (> 90:10) to carry mutations of X-linked genes, such as MECP2, DDX3X, and SMC1A, which might result in selection against the cells with mutated genes carried on the active X chromosome [32]. In addition, this phenomenon also is found in some X-linked diseases, in which carriers are skewed away from the expression of the mutant allele, such as in Wiskott-Aldrich syndrome (WAS) [43]. Genetic mechanisms that may lead to XCI skewing have previously been described in mammals, such as mutations within the XIC such as XIST promoter mutations [44]. However, in our case, genetic transmission of XCI skewing from mother to daughter seems unlikely, since we failed to find a correlation in the degree of skewed XCI between the SCZ patients with mothers with skewed XCI and those with mothers with nonskewed XCI (Fig. 4). Another large-scale investigation of XCI skewing in 502 mother-neonate pairs also showed a similar result, which might reflect that the XCI pattern is not due to a single heritable genetic locus but rather corresponds to a complex trait to be determined [25]. Other hypothetical mechanisms for XCI skewing in humans are stochastic or age-related skewing during the developmental period. A study found that increased skewing with age was a consequence of hematopoietic stem cell senescence [45]. As the age of disease onset in SCZ patients is relatively young, whether the significant acceleration of XCI skewing in SCZ is related to any of these hypothetical mechanisms needs further investigation. Indeed, there might be a secondary nonrandom choice of XCI, or it may occur through nongenetic mechanisms such as aging. 
(Tables 2 and 3 note: the relationships between the degree of skewed XCI and clinical indicators were analyzed by Spearman's correlation, significant at p < 0.025 for the age-group analyses of MDD patients; † positive symptoms on the PANSS, § negative symptoms on the PANSS. Fig. 4 caption: linear regression and correlation of mother versus daughter XCI ratios, y = 0.2448x + 0.07787, r² = 0.1257; the P trans and P mat scores show the proportion of cells having the transmitted (mother's) or the daughter's own allele active in mother-daughter duos, n = 14, r = 0.214, p = 0.355, Spearman's correlation.) To understand whether age is specifically linked to SCZ, we compared XCI skewing between SCZ patients and CNTLs in three different age groups. First, we showed that the degree of skewed XCI in SCZ patients was strongly correlated with patient age and the age of disease onset (Fig. 2a, Fig. 3a). Then, we observed that the level of skewing in adult SCZ patients was similar to the level in elderly CNTLs; the SCZ patients appeared to have a leftward shift of the age-related skewed XCI curve (Fig. 2c). Our data thus suggested an interesting phenomenon: skewed XCI in SCZ patients first appeared, on average, in the child group, with a degree of skewing similar to that seen in the adult CNTL group. The skewing of the XCI ratio seen in the blood cells of aging women is a stable biological phenomenon [46], and our data suggested that accelerated aging might be one of the pathological mechanisms of SCZ in female patients. These data are supported by other studies showing that SCZ is an accelerated-aging disease [47][48][49][50]. The hypothesis is supported in several respects, including age-related biological markers, cognitive studies, and imaging studies [47][48][49]. However, most early studies on SCZ and aging were limited to elderly individuals or to one age group, such as young adults. Our study included three age groups of SCZ patients and CNTLs, from children to adults and elderly individuals, which provided a more complete picture against which to evaluate the hypothesis. To examine whether the severity of skewing is associated with the clinical symptoms and phenotypes of SCZ, we analyzed the degree of skewed XCI against the total PANSS score, positive symptoms, negative symptoms, and the age of disease onset in SCZ patients (Table 2). First, we showed more severely skewed XCI in young SCZ patients than in age-matched CNTLs, using a cut-off of 80% (Table 1). Second, we found that the severity of XCI skewing in the SCZ patients was associated with a later onset age (Fig. 3). However, we did not find a significant association between XCI skewing and disease duration in SCZ patients, nor did we find a similar correlation in MDD patients, as shown in Tables 2 and 3. Within the same patient age group, the SCZ patients with severely skewed XCI appeared to have a later age of onset than those with random or mildly skewed XCI (Fig. 3c). While it is unclear whether severely skewed XCI is also related to the typically later onset age of female SCZ patients compared with males, we did not see such an effect of onset age on XCI skewing in MDD, which usually shows no sex difference in the age of disease onset. Further support comes from studies of X-linked diseases, which showed that XCI skewing can lead to late-onset disease in patients with X-linked sideroblastic anemia, scleroderma, and common variable immunodeficiency [28,51,52].
Although the mechanisms of skewed XCI severity in late-onset diseases were very much disease-specific, such as hematopoietic stem cell loss [53] and variable immunodeficiency [28], some shared mechanisms were also proposed as the consequence of agerelated DNA mutation and gene selection [54,55]. Because the sample size was relatively limited in the current study, the relationship between the severity of skewed XCI and disease onset age in SCZ still needs to be tested in a large-scale study in the future. Last, we found that the SCZ patients with severe symptoms (PANSS score ≥ 90) had more extremely skewed XCI (Fig. 1b). Although there was no direct evidence of skewed XCI influencing clinical symptoms in SCZ patients, other studies have demonstrated some relationships between X chromosome inactivation and SCZ-like symptoms. For example, patients with Klinefelter syndrome, characterized by a 47,XXY chromosomal pattern, often express schizophrenia symptoms, including negative symptoms, positive symptoms, and general psychopathology, as evaluated by the PANSS, but healthy controls with a 46,XY chromosomal pattern do not [56]. This finding may be related to the changes in brain structure. Such a hypothesis has been supported by reduced gray matter volume in 47,XXY males [57], and similar reduced gray matter volume has also been found in SCZ patients with severe positive symptoms [58]. In addition, skewed XCI is also related to X-linked gene mutations, such as hemophilia A and Rett syndrome, and, in particular, when the mutated genes are located on the active X chromosome, the patients often have more severe symptoms [59][60][61]. Many X-linked gene polymorphisms have been associated with the symptoms of SCZ [4]. For instance, MAOA gene polymorphisms were positively correlated with aggressive and negative symptoms evaluated by the PANSS [62], while carriers of the high-activity allele of MAOA present higher scores on behavioral measures of impulsivity than carriers of the low-activity allele of MAOA [63]. The MECP2 rs2734647 polymorphism might lead a lower expression level of MECP2 and more aggressive symptoms [64]. However, whether those X-linked gene polymorphisms in SCZ are associated with the relationship between skewed XCI and PANSS scores needs to be investigated in the future. To investigate whether the observed skewed XCI in SCZ patients was inherited from the patients' mothers, we examined XCI skewing in 14 mother-daughter pairs in which the daughter had SCZ. We did not observe the transmission of the XCI skewing pattern from the mother to the daughter (Fig. 4). Our study, for the first time, investigated the XCI skewing in SCZ and suggested no inheritable XCI skewing in SCZ. There were several limitations in this study. First, as XCI skewing is highly related to age, in this study, we included three age groups of SCZ patients and CNTLs to examine the disease-related XCI skewing from childhood to old age. However, we did not have age-matched MDD patients as a disease control group in this portion of the study; therefore, whether the skewed XCI in young SCZ patients is diseasespecific needs to be further investigated. Second, in the current study, we examined XCI skewing only in the DNA isolated from leukocytes based on previous studies suggesting that skewed XCI in the periphery may reflect the X chromosome inactivation pattern in other tissues, including the brain [65]. 
In addition, it is known that systemic changes are involved in age-associated XCI, and the circulatory system and blood are involved in some of those changes [66]. The investigation of isolated DNA from multiple tissues of SCZ patients might be needed. Third, in the current study, we examined only the X chromosome inactivation pattern without examining the number of X chromosome based on previous studies demonstrating the rarity of skewed XCI in X chromosome aneuploidy samples [67]. The examination of X chromosome number for SCZ patients might be needed. Conclusion To our knowledge, this is the first study to demonstrate a higher frequency of skewed XCI in female SCZ patients than in matched CNTL subjects, and the skewed XCI in SCZ patients was significantly associated with clinical psychosis symptoms. Furthermore, we found a significantly earlier onset of skewed XCI in adult SCZ patients than in age-matched CNTLs. Our findings suggested that skewed XCI might involve the disease onset as well as severity of clinical symptoms in female SCZ. The outcomes from this study provided a novel insight of the sex-specific biological mechanisms of SCZ. To support the effectiveness of XCI status as a biomarker of SCZ, further research is needed on larger female groups.
2020-07-18T13:40:42.944Z
2020-07-17T00:00:00.000
{ "year": 2020, "sha1": "d6c4ea0985ffa8af92aef0cc9502ba054f2dc64d", "oa_license": "CCBY", "oa_url": "https://bsd.biomedcentral.com/track/pdf/10.1186/s13293-020-00315-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85cad697fed7b6a6a049b7ced8bb81e49b12da1f", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
6314561
pes2o/s2orc
v3-fos-license
SmileToPhone : A Mobile Phone System for Quadriplegic Users Controlled by EEG Signals Quadriplegic people are unable to use mobile devices without the aid of other persons which can be devastating for them both socially and economically. This has motivated many researchers to propose hardware and software solutions that operate as intermediates between the impaired users and their devices: accessibility switches, joysticks and head movements. However, the efficiency of these tools is limited in some conditions. To alleviate this problem, we propose to exploit electroencephalographic signals captured via an adequate headset. More precisely, the user is asked to perform a facial expression that will be recognized by the system through the analysis of the EEG signals. Several facial expressions are offered and each one corresponds to a command wirelessly sent to the mobile device and executed. This Brain Computer Interface based system is called SmileToPhone. It enables the quadriplegic patients to use their smartphones in an easy way with a minimum of effort and with respect to studied Human-Computer-Interaction requirements. The system includes the main functionalities of a smartphone such as making calls and sending messages. The evaluation of the system usability showed that most of the time, users were able to use the different functionalities of the system in an easy way. The current results are encouraging and motivating to add more features to the system. Keywords—Quadriplegia; EEG; facial expression; BCI system; HCI I. INTRODUCTION Mobile devices like tablets and smartphones are transforming our life by the emerging of new technologies and mobile applications offering new possibilities for communicating, working, shopping, etc.However, people suffering from disabilities particularly due to a Spinal Cord Injury (SCI), find themselves unable to follow this flow of technologies in continuous progress, which can be devastating for a person both socially and economically.Furthermore, a study reveals that the most common age of injury is 19 years and that a large percentage of spinal cord injury patients are under 30 years old (except in Japan where the majority of the patients are over the age of 50 years) [1].Physical difficulties, to mobility and use of basic technology yield to the exclusion of many people from participation in society, especially during this period of life between the age of 19 and 30.Hence the need for a system that allows mobility impaired persons to benefit from the available technologies and services likewise healthy people.Several applications are proposed in the literature that aim to help mobility impaired users to make phone calls [2], [3], use computers [4], [5], play games [6], prepare the meal and other functions [7].The key idea is to use a hardware operating as an interface between the user and the device to be manipulated.The interface could be a joystick that the user moves in different directions using one finger [4], [7] or using his lips [5] in order to navigate or select a functionality on the device.Accessibility switches were also exploited in the Tecla product to transmit commands to a smartphone or a tablet via a Bluetooth connection by using the user's hand or a finger [4].Sip and puff sensors allow the user to puff for clicking and selecting a functionality.In addition to a lip position sensor, a push switch and voice commands are exploited in the Quadstick product for playing games [6].A different idea for moving a cursor and selecting the 
desired item on an android device is the one implemented in Sesame application [3].It consists of tracking the head movements through the camera of the device, recognizing them using computer vision algorithms, and associating each movement to a defined action on the screen.Applications based on the Brain Computer Interface (BCI) are also proposed to help mobility impaired persons using their mobile devices: the idea consists of analyzing the brain signals to recognize the action to be executed on the device [2], [7]. In the present work, we are interested in developing a mobile phone system to people suffering from a special spinal cord injury which is Quadriplegia (also called Tetraplegia).According to the severity of the injury, quadriplegia yields to varying levels of functional loss in the neck, trunk, and upper and lower limbs [8], whereas quadriplegic patients have a full control of the head and the facial organs.As a consequence, the use of materials such as joysticks and push switches is not appropriate for our target users.Furthermore, puffing may be tiring; in addition to the fact that it requires a wired connection to the device.A number of requirements should be accounted for when designing a mobile phone system for quadriplegic patients.For instance, a physical movement from other than the head and the face of the user are discouraged and even not possible.Besides, in order to ensure a maximum level of usability of the system, it is preferred that the material used for transmitting the commands to the mobile device be wirelessly connected.These requirements are perfectly satisfied in Sesame application [3].However, it presents some limitations restricting its use to some conditions: since the head movements are captured via the camera of the device, it is very sensitive to the brightness level present in the room.Hence, the sesame phone should be in a well lit room without being exposed to a light source.This compromises the comfort of the user when he needs to be within a slightly bright room and restricts the usage of the phone in some areas, especially when the user is out of home and has no control on the lightning level.Another issue is that the unlocking of the phone is performed using the voice, by recognizing the sentence 'Open sesame'.The recognition may fail when the user is in a noisy environment.Neurophone [2] is another phone system that satisfies the aforementioned requirements.It is a BCI based system that exploits the P300 brain potential to select the photo of the contact that the user wants to call.The idea of the Neurophone application is to sequentially flash in a random order the photos stored in the address book contacts.When the flashed photo corresponds to the contact to call, a P300 potential is evoked by generating a peak after a stimulus.Although the idea of using brain signals to send commands to the phone is interesting and ensures flexibility to the user, the P300 depends on the levels of attention and arousal [9].In addition, a more accurate way that does not require a prior training stage and allows the understanding of the user's intent, is to interpret his facial expressions through his brain signals.Furthermore, the Neurophone application restricts the phone calls to the contacts stored in the address book and whose photos are available.Given the high degree of autonomy offered by BCI technology and the success it achieved through several available systems [2], [7], we resort to the exploitation of the brain signals to manipulate the 
proposed mobile phone system.More precisely, the brain signals are used to recognize a facial expression performed by the user, which is then translated to an action to be executed on the mobile device.Our choice of the analysis of the brain signals is motivated by their accuracy and the quasi real-time of their processing; whereas the use of the camera to capture the facial expression followed by an analysis step based on computer vision algorithms is compromised by the lightning of the room as mentioned earlier. The contribution of our mobile phone application, named SmileToPhone (referring to the smiling facial expression), is not restricted to only phone calls from the contacts of the address book, but also includes dialing a phone number, performing an emergency call (by dialing a number or selecting a predefined number), reading and writing messages, setting alarms and also adjusting some settings regarding the way in which the commands are sent to the device.It also includes a fault management module offering the possibility to the user to reset his inputs in case of error, and allowing an additional flexibility to the application.The remainder of the paper is organized as follows.Section II describes the proposed system by detailing the process of brain signals acquisition, the system features and the HCI requirements specific to the quadriplegic people and taken into consideration in the design phase.The evaluation results of the system usability are presented in Section III.Finally, conclusions and future work are drawn in Section IV. II. PROPOSED MOBILE PHONE SYSTEM FOR QUADRIPLEGIC USERS The SmileToPhone system consists of two main parts: the first part aims to analyze the brain signals in order to recognize the facial expression performed by the user.The second part is an Android application installed on the patient's smartphone that interprets the facial expression as a function to be executed.The high level architecture of SmileToPhone system is illustrated in Figure 1. A. Brain signals acquisition and analysis Thanks to the interactions between billions of neurons present in the brain, people are able to think, move, feel emotions, and more.All these feelings and thoughts start in the brain and are transmitted through neurons to other neurons or other types of cells such as muscles, via electrical signals.The electrical activities of the neurons emerge in the brain surface and thus can be captured by placing electrodes on standard positions on the scalp according to the 10-20 international system [8].The recording of the electrical activity of the neurons is called electroencephalography (EEG).Several cap-like devices composed of electrodes and allowing the acquisition of the EEG signals exist [10].They differ by their external appearance, the number of electrodes, their applicability (medical or non-medical use), cost, and other characteristics. 
Taking into account the features of the proposed system and the targeted users, some constraints regarding the choice of the EEG headset should be accounted for.In one hand, the cost of the headset should not be expensive and its placement should be relatively easy and does not require a training stage.In the other hand, the acquisition and the interpretation of the signals should ensure a minimum of accuracy that allows a satisfying level of the system usability.Several low-cost EEG devices are commercially available in the market.A survey of most of them along with a comparison are conducted in [11], where the Emotiv Epoc headset [12] was evaluated as the most usable low-cost device.More precisely, a comparison between the Emotiv Epoc headset and the Neurosky headset was conducted in several works, confirming the outperforming of the former one [11], [13].The Emotiv Epoc headset has 14 electrodes located on AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 positions as shown in Figure 2. Eight of these EEG sensors are positioned around the frontal and prefrontal lobes to collect and record signals from facial muscles and eyes.Once the brain signals are collected, they are processed in order to extract the relevant features allowing to recognize the facial expression performed by the user.It is worth pointing out that a Software Development Kit (SDK) for research is available along with the Emotiv Epoc headset and Fig. 2. Positions of the electrodes in the EPOC headset [15] offers the processing of signals, which is mainly composed of the following 3 stages: 1) Preprocessing: The aim of this stage is to make the acquired brain signals suitable for analysis by amplifying them and removing the electrical noise to enhance their quality.The signals are then digitized.2) Feature extraction: In this stage, suitable features helping to recognize the user's facial expression are extracted from the digitized brain signal samples.3) Features classification: This step can also be denoted as the translation algorithm; it is comprised mainly of a signal translation procedure that converts the set of brain signal features into a set of output signals to control a device.This translation is accomplished using conventional classification procedures [14]. Once the facial expression is identified, a command is associated to it in order to control the mobile phone device. B. Command identification The system can recognize up to 12 facial expressions including smile, left wink, right wink, blink, raised eyebrows (surprise) and some others.We associate some of these facial expressions to specific commands that allow the functioning of the desired feature in the mobile phone.The main commands consist of: • Unlocking the phone, • Selecting an icon, • Moving up/down to navigate through icons. 
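The three-stage processing chain and the command mapping described above can be illustrated with a generic sketch. This is not the Emotiv SDK (which exposes its own expression-detection suite); it is a minimal stand-in using scipy and scikit-learn, with hypothetical channel data, window length, expression labels, and command names, intended only to show how filtered EEG/EMG features could be classified into facial expressions and then dispatched as phone commands.

```python
# Generic sketch: band-pass filtering -> feature extraction -> classification
# -> command dispatch. Not the Emotiv SDK; all names and values are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

FS = 128                      # Emotiv Epoc sampling rate (Hz)
COMMANDS = {"smile": "select_or_unlock", "wink_left": "move_up",
            "wink_right": "move_down", "neutral": "no_op"}

def bandpass(window, low=1.0, high=40.0, fs=FS):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, window, axis=-1)

def features(window):
    """Simple per-channel features from a (channels, samples) window."""
    filtered = bandpass(window)
    return np.concatenate([filtered.var(axis=-1),                     # signal power
                           np.abs(np.diff(filtered)).mean(axis=-1)])  # waveform roughness

# Hypothetical training data: labelled 1-second windows from 14 channels
rng = np.random.default_rng(0)
labels = ["smile", "wink_left", "wink_right", "neutral"] * 25
X = np.array([features(rng.normal(size=(14, FS))) for _ in labels])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def handle_window(window):
    expression = clf.predict([features(window)])[0]
    return COMMANDS[expression]          # e.g. "move_down" sent to the phone over Bluetooth

print(handle_window(rng.normal(size=(14, FS))))
```

The dictionary of expression-to-command assignments mirrors the customization offered in the Settings function, so remapping an expression only changes one table rather than the classifier.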
A facial expression is attributed by default to each of these commands: smiling to unlock the phone and to select an icon, winking left to move up and winking right to move down.As will be explained later in the paragraph II-D, the keypad is required to be simple with large icons.Consequently, the keypad of our application (for the dialing function) and the icons organization are designed to be vertical.Moving through icons is only in up/down directions, as shown in Figure 3.A minimum number of facial expressions is exploited in order to facilitate their use and memorization by the quadriplegic.However, it is to be noted that the user has the possibility to customize the facial expressions associated to the commands through the function 'Settings' of our application.The remaining features offered by SmileToPhone application are described below. C. System features The application includes the following main functions as shown in the use case diagram (see Figure 4); the first function is the Emergency call; it allows the patient to ask for help through a predefined phone number or a new one that he dials.The second one is the Call function; it helps the patient to call any number from his contacts or enter a new number by using a keypad appropriate for the mobility impaired users.The requirements related to the design of the keypad and more generally, the Human-Computer-Interaction aspects will be discussed in the paragraph II-D.The third function is the Message function: by using it, the patient can read his messages and write a new message with a special keyboard.As a fourth available feature, the user has the possibility to set an alarm. Another important feature of the SmileToPhone system is that it supports a fault management module allowing the user to reset his entry after an error. It is worth noting that our system is designed in such a way it can be easily extended to support additional functionalities without altering to the existing implementation. All the features are presented to the user with respect to Human-Computer-Interaction (HCI) requirements defined in [16] and described in the following. D. HCI user requirements The HCI of the proposed system is based on a study conducted in [16] on 11 participants suffering from mobility impairments.The participants were men and women of different ages and professions.The study aimed to observe how the mobility impaired users interact with computers and mobile devices and what are the limitations they face.A questionnaire was also addressed.Some of the findings of the study are listed below and are taken into consideration in the implementation. • Graphic icons should be large enough to be easily manipulated by users suffering from quadriplegia. • The text should be clear. • It should be easy to read the interface at some distance that allows operation from the wheelchair.• The screen should be vertically positioned. As can be seen in Figure 5, the screens of the Smile-ToPhone system are vertically positioned, with large icons and clear text.The list of contacts also appears in a vertical direction.The keypad used for dialing a number is simple, clear and easy to move up and down through it (by right winking and left winking respectively). III. 
USABILITY EVALUATION In order to evaluate the usability of the proposed system, a usability study is conducted in which five healthy participants were asked to perform a set of tasks.It was not possible to make the study on quadriplegics.The focus was on the main features of the proposed system, which are: make an emergency call, make a call and send a message. In the emergency call task, participants have to select the emergency call icon from the home interface and make an emergency call.Two sub-tasks are considered in this task; making an emergency call when a number is already saved in the emergency call list and making an emergency call with a new phone number.In the make call task, participants are invited to select the make call icon in order to be able to make a call.Also in this task, two sub-tasks are considered; making a call to a phone number from the list of contacts and making a call by entering a new number.In the send message task, participants have to send a message in two ways; by selecting a predefined message and by writing a new message. As results, it was observed that the executions of the different commands using the corresponding facial expressions were instantaneous, except for some isolated cases where a user had to perform a facial expression twice in order for the related command to be executed. IV. CONCLUSIONS AND FUTURE WORK In this paper, we were interested in facilitating the social integration of users suffering from quadriplegia.We proposed a mobile system that allows them to use the smartphones effectively.Taking into account that physical movements are discouraged and sometimes not possible for most of the quadriplegic patients, the facial expressions were exploited to control the smartphone.In that way, quadriplegics can use their smartphones with a minimum effort.For that sake, the mobile application consists of five main functionalities; make an emergency call, make a call, send message, customize the facial expressions and set an alarm.HCI requirements have been taken into account when designing the system.As a future work, the aim will concern the adding of more functionalities to the system allowing the full control of the smartphone by quadriplegics.
2017-06-30T02:37:49.628Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "af57c901dce29aea25c1bd21547861fb1ba9e6d4", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume8No5/Paper_66-SmileToPhone_A_Mobile_Phone_System.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "af57c901dce29aea25c1bd21547861fb1ba9e6d4", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
245885998
pes2o/s2orc
v3-fos-license
Thermodynamics and Catalytic Properties of Two Novel Energetic Complexes Based on 3-Amino-1,2,4-triazole-5-carboxylic Acid For energetic materials (EMs), the key point of the present research is to improve the energetic property and reduce sensitivity. In this work, two new energetic complexes, Mn(atzc)2(H2O)2·2H2O (1) and Zn(atzc)2(H2O) (2) (Hatzc = 3-amino-1,2,4-triazole-5-carboxylic acid), were synthesized by solvent evaporation and diffusion methods, respectively. The structural analyses illustrate that 1 and 2 exhibit zero-dimensional structural units, which are linked by hydrogen-bonding interactions to give three-dimensional supramolecular architectures. For complexes 1 and 2, the detonation velocities (D) are 10.4 and 10.2 km·s–1 and detonation pressures (P) are 48.7 and 48.6 GPa, respectively. They are higher than most of the reported EMs, which present prominent detonation characteristics. In addition, two complexes can accelerate the thermal decomposition of ammonium perchlorate and exhibit excellent catalytic activity. Therefore, the two complexes can serve as a new class of promising EMs, which have potential application in the design of new high-efficiency solid catalysts. ■ INTRODUCTION Energetic materials (EMs) are one of the most important components of organics with an irreplaceable role in solid propellants, which possess special properties of energy storage and stability. 1−3 However, the currently used EMs, such as hexanitrohexaazaisowurtzitane (CL-20), 4 1,3,5-triamino-2,4,6trinitrobenzene (TATB), 5 and 1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), 6 have some limitations due to their high sensitivity or relatively low catalytic activity. Therefore, it is challenging to design and synthesize EMs with high catalytic activity, high heat of detonation, low sensitivities, and environmental acceptability. 7−9 Recently, nitrogen-rich organic materials have attracted immense attention because they produce eco-friendly N 2 gas and release enormous energy during the process of decomposition. 10−12 Nevertheless, the conflict of high energy and oxygen balance cannot be resolved completely. The valid strategy is to construct stable ligands containing poly-nitrogen and oxygen-rich fragments. 13,14 3-Amino-1,2,4-triazole-5-carboxylic acid (Hatzc) is one of the high energetic ligands, which possesses a high nitrogen content (N % = 43.8) and possesses high enthalpy of formation from the powerful energy release of C−N, N−N, and NN bonds. 15,16 What is more, Hatzc also presents O atoms from carboxylic groups, which can provide a sufficient oxygen content during the explosion. 17 Ammonium perchlorate (AP) is a commonly used oxidant, which is widely used as the main component of solid rocket propellants. 18 The thermal decomposition performance of AP can affect the combustion behavior of solid propellants directly. 19 In the past, the combustion catalysts in propellants were mainly composed of metal oxides, inorganic salts, and organic salts, 20,21 and these materials were mostly inert catalysts which do not provide energy and even lose a part of the heat, reduce the performance of the propellant during the combustion process. 22 However, the energetic complexes can provide a diverse structure and greater heat of formation. What is more, they can also provide relatively high heat and fresh metal oxides on the propellant surface, which may improve combustion performance. 23 Therefore, the energetic complexes can be applied as an additive to the combustion catalytic of propellants. 
Based on the above considerations, two energetic nitrogen-rich metal complexes, Mn(atzc)2(H2O)2·2H2O (1) and Zn(atzc)2(H2O) (2), were designed and prepared.

Crystal structure of Mn(atzc)2(H2O)2·2H2O (1). Details of the data collection and structure refinement for 1 are summarized in Table S1. The asymmetric unit of 1 comprises one crystallographically independent Mn(II) ion, two atzc−, two coordinated water molecules, and two free water molecules (Figure 1). Each Mn(II) ion displays a slightly distorted octahedral geometry. The equatorial plane is defined by two oxygen atoms (O1 and O1i) and two nitrogen atoms (N4 and N4i) from two atzc− ligands, and the axial positions are occupied by two oxygen atoms (O3 and O3i) from two coordinated water molecules. The bond length of Mn1−O3 [2.209(3) Å] is slightly shorter than that of Mn1−O1 [2.246(3) Å] (Table S2). Furthermore, abundant hydrogen bonds (Table S3) link the mononuclear units into a three-dimensional supramolecular architecture.

Crystal structure of Zn(atzc)2(H2O) (2). Single-crystal structure analysis reveals that complex 2 crystallizes in the monoclinic space group P21/c (Table S1). The asymmetric unit consists of a Zn(II) ion, two atzc−, and one coordinated H2O molecule. As shown in Figure 3, the Zn ion is five-coordinated by three O atoms (O1, O4, and O5) and two N atoms (N1 and N5), forming a distorted tetragonal pyramidal geometry. Among them, the two N atoms (N1 and N5) and two O atoms (O1 and O4) are coplanar, and Zn1−O5 occupies the axial site with an O5−Zn1−O1 bond angle of 93.5(2)°. The Zn−N and Zn−O bond lengths vary from 1.961(8) to 1.986(7) Å and from 1.971(5) to 2.167(5) Å, respectively (Table S2). The distorted tetragonal pyramidal motifs are likewise linked by hydrogen bonds into a three-dimensional supramolecular network.

Thermal Stability. The thermal decomposition of the complexes, an important property for EMs, was studied by thermogravimetric (TG) experiments. The TG curves suggest that 1 and 2 undergo two weight-loss stages (Figure S1). They begin to decompose at 120 and 140 °C with endothermic peaks at 225 and 215 °C, corresponding to the expulsion of coordinated and free water molecules. The 18.4% weight loss of complex 1 at 120−240 °C is attributed to the release of water molecules (calcd: 18.9%). Complex 2 loses its coordinated water molecule in the temperature range of 195−251 °C with a weight loss of 4.9% (calcd: 5.3%). The main frameworks then collapse with exothermic peaks at 425 and 515 °C, respectively. Complexes 1 and 2 are completely converted to MnO2 and ZnO with residue weights of 24.1 and 24.3%, in agreement with the calculated values of 25.2 and 24.5%, respectively.

Energetic Properties. The enthalpies of formation (ΔfH°) of the two complexes were calculated by a Hess thermochemical cycle and deduced as 0.21 and 0.56 MJ·kg−1, respectively. The detonation velocity (D) and detonation pressure (P), the key detonation characteristics of EMs, were estimated with the EXPLO5 code (Table S4), as has been done for previously reported energetic metal−organic frameworks. 25 For complexes 1 and 2, the D values are 10.4 and 9.9 km·s−1 and the P values are 48.7 and 47.5 GPa, respectively, higher than those of most reported EMs. Impact and friction sensitivities of 1 and 2 were measured to assess their safety (Table 1). The impact sensitivity values of 1 and 2 are both greater than 40 J, and the friction sensitivity values are both higher than 360 N, classifying the complexes as "insensitive". 26 Complexes 1 and 2 have low sensitivity owing to the presence of a large number of hydrogen bonds in their structures.
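As a quick cross-check of the TG assignments above, the sketch below recomputes the theoretical water-loss fractions of 1 and 2 from their formula masses; the only assumption is that the deprotonated ligand atzc− has the composition C3H3N4O2.

```python
# Minimal sketch: calculated water-loss fractions for complexes 1 and 2.
# Assumption: atzc- = C3H3N4O2 (deprotonated Hatzc); atomic masses are standard values.
M = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Mn": 54.938, "Zn": 65.38}

def mass(formula):
    """Molar mass from a dict of element counts."""
    return sum(M[el] * n for el, n in formula.items())

atzc = mass({"C": 3, "H": 3, "N": 4, "O": 2})   # deprotonated ligand
h2o = mass({"H": 2, "O": 1})

complex_1 = M["Mn"] + 2 * atzc + 4 * h2o        # Mn(atzc)2(H2O)2·2H2O
complex_2 = M["Zn"] + 2 * atzc + 1 * h2o        # Zn(atzc)2(H2O)

print(f"1: water loss = {100 * 4 * h2o / complex_1:.1f} %")  # ~18.9 %, as quoted above
print(f"2: water loss = {100 * 1 * h2o / complex_2:.1f} %")  # ~5.3 %, as quoted above
```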
For complexes 1 and 2, the excellent energetic properties and low sensitivity are attributed mainly to intermolecular and intramolecular hydrogen bonding. 27

Effects on the Thermal Decomposition of AP. To examine the effect of complexes 1 and 2 on the thermal decomposition of AP, target samples were prepared by mixing AP and the title complexes at a mass ratio of 1:3. The measurements were carried out by differential scanning calorimetry (DSC) over 30−450 °C at a heating rate of 10 °C·min−1 in a static air atmosphere with Al2O3 as a reference. The activation energy (Ea) and pre-exponential factor (A) of thermal decomposition for AP and for AP with the complexes were determined at four heating rates of 5, 10, 15, and 20 °C·min−1 by Kissinger's method 29 (Figures S2−S4). Figure 5 shows the DSC curves of AP, AP with 1, and AP with 2. The endothermic peak of pure AP at 245 °C arises from its phase transformation. The exothermic peaks at 290 and 442 °C correspond to the low-temperature and high-temperature decomposition processes, with heat releases of 0.735 and 0.787 kJ·g−1, respectively. With 1 or 2 added, the phase transition of AP is essentially unchanged, but the exothermic behavior changes significantly. For AP with 1, the exothermic process that spans 250−450 °C for pure AP narrows to 255−345 °C. For AP with 2, the two exothermic peaks merge into a single peak at 342−370 °C. This indicates that, at the same heating rate, AP decomposes over a shorter time in the presence of the complexes. Moreover, the decomposition heat increases to 1.916 kJ·g−1 for AP with 1 and 1.568 kJ·g−1 for AP with 2, significantly higher than the corresponding value for pure AP. Clearly, AP decomposes completely in a relatively short time and releases considerably more heat in the presence of the title complexes. It can be inferred that the main skeleton of the ligand releases a large amount of heat during decomposition and that the formation of metal oxides at the molecular level on the propellant surface may contribute to the catalytic effect. 28 As shown in Table 2, the thermal decomposition peak temperature, activation energy (Ea), and pre-exponential factor (A) were determined by DSC for AP (Figure S2), AP with 1 (Figure S3), and AP with 2 (Figure S4) at different heating rates. The increases in activation energy and pre-exponential factor reflect the kinetic compensation effect. The ratio of Ea to ln A can be used to describe reactivity; 30 generally, a larger ratio means greater stability of the reactant. The Ea/ln A values of AP with 1 and AP with 2 are 13.56 and 14.01, respectively, both smaller than that of pure AP. Both complexes therefore accelerate the thermal decomposition of AP, with 1 showing the stronger catalytic effect.
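Kissinger's method, used above to obtain Ea and A, fits ln(β/Tp²) against 1/Tp across heating rates. The sketch below illustrates that fit; the peak temperatures are hypothetical placeholders, not values from this work.

```python
# Minimal sketch of Kissinger's method: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp).
# The peak temperatures below are hypothetical placeholders, NOT data from this study.
import numpy as np

R = 8.314                                   # J mol^-1 K^-1
beta = np.array([5, 10, 15, 20]) / 60.0     # heating rates in K/s (5-20 C/min as above)
Tp = np.array([610.0, 622.0, 630.0, 636.0])  # hypothetical DSC peak temperatures, K

# Linear fit of ln(beta/Tp^2) against 1/Tp
y = np.log(beta / Tp**2)
slope, intercept = np.polyfit(1.0 / Tp, y, 1)

Ea = -slope * R                 # activation energy, J/mol
A = Ea / R * np.exp(intercept)  # pre-exponential factor, s^-1

print(f"Ea = {Ea / 1000:.0f} kJ/mol, ln A = {np.log(A):.1f}")
print(f"Ea / ln A = {Ea / 1000 / np.log(A):.2f}")  # ratio used above to compare reactivity
```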
Compared with other EMs proposed as AP decomposition catalysts, 18,23 these complexes offer two advantages. First, the high-energy ligands in the complexes increase the decomposition heat and favor the thermal decomposition of AP; second, the formation of metals and metal oxides at the molecular level on the propellant surface during decomposition of the complexes may contribute to their catalytic effect.

■ CONCLUSIONS In summary, two new energetic complexes, Mn(atzc)2(H2O)2·2H2O (1) and Zn(atzc)2(H2O) (2), were synthesized by solvent evaporation and diffusion methods, respectively. Both 1 and 2 exhibit powerful detonation performance and low sensitivity, which makes these new complexes potential EMs. Their superior detonation properties also underpin their accelerating activity toward the thermal decomposition of AP, so they are expected to be candidates for energetic solid catalysts.

■ EXPERIMENTAL SECTION Chemicals and Apparatus. All chemicals were commercially available and used as purchased (Table 3). Elemental analyses (C, H, and N) were performed on a Vario EL III analyzer. Infrared spectra were obtained from KBr pellets on a BEQ VZNDX 550 FTIR instrument within the 400−4000 cm−1 region. 13C NMR spectra were recorded on a Bruker Avance III spectrometer at 100 MHz, with chemical shifts (in parts per million) referenced to dimethyl sulfoxide (DMSO). DSC and TG analyses were carried out on a NETZSCH STA 449 C simultaneous thermal analyzer at a heating rate of 10 °C·min−1 under static air. D and P, the detonation characteristics of EMs, were estimated with EXPLO5 v6.01. 31,32 The densities of the complexes were measured with a pycnometer. The heats of formation were obtained from oxygen bomb calorimetry combined with a Hess thermochemical cycle. Sensitivity to impact stimuli was determined with a fall hammer apparatus using the standard staircase method and a 2 kg drop weight, and the results were reported as the height for 50% probability of explosion (h50). Friction sensitivity was determined on a Julius Peters apparatus following the BAM method. Diffraction data for 1 and 2 were recorded on a Bruker/Siemens SMART APEX II CCD diffractometer with graphite-monochromated Mo Kα radiation (λ = 0.71073 Å) at 293(2) K. Cell parameters were retrieved using the SMART software and refined using SAINTPLUS 33 for all observed reflections. Data reduction and correction for Lp and decay were performed using the SAINTPLUS software. Absorption corrections were applied using SADABS. 34 All structures were solved by direct methods using the SHELXS program of the SHELXTL 35 package and refined with SHELXL. 36 Experimental details of the structure determinations are summarized in Table S1, selected bond lengths and bond angles are presented in Table S2, and hydrogen-bonding parameters are listed in Table S3.

Synthesis of Complexes. Mn(atzc)2(H2O)2·2H2O (1): compound 1 was synthesized by the solvent evaporation method. Hatzc (6.4 mg, 0.05 mmol) was dissolved in NaOH solution (1.0 mol·L−1, 1.0 mL). The mixture was diluted with 5 mL of distilled water and 5 mL of EtOH, and the pH was adjusted to 6.0 with HCl solution (1.0 mol·L−1). MnCl2 (6.3 mg, 0.05 mmol) was dissolved in distilled water (5.0 mL) and added to the above mixed solution. The reaction mixture was filtered, and the filtrate was left undisturbed at room temperature. Colorless crystals of 1 were obtained after 5 weeks (2.7 mg, yield: 43%, based on Mn2+). Anal. Calcd: C, 18.

Zn(atzc)2(H2O) (2): compound 2 was synthesized by the diffusion method.
Hatzc (6.4 mg, 0.05 mmol) was completely dissolved in water (4 mL), and this solution was carefully placed at the bottom of a test tube. An ethanol−water solution (v/v = 1:1, 6 mL) was then layered on top of it. Finally, Zn(NO3)2·6H2O (29.8 mg, 0.1 mmol) was dissolved in EtOH (4 mL) and carefully layered on top. The tube was allowed to stand at room temperature for 4 weeks, whereupon colorless crystals of 2 formed in 39% yield.

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.1c06052. It contains crystal data and structure refinement details; selected bond lengths and bond angles; hydrogen-bond lengths (Å) and angles (°) for 1 and 2; the calculation parameters for D and P; TG curves of complexes 1 and 2; DSC curves of AP; DSC curves of AP with complex 1; and DSC curves of AP with complex 2 (PDF).
Defining distribution and habitat use of west‐central Florida’s coastal sharks through a research and education program Abstract Identifying critical habitat for highly mobile species such as sharks is difficult, but essential for effective management and conservation. In regions where baseline data are lacking, non‐traditional data sources have the potential to increase observational capacity for species distribution and habitat studies. In this study, a research and education organization conducted a 5‐year (2013–2018) survey of shark populations in the coastal waters of west‐central Florida, an area where a diverse shark assemblage has been observed but no formal population analyses have been conducted. The objectives of this study were to use boosted regression tree (BRT) modeling to quantify environmental factors impacting the distribution of the shark assemblage, create species distribution maps from the model outputs, and identify spatially explicit hot spots of high shark abundance. A total of 1036 sharks were captured, encompassing eleven species. Abundance hot spots for four species and for immature sharks (collectively) were most often located in areas designated as “No Internal Combustion Engine” zones and seagrass bottom cover, suggesting these environments may be fostering more diverse and abundant populations. The BRT models were fitted for immature sharks and five species where n > 100: the nurse shark (Ginglymostoma cirratum), blacktip shark (Carcharhinus limbatus), blacknose shark (C. acronotus), Atlantic sharpnose shark (Rhizoprionodon terraenovae), and bonnethead (Sphyrna tiburo). Capture data were paired with environmental variables: depth (m), sea surface temperature (°C), surface, middle, and bottom salinity (psu), dissolved oxygen (mg/L), and bottom type (seagrass, artificial reef, or sand). Depth, temperature, and bottom type were most frequently identified as predictors with the greatest marginal effect on shark distribution, underscoring the importance of nearshore seagrass and barrier island habitats to the shark assemblage in this region. This approach demonstrates the potential contribution of unconventional science to effective management and conservation of coastal sharks. | INTRODUC TI ON Sharks are common mid-to-upper-level marine predators that contribute to the health of the world's oceans by influencing marine species populations through predator-prey interactions (Heithaus et al., 2002;Simpfendorfer et al., 2001). They help to balance coastal and marine ecosystems by regulating them vertically, horizontally, and temporally through predation and intimidation, which can support healthier prey populations and mitigate overgrazing of ecologically foundational seagrass meadows (Dulvy et al., 2017;Heithaus et al., 2007). Compared to many fish, sharks are generally late maturing with low reproductive output, rendering the survival of juvenile individuals critical to the success of a population (Kindsvater et al., 2016). As a result, efforts to identify juvenile shark habitat and understand the environmental factors that make it suitable, particularly in the face of a changing climate (Dulvy et al., 2017), are critical to shark management efforts. Like many fish species, sharks often use coastal and estuarine areas as nurseries for juveniles due to their elevated levels of productivity, shallow protected waters, and high abundance of prey (Beck et al., 2001). Heupel et al. 
(2007) have expanded upon these characteristics to establish criteria specific to elasmobranchs, defining shark nurseries as locations where (1) relative abundance of sharks is greater on average than over all areas; (2) sharks exhibit site fidelity, returning or remaining in the area for extended periods of time; and (3) the area is used repeatedly across years. In addition to providing critical juvenile shark habitat, high shark species diversity has been found in these habitats surrounding barrier islands and around river mouths of the Gulf of Mexico (Bethea et al., 2014). Despite the instrumental role of these habitats in sustaining healthy shark populations, many potential nursery areas lack baseline population data. In Florida, spatially explicit shark distribution data have been used to further understand species life cycles and consequently, to directly inform conservation practices in estuaries. Evaluation of these data has suggested that parameters such as temperature and salinity drive species distribution, as well as influence size-based habitat partitioning. For example, salinity and temperature have been shown to determine size partitioning of bull sharks (Carcharhinus leucas) across estuarine habitats in Florida (Simpfendorfer et al., 2005). Further, Brooks et al. (2019) reviewed the capacity of spatial delineation of habitat in the implementation of successful fisheries conservation strategies. When an aggregation of breeding lemon sharks (Negaprion brevirostris) was identified in a nearshore Floridian estuarine environment, efficient communication between scientists and stakeholders coupled with the availability of spatially explicit data was used to successfully designate the area as a Habitat Areas of Particular Concern (HAPC) by NOAA Fisheries. As human development continues to increase along Florida's coastline, further efforts to implement conservation strategies based on spatially explicit data are urgently needed for nearshore aquatic habitats. Two types of data are commonly used in fisheries management and conservation: fisheries-dependent (i.e., data collected by fishermen) or fisheries-independent (i.e., data collected by professional scientific researchers, usually affiliated with academia or government organizations). However, some protected areas lack either type of baseline population data, which, in turn, limits our understanding of the ecosystem and the effectiveness of conservation strategies that are implemented (Ward-Paige & Worm, 2017). Data collected by third parties, such as citizen scientists and private research and education groups, provide an alternate form of fisheries-independent data. These can be used to understand species distribution and develop spatially explicit management and conservation strategies, an approach that has been successfully implemented in other scientific disciplines such as ornithology and astronomy (Dickinson et al., 2010). While there are concerns over the accuracy of data collected by non-professionals, older volunteers with college experience and those accompanied by professionals demonstrate increased accuracy in scientific performance (Dickinson et al., 2010). In addition, there is often a lack of available funding for traditional fisheriesindependent data collection, but citizen science groups may require payment from their participants to fund the research (Dickinson et al., 2010). 
The Coastal Marine and Education Research Academy (CMERA), located in Pinellas County, Florida, is an example of a research and education group. Undergraduate and graduate students pay to participate in the program to learn about sharks and rays and gain experience with professional scientists sampling in the field. These efforts have resulted in the dataset used in this study, which details shark and ray captures along the west-central Florida coast since the organization's establishment in 2013. The coastal waters of west-central Florida contain a variety of bay, estuarine, and barrier island habitats. A great diversity of sharks with respect to species and maturity has been noted by CMERA in these habitats; however, a formal population study has never been conducted there, and explicit habitat use of sharks in that region remains unknown. Given the available data and the potential for this area to function as critical habitat, such as a nursery area, there is a need for baseline population analyses and an understanding of factors driving shark distribution. The purpose of this study is to examine the utility of data collected by research and education programs to identify areas of clustering, determine which environmental parameters may be driving shark distribution, and to create spatially explicit distribution maps for selected species and immature sharks in the coastal waters of west-central Florida. | Study area Data were collected along the west-central Florida coast in waters adjacent to Pinellas County, primarily within the Gulf Intracoastal Waterway. The study area encompasses several barrier islands, many of which are protected under the state park system. The waters east of Honeymoon Island State Park have additional protection as a "No Internal Combustion Engine" (NICE) zone. Sites were initially selected haphazardly due to limitations such as boat and fishing accessibility, and remained fixed across sampling years. During sampling season, sites were comprehensively sampled within two-week time frames. The majority of CMERA study sites are within seven miles of the coast to the east and are bordered by barrier islands to the west (Figure 1). A subset of sites lie further west outside of the barrier islands as far as 15 miles from the coast. St. Joseph Sound is separated from the Gulf of Mexico by barrier islands to the east. The bays range from 30 to 600 m in width and are connected to the Gulf of Mexico through a series of inlets separating the barrier islands, which provide a pathway for shark movement between the ocean and the nearshore bays and estuaries. Overall, this area is considered low energy, characterized by infrequent hurricanes and mild winter frontal systems. Longshore sediment transport along the coast is driven south to north by prevailing wave conditions. Prior to 1950, much of the shorelines of the barrier islands was receding. Since then, management efforts such as beach renourishment projects as well as the construction of groins, jetties, and seawalls have created regions of accretion and shoreline advance (PCPWD, 2017). | Fieldwork Fieldwork was conducted by CMERA from 2013 to 2018 during the months of May-August each year. A total of 47 sites were regularly sampled and were classified according to their depth and bottom type. Sampling was conducted by college students under the direct training and supervision of CMERA staff, who also provided quality control of data. Individual sharks were captured using longline, tangle net, and rod and reel methods. 
Longlines were set for 45 min. For each sampling event and capture, recorded data included bottom type (sand, seagrass, or artificial reef), water temperature (°C), tidal stage, species, sex, pre-caudal length, fork length, and total length (cm), as well as other details such as noticeable wounds or external tags. For male individuals, maturity was determined by CMERA according to clasper calcification (Clark & von Schmidt, 1965). Unless previously tagged, all sharks were tagged with FLOY FH-69 tags.

| Environmental parameters

Additional environmental data, including salinity (psu), dissolved oxygen content (DO, mg/L), and seagrass extent, were provided by the Pinellas County Department of Environmental Management (PCDEM). Salinity (recorded at the surface, middle, and bottom of the water column) and DO data were filtered to match the range of CMERA sampling dates and averaged across that range to create data points suitable for spatial interpolation.

| Data analysis

The sex ratio of the shark assemblage was examined by year, then assessed for statistical significance using a two-sample t test assuming unequal variances. All spatial data were imported into a Geographic Information System (GIS; ESRI). A bottom-type layer was created by combining the retrieved PCDEM seagrass layer and the reef locations recorded in CMERA's notes, which were then corroborated with artificial reef coordinates available on the PCDEM website. Areas surrounding seagrass and reef locations were designated as "sand" bottom type. Data collected from PCDEM were filtered to match CMERA sampling dates and averaged at each location across the summer season. Environmental parameters were then interpolated using the inverse distance weighted (IDW) tool to create continuous raster layers. Following interpolation, the study area covered approximately 36 × 25 km. The output resolution of the raster files was 220 × 220 m. This resolution was determined by measuring the width of the narrowest site (site 30, approximately 220 m) and is the coarsest resolution that allows this site to keep its approximate shape when rasterized. Given the relatively fine scale, we assume that variation of environmental data within a 220 × 220 m pixel is minimal. Catch-per-unit-effort (CPUE) for each species (where n > 10) and for immature sharks collectively was calculated for each site by gear type (i.e., tangle nets or hooks), with CPUE defined as the number of sharks captured per unit of gear-specific sampling effort, resulting in two CPUEs for each species by site. These CPUE values were then linked in the attribute table of sites. Next, a hot spot analysis (Getis-Ord Gi*) was used to identify which sites may be experiencing spatial clustering for a particular shark species or age group. For each species and age group, boosted regression tree (BRT) models were created from 2013 to 2018 data according to methods set forth in Hijmans and Elith (2017) and Elith and Leathwick (2017). Specifically, environmental data were organized and validated according to Hijmans and Elith (2017), and model testing and species distribution map (SDM) creation followed Elith and Leathwick (2017). Only subgroups of the sampled population with sufficiently robust counts (n ≥ 100) were included (Pearson, 2010). These subgroups were as follows: immature sharks, nurse shark, blacktip shark, blacknose shark, Atlantic sharpnose shark, and bonnethead. Environmental variables considered in this analysis were water depth (m), sea surface temperature (SST, °C), surface, middle, and bottom salinity (psu), bottom type (sand, seagrass, or reef), and dissolved oxygen (DO, mg/L).
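Because the CPUE equation itself is not reproduced above, the sketch below shows a generic per-site, per-gear catch-per-unit-effort calculation; the effort measure (number of gear deployments) and all values are illustrative assumptions, not the study's definitions.

```python
# Minimal sketch of a per-site, per-gear CPUE table, assuming effort is counted as
# gear deployments (hook-sets / net-sets); the study's exact effort units are not shown above.
import pandas as pd

captures = pd.DataFrame({
    "site": [1, 1, 1, 2, 2],
    "gear": ["hook", "hook", "net", "hook", "net"],
    "species": ["bonnethead", "blacktip", "bonnethead", "nurse", "bonnethead"],
})
effort = pd.DataFrame({  # hypothetical deployments per site and gear
    "site": [1, 1, 2, 2],
    "gear": ["hook", "net", "hook", "net"],
    "deployments": [40, 10, 25, 8],
})

catch = captures.groupby(["site", "gear", "species"]).size().rename("n").reset_index()
cpue = catch.merge(effort, on=["site", "gear"])
cpue["cpue"] = cpue["n"] / cpue["deployments"]  # sharks captured per deployment
print(cpue)
```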
Due to the relatively small sample sizes and use of presence-only data, distributions were modeled using Bernoulli BRTs. The use of Bernoulli BRTs also accounts for differences in method of capture, where each capture is presumed to mark suitable habitat. The BRTs were created to optimize a combination of parameters: learning rate (lr), bag fraction (bf), and tree complexity (tc). Based on a maximized cross-validation area under the receiver operating curve (CV AUC), minimized standard error, and a maximized training data AUC (TD AUC), a specific combination of the aforementioned parameters was identified as the best fit (Hijmans & Elith, 2017). The ideal model resulting from this optimization was applied to the rasterized environmental data layers, then used to create the SDMs. The SDMs were then exported into ArcGIS. To be consistent with the resolution of the input environmental data, output resolution of SDMs was 220 × 220 m. Given that environmental conditions are unlikely to vary significantly within this pixel size, it is also unlikely that habitat suitability would vary significantly. A BRT model and a SDM were created for each of the aforementioned relevant shark subgroups. The BRT models were constructed in R version 2.6.2 (R Core Team, 2019) using the "gbm" package (v2.1.5, Greenwell et al., 2019). Figure 2). | Data selection and performance Hot spot analysis was conducted on all shark species with n > 10 using the Hot Spot (Getis-Ord G*) tool in ArcGIS 10.7.1 (ESRI, 2019). Hot spot analysis was applied to CPUE values by gear type (i.e., hook or net). Sites were deemed significant at p < .05. Boosted regression trees were applied to species with n > 100. Models incorporated seven predictors (Table 2). No collinearity was present among variables except among the surface, middle, and bottom salinity parameters; due to the general mobility of coastal sharks species within the water column and the insensitivity of BRTs to multicollinearity, all three were incorporated into the model. Ultimately, salinity explained the least variation in distribution across all groups modeled (Table 3). In the final models, TD AUC and CV AUC scores were all >0.9, which suggests excellent model performance according to criteria established by Lane et al. (2009) (Table 3). Cross-validation AUC scores were comparable to TD AUC scores, suggesting overfitting was insignificant (Hijmans & Elith, 2017). | Nurse shark Nurse sharks (n = 310) were predominantly male (2.2:1) and were captured across a broad range of sizes ( Figure 2). Nurse sharks were captured across the entire study area, but two hot spots occurred at offshore locations characterized by vegetated spoil islands and two occurred east of the barrier islands at deeper seagrass beds ( Figures 3 and 4). The three most influential factors driving distribution were depth (28.6%), bottom type (21.9%), and temperature (17.3%) ( Table 3). Marginal effects plots indicate a preference for >~7 m depth, seagrass bottom types, and temperatures >30℃ ( Figure 5). Predicted suitable nurse shark habitat was identified at seagrass meadows surrounding the barrier islands, but also at offshore locations west of Honeymoon Island State Park and Three Rooker Island ( Figure 6). | Blacknose shark The blacknose shark sample population (n = 130) favored females distributions were similar (Figure 2). Individuals were well-dispersed across sites, located in St. Joseph Sound, inlets, and west of the barrier islands. 
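The models above were fitted in R with the gbm package; purely as an illustration of the tuning strategy (learning rate, bag fraction, and tree complexity selected by cross-validated AUC), the sketch below uses scikit-learn's gradient boosting as an analogue, with synthetic placeholder data rather than the CMERA dataset.

```python
# Illustrative analogue (not the study's R/gbm code): tune a Bernoulli-style boosted-tree model
# by cross-validated AUC over learning rate (lr), bag fraction (bf), and tree complexity (tc).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))  # placeholders for depth, SST, 3x salinity, DO, bottom type
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # synthetic presence/absence

best = None
for lr in (0.01, 0.005):
    for bf in (0.5, 0.75):
        for tc in (2, 3, 5):
            model = GradientBoostingClassifier(
                learning_rate=lr, subsample=bf, max_depth=tc, n_estimators=500
            )
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            if best is None or auc > best[0]:
                best = (auc, lr, bf, tc)

print(f"best CV AUC = {best[0]:.3f} at lr={best[1]}, bag fraction={best[2]}, tree complexity={best[3]}")
```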
Blacknose shark hot spots encompassed both sandy and seagrass bottom types and were located in a variety of geographic locations (Figures 3 and 4). One hot spot occurred in the NICE zone in the seagrass meadows on the northeast side of Honeymoon Island State Park (Figure 3). Distributions were driven primarily by bottom type (29.4%), temperature (24.1%), and DO (18.9%) ( Table 2). Blacknose shark preferences encompassed seagrass bottoms, temperatures greater than 30 ℃, and DO above 7.5 mg/L ( Figure 5). Predicted relative abundance of blacknose sharks was highest around the southern portion of Anclote Key Preserve State Park, in the sound adjacent to it, in the tidal inlets between barrier islands, and in the sound west of Honeymoon Island State Park ( Figure 6). There are also peaks of predicted relative abundance in the locations of artificial reefs and seagrass meadows west of the barrier islands ( Figure 6). Hot spot locations were consistent with areas of high predicted relative abundance, while other areas of high predicted relative abundance associated with gulfside artificial reef sites were less occupied. | Atlantic sharpnose shark Atlantic sharpnose shark captures (n = 130) were dominated by males (>13:1). Males and females were encountered across a large range of sizes, but the majority of males were mature (Table 1, Figure 2). Captures were characterized by proximity to barrier islands and occurred in St. Joseph Sound, in an inlet, and just offshore. Hot spots occurred on seagrass flats directly adjacent to the east side of Three Rooker island and east of Anclote Key Preserve State Park (Figure 4). One also occurred at the "No Internal Combustion Engine Zone" in the seagrass meadows located on northeast Honeymoon Island State Park (Figure 3). Distributions were primarily driven by depth (28.6%), bottom type (21.3%), and temperature (17.5%) ( Table 3). The temperature (19.2%), and middle salinity (11.5%) ( Table 3). They preferred depths < 4 m, temperatures > 30℃, and salinities > 32 psu ( Figure 5). The geographic extent of predicted suitable habitat was smaller than the other subgroups evaluated, likely due to the dominant influence of depth and lesser influence of bottom type ( Figure 6). Suitable habitat largely encompassed nearshore, barrier island habitat, which aligns with the locations of their hot spots. | Immature shark Immature sharks (n = 569) were identified among the sample populations of each species with n > 10 (Table 1). Female immature sharks outnumbered males 2:1. In particular, all scalloped hammerheads and tiger sharks were immature, and immature individuals comprised most captures (>50%) for blacknose shark, blacktip shark, and bonnethead (Table 1). Immature sharks were present across the study area. Interestingly, two of the three identified hotspots were in or adjacent to the NICE zone (Figures 3 and 4). All hot spot locations were characterized by their close proximity to the barrier islands: a seagrass flat east of Anclote Key Preserve State Park, a sandy inlet F I G U R E 6 Species distribution models derived from boosted regression trees, which display suitable habitat as a proxy for probability of capture for each subgroup n > 100. Land is marked in grey between Anclote Key Preserve State Park and Honeymoon Island, and seagrass bottom on the northeast side of Honeymoon Island (Figures 3 and 4). For immature sharks, depth (35.3%), bottom type (22.4%), and temperature (14.7%) were the three most influential factors influencing distribution (Table 3). 
Immature sharks showed a preference for depths < 5 m, seagrass bottom types, and temperatures >~30℃ ( Figure 5). Their predicted distribution displayed peak predicted relative abundance surrounding the barrier islands and at seagrass meadows located east of the barrier islands, with other peaks in predicted abundance occurring at locations of artificial reefs further offshore ( Figure 6). Tiger sharks (n = 31) were predominantly female (1.5:1) and immature (Table 1) based on criteria set forth by Kneebone et al. (2008). In contrast to other subgroups in this study, the majority of individuals were captured west of the barrier islands. Tiger shark hot spots occurred offshore at an artificial reef location, a deeper seagrass bed on the Gulf side of Honeymoon Island State Park, and at a deeper seagrass flat east of Anclote Key Preserve State Park in St. Scalloped hammerhead captures (n = 12) were equally distributed between the sexes and entirely immature (Table 1) according to Castro (2010). They were only first observed beginning in 2017. Captures were rare across the study area and only occurred in seagrass locations in the NICE zone. The sole scalloped hammerhead hot spot was located in the NICE zone on the northeast side of Honeymoon Island State Park (Figure 3). (Bethea et al., 2014;Drymon et al., 2020;Froeschke et al., 2010) and oceanwide (Brodie et al., 2015;Santos & Coehlo, 2018). | D ISCUSS I ON This study is unique in that it provided the opportunity to identify and quantify factors correlated with shark distribution on a smaller scale, using unconventional data sourced from a research and education program. Despite many similarities, there were pronounced differences between the assemblage characterized in the current study and the one caught in Peterson and Grubbs (2020). Peterson and Grubbs (2020) caught an order of magnitude more Atlantic sharpnose sharks and had nearly quadruple the proportion of immature captures. Given that Atlantic sharpnose sharks are not known to use nursery habitat, investigation into the stark difference in immature populations along the longitudinal gradient may be merited. In contrast, the relative abundance of bonnetheads in west-central Florida was as much as an order of magnitude larger than in the Big Bend (Peterson & Grubbs, 2020). The proportion of immature bonnethead captures were comparable; however, this study's bonnethead sample was strongly dominated by females as opposed to the male-dominated sample in the Big Bend. Given that mature female bonnetheads are known to use nearshore areas for gestation and pupping , the sexual segregation may suggest that this west-central region may be an important habitat for bonnethead reproduction. While these regions are characterized similarly by heterogeneous bottom types with high seagrass coverage, low energy systems, and low riverine input, our sampling site is unique in that it is also heavily influenced by the presence of barrier islands, which may explain some of this variation in species abundance and life-history composition. Bethea et al. (2014) noted greater shark species diversity associated with barrier islands near riverine-influenced systems in the northern Gulf of Mexico. 
In the absence of highly variable salinity associated with riverine input, the effects of the barrier islands are more pronounced in this study, and the benefits they provide (e.g., a physical barrier from larger predators in the Gulf) may explain why species characterized by nearshore nursery use and site fidelity, such as the bonnethead (Heupel et al., 2006) or nurse shark (Castro, 2000), have higher immature abundances in west-central Florida. Further, the results from the Big Bend contrast to this region's lesser abundance of immature Atlantic sharpnose sharks, whose lifehistory strategy would not benefit as much from enhanced nursery habitat provided by barrier islands. These clear differences females (Ulrich et al., 2007). Bonnetheads are small coastal sharks whose females do not use estuaries for nursery habitat, but rather move offshore in late summer for parturition and mating . Rather, it is thought that gravid females utilizing estuarine habitat are taking advantage of the availability of high energy benthic prey prior to parturition in August in order to decrease the gestational period, which is one of the shortest compared to other species of sharks (Manire et al., 1995). The sexual segregation of the shark population in this region suggests west-central Florida may not be critical reproductive habitat for the Atlantic sharpnose shark, but may function in this capacity for bonnetheads. Hot spots can be used to guide management decisions for recreational fishing in the area. Oceanwide studies have identified the threat to shark populations caused by overlap between hot spots of pelagic shark abundance and fishing hot spots (Queiroz et al., 2019). Given that Florida is a hot spot for shark fishing in the United States (Shiffman & Hammerschlag, 2014), careful consideration should be taken to avoid unsustainable shark fishing practices in coastal hot spots as well, particularly for species that are protected or are using the area for reproduction. Currently, Florida law 68B-44.004 identifies great hammerhead, lemon shark, scalloped hammerhead, and tiger shark as prohibited species and grants them special protections. Despite these protections, post-release mortality rates can still be high, particularly for the two hammerhead species (Dapp et al., 2016;Gallagher et al., 2017). Many of the hot spots for the shark population of this study area surround barrier islands, where mooring is permitted and land-based fishing may pose a threat. To maintain healthy status of non-protected species, fishing of immature and protected individuals in the area should be monitored closely and perhaps prohibited where especially vulnerable species, such as hammerheads, are known to aggregate. The preponderance of immature individuals in this study suggests the potential of this area as a nursery habitat. More than half of all captures in this study were immature sharks, and for many species, such as the blacknose shark, blacktip shark, bonnethead, great hammerhead, scalloped hammerhead and tiger shark, immature sharks were the vast majority of captures. In the northern Gulf of Mexico, the blacknose shark assemblage is dominated by mature individuals (Drymon et al., 2020), yet the abundance of immature blacknose sharks in the current study suggests the shallow waters of westcentral may function as blacknose shark nursery habitat. 
Hueter and Tyminski (2007) have identified the gulf coast of Florida as nursery habitat for blacknose, blacktip, and great hammerheads, which is consistent with our predominantly immature sampling of these species. Although we identified a majority immature bonnethead population, bonnetheads do not necessarily utilize nearshore habitat as nursery sites. Rather, young-of-year may move into estuaries and nearshore habitat to reduce predation risk from deeper waters, which may explain the large proportion of immature bonnetheads in this study area (Swift & Portnoy, 2020). While capture numbers were relatively low, nearly all the tiger sharks and all scalloped hammerheads encountered were immature, suggesting future work should focus on quantifying the importance of this region as critical habitat for these species, particularly since scalloped hammerheads have been previously noted as uncommon in these waters (Hueter & Tyminski, 2007). The significant proportions of immature sharks in this study area certainly merit further research to allow this area to be evaluated according to the criteria established by Heupel et al. (2007). The impacts of anthropogenic stressors on shark nursery areas can be difficult to quantify (Ward-Paige et al., 2015). However, as human population densities within 100 km of the ocean exceed triple the global mean, it is crucial to understand these impacts on wildlife, particularly near coastal metropolitan areas such as those examined in this study (reviewed in Whitfield & Becker, 2014). The results of the BRT indicate nearshore coastal areas characterized by shallow, warm, seagrass environments provide critical habitat for shark species and immature sharks in general. In context, shallow depths and warmer waters can be conflated simply as characteristic of seagrass habitat which is associated with greater prey availability for predatory fishes, even when compared with nearby unvegetated habitats (Rozas & Odum, 1988). By underscoring the importance of these nearshore habitats, it is also important to consider that some of these species use offshore environments in different life stages or as segregated groups (Drymon et al., 2020;Parsons & Hoffmayer, 2005) and that damage felt in nearshore environments, such as habitat loss or poor water quality, may not only affect these coastal populations, but can also have ripple effects for offshore populations. While the barrier islands in this study are state parks and thus protected from development, they are still visited by tourists and subject to boating traffic, mooring, and land-based fishing. Given the dense human coastal population and the draw of the barrier islands to visitors, it is essential that these locations are properly managed. Because of the pressures of human activity in these essential nearshore environments, these critical nearshore habitats identified by the BRT require careful protection and management in order to support healthy shark populations. While providing spatially explicit recommendations for future management efforts, this study also offers insight into currently enacted management practices. As demonstrated by the diversity of species hot spots located within the NICE zones, it appears that reduced threats from boating activity may be fostering suitable habitat for sharks. 
Management in the form of NOAA's marine protected areas (MPAs), which largely regulate public access and activities, has been shown to increase fish abundance in the coastal waters of Florida (Bohnsack, 2011) and in other tropical regions (Bond et al., 2012). Strikes from boat propellers may be fatal, but sites subject to boat wakes have also been associated with lower levels of faunal abundance and diversity, as well as destruction of essential seagrass habitat (Whitfield & Becker, 2014). Immature individuals are particularly vulnerable to boat strikes, and a loss of seagrass structures would likely result in a decrease in prey availability and a loss of threedimensional structure in which immature individuals find protection from predators. The northern end of the NICE zone is located near an inlet, which provides access from St. Joseph Sound to the Gulf of Mexico. These inlets between the islands may create a geographic bottleneck for sharks and other highly mobile migratory animals, including potential shark prey, between the Gulf and St. Joseph sound. This location is an example where a hot spot for sharks may also be a hot spot for boating activity. The link between NICE zones and a higher abundance of sharks, immature sharks in particular, should be used to advocate for expansion of NICE zones to areas of suitable habitat for Florida-protected sharks currently utilizing these zones (i.e., great hammerhead and scalloped hammerhead). These spatial, quantitative, and qualitative insights were made possible by data collected through a research and education program. In a field where baseline abundance data are often lacking for conservation and management interests (Ward-Paige & Worm, 2017), researchers have been criticized for failing to use preexisting datasets collected outside of academia (Buxton et al., 2021). Acknowledging that the uneven sampling effort and haphazard site selection in this study limits some analyses, this work demonstrates the capacity of a limited dataset, collected through a private citizen research and education group, to provide useful information for management. In particular, these results can be used to direct spatial prioritization of management practices as well as identify the common traits that characterize these areas (e.g., proximity to barrier islands, bottom type, NICE zones) for extrapolation outside of the study area. Future efforts to characterize population trends from unconventional data should consider using BRTs. Boosted regression trees are an ideal tool for analysis of incomplete datasets, as they can be tailored to presence/absence values and are robust to missing values, outliers, and multicollinearity (Dedman et al., 2017). Through the application of these BRTs, we were able to overcome the limitations imposed by haphazard site sampling, and extrapolate to identify predicted relative abundance across the study area during the summer. This work exemplifies the capacity of datasets sourced from extra-academic sources to provide meaningful fisheries management information given careful and appropriate selection of analyses. With this kind of work, scientists can resourcefully provide the supporting information needed to guide successful management of fisheries. ACK N OWLED G M ENTS We would like to sincerely thank all the CMERA volunteers for the incredible time and effort spent collecting these data. In addition, we are grateful to Simon Dedman and John Froeschke for their support and guidance during analysis. 
Lastly, thanks to Elizabeth Zachman and Blair Roberts Castagnetta for their assistance in figure creation. CO N FLI C T O F I NTE R E S T None declared.
Hyponatremia in COVID-19 Is Not Always Syndrome of Inappropriate Secretion of Antidiuretic Hormone (SIADH): A Case Series Hyponatremia is a common complication in COVID-19-positive patients and is associated with significant mortality and morbidity. Several cases of COVID-19-related hyponatremia secondary to the Syndrome of Inappropriate Secretion of Antidiuretic Hormone (SIADH) have been reported in the literature, which might suggest that SIADH is almost always the underlying cause of hyponatremia in COVID-19 infections. However, COVID-19-related hyponatremia can have diverse underlying etiologies, similar to hyponatremia in non-COVID-19 patients, and requires a thorough assessment to reach a correct diagnosis and implement appropriate management. Introduction Since its initial identification in Wuhan, China, in December 2019, much has been learned about the epidemiology, presentation, and management of Coronavirus Disease 2019 (COVID-19). Although predominantly a respiratory tract infection, ranging from mild to severe disease caused by the novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the condition's effect on other organs has been well-described and can often result in multisystem illness [1]. Disorders of sodium balance are common in COVID-19, with both hypernatremia and hyponatremia reported in the literature [2]. Hyponatremia is one of the commonest electrolyte abnormalities affecting hospitalized patients and a significant cause of morbidity and mortality. Serum sodium (Na) levels of less than 135 mEq/L have been reported in 10%-45% of patients with COVID-19 in different studies with illnesses ranging from mild to severe [2,3]. In COVID-19 infections, hyponatremia can be a marker of the severity of the underlying pulmonary disease and an independent risk factor for intensive care admissions and mechanical ventilation [2,4]. Uncorrected COVID-19-associated hyponatremia after 72 to 96 hours of hospitalization has been associated with higher mortality [5]. While the Syndrome of Inappropriate Antidiuretic Hormone Secretion (SIADH) is thought to be the primary etiology in many cases [6][7][8][9][10], the underlying pathophysiology driving COVID-19-related hyponatremia can be varied, and a broader differential should be considered while assessing such patients. A thorough and systematic assessment is required to pinpoint the correct etiology and instigate the appropriate corrective therapy. We share our experience of three COVID-19-positive patients with hyponatremia who presented within a short period to a regional hospital during the early stage of the third Omicron wave of COVID-19 infections in Victoria, Australia. All three patients had different underlying etiology for hyponatremia and thus required different treatment strategies. and hypertension. He was a non-smoker and non-alcohol drinker. His regular medications included a calcium channel blocker, angiotensin receptor blocker, enzalutamide, and four-monthly leuprorelin acetate injections. He tested positive for COVID-19 on rapid antigen self-testing (RAT), which was confirmed by a polymerase chain reaction (PCR) test. The patient sustained a mild COVID-19 illness, not requiring disease-modifying therapies throughout his admission. However, his condition was complicated by hyponatremia at presentation with a serum Na nadir of 124 mEq/L. He was clinically euvolemic on examination, with a blood pressure of 128/88 mmHg and a heart rate of 91 beats per minute. 
He was afebrile and had an oxygen saturation of 95% on room air. His hemoglobin, total white cell, neutrophil, and platelet counts were normal. A chest x-ray was not significant. Renal and liver function testings were within the normal range. Serum cortisol and thyroid function tests were normal. Serum osmolality was low at 258 mOsm/kg with an inappropriately elevated urinary osmolality of 351 mOsm/kg and urinary Na of 122 mEq/L, keeping with SIADH. He was fluid-restricted to 1 liter/day, and his serum Na gradually improved to 130 mEq/L before discharge. Case 2 A 92-year-old man presented with a four-day history of persistent nausea, headache, dizziness, unsteadiness, and poor oral intake. He had close contact with a COVID-19-positive patient 10 days before admission. His only respiratory symptom was that of a mild cough. He had significant comorbidities, including hypertension, hyperlipidemia, ischemic heart disease with previous cardiac bypass, transient ischemic attacks, atrial fibrillation, asthma, gastroesophageal reflux, osteoporosis, ascending aortic aneurysm repair. He was a non-smoker and non-alcohol drinker with no recent travel history and no history of immunosuppression. His regular medications included a thiazide diuretic, novel anticoagulant, calcium channel blocker, angiotensin-converting enzyme inhibitor, statin, and vitamin D supplementation. On examination, the patient was clinically hypovolemic. He was alert and orientated with a Glasgow Coma Scale (GCS) of 15. All other observations were within normal limits. He was found positive for COVID-19 infection with positive PCR. Initial blood tests showed that he had severe hyponatremia with a serum Na of 111 mEq/L on a background of normal serum Na of 136 mEq/L three months prior. Hemoglobin, total white cell, neutrophil, and platelet counts were normal. Renal and liver function testings were within the normal range. He was hypokalemic with serum potassium (K) of 2.9 mEq/L. Chest x-ray revealed no significant abnormality. Serum osmolality was low at 239 mOsm/kg with a urinary osmolality of 484 mOsm/kg and elevated spot urine Na at 102 mEq/L, affected by the thiazide diuretic. His hyponatremia was primarily thought to be secondary to volume depletion due to a combination of thiazide diuretic and poor oral intake. The patient was admitted to the intensive care unit, where a single 100 mL bolus of hypertonic 3% saline was administered, correcting serum sodium to 114 mEq/L. Subsequently, he received fluid resuscitation with 0.9% saline until he was clinically euvolemic. The patient also received an intravenous potassium replacement. His thiazide diuretic was ceased indefinitely. Hyponatremia gradually improved, and he was eventually discharged symptom-free with a serum Na of 129 mEq/L. Case 3 A 53-year-old previously healthy woman presented with acute onset confusion and an altered conscious state. Her husband described a one-day history of headaches and profuse diarrhea. He found her unresponsive the next day and immediately called ambulance services. She had a history of mild hypertension, for which she was not on any regular anti-hypertensives. She lived at home with her husband and worked full-time in an abattoir. She was a non-smoker and non-alcohol drinker with no recent travel history. The remainder of her systems review was unremarkable. On arrival, she was drowsy with an initial GCS of 8. She had an oxygen saturation of 100% on room air. She was clinically hypovolemic and hypotensive with a blood pressure of 95/75 mmHg. 
COVID-19 RAT and PCR testings returned positive. Initial investigations revealed severe hyponatremia with serum Na of 111 mEq/L. Hemoglobin, total white cell, neutrophil, and platelet counts were normal. Renal function and liver function testings were within the normal range. Thyroid function testing and serum cortisol were within normal limits. The chest radiograph showed no abnormalities, and brain computed tomography (CT) and magnetic resonance imaging (MRI) demonstrated no abnormalities to explain the patient's symptoms. The cerebrospinal fluid examination was within normal limits. Fecal microscopy and culture were negative, as was Clostridium difficile toxin. Due to the neurological manifestations of hyponatremia, hypertonic 3% saline was administered in the Emergency Department to raise the serum Na within a safe range. Her serum Na initially rose rapidly to 127 mEq/L, and intravenous therapy was changed to 5% dextrose. Following this, serum Na dropped to 113 mEq/L again, and the decision was made to change the fluid resuscitation to 0.9% normal saline. The patient's serum Na and conscious state gradually improved with appropriate intravenous volume replacement. She had a GCS of 15 and serum Na of 138 mEq/L at discharge time. Her diarrhea, which was thought to be a gastrointestinal manifestation of COVID-19 infection, and the likely cause of her symptomatic hyponatremia, also steadily improved during admission. Discussion Several cases of hyponatremia secondary to SIADH in COVID-19 patients have been reported in the literature [6][7][8][9][10], with most of these cases being attributed to pneumonia as the underlying cause of SIADH [7][8][9][10]. However, others have also reported the development of SIADH in the absence of pneumonia [6]. Production of inflammatory cytokines in COVID-19 infections, such as interleukin-6, resulting in direct stimulation of nonosmotic release of antidiuretic hormone (ADH) and cytokine-mediated lung injury inducing inappropriate ADH release via hypoxic pulmonary vasoconstriction pathway are considered possible underlying mechanisms in such cases [7,11]. From the reported cases, one might extrapolate that in COVID-19 patients, SIADH might be the sole cause of hyponatremia. However, our experience suggests that in the context of COVID-19 infection, the causes of hyponatremia remain diverse. Such cases should be carefully evaluated for an accurate diagnosis rather than an assumption of SIADH. Identifying the correct etiology of hyponatremia is essential for appropriate management, which varies according to the underlying cause. Our first patient had clinical and biochemical features consistent with SIADH and improved with fluid restriction. On the other hand, our second patient was clinically hypovolaemic and was on long-term thiazide diuretics. Although the serum and urine osmolalities were consistent with SIADH, the patient did not fulfil the diagnostic criteria of SIADH ( Table 2). Thiazide diuretics interfere with renal water excretion resulting in inappropriately high urine osmolality. As hyponatremia was considered secondary to thiazides, the patient was treated with solute replacement and withdrawal of the offending medication, which led to clinical improvement. Similarly, our third patient was volume-depleted due to diarrhea, the most likely cause of her hyponatremia. The patient responded well to adequate fluid and solute resuscitation. 
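Both hypovolemic patients above were managed with hypertonic 3% saline followed by 0.9% saline. One common bedside tool for anticipating the effect of a chosen infusate, although not one cited in this case series, is the Adrogué-Madias estimate; the sketch below applies it with hypothetical patient values.

```python
# Minimal sketch of the Adrogue-Madias estimate: expected change in serum Na after 1 L of infusate.
# This is a general bedside formula, not one used explicitly in the case series;
# the patient values below are hypothetical.
def delta_na_per_litre(infusate_na, infusate_k, serum_na, total_body_water_l):
    return (infusate_na + infusate_k - serum_na) / (total_body_water_l + 1)

serum_na = 111       # mEq/L, hypothetical
tbw = 0.5 * 70       # assumed ~0.5 L/kg x 70 kg body weight

for name, na_in in [("3% saline", 513), ("0.9% saline", 154)]:
    rise = delta_na_per_litre(na_in, 0, serum_na, tbw)
    print(f"{name}: expected rise of about {rise:.1f} mEq/L per litre infused")
```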
Diagnostic criteria

Serum Na concentration is primarily a measure of plasma water content rather than of Na balance and is regulated by changes in water intake and excretion. Serum Na accounts for 85% of extracellular fluid osmolality and is typically maintained within a narrow range of 135-145 mEq/L, with a corresponding serum osmolality of 280-295 mOsm/kg. Any change in serum osmolality is sensed by osmoreceptors in the hypothalamus, which influence the release of ADH and thirst. ADH regulates free water excretion through the cortical and medullary collecting ducts of the kidneys, resulting in concentrated or dilute urine. An excess of water lowers plasma osmolality, leading to suppression of ADH and the production of dilute urine (urine osmolality below 100 mOsm/kg), which prevents the development of hyponatremia. Conversely, plasma hyperosmolality stimulates thirst, leading to increased water intake, which is the primary protective mechanism against water deficit and hypernatremia; ADH secretion is also appropriately increased in such situations, resulting in concentrated urine. Abnormalities of serum Na concentration occur when this osmoregulation is disrupted. Hyponatremia occurs when excess water cannot be excreted; conversely, hypernatremia occurs when a water deficit cannot be replaced [2,12]. True hyponatremia is almost always hypotonic (serum osmolality < 280 mOsm/kg). Isotonic hyponatremia (serum osmolality 280-295 mOsm/kg), or pseudohyponatremia, often results from severe hypertriglyceridemia, hyperproteinemia (multiple myeloma, monoclonal gammopathies, or intravenous immunoglobulin administration), or severe hypercholesterolemia (primarily associated with primary biliary cirrhosis). This is mainly due to an error in serum Na measurement when using an analyzer that measures Na per volume of plasma rather than per volume of plasma water. Hypertonic hyponatremia (serum osmolality > 295 mOsm/kg) is often due to an osmotically active substance in the circulation, such as glucose, mannitol, glycine, or a radiocontrast agent, which draws water from the intracellular to the intravascular space [13]. Although not always necessary, measurement of serum osmolality together with a detailed history can help narrow the differential diagnosis (Table 3). Hypovolemic hyponatremia can occur due to gastrointestinal (diarrhea or vomiting), renal (diuretics, cerebral salt wasting, or adrenal insufficiency), or dermal (profuse sweating or burns) salt and water losses. In these cases, patients have deficits of both water and total body sodium; however, the sodium deficit exceeds the water deficit, resulting in relative free water excess and hypotonic hyponatremia. Hypervolemic hyponatremia is commonly seen in patients with cardiac failure, liver cirrhosis, nephrotic syndrome, and advanced chronic kidney disease. Patients with hyponatremia secondary to SIADH, hypothyroidism, hypopituitarism, or primary polydipsia are usually euvolemic [2,12]. SIADH is the commonest cause of hyponatremia in hospitalized patients, accounting for 40%-50% of cases. The syndrome is diagnosed by the presence of euvolemic hyponatremia, low serum osmolality (<280 mOsm/kg), inappropriately elevated urine osmolality (>100 mOsm/kg), high urine Na (>30 mEq/L), and the absence of thyroid disorder, adrenal disorder, and current diuretic use [6]. SIADH is usually secondary to other abnormalities, including central nervous system disorders, pulmonary diseases, malignancies, infections, and medications [14].
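The tonicity-based triage described above can be condensed into a short sketch. This is an illustrative outline only, using the threshold values quoted in this review (serum osmolality 280 and 295 mOsm/kg, urine osmolality 100 mOsm/kg, urine Na 30 mEq/L); the function and its inputs are hypothetical names of ours, and it is not intended as a clinical decision tool.

```python
def classify_hyponatremia(serum_osm, urine_osm, urine_na,
                          euvolemic, thyroid_adrenal_normal, on_diuretics):
    """Rough tonicity-based triage of hyponatremia.

    Thresholds follow the ranges quoted in the text:
    serum osmolality < 280 mOsm/kg = hypotonic (true) hyponatremia,
    280-295 = isotonic (pseudohyponatremia), > 295 = hypertonic.
    SIADH requires euvolemia, urine osmolality > 100 mOsm/kg,
    urine Na > 30 mEq/L, normal thyroid/adrenal function, no diuretics.
    """
    if serum_osm > 295:
        return "Hypertonic hyponatremia (e.g., glucose, mannitol, glycine, contrast)"
    if serum_osm >= 280:
        return "Isotonic (pseudo)hyponatremia (e.g., severe hypertriglyceridemia, paraproteinemia)"
    # Hypotonic (true) hyponatremia: assess volume status and urine studies
    if (euvolemic and urine_osm > 100 and urine_na > 30
            and thyroid_adrenal_normal and not on_diuretics):
        return "Consistent with SIADH (still a diagnosis of exclusion)"
    return "Hypotonic hyponatremia: evaluate volume status, diuretics, GI/renal/dermal losses"


# Numbers resembling Case 1: serum osm 258, urine osm 351, urine Na 122, euvolemic
print(classify_hyponatremia(258, 351, 122, True, True, False))
```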
It is important to remember that SIADH is primarily a diagnosis of exclusion, and other causes of hyponatremia must be excluded before making the diagnosis [12]. Thiazide diuretics are an important cause of hyponatremia in hospitalized patients, which can sometimes be severe. Hyponatremia is often seen during the early stage of treatment with thiazides but can also occur after months or even years of therapy. Although hypovolemia-induced ADH secretion might be responsible for hyponatremia in some cases, most patients with thiazide-associated hyponatremia are euvolemic [15]. Contributing factors include increased water intake, cation depletion, impaired free water excretion, and sodium osmotic inactivation. Treatment involves a combination of cessation of the thiazide diuretic, cation replacement, and restriction of free water intake [15]. Figure 1 summarizes the steps that can help evaluate hyponatremia in most cases, including in patients with COVID-19 infection.

Conclusions

Hyponatremia is a common complication of COVID-19 infection and is associated with increased morbidity and mortality. Our experience shows that the underlying etiology of hyponatremia in patients with COVID-19 infection can be varied, including SIADH, gastrointestinal losses, and concurrent use of diuretic treatment. Hyponatremia in COVID-19 patients should be evaluated in the same way as in non-COVID-19 patients to pinpoint an accurate diagnosis, which is crucial for proper management.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Nanoantenna Enhanced Terahertz Interaction of Biomolecules

Terahertz time-domain spectroscopy (THz-TDS) is a non-invasive, non-contact and label-free technique for biological and chemical sensing, as the THz spectrum is less energetic and lies in the characteristic vibrational frequency regime of proteins and DNA molecules. However, THz-TDS is less sensitive for the detection of micro-organisms of size equal to or less than $ \lambda/100 $ (where $ \lambda $ is the wavelength of the incident THz wave) and of molecules in extremely low-concentration solutions (down to a few femtomolar). After the successful high-throughput fabrication of nanostructures, nanoantennas and metamaterials were found to be indispensable in enhancing the sensitivity of conventional THz-TDS. These nanostructures lead to strong THz field enhancement which, when in resonance with the absorption spectrum of absorptive molecules, causes significant changes in the magnitude of the transmission spectrum, thereby enhancing the sensitivity and allowing detection of molecules and biomaterials in extremely low-concentration solutions. Here, we review recent developments in ultra-sensitive and selective nanogap biosensors. We also provide an in-depth review of various high-throughput nanofabrication techniques and discuss the physics behind the field enhancement in sub-skin-depth as well as sub-nanometer nanogaps. We introduce finite-difference time-domain (FDTD) and molecular dynamics (MD) simulation tools to study THz biomolecular interactions. Finally, we provide a comprehensive account of nanoantenna-enhanced sensing of viruses (like H1N1) and of biomolecules such as artificial sweeteners, which are addictive and carcinogenic. In this review, we describe high-throughput fabrication techniques for THz nanoantennas via photolithography, atomic layer lithography, the pattern-and-peel method, self-assembly lithography, etc. We then discuss THz field enhancement in metal nanogap antennas, from enhancement beyond the skin depth of the metal up to the quantum regime, and briefly describe finite-difference time-domain (FDTD) methods for calculating the field enhancement across the nanogap. Finally, we describe THz-TDS sensing of biomolecules based on the field enhancement across the nanogap, and we show how molecular dynamics (MD) simulation can be a versatile tool to estimate terahertz absorption and the vibrational density of states (VDOS).

Photolithography (Fig. 2) is one of the most high-throughput methods for fabricating nanogap structures. In photolithography, a desired polymer pattern is defined using ultraviolet (UV) light and a photomask. This polymer patterning is formed by the physical change of a photo-sensitive material known as photoresist (PR). The polymers added to a PR decide its solubility under UV exposure: a PR consisting of an insoluble polymer that becomes soluble on UV exposure is called a positive PR, while a PR consisting of a soluble polymer that polymerizes and becomes insoluble on UV exposure is called a negative PR. Therefore, by using the desired PR, wafer-scale metal structures are fabricated by performing a metal deposition process on the patterned resist followed by lift-off of the unwanted metal film. The lithography method discussed above is, however, limited in the minimum gap width it can reliably produce; to study nano- and sub-nanoscale optics, we need better lithography techniques to fabricate sub-10 nm wide gaps.
Many researchers have reported the significance of depositing an atomically thin insulating or sacrificial layer to fabricate small gaps between two metal structures 127 , an approach for fabricating vertical nanogaps known as atomic layer lithography (Fig. 3 (A-D)). Figures 3 (E-G) are scanning electron micrographs (SEM) of a 5 nm gap in an Ag film, a 10 nm gap between Au and Ag, and a 9.9 Å Au-Al₂O₃-Au vertical nanogap, respectively, all fabricated using the atomic layer lithography technique. This method also includes a new planarization scheme which eliminates background light transmission, enabling background-free transmission measurements. The technique combines atomic layer deposition (ALD) with 'plug-and-peel' adhesive tape planarization and is a three-step process: the first step is the pre-patterning of the metal using conventional lithography techniques, the second is the sidewall formation of materials like Al₂O₃ or SiO₂ by ALD, and the third is the 'plug-and-peel' taping process for planarization of the metal plugs. Atomic layer lithography is therefore a high-resolution patterning method with large-scale uniformity. Jeong et al. 22 refined the atomic layer lithography technique as demonstrated in Fig. 4. In this approach, a conventional photolithography and lift-off process is used to pattern a 30 nm Cr/150 nm Al layer on a 3 nm Cr/100 nm gold-sapphire substrate. The Cr-Al double layer acts as a sacrificial layer which is etched out later, removing excess metal and making the structure planar. Since Au is less resistant to the ion beam than Al, the exposed Au layer (excluding the Au film beneath the Al-Cr layer) is removed using an ion milling process. Uniform ALD of Al₂O₃ is then performed over the whole structure with nanometer accuracy in thickness. After that, a second layer of Au with an adhesive Cr layer is deposited, filling the trenches. Finally, Al/Cr wet etching removes the overhanging Au and Al layers, exposing the dielectric gaps. This lithography technique can be used to fabricate large-scale nanogaps with ultrahigh aspect ratio. Tripathi et al. 133 developed a self-assembled lithography process by which they fabricated quantum dot (QD) nanogap metamaterials (Fig. 5). In this process, photolithography was used to pattern PR on a clean, dried sapphire substrate. With the help of a thermal evaporator (evaporation rate = 1 Å/s), a chrome layer 10 nm thick was deposited over the PR pattern, followed by a silver film 200 nm thick evaporated over the chrome layer. After the evaporation of the first silver layer, the whole sample was dipped in acetone and sonicated for 2 minutes at 150 W and 36 kHz, lifting off the PR pattern. After sonication, the sample was washed with isopropyl alcohol and dried using N₂ gas. The dried sample was then dipped into a toluene solution of OT (1,8-octanedithiol)-functionalized QDs, resulting in the formation of a self-assembled monolayer of QDs. The sample was again dried with N₂ gas, and a second silver layer of the same or greater thickness was deposited over the self-assembled monolayer. Then, using a Scotch tape 95 , the second layer of silver was taped off, leaving a vertical metal nanogap filled with a monolayer of QDs.

III. TERAHERTZ-FIELD ENHANCEMENT USING NANOANTENNAS

At THz frequencies, a metal film is neither a perfect conductor nor infinitely flat.
When an electromagnetic wave is incident normally on a perfectly conducting metallic plane, an induced surface current develops which reflects the light back, with no charge accumulation anywhere. 34 When this plane is divided into two perfectly conducting Sommerfeld half planes, macroscopic accumulation of charges takes place at the edges over a length scale of one wavelength, such that the surface charge density σ(x, t) varies in time for small values of x ≪ λ, as given by Eq. 1. 34 Here, ε₀, E₀, ω, and x denote the permittivity of vacuum, the incident electric field, the angular frequency, and the distance from the edge, respectively. 144 The charge singularity at x = 0 for this half plane is very feeble and disappears upon integration. When the two metallic half planes are brought close together, the charges experience an electrostatic force from their counterparts across the gap, which pulls them towards the edge and develops a strong electric field across the gap. As the gap shrinks, more charges accumulate at the edges as the light-induced currents become stronger and stronger, increasing the surface charge density at the edges as shown in Fig. 6 (B-C). This system can be portrayed as a line capacitor driven by the light-induced alternating current, as shown in Fig. 6 (A). (Fig. 6 caption: the nanogap is charged like a line capacitor by the light-induced alternating current J, and the resulting field enhancement is shown by the gradual colour contour; reprinted with permission from Ref. 34, © 2009, Springer Nature. Panels (B) and (C) are schematics of the charge accumulation near the metal edges and the corresponding current density induced inside the metal film for two gap sizes, (B) w ∼ h (≪ λ) and (C) w ≪ h (≪ λ); adapted with permission from Ref. 16, © 2017, American Physical Society.) Numerous studies have reported a monotonic enhancement of the electric field with decreasing gap size. 34,93,95,145 Further studies have reported a decrease in field enhancement when the gap shrinks into the quantum regime, 24,194-199 and an increase in field enhancement beyond the sub-skin-depth regime has also been reported. 16,23,34 In this section of the review, we discuss THz field enhancement for gaps beyond the skin depth and in the quantum regime. We also discuss the classical limit of the field enhancement of a single nanoslit before it enters the quantum regime, and the finite-difference time-domain algorithm, a computational approach developed for the quantitative estimation of far- and near-field enhancement.

A. THz Field Enhancement in Metal Nanogaps in the Sub-Skin-Depth Regime

In this part of the section, we discuss the skin-depth physics 23 and the results reported in the literature. 16,34 Consider two perfectly conducting metallic planes of thickness h 16,134 kept at a distance w from each other, forming a metal-air-metal nanogap, and let an electromagnetic wave of electric field magnitude E₀ be incident normally on the gap. The ultimate field enhancement in the high-aspect-ratio limit (w/h ≪ 1) is then given by Eq. 2, 23 where E denotes the electric field at the gap and λ denotes the wavelength of the incident electromagnetic wave in vacuum. If a dielectric material of permittivity ε is filled inside the gap, Eq. 2 is modified to Eq. 3, where ε₀ denotes the permittivity of vacuum. From these equations, it can be seen that the field enhancement is independent of the metal characteristics.
Consider an electromagnetic wave incident normally on a thin metal film of thickness h smaller than the skin depth δ = √(2/(μ₀σ_mω)). Here, σ_m denotes the conductivity of the metal, μ₀ denotes the magnetic permeability of vacuum, and ω denotes the angular frequency of the electromagnetic wave. The thickness at which the absorption loss by the metal reaches 50% is known as the characteristic thickness (h₀). For a metal with conductivity 10⁷ Ω⁻¹ m⁻¹, the characteristic thickness is 0.53 nm, while at 1 THz the skin depth is about 100 nm or more for good metals; the thickness range for such transitional metal films is therefore 5-100 nm. For such thin films, the electric field amplitude transmission and reflection coefficients are t = E_t/E₀ = 2ε₀c/(2ε₀c + σ_m h) and r = −σ_m h/(2ε₀c + σ_m h), where E_t denotes the transmitted electric field and c the speed of light in vacuum. 23 Consider transverse-magnetic polarized light (E_x, H_y) incident normally on a thin metal film. Due to reflection from the incident surface, the magnetic field near that surface becomes twice the incident magnetic field, while the magnetic field on the transmission side is much smaller than the incident field (Fig. 7 (B)). Assuming that a constant electric field and current density develop inside the thin film and using Ampère's law (neglecting the vacuum displacement current), the current density inside the thin film is J = 2H₀/h. 23 Here, J denotes the current density developed inside the thin metal film and H₀ denotes the incident magnetic field. Since the field inside the film is E = J/σ_m and the tangential component of the electric field is continuous at the air-metal interface, the transmitted field is E_t = 2H₀/(σ_m h), reproducing Eq. 4. On the transmitting side, the electric field just inside the metal surface is denoted E_m. Across the air nanogap, the normal component of the displacement field must be the same (Fig. 7 (C)), so the electric field inside the air nanogap is E_gap = (ε_m/ε₀) E_m, where ε_m denotes the terahertz dielectric constant of the metal given by the Drude model, ε_m(ω) = ε₀[ε_∞ − ω_p²/(ω² + iγω)], with ε_∞ the high-frequency dielectric constant of the metal, ω_p the plasma frequency, and γ the Drude damping constant. Through this analogy, it can be seen that as the conductivity of the metal increases, weaker electric fields are induced inside the metal; this is compensated by the metal's high dielectric constant when the displacement boundary condition is applied. Therefore, the field enhancement is independent of the conductivity of the metal, as shown mathematically above. Fig. 8 (A) is an SEM image of a gold nanogap sample of gap width 70 nm, fabricated by Seo et al. 34 using a focused ion beam technique. Fig. 8 (B) shows the horizontal electric field enhancement in the x-z plane for the 70 nm gap, analyzed from an FDTD simulation at 0.1 THz. The figure shows that the electric field does not penetrate into the metal but is almost completely concentrated inside the nanogap, even though the gap size is much smaller than the skin depth (250 nm). Fig. 8 (C) shows the electric field enhancement of a 5 nm gap as a function of the film thickness h at a frequency of 0.3 THz; the red dots are the experimental data, the red solid line shows the values calculated from the modal expansion, and the figure demonstrates the 1/h dependence of the field enhancement. The charge distribution differs between narrow gaps and wide (δ, h < w) gaps. In wide gaps, the charges are mostly spread over the surface outside the gap rather than at the metal edges (Fig. 6 (B)), which results in a decrease of the field enhancement, whereas in narrow gaps, the charges mostly accumulate at the metal edges of the gap (Fig. 6 (C)), resulting in stronger field enhancement.
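As a quick numerical check of the two length scales just introduced, the textbook expressions δ = √(2/(μ₀σ_mω)) for the skin depth and h₀ = 2ε₀c/σ_m for the 50%-absorption characteristic thickness can be evaluated directly. The short sketch below is only an order-of-magnitude check under these standard formulas; the conductivity values are generic choices of ours, not parameters taken from the cited works.

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability [H/m]
EPS0 = 8.854e-12        # vacuum permittivity [F/m]
C0 = 2.998e8            # speed of light [m/s]

def skin_depth(sigma, freq_hz):
    """Classical skin depth sqrt(2 / (mu0 * sigma * omega)) in meters."""
    return np.sqrt(2.0 / (MU0 * sigma * 2 * np.pi * freq_hz))

def characteristic_thickness(sigma):
    """Film thickness h0 = 2*eps0*c/sigma at which absorption in a thin
    free-standing conducting film peaks near 50%."""
    return 2.0 * EPS0 * C0 / sigma

print(skin_depth(1e7, 1e12) * 1e9)            # ~160 nm: order 100 nm at 1 THz
print(skin_depth(4.5e7, 1e12) * 1e9)          # ~75 nm for a gold-like conductivity
print(characteristic_thickness(1e7) * 1e9)    # ~0.53 nm, matching the value quoted above
```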
As the gap width w decreases further, entering the sub-nanometer and Ångström regime, the charge distribution becomes insensitive to the gap size and electrons tunnel through the potential barrier of the nanogap, showing quantum effects. 197,199-210 When THz electromagnetic waves are incident on the nanogap, a transient voltage develops across the dielectric gap which bends the conduction band of the dielectric toward the Fermi energy of the metals (Fig. 9). 24 This increases the probability of electron tunneling through the potential barrier, causing non-linear transmission. Bahk et al. also investigated the classical limit of the field enhancement before the gap width enters the quantum regime. 16 They reported that the field enhancement exhibits saturation rather than a monotonic increase, showing that the nanogap acts like a charged capacitor 34 as illustrated in Fig. 6 (A), whose total induced charge is inversely proportional to the frequency of the light-induced alternating current at THz frequencies. Fig. 10 (A) shows an FDTD analysis of the electric field distribution around a 1.5 nm wide gap in a 150 nm thick gold film at a frequency of 1.5 THz. As expected, the horizontal electric field is strongly concentrated inside the gap and enhanced by a factor of 2000. Fig. 10 (B) shows the electric field enhancement for different gap sizes at various frequencies; the theoretical modal-expansion calculation for the perfect electric conductor (PEC) model is shown by solid lines.

C. The Finite-Difference Time-Domain (FDTD) Algorithm: A Computational Electromagnetism Approach

The finite-difference time-domain (FDTD) method 211 is used to solve Maxwell's equations in the time domain and is widely used for computational electrodynamics. The four Maxwell's equations are ∇ · D = ρ, ∇ · B = 0, ∇ × E = −∂B/∂t, and ∇ × H = J + ∂D/∂t, where the electric field, magnetic field, electric displacement field, magnetic flux density, free electric charge density, and free current density are denoted by E, H, D, B, ρ, and J, respectively. 212,213 These equations are solved numerically on a discrete grid in both space and time, and derivatives are handled with finite differences. No approximations or assumptions are made about the system, making the method highly versatile and accurate. It is a fully vectorial simulation method, as it solves for all electric and magnetic field vector components, and being a time-domain method, FDTD can be used to calculate broadband results from a single simulation. The FDTD method is typically used when the feature size is of the order of the wavelength. It is general, versatile, accurate, broadband, and fast, which makes it one of the most reliable methods in computational electrodynamics. The FDTD algorithm solves Maxwell's curl equations in non-magnetic materials, ∂D/∂t = ∇ × H and ∂H/∂t = −(1/μ₀) ∇ × E, where D = εE. Since the region is vacuum, J = 0. Here H, E, and D describe the magnetic field, electric field, and displacement field, respectively, while ε_r(ω) is the complex relative dielectric constant, given by ε_r(ω) = n², where n denotes the refractive index of the material. In 3-D, the six electromagnetic field components of Maxwell's equations are E_x, E_y, E_z and H_x, H_y, H_z. Assuming that the structure is infinite in the z-dimension and the fields are independent of z, such that ε_r(ω, x, y, z) = ε_r(ω, x, y), two independent sets of Maxwell's equations are created. Each set contains three field components which can be solved in the x-y plane.
The set containing the components E_x, E_y, and H_z is known as the transverse electric (TE) set, and the set containing the components H_x, H_y, and E_z is known as the transverse magnetic (TM) set. 211 In the TM case, Maxwell's equations reduce to ∂D_z/∂t = ∂H_y/∂x − ∂H_x/∂y, ∂H_x/∂t = −(1/μ₀) ∂E_z/∂y, and ∂H_y/∂t = (1/μ₀) ∂E_z/∂x, while in the TE case they reduce to ∂D_x/∂t = ∂H_z/∂y, ∂D_y/∂t = −∂H_z/∂x, and ∂H_z/∂t = −(1/μ₀)(∂E_y/∂x − ∂E_x/∂y). These equations are solved on a discrete spatial and temporal grid, with each field component evaluated at a slightly different location within the grid cell (the Yee cell). 211 With this method one can simulate the electric and magnetic fields around arbitrary nanostructures using commercially available software such as FDTD Solutions (Lumerical Inc., Canada), COMSOL, etc. Fig. 11 shows FDTD analyses of a gold nanogap (100 nm thick and 100 nm wide) and a gold nanorod. 145

IV. THZ-TDS AND SENSING OF MOLECULES AND BIOMOLECULES

In the late 1980s, Grischkowsky et al. 214-216 introduced the terahertz time-domain spectroscopy technique (Fig. 13). It is a spectroscopic technique in which short THz pulses are used to probe the properties of matter. The system generally consists of a femtosecond laser (fs-laser) operating at a repetition rate of 100 MHz and producing a train of 100 fs pulses. Using a beam splitter, the fs-pulse train is split into two beams: a pump beam and a probe beam. The pump beam is incident on a THz emitter to generate THz pulses, which are collimated onto the sample using a pair of parabolic mirrors. Concurrently, the probe beam is used in a time-gated manner for detection of the THz electric field, which contains time-domain information on both phase and amplitude. The transmitted THz electric field is measured coherently as a function of time to obtain a time-domain signal, which is converted to a noise-free frequency-domain signal using a Fourier transformation. THz-TDS is a non-contact, non-invasive, and label-free detection technique and is extensively applied in biomedical imaging. However, THz-TDS has proved to be less sensitive for the detection of micro-organisms (like yeast, molds, and bacteria) of size equal to or less than λ/100, due to their transparency at THz frequencies. Recently, plasmonic nanoantennas 88,89,217 and metamaterials 92-94 have proved to be effective devices for the detection of micro-organisms, organic molecules, and biomolecules. These devices show resonances with strong field enhancement across the gap in the THz frequency range, which is highly sensitive to changes in the dielectric constant of the gap region. Therefore, these devices are used as biosensors for ultra-sensitive and label-free detection of micro-organisms, organic molecules, and biomolecules. In this section, we discuss the two widely used label-free THz-TDS methods: (i) THz-TDS using molecular dynamics (MD) simulations, and (ii) THz sensing by nanoantennas and metamaterials.

A. Computational THz-TDS using Molecular Dynamics (MD) Simulation

MD simulation is a computational method used to study the dynamics of a system of molecules in the condensed phase. To study the dynamics of a complex molecule fully accurately, one would need to employ a quantum model and treat the wave function of each sub-atomic particle; MD instead employs classical Newtonian mechanics, which makes it less accurate. Specifically, it integrates Newton's equations of motion in discrete time steps. Due to the high THz absorption by water, THz-TDS becomes challenging for water-based systems.
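As a concrete illustration of the Yee update scheme summarized in Sec. III C above, the sketch below implements a minimal one-dimensional free-space FDTD loop (an E_x/H_y pair with a soft Gaussian source). All of the numbers here (grid size, cell size, source position and width) are illustrative choices of ours with no connection to the cited simulations; production solvers such as those named above add material dispersion, absorbing boundaries, and the full 3-D Yee cell on top of this core update.

```python
import numpy as np

MU0, EPS0 = 4e-7 * np.pi, 8.854e-12
c0 = 1.0 / np.sqrt(MU0 * EPS0)

nz, nt = 400, 800              # illustrative grid and step counts
dz = 1e-6                      # 1 um cell size
dt = 0.5 * dz / c0             # time step satisfying the Courant condition

ex = np.zeros(nz)              # E_x sampled on integer grid points
hy = np.zeros(nz)              # H_y sampled on half-integer points (Yee staggering)

for n in range(nt):
    # Ampere's law: dEx/dt = -(1/eps0) dHy/dz
    ex[1:] += (dt / (EPS0 * dz)) * (hy[:-1] - hy[1:])
    # Soft Gaussian source injected at one grid point
    ex[50] += np.exp(-((n - 60) / 20.0) ** 2)
    # Faraday's law: dHy/dt = -(1/mu0) dEx/dz
    hy[:-1] += (dt / (MU0 * dz)) * (ex[:-1] - ex[1:])

# ex and hy now hold a snapshot of the propagating pulse after nt steps
```

The half-cell offset between the E and H arrays, together with the leapfrog time stepping, is exactly the Yee staggering referred to above.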
Because of this strong water absorption, MD simulation is one of the preferred methods to study the water dynamics of a hydrated molecular system. In MD, water dynamics is of great significance for the structural arrangement of molecules: the solvated molecule alters the dynamics of the surrounding water molecules, which adopt a quasi-coherent character caused by the reorganized, looser water dynamics limited to the first hydration layer. 223 Similar results demonstrated a clearly distinct power spectrum (VDOS) for water molecules hydrogen-bonded to different planes of an antifreeze protein in the spectral range of 1-4 THz. 224 In a study of the vibrational spectrum of water in the hydration layer of the villin headpiece subdomain, the O···O···O bending mode showed a blueshift in the first hydration layer, observed for water molecules hydrogen-bonded to the protein, 225 which also pointed to the structural flexibility of the protein. Later, an atomistic MD simulation of hen egg-white lysozyme solvated in explicit water molecules at room temperature was performed, 226 which reported that a few large-amplitude bistable motions exhibited by two coils controlled the overall flexibility of the protein molecule. A series of MD simulations was performed for comparison with experimental far-infrared spectroscopy data used to study the dynamics of three aqueous peptides with varied helicity. 233 Extensive first-principles studies of water, resolved in time and space, reported that the collective motion of H-bonded molecules in the second solvation shell contributed significantly to the absorption at about 2.4 THz, and also showed the presence of third-shell effects. 234 In a later publication, 235 Heyden et al. showed the capability of nonpolarizable water models to reproduce the low-frequency, inter-molecular vibrations of water, since the electronic polarization is dominated by the static molecular dipoles. The depth of the hydration shell of hydrated lysozyme, BPTI, TRP-cage, and TRP-tail was estimated using MD simulations. 218 To perform a basic MD simulation of a molecule, 236 first and foremost a force field must be chosen to characterize the molecular interactions. In the next step, the molecule is positioned in a unit cell of the desired shape and size, followed by the addition of water molecules, known as the solvation step. Ions are then added to neutralize the net charge of the system. After the addition of ions, the system is relaxed through a process known as energy minimization, which also ensures that the system is free from steric clashes or inappropriate geometry. For further simulation, the system needs to be brought to the desired temperature, after which an adequate amount of pressure must be applied until it reaches the proper density. This whole process is known as equilibration and is conducted in two phases: the first phase brings the system to the desired temperature under the NVT (constant Number of particles, Volume, and Temperature) or isothermal-isochoric ensemble, and the second phase stabilizes both the pressure and the density of the system under the NPT (constant Number of particles, Pressure, and Temperature) or isothermal-isobaric ensemble. After attaining the desired temperature and pressure, the system is ready for the final MD run, known as the MD production run, after which the system can be analyzed. These simulation steps are summarized in a flowchart (Fig. 14).
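Once the production run is finished, the vibrational analysis discussed next boils down to two post-processing steps: computing the velocity autocorrelation function (VACF) and Fourier-transforming it to obtain the VDOS. The sketch below assumes the velocities have already been extracted from the trajectory into a NumPy array; the array shape, sampling interval, and synthetic stand-in data are illustrative assumptions of ours, not settings from the cited studies.

```python
import numpy as np

def vacf(vel):
    """Normalized velocity autocorrelation function.

    vel: array of shape (n_frames, n_atoms, 3) holding velocities
    sampled at a fixed interval from the MD production run.
    """
    n_frames = vel.shape[0]
    corr = np.zeros(n_frames)
    for lag in range(n_frames):
        # <v(0).v(t)> averaged over time origins and atoms
        dots = np.sum(vel[: n_frames - lag] * vel[lag:], axis=2)
        corr[lag] = dots.mean()
    return corr / corr[0]

def vdos(corr, dt_ps):
    """One-sided Fourier transform of the VACF (magnitude shown) vs. frequency in THz."""
    spectrum = np.abs(np.fft.rfft(corr))
    freq_thz = np.fft.rfftfreq(len(corr), d=dt_ps)   # 1/ps = THz
    return freq_thz, spectrum

# Synthetic data standing in for, e.g., oxygen-atom velocities
rng = np.random.default_rng(0)
velocities = rng.normal(size=(1000, 50, 3))
freq, dos = vdos(vacf(velocities), dt_ps=0.01)       # assumed 10 fs sampling interval
```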
Fig. 15 (A) shows the MD-simulation-generated VACF curves for oxygen and hydrogen atoms in bulk water and in the water within 3 Å of the lysozyme molecule (the first hydration layer). The figure depicts two results: i) the hydrogen dynamics become uncorrelated faster (0.85 ps earlier) than the oxygen dynamics, and ii) the dynamics of the oxygen atoms are much more restricted in the first hydration layer than in bulk water (as seen from their extrema) due to the caging effect. The corresponding VACF data were further Fourier transformed to obtain the VDOS shown in Fig. 15 (B). In that figure, the oxygen atoms of bulk water show a well-defined peak at 1.1 THz signifying the bending motion of O-O-O atoms (triplets of H-bonded oxygen atoms). In the case of lysozyme-bound water molecules, this peak is blue-shifted by 0.4 THz, signifying that the H-bonds between O atoms become stronger in the hydration layer. This peak has a lower amplitude than that of the oxygen atoms of bulk water, as the presence of the lysozyme does not strongly influence the vibrational modes of the water molecules. The VDOS of hydrogen in both situations is very flat, as the hydrogen atoms are not much influenced by the protein molecule. The absorption spectrum can be obtained by applying a Fourier transform to the total dipole moment autocorrelation function of the system computed from the MD simulation, 219 as given by Eq. (27). Here, V denotes the volume of the system, k_B denotes Boltzmann's constant, T denotes the absolute temperature of the system, c denotes the speed of light, and n(ω) denotes the frequency-dependent refractive index.

B. THz Sensing by Nanoantennas and Metamaterials

As introduced above, plasmonic nanoantennas and metamaterials are effective platforms for sensing molecules and biomaterials. As discussed earlier in this section, the strong THz field enhancement and the resonances in the gap region of these nanostructures are sensitive to changes in the dielectric constant of the insulating gap material. If the gap is filled with the desired biomaterial sample, the dielectric (i.e., the biomaterial sample) shows enhanced THz absorption due to the enhanced THz interaction, and the THz optical properties can therefore be studied from the THz dielectric response. Thus, with the necessary changes to the THz-TDS setup (Fig. 16), enhanced sensing of molecules and micro-organisms that are not detected by a conventional THz-TDS setup becomes possible. From the Fourier-transformed spectra (frequency-domain amplitude and phase) obtained from the sample and reference transmitted signals, the complex optical properties are computed as follows, where A_s and A_r are the amplitudes of the sample and reference transmitted signals, respectively, the real parts of the absorption coefficient and the index of refraction are denoted by α(ω) and n(ω), respectively, and the thickness of the dielectric is denoted by d. The real part of the refractive index 87 is given by n(ω) = 1 + c(φ_s − φ_r)/(2πf d), and applying the Beer-Lambert law, the absorption coefficient 23 is given by α(ω) = −(1/d) ln T with T = (A_s/A_r)², where φ_r and φ_s are the phases of the sample and reference transmitted signals, respectively, T denotes the transmittance, and f is the frequency of the input THz signal. This biosensing process can be studied in detail through FDTD simulation. The biomaterial sample film is assumed to be a homogeneous dielectric film with a thickness proportional to the molecular concentration, sandwiched between two gold films (Fig. 17 (A)). The dielectric film has a complex refractive index A·n + i·B·κ, where A and B are constants whose values range from 1.0 to 3.0 over a frequency band.
The absorptive dielectric film was inserted in the gold nanogap by applying an auxiliary differential equation. To study the electric field interaction in the absorptive dielectric film, a non-uniform mesh was applied over the whole film with a smallest step size of 10 nm. THz electromagnetic waves are incident normally on the nanoantenna. As the THz waves pass through the dielectric film, the transmitted THz electric field decays exponentially with the film thickness; here, T_s(ω) = (E_s(ω))² and T_r(ω) = (E_r(ω))² are the transmittances through the dielectric-filled and air nanogaps, respectively, C is the transmittance ratio at the air-dielectric interface, k = 2πf/c is the incident momentum, and h is the thickness of the dielectric. Fig. 17 (B-C) shows the simulated transmittance for different complex refractive indices. It is evident from the figure that the absorption depends on the imaginary part of the complex refractive index (κ), while the resonance frequency depends solely on the real part of the complex refractive index (n). The change in resonance frequency is quite small, but stronger absorption can lead to an appreciable change. Therefore, the variation in the transmission spectra provides clear evidence for the identification of molecules and species contained in the biomaterial sample. To study the effects of various gap widths on THz sensing, 89 single-slot THz nanoantennas of length 90 µm with slot widths of 50 nm (in a 50 nm thick film) and 100, 200, 500, 1000, and 5000 nm (in 100 nm thick films) were fabricated on a thick quartz substrate and filled with 1 mg/ml RDX (1,3,5-trinitroperhydro-1,3,5-triazine) molecules. Initially, the THz absorption (α) of 1 mg of RDX molecules placed over a bare quartz substrate was studied, as shown in Fig. 18 (B). Then the absorption of RDX molecules inside the various nanoslots was studied and the data were plotted against the slot widths, as shown in Fig. 18 (C). From Fig. 18 (B-C), it is evident that the nanoantenna enhances the THz absorption by a large factor of 10³. To study the effects of different substrates on biomaterial detection, 93 a THz antenna (length = 100 µm) with a 10 × 10 slot array, each slot with a width of 2 µm and a periodicity of 200 µm (Fig. 19 (A-B)), was patterned by electron-beam lithography on two different substrates, undoped Si and quartz, to sense yeast samples. Using an FFT (Fast Fourier Transform) algorithm, the normalized THz transmission amplitudes of the THz slot antenna with (an average of 50 yeast cells, or N_av = 50) and without the yeast sample were analyzed (Fig. 19 (C)). From the figure, an evident 9 GHz redshift was observed in the resonance frequency. To compare the effects of the different substrates, the normalized THz transmission amplitudes were measured for the Si- and quartz-substrate THz antennas, as shown in Fig. 19 (D-E). Comparing the measured amplitudes, quartz showed stronger redshifts than the Si substrate (Fig. 19 (F)), showing a 1/ε_eff dependence of the resonant frequency shift (∆f/f₀), where ε_eff = n²_eff is the effective dielectric constant. Hence, it can be concluded that by using a low-dielectric-constant substrate, the sensitivity of a THz nanoantenna can be enhanced. THz nanoantennas are also selective in nature. 88 This selective detection was shown using a nanoantenna of length 35 µm having a resonance frequency of 1.7 THz.
This antenna was specially designed to differentiate between fructose and D-glucose and is also known as the fructose antenna (Fig. 20 (B)). These molecules were discriminated from their measured transmittances using the fructose antenna, as shown in Fig. 20 (C-D). Using the same nanoantenna, the sugars and low-concentration artificial sweeteners (acesulfame K and aspartame) contained in beverages and diet sodas, respectively, of popular brands were detected. The detection of these artificial sweeteners is important, as they have recently been found to be addictive and toxic in nature. The transmittances of the sugars contained in sweetened beverages of various brands in the THz range (0.5-2.5 THz) are shown in Fig. 20 (E). The low transmittances seen in the figure are due to the two artificially added sweeteners, whose THz absorptions are shown in Fig. 20 (F). Hence, nanoantennas can provide highly sensitive detection even at low molecular concentrations. Recently, Lee et al. 92 studied the THz transmittances of various virus samples (H1N1, H5N2, and H9N2) using two THz nanoantennas: i) a multi-resonance nanoantenna (with resonance frequencies of 0.63, 0.93, and 1.31 THz), and ii) a single-resonance nanoantenna (with a resonance frequency of 1.4 THz), shown in Fig. 21 (B-E). Similar work has been reported by Park et al., 94 who studied the transmission amplitude in both the presence and absence of virus samples (PRD1 and MS2 viruses) using a THz nanogap metamaterial (Fig. 22).

V. CONCLUSION AND FUTURE OUTLOOK

THz waves are weakly penetrating, non-ionising electromagnetic waves. These waves are also known as 'sub-millimeter waves' and are hence widely used in the fields of astronomy and spectroscopy. Since electromagnetic interactions with metals are described by Maxwell's equations, researchers have been strongly motivated to study electric field localization and other plasmonic effects in various metallic nanostructures. Numerous investigations have reported that electromagnetic interaction with a metal film increases the mobility of charges in the film, driving them towards the metal edges and leading to enhancement of the electric field inside different nanogap structures. For the study of light-matter interactions, THz waves are attractive because they can squeeze through sub-nanometer metallic gaps (THz nanoantennas) and show non-linear optical responses. These nanoantennas are further integrated with different materials to study their novel plasmonic properties. Recently, the successful fabrication of graphene-integrated plasmonic systems 70,167,239-242 has boosted further investigations of their plasmonic properties, because graphene is atomically thin and electrically tunable. As discussed earlier, THz nanoantennas have played a crucial role as biosensors for ultrasensitive detection of various biomolecules and biomaterials, which is the prime focus of this review. Across the literature reviewed here, nanoantennas have steadily become more sensitive and selective. Owing to their enhanced detection at low molecular concentrations, nanoantennas could in the future be used to monitor blood sugar levels. 88 As an emerging future technology, nanoantennas could also be used for the early detection of cancerous tumours, 243,244 which would support the development of effective cancer treatments.
A few recent works have shown that biosensing based on surface plasmon resonance (SPR) is effective for label-free tumor detection 245-248 and can be used to detect a single molecule from an early-stage tumor. Research and development are currently in progress on a nanoantenna-based SPR biosensor capable of detecting cancer cells at an early stage.

VI. ACKNOWLEDGEMENT

The authors thank Birla Institute of Technology, Mesra, Ranchi for providing research facilities and MHRD, Government of India for support through TEQIP-III. The authors also thank Pawan Kumar Dubey, Akriti Raj, Sameer Kumar Tiwari, Kamana Mishra, Priyanshi Srivastava, Dhruv Sood and Devotosh Ganguly for useful discussions and proofreading of the manuscript.

VII. CONFLICT OF INTEREST

The authors declare no conflict of interest.
Secreted herpes simplex virus-2 glycoprotein G alters thermal pain sensitivity by modifying NGF effects on TRPV1

Genital herpes is a painful disease frequently caused by the neurotropic pathogen herpes simplex virus type 2 (HSV-2). We have recently shown that HSV-2-secreted glycoprotein G (SgG2) interacts with and modulates the activity of the neurotrophin nerve growth factor (NGF). This interaction modifies the response of the NGF receptor TrkA, increasing NGF-dependent axonal growth. NGF is not only an axonal growth modulator but also an important mediator of pain and inflammation, regulating the amount, localization, and activation of the thermal pain receptor transient receptor potential vanilloid 1 (TRPV1). In this work, we addressed whether SgG2 could contribute to HSV-2-induced pain. Injection of SgG2 in the mouse hindpaw produced a rapid and transient increase in thermal pain sensitivity. At the molecular level, this acute increase in thermal pain induced by SgG2 injection was dependent on differential NGF-induced phosphorylation and on changes in the amount of TrkA and TRPV1 in the dermis. These results suggest that SgG2 alters thermal pain sensitivity by modulating the TRPV1 receptor.

Introduction

Genital herpes is a common sexually transmitted disease (STD) caused mainly by herpes simplex virus type 2 (HSV-2) and, with lower incidence, by herpes simplex virus type 1 (HSV-1) [1]. Both viruses initially infect epithelial cells within the skin and the mucosa during primary infection. Following replication in epithelial cells, HSV reaches and infects free nerve endings (FNE) of sensory neurons, establishing latency in ganglia of the peripheral nervous system (PNS). Reactivation of HSV leads to production of infectious viral particles, which are anterogradely transported along the axons to the skin and mucosa, starting a new cycle of infection [2]. Primary HSV infection, reactivation, and shedding can be asymptomatic or proceed with clinically evident disruption of the skin and mucosa, causing papules and ulcers. HSV infection can damage or kill epithelial and neuronal cells [3,4]. The degree of cell damage, together with the associated inflammatory response, will determine the severity of the pathology [1]. Nearly all patients suffering from genital herpes present with itching, burning, and pain, caused probably by an extensive inflammatory response [5,6]. Pain is an unpleasant sensory experience associated with a noxious stimulus that serves as a defense mechanism [7]. It is conveyed to the spinal cord and the brain by specialized sensory neurons known as nociceptors. Each type of nociceptor expresses a subset of receptors that responds to tissue damage caused by chemical, mechanical, or thermal stimulation. These receptors are activated once the stimulus reaches a certain threshold that is considered harmful. However, "pain thresholds" can vary in physiological or pathological conditions. Inflammation is a well-studied scenario in which pain thresholds are reduced in such a manner that non-harmful stimuli can be interpreted as painful [8]. During inflammation, several factors are secreted by damaged tissue and/or by immune cells that regulate nociceptors, decreasing the threshold of pain receptors. Nerve growth factor (NGF) is a neurotrophic factor that belongs to the family of the neurotrophins [9]. NGF binds to and activates the tyrosine kinase receptor TrkA to promote neuronal survival, axonal growth, and guidance in the PNS.
NGF is also crucial for the development and maintenance of nociceptors [10]. At birth, the majority of nociceptors express TrkA. Afterwards, half of the nociceptive neurons downregulate the expression of TrkA, which becomes completely extinguished in these cells during the first 3 weeks of life [11,12]. In mature nociceptors, expression of TrkA is associated with peptidergic neurons expressing inflammatory neuropeptides like calcitonin gene-related peptide (CGRP) or substance P [13,14]. This, together with the increased secretion of NGF during inflammation and its role in activating mast cells and neutrophils, underlines NGF's role in inflammatory pain. Therefore, NGF coordinates pain and inflammation through the regulation of immune and neuronal cells [15][16][17]. The relationship between NGF and inflammatory pain has been well characterized at the molecular level. The thermal pain receptor transient receptor potential vanilloid 1 (TRPV1) is a non-specific cation channel activated by physical stimuli such as high temperatures and chemical stimuli like low pH or capsaicin. TRPV1 activation in nociceptive neurons leads to a painful and burning sensation [18]. TRPV1 is tightly regulated, and its threshold for activation is high (i.e., temperatures higher than 42°C). However, under physiological or pathological conditions, activation thresholds can vary [18]. TRPV1 levels in peripheral nerves in the skin are low, while levels in the cell bodies within the dorsal root ganglia (DRG) are high [13]. The NGF-TrkA axis is one of the most important regulators of TRPV1 amount, spatial distribution, and activation threshold [19,20]. Inflammation of peripheral tissues promotes a local upregulation of NGF [21]. As a consequence, phosphorylation levels of TrkA are increased, affecting TRPV1 in two different ways. First, in the short term (from minutes to a few hours), TRPV1 is rapidly and locally phosphorylated on serine/threonine and tyrosine residues. Phosphorylation on serine/threonine residues decreases the TRPV1 activation threshold [22][23][24], while phosphorylation on tyrosines alters TRPV1 subcellular localization from vesicles to the plasma membrane [20]. As a result of both phosphorylation events, sensory neurons show higher heat pain sensitivity in the short term. Second, in the long term (from hours to days), once the NGF-TrkA complex has been retrogradely transported to the cell bodies, nociceptive neurons mobilize TRPV1 anterogradely, increasing its amount in nerve endings [19]. Furthermore, there is an increase in TRPV1 translation, but not transcription, in nociceptive neurons [19]. Both mechanisms result in increased heat pain sensitivity and hyperalgesia in the long term. Thus, NGF secreted from damaged tissue or immune cells contributes to the burning and painful sensation at the site of inflammation through these mechanisms (for review, see [25,26]). We have recently shown that secreted glycoprotein G from HSV-2 (SgG2) binds NGF and alters NGF-dependent TrkA activation. SgG2 increases NGF-mediated axonal growth, blocking retrograde transport of TrkA and resulting in an accumulation of high levels of phosphorylated TrkA at the nerve endings. This could attract TrkA+ nerve endings to the site of infection [27]. However, since NGF is not only a neurotrophic factor but also an inflammatory mediator, we hypothesized that SgG2 could play a role in the pain and burning sensation produced by HSV-2.
Our present results show that injection of SgG2 in the mouse hindpaw increased thermal pain sensitivity at 3-h postinjection (hpi) but not at 16 hpi. At the molecular level, the effect induced by SgG2 at 3 hpi could be explained by increased NGF-dependent TRPV1 phosphorylation on serine residues. We also found reduced amounts of TRPV1 at 16 hpi, which may explain the lack of SgG2-increased thermal sensitivity at this time point. These results suggest that the SgG2-NGF interaction alters thermal pain sensitivity, affecting the phosphorylation and spatio-temporal levels of TrkA and TRPV1 in a complex scenario.

Injection of SgG2 results in transient enhancement of thermal pain sensitivity

To test whether SgG2 could be responsible, at least partially, for the painful and burning sensation produced during clinical shedding of genital herpes, we injected SgG2 into the mouse hindpaw and performed a Hargreaves test (also known as the plantar test) (Fig. 1). Injection of HEPES or of a secreted version of glycoprotein G (SgG1) from HSV-1 did not result in any differential thermal sensitivity at 3 hpi (Fig. 1a). However, injection of SgG2 induced a statistically significant reduction in the latency time to withdraw the irradiated hindpaw at this time point compared to injection of HEPES (Mann-Whitney test, p = 0.0043; unpaired t test with Welch's correction, p = 0.0029), indicating that SgG2 increases thermal pain sensitivity (Fig. 1a). We repeated the test at 16 hpi in the same animals. At this time postinjection, all injected animals showed shortened latency responses, probably due to a mild inflammatory process. Surprisingly, there were no differences between the injection of HEPES, SgG1, or SgG2 at this time point (Fig. 1b). More surprisingly, SgG2-injected mice at 16 hpi showed a longer latency period than SgG2-injected mice at 3 hpi (Mann-Whitney test, p = 0.0022; unpaired t test with Welch's correction, p = 0.0074). These results suggest that injected SgG2 increases heat sensitivity in mice only shortly after injection.

SgG2 increases NGF-mediated TRPV1 phosphorylation on serine residues

We have previously shown that SgG2 interacts with NGF and alters membrane localization, internalization, retrograde transport, and downstream signaling of TrkA. SgG2 could thus have an impact on sensory neurons expressing TrkA, including the regulation of the heat pain sensitivity threshold. The best characterized heat pain receptor downstream of the NGF-TrkA axis is TRPV1 [19,20]. Stimulation of sensory neurons with NGF induces TRPV1 phosphorylation. To test whether SgG2 affects NGF-dependent TRPV1 phosphorylation, we used postnatal sensory neurons, which express TrkA and TRPV1 in a higher percentage than adult sensory neurons. We starved dissociated mouse DRG neurons of NGF and exposed them to HEPES, NGF plus HEPES, or NGF plus SgG2. As a control, we analyzed the phosphorylation of TrkA and of a downstream protein, p38. As previously described, NGF induced an increase in TrkA and p38 phosphorylation [19] (Fig. 2a). In agreement with our previous results [27], addition of NGF plus SgG2 resulted in higher phosphorylation of TrkA and p38 (Fig. 2a). To analyze the TRPV1 phosphorylation status in this setting, we immunoprecipitated TRPV1 and detected phosphorylated serine and tyrosine residues by western blotting. SgG2 did not modify tyrosine phosphorylation of TRPV1 (not shown).
However, the addition of SgG2 induced a statistically significant increase in serine phosphorylation of TRPV1 (Mann-Whitney test, p = 0.0286; unpaired t test with Welch's correction, p = n.s.; Fig. 2b, c). These results could explain the increased heat sensitivity promoted by SgG2 at 3 hpi, as increased serine phosphorylation of TRPV1 has been associated with a reduced threshold to heat-related pain [22][23][24].

Mobilization of TRPV1 to the dermis is reduced at 16 hpi of SgG2

NGF plays a relevant role in TRPV1 phosphorylation and mobilization from the DRG soma to nerve endings [13,19]. Moreover, NGF increases the total amount of TRPV1 [19]. We observed an increase in heat pain sensitivity only at 3-h but not at 16-h post-SgG2 injection, when NGF-dependent TRPV1 mobilization is predicted to start being significant and to contribute to inflammation-associated increases in heat sensitivity. The effect observed at 3-h post-SgG2 injection correlates with higher serine phosphorylation levels of TRPV1. The lack of effect on heat sensitivity at 16-h post-SgG2 injection prompted us to investigate the localization of TRPV1 in the injected tissue. As we performed intradermal injections, we focused our attention on the dermis, where the injected proteins should be present. The presence of TRPV1 in the dermis and epidermis has been described to be low under non-inflammatory conditions [19]. We also found that non-injected animals had very low levels of TRPV1 in the dermis (Fig. 3a). To detect nerves in the dermis, we used CGRP, a neuronal marker that has been associated with the expression of TRPV1 [13,14]. We did not observe changes in the amount of TRPV1 in the dermis of the injected area at 3 hpi in any of the experimental conditions (Fig. 3b). However, we observed a statistically significant increase in the presence of TRPV1 in the dermis of HEPES-injected mice at 16 hpi (Fig. 3c). This correlates with a tendency towards reduced latency times to withdraw the irradiated hindpaw in HEPES-injected mice between 3 and 16 hpi (Fig. 1). (Fig. 1 legend: viral proteins were injected in HEPES buffer, which was also used as the injection control; error bars represent the mean plus standard deviation; ***p < 0.001; n.s., non-significant; s, seconds.) This could be due to a mild inflammatory response following injection of fluid into the hindpaw. Surprisingly, the amount of TRPV1 in the dermis of mice injected with SgG2 at 3 or 16 hpi was similar (Fig. 3c). The reduced amount of TRPV1 in the dermis of SgG2-injected mice at 16 hpi compared to the HEPES control (Mann-Whitney test, p < 0.0001; unpaired t test with Welch's correction, p < 0.0001) could explain the absence of increased heat pain sensitivity despite the increased levels of NGF-dependent TRPV1 serine phosphorylation. However, this result does not explain the lower mobilization of TRPV1 from cell bodies to nerve endings in SgG2-injected mice.

SgG2 alters TrkA spatial distribution after injection

Sensory neurons have very long projections. Signals activated in a distal organ, like the skin, must reach the neuronal cell body for their processing. When NGF activates TrkA in distal tissues, TrkA must be endocytosed and retrogradely transported to the neuronal cell body for a complete response to NGF to occur [28]. Our previous results show that SgG2 impairs internalization and retrograde transport of TrkA in response to NGF [27]. Impairment of TrkA retrograde transport by SgG2 could explain the reduced mobilization of TRPV1 from the cell bodies of DRG neurons to the nerve endings at 16-h post-SgG2 injection.
To test if TrkA spatial distribution was altered, we analyzed the levels of TrkA in the dermis after injection (Fig. 4). As a control for nerves in the dermis, we used CGRP, a neuronal marker that has been associated with the expression of TrkA [11]. The levels of TrkA in the hindpaw dermis of non-injected animals were high (Fig. 4a). The levels of TrkA in the dermis of HEPES-injected mice were strongly reduced at 3 hpi, probably due to NGF secretion by epidermal and immune cells following injection (Fig. 4b). However, injection of SgG2 resulted in a smaller reduction in the amount of TrkA in the dermis when compared to the HEPES control at 3 hpi (Fig. 4b). The difference in TrkA levels between HEPES and SgG2 was statistically significant at this time point (Mann-Whitney test, p < 0.0001; unpaired t test with Welch's correction, p < 0.0001). We also measured the amount of TrkA at 16 hpi. At this time point, we observed that the level of TrkA started to be restored in the HEPES-injected dermis (Fig. 4c) but was still significantly lower than that in the dermis of animals injected with SgG2 (Mann-Whitney test, p = 0.0022; unpaired t test with Welch's correction, p = 0.0015) (Fig. 4c). (Fig. 2 legend: SgG2 increases NGF-dependent TRPV1 serine phosphorylation. DRG neurons were grown for 3 days in NGF medium, NGF-starved for 16 h, and stimulated with HEPES, NGF in HEPES, or NGF plus SgG2 for 30 min. Western blots show phosphorylation of TrkA and p38 (a) and TRPV1 phosphorylation on serine residues (b), detected following TRPV1 immunoprecipitation; (c) quantification of TRPV1 serine phosphorylation, averaged over three independent experiments; error bars represent the mean plus standard deviation; *p < 0.05.) These results suggest that TrkA spatial distribution is altered by SgG2 in vivo, with TrkA remaining at the site of injection, which could explain why sensory neurons did not mobilize TRPV1 to the site of the SgG2 injection 16 h later.

Discussion

HSV-1 and HSV-2 are two human pathogens with prevalence values of around 65% for HSV-1 [29] and 11.3% for HSV-2 [30]. Following lytic infection of epithelial cells in the skin or the mucosa, they establish latency in peripheral ganglia. HSV-1 is more commonly acquired during childhood and is associated with establishment of latency in the trigeminal ganglia and oro-labial disease. HSV-2 is acquired later in life, normally through sexual contact, and is linked to establishment of latency in sacral ganglia and genital herpes. Genital herpes is a painful disease that can be caused by both HSV-1 and HSV-2. The symptoms (pain, itch, burning sensation) reported by HSV-1- and HSV-2-infected patients during the first episode of genital herpes are similar [5,6]. However, the periodicity and severity of genital herpes episodes increase when HSV-2 is the causative agent [1,5]. The viral and cellular elements and the molecular mechanisms leading to the burning sensation in HSV-2-induced genital herpes are not known. We show here that HSV-2 SgG induces heat-related pain, an effect that may contribute to HSV-2 pathogenicity. NGF is a neurotrophic factor involved in the development and maintenance of nociceptors [10] and an important mediator of inflammatory pain [17]. NGF is expressed in the mucosa and the skin, common sites of HSV replication during primary and recurrent infection [31]. We have recently described that SgG2 specifically binds NGF, altering its receptor and downstream signaling pathways [27].
This results in increased neurite outgrowth and impairment of TrkA retrograde transport. On the contrary, SgG1 binds NGF but does not alter NGF activity [27]. TrkA, together with CGRP, is a common marker of peptidergic neurons present in the DRG. Since TrkA peptidergic neurons are enriched in the genitalia [32][33][34], we hypothesized that the modification of NGF/TrkA axis could have implications in the physiological properties of these nociceptors following HSV-2 infection. In particular, we hypothesized that SgG2 may be involved in HSV-2-induced pain during episodes of genital herpes. HSV-2 infection, or transfection of SgG2, in the mouse footpad, results in a higher percentage of peptidergic FNE entering the stratum granulosum [27]. On the contrary, infection with HSV-1 or transfection of SgG1 does not affect peptidergic FNE growth [27]. In this report, we show that footpad injection of recombinant SgG2, but not SgG1, caused an increase in heat pain sensitivity at 3 hpi. This result correlates with increased phosphorylation of TRPV1 in serine residues after stimulation with recombinant SgG2 plus NGF. It also fits with previous data showing that TRPV1 serine phosphorylation is associated with reduced threshold activation and that some serine/threonine residues within the N and C termini of TRPV1 are implicated in receptor sensitization and activation [22-24, 35, 36]. Due to the long-term involvement of NGF in inflammation [17,19] and the reports of chronic neuralgias induced by HSV-2 infection [37], we expected a prolonged effect of SgG2 inducing heat-related pain. However, SgG2 did not increase heat sensitivity compared to HEPES or other viral proteins at 16 hpi. At this time point, SgG2 injection induced less mobilization of TRPV1 to the site of injection than HEPES. This may explain the absence of differences in heat-induced pain at 16 hpi even with increased levels of TRPV1 serine phosphorylation. Reduced long-term mobilization of TRPV1 after SgG2 injection may appear contradictory. However, this result fits with our previous described data [27]. In order to accomplish all its biological functions during inflammation, NGF must be retrogradely transported from the inflamed distal tissue to the cell bodies of nociceptors [38]. Our previous results showed that SgG2 impairs NGF-induced TrkA retrograde transport in primary culture of neurons grown in microfluidic devices [27]. Similarly, we report here that injection of recombinant SgG2 alters TrkA spatial distribution of the CGRP + neurons, maintaining high levels of TrkA in axons crossing the dermis, which would fit with a reduced TrkA retrograde transport. We hypothesize that this differential TrkA spatial distribution, with TrkA retained in the distal axons upon SgG2 intradermal injection, may explain our observations: in the short term, it may contribute to enhanced local TRPV1 phosphorylation, favoring an increase in heat pain sensitivity and, in the long term, it may explain the reduced mobilization of TRPV1 to the SgG2 injection site, diluting the short-term effect. HSV-2 infection of genitalia can course from asymptomatic to extremely painful [1]. This suggests that HSV-2 interaction with the host is complex, and many different variables contribute to the final outcome. Then, understanding of SgG2 involvement in HSV-2-induced pain will require further studies in a more complete framework. Also, SgG2 interacts with chemokines and modulates chemokine receptor activity [39,40]. 
Since chemokines also participate in nociceptive processes and inflammation [41], SgG2 could transiently contribute to pain induction by modifying chemokine activity. In conclusion, our results suggest that SgG2 alters thermal nociception by altering TrkA and TRPV1, and may contribute, at least partially, to HSV-2 induced pain. Ethics statement All animal experiments were performed in compliance with national and international regulations and were approved by the Ethical Review Board of the Centro de Biología Molecular Severo Ochoa under the project number SAF2009-07857 and SAF2012-38957. Expression and purification of viral proteins Viral proteins were expressed and purified by affinity chromatography from the supernatant of Hi-5 insect cells as previously described [40]. In vivo injection of viral proteins in mouse hindpaw All mice used were CD-1 males with 5 to 8 weeks of age from Charles Rivers (Wilmington, MA). Mice were anesthetized with a mixture of ketamine/xylazine (100 and 10 mg/kg body weight, respectively) prior to injection. We injected the viral proteins intradermally, in a region located between the proximal pads and heel of the ventral hindpaw. Always, the left hindpaw was injected; 5 μL of HEPES or indicated viral proteins at 6.8 μM in HEPES buffer were injected. Hargreaves plantar test The Hargreaves test was performed using a standard apparatus from Ugo Basile (Monvalle, Italy). Mice were placed in a transparent acrylic box. A mobile infrared heat lamp was positioned to irradiate the left hindpaw. Intensity of the infrared heat lamp was set using non-injected mice. The latency time of the withdrawal response of each hindpaw was determined at 3-and 16-h postinjection. Measurements for each time point and mouse were taken several times and considered as technical replicates. Nerve staining and non-permeabilized inmunofluorescence Mice were euthanatized and hindpaw skin was immediately removed by using a 3-mm biopsy punch and fixed in Zamboni's fixative for 6 h. The biopsies were then washed, embedded in agarose sucrose, and sectioned using a vibratome; 50-μm, free-floating sections were washed in phosphate-buffered saline (PBS) with 0.5 % Triton X-100 (PBS + TX), blocked for 30 min in 10 % horse serum PBS + TX. Anti-CGRP (whole protein) antibody was from Sigma (St. Louis, MO), anti-extracellular TrkA AF1056 was purchased from R&D Systems (Minneapolis, MN), and anti-N-terminal TRPV1 (named as VR1, P-19) was from Santa Cruz (Santa Cruz, Ca). To-Pro-3 and secondary antibodies used were from Life Technologies (Life Technologies, Thermo Fisher Scientific, Carlsbad, CA). Confocal analysis was performed with a LSM 510 Confocal Laser Scanning Microscope from Carl Zeiss. Images for an experiment were taken with the same settings to allow proper comparison. Analysis and treatment of images was performed using LSM Image Browser, Fiji and Adobe Photoshop; firstly, a region of interest (ROI) in the CGRP image was defined. The area of the staining within this ROI was measured using Fiji after a threshold correction. The ROI was maintained for measurements in the other channels, and thresholds applied were the same for all the analyzed channels. Treatment of DRG neurons Dissociated neurons were grown during 3 days in vitro (DIV) and starved of NGF during 16 h when indicated. NGF and SgG2 were mixed in DMEM-F12 prior stimulation. To calculate NGF molarity, we considered NGF as a dimer (26 kDa). 
The concentrations used were 0.5 nM NGF with 100 nM SgG2 for signaling experiments, and the stimulation period was 30 min. Statistical analysis Significance (p value) was calculated using GraphPad Prism. First, we tested whether the data followed a Gaussian distribution using the D'Agostino and Pearson omnibus normality test, the Shapiro-Wilk normality test, and the Kolmogorov-Smirnov normality test. Since the data did not follow a Gaussian distribution, we employed two different statistical analyses: the Mann-Whitney test and the unpaired t test with Welch's correction.
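As a rough illustration of this workflow (not the authors' actual analysis script, and using placeholder arrays rather than the study's measurements), the sequence of normality checks followed by the two group comparisons could be reproduced with SciPy roughly as follows:

# Illustrative sketch only; the sample arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

control = np.array([1.0, 1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.1])   # e.g., HEPES-treated, normalized signal
treated = np.array([1.8, 2.1, 1.6, 2.0, 2.3, 1.7, 1.9, 2.2])   # e.g., NGF plus SgG2, normalized signal

# Step 1: test each group for normality with the three tests named in the text.
for label, sample in (("control", control), ("treated", treated)):
    _, p_dagostino = stats.normaltest(sample)          # D'Agostino & Pearson omnibus test
    _, p_shapiro = stats.shapiro(sample)                # Shapiro-Wilk test
    z = (sample - sample.mean()) / sample.std(ddof=1)
    _, p_ks = stats.kstest(z, "norm")                   # Kolmogorov-Smirnov vs N(0,1)
    print(label, round(p_dagostino, 3), round(p_shapiro, 3), round(p_ks, 3))

# Step 2: if normality is rejected, report both comparisons used in the paper.
_, p_mw = stats.mannwhitneyu(control, treated, alternative="two-sided")   # Mann-Whitney test
_, p_welch = stats.ttest_ind(control, treated, equal_var=False)           # unpaired t test with Welch's correction
print("Mann-Whitney p =", p_mw, "; Welch's t test p =", p_welch)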
Multi-institutional Comparison of Intensity Modulated Radiation Therapy (IMRT) Planning Strategies and Planning Results for Nasopharyngeal Cancer The intensity-modulated radiation therapy (IMRT) planning strategies for nasopharyngeal cancer among Korean radiation oncology facilities were investigated. Five institutions with IMRT planning capacity using the same planning system were invited to participate in this study. The institutions were requested to produce the best plan possible for 2 cases that would deliver 70 Gy to the planning target volume of gross tumor (PTV1), 59.4 Gy to the PTV2, and 51.5 Gy to the PTV3 in which elective irradiation was required. The advised fractionation number was 33. The planning parameters, resultant dose distributions, and biological indices were compared. We found 2-3-fold variations in the volume of treatment targets. A similar degree of variation was found in the delineation of normal tissue. The physician-related factors in IMRT planning had more influence on the plan quality. The inhomogeneity index of PTV dose ranged from 4 to 49% in Case 1, and from 5 to 46% in Case 2. Variation in tumor control probabilities for the primary lesion and involved LNs was less marked. Normal tissue complication probabilities for parotid glands and skin showed marked variation. Results from this study suggest that greater efforts in providing training and continuing education in terms of IMRT planning parameters usually set by the physician are necessary for the successful implementation of IMRT. INTRODUCTION Intensity modulated radiation therapy (IMRT) is a new approach to the planning and delivery of radiation (1,2). Unlike 2-dimensional radiotherapy (2D-RT) and 3-dimensional radiotherapy (3D-CRT), IMRT allows more conformal dose coverage of the clinical target volume (CTV) in three dimensions, thereby sparing the surrounding normal tissues. Highly conformal treatment plans may reduce the risk of radiation toxicities and provide a means of potentially escalating the dose to target in selected patients, thus improving tumor control. Head and neck cancers, especially nasopharyngeal cancer, are good candidates for IMRT because of their horseshoe-shaped CTVs and many critical normal organs that surround the CTV (3). Improving the conformity of the radiation dose to targets in the head and neck using IMRT promises reduced toxicity and, in some cases, improved loco-regional tumor control (3)(4)(5)(6)(7). Recently published phase III studies confirmed that IMRT provides superior treatment results with lower morbidity than conventional 2D- or 3D-CRT (8,9). IMRT may actually be disadvantageous in some situations, however, because it is relatively difficult to plan and administer. The planning process for IMRT is greatly influenced by physician-dependent factors such as segmentation of the target volume and non-tumor tissues on the planning CT, and specifying the dose to the target and the surrounding normal organs. Although many hospitals are now equipped with IMRT capabilities, the excellence of next-level practice achieved by the use of IMRT could differ among hospitals depending on the clinical situation and the treatment team. Preparedness in performing IMRT is a crucial factor in its implementation. In Korea, IMRT practices were established in several centers in 2001, and it is now being rapidly adopted. According to a 2006 survey conducted by the Korean Society for Therapeutic Radiology and Oncology (KOSTRO), 22 of 61 (35%) radiotherapy facilities nationwide possessed the hardware capacity necessary to implement IMRT practice (10). Although several Korean facilities have undertaken planning studies and explored the feasibility of IMRT in their clinic, it appears that a lack of preparedness exists in terms of fully implementing IMRT as a clinical routine (3,4,(11)(12)(13). The purpose of the present study was to investigate IMRT planning strategies for nasopharyngeal cancer among Korean radiation oncology facilities. We compared the planning parameters, resultant dose distributions, and values of biological indices, including normal tissue complication probabilities (NTCP) and tumor control probabilities (TCP), with a treatment plan generated in five different institutions using the same radiation treatment planning system (RTPS) for the same clinical cases. We also discussed the measures needed to improve the degree of clinical excellence in implementing IMRT as an advanced treatment technology. Study schemes In May 2006, five institutions, all with the same RTPS with IMRT planning capacity, were invited to participate in this study. As planning images, contrast-enhanced computed tomography (CT) scans were provided for Case 1. For Case 2, both contrast-enhanced CT scans and positron emission tomography (PET) images were provided. Planning images for the study cases were obtained on a Discovery ST PET-CT scanner (GE Healthcare, Milwaukee, WI, U.S.A.), using a slice thickness of 0.37 cm, and with the patients immobilized in the supine position using a Type S thermoplastic mask (Medtec, Orange City, IA, U.S.A.). Planning images were de-identified prior to use in the study. The employed IMRT planning platform was the P3-IMRT inverse planning module of the Pinnacle3 (Philips, Fitchburg, WI, U.S.A.) commercial RTPS.
Back-up CD disks were prepared with the treatment plans for the study cases; these plans had no added regions of interest (ROIs), beams, or inverse planning parameters. The beam commissioning data for a standard linear accelerator was added to these plans and used for dose calculations. We used the beam data of a Primus linear accelerator (Siemens, Munich, Germany) housed at a participating institution. Each institution received the back-up CDs. The plans for the two clinical cases were restored in the RTPS of each institution. IMRT plans were performed in each institution according to the study guidelines, up to the step of generating ideal intensity maps for the beams. The treatment plans from each institution were then compiled and restored in a planning computer as different trials of the treatment planning data; the data were then analyzed. Treatment planning guidelines The participating institutions were requested to produce the best plan possible that would deliver 70 Gy to the planning target volume of gross tumor (PTV1), 59.4 Gy to the region of high-risk regional lymph nodes (PTV2), and 51.5 Gy to the lymph node area (PTV3) in which elective irradiation was required. The advised fractionation number was 33. Delineation of the target volume and normal organs, the number and orientations of the beams, and the prescribed number and type of dose constraints were left to the discretion of the planning teams in each institution. Clinical cases Case 1 was a 50-yr-old male diagnosed with cT2aN1 poorly differentiated squamous cell carcinoma of the nasopharynx. The primary tumor involved the right side of the Rosenmüller fossa and posterior wall of the nasopharynx, and extended to the posterior oropharyngeal wall. There was lymph node (LN) metastasis to the right level 2 area. Case 2 was a 24-yr-old female diagnosed with T2bN2 undifferentiated carcinoma of the nasopharynx. The primary tumor involved the left side of the nasopharynx and extended to the parapharyngeal fatty tissues. There was tumor involvement in the left-sided retropharyngeal LNs and bilateral level 2 LNs. Plan comparison IMRT planning strategies were compared. Parameters compared in the planning process included the delineation of target volume and normal organs, the number and orientations of the beams, and the prescribed number and type of dose constraints for optimization of the inverse planning. General features of the planning results were compared, including the prescribed monitor units (MU), maximum dose and its location, inhomogeneity index of the dose distribution to each individual institution's PTV, and the deviation of D50 from the planning goal. The homogeneity index was defined as (D5-D95)/Dmean. Dvolume was defined as the dose level where the cumulative dose volume histogram (DVH) intersects with the given volume of the ROI. Dmean was defined as the mean dose received by the ROI. The distributions of the isodose curves and DVHs for each individual institution's PTV were also compared. The ROIs delineated by the institution that provided the clinical case were set as the standard ROIs. To avoid potential difficulties in comparing plans that might have arisen had ROIs been delineated separately by individual institutions, both standard ROIs and individual institution's ROIs were used for the comparison of dose statistics and biological indices among institutions. TCP was calculated via the Okunieff model, using the values of the dose for 50% tumor control (TCD50), slope50, and γ50 (14).
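To make the dose metrics concrete, a minimal sketch of how D5, D95, Dmean, and the homogeneity index defined above could be extracted from a cumulative DVH is given below; the dose grid and the sigmoid-shaped DVH are invented for illustration and are not taken from the study plans.

# Hypothetical cumulative DVH for one PTV: dose levels (Gy) and fraction of ROI volume
# receiving at least that dose. Values are illustrative only.
import numpy as np

dose = np.linspace(0.0, 80.0, 801)                       # dose axis, Gy
cum_volume = 1.0 / (1.0 + np.exp((dose - 70.0) / 2.0))   # toy cumulative DVH (decreasing with dose)

def d_at_volume(dose, cum_volume, v):
    # Dvolume: dose level where the cumulative DVH intersects the given volume fraction v.
    # cum_volume decreases with dose, so interpolate on the reversed arrays.
    return float(np.interp(v, cum_volume[::-1], dose[::-1]))

d5 = d_at_volume(dose, cum_volume, 0.05)
d95 = d_at_volume(dose, cum_volume, 0.95)

# Mean dose from the differential DVH (negative gradient of the cumulative curve).
diff = -np.gradient(cum_volume, dose)
d_mean = float(np.trapz(dose * diff, dose) / np.trapz(diff, dose))

homogeneity_index = (d5 - d95) / d_mean                  # (D5 - D95) / Dmean, as defined in the text
print(round(d5, 1), round(d95, 1), round(d_mean, 1), round(homogeneity_index, 3))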
NTCP was calculated via the Lyman-Kutcher-Burman model, using the Kutcher-Burman histogram reduction scheme, with n and m as determined by Burman et al. (15,16). Contouring of the ROIs We found differences in the volumes contoured as the gross tumor volume (GTV) of the primary tumor and the involved LNs (Table 1). The GTV of the primary tumor was 13.8-28.5 cm³ in Case 1, and 35.8-69.6 cm³ in Case 2. The volumes of PTV1, PTV2, and PTV3 were also different: we found a 2-fold variation in the volume of PTV1, and a 3-fold variation in the volume of PTV2. One institution did not delineate PTV3 in Case 1, and three institutions did not delineate PTV3 in Case 2. This marked variation was due to differences in the lymph node levels treated as high-risk areas and those treated as areas for elective nodal irradiation (Table 2). The major difference among institutions consisted of whether they decided to treat the level 1 and contralateral level 5 lymph node areas. The variations encountered in contouring of the target volumes are presented on the same axial CT slice (Fig. 1). The contoured GTV for Case 2 was more consistent than that of Case 1. Although the degree of volumetric differences was the same in both cases, delineation of the GTV assisted by PET images appeared to cover the gross tumor more consistently, as in Case 2. For delineation of PTV2 and PTV3, almost all of the institutions used geometrical extension from the margin of anatomical structures such as regional LN areas; however, in constructing PTV2, Institution 2 used direct geometrical extension from PTV1 without consideration for the anatomical margin of the regional LN area. A similar degree of variation was identified in the delineation of normal tissue. All institutions delineated the parotid glands, eyeballs, lens, optic nerve, and spinal cord. Not all institutions delineated the brainstem, pituitary gland, optic chiasm, oral cavity, trachea, esophagus, inner ear, vocal cord, brain, and temporal lobe of the brain. Some institutions excluded the deep lobe of the parotid gland from the structures to be spared, especially on the involved side (Fig. 2). Only two institutions used the planning at risk volume (PRV), which is expanded with margins ranging from 2 to 3 mm from contoured normal tissues such as the lens, brainstem, and spinal cord. All institutions used a so-called 'pseudo-target' structure around the PTV to enhance dose conformity or normal tissue avoidance while performing optimization. Differing numbers of variously shaped pseudo-targets were used: global, local, or both (Table 3, Fig. 3). Planning parameters Wide variation also existed in the setting of planning parameters (Table 4). Only one institution used the split-field IMRT technique, in which the low anterior neck is treated with an anterior field and matched with the IMRT portion. All of the others used extended whole-field IMRT, in which all of the target volumes are included within the IMRT field. The number of employed beams ranged from 7 to 11. All institutions used coplanar beams with equally spaced angles. The P3-IMRT inverse planning module of Pinnacle3 provides two different methods for prescribing constraints for optimization.
A hard constraint is the maximum or minimum dose for an ROI that must be met absolutely while optimizing beam intensity, whereas a DVH constraint is the dose prescribed to a certain percentage of ROI volume that can be bargained using a weighting factor for each constraint. Only one institution applied hard constraints. The number of prescribed constraints ranged from 7 to 29, with a variety of weighting factors applied for each constraint. Dose statistics and biologic indices The general features of the planning results are presented in Table 5. The number of MUs planned to be delivered showed wide variability: 466-1,134 in Case 1 and 490-2,269 in Case 2. MU was normalized for 90% of PTV1 to be covered by the prescribed dose. Because the individual institution's ROIs and the employed planning parameters were different for each institution, there was a striking range in the distribution of isodose curves and the DVH (Fig. 4, 5). (Figure color key: red, institution 1; green, institution 2; blue, institution 3; yellow, institution 4; magenta, institution 5.) The resulting plans by Institutions 1 and 2 showed excess dose deposit in the skin, oral cavity, and the soft tissues in the neck. The inferior border of the retropharyngeal lymph node was insufficiently covered in Institution 5. The DVH for PTV2 showed that some institutions failed to obtain a sharp dose fall-off at the border between PTV1 and PTV2, and that the high-dose area in PTV2 was even hotter than that in PTV1 in one institution. For the given clinical cases, the dose statistics showing IMRT dose delivery patterns showed marked variations among institutions for both standard ROIs and individual institution's ROIs (Table 6). The values of D95 for PTV1 were less than the prescribed dose in all plans. In Institution 1, D5 for PTV1 was more than 110% of the prescribed dose. The sparing of the parotid glands was tightest in Institution 5, taking into consideration that Institution 5 delineated the deep lobe on both sides. The other institutions tried to spare at least one parotid gland, located on the contralateral side of the primary tumor. D5 values for the spinal cord exceeded 45 Gy in one institution for the standard ROI and in two institutions for the individual institution's ROI. The value of D5 for the brainstem exceeded 54 Gy in one institution. In contrast to the profound variations recorded in dose statistics for PTV, variation in TCP for the primary lesion and involved lymph nodes was not so pronounced for both standard ROIs and individual institution's ROIs (Table 7). The NTCP for parotid glands and skin was largely unsatisfactory, showing marked variation among institutions. Considering the trend to sacrifice the parotid gland on the involved side, the NTCP for the contralateral side still showed unsatisfactory results in some institutions. Excess dose delivery to the skin, as shown in Fig. 4, explains the high NTCP of the skin in Institutions 1 and 2.
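A hedged sketch of the Lyman-Kutcher-Burman NTCP calculation referenced in the methods is shown here; the equivalent-uniform-dose reduction follows the Kutcher-Burman scheme, while the TD50, m, and n values and the toy DVH are placeholders rather than the parameter set actually used by the authors.

# Illustrative LKB NTCP sketch; organ parameters and the DVH are hypothetical.
import numpy as np
from math import erf, sqrt

def lkb_ntcp(dose_bins, vol_fracs, td50, m, n):
    # dose_bins: representative doses (Gy) of a differential DVH;
    # vol_fracs: fractional organ volume in each bin (sums to 1).
    # Kutcher-Burman histogram reduction to a generalized equivalent uniform dose.
    eud = (np.sum(vol_fracs * dose_bins ** (1.0 / n))) ** n
    # Lyman probit model.
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Toy parotid-like example: half the gland near 30 Gy, half near 10 Gy (placeholder parameters).
dose_bins = np.array([10.0, 30.0])
vol_fracs = np.array([0.5, 0.5])
print(lkb_ntcp(dose_bins, vol_fracs, td50=28.4, m=0.18, n=0.7))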
DISCUSSION In Korea, IMRT is in the early stage of implementation, with routine use limited to a small number of hospitals; however, more hospitals are being equipped with modern stateof-the-art IMRT technology (10). Some institutions in Korea have performed the planning studies and explored the feasibility of IMRT in the clinical settings (3,4,(11)(12)(13). The Korean Radiation Oncology Group (KROG) is currently performing a study regarding the optimal radiation prescription using IMRT in applying simultaneous integrated boost for nasopharyngeal cancer (KROG-0501). Early publications on IMRT for head and neck cancer suggested significant heterogeneity in global head and neck IMRT practice patterns (17). For the same prescribed target dose and dose constraints for organ at risk, IMRT strategies in the study showed striking difference in various aspects, such as beam setup, total number of segments, PTV dose coverage and dose statistics for organs at risks. This European Society for Therapeutic Radiology and Oncology (ES-TRO) planning exercise demonstrated that the planning of IMRT needs close cooperation between the various disciplines involved in the preparation and execution of a treatment. It is believed that much of the heterogeneity in IMRT arises from physician-based factors. Clearly, radiation oncologists should receive sound training in those factors determined by the physician in performing IMRT, including target delineation, setting of parameters for objective functions, interpretation of the resultant plan, and understanding of uncertainties in delivery. It appears that there exists a lack of preparedness for fully implementing IMRT as a clinical routine in Korea. This study was therefore undertaken to assess the pattern of planning strategies for head and neck IMRT planning, with particular emphasis on physician-based factors such as identifying the target volume and non-tumor tissues on the planning CT, and specifying the dose to the target volume and the surrounding normal organs. The factors such as RTPS, its optimization algorithm, beam data, and other physical QA-related issues were identically controlled, meaning that physician-related factors in IMRT planning (e.g., target segmentation and the prescription for inverse planning) have a greater influence on the resulting plan quality than QA issues in the delivery of complex IMRT plan. Although physical QA issues are also important, physi-cian-based factors in IMRT planning are critical, and must undergo preparation before state-of-the art IMRT technology can be implemented in the clinical setting. The present study identified marked variations in IMRT design and global patterns of planning strategies for IMRT of nasopharyngeal cancer in Korea. Substantial variations existed not only in target definition and dose prescription, but also with regard to the management of neck node areas; consequently, the resultant IMRT plans were strikingly different among the institutions. The variation in volume segmentation translated into the variation of dose statistics for standard ROIs. However, the dose statistics for individual institution's ROIs also showed substantial variation among institutions. This indicates that the difference of planning quality originates not only from the difference of ROI delineation but from the ability to perform inverse planning. One of the shortcomings of the current study is that target volume determination guidelines and plan acceptance guidelines were not provided to each institution participated. 
These were left to each institution's discretion. IMRT could not be planned with only the prescribed doses for PTV1, PTV2, and PTV3 and without target determination guidelines, acceptance criteria, and so on. The marked variations in IMRT design and in the global pattern of planning results shown in the current study therefore reflect not only the quality of IMRT planning in each institution but also the lack of precise guidelines while performing the IMRT planning. However, the participating institutions are in substantial agreement that the wide variation of planning results is mainly due to variation in physician-based factors such as volume segmentation and setting the parameters for inverse planning. This wide variation could potentially translate into different treatment outcomes between institutions. Preparedness in performing IMRT is a crucial factor in its implementation. In the present study, the planning results of Institutions 1 and 2 were relatively difficult to commend. At the time of this planning study, Institutions 1 and 2 had not performed IMRT for clinical cases. The attainment of excellence in IMRT practice requires greater effort in terms of time, manpower, and education, among other factors. Given the growing popularity of IMRT, it is not surprising that numerous IMRT "schools", seminars, and workshops have appeared internationally. However, there are few educational opportunities available for participating radiation oncologists in Korea. The human resources available to the KOSTRO radiation oncology society remain limited (10). It is also necessary to support efforts to develop a 'class solution', in which an automated inverse-planning protocol using a single set of inverse-planning parameters can be used for most patients to generate an acceptable IMRT plan. This would eliminate the need for time-consuming user-interface optimization in most cases. The adoption of computer-based assistance for physician factors in IMRT is also desirable (18).
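To illustrate the kind of physician-set parameters discussed above (dose-volume objectives and the weights attached to them), a minimal, hypothetical sketch of a weighted quadratic objective of the sort an inverse-planning optimizer minimizes is given below; it is not the Pinnacle P3-IMRT algorithm, only a schematic of how goal doses and weighting factors enter a plan score.

# Schematic objective only; ROI doses, goals, and weights are invented for illustration.
import numpy as np

def objective(dose_by_roi, constraints):
    # dose_by_roi: dict of ROI name -> per-voxel dose array (Gy).
    # constraints: list of (roi, kind, limit_gy, weight); kind is 'min' or 'max'.
    score = 0.0
    for roi, kind, limit, weight in constraints:
        d = dose_by_roi[roi]
        if kind == "min":            # penalize voxels below the goal (target coverage)
            violation = np.clip(limit - d, 0.0, None)
        else:                        # penalize voxels above the limit (normal-tissue sparing)
            violation = np.clip(d - limit, 0.0, None)
        score += weight * np.mean(violation ** 2)
    return score

doses = {"PTV1": np.random.normal(69, 2, 1000),
         "cord": np.random.normal(30, 5, 1000)}
constraints = [("PTV1", "min", 70.0, 100.0),   # coverage goal with a high weight
               ("cord", "max", 45.0, 50.0)]    # spinal-cord maximum with a lower weight
print(objective(doses, constraints))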
Comparison of synthetic bone graft ABM/P-15 and allograft on uninstrumented posterior lumbar spine fusion in sheep Background Spinal fusion is a commonly used procedure in spinal surgery. To ensure stable fusion, bone graft materials are used. ABM/P-15 (commercial name i-Factor™ Flex) is an available synthetic bone graft material that has CE approval in Europe. This peptide has been shown to improve bone formation when used in devices with fixation or on bone defects. However, the lack of external stability and large graft size make posterolateral lumbar fusion (PLF) a most challenging grafting procedure. This prospective randomized study was designed to evaluate early spinal fusion rates using an anorganic bovine-derived hydroxyapatite matrix (ABM) combined with a synthetic 15 amino acid sequence (P-15)–ABM/P-15 bone graft, and compared with allograft in an uninstrumented PLF model in sheep. The objective of this study was to assess fusion rates when using ABM/P-15 in uninstrumented posterolateral fusion in sheep. Methods Twelve Texas/Gotland mixed breed sheep underwent open PLF at 2 levels L2/L3 and L4/L5 without fixation instruments. The levels were randomized so that sheep received an ABM graft either with or without P15 coating. Sheep were euthanized after 4.5 months and levels were harvested and evaluated with a micro-CT scanner and qualitative histology. Fusion rates were assessed by 2D sections and 3D reconstruction images and fusion was defined as intertransverse bridging. Results There was 68% fusion rate in the allograft group and an extensive migration of graft material was noticed with a fusion rate of just 37% in the ABM/P-15 group. Qualitative histology showed positive osteointegration of the material and good correlation to scanning results. Conclusions In this PLF fusion model, ABM/P15 demonstrated the ability to migrate when lacking external stability. Due to this migration, reported fusion rates were significantly lower than in the allograft group. The use of ABM/P15 as i-Factor™ Flex may be limited to devices with fixation and bone defects. Background Spinal fusion is a commonly used procedure in spinal surgery worldwide and is indicated in the surgical management of different spinal disorders such as degenerative disorders, pain, tumor, deformity, and trauma [1,2]. Over the last decade, the number of spinal fusion procedures has increased significantly, and in 2008 more than 400,000 fusions were performed annually in the USA [3]. Between 2001 and 2010, 79% to 86% of total interbody fusions were posterior/transforaminal lumbar fusions [4]; this number is estimated to have increased since 2010 [3]. Spinal fusion is a procedure where bone graft material is used to facilitate novel bone formation between two adjacent vertebral bones. The aim of fusion is to segmentally impair movement and stabilization, and the procedure may be performed with or without instrumentation [5,6]. Many different approaches have been tried, and posterior, anterior, and interbody fusion between vertebral bodies are commonly used [7,8]. In this study, a posterolateral lumbar fusion (PLF) model was used. PLF is the most commonly used fusion model and also the most challenging model in regard to novel bone formation and graft properties. This is due to lack of external support in fixating graft material and large defect size for novel bone formation. To achieve solid bone formation between vertebral bones, graft materials are used. 
Traditionally, autograft from the iliac crest has been the gold standard, as autograft possesses osteoinductive, osteoconductive, and osteogenic properties [9,10]. Because of limited availability in harvesting autograft and patient donor site morbidity such as pain and bleeding, using alternative materials garners high interest [10][11][12]. Allograft is the most often used surrogate graft material today and is considered a gold standard second only to autograft for lumbar fusion. Allograft possesses a conductive property and a partial osteoinductive property but no osteogenic property. This is because of the freezing procedure for storage after harvesting [13]. Literature reporting lumbar fusion rates when using autograft or allograft is inconsistent with a range of 40-93% [14,15]. New graft materials that resemble today's gold standard but are without the risks and limitations associated with autograft or allograft are needed, and several composite materials have been investigated. ABM/P-15 is a recently investigated composite material, which consists of anorganic bovine-derived hydroxyapatite matrix (ABM) combined with a synthetic 15 amino acid sequence (P-15). P-15 has an identical sequence as found in the cell-binding domain in collagen type-1 (α-chain) [16]. This composite material has been proven to stimulate bone formation. ABM/P-15 bears osteoconductive and osteoinductive properties [17][18][19]; its osteoconduction (ABM) occurs by providing a three-dimensional matrix for bone ingrowth and by releasing necessary minerals. Its osteoinduction (P-15) occurs by providing binding site for α2-β1 integrin on the surface of bone forming cells. The binding of α2β1-integrins to P-15 initiates natural intra-and extracellular signaling pathways and induces production of growth factors, bone morphogenic proteins, and cytokines [17,20]. The potential of ABM/P-15 on bone formation has been previously shown in preclinical and clinical studies. ABM/P-15 induces bone formation comparable to allograft in critical sized defects and implant fixation sheep models [21] and also improves bone formation in rat osteoporotic models [22]. ABM/P-15 has had comparable fusion rates as allograft in an interbody ovine fusion model [23] and in humans [24]. It has gained CE approval in Europe and is used today in humans as i-Factor™. To this point, no studies have evaluated ABM/ P-15 in a flex formula in a PLF model. The aim of this prospective randomized study was to evaluate early spinal fusion rates using ABM/P-15 bone graft compared with allograft in a two-level uninstrumented PLF model in sheep. This preclinical evaluation is essential prior to using the ABM/P-15 graft for PLF in clinic. As described, this model indicates other challenges when compared to other bone grafting models. We hypothesized that ABM/P-15 graft material had similar or improved fusion rates compared with traditional allograft in an ovine uninstrumented PLF model. Animals Twelve skeletally mature female Texas/Gotland breed sheep were purchased from local farmer. These sheep were 3-5 years old and had body weight of 56-87 kg. Sheep were chosen for this study as they provide a good model regarding bone remodeling as their bones biomechanically share similarities to human bone [25]. When compared with pigs and dogs, sheep are also both easier to acquire with mature bones and are easier to handle [25,26]. The sheep were acclimated for a period of 8 weeks before surgery. 
During the experiment, they were given standard food and hay and were allowed free access to water. Staff from the Biomedicine Laboratory, University of Southern Denmark took care of them and monitored their daily activity normally. Their body weights were recorded monthly. Allograft was obtained from a euthanized healthy donor sheep and was immediately made into chips under sterile conditions with a bone mill (Ossano Scandinavia ApS, Stockholm, Sweden). The chips were kept in an − 80°C freezer for 3 months. The size of the chips was between 1 and 3 mm, and had irregular structure, which was verified under microscopy. The synthetic bone graft used was ABM/P-15 as i-Factor™ Flex strip (Cerapedics, Westminster, CO, USA), which was a combination of freeze-dried AMB granule, 50 μm in size, coated with P-15 peptide. Study design A prospective randomized paired design was used. Twelve sheep were included according to a statistical power calculation. Sheep were randomly divided into two groups; one group had ABM/P15 located at level L2-L3 and allograft at L4-L5 (n = 6) while the other group had allograft at L2-L3 and ABM/P15 at L4-L5 (n = 6). This design was used to eliminate bias that could be caused by any difference in bone formation capacity between levels and to ensure the animals were their own control. All levels were transplanted with same graft material on both sides (Fig. 1). The observation time was set for 4.5 months and was based on our pilot study. Observation time was chosen as fusion with allograft could be expected after this period. Surgery Two days prior to surgery, the sheep were transported to operation facilities to be acclimated. On operation day, the animals were premedicated with Rompun (xylacinhydrochlorid, 20 mg/ml, Bayer animal health GmbH, Leverkusen, Germany) 0.2 mg/kg. Anesthesia was induced with Rapinovent (propofol 10 mg/ml, Schering-Plough animal health, Ballerup, Denmark) 3 mg/kg and maintained with isofloran 2%. Fentanyl 1 mg/kg was given as analgesic during the procedure. The veterinarian at the Biomedicine Laboratory gave the anesthesia and experienced orthopedic spine surgeons performed the surgeries. The sheep were placed in a prone position, and after shaving and thoroughly disinfecting the area, a posterior access incision was made from lumbar L1 to L6. Dissection was done carefully at level L2-L3 and L4-L5 after identification through palpation from thoracic vertebra 12 with attached costae. There was one level intact (L3-L4) between intervention levels to minimize local interference. Decortication of the transverse processes and opening of the facet joint were performed at L2-L3 and L4-L5. Bone chips from decortication were left at the site at all levels. Both levels were prepared before implantation. Graft transplantation Allograft chips of 5 mg were prepared and weighed in 10 ml syringes. ABM/P-15 was used as i-Factor™ Flex100 and was separated in two; furthermore, 50 mm was used on each side of the same level. After transplantation, the wound was closed in layers. Postoperatively, all sheep were treated with Temgesic (0.03 mg/ml, Schering-Plough, Ballerup, Denmark) three times daily according to body weight for at least 3 days post-surgery, and treatment lasted no longer than 1 week. Then, 9.0 ml ampicillin (250 mg/ml, Ampivet Vet, Boehringer Ingelheim, Denmark) was given once daily for 5 days. After an observation time of 3-5 days at the animal center, the sheep were moved to farm facilities for further observation until the end of the experiment. 
Sample handling Sheep were euthanized after 4.5 months with an overdose of 10-20 ml pentobarbital (200 mg/ml), and their spines were harvested. Sample blocks were carefully dissected and soft tissue removed. Macroscopic implant migration was noted. Each vertebral level was divided sagittally through the vertebral body to isolate each implant bilaterally. Samples were then placed in 4% formalin for 3 days and afterward changed into a PBS solution. All blocks were scanned with a micro-CT scanner (detail below) and divided through the middle into two blocks with a sagittal section with EXAKT Diamond Band Saw (Norderstedt, Germany) using a laser light as guide. Micro-CT scanning Micro-CT scanning was performed to validate fusion rates, and fusion was defined as bony bridge formation from two transverse processes. All blocks were scanned with micro-CT50 (Scanco Medical AG, Brüttisellen Switzerland) using energy 90 kV and intensity 155 mA to quantify their 3D microarchitectural properties of the newly formed bone tissue and to discriminate between newly formed bone and implant. The scanned images had 3D reconstruction cubic voxel sizes of 24*24*24 μm 3 (2048*2048*2048 pixels) with 32-bit-gray-levels. 3D reconstruction was performed and healing was evaluated by 3D images and 2D sections (Fig. 1). Histology Qualitative histology was performed. From scanned images, samples were divided into fusion and non-fusion groups. Randomized samples from each group were prepared for histology by dehydration in graded solutions of ethanol from 70 to 99% and then infiltrated embedded in methyl methacrylate (MMA). Each sample block was divided transversely in the middle using a template to facilitate sectioning. Histological sections were cut sagittally with a custom-made diamond blade Microtome (Medeja Instrumentmakerij, Assendelft, the Netherlands). A random cutoff secured randomization, after which one 50-μm-thick section was dissected from the top, middle, and bottom of the sample and used for qualitative histomorphometry. Sections were stained with toluidine blue 0.1% to differentiate between newly formed bone and mature bone. Statistical analysis Posterolateral lumbar fusion rates assessed by micro-CT at two levels were accessed by chi-squared test using SPSS for Windows, version 25 (SPSS Inc. Chicago, Illinois, USA). It was planned to perform one-way analysis of variance (ANOVA) to compare the properties among groups. However, due to migration of the ABM/P-15, the planned quantified histomorphometry and microarchitectural analysis were not performed, and statistical analyses were not reported. Results One sheep was euthanized 2 days after surgery as a result of immobilization. Autopsy revealed no nerve damage or other surgical complications and no other complications were noted. In total, 11 sheep completed this study and were used for analysis. Spines were harvested after 4.5 months. Macroscopic evaluation revealed migration of ABM/P-15 graft material at all levels. Granules were found either on the ventral side of the transverse processes or had migrated in caudal direction at different degrees. Migrated material was encapsulated and showed no sign of bone formation (Fig. 2). This finding was consistent for all sheep in this study. No migration was found in the allograft group. For the harvested materials, micro-CT scans were performed and 3D reconstructions were done to evaluate fusion rates. 
The allograft group had a fusion rate of 68% (Table 1), which was consistent with earlier studies on allograft fusion rates [15,19]. The ABM/P-15 group showed no complete fusion in terms of bridging of newly formed bone within the transplant (Table 1). Fusion was determined by whether, at a given level, newly formed bone created a stable bridge between the transverse processes. Histology Qualitative histology was performed in both the ABM/P-15 and allograft groups. In the ABM/P-15 group, graft material was still evident. New bone formation was found in the implant close to the transverse processes in both proximal and distal sections. Good osteointegration between newly formed bone and ABM/P-15 was found, and the newly formed bone was well integrated into pre-existing bone (Fig. 3). In the ABM/P-15 group, mostly woven bone was present, and a few areas showed initiation of lamellar organization. Signs of activity such as osteoid deposition, numerous osteocytes, resorption areas, and active surfaces were observed (Fig. 3). There was a well-defined transition zone in the implant between newly formed bone and cartilage (Fig. 3), and no sign of a foreign body reaction was found. In the allograft group, graft material was found around mature bone. New bone formation served as a bridge between transverse processes, and new bone formation occurred continuously. Good osteointegration was observed between graft and pre-existing bone, and more areas with lamellar, organized bone were observed than in the ABM/P-15 group (Fig. 3).
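As a hedged illustration of the fusion-rate comparison, the chi-squared test named in the statistics section could be run as follows; the per-side counts below are not reported in the text and are hypothetical, chosen only so that the proportions are close to the 68% and 37% fusion rates quoted in the abstract.

# Hypothetical fused/not-fused counts per graft group; for illustration only.
from scipy.stats import chi2_contingency

table = [[15, 22 - 15],   # allograft: fused, not fused (~68%)
         [8, 22 - 8]]     # ABM/P-15: fused, not fused (~37%)
chi2, p, dof, expected = chi2_contingency(table)
print("chi2 =", round(chi2, 2), "p =", round(p, 3))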
Nevertheless, 68% of bridge formation indicated that sufficient amount of allograft must have stayed at transplantation site. It was a severe mistake that the migrations were found at all the ABM/P-15 transplanted levels. In this study, the graft material was used in a clinically comparable setup and after manufacturer's guidelines. The reason for this migration might be found in the smaller size of granule when compared to allograft. When decorticating bleeding was unavoidable, the small size of the granule might have facilitated sedimentation of the granule with blood, which means that it was likely that early migration occurred within the first days after surgery. Compared with humans, sheep were mobilized faster and were not placed in supine position after surgery. These factors might explain the migration we report in this study. It is thus not directly applicable to humans, and migration might not be as significant a problem as found in this study. It is still a problematic and great concern for clinical application, since migration would cause spinal non-union or delayed fusion. It is evident that ABM/P-15 as used in this preclinical setup has the ability to achieve major migration. Migration to lesser extent has been reported earlier; in particular, Sherman et al. found migration from cages in an interbody lumbar fusion model [23]. ABM/P-15 bone substitute has been proven to be a suitable bone graft alternative when used in confined containers, devices with fixation, or on small bone defects. It has proven to be a promising bone graft substitute that gives faster and more extensive bone formation when compared to allograft in bone defects [21]. Our next study is to investigate the potential of ABM/P-15 on spinal fusion with improved stability of material. Conclusions Bone substitute ABM/P-15 has been demonstrated to have high potential of migration when used without external fixation in a clinically comparable setting with PLF; and perhaps due to shorter degeneration time, migration of allograft was not found in this study. ABM/P-15 in the i-Factor™ Flex formula revealed significantly lower fusion rates when compared to the allograft group. This finding is important as i-Factor™ Flex has been approved for human use as a bone graft in Europe and is used today in spinal surgery. In humans, migration might be less pronounced due to species differences, which can be seen in slower mobilization and the post-operational supine position of human patients compared to sheep. These findings are important for surgeons who intend to use i-Factor™ Flex in patients, and the material should be used correctly for accurate indications. It is of vital importance to further document the efficacy of i-Factor™ Flex on spine fusion with improved stability of the material.
Effects of various combinations of cryoprotectants and cooling speed on the survival and further development of mouse oocytes after vitrification Objective The objectives of this study were to analyze efficacy of immature and mature mouse oocytes after vitrification and warming by applying various combinations of cryoprotectants (CPAs) and/or super-rapid cooling using slush nitrogen (SN2). Methods Four-week old ICR female mice were superovulated for GV- and MII-stage oocytes. Experimental groups were divided into two groups. Ethylene glycol (EG) only group: pre-equilibrated with 1.5 M EG for 2.5 minutes and then equilibrated with 5.5 M EG and 1.0 M sucrose for 20 seconds. EG+dimethylsulfoxide (DMSO) group: pre-equilibrated with 1.3 M EG+1.1 M DMSO for 2.5 minutes and equilibrated with 2.7 M EG+2.1 M DMSO+0.5 M sucrose for 20 seconds. The oocytes were loaded onto grids and plunged into SN2 or liquid nitrogen (LN2). Stored oocytes were warmed by a five-step method, and then their survival, maturation, cleavage, and developmental rates were observed. Results The EG only and EG+DMSO groups showed no significant difference in survival of immature oocytes vitrified after warming. However, maturation and cleavage rates after conventional insemination were greater in the EG only group than in the EG+DMSO group. In mature oocytes, survival, cleavage, and blastocyst formation rates after warming showed no significant difference when EG only or EG+DMSO was applied. Furthermore, cleavage and blastocyst formation rates of MII oocytes vitrified using SN2 were increased in both the EG only and EG+DMSO groups. Conclusion A combination of CPAs in oocyte cryopreservation could be formulated according to the oocyte stage. In addition, SN2 may improve the efficiency of vitrification by reducing cryoinjury. Introduction Since the first successful pregnancy derived from cryopreserved hu-man oocytes was reported in 1986 [1,2], various freezing and thawing protocols have been applied to the cryostorage of oocytes. Oocyte cryopreservation boasts various advantages over embryo cryopreservation. Oocyte cryopreservation would significantly contribute in assisted reproductive technology (ART) programs. It allows patients to cryopreserve their oocytes when they have no partner or are about to lose their ovarian function due to surgery, chemotherapy, or radiotherapy or want to delay the delivery [3]. Also, it avoids ethical issues and legal restrictions. Unfortunately, despite remarkable progress, oocyte cryopreservation remains a demanding task, as there is no precise standardization of cryopreservation and warming procedures. Hence, the technique is not yet widely used in clinical practice. The slow cooling method was initially used for oocyte cryopreservation. However, only a few studies have demonstrated successful outcomes of oocytes that were frozen using a slow cooling method [1,2]. This can be explained by zona pellucida hardening from premature cortical granule exocytosis, chromosomal nondisjunction caused by serious disturbance of the microtubules, disturbance in pronuclear (PN) formation, and polar body release from microfilament damage and cytoskeleton alteration after cryopreservation [4][5][6][7][8][9][10]. Several studies have reported effective outcomes of applying various improved systems including reducing the concentration of sodium in cryoprotectants (CPAs) [11,12] or using different types and concentration of CPAs during cryopreservation [13][14][15][16][17]. 
In addition, some studies reported excellent survival, embryonic development, and pregnancy rates by applying a vitrification method which avoids the formation of ice crystals using a high concentration of CPA and ultra-rapid cooling speed [18]. Advantages of vitrification include that it is time-saving, easy to perform, and does not require expensive equipment. Most importantly, it could minimize the damage to oocytes in that it avoids the forming of ice crystals. However, its drawback is that it requires a high concentration of CPA, which may cause toxicity and osmotic damage to the oocyte. Since the development of vitrification, the toxicity of CPA has been the one of the major concerns in applying this method as a cryopreservation method. Several studies have been conducted to avoid the drawback by using CPA with high speed permeability and less toxicity or by applying a combination of permeable and non permeable CPAs to decrease the absolute concentration without a decrease in the relative concentration. Also, in order to increase the cooling rate for vitrification, many studies have been attempted for minimizing the CPA solution volume or improving the heat conductivity of cryo-equipment. Furthermore, using liquid nitrogen in a slush state (slush nitrogen, SN2) has improved the survival and embryonic development rates after vitrification of oocytes and embryos and this may be due to the increase in the heat transfer rate that is associated with SN2 [19,20]. Ethylene glycol (EG) is widely used in the vitrification method as it is one of the major permeable CPAs with a low molecular weight, and it is also less toxic to mammal oocytes or embryos including humans [21][22][23][24]. In particular, the strategy for reducing cell injury by applying a short exposure to a high concentration of CPA has been widely used. In contrast, Mukaida et al. [25] has recently reported a high survival rate of embryos that were vitrified at a blastocyst stage with a combination of EG and dimethylsulfoxide (DMSO, a slow permeable CPA). The combination of CPAs for the vitrification process may have induced a lower relative concentration and also lower toxicity of CPAs. In fact, as DMSO penetrates into the cell, it accelerates its characteristics of glass-forming and it increases the permeability rate as it combines with the other types of CPAs which complement each other. Also, there has been a report that when a combination of CPAs was used, a higher blastocyst development was obtained compared to using only 40% EG in bovine oocytes [26]. These studies have been conducted only with embryos or blastocyst stage embryos, but not many studies have worked with germinal vesicle (GV) stage or metaphase II (MII) stage oocytes. In addition, few studies have been conducted regarding the cooling rate, which is one of factors that affect oocytes or embryos during vitrification [27]. Therefore, in this study, we analyzed the survival and subsequent embryonic development rates of immature and mature mouse oocytes after a vitrification and warming process using combinations of CPAs and/or using SN2 to develop an efficient vitrification method for immature and mature oocytes. Preparation of immature and mature oocytes ICR mice (Samtako, Seoul, Korea) were maintained in a temperature-and-humidity-controlled room under a 12 hours: 12 hours light: dark cycle. 
For immature oocyte collection, four-week-old female mice were superovulated via an intraperitoneal injection of 5 IU pregnant mare serum gonadotropin (PMSG; Dae Sung Microbiological Labs, Seoul, Korea). At 44-46 hours post PMSG, mice were sacrificed by cervical dislocation for the collection of ovaries. The ovaries were then transferred to Quinn's advantage medium with HEPES (Quinn's-HEPES; Sage, In Vitro Fertilization, Trumbull, CT, USA) containing 10% substitute protein serum (SPS; Sage BioPharma, Inc., Bedminster, NJ, USA). Immature oocytes were collected by puncturing of follicles with a needle (29 G). Cumulus-enclosed immature oocytes were selected for the experiment. For the collection of mature oocytes, 4week-old ICR female mice were superovulated with 5 IU PMSG, followed by injection with 5 IU human chorionic gonadotropin after 48 hours (hCG; Intervet, Boxmeer, the Netherland). Cumulus-enclosed mature oocytes were retrieved at 13.5-14 hours post-hCG from the oviducts. Vitrification and warming of oocytes Quinn' s-HEPES with 20% (v/v) fetal bovine serum (FBS; Gibco, Grand Island, NY, USA) was used as the base medium for preparation of all vitrification and warming solutions. As a CPA, EG (Sigma-Aldrich, St. Louis, MO, USA) only or a combination of EG and DMSO (Sigma-Aldrich) were used for the vitrification procedure. For group 1, oocytes were pre-equilibrated with 1.5 M EG for 2.5 minutes, followed by equilibration with 5.5 M EG and 1.0 M sucrose for 20 seconds. For group 2, oocytes were pre-equilibrated with 1.3 M (7.5%) EG and 1.1 M (7.5%) DMSO for 2.5 minutes, followed by 2.7 M (15%) EG, 2.1 M (15%) DMSO and 0.5 M sucrose for 20 seconds. CPA-equilibrated oocytes were loaded onto an electron microscopic (EM) copper grid and plunged into liquid nitrogen (LN2) or SN2. SN2 was produced using a Vit-master (IMT, Ness Ziona, Israel). The concentration of CPA was determined according to the study of Mukaida et al. [25]. Vitrified immature and mature oocytes were stored for at least 2 weeks and then warmed to compare their survival and subsequent embryonic development. The vitrified oocytes were warmed by a five-step method. The copper grids were sequentially transferred to 1.0, 0.5, 0.25, 0.125, and 0 M sucrose with an interval of 2.5 minutes. The vitrified/warmed oocytes were then washed with fresh culture medium for three times. The immature oocytes were transferred to the culture medium and observed under a microscope for the survival rate of oocytes 1 hour after warming. The surviving immature oocytes were then induced to develop into mature oocytes by culturing them in the in vitro maturation medium. The mature oocytes from the immature stage and the survived mature oocytes obtained earlier were fertilized in vitro for further experimentation. In vitro fertilization and culture Epididymal spermatozoa were obtained from 8-to 10-week-old male ICR mice. Sperm suspension was capacitated in the incubator at 37˚C, in 5% CO2 in air for 90 minutes. Capacitated spermatozoa were mixed (1-2×10 6 /mL) with cumulus-oocyte complex in Quinn's advantage fertilization medium and incubated for 6 hours. The oocytes were then washed three times in modified simplex-optimized medium (KSOM; Millipore, Danvers, MA, USA) supplemented with 0.3% bovine serum albumin (BSA; Sigma-Aldrich). The fertilized oocytes were cultured in KSOM under 37˚C in 5% CO2 for 5 days to analyze embryonic development. 
Statistical analyses
Survival and maturation rates of oocytes, embryonic development, and blastocyst formation rates were analyzed for statistical significance with one-way ANOVA (Duncan's test). p-values <0.05 were considered statistically significant.
Effect of CPA on the survival, maturation, and embryonic development of immature oocytes after vitrification using LN2 or SN2
The survival and maturation rates of immature oocytes were compared after the vitrification/warming process using EG only or a combination of EG+DMSO as CPAs (Table 1). After vitrification/warming using LN2, there was no significant difference in the survival rates of the EG only group and the EG+DMSO group (79.8±4.7% vs. 87.9±1.1%; p>0.05), nor was there any difference between the groups when using SN2 (79.4±3.2% vs. 88.9±1.4%; p>0.05). However, the EG+DMSO group showed a higher maturation rate than the EG only group in both LN2 and SN2 (73.2±6.1%, 73.9±1.2% vs. 49.1±9.0%, 52.3±3.4%; p<0.05). The cleavage rate of immature oocytes in the EG only group was significantly lower than that of the control and EG+DMSO groups after vitrification using LN2 (26.3±7.8% vs. 62.9±12.2% and 57.7±5.0%; p<0.05). After vitrification using SN2, the cleavage rate in the EG group was lower than in the EG+DMSO group, but the difference was not statistically significant (37.4±4.8% vs. 58.8±3.7%; p>0.05). All groups showed a very low blastocyst formation rate. Numerically, a higher rate of blastocyst formation was observed in the EG+DMSO group after vitrification using SN2 than in the other groups, but the difference was not statistically significant (3.9±1.7% vs. 0±0.0%, 1.3±0.7%, 1.9±1.2%; p>0.05).
Effect of CPA on the survival, maturation, and embryonic development of mature oocytes after vitrification using LN2 or SN2
Mature oocytes were vitrified and warmed in the same way as immature oocytes, and their survival rate was evaluated (Table 2).
Discussion
Cryopreservation of immature oocytes at the GV stage was considered an alternative to avoid the drawbacks of cryopreservation of mature oocytes such as chromosomal nondisjunction induced by damage to the spindle and premature cortical granule exocytosis [28,29]. However, as cumulus cells and cytoplasm are tightly connected together, immature oocyte cryopreservation resulted in major damage and also a low maturation rate; hence, its clinical applications have been limited. Recently, the vitrification process was applied to overcome these drawbacks. In fact, in this study, a high survival rate was obtained by applying vitrification; in particular, using a combination of the CPAs EG and DMSO in vitrification showed an approximately 90% survival rate (Table 1). The fertilization and cleavage rates of oocytes were also higher in the CPA combination group. In many studies, the pre-equilibration and equilibration times were longer when DMSO was used for cryopreservation, as it has a lower permeability rate than EG [30]. Hence, we conducted a preliminary study and compared the blastocyst formation rate of oocytes and embryos using different pre-equilibration and equilibration times. As a result, there was no difference in blastocyst formation among the various times and, in fact, when oocytes or embryos were treated with DMSO for the same exposure time as EG, they showed even better results [31]. Consequently, unlike other studies, we used the same exposure time for the EG group and the EG+DMSO combination group. Slush nitrogen was used to analyze the effect of the cooling rate.
Table 2. The effect of different cryoprotectants and the cooling speed generated by using LN2 or SN2 on the survival and maturation of mature oocytes following vitrification.
The results showed that there was no particular effect on survival, maturation, or cleavage rates after warming immature oocytes (Figure 1). The fertilization rate, as judged by the presence of two PN, could not be determined because the conventional insemination method was used in this study, and the blastocyst formation rate was also very low. These results may be due to the inefficiency of the vitrification/warming process or the possibility of side effects such as parthenogenesis; this is a limitation of the study. For evaluation of the effectiveness of immature oocyte cryopreservation, it is necessary to carry out further research using a method such as intracytoplasmic sperm injection, which directly injects sperm into an oocyte. Furthermore, in this study, only the survival and embryonic development rates were observed, but not the intracellular changes that may occur during the vitrification/warming process, and so the direct effect of using a combination of CPAs could not be evaluated. Hence, further study is required. In mature oocytes, the results obtained were different from those of immature oocytes. After the vitrification/warming process, the survival, cleavage, and blastocyst rates of the EG only and EG+DMSO groups were not significantly different (Table 2, Figure 2). However, when SN2 was used for vitrification, there was no significant difference in the survival rate, but it did increase the cleavage and blastocyst formation rates. Also, the blastocyst formation rate was significantly higher when a combination of the CPAs EG and DMSO and SN2 were used. As stated previously, this result may be due to the increase in the efficiency of vitrification by using DMSO, which accelerates glass formation as it penetrates into the cell, and also due to SN2, which increases the cooling rate [19,20,25]. From this study, we observed that adjusting the existing, widely used vitrification solutions is required in order to increase the efficiency of vitrification, as the characteristics of immature and mature oocytes are different. Also, although applying SN2 in the vitrification process did not affect immature oocytes, it did affect the embryonic development of mature oocytes after the warming process. These results are due to the characteristics of mature oocytes, which include an exposed spindle that is essential for chromosomal division; however, further study is required to evaluate the mechanism in more detail. In this study, although the survival rate of immature oocytes was as high as that obtained from mature oocytes, immature oocytes still showed a low embryonic development rate after the vitrification/warming process. This result was similar to studies of human oocytes [32,33]. It is suggested that this is not only caused by problems with the oocyte cryopreservation process but also by the lack of research on the in vitro maturation process. Therefore, if studies on the process of in vitro maturation of oocytes and the development of a culture system move forward, then the efficiency of immature oocyte cryopreservation should be expected to improve. In conclusion, the combination of the CPAs EG and DMSO showed greater effectiveness in the vitrification of immature oocytes, while for that of mature oocytes, it is possible to use either EG only or EG and DMSO.
In addition, SN2 may improve efficiency by reducing cryoinjury during vitrification.
Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data Federated learning allows large numbers of edge computing devices to collaboratively learn a global model without data sharing. The analysis with partial device participation under non-IID and unbalanced data reflects more reality. In this work, we propose federated learning versions of adaptive gradient methods - Federated AGMs - which employ both the first-order and second-order momenta, to alleviate generalization performance deterioration caused by dissimilarity of data population among devices. To further improve the test performance, we compare several schemes of calibration for the adaptive learning rate, including the standard Adam calibrated by $\epsilon$, $p$-Adam, and one calibrated by an activation function. Our analysis provides the first set of theoretical results that the proposed (calibrated) Federated AGMs converge to a first-order stationary point under non-IID and unbalanced data settings for nonconvex optimization. We perform extensive experiments to compare these federated learning methods with the state-of-the-art FedAvg, FedMomentum and SCAFFOLD and to assess the different calibration schemes and the advantages of AGMs over the current federated learning methods.
Introduction
Federated learning (FL) is a privacy-preserving learning framework for large scale machine learning on edge computing devices, and solves the data-decentralized distributed optimization problem
min_x f(x) := \sum_{i=1}^{N} p_i f_i(x), with f_i(x) := E_{z \sim D_i}[f_i(x, z)],    (1)
where f_i is the loss function of the i-th client (or device) with weight p_i ∈ [0, 1), \sum_{i=1}^{N} p_i = 1, D_i is the distribution of data located locally on the i-th client, and N is the total number of clients. FL enables numerous clients to collaboratively train a model parameterized by x, while keeping their own data locally, rather than sharing them with the central server (Konečnỳ et al. (2016a,b)). Compared with existing well-studied distributed computing, there are four main key differences (Konečnỳ et al. (2016a); Kairouz et al. (2019); Sattler et al. (2019); Li et al. (2019a,b)): 1) the number of clients N is large and the communication between clients and central server can be slow; 2) partial device participation is allowed during model training, e.g., some devices may randomly drop out or come back during the training phase; 3) the distributions of training data over clients are non-independent and non-identically distributed (non-IID), i.e., local data on each client cannot be regarded as samples IID drawn from an overall distribution; 4) the data quantities and weights can be unbalanced across devices. Some studies assume p_i = 1/N, but in general p_i can differ on different clients. The clients/devices can be smartphones, personal computers, network sensors, or other accessible information resources. The data decentralization is important during model training because the data on each edge computing device can be private and sensitive, such as photos or personal conversations. FL has attracted widespread attention, and how to effectively solve Problem (1) is essential. The FedAvg algorithm is proposed in (McMahan et al. (2016)) and has become the de facto FL algorithm where clients do not communicate with the central server at each iteration but after K inner iterations. In the t-th round, a client obtains
x^{(i)}_{t,k+1} = argmin_x <g^{(i)}_{t,k}, x> + (1/(2γ_t)) ||x − x^{(i)}_{t,k}||^2  for k ∈ {0, ..., K − 1}
(i.e., a local SGD step x^{(i)}_{t,k+1} = x^{(i)}_{t,k} − γ_t g^{(i)}_{t,k}, where g^{(i)}_{t,k} is a stochastic gradient of f_i at x^{(i)}_{t,k} and γ_t is the local stepsize) and sends x^{(i)}_{t,K} to the central server, which then averages all the updates from the clients to obtain the global update x_{t+1} = (1/N) \sum_{i=1}^{N} x^{(i)}_{t,K} to broadcast to all clients in the (t + 1)-th round.
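To make the FedAvg round described above concrete, the following is a minimal NumPy sketch of one communication round. It is an illustration rather than a reference implementation: the function name, the per-client gradient callables, and the quadratic toy losses in the usage example are all assumptions introduced here.

```python
import numpy as np

def fedavg_round(x_global, client_grads, client_weights, gamma, K):
    """One FedAvg communication round (schematic).

    client_grads: list of callables g_i(x) returning a stochastic gradient of f_i
    client_weights: the p_i's, summing to 1
    gamma: local SGD stepsize, K: number of local steps before communication
    """
    local_models = []
    for g in client_grads:
        x = x_global.copy()
        for _ in range(K):            # K local SGD steps, no communication
            x = x - gamma * g(x)
        local_models.append(x)
    # the server averages the client models and broadcasts the result
    return sum(p * xi for p, xi in zip(client_weights, local_models))

# Illustrative use: two clients with quadratic losses f_i(x) = 0.5 * ||x - c_i||^2
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
grads = [lambda x, c=c: x - c for c in centers]
x = np.zeros(2)
for _ in range(50):
    x = fedavg_round(x, grads, [0.5, 0.5], gamma=0.1, K=5)
print(x)  # converges toward the average of the per-client optima
```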
FedAvg can significantly reduce the communication cost. However, it has been identified to suffer from the client drift issue under non-IID data situations (Hsu et al. (2019); Karimireddy et al. (2019); Reddi et al. (2020)). The average of client updates could drift to (1/N) \sum_{i=1}^{N} x_i^*, rather than x^*, where x^* and x_i^* are the optimal solutions, respectively, to Problem (1) and to the problem of min_x f_i(x) for the i-th client. This issue becomes exacerbated when partial device participation is present. Hsu et al. experimentally explore the performance of FL algorithms on visual classification tasks and show that FedAvg performs worse with increasing non-IIDness than FedMomentum, which can consistently improve the test accuracy (Hsu et al. (2019, 2020)) and theoretically converges to a first-order stationary point (Huo et al. (2020)). Karimireddy et al. (2019) proposed SCAFFOLD, which uses a variance reduction technique in the K local inner iterations to alleviate the effect of client drift. The FedProx algorithm is proposed in (Li et al. (2018b)), where the i-th client adds a proximal term (µ/2) ||x − x_t||^2 to the local subproblem to effectively limit the impact of variable local updates, thus keeping local updates close to the global iterate. Adaptive gradient methods (AGMs) form a family of algorithms that can utilize accumulated gradient information from past iterations, including not only the gradients (the first-order momentum) but also the squared gradients (the second-order momentum) (Duchi et al. (2011); Kingma and Ba (2014); Zeiler (2012)). Adam is one of the most famous AGMs and is heavily used in training deep neural networks. Using AGMs in FL may enable the algorithm to implicitly communicate past gradients across devices and has the potential to correct the bias in the search directions caused by partial device participation, heterogeneity of local data distributions, and multiple local updates of FedAvg. A question that is naturally raised is: can AGM-based FL achieve better performance than FedAvg or FedMomentum? Our investigation leads to an affirmative answer. Note that although (Reddi et al. (2020)) proposes FedYogi, which uses an adaptive learning rate in FL, it only explores the second-order momentum but not the accumulated gradients. We examine here the standard AGMs with both the first-order momentum as search direction and the second-order momentum as adaptive learning rate (or stepsize) and propose Federated AGMs (including FedAdam and FedAMSGrad) to solve Problem (1).
[Figure 1 caption (fragment): ... (Li et al. (2018a)). (c) shows the learning curves of the different algorithms with a multi-stage learning rate decay at different non-IID levels across 100 clients as specified by α. The concentration parameter α in the Dirichlet distribution controls the degree of data dissimilarity across devices. Smaller α values correspond to higher levels of non-IID data distributions among clients.]
Figure 1 is a summary illustration which shows that Federated AGMs are usually more efficient in early training stages (e.g., 1-1000 rounds for ResNets with CIFAR10 data), especially for non-IID data. However, even equipped with a multi-stage learning rate scheme, a direct merge of Adam and FL, which gives FedAdam, tends to get trapped in a narrow (bad) local minimum, leading to lower training accuracy.
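The Dirichlet-based non-IID split referred to in the Figure 1 caption (following Hsu et al. (2019)) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' data pipeline; the function name and the exact sampling details are assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, n_per_client, alpha, seed=0):
    """Assign sample indices to clients so that each client's class mix is
    drawn from Dir(alpha * uniform prior); smaller alpha -> more non-IID."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    pool = [list(np.where(labels == c)[0]) for c in range(n_classes)]
    clients = []
    for _ in range(n_clients):
        q = rng.dirichlet(alpha * np.ones(n_classes))   # this client's class mix
        counts = rng.multinomial(n_per_client, q)
        idx = []
        for c, k in enumerate(counts):
            take = min(k, len(pool[c]))
            idx.extend(pool[c][:take])                  # draw without replacement
            pool[c] = pool[c][take:]
        clients.append(np.array(idx))
    return clients

# e.g. 100 clients with 500 training examples each, as in the CIFAR10 setup later on:
# parts = dirichlet_partition(train_labels, n_clients=100, n_per_client=500, alpha=0.05)
```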
Under FL, the stochastic direction on the central server (the virtual direction ∆_t in Algorithm 1) aggregated from clients tends to be very small when approaching a local minimum; thus, the momenta will be small and the element-wise division of momenta could be inconsistent, due to the different speeds of the moving averages of the momenta. Moreover, the adaptive stepsize in FL may have a large span across coordinates. To alleviate these issues, the base learning rate for FedAdam is often set smaller than it should be. However, similar to (Li et al. (2018a)), with a very small learning rate, Federated AGMs may get stuck at narrow and sharp local minima, which exhibit bad generalization performance. Hence, we propose to use several calibration schemes (Chen and Gu (2018); Tong et al. (2019)) to calibrate the adaptive learning rate in Federated AGMs. Our main results are summarized as follows: 1. We design Federated AGMs (FedAdam and FedAMSGrad), and characterize their performance: rapid progress in early stages but prone to get stuck at narrow local minima with non-IID datasets. 2. We propose a calibration framework for Federated AGMs, which includes ε-calibrated, p-calibrated, and a calibration method using the activation function softplus, denoted by s-calibrated. This effort also unifies FedMomentum and Federated AGMs so as to achieve the best of both approaches for fast convergence as well as good generalization performance. 3. Theoretical analysis based on more practical reality - partial device participation over unbalanced and non-IID datasets for nonconvex objective functions - shows that the convergence rate is highly related to the calibration parameters, the gradient dissimilarity, and the interplay between the learning rate, the number of local iterations, and the number of participating clients. 4. Experimental results show that the calibrated Federated AGMs equipped with stagewise local learning rate decay can achieve the best performance in both training and test accuracy on multiple FL tasks over the state of the art. Notations. For any vectors a, b ∈ R^d, we use a ⊙ b for element-wise product, a^m for element-wise power of m, √a for element-wise square root, a/b for element-wise division, and <a, b> to denote the inner product of a and b. We use x^{(i)} to denote the parameter update on the i-th device, and ||x|| to denote the l_2-norm of x. Let N denote the total number of clients, [N] denote the integer set {1, 2, ..., N}, S(≤ N) be the maximum number of participating clients, T be the total number of rounds in which the server updates its global model, K be the number of local updates on each client, O(·) hide constants which do not rely on the problem parameters, and Θ(·) denote the same order of computation. AGMs Revisited. Distributed learning with data z collected to the central server can be formulated as the minimization problem min_x f(x) := E_z[f(x, z)], where both f(x) and f(x, z) are usually nonconvex. The SGD, its momentum variants (Ghadimi and Lan (2013); Wright and Nocedal (1999); Wilson et al. (2016); Yang et al. (2016)), and its adaptive versions (AGMs) (Duchi et al. (2011); Kingma and Ba (2014); Zeiler (2012)) can readily distribute their computation to multiple processors due to their stochasticity and simplicity. The updating rule of these methods can be generally written as x_{t+1} = x_t − m_t ⊙ (η_t/√v_t), where ⊙ calculates the element-wise product of the first-order momentum m_t and the learning rate η_t/√v_t. Here we call η the base learning rate and 1/√v_t the adaptive learning rate.
Researchers generally have an agreement on how to compute m t to accelerate the convergence, i.e., m t = β 1 m t−1 + (1 − β 1 )g t , β 1 ∈ [0, 1), but there are various formulas for the second-order momentum v t and the related adaptive learning rate in Adagrad (Duchi et al. (2011)), Adadelta (Zeiler (2012)), Rmsprop (Tieleman and Hinton (2012)), Adam (Kingma and Ba (2014)), AMSGrad ), Yogi ) and AdaBound (Luo et al. (2019)). Among these methods, Adam uses exponential moving averages of past squared gradients, i.e., v t = β 2 v t−1 + (1 − β 2 )g 2 t , β 2 ∈ [0, 1). AMSGrad takes the larger second-order momentum estimated in the past iterations by v t = max{v t−1 ,v t }, wherê v t = β 2 v t−1 + (1 − β 2 )g 2 t to theoretically ensure convergence. In this paper, we mainly focus on Adam and AMSGrad, but our analysis is readily applicable to other AGMs. Algorithm 1 Federated AGMs: Input: The SGD learning rate γ t , the AGM base learning rate η, momentum parameters 0 ≤ β 1 , β 2 < 1, the number of clients N , the number of inner iterations K. Initialize x 0 randomly, and We design the Federated AGMs (in Algorithm 1) based on the classical SGD in inner loops and the AGMs in outer loops. We provide theoretical analysis and empirical studies in more practical reality: (1) partial device participation, which means that in each communication round, only a portion of the N clients is active. Because clients are assumed to randomly leave or join the FL, we can randomly sample a set of clients (S t ⊂ [N ]). Different clients may have different weights p i in the FL, so active clients are sampled according to p i 's. When the cardinality S = |S t | is N , all clients participate. (2) Local devices are presumably unbalanced in the capability of curating data, i.e., that clients may exhibit different amounts of local data as specified by p i 's and become balanced if p i = 1 N . (3) There can be different levels of non-IID data across clients, as discussed in (Hsu et al. (2019)), which provides a simulation process. We draw a distribution q with Dirichlet distribution, i.e., q ∼ Dir(αI), where I = (1, .., 1) is a prior class distribution over a pre-specified number of classes and α is the so-called concentration parameter. When α → 0, it generates strongly different distributions among clients. When α → ∞, all clients have identical distributions to the prior. After q is drawn, data for each class will be drawn according to q. Algorithm 1 depicts the steps of Federated AGMs in a nested loop structure, the outer loops/communication rounds require message passing between the central server and active devices. Each round corresponds to parallel inner loops that run within individual clients. The client i updates x (i) t,k at the k-th inner iteration of the t-th outer round, and performs K steps of SGD without communication to other clients. The final local update x (i) t,k . The number of inner iterations K should be a small number so that each client does not move too far away independently. In experiments, we set it to a number that makes sure a full epoch is finished (i.e., the amount of available data on client i divided by the mini-batch size). The central server then aggregates the local updates by averagingx t+1 = 1 S i∈St x (i) t,K . Then the virtual direction amounts to computing the sum of all the gradients obtained in the inner loops ∆ t,k . Algorithm 1 specifies the AGM steps in outer loops. 
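The nested structure described above can be summarized in a short schematic. The sketch below is a simplified illustration consistent with the description of Algorithm 1 (sampled clients run K local SGD steps, the server forms the virtual direction ∆_t from the averaged model change and applies an Adam/AMSGrad-style update to it); the function and variable names and the default ε-style calibration are assumptions, not the authors' reference code.

```python
import numpy as np

def fed_agm_round(x, m, v, client_grads, p, S, K, gamma, eta,
                  beta1=0.9, beta2=0.99, amsgrad=False,
                  calibrate=lambda v: np.sqrt(v) + 1e-8,
                  rng=np.random.default_rng(0)):
    """One outer round of a Federated AGM (schematic sketch)."""
    # partial participation: S clients sampled with replacement with probability p_i
    chosen = rng.choice(len(client_grads), size=S, replace=True, p=p)
    local_models = []
    for i in chosen:
        xi = x.copy()
        for _ in range(K):                        # K local SGD steps on client i
            xi = xi - gamma * client_grads[i](xi)
        local_models.append(xi)
    delta = x - np.mean(local_models, axis=0)     # virtual direction Delta_t
    m = beta1 * m + (1 - beta1) * delta           # first-order momentum
    v_new = beta2 * v + (1 - beta2) * delta**2    # second-order momentum
    v = np.maximum(v, v_new) if amsgrad else v_new  # AMSGrad keeps the running max
    x = x - eta * m / calibrate(v)                # calibrated adaptive server step
    return x, m, v
```

Passing amsgrad=True switches the second-order momentum to the AMSGrad variant; swapping the calibrate argument gives the other calibration schemes discussed next.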
For inner loops, although Algorithm 1 instantiates with SGD, other optimizers are applicable, e.g., gradient descent (GD) using the full local on-device data, or the stochastic variance reduced gradient (SVRG) method. The recent SCAFFOLD (Karimireddy et al. (2019)) uses a variance reduction technique in a FedAvg framework. We can also use SCAFFOLD in the inner loops, which we have compared in our experiments. The inner stepsize γ_t is predefined and can differ from round to round. Our theoretical analysis shows that, to guarantee convergence, the total local aggregate Kγ_t in each client's update should be restricted to be constant, to warrant that the local alteration is not too far off across rounds. Thus the inner stepsize should be small if K is large, or vice versa. In an outer AGM update step, both momenta m_t and v_t are used to track more past information. The base learning rate η is predefined. The adaptive step size is determined by v_t via a specific calibration, which we will discuss in the next subsection. When the calibrate(v_t) function takes the form of √v_t + ε, where ε > 0 is small, Algorithm 1 gives us the Federated Adam (FedAdam). In Figure 1 (c), FedAdam (ε = 10^{-8} as commonly suggested) shows rapid initial progress but, in later stages, has worse accuracy than other methods. With an increasing level of non-IIDness, FedAdam retains its fast convergence rate, but it tends to be more easily trapped in narrow local minima, and its final accuracy is often surpassed by FedMomentum. Our study shows that careful calibration of the adaptive stepsize can be important for Federated AGMs to improve performance in later stages. Different Calibration Schemes. As a special intermediate product in FL, the virtual direction is determined by the number of clients, the inner stepsize γ_t, and the stochastic gradients of each client. We theoretically prove that γ_t is a small amount, and the stochastic gradients will be close to zero when the federated algorithm converges; thus, the aggregated virtual direction (∆_t in Algorithm 1) will approach zero. Hence the base learning rate η needs to be set small to avoid violent fluctuations caused by the adaptive stepsize. On the other hand, a small η makes it hard to escape a narrow local minimum. This becomes worse with increasing dissimilarity of data distributions across clients. When the training/test accuracy curve reaches a plateau in the later stages, Federated AGMs may have an η that is too small to jump out of a sharp local minimum, and sharp local minimizers (e.g., Fig. 1 (b) top-left corner) often have worse generalization performance than optimizers whose neighborhood is flat and convex. In this work, we further provide a careful examination of three calibration techniques that all regulate the span of the adaptive stepsizes under FL.
[Figure 2 caption (fragment): ... (a, d, g), test accuracy (b, e, h) and the norm of gradients (c, f, i) for ε-, p-, s-FedAdam with a multi-stage decay of γ for ResNets on the CIFAR10 dataset. After γ is decayed to 0.1γ at round 1000, the test accuracy of FedAdam (ε = 10^{-8}) first increases quickly and then gets stuck at a sharp local minimum in the few steps before round 1500. At rounds 1500 and 2000, γ is further decayed, and the test accuracy degenerates again. Monitoring the norm of the gradients in the plots in the last column helps us examine the local areas of the stationary points. Without careful calibration, FedAdam is stuck at a sharp local minimum after round 1000 (the gradient norms vary a lot in the proximity of a sharp local minimum), whereas p- and s-FedAdam converge to a flat local minimum.]
We take FedAdam as an example to discuss these calibration formulae and present its corresponding performance, but FedAMSGrad can certainly benefit from them as well (as shown in the Appendix). ε-FedAdam. Like the Adam method, FedAdam can also use the hyper-parameter ε to keep the denominator of the adaptive learning rate, √v_t + ε, from vanishing to zero. The value of ε decides the largest span or dissimilarity of the adaptive stepsizes (learning rates) over the coordinates. For instance, if ε = 10^{-8}, the smallest and largest adaptive stepsizes in a single iteration could range in (0, 10^8). If we calibrate the adaptive stepsizes directly by controlling ε, we call the method ε-FedAdam. When we choose ε ∈ {10^{-2}, 10^{-4}, 10^{-6}}, we clearly observe different performance in Figure 2 (a,b,c). Similar empirical results for ε-FedAMSGrad are included in the Appendix. p-FedAdam. Another choice of calibration replaces the square root of v_t in the calibrate(v_t) function by the p-th power of v_t. In other words, calibrate(v_t) = (v_t + ε)^p, where p ∈ {1/2, 1/4, 1/8, 1/16} and ε = 10^{-8}. It was shown in (Chen and Gu (2018)) that this calibration improves the test accuracy over the Adam that uses ε. We can similarly calibrate the adaptive stepsize of FedAdam by controlling p, which we call p-FedAdam. When p is smaller, more compression is placed on the second-order momentum v_t, so it allows the base learning rate of FedAdam to take a larger value. The above two kinds of calibration are straightforward and somewhat efficient in constraining the adaptive stepsize to a moderate range. However, they lose "adaptivity" to some degree because increasing ε or decreasing p both work on all coordinates without distinction. For instance, if v_t ranges from 10^{-16} to 10^8 over the coordinates, setting ε = 10^{-2} will constrain the adaptive stepsizes of all coordinates into (10^{-4}, 10^2) in ε-FedAdam, and changing p = 1/2 to p = 1/4 will change the adaptive stepsizes from the range (10^{-4}, 10^8) to (10^{-2}, 10^4) in p-FedAdam. s-FedAdam. A method explored in (Tong et al. (2019)) uses the property of the softplus function (actually, any suitable activation function) to construct the calibration function, which can be more effective at solving the issue of Federated AGMs. The calibrate(v_t) then becomes softplus(√v_t), where softplus(x) = (1/β) log(1 + exp(βx)) with a sharpness parameter β. This calibration brings some benefits: smoothing out extremely small v_t rather than hard thresholding, while keeping moderate stepsizes untouched with an appropriate β; and removing the parameter ε, because the softplus function can be lower-bounded by a nonzero number, softplus(·) ≥ (1/β) log 2. When β is small, FedAdam behaves similarly to FedMomentum; if β is chosen to be very large, it becomes similar to the standard Adam. Figure 2 shows the cases for β ∈ {10, 50, 100}. In summary, the above three calibration methods are all easy to implement in the Federated AGMs, and tuning the calibration parameters does improve performance. Based on our empirical observations, the s-FedAdam/AMSGrad with β = 50 almost always gave the best performance among all calibration techniques. Theoretical Analysis. In previous FL studies, theoretical analysis is performed under the assumptions that the data are drawn i.i.d. and/or that all devices are active and have equal computing and storage ability. However, in reality, devices like smartphones have limited memory and battery, and they may run out of battery and so drop out of the computation.
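For reference, the three calibration schemes discussed above can be written as small functions of v_t. This is an illustrative sketch with names chosen here; the softplus form assumes the β-parameterized softplus applied to √v_t, which is consistent with the (1/β) log 2 lower bound quoted above, and a numerically stable formulation is used.

```python
import numpy as np

def calibrate_eps(v, eps=1e-2):
    """epsilon-calibration: sqrt(v) + eps; a larger eps caps how much the
    adaptive stepsize 1/calibrate(v) can vary across coordinates."""
    return np.sqrt(v) + eps

def calibrate_p(v, p=0.25, eps=1e-8):
    """p-calibration: (v + eps)^p with p <= 1/2 compresses large v."""
    return (v + eps) ** p

def calibrate_softplus(v, beta=50.0):
    """s-calibration: softplus_beta(sqrt(v)) = (1/beta) * log(1 + exp(beta*sqrt(v))),
    which is lower-bounded by log(2)/beta, so no separate eps is needed."""
    return np.logaddexp(0.0, beta * np.sqrt(v)) / beta  # numerically stable softplus
```

With β = 50, the setting reported to work best above, the softplus behaves almost like √v_t for moderate values while smoothly flooring very small ones, which is the "soft" analogue of adding ε.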
It is thus more practical to consider the case that only a partial collection of devices participate in the FL at a time. Hence, our analysis copes with these realistic situations under the following fairly standard assumptions used in the analysis of nonconvex optimization. Assumption 1 The loss f i and the objective f satisfy: Following the same convention in FL (Li et al. (2018b); Karimireddy et al. (2019)), to measure the level of non-iid across clients, we assume the dissimilarity between the gradients of the local functions f i and the global function f is bounded as follows. Assumption 3 (Sahu et al. (2018)) Let S t be the set of the active devices in the t-th round, . Assume S t contains a subset of |S t | = S nodes randomly selected with replacement according to the sampling probability where n i is the number of samples located on client i. Assume that the devices' capabilities are unbalances, i.e., p i 's can be distinct for different i. Under Assumption 3, the exact-average step can be computed asx t+1 = 1 S i∈St x (i) t,K . In our analysis, we use g t,k as accumulated gradient direction that from all participating devices at the k-th iteration of t-round, g t,k = 1 S i∈St g (i) t,k ; and the virtual direction computed on the server in Algorithm 1 can also be written as ∆ t = K−1 k=0 γ t g t,k . Before giving the main theorem of the proposed algorithms, we first establish the following lemma for the variance of g t,k , which plays an important role in our theoretical analysis. Lemma 1 Let Assumptions 1 and 2 hold. We have the following properties: local updates This lemma shows that the expected update direction E[g t,k ] is the sample mean of each local update direction with the probabilities p i , and the second moment of the direction g t,k can be bounded by the sum of two terms: 1) the variance due to partial client participation, and 2) the variance caused by the stochasticity in the local SGD updates. The more clients participate, the smaller the first term will be. Moreover, Lemma 7 also serves as a transmission for the influence of dissimilarity across clients to the convergence performance. Theorem 2 Let all assumptions hold, and L, σ i , G i , β 1 , β 2 be defined therein. Let µ lower and µ upper be the lower bound and upper bound, respectively, for the adaptive stepsizes 1/calibrate(v t ), t ∈ [0, T −1] be the index of communication rounds, the total number of iterations be T K. Then with an appropriately chosen inner stepsize γ t < min{ 1 8LK , 1 K µ lower 10µupper }, the iterate sequence generated by the Federated AGMs with partial device participation satisfies Theorem 2 shows the convergence rate of Federated AGMs, which is highly related to three items: the total number of iterations T K, the calibration parameters , p, β used in µ upper , and the gradient dissimilarity bound σ 2 g . Particularly, when increasing the number of inner loops K, the effects of dissimilarity among clients (non-IIDness) will be enlarged, which corresponds to severer client drift (Karimireddy et al. (2019)). Our theoretical analysis from an angle shows the relationship between K and client drift for nonconvex function (it has only been shown for quadratic function (Charles and Konečnỳ (2020))). For each calibration scheme, the µ lower and µ upper can be calculated explicitly as shown in the following table and then their convergence analysis can be further developed. 
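The device-sampling scheme of Assumption 3 (S devices drawn with replacement, with probability proportional to the local sample counts n_i) is straightforward to realize. The snippet below is only an illustration of that assumption, with made-up names.

```python
import numpy as np

def sample_devices(n_local_samples, S, rng=np.random.default_rng(0)):
    """Draw the active set S_t with replacement, with p_i = n_i / sum_j n_j."""
    n = np.asarray(n_local_samples, dtype=float)
    return rng.choice(len(n), size=S, replace=True, p=n / n.sum())
```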
Corollary 3 ( -Federated AGMs) If all the conditions in Theorem 2 hold, with appro- Corollary 4 (p-Federated AGMs) If all the conditions in Theorem 2 hold, with appro- Corollary 5 (s-Federated AGMs) If all the conditions in Theorem 2 hold, with appro- The alteration of inner step Kγ t will be upper bounded by a constant related to calibration parameter. When K is large, γ t should be correspondingly small; if we limit K, γ t may be larger. When S is larger (more devices participate), the variance caused by the sampling of devices becomes smaller, and then the local stepsize γ t can be properly enlarged. As discussed in Methods section and to be consistent with later experimental observation, the calibration parameters do play important roles in the algorithm convergence to a stationary point. The convergence rate can be reduced by calibration as increasing < 1, decreasing p, and decreasing β. The above derivations are all based on the situation where p i 's are quite different. If p i 's are the same p i = 1 N , our result is also applicable to the case with balanced on-device data. Experimental Setup. We use three datasets for image classifications: MNIST, CIFAR10 and CIFAR100, they are individually tested on a CNN with 5 hidden layers, Residual Neural Network with 20 layers (ResNets 20) (He et al. (2016)) and VGGNet (Simonyan and Zisserman (2014)). During the training, we use a weight decay factor of 10 −3 and a batch size of 64. For MNIST, we use LR decay with Reduce on Plateau scheme. For the CIFAR tasks, we use a fixed multi-stage LR decaying scheme: η decays by 0.1 at the 1 2 total epochs and 3 4 total epochs. All algorithms perform grid search for hyper-parameters to choose from {10, 1, 0.1, 0.01, 0.001, 0.0001} for η, {0.9, 0.99} for β 1 and {0.99, 0.999} for β 2 . For algorithm-specific hyper-parameters, they are tuned with the following criteria: As a sanity check, the MNIST dataset is used in our experiments where data decentralization is created with the sort-and-partition procedure (SP). Each device has data for two digits. Results are present in Table 2, showing that the FedAdam and FedAMSGrad can improve test accuracy in all settings. As expected, the test accuracy is further improved by the proposed calibrated versions. CIFAR10. Using the PyTorch framework, we run the ResNets 20 model on CIFAR10 and results are shown in Figure 3, 4. Similar to (Hsu et al. (2019)), the federated CIFAR10 data is generated as: the number of clients is set to be 100 and the number of data points located on each client is set to be 500 for training data and 100 for testing data. For each client, we use Dirichlet distribution to generate non-IID data. Particularly, we observe that more computations are needed for smaller α values, which means that clients are more dissimilar to each other. It is also safe to conclude that FL algorithms, with large dissimilarity across clients, tend to perform worse; however, calibrated Federated AGMs can always improve the learning performance. 4.4 CIFAR100. The popular architecture VGGNet is also tested on CIFAR100 dataset to compare all algorithms. The federated CIFAR100 data is generated as: first split 100 labels into 10 large groups; then, we utilize Dirichlet distribution on top of 10 large groups to generate non-IID data. We set the number of participated clients S = 10, the number of local updates K = 10 for each client in Figure 5, and more results about S = 30 are in appendix. The s-FedAMSGrad consistently achieves the highest test accuracy. 
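The fixed multi-stage decay used for the CIFAR runs above (the base rate is multiplied by 0.1 at one half and again at three quarters of the total epochs) can be written as a tiny schedule. The helper below is a sketch of that schedule, not code from the paper.

```python
def multistage_lr(eta0, epoch, total_epochs, factor=0.1):
    """Multiply the base rate by `factor` at 1/2 and again at 3/4 of training."""
    eta = eta0
    if epoch >= total_epochs // 2:
        eta *= factor
    if epoch >= (3 * total_epochs) // 4:
        eta *= factor
    return eta

# e.g. eta0 = 0.1 over 200 epochs: 0.1 until epoch 99, 0.01 until 149, 0.001 afterwards
```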
SCAFFOLD tends to degrade in training large-scale deep learning applications. It may be due to the local client control variate in SCAFFOLD, which lags behind when only a small set of clients participate in FL at each round. However, when the s-FedAMSGrad with SVRG updates in inner loops (as done in SCAFFOLD) is used, the issue is alleviated. In summary, we observe that Federated AGMs can improve test performance over existing federated learning methods, calibration further helps Federated AGMs, and stage-wise training (with exponential decay of the local learning rate γ t ) is also useful for federated learning of deep neural networks. Compared with calibrated Federated AGMs, the FedAvg suffer more from client drift, and SCAFFOLD tends to more easily trap in local minimals. Conclusion In this paper, we propose a family of federated versions of adaptive gradient methods where clients run multiple steps of stochastic gradient descent updates before communication to a central server. Different calibration methods previously proposed for adaptive gradient methods are discussed and compared, providing insights into how the adaptive stepsize works in these methods. Our theoretical analysis shows that the convergence of the algorithms may rely on not only the inner loops, participating clients, gradient dissimilarity, but also to the calibration parameters. Empirically, we show that the proposed federated adaptive methods with careful calibration of the adaptive stepsize can converge faster and are less prone to trap into bad local minimizers in FL. B.2.1 Non-IIDness As introduced in the main paper, α is concentration parameter for Dirichlet distribution to control the degree of dissimilarity across devices. With α → 0, we can generate real different distributions among clients, with α → ∞, all clients have identical distributions to the prior. We first observe how the identicalness affects experimental performance. Figure 7 shows the performance of all the FL methods for CIFAR10 data with participated clients S = 10, and concentration parameter α = 100. Figure 8 and Figure 9 use concentration parameter α = 1 for calibrated FedAdam and calibrated FedAMSGrad, respectively.; Figure 10 and Figure 11 use concentration parameter α = 0.05 for calibrated FedAdam and calibrated FedAMSGrad, respectively. Notice that with the increase of concentration parameter α, the data located on local clients are more identical, and the performance will be better. Then we provide comparisons of calibrated Federated AGMs with multi-stage LR decay scheme for ResNets with CIFAR10 dataset. Figure 8 and Figure 10 are comparisons of -FedAdam, p-FedAdam and s-FedAdam with multi-stage LR decay scheme for ResNets with CIFAR10 dataset; Figure 9 and Figure 11 are comparisons of -FedAMSGrad, p-FedAMSGrad and s-FedAMSGrad with multi-stage LR decay scheme for ResNets with CIFAR10 dataset. We can see that no matter FedAdam or FedAMSGrad, calibration techniques posed on second-order momentum matter lot. With appropriate calibrating parameters, Federated AGMs can always improve the training and testing performance a lot. We also provide more comparisons of calibrated Federated AGMs with multi-stage LR decay scheme for VGGNet with CIFAR100 dataset (see Figure 12). And Figure 13 is a comparison among s-FedAMSGrad, s-FedAMSGrad+SFAFFOLD and SCAFFOLD methods under different non-IIDness (α ∈ {1, 0.5, 0.05}) in CIFAR10 and CIFAR100. 
C.3.1 Prepared Lemmas Let's define first with the stochastic gradient under partial device participation, t,k ; and the virtual direction will be With full device participation, we can accordingly have, In the following analysis, we always consider the partial device participated case. We have a series of prepared lemmas to help with optimization convergence rate analysis. Lemma 6 Assume the above assumptions hold, we can easily derive the properties of unbiased stochastic gradient with full device participation, Proof From the problem formulation, The inequality holds due to Jensen's inequality. Lemma 7 Assume the above assumptions hold, we can easily derive the properties of unbiased stochastic gradient with partial device participation, local update With all device active, the partial participation term can be removed. Proof From the problem formulation, g t,k = 1 The first inequality holds due to Jensen's inequality. ull participation Now we are going to measure the virtual direction and the related slow momentum and second-order momentum. Notice that these vectors are calculated beyond each working nodes self iteration, then we only use subscript t to denote the current iterate. Lemma 8 For virtual direction, we have where V := Remark 9 In our analysis, V is a very important term that effects the convergence rate, and we notice that V = O(K 2 γ 2 t (1 + 1 S )) when treated p i , σ i , G i as constants, then we know it is related with inner loop iterations, inner loop stepsize and partial device numbers, which is verified in our experiments. Lemma 10 All momentum-based optimizers using first momentum Proof From the updating rule of first momentum estimator, we can derive Let Γ t = Σ t l=1 β t−l 1 = 1−β t 1 1−β 1 , by Jensen's inequality and Lemma 8, Now let's consider adaptive algorithms in Federated learning, besides the above Lemma 7, 8, 10, we also need to bound the adaptive term as follows. , µ 6 = β log 2 , then we can get the above result. From the definition of V, and regard p i , σ i , G i as constants, we then have, Remark 13 Adaptive learning rate pairs (µ lower , µ upper ) are related with algorithm's calibrated parameters (i.e., , p, β), inner loop iterations K, inner loop stepsize γ t and participated device numbers S. C.3.2 -FedAdam Convergence Analysis with Paritical Device Participation in Nonconvex Setting This time we build a complicated auxiliary sequence for FedAdam. Lemma 14 Define z t = x t + β 1 1−β 1 (x t −x t−1 ), ∀t ≥ 1 β 1 ∈ [0, 1). Then the following updating formula holds for -FedAdam optimizer: Proof Lemma 15 As defined in Lemma 14, with the condition that v t ≥ v t−1 , we can derive the bound of distance of E[ z t+1 − z t 2 ] as follows: Proof The first inequality holds because a − b 2 ≤ 2 a 2 + 2 b 2 , the second inequality holds because Lemma 8, Lemma 10 and Lemma 12, the third inequality holds because (a − b) 2 ≤ a 2 − b 2 when a ≥ b, and in our assumption, we have v t ≥ v t−1 holds. Lemma 16 As defined in Lemma 14, with the condition that v t ≥ v t−1 , we can derive the bound of the inner product as follows: The first inequality holds because 1 2 a 2 + 1 2 b 2 ≥ − < a, b >, the second inequality holds for L-smoothness, the last inequalities hold due to Lemma 8, 10 and 12. Lemma 17 Proof Following Lemma 4 in Reddi et al. (2020), we can derive the corresponding drift bound for our problem that for Then we get the upper bound, Proof of -FedAdam with partial device participation in nonconvex setting. 
Proof From L-smoothness and Lemma 14, we have Take expectation on both sides, Plug in the results from prepared lemmas, then we have, Then we derive, Then, we have, Require ηKγtµ 1 2 − 5ηµ 2 K 3 γ 3 t > 0, then γ t < µ 1 10µ 2 K 2 . We further derive the bound of inner loop stepsize as γ t < min{ 1 8LK , µ 1 10µ 2 K 2 }, or we can get Kγ t < min{ 1 8L , Sum from t = 0 to T − 1 and divide by 1 T , because z 0 = x 0 and Lemma 12, Our p-FedAdam methods are proved to converge with convergence rate of O( 1 T ). And we get the result that, if Kγ t < O(min{ 1 L , µ 1 µ 2 }), Federated AGMs always converge. Consider the calibration parameter (µ lower , µ upper ) in FedAdam, and we further require η = 1 K ; for vinilla FedAdam and -FedAdam, Kγ t < O(min{ 1 L , 3 Lemma 20 As defined in Lemma 18, with the condition that v t ≥ v t−1 , we can derive the bound of the inner product as follows: Lemma 21 Proof of p-FedAdam with partial device participation in nonconvex setting. We first study the convergence analysis of p-FedAdam methods, which include FedAdam and -FedAdam methods. Proof From L-smoothness and the sequence derived in Lemma 18, we have Take expectation on both sides, Then, we have, 10µ 4 K 2 . We further derive the bound of inner loop stepsize as γ t < min{ 1 8LK , µ 3 10µ 4 K 2 }, or we can get Kγ t < min{ 1 8L , Sum from t = 0 to T − 1 and divide by 1 T , because z 0 = x 0 and Lemma 12, Our p-FedAdam methods are proved to converge with convergence rate of O( 1 T ). And we get the result that, if Kγ t < min{ 1 8L , µ 3 10µ 4 }, Federated AGMs always converge. Consider the calibration parameter (µ lower , µ upper ) in FedAdam, and we further require Thus, we get the sublinear convergence rate of p-FedAdam methods in nonconvex setting with full device participation, which can also recover -FedAdam methods. C.3.4 s-FedAdam Convergence in Nonconvex Setting As s-FedAdam also has constrained bound pair (µ 5 , µ 6 ), we can learn from the proof of p-FedAdam method. Lemma 23 As defined in Lemma 22, with the condition that v t ≥ v t−1 , we can derive the bound of distance of z t,k+1 − z t,k 2 as follows: Proof Since softplus function is monotone incereasing function, we can similarly prove it as the way in Lemma 15. Lemma 24 As defined in Lemma 22, with the condition that v t ≥ v t−1 , we can derive the bound of the inner product as follows: Proof We can similarly prove it as the way in Lemma 16. Lemma 25 Proof We can similarly prove it as the way in Lemma 17.
Air bubble guide for adequacy of ophthalmic viscosurgical device during phacoemulsification Phacoemulsification in hard cataracts is a challenge. The use of dispersive ophthalmic viscosurgical devices (OVDs) to protect the endothelium is a routine step in such scenarios. However, as OVD is transparent, it is difficult to spot within the anterior chamber. Therefore, surgeons may not be aware when the OVD coating of the endothelium disappears during surgery. Consequently, there may be too frequent OVD injections, resulting in a waste of resources. On the contrary, the surgeon may fail to inject OVD at an appropriate time, leading to greater endothelial damage. We propose a novel technique of using an air bubble as a guide that helps in identifying the time when OVD disappears from the anterior chamber, thereby suggesting the surgeon to reinject before proceeding further. Cataract surgery is the most commonly performed surgical procedure in the world. Phacoemulsification has now become the standard method of performing cataract surgery. Nevertheless, performing phacoemulsification in very hard cataracts may require longer phaco time with the utilization of a greater amount of ultrasonic energy. This translates to greater endothelial damage and the development of striate keratopathy. In most instances, the striate keratopathy resolves with time, irrespective of the use of hypertonic saline. [1] However, endothelial cells do not regenerate in human beings since they are arrested in the G1 phase of the cell cycle. [2] There is only compensatory enlargement and migration, which helps in maintaining their function in most individuals. With advancing age and a further decrease in endothelial cell density, these eyes may develop significant endothelial dysfunction and consequently pseudophakic bullous keratopathy later in life. [3] Numerous surgical techniques including multilevel chop, "crater and chop," and "drill and chop" have been described for phacofragmentation to reduce the amount of energy utilized within the anterior chamber in cases of hard cataracts. [4-7] Irrespective of the parameters, one of the most important surgical steps in getting the ideal outcome from these advances is adequate endothelial protection using ophthalmic viscosurgical devices (OVDs). The use of dispersive or viscoadaptive OVD while operating on hard cataracts has now become an essential step to get the best results despite the additional cost involved. [8,9] They are usually injected before starting phacoemulsification and, if necessary, during nuclear fragment removal, depending on the time taken for these steps. As OVD is transparent, it is difficult to spot within the anterior chamber. As the surgery progresses, surgeons may not be aware of whether the endothelial OVD coating is intact or not. Therefore, they periodically inject OVD intraoperatively. It is possible that OVD could have been aspirated much earlier and ultrasonic energy with the generated heat has quick direct access to the corneal endothelium. On the contrary, OVD could still be present under the cornea when reinjection is done, which adds to the wastage. We propose a novel technique that helps in identifying the time when OVD disappears from the anterior chamber, suggesting the surgeon to reinject OVD before proceeding further.
Surgical Technique The technique involves utilizing a small air bubble as a guide for the adequacy of OVD during phacoemulsification. After draping and speculum application under strict aseptic sterile conditions by cleaning the eye with povidone-iodine, one or two side port incisions are made depending on whether coaxial or bimanual phacoemulsification is to be performed. A 2.8-mm main incision is created using a keratome. Trypan blue is used to stain the anterior lens capsule under air. After washing the dye, OVD is injected. A 5.5-mm capsulorrhexis is performed, followed by hydrodissection. We aim to use a small air bubble inside the anterior chamber as a guide. To this end, we inject a very small air bubble intentionally into the anterior chamber. In most cases, an air bubble is invariably seen after hydrodissection even without intentional injection; however, if no such bubble is inside, one is injected intentionally [Fig. 1a]. The highest point inside the anterior chamber would be below the center of the cornea due to the concave nature of the posterior corneal surface. An air bubble being lighter than fluid and by buoyancy will try to occupy the highest position of the containing vessel, in this case, the anterior chamber [Fig. 1b]. Care needs to be taken that it is not too big, obscuring the view. Excess air may be aspirated. It is safer to have the air bubble in one corner. This can be achieved by injecting OVD in the center of the anterior chamber beside the air bubble [Fig. 1c]. This pushes the air bubble to the periphery. OVD around the air bubble will prevent its free movement within the anterior chamber. The nuclear disassembly may be performed by the surgeon's preferred technique. We prefer the horizontal or vertical chopping technique in a hard cataract [Fig. 1d]. The nuclear fragments are then emulsified [Fig. 1e]. When the OVD gets aspirated inadvertently during phacoemulsification, the now free air bubble moves around with ease [Video 1]. This rapid movement of the air bubble indicates to the surgeon that the OVD has been washed off and needs to be reinjected. If the air bubble migrates from the periphery back to the center, it probably indicates that the peripheral cornea may not have adequate protective OVD coating and therefore needs reinjection. Alternatively, the air bubble can also get aspirated along with OVD, implying the same. Usually, not more than one injection of air bubble would be required to complete phacoemulsification. However, an air bubble can be reinjected as needed [Fig. 1f, g]. It is seen that the air bubble stays in situ when adherent to OVD. At the end of phacoemulsification, the air bubble usually remains without getting aspirated despite the turbulence in the anterior chamber created by emulsification, which indicates that the endothelial coating by the OVD is still intact [Fig. 1h]. Cortex wash and intraocular lens insertion are performed in a routine manner. The OVD is washed off the anterior chamber, incisions are hydrated, and intracameral moxifloxacin is finally injected. The same principle can also be used while washing off the OVD at completion. A conscious attempt to remove all air bubbles at the end of the surgery will remove the bulk of OVD even before the air bubbles.
Discussion This air bubble guide is useful not only in cases of hard cataracts, but also in situations with compromised endothelial function, like Fuchs' endothelial dystrophy or uveitis. However, this might not make a clinical difference in soft cataracts. Nevertheless, this could be performed in all phacoemulsification surgeries. During surgical training, where the surgery is prolonged or when greater energy is utilized, this technique can be beneficial. Retained OVD has been implicated in transient immediate postoperative intraocular pressure spikes and toxic anterior segment syndrome. A knowledge of this concept with a thorough OVD wash at the end will reduce its incidence. When a sticky surface like an OVD is present, the air bubble tends to be loosely attached to it, preventing it from moving freely or getting aspirated. A dispersive OVD, however, may tend to entrap air bubbles generated by the tip of the phaco probe and can impede the surgeon's operative view. A single air bubble in the central part of the cornea may not indicate the adequacy of OVD protection at the paracentral and peripheral cornea. The disappearance of the bubble depends on various factors including turbulence within the anterior chamber and proximity of the phaco probe to the air bubble. Usually, it lasts until at least two quadrants are removed. An OVD like Viscoat (sodium hyaluronate 3% with chondroitin sulfate 4%; Alcon, Geneva, Switzerland) is reported to have a greater binding to the corneal endothelium due to the presence of three negative charges. An air bubble immersed in water has been described to acquire a charge and move in an electrical field as if it is negatively charged. The presence of this negative charge may further aid adherence to the corneal endothelium. Alsmman et al., [10] through a comparative study, also reported that the air bubble is nontoxic to the endothelium. The limitations are that if a bigger air bubble is injected, it may obscure visualization during surgery. The surgical time would be slightly prolonged in getting an appropriately sized air bubble in the anterior chamber; however, this time reduces with experience. Further, the air bubble may get washed off easily if the phaco probe is placed adjacent to it. However, it needs to be emphasized that the air bubble is only a guide for the adequacy of OVD and not a substitute for it. Every attempt should be made to protect the corneal endothelium by reinjecting dispersive OVD to minimize corneal endothelial cell loss. Future research analyzing the difference in endothelial cell loss with or without an air bubble guide, ensuring similar cumulative dissipated energy in both groups, may add greater value to this technique. In summary, our experience suggests that this air bubble visualization can give a fair approximation of the amount of OVD still present in the anterior chamber. Considering that there is no additional instrumentation or cost involved, this could be a beneficial guide during phacoemulsification. Financial support and sponsorship: Dr. Annamalai Odayappan is currently a Research Scholar at the Kellogg Eye Center, University of Michigan, USA supported by a grant from NIH/Fogarty International Center (D43TW012027).
Figure 1: (a) An air bubble is seen in the anterior chamber at the end of hydrodissection. (b) The air bubble occupies the highest position inside the anterior chamber. (c) OVD is injected in the center to push the air bubble to the periphery. (d) The hard cataract is chopped into multiple fragments. (e) The nuclear fragments are emulsified. (f) Air is reinjected since it was washed away. (g) OVD is injected again in the center to push the air bubble to the periphery. (h) At the end of phacoemulsification, the air bubble persists, indicating that the endothelial coating by the OVD is intact. OVD = ophthalmic viscosurgical device
The CLAIRE COVID-19 initiative: approach, experiences and recommendations A volunteer effort by Artificial Intelligence (AI) researchers has shown it can deliver significant research outcomes rapidly to help tackle COVID-19. Within two months, CLAIRE’s self-organising volunteers delivered the World’s first comprehensive curated repository of COVID-19-related datasets useful for drug-repurposing, drafted review papers on the role CT/X-ray scan analysis and robotics could play, and progressed research in other areas. Given the pace required and nature of voluntary efforts, the teams faced a number of challenges. These offer insights in how better to prepare for future volunteer scientific efforts and large scale, data-dependent AI collaborations in general. We offer seven recommendations on how to best leverage such efforts and collaborations in the context of managing future crises. Introduction Inspired by successful early use of AI by China, Taiwan, Singapore and South Korea to support the management of the COVID-19 pandemic, on 20 March 2020 CLAIRE, the Confederation of Laboratories for AI Research in Europe (CLAIRE) launched a volunteer effort to help tackle the pandemic. As the World's largest, non-profit network of AI researchers, CLAIRE was quickly able to recruit 150 volunteer AI researchers. This report describes the major activities and achievements of these volunteers, and shares experiences, lessons learnt and recommendations. The starting point for CLAIRE's COVID-19 initiative was the insight that the AI community has much to offer in support of efforts to handle the pandemic, its societal and economic consequences, and many AI researchers and practitioners stood ready to help public institutions in the front line of the crisis (Luengo-Oroz et al. 2020). Our conviction was that AI could be successfully used across a broad spectrum of areas directly related to managing the COVID-19 crisis, such as: -Analysis of existing drugs to test their efficacy against COVID-19 -Analysis of data from patients in intensive care, to support prioritisation in triage and therapy -Analysis of epidemiologic and mobility data, with the goal of better modelling and predicting the spread of the virus, and of facilitating the assessment of impact of containment actions -Use of advanced 3D printing approaches, with the goal of alleviating the scarcity of equipment for protection and intensive therapy -Use of automated scheduling and resource management approaches, with the goal of efficiently managing scarce resources in the medical sector (ICU beds, ventilators, specialists) and other key elements of public infrastructure (personnel, warehouses). These and many other examples suggest that AI techniques can play a key role in assisting human experts with managing the pandemic and its economic aftermath. We note that, as evident even from the small set of examples given above, it is clear that a broad spectrum of AI techniques and approaches can be brought to bear; for this reason CLAIRE, whose research network spans all areas of AI, across all of Europe, saw itself as particularly well-positioned to mobilise bottom-up support for the use of AI techniques and expertise in fighting the pandemic and in managing its impact on societies across Europe and the world. Setup phase Directly after CLAIRE's COVID-19 initiative was launched in late March 2020, a task force was put into place to coordinate the effort and the volunteer experts supporting it. 
This task force collected information on the various initiatives on leveraging AI techniques in the context of COVID-19 and supported the development of new projects, connecting the European network of AI experts together with health institutions and governments. By the end of March, the task force had enrolled 150 volunteers, covering the full spectrum of AI methods, tools and technologies. Volunteers indicated their willingness to work on one or more of 11 research topics. Of these, a significant number of volunteers and topic team leaders were found for 7 topics: • Epidemiological data analysis-10 volunteers • Mobility and monitoring data analysis-36 volunteers • Bioinformatics (protein and molecular data analysis)-25 volunteers Overview of research activities The 7 groups of volunteers, led by the topic coordinators and with the support of the task force team, are working on several outcomes summarised below. Epidemiological modeling and decision support Topic coordinator: Ann Nowé, Vrije Universiteit Brussel, Belgium. No. of volunteers: 10. This research group works on different types of models for epidemics (Pernice et al. 2020; Report 9: Impact of nonpharmaceutical interventions (NPIs) Response Team xxxx; Data Science Institute and UHasselt xxxx), ranging from high level compartment models to agent based models, and how they can be used to study the dynamical aspects to improve complex decision taking on the effectiveness of prevention strategies. On the one hand, this involves model fitting and optimisation, on the other hand, learning and optimisation of prevention strategies, using epidemiological models as simulation environments (Libin and Guiding . xxxx). Work is underway to identify collaboration mechanisms and structures, considering the support AI can offer in decision-making. This recognises the multi-criteria nature of the problem, balancing the needs of different stakeholders all of whom should be involved. Mobility and monitoring data analysis Topic coordinator: Jose Sousa, Faculty of Medicine, Health and Life Sciences, Queen's University Belfast, Northern Ireland. No. of volunteers: 36. This work sets out to understand the symptoms progression through self-reported data and its integration with mobility to forecast healthcare decision making. The goal is the development of an AI multilayer learning approach capable of creating evidence based knowledge, using complex networks for self-supervised learning (LeCun et al. 2015), spatial temporal analysis and deep learning. Work is underway to understand the data collected under the several self-reporting systems (Sun et al. 2020) and test how useful the self-reported data is to forecast events (Realtime tracking of self-reported symptoms to predict potential COVID-19 2020). The initial models will be produced using different methodologies and compared with the officially reported statistics. Bioinformatics (protein and molecular data analysis) Topic coordinator: Davide Bacciu, Computational Intelligence and Machine Learning Group, Universita' di Pisa, Italy. No. of volunteers: 35. 
Work on this topic aims to (1) support the community in characterising the disease from its related structural information, including prediction of viral protein folding; (2) study the interactions between the virus and human hosts, including analysing protein-protein interaction data; (3) design and validate methodologies for filtering, retrieval, and generation of targeted drugs leveraging molecular and well as proteomic information; (4) deliver predictive insights onto the genetic features of the virus. As a first contribution to the community, the workgroup has created a curated collection of COVID-19-related datasets useful for drug-repurposing tasks, integrating data from multiple studies (Cheng et (Ashburner et al. 2000) and drug interactions (Cheng et al. 2019). This resource has already been released to the community. The group will use the resource to provide a methodology for fast retrieval of drugs whose action can be correlated to target proteins, by leveraging deep learning for graphs (Bacciu et al. 2020). Image analysis (CT scans) Topic coordinator: Marco Aldinucci, Computer Science Dept, University of Torino, Italy. No. of volunteers: 48. Research in this area aims to (1) distil the current state of the art of methodologies and data sets for AI-assisted diagnosis of COVID-19 by way of imaging (TC Scan, X-ray, etc.), with the goal of making diagnosis faster, cheaper and more manageable in the hospital processes (e.g. using lowresolution images); and (2) to contribute to the improvement of multidisciplinary knowledge by cross-breeding knowledge in computer science and radiology aiming at creating better, more informative reference datasets, together with data-gathering strategies, beyond the current outbreak (Tartaglione et al. 2004;Shi et al. 2020). The team is developing a review paper and contributes to already active projects, including EU H2020 DeepHealth and EU ERDF HPC4AI. Two further projects are motivated by the strongly perceived need to distil science from the hype COVID-19 induced in different aspects of everyday life, including scientific works (Deephealth project: EU ICT-2018; EU ICT-2018). The first addresses a reproducibility and benchmarking task: the main publically available deep neural networks and datasets will be collected and cross-validated to compare them across a common baseline. This task will need a substantial human and compute effort. For this, the group is finalising an agreement with the Italian National Supercomputing Center CINECA that will actively support the group activity, which will require both training and inference of the cartesian product of networks, datasets and network parameters. A non-trivial but enabling aspect of the work will be designing and experimenting tools making it possible to bring AI workload to supercomputers and make AI experts efficiently use large scale platforms (Aldinucci et al. 2018;Colonnelli et al. 2002). The second seeks to consolidate AI performance metrics for both datasets and networks, which will be needed to assess both quality and compute efficiency aspects. 
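To illustrate the reproducibility and benchmarking task described above, the sketch below scores the cartesian product of models and datasets against a common baseline metric. It is a minimal illustration only, not the project's actual pipeline: the model and dataset names are hypothetical placeholders, synthetic data stands in for the collected CT/X-ray sets, and the choice of ROC AUC and scikit-learn is our assumption.

```python
import numpy as np
from itertools import product
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins: each "model" is a scoring function over features and each
# "dataset" is a (features, labels) pair; real use would plug in the collected
# pretrained networks and curated imaging datasets.
rng = np.random.default_rng(0)
datasets = {name: (rng.normal(size=(200, 16)), rng.integers(0, 2, 200))
            for name in ["dataset_a", "dataset_b", "dataset_c"]}
models = {"model_a": lambda X: X[:, 0],
          "model_b": lambda X: X.mean(axis=1),
          "model_c": lambda X: X[:, :4].sum(axis=1)}

# Cross-validate the cartesian product of models and datasets on one shared metric,
# producing a single comparable baseline table.
results = {}
for (m_name, model), (d_name, (X, y)) in product(models.items(), datasets.items()):
    results[(m_name, d_name)] = roc_auc_score(y, model(X))

for key, auc in sorted(results.items()):
    print(key, round(auc, 3))
```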
Social dynamics and networks monitoring This work uses AI models to analyse social media data together with social, behavioral and economic data for two main purposes: (1) monitor social dynamics to analyse the COVID-19 "infodemic" - "an over-abundance of information - some accurate and some not - that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (WHO, Novel Coronavirus (2019)) - with the goal of identifying, monitoring and analysing the overload of unreliable information; of collaborating with data providers to obtain free access to relevant data; and of creating an interdisciplinary hub of experts to fight the "infodemic"; and (2) develop early-warning signals to support policy, informed by spatio-temporal analysis of emotions and sentiments, quantifying and modelling the socio-behavioural response. Social media are playing a crucial role in spreading information, both reliable and unreliable, during the COVID-19 pandemic. Efforts are devoted to unravelling the role played by both humans and software-assisted accounts (i.e., social bots) in disseminating false or inflammatory content for social manipulation, a phenomenon recently discovered during political events (Stella et al. 2018), with the ultimate goal of attracting or driving collective attention (Domenico and Altmann 2020) towards specific information. Products of the individual team members, such as the infodemic observatory model (Gallotti et al. 2004) developed by the topic coordinator within the Complex Multilayer Networks Lab at FBK, allow monitoring of the current infodemic globally, in each country, or at sub-regional resolution in real time. This information, complemented with the analysis of cognitive content based on natural language processing and computational psycholinguistics, might help to shed light on mass psychology and the socio-behavioral response to the pandemic. Results can be used to support policy and decision makers with adequate and zone-specific actions. Such tools can be disseminated and further developed with the support of the entire research team. Robotics Topic coordinator: Alessandro Saffiotti, AASS Cognitive Robotic Systems Lab, School of Science and Technology, Orebro University, Sweden. No. of volunteers: 5. Work in this area investigates possible uses of robotic systems and robotic technologies in response to the current COVID-19 emergency and to its aftermath, as well as strategies to improve technological preparedness for possible future crises. Specifically, this team has studied: the use of mobile robots for disinfection of environments; specialized laboratory robots for biological tests and drug development; telepresence robots for social and medical assistance; and manufacturing robots for flexible production. These uses of robotic technologies are in line with a recent editorial in Science Robotics (Yang 2020). The group maintains a catalogue of robotic offers and demands relevant to the COVID-19 emergency, and it is liaising with active research laboratories across Europe. We have found that the liaison aspect is especially important during a crisis, when access to laboratory resources and material may be seriously limited. It is also supporting euRobotics (the association of European robotic stakeholders) in writing a white paper on the potential usage of robotic technology in the COVID-19 emergency. No. of volunteers: 30.
The group working on this topic has focused on automated planning and scheduling, and resource management in healthcare systems leveraging AI (deductive) methodologies and tools. An initial assessment of relevant resources has been completed, and a review of relevant publications, data and projects is underway. In addition, collaboration with the Galliera hospital in Genova, Italy, is underway to assist with workforce scheduling and automated planning of the utilisation of operating rooms with scarce resources and equipment (Alviano et al. 2018; Dodaro and Galatà 2019). Recommendations for future efforts in a crisis Unfortunately, it is more than likely that our societies will be confronted in the not-so-far future with other crises of similar scale. The results of our efforts thus far demonstrate that rapidly assembled volunteer efforts including large teams of experts, although complex to initiate and coordinate, can make valuable contributions in this context. Important initiatives may result from bottom-up efforts, which may consolidate in white papers, joint project applications and dissemination of annotated datasets. However, preparedness for such future events can be improved in a number of ways, and there are important lessons to be learned. Here, we outline some of our experiences and cautiously formulate some recommendations based on these. Involving domain experts and public authorities is difficult in a crisis Many domain experts in public and medical authorities were already preoccupied with tackling the pandemic, limiting their scope to assist volunteer teams. As a result, teams had to develop their own analysis of the problems to tackle, seeking to engage with experts later in the development process as the crisis began to be controlled. The teams who most rapidly developed research outputs were those where the topics being worked on were close to their existing expertise. Recommendation 1 Effectively interfacing with domain experts and public authorities in a crisis situation is challenging, but this should not discourage qualified volunteers. Recommendation 2 The contribution that voluntary expert teams can make should be taken into account in planning for future crises. See also Recommendation 6 below. The need for open licenses and standards for data The ethical implications of processing medical and other sensitive data, and the strategic and policy impacts of research during a crisis pose major challenges. While researchers were fully committed to respecting European citizens' privacy, in accordance with European values, fundamental rights and regulations such as GDPR, the weak standardisation of the data collected on COVID-19 and embryonic state of open data access in the medical and epidemiological field made it difficult to compile bigger data sets needed for the data-driven approaches. Thus ethical, data management and standardisation efforts should be carefully considered from the outset of future volunteering efforts. Open licenses designed by Creative Commons have been used for several of the products from this effort to encourage reuse. CLAIRE has more broadly analysed the Creative Commons Open COVID Licence, and welcomes this open approach to sharing research products. Accelerating the development of open licenses and standards for medical data and models, such as epidemiological models, and applying them consistently would reduce these challenges in future. 
Recommendation 3 Address and coordinate ethical issues, standardisation and data management at the beginning of the research effort. Consider using open licences to support and accelerate data availability. The need for large open datasets and infrastructures Many AI techniques, notably from the area of machine learning, depend on access to large-scale data. Work with platforms like Twitter and Facebook, and with mobile telecommunication service providers, should become more routine in the future to help speed up the large-scale analyses required to inform policy based on quantitative measures of human behavioural responses to the pandemic. This also demonstrates the need for a European data space, such as that proposed in the European Data Strategy and the European Open Science Cloud, which should include such datasets. Of course, it is very important to not only ensure the quality of such data sets, but also to protect the rights of citizens, in particular their privacy. Recommendation 4 Support the development of a European data space and an open data approach to medical and sensitive data for scientific purposes, while protecting individuals anonymity, dignity and human rights. The role of large scale research infrastructures The ability of large-scale research and development infrastructures, such as the Robert Koch and Francis Crick Institutes (or, indeed, Apple and Google), to redeploy expertise to work effectively on the pandemic is notable. They have offered public authorities single points of contact for key expertise and helped rally efforts of related communities. Many scientists in AI have shown they are eager to dedicate significant time and effort to voluntary activities which might not be necessarily sustainable in a short-time horizon or according to more conventional funding channels. The CLAIRE initiative purposefully decided to build on this to go beyond a sterile communication exercise, to bootstrap a number of concrete scientific collaborations on a voluntary basis. While this effort has demonstrated that volunteer efforts can be effective, this observation supports the case for a large-scale investment into an AI hub (or lighthouse centre) in Europe, acting as the reference point of European nations and institutions for all AI research and development. Since the largest part of the research community kept on using conventional means and strategies to deal with the COVID-19 crisis-notably: competitive research funding, publication (though accelerated by a largest use of arXiv distribution services) and networking-a European centre for AI could promote an innovative approach to research collaboration, funding and dissemination. Fast international uptake was only possible thanks to existing research networks and network organisations, or built upon personal international academic networks or ongoing project consortia. Outreach and links to initiatives across the EU network were difficult to deploy or to set up, given the grass-roots nature of most initiatives. A European network would establish permanent relations with all the relevant AI institutions and initiatives globally. Recommendation 5 Establish a European hub (or lighthouse centre) for AI of very substantial scale. Bridging communication between medical and AI expertise The software platforms and hardware with suitable computational power needed to make use of advanced AI techniques (as recommended by AI experts) are lacking in many hospital environments, whether for e-Health or other solutions. 
While the function of hospitals is first-and-foremost to deliver medical care, encouraging the future development collaborations between local hospitals and AI researchers and investment into infrastructure that enables these collaborations during normal times would help reduce these barriers in future crises. The approach can be extended to other areas using, for example, national risk registers that identify topics of concern to build networks with those who have to manage crises. Recommendation 6 Set up stable collaboration between hospitals and AI researchers, and other areas of work where future crises can be expected. Organising large teams remotely Remotely organising teams of as many as 47 volunteers to quickly decide on research priorities and means of delivering that research presents its own challenges. Preventing fragmentation that dilutes effort, documenting research plans and the work underway, communicating effectively within the group and disseminating results become significant overheads that are not easy to resolve using slow, traditional methods. There is a rich selection of tools available to address many of these issues, used especially in the software industry, but familiarity with these tools within the scientific community varies greatly. This pushes teams towards the simplest, lowest-common-denominator, legacy solutions as well as towards pre-existing networks and project teams. While even simple videoconferencing and document sharing tools have enabled substantial work to be progressed and completed, scientific researchers should build their familiarity with complex and feature-rich collaboration tools that now exist. This will improve inter-institutional research, assisting both building of teams, internal collaboration, and dissemination of results. Recommendation 7 Scientific researchers should become fluent with the collaboration tools and techniques routinely used, for example, in the software industry. Conclusions As the COVID-19 pandemic and its wider ramifications have yet to play out, it is far too early to draw any definitive conclusions. But the outcomes of CLAIRE's COVID-19 initiative suggest that bottom-up, expert-driven, non-profit endeavours can play an important role. It also offers further evidence supporting the basic premise that AI can and should play a key role in handling crises such as this one, both in the health and medical aspects of the COVID-19 pandemic, and in the societal and economic recovery to come. The highly interdisciplinary nature of AI makes it an ideal discipline to create bridges with other scientific domains to attack important societal problems and crises. In crises resources are limited, time is of the essence and the consequences of action or inaction are severe but difficult to predict. We are convinced that AI, with its potential to support human analysis, planning and decision making has much to offer not just in the context of the current pandemic, but also for handling future crises. We have shown that many experts are willing to work quickly together on novel solutions for the benefits of society. Such collaboration depends on quick access to large amounts of data and information, computation and, most importantly, to each other even when social distancing measures severely restrict physical interaction. We remain aware that when used, developed and deployed under the pressure of exceptional circumstances, AI technology can be a double-edge sword. 
It is technologically quite easy to put in place systems that might be difficult to dial back once the crisis is over, eroding privacy and other fundamental rights with the aid of advanced AI tools and techniques. We must not allow this to happen and must develop standards and frameworks that permit rapid progress without eroding human dignity. Especially in times of crisis, we need to keep our eyes and resources firmly on AI that enhances human intelligence, helps us recognise and avoid our biases and limitations, and that is designed and used to protect and further our interests as individuals and societies, managing potential risks and reinforcing our European values and the goal of developing human-centred AI. Despite its limited scale, the experience of the CLAIRE COVID-19 initiative has not only made concrete progress on COVID-19 problems, but has also offered insights into the potential and limitations of a non-conventional, voluntary and bottom-up approach based on the good will of AI experts who are fully aware of the societal role of their knowledge, of the importance of open dissemination of science, and of contributions to open scientific data collaborations with all stakeholders of our innovation ecosystem. Additional information on the CLAIRE COVID-19 initiative can be found on the website https://covid19.claire-ai.org/
Pediatric extracorporeal shock wave lithotripsy: Predicting successful outcomes Extracorporeal shock wave lithotripsy (ESWL) is currently a first-line procedure for most upper urinary tract stones <2 cm in size because of established success rates, its minimal invasiveness and long-term safety with minimal complications. Given that alternative surgical and endourological options exist for the management of stone disease and that ESWL failure often results in the need for repeat ESWL or secondary procedures, it is highly desirable to identify variables predicting successful outcomes of ESWL in the pediatric population. Despite numerous reports and growing experience, few prospective studies and guidelines for pediatric ESWL have been completed. Variation in the methods by which study parameters are measured and reported can make it difficult to compare individual studies or make definitive recommendations. There is ongoing work and a need for continuing improvement of imaging protocols in children with renal colic, with a current focus on minimizing exposure to ionizing radiation, perhaps utilizing advancements in ultrasound and magnetic resonance imaging. This report provides a review of the current literature evaluating the patient attributes and stone factors that may be predictive of successful ESWL outcomes, along with reviewing the role of pre-operative imaging and considerations for patient safety. INTRODUCTION Extracorporeal shock wave lithotripsy (ESWL) was introduced as a minimally invasive treatment for nephrolithiasis in 1980, with the first successful use in the pediatric population by Newman in 1986. [1] In ESWL, shock waves are generated by a source (lithotripter) external to the patient's body and are then propagated into the body and focused on a renal stone with the goal of fracturing the stone and allowing passage of the stone fragments via the urinary tract. In the past two decades, lithotripters have become more widely available throughout the world, and ESWL is now considered a first-line treatment for minimally invasive management of pediatric stone disease of the upper urinary tract. [2][3][4][5] Efficacy of ESWL is best measured by the stone-free rate, typically within 3 months of ESWL therapy to allow time for passage of stone fragments. In a review of 22 pediatric ESWL series, D'Addessi found that the stone-free rates mostly exceed 70% at 3 months, although many of these series included results after multiple ESWL sessions that are known to improve the stone-free rate. [2] Our group recently reviewed results of 149 pediatric patients treated with a single session of ESWL at multiple community and academic centers in the Midwestern US and found a 71% stone-free rate. [6] In other pediatric series, ESWL has been demonstrated to be successful in treating large stones (20-30 mm), with a 95% stone-free rate, [7] staghorn calculi with a 73% stone-free rate [8] and lower-pole calculi with a stone-free rate between 61% and 92%. [7,9] Thus, the efficacy of ESWL for renal stones in the pediatric population is well established. Caution is needed, however, when comparing the reported outcomes of different series, primarily for two reasons. Firstly, it is important to recognize that some series achieving high stone-free rates have defined ESWL monotherapy to include up to 6 ESWL sessions, [10] while single-session stone-free rates may be as low as 44%.
[3] Because multiple sessions result in additional patient anesthesia, stress to patients and families and expenditure of hospital and physician resources, Hammad and other authors have called for measurement of the efficacy quotient (EQ) as an important measure of ESWL success, particularly in the pediatric population. [11] EQs account for repeat ESWL sessions as well as secondary and ancillary procedures with respect to the stone-free rate. Secondly, some series consider ESWL stone fragments <4 mm to be clinically insignificant residual fragments (CIRFs), and include patients with these fragments as having a successful outcome. [3,9] The definition of CIRFs is extrapolated from the finding that the majority of stones <5 mm pass spontaneously in the adult population. However, Afschar demonstrated in a study of children that 69% of ESWL stone fragments <5 mm resulted in the adverse outcomes of either clinical symptoms or growth. [12] In our practice, we do not consider any residual stone fragments insignificant and patients with residual fragments require close monitoring for stone growth, potential complications and the need for subsequent intervention. This report will discuss patient attributes and stone factors that may be predictive of successful ESWL outcomes along with reviewing the role of pre-operative imaging and considerations for patient safety. AGE AND GENDER It is generally accepted that pediatric patients have an increased clearance rate of stones when compared with adult patients possibly due to lesser length and greater distensibility of the pediatric ureter. [9,13] Children may also have an infundibulopelvic angle that is more favorable to clearance of lower-pole stones. [14] In a study of children aged 0-14 years, Aksoy et al. found that after ESWL, children aged 0-5 years had the greatest stone-free rate and that children aged 11-14 years had the poorest outcomes, although age was not a statistically significant predictor of ESWL success in this series or other series to date. [6,15,16] The efficacy of ESWL has been demonstrated to be up to 100% in both children under 6 years of age [10] and in low birth-weight infants. [5] To our knowledge, no study has demonstrated a significant relationship between gender and ESWL outcomes in the pediatric population. BODY HABITUS AND ANATOMY The increasing rate of pediatric nephrolithiasis in the United States over the past 30 years [17] parallels an increase in obesity, which has been attributable to increased incidence of nephrolithiasis in adults. In a multivariate analysis of adult patients, Ackermann and coworkers found that body mass index (BMI) was a predictive factor in the results of ESWL, with greater BMI linked to decreased stone-free rates. [18] Subsequent studies in the adult population have confirmed this relationship, although studies of BMI in the pediatric population have been lacking. At our institution, we found pediatric BMIs to range from to 12 to 44 among 149 children aged 1-17 years, yet found no significant relationship of BMI to ESWL success in this cohort. [6] Our results may be attributed to the smaller body size of pediatric patients and the fact that many overweight and obese children have a skin to stone distance (SSD) within the focal distance of the lithotripter. For this reason, it may be valuable to evaluate SSD as a predictor of ESWL success in this population. 
A relationship of increased SSD has been correlated with ESWL failure in the adult population [19] and has been suggested to be more prognostic than BMI. SSD is best evaluated on non-contrast computed tomography (NCCT), [19] which is likely why study of this parameter in children has been limited to date. Future studies in the pediatric population should evaluate both BMI and SSD to determine the effect of body habitus on ESWL outcomes. Anatomic factors, congenital or acquired, that hinder stone clearance adversely affect the results of ESWL, and any obstruction distal to the stone remains a contraindication to ESWL. In patients with anatomic abnormalities, stonefree status may be as low as 12.5%. [20] In the presence of obstruction and infection, ESWL may result in lifethreatening urosepsis. Furthermore, stone fragments are unlikely to clear and a stone is likely to recur if the concomitant obstruction is not resolved. Clearance of residual fragments is also impaired when hydronephrosis is present. When considering lower pole stones, infundibular length > 3cm and infundibulopelvic angle <45° are associated with poorer outcomes. [21,14] Gurocak et al. have introduced a pediatric infundibulopelvic index (IPI) for lower calyceal stones, suggesting that a combination of infundibular length, width and infundibulopelvic angle may be a more beneficial predictor of ESWL success then evaluation of these parameters individually. [22] THE ROLE OF IMAGING The clinical suspicion of nephrolithiasis must be confirmed by imaging in order make a definitive diagnosis. Preoperative imaging is essential for the determination of stone characterization and location when planning management and treatment options. Current imaging modalities include NCCT, ultrasound, kidney ureter bladder plain film and intravenous pyelography. Of these modalities, NCCT has been demonstrated to be the most sensitive and specific, [23] and also offers the added potential of illustrating patient anatomy, identifying alternative pathology, determining stone density via attenuation value (Hounsfield units), determining the skin-to-stone distance as well as for characterizing stone volume using multiple views. Despite being the gold standard for evaluation of renal colic in the adult population, use of NCCT for the evaluation of stones in children has been limited to date because of concerns for ionizing radiation and the potential of cancer. Sound evidence for this concern derives from studies of atomic bomb survivors in Japan, demonstrating a direct relationship of the amount of ionizing radiation to the development of cancer. Along with the additive risk of repeated exposures, children are felt to be inherently more susceptible to ionizing radiation than adults because they have a higher population of dividing cells and because they have more remaining years of life during which a latent radiation-induced cancer could develop. [24] In what is now a commonly cited study, Rice speculated the incidence of fatal cancer in children to be as high as one per 1,000 CT scans. [25] In our practice, the use of NCCT scanning is limited and not considered necessary for all children if a stone is clearly visible on ultrasound and/or kidney-ureter-bladder (KUB) X-ray. But, in older children, those who present with symptomatic renal colic, or patients with hydronephrosis without a visible renal stone, NCCT imaging is the preferred imaging modality in our practice. 
Similarly, while NCCT most accurately determines the stone-free status post-ESWL, detecting fragments as small as 1 mm, use in this population is reserved for the same concerns, and either ultrasound or KUB is typically utilized for radiolucent and radiopaque stones, respectively. Risks of radiation can be reduced with protocols that limit the area scanned to the region of necessity only and by proper radiation dosing tailored toward individual patient size and age. Lower radiation dose NCCT protocols can produce equivalent sensitivity and specificity for stone detection when compared with standard CT. [23] However, it is not clear whether calculation of attenuation density in Hounsfield Units (HU) or post-processing algorithms designed to determine stone composition remain equally as accurate. We anticipate that advancements toward faster imaging, lower radiation dose and improved postprocessing will allow more widespread use of NCCT for the detection of urolithiasis in the pediatric population without compromising patient safety. STONE SIZE AND LOCATION Stone size has frequently been cited as the most important predictor of ESWL success in the pediatric population, [2,6] but variation in the methods by which stone size is measured and reported can make it difficult to compare individual studies and make recommendations for ESWL treatment. Stone size has been reported as a single diameter, a sum of diameters, [3,11] an area [9,10] and as total stone burden, which is the sum of the diameters or areas of treated stones in patients with multiple stones. [6] With the use of CT, it also possible to estimate the stone volume. Single transverse diameter is the most commonly used measure of stone size in large retrospective ESWL studies to date. [26] One could argue that future studies should move toward the use of stone area under the assumption that a 1 mm x 8 mm stone has a better likelihood of fracture and passage after ESWL than an 8 mm x 8 mm stone, despite having the same maximal transverse diameter. Regardless of the method of determining stone size, the vast majority of studies have demonstrated a direct relationship of worsening stone-free rate with increasing stone size. In studies that suggest that stone size does not significantly affect the stone-free rate, it is important to recognize that as the stone size increases, the number of sessions, number of shock waves and fluoroscopy time per session may have also increased. [27] In the aforementioned multi-institutional study conducted by our group, stone burden, measured as the maximal transverse diameter, was the only independent predictor of single-session ESWL success on multivariate analysis. [6] While multiple studies have demonstrated improved outcomes in stones <1 cm compared with larger stones, [6,15,28,29] success has been seen in stones up to 25-30 mm in diameter with multiple ESWL sessions. [27,30] There is a general consensus in adult endourology that lower-pole stones tend to be more refractory to ESWL when compared with other stone locations. This same conclusion was met in a pediatric study by , [30] but other series have shown no statistically significant relationship between stone-free rates and location when comparing lower-pole stones with other intrarenal stones or intrarenal to uretral stones. [2,6,9,11,16] In our practice, ESWL is most commonly used for intrarenal stones of the lower pole that are <1 cm and for stones <2 cm in other locations. 
We typically avoid ESWL in the mid and distal ureter in children due to difficulties with localization over the sacroiliac joint and to avoid possible injury to the developing reproductive systems. STONE COMPOSITION AND AT TENUATION DENSITY The ease with which a stone is fragmented by ESWL varies among stones of different compositions. Data reported by Saw and coworkers showed that when adjusted for stone size, cystine and brushite stones are the least amenable to fracturing with ESWL, followed by calcium oxalate monohydrate stones. Hydroxyapatite, struvite, calcium oxalate dihydrate and uric acid stones are increasingly more amenable to fracturing with ESWL. [31] A means to determine stone composition on initial presentation would be beneficial to planning surgical treatment or medical therapy. Different stone compositions have been demonstrated to have different radiodensities as well as different attenuation values measured in HU on NCCT. The HU is a measure of the radiodensity of a tissue or substance using an index based on the radiodensity of water. Substances like bone have a higher density while air and fat have the lowest densities. Unfortunately, most stones are not pure in composition and attempts to correlate attenuation value with stone composition have demonstrated that it is possible to distinguish uric acid stones from calcium-based stones, but that it is not easy to discern between types of calcium-based stones (e.g., calcium oxalate monohydrate from hydroxyapetite), [19,32] making it challenging to assess the usefulness of NCCT in predicting stone composition in the clinical setting. Newer dual-energy multi-detector CT protocols with advanced post-processing techniques appear to allow for improved discrimination among the main different subtypes of urinary calculi in both in vitro and in vivo when compared with single-energy multi-detector CT acquisitions with basic attenuation assessment. [33] Despite advancements, these techniques require a nearperfect breath hold, which may be difficult to achieve in young children without the use of general anesthesia. Continued advancement in imaging modalities and postprocessing of images may provide improved pre-operative characterization of stone composition and enhance surgical and medical management. In addition to being used as an adjunct to predicting stone composition, CT attenuation value has been determined to be an independent predictor of stone-free rates after ESWL therapy in the adult population. [34] Improved stone-free rates are seen for stones with lower attenuation values, with 1,000 HU being suggested as a significant cutoff for stones that are most amenable to ESWL. In a recent study that is pending publication, we retrospectively evaluated a cohort of 53 pediatric patients aged 1-18 years who underwent NCCT prior to single-session ESWL monotherapy and found that the stone attenuation value of the stone-free patients was 710 ± 294 HU vs. 994 ± 379 HU for those with treatment failure. When patients were stratified into two groups by attenuation value, <1,000 HU and ≥1,000 HU, the ESWL success rates were 77% and 33%, respectively. As in adult studies, larger stone sizes tended to have higher attenuation values. Pre-operative knowledge of the stone attenuation value is beneficial when considering treatment modalities and discussing potential outcomes with patients and family members. 
SAFETY While the efficacy of ESWL is clearly established, there remains debate over the safety of this procedure, particularly in the very young patient with growing kidneys. Animal studies have demonstrated the appearance of histological changes when immature kidneys are subjected to shock waves, with parenchymal damage proportional to the number of shocks received. [35] However, when children are evaluated clinically and with scintography, parenchymal damage does not appear to persist on long-term followup, [36,37] and renal growth and function do not appear to be significantly altered. [38] In 2006, Kramcheck et al. reported on a 19-year follow-up of adult patients treated with ESWL, raising concerns of long-term effects, namely an increased risk of developing hypertension and diabetes. [39] The development of hypertension was associated with bilateral ESWL treatment. All patients in the Kramcheck study were treated with a first-generation lithotripter, which is known to have a greater focal diameter and likely causes more damage to the surrounding renal and pancreatic tissues than the second-and third-generation lithotripters more widely in use today. Furthermore, outcomes for patients with untreated stone disease are significantly worse than for those who undergo treatment, and multiple studies have shown ESWL to be a safe and effective procedure in children. [1,29] Because the long-term safety of ESWL remains nebulous, we attempt a course of conservative follow-up in small children for as long as possible. To optimize safety and efficacy, ESWL should only be performed if lithotripter focal size and treatment facilities are adapted to the size of the child. Modifications to ensure proper shielding, positioning of the child and appropriate dose of electrical discharge to the size of the patient are required to reduce the likelihood of complications such as hematomas or lung contusions. Specifically, polysterene pads may be placed over the lung fields during an ESWL session to ensure pulmonary shielding. A traditional Dornier lithotripter may require modification of the gantry with wood slats or a car seat with an opening made on the rear. With regards to gating of shocks during ESWL, studies have demonstrated that ungated shocks are safe in the pediatric population, and that the arrhythmias seen in adults are not likely to occur in this population. [40] CONCLUSIONS Despite numerous reports and growing experience, few prospective studies and guidelines for ESWL have been completed. Variation in the methods by which study parameters are measured and reported can make it difficult to compare individual studies or make definitive recommendations. Individual surgeon experience and availability of instrumentation remain the most important factors for counseling patients and determining the most appropriate treatment options for nephrolithiasis in children. At our institution, advancements in instrumentation for pediatric PCNL and ureteroscopy, for example, facilitate the application of similar protocols for surgical intervention in children and adults. ESWL remains the procedure of choice for most upper urinary tract stones <2 cm in size because of established success rates, its minimal invasiveness and long-term safety with minimal complications. Still, imaging protocols in children with renal colic must be improved to minimize exposure to ionizing radiation, perhaps utilizing advancements in ultrasound and magnetic resonance imaging.
Attributed Sequence Embedding Mining tasks over sequential data, such as clickstreams and gene sequences, require a careful design of embeddings usable by learning algorithms. Recent research in feature learning has been extended to sequential data, where each instance consists of a sequence of heterogeneous items with a variable length. However, many real-world applications often involve attributed sequences, where each instance is composed of both a sequence of categorical items and a set of attributes. In this paper, we study this new problem of attributed sequence embedding, where the goal is to learn the representations of attributed sequences in an unsupervised fashion. This problem is core to many important data mining tasks ranging from user behavior analysis to the clustering of gene sequences. This problem is challenging due to the dependencies between sequences and their associated attributes. We propose a deep multimodal learning framework, called NAS, to produce embeddings of attributed sequences. The embeddings are task independent and can be used on various mining tasks of attributed sequences. We demonstrate the effectiveness of our embeddings of attributed sequences in various unsupervised learning tasks on real-world datasets. I. INTRODUCTION Sequential data arise naturally in a wide range of applications [1], [2], [3], [4]. Examples of sequential data include click streams of web users, purchase histories of online customers, and DNA sequences of genes. Different from conventional multidimensional data [5], the sequential data [6] are not represented as feature vectors of continuous values, but as sequences of categorical items with variable-lengths. Many real-world applications involve mining tasks over sequential data [4], [7], [3]. For example, in online ticketing systems, administrators are interested in finding fraudulent sequences from the clickstreams of users. In user profiling systems, researchers are interested in grouping purchase histories of customers into clusters. Motivated by these real-world applications, sequential data mining has received considerable attention in recent years [2], [1]. Sequential data usually requires a careful design of its embedding before being fed to data mining algorithms. One of the feature learning problems on sequential data is called sequence embedding [8], [9], where the goal is to transform a sequence into a fixed-length embedding. Conventional methods on sequence embedding focus on learning from sequential data alone [10], [8], [9], [11]. However, in many real-world applications, sequences are often associated with a set of attributes. We define such data as attributed sequences, where each instance is represented by a set of attributes associated with a sequence. For example, in online ticketing systems as shown in Fig. 1, each user transaction includes both a sequence of user actions (e.g., "login", "search" and "pick seats") and a set of attributes (e.g., "user name", "browser" and "IP address") indicating the context of the transaction. In gene function analysis, each gene can be represented by both a DNA sequence and a set of attributes indicating the expression levels of the gene in different types of cells. Motivated by the recent success in attributed graph embedding [12], [13], in this paper, we study the problem of attributed sequence embedding. Building embedding for attributed sequences (as shown in Fig. 2d corresponds to transforming an attributed sequence into a fixed-length embedding with continuous values. 
Different from the work in [14], [15], we do not have labels for any attributed sequence instances in the embedding task. Sequence embedding problems are particularly challenging with additional attributes. In sequence embedding problems (as shown in Fig. 2a), conventional methods focus on modeling the item dependencies, i.e., the dependencies between different items within a sequence. However, in attributed sequences, the dependencies between items can differ if the sequence is observed under different contexts (attributes). Even the same ordering of the items can have different meanings if associated with different attribute values. In this paper, instead of building embeddings to model only the dependencies between items in each single sequence, we aim to model three types of dependencies in an attributed sequence jointly: (1) item dependencies, (2) attribute dependencies (i.e., the dependencies between different attributes) and (3) attribute-sequence dependencies (i.e., the dependencies between attributes and items in a sequence). Despite its relevance, the problem of producing attributed sequence embeddings in an unsupervised setting remains open. We summarize the major research challenges as follows: 1) Heterogeneous Dependencies. The bipartite structure of attributed sequences poses unique challenges in feature learning. [Fig. 2: (a) Sequence embedding; (b) Attribute embedding [16]; (d) Attributed sequence embedding (this paper).] As shown in Fig. 1, there exist three types of possible dependencies in an attributed sequence: item dependencies, attribute dependencies and attribute-sequence dependencies. Motivating Example 1. In Fig. 3, we present an example of fraud detection from a user privilege management system at Amadeus [18]. This system logs each user session as an attributed sequence (denoted as J_1 ∼ J_5). Each attributed sequence consists of a sequence of user activities and a set of attributes derived from metadata values. The attributes (e.g., "IP", "OS" and "Browser") are recorded when a user logs into the system and remain unchanged during each user session. We use different shapes and colors to denote different user activities, e.g., "Reset password", "Delete a user". In real-world applications like this, the attributes and the associated sequences are already saved within one integrated record. An important step in this fraud detection system is to "red flag" suspicious user sessions for potential security breaches. In Fig. 3, we observe three groups of embeddings learned from the Amadeus application logs. For each group, we use a dendrogram to demonstrate the similarities between embeddings within that group. Neither the embeddings using only sequences nor those using only attributes detect any outliers, due to the lack of consideration of attribute-sequence dependencies. However, user session J_5 is discovered to be fraudulent using a learning algorithm that incorporates all three types of dependencies. 2) Lack of Labeled Data. With the continuously incoming volume of data and the high labor cost of manually labeling data, it is rare to find attributed sequences from real-world applications with labels (e.g., fraud, normal) attached. Fig. 3: Dendrograms of embeddings learned from attributed sequences for fraud detection tasks. J_5 is a user committing fraud. However, it is considered a normal user session by the embedding generated using either only attributes or only sequences. J_5 can only be caught as a fraud instance using the embedding learned using both attributes and sequences.
Without proper labels, it is challenging to learn an embedding function that is capable of transforming attributed sequences into compact embeddings respecting the three types of dependencies. Motivating Example 2. Continuing with our Motivating Example 1, the Amadeus system records user activities and their session metadata in log files. Due to the large volume of entries and complex user sessions, the log files do not have labels depicting whether a user session is fraudulent or not. Only when an embedding function is capable of transforming the unlabeled user sessions J_1 ∼ J_5 while respecting the differences between them can an anomaly detection algorithm identify J_5 as a fraudulent session. In this paper, we focus on the generic problem of embedding attributed sequences in an unsupervised fashion. We propose a novel framework (called NAS) using deep learning models to address the above challenges. This paper offers the following contributions: • We study the problem of attributed sequence embedding without any labels available. • We propose a framework and a training strategy to exploit the dependencies among the attributed sequences. • We evaluate the embeddings generated by the NAS framework on real-world datasets using outlier detection tasks. We also conduct case studies of user behavior analysis and demonstrate the usefulness of NAS in real-world applications. A. Preliminaries Definition 1 (Sequence): Given a set of r categorical items I = {e_1, · · · , e_r}, the k-th sequence in the dataset, S_k = (α_k^(1), · · · , α_k^(l_k)), is an ordered list of l_k items, where α_k^(t) ∈ I, ∀t = 1, · · · , l_k. Different sequences can have a varying number of items. For example, the number of user click activities varies between different user sessions. The meanings of items are different in different datasets. For example, in user behavior analysis from clickstreams, each item represents one action in a user's clickstream (e.g., I = {search, select}, where r = 2). Similarly, in DNA sequencing, each item represents one canonical base (e.g., I = {A, T, G, C}, where r = 4). There are dependencies between items in a sequence. Without loss of generality, we use the one-hot encoding of S_k, in which each item α_k^(t) is represented by the corresponding one-hot vector in {0, 1}^r. Additionally, each sequence is associated with a set of attributes. Each attribute value can be either categorical or numerical. The attribute values are denoted using a vector x_k ∈ R^u, where u is the number of attributes in x_k. For example, in a dataset where each instance has two attributes "IP" and "OS", u = 2. With the attributes and sequences, we now formally define the attributed sequences (Def. 2) and the attribute-sequence dependencies (Def. 3). Definition 2 (Attributed Sequence): Given a vector of attribute values x_k and a sequence S_k, an attributed sequence J_k = (x_k, S_k) is an ordered pair of the attribute value vector x_k and the sequence S_k. B. Problem Definition The goal of attributed sequence embedding is to learn an embedding function that transforms each attributed sequence, with its variable-length sequence of categorical items and its set of attributes, into a compact representation in the form of a vector. However, these representations are only valuable if the embedding function is capable of learning all three types of dependencies. Hence, given a set of attributed sequences, we define the learning objective of the embedding function as a minimization of the aggregated negative log likelihood over all three types of dependencies.
Definition 4 (Attributed Sequence Embedding): Given a dataset of attributed sequences J = {J_1, ..., J_n}, the problem of attributed sequence embedding is to find an embedding function Θ with a set of parameters (denoted as θ) that produces embeddings for each J_k in the form of vectors. The problem is formulated as minimizing, over θ, the aggregated negative log likelihood of the three types of dependencies, where the prefix (α_k^(1), ..., α_k^(t−1)), ∀t = 2, ..., l_k, represents the items prior to α_k^(t) in the sequence. Our problem can thus be interpreted as follows: we want to minimize the prediction error of each item α_k^(t) given the items prior to it and the attribute values, together with the error of reconstructing the attribute values themselves.
A. Attribute Network
A fully connected neural network [19] is capable of modeling the dependencies of the inputs and, at the same time, reducing the dimensionality. Fully connected neural networks have been widely used [20], [19], [21] for unsupervised representation learning, including tasks such as dimensionality reduction and generative data modeling. With the high-dimensional, sparse input attribute values x_k ∈ R^u, it is ideal to use such a network to learn the attribute dependencies. We design our attribute network as an encoder-decoder stack of fully connected layers with two activation functions, ρ and σ. In this attribute network, we use the ReLU function proposed in [22] (defined as ρ(z) = max(0, z)) and the sigmoid function (defined as σ(z) = 1/(1 + e^(−z))). The attribute network is an encoder-decoder stack with 2M layers, where the first M layers compose the encoder while the next M layers work as the decoder. With d_M hidden units in the M-th layer, the input attribute vector x_k ∈ R^u is compressed by the encoder into a representation V_k with d_M dimensions. Then the decoder attempts to reconstruct the input and produce the reconstruction result (denoted here as x̂_k ∈ R^u). An ideal attribute network should be able to reconstruct the input from the encoded representation V_k.
B. Sequence Network
The proposed sequence network is a variation of the long short-term memory model (LSTM) [23]. The sequence network takes advantage of the conventional LSTM to learn the dependencies between items in sequences. The conventional LSTM [23] uses an input gate i, a forget gate f, an output gate o, a memory cell c and a hidden state h, updated at each time step t as
i^(t) = σ(W_i α_k^(t) + U_i h^(t−1) + b_i),
f^(t) = σ(W_f α_k^(t) + U_f h^(t−1) + b_f),
o^(t) = σ(W_o α_k^(t) + U_o h^(t−1) + b_o),
c^(t) = f^(t) ⊙ c^(t−1) + i^(t) ⊙ tanh(W_c α_k^(t) + U_c h^(t−1) + b_c),
h^(t) = o^(t) ⊙ tanh(c^(t)),
where ⊙ denotes the element-wise product and σ is a sigmoid activation function.
Integration of Attribute Network and Sequence Network. Different from the conventional LSTM, our proposed sequence network also accepts the output from the attribute network to condition the sequence network. In particular, we have redesigned the function of the hidden states to integrate the information from the attribute network by conditioning the sequence network on the attribute network's output at the first time step. This integration requires that the attribute network and the sequence network have the same number of hidden units (i.e., d_M = d). Since the attributed sequences are unlabeled, we designed the sequence network to predict the next item in the sequence as the training strategy. The prediction is carried out by an output layer that applies a softmax function to the hidden states, y_k^(t) = softmax(W_y h^(t) + b_y), where y_k^(t) ∈ R^r is the predicted next item in the sequence and W_y and b_y are the weights and bias of this output layer. With the softmax activation function, y_k^(t) can be interpreted as a probability distribution over the r items.
C. Training
1) Training Objectives: We use two different learning objectives for the attribute network and the sequence network, targeting the distinct characteristics of attribute and sequence data. 1) The attribute network aims at minimizing the differences between the input and the reconstructed attribute values; its learning objective function is therefore defined as the reconstruction error between x_k and x̂_k. 2) The sequence network aims at minimizing the negative log likelihood of the next-item prediction at each time step.
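Before the objectives are made precise below, the following minimal numpy sketch puts the attribute network, the sequence network and the two training objectives together. It makes several simplifying assumptions: a single-layer encoder and decoder (M = 1), LSTM gate biases omitted for brevity, "conditioning at the first time step" read as initializing the LSTM hidden state with the attribute encoding, and squared error as the attribute reconstruction measure. These are illustrative choices, not the exact NAS configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
u, d, r = 3, 8, 4                      # attributes, hidden units (d_M = d), vocabulary size

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def relu(z):    return np.maximum(0.0, z)
def softmax(z): e = np.exp(z - z.max()); return e / e.sum()

# Attribute network, single-layer encoder/decoder for brevity (M = 1).
W_enc, b_enc = rng.normal(scale=0.1, size=(d, u)), np.zeros(d)
W_dec, b_dec = rng.normal(scale=0.1, size=(u, d)), np.zeros(u)

# Sequence network: conventional LSTM gates acting on [h_{t-1}; current item] (biases omitted).
W_i, W_f, W_o, W_c = (rng.normal(scale=0.1, size=(d, d + r)) for _ in range(4))
W_y, b_y = rng.normal(scale=0.1, size=(r, d)), np.zeros(r)     # softmax output layer

def lstm_step(item_t, h_prev, c_prev):
    z = np.concatenate([h_prev, item_t])
    i, f, o = sigmoid(W_i @ z), sigmoid(W_f @ z), sigmoid(W_o @ z)
    c = f * c_prev + i * np.tanh(W_c @ z)          # cell state update
    h = o * np.tanh(c)                             # hidden state
    return h, c

def forward(x_k, S_k):
    """x_k: attribute vector in R^u; S_k: l_k x r one-hot item matrix. Returns the
    attribute reconstruction, the per-step next-item distributions, and the final
    cell state c_k^(l_k), which is the quantity NAS keeps as the embedding."""
    V_k = relu(W_enc @ x_k + b_enc)                # encoder output (d_M = d units)
    x_hat = sigmoid(W_dec @ V_k + b_dec)           # decoder reconstruction of x_k
    h, c = V_k, np.zeros(d)                        # condition the LSTM at the first step
    y = []
    for item_t in S_k:
        h, c = lstm_step(item_t, h, c)
        y.append(softmax(W_y @ h + b_y))           # y_k^(t): distribution over r items
    return x_hat, np.array(y), c

def attribute_loss(x_k, x_hat):
    """Reconstruction term for the attribute network (squared error assumed here)."""
    return np.sum((x_k - x_hat) ** 2)

def sequence_loss(y, S_k):
    """Categorical cross-entropy: the prediction at step t is scored against item t+1."""
    return -sum(np.sum(S_k[t + 1] * np.log(y[t] + 1e-12)) for t in range(len(S_k) - 1))

# Toy attributed sequence: 3 attributes in [0, 1], 3 one-hot items from a 4-item vocabulary.
x_k = np.array([1.0, 0.5, 0.0])
S_k = np.eye(r)[[0, 1, 2]]
x_hat, y, embedding = forward(x_k, S_k)
print(embedding.shape, attribute_loss(x_k, x_hat), sequence_loss(y, S_k))
```

In a full implementation the two losses would be minimized over the parameters with stochastic gradient descent, in line with the optimizer settings reported in the experimental setup.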
Thus, the sequence network learning objective function can be formulated as the categorical cross-entropy between each prediction y_k^(t) and the observed next item α_k^(t+1), summed over the time steps of the sequence.
2) Embedding: After the model is trained, we use the parameters in the attribute network and the sequence network to embed each attributed sequence. Specifically, the attributed sequences are used as inputs to the trained model for a single forward pass, during which the parameters within the model remain unchanged. After the last time step for an attributed sequence S_k, the cell state of the sequence network, namely c_k^(l_k), is used as the embedding of S_k.
IV. EXPERIMENTAL EVALUATION
In this section, we evaluate the NAS framework using real-world application logs from Amadeus and public datasets from Wikispeedia [24], [25]. We evaluate the quality of the embeddings generated by different methods by measuring the performance of outlier detection algorithms using those embeddings.
A. Experimental Setup
1) Data Collection: We use four datasets in our experiments: two from Amadeus application log files and two from Wikispeedia. The numbers of attributed sequences in all four datasets vary between ∼58k and ∼106k.
• AMS-A/B: We extract ∼164k instances from the log files of an Amadeus internal application. Each record is composed of a profile containing information ranging from system configurations to office name, and a sequence of functions invoked by click activities on the web interface. There are 288 distinct functions and 57,270 distinct profiles in this dataset. The average length of the sequences is 18.
• WIKI-A/B: This dataset is sampled from the Wikispeedia dataset, which originated from a human-computation game called Wikispeedia [25]. We use a subset of ∼3.5k paths from Wikispeedia with an average path length of 6. We also extract 11 sequence context features (e.g., the category of the source page, average time spent on each page) as attributes.
2) Compared Methods: To study NAS performance on attributed sequences in real-world applications, we use the following compared methods in our experiments.
• LEN [26]: The attributes are encoded and directly used in the mining algorithm.
• MCC [27]: MCC uses the sequence component of each attributed sequence as input and produces a log likelihood for each sequence.
• SEQ [9]: Only the sequence inputs are used by an LSTM to generate fixed-length embeddings.
• ATR [16]: A two-layered fully connected neural network is used to generate attribute embeddings.
• EML [28]: Aggregates the MCC and LEN scores.
• CSA [29]: The attribute embedding and the sequence embedding are independently generated by ATR and SEQ, then concatenated together.
3) Network Parameters: Following previous work [30], we initialize the weight matrices W_A and W_S using the uniform distribution. The recurrent matrix U_S is initialized as an orthogonal matrix, as suggested by [31]. All bias vectors are initialized with the zero vector 0. We use stochastic gradient descent as the optimizer with a learning rate of 0.01. We use a two-layer encoder-decoder stack as our attribute network.
B. Outlier Detection Tasks
We use outlier detection tasks to evaluate the quality of the embeddings produced by NAS. We select the k-NN outlier detection algorithm as it has only one important parameter (i.e., the k value). We use ROC AUC as the metric in this set of experiments. For each of the AMS-A and AMS-B datasets, we ask domain experts to select two users as inlier and outlier. These two users have completely different behaviors (i.e., sequences) and metadata (i.e., attributes).
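The evaluation pipeline just described, k-NN outlier detection scored with ROC AUC, can be sketched as follows. The distance to the k-th nearest neighbour is used here as the outlier score, which is one common instantiation of k-NN outlier detection; the paper does not spell out its exact variant beyond the k parameter, and the toy data below merely stands in for the learned embeddings and the expert-provided inlier/outlier labels.

```python
import numpy as np

def knn_outlier_scores(embeddings, k=5):
    """Score each embedding by its distance to its k-th nearest neighbour;
    larger distances suggest outliers."""
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # exclude each point's distance to itself
    return np.sort(dists, axis=1)[:, k - 1]

def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation; ties are not handled."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos, neg = labels == 1, labels == 0
    n_pos, n_neg = pos.sum(), neg.sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy usage: 2% of the points come from a shifted distribution and act as "outliers".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (98, 15)), rng.normal(4, 1, (2, 15))])
y = np.array([0] * 98 + [1] * 2)
print(roc_auc(y, knn_outlier_scores(X, k=5)))       # close to 1.0 for well-separated outliers
```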
The percentages of outliers in AMS-A and AMS-B are 1.5% and 2.5% of all attributed sequences, respectively. For the WIKI-A and WIKI-B datasets, each path is labeled based on the category of the source page. Similar to the previous two datasets, we select paths with two labels as inliers and outliers, where the percentage of outlier paths is 2%. The feature used to label paths is excluded from the learning and embedding process.
Performance. The goal of this set of experiments is to demonstrate the performance of outlier detection using all our compared methods. Each method is trained with all the instances. For SEQ, ATR and NAS, the number of learning epochs is set to 10 and we vary the number of embedding dimensions d from 15 to 30. We set k = 5 in the outlier detection tasks for LEN, SEQ, ATR, CSA and NAS. Choosing the optimal k value for the outlier detection tasks is beyond the scope of this work, thus we omit its discussion. We summarize the performance results in Fig. 6. (Fig. 6 caption, in part: the methods not using embeddings are placed on the left; on the right, the number of embedding dimensions is varied; a higher score is better; the combinations of k-NN and NAS embeddings have the best performance on the four datasets.)
Analysis. We find that the results based on the embeddings generated by NAS are superior to those of the other methods. That is, NAS outperforms the other state-of-the-art algorithms by up to 32.9%, 27.5%, 44.8% and 48% on the AMS-A, AMS-B, WIKI-A and WIKI-B datasets, respectively. It is worth mentioning that NAS outperforms the similar baseline method CSA by incorporating the information of attribute-sequence dependencies.
Parameter Study. There are two key parameters in our evaluations, i.e., the k value of the k-NN algorithm and the number of learning epochs. In Fig. 4, we first show that the embeddings (dimension d = 15) generated by our NAS help the k-NN outlier detection algorithm achieve superior performance under a wide range of k values (k = 5, 10, 15, 20, 25). We omit a detailed discussion of selecting the optimal k values, as it is beyond the scope of this work. Fig. 5 presents the results when we fix k = 5 and d = 15 and vary the number of epochs in the learning phase. We notice that, compared to its competitors, the embeddings generated by NAS can achieve a higher AUC even with relatively few learning epochs. Compared to the other neural network-based methods (i.e., SEQ, ATR and CSA), NAS has more stable performance. The NAS performance gain is not due to the advantage of using both attributes and sequences, but to taking the various dependencies into account, as the other two competitors (i.e., CSA and EML) also utilize the information from both attributes and sequences.
V. RELATED WORK
Sequence Mining. Much work in sequence mining focuses on frequent sequence pattern mining. Recent work in [2] targets finding subsequences of possible non-consecutive actions constrained by a gap within sequences. [32] aims at solving pattern-based sequence classification problems using a parameter-free algorithm from the model space. It defines rule pattern models and a prior distribution on the model space. [33] builds a subsequence interleaving model for mining the most relevant sequential patterns.
Deep Learning. Sequence-to-sequence learning in [9] uses a long short-term memory model for machine translation. The hidden representations of sentences in the source language are transferred to a decoder to reconstruct them in the target language.
The idea is that the hidden representation can be used as a compact representation to transfer sequence similarities between two sequences. Multi-task learning in [11] examines three multi-task learning settings for sequence-to-sequence models that aim at sharing either an encoder or a decoder in an encoder-decoder model setting. Although the above works are capable of learning the dependencies within a sequence, none of them focuses on learning the dependencies between attributes and sequences. The new bipartite data type of attributed sequences poses new challenges of heterogeneous dependencies to sequence models such as RNNs and LSTMs. Multimodal deep neural networks [34], [29], [35] are designed for information sharing across multiple neural networks, but none of these works focuses on our attributed sequence embedding problem.
VI. CONCLUSION
In this paper, we study the problem of unsupervised attributed sequence embedding. Different from conventional feature learning approaches, which work on either sequences or attributes without considering the attribute-sequence dependencies, we identify the three types of dependencies in attributed sequences. We propose a novel framework, called NAS, to learn the heterogeneous dependencies and embed unlabeled attributed sequences. Empirical studies on real-world tasks demonstrate that the proposed NAS effectively boosts the performance of outlier detection tasks compared to baseline methods.
2019-11-03T19:16:51.000Z
2019-11-03T00:00:00.000
{ "year": 2019, "sha1": "17a2a5d7c4475cf16bbeb3429f211c2b6cd6fb5c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.00949", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2ddd619e0bf87759105004815c7d2548cff0c95f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
248416667
pes2o/s2orc
v3-fos-license
Multisensory synchrony of contextual boundaries affects temporal order memory, but not encoding or recognition We memorize our daily life experiences, which are often multisensory in nature, by segmenting them into distinct event models, in accordance with perceived contextual or situational changes. However, very little is known about how multisensory boundaries affect segmentation, as most studies have focused on unisensory (visual or audio) segmentation. In three experiments, we investigated the effect of multisensory boundaries on segmentation in memory and perception. In Experiment 1, participants encoded lists of pictures while audio and visual contexts changed synchronously or asynchronously. After each list, we tested recognition and temporal associative memory for pictures that were encoded in the same audio-visual context or that crossed a synchronous or an asynchronous multisensory change. We found no effect of multisensory synchrony for recognition memory: synchronous and asynchronous changes similarly impaired recognition for pictures encoded at those changes, compared to pictures encoded further away from those changes. Multisensory synchrony did affect temporal associative memory, which was worse for pictures encoded at synchronous than at asynchronous changes. Follow up experiments showed that this effect was not due to the higher dimensionality of multisensory over unisensory contexts (Experiment 2), nor that it was due to the temporal unpredictability of contextual changes inherent to Experiment 1 (Experiment 3). We argue that participants formed situational expectations through multisensory synchronicity, such that synchronous multisensory changes deviated more strongly from those expectations than asynchronous changes. We discuss our findings in light of supportive and conflicting findings of uni- and multi-sensory segmentation. Introduction We temporally segment our everyday experiences into distinct episodic events in memory (Brunec et al., 2018;Radvansky & Zacks, 2017;Zacks, 2020). Segmentation crucially depends on perceived changes in contextual features, which constitute deviations from situational predictions and thereby act as event boundaries separating previous from current event models in working memory (Radvansky & Zacks, 2017;Richmond & Zacks, 2017). Temporal segmentation occurs in many different situations, from crossing a doorway segmenting experiences to different spatial environments (Radvansky & Copeland, 2006;van Helvoort et al., 2020) to narrative changes in space, time, perspective or action goals segmenting our memory of film or story materials (Newtson, 1973;Radvansky & Copeland, 2010;Schwan & Garsoffky, 2004;Schwan et al., 2000;Swallow et al., 2018;Zacks et al., 2009). How we segment our experiences influences how we perceive and remember them. The detection and processing of contextual boundaries comes at an attentional cost (Huff et al., 2012), but can also enhance recognition memory for information presented near those boundaries relative to information away from boundaries (Aly & Turk-Browne, 2016;Newtson & Engquist, 1976;Swallow et al., 2009). Further, experiences from the same situational context become more strongly associated in memory, such that experiences sharing a common contextual segment in memory can be more readily or accurately retrieved than experiences from different contextual segments (Radvansky & Copeland, 2006;Smith, 1985;van Helvoort et al., 2020). 
As such, boundaries help shape memory and understanding of temporally structured experiences. Segmentation has often been studied using movie clips, in which the contextual changes are typically multisensory (i.e., audio and visual) (Baldassano et al., 2017;Ben-Yakov & Henson, 2018;Boltz, 1992;Chen et al., 2017;Cutting, 2019;Furman et al., 2007;Huff et al., 2014;Newtson, 1973;Schwan & Garsoffky, 2004;Schwan et al., 2000;Zacks et al., 2009). Segmentation studies that used unisensory contextual features showed comparable segmentation effects in the visual (Ezzyat & Davachi, 2011;Newberry & Bailey, 2019;Zacks et al., 2009) and auditory domain (Baldassano et al., 2018;Huff et al., 2018;Sridharan et al., 2007), suggesting that segmentation is independent of sensory modality. However, the contribution of multisensory dimensionality to temporal segmentation has remained under-investigated. Recently, Meitz et al. (Meitz et al., 2020) compared the detection of film cuts occurring between or within scenes (i.e., at or away from filmic boundaries) when movie clips were presented with or without their audio tracks. They found that participants better detected film cuts at boundaries than cuts away from boundaries, irrespective of the audibility of the audio track. Likewise, recognition memory was better for between-scene changes than for within-scene changes, irrespective of whether the audio track was played during encoding. The authors suggested that segmentation followed semantically congruent boundaries, rather than multisensory complexity or integration. Possibly, the semantic associations between the audio and visual tracks made the multisensory information redundant in segmenting perception and memory. This suggestion is in line with a previous finding (Meyerhoff & Huff, 2016) that reversing the visual track of movie clips did not decrease subsequent recognition performance compared to synchronous audio and visual tracks, indicating that event memory depended on semantic rather than multisensory congruency. The lack of a multisensory effect in segmentation appears at odds with observations that boundary detection increases with increasing number of changing narrative dimensions, such as space, time or action goal . Event segmentation theories postulate that more concurrently changing contextual features would constitute a larger deviation of situational predictions in working memory (Zacks, 2020). Likewise, an effect of audio-visual integration on boundary processing would be expected from the perspective that multisensory synchronization facilitates stimulus encoding (Chen & Spence, 2010;ten Oever et al., 2013) and memory formation of individual items (Botta et al., 2011;Thompson & Paivio, 1994). Indeed, audio-visually presented movie clips are subsequently better recognized than audio-only or visual-only clips (Meyerhoff & Huff, 2016). Further, visual (or auditory) information is better encoded when semantically congruent auditory (resp., visual) information is presented synchronously rather than asynchronously with the other modality (Bushara et al., 2003;Miller & D'Esposito, 2005;Van Atteveldt et al., 2007;Chen & Spence, 2010;ten Oever et al., 2013). Another point of contention is that the (lack of) effect of multisensory boundaries in previous studies was obtained from recognition memory, rather than associative memory. 
The enhanced recognition of items encoded at contextual boundaries (as opposed to those away from boundaries) may come with the trade-off of impaired temporal binding between items crossing a boundary in associative memory (DuBrow & Davachi, 2013;Heusser et al., 2018;van de Ven et al., 2021). The context dependence of associative memory could thus be more sensitive than recognition memory to the effect of multisensory boundary processing and segmentation (Clewett & Davachi, 2017), but this scenario remains to be tested. To investigate these issues, we conducted three experiments in which participants encoded lists of random, unrelated visual objects while audio and/or visual contextual features changed after a number of objects. Previous studies using this design showed that visual (DuBrow & Davachi, 2013Heusser et al., 2018) or temporal boundaries (van de Ven et al., 2021) impaired temporal order memory judgments for picture pairs crossing a boundary during encoding relative to temporal memory judgments for picture pairs coming from the same context. The contextual changes thus mimicked the effect of boundaries in narrative segmentation (Ezzyat & Davachi, 2011;Lositsky et al., 2016). In our experiments, we used continuously presented audio and visual contexts in the form of, respectively, ambient soundscapes and colored frames. The audio and visual contexts were not semantically related, and neither were the pictures semantically related to the audio or visual contexts. In Experiment 1, we manipulated the synchrony of audio-visual boundaries and assessed its effect on temporal order memory performance. We hypothesized that, if multisensory synchronicity affects segmentation, then temporal order memory performance for items crossing a synchronous multisensory boundary would be worse than performance for items crossing an asynchronous multisensory boundary. In this experiment, the contexts were continuously multisensory. The perceptual expectations would therefore likely differ from a unisensory context, such that an asynchronous multisensory boundary (e.g., changing audio but continuous visual context) may affect segmentation differently than a unisensory boundary (changing audio in the absence a visual context). If boundary processing is sensitive to perceptual dimensionality, then multisensory boundaries would impair across-context temporal order judgments more than unisensory boundaries. We tested this hypothesis in Experiment 2. Finally, the mix of synchronous and asynchrous boundaries in Experiment 1 could be experienced as irregular or unpredictable, and thereby affect boundary processing independently of perceptual dimensionality. We investigated this issue in Experiment 3, in which we manipulated temporal expectancy of unisensory contextual 1 3 changes, such that audio or visual changes occurred at regular or irregular intervals. If boundary processing depends on temporal expectations about when a boundary occurs, then across-context temporal order memory judgments would be worse for irregularly than regularly distributed boundaries. Experiment 1 In Experiment 1, we tested the hypothesis that synchronous multisensory boundaries would impair temporal order processing in memory more than asynchronous boundaries. To this end, we manipulated the synchronicity of multisensory contextual boundaries during the encoding of a series of visual objects and assessed its effect on subsequent recognition or temporal memory performance. 
While audio and visual contexts were continuously and concurrently presented, the contextual changes occurred in synchrony (multisensory audio and visual context change) or out of synchrony (unisensory change of either audio or visual context). Participants We initially recruited 34 participants in the age range of 18-40 years from the academic environment of Maastricht University. Participants were recruited via social media platforms and were required to have at-home access to a computer or laptop, headphones, Internet access to download and install the experiment software and a quiet place without distractions (see below for further details). Participants who could not or were not willing to install the experiment software were excluded from participation in the study. Of the recruited sample, 24 participants (16 females; mean ± SD age = 21.2 ± 2.0 years, range 18 to 27) successfully installed and completed the experiment (the other 10 participants could not install, run or complete the experiment for technical reasons). All participants provided informed consent before participating in the experiment and were monetarily compensated. The study was approved by the ethical committee of the Faculty of Psychology and Neuroscience of Maastricht University. Procedure Due to national regulations in response to the COVID-19 pandemic during 2020-2021 in The Netherlands, we designed the experiment so that it could be completed at home. We programmed the experiment in Psychopy (Peirce, 2007), which has been shown to operate reliably at high temporal precision and with limited variations across operating systems (Bridges et al., 2020;Garaizar & Vadillo, 2014). We asked participants to download and install the latest version of Psychopy from the website. After successful installation, participants downloaded, unpacked and ran the experiment code. Participants were instructed to use headphones in order to maximize audibility of the sound stimuli and minimize distracting environmental sounds. Participants were further instructed to reduce distractions to a minimum by turning their mobile phone or social media apps off during the experiment. Prior to starting the experiment, participants were contacted via online conference call by one of us (JS or GK) to verify correct software installation and compliance to task instructions. After completion of the experiment, participants were asked to return the data files by email to the investigators. Materials and task design Participants saw 12 lists of 36 visual items, which were randomly selected for each participant from a publicly available image set (Kovalenko et al., 2012). Each item was presented for 2.5 s, with a 2 s interval between consecutive items. Items were presented on an audio-visual background that comprised audioscapes of continuous ambient sounds (audio context) and a colored frame (visual context; Fig. 1). To motivate active encoding of the stimuli (Sheldon, 2020), participants considered for each item how pleasant they found its combination with the audio and the visual contexts (that is, including both the ambient sound and the frame color) during the time the item was presented on the screen. For each list, the audio and visual contexts changed at different rates. In half of the lists, the audio context changed every six items while the visual context changed every nine items. This resulted in two synchronous and six asynchronous audio-visual contextual changes. 
The asynchronous changes comprised four audio changes (while visual context did not change) and two visual changes (while audio context did not change). In the other half of the lists, the ratio of visual to audio changes was reversed. Note that all contextual changes were multisensory, and that multisensory contexts were continuously presented throughout the encoding of the items. For each list, frame color was randomly selected from a color set (red, yellow, blue, green, purple, black, white, pink) without repetition. The audioscapes were generated from the soundtracks of publicly available online ambience videos and were chosen to be perceptually different from one another in terms of spectral distribution (Fig. 1B) and perceived environment, and to contain no intelligible speech or resemblance of music. The selected soundtracks included continuous natural sounds from different environments (oceanside beach sounds, cave water drops and rustling, fireplace crackle, office buzz, rainfall and underwater bubbling). The ambient samples were individually edited to match volume, while different frequencies were pronounced in each soundtrack to exaggerate the perceived differences. Sound samples were also stereo separated and minor reverb was added to increase the feeling of immersion. Processed audiofiles were cut to 41 s duration. After each list, item recognition and temporal order memory were tested in two separate tasks. In the recognition task, participants were presented with items that were either shown during the preceding list or not and had to indicate whether they thought they saw it during the preceding list (Old judgment) or not (New judgment). Items that were drawn from the preceding list were taken directly after a contextual change (boundary item) or three items away from a contextual change, provided that it did not overlap with a following contextual change (non-boundary item; see Fig. 1A). Lure items were drawn from a set of items that were never presented during any of the encoding lists. Each recognition test trial ended upon button response, with a maximum response window of 6 s. In the temporal order memory test, participants saw pairs of items that were drawn from the preceding list and had to indicate which item of the pair was presented first. The items of a temporal order pair were presented on the left and right side of the center of the screen. The item that was presented first during the encoding phase was shown on the left in half of the trials, and shown on the right in the other half of the trials. The order of trials with left or right-sided presentation of the first encoded item was randomized across lists and participants. Pairs were either drawn from the same audiovisual context (Within pair) or from opposite sides of a contextual change (Across pair), with the number of items spanning between those of the pair being equal for Within and Across trials. Further, Across trials included pairs that were drawn across synchronous or asynchronous audiovisual context changes. Each trial ended upon button response, with a maximum response window of 6 s. In half of the lists, frame color changed every six pictures while soundscapes changed every nine pictures (vice versa for other half of lists), such that audio and visual contexts sometimes changed synchronously (Sync) or asynchronously (Async). 
After each list, recognition memory for boundary (Sync or Async) and non-boundary items (Non) was tested, as well as temporal order memory for items drawn from the same audiovisual contexts (Within) or crossing a Sync or Async boundary (Across). This design was also used in Experiments 2 and 3, in which audio and visual contexts were presented simultaneously or separately (Experiment 2) or at regular or irregular intervals (Experiment 3). (Fig. 1B shows audio spectrograms of the six soundscapes; memory for the presented pictures was assessed using a visual object recognition task, Fig. 1C, and a temporal order memory task, Fig. 1D.)
Analysis
For the recognition task, we calculated d' from the hit rates and false alarm rates (Macmillan & Creelman, 2005;Stanislaw & Todorov, 1999) for each of the three types of boundary (non-boundary, asynchronous boundary and synchronous boundary). We then calculated statistical effects using a one-way repeated measures ANOVA with Boundary as within-subject factor. Average response times for correct trials were also analysed using the same ANOVA model. For the temporal order task, we calculated hit rate for the three types of context (within the same context, across asynchronous contexts and across synchronous contexts) and calculated statistical effects using a one-way repeated measures ANOVA with Context as within-subject factor. The same ANOVA model was used for the analysis of the response times (correct trials only). All ANOVA models included a full-factorial interaction term, using Type III sums of squares. Post hoc paired comparisons were conducted to parse significant main or interaction effects (p < 0.05). We report partial eta-squared, ηp², as effect size for significant ANOVA main or interaction effects, unless otherwise stated. Post hoc comparisons are reported as F-tests and ηp² to facilitate comparison with the ANOVA outcomes. All statistical analyses were conducted using the open-source freeware package JASP (JASP Team, 2018), which runs on all three major operating systems.
The asynchronous boundary trials included two types of unisensory contextual changes: audio but no visual changes, and visual but no audio changes. To explore whether these two asynchronous boundaries affected recognition performance differently, we conducted a post hoc paired samples T test and found no significant effect (F(1,23) = 1.92, P = 0.18). Figure 2B shows the means and 95% confidence intervals of response times (correct trials only) for each of the three boundary items.
Temporal order task
Figure 2C shows mean hit rate and 95% confidence intervals for each of the three contexts. A one-way repeated measures ANOVA yielded a significant effect of Context (F(2,46) = 8.97, P < 0.001, ηp² = 0.28). Temporal order judgments for pairs crossing a synchronous boundary (mean ± SE = 0.50 ± 0.02) were significantly less accurate than judgments for pairs crossing an asynchronous boundary (0.58 ± 0.02; F(1,23) = 2.68, P = 0.013, ηp² = 0.24), as well as judgments for pairs drawn from the same audiovisual context (Within Context pairs; 0.62 ± 0.02; F(1,23) = 12.11, P = 0.002, ηp² = 0.34). The lower accuracy for temporal order judgments for asynchronous Across compared to Within context pairs did not reach significance (F(1,23) = 3.61, P = 0.07, ηp² = 0.14). Post hoc comparison between temporal order judgments crossing the two types of asynchronous boundary yielded no significant effect (F(1,23) = 1.86, P = 0.19).
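For reference, recognition sensitivity in the analyses above is computed as d' = z(hit rate) − z(false-alarm rate). Below is a minimal Python sketch of that calculation; the clamping of perfect rates shown is one common convention and is an assumption here, since the exact correction used in the analyses is not stated.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate, n_trials=None, correction=0.5):
    """d' = z(H) - z(F). If n_trials is given, rates of exactly 0 or 1 are clamped to
    correction/n_trials (and its complement) so the z-transform stays finite; this
    particular correction is an illustrative choice, not taken from the paper."""
    z = NormalDist().inv_cdf
    if n_trials is not None:
        lo, hi = correction / n_trials, 1 - correction / n_trials
        hit_rate = min(max(hit_rate, lo), hi)
        false_alarm_rate = min(max(false_alarm_rate, lo), hi)
    return z(hit_rate) - z(false_alarm_rate)

# Example: a hit rate of 0.9 and a false-alarm rate of 0.1 give d' of about 2.56.
print(round(d_prime(0.9, 0.1), 2))
```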
Figure 2D shows the means and 95% confidence intervals of response times (correct trials only) for each of the three temporal order contexts. Repeated measures ANOVA of recognition response times yielded no significant effect of Context (F(2,46) = 0.02, P = 0.98), with comparable average response times for each of the three boundary conditions (mean ± SE: No Boundary = 2010.59 ± 103.60 ms; asynchronous boundary = 2016.10 ± 107.35 ms; synchronous boundary = 2025.53 ± 135.83 ms). Post hoc comparison between the two asynchronous boundary types yielded no significant difference (F(1,23) = 0.29, P = 0.60). (Fig. 2 caption, in part: ... non-boundary (Non), asynchronous boundary (Async) and synchronous boundary (Sync) items in the recognition task; C, D: mean hit rate (C) and RT in seconds (D) for the Within- and Across-context conditions of the temporal order memory task (TOMT); error bars indicate 95% confidence intervals (Masson & Loftus, 2003); p < 0.1, *p < 0.05, ***p < 0.005.)
Discussion
We observed two main findings. First, we found that recognition memory was worse for items that were presented at multisensory boundaries compared to non-boundary items, regardless of multisensory synchrony. This finding suggests that synchronous boundaries affected recognition memory in a similar way as asynchronous boundaries. The superior performance for non-boundary items contrasts with findings from previous movie segmentation studies (Meitz et al., 2020;Schwan & Garsoffky, 2004;Schwan et al., 2000;Swallow et al., 2009), in which filmic information at boundaries is commonly better recognized than non-boundary information. Interestingly, studies using a similar segmentation task as ours, in which series of face pictures were interspersed with a semantic boundary of visual objects (or vice versa) (DuBrow & Davachi, 2013), reported recognition performance (hit and false alarm rates) comparable to our results, with no significant difference between boundary and non-boundary items. These results, in combination with ours, could indicate that the effect of contextual boundaries on recognition of picture series may differ from that on recognition of movie stimuli.
Second, we found that multisensory synchronicity of contextual changes affected temporal order memory, with synchronous boundaries impairing temporal associative processing more than asynchronous boundaries did. This finding supports the suggestion that synchronous multisensory boundaries are better processed than asynchronous changes, which, in keeping with the proposed encoding-memory trade-off, comes at a greater expense of temporal memory interference (Heusser et al., 2018). Synchronous multisensory boundaries may constitute a higher sensory dimensionality that is more likely to lead to event model updating. This reasoning is in line with previous findings that reading times of narrative texts increased at boundaries of higher narrative dimensionality, e.g., co-occurring changes in time, space or perspective (Meyerhoff & Huff, 2016;Radvansky & Copeland, 2010).
A previous study found that event perception and memory of audiovisual movie clips did not differ between synchronous and asynchronous audio-visual tracks (Meyerhoff & Huff, 2016). The authors reasoned that participants segmented the audiovisual clips in all conditions similarly because the audio and visual manipulations did not alter the semantic predictions derived from the clips. That is, the semantic overlap between the audio and visual tracks made mismatching sensory-level information redundant in event segmentation.
In our experiment, the pictures were not conceptually related to the visual or audio contexts, suggesting that segmentation involved expectations at sensory rather than semantic levels.
A limitation of Experiment 1 was that all boundaries constituted multisensory changes. Therefore, we could not investigate whether asynchronous multisensory contexts affected memory differently from truly unisensory contexts, in which only one sensory modality would provide contextual boundaries in the complete absence of the other modality. Further, as Experiment 1 is, to our knowledge, the first to use arbitrary audio and visual contexts to induce event boundaries, the recognition and temporal order memory effects require conceptual replication. These two issues were addressed in Experiment 2.
Experiment 2
Experiment 2 was designed to test if (synchronous) multisensory boundaries affected memory performance more than unisensory boundaries. Importantly, in contrast to Experiment 1, unisensory boundaries in Experiment 2 constituted contextual changes in one sensory context while the other context was entirely absent. The outcome of this experiment could further elucidate the results of Experiment 1. Finding a larger temporal memory impairment for multisensory as opposed to unisensory boundaries would indicate that the effect of a boundary on memory increases with increasing perceptual dimensionality of the contextual changes. This finding would then suggest that the results of Experiment 1 arose from the increased perceptual dimensionality of the synchronous multisensory boundaries. However, finding that uni- and multi-sensory boundaries similarly affected temporal order memory would indicate that perceptual dimensionality per se cannot explain the results of Experiment 1.
Participants
We recruited 20 new participants (9 females; mean ± SD age = 22.4 ± 1.6 years, range 20-26) from the same academic environment and using the same procedures. All participants gave informed consent before participating in the experiment and were monetarily compensated. The study was approved by the ethical committee of the Faculty of Psychology and Neuroscience of Maastricht University.
Procedures
We used a similar design as Experiment 1, with the following changes. First, pictures were presented in one of three contexts: audio only (A, playing the soundscapes without any frame color), visual only (V, changing frame color without playing any soundscape) or audio-visual (AV, simultaneous presentation of soundscapes and colored frames). Second, in each context condition, a context changed after every six pictures. Further, in the audio-visual context, the frame changed color simultaneously with a change in soundscape. Figure 3A shows a schematic representation of the design of Experiment 2. (Fig. 3 caption: Design and results of Experiment 2. A Experiment 2 included unisensory audio (A) and visual (V) boundaries, and multisensory audiovisual (AV) boundaries. B Encoding response time (RT) for the boundary item (P1) was significantly slower than encoding time for non-boundary items at subsequent positions; the relative slowing of boundary item response time was larger for the AV than the V context. C Pooled across contexts (audio in black, visual in dark gray, audiovisual in light gray), recognition sensitivity (d') was higher for the middle non-boundary item (P4) compared to the boundary item (P1). D Pooled across contexts, temporal order memory accuracy (hit rate, Hr) for Within-context judgments was higher than for Across-context judgments. Error bars represent 95% confidence intervals (Masson & Loftus, 2003). p < 0.1, *p < 0.05, ***p < 0.005.)
Previous studies found that contextual changes resulted in slower response times during encoding (Heusser et al., 2018;Radvansky & Copeland, 2010;Zacks et al., 2009), possibly due to the increased load that event model updating puts on working memory processing (Zacks, 2020). To assess whether multisensory boundaries taxed encoding processes more than unisensory boundaries, we logged and analysed encoding response times in Experiment 2. At the start of each list, participants were instructed to overtly report their perceived pleasantness of the combination of each visual item and the audio, visual or audio-visual context by pressing either key "1" ("Pleasant") or key "2" ("Unpleasant") on the computer keyboard while the respective item was available on the screen. The item did not disappear after the key press, so that presentation time remained the same for all items.
Encoding
Encoding responses of one participant were not properly logged, leaving 19 datasets for the analysis of encoding responses. Figure 3B shows the mean encoding time for all six positions in the three contexts for the remaining 19 participants. To test whether boundary item responses were slower than non-boundary item responses, we averaged response times across non-boundary items for each context and calculated a repeated measures ANOVA with Position (boundary vs. non-boundary positions) and Context (audio, visual, audio-visual) as within-subject factors. This analysis yielded significant main effects of Position (F(1,18) = 19.57, P < 0.001, ηp² = 0.52) and Context (F(2,36) = 11.42, P < 0.001, ηp² = 0.39), and a significant Position × Context interaction effect (F(2,36) = 4.41, P = 0.019, ηp² = 0.20). Post hoc comparisons showed significantly slower response times for boundary items compared to non-boundary items for the audio (F(1,18) = 10.40, P = 0.005, ηp² = 0.37), visual (F(1,18) = 6.97, P = 0.017, ηp² = 0.28) and audiovisual contexts (F(1,18) = 19.63, P < 0.001, ηp² = 0.52). The significant Position × Context interaction effect suggested that the slower encoding time associated with boundary items differed between contexts. To assess whether this was the case, we used post hoc comparisons to compare the slowing of boundary response times relative to non-boundary response times between the various uni- and multi-sensory contexts. The relative slowing of boundary response times for the visual context was marginally significantly smaller than for the auditory context (V minus A: F(1,18) = 3.26, P = 0.088, ηp² = 0.15) and significantly smaller than for the audiovisual context (V minus AV: F(1,18) = 7.48, P = 0.014, ηp² = 0.29). The relative slowing of boundary response times did not differ between the auditory and audiovisual contexts (A minus AV: F(1,18) = 1.06, P = 0.32). Thus, audio and audiovisual boundaries led to slower boundary encoding times than did visual boundaries. Could this difference in response slowing be related to increased difficulty in rating audio contexts (uni- or multisensory) compared to visual contexts? To address this issue, we conducted a post hoc one-way repeated measures ANOVA on the non-boundary encoding times of the three sensory contexts. We found no significant effect (F(2,56) = 0.34, P = 0.99), suggesting that the difficulty of non-boundary judgments was comparable between the uni- and multi-sensory contexts.
Recognition
As in Experiment 1, recognition accuracy was again relatively high, with accuracy around 0.9 for both boundary and non-boundary positions in the auditory, visual and audiovisual contexts. False alarm rates were below 0.1 across all conditions (see Table 1 for average hit and false alarm rates across all conditions). Figure 3C shows mean sensitivity (d') for each condition, with the 95% confidence interval plotted as error bars. A two-way repeated measures ANOVA with Position (P1, P4) and Context (audio, visual, audiovisual) as within-subject factors yielded a significant effect of Position (F(1,19) = 4.46, P = 0.048, ηp² = 0.19), with higher recognition for middle items (P4 items, pooled across contexts; mean ± SE d' = 3.2 ± 0.14) compared to boundary items (P1 items, 3.0 ± 0.15). We found no significant main effect of Context (P = 0.15) or of the Position × Context interaction term (P > 0.9). A two-way repeated measures ANOVA of recognition response times revealed a significant effect of Position (F(1,19) = 9.35, P = 0.006, ηp² = 0.33), with faster response times for middle items (1117.6 ± 45.8 ms) compared to boundary items (1189.0 ± 55.5 ms). We found no significant effect of Context (P = 0.45) or of the Position × Context interaction (P = 0.42). These findings suggest better performance for middle items compared to boundary items, in line with the recognition accuracy results. Figure 3D shows mean hit rate for temporal order memory in each context, with the 95% confidence interval plotted as error bars. The two-way repeated measures ANOVA with Event (within, across) and Context (audio, visual, audiovisual) as within-subject factors yielded a significant effect of Event (F(1,19) = 20.59, P < 0.001, ηp² = 0.52), but no significant effect of Context (F(1,19) = 0.75, P = 0.48) or of the Event × Context interaction term (F(2,38) = 0.66, P = 0.53). Post hoc comparisons showed that hit rates for Within trials were significantly higher than those for the Across trials in all three sensory contexts (see Table 2).
Temporal order memory
Experiment 1 showed that temporal order accuracy differed for item pairs crossing the synchronous and asynchronous boundaries. To test for a similar effect in Experiment 2, we first conducted a one-way repeated measures ANOVA of the Across trials, with Context (audio, visual, audiovisual) as within-subject factor. The main effect of Context was not significant (F < 1). To mimic the asynchronous condition of Experiment 1, we pooled the Across trials of the two unisensory conditions of Experiment 2 and compared them to the Across trials of the multisensory condition (one-way repeated measures ANOVA). We again found no significant effect (F < 1), suggesting that uni- and multisensory boundaries in Experiment 2 similarly affected temporal order accuracy. Finally, a repeated measures ANOVA of the temporal order memory response times showed a significant effect of Event (F(1,19) = 8.37, P = 0.009, ηp² = 0.31), with faster response times for Within trials (2445.1 ± 0.13 ms) than Across trials (2618.4 ± 0.12 ms). There was no significant effect of Context (P = 0.17) or of the Event × Context interaction (P = 0.12).
However, we also found that audio-related boundaries affected encoding more than (unisensory) visual boundaries, which was not due to increased overall difficulty in rating audio contexts. One explaining factor is that changes in audio contexts take time to be processed, as the soundscapes are perceptually defined over time, while the color change in the visual context is instantaneous. Despite this modality-dependent difference in perceptual identification, we did not find evidence for sensory dimensionality (i.e., uni-vs. multi-sensory contextual changes) affecting perceptual boundary processing. That is, synchronous multisensory contextual changes were processed in a similar way as unisensory contextual changes. This finding is in agreement with a previous multisensory segmentation study of movie clips (Meitz et al., 2020), in which participants detected across-scene boundaries better than withinscene boundaries regardless of whether the audio track was audible. More generally, our finding may weigh in on the inconsistent results of encoding time of boundary items increasing or not changing with higher dimensional complexity [Experiment 3 in (Huff et al., 2018)], with our findings supporting the latter. In another study, reading times during story reading were analysed when contextual changes occurred in one or more narrative dimensions, such as spatial, temporal, goal-directed and protagonist-related contexts (Zwaan et al., 1998). Results showed that reading times were systematically slower for non-spatial contextual changes. However, reading times for spatial contextual changes only slowed when participants had learned the spatial environment of the story prior to reading it, regardless of whether the spatial changes were clearly demarcated or not. The authors suggested that the familiarization prior to reading could have made the spatial context more relevant to understanding the story, thereby enhancing its role in segmentation. In our Experiment 1, the synchronous multisensory changes may have become more relevant when offset to the asynchronous changes, while in Experiment 2, the multisensory changes offered no new boundary information with respect to the unisensory changes. In the memory domain, we again found better recognition memory for non-boundary compared to boundary items, thus replicating our finding in Experiment 1. Further, we found no recognition difference between uni-and multisensory contexts, which contrasts previous findings of better multisensory than unisensory recognition of movie clips (Meitz et al., 2020;Meyerhoff & Huff, 2016). However, an important difference with these studies is that in our task, the items were not semantically related to the contextual information, allowing separation of context-induced boundary processing from item memory. Finally, we found worse temporal memory performance for items crossing a boundary compared to items from the same context, regardless of sensory modality or complexity of the boundary or context. This finding fits with the suggestion that contextual boundaries impair temporal associative processing similarly for different sensory modalities or complexity. In sum, these findings suggest that synchronous multisensory boundaries affect memory in similar ways as unisensory boundaries. By extension, the effect of synchronous multisensory boundaries on temporal memory in Experiment 1 does not seem to be the result of multisensory dimensionality per se. 
An alternative explanation could be that boundary processing in Experiment 1 was augmented by the uncertainty about when a boundary would occur, due to the mix of synchronous and asynchronous boundaries during a list. Previous studies have shown that rhythmic stimulus presentation enhances attentional processing, compared to non-rythmic presentation (Rohenkohl et al., 2011;Jones & Ward, 2019;ten Oever & Sack, 2019). In Experiment 1, the decreased predictability of boundary occurrence could have impaired the temporal deployment of attention, resulting in less efficient processing of the asynchronous boundaries and subsequently less impaired temporal order memory. In Experiment 2, the regular occurrence of both uni-and multisensory boundaries could have led to comparable boundary effects on temporal order memory, thereby obscuring a possible effect of temporal uncertainty on boundary processing. The possible role of boundary predictability was investigated in Experiment 2. Experiment 3 In Experiment 3, unisensory audio or visual contexts changed regularly or irregularly during encoding, such that boundary occurrence was temporally predictable or unpredictable, respectively. We included only the two unisensory contexts to maximize the statistical power of the design, as Experiment 2 indicated no explanatory power of synchronous multisensory over unisensory boundaries. Finding a larger temporal memory impairment for regular boundaries, compared to non-regular boundaries, would indicate that temporal expectancy about when contextual changes will occur affects encoding and memory formation. It would then explain the results of Experiment 1 by the temporal irregularity of the occurrence of the synchronous and asynchronous boundaries. However, finding comparable effects of regular and irregular boundaries would suggest that temporal expectancy about boundary occurrence does not modulate temporal segmentation in perception and memory. Participants We recruited 20 new participants (8 females; mean ± SD age = 22.6 ± 2.2 years, range 20-26) from the same academic environment. Recruitment and exclusion criteria were the same as for Experiment 1. All participants gave informed consent before participating in the experiment and were monetarily compensated. The study was approved by the ethical committee of the Faculty of Psychology and Neuroscience of Maastricht University. Procedure The experiment instructions and procedure were the same as in Experiment 2, including the overt encoding responses, the recognition and the temporal order memory tasks, but with the following exceptions. In Experiment 3, we only used the unisensory contextual conditions. Participants completed six visual-only and six audio-only context lists. In each unisensory condition, the contextual changes either occurred consistently and regularly after the 6 th image (regular condition), or occurred irregularly after three, six or nine objects (irregular condition). Figure 4A depicts the temporal structure of unisensory contextual changes in the regular and irregular conditions. In both conditions, the visual or audio context changed five times. The order of contextual intervals in the irregular condition varied across different lists. At the start of each list, the participant saw a cue specifying whether the contextual changes of the list would be regular or irregular. The order of lists with visual or audio contexts, and regular or irregular context changes was randomized for each participant. 
Encoding Encoding response times were incomplete or missing for eight participants, possibly due to these participants accidentally pressing the wrong keys during encoding. To assess the effect of regularity of the contextual changes on encoding response times, we compared response times between the boundary and (the pooled) non-boundary items. A repeated measures ANOVA with Position (boundary, non-boundary), Context (A, V) and Regularity as withinsubject factors yielded a significant main effect of Position (F(1,11) = 40.28, P < 0.001, 2 p =0.79) and of Context (F(1,11) = 9.27, P = 0.011, 2 p =0.46), and a significant Position × Context interaction term (F(1,11) = 7.66, P = 0.018, 2 p =0.41). The effects related to Context resulted from slower boundary response times for audio contextual changes, compared to visual contextual changes, in the Irregular (F(1,11) = 7.98, P = 0.017, 2 p =0.42) and the Regular condition (F(1,11) = 4.22, P = 0.065, 2 p =0.28). More importantly, none of the effects related to Regularity were significant (main effect P > 0.9, first-order interaction effect Ps > 0.6, second-order interaction effect P = 0.23). Thus, the slower encoding times for boundary items did not differ between regular and irregular contextual changes. Recognition As in Experiments 1 and 2, recognition performance was relatively high, with overall hit rate around 0.9 and overall false alarm rate around 0.1 (see Table 1 for average hit and false alarm rates across all conditions). Figure 4B shows recognition sensitivity (d') for the boundary (P1) and nonboundary items (P4) in the regular and irregular unisensory conditions. A repeated measures ANOVA with Position, Context and Regularity as within-subject factors revealed no significant main or interaction effects (all Ps > 0.20). Thus, neither regularity nor modality of the contextual changes affected recognition performance. Analysis of response times (correct responses only) using a similar repeated measures ANOVA model yielded no significant main effects (Ps > 0.31) or interaction effects (Ps > 0.40). Average ± SE response times across all conditions was 1084.17 ± 45.67 ms, comparable to that of the previous experiments. Figure 4C shows the performance accuracy on the temporal order memory test. Accuracy for within-context temporal order judgments was higher than for across-context judgments for both audio and visual contextual changes. A repeated measures ANOVA with Event (within, across), Context and Regularity revealed a significant main effect of Event (F(1,18) = 14.48, P = 0.0013, 2 p =0.45), thus statistically supporting superior within-context performance. Other main or interaction effects were not significant (Ps > 0.36). Temporal order memory To assess whether irregular unisensory boundaries affected temporal order memory differently than regular boundaries, Across trial data was pooled across the audio and visual modalities and analysed using a one-way repeated measures ANOVA with Regularity as within-subject factor. We found no significant effect (F < 1). Analysis of responses times (correct responses only) using a similar repeated measures ANOVA model yielded no significant main (Ps > 0.14) or interaction effect (Ps > 0.26). Average ± SE response times across all conditions was 2477.14 ± 183.59 ms, which was also comparable to the previous experiments. 
Discussion We found no evidence that temporal irregularity of boundary occurrence affected recognition or temporal memory, which suggests that temporal irregularity did not modulate boundary processing in terms of event model updating (Zacks, 2020) or encoding instability (Clewett & Davachi, 2017). Temporal segmentation may depend on the occurrence or presence of a contextual boundary, rather than the temporal regularity by which they occur. Further, these findings suggest that temporal irregularity does not explain the synchronous multisensory boundary effect of Experiment 1. General discussion Our findings can be summarized as follows. Synchronous multisensory contextual changes during encoding interrupted temporal associative memory processing more than asynchronous multisensory changes. This effect could not be explained by higher sensory dimensionality of multisensory over unisensory contexts, nor by the temporal irregularity of boundary occurrence during encoding. Instead, we argue that the synchronous multisensory boundaries constituted a stronger deviation from multisensory expectations than asynchronous boundaries. Further, the effect of multisensory synchronicity was not found in recognition memory, similarly to a previous multisensory segmentation study, in which asynchronous audio and visual tracks of movie clips did not alter memory performance with respect to synchronous movie clips (Meyerhoff & Huff, 2016). Our findings underscore the suggestion that temporal associative memory tests may be more sensitive to the effect of contextual boundaries on memory formation than recognition memory (Clewett & Davachi, 2017;Heusser et al., 2018). Notably, in Meitz et al., movie clip asynchrony was obtained by playing the visual track in reverse. This approach differs from our implementation of multisensory asynchrony by temporal onset difference, which is more comparable with implementations in multisensory integration research [e.g., (Van Atteveldt et al., 2007;Chen & Spence, 2010;ten Oever et al., 2013)]. Reversing the visual track may have biased semantic processing over perceptual processing in segmentation, whereas our paradigm favoured perceptual processing in the absence of semantic relatedness. The results of Experiment 2 indicated that, in the absence of asynchronous boundaries, the unisensory and multisensory boundaries affected encoding and memory in a similar way. This result appears at odds with findings that multisensory stimuli are better encoded or remembered than unisensory stimuli (Botta et al., 2011;Thompson & Paivio, 1994). However, in such studies, participants had to detect or memorize the multisensory items themselves, whereas in our case, participants did not have to detect or remember the uni-or multi-sensory backgrounds. Further, the audio and visual contexts were not semantically related to the encoded pictures, thereby limiting the interaction between picture encoding and uni-or multi-sensory background features. In Experiment 3, the finding that memory did not depend on the temporal regularity of boundaries aligns with event segmentation models that propose that a contextual boundary violates expectations of the currently active event model in working memory (Radvansky & Zacks, 2017;Zacks, 2020). From this perspective, the irregularity of boundary occurrence across events will not violate expectations of individual events, unless temporal context is a defining feature of those events (van de Ven et al., 2021). 
Further evidence for this notion comes from segmentation studies using naturalistic stimuli (Meyerhoff & Huff, 2016;Schwan & Garsoffky, 2004;Sridharan et al., 2007;Zacks et al., 2009), in which participants segment those stimuli despite a substantial variation in scene length that causes temporal irregularity of the contextual boundaries. The results provide further insight into how contextual changes lead to temporal segmentation. Several theories have proposed that segmentation depends on event models that provide situational predictions, which are derived from the temporal integration of previous experiences or prior semantic or schematic knowledge (Radvansky & Zacks, 2017;Richmond & Zacks, 2017;Zacks, 2020). A perceived contextual change violates event-based predictions, which triggers prediction error and prompts updating of the event model in working memory to better accommodate the new context. Prediction error in segmentation is processed in prefrontal and striatal areas (Sridharan et al., 2007;Zacks et al., 2011), which also process prediction error in reward-based learning (Garrison et al., 2013;Gershman & Uchida, 2019). In our experiments, participants could have formed perceptual expectations about the multisensory backgrounds, for which the synchronous multisensory boundaries posed the largest deviation from those expectations. However, it is unclear if violations of perceptual expectations would elicit prediction error in working memory or reward processing areas. The lack of semantic relation between pictures and background features argue against a prominent role of eventlevel prediction error. Other studies have also found that changes in semantically unrelated contextual features during encoding, such as frame color (Heusser et al., 2018) or timing (Logie & Donaldson, 2021;van de Ven et al., 2021), affect memory formation. These findings arguably better fit to the alternative suggestion that segmentation is based on encoding instability that arises from changing contextual features without a conceptual event model (Clewett & Davachi, 2017). In this view, uni-or multi-sensory boundaries could segment information in memory without requiring event-level prediction error in working memory. However, an unresolved issue in this view is that the monitoring of encoding stability requires some comparison between current and previous perceptual states, which may re-introduce prediction error-like processing (de Lange et al., 2018;Keller & Mrsic-Flogel, 2018), albeit for perceptual rather than conceptual features, and therefore in lower level (sensory) brain areas rather than working memory. An important consideration is that we implicitly inferred boundary processing from performance on encoding or memory tasks, rather than having participants explicitly report on their observed contextual changes. In "unitization tasks", participants watch a movie or read a narrative and are asked to overtly indicate (e.g., via button press) when they think a meaningful event has ended or a new one has started (Huff et al., 2014;Lassiter & Slaw, 1991;Newtson & Engquist, 1976;Newtson et al., 1977). What is considered "meaningful" is inherently subjective, which can lead to substantial variability in unitization across participants (Sargent et al., 2013). The temporal co-occurrence of a boundary across participants can be regarded as the segmentation magnitude of that boundary, with higher segmentation magnitude indicating that participants tend to show more similar unitization (Huff et al., 2014). 
Previous studies have suggested that increased contextual dimensionality may increase segmentation magnitude, which in turn may facilitate memory formation (Flores et al., 2017; Huff et al., 2014; Sargent et al., 2013). Extrapolating from our findings, it may be the synchronicity of contextual feature changes, rather than their dimensionality, that influences segmentation magnitude. However, this notion remains to be tested. Further, it is unknown whether unitization for items that lack semantic relatedness, as in our task, differs from that for semantically related information, such as in narratives. Combining the two tasks could reveal new insights into how contextual changes drive segmentation and support memory formation. A possible limitation of our study is that participants completed the task outside of a controlled laboratory. We aimed to control technical aspects of the study by using freely available and well-tested software that has been verified to run on all major operating systems without major differences in software performance (Bridges et al., 2020). We also provided instructions to limit environmental distractions as much as possible. Further, our design replicated previous laboratory findings of slower encoding response times at boundaries, as well as the typical observation of worse temporal order memory performance for items crossing a boundary. Finally, the high recognition performance in all three experiments suggests that participants effortfully and attentively completed the tasks. We therefore think that our results provide a reliable contribution to the understanding of event segmentation in perception and memory. In conclusion, we found that the synchronicity of multisensory contextual boundaries affected temporal order memory. Further, neither multisensory dimensionality nor the temporal regularity of boundary occurrence affected temporal or recognition memory. Our findings provide further insight into how contextual changes affect the organization of perceptual and mnemonic processing of our experiences. Author contributions VV and JS designed the experiments; VV programmed the experiments; JS designed the audioscapes; GK and JS collected the data; all authors contributed to data analysis, interpretation and writing the manuscript; VV supervised the study. Funding No external funding was obtained for this study. Data sharing Experimental data and audio stimuli are publicly available via an Open Science Framework (OSF) project page at https://osf.io/rcx56/.
2022-04-29T06:23:08.654Z
2022-04-28T00:00:00.000
{ "year": 2022, "sha1": "a28d694e969a0574dd78592ba2c02329794c957d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00426-022-01682-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b08e9b02652ff6b3c4bad62f0117b9004d19b623", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
252805597
pes2o/s2orc
v3-fos-license
Geomagnetic Disturbances That Cause GICs: Investigating Their Interhemispheric Conjugacy and Control by IMF Orientation Nearly all studies of impulsive geomagnetic disturbances (GMDs, also known as magnetic perturbation events MPEs) that can produce dangerous geomagnetically induced currents (GICs) have used data from the northern hemisphere. In this study, we investigated GMD occurrences during the first 6 months of 2016 at four magnetically conjugate high latitude station pairs using data from the Greenland West Coast magnetometer chain and from Antarctic stations in the conjugate AAL‐PIP magnetometer chain. Events for statistical analysis and four case studies were selected from Greenland/AAL‐PIP data by detecting the presence of >6 nT/s derivatives of any component of the magnetic field at any of the station pairs. For case studies, these chains were supplemented by data from the BAS‐LPM chain in Antarctica as well as Pangnirtung and South Pole in order to extend longitudinal coverage to the west. Amplitude comparisons between hemispheres showed (a) a seasonal dependence (larger in the winter hemisphere), and (b) a dependence on the sign of the By component of the interplanetary magnetic field (IMF): GMDs were larger in the north (south) when IMF By was >0 (<0). A majority of events occurred nearly simultaneously (to within ±3 min) independent of the sign of By as long as |By| ≤ 2 |Bz|. As has been found in earlier studies, IMF Bz was <0 prior to most events. When IMF data from Geotail, Themis B, and/or Themis C in the near‐Earth solar wind were used to supplement the time‐shifted OMNI IMF data, the consistency of these IMF orientations was improved. • Large (>6 nT/s) geomagnetic disturbances (GMDs) were identified in data from conjugate magnetometer arrays in Greenland and Antarctica • GMD amplitudes were larger in the winter hemisphere and larger in the north (south) when interplanetary magnetic field By was >0 (<0) • Minima in the Bx component of most GMDs appeared simultaneously (within 3 min) in conjugate hemispheres Supporting Information: Supporting Information may be found in the online version of this article. damaging; in some cases cumulative effects can lead up to damage that is finally triggered by isolated events (Marshall et al., 2011). Nearly all prior studies of GMDs have used northern hemisphere data. This is especially appropriate at high latitudes: large populations in Northern Europe are affected by GICs, but there are no large populations at high latitudes in the southern hemisphere, and Antarctica has only very sparse magnetometer coverage. However, because perturbations of the ionospheric plasma in the northern hemisphere depend in part on the ionospheric conductivity in both hemispheres and on the plasma/driving conditions along the entire length of magnetic field lines connecting them, interhemispheric comparisons are needed to fully validate theories and models of GMDs, whether in the northern or southern hemisphere. A set of four case studies by Engebretson et al. (2020) comparing GMDs observed in latitudinally extended magnetometer arrays at magnetically conjugate high latitude locations in the Arctic (Greenland and eastern Canada) and Antarctica found that these nighttime GMD events appeared within a few minutes of each other at stations in opposite hemispheres but with similar magnetic latitudes. 
These events occurred under a wide range of geomagnetic conditions, but common to each was a negative interplanetary magnetic field Bz that often exhibited at least a modest increase at or near the time of the event. This study also noted that the GMD amplitude was largest in the winter hemisphere during three of the four intervals presented, and concluded, using these data along with models of ionospheric conductances, that GMDs corresponded better to driving by a current generator model than by a voltage generator model. IMF orientations dominated by large By components are known to cause some nonconjugate magnetospheric and ionospheric effects at high latitudes, but the effect of IMF By on GMDs was not addressed in this earlier study. In a more recent superposed epoch study, Engebretson, Ahmed et al. (2021) reported that the medians of nearly all the nearly 700 ≥ 6 nT/s GMDs observed at five stations in Arctic Canada during 2015 and 2017, both premidnight and postmidnight, were preceded by intervals of negative IMF Bz. This pattern held for the 25th and 75th percentile traces in most cases as well, but not every Bz trace was negative prior to GMD occurrence or showed a similar time dependence. This paper also included work comparing a set of 156 intervals during 2015 compiled by Shane Coyle of Virginia Tech when the IMF vector was within ±30° of the GSM Y axis, |By| was >6 nT, and events lasted longer than 30 min, to the times of 200 GMD occurrences at three stations in eastern Arctic Canada during that year. Only one of these GMDs occurred during the time of a large IMF By event. These results suggested that conditions strongly dominated by IMF By orientations may suppress the magnetotail instabilities that appear to be the cause of these events, but did not address the effect of moderate or zero IMF By conditions on GMDs or their conjugacy. This current study was begun with the intent to look for the influence of IMF By and possibly other factors that might affect the interhemispheric conjugacy of these events, using all nighttime GMDs with amplitudes ≥6 nT/s (≥360 nT/min) that appeared in at least one station in magnetically conjugate subsets of these same Greenland and Antarctic arrays during the first 6 months of 2016. In this study, we present four case studies as well as detailed information on a large number of GMDs observed in conjugate hemispheres. We can confirm our earlier findings that IMF By polarity and seasonal effects cause hemispheric differences in amplitude, but even combined these are unable to account for the large variability in amplitude ratios, and we also demonstrate the near simultaneity of many of these events in both hemispheres. Section 2 describes the data used in this study and the procedures used to identify and quantify conjugate events. Section 3 presents four multistation case studies, and Section 4 presents statistical studies that focus on the relative amplitude and timing of these events. Section 5 discusses the implications of these observations, and Section 6 summarizes our findings. Data Set and Analysis Methods Northern hemisphere magnetometer data used in this study were recorded by the Greenland West Coast magnetometer chain (https://www.space.dtu.dk/MagneticGroundStations) and the MACCS array (https://doi.org/10.48322/ sydj-ab90, Engebretson et al., 1995). 
Southern hemisphere data were recorded by the AAL-PIP magnetometer chain in Antarctica (Clauer et al., 2014), the British Antarctic Survey (BAS) Low Power Magnetometer chain (Kadokura et al., 2008), and the fluxgate magnetometer at South Pole Station, Antarctica (Engebretson et al., 1997; Lanzerotti et al., 1990). Data are presented in local magnetic coordinates in the northern hemisphere (at MACCS and Greenland West Coast chain stations) and in the southern hemisphere (at AAL-PIP and BAS-LPM stations). Figure 1 and Table 1 show that South Pole Station in Antarctica is in approximate magnetic conjugacy to MACCS station Pangnirtung in Canada. Figure 1 also shows that the six AAL-PIP stations in Antarctica, located about 20° farther east in corrected geomagnetic (CGM) longitude, are in close magnetic conjugacy to the middle of the Greenland West Coast chain, and that the BAS-LPM chain is conjugate in CGM magnetic latitude to several of the lower latitude Greenland West Coast stations, but approximately midway in CGM longitude between the Canadian and Greenland stations (Table 1). The statistical part of this study is based on data from a subset of four stations in the equatorward part of the AAL-PIP array (PG2, PG3, PG4, and PG5) and four nearly conjugate stations (UMQ, GDH, STF, and SKT, respectively) in the Greenland West Coast Chain. Data from 2016 were chosen for study because they provided the best AAL-PIP up-time during conditions of either active or moderate solar activity. The limitation to the first 6 months is a consequence of the power availability at the remote AAL-PIP stations. These are powered by solar cells and batteries, and at most of these stations the batteries discharged slightly more than halfway through the calendar year. For the case studies, data from these stations were supplemented by data from Pangnirtung, South Pole, and the three most poleward stations of the BAS-LPM Chain (M85, M84, and M83) in order to provide a modest extension of longitudinal coverage to the west but in the same range of MLAT. The separation in MLT of SPA from GDH and of PGG from STF is ∼1.3 hr, and PGG is at a predominantly westward distance of 673 km from STF. Full-day data from each of the stations in the four Greenland/AAL-PIP station pairs were analyzed to identify GMDs with amplitude ≥6 nT/s each day at each station. Events were selected and derivatives calculated using the semiautomatic procedure described in our earlier studies. This procedure began by displaying a daily magnetogram (a 24-hr three-axis plot of the magnetic field at each station) in local geomagnetic coordinates on a computer screen. Once a rapid (<20 min duration) and large amplitude (>∼200 nT) magnetic perturbation was visually identified, the IDL cursor function was used to select times ∼15-60 min before and after the perturbation in order to zoom in on the relatively short duration of the event and separate it from the times of other possible activity. The times and values of extrema in this interval were recorded for each component, and after application of a 10-point smoothing to reduce noise and eliminate isolated bad data points, the data were numerically differentiated using the three-point Lagrangian approximation. Plots of the time series of data and derivatives were produced and saved, and the maximum and minimum derivative values were automatically determined and recorded.
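As a rough illustration of the numerical part of this procedure, the Python sketch below applies a 10-point smoothing and a three-point (centered-difference) derivative to a single magnetometer component and flags samples exceeding the 6 nT/s threshold. The 1-s cadence, the synthetic magnetic bay, and the function name are assumptions for illustration only; the actual selection of event windows was done interactively as described above.

import numpy as np

def flag_gmd_samples(b_component, dt=1.0, smooth_pts=10, threshold=6.0):
    # Smooth one magnetometer component (nT, assumed 1-s cadence), differentiate
    # it with a three-point centered scheme, and flag samples whose |dB/dt|
    # exceeds a threshold in nT/s. Interactive event-window selection is omitted.
    kernel = np.ones(smooth_pts) / smooth_pts
    smoothed = np.convolve(b_component, kernel, mode="same")
    # np.gradient uses centered (three-point) differences in the interior,
    # equivalent to the three-point Lagrangian approximation on a uniform grid
    dbdt = np.gradient(smoothed, dt)
    exceed = np.abs(dbdt) > threshold
    return dbdt, exceed

# Hypothetical usage with synthetic data standing in for a Bx record
t = np.arange(0, 3600.0)                                 # one hour at 1-s cadence
bx = -500.0 * np.exp(-((t - 1800.0) / 60.0) ** 2)        # synthetic 500 nT negative bay
dbdt, exceed = flag_gmd_samples(bx)
print("max |dB/dt| =", round(float(np.abs(dbdt).max()), 2), "nT/s;",
      "samples over 6 nT/s:", int(exceed.sum()))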
Interplanetary magnetic field data for these studies were taken from three sources: (a) the OMNI database accessed via CDAWEB (https://cdaweb.gsfc.nasa.gov/index.html/), which provides measurements from the L1 upstream libration point after time-shifting to the nose of the bow shock, (b) observations by the Artemis spacecraft (Themis B and C) in orbit around the moon (also accessed via CDAWEB) and c) from the much nearer Geotail spacecraft, in orbit around the Earth (Weygand & McPherron, 2006a, 2006b) and (http://vmo.igpp.ucla. edu/data1/Weygand/PropagatedSolarWindGSM/weimer/Geotail/). Only Artemis and Geotail data verified to be in the solar wind were retained. Case Studies For each of the events presented in this section, we show a composite figure consisting of 2-hr excerpts of three-axis magnetograms (in local geomagnetic coordinates) from the stations listed above, as well as simultaneous 2-hr plots of the IMF (in GSM coordinates) from the OMNI time-shifted database and a near-Earth monitor (either Geotail or Themis B). Also included at the bottom of each figure is a table listing the largest derivative (in any component, and either positive or negative) at each station during this interval, and an orange circle on the corresponding plot indicates the time of its occurrence. For each event, we also note the timing of its occurrence relative to a recent geomagnetic storm (if any) and in Table 2 we list the most recent prior substorms (if any), as compiled in three substorm lists (Forsyth et al., 2015;Newell & Gjerloev, 2011;Ohtani & Gjerloev, 2020) Table 1 Magnetometer Stations Used in This Study Data obtained during the first event exhibited very similar magnetic perturbations and derivatives with comparable amplitudes in the northern and southern polar regions. During the second and third events much stronger perturbations and derivatives appeared in one hemisphere. The fourth event exhibited more complex patterns. The initial negative turnings of the Bx component near 20:43 UT were nearly simultaneous at the lowest latitude stations in both hemispheres in all three columns (b-d). The Bx minima were strongest between 70° and 72° MLAT. Perturbations in By and Bz had opposite signs in the two hemispheres. As noted by Engebretson et al. (2020), the relative orientations of the Bx and By perturbations most likely reflect the hemispheric difference in the circular Hall current flow around a localized field-aligned current (FAC), counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. 14 April 2016 The largest ΔBx perturbations were similar and their minima occurred within ∼3 min of each other near 20:52 UT at latitudinally conjugate stations in Antarctica (PG4 and PG5) and Greenland (STF and SKT). They were smaller and occurred later at the higher latitude stations (PG2, PG3, UMQ, and GDH). These differences show both the localized nature of the GMDs and the often-observed poleward motion of the structures that generate these events, as noted earlier in case studies presented by . Similar or slightly weaker ΔBx perturbations appeared at corresponding times at the BAS-LPM stations to the west. Figure S1 in the Supporting Information S1 shows the derivatives observed at each available station during this event, in a format similar to that of the corresponding panels in Figure 2. Although there was a general similarity between the amplitudes of the ΔB components and of the derivatives, they were not strictly proportional. 
(Note that for 14 April 2016, BAS M84 data were only available at 10-s resolution, so the derivatives are smaller.) This lack of proportionality in amplitude has been noted in several earlier studies as well (Viljanen, 1997; Viljanen et al., 2006). It can also be seen that the largest derivatives appeared on the falling or rising slopes of the ΔBx perturbations, not at their minima or maxima, and thus did not occur at the same times. This event occurred at the end of the main phase of a geomagnetic storm (minimum SYM/H = −67 nT). The solar wind velocity (Vsw) was ∼410 km/s, the solar wind dynamic pressure (Psw) was ∼2.2 nPa, and the AL and AU magnetic indices were ∼−500 and ∼250 nT, respectively. Three prior substorm onsets between 20:20 and 20:30 UT were identified on this day (Table 2), but none of them appeared in all three substorm lists, and it is not clear that the GMD onset near 20:50 UT was closely related to any of them. (It is important to note that the Ohtani and Gjerloev (2020) In the OMNI data shown in panel a1, IMF Bz rose from −3 to −2 nT coincident with the beginning of the GMD at 20:50 UT and returned to −5 nT at 20:55 UT. In the near-Earth Geotail data shown in panel a2, Bz rose more gradually from −3 to −1 nT during the interval before returning to −3 nT after 20:50 UT. In both OMNI and Geotail data the By component fell gradually until ∼20:51 UT, shortly before the time of GMD onset, and then rose rapidly past 0 near 20:55 UT. Figure 3 shows equivalent ionospheric currents produced using the Spherical Elementary Current Systems (SECS) method (Weygand, 2009; Weygand et al., 2011) for both the northern and southern hemispheres at two times: at 20:15 UT during the geomagnetically quiet period before the GMD (panels a and b), and at 20:52 UT, during the time of the strongest magnetic perturbations at the lower latitude stations (panels c and d). The left (right) panels display the northern (southern) hemisphere currents plotted over the landmasses (gray curve) in a magnetic coordinate system with magnetic noon at the top, dawn on the right side, dusk on the left side, and magnetic midnight at the bottom. The dots mark where the equivalent current has been derived and the vector indicates the magnitude and direction of the current. The stars indicate stations with available and valid data for this date. The amplitude key for the currents is in the lower right corner of each panel. In panel (a) a portion of the dusk side convection cell is apparent and the throat of the cusp starts just north of the Northwest Territories. The eastward electrojet crosses over Hudson Bay and the east coast of Canada. Panel (b) shows the equivalent ionospheric currents in the southern hemisphere over a limited region.
The southern hemisphere is shown as a glass earth projection so magnetic noon is at the top, dawn on the right side, dusk on the left side, and magnetic midnight at the bottom. Because of the limited magnetometer coverage in Antarctica only a small portion of the eastward electrojet is visible near Coats Land, Antarctica. Panels (c and d) display the equivalent ionospheric currents during the GMD event at 20:52 UT. In general, in the Northern hemisphere all the currents are significantly larger, the duskside convection cell is still present, and the throat of the cusp is not readily apparent most likely because IMF By is about −3 nT. The orange ovals in panels (c and d) show the region where the GMD was located: Mauve colored stars show the location of the SKT (north) and PG5 (south) conjugate station pair, and red colored stars the location of the STF (north) and PG4 (south) conjugate station pair. The GMD is visible in the lower latitude portion of Greenland around magnetometer stations STF and SKT as equivalent ionospheric currents pointing toward the sun. The GMD is also apparent in Antarctica near stations PG4 and PG5 as equivalent ionospheric currents pointing toward the sun. The SECS technique also identified pairs of upward and downward currents (a proxy for FACs) in both hemispheres. Figure S2 in the Supporting Information S1 shows that these currents were much weaker over western Greenland and the Antarctic AAL-PIP and BAS-LPM arrays at 20:15 UT than at 20:52 UT, the time of the GMD event. At 20:52 UT an upward current appeared south of STF and a downward current north of it. Similarly, a downward current appeared above and south (poleward) of PG3 and an upward current north (equatorward) of it. Applying the right-hand rule to the Pedersen current connecting the two vertical current pairs reproduces the westward equivalent current seen in Figures 3c and 3d. However, because of the paucity of magnetometer coverage east and west of these arrays, we cannot determine the longitudinal extent of the inferred FACs or the location of their epicenters. 6 January 2016 Figure 4 shows IMF and high latitude magnetometer data from 00:00 to 02:00 UT 6 January 2016 with the interval between 00:30 and 01:30 UT highlighted. This geomagnetically quiet interval (Dst = +12) occurred 6 days after the most recent geomagnetic storm. The solar wind velocity (Vsw) was ∼500 km/s, the solar wind dynamic pressure (Psw) was ∼5.52 nPa, and the AL and AU magnetic indices were ∼−700 and ∼100 nT, respectively. Several substorm onsets (Table 2) were noted prior to or during this interval (very differently in the three lists), but only the one at 00:57 UT appeared to closely precede the GMDs. Sharp drops in Bx appeared near 00:37 UT at STF and SKT in Greenland, simultaneous with weak inflections at PG4 in Antarctica, the more poleward Greenland stations GDH and UMQ, and PGG in Arctic Canada. Sharp drops at these four latter stations appeared near 00:57 UT. Short-lived transient perturbations can be seen to occur within the subsequent negative bays at each of these stations, culminating in final large spikes near 01:25 UT, but perturbations were larger in all three components at all northern hemisphere stations than at southern hemisphere stations at comparable latitudes. Perturbations farther west, at PGG, appeared to be intermediate in amplitude but slightly delayed in time relative to those at comparable latitudes in Greenland. In contrast, variations at SPA, PG2, and PG3 during this interval were very weak. 
Perturbations in Bx at Antarctic stations at lower latitude (M85, M84, M83, and PG4) were similar to but weaker than those at STF and SKT, and their perturbations in the By and Bz components were again significantly weaker. Figure S3 in the Supporting Information S1 shows the derivatives observed at each available station during this event, in a format similar to that of the corresponding panels in Figure 4. During this event derivatives at each of the northern hemisphere stations remained at elevated levels for time intervals ranging from ∼30 to ∼60 min. Both the OMNI and Geotail data showed that during the highlighted interval the IMF Bz component was again mostly negative but that the IMF By component was positive. The By magnitude was larger than the Bz magnitude in OMNI data but similar in Geotail data. pressure (Psw) was ∼8 nPa, and the AL and AU magnetic indices were ∼ −700 and ∼150 nT, respectively. Several substorm onsets were noted prior to or during this interval (very differently in the three lists); the onset at 21:58 occurred just before the beginning of the highlighted interval, but none of the onsets occurred within the interval. Only very small perturbations and derivatives appeared at the higher latitude Antarctic stations SPA, PG2, and PG3 and at the higher latitude Greenland stations PGG, UMQ, and GDH, consistent with a storm-induced equatorward expansion of the auroral oval. Large perturbations and derivatives appeared at Antarctic stations M85, M84, M83, PG4, and PG5, but only much smaller perturbations and derivatives appeared at Greenland stations STF and SKT. We also note that the Bx minima at PG4 and STF occurred nearly simultaneously. Figure S4 in the Supporting Information S1 shows the derivatives observed at each available station during this event, in a format similar to that of the corresponding panels in Figure 5. Only shorter intervals of elevated derivatives appeared at the more equatorward stations in the southern hemisphere, but with clear enhancements in the X and Y components nearly simultaneously at PG4 and PG5. 6 March 2016 Both the OMNI and Themis B data showed that during the highlighted interval the IMF By component was strongly negative (near −10 nT). The IMF Bz component in Themis B data was negative but relatively steady and smaller, near −7 nT, and the OMNI Bz component was near −7 nT between 22:48 and 22:35 but slightly positive before and after that interval. During this event Themis B was located upstream and on the dawnside of Earth, at Rx = 49 R E , Ry = −27 R E , and Rz = 3 R E in GSE coordinates. Geotail was in the magnetosphere during this interval. 11 May 2016 Figure 6 shows IMF and high latitude magnetometer data from 00:00 to 02:00 UT 11 May 2016 with two short intervals highlighted: 00:40 to 01:05 UT and 01:10 to 01:20 UT. This moderately disturbed interval (Dst = −28) occurred on the fourth day of recovery after a strong geomagnetic storm with minimum Dst = −88. The solar wind velocity (Vsw) was ∼550 km/s, the solar wind dynamic pressure (Psw) was ∼0.8 nPa, and the AL and AU magnetic indices were ∼−250 and ∼140 nT, respectively. Table 2 shows that two substorm onsets occurred during the final hour of the previous day (May 10), and one onset (included only in the Forsyth et al. list) occurred at 00:58 UT. There was considerable magnetic activity throughout this 2-hr period, but it was generally weaker than in the three previous examples (note the smaller scale of the vertical axis during this event). 
During the first highlighted interval large magnetic bays appeared at the lower latitude Antarctic stations M85, M84, M83, PG3, PG4, and PG5, with narrow spikes in several components at 01:00 UT at M83 (9.8 nT), PG4 (5.8 nT), and PG5 (5.8 nT). Much weaker bays and spikes appeared at SPA and PG2, and very little activity appeared at UMQ and GDH. Slightly stronger variations appeared at STF and SKT, with a narrow spike only in the Bz component at SKT (4.6 nT). During the second highlighted interval negative bays were evident only at SPA and more weakly at PGG, but large derivatives appeared at many stations that showed little evidence of negative bays. Large narrow spikes with large derivatives appeared in all three components at 01:15 UT at STF (8.0 nT) and SKT (5.1 nT), and much smaller peaks appeared in Bz at UMQ and GDH. Spikes of moderate to large derivative amplitude also appeared simultaneously at 01:15 UT in one or more components at PG3 (5.3 nT), PG4 (3.7 nT), and PG5 (3.0 nT), and in the higher latitude range to the west in both hemispheres, at SPA (5.6 nT), PGG (5.0 nT), and M85 (2.0 nT). Figure S5 in the Supporting Information S1 shows the derivatives observed at each available station during this event, in a format similar to that of the corresponding panels in Figure 6. This event included a mixture of isolated derivative peaks, most noticeable in data from M84 and M83 at 01:00 UT and at STF and SKT at 01:15 UT, and more extended intervals with smaller but still elevated derivative amplitudes. IMF data from OMNI and Geotail were not only variable but showed significant disagreement during this two-hour interval, as did also the data from the three L1 monitors. In particular, during the first shaded interval tions. Figure S6 in the Supporting Information S1 shows that the three upstream solar wind monitors orbiting the L1 Langrangian point were located ∼250 R E upstream from Earth, but were considerably off the Sun-Earth line (WIND −96 R E , DSCOVR −22 R E , and ACE +24 R E in the Y GSE direction, respectively). Panels (c-e) show that the IMF observed during the shaded intervals at the three spacecraft also showed significant variability and differences in all three components. Note especially the isolated spike at many stations near 01:16 UT that was nearly simultaneous at many stations both N and S. It was not associated with any significant magnetic bay at most stations, so was presumably caused by a very localized set of ionospheric and/or field-aligned currents. Table 3 summarizes the characteristics of the case study events, including the occurrence of nearly simultaneous conjugate ΔBx minima. The variety in IMF By polarity and geomagnetic activity will be considered in the next section. Statistical Studies A total of 66 separate >6 nT/s GMDs were identified at one or more of the stations in the four Greenland-Antarctica station pairs listed above during the first 6 months of 2016. A large majority of these exceeded 6 nT/s at one or both stations in more than one station pair. In the few cases during which more than one >6 nT/s GMD was identified at a given station during a given 2 hr UT interval, only the largest amplitude event was counted. Columns 2 and 4 of Table 4 list the number of GMDs with derivatives >6 nT/s in any component at the northern and southern hemisphere station in each station pair, respectively. Columns 6-10 show the number of events at each station pair with one or two exceeding the 6 nT/s threshold, and their sum and ratio, respectively. 
It is clear that more >6 nT/s events appeared at the two lower latitude station pairs (45 and 55) than at the two higher latitude pairs (19 and 34). This latitudinal pattern is similar to that found in Table 2 of an earlier study for stations at comparable magnetic latitudes in eastern Arctic Canada. Table 4 lists the number of >6 nT/s GMDs recorded at the four stations each in the Greenland West Coast magnetometer chain and in the AAL-PIP magnetometer chain. The ratio of northern to southern GMD amplitudes as a function of day of the year is also shown for each station pair (Figure 7). There is considerable scatter in each plot (to be discussed in Section 5.1), but the lines fit the distributions reasonably well (there is little evidence for a nonlinear relation), and the majority of the events have error bars (based on the documented noise level of the magnetometers in each array) of roughly the same size as the plotting symbols. Documentation for the error bar calculations is provided in the Supporting Information S1. These panels clearly show a seasonal dependence in the amplitude ratios, with a fitted slope roughly twice as steep for the highest latitude UMQ/PG2 pair as for the lowest latitude SKT/PG5 pair. IMF By Dependence In an attempt to identify a source for the scatter in each of the amplitude ratio plots in Figure 7, we next examined the IMF Bz and By components (using both OMNI and Artemis/Themis time-shifted IMF data as available) to determine their values prior to and up to the time of GMD occurrences. Of the 66 GMDs, 47 (71%) were preceded by an interval of at least 15 min of IMF Bz < 0, while five were preceded by IMF Bz > 0, and another 14 by intervals with mixed IMF Bz polarity. However, only 34 of the 47 GMDs with consistently negative IMF Bz values had a consistent IMF By value (+, within 1 nT of 0, or −) during this same interval. Figure 8 shows the GMD amplitude ratios for the STF/PG4 station pair following these intervals of consistently negative IMF Bz and consistent IMF By values. Events with By > 1 nT are shown in blue, By within 1 nT of 0 in red, and By < −1 nT in green. Panels (a and b) show all events for which OMNI data and Artemis/Themis IMF data satisfied these conditions, respectively, and panel (c) shows only those events for which OMNI and Artemis/Themis data both saw consistent IMF Bz < 0 and the same category of consistent IMF By values. Plots for the other station pairs are shown as Figures S7-S9 in the Supporting Information S1. The patterns shown are consistent with a small IMF By effect (N/S ratio larger for By > 1 nT than for By < −1 nT) that is convolved with a seasonal effect, but there is considerable overlap, and the numbers of By > 1 nT and By near 0 events are very small. Table 5 shows the results of a regression analysis to test the difference between the means of the By > 1 nT and By < −1 nT GMD amplitude ratio distributions for each of the four station pairs. The seasonal trend was removed by including day of year as a covariate in this analysis. (These seasonal trends are shown in Figure 7.) The few By ∼ 0 events were not included in this analysis. Mean differences were calculated for the mean date of observations at each station while accounting for the seasonal trend. The differences in the means were statistically significant for all four station pairs, using either OMNI or Artemis/Themis IMF data. The slopes of the regression lines in Figure 7 were also significantly different from zero (all p values < 0.05; p value = 0.002 for UMQ, all other p values < 0.001).
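The covariate analysis described above can be sketched as an ordinary least squares model with the By-sign group as a categorical factor and day of year as a covariate. The synthetic data frame below (column names, group sizes, and effect sizes are all invented for illustration) only demonstrates the structure of such a test, not the study's actual event tables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a per-event table at one station pair:
# N/S amplitude ratio, IMF By sign group, and day of year of each event.
rng = np.random.default_rng(0)
n = 40
day_of_year = rng.integers(1, 182, size=n)
by_group = rng.choice(["pos", "neg"], size=n)            # By > 1 nT vs By < -1 nT
amp_ratio = (1.0 + 0.004 * day_of_year                   # assumed seasonal trend
             + np.where(by_group == "pos", 0.3, 0.0)     # assumed By effect
             + rng.normal(0.0, 0.2, size=n))
events = pd.DataFrame({"amp_ratio": amp_ratio, "by_group": by_group,
                       "day_of_year": day_of_year})

# Group difference with day of year as covariate, i.e., the seasonal trend
# is removed before the By > 0 and By < 0 group means are compared
model = smf.ols("amp_ratio ~ C(by_group) + day_of_year", data=events).fit()
print(model.params)

# Adjusted group means evaluated at the mean observation date
mean_doy = events["day_of_year"].mean()
for grp in ["neg", "pos"]:
    pred = model.predict(pd.DataFrame({"by_group": [grp], "day_of_year": [mean_doy]}))
    print(grp, round(float(pred.iloc[0]), 2))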
The remaining large scatter in the values of individual ratios, however, cannot be explained by either seasonal or IMF By effects or, as noted above, by instrumental measurement errors. Time Delay Analysis The addition of stations somewhat west of the Greenland-AAL-PIP conjugate arrays in the case studies above gave little direct evidence for any IMF By-induced longitudinal skewing in opposite hemispheres. It also became apparent while surveying all the GMDs in this data set that in many cases the waveforms of the Bx (north-south) component at conjugate stations were roughly simultaneous (the two minima occurred within 3 min of each other). Events with such near simultaneity were also noted in Figures 2, 5, and 6. In order to further investigate the conditions leading to close timing between conjugate hemispheres, we determined the time of each ΔBx minimum to within ±1 s by successively zooming in on magnetograms of each of the GMD events at each station in the three lowest latitude station pairs. Figure 9 shows the distribution of time delays (positive values are associated with later event times in the north than in the south) for the STF-PG4 station pair. The relative timing error is ±2 s, much less than the size of the diamond symbols. What stands out in all three sets is that there are two populations: one with |TN − TS| < 3 min, and the other with larger time differences, ranging from ∼5 to ∼30 min. All nine distributions in Figure 9 are dominated by events with time delays clustered within 3 min of 0, but also show a small number of events with much larger delays. In each of the three panels IMF By > 1 nT events are skewed slightly to the left and IMF By < −1 nT events are skewed to the right. In the bottom panel, however, using only those events for which Themis and OMNI IMF observations agreed, the pattern was more consistent: By > 0 to the left, By ∼ 0 near 0, and By < 0 to the right with only one exception. Figures for the GDH-PG3 and SKT-PG5 station pairs showing similar distributions are shown as Figures S10 and S11 in the Supporting Information S1. Figure 10 provides a comparison of the distributions from all three of these station pairs, but combines events in all three IMF By categories in one histogram. Events were strongly peaked near 0 at each station pair and in both data sets, but with a slight skewing toward more positive values in the OMNI data compared to the Artemis/Themis data. The few large time lags in either direction occurred most often in the STF-PG4 data set. In order to better characterize the dependence of TN − TS on the IMF By/|Bz| ratio, Figure 11 shows a plot of the time differences TN − TS as a function of the IMF By/|Bz| ratio in Artemis/Themis data for those GMD events in the data set used for Figure 8 and Figures S7 and S9 in the Supporting Information S1 with consistent IMF Bz < 0 and the same category of consistent IMF By values (>1 nT, near 0, or < −1 nT) during an interval of at least 15 min prior to an event, but now excluding cases when By changed sign during this interval. This resulted in a further reduction in the number of events and reduced even further the number of events with By > 0. Because of the small number of events, this plot includes all events at three conjugate station pairs (GDH-PG3, STF-PG4, and SKT-PG5). Table 6 provides information on the number of events at each of the three station pairs, indicating that for 77% of these events the Bx minima occurred within 3 min of each other.
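The timing comparison amounts to locating the Bx minimum within a common event window at each station of a conjugate pair and differencing the two times. The Python sketch below illustrates this with pandas; the ±3 min classification threshold follows the text, while the synthetic records, their cadence, and the example station pairing are assumptions of ours.

import numpy as np
import pandas as pd

def conjugate_delay_minutes(bx_north, bx_south, window_start, window_end):
    # T_N - T_S in minutes, where T_N and T_S are the times of the Bx minima
    # at the northern and southern stations within a common event window.
    n_win = bx_north.loc[window_start:window_end]
    s_win = bx_south.loc[window_start:window_end]
    return (n_win.idxmin() - s_win.idxmin()).total_seconds() / 60.0

# Synthetic 1-s cadence Bx records standing in for a conjugate pair (e.g., STF and PG4);
# the southern minimum is deliberately placed 2 min earlier than the northern one.
times = pd.date_range("2016-04-14 20:30", periods=3600, freq="s")
sec = np.arange(3600.0)
bx_n = pd.Series(-400.0 * np.exp(-((sec - 1800.0) / 120.0) ** 2), index=times)
bx_s = pd.Series(-300.0 * np.exp(-((sec - 1680.0) / 120.0) ** 2), index=times)

delay = conjugate_delay_minutes(bx_n, bx_s, "2016-04-14 20:40", "2016-04-14 21:10")
print(round(delay, 1), "min; nearly simultaneous:", abs(delay) < 3.0)  # 2.0 min; True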
The accuracy of the times of Bx minima at each station, determined from high resolution plots with a range of 1 min, was usually <±1 s. The accuracy of each T N − T S value is thus usually ±2 s, a value much smaller than the plot symbols. Errors in IMF By and Bz were derived from visual estimates of half the distance from the mean to either approximate extreme during the 15 min prior to the GMD. These values were then used to calculate the errors in the IMF By/|Bz| ratio. The resulting error limits ranged from ±0.3 to ±1.5. Figure 8. Plots of the ratios of amplitudes of GMD events observed at the STF/PG4 conjugate station pair during events preceded by an interval of at least 15 min of interplanetary magnetic field (IMF) Bz < 0 and IMF By being consistently either >1 nT (blue), within 1 nT of 0 (red), or <−1 (green). Panels (a) and (b) show all events for which OMNI data and Artemis/Themis IMF data satisfied these conditions, respectively, and panel (c) shows only those events for which OMNI and Artemis/Themis data saw both consistent IMF Bz < 0 and the same category of consistent IMF By values. Table 5 ANCOVA Test of the Difference Between the Means of the By > 0 and By < 0 GMD Amplitude Ratios for Each of the Four Station Pairs After Removal of the Effects of the Linear Seasonal Variations The division into two populations noted above is evident in Figure 11: most have time delays between −3 and +3 min, and a much smaller number have delays from 3 to 15 min. There is no evident dependence on the By / |Bz| ratio for the events between −3 and +3 min; the events in this population have a remarkably flat distribution. It is also evident that most of the events in Figure 11 are in the left half. This again reflects the strong skewing of all large GMD events in this data set to be associated with intervals of negative IMF By. Although the distribution of events with time delays above 3 min in Figure 9 are skewed slightly to the left for By > 1 events and to the right for By < −1 events, and Figure 11 gives evidence of a relation between the polarity of the IMF By component and the relative time delay between northern and southern conjugate stations, their number is so small and the IMF ratio errors so large that any slope determined from these data is not statistically significant. We also looked for a seasonal trend in the time delays, but no pattern was evident for either extreme or modest time differences. Discussion This paper has compared observations of large magnetic perturbation events at high northern and southern latitudes to better understand their similarities and differences at magnetically conjugate high latitude sites. We have identified a clear seasonal variation and a somewhat weaker dependence on the sign of the By component of the IMF, using data from the OMNI data base (using data from the L1 upstream libration point that has been time-shifted to the nose of the bow shock), from the Artemis/Themis spacecraft (in orbit about the Moon, again after time-shifting), and from the Geotail spacecraft (in orbit about Earth). None of these three provided useable data for all the events cataloged during the first 6 months of 2016, and in a considerable number of cases the available IMF data exhibited at least minor differences. The 11 May 2016 event is one of several exceptions to the general pattern of N/S derivative amplitude ratio depending on the sign of IMF By. 
Given the observed amplitudes of perturbations in the two shaded intervals, one might expect either small or negative IMF By at 01:00 UT and either small or positive IMF By at 01:15, along with a negative IMF Bz. It is possible that neither IMF data set correctly shows the IMF data that impinged on the magnetosphere during this interval. Determining the character of the IMF that actually impinges on Earth's magnetosphere presents many challenges, as noted by Weimer et al. (2002), Borovsky (2018), and Burkholder et al. (2020) and exemplified in a study of Pc 3-4 waves by Bier et al. (2014). In both our case studies and statistical studies, we have presented data using IMF data from both OMNI and a nearer-Earth monitor. These have produced modest but recognizable differences in the resulting patterns in amplitudes, but have led to similar statistical conclusions regarding the influence of seasonal and IMF By effects on the ratios of amplitudes at conjugate stations. Even in combination these influences are insufficient to remove most of the scatter in these ratios. A check of the values of the IMF magnitude and solar wind velocity and pressure for each event revealed no additional pattern of influence external to the magnetosphere that would explain the remaining scatter in conjugate amplitudes. Amplitude Comparisons The control of GMD amplitude by IMF By reported here is consistent with the results of several earlier studies. Holappa et al. (2021b) noted that many studies using ground magnetometers, beginning with Friis-Christensen and Wilhjelm (1975) and using polar-orbiting satellites (Friis-Christensen et al., 2017; Smith et al., 2017), have shown that auroral electrojets in the northern hemisphere winter are stronger in both hemispheres for By > 0 than for By < 0, and that in NH summer the dependence on the By sign is reversed. Figure 10. Histograms of the north-south time delay between GMD events observed at magnetically conjugate station pairs GDH-PG3, STF-PG4, and SKT-PG5, using events in all three categories of interplanetary magnetic field (IMF) By from (a) Artemis/Themis data and (b) OMNI data. Note the larger bin sizes beyond ±5 min. Holappa et al. (2021b) noted that this By sign dependence is very strong in the winter hemisphere, but it is weak in the summer hemisphere, and is much stronger in the westward electrojet than in the eastward electrojet. Holappa and Buzulokova (2022) noted that the physical mechanisms of IMF By effects, which apply not only to auroral zone electron precipitation and ionospheric conductance but also to the fluxes of energetic magnetosphere protons and the growth rate of the ring current, are still not fully understood. In addition, Workayehu et al. (2021), using nearly 6 years of magnetic field measurements from the Swarm A and C satellites, reported that auroral currents were stronger in the northern hemisphere than the southern hemisphere for IMF By > 0 in most local seasons under both signs of IMF Bz. This pattern provides an explanation for the distribution of IMF orientations in the ecliptic plane shown in the Engebretson, Ahmed et al. (2021) superposed epoch study of GMDs observed in Arctic Canada, because the northern hemisphere values would be stronger for By > 0, so more likely to exceed the 6 nT/s amplitude threshold.
The distributions of IMF Bx and By, shown separately in Figure 9 of that paper, included both positive and negative values, but the median in Bx was <0 and that in By was >0, consistent with a Parker-spiral oriented IMF vector directed toward Earth. Figure S7 in the Supporting Information S1 of that paper, showing the medians of the x-y vector component of the IMF, revealed that a Parker-spiral vector directed Earthward (with By > 0) was observed consistently for premidnight events occurring less than 30 min after the most recent substorm onset (panels a1-a5), and was often observed also during most premidnight events occurring between 30 and 60 min after substorm onset (panels b1-b5). However, the directions and sign of the By component were much more varied and at times had ortho-Parker-spiral orientation for postmidnight events. (It is notable that no postmidnight GMDs during the first half of 2016 satisfied the selection criteria for the present study.) Our observations of a strong seasonal dependence regardless of the sign of IMF By appear to be somewhat inconsistent with the IMF By polarity dependence in these earlier studies. However, Workayehu et al. (2021) also reported a complex dependence on season, and it is conceivable that the seasonal dependence evident in our data set might be restricted to the longitude region and/or 6 month period where these observations were made. A subsequent study by Holappa et al. (2021a) found that the substorm onset latitude and the isotropic boundary latitude of energetic protons were both ∼1° lower during IMF |By| > 3 nT conditions than for smaller By, and that the substorm occurrence frequency was larger for small |By|. They suggested, consistent with the results of a resistive MHD study by Hesse and Birn (1990), that the magnetotail was more stable during conditions of large IMF |By|, requiring the magnetotail lobes and the polar cap to contain more flux to initiate a substorm compared to the situation when |By| was small. Our observations that GMDs were strongly suppressed under IMF conditions dominated by the By component (Engebretson, Ahmed, et al., 2021) and occurred only when preceded by intervals of IMF Bz < 0 and conditions when |IMF By| < 2 |IMF Bz| (this study) suggest that their generation is in some way linked to magnetotail reconnection. A probable explanation for the remaining scatter in amplitude ratios that is independent of seasonal or IMF By-related factors is based on the horizontal dimensions of the GMDs and on the effective separation (from ∼150 to ∼300 km) and range of sensitivity of ground-based magnetometers. Chinkin et al. (2021), using data from the IMAGE magnetometer network, reported that magnetic field variations associated with GICs had a spatial scale of a few hundred km, consistent with estimates of the horizontal half-amplitude radius of the GMDs of ∼275 km reported in earlier work, and of ∼250-450 km by Weygand et al. (2021), with a somewhat greater longitudinal extent in some cases. (Note: Events were restricted to those for which the IMF had a fairly steady By/Bz ratio in Artemis/Themis data during the 15 min prior to the GMD and for which the Artemis/Themis and OMNI data agreed on their signs.) The sensitivity of a ground magnetometer to ionospheric currents varies as the inverse square of the distance from the magnetometer on the ground to the current in the overhead ionosphere (∼100-150 km altitude), and thus also falls off rapidly as the horizontal separation exceeds 200-300 km.
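Taking this inverse-square scaling at face value, a short numerical check (a sketch of ours, not drawn from the paper) shows how quickly the ground signature of an overhead current weakens with horizontal offset for an assumed current altitude of 110 km.

import numpy as np

def relative_sensitivity(horizontal_km, current_altitude_km=110.0):
    # Relative ground-level sensitivity to a localized ionospheric current,
    # assuming a simple 1/r^2 falloff with r the line-of-sight distance from
    # the magnetometer to the current, as described in the text.
    r_sq = current_altitude_km**2 + horizontal_km**2
    return current_altitude_km**2 / r_sq   # normalized to 1.0 directly underneath

for d in (0.0, 100.0, 200.0, 300.0):
    print("horizontal offset", int(d), "km -> relative amplitude",
          round(relative_sensitivity(d), 2))
# Offsets of 200-300 km reduce the expected signature to roughly 0.23-0.12 of
# its overhead value, consistent with the rapid falloff noted above.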
If the center of an event fell within 200-300 km of both the northern and southern "conjugate" stations, both stations would see the same event with little additional difference in amplitude. If the horizontal distance between the center of a GMD and only one ground magnetometer site exceeded 200-300 km, this would produce an additional reduction in the measured amplitude at that station. Relative Timing We have also noted that GMDs observed in conjugate hemispheres very often occurred nearly simultaneously (within <3 min) regardless of IMF By polarity as long as |By| < ∼2|Bz|. Many satellite imaging studies reviewed by Ohma et al. (2018) using simultaneous observations of similar auroral features in both hemispheres have shown that they are displaced longitudinally when IMF By ≠ 0, such that when IMF By > 0 structures appear in the southern auroral zone up to ∼2 hr MLT later than in the north (i.e., shifted eastward), and vice versa for IMF By < 0. Østgaard et al. (2011b), using data from several years of conjugate auroral observations from the IMAGE and Polar spacecraft, found a sinusoidally varying mean longitudinal displacement at substorm onset between the two hemispheres that maximized near ±0.5 hr MLT at IMF clock angles of 90° and 270°, respectively, and an event study by Reistad et al. (2016) showed displacements of up to 3 hr MLT. However, substorms have been observed to rapidly decrease this displacement. Østgaard et al. (2011a) found that the conjugate auroral features became more similar in MLT during the expansion phase of two substorms. Throughout the first substorm the IMF was stable and By dominated, so they concluded that the longitudinal displacement was removed by processes related to the magnetospheric substorm. Ohma et al. (2018) subsequently presented 10 case studies confirming that a reduction in the longitudinal displacement was a common signature of substorms: the aurora became more north-south symmetric in 8-30 min, which is similar to the typical duration of the substorm expansion phase, and the rate of change was related to the reconnection rate. As noted in earlier studies by and Engebretson, Ahmed, et al. (2021), the majority of >6 nT/s GMDs most often occurred within 30 min after substorm onsets (but only very rarely coinciding with them), although many others occurred long after the onset of any prior substorm. If GMDs are triggered by reconnection in the magnetotail, as appears likely, then this close agreement in Bx minima associated with GMDs should also be expected for events occurring shortly after a substorm onset, as happened prior to all four of the case study events presented above, but a time shift should be expected for events occurring after extended intervals of lesser geomagnetic activity. The 10 events shown in Figure 11 with T N − T S > 3 min (appearing later in the north) were all associated with IMF By < 0. This is consistent with the shift in auroral longitudes observed by Østgaard et al. (2011a) and Ohma et al. (2018). The values of the AL index 1 hr before the occurrence of the GMDs shown in Figure 11 provide additional evidence suggesting consistency with their findings. The values of the AL index 1 hr before the 10 GMD events with T N − T S > 3 ranged from −10 to −350 nT, with a mean of −117 nT and a median of −105 nT, characteristic of relatively quiet conditions. 
In contrast, the AL values during the 33 events with | T N − T S | < 3 min ranged from −40 to −460 nT, with a mean of −191 nT and a median of −180 nT, indicating somewhat more disturbed conditions. The relative timing pattern noted here is also subject to observational uncertainties, however. The magnetic conjugacy between locations in Antarctica and Greenland is known to vary with season and dipole tilt as well as with magnetic activity, which in turn is parameterized in empirical magnetic field models by magnetic indices and the components of the IMF. According to their nominal corrected geomagnetic latitudes, GHB (69.2°) is slightly closer to the conjugate latitude of PG5 (−69.9°) than is SKT (70.7°) and SKT is slightly closer to the conjugate latitude of PG4 (−71.2°) than is STF (71.9°). Using the T89 model to trace the field lines of the West Greenland stations to the surface of Antarctica, at 21:00 UT on 14 April 2016 (using K p = 3.333) SKT was magnetically very close to PG5 and PG4 was at the same magnetic latitude as STF but shifted westward, consistent with the pairing used to determine amplitude ratios above. However, at 01:00 UT on 11 May 2016 (using K p = 2.0), GHB was magnetically very close to PG5 and SKT was very close to PG4. Given these K p values, higher latitude stations were near or outside the region of closed field lines, so no conjugate tracing using the T89 model was possible. Motivated by the variation in conjugacy indicated by these model results, we have calculated the timing differences between the GMD Bx minima for the GHB-PG5 and SKT-PG4 station pairs in order to compare them with SKT-PG5 and STF-PG4 station pairs. Table 7 shows the number of events with the differences between Bx minima >3 and < 3 min for each of these pairs, using all 66 events regardless of IMF conditions but excluding those events at which multiple closely spaced minima appeared at one or both stations. The number of events in the GHB-PG5 column was reduced because no data were available from GHB between 22 May and 2 June. Table 7 shows that the number of Bx minima simultaneous to within 3 min was more than double the number with larger time delays for three of the four station pairs. The numbers for GHB-PG5 were almost equal, and the reason for this discrepancy is not clear. These comparisons suggest that despite the modest shifts in conjugacy expected between hemispheres at these high magnetic latitudes, the similarities and differences reported in Section 4 above appear to be reasonably consistent. Conclusions Using the only currently available conjugate high latitude magnetometer arrays, we have investigated the conjugacy of large transient geomagnetic disturbances that, if they occurred over more technologically developed regions, would generate large GICs. Four case studies have demonstrated some of the similarities and differences between GMD events in conjugate hemispheres, and by using 6 months of magnetic field data from four conjugate station pairs in West Greenland and Antarctica in combination with measurements of the IMF, we have been able to quantify their dependence on season and IMF Bz and By polarity. Uncertainty in the IMF dependences stems from the still-limited number of events, the high variability of the IMF, and disagreements between currently available sources of IMF data due to the lack of consistent measurements near the Earth-Sun line and near Earth. 
In addition, some of the variability in the timing of conjugate GMDs may be due to the inaccuracy and variability of conjugate mappings between hemispheres.

1. We have found that IMF Bz was <0 during the 15 min preceding and/or during a large majority (71%) of these events (as in our other recent studies). This suggests, but cannot strongly confirm, the influence of reconnection in the magnetotail as a link in the causal chain leading to these events.

2. Two factors appeared to exert modest control over the relative amplitude of GMDs in the northern and southern polar regions. (a) The N/S amplitude ratio was increased when IMF By > 0, and decreased when IMF By < 0. (b) Latitudinal/seasonal dependences caused GMDs to have larger amplitudes in the winter hemisphere; larger seasonal differences were observed at higher latitudes.

3. The remaining differences in amplitude may well be due to the convolution of the spatial localization of the ionospheric currents that cause these events and the horizontal range of detection of these currents by ground-based magnetometers. A dense, two-dimensional array in at least one hemisphere may be needed to counter the combined effects of the small-scale size of nighttime GMDs and the dynamically varying points of magnetic conjugacy at these high latitudes, and thereby diminish the large event-to-event variability evident in this data set.

4. The relative timing between conjugate GMDs (the majority of them simultaneous to within ±3 min) was consistent both with the sense of longitudinal shift in auroral features revealed in earlier studies of simultaneous satellite images (Figure 9), and with the rapid reduction in these shifts during substorms. The addition of stations somewhat west of the West Greenland-AAL-PIP conjugate arrays in the case studies gave no evidence for any IMF By-induced longitudinal skewing in opposite hemispheres, consistent with the close temporal connection of many of these events to prior substorm activity.

5. GMDs were observed in conjugate hemispheres regardless of IMF By polarity as long as |By| < ∼2|Bz|. As noted by Engebretson, Ahmed, et al. (2021), a separate study of 156 intervals in 2015 when the IMF was dominated by large By values found that only one of these coincided with a GMD in the northern hemisphere. This suggests that GMD occurrences are suppressed by large and dominant IMF By values.

(Note to Table 7: Events with isolated minima are included there regardless of IMF orientation.)

Much work remains to be done before the dependences on external factors identified here can be accurately characterized. The statistical associations found here between GMD occurrences and prior intervals of IMF Bz < 0 and IMF By of either sign are insufficient to quantify any possible IMF-related delay time until GMD onset, in part because of the limitations of the IMF databases themselves. In addition, the physical processes leading to GMDs are still only poorly understood. The fact that they occur not only during geomagnetically disturbed conditions but also during relatively quiet times suggests that, although they are likely to be caused by instabilities in the magnetotail, ground-satellite conjunction studies at various tailward distances appear to be necessary in order to characterize, and even more so to predict, the occurrence of the mesoscale or small-scale events that trigger them.
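As a minimal sketch of how the IMF preconditions summarized in points 1 and 5 could be screened for, the following hypothetical helper (not the authors' actual selection code) flags, for the 15 min of 1-min IMF data preceding a GMD, whether Bz turned southward and whether |By| < 2|Bz| held during the southward intervals.

```python
import numpy as np

def imf_precondition_met(by, bz):
    """Illustrative reading of the IMF criteria discussed in the text, applied
    to a 15-sample (15 min at 1-min cadence) window preceding a GMD:
    (1) Bz < 0 at some point in the window, and
    (2) |By| < 2|Bz| whenever Bz is southward.
    `by`, `bz`: arrays of IMF components in nT."""
    by, bz = np.asarray(by, float), np.asarray(bz, float)
    southward = bz < 0.0
    if not southward.any():
        return False
    return bool(np.all(np.abs(by[southward]) < 2.0 * np.abs(bz[southward])))

# Examples with made-up numbers:
print(imf_precondition_met(by=[1.0] * 15, bz=[-2.0] * 15))   # True  (modest By, southward Bz)
print(imf_precondition_met(by=[6.0] * 15, bz=[-2.0] * 15))   # False (By-dominated IMF)
```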
2022-10-11T15:35:00.591Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "185bc4737707baa9dac47863c2676da0e09dac90", "oa_license": "CCBY", "oa_url": "https://backend.orbit.dtu.dk/ws/files/292396531/JGR_Space_Physics_2022_Engebretson_Geomagnetic_Disturbances_That_Cause_GICs_Investigating_Their_Interhemispheric_2_.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "b997b291b153a15c2aadfbb5bb1a17b3a12b9cbe", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
13577614
pes2o/s2orc
v3-fos-license
Should the diagnosis and management of OSA move into general practice? Obstructive sleep apnoea (OSA) together with insomnia are the most common sleep disorders [1]. OSA is secondary to complete or partial airway obstruction caused by recurrent pharyngeal collapse during sleep [2], producing loud snoring or choking and frequent awakenings. This chronic sleep disturbance results in daytime sleepiness and fatigue that impedes patient’s ability to function, thereby negatively affecting his or her quality of life [3, 4]. In 2015, the American Academy of Sleep Medicine (AASM) task force released quality measures for the care of adult patients with OSA. The first quality measure outcome is to improve detection and categorisation of OSA symptoms and severity [4]. Introduction Obstructive sleep apnoea (OSA) together with insomnia are the most common sleep disorders [1]. OSA is secondary to complete or partial airway obstruction caused by recurrent pharyngeal collapse during sleep [2], producing loud snoring or choking and frequent awakenings. This chronic sleep disturbance results in daytime sleepiness and fatigue that impedes patient's ability to function, thereby negatively affecting his or her quality of life [3,4]. In 2015, the American Academy of Sleep Medicine (AASM) task force released quality measures for the care of adult patients with OSA. The first quality measure outcome is to improve detection and categorisation of OSA symptoms and severity [4]. The current prevalence rate of OSA is about 10 to 20% of middle-aged adults, with at least 4-8% of men and 2-4% of women suffering from symptomatic disease [3]. Increased knowledge of OSA by general practitioners and the general population has heightened the demand for consultations with a specialist. Over the past two decades, with the increasing prevalence of obesity, the most important risk factor in sleep breathing disorders, the number of patients diagnosed as suffering from OSA has increased drastically and it will increase over the coming years [3]. However, this increase in demand has not been accompanied by strategic changes in the cost-efficient diagnosis and/or treatment of these diseases. Therefore, there is a pressing need to improve management of this disease by new strategies where definitely primary care medicine has to be involved. The impact of OSA on global health has been widely reported. It is associated with somnolence and fatigue as mentioned, impaired cognitive function, deficit in sustained attention which may result in an increased motor vehicle accident risk [5,6] and is also a source of lost productivity in the workplace [7]. The Sleep Heart Health and other studies [8] have suggested that patients with OSA are at increased risk of cardiovascular disease, including hypertension [9], myocardial infarction, refractory angina [10], stroke [11] and even death. In addition, nocturnal cardiac arrhythmias [12,13] and mild-to-moderate pulmonary hypertension can be present in patients with OSA [14]. Metabolic abnormalities, including diabetes are observed in up to 50% of patients with OSA [15,16]. However, it has to be mentioned that causality is not clear in a number of the previous mentioned medical entities. In addition, anaesthesiologists have also suggested that patients with OSA have an increased risk of postoperative complications. In a population of surgical patients with OSA, Deflandre et al. [17] recorded an incidence of 7.17%. Therefore, nowadays OSA represents a major public health issue [3,4]. 
High prevalence, accessibility and cost problems are the main reasons that justify research into more available and less costly, but comparably reliable, alternatives. To this end, all levels of medical care must be involved: 1) primary care or specialists not directly involved with sleep, 2) second-level hospitals, which should have the ability to perform simplified studies, and 3) tertiary hospitals with complex equipment and multidisciplinary environment have to be prepared to receive patients with complex sleep disorders of breathing as well as to solve the sleep related diseases [18,19]. Management, screening and assessment for OSA needs to be a priority in primary care settings The involvement of different fields or levels of medicine is needed to face the management of OSA patients and search for strategies that guarantee cost-effectiveness [19][20][21][22]; specifically focusing on diagnosis, therapeutic decision (i.e. continuous positive airway pressure (CPAP) or other treatments) and follow up. While the follow-up is already implemented in some primary care settings, the diagnosis and therapeutic decision, which are probably the most important, are not yet fully implemented in primary care. Both are handled in sleep centres using different devices and a range of variables, including among the most relevant clinical symptoms (i.e. sleepiness), the potential consequences of OSA (i.e. high risk of cardiovascular events) and the apnoea-hypopnoea index level [22,23]. It is important to consider two types of questionnaires to be used in primary care. Selfreported questionnaires have already been tested in a primary care environment with predictive performance similar to when implanted in sleep units (Berlin Questionnaire, Stop-Bang Questionnaire, Sleep Apnea Clinical Score) [23]. The other type of questionnaire, including only objective data, may be a better predictor of OSA. Among others (see table 1), the DES-OSA score, a questionnaire developed by Deflandre et al. [17] analyses five patient anthropometric variables (Mallampati score, distance between the thyroid and the chin, body mass index, neck circumference and sex) and has been proven to be effective on pre-operative assessments of OSA. Perhaps this type of anthropometric questionnaire, due to its simplicity and objectivity, should be implemented in primary care for screening purposes. Regarding sleep studies, there are two major types: full polysomnography (PSG) and home respiratory polygraphy (HRP). PSG is considered the diagnostic gold standard. However, access to this procedure is limited because it requires special institutions with trained technicians and is relatively expensive overall. As a result, suspected OSA patients may be left waiting a significant amount of months before being diagnosed and able to initiate medical therapy or CPAP [22]. HRP is a simplified portable monitor that includes sensors to measure airflow, respiratory efforts (assessed by thoracic and abdominal bands), pulse oximetry and body position [24,25]. Institutions such as the AASM and the American Thoracic Society recommend the management of OSA by HRP in pre-test subjects with high OSA suspicion (usually male patients, snores, with witnessed apnoeas, daytime sleepiness, obese and short neck), without notorious morbidity or suspicion of neurological disorders, as stated in their guidelines for the use of portable monitors [25]. In addition, HRP is considered a cost-effective alternative for OSA diagnosis in selected patients [26,27]. 
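To make the questionnaire-based screening route concrete, below is a minimal sketch of a STOP-BANG-style score. The item definitions follow the acronym expanded later in this article; the cut-offs used here (BMI > 35 kg/m², age > 50 years, neck circumference > 40 cm, total score ≥3 flagging elevated risk) are commonly cited values and should be treated as assumptions for illustration rather than as prescriptions from the cited studies.

```python
def stop_bang_score(snoring, tiredness, observed_apnoea, high_bp,
                    bmi, age, neck_cm, male):
    """Return (score, risk_band) for a STOP-BANG-style screen.
    Each positive item scores 1 point; thresholds are commonly used
    values assumed here for illustration."""
    points = [
        snoring,            # loud snoring
        tiredness,          # daytime tiredness / sleepiness
        observed_apnoea,    # witnessed apnoeas
        high_bp,            # treated or untreated hypertension
        bmi > 35,           # kg/m^2
        age > 50,           # years
        neck_cm > 40,       # neck circumference, cm
        male,
    ]
    score = sum(bool(p) for p in points)
    band = "low" if score <= 2 else ("intermediate" if score <= 4 else "high")
    return score, band

print(stop_bang_score(snoring=True, tiredness=True, observed_apnoea=False,
                      high_bp=True, bmi=33, age=57, neck_cm=42, male=True))
# -> (6, 'high')
```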
Randomised controlled studies have already shown that ambulatory management of OSA in specialist sleep unit using HRP and autotitrating CPAP (auto-CPAP) produce comparable patient outcomes with standard laboratory-based sleep study methods [21,[25][26][27][28]. However, whether an ambulatory approach would be noninferior when directly and broadly transferred to a primary care setting is still unknown and this represent a major challenge since one-third of primary care patients report symptoms suggestive of OSA [29]. Overnight oximetry should be considered as a screening tool. As demonstrated by the Australian group, an oxygen desaturation index >16 in combination with anthropometric objective questionnaires, predicts an apnoea-hypopnoea index >30 in most patients [30]. As mentioned, this way of work should be implemented in primary care in the years to come. Therapeutic decision In their study, Masa et al. [26] made a further step by comparing automatic versus manual scoring of home single-channel nasal pressure and showing that automatic scoring is good enough to correctly recommend CPAP in most of the more symptomatic patients. In addition, the authors suggested that the optimal pressure could be calculated automatically by an auto-CPAP device [26]. The existence of these devices for diagnosis and treatment could be very useful in primary care management in the future, along with a networked way of working, with educational and training sessions in primary care, which are essential and should be compulsory. Very few research studies analysed the effectiveness of the management (diagnosis) of high pretest OSA subjects in primary care with appropriate medical backup using simplified devices [30][31][32][33] (table 1). These were multicentre, randomised studies performed on an adult population aged over 18 years involving primary care physicians and trained nurses. The main outcomes included were: functional improvements on sleep questionnaires (daytime sleepiness using Epworth Sleeping Scale and Functional Outcomes of Sleep Questionnaire (FOSQ), among others), cognitive impairment tests, CPAP adherence and cost-effectiveness. Although they showed similar functional outcomes and adherence to CPAP treatment in patients managed in a primary care context compared with patients managed with in-laboratory PSG, at present, this way of working has not yet been fully implemented due to several reasons: on the one hand, there is a deficit of time in primary care and, on the other, there is an absence of proper education and training sessions. It is also worth noting that these trials validating HRP for OSA diagnosis in primary care excluded patients with comorbidities, such as chronic obstructive pulmonary disease and congestive heart failure, for whom, as demonstrated by Olivera et al. [34], the concordance between HRP and in-lab PSG (at least with COPD) is inadequate, due either to poor oximetry and/or flow recordings in a significant number of patients. Final comments The management of OSA has evolved over the past 30 years. In the beginning, it seemed that sleep diseases, particularly sleep breathing disorders, were rare and needed to be controlled in laboratory hospital sleep units by a specialist. The use of portable home-based monitoring sleep devices has allowed physicians (especially respiratory sleep specialists) to start diagnosing OSA and prescribing therapy based on home studies. 
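The oximetry route mentioned above (an oxygen desaturation index above 16 events per hour combined with an objective questionnaire) can likewise be sketched in a few lines. The 3% desaturation criterion and the simple rolling baseline used here are assumptions made for illustration; scoring rules differ between laboratories and devices.

```python
import numpy as np

def odi(spo2, sample_interval_s=1.0, drop_percent=3.0, baseline_window_s=120):
    """Crude oxygen desaturation index: desaturation events per hour, counted
    whenever SpO2 falls >= `drop_percent` below a rolling baseline (median of
    the preceding `baseline_window_s` seconds). `spo2`: 1-D array of SpO2 (%).
    Illustrative only; not a validated scoring algorithm."""
    spo2 = np.asarray(spo2, float)
    win = max(1, int(baseline_window_s / sample_interval_s))
    events, in_event = 0, False
    for i in range(win, len(spo2)):
        baseline = np.median(spo2[i - win:i])
        desaturated = spo2[i] <= baseline - drop_percent
        if desaturated and not in_event:
            events += 1                 # count the onset of each desaturation
        in_event = desaturated
    hours = len(spo2) * sample_interval_s / 3600.0
    return events / hours

# A patient with ODI > 16 and a high questionnaire score would be a strong
# candidate for home respiratory polygraphy rather than in-laboratory PSG.
```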
However, to be implemented in a primary care environment, personal use requires proper instruction, and support must be available when needed. At present, family physicians should screen patients based on questionnaires, such as STOP-BANG (snoring, tiredness, observed apnoea, high blood pressure, body mass index, age, neck circumference, gender), that analyse symptoms and anthropometric variables [35] or those that incorporate oximetry [30]. Diagnosis procedures by using simple devices are definitely the next step. Summary When a disease is common, with comorbidities and high costs, all levels of medical care must be implicated. Nurses and family physicians, extra hospital respirologists, non-reference centres, as well as sleep units must work in coordination; each one with duties and rights. Adequate preparation and training in sleep medicine are key. At present, a significant number of nondifficult OSA patients must be followed by primary medicine (family physicians and specially nurses). Diagnostic procedures are more difficult to perform in primary care but should definitely be the next step in nondifficult patients. We have to realise that, in the future, technology will be better and simpler and a significant number of OSA patients will be managed in primary care. Sleep centres have to be multidisciplinary, working in other crucial fields such as healthy sleep, chronobiology, telemedicine and mechanical ventilation, and should remain in charge of difficult patients such as non-compliers or with important comorbilities. Finally, it is important that a sleep unit, with adequate preparation and training, should comprise a sleep laboratory; with inside hospital clinic and outside primary care medicine both having a role. [19] with permission from the publisher.
2018-04-03T00:38:39.714Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "d14291d96e23a03d1aec0ab0d76ec6e414940ceb", "oa_license": "CCBYNC", "oa_url": "https://breathe.ersjournals.com/content/breathe/12/3/243.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d14291d96e23a03d1aec0ab0d76ec6e414940ceb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
209515617
pes2o/s2orc
v3-fos-license
$J/\psi$ and $\psi(2S)$ production at forward rapidity in $p$+$p$ collisions at $\sqrt{s}=510$ GeV The PHENIX experiment at the Relativistic Heavy Ion Collider has measured the differential cross section, mean transverse momentum, mean transverse momentum squared of inclusive $J/\psi$ and cross-section ratio of $\psi(2S)$ to $J/\psi$ at forward rapidity in \pp collisions at \sqrts = 510 GeV via the dimuon decay channel. Comparison is made to inclusive $J/\psi$ cross sections measured at \sqrts = 200 GeV and 2.76--13 TeV. The result is also compared to leading-order nonrelativistic QCD calculations coupled to a color-glass-condensate description of the low-$x$ gluons in the proton at low transverse momentum ($p_T$) and to next-to-leading order nonrelativistic QCD calculations for the rest of the $p_T$ range. These calculations overestimate the data at low $p_T$. While consistent with the data within uncertainties above $\approx3$ GeV/$c$, the calculations are systematically below the data. The total cross section times the branching ratio is BR $d\sigma^{J/\psi}_{pp}/dy (1.2<|y|<2.2, 0 The PHENIX experiment at the Relativistic Heavy Ion Collider has measured the differential cross section, mean transverse momentum, mean transverse momentum squared of inclusive J/ψ and cross-section ratio of ψ(2S) to J/ψ at forward rapidity in p+p collisions at √ s = 510 GeV via the dimuon decay channel. Comparison is made to inclusive J/ψ cross sections measured at √ s = 200 GeV and 2.76-13 TeV. The result is also compared to leading-order nonrelativistic QCD calculations coupled to a color-glass-condensate description of the low-x gluons in the proton at low transverse momentum (pT ) and to next-to-leading order nonrelativistic QCD calculations for the rest of the pT range. These calculations overestimate the data at low pT . While consistent with the data within uncertainties above ≈ 3 GeV/c, the calculations are systematically below the data. The total cross section times the branching ratio is BR dσ J/ψ pp /dy(1.2 < |y| < 2.2, 0 < pT < 10 GeV/c) = 54.3 ± 0.5 (stat) ± 5.5 (syst) nb. I. INTRODUCTION Charmonium states such as J/ψ and ψ(2S) mesons are bound states of a charm and anti-charm quark (cc). At the Relativistic Heavy Ion Collider (RHIC) energies, they are produced mostly from hard scattering of two gluons into a cc pair followed by the evolution of this pair through a hadronization process to form a physical charmonium. Despite several decades of extensive studies [1][2][3][4][5][6][7][8][9] since the discovery of J/ψ, we still have very limited knowledge about the J/ψ production mechanism and hadronization. Therefore, carrying out as many charmonium measurements as possible in p+p collisions over a wide range of transverse momentum (p T ) and of rapidity (y) at different energies is essential to understanding production mechanisms. These measurements over a wide range of p T (down to zero p T ) and rapidity allow calculating quantities, such as the mean transverse momentum p T , the mean transverse momentum squared p 2 t , and the p T -integrated cross section dσ/dy. The collision energy dependence of these quantities can put stringent constraints on the different theoretical approaches that are used to describe the hadronic production of J/ψ. These approaches include the color-evaporation model (CEM) [10,11], the color-singlet model (CSM) [12] and the nonrelativistic quantum chromodynamics formalism (NRQCD) [13]. 
In this work, we compare the data to NRQCD, an effective field theory derived from QCD and valid for heavy-quark pairs with low relative velocity, where a J/ψ can be formed from cc pair produced in a color-singlet or a color-octet state. In this paper, we present the inclusive J/ψ production cross section and the ratio of ψ(2S) to J/ψ production cross sections at forward rapidity (1.2 < |y| < 2.2) measured in p+p collisions at center of mass energy √ s = * Deceased † PHENIX Spokesperson: akiba@rcf.rhic.bnl.gov 510 GeV. These mesons are measured in the dimuon decay channel. The J/ψ inclusive differential cross sections are obtained as a function of p T and y over a wide range of p T . The J/ψ and ψ(2S) results at √ s = 510 GeV are the first measurements at this rapidity. Comparisons to similar PHENIX measurements performed at √ s = 200 GeV [2] and Large Hadron Collider (LHC) measurements at √ s = 2.76, 5.02, 7, 8 and 13 TeV [3][4][5][6] allow studying the variations of p T , p 2 t and dσ/dy as a function of √ s. The results are also compared to next-to-leading order (NLO) NRQCD calculations [8]. The paper is organized as follows: the PHENIX apparatus is described in Sec. II, the data samples used for this analysis and the analysis procedure are discussed in Sec. III, while the results are presented and compared to measurements at different √ s as well as to models in Sec. IV. II. EXPERIMENTAL SETUP A complete description of the PHENIX detector can be found in Ref. [14]. Only the detector systems relevant to this measurement are briefly described here. The PHENIX muon spectrometers, see Fig. 1, cover the full aziumth and the north (south) arm cover forward (backward) rapidity, 1.2 < y < 2.2 (−2.2 < y < −1.2). Each muon spectrometer comprises a hadronic absorber, a magnet, a muon tracker (MuTr), and a muon identifier (MuID). The absorbers comprise layers of copper, iron and stainless steel and have about 7.2 interaction lengths. Following the absorber in each muon arm is the MuTr, which comprises three stations of cathode strip chambers in a radial magnetic field with an integrated bending power of 0.8 T·m. The MuID comprises five alternating steel absorbers and Iarocci tubes. The composite momentum resolution, δp/p, of particles in the analyzed momentum range is about 5%, independent of momentum and dominated by multiple scattering. Muon candidates are identified by reconstructed tracks in the MuTr matched to MuID tracks that penetrate through to the last MuID plane. Since 2012 the PHENIX detector had a new forward vertex detector (FVTX) [15], which comprises four planes of silicon strip detectors, finely segmented in radius and coarsely segmented in azimuth. For the subset of muon candidate tracks passing several of these detector planes, this additional information was used to improve mass resolution by a factor of 1.5 for studying ψ(2S). Another detector system relevant to this analysis is the beam-beam counter (BBC), comprising two arrays of 64Čerenkov counters, located on both sides of the interaction point and covering the pseudorapidity range 3.1 < |η| < 3.9. The BBC system was used to measure the p+p collision vertex position along the beam axis (z vtx ), with 2 cm resolution, and initial collision time. It was also used to measure the beam luminosity and form a minimum bias (MB) trigger. III. DATA ANALYSIS The results presented here are based on the data sample collected by PHENIX during the 2013 p+p run at √ s = 510 GeV. 
The BBC counters provided the MB trigger, which required at least one hit in each of the BBCs. Events, in coincidence with the MB trigger, containing a muon pair within the acceptance of the spectrometer are selected by the level-1 dimuon trigger (MuIDLL1-2D) requiring that at least two tracks penetrate through the MuID to its last layer. The data sample, used in this analysis, corresponds to 3.02 × 10 12 MB events or to an integrated luminosity of 94.4 pb −1 . A. Raw yield extraction A set of quality cuts is applied to the data to select good p+p events and good muon candidates as well as to improve the signal-to-background ratio. Good p+p events are selected by requiring that the collision occurs in the fiducial interaction region |z vtx | < 30 cm as measured by the BBC. Each reconstructed muon candidate comprises a combination of reconstructed muon tracks in the MuTr and in the MuID. The MuTr track is required to have more than 9 hits out of the maximum possible of 16 while the MuID track is required to have more than 6 hits out of the maximum possible of 10. In addition, a cut on individual MuTr track χ 2 of 23 is applied. The MuTr track χ 2 is calculated from the difference between the measured hit positions of the track and the subsequent fit for each MuTr track. The MuTr tracks are then matched to the MuID tracks at the first MuID layer by applying cuts on maximum position and angle differences. Furthermore, there is a minimum allowed single muon momentum along the beam axis, p z , which is reconstructed and energy-loss corrected at the collision vertex, of 3.0 GeV/c corresponding to the momentum cut effectively imposed by the absorbers. Finally, a cut on the χ 2 of the fit of the two muon tracks to the common vertex of the two candidate tracks near the interaction point was applied. The invariant mass distribution is formed by combining muon candidate tracks of opposite charges (unlikesign). In addition to the charmonium signal, the resulting unlike-sign dimuon spectrum includes correlated and uncorrelated pairs. In the J/ψ and ψ(2S) region, the correlated pairs arise from correlated semi-muonic decays of charmed hadrons, beauty and the Drell-Yan process as well as light hadron decays. The uncorrelated pairs are mainly coming from the decays of light hadrons (π ± , K ± and K 0 ) which decay before or after passing through the absorber, and form the combinatorial background. The combinatorial background is estimated using two methods: The first one derives the combinatorial background from the mass distribution of the same sign (like-sign) pairs of muon candidates within the same event. The second method derives the combinatorial background from the mass distribution of the unlike-sign pairs of muon candidates from different events (mixedevent) of z-vertex position within 2 cm. The normalization of the mass distribution of the combinatorial background from the like-sign dimuon distributions (N ++ and N −− ) is calculated as: N CB = 2 N ++ N −− . The mixedevent like-sign dimuon mass distribution is normalized to the same-event like-sign combinatorial background distribution in the invariant mass range 2.0−4.5 GeV/c 2 . This factor is then used to normalize the mixed-event unlikesign dimuon mass distribution. Figure 2 shows the unlike-sign dimuon spectrum together with the combinatorial background estimated by both methods. 
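For readers unfamiliar with the procedure just described, a minimal sketch follows. Note that the like-sign normalization in the text reads "N CB = 2 N++ N−−", which appears to have lost a square root in extraction; the standard like-sign estimator is N_CB = 2·sqrt(N++ · N−−), and that form is assumed below. The event-mixing normalization window (2.0-4.5 GeV/c²) follows the text; everything else is illustrative.

```python
import numpy as np

def mixed_event_background(n_pp, n_mm,
                           same_like_hist, mixed_like_hist, mixed_unlike_hist,
                           mass_edges, norm_range=(2.0, 4.5)):
    """Illustrative sketch of the two-step normalization described in the text:
    1) same-event like-sign pairs carry the combinatorial normalization
       N_CB = 2*sqrt(N++ * N--);
    2) the mixed-event like-sign histogram is scaled to the same-event
       like-sign one in the 2.0-4.5 GeV/c^2 window, and that scale factor is
       then applied to the mixed-event unlike-sign histogram, which serves as
       the combinatorial background under the unlike-sign spectrum.
    Histograms are NumPy arrays sharing the bin edges `mass_edges` (GeV/c^2)."""
    n_cb = 2.0 * np.sqrt(n_pp * n_mm)
    edges = np.asarray(mass_edges, float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    win = (centers >= norm_range[0]) & (centers <= norm_range[1])
    scale = same_like_hist[win].sum() / mixed_like_hist[win].sum()
    return n_cb, scale * mixed_unlike_hist
```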
Both background distributions from the mixed-event and like-sign methods are consistent, however, the mixed-event background is more statistically stable, because we mix each event with the previous four events. Therefore, the mixed-event background was used to subtract the uncorrelated background from the unlikesign dimuon spectrum. After subtracting the uncorrelated background, the unlike-sign spectra including the correlated background are fitted by the following function, where p 0 − p 7 are free parameters and m µµ is the unlikesign dimuon mass. The J/ψ shape is better described with two Gaussian distributions, corresponding to the first two terms in Eq. 1, one for the J/ψ peak and a second one with larger width to account for the wider tails, which occurs due to limitations in MuTr resolution, as discussed in sec. II. The peak also includes contribution from ψ(2S), which is not resolved. An exponential is used to describe the continuum contributions from correlated backgrounds. Panels (a) and (b) of Fig. 2 show the raw spectra for selected p T and rapidity bins and panels (c) and (d) show the spectra after subtracting the combinatorial background fitted with the function described above for those selected bins. To extract the ψ(2S) signal we improve the mass resolution of the muon tracking systems by utilizing the FVTX. The FVTX being located before the absorber allows measuring the dimuon opening angle before any multiple scattering occurs in the absorber [15]. Using this additional tracking information gives a more precise measurement of the dimuon opening angle and thereby a more precise measurement of the pair mass, as well as rejecting backgrounds from decay muons that emerge from the absorber. However, these additional requirements on the dimuon tracks that are necessary to separate the J/ψ and ψ(2S) peaks also reduce the statistics by a factor of 6 due to the geometric acceptance of FVTX, therefore, we study the dimuon mass spectra in each arm integrated over p T and rapidity within each arm. The dimuon mass spectrum extracted including the FVTX after subtracting the mixed-event background is shown in Fig. 3. Given the resolution enhancement, the sum of a Gaussian and a crystal-ball function [16,17], rather than a double Gaussian, was used for each of J/ψ and ψ(2S) peaks to fit the dimuon mass spectrum. The ψ(2S) peak is expected to be wider than the J/ψ peak, due to the fact that the higher mass and harder p T spectrum of the ψ(2S) state will produce higher momentum decay muons which have larger uncertainty in their reconstructed momentum in the spectrometer due to a smaller bend in the magnetic field. By selecting only poorly reconstructed tracks, we found a J/ψ width of ≈ 200 MeV/c 2 , therefore, the width of the second Gaussian in the fit to the entire sample of tracks is set to 200 MeV/c 2 . The ratio of widths of the ψ(2S) to J/ψ is set to 1.15, following expectations of the performance of the muon tracking system [18]. The difference between the centroids of the ψ(2S) and J/ψ peaks is set to the Particle Data Group value of 589 MeV/c 2 [19]. The relative normalization of the second Gaussian is fixed to be the same for both resonances, as are the parameters for the crystal-ball line shape. B. 
Detector acceptance and reconstruction efficiency The acceptance and reconstruction efficiency (Aε rec ) of the muon spectrometers, including the MuID trigger efficiency, is determined by running a pythia 1 [20] generated J/ψ signal through a geant4-based full detector simulation [21] of PHENIX. The simulation tuned the detector response to a set of characteristics (dead and hot channel maps, gains, noise, etc.) that described the performance of each detector subsystem. The simulated vertex distribution was tuned to match that of the 2013 data. The simulated events are reconstructed in the same manner as the data and the same cuts are applied as in the real data analysis. the MuTr and MuID systems and different amount of absorber material. In the case of ψ(2S), we are interested in the ratio of its differential cross section to that of J/ψ, therefore, we extract the ratio of Aε rec for ψ(2S) and J/ψ with addition of the FVTX information in analyzing the simulation to match that of the data analysis. A factor of 0.77 (0.69) is applied to the ψ(2S)/J/ψ ratio extracted from the fit to the invariant mass spectrum to account for differences in acceptance, efficiency, and dimuon trigger efficiencies between the north (south) arm of the muon spectrometer. C. Differential cross section The differential cross section is evaluated according to the following relation: where N ψ is the extracted J/ψ or ψ(2S) yield in y and p T bins with ∆y and ∆p T widths, respectively. BR is the branching ratio where BR J/ψ→µ + µ − = (5.93 ± 0.06) × 10 −2 and BR ψ(2S)→µ + µ − = (7.9±0.9)×10 −3 [19]. Aε rec is the product of the acceptance and reconstruction efficiency. N BBC MB = 3.02 × 10 12 is the number of MB events and ε BBC = 0.91±0.04 is the efficiency of the MB trigger for events containing a hard scattering [22]. σ BBC is the PHENIX BBC cross section, 32.5 ± 3.2 mb at √ s = 510 GeV, which is determined from the van der Meer scan technique [23]. D. Systematic uncertainties All systematic uncertainties are evaluated as standard deviations and are summarized in Tables I and II. They are divided into three categories based upon the effect each source has on the measured results: Type-A: Point-to-point uncorrelated uncertainties allow the data points to move independently with respect to one another and are added in quadrature with statistical uncertainties; however, no systematic uncertainties of this type are associated with this measurement. Type-B: Point-to-point correlated uncertainties which allow the data points to move coherently within the quoted range to some degree. These systematic uncertainties include a 4% uncertainty from MuID tube efficiency and an 8.2% (2.8%) from MuTr overall efficiency for the north (south) arm. A 3.9% signal extraction uncertainty is assigned to account for the yield variations when using different functions, i.e., second, third and fourth order polynomials, to fit the correlated background and ≈ 3% uncertainty is assigned to account for the ψ(2S) contribution. The systematic uncertainty associated with Aε rec includes the uncertainty on the input p T and rapidity distributions which is extracted by varying these distributions over the range of the statistical uncertainty of the data, yielding 4.4% (5.0%) for the north (south) arm. Additional 11.2% (8.8%) systematic effect for the north (south) arm was also considered to account for the azimuthal angle distribution difference between data and simulation. 
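Returning briefly to the cross-section relation quoted at the start of this subsection: its displayed equation did not survive the text extraction ("evaluated according to the following relation: where Nψ is ..."). From the quantities defined immediately afterwards, a plausible reconstruction (to be checked against the published version) is

```latex
\mathrm{BR}\,\frac{d^{2}\sigma_{\psi}}{dy\,dp_{T}}
  = \frac{1}{\Delta y\,\Delta p_{T}}\,
    \frac{N_{\psi}}{A\varepsilon_{\mathrm{rec}}\,\varepsilon_{\mathrm{BBC}}}\,
    \frac{\sigma_{\mathrm{BBC}}}{N_{\mathrm{MB}}^{\mathrm{BBC}}}
```

that is, the raw yield per bin is corrected for acceptance and reconstruction efficiency and for the MB-trigger efficiency for hard-scattering events, and is converted to a cross section using the sampled luminosity N_MB^BBC/σ_BBC.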
To be consistent with the real data analysis, a trigger emulator was used to match the level-1 dimuon trigger for the data. The efficiency of the trigger emulator was studied by applying it to the data and comparing the resulting mass spectrum to the mass spectrum when applying the level-1 dimuon trigger which resulted in a 1.5% (2%) uncertainty for the north (south) arm. Type-B systematic uncertainties are added in quadrature and amount to 16.0% (12.4%) for the north (south) arm. They are shown as shaded bands on the associated data points. Type-C: An overall normalization uncertainty of 10% was assigned for the BBC cross section and efficiency uncertainties [24] that allow the data points to move together by a common multiplicative factor. In the measurement of the ψ(2S) to J/ψ ratio, most of the mentioned systematic uncertainties cancel out. However, the fit that was used to extract the yields is more complex and additional systematic uncertainties arose from the constraints applied during the fitting process. A systematic uncertainty from constraining the normalization factor is determined by varying the mass range over which the factor is calculated and a 3% systematic uncertainty is assigned for both arms. Systematic uncertainty of 3% (7%) was assigned to the north (south) arm on the fit range by varying the range around the nominal values, 2-5 GeV/c 2 . The effect of constraining the second Gaussian peak width to 200 MeV/c 2 was studied by varying the width between 175 and 225 MeV/c 2 , resulting in a systematic uncertainty of 12% (10%) in the north (south) arm. The systematic uncertainty component on Aε rec that survived the ratio amounts to 2.7% (4.1%) in the north (south) arm. The systematic uncertainties associated with the ratio measurement are summarized in Table II. IV. RESULTS The inclusive J/ψ differential cross section as a function of p T is calculated independently for each muon arm, then the results are combined using the best-linearunbiased-estimate method [25]. Results obtained using the two muon spectrometers are consistent within statistical uncertainties. The combined inclusive J/ψ differential cross section is shown in Fig. 5 and listed in Table III. The gray shaded bands represent the weighted average of the quadratic sum of type-B systematic uncertainties of the north and south arms, ≈ 10.1%. The average is weighted based on the statistical uncertainties of each arm. The data points are corrected to account for the finite width of the analyzed p T bins [26]. We compare the data to inclusive J/ψ data at 200 GeV [2] which show similar p T dependence. At low p T , the data are compared to prompt J/ψ leading-order (LO) NRQCD calculations [8,13] coupled to a Color Glass Condensate (CGC) description of the low-x gluons in the proton [9]. For the rest of p T range, the data are compared to prompt J/ψ NLO NRQCD calculations [8,13]. The LO-NRQCD+CGC calculations overestimate the data at low p T . The NLO-NRQCD calculations underestimate the data at high p T , while to some extent, are consistent with the data at intermediate p T , 3-5 GeV/c. It is important to stress that the nonprompt J/ψ contribution (from excited charmonium states and from B-meson decays) is not included in these calculations. This is expected to be a significant contribution at high p T ; therefore, the addition of the nonprompt J/ψ contribution could account for the difference between the data and calculations [27][28][29]. 
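As a quick arithmetic cross-check of the type-B totals quoted in the systematic-uncertainty discussion above, adding the listed components in quadrature indeed reproduces the 16.0% (north) and 12.4% (south) figures:

```python
from math import sqrt

# Type-B components (%) as listed in the text: MuID tube efficiency, MuTr
# overall efficiency, signal extraction, psi(2S) contribution, A*eps_rec
# input distributions, azimuthal data/simulation difference, trigger emulator.
north = [4.0, 8.2, 3.9, 3.0, 4.4, 11.2, 1.5]
south = [4.0, 2.8, 3.9, 3.0, 5.0, 8.8, 2.0]

quad_sum = lambda xs: sqrt(sum(x * x for x in xs))
print(f"north: {quad_sum(north):.1f}%  south: {quad_sum(south):.1f}%")
# -> north: 16.0%  south: 12.4%
```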
The p T coverage down to zero p T allows the extraction of the p T -integrated cross section, BR dσ J/ψ pp /dy(1.2 < |y| < 2.2, 0 < p T < 10 GeV/c) = 54.3 ± 0.5 (stat) ± 5.5 (syst) nb. Inclusive J/ψ differential cross section as a function of rapidity is listed in Table IV and shown in Fig. 6, which also includes PHENIX inclusive J/ψ data at 200 GeV [2] and NLO-NRQCD calculations [8]. The 510 GeV data show a similar rapidity dependence pattern to that of the 200 GeV data. NLO-NRQCD calculations overestimate the data, and this is consistent with what was observed in the case of p T -dependent differential cross section (see Fig. 5) because the y-dependent differential cross section is dominated by the low-p T region where NRQCD calculation overestimates the data. To quantify the feed-down contribution of excited charmonium states, the ratio of the cross section of ψ(2s) to J/ψ, multiplied by their respective branching ratio to dimuons, is measured (R = 2.84±0.45%) and shown in Fig. 7. This ratio is compared with other p+p and p+A systems at different collision energies [17,[30][31][32][33][34][35][36][37][38]. The results are consistent with world data within uncertainties with no significant dependence on collision energy. To better understand the shape of the p T spectrum for J/ψ at forward rapidity and quantify its hardening at √ s = 510 GeV, we calculate the corresponding mean transverse momentum p T and mean transverse momentum squared p 2 T . This is done by fitting the inclusive J/ψ p T -dependent differential cross sections with the following function [2,6]: [7,17,[30][31][32][33][34][35][36][37][38]. The associated uncertainties are the quadrature sum of the statistical and systematic uncertainties. The first error is statistical, and the second is the systematic uncertainty from the maximum shape deviation permitted by the type-B correlated errors. Figure 8 shows p T as a function of √ s from this measurement compared with results from 200 GeV PHENIX data at the same rapidity range [2], and results from AL-ICE at different √ s values and in the rapidity range, 2.5 < y < 4.0 [42]. This result follows the increasing pattern observed between PHENIX results at 200 GeV and ALICE results at 2.76-13 TeV. Figure 9 shows p 2 T as a function of √ s from this measurement compared with several other measurements [1,2,6,39,40,42,43]. Similar to p T , p 2 T from this measurement follows the increasing pattern versus √ s established by several sets of data over a wide range of energies. Below √ s of 2 TeV, the trend is qualitatively consistent with a linear fit of p 2 T versus the log of the center of mass energy from Ref. [2]. However, above √ s of 2 TeV, the ALICE data indicate p 2 T grows at an increased rate which is interpreted by authors of Ref. [6] as due to the fact that ALICE data sets have different p T ranges. The bottom cross section also increases with increasing √ s, changing the relative prompt and B-meson decay contributions to the inclusive J/ψ samples discussed here [27,44]. This may also contribute to the observed differences in the measured p 2 T . The dσ J/ψ pp /dy measurement at √ s = 510 GeV offers an opportunity to test the center-of-mass energy dependence of the p T -integrated cross section. Moreover, it bridges the gap between RHIC data at 200 GeV and ALICE data starting at 2.76 TeV [3][4][5][6]. 
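The fitting function used to extract ⟨pT⟩ and ⟨pT²⟩ did not survive extraction here. A Kaplan-type form, dσ/dpT ∝ pT [1 + (pT/b)²]^(−n), is commonly used for such fits (and, to our reading, in the cited references [2,6]) and is assumed in the sketch below; the parameter values are placeholders, not the fitted ones. The point is only how the two moments follow from such a fit over the measured range 0 < pT < 10 GeV/c.

```python
from scipy.integrate import quad

def kaplan(pt, A=1.0, b=3.5, n=4.0):
    """Kaplan-type shape for dsigma/dpT; A, b, n are placeholder values."""
    return A * pt * (1.0 + (pt / b) ** 2) ** (-n)

def pt_moments(pt_max=10.0, **pars):
    """<pT> and <pT^2> of the assumed spectral shape over 0 < pT < pt_max."""
    norm = quad(lambda p: kaplan(p, **pars), 0.0, pt_max)[0]
    mean_pt = quad(lambda p: p * kaplan(p, **pars), 0.0, pt_max)[0] / norm
    mean_pt2 = quad(lambda p: p * p * kaplan(p, **pars), 0.0, pt_max)[0] / norm
    return mean_pt, mean_pt2

print(pt_moments())   # <pT> in GeV/c and <pT^2> in (GeV/c)^2 for the placeholder shape
```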
However, ALICE data are collected at mid (|y| < 0.9) and forward (2.5 < y < 4.0) rapidities and to have a proper comparison we interpolate the ALICE data to the PHENIX forward rapidity range, 1.2 < y < 2.2. This is done by fitting the pythia generated dσ/dy distribution at each energy to the data at the same energy with only the normalization as a free parameter. An example is shown in Fig. 10. We used several pythia [45] tunes including PHENIX default, tune-A, modified tune-A and atlas-csc [46]. After fitting each of these pythia tunes to the data, we extracted dσ/dy at 1.2 < y < 2.2, from these fits. The rms value of the extracted dσ/dy from the different fits is used in the comparison to RHIC data. The error on the rms value is the rms of the errors associated with the fit results. Figure 11 shows that the data are well described by a power law, dσ V. SUMMARY We studied inclusive J/ψ production in p+p collisions at √ s = 510 GeV for 1.2 < |y| < 2.2 and 0 < p T < 10 GeV/c, through the dimuon decay channel. We measured inclusive J/ψ differential cross sections as a function of p T as well as a function of rapidity. The p T integrated differential cross section multiplied by J/ψ branching ratio to dimuons is BR dσ J/ψ pp /dy (1.2 < |y| < 2.2, 0 < p T < 10 GeV/c) = 54.3 ± 0.5 (stat) ± 5.5 (syst) nb. With these data measured over a wide p T range, we calculated p T , p 2 T and dσ/dy. The results were compared to similar quantities at different energies from RHIC and LHC to study their √ s dependence. These new measurements could put stringent constraints on J/ψ production models. The inclusive J/ψ differential cross sections were compared to prompt J/ψ calculations. These calculations included LO-NRQCD+CGC at low p T and NLO-NRQCD for the rest of p T range. These model calculations overestimated the data at low p T and underestimated the data at high p T . The nonprompt J/ψ contribution was not included which could account for the underestimation at high p T where the nonprompt processes are significant. In addition, we measured the ratio of the cross section of ψ(2s) to J/ψ, multiplied by their respective branching ratio to dimuons, R = 2.84 ± 0.45%. The result is consistent with world data within uncertainties with no dependence on collision energy.
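Referring back to the rapidity interpolation described just before the Summary, a minimal sketch of that procedure is given below. Inputs here are hypothetical: `tune_shapes` stands for the generated dσ/dy shapes of the different PYTHIA tunes, and the measured points are whatever ALICE data are being fitted at a given energy; only the normalization of each shape is left free, as in the text.

```python
import numpy as np

def interpolate_dsigma_dy(y_data, dsdy_data, dsdy_err, tune_shapes,
                          y_target=(1.2, 2.2)):
    """Illustrative version of the interpolation described above: for each tune,
    fit its generated dsigma/dy shape to the measured points with only a
    normalization free, evaluate the fitted shape over the target rapidity
    window, and take the rms over tunes (as stated in the text)."""
    y_data = np.asarray(y_data, float)
    d = np.asarray(dsdy_data, float)
    w = 1.0 / np.asarray(dsdy_err, float) ** 2
    estimates = []
    for shape in tune_shapes:
        s = np.array([shape(y) for y in y_data])
        norm = np.sum(w * d * s) / np.sum(w * s * s)   # chi^2-optimal normalization
        y_grid = np.linspace(y_target[0], y_target[1], 101)
        estimates.append(norm * np.mean([shape(y) for y in y_grid]))
    return np.sqrt(np.mean(np.square(estimates)))       # rms over tunes
```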
2019-12-31T17:00:31.000Z
2019-12-31T00:00:00.000
{ "year": 2019, "sha1": "c99a620c27e60e9b6025c4d1eeb215770b2e09ff", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.101.052006", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "a8d295557cf376e9808320ef698068df0d086892", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255693145
pes2o/s2orc
v3-fos-license
The paradigm of amyloid precursor protein in amyotrophic lateral sclerosis: The potential role of the 682YENPTY687 motif Neurodegenerative diseases are characterized by the progressive decline of neuronal function in several brain areas, and are always associated with cognitive, psychiatric, or motor deficits due to the atrophy of certain neuronal populations. Most neurodegenerative diseases share common pathological mechanisms, such as neurotoxic protein misfolding, oxidative stress, and impairment of autophagy machinery. Amyotrophic lateral sclerosis (ALS) is one of the most common adult-onset motor neuron disorders worldwide. It is clinically characterized by the selective and progressive loss of motor neurons in the motor cortex, brain stem, and spinal cord, ultimately leading to muscle atrophy and rapidly progressive paralysis. Multiple recent studies have indicated that the amyloid precursor protein (APP) and its proteolytic fragments are not only drivers of Alzheimer’s disease (AD) but also one of the earliest signatures in ALS, preceding or anticipating neuromuscular junction instability and denervation. Indeed, altered levels of APP peptides have been found in the brain, muscles, skin, and cerebrospinal fluid of ALS patients. In this short review, we discuss the nature and extent of research evidence on the role of APP peptides in ALS, focusing on the intracellular C-terminal peptide and its regulatory motif 682YENPTY687, with the overall aim of providing new frameworks and perspectives for intervention and identifying key questions for future investigations. Introduction Over the past few years, common pathways involved in neurodegenerative diseases have been highlighted [1]. Indeed, neurodegenerative disorders, such as Parkinson's disease (PD), Alzheimer's disease (AD), and amyotrophic lateral sclerosis (ALS), show various degrees of overlapping pathology, not only in clinical appearance but also at the single-protein level or in an entire signalling cascade. One case is that of the amyloid precursor protein (APP), a protein primarily at the center of AD research. An increasing number of studies have proposed APP as an active contributor to certain forms of ALS [2]. In line with this concept, APP is expressed at the neuromuscular junction (NMJ) [3] and is required for the normal development and function of the NMJ [4,5] suggesting that alterations in the signalling or processing of APP might influence NMJ function and are likely to predispose patients to motor neuron diseases (MND), such as ALS. Accordingly, alterations in the APP pathway have been proposed to represent an ALS signature preceding or anticipating the pathology [1,6]. ALS and AD are age-associated sporadic disorders with no precisely identified genetic causes but with a large number of susceptibility genes in which selective and progressive dysfunctions of specific neuronal populations occur [1,7]. Although apparently unrelated, as AD is primarily a central nervous system disease and ALS targets the peripheral nervous system, approximately 30 % of ALS patients show neuritic plaques and neurofibrillary tangles, especially in the amygdala, hippocampus, and entorhinal and insular cortices [6,8]. In addition, both AD and ALS show accumulation and deposition of a specific misfolded protein, APP in AD and TDP-43 in ALS, conferring vulnerability to specific neuronal populations [9] affecting mitochondrial and autophagy functions [10,11] and triggering neurotoxic mechanisms [12]. 
In this short review, we provide evidence for the role of APP peptides in ALS, and underline new frameworks and perspectives for future research. Findings regarding the pathophysiology of AD and ALS and their similarities are beyond the scope of this review, as many outstanding reports have extensively discussed this area of research [13,14]. In particular, because APP contains multiple structural and functional domains, we focused our review mainly on the properties of APP intracellular domains and its regulatory motif 682 YENPTY 687. Lights and shadows of APP in ALS APP is expressed in both neuronal and non-neuronal cells and is largely distributed in extra-neuronal tissues [15]. APP is present at synaptic sites in both the central and peripheral nervous systems, including the NMJ, and plays an essential role in the development of neuromuscular synapses [3,4]. APP is post-transcriptionally processed into three major isoforms with differential cellular and tissue expression patterns. The three main isoforms of APP described to date are APP 695 , APP 751 and APP 770 , depending on the number of amino acids, and are produced through alternative splicing of exons 7 and 8, which encode the Kunitz protease inhibitor and OX-2 domains, respectively [16]. APP 695 lacks both domains, whereas the APP 751 isoform containes only KPI domain in the extracellular sequence. APP 770 in addition to KPI domain, contains an OX-2 domain [17]. APP 751 and APP 770 are ubiquitously expressed, whereas APP 695 is predominantly expressed in neurons [18,19]. APP belongs to an evolutionarily conserved type I transmembrane glycoprotein family that includes two paralogues, amyloid precursor-like proteins 1 and 2 (APLP1 and APLP2), with similar structures and membrane topologies [20]. Notably, previous studies using knockout mice have emphasized the high functional redundancy of APP, APLP1, and APLP2 [21]. These proteins contain several conserved motifs that are shared between all vertebrates, including E1 and E2 domains in the extracellular region and a short intracellular C-terminal domain (AICD) that contains the highest conserved consensus motif, Y 682 ENPTY 687 [22]. The latter is thought to be crucial for AICD binding to adaptor proteins, and for APP trafficking and localization in cells [23]. Notably, while Aβ originates solely from APP, AICD originates from APP, APLP1, and APLP2 [20]. Increased β-secretase activity has been observed in animal models of ALS and nerve injury [33,34]. Similarly, a lack of α-secretase expression associated with increased β-secretase expression and activation of the amyloid cascade of APP, leading to increases in amyloid-β and AICD peptides, has been reported in the hippocampi of ALS patients [35]. In addition, deficits in lysosomal autophagic pathways have been demonstrated to activate the γ-secretase complex and lead to Aβ42 accumulation in cultured human muscle fibers [36]. Pharmacological inhibition of β-secretase enhances peripheral functional recovery after sciatic nerve ablation and increases axonal sprouting due to partial nerve injury [37]. Treatment with a monoclonal antibody (MAb) that blocks β-secretase cleavage prevents an increase in APP expression, phosphorylation, processing, and inflammatory processes [33,34]. β-Secretase cleavage to generate Aβ peptides and AICD occurs preferentially in the APP 695 isoform, although increased expression of APP 751 and APP 770 has been detected in the brains of patients with AD and is associated with increased Aβ deposition [38,39]. 
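The isoform/domain relationships just described can be summarized compactly; the mapping below simply restates the text as a reading aid and is not an annotation drawn from a sequence database.

```python
# Domain composition and expression pattern of the three major APP isoforms,
# as described above (KPI and OX-2 are encoded by exons 7 and 8, respectively).
APP_ISOFORMS = {
    "APP695": {"KPI": False, "OX-2": False, "expression": "predominantly neuronal"},
    "APP751": {"KPI": True,  "OX-2": False, "expression": "ubiquitous"},
    "APP770": {"KPI": True,  "OX-2": True,  "expression": "ubiquitous"},
}

# The 682YENPTY687 motif in the intracellular C-terminal domain (AICD) is
# preserved in all three isoforms (numbering follows neuronal APP695).
def has_kpi(isoform: str) -> bool:
    return APP_ISOFORMS[isoform]["KPI"]
```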
Interestingly, prolonged activation of extrasynaptic NMDA receptors, which has been associated to neurodegenerative diseases [40,41], shifts APP splicing from APP 695 to KPI-containing APP isoforms in neurons and triggers APP processing to produce Aβ [40]. This might imply that dysregulated splicing of APP mRNA occurs in pathological conditions and might allow discrimination of different pathologies in which APP has been demonstrated to be involved, including PD and ALS. Indeed, most reports focusing on the role of the APP gene in ALS face difficulties in discriminating between the three isoforms and refer to APP generically [22]. In this regard, a recent study reported the development of a new PCR procedure that can accurately measure and quantify the transcript copy numbers of all three major isoforms, APP 695 , APP 751 , and APP 770 [42]. It is noteworthy that specific adaptors might bind APP 695 , APP 751 , and APP 770 because of the differences in their APP sequences, APP/ KPI versus APP 695 , thus affecting APP endocytosis, trafficking, and metabolism in neuronal cells. Accordingly, sequence differences between APP 695 , APP 751 , and APP 770 may regulate the transport of APP 695 along a distinct processing route, leading to β-secretase cleavage, whereas APP/KPI isoforms are excluded from this pathway or located in a distinct subcellular compartment. In this context, the identification of these different adaptor proteins may be useful for designing innovative strategies for the differential diagnosis of neurodegenerative diseases associated with altered APP levels. Notably, only AICD generated by β-and γ-secretase cleavage translocates to the nucleus, where several potential target genes have been identified Table 1 [25]. Although γ-secretase cuts AICD in several subcellular locations, AICD generated by α-secretase cleavage at the plasma membrane has a lower likelihood of reaching the nucleus because of its short half-life and longer distance from the cell surface [43]. In contrast, AICD produced in the endosomes by βand γ-secretase cleavage can reach the nuclear vicinity before γcleavage releases AICD owing to dynein-and microtubule-mediated transport systems [44]. Interestingly, less AICD is produced in amyloidogenic APP processing than in non-amyloidogenic processing, raising the question of whether a reduction in AICD levels results in the loss of physiological functions or the gain of new functions. Some of these genes, such as those encoding the Aβ-degrading enzyme, neprilysin (NEP), are implicated in APP metabolism. Although the direct involvement of NEP in ALS has not yet been defined, it is known that NEP not only participates in the regulation of various brain functions but also in movement regulation [59,60]. Loss of NEP expression results in altered locomotor activity [61]. Other putative AICD target genes are α2-actin and transgelin, which are involved in the regulation of actin cytoskeleton dynamics [44]. Notably, many mutations in ALS-related genes that affect cytoskeletal integrity and dynamics have been identified [62]. For instance, mutations in proteins that regulate actin polymerization, including superoxide dismutase (SOD1), TDP-43, FUS, and Profilin1 [57] EGFR (Epidermal growth factor receptor) [58] (PFN1), have been identified in patients with ALS, causing an increased tendency to aggregate and leading to the formation of cytoplasmic inclusions [63]. 
Notably, mutations in PFN1 (C71G, M114T, G118V, A20T, T109M, Q139L, and E117G) [64,65] and in other cytoskeleton-related proteins such as tubulin alpha-4A (TUBA4A) [66] and kinesin family member 5A (KIF5A) [67] have been identified in patients with familial ALS. The ability of mutant PFN1 to associate with actin is impaired in ALS, and mutant PFN1 motoneurons exhibit morphological abnormalities characterized by smaller growth cones and shorter axons [68]. Indeed, the disruption of cytoskeletal integrity and/or motor neuron-dependent transport is a key feature of ALS. This highlights the need to differentiate variants of these genes that might act as primary causes of the disease from those that might act as risk factors or disease modifiers of the pathology. In addition, the possibility that altered levels of AICD in ALS might influence the expression of some of these genes and activate neurotoxic downstream pathways is an aspect that has received little attention and deserves further investigation. Glycogen synthase kinase 3β (GSK3β) promotes tau hyperphosphorylation and neurofibrillary tangle formation in AD [69]. Dysregulation of GSK3β signalling has also been recognized in ALS [70]. In this regard, increased levels of GSK3β expression and phosphorylation of the Tyr 216 residue have been reported in the spinal cord, frontal and temporal cortices, and hippocampus of patients with ALS [71][72][73]. The tumor suppressor p53 and the cell-cycle regulators cyclin B1, cyclin D1, and KAI1, which act as pro-apoptotic factors and mediators of cell-cycle re-entry, respectively, are involved in neuronal death processes, including in ALS (reviewed by Szybińska et al. [74]). Consistently, activation of p53 and an altered Bcl-x/Bax ratio were also observed in the ventral horns of the lumbar spinal cord of SOD1 transgenic mice harboring a single amino acid substitution of glycine to arginine at codon 86 (SOD1-G86R) [75]. p53 [76] and other apoptotic markers, such as Rb, Bax, Fas, and caspases [77], are increased in the motor cortex and spinal ventral horns of postmortem tissues from patients with ALS [74,78]. APP regulates the expression and function of Cu/Zn SOD1, which is one of the major targets of oxidative damage in the brains of AD patients [79,80] and whose mutations have been linked to familial ALS [81]. In ALS neurons, Aβ accumulation is an early and short-lived change [82]; Aβ interacts directly with superoxide dismutase 1 (SOD1), decreasing its enzymatic activity [83] and accelerating the onset of motor impairment [84]. Accordingly, increased Aβ immunoreactivity has been reported in the perikaryal region of anterior horn neurons of patients with familial and sporadic forms of ALS, and proximal axonal swelling was detected in mild lesions or in the early stage of the pathology [85], supporting the concept that ALS is a disease not confined to the motor system [86][87][88]. Indeed, neurodegeneration in patients with ALS also involves brain areas such as the dorsolateral prefrontal cortex, anterior cingulate, hippocampus, dentate gyrus (DG), parietal lobe, substantia nigra, cerebellum, amygdala, and basal ganglia [86,89-91], and amyloid cascade-related biomarkers have been found in the cerebrospinal fluid of patients with ALS and frontotemporal dementia (FTD) [92,93]. Additionally, an increase in Aβ levels has been observed in the skin and muscles of ALS patients [93,94]. 
Similar results were obtained in SOD1 transgenic mice harboring a single amino acid substitution of glycine to alanine at codon 93 (SOD1-G93A), a strain commonly used to model ALS, in which Aβ peptide accumulation and increased APP levels have been detected in a restricted subpopulation of vulnerable muscle fibers and in the spinal cord [2]. Interestingly, genetic ablation of APP (APP −/− ) in SOD1-G93A mice significantly prevents neuromuscular junction loss, reduces disease progression, and promotes motor neuron survival, further supporting the idea that APP and Aβ peptides might contribute to ALS pathology by accelerating muscle denervation [2]. The hypothesis that Aβ can also be neurotoxic in the peripheral nervous system was further supported by evidence from murine models of familial AD overexpressing Aβ, in which susceptibility of motor neurons to Aβ peptides, progressive degeneration of skeletal muscle, and age-dependent axonal degeneration in the spinal cord have been described [95][96][97].
682 YENPTY 687 -mediated regulation of APP processing: possible implications in ALS
As mentioned above, APP processing can result either in the production of Aβ peptides, which contribute to AD, or in the secretion of the sAPPα peptide, together with the intracellular AICD. The production of sAPP (α or β) and AICD metabolites largely depends on the level of Tyr 682 phosphorylation of the highly conserved 682 YENPTY 687 motif on the AICD (numbering refers to the neuronal APP 695 isoform). The 682 YENPTY 687 motif represents a docking site for multiple interacting proteins. 682 YENPTY 687 phosphorylation changes the AICD conformation, shifting the equilibrium between its cis and trans conformations and resulting in a loss of affinity for binding proteins. Notably, the 682 YENPTY 687 motif is preserved in APP 695 as well as in APP 751 and APP 770 . For instance, Grb2, Shc, Grb7, and Crk interact with APP only when Tyr 682 is phosphorylated, whereas Fe65, Fe65L1, and Fe65L2 interact with APP only when this tyrosine is not phosphorylated (reviewed by Matrone et al. [23]; Table 2). In this regard, the 682 YENPTY 687 -binding protein Fe65 acts as an AICD stabilizer in the nuclear compartment, where it binds to the histone acetyltransferase Tip60 to form AFT complexes, and prevents APP amyloidogenic processing [118]. Notably, decreased Fe65 expression has been identified in patients with ALS, in whom accumulation of APP and Aβ has also been detected, suggesting that the AICD-Fe65 complex is internalized into the nucleus, as occurs when the APP amyloidogenic signalling pathway is activated [86]. Similarly, the 682 YENPTY 687 -binding proteins clathrin and AP2 control the endocytosis of APP, as they do for many other transmembrane proteins, and its proper trafficking to the early endosome and back to the plasma membrane, thus preventing APP accumulation in the late endosome and lysosome where, because of the acidic environment, APP is preferentially cleaved by β-secretase [119], thereby initiating amyloidogenic processing [116,120]. Although a direct link between ALS and the clathrin and AP2 adaptors has not yet been demonstrated, alterations in the transport of endosomes or lysosomes have been proposed as likely causative factors in the pathology, as in many other neurodegenerative diseases [121]. Accordingly, several genes involved in endosomal maturation, lysosome biogenesis, and vesicle trafficking have been linked to ALS [122], suggesting that these pathways are altered in ALS. 
In addition, changes in the expression of proteins responsible for endocytic trafficking have been detected in ALS patients [123,124]. Among others, SorLA, which belongs to the VPS10Ps protein family and interacts with the 682 YENPTY 687 motif of APP [125], decreases in the anterior horn cells (AHCs) of patients with ALS compared to controls [126]. Notably, abundant SorLA expression has been detected in neurons throughout the central nervous system, including the cortex, hippocampus, cerebellum, and spinal cord, which controls retromer-dependent sorting of APP and prevents APP amyloidogenic processing [125,127,128]. Referring to another 682 YENPTY 687 binding protein, Notch, studies have reported that Notch and APP compete for α-and γ-secretase cleavage. Interestingly, inactivation of the Notch pathway and a reduction in α-secretase expression have been described in the hippocampus of patients with motor neuron deficits. Such alterations are associated with increased β-secretase expression and the activation of the amyloidogenic cascade, leading to Aβ and AICD accumulation [35,129,130]. Of note Notch 1 is essential for hippocampal neurogenesis [131,132] and the Notch receptor is expressed in neural stem cells [131]. Consistently, inactivation of the Notch pathway results in inhibition of neurogenesis, and Notch signalling is repressed in the hippocampi of patients with ALS [133]. Interestingly, some drugs that increase Notch signalling have been found to promote hippocampal neurogenesis [134]. Similarly, a rat model of AD showed that soluble Aβ 42 suppresses Notch1 expression [135]. The 682 YENPTY 687 adaptor protein Numb is involved in stem cell maintenance and differentiation, as well as in neuritogenesis, and antagonizes Notch-1 signalling [136,137]. Numb is reduced one week after the spinal cord lesion or after motor neuron ablation and then restored at one month [129] in animal models of ALS, in line with other evidence of decreased neurogenesis in patients with ALS [35,133]. Nevertheless, the role of Numb, as well as the other APP adaptor protein Shh, has also been reported in the regulation of adult neurogenesis [138] and the expression of these proteins has been found to be downregulated in animal models of motor neuron degeneration [129]. Furthermore, c-Abl [49,139] and Fyn tyrosine kinase (TK) phosphorylate the APP Tyr 682 residue of APP under physiological or pathological conditions, although Fyn appears to be primarily responsible for aberrant Tr 682 phosphorylation in AD neurons [115]. Interestingly, an increase in the amount of c-Abl mRNA, phosphorylated c-Abl and Fyn TK has been detected in motor neurons of ALS [140][141][142]. Consistently, treatment with c-Abl and Fyn inhibitors, such as dasatinib and bosutinib, or the new compound SC75741, has been shown to exert protective effects on motor neuron degeneration in G93A-SOD1 transgenic ALS mice [142] as well as iPSC-derived motor neurons from patients with ALS [141,143,144]. In addition, multiple studies have associated mutations in genes encoding different kinases with ALS [145,146], suggesting that alterations in the function of specific kinases and/or their downstream targets are crucial to neuronal survival, and that protein kinase inhibitors may be a reasonable target for the design of innovative ALS treatment [147,148]. Multiple lines of evidence indicate that regulation of APP trafficking might prevent Aβ generation. 
Consistently, increased sAPPα levels appeared to be associated with a reduced risk of developing AD [149][150][151][152]. Interestingly, variations in sAPPα production have also been reported in conditions other than AD such as cerebrovascular and neurodegenerative diseases [153], bipolar disorder [154] and ALS [92,93]. In particular, sAPPα is upregulated in the muscles of mouse models of familial ALS and in patients [1,2,94], whereas low sAPPα concentrations have been found in the CSF of patients with ALS with a rapidly progressive course of the disease [92]. However, whether the increase in sAPPα represents a cell survival response to molecular changes caused by MND [86] or a neurotoxic process to promote neuronal death is a matter of debate. Interestingly, Barbagallo et al. previously demonstrated that sAPPα production largely depends on Tyr 682 phosphorylation of the 682 YENPTY 687 motif of APP in neurons [155]. Accordingly, when Tyr 682 is not phosphorylated, APP is largely located in the plasma membrane where it is processed by α-secretase to generate sAPPα. In contrast, when APP is phosphorylated at the Tyr 682 residue, APP endocytosis and trafficking inside neurons are affected, resulting in APP accumulation in acidic neuronal compartments, such as late endosomes and lysosomes, where it is preferentially cleaved to generate sAPPβ peptides [114,116,125]. Consistently, APP YG knock-in mice, in which Tyr 682 is not phosphorylated because it is replaced by glycine (YG), show aberrant sAPPα production in the brain and motor neurons [155,156]. In addition, YG mice display a progressive reduction in muscular strength, motor functions and abilities, and learning performance [157]. Such deficits are associated with agedependent cognitive decline, autophagic dysfunction, and progressive dendritic spine loss [125], mirroring some of the crucial features reported in patients [158] (Fig. 1). Notably, the YG background, when introduced into an APLP2 null background failed to rescue early postnatal lethality or neuromuscular synaptic defects present in APLP2 null mice, supporting the role of the Tyr 682 residue and 682 YENPTY 687 motif in regulating NMJ neurodevelopment and function [156]. In accordance with the importance of Tyr 682 phosphorylation on the 682 YENPTY 687 motif in controlling sAPPα release and preventing aberrant sAPPα secretion, when the APP background lacking the 682 YENPTY 687 domain was reintroduced into APP-knockout mice, an increased cell surface expression of sAPPα was detected [159]. Interestingly, YG hippocampal neurons fail to differentiate properly in vitro because of deficits in nerve growth factor (NGF) response [160]. In fact, the lack of Tyr 682 phosphorylation prevents the association between APP and the NGF receptor TrkA, resulting in TrkA perinuclear accumulation and causing APP redistribution towards the non-amyloidogenic pathway with the accumulation of sAPPα and AICD peptides [160]. This critical role of NGF in APP trafficking, control of neuronal functions, and prevention of dysfunction largely reminds us of the crosstalk between glial cell-derived neurotrophic factor (GDNF) and APP at the neuromuscular junctions [161,162]. GDNF controls muscle and Schwann cell functions [163]. Deficits in GDNF and APP signalling have been associated with ALS. GDNF is decreased in the serum of patients with ALS, whereas sAPPα levels are increased in the same fluid [164]. APP regulates GDNF gene expression [164,165]. 
NGF promotes trophic effects and protects neurons from AD-related processes [166]; similarly, when GDNF is administered directly to muscles, it improves muscle-nerve synapse performance and promotes motor neuron activity and survival [167]. In addition, overexpression of GDNF in muscles extends the lifespan of ALS mice [168]. Whether GDNF activity and secretion levels change depending on APP Tyr 682 phosphorylation is worth investigating.
Conclusions
Considerable knowledge gaps and clinical challenges associated with neurodegenerative diseases remain unaddressed. Perhaps the biggest challenge is to better define and understand the factors that initiate the pathology and drive cellular dysfunction in the disease. Numerous studies suggest that neurodegenerative diseases share not only clinical phenotypes but also molecular mediator(s). Although the findings discussed here portray only part of the broad literature on APP peptides and their roles in AD and ALS, they are likely sufficient to delineate some of the critical questions for the next phase of studies. Herein, we discuss a novel hypothesis, which deserves to be expanded and substantiated in future work, regarding the potential role in ALS of the conserved 682 YENPTY 687 motif located on the AICD of APP, and we speculate that modifications of the 682 YENPTY 687 peptide might represent an early signature of the disease, as previously described in AD [23,120]. The 682 YENPTY 687 peptide has consistently been viewed as an active and critical player in controlling APP function and preventing the switch from the non-amyloidogenic to the amyloidogenic pathway through phosphorylation of the Tyr 682 residue [23,120]. However, the idea that this peptide can also regulate APP activity in other pathologies such as ALS has not been explored before. Importantly, evidence regarding the role of the 682 YENPTY 687 peptide in regulating sAPPα levels in motor neurons and in influencing the correct development of the NMJ has been reported previously [33,34,82,164]. Indeed, Tyr 682 phosphorylation of the 682 YENPTY 687 motif controls APP trafficking and prevents amyloidogenic APP processing to generate Aβ [157,160]. Conversely, the lack of Tyr 682 phosphorylation in YG mice causes an increase in sAPPα levels, autophagic deficits, locomotor deficiency, and cognitive deficits, all of which have been observed in ALS patients [155,157]. Consistently, an aberrant increase in sAPPα levels has been detected in the dysfunctional NMJ of patients with ALS [1,2,94]. These findings raise the question of whether a malfunction of the 682 YENPTY 687 pathway might influence Tyr 682 phosphorylation and predispose APP to aberrant production of sAPPα in patients with ALS. Based on these perspectives, this short review provides new and important directions for the investigation of ALS.
Author statement
I declare that this manuscript is original, has not been published before, and is not currently being considered for publication elsewhere. I confirm that the manuscript has only one author and that there are no other persons who satisfied the criteria for authorship and are not listed. I will be responsible for communicating with the editor about progress, submission of revisions, and final approval of proofs.
Conflict of Interest
The authors declare that they have no affiliations with or involvement in any organization or entity with any financial interest in the subject matter or materials discussed in this manuscript.
On the Influence of Bias-Correction on Distributed Stochastic Optimization
Various bias-correction methods such as EXTRA, gradient tracking methods, and exact diffusion have been proposed recently to solve distributed deterministic optimization problems. These methods employ constant step-sizes and converge linearly to the exact solution under proper conditions. However, their performance under stochastic and adaptive settings is less explored. It is still unknown whether, when and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradients and constant step-sizes. This work studies the performance of exact diffusion under the stochastic and adaptive setting, and provides conditions under which exact diffusion has superior steady-state mean-square deviation (MSD) performance compared to traditional algorithms without bias-correction. In particular, it is proven that this superiority is more evident over sparsely-connected network topologies such as lines, cycles, or grids. Conditions are also provided under which the exact diffusion method matches or may even degrade the performance of traditional methods. Simulations are provided to validate the theoretical findings. There are several techniques that can be used to solve problems of the type (1), such as the diffusion [8]-[11] and consensus (also known as decentralized gradient descent) [11]-[14] strategies. Diffusion strategies have been shown to be particularly well-suited for stochastic and adaptive learning scenarios from streaming data due to their enhanced stability range over other methods, as well as their ability to track drifts in the underlying models and statistics [9]-[11]. We therefore focus on this class of algorithms since we are mainly interested in methods that are able to learn and adapt from data. For example, the adapt-then-combine (ATC) formulation [9], [10] of diffusion takes the following form: ψ k,i = w k,i−1 − µ∇ w Q(w k,i−1 ; x k,i ) (2) and w k,i = Σ ℓ∈N k a ℓk ψ ℓ,i (3), where the subscript k denotes the agent index and i denotes the iteration index. The variable x k,i is the data realization observed by agent k at iteration i. The nonnegative scalar a ℓk is the weight used by agent k to scale information received from agent ℓ, N k is the set of neighbors of agent k (including k itself), and it is required that Σ ℓ∈N k a ℓk = 1 for any k. In (2)-(3), the variable ψ k,i is an intermediate estimate for w at agent k, while w k,i is the updated estimate. Note that step (2) uses the gradient of the loss function, Q(·), rather than the gradient of its expected value J k (w). This is because the statistical properties of the data are not known beforehand. If J k (w) were known, then we could use its gradient vector in (2). In that case, we would refer to the resulting method as a deterministic rather than stochastic solution. Throughout this paper, we employ a constant step-size µ to enable continuous adaptation and learning in response to drifts of the global minimizer due to changes in the statistical properties of the data. The adaptation and tracking abilities are crucial in many applications, as already explained in [10]. Previous studies have shown that both consensus and diffusion methods are able to solve problems of the type (1) well for sufficiently small step-sizes. That is, the squared error E‖w̃ k,i ‖ 2 approaches a small neighborhood around zero for all agents, where w̃ k,i = w − w k,i . 
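As an illustration of the ATC structure in (2)-(3), the sketch below implements one diffusion step for the least-squares loss Q(w; u, d) = 0.5 (d − uᵀw)². The NumPy implementation, array shapes, and data model are illustrative assumptions and not part of the original description.

```python
import numpy as np

def atc_diffusion_step(W, A, mu, U, d):
    """One adapt-then-combine (ATC) diffusion step, as in (2)-(3), for the
    least-squares loss Q(w; u, d) = 0.5 * (d - u^T w)^2.

    W  : (K, M) array; row k holds agent k's current iterate w_{k,i-1}.
    A  : (K, K) symmetric, doubly-stochastic combination matrix; A[l, k] = a_{lk}.
    mu : constant step-size.
    U  : (K, M) array; row k is the streaming regressor u_{k,i} seen by agent k.
    d  : (K,)  array; d[k] is the scalar measurement seen by agent k.
    """
    # Instantaneous (stochastic) gradient of Q at each agent: -(d - u^T w) u.
    residual = d - np.einsum('km,km->k', U, W)
    grads = -residual[:, None] * U

    # Adaptation step (2): psi_{k,i} = w_{k,i-1} - mu * grad Q(w_{k,i-1}; x_{k,i}).
    Psi = W - mu * grads

    # Combination step (3): w_{k,i} = sum over neighbors l of a_{lk} * psi_{l,i}.
    return A.T @ Psi
```

Because A is symmetric and doubly stochastic, `A.T @ Psi` and `A @ Psi` coincide; the transpose is kept only to mirror the a ℓk indexing in (3).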
These methods do not converge to the exact minimizer w of (1) but rather approach a small neighborhood around w with a small steadystate bias under both stochastic and deterministic optimization scenarios. For example, in deterministic settings where the individual costs J k (w) are known, it is shown in [10], [15] that the squared errors w k,i 2 generated by the diffusion iterates converge to a O(µ 2 )-neighborhood. Note that, in the deterministic case, this inherent limiting bias is not due to any gradient noise arising from stochastic approximations; it is instead due to the update structure in diffusion and consensus implementations -see the explanations in Sec. III.B in [4]. For stochastic optimization problems, on the other hand, the size of the bias is O(µ) rather than O(µ 2 ) because of the gradient noise. When high precision is desired, especially in deterministic optimization problems, it would be preferable to remove the O(µ 2 ) bias altogether. Motivated by these considerations, the works [4], [16] showed that a simple correction step inserted between the adaptation and combination steps (2) and (3) is sufficient to ensure exact convergence of the algorithm to w by all agents -see expression (11) further ahead. In this way, the O(µ 2 ) bias is removed completely, and the convergence rate is also improved. While the correction of the second order O(µ 2 ) bias is critical in the deterministic setting, it is not clear whether it can help in the stochastic and adaptive settings. This motivates us to study exact diffusion these settings in this paper and compare against standard diffusion. To this end, we carry out a higher-order analysis of the error dynamics for both methods, and derive their steady-state performance as an expansion in the first two powers of the step-size parameter, i.e., µ and µ 2 . In contrast, traditional analysis for diffusion and consensus focus mainly on performance expressions that depend on a first-order expansion in µ [9], [10]. Our analysis will reveal conditions under which bias correction improves the performance of diffusion. A. Main Results In particular, we will prove in Theorem 1, that, under sufficiently small step-sizes, the exact diffusion strategy will converge exponentially fast, at a rate ρ = 1 − O(µν), to a neighborhood around w . Moreover, the size of the neighborhood will be characterized as lim sup where δ and ν are the Lipschitz and strong convexity constants, the quantity σ 2 is a measure of the variance of the gradient noise, and λ ∈ (0, 1) is the second largest magnitude of the eigenvalues of the combination matrix A = [a k ] which reflects the level of network connectivity. The subscript ed indicates that w k,i is generated by the exact diffusion method. In comparison, we will show that the traditional diffusion strategy converges at a similar rate albeit to the following neighborhood: where the subscript d indicates that w k,i is generated by the diffusion method (2)-(3), and b 2 = (1/K) K k=1 ∇J k (w ) 2 is a bias constant independent of the gradient noise. Observe that the expressions on the right-hand side of (4) and (5) depend on µ and µ 2 . These are therefore more refined performance expressions, which are more challenging to derive than earlier expressions that just depend on µ (see [8]- [10], [12], [15]). The terms that depend on µ 2 in (4) and (5) help reveal the important insights that arise from using the exact diffusion strategy. Expressions (4) and (5) have the following important implications. 
First, it is obvious that diffusion suffers from an additional bias term µ 2 λ 2 b 2 /(1 − λ) 2 , which is independent of the gradient noise σ 2 , while exact diffusion removes it completely. In the deterministic setting where the gradient noise σ 2 = 0, it is observed from (4) and (5) that diffusion converges to an O(µ 2 )-neighborhood around the global solution w while exact diffusion converges exactly to w . This result is consistent with [10], [15], [16]. Second, it is further observed that the performance of diffusion and exact diffusion differs only in the O(µ 2 ) terms inside (4) and (5). When the step-size is moderately small so that these O(µ 2 ) terms are non-negligible, the superiority of exact diffusion or diffusion will depend strongly on the network topology. In particular, when the network topology is sparsely connected (in which case λ approaches 1), the bias term µ 2 λ 2 b 2 /(1 − λ) 2 will be significantly large and the correction of this term will greatly improve the steady-state performance. It should be emphasized that the bias-correction property of exact diffusion is particularly critical for large-scale linear or cyclic networks, where 1 − λ = O(1/K 2 ), and grid networks, where 1 − λ = O(1/K), since the bias term grows rapidly on these network topologies as the size K increases. On the other hand, when the network is well-connected (in which case λ approaches 0), one can find that the O(µ 2 ) terms in diffusion (5) diminish while the O(µ 2 ) term in exact diffusion (4) still exists. This implies that for well-connected networks and moderately small step-sizes, diffusion is a better choice than exact diffusion. The comparison between (4) and (5) provides guidelines on the proper choice of diffusion or exact diffusion in various application scenarios. Third, the difference between exact diffusion and diffusion will vanish as the step-size µ approaches 0. This is because the O(µσ 2 /Kν) term will dominate the O(µ 2 ) terms when µ is sufficiently small, i.e., both bounds reduce to lim sup i→∞ E‖w̃ k,i ‖ 2 = O(µσ 2 /Kν); see (6) and (7). The "sufficiently" small µ can be roughly characterized as , where x is any positive constant. While relations (6) and (7) show that diffusion and exact diffusion have the same upper bound on the steady-state performance, this is still an upper bound and not an exact expression. To more accurately characterize the steady-state performance of diffusion and exact diffusion when µ is sufficiently small, we shall establish the precise MSD expression defined as in [10]: for exact diffusion and find that it matches that of diffusion: where H k = ∇ 2 J k (w ) and S k is the covariance matrix of the gradient noise. The MSD expression (8) is exact to first order in µ and ignores all higher-order terms. Equality (9) states that when µ is sufficiently small, both diffusion and exact diffusion perform exactly the same during the steady-state stage. The main results derived in this paper are summarized in Table I, in which we omit the constants δ, ν and K for clarity. B. Related work In addition to exact diffusion, there exist other useful bias-correction methods such as EXTRA [1], [17], DIGing and other gradient-tracking methods [3], [18]-[21], Aug-DGM [22], [23], and NIDS [24]. All these methods converge linearly to the exact solution in the deterministic setting, but their performance (especially their advantage over diffusion or consensus) in the stochastic and adaptive settings remains unexplored and/or unclear. 
The recent work [25] studies the gradienttracking method (referred to as DIGing in [3]) to the stochastic setting and shows that it can outperform the decentralized gradient descent (DGD) [12], [14] via numerical simulations. However, it does not analytically discuss when and why biascorrection methods can outperform consensus. Similarly, the work [26] studies the gradient-tracking method [20], [21] under the stochastic setting and shows that it converges linearly around a neighborhood of the minimizer. No comparison with diffusion or consensus is presented in [26]. Another useful work is [27], which establishes the convergence property of exact diffusion with decaying step-sizes in the stochastic and non-convex setting. It proves exact diffusion is less sensitive to the data variance across the network than diffusion and is therefore endowed with a better convergence rate when the data variance is large. Different from [27], our bound in (5) shows that even small data variances (i.e., b 2 ) can be significantly amplified by a bad network connectivity -see the example graph topologies discussed in Sec. IV-B. This observation implies that the superiority of exact diffusion does not just rely on its robustness to data variance, but more importantly, on the network connectivity as well. In addition, different from the works [25], [27], which claim or suggest that the gradient-tracking method [25] or exact diffusion [27] always converges better than traditional DGD or diffusion, our current work disproves this statement and clarifies analytically that there are important scenarios where exact diffusion performs similarly or even worse than diffusion. Simulations also suggest that gradient tracking methods [25], [26] may also degrade the performance of traditional diffusion, which was not explored prior to this work. Finally, we remark that work [28] showed that diffusion outperforms traditional primal-dual methods in the stochastic setting for b 2 = 0 and quadratic problems only, and is hence more restricted than our result. Our results recover this case (see Remark 2) and show that exact diffusion, which is also a primal-dual method, can outperform diffusion when b 2 = 0. Notation. Throughout the paper we use col{x 1 , · · · , x K } and diag{x 1 , · · · , x K } to denote a column vector and a diagonal matrix formed from x 1 , · · · , x K . The notation 1 K = col{1, · · · , 1} ∈ R K and I K ∈ R K×K is an identity matrix. The Kronecker product is denoted by "⊗". For two matrices X and Y , the notation X ≥ Y denotes X − Y is nonnegative. II. EXACT DIFFUSION STRATEGY A. Exact Diffusion Recursions The exact diffusion strategy from [4], [16] was originally proposed to solve deterministic optimization problems. We adapt it to solve stochastic optimization problems by replacing the gradient of the local cost J k (w) by the stochastic gradient of the corresponding loss function. That is, we now use: For the initialization, we let w k,−1 = ψ k,−1 = 0. Observe that the fusion step (12) now employs the corrected iterates from (11) rather than the intermediate iterates from (10). Note that the weightā k is different from a k used in the diffusion recursion (3). If we let A = [a k ] ∈ R K×K andĀ = [ā k ] ∈ R K×K denote the combination matrices used in diffusion and exact diffusion respectively, then the relation between them is A = (A + I K )/2. In the paper, we assume A (and, hence,Ā) to be symmetric and doubly stochastic. As explained in [4], [16], exact diffusion is essentially a primal-dual method. 
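To complement the verbal description of steps (10)-(12), here is a minimal per-iteration sketch of exact diffusion in the same least-squares setting as before. The correction step written below (adding w k,i−1 − ψ k,i−1 before combining) follows the standard exact-diffusion construction of [4], [16]; the function names, array layout, and the gradient callback are illustrative assumptions.

```python
import numpy as np

def exact_diffusion_step(W_prev, Psi_prev, A_bar, mu, grad_fn):
    """One exact diffusion iteration: adapt (10), correct (11), combine (12).

    W_prev   : (K, M) array of iterates w_{k,i-1}.
    Psi_prev : (K, M) array of the previous intermediate iterates psi_{k,i-1}.
    A_bar    : (K, K) combination matrix with A_bar = (A + I)/2.
    mu       : constant step-size.
    grad_fn  : callable mapping a (K, M) array of iterates to a (K, M) array of
               stochastic gradients (one row per agent).
    Returns the pair (W, Psi) to be fed into the next call.
    Initialization, as stated in the text: W_prev = Psi_prev = 0.
    """
    # Adaptation step (10): local stochastic-gradient descent.
    Psi = W_prev - mu * grad_fn(W_prev)

    # Correction step (11): phi_{k,i} = psi_{k,i} + w_{k,i-1} - psi_{k,i-1}.
    Phi = Psi + W_prev - Psi_prev

    # Combination step (12) with the adjusted weights a_bar_{lk}.
    W = A_bar.T @ Phi
    return W, Psi
```

Setting Phi = Psi (i.e., dropping the correction) recovers the plain ATC diffusion step, which makes the structural difference between the two methods explicit.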
We can describe its operation more succinctly by collecting the iterates and gradients from across the network into global vectors. Specifically, we introduce A = A ⊗ I M and A = (A + I KM )/2. Then recursions (10)- (12) lead to the second-order recursion The initialization is W −1 = 0 and W 0 = A(W −1 − µ∇Q(W −1 ; X i )). We can rewrite the update (14) in a primaldual form as follows. First, since the combination matrixĀ is symmetric and doubly stochastic, it holds that I −Ā is positive semi-definite. By introducing the eigen-decomposition I −Ā = U ΣU T and defining V = U Σ 1/2 U T ∈ R K×K , where Σ is a non-negative diagonal matrix, we know that V is also positive semi-definite and V 2 = I −Ā. Furthermore, if we let V = V ⊗ I M then V 2 = I KM − A. With these relations, it can be verified 1 that recursion (14) is equivalent to for i ≥ 0 with Y −1 = 0 where Y i ∈ R KM plays the role of a dual variable. The analysis in [4], [16] explains how the correction term in (11) guarantees exact convergence to w by all agents in deterministic optimization problems where the true gradient ∇J k (w) is available. In the following sections, we examine the convergence of exact diffusion (10)- (12) in the stochastic setting. III. ERROR DYNAMICS OF EXACT DIFFUSION To establish the error dynamics of exact diffusion, we first introduce some standard assumptions. These assumptions are common in the literature (e.g, [10], [25]). Assumption 1 (CONDITIONS ON COST FUNCTIONS). Each J k (w) is ν-strongly convex and twice differentiable, and its Hessian matrix satisfies We remark that the twice differentiability assumption is necessary to derive the MSD expression in Sec. V. Assumption 2 (CONDITIONS ON COMBINATION MATRIX). The network is undirected and strongly connected, and the combination matrix A satisfies Assumption 2 implies thatĀ = (I + A)/2 is also symmetric and doubly-stochastic. Since the network is strongly connected, it holds that To establish the optimality condition for problem (1), we introduce the following notation: where w k in (19) is the k-th block entry of vector W. With the above notation, the following lemma from [16] states the optimality condition for problem (1). then it holds that the block entries in W satisfy: where w is the unique solution to problem (1). A. Error Dynamics We define the gradient noise at agent k as and collect them into the network vector It then follows that Next, we introduce the error vectors where (W , Y ) are optimal solutions satisfying (21)- (22). By combining (15), (21), (22), (27) and (28), we reach Since each J k (w) is twice-differentiable (see Assumption 1), we can appeal to the mean-value theorem from Lemma D.1 in [10], which allows us to express each difference in (29) in terms of Hessian matrices for any k = 1, 2, . . . , N : We introduce the block diagonal matrix Substituting (32) into the first recursion in (29), we reach Next, if we substitute the first recursion in (33) into the second one, and recall that V 2 = I KM − A, we reach the following error dynamics. Lemma 2 (ERROR DYNAMICS). Under Assumption 1, the error dynamics for the exact diffusion recursions (10)-(12) is as follows and H i is defined in (31). B. Transformed Error Dynamics The direct convergence analysis of recursion (34) is challenging. To facilitate the analysis, we identify a convenient change of basis and transform (34) into another equivalent form that is easier to handle. To this end, we introduce a fundamental decomposition from [16] here. Lemma 3 (FUNDAMENTAL DECOMPOSITION). 
Under Assumptions 1 and 2, the matrix B defined in (34) can be decomposed as where c can be any positive constant, and D ∈ R 2KM ×2KM is a diagonal matrix. Moreover, we have Also, the matrix D 1 is a diagonal matrix with complex entries. The magnitudes of the diagonal entries in D 1 are all strictly less than 1. By multiplying X −1 to both sides of the error dynamics (34) and simplifying we arrive at the following result. Lemma 4 (TRANSFORMED ERROR DYNAMICS). Under Assumption 1 and 2, the transformed error dynamics for exact diffusion recursions (10)-(12) is as follows The relation between the original and transformed error vectors are IV. MEAN-SQUARE CONVERGENCE Using the transformed error dynamics derived in (39), we can now analyze the mean-square convergence of exact diffusion (10)- (12) in the stochastic and adaptive setting. To begin with, we introduce the filtration The following assumption is standard on the gradient noise process (see [10], [25]) and is satisfied in many situations of interest such as linear and logistic regression problems. Assumption 3 (CONDITIONS ON GRADIENT NOISE). It is assumed that the first and second-order conditional moments of the individual gradient noises for any k and i satisfy for some constants β k and σ k . Moreover, we assume the s k,i (w k,i−1 ) are independent of each other for any k, i given With Assumption 3, it can be verified that (102), then the w k,i generated by exact diffusion recursion (15) converges exponentially fast to a neighborhood around w . The convergence rate is ρ = 1− O(µν), and the size of the neighborhood can be characterized as follows: Proof. See Appendix A. Theorem 1 indicates that when µ is smaller than a specified upper bound, the exact diffusion over adaptive networks is stable. The theorem also provides a bound on the size of the steady-state mean-square error. To compare exact diffusion with diffusion, we examine the mean-square convergence property of diffusion as well. where λ = max{|λ 2 (A)|, |λ K (A)|}, β 2 max = max k {β 2 k }, e 1 and e 2 are constants that are independent of λ, δ, ν and β, then w k,i generated by the diffusion recursions (2)-(3) converge exponentially fast to a neighborhood around w . The convergence rate is 1 − O(µν), and the size of the neighborhood can be characterized as follows Comparing (47) and (49), it is observed that the expressions for both algorithms consist of two major terms -one O(µ) term and one O(µ 2 ) term. However, diffusion suffers from an additional bias term O( Remark 1 (DETERMINISTIC CASE). When σ 2 = 0, both diffusion and exact diffusion reduce to the deterministic scenario in which the real gradient ∇J k (w) is available. In this scenario, it is observed from (47) and (49) that the error w k,i in exact diffusion converges to 0 while that in diffusion converges to O(µ 2 b 2 ), which is consistent with the results presented in [4], [14]- [16]. Remark 2 (ZERO BIAS). When b 2 = 0, it holds that each local minimizer w k coincides with the global minimizer w , i.e., w k = w for any k. In this scenario, it is observed from (49) that diffusion has the steady-state error bound lim sup which is smaller than the error bound (47) for exact diffusion especially when λ approaches 0. This result is consistent with [28], which finds diffusion outperforms primal-dual distributed adaptive methods when w k = w in terms of steady-state performance. Remark 3 (LARGE BIAS). 
When b 2 is sufficiently large so that the bias term (i.e., the third term) in (49) dominates the entire error bound, it is observed from (47) and (49) that exact diffusion performs better than diffusion since it removes the bias term completely. This result is consistent with [27], which claims exact diffusion is endowed with faster convergence rate when the data variance across the network is large. In the following subsections, we will focus on the scenario where σ 2 > 0 and the bias b 2 is a small positive constant. In this scenario, we will study how the step-size µ and topology λ influence the diffusion and exact diffusion algorithms. A. Well-connected Network When the network is well-connected, it holds that λ approaches 0. For example, the fully-connected network has λ = 0. In this scenario, the O(µ 2 ) terms inside diffusion's error bound will vanish and (49) becomes lim sup In comparison, the error bound (47) for exact diffusion is lim sup as λ → 0. When µ is moderately small such that the term O(µ 2 δ 2 σ 2 /ν 2 ) is non-negligible, we conclude that diffusion works better than exact diffusion. To roughly characterize the "moderately" small step-size, we assume is non-trivial and diffusion has better steadystate performance than exact diffusion. To make the interval in (53) valid, it is enough to let K be sufficiently large. However, if the step-size µ is chosen sufficiently small, then the second term in (52) is also negligible and hence both diffusion and exact diffusion will perform similarly. An example for "sufficiently" small step-size is when µ = ν/(K 2 δ 2 ). By substituting µ = ν/(K 2 δ 2 ) into (52), we reach lim sup i→∞ B. Sparsely-connected Network When the network is sparsely-connected, it holds that λ approaches 1. In this scenario, even a trivial bias constant b 2 can be significantly amplified by the coefficient 1/(1 − λ) 2 . When λ approaches 1, the first two terms in (49) will be the same as those in (47). As a result, when µ is moderately small and λ is close to 1 such that the bias term O(µ 2 δ 2 λ 2 b 2 /(1 − λ) 2 ν 2 ) is non-negligible, we conclude that exact diffusion works better than diffusion. Furthermore, the advantage of exact diffusion will be more evident if the bias gets more significant as λ → 1. In the following example, we list several network topologies in which the bias O(µ 2 b 2 /(1 − λ) 2 ) dominates (5) easily. Example (Linear, Cyclic, and Grid networks). A linear or cyclic network with K agents is a network where each agent connects with its previous and next neighbors. On the other hand, a grid network with K agents is a network in which each node connects with its neighbors from left, right, top, and bottom. The grid and cycle networks are illustrated in Fig.1. For these networks, it is shown in [29], [30] that and therefore, the bias term O(µ 2 b 2 /(1−λ) 2 ) in diffusion over linear (or cyclic) graph and grid graph becomes O(µ 2 b 2 K 4 ) and O(µ 2 b 2 K 2 ) respectively, which increases rapidly with the size of the network. As a result, exact diffusion, by correcting the bias term, is evidently superior to diffusion over these network topologies. To roughly characterize the "moderately" small step-size, Combining it with (48), we conclude that if µ satisfies where d 2 = 12 + 4e 1 e 2 + √ 6e 1 e 2 is a constant, then the bias term in (49) is significant and exact diffusion is expected to have better performance than diffusion in steady-state. 
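The dependence of 1 − λ on the topology discussed in the example above can be checked numerically: build a doubly-stochastic combination matrix for a cycle or a grid and inspect its second-largest eigenvalue magnitude. The Metropolis weighting rule used below is a convenient assumption; the experiments in the paper may use different combination weights.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly-stochastic combination matrix from an adjacency matrix."""
    K = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            if k != l and adj[k, l]:
                A[k, l] = 1.0 / (1 + max(deg[k], deg[l]))
        A[k, k] = 1.0 - A[k].sum()
    return A

def mixing_lambda(A):
    """lambda = second-largest eigenvalue magnitude of a symmetric combination matrix."""
    return np.sort(np.abs(np.linalg.eigvalsh(A)))[-2]

def cycle_adj(K):
    adj = np.zeros((K, K), dtype=int)
    for k in range(K):
        adj[k, (k + 1) % K] = adj[(k + 1) % K, k] = 1
    return adj

def grid_adj(n):
    """n-by-n grid, i.e., K = n * n agents."""
    K = n * n
    adj = np.zeros((K, K), dtype=int)
    for r in range(n):
        for c in range(n):
            k = r * n + c
            if c + 1 < n:
                adj[k, k + 1] = adj[k + 1, k] = 1
            if r + 1 < n:
                adj[k, k + n] = adj[k + n, k] = 1
    return adj

# 1 - lambda shrinks roughly like 1/K^2 on cycles and 1/K on grids.
for K in (16, 64, 256):
    n = int(round(K ** 0.5))
    gap_cycle = 1 - mixing_lambda(metropolis_weights(cycle_adj(K)))
    gap_grid = 1 - mixing_lambda(metropolis_weights(grid_adj(n)))
    print(f"K={K:4d}  1-lambda(cycle)={gap_cycle:.5f}  1-lambda(grid)={gap_grid:.5f}")
```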
To make the interval in (57) valid, it is enough to let λ be sufficiently close to 1 and K be sufficiently large such that On the other hand, if we adjust µ to be sufficiently small, the O(µ) term in both expressions (47) and (49) will eventually dominate for any fixed b 2 and λ. In such scenario, it holds that lim sup lim sup It is observed that both diffusion and exact diffusion will have the same mean-square error order, which implies that diffusion and exact diffusion will perform similarly in this scenario. Such "sufficiently" small step-size can be roughly characterized by the range for some d 3 > 0. The comparison between exact diffusion and diffusion is listed in Table I. V. MEAN-SQUARE DEVIATION EXPRESSION In the last section, we showed that when µ is sufficiently small, the steady-state mean-square deviation of both diffusion and exact diffusion will be dominated by a term on the order of O(µσ 2 /ν), as illustrated by (59)-(60). However, the hidden constants inside the big-O notation are still unclear. In this section, we show that, when µ is approaching 0, i.e., µ → 0, diffusion and exact diffusion will have exactly the same MSD expression in steady state. To this end, we recall the definition of mean-square deviation (MSD) from [10] as follows: Note that the MSD defined above is precise to the first-order in the step-size. All higher order terms are ignored. A. Approximate Error Dynamics It is generally difficult to derive the MSD performance of exact diffusion with the original transformed error dynamics developed in Lemma 4. We therefore propose an approximate error dynamics and employ it to assess the MSD performance. To this end, we define Obviously, it holds that H k,i → H, H i → H and T i → T if W i → W . Next, we consider the approximate error dynamic as follows. Note that we replace H k,i−1 , H i−1 and T i−1 in (39) with H k , H and T in (64). We can show that the iteratesZ i andŽ i generated through the approximate error dynamic (64) are close toZ i andŽ i generated from the original recursion (39) -see Lemma 6 below. This implies that we can employ recursion (64) rather than (39) to establish the MSD performance. To this end, we first introduce a few more assumptions on cost functions and the gradient noise. These assumptions are adapted from [10]. Assumption 4 (SMOOTHNESS CONDITION IN THE LIMIT). For each cost function J k (w), it is assumed that for small perturbations ∆w ≤ , where κ > 0 is a constant. Assumption 5 (FORTH-ORDER MOMENT). It is assumed for each k and i that where β 4,k and σ 4,k are some positive constants. By following the proof of Theorem 10.2 from [10], we can prove in the following lemma that difference between the original iterates (39) and the transformed iterates (64) is small. B. Deriving the MSD expression Recall from (40) that This together with I T X R,u = 0 2 implies that For simplicity, in the following we let and it holds that Lemma 7 (APPROXIMATION SCALED ERROR). Under Assumptions 1-5, it holds for sufficiently small step-sizes that Proof. It holds that which implies that where λ max (Γ) is the largest eigenvalue of Γ. From (70) we know it holds for sufficiently small µ that Also, from (67) we have Since Γ is independent of µ, it therefore holds that Now we establish the MSD expression for exact diffusion. as proved in Lemma 7, we will first derive the MSD expression for E Z i Γ and use it to facilitate the derivation of the MSD for exact diffusion, i.e., E W i 2 . 
To proceed, we assume that, in the limit, the following covariance matrix evaluated at the global solution w exists The following theorem establishes the MSD expression of the approximate error dynamics. Theorem 2 (MSD EXPRESSION). Under Assumptions 1-5, it holds for exact diffusion that Proof. See Appendix C. Recall the MSD expression for standard diffusion is [10, Equation (11.140)]: It is observed that the MSD expression for diffusion (77) is equal to that of exact diffusion (76). This implies that diffusion and exact diffusion will perform exactly the same in steady state for sufficiently small step-sizes. A. Mean-square-error Network In this subsection we consider the scenario in which K agents observe streaming data {d k (i), u k,i } that satisfy the regression model where w k is the local optimal solution at agent k, and the noise process, v k (i), is independent of the regression data, u k,i . The cost over the mean-square-error (MSE) network is defined by To generate {d k (i), u k,i }, we first generate the local optimal solution following a standard Gaussian distribution, i.e., w k ∼ N (0, I M ). Next we generate u k,i ∼ N (0, Λ k ) where Λ k is a positive diagonal matrix and v k (i) ∼ N (0, 0.1I M ). With w k , u k,i and v k (i), we generate d k (i) according to (78). Also, we can verify that the global solution to (79) is given by In all figures below, the y-axis indicates the MSD performance K k=1 E w k,i − w 2 /K. We first compare the performance of exact diffusion and diffusion over a grid topology -see the first plot in Fig.1. We first let K = 9 and µ = 0.005 and compare exact diffusion and diffusion. With these two parameters, it is shown in the first plot in Fig.2 that both methods perform almost the same, and the steady-state MSD performance of both methods coincide with the derived MSD expression (76). In the second plot in Fig.2, we maintain µ = 0.005 but increase the network size to 100 nodes. As we explained in Sec.IV-B, a grid topology with larger network size has λ closer to 1, which amplifies the inherent bias O(µ 2 b 2 /(1 − λ) 2 ) suffered by diffusion. It is observed that exact diffusion has a clear advantage over diffusion during the steady-state stage. Note that in the second plot both diffusion and exact diffusion do not coincide with the derived theoretical MSD expression. This is because the theoretical MSD expression in (76) is only precise to firstorder in µ. When λ approaches 1 as the grid network gets larger, the second-order term of µ is amplified by 1/(1−λ) and becomes non-negligible. In the third plot, we maintain K = 100 and µ ed = 0.005 for exact diffusion while decreasing the step-size of diffusion to (µ d = 0.003) so that it has the same steady-state MSD performance as diffusion. It is observed that in this scenario exact diffusion converges faster than diffusion to reach the same steady-state performance, which implies that exact diffusion has faster adaptive and tracking abilities than diffusion over large grid graphs. In the fourth plot of Fig.2, we adjust µ = 0.0001 for both methods while keeping K = 100. Since µ gets much smaller, the inherent bias in diffusion (49) becomes trivial and both methods perform similarly again, and they coincide with the derived MSD expression. To further show how superior the exact diffusion can be compared to diffusion over the grid network, we depict the performance of diffusion and exact diffusion for different network sizes in Fig.3. 
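A compact way to reproduce the flavor of the MSE-network experiments described above is sketched below. The Gaussian models for w k and u k,i and the noise variance 0.1 follow the description around (78); the dimension M, the iteration count, and the fully-averaging combination matrix are illustrative assumptions (the figures in the text use grid and cyclic topologies).

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, mu, iters = 9, 10, 0.005, 5000          # M and iters are illustrative choices

# Local models w_k ~ N(0, I_M). With identical regression covariances
# (Lambda_k = I_M here), the global minimizer is their average.
W_local = rng.standard_normal((K, M))
w_star = W_local.mean(axis=0)

A = np.full((K, K), 1.0 / K)                   # simple averaging weights (illustrative)
W = np.zeros((K, M))                           # diffusion iterates
msd_track = []

for i in range(iters):
    U = rng.standard_normal((K, M))                      # u_{k,i} ~ N(0, I_M)
    noise = np.sqrt(0.1) * rng.standard_normal(K)        # v_k(i) with variance 0.1
    d = np.einsum('km,km->k', U, W_local) + noise        # data model (78)

    # ATC diffusion: adapt with the instantaneous LMS gradient, then combine.
    residual = d - np.einsum('km,km->k', U, W)
    W = A @ (W + mu * residual[:, None] * U)

    # Network MSD: (1/K) * sum_k ||w_{k,i} - w*||^2, the quantity shown in the figures.
    msd_track.append(np.mean(np.sum((W - w_star) ** 2, axis=1)))

print("steady-state MSD estimate:", np.mean(msd_track[-500:]))
```

Swapping the diffusion update for the exact-diffusion step sketched earlier, or changing A to a grid or cyclic combination matrix, reproduces the qualitative comparisons discussed in the text.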
It is observed that the superiority of exact diffusion becomes more evident as the grid network gets larger, and exact diffusion performs much better than diffusion when K = 400. In the third experiment, we compare diffusion with exact diffusion over a fully connected network with K = 30. Since λ = 0 in this scenario, diffusion is expected to have better steady-state performance than exact diffusion when µ is moderately small; see the discussion in Sec. IV-A. Also, the superiority of diffusion should vanish as the step-size becomes sufficiently small. The comparison results shown in Fig.4 are consistent with our discussion in Sec. IV-A. B. Distributed Logistic Regression In this subsection we compare the performance of exact diffusion and diffusion when solving a decentralized logistic regression problem of the form: where (h k , γ k ) represent the streaming data received by agent k. The variable h k ∈ R M is the feature vector and γ k ∈ {−1, +1} is the label scalar. In all experiments, we set M = 20 and ρ = 0.001. To make the J k (w)'s have different minimizers, we first generate K different local minimizers {w k }. All w k are normalized so that ‖w k ‖ 2 = 1. At agent k, we . To generate the corresponding label γ k (i), we generate a random variable . We first compare the two methods over a cyclic network; see the simulations in Figs. 5 and 6. Similar to Sec. VI.A, the simulation results shown in Figs. 5 and 6 are consistent with our discussions in Sec. IV-B. In the third plot in Fig.5, we set µ d = 0.003 and µ ed = 0.006 so that both diffusion and exact diffusion have the same MSD performance. Next, we compare diffusion with exact diffusion over a fully connected network in Fig.7. It is observed that the results are consistent with the discussion in Sec. IV-A. C. Comparison with Gradient Tracking Methods In this subsection we compare exact diffusion with the distributed stochastic gradient tracking method [25], [26]. While [25] shows via numerical simulations that stochastic gradient tracking has better steady-state MSD performance than decentralized gradient descent (DGD), it does not study when and why gradient tracking can be better than DGD. In fact, since gradient tracking can also be used to correct the bias suffered by diffusion, we can expect the gradient tracking method to behave roughly like exact diffusion. In other words, gradient tracking will have better MSD performance than diffusion when the network is sparsely connected and worse MSD performance when the network is well-connected. Moreover, the difference between diffusion and gradient tracking will diminish for small step-sizes. In this subsection, we verify this conclusion using simulations. We first consider the MSE network (79) over a cyclic network (which is a sparsely-connected topology). The results in Fig.8 show that stochastic gradient tracking behaves as expected, and it has almost the same performance as exact diffusion in all scenarios. Note, though, that the gradient tracking method [25] requires twice the amount of communication required by exact diffusion, which implies that exact diffusion is more communication efficient. In the third plot in Fig.8, we set µ d = 0.003 and µ ed = µ track = 0.006 to endow the algorithms with the same steady-state MSD performance. We next compare diffusion, exact diffusion, and the gradient tracking method over a fully-connected network (which is a well-connected network). 
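For reference, one common adapt-then-combine form of stochastic gradient tracking is sketched below. The exact placement of the combination steps in the variants of [25], [26] may differ, so this should be read as an illustrative member of the gradient-tracking family rather than the algorithm used to produce the figures. The two multiplications by A per iteration make explicit the doubled per-iteration communication cost mentioned above.

```python
import numpy as np

def gradient_tracking_step(X, G, grad_prev, A, mu, grad_fn):
    """One adapt-then-combine stochastic gradient-tracking iteration.

    X         : (K, M) iterates x_{k,i-1}.
    G         : (K, M) gradient trackers g_{k,i-1}.
    grad_prev : (K, M) stochastic gradients evaluated at X in the previous call.
    A         : (K, K) symmetric, doubly-stochastic combination matrix.
    mu        : constant step-size.
    grad_fn   : callable mapping (K, M) iterates to (K, M) stochastic gradients.
    Initialization: G = grad_prev = grad_fn(X) at the first call.
    """
    # First communication round: descend along the tracked direction, then combine.
    X_new = A.T @ (X - mu * G)

    # Second communication round: combine the trackers and add the local gradient
    # increment, so the agent-averaged tracker stays equal to the agent-averaged
    # current gradient. This second exchange is the extra traffic relative to
    # diffusion and exact diffusion, which communicate once per iteration.
    grad_new = grad_fn(X_new)
    G_new = A.T @ G + grad_new - grad_prev
    return X_new, G_new, grad_new
```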
It is observed in Fig.9 that diffusion has the best MSD performance compared to exact diffusion and gradient tracking, which confirms our conclusion. While reference [25] suggests that gradient tracking is superior to consensus, we observe from the analytical results in the current manuscript and from the simulations in Fig.9 that there are situations when gradient tracking cannot outperform the traditional diffusion; their performance measures match each other and sometimes gradient tracking can be worse. APPENDIX A PROOF OF THEOREM 1 From the first line in the transformed error dynamics (39), we know that By squaring and taking conditional expectation of both sides of the recursion and recalling (42), we get Next note that where (a) holds for t ∈ (0, 1) because of Jensen's inequality, and (b) holds since ν 2 ≤ H i−1 2 ≤ δ 2 , I 2 = K, and Moreover, equality (c) holds if we choose t = µν. In addition, recall from (45) that Moreover, we can bound W i−1 2 as Substituting (84), (85) and (86) into (83), we reach where the last inequality holds since By taking expectation over the filtration, we get On the other hand, from the second line in (39) we havě By squaring and taking conditional expectation of both sides of the above recursion and recalling (42), we get Note that where t ∈ (0, 1). To simplify the above inequality, we denote SinceĀ = (A + I K )/2 and A is doubly-stochastic, we have From Lemma 4 in [16] we know that Also, from the definition of T i in (34), we have By substituting (97) into (92), setting t = √ λ and recalling R 1 2 = I 2 = K, we get In addition, it also holds that where (a) holds because of inequality (45) and the fact in which the last equality holds because of Lemma 3. The inequality (b) holds since 1 − √ λ ∈ (0, 1) and inequality (88). By substituting (98) and (99) into (91), we have By taking expectation over the filtration, we get To simplify notation, we introduce the constants Combining (89) and (101), we have Note that c is a parameter that can be set to any positive value. If we let c 2 = Kc 1 , then the above inequality becomes If we choose µ sufficiently small such that then inequality (104) becomes To satisfy (105)-(108), it is enough to let µ satisfy Also, note that 1 − From (94) we have |λ 2 | ≤ λ, which further implies −λ ≤ λ 2 ≤ λ. This together with (111) leads to With relation (112), we know that if µ satisfies then µ must also satisfy (110). Recall that Next we examine the spectral radius of the matrix C. Note that λ ∈ (0, 1), it is easy to verify that and therefore C is a stable matrix, and ρ(C) = 1 − O(µν) is the convergence rate of E W i 2 . Next we examine: where inequality (a) holds since when µ satisfies (110). By iterating (109), we conclude that As a result, we obtain where (a) holds because λ = (1 + λ 2 (A))/2 ≤ (1 + λ)/2 and (b) holds because λ < 1. Result (118) leads to (47) by dividing K to both sides of (118). APPENDIX B PROOF OF LEMMA 5 This section establishes the mean-square convergence of diffusion. With definition (13), we can rewrite diffusion recursions (2)-(3) as With relation (27), the above recursion becomes which also leads to where W i = W − W i and h ∆ = ∇J (W ). Note that A = A ⊗ I M is symmetric and doubly stochastic, it holds that where Note that X R and X L are different matrices from the ones defined in (35). 
Now we define and multiply X −1 to both sides of (121), it holds that For notational simplicity, we further defině Recalling that h = ∇J (W ) and, thus, In the first line of the above transformed recursion, we havē By following arguments in (82)-(89), we reach In the second line of (128), we havě By following arguments similar to the ones in (90)-(101), we have To simplify notation, we introduce the constants Meanwhile, we also set c 2 = e 1 K in (130) and (132). With these notations and operations, we combine (130) and (132) to get If we choose sufficiently small µ such that then inequality (134) becomes To make inequalities (135)-(138) hold, it is enough to set Note that Kβ 2 = β 2 max . Similar to (114), it can be easily verified that when µ satisfies (140), we have that ρ(C) < 1. Moreover, we also have where step (a) denotes entry-wise inequality, which holds because when µ satisfies (140). By iterating (139), we get This leads to (48) by dividing K to both sides of (144). APPENDIX C PROOF OF THEOREM 2 The derivation of the MSD expression adjusts the arguments from [10, Ch. 11] to our case. We start by introducing With these definitions, we can rewrite the approximate error dynamics (64) as Z i = CZ i−1 +µGs i . By squaring and taking conditional expectation over the filtration F i−1 , we have (147) where Σ is any positive semi-definite matrix to be decided later. By taking expectation again, we have Now we analyze the gradient noise term. To do that, we introduce the network noise quantity S ∆ = diag{S 1 , S 2 , · · · , S K }. where S k is defined in (75). Note that µ 2 E s i where ⊗ b is block Kronecker operation. Now we define F = C T ⊗ b C T ∈ R (2K−1) 2 M 2 ×(2K−1) 2 M 2 . Since C is stable for sufficiently small step-sizes, we know F is also stable and hence I − F is invertible. Therefore, it holds that bvec(Σ) = (I − F) −1 bvec(Γ). (155) Next we evaluate the right-hand side in (151). From property (153b), we have To examine the above quantity, we have to evaluate (I −F) −1 first. We recall from (145) that With definition F = C T ⊗ b C T , we partition F into four blocks where It can be verified that where Z = where step (a) follows from property (153a) and in the last step we used (164) and (167). With the same technique as above, we can also derive that Tr(Σ) · o(µ 2 ) = o(µ). Recalling the facts that E W i 2 = K k=1 E w k,i 2 and lim µ→0 o(µ)/µ = 0, we therefore derive the MSD expression of exact diffusion as follows
2019-03-26T15:28:36.000Z
2019-03-26T00:00:00.000
{ "year": 2019, "sha1": "c48d5ba8473aa01fc12a89f7ffe4da8e408afa56", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://doi.org/10.1109/tsp.2020.3008605", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "5edc28c3eed0449e5c8044245225d23ffc4dad1d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
6792482
pes2o/s2orc
v3-fos-license
Universal security for randomness expansion from the spot-checking protocol Colbeck (Thesis, 2006) proposed using Bell inequality violations to generate certified random numbers. While full quantum-security proofs have been given, it remains a major open problem to identify the broadest class of Bell inequalities and lowest performance requirements to achieve such security. In this paper, working within the broad class of spot-checking protocols, we prove exactly which Bell inequality violations can be used to achieve full security. Our result greatly improves the known noise tolerance for secure randomness expansion: for the commonly used CHSH game, full security was only known with a noise tolerance of 1.5%, and we improve this to 10.3%. We also generalize our results beyond Bell inequalities and give the first security proof for randomness expansion based on Kochen-Specker inequalities. The central technical contribution of the paper is a new uncertainty principle for the Schatten norm, which is based on the uniform convexity inequality of Ball, Carlen, and Lieb (Inventiones mathematicae, 115:463-482, 1994). Motivation Randomness is indispensable for modern day information processing. It captures the essence of secrecy. This is because a message being secretive means precisely that it is random to the adversary. It also drives randomized algorithms (such as physics simulation), besides many other applications. However most practical random number generators (RNG) are heuristics without theoretical guarantees. There are known vulnerabilities in the methods currently in use [9]. More recently, RNGs based on quantum measurements have emerged in the market. While a (close to) perfect implementation of certain measurements can theoretically guarantee randomness, current technology is still far from reaching that precision. This raises a serious question: would the implementation imperfections open the door to adversary attacks? An additional concern is, even if in the future when the implementation technology is satisfactory, could there be "backdoors" in the generator inserted by a malicious party? It is difficult for the user, as a classical being, to directly verify the inner-working of the quantum device. Those considerations motivated the study of untrusted-device quantum protocols, which are deterministic procedures interacting with (necessarily) multiple "untrusted" quantum devices. The user makes no prior assumptions about inner-workings of the devices. In particular, the devices may be entangled among themselves, or even with the external adversary. This protocol includes a certification procedure which decides whether the outputs should be "accepted" or "rejected." Ideally, two types of errors should be minimized. The "completeness error" is the chance of rejecting an honest implementation (that is, a correct implementation with a possible limited amount of noise, or device deficiency), and the "soundness error" is the chance of accepting when the generated output is not uniformly random. An untrusted-device protocol necessarily needs a classical input X to begin with that is not fully known to the adversary-device system. In this paper we assume that X is a small uniformly random seed, and our goal is to expand it into a much longer output which is also (nearly) uniformly random. This is randomness expansion (or "seeded" extraction in the terminology of [3]). In his Ph.D. 
thesis [4], Colbeck formulated the problem of randomness expansion and proposed protocols based on quantum non-local games. New protocols and security analyses followed. Several authors proved classical security only [17,8,18,5]. Vazirani and Vidick [21] was the first to prove full quantum security. Their protocol is also exponentially expanding using just two non-communicating devices. In [16], the present authors developed a different approach for the security analysis, and proved quantum security together with several new desirable properties including robustness (i.e., the honest implementation being imperfect), cryptographic security, and unit size quantum memory requirement for each device. In [16] and in the current paper, we work with the "spot-checking protocol" developed in [21] and [5]. Informally, the protocol proceeds as follows: an n-player nonlocal game G is chosen, and a specific n-letter input string (a 1 , . . . , a n ) from the game is selected. We suppose the existence of an untrusted n-part quantum device D. At each round of the protocol, the user choses a bit g ∈ {0, 1} according to a biased (1 − q, q) distribution (with q > 0 small). If g = 1 ("game round"), she plays the game with D; if g = 0 ("generation round") she merely gives the input string a = (a 1 , . . . , a n ) to D. At the end of the protocol, the total number of wins during game rounds is computed, and if it is above a certain threshold ("acceptance threshold"), the user accepts the results and applies a randomness extractor to the outputs of D to produce the final outputs of the protocol. (See figure 3 in section 7 for a more formal description.) In the current work, we ask the following: Question: What is the minimum requirement for a device to guarantee quantum security in an untrusted-device randomness expansion protocol? Our goal is to identify the essential features that guarantee full security. This leads to several more specific questions. What is the broadest class of devices that can be used securely? The analysis in [16] allows G to be any binary XOR game that has the strong self-testing property [15]. There are plenty of binary games that are not strong self-tests, and far more non-local games that are not binary. Furthermore, non-local games, which are based on Bell inequalities, are a proper subset of contextuality games, which are based on Kochen-Specker inequalities. There have been proposals and experiments for randomness expansion using contextuality without non-local games [10,1,20,6]. No full quantum-security proof for those contextuality-based protocols is known. Is quantum entanglement necessary? All protocols proved to be quantum-secure require at least a linear (in the output length) amount of entanglement [21,16]. Yet the optimal quantum strategy for a contextuality game does not necessarily require entanglement. For example, the KCBS inequality [12] can be maximumly violated by an unentangled qutrit [12,14]. Thus understanding what contextual (but not non-local) game can be used securely will shed light on the role of entanglement. What is the largest amount of noise tolerable? Here "noise" refers to device deficiency, i.e., the gap between the device's probability of winning the game and the optimal probability of winning the game. The answer to this question is important for the implementation. The analysis in [16] requires that the noise be a sufficiently small constant. 
For example, for the well-known CHSH game, the level of noise with quantum-security guarantee implied by [16] is ∼ 1.5%, which is still challenging for experimental implemention and is far smaller than the full classical-quantum gap, which is cos 2 π 8 − 3 4 ≈ 10.3%. Are there protocols that are classically secure but not quantum-secure? If only classical security (i.e., security against an adversary who does not have quantum memory) is required, then the noise tolerance and class of games are already well understood [5]. This raises the question of whether there could be protocols that are classically secure but not quantum secure. Indeed, there are classical-quantum states (A, E) such that A and E are highly correlated, but to a "classical" adversary (i.e., one who is forced to make a measurement on E before using it to eavesdrop on A) the two systems appear almost independent (see, e.g., [7]). Could such systems occur as outputs in randomness expansion? Our contributions The result of this paper answers each of the questions above. We use the notion of a contextuality game, which is a generalization of nonlocal games broad enough to encompass all Kochen-Specker inequalities. For any contextuality game G, and chosen input a, denote by w * G the optimal quantum winning probability. Let w a G denote the optimal winning probability among all quantum strategies that produce deterministic output on input a. Refer to δ a G := w * G − w a G as the quantumdeterministic gap of G on a. We define Protocol K, an analogue of Protocol R for contextuality games (see figure 2). We prove the following (see Theorem 6.4). Theorem 2.1 (Main Theorem; Informal). Let (G, a) be a contextuality game with selected input. Let u (the acceptance threshhold) be a real number between w a G and w * G . Then, when Protocol K is executed for N rounds (with G, a, u as parameters), it produces at least f (u)N quantum-proof extractable bits, where The same result also holds for Protocol R and nonlocal games (see Theorem 7.1). The crucial aspect of this theorem is that the function f is nonzero over the whole interval (w a G , w * G ). Therefore quantum security is achieved whenever the acceptance threshhold u lies in this interval. Of course, any acceptance threshold less than w a G cannot guarantee security, since the device could give deterministic outputs during all generation rounds. So the range of security threshholds (w a G , w * G ) cannot be made larger. One can show that any super-classical device for a game G ′ can be used for playing a restricted game G with a positive quantum-deterministic gap on some input. Thus being super-classical is the minimum device requirement. Answers to the other questions also follow. The largest allowable noise tolerance is the quantumdeterministic gap δ G , and the class of contextuality games that are usable are precisely those for which δ G > 0. Classical security is equivalent to quantum security for spot-checking protocols. (The number of quantum-proof extractable bits is at least linearly related to the number of classically-proof bits.) Entanglement is not necessary for randomness expansion, provided that contextuality can be used as a basis for security. [16]. Our security analysis in this work falls into the paradigm of our earlier work [16]. In this work, we have introduced new ingredients that allow us to obtain generalizations of the results in [16]. 
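As a quick numerical illustration of the CHSH noise-tolerance figures quoted earlier in this section, the sketch below computes the classical and quantum optimal winning probabilities and simulates an idealized spot-checking run. The device here is modelled simply as a black box that wins each game round with probability cos²(π/8); this toy model and the parameter values (N, q, acceptance threshold) are illustrative assumptions, not part of the protocols defined in this paper.

```python
import numpy as np

# The CHSH game: deterministic (classical) strategies win with probability at
# most 3/4, while the optimal quantum strategy wins with probability cos^2(pi/8).
w_classical = 0.75
w_quantum = np.cos(np.pi / 8) ** 2
print(f"quantum optimum  : {w_quantum:.4f}")
print(f"classical optimum: {w_classical:.4f}")
print(f"gap              : {w_quantum - w_classical:.4f}")   # ~0.1036, i.e. ~10.4%

# Idealized spot-checking loop: with probability q a round is a "game round"
# (scored); otherwise it is a "generation round" (fixed input, unscored).
# The untrusted device is modelled here as winning game rounds w.p. w_quantum.
rng = np.random.default_rng(1)
N, q = 100_000, 0.01
game_rounds = rng.random(N) < q
wins = rng.random(N) < w_quantum
score = np.mean(wins[game_rounds])
threshold = 0.80   # an acceptance threshold chosen strictly between 3/4 and cos^2(pi/8)
print(f"winning rate on game rounds: {score:.3f} -> "
      f"{'accept' if score >= threshold else 'reject'}")
```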
The main improvements in the current work are (1) that we work with arbitrary nonlocal or contextual games (whereas [16] was restricted to binary XOR games) and (2) that we enlarge (maximally) the amount of noise permitted in Protocol R. Comparison with Miller-Shi We note that in the context of binary XOR games, Theorem 2.1 is complementary (neither stronger nor weaker) to Corollary I.3 from [16]. The rate curve (2.1) in Theorem 2.1 is nonzero over a larger interval, but the rate curve in Corollary I.3 approaches a rate of 1 as the acceptance threshhold approaches w * G (which is not true of (2.1)). The strong robust self-testing property for a nonlocal game G asserts that any near-optimal strategy for G must be close to a certain unique optimal strategy. This property was used crucially in the proofs of [16]. One interesting consequence of the current paper is that this property is not necessary: games that do not satisfy self-testing can still be used for randomness expansion. Outline and proof techniques. We summarize the new ingredients in this paper. The main technical contribution of this paper is a new universal uncertainty principle for the Schatten norm · 1+ǫ . Once introduced into the framework of [16] (in place of the old uncertainty principle, Theorem E.2), the new principle implies the strong security claims above. Let H be a quantum system in state τ, and let {τ 0 , τ 1 } and {τ + , τ − } be states of H arising from anticommuting measurements on H. Suppose for simplicity that τ 1+ǫ = 1. Then, we prove the following. (See the proof of Theorem 4.2.) The critical aspect of this inequality is that the function on the right hand side (which determines the rate curve (2.1) is bounded below 1 as long as τ − 1+ǫ is bounded away from 1/2. The basis for this assertion is the uniform convexity of the Schatten norm [2]. Specifically, we exploit the uniform convexity of the function where τ = τ 0 X X * τ 1 , and use the fact that X 1+ǫ is an approximate upper bound for the Having proved (2.2), the next step is to generalize the class of measurements that can be used. In [16] we focused just on measurements that are partially trusted (i.e., partially anti-commuting), but this too can be extended. A quantity that is used in other uncertainty principles (e.g. [13]) to measure the non-commutativity of a pair of POVMs {A 0 , A 1 }, {A 2 , A 3 } is the following: The use of this term is the crucial step for closing the quantum-classical gap. We prove a version of (2.2) which incorporates d (Theorem 4.4). We state a new protocol (Protocol U) which phrases randomness expansion with minimal assumptions: we need only a device D which has one of two measurement settings at each round 3 }) such that the commutativity parameters (2.4) have a uniform upper bound. The uncertainty principle (2.2) implies security for Protocol U, which specializes to provide the proof of security for Protocol K. (See Theorem 6.4.) Our proof (like that of [16]) suggests a deep relationship between quantum security and the geometry of the Schatten norm. This is an avenue that would be good for further exploration. Preliminaries For any Hermitian operator X on a finite-dimensional Hilbert space V, let us say that an enlargement of X is an embedding i : V → V ′ of V into a larger finite-dimensional Hilbert space together with a Hermitian operator X ′ on V ′ satisfying X = i * X ′ i. 
The acronym POVM stands for positive operator-valued measure, and denotes a collection of positive semidefinite operators M i on a Hilbert space V satisfying ∑ i M i = I. We use the symbols 0, 1, +, − (in appropriate context) to denote respectively the vectors The key mathematical concept in our proofs of randomness is the (1 + ǫ)-Schatten norm. For any linear operator Z and any ǫ ∈ (0, 1], the (1 + ǫ)-Schatten norm is given by This norm has properties closely related to those of the 1-norm. Although the (1 + ǫ)-norm does not approximate the 1-norm in the strictest sense (since X 1+ǫ / X 1 can be arbitrarily small) it has many of the same properties modulo terms that vanish as ǫ → 0. If X and Y are positive semidefinite operators satisfying X 1+ǫ , Y 1+ǫ ≤ 1, then Additionally, if Z is an operator on C 2 ⊗ C m , satisfying Z 1+ǫ ≤ 1, then we have the following (see Proposition 1 in [19].) The device D begins with the quantum system Q in state Φ. At the nth use of the device, it accepts a single bit x n as input. If x n = 0, then D applies the nondestructive measurement Q → Q ⊗ C 2 given by and outputs the resulting bit y n . If x n = 1, then D applies the same measurement with A 0 replaced by A 2 and A 1 replaced by A 3 . Note that the measurements A (n) i could be such that they encode the bits x n and y n into the state of Q; thus this device model allows memory. For every positive integer n, a sequence of binary POVMs satisfying the condition that for any nonempty T ∈ S, the operators {A On the nth round, the contextual measurement device accepts a context T = {a 1 , . . . , a k } ∈ S as input, performs the nondestructive measurements for each i ∈ T and outputs the results as a k-tuple of bits (b 1 , . . . , b k ). We will also use the notion of a multi-part quantum device, with a definition similar to that of Definition 4 in [16]. Definition 3. A quantum device D with r components and alphabet size b consists of the following. 1. Quantum systems Q 1 , . . . , Q n whose initial state is given by a density operator Simulation. We use the term "simulation" in the same sense as in Section B.3 of [16]. Specifically, a procedure P 1 "simulates" another procedure P 2 if, for any purifying system E 1 for the devices used in P 1 and any purifying system E 2 for the devices used in P 2 , the joint state O 1 E 1 of the outputs of P 1 together with E 1 is isomorphic to the state of O 2 E 2 . Uncertainty Principles Anti-commuting measurements. The starting point for the results in this section is the next proposition, which follows from Theorem 1 in [2]. Proposition 4.1. For any ǫ ∈ (0, 1], and any linear operators W and Z such that W 1+ǫ = Z 1+ǫ = 1, Theorem 4.2. Let H be a finite-dimensional Hilbert space, and let ǫ ∈ (0, 1]. Let R : C 2 ⊗ H → C 2 ⊗ H be a positive semidefinite operator such that the operator ρ := Tr Proof. We will first dualize the statement of the result. Purify R by taking an additional Hilbert space K and a vector r ∈ C 2 ⊗ H ⊗ K such that R = Tr K (rr * ). Let τ = Tr H (rr * ) and define τ v = Tr C 2 [(vv * ⊗ I K )τ] for any v ∈ C 2 . The operators τ, τ 0 , τ 1 , τ − have the same eigenvalues respectively as ρ, ρ 0 , ρ 1 , ρ − . In particular, τ 1+ǫ = 1. To prove (4.2), we need only to show that the same relation holds for the operators τ * . Write τ as Applying Proposition 4.1 to the operators W = τ and Z = (σ z ⊗ I)τ(σ z ⊗ I), we have the following: as desired. 
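Since the (1 + ǫ)-Schatten norm is the key quantity in the analysis, the following small sketch (an illustration only, not part of the proofs) computes it from singular values and shows numerically that, for a fixed operator, it approaches the trace norm as ǫ → 0, while for fixed ǫ the ratio to the trace norm can still be small in high dimension, as noted above.

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm: the l^p norm of the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))

# For a fixed operator, the (1+eps)-norm tends to the trace norm as eps -> 0.
for eps in (1.0, 0.1, 0.01, 0.001):
    print(f"eps={eps:<6} ||X||_(1+eps) = {schatten_norm(X, 1 + eps):.6f}")
print(f"trace norm     ||X||_1       = {schatten_norm(X, 1):.6f}")

# For fixed eps the ratio ||X||_(1+eps)/||X||_1 can still be small in high
# dimension, e.g. for the identity it equals d**(-eps/(1+eps)) -> 0 as d grows.
d, eps = 1000, 0.5
print(d ** (1 / (1 + eps)) / d)   # 0.1 for these values
```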
The generalizations of Theorem 4.2 that follow use techniques from known uncertainty principles (summarized in [22]; see especially [13]). We borrow a term (see the left side of equation (4.11) below) that measures the noncommutativity of two POVMs. (4.16) Similar simplifications show that the quantity on the left side of (4.11) remains the same when {A 0 , A 1 } is replaced by {A 0 , A 1 }. Therefore, we will simply assume at the outset that {A 0 , A 1 } is projective. Taking an appropriate choice of basis, we may assume that A 0 = I 0 0 0 and A 1 = 0 0 0 I . Note that both A 0 A 2 A 0 and A 0 (I − A 2 )A 0 have operator norm ≤ 1 2 , which is possible if and only if A 0 A 2 A 0 = A 0 /2. Generalizing this reasoning, we can put A 2 and A 3 in the form Note that we must have Y ≤ 1 2 . The following projective measurement is an enlargement of {A 2 , A 3 }. Note that M 00 , M 11 ≤ c (from the definition of c) and M 01 , M 10 ≤ 1. We therefore have the following. (4.24) Combining this bound with (4.26) yields the desired result. A Universal Protocol The central object of this section is Protocol U (see Figure 1) which is an abstraction of the spotchecking protocol (developed in [5] and [21]). In order to use Theorem 4.4, we will first restate it in the following alternate form which is more compatible with [16]. ( 1. Let D be a binary device with commutativity parameter ℓ, let E be a puritfying system for D, and let ρ 0 , ρ 1 , ρ 2 , ρ 3 denote the (subnormalized) states of E corresponding to the input-output combinations 00, 01, 10, 11 for D. Let Then, 2. The function π ℓ (y) = lim x→0 Π ℓ (x, y) satisfies The following theorem now holds by repeating the reasoning from Sections H and I in [16]. Let G = (g 1 , . . . , g N ) and O = (o 1 , . . . , o N ) denote the input and output registers for Protocol U. If E is a purifying system for the device D, let Γ EGO denote the final state of E, G, and O, and let Γ s EGO denote the subnormalized state corresponding to the "success" event. Theorem 5.2. Let ℓ ∈ [ 1 2 , 1], η ∈ (0, 1 2 ), and δ > 0 be real numbers. Then, there exist positive reals b and q 0 such that the following holds. If Protocol U is executed with the parameters N, ℓ, η, q, D, with q ≤ q 0 , and E denotes a purifying system for D, then Kochen-Specker Inequalities Randomness expansion from Kochen-Specker Inequalities has previously been explored in [10,1,20,6]. In this section we give a full proof of security for such expansion. We begin with a formalism which is similar to that of other papers on Kochen-Specker inequalities [12,1,11]. Definition 4. A contextuality game G with m measurement settings is a is a multilinear polynomial Such a polynomial encodes rules for a game as follows. Let D be a contextual measurement device whose set of contexts contains Supp f (that is, contains every element T ⊆ {1, 2, . . . , m} for which f T = 0). To play the game, choose a subset T at random under the probability distribution {| f T |}, and give T as input to the device D . Let (b 1 , . . . , b k ) be the output bits. The score of the game is then given by Remark 1. In Definition 4 we have restricted the scoring functions to be XOR functions and also made assumptions on the probability distribution used to choose the inputs. A more general definition would allow for an arbitrary probability distribution {p T | T ∈ {1, . . . , m}}, and allow the score for each context T to be given by arbitrary functions g T : {0, 1} T → R which assigns to each possible outcome a real number. 
But in fact such a scoring rule can be rewritten in the form of Definition 4 (modulo linear scaling). Let Let c be the sum of the absolute values of the coefficients of the polynomial f , and let f = f /c. Then, for any compatible contextual measurement device D, the expected score awarded to D by ({p T }, {g T }) is c times the expected score awarded by f . A protocol for randomness expansion from contextuality games is given in Figure 2. One convenient aspect of this polynomial formulation is that it is easy to express the supremum of possible expected scores for the game that can be achieved by a contextual measurement device. If the measurements used by D (which we may assume to be projective) are {P i , I − P i } i∈{1,...,m} , and the intial state of D is Φ, then the expected score is where Y i = 2P i − I. Therefore the optimal upper bound on possible expected scores is Another important quantity is the largest possible score that can be achieved by a noncontextual, deterministic device. This quantity is given by If a device achieves a score above c f , then some of its outputs must be random. For the purposes of randomness expansion it is more useful to have a guarantee that a particular output is random (an observation made in [1]). Therefore we will use the following quantity: This is the optimal upper bound on the expected score that can be achieved by a device D given that its output on input {1} is deterministic. Also let w G = (q G + 1)/2 and w ′ G = (q ′ G + 1)/2. (6.7) These are the corresponding bounds on "winning probabilities" (i.e., probabilities of obtaining a score of (+1)). The KCBS game. An example of a contextuality game is the KCBS game from [12] (which was used in [6] for randomness expansion). We express this game as Proposition 6.1. Let g be given by (6.8). Then, q ′ g = 0.6. Proof. It is clear that a score of 3/5 can be achieved by a deterministic device (say, by a device which outputs 0, 1, 0, 1, 0 on inputs 1, 2, 3, 4, 5, respectively). Suppose, for the sake of contradiction, that there is a device D compatible with the KCBS game which outputs a score above 3/5, and which gives a deterministic output on input 1. Let B 1 , B 2 , B 3 , B 4 , B 5 be contextual random variables which represent the outputs of D on inputs 1, 2, 3, 4, 5, and let Z i = 1 − 2B i . It is easy to see that for any i, j ∈ {1, 2, 3, 4, 5} with j = (i + 1) mod 5. (Here we are using · to denote expectation.) Therefore, Using the inequality |a − b| + |b − c| ≥ |a − c|, this bound implies Given that Z 1 = ±1 by assumption, this is a contradiction. As proved in [12], the value of q g is at least 0.788. Therefore there is a gap between q g and q ′ g , which, as we will see momentarily, enables randomness expansion. Universal security. We are now prepared to prove security for Protocol K. All that is needed, in fact, is to write the quantity w ′ f defined above (see (6.6)) in a form that is more compatible with Corollary 5.1. Theorem 6.4. Let f be a contextuality game with m measurement settings, and let η ∈ (0, 1 2 ), and δ > 0 be real numbers. Then, there exist positive reals b and q 0 such that the following holds. If Protocol K is executed with the parameters N, m, η, q, f , D, with q ≤ q 0 , and E denotes a purifying system for D, then (6.14) where ǫ = √ 2 · 2 −bqN and π w ′ f denotes the function from Corollary 5.1. Note that by definition, π w ′ f (η) is nonzero for any η > w ′ f . 
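The deterministic bound in Proposition 6.1 can be checked by brute force. The sketch below assumes the standard KCBS scoring polynomial g(Z) = -(1/5) Σᵢ Zᵢ Zᵢ₊₁ (indices taken mod 5), which is consistent with the example strategy in the proof (outputs 0, 1, 0, 1, 0 achieving 3/5); the display (6.8) itself is not reproduced in the text above, so this form is an assumption. The enumeration only certifies the deterministic value and the achievability of 3/5; the quantum part of the proposition (that no quantum device that is deterministic on input 1 can exceed 3/5) is of course not captured by enumeration.

```python
from itertools import product

# Brute-force check of the deterministic value for the KCBS-type scoring
# polynomial assumed here: g(Z) = -(1/5) * sum_i Z_i * Z_{(i+1) mod 5}.
def score(Z):
    return -sum(Z[i] * Z[(i + 1) % 5] for i in range(5)) / 5.0

# Enumerate all 32 deterministic +/-1 assignments. Restricting to strategies
# whose output on input 1 is deterministic changes nothing here, since every
# deterministic assignment already has that property.
best_deterministic = max(score(Z) for Z in product((+1, -1), repeat=5))
print(f"best deterministic score : {best_deterministic:.2f}")   # 0.60, i.e. 3/5

# The example from the proof: outputs (0,1,0,1,0), i.e. Z = (1,-1,1,-1,1).
Z_example = (1, -1, 1, -1, 1)
print(f"example strategy's score : {score(Z_example):.2f}")
```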
Nonlocal Games Finally, we note that the reasoning in section 6 carries over easily to the context of nonlocal games. Let n, b be positive integers, and define a binary measurement simulation procedure with parameters n, b to be a triple (p, F, V), where p : {1, 2, . . . , b} n → R is a probability distribution, if a function and V : {1, 2, . . . , b} n → {0, 1} is another function. This first two elements p, F prescribe a nonlocal game with n players as follows: a tuple x ∈ {1, 2, . . . , b} n is chosen randomly according to p and the terms x 1 , . . . , x n are given as input respectively to the components D 1 , . . . , D n of an n-part quantum device. The outputs y = (y 1 , . . . , y n ) ∈ {1, 2, . . . , b} n (which are also assumed to be in the alphabet {1, 2, . . . , b}) are received and then the function F is applied to (x 1 , . . . , x n , y 1 , . . . , y n ) to obtain the outcome of the game. (If F = 0 the game is won, and if F = 1, the game is lost.) In Figure 3, we have written a general version of Protocol R (randomness expansion for nonlocal games) which uses the concept of a binary measurement simulation procedure. Let w denote the supremum of the winning probabilities for quantum strategies for the game (p, F). Let w ′ denote the same supremum taken just over quantum strategies that give strictly deterministic outputs when the input (1, 1, . . . , 1) is given and the function V is applied (as in a generation round in Protocol R). Repeating the reasoning from section 6 shows that Protocol R simulates Protocol U with commutativity parameter ℓ = w ′ . Therefore we have the following. Theorem 7.1. Let η ∈ (0, 1 2 ), and δ > 0 be real numbers. Then, there exist positive reals c and q 0 such that the following holds. If Protocol R is executed with parameters N, η, q, X, D, with q ≤ q 0 , and E denotes a purifying system for D, then where ǫ = √ 2 · 2 −cqN and π w ′ denotes the function from Corollary 5.1. Acknowledgements Many thanks to Dong-Ling Deng and Kihwan Kim for sharing with us their work on randomness expansion, and to Patrick Ion for introducing us to the literature on Kochen-Specker inequalities.
2015-09-03T17:22:39.000Z
2014-11-24T00:00:00.000
{ "year": 2014, "sha1": "508942952a7882d961b05acd0ab5a6e08cab9991", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1411.6608", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "955cf555a2e620a18a2dc6dc9b981440a68d74f5", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics", "Computer Science" ] }
6295209
pes2o/s2orc
v3-fos-license
The impact of pulmonary metastasectomy in patients with previously resected colorectal cancer liver metastases Background 40–50% of patients with colorectal cancer (CRC) will develop liver metastases (CRLM) during the course of the disease. One third of these patients will additionally develop pulmonary metastases. Methods 137 consecutive patients with CRLM were analyzed regarding survival data, clinical and histological data, and treatment. Results were stratified according to the occurrence of pulmonary metastases and metastases resection. Results 39% of all patients with liver resection due to CRLM developed additional lung metastases. 44% of these patients underwent subsequent pulmonary resection. Patients undergoing pulmonary metastasectomy showed a significantly better five-year survival compared to patients not qualified for curative resection (5-year survival 71.2% vs. 28.0%; p = 0.001). Interestingly, the 5-year survival of these patients was even superior to that of all patients with CRLM who did not develop pulmonary metastases (77.5% vs. 63.5%; p = 0.015). Patients whose pulmonary metastases were not resected were more likely to redevelop liver metastases (50.0% vs. 78.6%; p = 0.034). However, the rate of distant metastases did not differ between both groups (54.5% vs. 53.6%; p = 0.945). Conclusion The occurrence of colorectal lung metastases after curative liver resection does not impact patient survival if pulmonary metastasectomy is feasible. Those patients clearly benefit from repeated resections of the liver and the lung metastases. Introduction Colorectal carcinoma (CRC) is the most common cancer of the gastrointestinal tract and the second most common cause of cancer-related deaths both in the United States and Europe [1]. About half of all patients develop distant metastases, either as synchronous metastases diagnosed at the time of initial detection of cancer or in the follow-up period as metachronous metastases [2]. These distant metastases are mainly located in the liver (CRLM). Over the past two decades, resection of CRLM has increased significantly and has led to a long-term survival of up to 50% after curative liver resection [3]. The second most common organ in which distant metastases arise is the lung, and around 10% of patients with CRC will develop pulmonary metastases [4,5]. The five-year survival rate of these patients without surgery is assumed to be below 5%. Similar to liver metastases, the resection of pulmonary metastases has increased during the last decade, leading to five-year survival rates of up to 68% for patients after metastases resection [6][7][8][9][10].
The introduction of multimodal treatment options, including chemotherapy and surgery, has resulted in a dramatic survival benefit for patients with metastatic disease. After curative metastases resection patients benefit from the surgery with an increased survival rate, which has led to surgery being introduced as the gold standard in this selected patient population. Around 10-20% of patients with CRC will develop both liver and lung metastases. So far, the benefit of surgical resection of pulmonary metastases, arising either simultaneously or after the resection of liver metastases, is discussed controversially in the literature. Similar to liver metastases, several factors have been identified as being associated with negative outcome, such as short disease-free survival, high carcinoma embryonic antigen (CEA), as well as the number and size of metastases [10][11][12]. Moreover, several studies focused on the outcome after pulmonary metastasectomy and found prior liver resection to be a negative predictive marker [13,14]. The number of patients with metastatic colorectal disease being treated with a multimodal therapy approach is rapidly increasing. Therefore, it is of great interest to further stratify treatment options for a subgroup of patients presenting with pulmonary metastases, either synchronous or metachronous with regard to CRLM. The aim of this study was to evaluate the oncological outcome after pulmonary metastasectomy in patients with previous liver resection for CRC metastases. Patient population All patients with colorectal liver metastases treated at the University of Wuerzburg Medical Centre (UKW) between January 2003 and May 2013 were registered in the Wuerzburg Institutional Database (WID). Data source The WID is a central data repository, which has been continuously expanded on a daily basis since 1984 with clinical, operative and research data of patients, who were evaluated and treated at the UKW. The collection of data and scientific analysis was approved by the institutional review board ("Ethik-Komission bei der Medizinischen Fakultät" #2017011001). The UKW is one of three institutions in an area with a population of about 515,000 to treat patients with CRC. Data available within the WID include patient demographics, histological diagnoses based on coding standards of the International Classification of Diseases, physician data, inpatient admission and outpatient registration data, operative procedures, laboratory results and computerized pharmacy records. Continuous cross platform integration with the Wuerzburg Comprehensive Cancer Registry ensures updated follow-up information for identification of deceased patients. Inpatient and outpatient records of all identified patients were reviewed retrospectively to extract information regarding type and duration of chemotherapy, sites of metastatic disease at presentation and disease status at last follow-up. Missing data was retrieved from patient case notes when possible. Demographic details were compiled, along with clinical variables recorded at the time of primary diagnosis as well as during the initial operation (tumor site and the presence of any metastases) and histological details of the resected specimen (tumor (T) stage, nodal (N) stage, tumor differentiation (G) and evidence of microscopic venous (V) and lymphatic vessel invasion (L)). This data was correlated with survival data obtained from prospective follow-up. 
Follow-up Postoperative follow-up consisted of quarterly outpatient assessments or the gathering of complete information from patients' primary care physician in 3-month intervals for at least 10 years. After 10 years, information was gathered retrospectively on an annual basis. Follow-up was performed by protocols according to entity and tumor stage with abdominal ultrasound after 3, 6, 12 and 18 months and after that on a yearly basis. Computer tomography and surveillance colonoscopy were performed routinely 3 or 6 months after the operation and were repeated every year. After 5 years, structured follow-up ceased and diagnostic tests were based on symptoms or incidental findings and initiated according to individual cases. Statistical analysis The data was analyzed with a statistical software set up in Linux by an in-house biostatistician. Clinical and histological parameters were compared with the Mann-Whitney U or Kruskal-Wallis test for continuous data and with the χ2 test for categorical variables. P<0.05 was considered statistically significant. Cox proportional hazard modeling or 'Cox regression' was used for multivariate testing. Survival curves were drawn according to Kaplan-Meier methods. Ethic statement The study was performed with permission of the local ethics committee (#2017011001). The head of the board for internal data requests, Dr. U Maeder granted permission to access data from the registry. All patients provide informed written consent to have their medical record data used in research. Patients with additional pulmonary metastases did not differ in age, sex, performance status, location and classification of the primary cancer (T-stage, N-stage and UICC-stage), as well as the time of liver metastasis occurrence (synchronous / metachronous) from those patients without pulmonary metastases. However, primary tumors of patients with additional pulmonary metastases showed less venous infiltration in the pathological staging (summary of data in Table 1). Between Of the 53 patients with additional pulmonary metastases, 22 (41.5%) underwent curative resection, in three (5.7%) patients a partial, most likely non curative, resection was performed and 28 (52.8%) did not undergo surgery for their pulmonary metastases for various reasons. Among these twenty-eight patients, three patients showed a diffuse lung metastatic pattern not suitable for resection, nine presented a recurrence of their liver and pulmonary metastases, eleven patients had additional metastases other than in the lung or liver, in two cases a multidisciplinary watch and wait decision was made, one patient showed a complete response following chemotherapy, and in two cases the reason for not undergoing surgery was unknown. The above mentioned three patients with partial, most likely non curative, resection were excluded from further analysis. The decision for pulmonary resection was made in a multidisciplinary team round according to operation technique and oncological reasons. No differences in main demographic and clinical parameters were detected when comparing the patients, who underwent resection, with those, who did not undergo resection of pulmonary metastases ( Table 2). When comparing the pathological analysis of resected metastases to the radiological analysis of metastases in the non-resected group, there was a trend to a higher number of metastases in the group without resection compared to the resected group, though not reaching statistical significance. 
There was also a trend to lower CEA-levels in the pulmonary resection group, also not reaching statistical significance (10.7μg/l vs. 92.4μg/l; p = 0.06). The median follow-up for all patients was 37.97 months, with a median survival of 76.78 months. The median time span from liver resection to the occurrence of pulmonary metastasis was 288.5 days (range: -798 to 2646 days). The time span was shorter for patients, who had pulmonary resection, than for those, who did not (147 days vs. 578 days; p = 0.009). This result was greatly influenced by three patients, who underwent pulmonary metastasectomy prior to liver resection in synchronous liver and lung metastases. When only analyzing the metachronous metastasis there is no significant difference between these two groups (362 days vs. 578 days; p = n.s.). Compared to patients without pulmonary metastases, those with additional pulmonary metastases developed a recurrence of their liver metastasis and other extra-pulmonary metastases significantly more often (66.0% vs. 34.5%; p <0.001 and 54.7% vs 21.4%; p<0.001). This reflects a more advanced stage of the disease. Analyzing the group of patients with additional pulmonary metastases following result was found: those, who underwent resection, showed a less likely recurrence of their liver metastases compared to those, who did not undergo surgery (50% vs. 78.6%, p = 0.034). However, the percentage of patients with a recurrence of extra-hepatic metastases did not differ in these two patient groups (54.5% vs. 53.6%, p = 0.945) ( Table 3). The median overall survival of all patients was 76.78 months (+/-SD 14.21). The 3-and 5-year survival rate was 72.8% and 60.5%, respectively. Unexpectedly, the 5-year survival rate of patients with pulmonary metastases in addition to CRLM did not differ from the survival rate of patients with CRLM, who had not developed pulmonary metastases (5-year survival rate: without pulmonary metastases 56.7%; with pulmonary metastases 63.5%) (Fig 1). When focusing on the group with pulmonary metastases, curative resection of pulmonary metastases resulted in a significant survival benefit. Patients undergoing surgery showed a significantly better 3-year-survival of 87.2% and a 5-year survival of 77.5% compared to 62.5% 3-year survival and 36.5% 5-year survival in patients, who did not undergo pulmonary metastases resection (p-value: 0.015) (Fig 2). When comparing the survival of patients with pulmonary metastases resection to those not undergoing resection with regard to the primary tumor location (colon / rectum), a survival benefit for resected patients was seen regardless of the primary tumor location. This result did not reach statistic significant values due to too few patients in each group (Colon: p = 0.078; Rectum: p = 0. 22). Surprisingly, we found an improved 3-and 5-year survival in patients with resected pulmonary metastases compared to those patients, who did not develop pulmonary metastases at all (3-year survival rate 87.2% vs. 71.1% 5-year survival rate 77.5% vs. 63.5%; p = 0.211) (Fig 3). While we found the N-stages of the primary tumor to be a significant factor for long term survival after resection of liver metastases in multivariate testing, we were unable to identify a predicting factor for the prognosis of patients with pulmonary and liver metastases. 
In a multivariate analysis of the potential outcome-related factors (CEA-level, N-stage, primary tumor location, time span to occurrence of pulmonary metastases, age), we did not find any statistically significant correlation to an inferior or superior outcome after pulmonary metastasectomy (Fig 4). Discussion During the last decade, the therapeutic options for patients with metastatic colorectal cancer have improved dramatically. New chemotherapeutic agents and improvement in surgical techniques for liver and / or lung metastases resections allow long term survival rates of up to 40% in UICC stage IV patients [15,16]. The surgical options for the resection of liver metastases have improved drastically over the last decade. The resection of single or few metastases has evolved to anatomic major hepatectomies and more recently to extended liver resections, requiring multiple operative steps together with interim induction of hypertrophy of the future liver remnant; i.e. conventional two stage hepatic resections and the ALPPS procedure (Associating Liver Partition and Portal vein Ligation for Staged hepatectomy) [17]. Due to the increased survival in resected patients and potential cure in about 30% of patients with stage IV disease, liver metastasectomy has become the gold standard for treatment of resectable liver metastases, even in bilobar multifocal metastases [11,[18][19][20][21]. This development resulted in a steadily growing number of stage IV patients, who are considered for hepatic resection. With increasing numbers of patients considered for and ultimately undergoing surgical resection of liver metastases, as part of multimodal therapy concept for stage IV colorectal cancer, the cohort of patients with a combination of hepatic and pulmonary metastases will increase as well. Resection of pulmonary metastasized colorectal cancer has clearly been demonstrated to improve survival. However, surgical treatment options for patients with both liver and lung metastases has been discussed controversially in the past [22]. Metastases in more than one distant site has been regarded as a sign of aggressive tumor biology with poor outcome and little chance for long-term survival following surgical treatment. In contrast, others report favorable outcome data for patients undergoing both liver and lung metastasectomy [23]. A recent pooled analysis identified 146 patients in five studies published between 1983 and 2009, who underwent pulmonary metastasectomy after previous liver resection. The five-year overall survival was 54.4%, which was found to be superior to the expected survival of patients with UICC stage IV CRC [7]. This survival is comparable to the observed overall survival in our study of 77.5%. In the literature previous liver resection has been considered as a negative predictive marker for the oncological outcome following lung metastasectomy [14]. In fact, many of these studies include only a series of consecutive patients and / or were carried out before the introduction of modern chemotherapeutic and biological agents, which might be one explanation for the difference in outcome. To estimate the prognosis of patients presenting with pulmonary and liver metastases in our own patient population, we retrospectively analyzed all patients, who underwent resection of liver metastases from colorectal cancer at our institution with a special focus on the occurrence and treatment of additional pulmonary metastases. 
Our results clearly demonstrate that patients with additional pulmonary metastases, who did not undergo resection, experienced an inferior outcome. The overall 5-year survival rate in this group was less than 40%, but nearly 80% for patients who underwent curative resection of their pulmonary metastases. This might be the result of different biological types of tumors, as nearly all patients, who did not undergo pulmonary resection, displayed a diffuse metastatic pattern. Interestingly, our group of patients showed a better 5-year survival rate compared to the data found in the literature [13,14]. This can be explained by several reasons. First, many patients in our study were also treated with modern multimodal chemotherapy agents, differing from previous studies in the literature [14]. Second, there might be a selection bias in the patients undergoing pulmonary / liver resection. However, based on the registry data, we could not identify any factors varying between the two patient cohorts. Third, except the enhanced 5-year survival rate, which is higher compared to current published data, the disease-free or relapse-free 5-year survival rate was about 30% (data not shown), comparable to the results found in current publications [13,14]. This indicates an improved survival due to the application of new chemotherapeutics and repeat-liver resection, which prolongs the overall survival, but did not influence the recurrence-free survival. One limitation of this study is the sole comparison of patients undergoing pulmonary resection to patients, who were diagnosed with pulmonary metastases based on growing lesions or newly identified lesions in a CT scan. In retrospect, we were only partially able to evaluate why some patients did not undergo resection despite no significant differences in the number of pulmonary metastases and other demographic factors between both groups. But patients in the non-resected group mainly presented with advanced cancer spread at the point in time of pulmonary metastasis detection, reflecting a worse tumor biology. Another limitation is that the patients were treated with different chemotherapy protocols and agents, making it impossible to evaluate the chemotherapy impact due to the small study cohort. Several factors have been proposed to correlate with the survival after resection of pulmonary metastasis, such as the number of metastases CEA levels or the N-stage of the primary tumor [24,25]. For liver metastases, the so-called Fong score and other scoring systems predict survival after resection. One major prognostic factor of the Fong score is the occurrence of lymph-node metastasis combined with the primary tumor. This turned out to be reproducible for liver metastases in our study. However, we did not find an influence of the primary N-stage on the development of additional pulmonary metastases in our patients. A recent short Meta-analysis by Lamuchi and colleagues including 1669 patients identified elevated preoperative serum levels of carcinoembryonic antigen (CEA), the presence of Pulmonary metastasectomy after curative CRLM resection multiple or bilateral pulmonary metastasis, mediastinal lymph node involvement, and a shorter disease free survival as worse prognostic factors. Unfortunately, we could not reproduce this data in our cohort due to different reasons. Only for a minority of patients the CEA levels were available prior to pulmonary resection. Furthermore, the number of patients in each group was too small to reach valid data [26]. 
Interestingly, there is also a tendency for patients with both lung and liver metastasis, who underwent successful resection of their metastases, to have a better outcome than patients not developing pulmonary metastases at all. Comparable results have been published by Brouquet in 2011 and Riquet in 2010 [8,27]. One explanation might be an altogether favorable tumor biology leading to the development of single or resectable multiple pulmonary metastases. Furthermore, while pulmonary metastases can be resected and, in principle, cured, most patients with peritoneal carcinomatosis, diffuse lymphatic metastasis or bone metastasis cannot be treated by surgical resection. In line with these results, patients with metastases outside the liver or lung have a worse outcome. Similarly, a large series of patients with pulmonary resection of colorectal metastases showed the occurrence of extra-thoracic metastases as an independent prognostic factor for poor survival [7]. This observation could be the reason why the development of pulmonary metastases per se is not associated with a worse outcome. Furthermore, the resection of pulmonary metastasis leads to a "tumor free timespan" and thereby could reduce the number of CTX and cumulative dose toxicity and could save the opportunity for multimodal CTX in diffuse metastatic stage. In our population, only 6 out of 22 patients (27.3%) showed long term disease free survival (data not shown). In conclusion, we could show that resection of both pulmonary and liver colorectal metastases led to an excellent long-term survival and should be considered whenever possible. Furthermore, the development of additional resectable pulmonary metastases is not necessarily a poor prognostic marker. In case of synchronous metastases to the liver and lung we prefer a "liver first" approach, due to two reasons. A) to avoid compromised ventilation after abdominal laparotomy, which is the case when pulmonary metastases are resected in advance. B) lung metastases are often small and relative growth during the time delay due to liver resection does not render them inoperable, whereas vice versa the growth of CRLM could lead to an inoperable state. This is especially the case in bipulmonary metastasis where a two stage procedure is intent which will take a timeframe of up to 12 weeks. This data from a retrospective, single institution analysis should encourage multi-disciplinary tumor boards to consider patients with metachronous and synchronous hepatic and pulmonary metastases for surgical resection.
2018-04-03T06:05:10.812Z
2017-03-22T00:00:00.000
{ "year": 2017, "sha1": "e325a94349187ef86e48f2a4429e406a8aadaaa0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0173933&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e325a94349187ef86e48f2a4429e406a8aadaaa0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252996363
pes2o/s2orc
v3-fos-license
Investigation of mixing viscoplastic fluid with a modified anchor impeller inside a cylindrical stirred vessel using Casson–Papanastasiou model In process engineering as chemical and biotechnological industry, agitated vessels are commonly used for various applications; mechanical agitation and mixing are performed to enhance heat transfer and improve specific Physico-chemical characteristics inside a heated tank. The research subject of this work is a numerical investigation of the thermo-hydrodynamic behavior of viscoplastic fluid (Casson–Papanastasiou model) in a stirred tank, with introducing a new anchor impeller design by conducting some modifications to the standard anchor impeller shape. Four geometry cases have been presented for achieving the mixing process inside the stirred vessel, CAI; classical anchor impeller, AI1; anchor impeller with added horizontal arm blade, AI2 and AI3 anchor impeller with two and three added arm blades, respectively. The investigation is focused on the effect of inertia and plasticity on the thermo-hydrodynamic behavior (flow pattern, power consumption, and heat transfer) by varying the Reynolds number (Re = 1, 10, 100, 200), Bingham number (Bn = 1, 10, 50), in addition to the effect of geometry design in the overall stirred system parameters. The findings revealed an excellent enhancement of flow pattern and heat transfer in the stirred system relatively to the increase of inertia values. Also, an energy reduction has been remarked and the effect of anchor impeller shape. AI3 geometry design significantly improves the flow pattern and enhances heat transfer by an increased rate of 10.46% over the other cases. List of symbols Mechanical agitation and mixing are critical processes in various sectors, including chemical-pharmaceutical, petroleum metallurgy, etc. It was utilized in various processes, including polymerization, dispersing, emulsifying, suspending, and mass and heat transfer enhancement. The purpose of mixing is to obtain a specific degree of homogeneity within the stirred system. Temperature is a significant element in the majority of chemical reactions. It significantly influences thermodynamics, kinetics, homogeneity process, and product quality. The thermal exchange may be accomplished by jacket heating or through the use of heat exchange internals in a stirred tank. In many industrial processes, the mixing of viscoplastic fluids is commonly carried out in stirred tanks, such as fermentation, pharmaceuticals, polymerization, personal and home care products, and food products. The study of thermal behavior inside the stirred tank is essential since they are essential in many relevant processes. Such as, they are critical to achieving the aimed reaction products, preventing the thermal loss control of reactions, and generating the appropriate super-saturation for the development of suitable crystals. Computational Fluid Dynamics (CFD) is an effective software for studying complex fluid flows involving multi-physical fields. There are many numerical thermal behavioral studies and fluid flow in stirred tanks. Given the importance of mixing in various industrial processes, numerous studies consisted of experimental design and computational studies to develop the effectiveness of hydro-dynamic compositions and the operating constraints. Several studies were reported in the mechanical agitation literature by many authors; Bertrand et al. 1 9 , and Jaszczur et al. 10 . Rajasekaran et al. 
11 used ANSYS Fluent software to investigate milk's flow pattern and thermal behavior during heating inside a stirred vessel. They found that the heat transfer coefficients simulated by the CFD program closely match the experimental data. Daza et al. 12 numerically investigated the thermal behavior inside a jacketed stirred vessel. They obtained a dimensionless Nusselt number correlation for stirred systems equipped with a six-blade turbine. Hami et al. 13 numerically studied the hydrothermal behavior of a Newtonian fluid inside a stirred vessel equipped with an inclined-blade anchor. They found that the average Nusselt number decreased with increasing blade inclination angle. Benmoussa et al. 14,15,16 analyzed the effect of plasticity and inertia on the thermal and hydrodynamic behavior inside a stirred tank. They showed that the thermal efficiency is affected by the variation of the hydrodynamic parameters (inertia and plasticity) inside the stirred vessel. Srinivas et al. 17 experimentally evaluated the performance of an agitated helical-coil heat exchanger using an Al2O3-water nanofluid. Their results indicated that increasing the rotational speed and the fluid temperature improved the energy performance by up to 10.65%. Jaszczur et al. 10 analyzed the heat transfer along a jacketed cylindrical vessel. They established Nusselt number correlations with the Reynolds number and found that the thermal behavior depends on the inertia parameters. Further experimental studies have been conducted to investigate and improve the thermal behavior of stirred vessels, e.g. SK et al. 18 . However, it is crucial to note that few works address the thermal behavior of viscoplastic fluids in stirred tanks. Furthermore, no study has employed this particular fluid model (Casson-Papanastasiou) in such investigations. Previous research also indicates a shortage of mixed-convection investigations in laminar flows induced by agitators that generate predominantly tangential motion. The present study entails a thermo-hydrodynamic examination of a Casson-Papanastasiou fluid and of the effect of inertia ( Re = 1−100 ), plasticity ( Bn = 1−100 ), and the geometrical design of the anchor impeller. Four different cases are under study (CAI: classical anchor impeller; AI1: anchor impeller with one added arm blade; AI2: anchor impeller with two added blades; AI3: anchor impeller with three added blades), and their influence on the flow pattern, heat transfer intensity, and power consumption is assessed. Figure 1 illustrates the agitated vessel equipped with a classical anchor impeller. The stirred system consists of a cylindrical tank with a flat bottom. The mixing process was investigated with a hot temperature T h imposed at the tank sidewall, while the anchor and the tank bottom wall are assumed to be adiabatic. The vessel was also equipped with different geometrical configurations of the anchor impeller, as shown in Fig. 2: the first case is the classical anchor impeller (CAI), the second is an anchor impeller with one added arm blade (AI1), the third is an anchor impeller with two added blades (AI2), and the fourth is an anchor impeller with three added blades (AI3). All geometry parameters are given in Table 1. Mathematical model. A 3-D numerical simulation of the thermal, laminar mixing of a viscoplastic fluid inside the stirred vessel was carried out.
This investigation was performed with a CFD code that solves the momentum and energy equations with the finite element method, using a Galerkin discretization on an unstructured mesh, as shown in Fig. 3. The computational domain was discretized with a tetrahedral mesh, which is particularly suitable for representing the geometrical domain owing to its high adaptability to curved surfaces. Mesh test. A finer mesh was adopted for the 3D study, for which the results reported in Table 2 remain unchanged, although the computation time increases. The convergence criterion requires the error on each dimensionless dependent variable to be lower than 10^-6. The mesh test was performed under the conditions Re = 100, µ = 0.01 Pa s, and τ = 1 Pa. Energy equation and rheological model. The flow and energy equations are solved together with a constitutive law for the yield-stress fluid: to model the stress–deformation behavior, the Casson constitutive equation regularized by Papanastasiou25 is used. Here µ_p represents the plastic viscosity, m is the stress growth exponent (the regularization parameter), γ̇ is the shear rate, and τ is the shear stress. Dimensionless parameters. Dimensionless variables were used to reduce the number of parameters and to convert the dimensional governing equations to their dimensionless form; the governing groups include the Reynolds and Bingham numbers, with the product RePr appearing in the dimensionless energy equation (Eq. (7)). For the heat transfer problem in the stirred tank, the thermal boundary conditions are expressed in dimensionless form: T_h denotes the hot wall temperature (set to T_h = 1) and T_c denotes the cold wall temperature (set to T_c = 0). In addition, the Nusselt number Nu, which characterizes the heat transfer rate during the mixing operation, is expressed as denoted in Eq. (16). Power number. The power consumption P is calculated from an expression involving the rotational speed N, the surface A around the impeller, and the force components (F_x, F_y) in the x and y directions. Validation. To verify the computational code, the numerical results were compared with those previously published in the literature (Fig. 4). The power numbers and tangential velocities obtained in the present study were compared with previous numerical and experimental studies of anchor impellers from the mixing and mechanical agitation literature: Ameur and Youcef4, Prajapati and Ein-Mozaffari26, and Marouche et al.27. Figure 4a illustrates the variation of the power number as a function of the Reynolds number (Re = 1, 10, 50, and 100); the comparison with our results shows excellent agreement. Marouche et al.27 is used as the reference for the second comparison: Bingham fluids with the same rheological and geometrical parameters were considered (anchor impeller, working fluid with µ = 0.1 and τ = 0.1, and inertia value Re = 13.8). In Fig. 4b, the tangential velocity is high near the impeller and decreases slowly toward the wall; our findings closely match the previous numerical results of Marouche et al.27.
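To make the rheological model discussed above concrete, the sketch below evaluates a commonly used Casson–Papanastasiou apparent-viscosity expression. The exact form and parameter values used in the paper are not reproduced in the extracted text, so the formula and the numbers (plastic viscosity, yield stress, regularization exponent) should be read as illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def casson_papanastasiou_viscosity(gamma_dot, mu_p=0.01, tau_0=1.0, m=1000.0):
    """Apparent viscosity of a Casson fluid with Papanastasiou regularization.

    A commonly used regularized form is
        sqrt(tau) = sqrt(mu_p * gamma_dot) + sqrt(tau_0) * (1 - exp(-sqrt(m * gamma_dot)))
    so the apparent viscosity eta = tau / gamma_dot stays finite as gamma_dot -> 0.
    Parameter values here are placeholders, not the paper's settings.
    """
    gamma_dot = np.maximum(gamma_dot, 1e-12)  # avoid division by zero at rest
    sqrt_term = (np.sqrt(mu_p)
                 + np.sqrt(tau_0 / gamma_dot) * (1.0 - np.exp(-np.sqrt(m * gamma_dot))))
    return sqrt_term ** 2  # eta(gamma_dot) in Pa.s

# The viscosity plateaus at low shear rates (solid-like behaviour) and tends
# towards the plastic viscosity mu_p at high shear rates.
for g in np.logspace(-3, 3, 7):
    print(f"gamma_dot = {g:9.3f} 1/s  ->  eta = {casson_papanastasiou_viscosity(g):10.4f} Pa.s")
```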
Results and discussion. Inertia effect. The flow pattern is an essential criterion for assessing the mixing performance of a stirred tank system. It is affected by the rotational speed of the anchor impeller and by its geometrical design. Different inertia values are tested (Re = 1, 10, 50, and 100), and several anchor impeller shapes are introduced to analyze their influence on the hydro-thermal behavior inside the mixing vessel. Figures 5 and 6 show the velocity magnitude and the velocity vector field along the vertical median plane and the impeller plane, respectively, for different values of the Reynolds number. For Re = 1, the vector field is similar at all heights of the vertical plane and the vectors remain essentially parallel, indicating that the flow is predominantly tangential; moreover, the moving zone is limited to the impeller region and the dimensionless velocity is low, in this case Vt* = 1 × 10^-4. As the inertia value increases (Re = 10 and 50), the vector field density around the impeller increases, accompanied by a slight change in the flow pattern: the vectors incline from the radial toward the axial direction and the moving zone expands toward the whole tank. Likewise, for Re = 100, a high vector field density is created near the impeller compared to the previous cases, and the flow pattern becomes strongly radial. Figure 7 presents the velocity contours on a horizontal section of the vessel. A clear positive correlation is observed between inertia and the well-mixed (moving) zone: as the inertia value increases, the velocity magnitude rises, i.e., the flow intensity throughout the stirred tank increases with inertia. Figure 8 shows the evolution of the power number (Np) as a function of the inertia value. The results indicate that a continuous increase of the Reynolds number reduces the energy required by the stirred system: the higher the inertia parameter, the lower the power consumption, so increasing Re minimizes the energy cost of this stirred vessel. Figures 9 and 10 illustrate the velocity magnitude in the tangential direction (impeller plane) and the radial direction (median plane) along the vertical section of the stirred tank. Overall, the maximum velocity is located near the middle of the vessel, at the anchor region, and decays on approaching the side wall of the vessel. The velocity distributions on the impeller plane are comparable to the results obtained by Kada et al.28 and Benmoussa15. The maximum velocity on the median plane (radial velocity) reaches Vr* = 0.36, while the maximum on the impeller plane (tangential velocity) is Vt* = 0.96. From this observation we may conclude that tangential flow dominates in this stirred system; the same effect was found by Ameur29 and Mebarki et al.30. Analyzing the velocity distribution along the vertical axis gives a precise picture of the flow structure inside the stirred tank. Figure 11 illustrates the velocity distribution along the vertical axis of the vessel; the maximum value of Vz is 0.11. Compared with the tangential velocity, Vz is very small, which means that the axial flow has little impact in this stirred tank and that the tangential flow is dominant in this system.
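For readers who want to reproduce the power-draw analysis discussed above, the sketch below shows one conventional way to turn a simulated impeller torque into a power number. The torque and the geometric/physical parameters are placeholders, and the paper itself evaluates the power from the force components over the impeller surface, so this is only a sketch of the standard definitions (P = 2πNM, Np = P/(ρN³D⁵)), not the authors' exact post-processing.

```python
import math

def power_number(torque, N, rho, D):
    """Convert a simulated impeller torque into a power number.

    Standard stirred-tank definitions (not necessarily the paper's exact
    post-processing, which integrates force components over the impeller surface):
        P  = 2 * pi * N * torque   (N in revolutions per second)
        Np = P / (rho * N**3 * D**5)
    """
    P = 2.0 * math.pi * N * torque
    return P / (rho * N ** 3 * D ** 5)

# Illustrative (made-up) values: torque from a CFD run, impeller speed,
# fluid density and impeller diameter.
M_sim = 0.85        # N.m, hypothetical torque reported by the solver
N_rps = 1.0         # rev/s
rho = 1000.0        # kg/m^3
D_impeller = 0.18   # m
print(f"Np = {power_number(M_sim, N_rps, rho, D_impeller):.2f}")
```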
Heat transfer in the stirred tank. The isotherm fields show a deviation of the thermal gradient at the higher inertia values (Re = 50 and 100) from the axial toward the radial direction, indicating a change in the thermal behavior inside the stirred tank. The same remark can be made from Fig. 13 for the horizontal view of the stirred vessel: the thermal gradient field is regular for the low Reynolds values (Re = 1 and 10), whereas at higher inertia the thermal field changes direction, especially near the anchor region. Similar outcomes were reported in previous analyses of the thermal and hydrodynamic behavior, both in 2-D numerical studies (Refs. 15, 29–31) and in the 3-D simulations of Pedrosa et al.32 and Gammoudi et al.33. Figures 14 and 15 illustrate the temperature variation along the radial and axial directions inside the stirred tank, respectively; the increase in the inertia value leads to an increase in temperature. Zooming in on the axial temperature profile (Fig. 16) shows that the temperature reaches about 0.16 for Re = 1 and about 0.48 for Re = 100, i.e., increasing the inertia roughly triples the thermal response inside the stirred vessel. A positive correlation is thus found between heat transfer and inertia throughout the stirred tank, and this intensification of the thermal behavior confirms the positive impact of inertia on enhancing the thermal flow in the mixing system. Rheology effect. Rheology is an important parameter affecting the thermo-hydrodynamic behavior within the agitated system: it influences the flow pattern, the heat transfer, and the power consumption required by the mixing system. In this part, we analyze the effect of rheology on the thermo-hydrodynamic structure by varying the plasticity of the viscoplastic fluid. Figures 17, 18, 19 and 20 show the flow fields for different plasticity values: an increase in flow intensity is observed when the plasticity decreases (Bn = 10), as illustrated in Figs. 19 and 20, and a radical change in the flow pattern occurs at higher inertia (Re = 50 and 100), with the streamlines inclining markedly toward the axial direction. For a low plasticity value (Bn = 1), no vortex is created near the blade region for Re = 1, and the flow quickly becomes axial as the inertia value increases (Re = 50 and 100), as shown in Figs. 21 and 22. A coupled effect between inertia and plasticity is also observed: the lower the plasticity, the faster the flow pattern of the mixing system changes, even at low inertia values. It is apparent from Figs. 17, 18, 19, 20, 21 and 22 that for high plasticity the motion inside the stirred tank is slow, with a stagnant region in the vicinity of the vessel wall, whereas a decrease in the Bingham number allows the fluid to move more easily in the vertical direction, which enhances the flow inside the tank. At low Reynolds numbers a vortex area exists near the shaft of the vessel and dissipates as the Reynolds number increases; the same result was obtained by Ameur29. Figure 23 presents the power consumption as a function of the plasticity parameter Bn: increasing the plasticity value increases the energy consumed by the stirred system.
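The stagnant regions described above arise because a viscoplastic fluid only flows where the local stress exceeds the yield stress. The snippet below illustrates that criterion with the same regularized Casson-type law sketched earlier; the yield threshold and the shear-rate values are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def casson_stress(gamma_dot, mu_p=0.01, tau_0=1.0, m=1000.0):
    """Shear stress from a Papanastasiou-regularized Casson law (illustrative parameters)."""
    g = np.maximum(gamma_dot, 1e-12)
    return (np.sqrt(mu_p * g) + np.sqrt(tau_0) * (1.0 - np.exp(-np.sqrt(m * g)))) ** 2

# A crude picture of yielded vs. (nearly) unyielded material: where the local
# stress stays at or below the yield stress, the fluid behaves as a quasi-solid plug.
tau_yield = 1.0                                            # Pa, assumed yield stress
shear_rates = np.array([1e-4, 1e-2, 1.0, 10.0, 100.0])     # 1/s, hypothetical local values
for g in shear_rates:
    t = casson_stress(g, tau_0=tau_yield)
    state = "quasi-solid (stagnant)" if t <= 1.05 * tau_yield else "yielded (flowing)"
    print(f"gamma_dot = {g:8.4f} 1/s  tau = {t:6.3f} Pa  -> {state}")
```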
Geometry design effect. In this study, different geometrical designs of the anchor impeller are introduced into the mixing system to analyze their impact on the thermo-hydrodynamic behavior of the stirred tank. The classical anchor impeller was modified by adding arm blades to the original shape at different positions, giving four geometry combinations: CAI is the classical anchor impeller; AI1 is the anchor impeller with one additional blade in the center; AI2 adds two blades to the center of the anchor impeller; and AI3 adds three blades. Figure 24 shows the velocity and vector field distribution on the vertical section of the vessel for the different anchor impeller shapes; the well-moving zone increases significantly with the number of arm blades. In CAI and AI1, the flow vector fields are essentially parallel at all levels of the stirred tank, meaning that the flow is mostly tangential and the velocity values are low. With the increase in the number of blades (AI2 and AI3), a radical change in the flow pattern occurs with the appearance of axial flow. The axial flow pumps fluid from the bottom to the top of the stirred tank, improving the flow pattern and producing a high vector field density along the vessel, which explains the existence of a large moving zone. Figure 25 shows the isotherm contours in the vertical section of the tank for the different geometry configurations. The results show that increasing the number of blades leads to higher temperature values throughout the stirred tank. The isotherm pattern exhibits a deviation of the thermal gradient direction (irregular gradation of the thermal contours) for AI2 and AI3, compared with CAI and AI1, which show a regular thermal gradient distribution; this is related to the effect of forced convection in the stirred tank and the impact of the anchor impeller design. Increasing the number of blades does not noticeably affect the energy consumption. However, based on the previous findings, the impeller with three blades (AI3) may be selected as the most efficient because of its improvement of the heat transfer and its acceleration of the flow field within the stirred tank. Figure 26 shows the Nusselt number variation as a function of the Reynolds number for the different geometry cases. AI3 has a significant effect on the thermal behavior compared with the other cases, which confirms the efficiency of this geometry configuration in improving and intensifying the thermal behavior inside the stirred tank: it increases the heat transfer rate by 10.46% for a Reynolds number Re = 100, as shown in Table 3.
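As a quick sanity check of the 10.46% figure quoted above, the enhancement rate can be computed directly from the Nusselt numbers of two designs at the same Reynolds number. The Nusselt values below are placeholders (Table 3 is not reproduced here); only the formula is the point.

```python
def heat_transfer_enhancement(nu_modified, nu_reference):
    """Relative Nusselt-number improvement of a modified impeller over a reference design, in percent."""
    return 100.0 * (nu_modified - nu_reference) / nu_reference

# Hypothetical Nusselt numbers at Re = 100 (not the paper's Table 3 values).
nu_cai = 4.30   # classical anchor impeller
nu_ai3 = 4.75   # anchor with three added arm blades
print(f"Enhancement of AI3 over CAI: {heat_transfer_enhancement(nu_ai3, nu_cai):.2f}%")
```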
Conclusion A numerical investigation of the thermo-hydrodynamic behavior of a viscoplastic fluid (Casson–Papanastasiou model) stirred inside a cylindrical vessel was carried out, introducing new anchor impeller designs intended to improve the overall performance of stirred tanks. Four cases were considered: CAI (classical anchor impeller), AI1 (anchor impeller with one added arm blade), AI2 (anchor impeller with two added arm blades), and AI3 (anchor impeller with three added arm blades). The study analyzed the effect of inertia and plasticity on the thermo-hydrodynamic structure (flow pattern, power consumption, and heat transfer) by varying the Reynolds number Re from 1 to 100 and the Bingham number Bn from 1 to 50, in addition to the influence of the geometry design on the overall stirred-system parameters. The findings can be summarized as follows: • The flow pattern inside the stirred system changes with the inertia parameter: increasing the inertia value is accompanied by an increase in the velocity inside the stirred tank. • The flow is predominantly tangential at low inertia, especially for Re = 1; however, the rise in the Reynolds number (Re = 50 and 100) leads to a change in the flow pattern from the tangential toward the axial direction. • The inertia variation influences the heat transfer: low temperatures appear at low inertia values, and with rising inertia (Re = 100) the dimensionless temperature increases from 0.16 to 0.48, meaning that inertia roughly triples the heat transfer inside the stirred tank. • While high plasticity causes a high energy cost in the stirred vessel, increasing the inertia value decreases the energy used within the vessel. Nevertheless, the power number values remained remarkably close for the different geometrical shapes of the anchor impeller. • All geometry configurations show essentially the same power consumption during the mixing operation; however, the configuration with three added blades (AI3) significantly improves the flow pattern and enhances the heat transfer, with an increase of 10.46% over the other cases. Data availability All data generated or analyzed during this study are included in this published article.
2022-10-20T13:51:46.117Z
2022-10-20T00:00:00.000
{ "year": 2022, "sha1": "85c2fc2136ebfea035c1bfa3a5b8d99dba0f1321", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "85c2fc2136ebfea035c1bfa3a5b8d99dba0f1321", "s2fieldsofstudy": [ "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
248488475
pes2o/s2orc
v3-fos-license
Non-Traditional Pathways for Platelet Pathophysiology in Diabetes: Implications for Future Therapeutic Targets Cardiovascular complications remain the leading cause of morbidity and mortality in individuals with diabetes, driven by interlinked metabolic, inflammatory, and thrombotic changes. Hyperglycaemia, insulin resistance/deficiency, dyslipidaemia, and associated oxidative stress have been linked to abnormal platelet function leading to hyperactivity, and thus increasing vascular thrombotic risk. However, emerging evidence suggests platelets also contribute to low-grade inflammation and additionally possess the ability to interact with circulating immune cells, further driving vascular thrombo-inflammatory pathways. This narrative review highlights the role of platelets in inflammatory and immune processes beyond typical thrombotic effects and the impact these mechanisms have on cardiovascular disease in diabetes. We discuss pathways for platelet-induced inflammation and how platelet reprogramming in diabetes contributes to the high cardiovascular risk that characterises this population. Fully understanding the mechanistic pathways for platelet-induced vascular pathology will allow for the development of more effective management strategies that deal with the causes rather than the consequences of platelet function abnormalities in diabetes. Introduction Cardiovascular complications represent the leading cause of morbidity and mortality in patients with diabetes (DM), increasing the economic burden on healthcare systems [1][2][3]. There is an elevated risk of a first vascular event in individuals with diabetes, and outcomes following vascular ischaemia are inferior compared to those with normal glucose regulation [4]. The increased cardiovascular morbidity in subjects with DM is associated with profound metabolic and functional changes in the cells of the vasculature. In the context of the current evidence, the premature and more extensive vascular disease is coupled with a prothrombotic environment in which platelet hyperactivity is thought to play a key role in the suboptimal clinical outcomes. Oxidative stress, dyslipidaemia, and a combination of insulin resistance/hyperglycaemia, typical of patients with DM, have been proposed to contribute to abnormal platelet function [5,6]. The pervading view of the role of platelets in the development of cardiovascular complications in this cohort is focused on their contribution to arterial thrombosis at sites of plaque rupture. Antiplatelet agents, including aspirin, ticagrelor, and clopidogrel, are routinely used to suppress platelet function and reduce the risk of atherothrombosis [7][8][9]. However, there is emerging evidence to indicate that platelets may also contribute to the pervasive low-grade inflammation that promotes increased cardiovascular risk in DM [10][11][12][13]. Platelets possess a full repertoire of inflammatory functions and a diverse array of mechanisms for the transcellular transfer of inflammatory factors, allowing them to coordinate the interactions of endothelial cells with circulating immune cells [14]. The release of platelet α-granules results in the surface expression of P-selectin and the release of preformed chemokines such as CCL3, CCL5, CCL5, platelet factor 4(PF4), PAF, and CXCL10, amongst others [14]. P-selectin facilitates heterotypic interactions with both endothelial cells and leukocytes through P-selectin glycoprotein ligand-1. 
These bioactive mediators trigger the expression of proinflammatory gene products in both endothelium and leukocytes [14]. Through the release of vasoactive factors and the formation of heterotypic cell complexes, platelets act as a focal point for vascular inflammation, enabling the recruitment of leukocytes to the endothelium and their transmigration to the subendothelial space [15]. However, the precise mechanisms of platelet-driven inflammation in individuals with DM are unclear. The role that platelets play in these various pathways has led to the emerging evidence around their involvement in thrombo-inflammation. This concept is distinct from inflammation alone in that it refers to pathological states, when, following vascular injury there is a coordinated response from both thrombotic and inflammatory pathways to ensure the pathological process remains limited to the site of injury, allowing for effective and complication-free healing [16]. This narrative review highlights the role of platelets in the inflammatory and immune responses that contribute to cardiovascular disease in diabetes. In particular, this work discusses the potential effects of DM on platelet-driven inflammation, the principles of platelet reprogramming in diabetes, and the potential therapeutic targets that these pathways may provide. Pathophysiology of Vascular Disease in Diabetes The fundamentals of the pathophysiology behind inflammation-driven vascular damage in diabetes are key to identifying potential pathways for therapeutic targets to prevent and treat vascular complications in diabetes. Endothelial dysfunction is a key abnormality in diabetes and contributes to both a proinflammatory and prothrombotic environment that promotes vascular occlusive disease [5,15,17]. A close association between endothelial dysfunction and platelet activity has been repeatedly demonstrated [17,18], and recent evidence suggests this relationship is bidirectional. Endothelial Dysfunction and Atheroma Formation The endothelium is a principal regulator of a number of thrombotic and non-thrombotic pathways [19,20]. Collectively, the endothelia act as a bioactive organ that controls the function of blood cells, the integrity of the vascular wall, and vascular reactivity. Critical to these functions are the vasoactive mediators, nitric oxide (NO) and prostacyclin [21][22][23][24]. The tonic release of these mediators prevents vascular inflammation by ensuring platelet quiescence and preventing platelet-mediated immune cell infiltration of the subendothelial space, factors that are critical to preventing vascular inflammation. A key characteristic of endothelial dysfunction is the lack of bioavailable NO and PGI 2 , leading to the loss of their athero-protective effects. When inflamed, endothelial cells increase the cell surface expression of cell adhesion molecules and release chemotactic messengers that promote the recruitment and reaction of monocytes into the subendothelial space and their subsequent transformation into macrophages [18,25,26]. Endothelial dysfunction occurs as a result of several metabolic features typical of diabetes, including hyperglycaemia, insulin resistance, and the resulting increased oxidative stress [5,23]. There is also an increase in permeability, which potentially allows for an increased accumulation of low-density lipoproteins (LDLs) in the vessel wall, where they are retained and prone to oxidative attack. 
The subsequent unregulated uptake by macrophages of oxidised-LDL results in the formation of foam cells. These cells secrete cytokines, including interleukin-6 (IL-6) and tumour necrosis factor (TNF)-α [25,26], further enhancing the proinflammatory environment [27]. As this process continues, atherosclerotic plaques continue to grow and eventually rupture, causing the activation of platelets; this drives clot formation. Intravascular Thrombus Formation Upon the rupture of the atherosclerotic plaque, a cascade of events ensues that results in the activation of both the cellular and acellular arms of coagulation, promoting thrombus formation. NO once again plays a vital role in the regulation of platelet adhesion and aggregation, normally preventing thrombus formation by inhibiting platelet adhesion and aggregation, while also promoting the disaggregation of pre-formed platelet aggregates [28]. Thus, when NO bioavailability falls in diabetes, the consequence is an increased potential for platelet activation and thrombus formation, also contributing to an inflammatory state [25]. The activation of platelets facilitates the localised activation of the coagulation cascade and the generation of a fibrin network that stabilises the thrombus. DM is characterised by dense fibrin networks and hypofibrinolysis [29][30][31], which contribute to vascular complications and adverse clinical outcomes in this population [32,33] (Figure 1).
Diabetes-Related Mechanistic Pathways Modulating Thrombo-Inflammatory Function of Platelet In patients with DM, particularly T2DM, a number of changes in the receptor and signal transduction function have been described that contribute to platelet dysfunction. We discuss below the main pathways that are likely to operate in diabetes and which are responsible for modulating platelet function, with a focus on thrombo-inflammatory pathways. Insulin and the Insulin Receptor The majority of patients with diabetes have T2DM, typically characterised by insulin resistance and consequent hyperinsulinaemia [6,17,34]. These features may have often been present for decades prior to a formal diagnosis of T2DM [35]. Platelets express the insulin receptor on their surface, although the exact function of the receptor is yet to be fully determined [5,23]. In healthy non-overweight people, insulin binding to its receptor results in the inhibition of platelet activation, secondary to the intracellular translocation of magnesium [35]. This pathway is mediated by the activation of insulin receptor substrate (IRS-1) via tyrosine phosphorylation, which in turn increases cytosolic cyclic adenosine monophosphate (cAMP), a key platelet inhibitor. The increased cytosolic cAMP concentration is proposed to reduce activation signalling by the ADP receptor P2Y 12 , thereby suppressing platelet activity. Impaired insulin signalling as a result of insulin resistance (IR), seen in individuals with T2DM, or absolute insulin deficiency occurring in T1DM, leads to disinhibited platelet activation [23,36]. While studies on platelet reactivity in T1D are both limited and conflicting [37][38][39][40], the lower plasma level of magnesium in these individuals may contribute to altered platelet function [41]. Alterations in insulin receptor signalling in insulin resistance can also reduce cAMP levels, which results in increased cytosolic calcium concentration, resulting in platelet hyperreactivity [42]. Nitric Oxide and Reactive Oxygen Species Hyperglycaemia and insulin resistance, as well as dyslipidaemia and obesity, commonly seen in patients with DM, also drive cardiovascular disease through vascular inflammation. These factors result in an imbalance between the production of endothelial NO synthase (eNOS), derived NO, and the elevated production of reactive oxygen species (ROS), leading to the disruption of this vital homeostatic environment [22,43]. The increased accumulation of ROS results in the inactivation of NO to form peroxynitrite, following the generation of superoxide anion. This key event, driven by both insulin resistance and hyperglycaemia, leads to a reduction in NO bioavailability, which is further exacerbated by peroxynitrite driving the uncoupling of eNOS, with a preferential production of ROS [15]. Peroxynitrite has been shown to result in the damage and death of both endothelial and vascular smooth muscle cells, and thus, has been linked to the development of cardiovascular complications in diabetes [44][45][46]. Another mechanism contributing to reduced endothelium-derived NO in diabetes is the decreased activity of eNOS [19,23], as a result of both excess ROS production and increased protein kinase C (PKC) activity. Given the vasculo-protective actions of NO, a reduction in its bioavailability is associated with adverse cardiovascular outcomes [23,47].
Reduced NO levels coupled with elevated ROS levels promote the production of transcription nuclear factor kappa B (NF-kB), a transcription factor involved in several cellular pathways in endothelial cells, resulting in the increased production of chemokines and cytokines that are potentially associated with inflammation [19,23]. The increased expression of NF-kB has been shown to enhance the expression of leukocyte adhesion molecules in endothelial cells while also stimulating the production of chemokines and cytokines, further contributing to an inflammatory state and atherosclerotic changes [48]. The decreased bioavailability of NO in DM could also potentially lead to a loss of platelet activation pathways. In diabetic mice, the inhibition of NO synthase led to increased fibrinogen-platelet binding and the expression of activation markers CD40-L and P-selectin [49]. Improving endothelial NO availability resolved these observed pathological changes. Indeed, some studies, but not all, have demonstrated reduced NO in patients with DM [50,51]. This further supports the impact of both NO bioavailability on platelet hyperreactivity as well as the impact of diabetes on NO production. In addition to reduced levels of NO, the accumulation of ROS leads to the activation of other additional pathways that contribute to inflammation [15,52], particularly the generation of advanced glycation end products (AGEs) [52,53]. The production of AGEs affects protein function and also activates the receptor for AGEs (RAGEs). AGEs further drive ROS production, and RAGE activation leads to increased superoxide anion production, both of which additionally contribute to diminished NO. PKC activation has been linked to hyperglycaemia and leads to changes that contribute to vascular disease, including inflammation and platelet hyperreactivity, as well as alterations in angiogenesis, cell growth, and apoptosis [54]. Elevated PKC activity has been demonstrated in the platelets of healthy controls left in hyperglycaemic conditions, although this has been variable in patients with T2D [55]. PKC activation drives ROS generation via NADPH oxidase-mediated superoxide production [56]. It also decreases eNOS activity, with the resultant diminished NO production described above. Along with reduced vasodilation through these mechanisms, PKC also drives the elevated production of the vasoconstrictor, endothelin-1, which promotes vasoconstriction and platelet aggregation [54]. Platelet Activation and P-Selectin It has been well-established that individuals with both T1D and T2D display enhanced platelet activation compared to platelets taken from healthy individuals. Much early evidence has come from studies focussing on thromboxane (TXA) biosynthesis [57,58]. Davi et al. crucially demonstrated that DM, amongst other risk factors for CVD, such as hypertension, causes a persistent state of platelet activation, measured through thromboxane biosynthesis. This may, in turn, also suggest a persistent secretion of inflammatory mediators [59]. Further to this, other studies have also demonstrated enhanced TXA synthesis in the context of post-prandial hyperglycaemia alone [60]. More recently, many studies have used P-selectin as a marker of platelet activation. The activation of platelets upregulates P-selectin expression on cell membranes. 
The binding of P-selectin to P-selectin glycoprotein ligand-1 on leukocytes is the primary pathway in the formation of heterotypic platelet-leukocyte aggregates (specifically, the monocyte and neutrophil subtypes). Platelet-monocyte aggregates have been the most widely studied, largely due to the fact they are the most stable platelet-leukocyte aggregates. These aggregates have been shown to further enhance platelet adhesion, thereby contributing to the prothrombotic environment through excess platelet aggregation and interaction with the endothelium [14]. P-selectin-mediated platelet-leukocyte interaction also activates inflammatory processes, upregulating the gene expression of proinflammatory cytokines and integrins that contribute to vascular damage [61]. Individuals with T1D have been shown to have higher circulating levels of both P-selectin and platelet-monocyte aggregates compared to healthy controls, without an increase in platelet-neutrophil aggregates [13]. Medium-term hyperglycaemia, measured through glycated haemoglobin (HbA1c), correlates with P-selectin expression and platelet-monocyte aggregate formation, directly implicating raised glucose levels in plateletmediated inflammation. A further study demonstrated that experimental hyperinsulinaemia and hyperglycaemia in healthy patients are associated with increased plateletmonocyte, but not platelet-neutrophil aggregates, suggesting that both insulin resistance and hyperglycaemia affect the proinflammatory properties of platelets [62]. To further emphasise the importance of hyperglycaemia, platelet reactivity has been shown to decrease (measured by reduced P-selectin expression) as a result of improvements in glycaemic control [63]. Studies have also shown elevated P-selectin levels in patients with T2DM, with Eibl et al. demonstrating a significant reduction of P-selectin levels following improvement in glycaemic control (assessed as HbA1c) after 3 months [64,65]. CD40-Ligand CD40L, a tumour-necrosis factor ligand, is stored in platelets and is rapidly expressed on the platelet surface before cleavage [66]. CD40-L interacts with cells displaying the CD40 receptor, which includes a number of important inflammatory cells, such as monocytes and macrophages. The binding of CD40 to its ligand is potentially very important since it induces a signalling response that drives the synthesis and release of a number of key chemokines and cytokines from inflammatory cells, including IL-6 and IL-8 [67]. It was observed that both platelet CD40L expression and platelet-monocyte aggregates are elevated in patients with T1D compared with healthy controls [68]. Consistent with this observation, elevated circulating CD40L in patients with DM (both T1D and T2D) compared to healthy age-matched healthy controls was also observed [69]. There is further evidence to suggest that this is another potential pathway by which inflammation is increased in DM, with healthy participants demonstrating an increased number of CD40L on platelets following the induction of a hyperglycaemic and hyperinsulinaemic environment [62]. Enhanced platelet activation in obese individuals with normal blood glucose levels emphasises the importance of insulin resistance in modulating platelet function. 
The evidence of increased platelet activity has been shown in obese individuals with elevated levels of plasma CD40-ligand (CD40L), higher urinary thromboxane metabolite, as well as higher levels of platelet-derived microparticles, and these elevated markers have been shown to improve with weight loss and better glycaemic control [70][71][72][73]. Toll-like Receptors and Immune Response The relatively recent identification of the expression of Toll-like receptors (TLR) in human and mouse platelets supports the theory that platelets possess immune-related capabilities beyond haemostasis. These receptors, which recognise a plethora of endogenous damage-associated molecular patterns (DAMPs) and exogenous pathogen-associated molecular patterns (PAMPs), allow platelets to play a prominent role in the immune surveillance of the vasculature. Their enhanced expression on platelets has now been repeatedly demonstrated at both the mRNA and protein level in a number of disease states, including infection (bacterial and viral), as well as in CVD [63,74,75]. TLR expression drives the activation of platelets and induces aggregation in addition to the release of inflammatory cytokines and the activation of the NF-kB pathway [76,77]. Particularly relevant to CVD, studies have demonstrated elevated platelet TLR-2 mRNA expression and protein production in patients with acute coronary syndrome [74,75], linking TLRs not only to chronic but also acute vascular pathology. The mechanism by which TLRs potentially contribute to platelet inflammatory function is beginning to emerge and may be related to an increased synthetic capacity. In immune cells, TLR activation is linked to the activation of inflammasomes, particularly the NOD-like receptor protein 3 (NLRP3) inflammasome, which generates interleukin 1β (IL-1β) [78]. Metabolic DAMPs, such as AGEs, palmitate, and glucose, often elevated in T2DM, typically drive NLRP3 activation, and thus, IL-1β synthesis [79]. The activation of the NLRP3 inflammasome has been shown in monocytes from patients with T2DM, leading to increased IL-1β [80]. Interestingly, this was modulated by treatment with metformin. A number of studies have demonstrated that metabolic dysregulation, such as obesity, leads to the activation of the NLRP3 inflammasome in various cells, including PBMCs and endothelial cells. It has been postulated that the metabolic environment of T2D, characterised by hyperglycaemia and hyperinsulinaemia, is a key activator of the NLRP3 inflammasome, particularly given its upregulation in this population [81]. One study demonstrated that NLRP3 activation was increased in monocyte-derived macrophages from patients with diabetes [81] as well as in the endothelial cells of diabetic mice [82]. Further to this, the NLRP3 knockdown in a mouse model for diabetic atherosclerosis was shown to have reduced endothelial inflammation and lower atherosclerotic lesion burden [82]. Additionally, NLRP3 inflammasome activation is enhanced in patients with newly diagnosed diabetes compared to healthy matched controls. The same study also showed that improvement in the glycaemic control in this patient cohort led to significant reductions in NLRP3 inflammasome activity [82]. 
Elevated levels of circulating free fatty acids, often seen in diabetes, can bind to TLRs, inducing an increased expression of key inflammatory molecules, including IL-6 and TNFα, as a result of the activation of the described NF-kB pathway [15,54,83], both of which are known to result in abnormal platelet function [84]. The various pathways modulating the thrombo-inflammatory function of platelets are summarised in Figure 2. Metabolic Reprogramming and Platelet Bioenergetics The links between metabolism and inflammation have been shown, predominantly in immune cells. Immunometabolism is a term relating to the interplay between metabolic regulation and immune function [85]. Evidence has shown that in immune cells, a switch can occur in metabolic pathways from oxidative phosphorylation to aerobic glycolysis, and this may drive a persistent inflammatory state [86]. The abundance of nutrients, with hyperglycaemia and elevated circulating free fatty acids seen in DM, have been proposed as potential drivers of this 'immunometabolic reprogramming', resulting in sustained low-grade inflammation [86]. Given the growing evidence implicating platelets in immune responses, it can be hypothesised that similar changes occur in these two cell types in response to pathological changes [87,88].
Platelet activation, in response to both thrombotic and inflammatory processes, is energetically expensive, and thus, requires a significantly enhanced generation of ATP via glycolysis and oxidative phosphorylation. Specific disease states have been shown to increase platelet glycolysis and oxidative phosphorylation, evidenced by an elevated extracellular acidification rate (ECAR) and increased oxygen consumption rate (OCR), respectively [89,90]. Glucose is a key and potent energy source driving these processes, and therefore, hyperglycaemia in DM may drive these processes, whilst improved glycaemic control can reverse these changes, at least partly [91]. Although little evidence exists to demonstrate changes in the bioenergetics of platelets in patients with DM, a study investigated these changes in the platelets of patients with sickle cell disease [92]. The results suggested that there is variation in the bioenergetic programming amongst individuals and that there is metabolic adaptability within platelets to meet energy demands that are particularly affected in disease states. Of particular note was the observation of a dysfunctional relationship between this metabolic ability to meet energy demands in those with sickle cell disease compared to healthy controls, demonstrated by a loss of the relationship between basal OCR and ATP-linked OCR and suggesting a reduction in the maximal respiration capacity despite demand [92]. It is possible that other disease states, including DM, may see a similar pattern. Altered Platelet mRNA and Protein Expression In addition to platelet metabolism, platelet transcriptomics and proteomics have been growing areas of interest [93,94] and may prove to have a role in tailored therapies in individuals at risk of CVD. It has been well-established that platelets, whilst anucleate, still have mRNA, which, once spliced into mature RNA, can be translated into proteins. Given the complex conditions within the inflammatory and metabolic milieu of the blood of patients with DM, it is possible that platelets can respond by altering their proteome. Alterations in mRNA expression and subsequent protein transcription have been linked to a number of disease states and may help to establish whether and how the disease environment specific to DM can result in 'immunometabolic reprogramming' [94]. Such studies have been undertaken in patients with sickle cell disease and systemic lupus erythematosus (SLE), demonstrating differences in protein expression compared to healthy volunteers, directly affecting platelet function [95,96]. Similarly, platelets from patients with obesity and HIV have been shown to have an altered platelet transcriptome and proteome [97][98][99]. In the case of HIV, the enhanced platelet expression of ABCC4 is directly associated with platelet hyperactivity [97]. Early studies in those with ACS were shown to have elevated platelet TRP14 and CD69, which was also associated with hyperactivity [100]. The reverse engineering of these studies demonstrated that TRP14 is a ligand for platelet CD36 and drives thrombosis in hyperlipidaemic mice. It is yet unclear if similar changes are associated with platelets from people with DM. However, platelet mRNA may represent a useful tool for both the prognostication and/or diagnosis of vascular risk as a result of functional platelet changes in certain patient groups. Platelet-Specific miRNA Several studies have also investigated miRNAs and their role in endothelial dysfunction in diabetes. 
Platelet miR-223 has been implicated in the ADP-receptor P2Y 12 pathway [101], where reduced levels in patients with T2D compared to healthy controls are associated with increased activity of the receptor and enhanced platelet reactivity [102]. miR-26b and miR-140 are believed to target P-selectin mRNA, driving excess P-selectin levels and, thus, heightening platelet activity [103]. Platelet miR-223 has been shown to be reduced in patients with DM as well as in mouse models of DM. miR-223 knockout mice were shown to have increased platelet aggregation and thrombus formation compared to wild-type mice [104,105]. However, Parker et al. investigated patients with T2DM receiving antiplatelet therapy (aspirin, clopidogrel, prasugrel) and found reduced levels of miR-223, miR-197, miR-24, and miR-191 in those receiving prasugrel compared to aspirin, a treatment that was associated with more profound platelet suppression. Furthermore, in those patients on aspirin or prasugrel with a history of CVD, there were lower levels of miR-197 compared to individuals without a CVD history, which may be of use as a potential biomarker in this cohort [106]. Another study examined miRNA in patients with DM with and without ischaemic stroke. In those who had an ischaemic stroke and DM or DM alone, there were lower circulating levels of both platelet miR-223 and miR-146a, which was associated with increased platelet activation compared to those patients with only an ischaemic stroke or healthy controls. The conditions of hyperglycaemia have also been shown to downregulate all three miRNAs, miR-223, miR-26b, and miR-140. The reduced levels of these miRNAs lead to the upregulated expression of the various prothrombotic receptors in platelets, including P2Y 12 and P-selectin [102,103], and have been linked to elevated platelet activation measured through surface P-selectin expression. In addition to representing potential biomarkers, the affected pathways driving platelet reactivity may be useful in developing therapeutic targets to reduce platelet-driving thrombo-inflammation [102,103]. Therefore, miRNA may be used as a marker of vascular disease or, alternatively, to monitor the response to specific therapies. This may, in turn, lead to therapies that enhance or suppress specific miRNA as a new management strategy to reduce vascular risk. Mitochondrial Dysfunction As previously described, oxidative stress is a key aspect of the cellular environment of patients with DM. This coupled with the driving force of hyperglycaemia and disordered insulin production/function, altering platelet reactivity and the inflammatory profile, also contributes to mitochondrial dysfunction [107][108][109]. Increased oxidative stress has been demonstrated in T2DM [110], affecting platelet mitochondria, which in turn, increases ROS production, creating a vicious cycle [111]. Lee et al. demonstrated that elevated oxidative stress increased the protein phosphorylation of p53 in pooled platelets from patients with DM. The increased phosphorylated p53 and translocation to mitochondria is a driver of mitochondrial dysfunction in the platelets of patients with DM as well as elevated platelet apoptosis [108,112]. This increase in the phosphorylation of p53 in DM platelets has also been shown to be mediated by aldose reductase in both human and mouse models and also contributes to platelet activation in DM [112,113]. 
Further to this, the blocking of aldose reductase has been shown to reduce thromboxane release in response to collagen and, thus, reduces platelet activation, demonstrating its potential key role in driving not only mitochondrial dysfunction in platelets but also the levels of activation [114]. Given the importance of oxidative stress in the pathways responsible for vascular pathology, several studies have investigated the role of antioxidants with variable and inconclusive results. Limited data suggest an association between increasing dietary antioxidant nutrients and protection against cardiovascular disease [115]. Specifically, in patients with diabetes, low carotenoid intake has been linked to reduced insulin resistance [116]. In contrast, the HOPE trial failed to show any benefit of Vitamin E on cardiovascular outcomes or mortality in high-risk individuals with diabetes [117]. The exact reasons for the lack of positive outcomes with the use of antioxidants in these trials are not fully clear. It may be related to studying highly heterogeneous populations, with antioxidants having variable and inconsistent effects. It is also possible that different doses of antioxidants are required according to various factors, including DM duration, glycaemic control, and therapies, as well as the presence of vascular complications, which have never been explored. Further to this mitochondrial dysfunction, the maladaptive changes in the metabolism seen in DM as well as other disease states, such as obesity, with readily available fatty acids [79], have recently been linked to the activation of the aforementioned NLRP3 inflammasome and may link nutrient excess to inflammation and inflammatory pathways. Therefore, this previously described immunometabolic reprogramming may be a potential explanation for the upregulation of the NLRP3 inflammasome seen in DM [81]. Recent data also support the fact that elevated ROS, as a result of mitochondria, drive NLRP3 inflammasome activation. Lee et al. demonstrated that monocyte-derived macrophages in patients with T2DM have much higher mRNA and protein expression of NLRP3 and IL-1β compared to healthy controls. Following 2 months of metformin treatment with associated HbA1c and fasting glucose improvements, the levels of IL-1β maturation and production following stimulation fell [81]. Similarly, platelets from subjects with IR and obesity were found to have an upregulated expression of mRNA for IL-1β and NLRP3 inflammasome [118]. The relative importance of the role of platelet function in the vascular risk in patients with DM is all the more heightened by the successful use of antiplatelet treatment, particularly in secondary prevention. Thus, dysfunction in platelet activity not only drives the vascular risk itself but may have implications for the efficacy of these treatment options, as seen by the apparent aspirin resistance in this patient cohort [119,120]. Having a fundamental understanding of the translational changes affecting platelet function may also help to mitigate these potentially negative clinical outcomes. Conclusions While modern management strategies have reduced cardiovascular complications in patients with diabetes, long-term outcomes remain inferior compared to individuals with normal glucose metabolism. 
Platelets play a key role in contributing to pathological vascular occlusion in diabetes, and it is now clear that platelet function stems far beyond the traditional role in haemostasis, with important effects not only on thrombosis but also on both the immune and inflammatory processes. While some studies have shown reduced platelet activation by improving glycaemic control, this appears to be partial, with the added complication that aggressive glycaemic control induces hypoglycaemia, which is itself both prothrombotic and proinflammatory. A number of methods have been used to test the thrombotic properties of platelets, reviewed elsewhere [8], but tests to measure the inflammatory characteristics of these cells remain an area for future work. This review highlights a number of platelet-specific pathways that operate in diabetes and drive the thrombo-inflammatory milieu. In particular, platelet reprogramming in diabetes transforms these cells to display not only prothrombotic but also proinflammatory characteristics. This in turn contributes to the ongoing vascular pathology and results in premature and more severe vascular disease in this population. Rather than dealing with the consequences of platelet reprogramming in diabetes, which can be associated with unwanted side effects, a more efficient strategy is to understand the pathways leading to these changes. This in turn will allow for effective risk stratification and the development of targeted therapies. For example, the identification of potentially important platelet miRNA/mRNA may help in risk stratification and the intensification of treatment, accordingly. Targeting mitochondrial dysfunction offers another novel management strategy that has the potential to normalise platelet function and limit vascular pathology. Developing therapies that target individual-specific pathological processes will help to safely and effectively reduce the thrombo-inflammatory milieu in diabetes and improve outcomes in this high-risk population.
2022-05-02T15:05:21.242Z
2022-04-29T00:00:00.000
{ "year": 2022, "sha1": "cee04b2353f8337921964475799a1ca15f69152f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/9/4973/pdf?version=1651816827", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32011a14d6b19a53394fd1bf05d78832400c0c74", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
16724761
pes2o/s2orc
v3-fos-license
Hybrid multi-layer Deep CNN/Aggregator feature for image classification Deep Convolutional Neural Networks (DCNN) have established a remarkable performance benchmark in the field of image classification, displacing classical approaches based on hand-tailored aggregations of local descriptors. Yet DCNNs impose high computational burdens both at training and at testing time, and training them requires collecting and annotating large amounts of training data. Supervised adaptation methods have been proposed in the literature that partially re-learn a transferred DCNN structure from a new target dataset. Yet these require expensive bounding-box annotations and are still computationally expensive to learn. In this paper, we address these shortcomings of DCNN adaptation schemes by proposing a hybrid approach that combines conventional, unsupervised aggregators such as Bag-of-Words (BoW), with the DCNN pipeline by treating the output of intermediate layers as densely extracted local descriptors. We test a variant of our approach that uses only intermediate DCNN layers on the standard PASCAL VOC 2007 dataset and show performance significantly higher than the standard BoW model and comparable to Fisher vector aggregation but with a feature that is 150 times smaller. A second variant of our approach that includes the fully connected DCNN layers significantly outperforms Fisher vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC 2007, yet at only a small fraction of the training and testing cost. INTRODUCTION In this paper we propose a new hybrid image feature for image classification obtained from a mix of the classical image feature extraction pipeline and the more recent and very successful Deep Convolutional Neural Network (DCNN) pipeline. The classical image feature extraction pipeline consists of three major steps: 1) extracting local descriptors such as SIFT [1] from the image; 2) mapping these descriptors to a higher dimensional space; 3) sum- or max-pooling the resulting vectors to form a fixed-dimensional image feature representation. Examples of methods corresponding to this classical approach include Bag-of-Words (BoW) [2], Fisher Vector (FV) [3], Locality-constrained Linear Encoding [4], Kernel codebooks [5], super-vector encoding [6] and VLAD [7]. We refer to these types of image feature extraction schemes as aggregators, given that they aggregate local descriptors into a fixed-dimensional representation. Generally these approaches require computationally inexpensive unsupervised models of the local descriptor distribution, and the resulting image features can be used to learn likewise inexpensive linear classifiers using SVMs. (This work was partially supported by the FP7 European integrated project AXES.) The novel DCNN pipeline of [8] has drastically pushed the performance limits of image classification. DCNNs consist of multiple interconnected layers including spatial convolution layers, half-wave rectification layers, spatial pooling layers, normalization layers, and fully connected layers. While this method attains outstanding classification performance, it also suffers from large testing complexity, particularly due to the first fully connected layer, as well as large training complexity, since all the coefficients in the pipeline are learned in a supervised manner and require large amounts of training images.
To address this latter issue, [9] proposed to use DCNN models pre-trained on the Imagenet dataset (consisting of many millions of images) and then transfer all but the last layer of this pre-trained DCNN to a new target dataset, where two new adaptation layers are learned. This reduces training time and the amount of required training data, but the training data needs to be annotated with bounding box information. The fact that the method works on a per-patch basis further increases the testing complexity relative to standard DCNNs. Several approaches exist that, like ours, attempt to bridge the classical approach and the DCNN approach using hybrid mixes. Inspired by the popularity of DCNNs, Simonyan et al. [10] proposed to incorporate the deep aspect of DCNNs into traditional SIFT/FV schemes by stacking multiple layers of FV aggregators, with each layer operating on successively coarser overlapping spatial cells. Sydorov et al. [11] instead proposed viewing the standard FV aggregator as a deep architecture, substituting the unsupervised GMM parameters of the FV aggregator by supervised versions. While these methods adopted only the deep aspect of DCNNs, our goal is to combine the advantages of both approaches (DCNNs and classical aggregators) using hybrid mixes of both pipelines. We do this by treating the output of the pre-trained intermediate layers of the DCNN architecture as local image descriptors, which we aggregate using standard aggregators such as BoW or FV. There is no need to carry out costly tuning of the DCNN adaptation layers [9] to the target dataset, as both BoW and FV rely on unsupervised learning. The closest related method in the literature is that of Gong et al. [12], who propose using the output of the previous-to-last fully connected layer as a local descriptor, computing this descriptor on multi-scale dense patches subsequently aggregated using VLAD on a per-scale basis. This approach is very complex because, contrary to our approach, one needs to compute the full DCNN pipeline not only on the original image but also on a large number of multi-scale patches and further apply two levels of PCA dimensionality reduction. The remainder of this paper is organized as follows: In Section 2, we describe the two classical aggregators (BoW and FV) that we use in our experiments, as well as the DCNN architecture. In Section 3, we describe our hybrid image feature extraction pipeline. We evaluate our proposed method in Section 4 and provide concluding remarks in Section 5. BACKGROUND In this section we present an overview of two classical local descriptor aggregation methods: the BoW aggregator [13,14,15] and the FV aggregator [16]. Up until recently, such aggregation schemes together with SVM classifiers were the reference in image classification [17]. We then present an overview of the new state-of-the-art DCNN image classification pipeline [8]. Image Classification using Local Descriptor Aggregators The classical image classification procedure consists of first mapping images to a fixed-dimensional image feature space where linear classifiers are computed using SVMs. The image feature construction process operates by aggregating the local descriptors extracted from the image in question, f = f({x_k}_k), where the x_k are the local descriptors of the image. The Bag-of-Words (BoW) aggregator offers one such way to map local descriptors to image features. A training set of local descriptors T from a representative set of images is first used to build a codebook C = [c_j]_j using K-means.
Letting C_j denote the Voronoi cell for codeword c_j, the BoW aggregated image feature is the relative frequency of occurrence of local descriptors in the Voronoi cells:

[f]_j = #{x_k ∈ C_j} / #{x_k}_k,   (1)

where we let # denote set cardinality. The BoW encoder offers an intuitive image feature and enjoys a low computational cost that can be important in user-in-the-loop applications such as [18]. A more recent image feature, the Fisher vector, offers an important gain in image classification performance [17]. The Fisher encoder requires that a training set of local descriptors T be used to learn a GMM model G = {β_k, Σ_k, c_k}_k, with the k-th mixture component having prior weight β_k, covariance matrix (assumed diagonal) Σ_k and mean vector c_k. The first-order Fisher vector for a given image can then be computed as follows:

[f]_k = (1 / (N √β_k)) Σ_{i=1}^{N} γ_i(k) Σ_k^{-1/2} (x_i − c_k),   (2)

where γ_i(k) denotes the posterior probability of the k-th mixture component given local descriptor x_i, and N is the number of local descriptors. Both the BoW and Fisher aggregators are built from unsupervised models for the distribution of local descriptors, with supervision coming into play only at the classifier learning stage. Deep CNNs instead construct a fully supervised image-to-classification score pipeline. Deep Convolutional Neural Networks (DCNNs) Deep Convolutional Neural Networks have established an overwhelming presence in image classification starting with the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [8]. The performance gap of DCNNs relative to the second entry in that year's competition (and relative to SIFT-based Fisher aggregation schemes [19]) is in excess of 10 percentage points in absolute improvement of top-5 error rate. In Fig. 1 we illustrate the deep DCNN processing pipeline of [8]. It consists of convolutional layers, max-pooling layers, normalization layers and fully connected layers.

Fig. 1. The K_l kernels at layer l have dimension n_l × n_l × K_{l−1}. The layer index l (respectively, kernel spatial dimension n_l) is indicated below (above) the box for each layer. The input image is assumed normalized to size 224 × 224 × 3, and 4× downsampling is applied during the first layer. Dark-lined boxes: convolutional layers; dash-lined boxes: normalization layers; light-lined boxes: max-pooling layers; grayed-in boxes: fully-connected layers.

At any given layer l, the layer's output data is an array x^l = {x^l_ij}_ij, with x^l_ij ∈ R^{K_l}, (3) that is the input to the next layer, with the input to layer l = 1 being an RGB image of size R_0 × C_0 and K_0 = 3 color channels. The convolutional layers (l = 1, 4, 7−9) first compute the spatial convolution of the input with K_l kernels of size n_l × n_l × K_{l−1} and then apply entry-wise Rectified Linear Units (ReLUs) max(0, z). The normalization layers (l = 2, 5) normalize each x ∈ {x^{l−1}_ij}_ij at the input using what can be seen as a generalization of the l2 norm, consisting of dividing each entry x_m of x by (2 + 10^{−4} Σ_{n∈I_m} x_n^2)^{0.75}. The summation indices I_m are taken to be the m-th sliding window over the indices of all entries. The max-pooling layers (l = 3, 6, 10) carry out per-kernel spatial max-pooling by taking the maximum value from each spatial bin of size 3 × 3 spaced every 2 pixels. The fully connected layers (l = 11−13) can be seen as convolutional layers with kernels having the same size as the layer's input data. The last layer (l = 13) uses a softmax non-linearity instead of the ReLU non-linearity used in other layers and acts as a multi-class classifier, having as many outputs as there are classes targeted by the system.
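Before turning to transfer learning, a minimal sketch may help make the two classical aggregators described above concrete. The snippet below is an illustrative NumPy/scikit-learn reimplementation of the BoW relative-frequency encoding in (1) and the first-order Fisher vector in (2); it is not the authors' code, and the descriptor pool T, its dimensionality and the model sizes are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def bow_feature(descriptors, kmeans):
    """Eq. (1): relative frequency of local descriptors falling in each Voronoi cell C_j."""
    assignments = kmeans.predict(descriptors)                  # nearest codeword per descriptor
    counts = np.bincount(assignments, minlength=kmeans.n_clusters)
    return counts / float(len(descriptors))

def fisher_vector_first_order(descriptors, gmm):
    """Eq. (2): first-order Fisher vector under a diagonal-covariance GMM."""
    N, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)                     # posteriors gamma_i(k), shape (N, K)
    fv = np.zeros((gmm.n_components, D))
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])  # Sigma_k^{-1/2}(x_i - c_k)
        fv[k] = (gamma[:, k, None] * diff).sum(axis=0) / (N * np.sqrt(gmm.weights_[k]))
    return fv.ravel()

# Unsupervised training of the aggregator models from a pool T of local descriptors
# (placeholder random pool; in practice T is extracted from a representative image set):
T = np.random.randn(10000, 64)
kmeans = KMeans(n_clusters=500, n_init=4).fit(T)
gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(T)
```

With supervision entering only at the SVM stage, both models above can be trained once and reused when new target classes are added.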
Transfer learning using DCNNs The architecture in Fig. 1 contains more than 60 million parameters and training it can be a daunting task requiring expensive hardware, large annotated training sets (ImageNet 2012 contains 15 million images and 22,000 classes) and training strategies including memory management schemes, data augmentation and specialized regularization methods. Moreover, extending the architecture to new classes would potentially require re-training the entire structure, as the full architecture is learned for a specific set of target classes. To address this last difficulty, Oquab et al. [9] use transfer learning to apply the architecture in Fig. 1 to new classes while incurring reduced training overhead. Their approach consists of substituting only the last fully-connected classification layer by two learned adaptation layers, a fully-connected ReLU layer with 4096 neurons followed by a fully-connected softmax classification layer with as many neurons as target classes. The first 12 layers are transferred from the net in Fig. 1 (learned from ImageNet 2012 data), and only the new adaptation layers are learned using training data for the new set of target classes (e.g., those of the Pascal VOC 2007 test bench). While their approach reduces the training overhead and required training set size, training the adaptation layers still requires non-trivial complexity, as these contain a large number of parameters (more than 16 million). To obtain an adequately large training set from Pascal VOC 2007 data, they derive a patch-based training set, labeling every patch according to its intersection with the provided object bounding boxes. Their approach thus operates on a per-patch classification basis, and the overall class score is obtained by summing these per-patch scores over the entire image for each class. This brings the important benefit of also providing the object localization, but it requires laborious bounding-box annotations on the training set and costly training of millions of parameters. A HYBRID DCNN/AGGREGATOR FEATURE Inspired by the transfer learning approach of [9], in this section we propose a new hybrid feature that combines parts of the DCNN architecture in Fig. 1 trained on ImageNet 2012 with the unsupervised BoW or Fisher local descriptor aggregation schemes in (1) and (2). The resulting feature is used with one-vs-all linear SVM classifiers and hence new classes can be added with little training overhead and without the need for costly object bounding box annotations. Per-layer aggregation of DCNN local descriptors Our hybrid scheme is based on the observation that the vectors x^l_ij in (3) comprising the output of layers l = 1, . . . , 10 in Fig. 1 (i.e., all layers except fully-connected layers) can be treated as densely extracted local descriptors. We will hence build one aggregated feature f^l for each layer l (or a subset of layers l ∈ L) and concatenate all the resulting aggregated layer features to form a single image feature

f = [f^l]_{l∈L}.   (4)

Using only a subset of layers L ⊆ {1, . . . , 10} allows us to control training, testing and storage complexity and further serves as a means of regularization. Training per-layer aggregators In order to train the per-layer aggregators adapted to the DCNN layers, we take each image from a representative set of training images and extract from it all vectors x^l_ij for l = 1, . . . , 10. We then group all the resulting local descriptors x^l_ij for each layer l to form a training set T^l for the l-th layer. Each training set T^l of local descriptors is then used to train a codebook C^l for layer l using K-means when using BoW aggregators. Likewise, a GMM model G^l is learned for the l-th layer when using Fisher aggregators.
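As a sketch of the per-layer training procedure just described, the code below pools the x^l_ij vectors from a set of training images and fits one codebook per layer (a per-layer GMM for the Fisher variant would be fit analogously). The function dcnn_layer_activations, assumed here to return a dictionary mapping each layer index to its (R_l, C_l, K_l) activation array from a pre-trained network, is a hypothetical stand-in for whatever DCNN framework is used; it is not part of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def layer_descriptors(activations):
    """Flatten an (R_l, C_l, K_l) activation map into R_l*C_l local descriptors x^l_ij in R^{K_l}."""
    R, C, K = activations.shape
    return activations.reshape(R * C, K)

def train_per_layer_codebooks(train_images, dcnn_layer_activations, layers, codebook_size=500):
    """Fit one K-means codebook C^l per selected DCNN layer from the pooled training sets T^l."""
    pools = {l: [] for l in layers}
    for img in train_images:
        acts = dcnn_layer_activations(img)   # assumed: {layer index -> (R_l, C_l, K_l) ndarray}
        for l in layers:
            pools[l].append(layer_descriptors(acts[l]))
    return {l: KMeans(n_clusters=codebook_size, n_init=4).fit(np.vstack(pools[l])) for l in layers}
```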
Extensions based on classic approaches Our proposed approach shares similarities with several existing approaches and we now discuss these and related extensions. One first observation is that the spatial support (relative to the original image) used to compute the x^l_ij is of size 11 (in each spatial dimension) for the first layer and grows by 4 · 2 · (n_a − 1) for each convolutional layer 1 < a ≤ l, yielding possible supports of size 11, 43, 59 and 75. Dense approaches likewise compute local descriptors from supports of varying size (16, 24, 32, 40) by means of multi-resolution spatial grids [17], but all descriptors for all supports are pooled together (for the benefit of scale invariance) and used to form a single aggregated image feature. A similar pooling approach could be used for DCNN local descriptors x^l_ij ∈ R^{K_l} by first mapping all layers to a common dimensionality via, e.g., PCA or discriminative dimensionality reduction. The layer feature concatenation scheme (4) that we use instead is reminiscent of spatial pyramid matching [14,15], where one feature g_c is computed for each spatial cell c = 1, . . . , 8 and these are subsequently concatenated. Our concatenated image features f^l are instead computed from high-dimensional filtered versions of the image, and indeed this approach can be combined with SPM to produce per-spatial-cell layer features f^{lc}. Other standard successful approaches can also be combined with our proposed hybrid DCNN/aggregator features, including power normalization of the x^l_ij [20], application of an explicit Hellinger kernel-map to our hybrid feature [17] and late fusion with other feature channels. Alternate aggregation schemes such as VLAD or triangulation embedding [7,21] can also be used, but we chose BoW for its low computational cost and Fisher given that it is the best-performing aggregator in classification. RESULTS In this section we validate our proposed hybrid DCNN/aggregator feature using the publicly available Pascal VOC 2007 dataset [22]. This dataset consists of 9163 images representing 20 visual categories, split into training, validation and test sets. We use the standard mean Average Precision (mAP) measure computed over the test set as a performance metric. Impact of layer subset L In Fig. 2 we evaluate the impact on performance of the layer subset L in (4) used to build hybrid features. We consider three strategies for selecting L: using a single layer, L = {L}; using the first L layers, L = {1, . . . , L}; and using the last L layers, L = {10, 9, . . . , 10 − L + 1}. As seen in Fig. 2, the results for the single-layer strategy indicate that layers further down the pipeline are more informative (although the curve is not monotonic). Indeed the best strategy overall consists of using the last 5 layers (and using only 3 layers results in a marginal performance decrease). The resulting hybrid feature performs substantially better than BoW+SPM with 4,000 codewords and performs similarly to FV+SPM with 256 mixture components [23], despite being 150 times smaller. Impact of codebook size In Fig. 3 we evaluate the impact on performance of varying the codebook size when using hybrid DCNN/BoW features built from the last 5 layers. A codebook of size 500 yields the best performance. Even with a codebook size of 30, which amounts to a feature vector size of 150, our method outperforms BoW + SPM.
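To illustrate how the per-layer codebooks and the layer subset L come together at classification time, a possible end-to-end sketch is given below. It reuses layer_descriptors and bow_feature from the earlier sketches, keeps the hypothetical dcnn_layer_activations extractor, and assumes the labels are given as binary indicator matrices (one column per VOC class); none of this is taken from the paper's actual implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score

def hybrid_feature(image, dcnn_layer_activations, codebooks, layer_subset):
    """Eq. (4): concatenate the per-layer BoW features f^l over the chosen subset of layers."""
    acts = dcnn_layer_activations(image)
    parts = [bow_feature(layer_descriptors(acts[l]), codebooks[l]) for l in layer_subset]
    return np.concatenate(parts)

def evaluate_map(train_imgs, y_train, test_imgs, y_test, extractor, codebooks, layer_subset):
    """Train one-vs-all linear SVMs on the hybrid features and report mean Average Precision."""
    X_train = np.stack([hybrid_feature(im, extractor, codebooks, layer_subset) for im in train_imgs])
    X_test = np.stack([hybrid_feature(im, extractor, codebooks, layer_subset) for im in test_imgs])
    aps = []
    for c in range(y_train.shape[1]):                          # one binary linear SVM per class
        clf = LinearSVC(C=1.0).fit(X_train, y_train[:, c])
        aps.append(average_precision_score(y_test[:, c], clf.decision_function(X_test)))
    return float(np.mean(aps))
```

Under this sketch, choosing layer_subset to be the last five non-fully-connected layers with a 500-word codebook corresponds to the best-performing BoW configuration reported above.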
Comparison to other approaches In Table 2 we compare our results with some of the best results reported in the literature. We include results for hybrid features built using FV aggregators with 64 mixture components. Despite the established superiority of FV aggregation over BoW aggregation, the FV-based hybrid features perform poorly relative to BoW-based hybrid features. We believe that this is due to the small number of local descriptors in DCNN layers, as this makes the vector-averaging process in (2) statistically noisy. The best performing system in Table 2 is PRE1000C [9]. Their approach consists of substituting layer 13 in Fig. 1 by two adaptation layers trained on Pascal VOC. As is the case for DCNN pipelines, this training procedure is time consuming and requires expensive GPU cards, as illustrated in Table 1.

Table 1. Training time and resources.
method | training time + resource
PRE1000C [9] | ≈ 1 day (GeForce GTX Titan GPU)
Hybrid DCNN/BoW, N=500 | ≈ 1 hr + 5 min (8-core CPU)

Furthermore, at testing time, their approach requires applying the full 13-layer DCNN pipeline to each of 500 patches from an image, increasing testing complexity considerably. Our approach requires a single DCNN pipeline pass over the non fully-connected layers, resulting in dramatically lower testing time, as the DCNN complexity is largely concentrated in the first fully-connected layer. The same complexity problem is incurred by the feature construction scheme of [12], where the authors propose using the output of DCNN layer 13 as a local descriptor computed on multi-scale dense image patches. Inspired by this approach, we further consider stacking the output of the fully connected layers (11, 12, and 13) onto our hybrid DCNN/aggregator feature. We illustrate the results of this approach in Fig. 4, where the non-fully connected layers are processed according to (4), and the fully-connected layers are concatenated without any processing. Note that using the 3 fully connected layers and the last non-fully connected layer results in performance close to 74 mAP points. This compares very well to the performance of 77.73 of PRE1000C in Table 2, particularly considering the drastic difference in training time and testing time.

Table 2. Comparison of our results (using last 5 layers) with the state-of-the-art (N represents the codebook size in BoW).
method | feature dimension | mAP
BoW + SPM, N=4000 [17] | 32000 | 45.39
FV (SIFT) [23] | 262144 | 58.3
FV (SIFT + color) [23] | 262144 | 60.3
PRE1000C [9] | — | 77.73
Hybrid DCNN/FV, m=64 | 81920 | 54.56
Hybrid DCNN/BoW, N=30 | 150 | 50.53
Hybrid DCNN/BoW, N=500 | 2500 | 60.32

CONCLUSION In this work, we proposed a hybrid Deep Convolutional Neural Network (DCNN) / Bag-of-Words (BoW) image feature extraction approach. Treating the output of intermediate layers of a pre-trained DCNN as local descriptors allowed us to use an unsupervised Bag-of-Words aggregator to obtain an image feature that substantially outperforms standard aggregators based on local descriptors on the Pascal VOC 2007 benchmark. Appending the output of the fully-connected layers to our hybrid feature further improves the performance of our approach, making it competitive with DCNN variants adapted to the target dataset, at a fraction of the training and testing cost.
2015-03-13T06:49:26.000Z
2015-03-13T00:00:00.000
{ "year": 2015, "sha1": "4314bce38b9df35e874ea86573bc2b5427142f4b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1503.04065", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4ba021a304c1376dce923021ebd8bbed989f018a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
236656613
pes2o/s2orc
v3-fos-license
Review of the Prognosis Factors of COVID-19 Infection An epidemic of Coronavirus disease 2019 (COVID-19) broke out in December 2019 in Wuhan, China, and became a Public Health Emergency of International Concern. As this entity has become one of the worst infectious disease outbreaks of recent times, with mortality estimates in the general population ranging from 1.4% to 8%, it is crucial to better understand the prognostic factors which can be associated with the outcome of this disease. However, as the pandemic is unfortunately still in progress, there are limited data with regard to the prognostic factors. Hence, this review seeks to gather and present the existing literature data on the prognostic factors of COVID-19 infection, such as older age, obesity, comorbidities, lymphocytopenia, D-dimer elevation, thrombocytopenia, elevated levels of high-sensitivity cardiac troponin, C-reactive protein elevation and imaging features of COVID-19. Data regarding the prognostic factors associated with clinical outcomes are scarce. It is urgent to identify these in order to predict the outcome of this entity. Thus, the present article aims to review the prognostic factors of COVID-19 associated with the worst outcomes, which might provide evidence for risk stratification, help improve clinical practice and reduce fatality. Moreover, Table 1 summarizes the conclusions of the major and most recent studies published in the literature. Prognostic Factors Based on the currently available information, the prognosis of SARS-CoV-2 infection seems to depend mostly on patient characteristics, comorbidities, severity of clinical manifestations, laboratory test results and imaging. Therefore, we chose to divide the risk factors into five different groups, describing each of them thoroughly. Advanced Age A well-described poor prognostic factor, supported by many reports, is older age. Its definition varies according to each study, with 50 years old being the lowest threshold reported [1]. Wang et al. have demonstrated that older age increased the likelihood of death due to the occurrence of a more severe pneumonia compared to younger patients [1]. In another study carried out by the Chinese Center for Disease Control and Prevention, 80% of deaths occurred among adults ≥ 60 years old, even in moderate disease. By opposition, young patients seemed to have a much better prognosis, with milder disease occurring in most cases [2] [3]. Moreover, in an analysis from the United Kingdom, the risk of death among individuals ≥ 80 years old was 20-fold higher than among individuals aged 50 to 59 years [3]. In the United States, mortality was also higher among older individuals, with 80% of deaths occurring in those aged ≥ 65 years. In contrast, individuals aged 18 to 34 years old accounted for only 5% of adults hospitalized due to COVID-19 in a large health care database study, giving rise to a mortality rate of 2.7% [4]. Regarding the severe acute respiratory syndrome (SARS), older age is likely to be the most important predictor of an adverse prognosis in COVID-19 infected patients, with approximately half of the patients being over 50 years old in a study comprising 8866 cases [5]. Pathophysiology can be explained by several mechanisms. Firstly, frailty and multiple comorbidities in the elderly increase the risk for pulmonary infection [6]. Secondly, the effects of aging on the immune system include reduced B and T cell production with diminished cellular function.
Consequently, old individuals do not respond to immune challenge as robustly as young ones. Moreover, there is evidence of an age-related shift towards a type 2 cytokine profile. Together, all these manifestations lead to a deficiency in the control of viral replication and to more prolonged proinflammatory responses, resulting in a poor outcome [7]. [Table 1 summarizes the major studies considered in this review: Wang L et al. [1]; Liu W et al. [6]; Zhou F et al. [7]; Zhao X et al. [10]; Zhang JJ et al. [25]; Wu C et al. [35]; Li K et al. [41]; Du R-H et al. [51]; Fogarty et al. [54]; and Zhang Y et al. [68].] Beyond its association with a worse prognosis by contributing to disease progression, older age also seems to delay recovery from COVID-19 [8]. Gender Gender is also a well-established risk factor for severe COVID-19 outcome [3]. The existing literature shows that men are more likely to be infected than women, with over 90% of British deaths occurring in people over 60, and 60% in men [1] [3] [9]. In fact, the majority of reports have shown that men are more likely to be infected, with male gender identified as a predictor of severe disease and, therefore, associated with a higher mortality rate [9] [10]. This disparity between sexes is likely multifactorial and may be due to a higher expression of the angiotensin-converting enzyme 2 (ACE2) receptor in males than in females, owing to the lack of protective regulation of gene expression by estrogen and the X chromosome (as ACE2 is located on the X chromosome) [10] [11] [12]. Obesity Obesity appears to be one of the most important predictors related to the severity of COVID-19 disease. One of the largest studies identifying obesity as a prominent risk factor, which analyzed data from more than 4000 COVID-19 patients, demonstrated that obesity was one of the strongest hospitalization risk factors, with body mass index > 40 kg/m2 carrying an odds ratio (OR) of 6.2 (95% confidence interval, CI, 4.2-9.3). The authors also demonstrated its important role as a powerful predictor of COVID-19 outcome [3] [13]. Another study, which focused on patients under the age of 60, found that those with obesity were twice as likely to be hospitalized and were at higher risk of requiring critical care. Surprisingly, there was no demonstrated association between obesity and more severe disease in patients over 60 years [14]. In a systematic review, obesity was considered an independent risk and prognostic factor for disease severity, as this population showed an increased need for invasive mechanical ventilation. Therefore, aggressive treatment and prevention are recommended, since these individuals are considered a high-risk group [15].
Pathophysiology can be explained by several mechanisms. Firstly, obesity is a pro-inflammatory condition [13] in which the abnormal secretion of adipokines and cytokines, such as tumor necrosis factor (TNF)-alpha and interferon, leads to immune dysfunction [16] [17], therefore contributing to increased morbidity in COVID-19 infection. Secondly, abdominal obesity is associated with poor pulmonary function through decreased diaphragmatic excursion [18]. Thirdly, obesity interacts with insulin-resistant states and the metabolic syndrome, promoting inflammatory and pro-thrombotic states that could lead to deleterious responses to infectious pathogens. Lastly, obese patients have more adipocytes, which, in turn, present a greater number of ACE2-expressing cells, and thus SARS-CoV-2 is more likely to enter [17]. Hypertension Hypertension seems to be the most prevalent comorbidity in COVID-19 infection and is associated with a poor prognosis. In fact, hypertension is associated with ACE2 dysregulation, which could aggravate the imbalance caused by the infection [19]. According to a Chinese meta-analysis based on 8 studies that included 46248 participants, hypertensive patients demonstrated a higher risk of a more aggressive COVID-19 presentation (OR 2.36, 95% CI 1.46-3.83) [20]. The same result was also demonstrated in another retrospective study with 1590 patients [21]. In addition, a different meta-analysis including 1527 patients disclosed that hypertensive individuals carried an augmented risk of requiring intensive care unit (ICU) treatment, with hypertensive states leading to a worse prognosis (95% CI 1.54-2.68) [22]. Diabetes Mellitus (DM) DM is one of the most well-characterized comorbidities related to this pulmonary infection [22]. It should also be considered a risk factor for severe disease, acute respiratory distress syndrome (ARDS), rapid progression and a poor prognosis of COVID-19, as described by several studies [3] [17] [23] [24]. Additionally, in a study comprising 52 intensive care patients, DM was a comorbidity in 22% of 32 non-survivors [7]. In other studies of patients with severe disease, the prevalence of DM ranged from 12% to 16%, suggesting the role of diabetes as a worsening prognostic factor [25]. Moreover, it appears that the incidence of COVID-19 is two-fold in diabetic individuals [22]. These findings are explained by several reasons. Simply put, the greater frequency of infections in diabetic patients is caused by the hyperglycemic environment that favours immune dysfunction with reduced T cell response and impaired neutrophil function. Smoking The mechanism of increased susceptibility to infections in smokers is multifactorial and includes the suppressive effects of cigarette smoke on the immune system [28]. However, there is scarce evidence regarding smoking, with contradictory results. On the one hand, in a systematic review of 5 studies aiming to evaluate the association between smoking and COVID-19 outcome, a higher disease severity was reported in active smokers, who more often required ICU admission. In addition, death occurred more often among this population. The majority of the included studies concluded that smokers were 1.4 times more likely to have severe symptoms of COVID-19 and approximately 2.4 times more likely to be admitted to an ICU, need mechanical ventilation or die, compared to non-smokers [29].
On the other hand, a study that did not support these data was included in a meta-analysis encompassing five studies, in which no significant association was found between active smoking and COVID-19 severity (OR, 1.69; 95% CI, 0.41-6.92; p = 0.254). In this analysis, even after excluding the largest of the five studies (which included 89.5% of the entire sample size), no statistically significant association was observed (OR, 4.35; p = 0.129) [30]. Nevertheless, in a more recent meta-analysis, active smoking was significantly associated with the risk of severe COVID-19 infection. Hence, the most recent evidence suggests that smokers are more vulnerable to this entity [31]. Cancer According to the literature, COVID-19 infection has a tremendous impact on cancer diagnosis, prognosis and therapeutic effects. However, results are contradictory, as other studies also indicate that the percentages of COVID-19 infection and severe events in cancer patients are not higher compared to the general population [31]. Studies have shown that SARS-CoV-2 infected patients with cancer exhibit a steeper clinical decline compared to those without this disease (p < 0.001) [7]. They have also revealed poorer outcomes, due to a higher incidence of acute complications. Clinical Presentation Among patients with symptomatic COVID-19, some common symptoms that have been linked to COVID-19 include cough, myalgias and headache. Other features, comprising diarrhea, sore throat, and smell or taste abnormalities, are also well described. Pneumonia is the most frequent serious manifestation of infection, characterized primarily by fever, cough, dyspnea and bilateral infiltrates on chest imaging [34]. ARDS Patients with COVID-19 are at risk for ARDS and death by respiratory failure. ARDS is a life-threatening complication of SARS-CoV-2 infection and predisposes to inferior outcomes. Several risk factors for the development of ARDS and evolution to death have been analyzed and identified in quite a few studies. Age above 65 years old, neutrophilia, organ dysfunction and coagulation disturbances were all established as risk factors [35]. Likewise, Wang L et al. provided strong evidence for ARDS as an extremely strong predictor of death in 339 patients with COVID-19. It was demonstrated that when ARDS occurred, the 28-day mortality was near 50% [1]. It is also important to note that ARDS in different stages of COVID-19 causes diffuse alveolar damage in the lung. In the acute stage, there is hyaline membrane formation in the alveoli, followed by interstitial widening and oedema. In the organising stage, fibroblast proliferation occurs [36] [37]. As patients move through the course of their illness, more outcomes of ARDS are being reported, with lung fibrosis appearing as part of COVID-19 ARDS. Ye Z et al. reported that 17% of patients had fibrous stripes on chest CT scans, and considered that the fibrous lesions may form during the healing of pulmonary chronic inflammation or proliferative diseases, with gradual replacement of cellular components by scar tissue [38]. White Blood Cell Population 1) Lymphocytopenia Viral infections in the human body primarily involve damage to the immune system, resulting in a decline of the absolute lymphocyte number [6].
Additional studies have suggested that SARS-CoV-2 may impair the function of CD4+ helper and regulatory T-cells and promote an initial hyperactivation of the immune response, which is followed by lymphocyte depletion. The lower absolute lymphocyte count in severe patients implied a more pronounced immunological dysfunction, making it a useful index in the evaluation of disease severity [41]. Zhou F et al. also showed that baseline lymphocyte count was significantly higher in survivors than in non-survivors. Severe lymphopenia was observed until death in non-survivors and was more commonly observed in severe COVID-19 illness [7]. Another study stated that the lower the blood lymphocyte percentage, the more severe the disease. The authors suggested that this marker could be used to classify the disease into moderate, severe and critically ill types, based on the blood lymphocyte percentage (LYM%), regardless of any other auxiliary indicators. They suggested classifying disease as moderate if LYM% > 20%, severe if 5% < LYM% < 20%, and critically ill if LYM% < 5% [40]. Moreover, in a prospective study, a CD3+ CD8+ T-cell count below 75 cells·μL−1 was a reliable predictor of mortality in patients with COVID-19 infection. It seems that CD3+ T-cells are the major type that is suppressed in infected patients, and this depletion is associated with an adverse outcome due to cytokine storm [43] [44]. Coagulopathy The development of coagulopathy is one of the most significant poor prognostic features in patients who progress to multiple organ failure [45]. 1) D-dimers D-dimer elevation is a very common laboratory finding observed in COVID-19 patients requiring hospitalization, and it predicts a poor prognosis [19] [42] [45] [46]. In fact, SARS-CoV-2 enters cells via ACE2 receptors, which are found in endothelial cells. This binding may lead to life-threatening micro- and macrovascular thrombosis [47]. Data from the literature suggest that the incidence of venous thromboembolism (VTE) can reach 25% [48]. In the most in-depth analysis of clinical cases published to date, including data concerning 1099 SARS-CoV-2 positive patients from over 550 hospitals in China, D-dimers ≥ 0.5 mg/L were found in 260/560 (46.4%) of the patients tested. Raised D-dimers were displayed by only 43% of patients with non-severe disease, compared with about 60% of those with severe illness [49]. In another study, Tang et al. identified markedly elevated D-dimers as one of the predictors of mortality. In fact, higher D-dimer levels probably indicate a severe inflammatory response accompanied by a secondary hypercoagulable state [45] [50]. Moreover, higher levels of D-dimers [2.12 μg/ml (range 0.77-5.27 μg/ml)] were observed in deceased patients compared with survivors, who had 0.61 μg/ml (range 0.35-1.29 μg/ml). Additionally, a study carried out by Huang et al. demonstrated that patients who required ICU admission had higher D-dimer levels on admission (median D-dimer level 2.4 mg/L) compared with those who did not (median D-dimer level 0.5 mg/L, p = 0.0042) [45]. A different study also found a strong association between the elevation of D-dimers and death during hospitalization (p = 0.003) [7]. It is also important to note that there is not yet a consensus on if and when patients should receive anticoagulation, what type and for how long [51]. In fact, Songpin et al. studied the incidence of VTE in 81 patients requiring ICU admission. The group concluded that D-dimer > 1500 ng/ml had an 85% sensitivity and 88.5% specificity for predicting which patients would develop DVT.
This study supported the concept of empiric anticoagulation for patients with markedly elevated D-dimers, as the marker can not only predict thrombosis but also monitor the effectiveness of anticoagulants [52]. Moreover, the American Society of Hematology also reported that prophylactic-dose enoxaparin is recommended for all hospitalized COVID-19 patients, despite abnormal coagulation tests, in the absence of active bleeding. Furthermore, it should only be held if platelet counts are less than 25 × 10^9/L, or if fibrinogen is less than 0.5 g/L. Nonetheless, it is also considered that therapeutic anticoagulation is not required unless another indication for therapeutic anticoagulation is documented (such as VTE, atrial fibrillation or a mechanical valve) [53]. 2) Coagulation studies Patients with severe COVID-19 infection can develop a coagulopathy meeting the criteria for disseminated intravascular coagulation, with fulminant activation of coagulation and consumption of coagulation factors. Tang et al. demonstrated that a prolonged PT was found in non-survivors when compared to survivors [45]. Regarding patients who required ICU admission, a mild PT elevation was observed when compared with ordinary (non-ICU) patients [54]. In another study, it was reported that an increased PT was associated with a higher risk of ICU admission, ARDS (p < 0.001), more severe disease (p = 0.004) and death (p = 0.001) [42] [46]. The hypercoagulable state observed in patients with COVID-19 can be explained by the dysfunction of endothelial cells induced by systemic pro-inflammatory cytokines, which results in excess thrombin generation and fibrinolysis shutdown [19] [55]. Furthermore, severe pneumonia induces hypoxia, which can stimulate thrombosis by increasing blood viscosity and through a hypoxia-inducible transcription factor-dependent signaling pathway [56]. 3) Platelet count Thrombocytopenia is another predictor that is independently associated with COVID-19 severity and risk of mortality [57]. In fact, thrombocytopenia can occur by direct infection of bone marrow cells by the virus, platelet destruction by the immune system, or platelet aggregation in the lungs resulting in microthrombi and platelet consumption [58]. Thrombocytopenia was identified as a significant risk factor for mortality and reported to occur in up to 55% of patients. Moreover, a low platelet count has long been recognized as an independent risk factor for sepsis-related mortality [57], as it correlates with multi-organ failure [59]. A meta-analysis of nine studies including 399 COVID-19 patients with severe disease showed that the platelet count was significantly lower in patients with more severe COVID-19. Subgroup analysis comparing patients by survival noted that a lower platelet count correlated with mortality. Thrombocytopenia was also associated with an over threefold increased risk of severe COVID-19 illness [42] [57] [59]. Therefore, based on the currently available literature, the measurement of these parameters in patients with COVID-19 should be performed not only for their documented prognostic value but also to help stratify the severity of the disease [42] [59]. Elevated Levels of High-Sensitivity Cardiac Troponin Myocardial injury, defined in several studies by an increase in troponin levels, may be due to myocardial ischemia or non-ischemic cardiac events. In a retrospective multicenter study, high-sensitivity serum troponin I (hs-TnI) was measured during the clinical evolution of the majority of COVID-19 patients.
An increase in hs-TnI levels was observed as clinical worsening occurred, and a more significant rise was found in more than half of the patients who died. Therefore, it has been considered one of the biomarkers associated with in-hospital lethality. In that same study, after comparing the patients who died to those who did not, the median level of hs-TnI was 8.8 pg/mL in non-survivors vs. 2.5 pg/mL in survivors. During the follow-up period, the median hs-TnI value did not change significantly in those who survived (2.5-4.4 pg/mL). Contrarily, in non-survivors this value increased up to 290.6 pg/mL on day 22 after the onset of symptoms [7]. In other cohort studies of COVID-19 patients, myocardial injury was documented in 7%-17% of hospitalized patients, being significantly more common among patients admitted to the ICU (22.2% vs. 2.0%, p < 0.001) and in those who died (59% vs. 1%, p < 0.0001). However, part of the increase in TnI levels may also be explained by kidney failure and, consequently, delayed troponin excretion, which is common in severe SARS-CoV-2 disease [60]. In addition, the analysis of six studies comprising patients with severe illness (defined by ICU admission, development of ARDS or death) showed that in these cases serum hs-TnI was markedly elevated [60] [61]. The underlying pathophysiological mechanisms of myocardial injury in COVID-19 are not yet fully understood. Acute myocardial infarction and direct damage to cardiomyocytes by the virus itself are among the possible mechanisms. Increased Amount of Inflammatory/Infection Markers 1) C-reactive protein (CRP) elevation CRP is an acute-phase reactant protein whose levels rise in response to inflammation, correlating with severity. The link between CRP levels and the severity and prognosis of COVID-19 is also reported in several studies [5] [62]. Ling W et al. similarly documented that, during the early stages of COVID-19, CRP levels were positively correlated with lung lesions and could reflect disease severity [63]. This was also corroborated by Chen W et al., who provided direct evidence that the level of CRP correlated with the severity of COVID-19 infection and could help to discern patients with moderate to severe COVID-19 infection from mild ones. In addition, it could be an earlier indicator of severe illness and help physicians to stratify patients for the ICU [64]. It is important to note that CRP values also seem to correlate with lung lesions, ARDS development and higher TnI levels, and should be used as a key indicator for disease monitoring [42] [62]. Increased Amounts of Proinflammatory Cytokines in Serum It is well known that whenever SARS-CoV-2 infects the respiratory tract, it can cause the release of pro-inflammatory cytokines. Additionally, more severe inflammatory reactions correlate with disease severity [7]. In fact, in some cases of more serious infection, SARS-CoV-2 is associated with a cytokine "storm", characterized by increased plasma concentrations of IL-2, IL-7, IL-10, macrophage inflammatory protein 1 (MIP-1), granulocyte colony-stimulating factor (G-CSF), monocyte chemoattractant protein 1, interferon gamma-induced protein 10 (IP-10) and tumor necrosis factor (TNF).
This marked elevation of inflammatory cytokines has been associated with pulmonary inflammation, extensive lung damage, clinical progression to extrapulmonary multi-organ collapse and a higher death rate [54] [65] [66]. The association with disease severity is also corroborated by another study, which reports that patients who required ICU admission had higher concentrations of these markers [41]. Among all these cytokines, IL-6 and IL-8 demonstrated the most significant changes, and their levels inversely correlate with lymphocyte count [44]. Wang et al. have equally provided evidence that cytokine release syndrome is a crucial factor in patients with SARS-CoV-2, leading to disease progression. It has also been shown that as the severity of the disease increases, the levels of IL-6 and IL-10 rise as well [67]. Abnormal Liver Tests 1) Liver enzyme elevations and liver injury Liver damage in COVID-19 patients might be directly caused by the viral infection of liver cells. Quite a few studies have shown different degrees of elevation of serum liver biomarkers, whereas cases of liver failure have not been reported so far [68]. In a large cohort including 1099 patients, elevated levels of AST were present in 112 (18.2%) of the non-severe individuals and in 56 (39.4%) of severe disease cases [49]. Moreover, the proportion of abnormal ALT in serious cases (28.11%) was higher than in mild cases (19.8%). Similarly, Huang et al. reported that the proportion of liver injury in ICU patients (62%) was greater than in non-ICU patients (25%) [54]. Contrarily, Wu et al. disclosed no significant differences in liver function when comparing mild/moderate patients to severe ones [8]. Furthermore, Wang and colleagues analyzed 339 elderly COVID-19 patients and described that there were no evident differences in ALT levels between survivors and non-survivors (p > 0.05) [1]. In addition, cases of severe acute liver injury have rarely been mentioned [69]. Hence, abnormal liver function tests during the course of COVID-19 are common, though clinically significant liver injury is rare [70]. 2) Hypoalbuminemia Albumin is the most intuitive index of the body's nutritional status. When hypoalbuminemia occurs, the body's resistance to viruses lessens, leading to disease progression [6]. Also, a recent study concluded that hypoalbuminemia was a useful prognostic factor for severe patients with COVID-19. It was also associated with exacerbation of disease-associated inflammatory responses and progression of the disease [74]. Laboratory Test Indexes 1) Neutrophil-to-lymphocyte ratio In the clinical practice of treating patients with COVID-19, emerging evidence suggests that the neutrophil-to-lymphocyte ratio (NLR), an inflammatory index reflecting systemic inflammatory cascades, can be used as a marker of systemic inflammation. Several studies have reported that this ratio could differentiate between mild/moderate and severe/critical groups and give the probability of death in patients with COVID-19. Moreover, current evidence suggests that NLR may also be a reliable predictor of COVID-19 progression and that an elevated NLR correlates with higher mortality [75]. In the laboratory examination of COVID-19, lymphopenia is common. In severe or non-surviving patients with COVID-19, the lymphocyte count decreases progressively, while the neutrophil count gradually increases (probably due to excessive inflammation and immune suppression caused by SARS-CoV-2 infection).
On the one hand, neutrophils are generally regarded as pro-inflammatory cells, which can be triggered by virus-related inflammatory factors. On the other hand, systemic inflammation triggered by SARS-CoV-2 significantly depresses cellular immunity, leading to a decrease in T cells (CD3+, CD4+ and CD8+ T cells). Hence, NLR can be easily calculated from routine peripheral blood tests and may be associated with the progression and prognosis of COVID-19 [75]. Other recent studies have also stated that the NLR was the most helpful independent prognostic biomarker in determining COVID-19 presence and treatment efficacy. Besides, NLR had a higher diagnostic accuracy than other assessment tools, such as the CURB-65 [76]. NLR has good predictive value for disease severity and mortality in patients with COVID-19 infection [76]. NLR is readily calculated and cost-effective, which means clinicians can screen high-risk individuals earlier. This is especially desirable in settings experiencing healthcare resource scarcity [76]. Evaluating NLR can help clinicians identify potentially severe cases early, conduct early triage and initiate effective management in time, which may reduce the overall mortality of COVID-19 [75]. NLR could also help in assessing the allocation of respiratory equipment in ICU patients and in the early evaluation of those in need of extracorporeal membrane oxygenation [76]. 2) PaO2/FiO2 ratio In COVID-19 infection, the lung is the most important organ invaded by SARS-CoV-2, with several COVID-19 patients characterized by hypoxia and respiratory distress. Hence, the PaO2/FiO2 ratio, the most commonly used oxygenation index, is used in COVID-19 infection [77]. The PaO2/FiO2 ratio is a widely used measure of hypoxemia in respiratory failure, calculated as the ratio between the arterial oxygen partial pressure (PaO2) and the fraction of inspired oxygen (FiO2). This ratio was validated as a criterion for ARDS definition and severity [78]. An observational, prospective and multicenter study demonstrated that moderate-to-severe impairment in PaO2/FiO2 (<200 mm Hg) was independently associated with a threefold increase in the risk of in-hospital mortality. The severity of respiratory failure assessed with the PaO2/FiO2 ratio is significantly associated with intubation rate and need for respiratory support. This study also suggested that the severity of hypoxemia could be useful to triage patients with COVID-19 as well as to identify patients at higher risk of unfavorable outcomes [79]. In another study, the PaO2/FiO2 ratio was significantly associated with prolonged hospital stay. Moreover, the authors also reported that its use at admission as a single measurement, in order to decide on treatment intensity, predicts a longer hospitalization [78]. Imaging Chest CT can accurately evaluate the type and extent of lung lesions, as supported by Kunhua Li et al., who investigated the clinical and CT features associated with severe COVID-19 pneumonia. CT manifestations of COVID-19 infection include ground-glass opacities, consolidation, reticular pattern, crazy-paving patterns and bronchial wall thickening (BWT) [41]. Regarding advanced disease, several studies have mentioned more frequent occurrence rates of consolidation, linear opacities, crazy-paving pattern, multiple lung lobe involvement, BWT and extrapulmonary lesions when compared to non-severe patients [41].
It was also determined that the presence of bilateral pneumonia and progressive radiographic deterioration on follow-up CT could serve as markers of worse prognosis [9]. Conclusions COVID-19 is emerging and spreading at an unprecedented rate, triggering a heavy impact worldwide. The present review has collected published data on COVID-19 prognostic factors and their correlation with SARS-CoV-2 infection outcomes. Nevertheless, further investigation is required to objectively confirm the clinical value of prognostic factors related to COVID-19. As described throughout this article, the chronic diseases addressed are associated with an increased risk of severe clinical manifestations and, consequently, with a worse prognosis in COVID-19 infection. This review was developed not only in the hope of helping healthcare providers worldwide effectively recognize and deal with SARS-CoV-2, but also to deliver a reference for future studies.
2021-08-03T00:04:08.518Z
2021-04-02T00:00:00.000
{ "year": 2021, "sha1": "2b034acf9d389f049e365c7007e8ae868d6288b6", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=109818", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ba17ddf6ff76ca499a75091ae452d683c522168e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59287528
pes2o/s2orc
v3-fos-license
KNOWLEDGE OF PEOPLE ABOUT THE TUBERCULOSIS INFECTION IN THE HEALTH CENTER IN BAGHDAD Tuberculosis (TB) is the second-most common cause of death from infectious disease (after those due to HIV/AIDS). Aims: To identify the knowledge of people about TB disease in the health center, and to find out any relationships between demographic characteristics and knowledge of people. Methods: A cross-sectional study was conducted at one of the health centers in Sheikh Omar in the Al-Rusafa directorate for the period from January 2018. The sample size was 150 participants who attended the center for treatment and diagnosis. Samples were collected through a pre-prepared questionnaire containing demographic information, and the interview was conducted directly with each attendee. The data were analyzed using descriptive statistics (frequency, percentage, p-value). Results: The highest percentage of the participants, 107/150 (71.3%), were in the age group 20-24 years, and female cases, 79/150 (52.7%), were more frequent than male cases, 71/150 (47.3%). Regarding the distribution of the visitors to the sections of the center, the highest percentage, 42/150 (28%), attended the vaccination room, followed by the physician room (22.7%). Conclusions: A significant relationship was found between the department of the center and overall knowledge assessment (p = 0.004). No significant relationship was found between age group, gender, marital status or monthly income and overall knowledge assessment (p = 0.145, p = 0.750, p = 0.073 and p = 0.777, respectively). Recommendation: Educational programs should be carried out to create awareness among the at-risk groups. INTRODUCTION Tuberculosis is the second-most common cause of death from infectious disease (after those due to HIV/AIDS). [1] Roughly one-third of the world's population has been infected with M. tuberculosis, [2] with new infections occurring in about 1% of the population each year. [3] However, most infections with M. tuberculosis do not cause TB disease, [4] and 90-95% of infections remain asymptomatic. [5] In 2012, an estimated 8.6 million chronic cases were active. [6] In 2010, 8.8 million new cases of TB were diagnosed, and 1.20-1.45 million deaths occurred, most of these in developing countries. [7,8] Of these 1.45 million deaths, about 0.35 million occurred in those also infected with HIV. [9] China has achieved particularly dramatic progress, with about an 80% reduction in its TB mortality rate between 1990 and 2010. [9] The number of new cases declined by 17% between 2004 and 2014. [10] Tuberculosis is more common in developing countries; about 80% of the population in many Asian and African countries test positive in tuberculin tests, while only 5-10% of the US population test positive. [11] Hopes of totally controlling the disease have been dramatically dampened because of a number of factors, including the difficulty of developing an effective vaccine, the expensive and time-consuming diagnostic process, the necessity of many months of treatment, the increase in HIV-associated tuberculosis, and the emergence of drug-resistant cases in the 1980s. [12] The rates of TB vary with age. In Africa, it primarily affects adolescents and young adults. [13] However, in countries where incidence rates have declined dramatically (such as the United States), TB is mainly a disease of older people and the immunocompromised (risk factors are listed above). [14] Worldwide, 22 "high-burden" states or countries together experience 80% of cases as well as 83% of deaths.
[10] The aim of this study was to identify the knowledge of people about TB disease in the health center, and to find out any relationships between demographic characteristics and knowledge of people. METHODS The health center serves, among other groups, the elderly, and it also has a special section for men. Samples were collected through a pre-prepared questionnaire containing demographic information, and the interview was conducted directly with each attendee. The data were analyzed using descriptive statistics (frequency, percentage, p-value at < 0.05). RESULTS Out of 150 participants, 107/150 (71.3%) were in the age group 20-24 years, and female cases, 79/150 (52.7%), were more frequent than male cases, 71/150 (47.3%). Regarding the distribution of the visitors to the sections of the center, the highest percentage, 42/150 (28%), attended the vaccination room, followed by the physician room, 34/150 (22.7%). The table shows that a highly significant relationship was found between the department and overall knowledge assessment (p = 0.004). DISCUSSION Tuberculosis (TB) remains a major cause of morbidity and mortality, and Viet Nam ranks 12th among the 22 high-TB-burden countries. [15] In this study we found 71.3% of the sample in the age group 20-24 years, compared with 44.9% in Viet Nam [15] and 61.7% in Bangladesh [16]; this reflects the deterioration of the health situation due to the wars, resulting in a lack of attention to the health aspect and a lack of medicines. Significant differences in TB organ manifestation in association with season, sex and age suggest different pathophysiological mechanisms of disease development. [17] In our study 52.7% of the sample were female; other results were found in Malaysia (27.7%) [18], Taiwan (54.4%) [19] and India (66.8%) [20], which indicates differences in lifestyle between countries, with most countries suffering from poverty. In our study, 80.7% of the sample were single, compared with 92.4% in Mexico [21]; this reflects the different customs and traditions between the two countries. TB patients and their households are characterized by increasingly lower employment income, lower employment rate, and higher dependency on public transfer, but the socio-economic deterioration is rather a risk factor for TB. [22] In this study 48% had a moderate monthly income; other results were found in Denmark (53%) [23] and Sudan (14.9%) [24], which is due to the difference in the standard of living between countries, with many families on limited income as well as unemployment and a lack of work opportunities. CONCLUSIONS We conclude that the majority of participants were in the age group 20-24 years; just over half were female; most were single; and about half had a moderate monthly income. A highly significant relationship was found between the department and overall knowledge assessment (p = 0.004). No significant relationship was found between age group, gender, marital status or monthly income and overall knowledge assessment (p = 0.145, p = 0.750, p = 0.073 and p = 0.777, respectively). Recommendation We need to build communication strategies such as training, timely dissemination of information on policy changes, and one-to-one dialogue with private practitioners to dispel misconceptions, which may enhance TB notification. Trust-building strategies, such as providing feedback about referred cases from the private sector, health personnel visits or a liaison private doctor, may ensure compliance with public health activities.
In addition, educational programs should be carried out to create awareness among the at-risk groups.
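The association statistics reported above (for example, section of the center versus overall knowledge, p = 0.004) are of the kind obtained from a chi-square test of independence on a contingency table of counts. The underlying table is not reproduced here, so the sketch below uses hypothetical counts purely to illustrate the calculation; it is not the study's analysis code.

```python
# Minimal sketch (not from the paper): chi-square test of independence of the
# kind used to relate the section attended to overall knowledge level.
# The contingency table below is hypothetical; the study reports only p values.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sections of the center; columns: knowledge category (poor, fair, good)
table = np.array([
    [10, 20, 12],   # vaccination room (hypothetical counts)
    [ 5, 14, 15],   # physician room (hypothetical counts)
    [30, 28, 16],   # remaining sections pooled (hypothetical counts)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p value below 0.05 would indicate an association between the section
# attended and the knowledge category, analogous to the reported p = 0.004.
```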
2019-01-27T14:09:16.690Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "73665c72dd3b891a3e5916dd15de2a5870cb28a2", "oa_license": null, "oa_url": "https://doi.org/10.21767/1791-809x.1000623", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1be6e59f2cfe17b2b8bd47fddb87fa09da1bcad6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247953753
pes2o/s2orc
v3-fos-license
Defining the Role of Mitochondrial Fission in Corneal Myofibroblast Differentiation Purpose Fibrosis caused by corneal wounding can lead to scar formation, impairing vision. Although preventing fibroblast-to-myofibroblast differentiation has therapeutic potential, effective mechanisms for doing so remain elusive. Recent work shows that mitochondria contribute to differentiation in several tissues. Here, we tested the hypothesis that mitochondrial dynamics, and specifically fission, are key for transforming growth factor (TGF)-β1–induced corneal myofibroblast differentiation. Methods Mitochondrial fission was inhibited pharmacologically in cultured primary cat corneal fibroblasts. We measured its impact on molecular markers of myofibroblast differentiation and assessed changes in mitochondrial morphology through fluorescence imaging. The phosphorylation status of established regulatory proteins, both of myofibroblast differentiation and mitochondrial fission, was assessed by Western analysis. Results Pharmacological inhibition of mitochondrial fission suppressed TGF-β1-induced increases in alpha-smooth muscle actin, collagen 1, and fibronectin expression, and prevented phosphorylation of c-Jun N-terminal kinase (JNK), but not small mothers against decapentaplegic 3, p38 mitogen-activated protein kinase (p38), extracellular signal-regulated kinase 1 (ERK1), or protein kinase B (AKT). TGF-β1 increased phosphorylation of dynamin-related protein 1 (DRP1), a mitochondrial fission regulator, and caused fragmentation of the mitochondrial network. Although inhibition of JNK, ERK1, or AKT prevented phosphorylation of DRP1, none sufficed to independently suppress TGF-β1–induced fragmentation. Conclusions Mitochondrial dynamics play a key role in early corneal fibrogenesis, acting together with profibrotic signaling. This is consistent with mitochondria's role as signaling hubs that coordinate metabolic decision-making. This suggests a feed-forward cascade through which mitochondria, at least in part through fission, reinforce noncanonical TGF-β1 signaling to attain corneal myofibroblast differentiation. C orneal fibrosis can result from traumatic injury, chemical burns, infection, and even surgery. 1 Transparent corneal keratocytes differentiate into fibroblasts, thence into opaque myofibroblasts, whose generation and deposition of abnormal extracellular matrix critically impacts the resulting loss of corneal transparency. 2 Transforming growth factor (TGF)-β1 is the major profibrotic cytokine active in this system, orchestrating physiological healing that includes fibrosis, but which can also lead to scar tissue. 3,4 In the context of corneal wounding, TGF-β1 is a central mediator guiding stromal responses in fibroblastto-myofibroblast conversion. 2 Fibrogenesis can be initiated through binding of TGF-β1 to its cognate receptor on the cell surface, resulting in phosphorylation of small mother against decapentaplegic (SMAD), its nuclear effector. Receptor-regulated SMADs (r-SMADs) can assemble into transcription regulatory complexes with partner SMADs (co-SMADs), translocating into the nucleus to directly regulate gene expression. [5][6][7] In addition to profibrotic gene regu-lation by SMADs, other intracellular signaling pathways can also contribute to fibrotic activation. 8 These non-SMAD signaling pathways may reinforce, attenuate, or otherwise modulate downstream cellular responses. 
For example, TGF-β1 can activate mitogen-activated protein kinases (MAPKs), phosphoinositide 3-OH kinases (PI3K), and several others, such as Rho-like GTPases and protein kinase A. 2,9 The non-SMAD-and SMAD-dependent pathways are often interconnected and display broad interactions to generate cell-typespecific or context-dependent TGF-β1 signaling. 10 Over the past decade, evidence has emerged suggesting that mitochondria may play a role in TGF-β1-driven fibrosis. Mitochondria respire to generate ATP, but are also a major source of reactive oxygen species (ROS) that, although damaging when produced in excess, can also serve in a signaling capacity; as such, mitochondria are becoming recognized as a signaling hub for integrating cellular metabolic decisions. 11,12 Both pulmonary and cardiac mitochondria contribute to TGF-β1-driven fibrosis in the lung and heart, 13,14 and metabolic reprogramming through mitocentric mechanisms contributes widely to fibrosis in many tissues (for recent reviews, see [15][16][17] ). There are accumulating reports of ROS, redox, mitochondria, and metabolic reprogramming influencing TGF-β1's fibrogenic effects, [18][19][20] but how these processes functionally intersect with SMAD-and non-SMAD-dependent signaling to impact corneal fibrosis is entirely unknown. Mitochondria exist as dynamically-regulated filamentous networks, which change shape and subcellular distribution by the balanced activity of two opposite processes-fusion and fission (fragmentation)-that function to meet cellular energetic and metabolic requirements. 21,22 Mitochondrial dynamics are predominantly mediated by large GTPases in the dynamin family, including dynamin-related protein 1 (DRP1). 23 This is of interest here because mitochondrial morphologic remodeling through DRP1-mediated fission has been shown to be necessary for TGF-β1-induced clinical phenotypes including kidney fibrosis, 24 cardiac fibroblast activation, 25 idiopathic pulmonary fibrosis, 26 and most recently, alkali burn-induced corneal injury. 27 Here we ask if this process is equally important for corneal fibrosis. Our goal was to define hitherto unknown physiological interactions between established molecular signaling cascades that integrate cellular metabolism with the process of TGF-β1mediated differentiation of stromal fibroblasts into myofibroblasts. Our results lead to a complex interaction between noncanonical fibrotic mediators and mitochondrial fission that will serve as a foundation to better understand corneal wound healing. Materials and Methods Isolation, Culture, and Pharmacologic Treatment of Cat Corneal Fibroblasts. Primary feline corneal fibroblasts were generated as previously described 28 and in complete accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. In brief, fresh eyeballs were obtained immediately postmortem from adult, domestic short-hair cats (Felis cattus; Marshall Bioresources, North Rose, NY, USA). The corneal epithelium and endothelium were scraped off, and the stroma underwent double enzyme digestion with Dispase II (C756V28; Roche Diagnostics, Risxh-Rotkreuz, Switzerland) and Collagenase (Clostridium histolyticum; C8176; Sigma-Aldrich, St. Louis, MO, USA) overnight and 45 minutes, respectively. 28 Isolated stromal cells were grown in fibroblast growth factor-containing medium (no. C-23010; PromoCell GmbH, Heidelberg, Germany) at 37°C in a humidified chamber at 5% CO 2 . 
After passage 2, the medium was changed to Dulbecco's modified Eagle medium (DMEM)-low glucose (no. D6406; Sigma Aldrich) with 15% serum (5% fetal bovine serum [FBS; no. F0926; Sigma Aldrich] + 10% newborn calf serum [no. 16010167; Gibco Laboratories, Gaithersburg, MD, USA]) and 1% (vol/vol) penicillin/streptomycin (no. 15323671; Corning Cellgro, New York, NY, USA) until they reached confluence. Harvested cells were cryopreserved at passage 3, and passage 6 to 9 post-thaw cells were used for all experiments. In general, cat corneal fibroblasts were seeded into 35 mm wells at a density of 3 × 10 4 cells/cm 2 . After the cells became adherent, the medium was changed to low glucose DMEM + 1% charcoal stripped FBS (DMEM-CSF; no. 12676-029; Gibco Laboratories), and the cells were incubated overnight to promote quiescence. Differentiation was induced using DMEM-CSF containing 1 ng/mL recombinant human TGF-β1 (R&D Systems Inc., Minneapolis, MN, USA). For data shown in the main figures of this article, pharmacologic inhibitors were added to separate wells 30 minutes before the addition of TGF-β1 and included (final concentration in DMEM-CSF) 10 μM of the DRP1 inhibitor Mitochondrial division inhibitor-1 (Mdivi-1; no. M0199; Sigma-Aldrich), 2.3 μM of the TGF-βR1 inhibitor SB431542 (no. S1067; Selleckchem, Houston, TX, USA), 10 μM of the ERK inhibitor U0126 (no. 662005; Calbiochem, San Diego, CA, USA), 10 μM of the JNK inhibitor SP600125 (no. S1460; Selleckchem), 5 μM of the p-38 inhibitor SB203580 (no. 559389; Calbiochem), or 2.5 μM of the PI3K inhibitor LY294002 (no. 440204; Calbiochem). Control groups had 0.1% dimethyl sulfoxide (Sigma Aldrich) substituted in place of inhibitors. Except for Mdivi-1, all of the inhibitors listed above have been used previously by our group on cat corneal fibroblasts maintained in similar culture conditions and in the context of TGF-β1-induced myofibroblast differentiation. 9,29 As such, the present experiments used previously ascertained optimal doses for each inhibitor. For Mdivi-1, we performed a preliminary dose-response experiment ( Supplementary Fig. S1), which showed alpha-smooth muscle actin (α-SMA) expression to be reduced at doses ranging from 5-20 μM. This motivated our decision to run all remaining experiments in the present study with 10 μM Mdivi-1. It was also necessary to establish the time-course of DRP1 phosphorylation in our culture model; to this effect, we assayed control fibroblasts and cells exposed to 1 ng/ml TGF-β1 at 30 mins, 1 hr, 2 hrs, 4 hrs and 6 hrs ( Supplementary Fig. S2). As the most significant differences between control and TGF-β1-treated cells were seen at 4 and 6 hrs of incubation, we decided to use the shorter time point (4 hrs) for the experiments outlined in Figure 4. Fibroblast Imaging and Western Blot Analyses. Cell morphology was assessed using an Olympus IX73 microscope (Olympus America Inc., Center Valley, PA, USA). For western blotting, cells were washed and mixed with 2x SDS loading buffer (0.125M Tris-HCl, pH 6.8, 20% glycerol, 4% SDS, 0.004% bromophenol blue and added immediately before use with 0.1M DTT to generate whole-cell lysates as described. 30 The cell lysates were separated by molecular weight via electrophoresis on an 8% denaturing gel and transferred to nitrocellulose membranes. Ponceau S (#P7170; Sigma Aldrich) staining and β-Tubulin (1:5000; #sc-166729; Santa Cruz Inc., Dallas, Texas, USA) were used to verify that the same amount of protein was loaded in each lane. 
Non-specific protein binding to the membrane was blocked using PBS containing 0.1% Triton-X100 (PBS-T) and 5% nonfat dry milk (#sc-2325; Santa Cruz Inc.). In order to maximize efficiency, membranes were often cut into pieces, each with a specific molecular weight range encompassing the target of interest; this allowed for multiple targets to be probed on a single blot without stripping and re-probing. Blots were incubated overnight at 4°C containing primary antibodies to the following targets at the dilutions indicated: After several washes in PBS-T, secondary antibodies (antimouse IgG or anti-rabbit IgG-horseradish peroxidase; GE Healthcare, Chicago, IL, USA) were applied for one hour at room temperature. Bands were detected by Western Lightning plus-ECL (PerkinElmer, Waltham, MA, USA) or Super-Signal West Dura Luminol/Enhancer Solution (ThermoFisher Scientific). Finally, the membranes were scanned with a Chemi-doc machine (Bio-Rad, Hercules, CA, USA), and the resulting images were imported into Image J (NIH) for densitometric analysis, performed using standard protocols as previously described. 28 Assessing Mitochondrial Morphology. Cat corneal fibroblasts with passage 6 were seeded at a density of 2.5 × 10 4 cells per cm 2 into glass-bottom plates (no. P12-1.5H-N; Cellvis, Sunnyvale, CA, USA) in low glucose DMEM containing 15% serum. After adherence, the media was switched to DMEM-CSF to induce quiescence, and the next day the cells were exposed to 1 ng/mL TGF-β1 in DMEM-CSF for times ranging from two hours to 48 hours. To visualize mitochondria, cells were incubated in DMEM-CSF containing 50 nM Mitotracker Red CMX-Rosamine vial (MTR; Ther-moFisher Scientific) for 30 minutes, rinsed twice in DMEM-CSF, and imaged immediately via confocal microscopy as described below. In general, we aimed for three biological replicates, with 20 to 30 cells imaged per independent replicate. For inhibitor experiments, cells were preincubated in DMEM-CSF containing 10 μM SP600125, 10 μM U0126, or 2.5 μM LY294002 for 30 minutes before TGF-β1 treatment, and inhibitors were included in the media throughout the entire experimental time course. Images of MTR-stained mitochondria were acquired using a Nikon A1R HD Laser scanning confocal microscope (Eclipse Ti2) equipped with a ×60 oil objective and running NIS-Elements version 5.11 software (Nikon Instruments, Melville, NY, USA). General acquisition parameters included using resonance scanner mode, 561 nm excitation, a 595/50 emission window, an image resolution of 1024 × 1024 pixels, PMT gain of HV = 10, and magnification ×2.78. Because of the thickness of fibroblasts, a z-stack with six steps and an optimal step size for the objective/pinhole was taken and converted to a maximum-intensity image. Images were saved as .nd2 files. Custom code for image analysis was written in MATLAB (MitochondriaAnalysis.m) and ImageJ (MitochondrialMor-phologyAnalysis.ijm). Our ImageJ Macro was heavily based upon existing code 31 (https://github.com/BoschCalvo2018/ MitochondrialMorphologyAnalysis_Folder.ijm.git), modified to accommodate our specific images and MATLAB. Briefly, our code extracts mitochondrial morphology parameters from all images within a folder, allowing conserved metrics to be used in the analysis so as to minimize image-to-image variability, and relies on edge recognition to identify individual mitochondria. First, individual cells within an image were identified to create regions of interest. 
Second, segmentation algorithms in ImageJ were used to identify and analyze the mitochondrial network in each individual cell (region of interest). Two filtering parameters were used to accurately capture mitochondrial networks in each data set by reducing local background noise and smoothing the mitochondrial signal. Finally, two key parameters of mitochondrial morphology were computed: circularity (4π × area / perimeter²) and form factor (1/circularity). Values for form factor close to 1 indicated a more rounded morphology, whereas higher values indicated a longer, more contiguous mitochondrial network within cells of interest. To assess fragmentation of the mitochondria, the average form factor was computed for each treatment group. Statistical Analyses. To evaluate differences in protein expression levels on western blots, when three or more groups were compared, inter-group differences were tested with a one- or two-way analysis of variance (ANOVA), followed by either Tukey's or Dunnett's post-hoc tests, as appropriate. When only two groups were compared, a two-tailed Student's t-test was performed. A probability of error of p < 0.05 was considered statistically significant in all cases. To evaluate differences in mitochondrial morphology between three or more groups, Welch's one-way ANOVA was used with Dunnett's T3 post-hoc multiple comparison test or Tukey's HSD post-hoc test. For two-group comparisons, a two-tailed Student's t-test was performed. Mdivi-1 Decreases TGF-β1-Induced Expression of Profibrotic Molecules in Cat Corneal Fibroblasts Basal expression of α-SMA in cat corneal fibroblasts cultured in 1% CSF was close to zero, while basal levels of COL1 and t-FN were low but distinctly above zero (lane 1, Figs. 1A, 1B). This is consistent both with prior observations, 9,30 and with the notion that low levels of extracellular matrix components are generated by fibroblasts in vitro. After 2 days of culture with TGF-β1, α-SMA protein expression increased by ∼32-fold, COL1 by nearly 4-fold, and t-FN by about 8-fold over baseline levels (Figs. 1A, 1B). Morphologically, cells changed appearance from the flat, elongated spindle shape of fibroblasts at baseline (Fig. 1C) to the more spread-out, balanced aspect ratio characteristic of myofibroblasts, with prominent stress fibers (Fig. 1D). Mdivi-1 is a cell-permeable, selective inhibitor of mitochondrial fission. 32 In its presence, TGF-β1-treated cells were not as large or rich in stress fibers as typical myofibroblasts (Fig. 1D). TGF-β1 Stimulates Mitochondrial Fission in Cat Corneal Fibroblasts The ability of Mdivi-1 to suppress the TGF-β1-induced fibroblast-to-myofibroblast transition suggested mitochondrial morphologic remodeling may be necessary for differentiation. Mitochondrial morphology can be assessed in live cells by fluorescent labeling with a targeted dye such as MTR (Figs. 2A, 2C). Skeletonization and segmentation of raw fluorescent images (Figs. 2B, 2D) are used to derive metrics for area, length, perimeter, circularity, and major/minor axis of each individual mitochondrion in the cell; these metrics can then be applied to compute a shape representation termed form factor (FF). Higher FF denotes more elongated (i.e., less fragmented) mitochondria, whereas lower FF denotes more rounded (i.e., more fragmented) mitochondria.
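The two shape metrics defined above can be computed directly from a segmented image of the mitochondrial network. The following is a minimal sketch in Python using scikit-image region properties rather than the ImageJ/MATLAB pipeline described here; the file name, thresholding step, and size filter are assumptions made only for illustration.

```python
# Minimal sketch (assumed workflow, not the authors' ImageJ/MATLAB code):
# compute circularity and form factor for each segmented mitochondrion.
import numpy as np
from skimage import io, filters, measure

img = io.imread("mtr_max_projection.tif")          # hypothetical file name
mask = img > filters.threshold_otsu(img)           # simple global threshold
labels = measure.label(mask)

form_factors = []
for region in measure.regionprops(labels):
    if region.area < 5:                            # ignore tiny noise objects
        continue
    # circularity = 4*pi*area / perimeter^2 ; form factor = 1 / circularity
    circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    form_factors.append(1.0 / circularity)

# Higher mean form factor -> more elongated, interconnected network;
# values near 1 -> rounded, fragmented mitochondria.
print(f"mean form factor = {np.mean(form_factors):.2f} "
      f"(n = {len(form_factors)} objects)")
```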
An initial time course of mitochondrial morphology after the addition of TGF-β1 to fibroblast cultures suggested that a statistically significant reduction in FF had occurred by 24 hours and that FF returned to baseline by 48 hours (Supplementary Fig. S3). These results were consistent with transient fragmentation of the mitochondrial network by TGF-β1, a finding that was further confirmed both qualitatively (Figs. 2A-D) and quantitatively (Fig. 2E). All in all, our data support the notion that TGF-β1 stimulates mitochondrial fission in early stages of fibrotic activation, with cells reaching a new homeostasis once in their new, differentiated state. Impact of Mdivi-1 on Intracellular Signals Mediating the Effects of TGF-β1 in Corneal Fibroblasts To more clearly outline the molecular nature of early events surrounding mitochondrial fission, phosphorylation of key profibrotic mediators of TGF-β1 signaling were examined using western analysis (Fig. 3). As previously reported, 9 incubation of cat corneal fibroblasts with TGF-β1 for 1 hour caused a large increase in levels of phosphorylated SMAD3 (compare lanes 1 and 2, Figs. 3A, 3B). Pre-incubation with the TGF-β1 receptor inhibitor SB431542 completely blocked this effect (lane 4, Figs. 3A, 3B), but preincubation with 10 μM Mdivi-1 did not (lane 3, Figs. 3A, 3B), suggesting that the ability of TGF-β1 to induce mitochondrial fission was unlikely to be SMAD3-dependent. Total SMAD2/3 levels were similar across conditions and did not change as a result of treatment with Mdivi-1 (Fig. 3A). Interplay Between TGF-β1, Intracellular Profibrotic Signals, and DRP1 Ser616 Phosphorylation DRP1 is the canonical mediator of mitochondrial fission, the mitochondrial target of Mdivi-1, and is regulated by mechanisms including selective phosphorylation, recruitment from the cytosol to the mitochondrial outer membrane, and protein oligomerization. 33,34 Here, we examined the impact of TGF-β1 signaling on selective phosphorylation of DRP1 at Ser 616 , which is known to stimulate mitochondrial fission. 35 Consistent with the TGF-β1-mediated change in mitochondrial morphology (Fig. 2), a 1.6-fold increase in the ratio of p-DRP1 Ser616 /t-DRP1 was observed four hours after TGF-β1 treatment ( Fig. 4 and Supplementary Fig. S2). Mitochondrial morphology reflects a balance between fission and fusion, with inner mitochondrial membrane protein Opa1 regulating inner mitochondrial membrane fusion and outer mitochondrial membrane proteins Mfn1 and Mfn2 contributing to outer mitochondrial membrane fusion. Reduced expression of pro-fusion proteins could result in apparent fragmentation, mimicking activation of DRP1. However, Western analyses suggested that there was no difference in the expression levels of these pro-fusion proteins between fibroblasts and myofibroblasts in culture ( Supplementary Fig. S4). This result does not preclude a role for their acute regulation in the transient mitochondrial fragmentation observed during the process of fibroblast-tomyofibroblast transition, but it did motivate focusing further effort on deciphering the interaction of DRP1 phosphorylation with other pro-fibrotic signaling mediators. To ascertain whether DRP1 Ser616 phosphorylation is influenced by activity of noncanonical signaling molecules activated by TGF-β1 in corneal fibroblasts, we examined the impact of specific inhibitors of JNK, ERK, AKT and p38 phosphorylation. 
In addition to regulating the TGF-β1-mediated fibroblast-to-myofibroblast transition through the noncanonical axis, several of these kinases are known to phosphorylate DRP1 directly (see Discussion for details). Our data showed that the upregulation of DRP1 Ser616 phosphorylation following TGF-β1 stimulation was suppressed at least in part by pre-incubation with inhibitors of JNK, ERK, and AKT, but not of p38 (Fig. 4). Finally, there are several competing mechanisms through which Mdivi-1 has been reported to inhibit fission, including by suppressing DRP1 phosphorylation, but our data with Mdivi-1 was ambiguous in this regard -DRP1 Ser616 phosphorylation status was not significantly different from either baseline or TGF-β1 stimulated (Fig. 4) at the 4 hours time point. Finally, we tested whether inhibitors of the noncanonical TGF-β1 signaling axis could individually suppress de facto mitochondrial fragmentation, akin to their ability to suppress phosphorylation of DRP1 Ser616 . Surprisingly, mitochondrial FF in cells treated with TGF-β1 together with either JNK, ERK, or AKT inhibitors was indistinguishable from cells treated with TGF-β1 alone ( Supplementary Fig. S5). This suggests that none of these targets are independently required for TGF-β1-induced fragmentation. As such, we favor an interpretation where fragmentation is necessary and permissive for other pro-fibrotic signaling pathways, which act redundantly to assure morphologic remodeling. The implications of these results are discussed in greater depth below. DISCUSSION In the present study, we used a primary cat corneal cell culture model of TGF-β1-induced fibroblast activation to probe-for the first time-the relevance of mitochondrial fission to corneal fibrosis. Our results show that TGF-β1 causes acute phosphorylation of DRP1 and mitochondrial fragmentation in corneal fibroblasts. We then demonstrate -also for the first time -that treatment with the mitochondrial fission inhibitor Mdivi-1 impedes TGF-β1mediated corneal fibroblasts' transformation into myofibroblasts, both in terms of alterations to cell morphology and increased expression of molecular surrogates of the differentiated phenotype. To more critically address the relationship between mitochondrial fission and the regulation of fibrogenesis, both SMAD-and non-SMAD-dependent intracellular signaling pathway activation by TGF-β1 were assayed after Mdivi-1 treatment. Finally, pharmacological targeting of some of these pathways was used to ask whether they in turn contributed to the TGF-β1-dependent phosphorylation of DRP1 and de facto mitochondrial fission. TGF-β1 signaling involves both canonical and noncanonical mechanisms, with the former tied closely to regulation of the SMAD transcription factors. 2,6,9,10 The observations made here that phosphorylation of SMAD3 by TGF-β1 signaling occurs independent of mitochondrial fission and on a time scale (∼1 hour) where fragmentation of the mitochondrial network is only just becoming apparent suggest that the role of mitochondrial fission on TGF-β1-induced myofibroblast differentiation may be mediated through noncanonical signaling mechanisms. However, the exact nature of this role is complex. Our lab and others have defined the contribution of multiple signaling kinases that contribute to noncanonical TGF-β1-mediated effects in corneal fibroblasts. 9,29,36,37 Hence, we incorporated mitochondrial fission into these known pathways. 
Clearly, inhibition of JNK blocked phosphorylation of DRP1, an important step in its activation (Fig. 4). JNK/DRP1/fission have been linked previously in the context of tumor suppression by the Hippo/Yap pathway, 38 acetaminophen toxicity through Receptor Interacting Protein Kinase-1, 39 and cardiac ischemia-reperfusion injury. 40 However, conversely and perhaps somewhat counter-intuitively, inhibiting mitochondrial fission through Mdivi-1 also blocked phosphorylation of JNK by TGF-β1 signaling (Fig. 3C). We interpret this to mean that fission is a positive regulator of JNK activation, suggesting a mechanism whereby these downstream effectors of noncanonical TGF-β1 signaling exhibit regulatory reciprocity, and perhaps form a communication axis that reinforces differentiation decisions. Mdivi-1 is known to impair yeast Dynamin 1 (Dnm1) GTPase activity, likely via an allosteric binding mechanism that prohibits Dnm1 self-assembly, 32 and there are reports of DRP1 phosphorylation being suppressed by Mdivi-1. 41 However, Mdivi-1 has also been reported to be a selective inhibitor of mitochondrial Complex I (Cx-I). 42 This later offtarget effect could influence the production of ROS through both forward and reverse electron transport mechanisms. As such, the involvement of Cx-I and/or Cx-I-derived ROS in JNK phosphorylation independent of mitochondrial fission remains a formal possibility. Regarding the remaining signaling molecules tested here, ERK and AKT are clearly upstream regulators of DRP1 ( Fig. 4) but are not themselves regulated by fission (Fig. 3), resulting in more linear signal transduction pathways (Fig. 5). ERK2 can mediate phosphorylation of DRP1 to drive tumor growth, 43 and the ERK-CREB pathway has been linked to BCL2/adenovirus E1B 19kDa-interacting protein 3 (Bnip3)mediated mitophagy, 44 as well as to Activation-Induced Cell Death, a form of immune cell apoptosis where ERK regulates DRP1 together with JNK. 45 Similarly, the PI3K/Akt signaling axis has also been linked to DRP1 activation, for example, in the context of Alzheimer's Disease-relevant Amyloid beta (Aβ) signaling. 46 Moreover, both ERK and AKT have been shown recently to promote proliferation and invasion of lung adenocarcinoma through phosphorylation of DRP1 Ser616. 47 However, it is essential to note that even requisite kinases do not necessarily phosphorylate DRP1 directly (i.e., why would inhibition of three separate kinases all block phosphorylation individually if such was the case?), and their effect may be cell-type specific. For example, under select circumstances, AKT can also negatively regulate DRP1 to prevent mitochondrial fission and promote cell survival. 48,49 Hence, our results do not preclude the involvement of inter- Putative role of mitochondrial fission in early stages of corneal myofibroblast differentiation. TGF-β1 dimerization and ligand binding to its cognate receptor TGF-RII recruits TGF-RI into a heteromeric complex, initiating a signal transduction cascade that can act through canonical and noncanonical pathways. The canonical TGF-β1 signaling pathway requires SMADs, which are directly phosphorylated by TGF-R1, transit to the nucleus, and regulate gene transcription (including a pro-fibrotic repertoire of targets such as a α-SMA, COL1, and t-FN). In contrast, noncanonical, non-SMAD pathways are activated directly by ligand-occupied receptors to modulate downstream effectors, such as p38, AKT, ERK and JNK kinases. 
Our results suggest that non-canonical AKT, ERK and JNK signaling pathways can modify DRP1, the central mediator of mitochondrial fission, and that TGF-β1 signaling causes a necessary fragmentation of the mitochondrial network early in the fibroblast-tomyofibroblast transition. Moreover, mitochondrial fission appears to reinforce JNK activation. In contrast, p38-although part of the noncanonical TGF-β1 signaling axis-does not appear to influence mitochondrial fission. However, it is important to note that first, our results do not preclude fission-independent roles for AKT, ERK, and JNK in fibrotic decision-making, and second, that neither AKT, ERK, or JNK is absolutely required for TGF-β1-mediated fragmentation, suggesting functional redundancy. mediate kinases or alternative molecular signaling cascades that further inform these decisions. This point is emphasized by data showing that inhibition of JNK, ERK, or AKT, all of which block phosphorylation of DRP1 and inhibit the fibroblast-to-myofibroblast transition, do not individually prevent fragmentation of the mitochondrial network. It is worth noting that ERK inhibition had a subtle intermediate effect on is own and that the AKT + TGF-β1 data set was not statistically-significantly different from either the baseline or TGF-β1-treated data sets (Supplementary Fig. S5). However, neither of these observations alters the conclusion above. Moreover, although the Mdivi-1 result suggests that fragmentation is necessary for differentiation, these later results suggest that it is not sufficient to convey the effects of TGF-β1 in lieu of other presumed, non-mitochondrial functions of JNK, ERK, and AKT. Ultimately, the role of mitochondrial fragmentation, intertwined as it is in the noncanonical TGF-β1 signaling axis, may be to provide a metabolic context through which cell fate decisions are made-because mitochondrial fragmentation has also been closely tied to apoptosis and cell death. In conclusion, our work demonstrates that mitochondrial fission integrates with fibrosis-relevant signaling pathways to support TGF-β1-mediated differentiation of corneal fibroblasts into myofibroblasts. This unexpected role may herald a larger contribution of mitochondria acting to provide a metabolic context for competing cell fate decisions, such as whether TGF-β1 leads to differentiation or apoptosis.
2022-04-06T06:22:53.649Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "5c7a5d963077adb4f6f0fc0a272cbdce44203af9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/iovs.63.4.2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "545a6739c0d90a342f2ecb640990b15321e84737", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246748364
pes2o/s2orc
v3-fos-license
Combining the catalytic enantioselective reaction of visible-light-generated radicals with a by-product utilization system We report an unusual reaction design in which a chiral bis-cyclometalated rhodium(iii) complex enables the stereocontrolled chemistry of photo-generated carbon-centered radicals and at the same time catalyzes an enantioselective sulfonyl radical addition to an alkene. General Information All catalytic reactions were carried out under an atmosphere of nitrogen with magnetic stirring in a Schlenk tube (10 mL). The catalysts Δ-IrS 1 and Δ-RhS 2 were synthesized according to our published procedures. Δ/Λ-RhO were synthesized with some modifications (see Section 2). 3 Solvents were distilled under nitrogen from calcium hydride (CH3CN, CH2Cl2) or sodium/benzophenone (THF, Et2O). HPLC-grade acetone, methanol, and ethanol were used without further purification. Dry 1,4-dioxane was bought from Alfa-Aesar. Reagents purchased from commercial suppliers were used without further purification. Flash column chromatography was performed with silica gel 60 M from Macherey-Nagel (irregular shaped, 230-400 mesh, pH 6.8, pore volume: 0.81 mL g⁻¹, mean pore size: 66 Å, specific surface: 492 m² g⁻¹, particle size distribution: 0.5% < 25 μm and 1.7% > 71 μm, water content: 1.6%). ¹H NMR, ¹⁹F NMR and proton-decoupled ¹³C NMR spectra were recorded on Bruker Avance 300 (300 MHz) or Bruker AM (500 MHz) spectrometers at ambient temperature. NMR standards were used as follows: ¹H NMR spectroscopy: δ = 7.26 ppm (CDCl3). ¹⁹F NMR spectroscopy: δ = 0 ppm (CFCl3). ¹³C NMR spectroscopy: δ = 77.0 ppm (CDCl3). IR spectra were recorded on a Bruker Alpha FT-IR spectrophotometer. High-resolution mass spectra were recorded on a Bruker Apex Ultra 7.0 T FT-MS instrument using the ESI technique. HPLC chromatography on a chiral stationary phase was performed with an Agilent 1200 or Agilent 1260 HPLC system. Optical rotations were measured on a Krüss P8000-T polarimeter, with [α]D22 values reported in degrees and concentrations reported in g/100 mL. The EPR spectrometer was a Bruker ESP300 with a modified Varian rectangular X-band cavity; the modulation frequency was set to 100 kHz and the modulation amplitude to 0.1 mT. The Stern-Volmer quenching experiments were recorded on a SpectraMax M5 microplate reader in a 10.0 mm quartz cuvette. Light sources and emission spectra of the lamps A 21 W compact fluorescent lamp (CFL, OSRAM DULUX® SUPERSTAR Micro Twist) or 24 W blue LEDs (Hongchangzhaoming from Chinese Taobao, https://hongchang-led.taobao.com) served as light sources. Figures S1 and S2 display their emission spectra. Figure S1. Emission spectrum of the 21 W CFL lamp. Figure S2. Emission spectrum of the 24 W blue LEDs. Modifications for the Synthesis of Λ/Δ-RhO Racemic RhO complex was synthesized according to our previous procedures, 3 in which the enantiopure RhO was obtained through a proline-mediated route resulting in a loss of at least 50% of the rhodium complex. Herein, we modified the resolution process using a chiral auxiliary (R)-Aux, namely (R)-3-fluoro-2-(4-phenyl-4,5-dihydrooxazol-2-yl)phenol, instead of proline. The corresponding complexes Λ/Δ-(R)-RhO are stable and could be separated by flash chromatography, thus improving the atom economy of the catalyst synthesis.
General procedure A To a mixture of diethyl (cyanomethyl)phosphonate (20 mmol) and a 37% aqueous solution of formaldehyde (80 mmol), a saturated aqueous solution of potassium carbonate (37.5 mmol) was added at room temperature dropwise over 30 min. After stirring for an additional 2 h, the reaction was quenched with saturated aqueous ammonium chloride (20 mL). Afterwards, the reaction mixture was extracted with diethyl ether (3 × 12.5 mL). The organic layers were combined and dried over sodium sulfate. The solvent was evaporated using a rotary evaporator, and the remaining colorless oil was purified by flash chromatography using pentane/CH2Cl2 (2/1) giving the pure product S1 as a colorless oil (70% yeild). To a solution of S1 (14 mmol) in dry ether (20 mL) was added phosphorus(III) bromide (5 mmol) at 10 C. The temperature was allowed to rise to 20 C and stirring was continued for 3 h. Water (10 mL) was then added and the mixture was extracted with diethyl ether (3 × 30 mL). The organic phase was washed with brine (20 mL), dried with sodium sulfate and concentrated under reduced pressure. The crude product was purified by column chromatography on silica gel (pentane/ CH2Cl2, 1/1) to give S2 as a colorless oil (89% yield). To a solution of S2 (2.0 mmol) in methanol (5 mL) was added corresponding sodium aryl sulfinate (3.0 mmol). After 2.5 h of reflux, the mixture was concentrated under reduced pressure, the thereby obtained residue was dissolved in EtOAc and the mixture was washed with water, brine, dried with Na2SO4, filtered and the filtrate was evaporated and purified by chromatography (EtOAc/n-hexane, 1/1) to give corresponding products 2a-h. The characteristic data of 2a are in accord with literature. 4 General procedure B To a solution of corresponding alcohol (ROH, 10 mmol) and triethylamine (15 mmol) in acetone (15 mL) was added acryloyl chloride (13 mmol) dropwise at 0 C. After stirring at 0 C for 30 min, the reaction mixture was warmed to room temperature and stirred for additional 5 h. The resulting mixture was concentrated, then taken up in EtOAc (50 mL) and washed with brine (3  10 mL). The organic extracts were dried over anhydrous Na2SO4, concentrated by rotary evaporation. To a solution of a 37% aqueous solution of formaldehyde (7.0 mmol) and ester S3 (5 mmol) in 5 mL 1,4-dioxane-water (1:1, v/v) was added DABCO (7.0 mmol) and the reaction progress was monitored by TLC. Upon completion, the reaction mixture was partitioned with EtOAc (50 mL) and water (20 mL). The organic layer was separated and washed with brine (5 mL), dried over anhydrous Na2SO4 and concentrated under reduced pressure. The crude product was purified by column chromatography on silica gel (EtOAc/n-hexane, 1/1) to afford corresponding alcohol ester S4. To a solution of S4 (5 mmol) in dry ether (10 mL) was added phosphorus(III) bromide (1.7 mmol) dropwise at 10 C. The temperature was allowed to rise to 20 C and stirring was continued for 3 h. Water (20 mL) was then added and the mixture was extracted with diethyl ether (3 × 10 mL). The organic phase was washed with saturated sodium chloride solution (5 mL), dried with sodium sulfate and concentrated under reduced pressure. The crude product was purified by column chromatography on silica gel (pentane/CH2Cl2, 1/1) to give corresponding brominated compound S8 S5. To a solution of S5 (2.0 mmol) in methanol (5 mL) was added corresponding sodium aryl sulfinate (3.0 mmol). 
After 2.5 h of reflux, the mixture was concentrated under reduced pressure, the thereby obtained residue was dissolved in EtOAc and the mixture was washed with water, brine, dried with Na2SO4, filtered and the filtrate was evaporated and purified by chromatography (EtOAc/n-hexane, 1/1) to give corresponding products 2j-m. General procedure C To a solution of methyl phenyl sulfone (1.25 g, 8.0 mmol) in THF (40 mL) cooled at -78 ºC, n-BuLi (1.6 M in n-hexane, 5.5 mL, 8.8 mmol) was added dropwise under argon atmosphere. The resulting solution was stirred at 0 ºC for 30 min, and then cooled back to 78 ºC. A solution of 2,3,4,5,6-pentafluorobenzaldehyde (1.72g, 8.8 mmol) in THF (2.0 mL) was added dropwise and the temperature was allowed to slowly raise to room temperature, and the solution was stirred until methylphenylsulfone disappeared by TLC. A saturated aqueous solution of NH4Cl (20 mL) was added, the organic layer was separated and the aqueous layer was extracted with CH2Cl2 (3  10 mL). The combined organic layers were dried with Na2SO4 and evaporated under reduced pressure. Without further purification, the resulting alcohol was dissolved in dry CH2Cl2 (25 mL) under argon atmosphere, cooled to 0 ºC, then Et3N (11.2 mL, 80 mmol) and methanesulfonyl chloride (0.93 mL, 12 mmol) were added continuously. After stirring at room temperature for 90 min, a saturated aqueous solution of NH4Cl (30 mL) was added, the organic layer was separated and the aqueous layer was extracted with CH2Cl2 (3  15 mL). The combined organic layers were dried (Na2SO4) and the solvent was evaporated. The residue was purified by flash chromatography (n-hexane/EtOAc, 5/1) to afford compound 5 (1.73g, 65%) as a white solid. S9 General Procedure A dried 10 mL Schlenk tube was charged with 2a (20.7 mg, 0.10 mmol), Δ-RhO (6.6 mg, 8.0 mol%) and HE-1 (42.2 mg, 0.15 mmol, synthesized following a reported procedure 6 ). The tube was purged with nitrogen for three times. Then, 1,4-dioxane (1.0 mL, 0.10 M, bubbling with nitrogen gas for five minutes before addition) was added via syringe followed by addition of 1a (32.8 mg, 0.2 mmol) under nitrogen atmosphere. The tube was sealed and positioned approximately 5 cm away from a 21 W compact fluorescent lamp. The reaction was stirred at room temperature for the indicated time (monitored by TLC) under nitrogen atmosphere. Afterwards, the mixture was diluted with CH2Cl2. The combined organic layers were concentrated under reduced pressure. The residue was purified by flash chromatography on silica gel (n-hexane/EtOAc) to afford the products 3a and 4a. Racemic samples were obtained by carrying out the reactions with rac-RhO. The enantiomeric excess was determined by HPLC analysis on a chiral stationary phase. S16 Table S1. Effect of Lewis acid catalysts a a Reaction conditions: 1a (0.20 mmol), 2a (0.10 mmol), Lewis acid (8.0 mol%) and HE-1 (0.15 mmol) in 1,4-dioxane (0.1 M) were stirred at room temperature for 24 h with a 21 W CFL. b Δ-RhPP was synthesized according to our previous report. 7 c (20 mol%) of Sc(OTf)3 was employed. d (200 mol%) of LiBF4 was employed. The reduced product, 1-(3,5-dimethyl-1H-pyrazol-1-yl)butan-1-one 8 , was detected in less than 5% yield. Figure S1 for emission spectrum. c See Figure S2 for emission spectrum. S18 Identification of RhO-1a Intermediate To a solution of rac-RhO (83.1 mg, 0.1 mmol) in CH2Cl2 (2 mL) was added α,β-unsaturated N-acylpyrazole 1a (16.4 mg, 0.1 mmol). 
The mixture was stirred at room temperature for 1 minute and then the solvent was removed in vacuo. The procedure was repeated three more times until the ligand exchange was complete (as detected by ¹H NMR). The resulting solid was recrystallized from CH2Cl2/Et2O giving pure RhO-1a, which was characterized by single crystal X-ray diffraction (see Section 11). Using BHT as a radical trap As shown above, when BHT (3.0 equiv) was added to the reaction 1a + 2a → 3a + 4a under standard conditions, the reaction was significantly inhibited, delivering 3a and 4a in decreased yields. Using TEMPO as a radical trap When TEMPO (3.0 equiv) was added to the reaction 1a + 2a → 3a + 4a under standard conditions, the reaction was completely inhibited. Using 1,1-diphenylethylene as a radical trap When 1,1-diphenylethylene (3.0 equiv) was added to the reaction 1a + 2a → 3a + 4a under standard conditions, the reaction was partly inhibited and the yields of the products decreased to 20% for 3a and 22% for 4a. All these control experiments indicate that radical processes might be involved in the present transformation. UV-Vis absorption spectra and luminescence emission spectra As shown in Figure S3, both HE-1 and RhO-1a absorb visible light with wavelengths < 425 nm. In order to simulate the reaction conditions, the luminescence quenching experiments were performed with the photoredox mediator Hantzsch ester alone (see Section 6.3.2) and with a mixture of Hantzsch ester and RhO in a molar ratio of 2:1 (see Section 6.3.3), respectively. Figure S3. UV-Vis absorption spectra and luminescence emission spectra. Concentration for absorption spectra in 1,4-dioxane: HE-1 = 0.05 mM, RhO = 0.05 mM, RhO-1a = 0.05 mM. Concentration for emission spectra of HE-1 in 1,4-dioxane = 0.5 mM. Quenching experiments with the Hantzsch ester alone Solutions of HE-1 (0.5 mM in 1,4-dioxane) were excited at λ = 360 nm and the emission was measured at 455 nm (emission maximum). For each quenching experiment, the solution (1 mL) was degassed with a nitrogen stream for 5 minutes before the emission intensity was measured; the potential quenchers examined in this way were not capable of quenching the emission (Figure S4). RhO might quench the luminescence of HE-1 via competitive absorption (inner filter effect). 10 Considering the similar absorption of RhO-1a and RhO (Figure S3), the in situ generated RhO-1a can quench the luminescence of the mixture of HE-1 and RhO slightly (Figure S5), indicating that RhO-1a might undergo a photoinduced electron transfer with HE-1. Furthermore, RhO-1a, as the major rhodium species present, is most likely responsible for oxidative quenching of photoexcited HE-1, which is further supported by cyclic voltammetry studies (see Section 6.4). Cyclic Voltammetry All cyclic voltammetry experiments were carried out using analytical grade CH2Cl2 as the solvent containing 0.1 M Bu4NPF6 as the electrolyte and 1 mM of the analyte. Cyclic voltammetry experiments were conducted with a computer-controlled Eco Chemie Autolab PGSTAT302N potentiostat in a Metrohm electrochemical cell containing a 1 mm diameter planar glassy carbon (GC) disk electrode (eDAQ), a platinum wire auxiliary electrode (Metrohm) and a silver wire miniature reference electrode (eDAQ) that was connected to the test solution via a salt bridge containing 0.5 M nBu4NPF6 in CH3CN. Accurate potentials were referenced to the ferrocene/ferrocenium (Fc/Fc+) redox couple, which was used as an internal standard.
All solutions used for the voltammetric experiments were deoxygenated by purging with high purity argon gas and measurements were performed in a Faraday cage at room temperature (22 ± 2 o C). Substrate 1a showed one chemically irreversible reduction process with a cathodic peak potential (Ep red ) at -2.59 V vs. Fc/Fc + ( Figure S6, red curve). RhO-1a could be reduced with an Ep red at approximately -1.62 V vs. Fc/Fc + and oxidised with an Ep ox at approximately +1.32 V vs. Fc/Fc + , both in chemically irreversible processes ( Figure S6, blue curve). It is noteworthy that coordination of the cyclometalated rhodium catalyst could significantly decrease reductive potential of 1a. Besides, HE-1 could be oxidised in a chemically irreversible process with an anodic peak potential (Ep ox ) at approximately 0.50 V vs. Fc/Fc + ( Figure S7). According to luminescence emission spectra ( Figure S3, maximum wavelength = 455 nm, corresponding to 2.73 eV), the redox potential of photoexcited HE-1 is estimated as -2.23 V vs. Fc/Fc + , which is feasible to selectively reduce RhO-1a instead of free 1a. Figure S6. CV of compound 1a and RhO-1a. Figure S7. CV of compound HE-1. Trapping experiments with but-3-en-2-one To trap the sulfonyl radical, but-3-en-2-one which is not able to bind the Rh catalyst was added to act as a sulphonyl trap. As shown, the expected radical trapping product 4a' could be obtained in S24 28% yield, along with the formation of 3a and 4a, indicating the involvement of sulfonyl radical. 133.9, 129.4, 127.9, 50.5, 35.8, 29.8. All characteristic data are consistent with the literature report. 11 EPR experiments EPR spectra were recorded at room temperature using DMPO (5,5-dimethyl-1-pyrroline N-oxide) as free radical spin trapping agent. According to general procedure, the reaction of 1a and 2a under standard conditions with the addition of 10 µL DMPO solution (1M in H2O) was stirred with 21 W CFL for 30 min. Then, a portion of the reaction mixture was taken out to an EPR tube and measured by EPR (9.18142 GHz; Mod. Frequency = 100 kHz; Mod. Ampl. = 0.08 mT). As shown in Figure S8, two sets of signals were observed, one of which is simulated as signals 1 with 6 lines (g = 2.006; αN = 9.5 G, αH β = 12.9 G) and further signed as EPR signals of sulfonyl radical adducts. 12 These results suggest more than one radical species including sulfonyl radicals are involved in this transformation. Determination of the Quantum Yield The quantum yield of the title reaction 1a+2a3a+4a was determined by a method and setups developed by Prof. Dr. Eberhard Riedle's Group. 13 As light source 420 nm LEDs were employed. A powermeter was used as detector. The measurement was accomplished in a dark room with a 1.1 W red LEDs. Step 1: The radiant power of light transmitted by the cuvette with a blank solution was measured as Pblank = 46.25 mW. Step 4: The overall quantum yield can be calculated as following: where Nproduct is the number of product 3a formed; Nphoton is the number of photons absorbed; NA is Avogadro's constant; nproduct is the molar amount of product 3a formed; Pabsorbed is the radiant power absorbed; t is the irradiation time; h is the Planck's constant; c is the speed of light; λ is the wavelength of light source, Pblank is the radiant power transmitted by the cuvette with a blank solution; Psample is the radiant power transmitted by the cuvette with reaction mixture. 
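The quantum-yield arithmetic outlined in the steps above can be written out explicitly: the absorbed radiant power is the difference between the power transmitted by the blank and by the reaction mixture, the number of absorbed photons follows from the photon energy at 420 nm, and the quantum yield is the ratio of molecules of 3a formed to photons absorbed. The sketch below is a hedged reconstruction of that calculation, not the authors' script; apart from Pblank (46.25 mW) and the 420 nm wavelength quoted above, the numerical inputs are placeholders.

```python
# Minimal sketch (assumed placeholder values marked below):
#   Phi = N_product / N_photon, with
#   N_photon = P_absorbed * t * lambda / (h * c)  and  N_product = n_product * N_A
h   = 6.626e-34          # Planck constant, J s
c   = 2.998e8            # speed of light, m/s
N_A = 6.022e23           # Avogadro's number, 1/mol

wavelength = 420e-9      # m, 420 nm LEDs (from the text)
P_blank  = 46.25e-3      # W, power transmitted by the blank solution (from the text)
P_sample = 30.0e-3       # W, power transmitted by the reaction mixture (placeholder)
t = 3600.0               # s, irradiation time (placeholder)
n_product = 2.0e-5       # mol of 3a formed during irradiation (placeholder)

P_absorbed = P_blank - P_sample                    # W absorbed by the sample
N_photon = P_absorbed * t * wavelength / (h * c)   # photons absorbed
N_product = n_product * N_A                        # molecules of 3a formed

phi = N_product / N_photon
print(f"overall quantum yield Phi = {phi:.3f}")
```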
Single-Crystal X-Ray Diffraction Studies X-ray data were collected with a Bruker 3-circle D8 Quest diffractometer with MoKα radiation (microfocus tube with multilayer optics) and a Photon 100 CMOS detector at 100 K. Scaling and absorption correction were performed using the SADABS software package from Bruker. Structures were solved using direct methods in SHELXT and refined using the full-matrix least-squares procedure in SHELXL-2014. The hydrogen atoms were placed in calculated positions and refined as riding on their respective C atoms, with Uiso(H) set at 1.2 Ueq(Csp2) and 1.5 Ueq(Csp3). Disorder was refined using restraints for both the geometry and the anisotropic displacement factors. The absolute configurations of 4d and 9 were determined. Crystal structure of RhO-1a Single crystals of RhO-1a suitable for X-ray diffraction were obtained by slow diffusion from a solution of racemic RhO-1a (20 mg) in CH2Cl2 (2.0 mL) layered with ethyl ether (1.0 mL) at room temperature for several days in an NMR tube. The crystal structure, data, and details of the structure determination for RhO-1a are presented in Figure S102 and Table S6. Crystal structure of 4d Single crystals of 4d suitable for X-ray diffraction were obtained by slow diffusion from a solution of 4d (20 mg) in ethyl ether (0.5 mL) layered with n-hexane (0.5 mL) at 4 °C for several days in an NMR tube. The crystal structure, data, and details of the structure determination for 4d are presented in Figure S103 and Table S7. Figure S103. Crystal structure of 4d. Crystal structure of 9 Single crystals of compound 9, which was obtained via transamidation of (R)-3a (obtained from the reactions catalyzed by Λ-RhO, see Section 8.3), suitable for X-ray diffraction were obtained by slow diffusion from a solution of 9 (30 mg) in CH2Cl2 (0.5 mL) layered with n-hexane (0.5 mL) at room temperature for several days in an NMR tube. The crystal structure, data, and details of the structure determination for 9 are presented in Figure S104 and Table S8. Figure S104. Crystal structure of 9. Table S8. Crystal data and structure refinement for 9.
2018-04-03T00:21:10.543Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "184a6c7004a054d63484a0747bbc5806d37fe4a8", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/sc/c7sc02621h", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "56f05148c03f8e802000c0cea1c4298fe86b12ab", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
46963174
pes2o/s2orc
v3-fos-license
Times series analysis of age-specific tuberculosis at a rapid developing region in China, 2011–2016 The city of Shenzhen has recently experienced extraordinary economic growth accompanied by a huge internal migrant influx. We investigated the local dynamics of tuberculosis (TB) epidemiology in the Nanshan District of Shenzhen to provide insights for TB control strategies for this district and other rapidly developing regions in China. We analyzed the age-specific incidence and number of TB cases in the Nanshan District from 2011 to 2016. Over all, the age-standardized incidence of TB decreased at an annual rate of 3.4%. The incidence was lowest amongst the age group 0–14 and showed no increase in this group over the six-year period (P = 0.587). The fastest decreasing incidence was among the 15–24 age group, with a yearly decrease of 13.3% (β = 0.867, P < 0.001). In contrast, the TB incidence increased in the age groups 45–54, 55–54, and especially in those aged ≥65, whose yearly increase was 13.1% (β = 1.131, P < 0.001). The peak time of TB case presentation was in April, May, and June for all age groups, except in August for the 45–54 cohort. In the rapidly developing Nanshan District, TB control policies targeted to those aged 45 years and older should be considered. The presentation of TB cases appears to peak in the spring months. In Nanshan, the overall unadjusted incidence of TB has witnessed a continuous decline, with an annual decrease of 6% between 2011 and 2015. For subareas of Nanshan, a spatial-temporal analysis study revealed a low-but-rising TB incidence in the district's central region, where high tech enterprises were clustered, and a high-but-declining incidence of TB in the northwest region where several labor-intensive factories are located 12 . However, the associations between demographics and TB incidence have not been analyzed. Poisson regression based Serfling methods have proved accurate for estimation of seasonal variation in the presentation of cases and have also been used to detect TB outbreaks 13,14 . We similarly employed a Poisson Serfling regression model to estimate the seasonal amplitude in TB cases of Nanshan. Results General characteristics of TB cases. From 2011 to 2016, there were a total of 5497 TB cases reported in Nanshan, whose general characteristics are shown in Table 1. The two age groups with the greatest percentage of the TB cases were 25-34 years and 15-24 years, accounting for 36.9% and 29.0% of total TB cases, respectively. There were nearly twice as many cases in males as in females. About 44.8% of PTB cases were clinically diagnosed without microbiological confirmation. Only 7 extra-PTB cases were reported in the years 2014-2016, but in China pleural disease is registered as PTB. Using the WHO criteria of extra-pulmonary TB, which includes pleural tuberculosis involvement, there were 575 extra-pulmonary TB cases in Nanshan, which accounted for 10.5% of all TB patients, a little more than the average 8% in China, according to the global tuberculosis report 2017 1 . Annual notified TB incidence. The crude incidence of TB for the entire population of Nanshan decreased from 67.2 per 100000 population in 2011 to 48.3 in 2013, followed by a smaller decrease from 2013 to 2015, and then a marked decrease in 2016. After adjusting for age, however, the trend showed a less steep decline. The adjusted incidence for the entire population dropped from 56.4 in 2011 to 47.5 in 2016, with an annual decline rate of 3.4% (P < 0.001) (see Fig. 1a). 
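For readers unfamiliar with direct standardization, the age-adjusted incidences reported above amount to a weighted average of the age-specific crude rates, with weights taken from the WHO standard world population. The sketch below illustrates the arithmetic only; the rates and weights shown are placeholders, not the study data or the official WHO weights.

```python
# Minimal sketch of direct age standardization (illustrative numbers only).
# Age-specific crude rates (per 100,000) and standard-population weights for the
# same age bands; both vectors below are placeholders, not study data.
age_bands   = ["0-14", "15-24", "25-34", "35-44", "45-54", "55-64", "65+"]
crude_rates = [0.8, 145.0, 90.0, 55.0, 40.0, 35.0, 55.0]      # hypothetical rates
std_weights = [0.26, 0.17, 0.16, 0.14, 0.11, 0.09, 0.07]      # hypothetical WHO-style weights

assert abs(sum(std_weights) - 1.0) < 1e-9
adjusted = sum(r * w for r, w in zip(crude_rates, std_weights))
print(f"age-adjusted incidence = {adjusted:.1f} per 100,000")
```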
The age-specific TB incidence is illustrated in Table 2 and Fig. 1b. From 2011 to 2016, the incidence amongst the age group 0-14 years remained lower than 1 per 100000 population and did not change significantly over the six-year period (P = 0.587). After adjusting for gender and population size, the TB incidence showed a significant decrease in the age groups 15-24, 25-34 and 35-44, while increasing in the older age-groups 45-54, 55-64, ≥65. Although the crude TB incidence was consistently highest amongst those aged 15-24 years, with an average incidence of 145.3 during the study period, the incidence dropped from 207.6 in 2011 to 95.2 in 2016, with a yearly decrease of 13.3% (β = 0.867, P < 0.001). In contrast, the incidence among those aged ≥65 shifted from the 5 th highest incidence (54.9) age group in 2011 to the 2 nd highest (58.5) in 2016, with an annual increase of 13.1% (β = 1.131, P < 0.001). Monthly TB cases' trend and peak time detection. Using more detailed monthly data, a Serfling regression model showed a rising quadratic trend for all TB cases. Very similar to the yearly incidence data, increasing trends were detected among the age-groups of 45-54 and ≥65, and decreasing trends amongst age-groups 15-24, 25-34 and 35-44 (see Fig. 2). The peak months for the presentation of TB cases was May for age-groups of 15-24 and 55-64, June for those aged 25-34 and 35-44, August for the age group 45-54, and April for age-group of ≥65. The amplitudes were less than 20% for those aged 25-34, 34-44 and 45-54 years old, and more than 40% for age groups of 55-64 and ≥65 (see Table 3). Discussion We used age-specific TB data and regression models to estimate the temporal changes in incidences and the peak time of case presentation among different age groups in Nanshan in the period from 2011 to 2016. We found that the variations of the temporal changes and peak time among different age groups pose challenges to TB control. The TB incidence in Nanshan showed a decreasing trend in the 25-34 and 35-44 age groups, with the greatest annual decrease (13.3%) in the 15-24 group. We also found that the incidence of TB in the 0-14 age group, which makes up an increasing proportion of the population, was consistently the lowest of all age groups and shows no evidence of increasing. This is distinct from the situation in most of China where, from 2010 to 2015, the incidence of TB increased among children both under 5 (10.68 to 16.59) and 5-14 years (15.62 to 21.32) 15 . Children and adolescents are generally at the highest risk for TB infection 16,17 , so the decreasing incidence among the younger population could indicate a decrease in active TB transmission in Nanshan as a result of the implementation of local TB control measures. Contrary to the stable decreases among other age groups, during the 2011 to 2016 period we detected an increase in the TB incidence among age groups 45-54, 55-64 and ≥65. It appears plausible that the group of ≥65 could soon replace 15-24 as the age group with the highest incidence of TB. Similar results were found in a study Figure 1. Yearly notified TB incidence in Nanshan. (a) Crude incidence among whole population and age-adjusted incidence using the WHO standard World population age distribution; (b) age-specific crude incidence. In the background of an ageing population in China and a maturing demographic in Shenzhen City and Nanshan District, TB control policies targeted to the population of age 45 and older should be considered. 
A mathematical model of TB transmission at the country level in China suggested that a combination of active case finding among the >65 population, increasing DOTS coverage, reducing treatment time, and increasing treatment success rates could reduce TB incidence by 59% (50-76%), and with the addition of preventive therapy for the elderly the incidence could be reduced by 84% (78-93%) 19 . Studies of the seasonality of TB reported in the United States 20 , Israel 21 , India 22 , Portugal 23 and China [24][25][26][27] have shown that the spring months are often the peak period of TB case presentation, although some studies found that the seasonal trend varied with age. Kohei et al. analyzed the active TB cases in Japan from 1998 to 2013 using maximum entropy spectral analysis and the least squares method 28 . They observed that the peak time was June and July for the 10-39 age group, while the peak for the ≥70 age group was in August and September. A study on 27 . In our study, the predominant peak time for most age groups was during the spring months (April, May or June), except for the age group 45-54, whose incidence peaked in August. The mechanisms responsible for the seasonality of tuberculosis are not clear. Some studies have shown that TB seasonality correlates with the concentration of PM2.5 particulate matter in the air 29 and with latitude 30 . Others have proposed that increased transmission due to indoor crowding and vitamin D deficiency during the winter, followed by an incubation period and then delays in diagnosis, could potentially explain a spring peak in TB incidence 31,32 . In Shenzhen, the mean delays for care seeking and diagnosis among all TB patients were 34.0 and 15.1 days, respectively 33 . However, the longest delay in care seeking was 50.5 days among the age group 45-60, which might partially explain the later August presentation peak for the age group 45-54. There are potential limitations in this study. First, there is no information on the official registered household address of the patients in the data, so it was not possible to determine what percentage of each age cohort of TB patients were migrants. The TB referral and diagnostic procedures in Nanshan are the same for both residents and migrants. Second, it is possible that there was underreporting of TB cases in one or more age groups that would alter the observed trends. This seems unlikely, however, because in the WHO TB report 2017 China was estimated to have a low level of underreporting. In addition, in Nanshan the TB examinations and drugs are free, and patients receive reimbursements for transportation and nutrition.
Conclusion In Nanshan, within the context of industry upgrading and a rapidly developing economy, the instituted TB control activities, including active case finding, high treatment success and strict school TB control, appear to have been successful in reducing the disease incidence in the younger population. TB control policies targeted to the migrant and resident populations aged 45 years and older should also be considered. The presentation of TB cases appears to peak in the spring months, except for the summer peak among the age group 45-54.
Methods Setting. The Nanshan district is in the southwest of the city of Shenzhen, across the bay from Hong Kong, with geographic coordinates of latitude from 22°24′ to 22°39′ and longitude from 113°53′ to 114°1′. Nanshan has a population of 1.36 million, of whom 0.55 million (40.3%) are migrants.
Its GDP ranked 3rd of all counties in China 34 , with a per capita GDP that reached US$43,700 in 2016. There were more than 2000 high-tech enterprises and 122 listed companies in Nanshan, ranking 2nd of all counties in China. Besides its fast economic development, Nanshan has also been a pilot site for introducing new TB control strategies in China. In 1995, Nanshan was one of the first counties in China to launch the DOTS (Directly Observed Therapy, Short-course) strategy, supported by a TB control project funded by the World Bank. In 2006, Nanshan initiated the "Floating Population Tuberculosis Control project" and "MDR Tuberculosis Control project", which were sponsored by the Fifth Round of the China-Global Fund TB Program, resulting in a good relationship between floating-population TB patients and public health institutions 35 . Also, in Nanshan a three-tier TB control network was established, stipulating that all community health centers and hospitals must report and refer suspected TB cases to a single government TB health center. Perhaps partially owing to free anti-TB medication and medical tests, in addition to transportation-and-nutrition reimbursement and rigorous DOTS, the treatment success rate for new smear-positive TB patients is greater than 85% 6 . Based on the National Operation Specification of School TB Control (pilot edition in 2010 and updated edition in 2017), Nanshan developed a local handbook for TB control in schools in 2017. The measures outlined include daily school TB surveillance, health education, preventing TB patients from attending school until they are successfully treated, and strict investigation of close contacts.
Data. We retrospectively analyzed all TB cases newly reported to the National Information System for Disease Control and Prevention (NISDCP) from 2011 to 2016, according to the audit date of each case report. Both pulmonary and extra-pulmonary cases were included in the study. PTB cases were further categorized into four groups: sputum smear positive (SS+), sputum smear negative but culture positive (Culture+), clinically diagnosed cases (SS−), and tuberculosis pleurisy (Pleurisy). Pleural disease, either with or without TB disease in the lungs, is reported as PTB, according to the criteria in China. The population data were obtained from the local government of the Nanshan District of Shenzhen. As all data on TB patients in this study were collected retrospectively from the NISDCP, patient informed consent was not required. The study was approved by the Institutional Review Board of the Shenzhen Nanshan Center for Chronic Disease Control. The datasets analyzed during the current study are available from the corresponding author upon reasonable request.
Statistical Analysis. The yearly crude age-specific incidence was calculated as age-specific cases divided by the mid-year population. The WHO standard world population was used to generate age-adjusted incidence by direct standardization 36 . Secular trends of age-specific TB incidence were analyzed by multivariate Poisson regressions, using the general linear model with log link and adjusted for gender. In accordance with the Poisson (count) model, log(incidence) was decomposed into log(cases) and log(population), with the latter being moved to the right-hand side of the equation.
The multivariate Poisson regression model for each age group was: log(cases) = α_0 + offset(log(population)) + α_1*year + α_2*gender + ε, where "year" was the year when cases were reported and audited, "gender" was coded as a dummy variable (female = 0, male = 1), and ε was the random residual. The Poisson Serfling cyclical regression method using monthly TB cases was performed to analyze the temporal trends and to detect the peak months of TB case presentation for each age group. The coefficients of the regression model were estimated by ordinary least squares, and model selection was based on minimization of the Akaike information criterion (AIC). For each age group, the full Poisson Serfling cyclical regression model was: log(Y_t) = α_0 + α_1*t + α_2*t^2 + γ_1*cos(2πt/12) + δ_1*sin(2πt/12) + γ_2*cos(4πt/12) + δ_2*sin(4πt/12) + γ_3*cos(6πt/12) + δ_3*sin(6πt/12) + γ_4*cos(8πt/12) + δ_4*sin(8πt/12) + ε_t, where Y_t is the number of TB cases reported in month t, and t = 1, 2, 3, …, 72. The model thus contains a secular trend and cyclical components for the seasonal trend. All data were processed in Microsoft Excel 2013 and analyzed in R (version 3.3.1 Patched for x64 Windows systems).
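To make the Serfling specification above concrete, here is a minimal Python sketch (using statsmodels rather than the authors' R workflow) that fits a cyclical Poisson regression with a quadratic secular trend and four harmonic pairs, reports the AIC used for model selection, and reads off the fitted peak month. The simulated monthly counts are placeholders, not the Nanshan series.

# Minimal sketch of a Serfling-type cyclical Poisson regression, under the
# assumption of simulated data; not the authors' original code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(1, 73)  # months 1..72 (2011-2016)

# Simulated series with a mild trend and a spring peak, for illustration only
mu = np.exp(4.0 + 0.002 * t + 0.15 * np.cos(2 * np.pi * (t - 5) / 12))
y = rng.poisson(mu)

X = pd.DataFrame({
    "t": t,
    "t2": t ** 2,
    "cos12": np.cos(2 * np.pi * t / 12), "sin12": np.sin(2 * np.pi * t / 12),
    "cos6":  np.cos(4 * np.pi * t / 12), "sin6":  np.sin(4 * np.pi * t / 12),
    "cos4":  np.cos(6 * np.pi * t / 12), "sin4":  np.sin(6 * np.pi * t / 12),
    "cos3":  np.cos(8 * np.pi * t / 12), "sin3":  np.sin(8 * np.pi * t / 12),
})
X = sm.add_constant(X)

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("AIC:", model.aic)  # compare nested models and keep the one with lowest AIC

# Peak month: evaluate the fitted seasonal component over one 12-month cycle
months = np.arange(1, 13)
seasonal = sum(
    model.params[f"cos{p}"] * np.cos(2 * np.pi * k * months / 12)
    + model.params[f"sin{p}"] * np.sin(2 * np.pi * k * months / 12)
    for k, p in zip([1, 2, 3, 4], [12, 6, 4, 3])
)
print("peak month:", months[np.argmax(seasonal)])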
2018-06-08T14:06:53.531Z
2018-06-07T00:00:00.000
{ "year": 2018, "sha1": "37400f5ef79f99b5e13195de7d764a83f9c54ca0", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-27024-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d9d494b16537e196d423648092c27ca12574831", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
220837851
pes2o/s2orc
v3-fos-license
Renal expression of cytokines and chemokines in diabetic nephropathy Background Diabetic nephropathy (DN) is the leading cause of end-stage renal disease worldwide. Inflammatory mediators have been implicated in the pathogenesis of DN, thus considered an inflammatory disease. However, further studies are required to assess the renal damage caused by the action of these molecules. Therefore, the objective of this study was to analyze the expression of cytokines and chemokines in renal biopsies from patients with DN and to correlate it with interstitial inflammation and decreased renal function. Methods Forty-four native renal biopsies from patients with DN and 23 control cases were selected. In situ expression of eotaxin, MIP-1α (macrophage inflammatory protein-1α), IL-8 (interleukin-8), IL-4, IL-10, TNF-α (tumor necrosis factor-α), TNFR1 (tumor necrosis factor receptor-1), IL-1β, and IL-6 were evaluated by immunohistochemistry. Results The DN group showed a significant increase in IL-6 (p < 0.0001), IL-1β (p < 0.0001), IL-4 (p < 0.0001) and eotaxin (p = 0.0012) expression, and a decrease in TNFR1 (p = 0.0107) and IL-8 (p = 0.0262) expression compared to the control group. However, there were no significant differences in IL-10 (p = 0.4951), TNF-α (p = 0.7534), and MIP-1α (p = 0.3816) expression among groups. Regarding interstitial inflammation, there was a significant increase in IL-6 in scores 0 and 1 compared to score 2 (p = 0.0035), in IL-10 in score 2 compared to score 0 (p = 0.0479), and in eotaxin in score 2 compared to scores 0 and 1 (p < 0.0001), whereas IL-8 (p = 0.0513) and MIP-1α (p = 0.1801) showed no significant differences. There was a tendency for negative correlation between eotaxin and estimated glomerular filtration rate (eGFR) (p = 0.0566). Conclusions Our results indicated an increased in situ production of cytokines and chemokines in DN, including IL-6, IL-1β, IL-4, and eotaxin. It was observed that, possibly, eotaxin may have an important role in the progression of interstitial inflammation in DN and in eGFR decrease of these patients. Background Diabetic nephropathy (DN) is a chronic microvascular complication that affects about 20 to 30% of patients with type 2 diabetes mellitus (T2DM). It is considered the leading cause of end-stage renal failure requiring renal replacement therapy [1,2], although its pathogenesis has not yet been fully elucidated. Immune and inflammatory mechanisms play important role in the development and progression of DN, which is considered a chronic inflammatory disease [3,4]. Several cells, such as monocytes, macrophages, and lymphocytes, as well as chemokines and cytokines, have been implicated in this process [5,6]. Among them, it is known that IL-1β, IL-6, TNF-α (tumor necrosis factor-α), IL-8, MIP-1α (macrophage inflammatory protein-1α) are relevant for the development of DN, as they are potentially involved in the onset of disease complications [7][8][9]. Patients with DN have a predominance of increased plasmatic and urinary levels of inflammatory mediators, both in early and end stages of the disease [9][10][11][12][13]. However, the extent of renal damage caused by immune cellderived cytokines and chemokines and the importance of such inflammatory mechanisms on the development and progression of DN requires further investigation [14,15]. Renal biopsies are considered the gold standard for diagnosis of glomerulopathies; however, diabetic patients are only subjected to renal biopsies in cases of atypical clinical courses of DN. 
Atypical presentations include microalbuminuria without diabetic retinopathy, a rapid decline in eGFR, rapidly increasing proteinuria, a sudden onset of nephrotic syndrome, hematuria, a period of less than 5 years from the diagnosis of diabetes to the onset of nephropathy or signs and symptoms of systemic diseases [16,17]. However, further studies using this type of samples to investigate mechanisms associated with the expression of inflammatory mediators involved in DN pathogenesis are required, as level of these mediators reflects the direct action of molecules in organs, as well as the relationship with DN. Therefore, this study aims to analyze the expression of cytokines and chemokines such as IL-1β, IL-6, IL-4, IL-10, TNF-α, TNFR1 (tumor necrosis fator receptor-1), IL-8, MIP-1α e eotaxin in renal biopsies from patients with DN and determine its correlation with interstitial inflammation and decreased renal function. Patients Forty-four cases of native renal biopsies from adult patients diagnosed with DN were selected from the Renal Pathology Service database of the Federal University of Triângulo Mineiro (UFTM), Uberaba-MG, Brazil, from 1996 to 2018. All cases of DN in patients over 18 years old, with satisfactory samples for analysis and without overlap with other renal diseases were included in the study. Control group (n = 23) consisted of kidneys obtained from autopsies of patients older than18 years, with no evidence of infection or previous renal changes. Cases with autolysis, acute tubular necrosis, and congestion with moderate to severe changes were excluded from control group. These samples were obtained from the Pathology Service of the University of São Paulo/ Ribeirão Preto. This study was approved by the Ethics and Research Committee of the Federal University of Triângulo Mineiro (no. 3.001.006). Renal histopathology The diagnosis of DN was performed with three samples used for light microscopy (LM), direct immunofluorescence (IF) and transmission electron microscopy (TEM) according to the standard procedures [18]. For LM, 2-μm paraffin sections were stained with hematoxylin and eosin (H&E), Sirius red, methenamine silver, and Masson's trichrome. LM was used to analyze morphological changes and interstitial inflammation. Interstitial inflammation in DN was scored as score 0 (absence of interstitial inflammation), score 1 (presence of inflammatory infiltrate exclusively around the atrophic tubules) and score 2 (inflammatory infiltrate also occurs in areas other than around atrophic tubules). DN classes were defined according to the pathologic classification of DN [19]. For IF, IgG, IgM, IgA, kappa and lambda light chains, C3 and C1q complement fractions and fibrinogen were detected in 2-μm frozen sections using fluorescein isothiocyanate (FITC)-conjugated antibodies (Dako, Copenhagen, Denmark). IF was used to exclude or identify renal diseases overlapping DN. For TEM, tissue was fixed in 2.5% Karnovsky + 0.2% ruthenium red, then fixed in 2% osmium tetroxide. Next, was dehydrated using a graded series of alcohol and acetone solutions before embedding in Epon 812 resin. Ultra-thin sections of 60 nm were prepared and placed in nickel grids. Sections were then stained with uranyl acetate and examined under a transmission electron microscope (EM-900; Zeiss, Germany) [18]. TEM was used to measure thickness of the glomerular basement membrane (GBM) and to exclude or identify renal diseases overlapping DN. 
All cases of DN overlapping with other renal diseases were excluded from the study.
Immunohistochemistry Immunohistochemistry was performed manually on slides containing 2-μm paraffin-embedded tissue sections using the Novolink non-biotin polymer system (Novolink Polymer Detection System Kit; BL, UK) according to the manufacturer's recommendations. Specifications of the antibodies used are summarized in Table 1.
Quantification of in situ immunostaining All fields of renal biopsy samples and 40 fields of autopsy kidney fragments, which included glomerular and tubulointerstitial compartments, were analyzed. Immunostained cells showing an intense brownish staining were marked by the observer using the interactive AxionCam ICc 5 (Zeiss, Germany) image analysis system with a 40× objective (final magnification of 1600×). Results were expressed as the percentage of marked area relative to the total area of the analyzed fields.
Statistical analysis A spreadsheet (Microsoft Excel) was created for statistical analysis. Data analysis was performed using GraphPad Prism version 7.0 (GraphPad Software, USA). Normality was tested using the Kolmogorov-Smirnov test. In cases of normal distribution and similar variances, the parametric ANOVA (F) test was used, followed by the post-hoc Tukey's test, together with Student's t-test (t). In cases of non-normal distribution, the Kruskal-Wallis (H) test was used, followed by the post-hoc Dunn's test, together with the Mann-Whitney (U) test. Proportions were compared by the Chi-square test (χ2). Pearson's test (r) was used to determine correlations for parametric variables and Spearman's test (rS) for non-parametric variables. Differences were considered statistically significant when p < 0.05.
(25%) as class IV, and 6 as classes I, IIa, and IIb, with 2 (4.5%) cases in each class. General characteristics of the patients are summarized in Table 2.
Role of cytokines and chemokines in diabetic nephropathy The expression profile of inflammatory cytokines and chemokines was analyzed in patients with DN, and this group showed a significant increase in IL-6 (p < 0.0001; U = 82, Fig. 1a and b), IL-1β (p < 0.0001; t = 5.16, Fig. 1c and d), and IL-4 (p < 0.0001; U = 182, Fig. 1e and f) and a decrease in TNFR1 (p = 0.0107; t = 2.631, Fig. 2a and b) compared to the control group. In contrast, there were no significant differences between groups for the cytokines IL-10 (p = 0.4951; t = 0.6862, Fig. 2c and d) and TNF-α (p = 0.7534; t = 0.3155, Fig. 2e and f). Analysis of chemokine expression showed a significant increase in eotaxin (p = 0.0012; U = 265.5, Fig. 3a and b) expression and a decrease in IL-8 (p = 0.0262; t = 2.275, Fig. 3c and d) expression in the DN group compared to the control group. However, there were no significant differences between groups in MIP-1α (p = 0.3816; Fig. 3e and f) expression.
Relation of cytokines and chemokines to interstitial inflammation in diabetic nephropathy After determining the cytokine and chemokine expression profile in DN, we analyzed how these inflammatory mediators could be related to the interstitial inflammation of this disease. There was a significant increase in IL-6 in scores 0 and 1 compared to score 2 (p = 0.0035; F = 6.592, Fig. 4a) and a significant increase in IL-10 in score 2 compared to score 0 (p = 0.0479; F = 3.295, Fig. 4b). For chemokines, there was a significant increase in eotaxin in score 2 compared to scores 0 and 1 (p < 0.0001; H = 19.19, Fig. 4c), whereas IL-8 (p = 0.0513; F = 3.208, Fig. 4d) and MIP-1α (p = 0.1801; F = 5.203, Fig. 4e) showed no significant differences between groups.
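The test-selection logic described in the Statistical analysis subsection (normality check, then parametric or non-parametric comparison, with Spearman correlation for non-parametric variables) can be sketched as follows. This is an illustrative Python example with simulated values, not the study's actual GraphPad workflow; the group sizes merely mirror the 23 control and 44 DN cases.

# Hedged sketch of the two-group test-selection logic, assuming simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.lognormal(mean=0.5, sigma=0.6, size=23)  # % immunostained area, control
dn = rng.lognormal(mean=1.0, sigma=0.6, size=44)       # % immunostained area, DN

def compare_groups(a, b, alpha=0.05):
    """Parametric test if both samples look normal, otherwise Mann-Whitney U."""
    normal = all(stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b))
    if normal:
        stat, p = stats.ttest_ind(a, b)
        return "t-test", stat, p
    stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "Mann-Whitney U", stat, p

test, stat, p = compare_groups(control, dn)
print(f"{test}: statistic={stat:.2f}, p={p:.4f}")

# Spearman correlation, e.g. chemokine expression vs. eGFR (simulated values)
egfr = rng.normal(60, 20, size=44)
rho, p_rho = stats.spearmanr(dn, egfr)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")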
Correlation between the estimated glomerular filtration rate (eGFR) and chemokine expression in diabetic nephropathy As there was a predominant increase in chemokine expression as interstitial inflammation progressed, the potential correlation between chemokine expression and decreased eGFR was analyzed in patients with DN. It was observed that only eotaxin tended to have a negative correlation with eGFR (p = 0.0566; rS = −0.3253, Fig. 5).
Correlation between proteinuria and cytokine/chemokine expression in diabetic nephropathy A possible association between proteinuria and cytokine/chemokine expression was tested in patients with DN. However, no significant correlation was found between proteinuria and IL-6 (p = 0.4123, rS = −0.1453) or IL-1β (p = 0.5869).
Discussion This study analyzed the in situ expression of cytokines and chemokines in the renal biopsies of patients with DN and related this expression to interstitial inflammation and eGFR, to improve our understanding of immune and inflammatory mechanisms that may act directly on the kidney and decrease renal function. Our results showed an increased expression of the proinflammatory cytokines IL-6 and IL-1β, as well as of the Th2 cytokine IL-4 and the chemokine eotaxin, in patients with DN. In contrast, TNFR1 and IL-8 expression was reduced in DN. These findings suggest that in DN there may be a simultaneous sharing of cytokine actions of the innate and acquired immune response, associated with the increase of a potent eosinophilic chemokine. A study with microalbuminuric T2DM patients demonstrated the activation of innate immunity in glomeruli of patients with T2DM and early nephropathy and suggested that enhanced Toll-like receptor 4 (TLR4) signaling, expressed in native renal cells, may contribute to the progression from microalbuminuric to macroalbuminuric nephropathy [20]. In addition, it has already been proposed that the identification of transcriptional networks shared between human and mouse glomeruli with DN may have a possible role in pathogenesis, which would allow prior selection of the mouse model that best mimics the human DN pathway under investigation [21]. Eotaxin expression may play an important role in DN interstitial inflammation, as its expression was increased exclusively in score 2, in which interstitial inflammation is found in areas other than interstitial fibrosis and tubular atrophy (IFTA) and which represents a more severe condition. On the other hand, IL-6 expression was higher in scores 0 and 1 compared to score 2, whereas IL-10 expression was higher in score 2 compared to score 0. Possibly, increased IL-10 expression in score 2 may be affecting IL-6 expression through its pro-fibrotic and anti-inflammatory action. In addition to the relationship between increased eotaxin expression and interstitial inflammation in patients with DN, there is a correlation between eotaxin expression and decreased eGFR. Therefore, eotaxin in DN may influence both interstitial inflammation and eGFR, indicating its possible role in DN pathogenesis. Kidney cells (endothelial, mesangial, epithelial, and tubular) are able to synthesize different cytokines and chemokines according to the cell type and stimulus. Cytokines, chemokines, growth factors, adhesion molecules, nuclear factors, and immune cells, such as monocytes, lymphocytes, and macrophages, have been previously demonstrated to be implicated in DN pathogenesis [22][23][24].
In this study, patients with DN showed increased expression of IL-6, IL-1β, IL-4 and eotaxin, and decreased expression of TNFR1 and IL-8, in both the glomerular and tubulointerstitial compartments. Thus, it was observed that cells from both renal compartments may be involved in DN pathogenesis, and in this study the expression of the analyzed cytokines and chemokines was similar in the different renal compartments. IL-1β and IL-6 are among the cytokines that play an important role in DN pathogenesis, affecting renal resident and infiltrating cells. IL-1β induces the expression of intercellular adhesion molecule 1 (ICAM-1) via mesangial and tubular cells and increases vascular permeability and chemokine expression, resulting in proliferation and synthesis of extracellular matrix (ECM) in the glomerular mesangium [25].
Fig. 2 In situ expression of TNFR1, IL-10, and TNF-α in glomerular and tubulointerstitial compartments in patients with diabetic nephropathy (DN) and control group. a TNFR1 expression in control and DN groups. b TNFR1 immunostaining in control and DN groups. c IL-10 expression in control and DN groups. d IL-10 immunostaining in control and DN groups. e TNF-α expression in control and DN groups. f TNF-α immunostaining in control and DN groups. Results are expressed as median (min-max). Horizontal lines represent the medians, the bars represent the 25-75% percentiles and the vertical lines represent the 10-90% percentiles.
IL-6 acts on mesangial cell proliferation and promotes ECM synthesis and GBM thickening, in addition to affecting vascular permeability and facilitating neutrophil infiltration into the tubulointerstitium, leading to DN progression [24,26]. Studies using experimental DN models have shown a correlation between increased renal expression of IL-1β and increased expression of chemotactic factors and adhesion molecules [27,28]. Previously, T2DM patients with DN were found to show an increased production of IL-6, which was associated with GBM thickness and was considered a strong marker of renal function decline [29]. As GBM thickening is the earliest morphological alteration in DN associated with increased IL-6 production, it is possible that increased in situ IL-6 expression occurs from the early stages of DN, whereby patients show increased expression even without interstitial inflammation (score 0) or in score 1. Moreover, actions associated with increased IL-1β and IL-6 expression promote greater cell infiltration into the kidney, which may exacerbate the inflammatory process and lead to impaired renal function. Studies have reported that serum IL-10 levels are elevated in T2DM patients with DN and that there is a positive correlation between IL-10 and albuminuria [30][31][32]. It has been shown that mononucleated cells are able to adopt an anti-inflammatory phenotype in the tissue repair process later in the course of inflammation, which is believed to occur after exposure to IL-10. These cells eliminate cellular and matrix debris and generally promote the resolution of renal inflammation, stimulating renal tubular cell proliferation and angiogenesis [33].
Fig. 4 Score 0 (n = 4), score 1 (n = 19) and score 2 (n = 20). a IL-6 expression in cases classified as scores 0, 1, and 2 for interstitial inflammation in the DN group. b IL-10 expression in cases classified as scores 0, 1, and 2 for interstitial inflammation in the DN group. c Eotaxin expression in cases classified as scores 0, 1, and 2 for interstitial inflammation in the DN group. d IL-8 expression in cases classified as scores 0, 1, and 2 for interstitial inflammation in the DN group. e MIP-1α expression in cases classified as scores 0, 1, and 2 for interstitial inflammation in the DN group. Results are expressed as median (min-max). Horizontal lines represent the medians, the bars represent the 25-75% percentiles and the vertical lines represent the 10-90% percentiles.
Fig. 5 Correlation between estimated glomerular filtration rate (eGFR) and in situ chemokine expression in patients with diabetic nephropathy (DN). Negative and significant correlation trend between eGFR and eotaxin in the DN group.
Increased IL-10 production most probably represents a compensatory mechanism in response to the increased expression of proinflammatory cytokines and acts as a negative regulator of inflammation, which corroborates our findings. Although IL-4 stimulates ECM synthesis through glomerular mesangial and epithelial cells, DN patients were found to have low serum levels of IL-4 [9]. Furthermore, no significant differences were found in the serum levels of this cytokine when comparing patients with and without DN [14]. However, our results show that the in situ expression of IL-4 is increased in patients with DN. The main morphological alteration associated with DN is progressive ECM accumulation, which may account for the increased IL-4 expression and suggests that the action of this cytokine in promoting ECM synthesis is more effective in situ than systemically. Studies with T2DM patients have shown that only the TNFR1 and TNFR2 receptors are associated with a risk of end-stage renal disease, wherein elevated serum levels of TNFR1 are associated with DN [34] and decreased renal function [35,36]. TNFR1 is mainly present in glomerular and endothelial cells of the peritubular capillaries [37]. High serum levels of TNFR1 have been associated with global sclerosis, increased ECM, decreased glomerular filtration, and foot process effacement in T2DM patients [38]. Although in vitro-activated TNFR1 induces tissue damage via proinflammatory signals and/or cell death [39], the mechanisms associating TNF receptors with DN remain unknown [38,40]. However, it has been shown that glomerular and tubular TNFR1 expression is not associated with a loss of renal function nor with any clinical parameters in DN patients [41]. Our results showed decreased in situ expression of TNFR1 in DN, which suggests that elevated serum levels of TNFR1 may be mostly implicated in DN progression. Eotaxin is a CC chemokine that is especially chemotactic for eosinophils, acting through activation of its CCR3 receptor, and is secreted by endothelial cells, macrophages, fibroblasts, and smooth muscle cells [42]. Roy et al. showed that eotaxin could be used as an independent predictor of renal failure. However, the relationship between an increased eotaxin plasma concentration and the progression to renal failure in diabetic patients remains poorly understood [43]. Elevated levels of urinary eotaxin are associated with prolonged hyperglycemia and microalbuminuria in T2DM patients [44]. In kidneys, eotaxin has been reported to contribute to renal interstitial eosinophilia; however, these results do not refer to DN [45]. The slow and continuous decline of renal function is associated with progressive tubulointerstitial damage and renal fibrosis, which is characterized by accumulation of leukocytes, fibroblasts and ECM, and by tubular atrophy [2,46].
Accumulation of macrophages and lymphocytes in interstitium is critical for tubular and interstitial damage, since these cells are the main sources of proinflammatory and pro-fibrotic cytokines [47,48]. Eotaxin is a potent chemoattractant chemokine and/or activator of eosinophils but may also be involved in the regulation of other cells. In atherosclerosis, smooth muscle cells express eotaxin and macrophages and mast cells express the CCR3 receptor, suggesting that eotaxin and its receptor contribute to recruitment and activation of inflammatory cells in ateroma [49]. It was also observed that Th2 lymphocytes, neutrophils, and bronchial endothelial cells also express the CCR3 receptor, suggesting the potential role of eotaxin in the non-eosinophilic inflammatory process [50,51]. Macrophages are the main inflammatory cells involved in renal damage, the accumulation of which is correlated with DN severity [5,52,53] and mesangial expansion [54]. Mast cells also infiltrate tubulointerstitial compartment and release inflammatory mediators and proteolytic enzymes. The intensity of macrophage infiltration and the extent of mast cell degranulation has been previously associated with the level of tubulointerstitial inflammation and with decreased eGFR in DN [55]. Thus, we suggest that the increased in situ expression of eotaxin may be related to its contribution to recruitment and activation of cells other than eosinophils, which strongly promotes further infiltration and accumulation of these cells in kidneys. This may also account for the decreased in situ expression of IL-8 found in patients studied here. Eotaxin action associated with inflammatory cytokines role may worsen the inflammatory process and impair renal function, which corroborates our findings and suggests that eotaxin exerts an in situ role in DN pathogenesis. In this study, most cases had advanced disease and we recognize this is a limitation of the study, clarified earlier. However, although 87% of our cases were in stage 3-4 of DN, this profile is expected in studies based on renal biopsies samples diagnosed with DN without overlapping with other non-diabetic renal disease [56], due to the indications of the procedure itself. Studies with DN patients showed that severity of glomerular and interstitial lesions had a significant impact on renal prognosis and could be used as independent risk factors for progression of DN [57,58]. Therefore, our findings indicate that in situ expression analysis of cytokines and chemokines, especially eotaxin, could be used to assist in analysis of renal function impairment based on the analysis of interstitial inflammation developed in patients with DN. Conclusions Our results show that in situ expression of cytokines and chemokines, including IL-6, IL-1β, IL-4 and eotaxin, is increased in patients with DN. It was observed that, possibly, eotaxin may have an important role in progression of interstitial inflammation in DN and in the decrease of eGFR of these patients.
2020-07-29T13:22:38.045Z
2020-07-28T00:00:00.000
{ "year": 2020, "sha1": "364a7441f2e1fc86d1beed5246734d57a183678f", "oa_license": "CCBY", "oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-020-01960-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43102cfa0b6de5ce0f13c3e8c19041181bcc98e1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
240040400
pes2o/s2orc
v3-fos-license
Efficacy of the Orally Disintegrating Strip Sildenafil for the Treatment of Erectile Dysfunction: A Prospective, Randomized Trial Introduction Phosphodiesterase 5 inhibitors are the predominant treatment option for erectile dysfunction. Aim This study evaluates the efficacy and safety of sildenafil orally disintegrating strips for the treatment of erectile dysfunction. Methods One hundred twenty erectile dysfunction patients were enrolled in a prospective, randomized, controlled crossover study and allocated into 2 groups of 60 participants. Patients were either treated with sildenafil strips or tablets for 8 weeks after which they crossed over into the alternate treatment formulation for another 8 weeks following a 4-week wash-out period. Each participant was assessed 8 times throughout the study period and their formulation preference registered at the end of the study. Main outcomes and measures Changes in the abridged International Index of Erectile Function (IIEF-5) score and Erection Hardness Score (EHS) resulting from sildenafil orally disintegrating strip or tablet treatments were the primary end points, with differences in onset of action, duration of action, and incidence of adverse events between the 2 formulations included as secondary end points. Results Both sildenafil formulations were effective in treating patients with erectile dysfunction. There was significant improvement of erectile function in term of IIEF-5 score and EHS from both formulations. The number and type of adverse events were also comparable. Likewise, there were no statistically significant differences between the earliest onset of action times and longest duration of action times. However, the results showed a 7.1-minute earlier onset of action time for orally disintegrating strips that may be considered as clinically meaningful by some patients. Conclusion Sildenafil orally disintegrating strips are a safe and effective alternative to the conventional tablet formulation for the treatment of erectile dysfunction. Sangkum P, Sirisopana K, Matang W, et al. Efficacy of the Orally Disintegrating Strip Sildenafil for the Treatment of Erectile Dysfunction: A Prospective, Randomized Trial. Sex Med 2021;9:100453. INTRODUCTION Erectile dysfunction (ED) is defined as the inability to attain and/or maintain penile erection sufficient for satisfactory sexual performance. 1 In an effort to estimate the prevalence of ED in Thailand, a nationwide study was conducted by the Thai Erectile Dysfunction Epidemiologic Study Group in 1998. 2 From a population of 1,250 men aged 40 −70 years residing in urban or municipal areas, ED was diagnosed in 37.5% of participants for whom ED prevalence increased with advancing age. 3 Similarly, recent reports establish that the prevalence of ED is higher in populations with risk factors such as diabetes, hypertension, cardiovascular disease, and smoking. 4−7 By virtue of the nature of the condition, ED has negative effects not only on a patient's quality of life but also on that of their partner. [8][9][10] Currently, phosphodiesterase 5 (PDE5) inhibitors are the predominant treatment for ED. PDE5 inhibitors act by suppressing the conversion of cyclic guanosine monophosphate (cGMP) to guanosine monophosphate (GMP) during penile erection. The ensuing increase in cGMP results in heightened and prolonged relaxation of smooth muscle, vasodilation of blood vessels, and subsequent improvement of penile erection. 
Introduced in 1998, sildenafil was the first PDE5 inhibitor to be approved for the treatment of ED. During its first 6 years on the market, sildenafil was used to treat more than twenty million men. Initial treatment regimens begin with 50 mg sildenafil, followed by dosage adjustments taking into account treatment responses and side effects. Sildenafil has an action duration of 8 hours with a T max of 30-60 minutes. However, T max can last longer in patients with a prolonged gastric emptying time. 11 In addition, the efficacy of sildenafil decreases when it is used by diabetic patients or by those who take sildenafil with a high-fat diet. 12 Tablet formulations are the conventional sildenafil delivery method. Orally disintegrating films are a novel orodispersible drug delivery system that can dissolve rapidly and disperse its payload when placed in the mouth without the need for water. 13 The film is hydrated by saliva in the oral cavity, thereby releasing the active ingredients for local and systemic absorption. Compared to tablets, orodispersible films provide a more convenient delivery method with a reduced risk of choking and a potentially accelerated onset. In a small sample study, Radiciono et al 14 confirmed that there was no statistically significant difference between the 100 mg sildenafil orally disintegrating strip (ODS) formulation and the 100 mg tablet formulation in terms of pharmacokinetics. These results suggest that the orodispersible film formulation may represent a viable alternative to the current products for the treatment of ED, with additional benefits coming in the form of patient convenience and acceptability, which could enhance treatment compliance owing to the film's ease of use. In Thailand, while the conventional sildenafil tablet formulation has been in use since 2000, alternative formulations such as the sildenafil ODS are not yet widely available. Comparisons of the different drug delivery methods have mainly focused on bioequivalence studies, with a dearth of clinical study data regarding the ODS formulation. In answer to this shortage, we conducted a prospective, randomized, controlled crossover study to compare the efficacy and safety of sildenafil ODS with that of conventional sildenafil tablets in a Thai population. In addition to the primary objective of assessing the efficacy of sildenafil strips for the treatment of ED, secondary objectives include determining onset of action and duration of action for sildenafil orally disintegrating strips while identifying treatment side effects and evaluating patient satisfaction. Study Design The sildenafil ODS trial was a prospective, randomized, controlled crossover study that evaluated the efficacy of sildenafil orally disintegrating strips (Hart-S sildenafil citrate, Pacific Biosciences Pte. Ltd., Singapore) and compared the ODS efficacy to that of sildenafil tablets (Sidegra sildenafil citrate, Government Pharmaceutical Organization, Thailand). Efficacy was assessed as a function of the abridged International Index of Erectile Function (IIEF-5) score and the Erection Hardness Score (EHS). The study consisted of 120 ED patients who were randomized and allocated into 2 groups of 60 participants (eg, Arm 1 and Arm 2). As shown in Figure 1, patients in Arm 1 were first treated with sildenafil strips for 8 weeks, with treatment pausing for a 4-week wash-out period after which treatment resumed with sildenafil tablets for 8 additional weeks. 
Alternatively, patients in Arm 2 were first treated with sildenafil tablets for 8 weeks, with treatment pausing for a 4week wash-out period after which treatment resumed with sildenafil strips for 8 additional weeks. Throughout the study period, each participant was assessed every 4 weeks (eg, Visits 1 through 8) to build a dataset of all onset of action and duration of action time values. During each visit, patients completed the IIEF-5 and EHS questionnaires. Patient satisfaction and reports of adverse events were continually assessed and each patient's preference for either the sildenafil ODS or tablet formulation was registered at the end of the study. Study Population This study was conducted between November 2018 and December 2020. A total of 132 candidates were enrolled. Subsequently, eleven candidates withdrew after being informed about the details of the research protocol. One patient was later excluded because of their medical history, which involved a radical prostatectomy. One hundred twenty eligible patients started the sildenafil ODS study following the run-in period. Figure 2 shows the flow diagram for study enrollees. Following the enrollment period, the remaining 120 ED patients were randomly allocated into 2 treatment arms, each consisting of 60 participants. Patients in Arm 1 were treated with sildenafil ODS for 8 weeks (period 1) followed by a 4-week pause in treatment (wash-out period). Subsequently, these patients crossed over and were treated with sildenafil tablets for another 8 weeks (period 2). Alternatively, patients in Arm 2 were treated with sildenafil tablets for 8 weeks (period 1) followed by a 4-week pause in treatment (wash-out period). Subsequently, these patients crossed over and were treated with sildenafil ODS for another 8 weeks (period 2). The study design afforded each patient the opportunity to use both treatment formulations, thereby informing their preference for one or the other. The clinical trial was conducted in accordance with the principles of the Declaration of Helsinki and the International Conference on Harmonization Harmonized Tripartite Guidelines for Good Clinical Practice. The study protocol, information sheet, consent form, case report form, and all other relevant documentation were reviewed and approved by the institutional Human Research Ethics Committee. The study population was composed of male patients aged 18 years or older. Inclusion criteria screened for patients who had been diagnosed with ED for more than 3 months, who were unable to maintain an erection long enough to achieve successful sexual intercourse in 50% of attempts (or higher), and who have only 1 female sexual partner with whom sexual intercourse occurs at least 2 times per month. Study candidates who were contraindicated for or allergic to PDE5 inhibitors, had been diagnosed with ED as a result of spinal cord injury or radical prostatectomy, had a fasting blood sugar level over 270 mg/dL, or had untreated hypogonadism were excluded from the clinical trial. Written informed consent was obtained prior to the start of treatment for all patients. Participants who subsequently decided to withdraw from the clinical study were not required to provide a reason for withdrawal. Treatment Formulations Two drug delivery formulations were used in this study. Hart-S orally disintegrating strips each contain a dosage of 50 mg sildenafil citrate. Patients were prescribed 2 strips before sexual intercourse with a maximum dosage of 2 strips per week (eg, 100 mg per week). 
Each patient received a total of 16 strips for use during the 8-week treatment period involving sildenafil ODS. Sidegra tablets each contain a dosage of 100 mg sildenafil citrate. Patients were prescribed 1 tablet before sexual intercourse with a maximum dosage of 1 tablet per week (eg, 100 mg per week). Each patient received a total of 8 tablets for use during the 8-week treatment period involving conventional sildenafil tablets.
Study Assessments The efficacy of both sildenafil formulations was assessed using a questionnaire to determine the IIEF-5 score and the EHS from patient assessments collected during both the treatment and non-treatment periods. The IIEF consists of 15 items and 5 domains and is a psychometrically valid and reliable method for determining efficacy of treatment in controlled clinical trials. 15 The IIEF-5 is an abbreviated version that can be used as a diagnostic tool. 16 ED severity can be classified on the IIEF-5 scale as follows: severe (5−7), moderate (8−11), mild to moderate (12−16), mild (17−21), and no ED (22−25). The EHS is a single-item, patient-reported outcome for scoring erection hardness. 17 The 4 grade levels in the EHS are used to categorize the severity of ED. EHS grade 1 represents cases of severe ED. EHS grade 2 represents cases of moderate ED that do not allow for vaginal penetration. EHS grade 3 represents cases of mild ED. EHS grade 4 signifies normal erectile function. IIEF-5 and EHS results were compared as a measure of ODS and tablet efficacy. Safety profiles and adverse events were constantly evaluated after the start of treatment. Furthermore, values for the onset of action and duration of action for both sildenafil formulations were collected at visits 3, 4, 6, and 7 after 4 weeks of medication use. These data were collected through patient self-reporting with the aim of reflecting real-life outcomes.
Figure 1. Schematic of the study protocol workflow. Following screening and enrollment, patient randomization was performed during the 4-week run-in period marking the clinical trial starting point. The study proceeded with an 8-week treatment period followed by a 4-week wash-out period and then a second 8-week treatment period. Patients starting treatment with 1 sildenafil formulation crossed over to the other formulation during the wash-out period between Visits 4 and 5. Eight visits at 4-week intervals were spaced throughout the trial period.
As indicated in Figure 1, patients were assessed every 4 weeks throughout the study regimen for a total of 8 mandated medical center visits. Patient preference between the 2 sildenafil formulations was assessed at the end of the study.
Statistical Analysis Data analysis was performed using STATA version 14.1 (STATA Corp., TX, USA). Categorical variables were evaluated using the chi-squared test, with the corresponding data reported as numbers and percentages. Continuous variables were compared using a 2-sample t-test, with the corresponding data reported as median ± standard deviation. For univariate and multivariate analyses using multilevel mixed-effects linear regression, a P value of less than .05 was considered statistically significant.
RESULTS Patient demographics are presented in Table 1. The mean age of all participants was 64.48 ± 8.66 years. No significant difference in baseline characteristics was detected between the 2 treatment arm groups.
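For readers who want the IIEF-5 bands and EHS grades quoted above in executable form, a small Python helper is sketched below. It is illustrative only and not part of the trial's actual analysis code.

# Small helper reflecting the IIEF-5 severity bands and EHS grades quoted in
# the Study Assessments section; an illustrative sketch, not the study's code.
def iief5_severity(score: int) -> str:
    """Map an IIEF-5 score (5-25) to the ED severity band used in the study."""
    if not 5 <= score <= 25:
        raise ValueError("IIEF-5 scores range from 5 to 25")
    if score <= 7:
        return "severe ED"
    if score <= 11:
        return "moderate ED"
    if score <= 16:
        return "mild to moderate ED"
    if score <= 21:
        return "mild ED"
    return "no ED"

EHS_GRADES = {
    1: "severe ED",
    2: "moderate ED (insufficient for vaginal penetration)",
    3: "mild ED",
    4: "normal erectile function",
}

# Example: a pre-treatment IIEF-5 score of about 14 falls in the
# mild-to-moderate band and moves into the mild band (17-21) after treatment.
print(iief5_severity(14), "->", iief5_severity(18))
print(EHS_GRADES[3])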
Within the study population, the 3 most common associated diseases were hypertension (55.8%), dyslipidemia (51.7%), and benign prostatic hyperplasia (47.9%). Both sildenafil formulations resulted in significantly improved erectile function after treatment.
Figure 2. Flow diagram for study enrollees. From the 12 excluded candidates, one did not meet the inclusion criteria while the remaining eleven declined to participate after being informed of the protocol. During the first treatment period, 4 patients withdrew from each arm of the study. After the washout period and treatment crossover, 2 patients in Arm 2 withdrew during the second treatment period without completing their sildenafil ODS regimen.
As a result of treatment with sildenafil ODS, IIEF-5 scores increased from 14.47 … at the 4-week and 8-week time points, respectively (P value < .05). Furthermore, EHS values also increased from 2.59 ± 0.75 to 3.00 ± 0.75 and 3.13 ± 0.74 at the 4-week and 8-week time points, respectively (P value < .05). Figures 3 and 4 show the changes in IIEF-5 scores and EHS values, respectively, as a function of time for each sildenafil formulation. The total ODS data set was collated by pooling questionnaire responses from Arm 1 patients during treatment period 1 and Arm 2 patients during treatment period 2. The total Tablet data set was collated by pooling questionnaire responses from Arm 1 patients during treatment period 2 and Arm 2 patients during treatment period 1. Changes in IIEF-5 scores and EHS values following sildenafil treatment were used to quantify treatment efficacy for the study population. A comparison of the changes in treatment efficacy between the sildenafil ODS and sildenafil tablet formulations yielded no statistically significant difference for the treatment of ED at identical dosage levels.
Figure 3. Week 0 IIEF-5 score represents the baseline value at the beginning of a treatment period. For patients in Arm 1, baseline IIEF-5 scores before ODS treatment were recorded during Visit 2. For patients in Arm 2, baseline IIEF-5 scores before ODS treatment were recorded during Visit 5. Likewise, baseline IIEF-5 scores before tablet treatment in Arms 1 and 2 were recorded during Visits 5 and 2, respectively. Statistically significant improvements in IIEF-5 scores appeared at both 4 and 8 weeks. There was no statistically significant difference between the scores at week 4 and week 8, for either the ODS or tablet formulation. Consequently, full treatment efficacy was achieved by week 4.
Mean IIEF-5 scores for the sildenafil ODS and tablet formulations were 17.69 ± 4.93 and 17.75 ± 4.86, respectively (P value = .899). Mean EHS values for the sildenafil ODS and tablet formulations were 3.06 ± 0.75 and 3.07 ± 0.75, respectively (P value = .953). These results demonstrate that the efficacy of sildenafil strips was comparable to that of sildenafil tablets for the treatment of ED. In addition to treatment efficacy, secondary response benchmarks such as onset of action times and duration of action times were evaluated. A comparison of the earliest onset of action times between the sildenafil ODS and tablet formulations showed no statistically significant difference in values (47.11 minutes and 54.21 minutes for sildenafil ODS and sildenafil tablets, respectively, P value = .580). Nonetheless, an onset of action time that was 7.1 minutes faster than the conventional response time may be considered clinically significant by some ED patients. Likewise, a comparison of the duration of action times between the sildenafil ODS and tablet formulations yielded no statistically significant difference in values (85.86 minutes and 90.84 minutes for sildenafil ODS and sildenafil tablets, respectively, P value = .745). Yet, a duration of action that was 5 minutes longer for the tablet than for the ODS formulation may be considered clinically meaningful by some ED patients. Based on univariate and multivariate analyses, the results of this study demonstrate that neither sildenafil formulation, body weight, body mass index, nor time since ED diagnosis was associated with the treatment response, as shown in Table 2. There was no statistically significant difference in the extent of IIEF-5 scoring improvements when using either the ODS or tablet formulation. Similarly, increases in EHS values also showed no statistically significant difference when using either formulation. However, participant age, severity of ED, and the number of attempts of medication use were significantly associated with IIEF-5 scoring changes after treatment. Notably, there were no serious or life-threatening adverse events during the course of the study. The adverse events affecting a small subset of the study population were minor and well documented in previous clinical studies (see Table 3). Furthermore, a comparison of the incidence of adverse events between the ODS and tablet formulations showed no significant difference. The only symptom that appeared for 1 formulation more often than the other was flushing, which was more common in the conventional tablet group (see Table 3). At the end of the study, 47.3% of the treatment population preferred using the sildenafil ODS formulation whereas 52.7% preferred using the conventional sildenafil tablet formulation. Convenience was cited as a significant reason for preferring the tablet form over the ODS form.
DISCUSSION Erectile dysfunction is a common condition experienced by elderly males. The estimated prevalence of ED in Asian men is projected to reach 200 million by the year 2025. 18 Since the 2000 study, when the prevalence of ED in Thai men was benchmarked at 37.5%, this condition has come to be associated with other ailments such as hypertension, diabetes, and cardiovascular disease, all of which can affect an individual's overall health as well as the quality of life for couples. 3 As the first-line therapy option for ED, PDE5 inhibitors are commonly accessible and broadly used. Following its regulatory approval in 1998, 19 sildenafil became the first PDE5 inhibitor to be widely prescribed for the treatment of ED. Tablets continue to be the most prominent formulation for sildenafil. However, orally disintegrating films offer an alternative delivery method for ED treatment when patients desire a more natural or spontaneous response from their therapy options. 20
Figure 4. Week 0 EHS value represents the baseline value at the beginning of a treatment period. Baseline EHS values before ODS treatment in Arms 1 and 2 were recorded during Visits 2 and 5, respectively. Baseline EHS values before tablet treatment in Arms 1 and 2 were recorded during Visits 5 and 2, respectively. Statistically significant improvements in EHS values appeared at both 4 and 8 weeks. There was no statistically significant difference between the EHS values at week 4 and week 8, for either the ODS or tablet formulation. As a result, full treatment efficacy was achieved by week 4.
Early orodissolvable film formulations
highlighted the potential for high solubility, rapid onset of action, and improved bioavailability. 21 Although bioequivalence studies involving sildenafil ODS have documented peak concentration (C max ) terms and provided valuable pharmacokinetic profiles in healthy populations, 14,22 there have been few studies demonstrating its efficacy. The results of this study indicate that both formulations had comparable efficacy in the treatment of ED, as evidenced by statistically significant increases in IIEF-5 scores and EHS values. As a consequence of these improvements, treatment with sildenafil resulted in a shift in the severity of ED. The average baseline IIEF-5 score before treatment fell in the mild to moderate ED scale range (eg, IIEF-5 = 12−16). After treatment with either sildenafil formulation, the average IIEF-5 score shifted to the mild ED scale range (IIEF-5 = 17−21). Changes in EHS values also followed this trend as the initial moderate ED baseline values moved to mild ED post-treatment values. From a clinical perspective, patients with moderate ED are unable to have successful sexual intercourse because of insufficient penile erection, whereas patients with mild ED are able to have successful sexual activity despite a possible decrease in penile erection. Both the sildenafil ODS and tablet formulations significantly improved penile erection to the point where achieving successful sexual activity was likely. Previously, a 2003 randomized, double-blind, placebo-controlled flexible dose study in Thailand 23 documented comparable outcomes to those reported for the original sildenafil tablet (Viagra, Pfizer Inc., New York, NY, USA), 19 using the initial IIEF scoring system. Similarly, the conventional tablet results from the present study were benchmarked to those reported by the ENDOTRIAL study, 24 which examined the efficacy of sildenafil, tadalafil, and vardenafil in the treatment of ED using the abridged IIEF-5 scoring system. In terms of efficacy, improvements in the IIEF-5 scores were comparable between the 2 clinical trials. For the original sildenafil tablet in the ENDOTRIAL study, the IIEF-5 score increased by 3.52. Data from the present study showed an IIEF-5 score increase of 3.38, indicating that the efficacy of the generic tablet treatment was comparable to Although there was no statistically significant difference in treatment efficacy, onset of action time, or duration of action time between the sildenafil ODS and tablet formulations, slight differences may still have clinical ramifications. In the present study, treatment with sildenafil ODS resulted in a 7.1-minute earlier onset of action time compared to the slower conventional tablet formulation. A faster onset of action time may be valuable to a number of patients. In addition to those patients with dysphagia or malabsorption, the ODS formulation may also benefit those who use the medication after a fat-rich meal or in the presence of alcohol as these conditions and scenarios directly affect sildenafil absorption. 11 The adverse events experienced by patients in the present study were comparable to those reported in prior sildenafil trials. The most commonly encountered adverse events resulting from tablet use were headaches and flushing, accounting for 12.3% and 9.3% of the total incidence of adverse events, respectively. Morales et al reported similar incidence values for headaches and flushing at 16% and 10%, respectively. 
25 Although there was no statistically significant difference in adverse events between the sildenafil ODS and tablet formulations, flushing was more common in the conventional tablet group. As a result, patients who experience flushing after tablet use may encounter fewer adverse events upon switching to the ODS formulation. The strength of this study derives from the implementation of a prospective, randomized, controlled crossover trial and the use of 2 established erectile function evaluation questionnaires to determine IIEF-5 scores and EHS values. Additionally, the IIEF-5 questionnaire was provided in the Thai language after having been validated for internal consistency, validity, and reliability. 26 Also, the EHS questionnaire was crafted specifically for the development and study of sildenafil. 17 By virtue of the crossover design, every patient had the opportunity to use each of the 2 sildenafil formulations for 8 weeks, providing enough time for patient preferences to develop. The authors of the present study also recognize its limitations. First, the placebo group is absent. Because PDE5 inhibitors are now the standard treatment option for ED, a clinical trial that includes a placebo group was thought to be inappropriate for the patients. In addition, such a trial design may have attracted fewer candidates and would undoubtedly raise multiple ethical concerns. Second, the clinical trial was not conducted as a blind study. Because both sildenafil formulations were unique (eg, orally disintegrating strip vs tablet) it was impossible to disguise the identity of the formulation upon treatment administration. Therefore, an adequate washout period was included in the study design to mitigate the potential for bias. Third, values for the onset of action and duration of action were measured by the patients themselves using a watch. This data may not be completely accurate when compared to the stopwatch technique. Lastly, we could not compare the cost effectiveness between these 2 formulations because the ODS formulation is not currently marketed in Thailand. Even with these limitations, the authors believe that the crossover design has made the study results more consistent and more informative as a reference for the treatment of ED patients and for subsequent clinical studies. CONCLUSIONS The sildenafil ODS and conventional tablet formulations both demonstrated good clinical efficacy and safety for the treatment of ED. The present clinical trial data indicate that both formulations had comparable efficacy and safety profiles. Sildenafil ODS manifested a 7.1-minute faster onset of action time when compared to sildenafil tablets. Although not statistically significant, this difference in onset of action time may have clinical meaningfulness for some patients. Additionally, patients treated with sildenafil ODS exhibited fewer flushing symptoms. In cases where patients cannot tolerate treatment with sildenafil tablets because of flushing, the sildenafil ODS formulation may be a viable alternative.
The impact of child safety promotion on different social strata in a WHO Safe Community Abstract: Background: The objective of the current study was to evaluate the outcomes of a program to prevent severe and less severe unintentional child injuries among different social strata under the WHO Safe Community program. Specifically, the aim was to study the effectiveness of the Safe Community program in reducing child injury. Methods: A quasi-experimental design was used, with pre- and post-implementation registrations covering the children (0-15 years) in the program implementation area (population 41,000) and in a neighboring control municipality (population 26,000) in Östergötland County, Sweden. Results: Boys from not vocationally active households displayed the highest pre-intervention injury rate in both the control and intervention areas. Also in households in which the vocationally significant member was employed, boys showed higher injury rates than girls. In households in which the vocationally significant member was self-employed, girls exhibited higher injury rates than boys in the intervention area. After 6 years of program activity, the injury rates for boys and girls in the employed category and the injury rates for girls in the self-employed category displayed a decreasing trend in the intervention area. However, in the control area the injury rate decreased only for boys of employed families. Conclusions: The near absence of changes in injury rates in the control area suggests that the reduction of child injuries in the intervention area between 1983 and 1989 was likely to be attributable to the safety promotion program. Therefore, the current study indicates that the Safe Community program seems to be successful in reducing child injuries. Introduction Worldwide, unintentional injuries remain a significant health problem for children, despite several decades of concerted efforts. 1 Among children (0-15 years), most fatal injuries occur at home. Studies of child injury by severity suggest that the socioeconomic determinants of more severe injuries differ from those of less severe injuries. 1,2 However, less is known about child injury prevention programs, especially in relation to the socioeconomic status of the children's families. Community based programs to prevent common nonfatal injuries have been effectively implemented as complements to various national safety programs. [3][4][5][6][7] The current study presents an outcome evaluation, across different social strata, of a program to prevent severe and less severe unintentional child injuries. The program was developed following the World Health Organization (WHO) Safe Community program (more details at http://www.phs.ki.se/csp/). Using a quasi-experimental design to compare intervention and control communities, the study investigated changes in the all-cause injury risk after program implementation. In addition, changes in the distribution of injury severity and injury event contexts in the intervention community were examined. 1 An assessment of the general structure and process of the program has previously been reported. 8 In Sweden, the positioning of the local government in the program structure appears to be the most important factor determining program effectiveness. Reports on the global burden of child injury have provided a call for drastic actions for childhood injury prevention. 9 The WHO Safe Communities program has been operating for the last two decades to prevent injuries and promote safety.
Earlier study indicated that, the relative risk for child injury has decreased significantly in a WHO Safe Community in Sweden without focusing socioeconomic determinants. 1 Injuries especially of children have been reported to be more common in households with poorer social strata. 10,11 Vulnerable populations living in poor social strata are disproportionately at a risk of injury. [12][13][14][15] However, to the best of authors knowledge, few studies to date have investigated the impact of child safety promotion programs on boys and girls from different social strata. 4 The current study addresses this gap in knowledge using WHO Safe Community program in Sweden. The objective of the current study was to investigate differences in the distribution of the child injury rate reduction among the different social strata in the catchment area. Specifically, the aim was to study, using a quasi-experimental design, 16 rates of child injury treated by healthcare organizations among members of households at different levels of labour market integration before and after program implementation. Methods The Motala community is one of the original reference sites for the World Health Organization (WHO) Safe Community accreditation criteria. The Safe Community concept was developed in Sweden in conjunction with the WHO, based on findings from local Swedish injury prevention programs in the 1970s and 1980s. Scandinavian countries were among the first to implement the Safe Community model in the late 1980s and early 1990s. 17 The model emphasizes community participation and multidisciplinary collaboration, recognizing that those most able to solve local injury problems are those people who live in that particular community. 7 Study design A quasi-experimental design was used, with pre- and post-implementation registrations covering the total populations 0-15 years of age in the program implementation area (Motala) and in a neighboring control municipality (Mjölby) in Östergötland County. The pre-implementation study period covered 52 weeks from 1 October 1983 to 30 September 1984. The post-implementation period covered 52 weeks from 1 January 1989 to 31 December 1989. Changes in the morbidity rates following the intervention were studied using prospective registration of all acute care episodes during the study period. The intervention area had four health care centers and a county annex hospital with a casualty department, while the control area shared the annex hospital and had two health care centers, one with an emergency unit. Implementation of the Motala program In 1985, the Health Services Board of the County Council and the Municipal Government Board agreed to share responsibility for a local injury prevention program and a self regulating Child Safety Council (CSC). CSC members included politicians, county officials whose departments were responsible for the care and welfare of children, and representatives of non-governmental organizations. In 1987, the CSC used its influence within the local social network to establish an organization for the regular implementation of safety measures. All injuries treated at health care units were reported to the program. The registration procedure was based on earlier experience in Sweden. 18 For all injured children treated at the emergency room at the local hospital, a form was filled in by staff with the time of contact and standard personal data. 
Statistical analyses identified high risk age groups, the most common injury environments, and the most common types. The CSC cooperated with local mass media in the intervention area to provide regular information about injury prevention. To reach preschool children, nurses in the intervention area were trained and asked to provide age adjusted safety information to parents at compulsory annual health visits. Follow up interviews with parents who had visited childcare nurses showed that almost all families had received the safety information. However, despite receiving the information, only a minority were aware of the major hazards. Therefore, a video demonstrating safety modifications in the home was distributed to all parents with children younger than 6 years of age as part of a behavioral safety education and information program directed at falls in the home. In addition, safety products and examples of modifications of risk environments were displayed at public places. Indoor environments at all daycare centers were also evaluated, but required only minor modifications. Regular safety rounds were introduced for safety maintenance at the daycare centers as well as at playgrounds and other public facilities frequented by preschool children. To target schoolchildren, indoor environments at schools and sports facilities were evaluated, and regular safety rounds for maintenance were also introduced. Furthermore, all physical education teachers in the intervention area participated in an injury prevention course focusing on high risk groups of children. This course was intended to contribute to meeting the goal that every child performing physical exercise would have the basic skills for the activity and be informed about rules and injury risks. Local sports clubs were also asked to contribute to the injury prevention program. For the most popular team sport, soccer, workshops for coaches and referees were used to discourage foul play. For the most popular individual sport, horseback riding, an attempt was made to support the supervision of novices, including new rules requiring supervision of young riders during all interaction with horses. Both structural and educational measures were taken to improve traffic safety. A "Safe way to school" program was implemented at every primary school in cooperation with the municipality's planning department. The program included a "Cut your garden hedge" initiative to increase driveway visibility in residential areas. In addition, voluntary organizations and the police provided traffic education programs aimed at primary and lower secondary school students. A one hour traffic lesson was scheduled each week for all fourth graders. Last, a safe cycling program was initiated to subsidize the price of cycle helmets and to promote helmet use. Children were also offered courses to "shape up your bike" to reduce risks of equipment failure. Classification of data The Swedish Socio-economic Index (SEI) was used to classify the individuals in the study areas. The SEI was used since the early 1980s to represent social status in most national databases and statistics. 19 The SEI defines social status primarily as being based on occupation. Children and young people are categorized to the SEI group to which their parents' household belongs. SEI data for all individuals in the intervention and control areas were collected from Statistics Sweden (http:// www.scb.se). 
For the pre-implementation measurement, SEI data originated in the Census survey conducted in 1985. Corresponding data for the post-implementation measurement originated in the 1990 Census survey. Considering that the WHO Safe Community model relies strongly on the existing civic social network, and that occupation is an important determinant for these networks, the detailed SEI categories were used for coding individuals into three secondary categories based on the relation that the household had to the labour market: (1) households in which the vocationally significant member was employed, i.e. the person in the household with the highest wage earnings; (2) households in which the vocationally significant member was an entrepreneur or self-employed; and (3) households in which the adults were not vocationally active. Community characteristics Motala is situated in the western part of the county of Östergötland. The population was approximately 41,000 during the study period (82% living in the central and residential areas and the 18% living in surrounding rural areas). Seventy seven percent were gainfully employed in the field of manufacturing, trade and public administration. Mjölby, control municipality area, (population 26,000), was selected on the basis of socio-economic and demographic similarities to Motala and obviously due to availability of injury data. The city of Mjölby is situated 30 km south of Motala in the same county in the south-eastern part of Sweden. Data collection All children and adolescents under 16 years of age arriving at any health care unit located in the intervention and control areas during the study periods were included in to the current study. The nature and event context of injuries was classified using the International Classification of Diseases, eighth revision, 20 and the abbreviated injury scale (AIS) was used to measure injury severity. 21 Based on information from medical records two specially trained nurses classified injuries after the care episode. The attending physician was asked to verify, whenever necessary the accuracy of the classification. However, due to a lack of resources data on injury severity and event context were not collected from the control area. 1 To estimate the quality of the specific injury registration procedure, secondary sampling of all acute health care attendances in the intervention area was undertaken during the third week of the pre-implementation registration period and in both the intervention and control areas during the third week of the post-implementation registration period. University hospital emergency department records from September 1984 were also additionally analyzed for any systematic differences between persons from the intervention and control areas receiving care outside the care units providing data for this evaluation. Statistical methods Injury rates, expressed as per 100 person-years, were calculated by community (intervention and control municipality) for each study period (1983/1984 and 1989), by socio-economic group according to labour market: employed, self-employed and not vocationally active; and by gender, as well as for girls and boys together. 22 Ninety-five percent confidence intervals (CI) were employed for injury rates. To avoid double registration of the same injury, only the first episode of injury during each registration period was included in the calculations. 
However, if the child sustained a new, different injury during the registration period, that injury was also registered in the current study. The differences in injury rates between 1989 and 1983/1984 were computed for both areas with 95% CI. A P-value <0.05 was employed to test the level of statistical significance. Similarly, differences in changes of injury rate between the intervention and control areas were computed using the following expression: Difference in changes of injury rate = [Post-intervention injury rate in intervention area − Pre-intervention injury rate in intervention area] − [Post-intervention injury rate in control area − Pre-intervention injury rate in control area]. All computations were performed using SPSS statistical software (PASW Statistics, Version 18). Results Less than 1% of the eligible patients could not be identified in the medical record archives for secondary data analyses. During 1983-84, child all-cause injury rates were 172 per 1000 population years in the intervention area and 124 per 1000 population years in the control area. This difference is due, in part, to a lower proportion of injured residents from the intervention area than from the control area seeking emergency care at the university hospital. Only 3% of residents from the intervention area were taken directly to the university hospital for care, compared with 12% from the control area. The age and gender mix in both areas were close to the national average and stable over the registration periods. Members of households in which the vocationally significant member was employed constituted the largest share of the population <16 years of age in both the intervention (84%) and control (82%) areas. The members of self-employed households represented 8% and 11%, respectively (Table 1). Members of households classified as not vocationally active constituted 8% and 7%, respectively. The income levels in both areas were at 93% of the national average and remained stable between the registration periods. Between 49% and 51% of the total population in the intervention and control areas were gainfully employed during the registration periods. During both periods, the share of the population with more than compulsory school education was about 5% below the national average in both areas. Similarly, the share of urban residents remained between 79% and 82% in both areas. The distribution of employers was comparable between the areas and registration periods; the share employed by manufacturing industries (31-34%) was higher than the national average (20-21%). Boys from not vocationally active households displayed the highest pre-intervention injury rate in both the control and intervention areas (Table 2). Also in households in which the vocationally significant member was employed, boys showed higher injury rates than girls. In the households where the vocationally significant members were self-employed, girls exhibited higher injury rates than boys in the intervention area. After 6 years of program activity, the injury rates for boys and girls in the employed category and for girls in the self-employed category displayed a decreasing trend in the intervention area (Table 3). However, in the control area the injury rate decreased only for boys of employed families. Changes in injury rates in the control area were not statistically significant in other social strata.
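To illustrate the rate and "difference in changes" computations described in the Methods, a minimal Python sketch is given below. The counts and person-years are made-up numbers, and the normal-approximation confidence interval is one simple choice; neither is taken from the study.

```python
import math

def injury_rate(cases: int, person_years: float, per: float = 1000.0):
    """Injury rate per `per` person-years with an approximate 95% CI
    (normal approximation to the Poisson count)."""
    rate = cases / person_years * per
    se = math.sqrt(cases) / person_years * per
    return rate, (rate - 1.96 * se, rate + 1.96 * se)

# Illustrative (made-up) counts: (cases, person-years) per area and period
pre_int, post_int = (860, 5000.0), (700, 5000.0)   # intervention area
pre_ctl, post_ctl = (620, 5000.0), (610, 5000.0)   # control area

r_pre_int, _ = injury_rate(*pre_int)
r_post_int, _ = injury_rate(*post_int)
r_pre_ctl, _ = injury_rate(*pre_ctl)
r_post_ctl, _ = injury_rate(*post_ctl)

# "Difference in changes of injury rate" as defined in the Methods
did = (r_post_int - r_pre_int) - (r_post_ctl - r_pre_ctl)
print(f"change, intervention: {r_post_int - r_pre_int:+.1f} per 1000 person-years")
print(f"change, control:      {r_post_ctl - r_pre_ctl:+.1f} per 1000 person-years")
print(f"difference in changes: {did:+.1f} per 1000 person-years")
```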
Non-vocationally active households had the highest incidence of injury in the intervention area, and boys sustained injuries more frequently than girls in employed and nonvocational social status groups in both study areas. Discussion The current study indicates that Safe Community program seems to be successful for reducing child injuries. The study analyzed the WHO Safe Community program for safety promotion with regard to associations between pre-and post-intervention injury rates among boys and girls, and socio-economic status, as defined by the employment category of the household's significant member. The study indicated that almost no changes in injury rates in the control area suggested that the reduction of child injuries in the intervention area between 1983 and 1989 was likely to be attributable to the safety promotion program. The socially disadvantaged children as indicated by the SEI categories were at the highest pre-intervention injury risk, indicating that lower socio-economic status is an important risk factor for injury; this is consistent with previous research. 15 The present study design did not allow for an investigation into the causes of these differences, although a possible explanation could be more prevalent use of unsafe domestic products and less attention/supervision of the guardians in deprived households. Another finding that requires further study is that girls in the self-employed category displayed higher injury rates than boys. Considering the program's aim of equality in safety issues between social groupings, the program was only partially successful in that it reduced the injury rate in employed households but it did not influence the injury rate of the self-employed households for boys and not vocationally active households. This finding indicates that the program's approach of combining a population-based, communitywide strategy with more targeted interventions to groups at increased risk, i.e. children/teenagers, is not so much successful in reducing health inequalities. It has been suggested that a 'pure' population strategy is only appropriate when risk is widely diffused through the whole population. 16 Earlier studies have demonstrated that injuries in childhood are related both to poverty at the household level and to living in a deprived neighborhood, and that these influences are independent. 23 This evidence suggests a parallel use of community-wide efforts and more targeted area-based interventions in order to reduce child injuries. Another explanation could be substance abuse among adult members of non-vocationally active households, leading to accidents and/or neglect in child minding. While substance abuse has been associated with the occurrence of injury, the association has not been characterized by type of substance and injury type. However, a recent study has reported that alcohol and cocaine use is independently associated with violence-related injuries, whereas opiate use is independently associated with non-violent injuries and burns. 24 Screening for substance abuse was not included in the present study, which warrants to be addressed in future studies. 25 The current study is from a medium size community in Sweden. As the socio-cultural characters vary over the areas, the current findings might suffer from making a general conclusion for Nordic countries. Therefore further evaluations are warranted in other WHO Safe Communities in other low-medium-and high-income countries. 
For individuals injured more than once, only the first episode during each registration period was included in the current study. Repeated injuries of the same nature in the same child warrant further study. In the future, similar studies that account for the severity of the injuries are warranted. The study used data from 1983-1989 to measure the changes in childhood injuries according to social status. Although the data are relatively old, this should not compromise the validity of the findings in relation to the study's aims; nevertheless, similar studies using more recent data are warranted. In conclusion, the Safe Community program seemed to be effective in that it reduced the childhood injury rates in the intervention areas. However, statistically significant reductions were confined to employed households and, with the exception of boys, to self-employed households, indicating that social stratum matters for effective child injury intervention. The findings do seem to suggest that additional research on the issue of parental sex and gender role as it relates to employment status or self-employment could be an interesting area for further analysis and research. Further research on the evaluation of WHO Safe Community programs in association with social strata and child injury intervention is also warranted from different countries. Funding: This study was supported by grants from the Swedish Civil Contingencies Agency (MSB). Competing interest: None declared. Ethical approval: The study was approved by the Regional Committee for Research Ethics at Linköping University, Sweden.
Pre-Impact Detection Algorithm to Identify Tripping Events Using Wearable Sensors This study aimed to investigate the performance of an updated version of our pre-impact detection algorithm parsing out the output of a set of Inertial Measurement Units (IMUs) placed on lower limbs and designed to recognize signs of lack of balance due to tripping. Eight young subjects were asked to manage tripping events while walking on a treadmill. An adaptive threshold-based algorithm, relying on a pool of adaptive oscillators, was tuned to identify abrupt kinematics modifications during tripping. Inputs of the algorithm were the elevation angles of lower limb segments, as estimated by IMUs located on thighs, shanks and feet. The results showed that the proposed algorithm can identify a lack of balance in about 0.37 ± 0.11 s after the onset of the perturbation, with a low percentage of false alarms (<10%), by using only data related to the perturbed shank. The proposed algorithm can hence be considered a multi-purpose tool to identify different perturbations (i.e., slippage and tripping). In this respect, it can be implemented for different wearable applications (e.g., smart garments or wearable robots) and adopted during daily life activities to enable on-demand injury prevention systems prior to fall impacts. Introduction Falling is widely recognized as one of the most important causes of disability in fragile individuals [1][2][3][4]. Several reports agree with the evidence that about 30% of older adults (65+ years of age) fall at least once per year [5,6], and the percentage increases with ageing and related neuro-musculo-skeletal diseases [7]. Falls can indeed result in traumatic and physiological consequences [3], thus worsening the quality of life of fragile individuals and augmenting the costs of healthcare [8][9][10]. The increasing life expectancy of the worldwide population is expected to further exacerbate the effects of the risk of falls on society as a whole. As such, national and international agencies have been facing this problem by consistently supporting research activities in the field of fall prevention programs [11,12]. One of the strategies currently being investigated to counteract the risk of falls involves predicting the forthcoming occurrence using wearable sensors in conjunction with suitable signal processing algorithms [13][14][15]. Inertial sensors, also named inertial measurement units (IMUs; i.e., accelerometers and/or gyroscopes), are among the most applicable sensor types since they can work as stand-alone platforms during daily activities in unstructured environments and can be easily embedded in either garments or wearable devices [13,16,17]. [Figure 1 caption (partial): experimental setup, including the spring-rope mechanism (2), treadmill (3), footswitch under the unperturbed foot (4), cam-based braking mechanism (5) and camera-based motion capture system (6); as a representative example, the positions of the IMUs are reported on the thigh, shank and foot of the right limb, and the elevation angles used as input of the algorithm (θshank and θfoot) are depicted on the left limb.] The control architecture of the platform was based on an Arduino Due microcontroller and managed two different inputs: (1) an external enabler, handled by the experimenter; and (2) a foot switch under the unperturbed foot signaling its heel strike. When the experimenter decided to deliver the perturbation, she/he enabled the control loop using an external input.
Then, the cam-based actuation started moving when the heel strike of the unperturbed (i.e., left) foot was detected. The movement of the rope was then stopped for 0.9 s and then released to allow the subject to autonomously recover balance. To prevent anticipative behaviors, participants could not see the experimenter and listened to music with headphones during the whole experimental session. The experimental protocol consisted of five unexpected tripping perturbations delivered during the swing phase of the right leg and ten additional trials, in which no perturbation was applied. To prevent bias, subjects did not know whether they would have been perturbed or not, and they did not know the side being perturbed.
Seven IMUs (Xsens wireless Motion Tracker Awinda system; [26]) were used as motion trackers. Each IMU had embedded inertial sensor components, namely a 3D rate gyroscope, a 3D accelerometer, a 3D magnetometer, a barometer, and a thermometer. They were located on lower limbs (i.e., pelvis, thighs, shanks and feet) to monitor the elevation angles of each segment, with a sampling rate of 100 Hz. The 3D kinematic of the feet was also recorded by a 6-camera based Vicon 512 Motion Analysis System (Vicon, Oxford, UK), with a sampling rate of 100 Hz, to identify gait events and record the onset of the perturbation (i.e., heel strike of the left foot). Kinematic data and inertial signals were synchronized off-line. In more detail, before each trial, participants were asked to perform a couple of jumps from a steady stance while both the camera-based system and IMUs were recording body kinematics. Datasets were off-line time-aligned by assessing the time-lag between them. Specifically, we computed the cross-correlation of the second time-derivative of the vertical component of one marker on the IMU placed on the right foot with the vertical component of the acceleration detected by the same IMU within a 5-s time window including the two initial jumps. The time-lag between records coincided with the abscissa of the maximum of their related cross-correlation function. Research procedures followed the Declaration of Helsinki and were approved by the Local Ethical Committee. Data Pre-Processing Gait events (i.e., right and left heel strikes) were identified based on the trajectories of the markers placed on the feet, as reported in the literature [27]. For each subject and each trial, data were subdivided in two subsets: data recorded before and after the onset of the perturbation (i.e., left heel strike). The former referred to the last 10 unperturbed strides in which each cycle started with the left heel strike and ended with the following one; each stride was time-interpolated over 101 points, and all strides were averaged to have a representative unperturbed gait cycle. The latter referred to the compensatory stride; it started simultaneously with the onset of the perturbation and ended with the following left heel strike. Data referring to the compensatory stride were also time-interpolated over 101 points. The duration of both strides (i.e., unperturbed and compensatory ones) was evaluated as the interval between two consecutive heel contacts of the left foot. Lower limbs (i.e., perturbed and unperturbed limbs; PL and UL, respectively) were modeled as a three-link (i.e., thigh, shank and foot) chain. According to the aim of this study, we primarily monitored the elevation angles in the sagittal plane-that is, the orientation of right thigh, shank and foot with respect to the vertical axes (see red lines in Figure 1); in particular, the elevation angles were estimated using a new Kalman filter specifically developed by Xsens for capturing human motion by fusing the gyroscope, accelerometer and magnetometer signals [26]. For each trial, an 18-s long dataset-i.e., 15 s before the onset of the perturbation and 3 s after-was retained for the tuning and the validation of the detection algorithm. Pre-Impact Detection Algorithm (PIDA) Our PIDA was designed to signal sudden modifications of the quasi-periodic features of walking patterns due to unexpected perturbations [25]. 
To achieve this task, it accounted for two main components (Figure 2): (1) a set of adaptive oscillators (AOs) coupled with a kernel based non-linear filter, and (2) an adaptive threshold-based algorithm (ATBA). The set of adaptive oscillators (AOs), if properly tuned, provided a synchronized estimation of non-sinusoidal quasi-periodic input signals with zero phase-lag [28]. Accordingly, the input and output of this predictor are likely to be similar during steady walking. Conversely, if a perturbation suddenly altered the cyclic features of the gait patterns, the output seeks a new periodic signal, thus diverging from the actual input. The accuracy and the responsiveness of the pool of AOs were optimized by tuning their learning gains, namely the phase (kP) and amplitude (kA) learning gains. The difference between the input and output of the AOs-i.e., the error-was then analyzed by the ATBA (Figure 2), as follows: 1. The algorithm first selected a w-long portion of the error signal prior to the current time frame and computed its mean (µ) and standard deviation (σ); 2.
Then, it compared the absolute value of the error signal at the current time-frame with a threshold set at µ + kσ, where k represents a corrective factor to shape the value of the threshold; 3. If the absolute value of the error was above the threshold, the algorithm delivered a warning; 4. A set of r consecutive warnings was used to detect a lack of balance that could potentially result in an incipient fall (a minimal code sketch of this logic is given further below). The optimization of our PIDA consisted of identifying a suitable set of tuning parameters-i.e., kP, kA, w, k and r-to both minimize the detection time and reduce the number of FA. PIDA Tuning The tuning of our PIDA consisted of identifying an optimal set of tuning parameters-i.e., kP, kA, w, k and r-to both minimize the detection time (i.e., the time elapsing between the onset of the perturbation and the output of the algorithm) and reduce the number of FA, i.e., false positives (FP) and false negatives (FN). Noticeably, an FP occurred when a postural transition was detected before the onset of the actual perturbation, and an FN occurred when no postural transitions were detected within 1 s after the onset of the perturbation. To achieve the best tuning of our PIDA, we first investigated the dynamic behavior of the AOs in the domain of learning gains (i.e., kA and kP) reported in Table 1. Specifically, we sought the domain of learning gains that allowed for the best match between the real and estimated elevation angles of the right shank and foot. To achieve this task, we computed the root mean square of the difference (RMSD) and the Pearson correlation coefficient (ρ) between the current and estimated angles during the 3-s long time window before the onset of the perturbation. Accordingly, we assumed that a suitable tuning of the AOs was obtained if the RMSD was lower than 0.1 rad and ρ was higher than 0.9. Once the AOs were set up, we tuned the ATBA (Figure 2) to identify the best performance of our PIDA while parsing out datasets collected during the experimental sessions. The ranges of the tuning parameters for the ATBA (i.e., w, k and r) are reported in Table 1. The performance of our PIDA was assessed in terms of mean detection time (MDT) and percentage of FA across subjects and tripping trials. Figure 3 shows the lower limb behavior during the unperturbed and compensatory strides. Before the onset of the perturbation, the elevation angles were comparable to those described in the literature at the same speed [29,30]. As expected, after the onset of the perturbation, lower limb kinematics, especially those that referred to the perturbed shank and foot, were altered due to the cable braking effect (Figure 3a,c,e). Accordingly, before the onset of the perturbation (i.e., during steady walking), the unperturbed stride lasted 1.25 ± 0.06 s; after the onset of the perturbation (i.e., during the tripping event), the compensatory stride lasted 1.09 ± 0.09 s.
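As anticipated above, the following Python sketch makes the two PIDA components concrete: a small pool of adaptive oscillators that learns a quasi-periodic elevation angle, and the adaptive threshold logic of steps 1-4. It is an illustrative reconstruction rather than the authors' code: the oscillator update rules, the number of harmonics, the synthetic input and the settling period are assumptions, while the gains kP = 20 and kA = 1 and the ATBA parameters w = 400, k = 3.5 and r = 6 follow the values reported for the perturbed shank.

```python
import numpy as np

FS = 100.0          # sampling rate [Hz], as in the study
DT = 1.0 / FS

class AdaptiveOscillatorPool:
    """Pool of harmonically related adaptive oscillators (Fourier-like predictor)."""
    def __init__(self, n_harmonics=4, k_phase=20.0, k_amp=1.0, omega0=2 * np.pi):
        self.kp, self.ka = k_phase, k_amp
        self.omega = omega0                     # fundamental frequency [rad/s]
        self.phi = np.zeros(n_harmonics + 1)    # phases (index 0 unused, kept for symmetry)
        self.alpha = np.zeros(n_harmonics + 1)  # amplitudes (alpha[0] is the offset)

    def step(self, theta):
        """Feed one sample; return the predicted value and the tracking error."""
        harmonics = np.arange(len(self.alpha))
        theta_hat = self.alpha[0] + np.sum(self.alpha[1:] * np.sin(self.phi[1:]))
        err = theta - theta_hat
        # frequency/phase adaptation driven by the error (phase gain ~ kP)
        self.omega += self.kp * err * np.cos(self.phi[1]) * DT
        self.phi[1:] += (harmonics[1:] * self.omega
                         + self.kp * err * np.cos(self.phi[1:])) * DT
        # amplitude adaptation (amplitude gain ~ kA)
        self.alpha[0] += self.ka * err * DT
        self.alpha[1:] += self.ka * err * np.sin(self.phi[1:]) * DT
        return theta_hat, err

def atba_detect(errors, w=400, k=3.5, r=6):
    """Return the index at which r consecutive samples exceed mu + k*sigma of the
    previous w error samples, or None if no lack of balance is detected."""
    warnings = 0
    for t in range(w, len(errors)):
        window = errors[t - w:t]
        mu, sigma = np.mean(window), np.std(window)
        warnings = warnings + 1 if abs(errors[t]) > mu + k * sigma else 0
        if warnings >= r:
            return t
    return None

# Toy usage: a quasi-sinusoidal "shank elevation angle" that is abruptly frozen
# at t = 16 s to mimic the braking effect of the tripping perturbation.
t = np.arange(0, 18, DT)
theta = 0.4 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.sin(2 * np.pi * 1.6 * t)
theta[t >= 16.0] = theta[np.searchsorted(t, 16.0)]      # simulated perturbation
ao = AdaptiveOscillatorPool()
errors = np.array([ao.step(x)[1] for x in theta])
settle = int(5 * FS)                                    # discard the AO convergence transient
idx = atba_detect(errors[settle:])
if idx is not None:
    t_detect = t[settle + idx]
    print(f"lack of balance flagged {t_detect - 16.0:+.2f} s relative to the perturbation onset")
```

On such a toy signal the detector flags the frozen "shank" trace shortly after the simulated perturbation; with real IMU data the error statistics, and hence the threshold, would be driven by the recorded elevation angles.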
Tuning of the AOs Considering that tripping mainly modified the orientation of the perturbed shank and foot (Figure 3), the performances of our algorithm were investigated while monitoring these segments (see also Section 4). Accordingly, Figure 4 shows representative examples of accurate and inaccurate tuning of the AOs tracking the orientation of the perturbed shank and foot. Figure 5 shows the results of the tuning of the AOs and reports the range of learning gains (i.e., kA and kP) allowing for a suitable prediction of the elevation angles during steady walking. As far as the elevation angle of the perturbed shank is concerned, a subdomain of kA and kP (range: kA = 1 and kP = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) allowed the pool of AOs to properly fit real kinematics (i.e., RMSD < 0.1 rad, and ρ > 0.9; Figure 5a,b). Conversely, only a few learning gains of the AOs (i.e., kA = 1 and kP = [80, 90, 100]) allowed for a suitable tracking of the perturbed foot elevation angle (Figure 5c,d). As expected, if the AOs were properly tuned, the error signal between the output of the AOs and the real kinematics suddenly increased after the onset of the perturbation (Figure 4). This abrupt change of the error signals was then analyzed by the ATBA to detect the lack of balance due to tripping events. [Figure 5 caption (partial): RMSD (panels a and c) and ρ (panels b and d) for each learning gain of the AOs (kA and kP) and for both input signals (i.e., elevation angles of the perturbed shank and foot); x-axes show kA, y-axes show kP; dark and light gray describe low and high values, respectively.] Tuning of the ATBA Based on the results of the AOs' tuning, the detection algorithm (i.e., ATBA) was validated on a subdomain of tuning parameters (kP, kA, w, k, and r). Specifically, the learning gains of the AOs allowing for the best tracking of the input signals were selected as follows: for the perturbed shank, kP and kA were set at 20 and 1, respectively (Figure 4a); for the perturbed foot, kP and kA were set at 100 and 1, respectively (Figure 4c). Figure 6 shows the MDT (panel a) and the percentage of FA (panel b) obtained by monitoring the elevation angle of the perturbed shank for all tuning parameters of the ATBA (i.e., w, k and r).
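The parameter sweep summarized in Figure 6 can be emulated with a short routine that, for every combination of w, k and r, counts false positives (detections before the perturbation onset), false negatives (no detection within 1 s of the onset) and the mean detection time. The sketch below is again illustrative: it assumes the atba_detect function from the previous sketch is in scope and a hypothetical trials list of (error signal, onset index) pairs; the parameter grids are placeholders rather than the exact ranges of Table 1.

```python
import itertools
import numpy as np

def sweep_atba(trials, fs=100.0):
    """Grid-search the ATBA parameters and report (MDT [s], false-alarm rate [%])."""
    results = {}
    for w, k, r in itertools.product([200, 300, 400], [3.0, 3.5, 4.0], [2, 4, 6, 8]):
        delays, false_alarms = [], 0
        for errors, onset in trials:
            idx = atba_detect(errors, w=w, k=k, r=r)
            if idx is None or idx > onset + int(1.0 * fs):
                false_alarms += 1      # FN: nothing flagged within 1 s of the onset
            elif idx < onset:
                false_alarms += 1      # FP: flagged before the perturbation
            else:
                delays.append((idx - onset) / fs)
        results[(w, k, r)] = (np.mean(delays) if delays else np.nan,
                              100.0 * false_alarms / len(trials))
    return results
```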
The results revealed that, for a suitable FA percentage (i.e., lower than 10%), the best combination of tuning parameters was w = 400, r = [6, 8] and k = 3.5. In particular: 1. If r = 6, the MDT was 0.37 ± 0.11 s, with FA equal to 9.4%; 2. If r = 8, the MDT was 0.40 ± 0.15 s, with FA equal to 9.4%. As far as the elevation angle of the perturbed foot is concerned, the percentage of FA was 100% for all the combinations of tuning parameters of the ATBA (i.e., w, k and r). Noticeably, these FA were all FN; that is, the ATBA was never able to detect signs of the lack of balance within a 1-s time window following the onset of the perturbation. Accordingly, no MDT was obtained for all these conditions. Discussion The aim of this study was to investigate the performance of a pre-impact detection algorithm while identifying unexpected tripping disturbances delivered during steady walking. To do this, a previous version of our algorithm, relying on joint angles and identifying slippages [25], was updated to detect the lack of balance following tripping perturbations. Noticeably, in this study, wearable sensors were used to monitor the orientation of lower limb segments.
Overall, the best performance of the pre-impact detection algorithm was obtained monitoring the orientation of the perturbed shank, achieving an MDT equal to 0.37 ± 0.11 s with an acceptable rate of FA (lower than 10%). Noticeably, the time course of the thigh elevation angle was not significantly altered by the perturbation (Figure 3) and one of our recent studies revealed that if our PIDA parses out hip joint angles during tripping, the detection time increases to 800-900 ms [31]. Therefore, we decided to avoid a deeper analysis of the performance of our PIDA while parsing out the elevation angle of the most distal lower limb segment, in accordance with the purpose of this study (i.e., to find out the best sensor-set to minimize the detection time). This result, in conjunction with our previous findings [23,25], corroborates the hypothesis that the proposed algorithm can detect the lack of balance due to different acute perturbations (i.e., slippage, tripping) delivered during steady walking in a timely manner. Specifically, with respect to this study, our algorithm is well suited to be implemented in a smart lower limb prosthesis, equipped with IMUs, to enable strategies to promote balance recovery. Tuning of the AOs Firstly, the tuning of the amplitude and phase learning gains of the pool of AOs (i.e., kA and kP) was effectively updated to calculate the error signal (i.e., the input of the ATBA). Remarkably, according to the aim of this study, during walking trials, the error signal should be around zero to avoid FP. Conversely, during tripping trials, the error signal should increase to avoid FN and identify the lack of balance. As shown in Figure 5, the behavior of the AOs, while tracking the shank orientation, was acceptable (RMSD < 0.1 rad, and ρ > 0.9) in a wider subdomain of learning gains (i.e., kA and kP; Figure 5a,b) compared to that allowing for an accurate foot tracking (Figure 5c,d). This result was expected since it reflects the behavior of the AOs to better track signals with a lower frequency content. In fact, the AOs behave like low-pass filters with zero delay [32]. Noticeably, the frequency content of the shank orientation was lower than that of the foot (Figure 7); thus, the AOs can more accurately track the kinematics of the proximal body segments during steady locomotion. Accordingly, before the onset of the perturbation, the error signal achieved while monitoring the orientation of the shank (Figure 4a) was lower than that observed while monitoring the orientation of the foot (Figure 4c). In any case, the percentage of FP was minimized in both cases (i.e., monitoring the orientation of both shank and foot).
After the onset of the perturbation, the error signal achieved while monitoring the orientation of the shank increased (Figure 4a), allowing a fast identification of the balance loss (Figure 6a). Conversely, after the perturbation onset, the error signal observed while monitoring the orientation of the foot was close to zero (Figure 4c). This latter result was likely due to the fact that the AOs were tuned to track a signal with a higher frequency content (i.e., the foot orientation), which was similar to that elicited while people reactively managed unexpected perturbations. In other words, the AOs tuned to monitor the orientation of the foot were also able to suitably track the higher frequency content resulting from sudden and unexpected tripping disturbances. In fact, no significant differences were observed between the estimated and measured signals after the perturbation (Figure 4c); thus, our approach relying on an IMU placed on the foot did not distinguish the steady walking from the reactive responses elicited after tripping. It is worth noting that increasing the domain of learning gains above the range reported in Table 1 did not improve the performance of the AOs, as it only potentially increased the above-mentioned effect (data not reported). Overall, a trade-off between FP and FN should be advantageous to allow the ATBA to provide a suitable MDT. Future analyses will be focused on improving the performances of the AOs in terms of FA.
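The argument above rests on the foot elevation angle having a richer high-frequency content than the shank angle (Figure 7). A quick, illustrative way to check this on recorded signals is to compare simple spectral summaries; the snippet below uses synthetic stand-ins for the two angles, so the signal definitions and the chosen summary statistics are assumptions rather than the authors' analysis.

```python
import numpy as np

def spectral_summary(signal, fs=100.0):
    """Spectral centroid and the frequency below which 95% of the power lies."""
    sig = np.asarray(signal) - np.mean(signal)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    centroid = np.sum(freqs * psd) / np.sum(psd)
    cum = np.cumsum(psd) / np.sum(psd)
    f95 = freqs[np.searchsorted(cum, 0.95)]
    return centroid, f95

# toy stand-ins: the "foot" signal carries more high-frequency content
t = np.arange(0, 15, 0.01)
theta_shank = 0.4 * np.sin(2 * np.pi * 0.8 * t)
theta_foot = 0.4 * np.sin(2 * np.pi * 0.8 * t) + 0.15 * np.sin(2 * np.pi * 4.0 * t)
for name, sig in [("shank", theta_shank), ("foot", theta_foot)]:
    c, f95 = spectral_summary(sig)
    print(f"{name}: spectral centroid {c:.2f} Hz, 95% of power below {f95:.2f} Hz")
```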
Tuning of the ATBA The tuning of the ATBA consisted of properly selecting the length of the bin before the current sample to gather some statistical properties of the signal (i.e., w), the number of consecutive warnings to prevent FA (i.e., r) and the threshold amplitude (i.e., k). Concerning the first parameter (i.e., w), the results revealed that acceptable values of FA (<10%) can be obtained with w = 400 samples (corresponding to 4 s); that is, a time window roughly accounting for 4 full strides, as in our experimental conditions. Noticeably, w = 400 samples also represent the minimum time required to update the ATBA at the beginning of each session. Based on the evidence that about 60% of daily bouts account for more than 5 full strides [33], the tuning of our ATBA relying on w = 400 samples is supposed to be suitable for the majority of daily activities. A greater w would reduce the initial responsiveness of our algorithm and require a greater memory mostly due to storage-related reasons. Accordingly, we believe that w = 400 samples represent a suitable trade-off between the prompt tuning of the algorithm and a low effort in term of data management and storage. With respect to the number of warnings (i.e., r), we had to choose a suitable value to guarantee a fast detection time and prevent the risk of FA. In this study, acceptable values of FA (<10%) were achieved with r = 6 and r = 8 samples. Noticeably, the increase in r induced a delay in the detection of the lack of balance. Thus, the best performance, in terms of both minimized detection time and FA, was obtained using r = 6 samples (see Figure 6). The last parameter of our algorithm was the corrective factor to shape the threshold (i.e., k). In our previous study [25], we heuristically chose k = 3 based on the assumption that the distribution of the error signal was Gaussian; thus, the probability that a value is over three standard deviations is lower than 1%. However, we acknowledged that a different choice might slightly improve the algorithm performances. Accordingly, in the current study, we tested three values (3, 3.5 and 4), showing that the best performance involved k = 3.5 obtaining a threshold equal to µ ± 3.5σ. Overall, according to the reported results, we can conclude that the proposed pre-impact detection algorithm, if well-tuned, can be effective across different scenarios (e.g., slippages, tripping events) showing suitable values for detection time and FA. Sensors Position and Related Performance of the Algorithm The previous version of our algorithm, relying on hip joints kinematics [25], was designed to be easily implemented in an active pelvis orthosis, a wearable robot equipped with joint position sensors, and to detect the lack of balance due to unexpected slippages delivered during steady walking. A later analysis confirmed that the proposed strategy was effective in closing a human-robot loop aimed at promoting balance recovery after slippages in elderly people and trans-femoral amputees [23,24]. Here, we proposed an updated version of this algorithm relying on wearable sensors placed on the lower limb segments. In particular, according to the purpose of our study, we tested the algorithm considering only the distal segments (i.e., shank and foot), since they were more affected by the tripping perturbations (see Figure 3). 
In addition, we only used signals recorded from the perturbed side (i.e., the right one) as the input of our algorithm, considering that it is earlier and more significantly altered by the perturbation (Figure 3c,e) compared to the unperturbed one (Figure 3d,f). Overall, the best performance was achieved tracking the orientation of the perturbed shank; thus, the proposed pre-impact detection algorithm can be effectively implemented by using only one IMU placed on this body segment. It is important to highlight that the advancement in micro-technology and wireless communication makes wearable sensors suitable for pre-impact detection algorithms [34]. Indeed, wearable IMUs are a low-cost system that can be used to detect fall in extended spaces, and they do not require additional infrastructure installation. Accordingly, our pre-impact detection algorithm could potentially be part of a complete fall detection and injury prevention system that would promote more independent living in the elderly community. Comparison with the State of Art Over the last few years, a great deal of effort has been put into investigating new fall detection strategies to automatically identify the occurrence of a fall event [20,[35][36][37]. Fall detection systems can be generally classified as post-fall mobility detection and pre-impact detection [34]. The former is expected to provide timely medical assistance for fall victims. However, falls can be only detected after impacts; thus, related injuries cannot be prevented. The latter, as with our approach, is expected to overcome such limitations, allowing falls to be detected before the body hits the ground. Accordingly, this strategy also has the potential to enable on-demand fall protection systems to prevent fall-related injuries. Thus, the main advantage of the pre-impact fall detection is that, if a fall can be detected in its earliest stage, more efficient preventive systems can be implemented for the minimization of injuries. Noticeably, our pre-impact detection algorithm can identify a lack of balance due to unexpected tripping events in about 0.37 ± 0.11 s (Figure 6a). We must acknowledge that this outcome cannot be directly compared to those reported in the literature. As a matter of fact, other authors determined either the critical falling time (i.e., the time which elapsed from the fall detection time and the moment at which the inclination angle between the center of mass and the center of pressure exceeded a range of −23 • to 23 • from the vertical [38]) or the lead-time (i.e., the time which elapsed from the fall detection and the impact of the subjects on a mattress [20,[39][40][41]). In contrast, in this study, we investigated the time which elapsed from the onset of the perturbation and the actual detection; that is, the time window preceding that observed by the above-mentioned works. However, our outcomes (i.e., MDT = 0.37 ± 0.11 s; Figure 6a), in conjunction with the evidence that the duration of the transitory phase between steady locomotion and hitting the ground can be longer than 0.5-0.7 s [29,42,43], allow us to hypothesize that the proposed pre-impact detection algorithm is able to promptly signal a lack of balance due to tripping, providing enough time to effectively enable mitigation strategies for impact prevention. Although promising, the performances of our detection algorithm are limited by the fact that it was tested and validated under well-controlled experimental conditions. 
In this respect, the cyclic features of the unperturbed gait patterns were guaranteed by a constant walking speed, in contrast to the evidence that human behavior, in real life, is much more variable (e.g., walking speed changes continuously, while walking people can change direction, or climb/descend stairs [33]). In addition, the lack of balance was induced by pseudo-impulsive events (i.e., an unexpected tripping), whereas a real fall is a complex motor task involving no-stereotyped biomechanics [44]. Finally, we used the simplest approach (i.e., a threshold-based algorithm parsing only one signal) to detect abnormal behaviors even if a more complex strategy can be also implemented to improve the overall performance. Accordingly, future analysis will be focused on testing and updating the proposed algorithm to detect real-world falls. Conclusions In this study, a pre-impact detection algorithm was updated to identify the lack of balance due to tripping. The best performance was obtained when analyzing the perturbed shank with a mean detection time equal to 0.37 ± 0.11 s and a low rate of false alarms (<10%). To conclude, the proposed algorithm is a simple threshold-based approach for the automatic detection of different types of unexpected gait disturbances [25], monitoring the shank orientation with a wearable sensor. Accordingly, it can be easily implemented in a lower limb robotic prosthesis already equipped with sensors to provide assistance to the user while regaining balance after unexpected tripping. Funding: This work was supported by the INAIL (Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro) through the project MOTU (Protesi robotica di arto inferiore con smartsocket ed interfaccia bidirezionale per amputati di arto inferiore) and institutional funds from The BioRobotics Institute, Scuola Superiore Sant'Anna.
Consequences of wall stiffness for a beta-soft potential Modifications of the infinite square well E(5) and X(5) descriptions of transitional nuclear structure are considered. The eigenproblem for a potential with linear sloped walls is solved. The consequences of the introduction of sloped walls and of a quadratic transition operator are investigated. I. INTRODUCTION The E(5) and X(5) models have been proposed by Iachello [1,2] to describe the essential characteristics of shape-transitional forms of quadrupole collective structure in nuclei. The E(5) model, for γ-soft nuclei, and the X(5) model, for axially symmetric nuclei, are both based upon the approximation of the potential energy as a square well in the Bohr deformation variable β. These models produce predictions for level energy spacings and electromagnetic transition strengths intermediate between those for spherical oscillator structure and for deformed γ-soft [3] or deformed axially-symmetric rotor [4] structures. The X(5) predictions for level energy spacings and electromagnetic transition strengths have been extensively compared with data for nuclei in transitional regions between spherical and rotor structure [5,6,7,8,9,10,11,12,13]. For several such nuclei, including the N =90 isotopes of Nd, Sm, Gd, and Dy, the X(5) predictions match well the yrast band level energies and the excitation energy of the K π =0 + 2 band head [ Fig. 1(a,b)]. The X(5) predictions also reproduce essential features of the electric quadrupole transitions from the K π =0 + 2 band to the ground state band: the presence of strong spinascending interband transitions but highly-suppressed spin-descending transitions. However, several discrepancies exist between the X(5) predictions and observed values. The spacing of level energies in the K π =0 + 2 band is predicted to be much larger than in the ground state band, but empirically at most a slightly larger energy scale is found for the K π =0 + 2 band [ Fig. 1(c)] [9,11,12]. This overprediction is encountered in descriptions of transitional nuclei with the interacting boson model (IBM) and geometric collective model (GCM) as well [18,19]. For nuclei with yrast band level energies matching the X(5) predictions, the yrast band B(E2) strengths tend to fall below the X(5) predictions, and sometimes even below the pure rotor predictions (see Fig. 2 of Ref. [11]). For the N =90 nuclei, the transitions between the K π =0 + 2 and ground state bands have strength ratios typically matching those predicted, but their strength scale is considerably weaker than predicted [5,7,9,20,21]. It is thus necessary to ascertain which aspects of the X(5) description are most important in determining the predictions for these basic observables. The square well potential involves an infinitely-steep "wall" in the potential as a function of β, presumably a radical approximation. Moreover, the model has so far been used only with a first-order electric quadrupole transition operator, but the likely importance of second-order effects has been noted by Arias [22] and by Pietralla and Gorbachenko [23]. In the present work, the infinitely stiff confining wall is replaced with a gentler, sloped wall, constructed using a linear potential. The effects upon calculated observables of the introduction of a sloped wall and of a quadratic transition operator are addressed. A computer code for solution of the sloped well eigenproblem is provided through the Electronic Physics Auxiliary Publication Service [24]. 
Consider the Bohr Hamiltonian [4]

H = −(ħ²/2B) [ (1/β⁴) ∂/∂β (β⁴ ∂/∂β) + (1/(β² sin 3γ)) ∂/∂γ (sin 3γ ∂/∂γ) − (1/(4β²)) Σ_κ M_κ² / sin²(γ − 2πκ/3) ] + V(β, γ), (1)

where β and γ are the Bohr deformation variables and the M_κ are angular momentum operators, with potential

V(β) = 0 for β ≤ β_w, V(β) = C (β − β_w) for β > β_w. (2)

Since this potential is a function of β only, the five-dimensional analogue of the central force problem arises. The usual separation of "radial" (β) and "angular" variables [3,25] occurs, yielding eigenfunctions of the form Ψ(β, γ, ω) = f(β)Φ(γ, ω), where ω ≡ (ϑ₁, ϑ₂, ϑ₃) are the Euler angles. The angular wave functions Φ(γ, ω), common to all γ-independent problems, are known [26]. For the radial problem, following Rakavy [25], it is most convenient to work with the "auxiliary" radial wave function ϕ(β) ≡ β² f(β). This function obeys a one-dimensional Schrödinger equation with a "centrifugal" term,

−ϕ''(β) + [ (2B/ħ²) V(β) + α/β² ] ϕ(β) = ε ϕ(β), (3)

where the centrifugal coefficient α is related to the O(5) separation constant τ (τ = 0, 1, …) by α = (τ + 1)(τ + 2). For problems with a more general potential V(β, γ) = V_β(β) + V_γ(γ), Iachello [2] showed that an approximate separation of variables occurs, provided that V_γ(γ) confines the nucleus to γ ≈ 0 (see Ref. [2] for details). In this "γ-stabilized" case, the eigenfunctions are of the form Ψ(β, γ, ω) ∝ f(β) η(γ) φ_KLM(ω), where the φ_KLM(ω) are the conventional rigid rotor angular wave functions [4] for angular momentum L, z-axis projection M, and symmetry axis projection K. The auxiliary radial wave function again obeys (3), but now with α = L(L + 1)/3 + 2.

In the region β < β_w, the potential V(β) of (2) vanishes, and the radial equation (3) reduces to the Bessel equation of order ν = (α + 1/4)^{1/2}. The solutions with the correct convergence properties at the origin are ϕ(β) ∝ β^{1/2} J_ν(ε^{1/2} β), where ε ≡ (2B/ħ²) E. In the region β > β_w, where the potential is linear in β, an analytic solution does not exist for the full problem with centrifugal term. For α = 0 only, (3) reduces to the Airy equation, ϕ''(u) = u ϕ(u), in a shifted and rescaled variable u. The analytic solutions obtained for α = 0 provide a very efficient basis for numerical diagonalization to obtain the true α ≠ 0 solutions of the radial equation (3). It is first necessary to obtain a basis set of α = 0 solutions

ϕ(β) = N₁ β^{1/2} J_{1/2}(ε^{1/2} β) for β ≤ β_w, ϕ(β) = N₂ Ai[c^{1/3}(β − β_w) − ε c^{−2/3}] for β > β_w, c ≡ (2B/ħ²) C. (4)

The eigenvalues of ε are determined by the condition that ϕ(β) be continuous and smooth at the matching point β = β_w. This yields a transcendental equation which is solved numerically for ε. The normalization coefficients N₁ and N₂ then follow from continuity and the requirement ∫₀^∞ dβ |ϕ(β)|² = 1. Since the radial equation (3) has the form of a one-dimensional Schrödinger equation, its solution for general values of α may be carried out as the matrix diagonalization problem for a corresponding "Hamiltonian" matrix h, including the centrifugal potential, with respect to these α = 0 basis functions, with entries

h_ij = ε_i δ_ij + α ∫₀^∞ dβ ϕ_i(β) ϕ_j(β) / β²,

where ε_i is the α = 0 eigenvalue associated with ϕ_i. Convergence in this basis is rapid; for instance, the eigenvalues of the ground state and first excited radial solution converge to within ∼1.5% of their true values with a truncated basis of only 5 eigenfunctions. Values shown in this paper are calculated for a basis size of 25. For illustration, an example potential, with centrifugal contribution, and the corresponding calculated eigenvalues are shown in Fig. 2.

[Figure 2: Energies of low-lying 0⁺ and 2⁺ levels for the sloped well potential with S = 50. The potential without the five-dimensional centrifugal term is shown (solid curve), together with the potential including the centrifugal contributions for L = 0 and L = 2 (dashed curves).]

Electromagnetic transition strengths can be calculated from the matrix elements of the collective multipole operators. The general E2 operator for the geometric model [27,28,29] may be expanded in laboratory frame coordinates α_{2μ}, to second order, as given in Ref. [30].
For the present purposes, it is necessary to reexpress this operator in terms of the intrinsic frame coordinates and the Wigner functions D²(ω) [31], giving, to second order in β, an expansion (7) with linear and quadratic terms of strengths A₁ and A₂. In both the γ-independent and γ-stabilized cases, the matrix element of M(E2; μ) between two eigenstates factors into an angular integral and a radial integral. Here we consider matrix elements between unsymmetrized γ-stabilized wave functions [4] as needed in calculations for the rigid rotor, X(5), or γ-stabilized sloped well models. The matrix element separates into intrinsic and Euler angle integrals, with distinct reduced matrix element expressions for K′ − K = 0 and for K′ − K = ±2, where dτ ≡ β⁴ dβ |sin 3γ| dγ and the reduced matrix element normalization convention is that of Rose [32]. (The matrix elements of the symmetrized wave functions, for K ≠ 0, may be calculated from this matrix element as usual [4].) Considering the present β-γ separated wave functions Φ(β, γ) = f(β) η(γ), for the case of no γ excitation (so K′ = K = 0), and under the approximation γ ≈ 0, these integrals reduce to the radial integrals

I₁ = ∫₀^∞ dβ β⁴ f_f(β) β f_i(β), I₂ = ∫₀^∞ dβ β⁴ f_f(β) β² f_i(β),

between the initial and final radial wave functions f_i and f_f. Quadrupole moments, defined by e Q_J ≡ (16π/5)^{1/2} ⟨JJ|M(E2; 0)|JJ⟩, may be calculated as e Q_J = (16π/5)^{1/2} (J J 2 0|J J) ⟨J||M(E2)||J⟩.

The following calculations can be considerably simplified if it is noted that the eigenvalue spectrum and wave functions depend upon the Hamiltonian parameters B, β_w, and C only through a single dimensionless combination, S, to within an overall normalization factor on the eigenvalues and an overall dilation of all wave functions with respect to β. [This follows from invariance of the Schrödinger equation solutions under multiplication of the Hamiltonian by a constant factor and under a transformation of the potential V′(β) = a² V(aβ) [33].] For a given value of S, the numerical solution need only be obtained once, at some "reference" choice of parameters (e.g., 2B/ħ² = 1 and β_w = 1), and the solution for any other well of the same S can be deduced analytically. Specifically, suppose the reference calculation yields an eigenvalue ε and a normalized radial wave function f(β). Then a calculation performed for the same B and S but for a different width β′_w produces the eigenvalue ε′ = ε/β′_w² and normalized wave function f′(β) = β′_w^{−5/2} f(β/β′_w), and the radial integrals scale to I′₁ = β′_w I₁ and I′₂ = β′_w² I₂. Thus, the essential parameter which controls the relative strengths of the linear and quadratic terms of the E2 operator is A′ ≡ A₂ β_w / A₁; the matrix element in (9) can then be rewritten in terms of I₁, I₂, and A′, as in (14). Ratios of E2 matrix elements depend only upon S and A′. A computer code for solution of the sloped well eigenproblem and for calculation of the radial matrix elements between eigenstates is provided through the Electronic Physics Auxiliary Publication Service [24]. This code also calculates observables for the E(5) and X(5) models.

III. RESULTS

In the following discussion, let us restrict our attention to γ-stabilized structure relatively close to the X(5) limit of the sloped well model, since this regime is most directly relevant to the transitional nuclei recently considered in the context of the X(5) model. The sloped well potential approaches a pure linear potential as β_w vanishes at fixed slope (that is, as S → 0) and approaches a square well as the slope goes to infinity at fixed β_w (that is, as S → ∞).
It can thus produce a much wider variety of structures than are considered in the present discussion. However, calculations for the full range of these cases may be obtained with the provided computer code [24]. First we examine the energy spectrum, comparing it to the X(5) spectrum. Naturally, the eigenvalues for the sloped well are lowered relative to those for the X(5) well of the same β w , as the outward slope of the wall effectively widens the well, causing level energies to "settle" lower. The essential feature is that the widening of the well introduced by the wall slope is a relatively small fraction of the well width at low energies, while it is much greater at high energies, as may be seen by inspection of the potential (Fig. 2). Thus, the high-lying levels experience a disproportionately greater increase in the accessible range of β-values than do low-lying levels and consequently are lowered in energy relative to the low-lying levels. From the calculated energies, it is seen that as S is decreased from infinity the higher-spin levels within a band are lowered more rapidly than the lower-spin members, resulting in a reduction of the ratio R 4/2 ≡E(4 + 1 )/E(2 + 1 ) for the yrast band [ Fig. 1(a)] and a lowering of the curve of E versus J for each band (Fig. 3). The excited band head energies are lowered as well [ Fig. 1(b)]. But the most dramatic change is the rapid collapse of the spacing scale of levels within the excited bands relative to that of the ground state band [ Figs. 1(c) and 3]. For S≈50, the predicted energy spacing scale within the K π =0 + 2 band is reduced sufficiently to be consistent with the spacings found for the N =90 transitional nuclei, while the energies of low-spin yrast band members and the K π =0 + 2 band head are still relatively close to their X(5) values, as shown in Fig. 3. The second order term in the E2 operator (7) can interfere either constructively or destructively with the first order term. For all transitions between low-lying levels considered here, the radial integrals I 1 and I 2 in (14) have the same sign. Thus, negative values of A ′ lead to constructive interference [note the negative coefficient in (14)], while positive values lead to destructive interference. For the X(5) square well, the higher-spin members of the yrast band have larger average β values than do the low-spin members, so the quadratic term is relatively more important for the higher-spin levels. In the case of destructive interference, the curve showing the spin dependence of B(E2) values, normalized to B(E2; 2 + 1 → 0 + 1 ), falls below that obtained with the simple linear E2 operator, as seen in Fig. 4(a). The broad range of such curves obtained experimentally (see Ref. [11]) can be qualitatively reproduced with different values of A ′ . Destructive interference also reduces the interband B(E2) strengths and the in-band B(E2) strengths within the K π =0 + 2 band, relative to B(E2; 2 + 1 → 0 + 1 ) [ Fig. 5(b)], ameliorating the overprediction of interband strengths in the X(5) model. The spin-descending interband transitions in the X(5) model have highly-suppressed linear E2 matrix elements, so these transitions are very sensitive to even a small quadratic contribution. Values of A ′ which give only moderate modifications to the other transitions can give complete destructive interference for these spin-descending transitions. The spin dependence of quadrupole moments within the yrast band is shown in Fig. 4(b). 
Observe that the situation just described differs considerably from that encountered for a pure rotor. For a rigid rotor, the intrinsic wave function Φ_{αK}(β, γ) is the same for all levels within a band, so I₂ provides only a uniform adjustment to the intrinsic matrix element between bands. Inclusion of the second order term in M(E2) thus leaves unchanged the ratio of any two B(E2) values within a band or the ratio of any two B(E2) values between the same two bands. Although inclusion of a quadratic term in the E2 operator with A′ > 0 can at least qualitatively explain the discrepancies between the X(5) B(E2) predictions and empirical values, this explanation is not entirely satisfactory. Many different spin dependences of the B(E2) values within the yrast band are observed for nuclei with similar energy spectra [11], and these require correspondingly varied, apparently ad hoc choices of the parameter A′ for their reproduction. Moreover, it is possible to obtain estimates for the coefficients A₁ and A₂ in the geometric E2 operator based on a simple model of the nuclear charge and current distribution, as described in Refs. [28,29], and these values yield A′ ≈ −0.2, giving weak constructive interference for the low-lying transitions. In the interacting boson model, the E2 transition operator is of the form T(E2) ∝ (d†×s + s†×d)^(2) + χ(d†×d)^(2), in terms of the boson creation operators s† and d†, where the value χ = −√7/2 is commonly used in calculations involving the transition from spherical to axially-symmetric deformed structure [35,36]. In the classical limit, (d†×s + s†×d)^(2) may be approximately identified with the linear term of the geometric model transition operator and χ(d†×d)^(2) with the quadratic term. The addition of these terms is constructive for low-lying transitions, and the relative contribution of the second-order term is comparable to that obtained for A′ ≈ −1 in the present description.

The effect of sloped walls on the calculated B(E2) strengths is dominated by the greater broadening of the well at high energies than at low energies discussed above. While all the eigenfunctions "spread" in β extent relative to those for the square well, this spreading is most pronounced for the high-lying levels. Since the first order E2 operator is proportional to β, the E2 matrix elements tend to be enhanced for the higher-lying levels. In the yrast band, the in-band B(E2) strengths for higher-spin band members are increased relative to those for the lower-spin band members, as are the quadrupole moments for higher-spin band members [Fig. 4(c,d)]. Several of the interband B(E2) strengths are also increased relative to B(E2; 2⁺₁ → 0⁺₁) (Fig. 5). The changes in B(E2) values induced by decreasing S are largely opposite in sense to those produced by introduction of the second-order term in the E2 operator. The parameters S and A′ may be chosen so as to balance these two effects against each other, except that for the spin-descending interband transitions the strong destructive interference tends to dominate.

[Figure 6: Level scheme and selected B(E2) strengths (a) for the sloped well with parameters chosen to approximately reproduce the observed low-energy structure of ¹⁵⁰Nd (S = 75, A′ = 0.6) and (b) as measured for ¹⁵⁰Nd [7,14]. Arrow thicknesses are proportional to the logarithm of the B(E2) strength. Limits are indicated on experimental B(E2) strengths for transitions with unknown E2/M1 mixing ratios.]

To allow comparison with empirical values, in Fig.
6(a) predictions obtained with the sloped wall potential and quadratic E2 operator are shown for parameter values chosen to approximately reproduce the observed lowenergy structure of 150 Nd. The experimental values are given in Fig. 6(b). Finally, let us consider the effects of wall slope on the properties of the K π =2 + 1 band, or γ band. Within the γ-stabilized separation of variables of Ref. [2], the properties of this band are largely independent of the specific choice of γ-confining potential V γ (γ). This potential determines the band head energy as well as the γ-dependent wave function η(γ). The wave function, however, simply contributes a normalization factor |sin 3γ| dγη 1 (γ) sin γη 0 (γ) to I 1 in (11), and an analogous factor to I 2 , common to all electromagnetic matrix elements between the K π =2 + 1 band and the K π =0 + 1 and 0 + 2 bands. Although these quantities can be calculated for any particular hypothesized form for V γ (γ), such as a harmonic oscillator potential [2], they in practice may be treated as free parameters. The essential feature of the K π =2 + 1 band is that the radial wave function for each of its members is the "ground state" solution of the radial equation (3) for the given angular momentum. This K π =2 + 1 band is thus essentially a duplicate of the yrast band, displaced to a higher energy by the excitation energy in the γ degree of freedom, with energy spacings and radial wave functions for the even spin members identical to those for the yrast band, but with the addition of odd spin members and with different angular wave functions. (Note that for K =0 Bijker et al. [13] use a different separation procedure from that in Refs. [2,6], yielding a modified form of the radial equation with α= 1 3 [L(L + 1) − K 2 ] + 2, which changes the energy spacings and in-band radial matrix elements by 5% relative to those of the yrast band.) Thus, the dependence of K π =2 + 1 band properties upon wall slope closely matches that of the yrast band properties. Notably, the K π =2 + 1 band does not demonstrate the rapid decrease in energy spacing scale with decreasing wall slope exhibited by the K π =0 + 2 band, as illustrated in Fig. 7(a). This is at least qualitatively consistent with the observed similarity of the yrast and K π =2 + 1 , but not K π =0 + 2 , band energy spacings in the N =90 X(5) candidate nuclei. sess the same radial wave functions as the yrast band members, the strengths of transitions within the K π =2 + 1 band or between this band and the yrast band depend upon the same radial matrix elements as do the yrast inband transition strengths and quadrupole moments already considered. Consequently, decreasing wall slope leads to a moderate enhancement of the interband transition strengths involving higher spin levels, directly commensurate with the increases shown in Fig. 4(c,d). The dependence of branching ratios from the K π =2 + 1 band to the yrast band on wall slope is shown in Fig. 7(b). Transitions between the K π =2 + 1 band and the K π =0 + 2 band depend instead upon radial matrix elements which contribute to the K π =0 + 2 to yrast band transition strengths. (The small radial matrix element values which yield the characteristic suppression of spin-descending transitions from the K π =0 + 2 band to the yrast band here yield a suppression of spin-ascending K π =2 + 1 to 0 + 2 transitions.) 
Decreasing wall slope yields enhancement of the allowed transitions and, for the low-spin levels, either little change or substantial reduction of the suppressed transitions [ Fig. 7(c,d)]. Detailed quantitative predictions for the K π =2 + 1 band level energies and electromagnetic observables, using either the separation of variables of Refs. [2,6] or that of Ref. [13], may be obtained with the provided code [24]. IV. CONCLUSION The use of a β-soft potential within the geometric picture has recently received attention as providing a simple description of nuclei intermediate between spherical and rigidly deformed structure. From the present results, it is seen that the energy spacing scale of states within excited bands is highly sensitive to the stiffness of the well boundary wall. A potential for which the well width increases with energy can produce a more compact spacing scale for excited states than is obtained with a pure square well, providing much closer agreement with the observed energy spectra for nuclei in the N ≈90 transition region. It is also found that a second-order contribution to the E2 transition operator can lead to a wide range of possible yrast band B(E2) spin dependences, as well as to modifications of off-yrast matrix elements. However, a systematic understanding of the proper strength for this second-order contribution is needed if the E2 operator is to be applied effectively.
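As a concrete illustration of the α = 0 matching construction described above, the following sketch finds the lowest sloped-well eigenvalues by matching the interior sin(√ε β) solution to the exterior Airy function. It works in the scaled units 2B/ħ² = 1, assumes the potential rises linearly with slope C beyond β_w, and is not the published code of Ref. [24]; the numerical values of β_w and C are illustrative.

import numpy as np
from scipy.special import airy
from scipy.optimize import brentq

def alpha0_eigenvalues(beta_w=1.0, C=50.0, n_levels=5):
    """Eigenvalues eps of -phi'' + v(beta) phi = eps phi with alpha = 0,
    where v = 0 for beta <= beta_w and v = C (beta - beta_w) beyond the wall."""
    def mismatch(eps):
        k = np.sqrt(eps)
        u0 = -eps / C ** (2.0 / 3.0)          # Airy argument at the matching point
        ai, aip, _, _ = airy(u0)
        # continuity of the logarithmic derivative at beta = beta_w
        return k * np.cos(k * beta_w) * ai - C ** (1.0 / 3.0) * aip * np.sin(k * beta_w)
    eps_grid = np.linspace(0.1, 400.0, 4000)
    vals = [mismatch(e) for e in eps_grid]
    roots = []
    for a, b, fa, fb in zip(eps_grid[:-1], eps_grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            roots.append(brentq(mismatch, a, b))
            if len(roots) == n_levels:
                break
    return roots

print(alpha0_eigenvalues())   # lowest alpha = 0 eigenvalues for this wall stiffness

The same construction, followed by diagonalization of the centrifugal term in this basis, yields the α ≠ 0 spectrum; as the wall becomes stiffer at fixed β_w, the eigenvalues approach the familiar square-well (X(5)-like) values.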
Potencial de restauración de bosques de coníferas en zonas de movimiento de germoplasma en México Potential of restoration of coniferous forests from germplasm transfer zones in Mexico coníferas degradación (ZMG), (N), (UPG) (BG); Abstract Forest land restoration requires data related to the number of seedlings produced and the level of soil degradation. Decision makers need to know the efforts of the national reforestation program as a driver of ecosystem restoration in Mexico. To assess the restoration potential of conifer forests and reduce land degradation by Germplasm Movement Zones (GMZ), priority zones for restoration were compared with areas that possess most effective restoration efforts: survival rate of planted seedlings, number of nurseries (N), Germplasm Production Units (GPU) and Germplasm Banks (GB) with data from Conafor corresponding to the 2016-2018 period. It was found that 27 GMZ had 7 418 975.30 ha of low-production forest land as priority areas and 9 389 577.70 ha of forest land with medium and low degradation. According to the variables used in the comparative analysis, eight GMZ (XII.4, XII.5, X.3, X.2, XII.1, V.3, XII.2, and XV.1) were identified as restoration potential zones because their priority areas could be totally reforested by using Pinus and Abies species. Introduction Terrestrial ecosystems provide a host of ecosystem services to humankind, including food, fodder, fiber, fuel and timber forest products. The demand for land products and services is degrading the ecosystems. About one third of the world's arable land is affected by degradation, which results in an increase in the number of people living in poverty in developing countries (Boer et al., 2017). Soil degradation is a serious global problem for many communities and is related to food insecurity, vulnerability to climate change and poverty (Barbier and Hochard, 2016). This degradation comes in various forms, including soil nutrient depletion, salinization, agrochemical contamination, soil erosion, vegetative degradation (e.g., deforestation) as a result of overgrazing, and the clearing of forests for use as farmland (Scherr and Yadav, 2001). Deforestation is a major problem for developing countries because it causes loss of biodiversity and increases the greenhouse effect (Hein et al., 2018). Most deforested areas occur in the temperate and subtropical zones (Angelsen et al., 1999). Around the world, there are several causes identified as the main drivers of deforestation: expansion of agricultural land, logging and firewood extraction, overgrazing, fires, mining, urbanization, military conflicts and tourism (Chakravarty et al., 2012). All of these must be addressed by each country in order to reduce their impacts. Deforestation brings with it some problems that globally affect natural resources and the human population (Chakravarty et al., 2012): climate change, loss of soil and water resources, flooding, decline in biodiversity, economic habitat loss, as well as social consequences. There are some essential strategies for reducing deforestation, which vary by region and time (Hein et al., 2018). In Mexico, temperate forests extend over an area of about 323 305 km 2 (around 17 % of the country), provide timber and non-timber resources, and are home to species essential to its biodiversity . However, these ecosystems have been reduced in almost 45 % of the country, due to increased land degradation (Semarnat-Colegio de Postgraduados, 2002). 
National projections for deforestation rates have varied from 260 000 to 1 600 000 ha year -1 over the past three decades, according to the record of academic studies and official reports (Couturier et al., 2012). The main causes of deforestation in Mexico are land use change for agriculture (82 %), illegal logging (8 %), as well as forest fires and diseases (6 %) (Goldstein et al., 2011). The government's response has been to legislate and establish public policy programs (Goldstein et al., 2011;Cotler et al., 2013), such as those of the Federal Environmental Protection Agency (Profepa), forest certification programs, afforestation and reforestation efforts, the creation of natural protected areas, and payment for environmental services programs. However, some other programs have favored or encouraged deforestation, including Procampo and Alianza para el Campo, since they promote agricultural activities at the cost of reducing forest areas (Schmook and Vance, 2009). Mexico's National Forestry Commission (Conafor) established Germplasm Movement Zones (GMZs), equivalent to seed zones, defined as areas with similar ecological and climatic characteristics that host populations with relatively uniform genotypes or phenotypes (Flores et al., 2014), in order to reduce the movement of germplasm out of its natural distribution. Zoning helps to increase the survival rate of established seedlings in the field, which is affected when species are planted outside their local distribution; therefore, they exhibit high mortality rates and poor adaptation to different growing conditions (Rehfeldt et al., 2014). Although these zones have been defined, there is still a movement of germplasm among the GMZs that affects plant growth and diversity. The reforestation program in Mexico is a permanent strategy to recover and increase forest areas and reduce forest land degradation; for example, in 2020 100 000 ha were reforested (FAO, 2020). However, the main problem is the low survival rate of seedlings (Burney et al., 2015) which is associated with poor quality seedlings (Escobar-Alonso and Rodríguez, 2019). The low survival percentages cause that the goals of reforestation are not fulfilled, which seek to restore and to conserve the forests of the country. In spite of the efforts to restore forests, none of the current degraded areas have been considered, nor have the level of degradation or the number of seedlings produced in nurseries per ecological zone or GMZ. The first is an area with wide formations of natural vegetation, but relatively homogeneous, similar in physiognomy although not necessarily identical (FAO, 2001). In order to propose a national restoration strategy, it is necessary to evaluate and use this information. Therefore, the objective of this research study was to assess the restoration potential of conifer forests in order to reduce land degradation by GMZs, by comparing priority areas for restoration with the most effective reforestation efforts. In this regard, the following questions were raised: 1) Does the amount of seedlings vary among conifer species produced in the nurseries?; 2) Is the deterioration of land that is home to conifers dissimilar in different production zones and restoration zones by ZMG?; 3) Does the survival rate of seedlings vary for conifers by GMZ?; and 4) Is the restoration potential of conifers different within each GMZ? 
This information is essential for planning reforestation actions to be initiated in order to restore those areas with soil degradation problems through the use of conifers. Materials and Methods The restoration potential to reduce soil degradation in the Germplasm Movement Zones (GMZ) of Mexico was analyzed (Conafor, 2016), based on comparisons between priority areas (production areas and restoration areas) and effective reforestation efforts (percentage survival of planted seedlings, number of nurseries, germplasm production units and germplasm banks). Production areas are forest lands that, according to the structure and composition of vegetation, are subject to forest exploitation (Semarnat, 2015); while the restoration zones are forest areas with degradation evidence, with different degrees of progress and that constitute a risk from the loss of the forest resource that they may represent (Semarnat, 2015). The germplasm production units are areas established in natural stands, plantations or nurseries, with individuals belonging to a forest species, selected by their genotype or phenotype, whose origin is well identified, and which are used for the production of fruits, seeds or vegetative material (Conafor, 2016). From Conafor data (2019a), the most commonly produced conifer genera and species were defined, and their average values of total seedlings planted from 2016 to 2018. This database has information on reforestation and conservation programs at the national level, which is used annually to write government reports. The conifer taxa were chosen because they cover most of the GMZs and produce different services for the population; for example, environmental services, timber production (Díaz-Núñez et al., 2016), and organic carbon storage (INECC, 2015). because they could be restored in a short time (Flores et al., 2019b). The effective reforestation efforts in each GMZ were assessed using the percentage of seedling survival defined by Conafor (2010), as well as the number of established nurseries (N), defined Germplasm Production Units (GPU) and installed germplasm banks (GB), according to Conafor's records. This information was used because it directly supports the production of coniferous seedlings in the country. In each GMZ, the area that can be reforested with 1 100 plants ha -1 , and the average survival rate were estimated based on their registered percentages (Conafor, 2010). GMZs with a high planted seedling survival rate and the highest amount of N, GPUs and GBs were considered the areas with the most effective reforestation efforts. Finally, the priority areas for restoration were compared with those with the most effective reforestation efforts in order to define the restoration potential sites. Figure 1. Survival rates (green colors) and nurseries (yellow circles), germplasm production units (blue circles), and germplasm banks (pink circles) in the GMZs. Pinus oocarpa varied in all survival rates, while P. pseudostrobus and P. teocote were the most frequent pines, with higher, medium, and lower survival rates; P. As for the number of nurseries, one zone had the most (72), five had a considerable number (13 to 19), 17 zones had few (1 to 8), and four zones had none (Table 3). One zone accounted for the largest number of GPUs among the established units (14), while 17 zones had one to six, and nine zones had none. 
On the other hand, seven zones had very few GBs (1 to 5), and 20 zones lacked germplasm banks altogether (Figure 1).

Discussion

This research evaluated the potential of certain Mexican conifers to reduce forest land degradation in the GMZs, through the reforestation program, in order to provide a basis for the implementation of a restoration strategy for temperate forests. The results of this study showed that the number of seedlings produced in nurseries, land degradation and survival rate differed among the selected conifers, and that their restoration potential varies among the GMZs. Consequently, during the reforestation process in the GMZs, the species studied were able to restore many forest lands with medium (III.C) and low degradation (III.D). Pinus species are the most widely distributed in the country, compared to other conifers (Farjon and Filer, 2013); therefore, they have been the most commonly used by nurseries. For example, P. cembroides, P. oocarpa and P. pseudostrobus are distributed along different temperate mountain ranges as pure conifer and mixed forests (Rzedowski, 1979; Flores et al., 2011; Farjon and Filer, 2013; Flores et al., 2019a) and are the most widely used for the production of seedlings. On the other hand, some pines are very important for obtaining wood (Sánchez-González, 2008), for the sawmill industry and for resin production (Fuentes et al., 2006), as well as for the establishment of commercial plantations (López-Upton et al., 2005); therefore, they are quite frequently grown in nurseries each year. Land degradation (II.C, and III.C and III.D) varied among the GMZs and presented different species of conifers; that is, the productivity of forest lands in II.C registered a smaller area of degradation, with 24 species (except P. jeffreyi), than land degradation types III.C and III.D with 16 species (Cupressus sempervirens, P. greggii, P. hartwegii, P. jeffreyi, P. lawsonii, P. maximartinezii, P. montezumae, P. patula, T. mucronatum were absent). This proved that the areas of II.C can be restored in a short time, but they need a great investment; large investments have been required for the restoration of comparable areas (Masek et al., 2011). The efforts made to implement the reforestation program have been significant in the restoration areas, but are still insufficient for some states, despite the fact that reforestation and soil improvement activities have been developed since 1999 (Ceccon et al., 2015). The reforestation rate in the country is not enough; it is estimated that, in order to recover 43.5 million ha of degraded soils, 400 000 ha must be reforested per year and approximately 68 million U.S. dollars must be invested; however, the Mexican government reforests only around 193 000 ha per year (Ceccon et al., 2015) and invests merely 32 million U.S. dollars (Sánchez-Velásquez, 2009). Species with medium or low production also have an important survival rate, as indicated by Gómez-Romero et al. (2012) for P. hartwegii (89 to 82 %). The species selected for nursery production should be tolerant to water deficit or even drought -as it happens with P.
cembroides, which is resistant to adverse conditions of rainfall, soil, frost, drought and high temperatures (Flores et al., 2018;Gutiérrez-García et al., 2015)-as, due to climate change, they are likely to experience drier conditions and water stress during their growth in the field. In addition, the selected taxa must have the ability to grow in substrates that limit their establishment, as is the case of P. leiophylla, a taxon whose seedlings reach significant height when produced on mine booty substrate, while P. devoniana has appreciable growth (Osuna-Vallejo et al., 2017). For eroded areas, soil formation through the use of conifer taxa is another aspect to consider in species selection. The restoration potential of conifers was different within the GMZ. The results clearly suggest that in a relevant number of zones (eight) their priority areas could be reforested, since both the production of seedlings and their different survival percentages indicate it. In this regard, the number of planted seedlings (1 100 plants ha -1 ) with their percentages of survival in the field are sufficient to cover these areas, although they only represent 8.80 % of the total of areas II.C, III.C and III. D. In order to propose a program to restoration degraded areas, it is necessary to define different densities and species which support the restoration process, for example P. durangensis could be used to restore III.C and III.D areas (Flores et al., 2019). Seedling survival is an important factor to consider when reforesting. It is estimated that, in Mexico, reforested areas reach a low (36 %) (Wallace et al., 2015) or medium (50 %) (Burney et al., 2015) average of seedling survival after their first year, due to poor seedling quality and drought. Therefore, it is suggested that local seedlings be used in reforestation projects to increase the potential for acclimatization (Sáenz-Romero and Guries, 2002) and reduce the risk of death from drought. The X.3 zone was the most important for the restoration potential, because it includes many nurseries, GPUs and GBs; this shows that X.3 has a good effort within the reforestation program (Flores et al., 2019b). For forest owners, conifers are interesting trees to use in reforestation areas; thus, in the region surrounding the Monarch Butterfly Biosphere Reserve, their restoration potential has been an important factor in the decision to use them to reforest agricultural plots and degraded forests (Honey-Rosés et al., 2018). In order to promote soil conservation practices with conifer taxa, the government has implemented the Forest Soil Conservation and Restoration Program, which pays a subsidy to landowners. This action aims to reduce the estimated land degradation in the country by 45 % (Semarnat-Colegio de Postgraduados, 2002). In the restoration areas, it is necessary to increase the efforts of the reforestation program (nurseries, GPUs and GBs); furthermore, restoration failures -i.e., the high initial mortality, deficient growth and susceptibility to biotic and abiotic stressors, due to the misuse of the source and the genetic quality of the forest reproduction material-must be avoided (Godefroid et al., 2011). Appropriate attention to the genetic quality of germplasm is important for a forest restoration that seeks to adapt tree species to changing conditions (White et al., 2007). Conclusions In recent decades, the surface area of Mexico's temperate forests has been reduced due to increased land degradation. 
The reforestation program is an ongoing strategy to increase forest areas and reduce forest land degradation with Pinus, Abies, Callitropsis, Cupressus and Taxodium. Pinus cembroides, P. pseudostrobus, P. oocarpa, P. devoniana, P. engelmannii, P. montezumae and P. greggii, which add up to 76.36 % of the total production in nurseries during the analyzed period. In addition, these taxa are distributed in 27 GMZs, which have 7 418 975.30 ha of low-production forest land (II.C) and 9 389 577.70 ha of medium-and low-degradation forest land (III.C and III.D). In the GMZs, 10 zones are identified with higher survival rates: eight with medium values, five with low values, and four with lower values. As for the number of nurseries, one area contains the majority of the nurseries; five areas include a considerable amount of them; 17 include few, and four areas include none. One zone includes most of the established GPUs, while 17 zones have few units, and nine zones have none at all. In seven zones there are very few GBs, and none in 20. Eight GMZs have restoration potential, since they can be fully reforested, but zone X.3 alone includes more nurseries, GPUs and GBs, compared to the others.
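As a rough illustration of the comparison described in Materials and Methods, the following sketch estimates, per GMZ, the area that the reported seedling production could cover at 1 100 plants ha⁻¹ after accounting for survival, and flags zones whose priority area could be fully reforested. The zone names are real GMZ codes, but the production, survival and priority-area figures below are placeholders, not Conafor data.

# Hypothetical per-GMZ figures; replace with Conafor production, survival and
# priority-area data for the 2016-2018 period.
gmz_data = {
    "XII.4": {"seedlings": 4_500_000, "survival": 0.60, "priority_ha": 2_100},
    "X.3":   {"seedlings": 9_000_000, "survival": 0.55, "priority_ha": 4_000},
    "V.3":   {"seedlings": 1_200_000, "survival": 0.45, "priority_ha": 1_500},
}

PLANTS_PER_HA = 1_100  # planting density assumed in the study

for zone, d in gmz_data.items():
    reforestable_ha = d["seedlings"] * d["survival"] / PLANTS_PER_HA
    covered = reforestable_ha >= d["priority_ha"]
    print(f"{zone}: {reforestable_ha:,.0f} ha reforestable -> "
          f"{'restoration potential' if covered else 'insufficient production'}")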
VTOL UAV Transition Maneuver Using Incremental Nonlinear Dynamic Inversion The paper seeks to study the control system design of a novel unmanned aerial vehicle (UAV). The UAV is capable of vertical takeoff and landing (VTOL), transition flight and cruising via the technique of direct force control. The incremental nonlinear dynamic inversion (INDI) approach is adopted for the 6-DOF nonlinear and nonaffine control of the UAV. Based on the INDI control law, a method of two-layer cascaded optimal control allocation is proposed to handle the redundant and coupled control variables. For the weight selection in optimal control allocation, a dynamic weight strategy is proposed. This strategy can adjust the weight of the objective function according to the flight states and mission requirements, thus determining the optimizing direction and ensuring the rationality of the allocation results. Simulation results indicate that the UAV can track the target trajectory accurately and exhibit continuous maneuverability in transition flight. Introduction UAVs have increasing applications including surveillance, communications, search and rescue operations, and other military tasks.Among different flight conditions of UAVs, the 0-1000 m height area in cities is one of the most significant applications, where complex terrain and significant gusts arising from atmospheric turbulence exist. This research studies a novel fixed-wing VTOL UAV with thrust vector engines, which can be applied in urban areas.The UAV can get rid of the restrictions imposed by takeoff and landing conditions and be accurately recovered by hover function; the UAV owns a larger combat range and higher flight speed by forward flight capability.A new concept of low-speed cruising is studied in this paper.For the most current VTOL aircrafts, the transition from hover to forward flight is short and stable.However, for lowspeed cruising, the transition is prolonged as a normal flight state.The vehicle can maintain transition flight for long time cruising by adopting direct force control and has favorable maneuverability.This flight mode is appropriate for vehicles flying in low-altitude complex conditions.In the meantime, under the low speed of UAV and the inefficient aerodynamic surface control during the transition flight, the incorporated control strategy of vectoring nozzles and aerodynamic surface should be adopted to control the attitude.In view of this, the thrust vector direct force control is of critical significance for transition maneuver.However, large nonlinearities, redundancies, and coupling effects arise when this technique is adopted. In recent years, the research on VTOL UAVs is increasingly prosperous with the advancements in automatic control and the increasing popularity of UAV platforms [1].The problem of transition maneuvers has been studied for different types of UAVs, including a fixed-wing aircraft equipped with a thrust vector engine and lift fan [1][2][3], tiltrotor aircraft [4][5][6][7], tail-sitter aircraft [8,9], ducted-fan VTOL aircraft [10,11], and tilt-wing aircraft [12,13].A pilot auxiliary control system was designed by Francesco and Mattei [1] for a fixed-wing tiltrotor UAV, and the control logics for different flight states were specified and synthesized through adopting INDI method.The daisy-chaining logic was employed to handle the control redundancy.The autonomous transition control of two V/STOL aircrafts was studied by Xili et al. 
[2].The nonlinear trajectory control strategies in longitudinal direction have been designed specifically for aircrafts in different types.The maneuver of a tilt quadcopter was researched by Ryll et al. [7].The control design was basically dependent on the exact linearization of the motion equations, and the actuation redundancy was calculated through employing pseudoinverse matrices.The previous works generally decouple the longitudinal and lateral control, design control logics for hover, transition flight, and forward flight, respectively, and regard some redundant control variables as constants to solve the transition maneuver control problem.The UAV's maneuver potential with vectored thrust cannot be fully utilized. The INDI method is adopted in this paper to control UAV's position/attitude during transition maneuver.The INDI method, which originates from the nonlinear dynamic inversion (NDI), solves the incremental form of equations of motion and generates a control law substantially reducing the dependence on aerodynamic model and other vehicle models.INDI was firstly adopted to control UAV's attitude control by Sieberling et al. [14].The INDI attitude control law of a quadcopter was proposed, and the momentum of the propellers was incorporated in the controller by Smeur et al. [15].They generalized this method based on the previous work to the outer control loop (position control loop) of a quadcopter under severe gust loads [16,17].Lu et al. [18] applied the INDI method to the fixed-wing aircraft trajectory controller.They compared the performance of the INDI controller with that of NDI approach and proved that the INDI method can reduce the model uncertainties effectively. An INDI control system is designed in this paper to address the 6-DOF nonlinear control of the UAV in transition flight, and the main contributions are listed below: (1) Given the problems of strong nonlinearities and multiaxis coupling characteristics, a unified 6-DOF nonlinear control strategy is proposed to control position/ attitude and there is no need to switch the control logic according to different flight states.The INDI method is introduced to address the model uncertainty and control coupling problem.Different from the work conducted by Lu et al. [18], the sideslip angle β is not assumed to be zero, and the vectored thrust in 3 directions of the body axis is considered (2) A two-layer cascaded optimal control allocation method is proposed to address the control redundancy based on the INDI control law.The firstlayer optimal allocation is conducted to allocate the increment of flight attitude and vectored thrust in the translational dynamics control loop.The solution of the engine thrust, vectoring nozzle deflections, and aerodynamic surface deflections are calculated in the second-layer control allocation (3) A dynamic weight selection strategy is designed for the objective function of the two-layer cascaded optimal control allocation.In dynamic weight selection strategy, a weight generator and a weight regulator are designed, which calculate weight through an analytic hierarchy process (AHP) and adjust weight according to flight states and mission requirement, respectively.The dynamic weight selection strategy can allocate control variables according to mission requirement and ensure the optimal results to be feasible This paper is constructed as follows: the configuration and aerodynamics characteristics of the VTOL UAV researched in this paper are described in Section 2. 
A 6-DOF mathematical model and an INDI control system of the vehicle are given in Section 3. The two-layer cascaded optimal control allocation method is presented in Section 4. Section 5 provides the simulation results. Finally, conclusions are drawn in Section 6. UAV Configuration and Aerodynamics The UAV is designed in a tandem-wing plus lift-body configuration. This aerodynamic configuration provides more lift under the limitation imposed by the wing span and makes the UAV suitable for flight in low-altitude, complex conditions. The power system consists of a lift fan in the front part of the fuselage and two thrust-vector engines, one on each side of the rear part of the fuselage, as presented in Figures 1(a) and 1(b). At the bottom of the lift fan, a control rudder is mounted on the shaft. The control rudder can be deflected 12 degrees to the left and right and provides lateral vectored thrust T_y and yaw moment n_T, as illustrated in Figures 1(c) and 1(d). The engine's vectoring nozzle can be swung up by 15 degrees and down by 90 degrees. It provides 3-axis vectored thrust for direct force control, as shown in Figures 1(e) and 1(f). Through the cooperation of the lift fan and the thrust-vector engines, the UAV can realize VTOL, transition flight, and cruise. In the absence of wind tunnel experiments, the UAV's aerodynamic coefficients were obtained by a combination of aerodynamic estimation and CFD calculation. Among the aerodynamic coefficients, the static pitch moment derivative C_mα ranges from −0.04 to 0.056. When the angle of attack is between 0 and 18 degrees, C_mα > 0, which indicates that the UAV is statically unstable in the pitch channel. The reason for this deliberately unstable design is that, for a VTOL UAV, the location of the mass center must be reconciled with the power system layout, and an aerodynamic center located ahead of the mass center helps the vehicle pitch up during the transition maneuver. The static yaw moment derivative C_nβ ranges from −0.017 to −0.03. The static stability in the yaw channel is caused by the dihedral effect of the rear wing. The vehicle's main dynamic moment derivatives are C_lp = −0.0084, C_mq = −0.01739, and C_nr = −0.0011. They are all less than zero, which indicates that the damping moments reduce the angular velocities and keep the vehicle dynamics stable. The mapping between the actuators and the aerodynamic forces and moments is usually nonlinear. In this paper, to meet the requirements of the control allocation, polynomial approximation is used to fit the aerodynamic data curves between the actuator deflections and the aerodynamic coefficients. This avoids the need for large lookup tables and speeds up the control allocation computation. The basic data of the UAV are presented in Table 1.
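Section 2 notes that, to avoid large lookup tables during control allocation, the aerodynamic data curves between actuator deflections and aerodynamic coefficients are fitted with polynomials. The sketch below illustrates that idea with numpy; the sample deflection/coefficient values and the chosen polynomial degree are assumptions for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical sample: pitching-moment coefficient C_m (from CFD/estimation)
# at several elevator deflections delta_e [deg]. Values are illustrative only.
delta_e = np.array([-32.0, -20.0, -10.0, 0.0, 10.0, 20.0, 32.0])
C_m     = np.array([ 0.085,  0.051,  0.024, 0.0, -0.022, -0.049, -0.080])

# Fit a low-order polynomial C_m(delta_e); degree 3 is an assumed choice.
coeffs = np.polyfit(delta_e, C_m, deg=3)
C_m_poly = np.poly1d(coeffs)

# The controller can now evaluate the fitted curve and its derivative
# (the control effectiveness dC_m/d(delta_e)) analytically, instead of
# interpolating a large lookup table at every allocation step.
dC_m_ddelta = C_m_poly.deriv()
print(C_m_poly(5.0), dC_m_ddelta(5.0))
```

A fitted polynomial also makes the sensitivity of each coefficient to each deflection available in closed form, which is the kind of quantity the incremental allocation of Section 4 needs at every control period.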
INDI Control System Design This section presents the design process of the control system.The trajectory tracking controller, by adopting time-scaled method, is split into four control loops, i.e., translational kinematics (position) control loop, translational dynamics (flight path) control loop, rotational kinematics (attitude) control loop, and rotational dynamics (angular rate) control loop.Given the differences of the inner and outer loops in the time constants, the control laws for inner and outer loops can be designed independently [19,20].Considering that the model uncertainties caused by aerodynamic force and moments merely exist in translational dynamics and rotational dynamics control loops, these two control loops are designed by the INDI method while the other two loops are designed by the NDI method.The equations of the 6-DOF flight dynamic model and thrust vector engine model adopted to design the control system are described in this section. 3.1. Translational Kinematic Control Loop.The reference flight path vector is calculated in translational kinematic control loop based on the target trajectory.The position vector and flight path vector of the aircraft are defined as where x, y, and z represent the positions of the aircraft in the North, East, and down directions, respectively.V denotes the total velocity of the aircraft, χ indicates the kinematic The desired derivatives of position vector can be designed by a classical linear controller according to the reference trajectory and the feedback of the vehicle's position. In (3), K 0 = K x K y K z T are the gains of the position linear controller.For a better explanation, the definitions are given below: the variables with superscript "des" denote the desired commands generated by the linear controller, and the variables with superscript "ref" denote the reference commands given for the controller to follow.The translational kinematic control loop contains no model uncertainties, and the NDI method is adopted to calculate the control input.The reference flight path command for the next control loop is calculated in In this loop, there are totally 6 control inputs.Among them, x 2a represent the aerodynamic control variables, where μ is the bank angle, and α and β are the angle of attack and angle of sideslip, respectively.x 2t represent the vectored thrusts, and T x , T y , and T z are the components of the thrust in the body axis.The equation of translational dynamics is expressed as In ( 7) and ( 8), L kg is the direction cosine matrix from earth axis to flight path axis, L kb α, β, μ is the direction cosine matrix from body axis to flight path axis, and L ka μ is the direction cosine matrix from wind axis to flight path axis. 
the aerodynamic force in which D represents drag, C represents aerodynamic side force, and L represents lift.F A and L kb α, β, μ and L ka μ are assumed as the functions of x 2a .The multiplication of direction cosine matrixes causes severe coupling in control variables.Furthermore, in translational dynamic control loop, the error caused by the estimation of F A will bring uncertainties in the control system.Therefore, the INDI method is adopted to address the control problem.To rewrite (6) in incremental form, it is defined that x 1n and x 2n represent the values of x 1 and x 2 in the next time step, and their relationships are described below: Applying Taylor expansion to x 1n at x 1 , x 2 and with higher-order terms neglected, the result can be expressed as In (10), the second and third terms partial to x 1 are assumed much smaller than the forth term, partial to x 2 .This 4 International Journal of Aerospace Engineering commonly arises from the principle of time scale separation [20].For simplification, it is approximated that Replace x 1n with ideal derivatives of flight path vector x des 1 , which is calculated by the linear controller.The incremental control equation can be denoted as To simplify the computation process, it is assumed that where x 2i and x 2j represent the elements in vector x 2 .Accordingly, the equation can be rewritten into the affinein-control form Δx 2t , 14 where g 1a and g 1t are the 3 × 3 matrixes, representing the aerodynamic control matrix and thrust vector control matrix.The detail information of g 1a and g 1t is shown in Appendix.Compared with a traditional aircraft in incremental approach, the thrust vector control matrix g 1t is additional.The elements in g 1t are polynomials of control variables in x 2a .Thus, the control system is transferred into a linear and time variant system, and the increment of control variables is decoupled.Then the ideal incremental virtual command Δx 2 can be calculated by In ( 15), the superscript " ‡" represents the dynamic weight pseudoinverse method, which will be introduced in the next section.The reference command for the next control loop is obtained by 3.2.2.Flight Path Vector Derivative Acquisition.In ( 12), (14), and (15), x 1 can be derived by V, χ, and γ, which are measured by onboard sensors.With the x des 1 − x 1 introduced, the term f 1 x 1 is cancelled, which is the reason why INDI is referred to a sensor-based approach.This approach transfers the dependence on the model accuracy into the dependence on the sensor accuracy.In most cases, the signals were measured by sensor contain noise, and the differentiation of noisy signal amplifies the noise.To make results accurate, a filter can be adopted to abate the noise in sensor data.In this paper, a second-order filter is employed.As stated in literatures [16,18,21], the washout filter can be expressed in Laplace domain as However, the filter leads to a delay which should be compensated.In the Taylor expansion shown in (10), x 1 , x 1 , and x 2 should be from the same moment.In this regard, a second-order filter is also applied for x 2 to counteract the impact caused by time delay in x 1 and x 1 , and ( 12) can be rewritten in a time-synchronized form where subscript "f " represents the filtered variable.Alternatively, other methods can also deal with the measurement delay problem, such as predictive filtering proposed by Sieberling et al. 
[14].However, the prediction requires additional modeling and cannot predict disturbances.The final reference command for x ref 2 can be given by 3.2.3.Uncertainty Analysis.The model uncertainty stemmed from aerodynamic force in translational dynamic control loop is analyzed in this subsection.The change of aerodynamic coefficients is assumed to be primarily caused by the angle of attack (α) and sideslip angle (β) for simplification.The aerodynamic coefficient can be expressed as In (20), C D , C C , and C L represent the total drag coefficient, side force coefficient, and lift coefficient, respectively; C D0 represents the sum of drag coefficients without the part contributed by α; C C0 represents the sum of side force coefficients without the part contributed by β; C L0 represents the sum of lift coefficients without the part contributed by α; and C Dα , C Cβ , and C Lα represent derivative coefficients about α and β.These coefficients vary with the flight condition and can be assumed as constants in the calculation of each control period.The methods to obtain these parameters involve mathematic estimation and CFD analysis, as described in Section 2. The errors between the estimate coefficients and accurate coefficients make the model uncertain and assume that the uncertainty is mainly caused by C D0 , C C0 , and C L0 .As the work by Lu et al. [18] indicates, the kinematic roll International Journal of Aerospace Engineering angle (μ) can be calculated directly under the assumption of β = 0, which largely simplifies the control equation.Additionally, coefficients C D0 , C C0 , and C L0 can be eliminated by adopting INDI, which evidently reduces model uncertainty.However, the UAV in this paper with direct force control owns great lateral maneuverability.Therefore, β cannot be approximated as zero, and μ is calculated through solving the control equation.As a result, coefficients C D0 , C C0 , and C L0 arise in the third column of g 1a and cannot be eliminated.Nevertheless, no uncertainty exists in the thrust vector control matrix g 1t . The application of INDI in the translational dynamics control loop has two primary advantages.First, INDI control law restrains the model uncertainty caused by aerodynamic force in control matrix g 1a and isolates its impact on direct force incremental control.Second, INDI can decouple the control variables in g 1 x 1 , x 2 and transfer the control equation into a linear form.Accordingly, the aerodynamic force and vectored thrust control allocation are simplified, the complex numerical solution of nonlinear coupling control is avoided, and computational load of onboard control system is reduced. Rotational Kinematic Control Loop.The objective in this control loop is to track x ref 2a given in the translational dynam-ics control loop.The angular rate vector of the vehicle is defined as In (21), p, q, and r represent the roll, pitch, and yaw rates in body axis, respectively.No model uncertainty exists in this control loop, and the control law based on the standard NDI approach is established below: It is noteworthy that γ, χ, and μ are not directly measured onboard in this control system.They are calculated by (23) derived from (6). 
where A x , A y , and A z are directly measured by the accelerometers of the aircraft.The reason why γ and χ are not acquired by the same method adopted in translational dynamic control loop is that the filter will delay the measurement and cause errors from accurate results.The reference angular rates can be obtained by where x des 2a is the desired command and designed by linear controller, which is similar to the one adopted to design x des 1 .3.4.Rotational Dynamic Control Loop.x 4 is defined as the control moments acting on the vehicle, and it can be denoted as where x 4s denote the moments generated by aerodynamic control surface; x 4t indicate the moments generated by vectored thrust; and l c , m c , and n c represent the rolling, pitching, and yawing control moments, respectively.The dynamics of the angular rates of the vehicle can be expressed into the following affine-in-control form 26 where J denotes the inertia matrix and M a refers to the aerodynamic moments generated by derivatives unrelated to control surface deflections. International Journal of Aerospace Engineering Like aerodynamic force, the aerodynamic coefficients can also be denoted as In ( 27) and (28), C l , C m , and C n represent the total 3axis aerodynamic moment coefficients, respectively; C l0 represents the sum of aerodynamic rolling moment coefficients without the part contributed by δ a and δ r ; C m0 represents the sum of aerodynamic pitching moment coefficients without the part contributed by δ e ; C n0 represents the sum of aerodynamic yawing moment without the part contributed by δ a and δ r ; and C lδ a , C lδ r , C mδ e , C nδ a , and C nδ r represent derivative aerodynamic moment coefficients about δ a , δ e , and δ r . The aerodynamic control moments are denoted as where q denotes the dynamic pressure, S denotes the reference wing area, b denotes the wing span, and c denotes the mean aerodynamic chord.δ AS = δ a δ e δ r T are control surface deflections.The control moment coefficients C AS are assumed to be accurate in aerodynamic estimation, and the uncertainties are considered to be caused principally by C l0 , C m0 , and C n0 .Rewriting (26) into the incremental form, as presented in (30), the term f 3 x 3 can be cleared up. In (30), x 3f represents the derivative of x 3 , which is the filtered signal collected by the sensor.In the absence of f 3 x 3 , the uncertainty caused by M a is eliminated and the nonlinear cross couplings of the angular rate term is also cancelled.The reference control moments can be calculated as After the calculation of four control loops, Δx 2t and Δx 4 will be output as virtual commands to the secondlayer control allocation.The control allocation of the engine thrust, vectoring nozzles, and aerodynamic control surface deflections will be introduced in the next section.The block diagram of the control system is illustrated in Figure 2. Two-Layer Cascaded Optimal Control Allocation The allocation method of redundant control variables (elements in x 2 and x 4 ) in translational dynamics and rotational dynamics control loops is introduced in this section.Given this, the allocation result of x 2 will affect x 4 , and the allocation results of power system actuators T L , T R , T F δ R , δ L , and δ F are determined by x 2t and x 4t synthetically.A two-layer cascade optimal control allocation method is designed to address UAV's control allocation based on INDI control law. First-Layer Trajectory Incremental Control Allocation. 
The first-layer control allocation is conducted to allocate the increment of flight attitude Δx 2a and vectored thrust Δ x 2t in trajectory control. 4.1.1.Incremental Pseudoinverse Method.In translational dynamics control loop, the control equation is linearized by the INDI method.Accordingly, the dynamic weight pseudoinverse method can be used in control allocation.Based on (14), it is defined as International Journal of Aerospace Engineering where x 1f represents the derivative of x 1 , which is the filtered signal collected by the sensor.As the filter will delay the time in x 1f , to keep all signals synchronized in control equations, the signal of x 2 is also required to be filtered to counteract the error generated by the delay of x 1f .Regardless of the control variable constrains, the optimal control allocation can be denoted as where Δx 2 denotes the optimization variable and x 2f and ν are considered constants, they are all 3 × 1 vectors, and W 1 and W 2 indicate the diagonal matrixes of weight, which are generated by dynamic weight strategy, and will be discussed later.The first term of objective function, as shown in (33), is adopted to control the scale of control variables in x 2 , while the second term is employed to limit the change rates of the control variables.Equation (33) can be expanded as x 2f are constants which can be calculated directly.Accordingly, it is deduced that (33) has the same minimizing argument as Substitute (37) into (36) and incorporate with constant x T 20 W 2 x 20 . Based on the foregoing deduction, the optimal problem presented in (33) is equivalence min On that basis, the minimum norm solution of the control allocation problem is obtained as 20 , 40 In (41), superscript " †" represents the pseudoinverse operation for the matrix.In order to make the first-layer control allocation an equality constrained optimization problem, the range of Δx 2 is not limited.Then an analytical solution is obtained, and numerical iterations are avoided, which is suitable for onboard use [22].With the allocation results of Δx 2a , the x ref 2a can be calculated by (19).After adjusting x ref 2a into the right quadrant, as shown in (42), x ref 2a is transmitted into the rotational kinematics control loop. The Δx 2t is considered the virtual command to transmit into the second-layer control allocation, in which Δ x 2t and Δx 4t are used to solve the values of power system actuators T L , T R , T F δ R , δ L , and δ F . Dynamic Weight Strategy. In the course of control allocation, the weight is adopted to describe the differences of control variables' significance.On the basis of the significance of control variables changing with flight states and mission requirements, a dynamic weight strategy is proposed to allocate control variables optimally and properly. 
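Before turning to how the weights themselves are chosen, it may help to see the first-layer optimization of (33)–(41) in executable form: it minimizes a weighted norm of the virtual controls and of their increments subject to the linearized INDI constraint ν = g₁ Δx₂, and the paper stresses that an analytical solution avoids numerical iterations. The sketch below solves the same equality-constrained quadratic program through its KKT system, which is an equivalent closed-form route to the weighted pseudoinverse; the matrix G, the weights, and all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def first_layer_allocation(G, nu, x0, w1, w2):
    """Equality-constrained weighted allocation:
         min  ||W1 (x0 + dx)||^2 + ||W2 dx||^2   s.t.   G dx = nu
    solved analytically through its KKT system (no iterations)."""
    W1sq = np.diag(w1**2)
    W2sq = np.diag(w2**2)
    m, n = G.shape
    H = 2.0 * (W1sq + W2sq)          # Hessian of the objective
    g = 2.0 * W1sq @ x0              # linear term from the ||W1 (x0 + dx)||^2 part
    KKT = np.block([[H, G.T],
                    [G, np.zeros((m, m))]])
    rhs = np.concatenate([-g, nu])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                   # increment dx = [d_mu, d_alpha, d_beta, dTx, dTy, dTz]

# Illustrative numbers only (not the paper's): 3 flight-path equations, 6 virtual controls.
rng = np.random.default_rng(0)
G  = rng.normal(size=(3, 6))         # stands in for [g_1a  g_1t]
nu = np.array([0.5, -0.2, 0.1])      # desired minus filtered flight-path derivative
x0 = np.zeros(6)                     # filtered current virtual controls
w1 = np.ones(6)
w2 = np.full(6, 2.0)
dx = first_layer_allocation(G, nu, x0, w1, w2)
print(dx, G @ dx - nu)               # constraint residual should be ~0
```

Raising an entry of W₂ penalizes fast changes of the corresponding virtual control, while raising the matching entry of W₁ penalizes its magnitude; these are exactly the knobs the dynamic weight strategy described next is meant to turn.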
The traditional determination of control variable's weight largely depends on human experience.In this regard, there is no absolute criteria for weight determination.It is commonly difficult to judge a control variable's significance globally, while the significance between every two control variables can be easily compared.In dynamic weight strategy, the analytic hierarchy process (AHP) is adopted to synthesize comparison results of every two control variables and calculate each control variable's weight by AHP-judgment matrix.The weight matrixes in the objective function can be denoted as In (43), each element in W 1 and W 2 represents the weight of the control variable corresponding to its subscript.According to different physical properties, control variables are divided into different sets.The 8 International Journal of Aerospace Engineering definitions and subordinations, respectively, of the sets are expressed in Based on the subordinations, the sets and control variables are classified into three hierarchies, known as the weight structure, as presented in Figure 3. The weights in different hierarchies are determined, respectively, and they all obey the rules listed below: (1) Weights in the same set are required to be normalized 46 (2) Weights in the first and second hierarchies satisfy w 1 , w 2 , w 11 , w 12 , w 21 , w 22 ∈ 0 0 01 1 47 (3) In the third hierarchy, w ijk is acquired by the APHjudgment matrix.The judgment matrix is established for each set, with elements representing the priorities of every two control variables in the same set.The processes of APH method are introduced in [23,24].According to the weights in each hierarchy, the final weights in the objective function can be calculated by 48 where W i k,k represents the element in weight matrix W i row k column k.In practical flight, the controller is required to generate the initial value of the weight structure according to the mission requirement and dynamically adjust the value of the weight structure according to the conditions of actuator saturation.In dynamic weight strategy, the weight generator and weight regulator are designed to realize the foregoing functions. The weight generator is designed to generate the initial value of the weight structure, in every control period before optimization.Its working principle is introduced below: Step 1. Establish weight structures which consist of w i , w ij , and w ijk . Step 2. Artificially design and test several weight structures according to some typical flight states and mission requirements.On this basis, index weight structures with their relative flight states and mission requirements save weight structures into repository.A simplified repository is established in this paper.For flight states, merely the impact of velocity is factored in, and velocities 5 m/s, 10 m/s, 15 m/s, 20 m/s, and 25 m/s count as typical flight states.In mission requirement, only the impact of direct force control level (DFC level ∈ 0, 1 ) is factored in, and DFC level equals 0, 0.5, and 1 count as typical mission requirements.Larger DFC level represents less attitude changes and more direct force control, while smaller DFC level which represents less vectored thrust usage and more attitude maneuver during trajectory tracking.The repository of 3 × 5 weight structures employed in this paper is acquired by artificial design and computer-based simulation.In control allocation, the weight structure for the current demand is interpolated by velocity and DFC level . 
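The third-hierarchy weights w_ijk are obtained from an AHP judgment matrix built from pairwise comparisons of the control variables in a set. A minimal sketch of the standard AHP computation (principal eigenvector plus Saaty's consistency check) follows; the judgment values themselves are illustrative assumptions, since the paper's repository of weight structures is design- and mission-specific.

```python
import numpy as np

def ahp_weights(J):
    """Weights from a reciprocal pairwise-comparison (judgment) matrix:
    the principal right eigenvector, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(J)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(J, lam_max):
    """Saaty consistency ratio; RI entries below are the standard table values."""
    n = J.shape[0]
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]
    CI = (lam_max - n) / (n - 1)
    return CI / RI

# Illustrative judgment matrix for one third-hierarchy set, e.g. {mu, alpha, beta}:
# entry (i, j) states how much more important variable i is than variable j.
J = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam = ahp_weights(J)
print(w, consistency_ratio(J, lam))
```

The resulting in-set weights are then normalized within each set and combined down the hierarchy, in the manner of (46)–(48), to produce the entries of W₁ and W₂ used by the allocator.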
The weight regulator is designed to adjust the weight when control variables exceed their limitation and ensure the rationality of allocation results.The working process of weight regulator is shown below. Step 1. Extract control variables saturated in the last control allocation according to the feedback information.The saturated control variable can be single or multiple. Step 2. Update saturation counter.Every control variable has a corresponding saturation counter n ijk ∈ N i, j = 1, 2, k = 1, 2, 3 , which is adopted to record saturation times.If the control variable is not saturated in the last allocation, its saturation counter will be established at zero. Step 3. Adjust the weight structure in accordance with the saturation counter results.For different hierarchies of weight structure, the adjustment strategies are different, as illustrated below: Strategy 1.For n ijk ∈ 1,150 with the interval of every 5 count times, increase the significance of the saturated control variables relative to other control variables in the identical minimum set.On that basis, update the APHjudgment matrix and recalculate w ijk in set.If more than one control variable gets saturated, the comparison between the saturated control variables in significance remains unchanged, and their significance is improved compared with those unsaturated.Strategy 2. For n ijk ∈ 151,300 , the weight (w ij ) in the middle hierarchy is required to be adjusted as Strategy 1 is exercised.Take the angle of attack α as an example.If α continues to be saturated with the interval of every 5 count times, increase the weight (w 11 ) of set S x 2a .In (47), w ij is divided into 101 levels, and w ij is improved to a new level for each weight adjustment.Assume w 11 = 0 6 and w 12 = 0 4 before the adjustment, and with the improvement of w 11 , the values turn out to be w 11 = 0 61 and w 12 = 0 39.Strategy 3.For n ijk ∈ 301, +∞ with an interval of every 10 count times, the weight (w i ) of set (S x 2 or S Δx 2 ) is required to be adjusted, as Strategy 1 and Strategy 2 are exercised.If the weight reaches the upper limit with w i , w ij = 1, the weight stops increasing; if the weight reaches the lower limit with w i , w ij = 0, the decrement stops. The range of saturation counter n ijk and the adjustment of interval in each strategy are determined by control frequency in the weight regulator.For example, for 100 Hz control frequency, the control is allocated for every 0.01 second.Following the foregoing strategy, if a control variable is saturated for more than 3 seconds, the controller will adjust the weight for every hierarchy in the weight structure to get optimal control allocation to satisfy the current flight requirement.For the excessively large range of n ijk , the weight adjustment turns out to be oversluggish.The control variables will be saturated continuously, and the vehicle will deviate from target trajectory seriously.For the excessively small range of n ijk , the weight adjustment will be excessively sensitive.The fastchanging weight in objective function will cause vehicle oscillation and control divergence.The working process of the first-layer control allocation and dynamic weight strategy is illustrated in Figure 4. Second-Layer Actuator Control Allocation. 
In the second-layer control allocation, the allocation of aerodynamic control surfaces (δ a , δ e , δ r ) and power system actuators (T L , T R , T F , δ L , δ R , δ F ) are conducted according to the information of vectored thrust and control moments calculated in translational dynamics control loop and rotational dynamics control loops, respectively. The dynamic model of the engine system can be denoted as respectively; and δ a , δ e , and δ r represent the alerion, elevator, and rudder, respectively.To keep time synchronized, the output of the actuator model will go through a second-order filter to get T E f , δ TVN f , δ AS f , where subscript "f" denotes the filtered control output.In the second-layer control allocation, the daisy chaining method is adopted to allocate aerodynamic control moment and thrust vector moment.On that basis, the control output of the power system is calculated by x 2t and x 4t .The second-layer control allocation can be conducted in the incremental linear form or normal nonlinear form, and the two methods are introduced below. 4.2.1. Incremental Linear Allocation Method.The incremental linear allocation method allocates ΔT E Δδ TVN and Δδ AS according to Δx 4 and Δx 2t .The incremental daisy chaining is adopted to allocate Δx 4s and Δx 4t based on (51).In this method, the priority of Δx 4s outstrips Δx 4t , and thus, the thrust vector moment is adopted only when the aerodynamic control moment gets saturated. The increment of aerodynamic control surface is calculated by The reference value of aerodynamic control surface can be expressed as On that basis, introduce (49) into the incremental form, as shown in (54), in which the incremental 11 International Journal of Aerospace Engineering multiplication is considered a high-order small quantity and can be ignored. The thrust vector engine model is transformed from a nonlinear system into a linear system, and ΔT E and Δδ TVN can be calculated directly by matrix inversion through the incremental linear allocation, as shown in (56).This method can evidently expedite allocation without iterative solution of nonlinear equations.However, like (14), when UAV is in high maneuver flight, the increment of control variables will be large, and the neglection of incremental multiplication will cause approximation errors.In this regard, this method is appropriate for minor or moderate maneuverable flights. The range of T L , T R , T F and δ L , δ R , δ F is limited on the basis of power as shown in (58).Eventually, the control signals are output to the aerodynamic control surfaces and power system. International Journal of Aerospace Engineering The flow chart of the incremental linear allocation is presented in Figure 5. Normal Nonlinear Allocation Method. In nonlinear allocation, other than calculating ΔT E and Δδ TVN , the outputs of thrust vector engine system T E and δ TVN are calculated directly through solving nonlinear equations.This method is suitable for all maneuver flights and will not cause approximation error.The total time consumption remains rational and applicable for onboard use though the calculation time is longer than the incremental linear allocation method.The flow chart of the allocation process is exhibited in Figure 6. 
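In the second layer, the daisy-chaining logic gives the aerodynamic surfaces priority and hands the residual moment to the thrust-vector effectors only when the surfaces saturate. The sketch below shows that spill-over logic in incremental form; the effectiveness matrix, deflection limits, and commanded moment increment are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def daisy_chain_moment(dM_cmd, B_as, delta_now, delta_min, delta_max):
    """Daisy chaining in incremental form: ask the aerodynamic surfaces for the
    whole commanded moment increment first; whatever they cannot deliver because
    of position limits is passed on to the thrust-vector effectors."""
    # First stage: surfaces (pseudoinverse of the surface effectiveness matrix).
    d_delta = np.linalg.pinv(B_as) @ dM_cmd
    d_delta_sat = np.clip(delta_now + d_delta, delta_min, delta_max) - delta_now
    # Moment actually produced by the (possibly saturated) surfaces.
    dM_as = B_as @ d_delta_sat
    # Second stage: residual moment handed to the vectored thrust.
    dM_tv = dM_cmd - dM_as
    return d_delta_sat, dM_tv

# Illustrative effectiveness matrix and limits (not the paper's data).
B_as = np.diag([0.8, 1.2, 0.5])                     # roll/pitch/yaw moment per rad of surface
delta_now = np.deg2rad(np.array([30.0, -5.0, 0.0])) # current aileron/elevator/rudder angles
lim = np.deg2rad(32.0)
d_delta, dM_tv = daisy_chain_moment(np.array([0.15, 0.05, -0.02]),
                                    B_as, delta_now, -lim, lim)
print(np.rad2deg(d_delta), dM_tv)                   # roll demand spills over to vectored thrust
```

The residual dM_tv would then be converted into engine thrust and nozzle-deflection increments through the linearized engine model, or through the normal nonlinear allocation when the maneuver is too aggressive for the incremental approximation.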
Simulation An example is designed in this section to test the control method which is discussed in the previous section.The target trajectory is designed as follows: The initial state of UAV is static.From 0 to 20 seconds, the vehicle is climbing up and accelerating to 10 m/s in ground x-axis.From 20 to 60 seconds, the UAV is in transition flight and maneuvering laterally, showing UAV's flight ability in low-altitude complex flight conditions, including cities and mountainous areas.After 60 seconds, the vehicle makes uniformly accelerated flight in ground x-axis and transforms from transition flight to cruise. The example involves three different flight states, i.e., VTOL, transition flight, and cruise.These states can adequately show UAV's longitudinal and lateral maneuverability required for an urban flight vehicle.The controller designed in this paper does not need to switch as flight states change.The trajectory can be tracked merely through adjusting the weights in the controller.As the simulation results prove, the control method can solve nonlinear, nonaffine, and coupled control problems effectively and allocate redundant control variables appropriately. To show the effectiveness of the two-layer cascaded control allocation for different flight states and mission requirements, two flight strategies are designed.In the first strategy, the UAV will track the trajectory with small attitude angle magnitude and fluctuation.In the second strategy, the vehicle is required to track the target trajectory through employing small direct force control and primarily adopting attitude control.For all strategies, the incremental linear allocation method is adopted in the secondlayer control allocation, and 0.01 s is adopted as the calculation step.The weight structure is generated and regulated all through the trajectory tracking, and at last, the weight changes into (61).13 International Journal of Aerospace Engineering Additionally, the weight change would be better consecutive for different flight states and mission requirements, since inconsecutive weight change will cause the saltation of flight states.The trajectory tracking result is presented in Figure 7. The flight states of Strategy 1 are exhibited in Figure 8. The wind angle and body angle of the UAV are presented in Figures 8(c) and 8(d).As indicated, the angle of attack α is negative in climbing phase, and α increases with the increasing of velocity and pitch angle.The UAV first keeps the minus pitch angle, and the body z-axis thrust is adopted to assist x-axis acceleration.With the rise of lift and decline of climb rate, the pitch angle (θ) and vectoring nozzle deflection angle increase, and more body x-axis thrust is adopted to accelerate the vehicle in ground x-axis.These maneuvering strategies are calculated by the controller automatically according to current flight states and weight structure. 
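For reference, the three-phase target trajectory described at the start of this section can be written as a simple piecewise function of time. The sketch below is only an illustration of such a reference generator: the climb profile, the shape of the lateral maneuver, and the cruise acceleration are assumptions, since the paper specifies the phases but not their exact functional form.

```python
import numpy as np

def reference_trajectory(t):
    """Illustrative piecewise reference (ground axes: x forward, y lateral, z down).
    Phase 1 (0-20 s): climb and accelerate to 10 m/s along x.
    Phase 2 (20-60 s): transition flight with an assumed sinusoidal lateral weave.
    Phase 3 (>60 s): uniformly accelerated flight along x (assumed 0.5 m/s^2)."""
    if t < 20.0:
        vx = 10.0 * t / 20.0                         # linear speed build-up (assumed)
        x = 0.25 * t**2                              # integral of the linear ramp
        y = 0.0
        z = -30.0 * t / 20.0                         # assumed climb to 30 m altitude
    elif t < 60.0:
        x = 100.0 + 10.0 * (t - 20.0)                # constant 10 m/s ground speed
        y = 15.0 * np.sin(2.0 * np.pi * (t - 20.0) / 20.0)  # assumed lateral weave
        z = -30.0
    else:
        dt = t - 60.0
        x = 500.0 + 10.0 * dt + 0.5 * 0.5 * dt**2    # assumed 0.5 m/s^2 acceleration
        y = 0.0
        z = -30.0
    return np.array([x, y, z])

print(reference_trajectory(10.0), reference_trajectory(40.0), reference_trajectory(70.0))
```

A reference of this kind feeds the translational kinematic loop of Section 3.1, which converts the position error into the desired flight-path command.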
Figures 8(i) and 8(j) show the changes of control surface deflections and aerodynamic control moments, respectively.In Figure 8(i), the black curve (delta-a) represents the aileron deflection angle (δ a ), and the blue curve (delta-e) represents the elevator deflection angle (δ e ).In Figure 8(j), the black curve (PTC-roll) represents the percentage of aerodynamic roll control moment in total roll control moments, and the blue curve (PTC-pitch) represents the percentage of aerodynamic pitch control moment in total pitch control moments.As the daisy chaining method is adopted in the second layer control allocation, the thrust vector control moments will be used only when aerodynamic control moments get saturated.Also, the effect of aerodynamic control surface is affected by the magnitude of wind angle.With large magnitude of angle of attack, the effect of control surface is less and vice versa.Therefore, in the takeoff phase, the control surfaces are in maximum deflection angle (±32 °).From 0 to 20 seconds, with the increase of velocity, the percentage of aerodynamic control moments increases, and control surfaces are no longer saturated. From 20 to 60 seconds, the UAV is in transition flight and making lateral maneuvering, the flight velocity is 10 m/s, and α ranges from 0 degrees to 17 degrees.In Figure 8(f), T z is negative as the body x-axis is pointing down, indicating that UAV employs vectored thrust to compensate the lift.The change tendencies of T z and α are similar with the absolute value of velocity.As indicated, the larger lift is generated with the rise of velocity and smaller T z is required to compensate the lift.The change of engine thrust and nozzle deflection angle after control allocation are presented in Figures 8(g) and 8(h).In Figures 8(i) and 8(j), the aerodynamic control moments get saturated again when the large control moments are required, and their usage percentage decreases as the thrust vector control moments are added in.During lateral maneuver, roll angle ϕ (Figure 8(d)) and sideslip angle β (Figure 8(c)) are changed jointly, meaning that STT and BTT controls are adopted in the UAV simultaneously.Additionally, different values of T L and T R change similarly with β.As indicated, the UAV uses engine differential thrust to realize the STT control.The change of these flight states is corresponded to kinematic azimuth angle χ, as presented in Figure 8(b).Overall, when χ is increasing, ϕ, β, and n c keep positive; when χ is decreasing, ϕ, β, and n c keep negative, as presented in Figures 8(b), 8(c), 8(d), and 8(e). 
When t > 60s, with the increase of velocity, α and T z decrease, the usage percentage of aerodynamic roll control moment rises to 100%, and the usage of aerodynamic pitch control moments increases.The UAV is transformed from transition flight to cruise.15 International Journal of Aerospace Engineering Figure 9(c), from 60 seconds to 80 seconds, the saltation of control moment m c exists, which is caused by the aerodynamic interference between the tandem wings.During this period, α decreases from 36 degrees to 5 degrees.When α is around 18 degrees, the first saltation of pitch moment coefficient C m is caused by the different stall angles between the front wing and the rear wing.When α is around 9 degrees, the wake flow of the front wing beats on the rear wing directly, which leads to the second saltation of C m .As indicated, in the INDI control system, the control moment m c is used to counteract the aerodynamic disturbances and keep the stability of flight attitude. Comparing the change of α in Figures 8(c) and 9(a), we can find that when UAV makes lateral maneuver, α in Strategy 1 increases with the increase of V, while Strategy 2 is on the contrary.This is because the average α approaching to 12 degrees is small in Strategy 1 and does not reach the stall angle.With the rise of velocity, the increment of α can largely increase lift and reduce T Z .This maneuver strategy based on weight selection is an optimal result, which makes the objective function get minor results.In Strategy 2, the average value of α is around 34 degrees, which exceeds the stall angle evidently.In this condition, increasing α will result in lift loss.Accordingly, with the increase of velocity, more lift is generated, and reducing α is the best solution for optimal control allocation. 
The application of the INDI control method places a requirement on the frequency of the control system, and a high control frequency is especially necessary for aggressive maneuvers. In INDI control, the products of increments of coupled control variables are treated as higher-order small quantities and neglected, which causes large errors under aggressive maneuvers and large sampling times. Likewise, in the discrete differentiator used to compute the derivatives of the control vectors, a larger sampling time reduces the accuracy of the derivatives. These errors can be restrained by reducing the sampling time, which is equivalent to increasing the control frequency. As the simulation results indicate, the UAV can track the target trajectory accurately; the INDI method and the two-layer cascaded optimal control allocation, running in a 100 Hz controller, satisfy the maneuver requirements of this case. Furthermore, the simulation validates the effectiveness of the control method developed in this paper. The dynamic weight strategy adapts the allocation to different flight states and mission requirements, which ensures the rationality of the control allocation. In the future, the control system presented in this paper will be incorporated into the controller of a prototype, and a flight test will be conducted to verify the effectiveness of the proposed control system. Appendix. g_1a is the 3 × 3 aerodynamic control matrix with elements g_1a,ij (i, j = 1, 2, 3); the expressions of its elements are given in the appendix of the source paper. In (49) and (50), T_F denotes the lift fan thrust; T_L and T_R denote the left and right engine thrusts, respectively; δ_F denotes the lift fan rudder deflection angle; δ_L and δ_R denote the left and right engine thrust-vector nozzle deflections, respectively; d_xf denotes the distance from the lift fan center to the vehicle mass center along the body x-axis; and d_xt and d_yt denote the distances from the engine nozzles to the vehicle mass center along the body x-axis and y-axis, respectively. (Figure 4 caption: Working process of the first-layer control allocation and dynamic weight strategy. Other figure residue: "Vectored thrust of engine", "Thrust vector engine", "Percentage of aerodynamic control moments".) 5.1. Strategy 1: Trajectory Tracking. For Strategy 1, the initial values of the weights are presented in (60). To avoid errors caused by the small magnitudes of the weighted control variables, the weights have been expanded by 100 times.
2019-02-16T13:29:49.869Z
2018-11-27T00:00:00.000
{ "year": 2018, "sha1": "7c6ffc92134915647da9ff0cec3462c4ed58c7aa", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijae/2018/6315856.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7c6ffc92134915647da9ff0cec3462c4ed58c7aa", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
15984769
pes2o/s2orc
v3-fos-license
Femoral derotation osteotomy with multi-level soft tissue procedures in children with cerebral palsy: Does it improve gait quality? Purpose Poor motor control and delayed thumb function and a delay in walking are the main factors which retard the natural decrease of the femoral anteversion (FA) with age. In addition, cerebral palsy (CP) patients usually have muscular imbalance around the hip as well as muscle contractures, both of which are main factors accounting for the increased FA which is commonly present in CP patients. The purpose of this retrospective study was to analyze the mid-term results of femoral derotational osteotomy (FDO) on the clinical findings, temporospatial and kinematic parameters of gait in children with CP. Methods We performed a retrospective review of all patients diagnosed with CP and increased FA who were treated with FDO with multi-level soft tissue surgeries at a single institution between 1992 and 2011. FA assessment was done in the prone position, and internal (IR) and external rotation (ER) of the hip was measured in the absence of pelvis rotation. Surgical procedures were performed on the basis of both clinical findings and video analysis. Clinical findings, Edinburgh Visual Gait Scores (EVGS) and results from three-dimensional gait analysis were analyzed preoperatively and last follow-up. Results A total of 93 patients with 175 affected extremities were included in this review. Mean age was 6.2 ± 3.1 (standard deviation) at initial surgery. The average length of the follow-up period was 6.3 ± 3.7 years. At the last follow-up, the postoperative hip IR had significantly decreased (73.9° vs. 46.2°; p < 0.0001), the hip ER had significantly improved (23.8° vs. 37°; p < 0.0001) and the popliteal angle had significantly decreased (64.2° vs. 55.8°; p < 0.0001). The total EVGS showed significant improvement after FDO (35.2 ± 6.4 vs. 22.5 ± 6.1; p < 0.001). Computed gait analysis showed significant improvement in the foot progression angle (FPA; 8.1° vs. −16.9°; p = 0.005) and hip rotation (−13.9° vs. 5.7°; p = 0.01) at the last follow-up. Stance time was improved (60.2 vs. 65.1 %; p = 0.02) and swing time was decreased (39.9 vs. 35.2 %; p = 0.03). Double support time and cadence were both decreased (p = 0.032 and p = 0.01). Conclusions Our data suggest that the FDO is an appropriate treatment strategy for the correction of FA and associated in-toeing gait in children with CP. Improvements in clinical and kinematic parameters were observed in both groups after FDO with multi-level soft tissue release. The most prominent effects of FDO were on transverse plane hip rotation and FPA. Introduction Cerebral palsy (CP) is a disorder which primarily affects body movement and muscle coordination. It is characterized by spasticity and progressive musculo-skeletal problems [1][2][3]. The management of spasticity is a major challenge and initially focuses on the elimination of spasticity at an early age [4][5][6]. Uncontrolled spasticity may gradually cause progressive muscle contractures and bone deformities, such as increased femoral anteversion (FA) [7]. The primary main aim of treating a CP patient is to create a productive, highly functional and active individual in his/her social environment [8,9]. The physiological high anteversion present at birth (40-60°) slowly decreases with growth in normally developing children. 
However, in children with CP, the normal remodeling process of the FA does not occur [10], and FA has not been found to resolve over time in CP patients [11]. Poor motor control and delayed functional milestones, such as delays in rolling-over, crawling, kneeling, standing and walking, are the main factors which retard the natural decrease of the FA with age. In addition, CP patients usually have muscular imbalance around the hip as well as muscle contractures, both of which are main factors accounting for the increased FA which is commonly present in CP patients [8]. A growing child with CP usually has internal rotation contracture of the hip, which also contributes to a decrease of the FA angle [12]. In children with CP, torsion occurs as a gentle twisting throughout the whole femur [10]. Many clinical and radiological methods have been developed to diagnose increased FA [13][14][15][16][17][18][19]. In the clinical setting, goniometric measurement of hip internal--external rotation, the prominence angle test and, for more precise measurements, computed tomography (CT) have been used to determine the existence of increased FA. Even though CT-based methods have been reported to deliver the most accurate measurement, physical examination is a frequently performed, cost-effective, and safe method that does not involve exposure of the child to radiation [14,20]. Femoral derotational osteotomies (FDO) are being increasingly performed, and there have been many reports in support of this approach to resolve this mal-torsion [8,18,[21][22][23]. This is a key strategy for treating CP patients and the primary procedure adopted to improve the in-toeing gait in the transverse plane. This procedure also has a number of effects on the coronal and sagittal planes of the gait [11]. The method of de-rotation is still controversial, with each de-rotation technique associated with specific benefits and problems [22,24]. The aim of this retrospective study was to analyze the mid-term results of FDO on the clinical findings and the temporospatial and kinematic parameters of gait in children with CP. Methods This study was a retrospective review of all patients diagnosed with CP and increased FA who were treated with FDO with multi-level soft tissue surgeries at a single institution between 1992 and 2011. The inclusion criteria used for patient selection were: medical history for[1 year of follow-up; spastic diplegia and tetraplegia; data available on the pre-and post-operative clinical examination; Gross Motor Function Classification System level I, II or III [25]; spastic type CP [1]. The medical charts of all patients meeting these criteria were analyzed for demographic data, clinical findings (hip flexion and extension, hip flexion contracture (Thomas test), hip abduction angle, femur internal and external rotations, total rotation arc of the hip, popliteal angle, knee flexion contracture, thighfoot angle, rectus contracture (Duncan Ely test) and motion analysis parameters. All clinical findings were evaluated before surgery and at last follow-up. Lengthening of multilevel soft tissues, tendon transfers and botulinum toxin A injections were documented, as well as the level of FDO. The FA was assessed with the child in the prone position; the internal (IR) and external rotation (ER) of the hip was measured while not allowing pelvis rotation. Over 70°o f IR and limited ER (\30°) was considered to indicate the existence of increased FA. 
The trochanteric prominence test was also used in patients who had undergone previous hip surgery, and a CT scan was performed to measure the absolute FA. If the gait abnormalities were relevant with increased FA, FDO with multi-level surgery was considered for patient. Video analysis was used to underpin all operative procedures, which were performed based on analysis of both the clinical findings and video analysis. Proximal FDO (pFDO) was the preferred surgical procedure for children between 5 and 8 years old. In these cases, the patient was in the prone position, and osteotomy was performed at the level of trochanter minor and fixed with a blade plate. Dennis-Brown orthosis was used in combination with a custom-made brace to keep the knees in extension for 4 weeks postoperatively. For children older than 8 years, distal FDO (dFDO) was the preferred surgical approach. In these cases, the patient was in the supine position, and osteotomy was performed at the level of the metaphysis and diaphysis junction and fixed with a dynamic compression plate. A knee immobilizer was applied for 4 weeks postoperatively. Orthotics can be removed during the physical therapies and for hygienic care. The level of osteotomy, type of implant selection, postoperative bracing and/or casting may difference in accordance with the surgical procedure. The IR was corrected to approximately 30°and the ER to approximately 50°with both proximal and distal derotations. Patients were referred back to their rehabilitation centers as soon as possible after surgery to continue muscle exercises and strengthening. Complications were classified as infection, non-union, fracture around the implant and reincrease of FA. For video-based gait assessment, we used the Edinburgh Visual Gait Score (EVGS) and compared preoperative scores with those at the last follow-up [26]. Threedimensional gait analysis was assessed using the BTS motion analysis system (Elite Eliclinic, BTS, Milan, Italy) consisting of six cameras and two force plates. A number of the patients enrolled in the study were found to have an appropriate gait, based on analysis before surgery and at a minimum of 2 years after surgery. Data on temporospatial parameters (stance phase %, swing phase %, double support time %, cadence, gait velocity, step length, stride length, step width) and kinematic data (mean pelvic tilt, mean pelvic rotation, peak hip flexion in swing, mean foot progression angle (FPA) in mid-stance, mean hip rotation angle at the end of loading response, maximum knee flexion in swing) were collected, and preoperative values were compared with those at the last follow-up. This research was approved by the Institutional Review Board of the Department of Orthopedics, Istanbul Faculty of Medicine, Istanbul University. The study complies with the Declaration of Helsinki statement on medical protocol and ethics. All patients enrolled in the study provided oral and written informed consent. Statistical analysis Statistical analysis was carried out using the Student t test for parametric data, the Mann-Whitney U (Wilcoxon rank test) test for non-parametric data and the chi-square test for categorical data, as appropriate (SPSS v18.0; IBM Corp., Armonk, NY). The Kolmogov-Smirnov and Shapiro-Wilk tests were used for normalization. A p value of B0.05 was considered to be significant. For those interventions which were done bilaterally, only one side were included in the statistical analysis. 
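The statistical analysis paragraph above describes a normality check (Kolmogorov-Smirnov and Shapiro-Wilk) followed by a Student t test or a Mann-Whitney U test at p ≤ 0.05. A minimal sketch of that decision flow with scipy is shown below; the data are synthetic stand-ins rather than the study's measurements, and for pre/post measurements on the same limbs a paired alternative (ttest_rel or wilcoxon) could also be considered.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in data: preoperative and last-follow-up hip internal rotation
# (degrees). These numbers are illustrative, not the study's data.
pre  = rng.normal(74, 8, size=40)
post = rng.normal(46, 9, size=40)

# Normality check (Shapiro-Wilk), as in the described workflow.
normal = stats.shapiro(pre).pvalue > 0.05 and stats.shapiro(post).pvalue > 0.05

if normal:
    stat, p = stats.ttest_ind(pre, post)      # Student t test (parametric)
else:
    stat, p = stats.mannwhitneyu(pre, post)   # Mann-Whitney U (non-parametric)
print(f"p = {p:.4g}, significant at 0.05: {p <= 0.05}")
```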
Hamstring (67.5 %), adductor (44.8 %) and gastrocnemius (38.6 %) release were the most common soft tissue procedures performed simultaneously with FDO. The soft tissue procedures undertaken are shown in Table 1. pFDO was performed on 84 extremities of 45 patients with a mean age of 6.6 ± 1.3 years. dFDO was performed on 89 extremities of 48 patients with a mean age of 10.7 ± 2.4 years. In addition to femoral osteotomies, three periacetabuler Dega osteotomies were performed. Comparison of the preoperative values of the parameters under study with the values at last follow-up revealed that the postoperative hip abduction angle had improved (31.3°v s. 34.9°, respectively; p \ 0.0001), the hip IR had decreased (73.9°vs. 46.2°, respectively; p \ 0.0001), the hip ER had improved (23.8°vs. 37°, respectively; p \ 0.0001), Duncan-Ely test positivity had increased (n = 26 vs. 52, respectively; p \ 0.0001) and the popliteal angle had decreased (64.2°vs. 55.8°, respectively; p \ 0.0001). All changes were statistically significant. A summary of the these clinical findings is given in Table 2 and Fig. 1. Computed gait analysis showed significant changes after surgery even in our small group of patients (35 limbs of 18 patients). There was also a significant improvement at the last follow-up in mean FPA in mid-stance [8.1°vs. -16.9°( follow-up); p = 0.005] and in mean hip rotation angle at the end of the loading response [-13.9°vs. 5.7°(followup); p = 0.01]. Maximum hip extension in stance was decreased from 6.7°to 1.5°at last follow-up, but the difference was not statistically significant (p = 0.11), and maximum hip flexion in swing was significantly decreased from 42.7°to 35.9°at last follow-up (p = 0.032). As the primary aim of FDO was to correct femoral internal rotation in the transverse plane, both maximum hip and knee flexion were improved after surgery (p = 0.032 and p = 0.052, respectively) in the sagittal plane. The mean pelvic tilt and rotation were similar before and after surgery (Table 4). Step length, stride length and step width remained similar at 1 year after FDO (Table 5). Implant failure was seen in four patients in pFDO patients during the early post-operative period due to poor fixation. Early revision was made with a longer blade plate using a different proximal insertion point. Superficial wound infection was seen in two patients in the dFDO group, which was cured with antibiotic therapy. Non-union was seen in two patients and treated with revision plating and bone graft. Three fractures around the implants required surgical treatment. Recurrence of FA was correlated with younger age and seen in six patients at the last clinic visit. The mean age at FDO was 5.2 ± 0.9 years in patients who had recurrence of increased FA. Discussion Femoral anteversion is a common problem in children with CP and can occur either unilaterally (typically in hemiplegia) or bilaterally (typically in children with bilateral involvement). It is generally assumed that the increased internal femoral torsion is a result of increased muscle tone, most specifically of the medial hamstrings. Excessive increased FA is best corrected with FDO. The aim of this study was to analyze the mid-term benefits of FDO on the clinical outcomes of children with CP, as well as on the temporospatial and kinematic parameters of gait in these children. Anteversion was measured by physical examination with the child in the prone position. 
Goniometric measurements of hip IR and ER were made; such measurements have been shown to be reliable and can be used to monitor FA with sufficient accuracy. In the prominence test IR and ER are measured by palpation, but it has been reported that palpation of the greater trochanter can There are many methods for measuring FA, with each producing slightly different measurements and having some variation in the degree of accuracy. Radiological measurement of FA with a plain X-ray is an obsolete technique and not appropriate if the neck shaft angle is very high ([150°) [27]. Other diagnostic imaging modalities used to measure FA include fluoroscopy, ultrasonography, CT and magnetic resonance [17,[28][29][30][31][32]. Measurements obtained by CT imaging was used in our study. CT imaging is probably the most widely used clinical technique for measuring femoral neck anteversion [31], but in children with severe IR, an absolute measurement of FA is not required prior to surgery [10]. There is also no consensus on when FA needs to be measured accurately. In our study only patients who had undergone prior hip surgery were radiologically examined. FDO may be performed either at the proximal or distal femur. Among the patients enrolled in our study, pFDO and dFDO were performed in 48 and 52 %, respectively, of all 175 extremities. The complication rate was higher in pFDO patients (12 vs. 5; p [ 0.05). The most commonly seen complications were early implant failure and re-increase of the FA (6 patients); the mean age of these six patients at surgery was 5.2 ± 0.9 years. As expected, younger patients were found to be prone to recurrence. In our institution, we prefer the patient to be in the prone position on the operation table during pFDOs as this position allows for an evaluation of hip IR/ER while in hip extension and knee flexion. Many authors advocate that the reason for subluxation in spastic hip is increased FA or internal rotation in addition to the coxa valga [17,33]. As a beneficial effect of hip centralization and to prevent further luxations, pFDO can be performed in combination with varus osteotomy. The EVGS is a simple tool for use in video-based gait assessment. It has been validated specifically for use in patients with CP and has a good inter-observer and excellent intra-observer reliability [26]. In our study, video recordings of gait events on the sagittal, coronal and transverse planes were assessed at selected anatomic levels. In this visual analysis, abnormality is severe when the score exceeds ''0''. In our series, total visual scores were significantly improved after FDO (36.8 ± 6.3 vs. 22.2 ± 6; p \ 0.0001), as well as individual ones ( Fig. 2; Table 2). Three-dimensional gait analysis helps confirm the cause and presentation of rotational abnormalities that can be a result of rotational deformities of the femur and/or tibia reflected in abnormal hip rotation and foot progression in the transverse plane. Gait analysis also provides further information about pelvic rotation in the transverse plane, and this information can help the surgeon to interpret transverse plane abnormalities [11]. Akalan et al. studied the gait parameters of CP children with increased FA and compared these with those of children with increased FA who were developing normally [34]. These authors noted that the effects of increased FA differ between a child with CP and a normal child. 
This observation led them to conclude that before muscle lengthening, FDO should be considered in early stages of growth in CP in order to improve pelvic stability and the knee extensor mechanism [34]. Evaluation of our results revealed significant improvements in the transverse plane kinematics and the timedistance parameters, indicating an improvement in the gait function. Improvements, including those in the mean FPA in mid-stance and mean hip rotation angle at the end of the loading response, were significant at the last follow-up (p = 0.005 and p = 0.01). Maximum knee flexion (62.2 vs. 56.3; p = 0.05) and maximum hip flexion (49 vs. 42.7; p = 0.032) had decreased significantly at the last followup. Our physical examination and gait analysis results show that the dynamic effects of other procedures, such as hamstring and iliopsoas lengthening, were maintained at the last follow-up and that gait was improved. As anticipated, in-toeing gait resolved after FDO-and is a second beneficial effect of FDO. The primary purpose of FDO was to correct femoral internal rotation in the transverse plane, with an improvement of in-toeing gait. Both parameters were seen to have improved after surgery in our cohort (p = 0.01 and p = 0.005, respectively). Previous studies have shown that the amount of in-toeing can decrease with soft tissue surgeries [35,36]. Distal hamstring lengthening of contracted medial hamstrings can restrict external rotation of the limb and resolves in-toeing. Also, rectus femoris muscle transfer to the medial hamstrings can increase the external rotation strength of an affected limb [35,36]. We report severe in-toeing in spastic children, with a mean preoperative hip IR of 73.9°± 7.7°. The difference between the pre-and postoperative hip IR determined in the clinical examination was 27.7°± 9.3°( p \ 0.0001), and the FPA was 25°± 11.2°(p \ 0.0001). This significant improvement of in-toeing in children with CP can only be due to the FDO. Some children with increased FA seem to gain stability by their walking experience over years with a hip IR; thus, a hip IR may provide better stability in stance. Once the deformity is corrected, the children tend to return to the experienced posture of IR [10]. Our study showed that compensating external tibial torsion was more common in those patients who were operated on after reaching an age of 8 years (20 vs. 9; p = 0.015), as seen at last follow-up. There are several limitations to our study which are inherent to its retrospective design and heterogeneous patient characteristics in terms of geographic involvement of the disease. Rather than evaluating clinical examination changes at two different time points (preoperative and last follow-up), it would have been better to include an additional follow-up examination in the relatively short-term at about 1 year postoperative. Evaluating the effect of a single procedure included in a multilevel surgery is always difficult. By controlling other soft tissue procedures and determining the homogeneity of gait abnormalities and functional status of the patients, we may have been able to determine the effects of FDO more precisely. Small changes in kinematic values might have been caused by marker placement, even though this procedure was performed by an experienced researcher. Also, gait analysis combined with surface electromyography and muscle strength measurement by a hand-held dynamometer may provide more accurate results in muscle force analysis. 
Pre- and postoperative clinical assessments and surgeries were performed by a single experienced pediatric orthopedic surgeon and were therefore reliable. We report the mid-term follow-up results of a large number of patients undergoing FDO with multi-level surgery. However, due to an insufficient number of cases with appropriate motion analysis (20 % of all extremities), the gait parameters could not be evaluated accurately. Further study with more gait assessments is required to define the effects of FDO with multi-level surgery. In conclusion, our data suggest that FDO is a possible treatment option for the correction of FA and associated in-toeing gait in children with CP. Improvements in clinical and kinematic parameters were observed for both conditions after FDO with multi-level soft tissue release. The most prominent effects of FDO were on transverse plane hip rotation and FPA. Compliance with ethical standards: Conflict of interest: Each author certifies that he/she has no commercial associations that might pose a conflict of interest in connection with the submitted article. Funding information: This study was funded by the authors. Ethical approval: All procedures performed in these studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2018-04-03T04:42:40.642Z
2015-11-23T00:00:00.000
{ "year": 2015, "sha1": "62b467c5d78243d0e6a2f6bba6bb13ec8a368c55", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/s11832-015-0706-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "62b467c5d78243d0e6a2f6bba6bb13ec8a368c55", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
22847106
pes2o/s2orc
v3-fos-license
On local convergence of the method of alternating projections The method of alternating projections is a classical tool to solve feasibility problems. Here we prove local convergence of alternating projections between subanalytic sets $A,B$ under a mild regularity hypothesis on one of the sets. We show that the speed of convergence is $O(k^{-\rho})$ for some $\rho\in (0,\infty)$. Introduction The method of alternating projections is a classical tool to solve the following feasibility problem: Given closed sets A, B in R n , find a point x * ∈ A ∩ B. Alternating projections can be traced back to the work of Schwarz [26] in 1869, and were popularized in lecture notes of von Neumann [23] since the 1930s. The method generates sequences a k ∈ P A (b k−1 ), b k ∈ P B (a k ), where P A , P B are the set-valued orthogonal projection operators onto A and B. If the alternating sequence a k , b k is bounded and satisfies a k − b k → 0, then each of its accumulation points is a solution of the feasibility problem. The fundamental question is when such a sequence converges to a single limit point x * ∈ A ∩ B. For convex sets alternating projections are globally convergent as soon as A ∩ B ≠ ∅, and the survey [2] gives an excellent state-of-the-art account of the convex theory. In one of the earliest contributions to the nonconvex case, Combettes and Trussell [11] proved in 1990 that the set of accumulation points of a bounded sequence of alternating projections with a k − b k → 0 is either a singleton or a nontrivial compact continuum. In 2013 it was shown in [6] by way of an example that the continuum case may indeed occur. This shows that without convexity a sequence of alternating projections a k , b k may fail to converge even when it is bounded and satisfies a k − b k → 0. In 2008 Lewis and Malick [19] proved that a sequence a k , b k of alternating projections converges locally linearly if A, B are C 2 -manifolds intersecting transversally. Expanding on this in 2009, Lewis et al. [20] proved local linear convergence for general A, B intersecting non-tangentially in the sense of linear regularity, where one of the sets is superregular. In 2013 Bauschke et al. [4,5] investigated the case of non-tangential intersection further and proved linear convergence under weaker regularity and transversality hypotheses. Here we prove local convergence under less restrictive conditions, where A, B may also intersect tangentially. We propose a new geometric concept, called separable intersection, which gives local convergence of alternating projections when combined with Hölder regularity, a mild hypothesis less restrictive than prox-regularity. Separable intersection has wide scope for applications, as it not only includes non-tangential intersection, but goes beyond and allows also a large variety of cases where A, B intersect tangentially. In particular, we prove that closed subanalytic sets A, B always intersect separably. This leads to the central result that alternating projections between subanalytic sets converge locally with rate O(k^{-ρ}) for some ρ ∈ (0, ∞) if one of the sets is Hölder regular with respect to the other. As these hypotheses are satisfied in practical situations, we obtain a theoretical explanation for the fact, observed in practice, that even without convexity alternating projections converge well in the neighborhood of A ∩ B. As an application, we obtain a local convergence proof for the classical Gerchberg-Saxton error reduction algorithm in phase retrieval. 
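To fix ideas, here is a minimal Python sketch of the iteration a_{k+1} ∈ P_A(b_k), b_k ∈ P_B(a_k); the choice of sets, the function names and the tolerances are our own and not taken from the text. We alternate between a horizontal line A and the unit circle B in the plane: the circle is nonconvex but prox-regular, and the two sets meet at an angle, so the locally linear convergence discussed later is what one observes numerically.

import numpy as np

def project_line(x, p, d):
    # Orthogonal projection of x onto the line {p + t d : t real}, with d a unit vector.
    return p + np.dot(x - p, d) * d

def project_circle(x, center, radius):
    # Projection onto the circle {y : |y - center| = radius}; set-valued at the center,
    # where we simply pick one point.
    v = x - center
    n = np.linalg.norm(v)
    return center + (radius * v / n if n > 0 else np.array([radius, 0.0]))

def alternating_projections(a0, proj_A, proj_B, tol=1e-12, max_iter=10**4):
    a = a0
    for k in range(max_iter):
        b = proj_B(a)          # b_k in P_B(a_k)
        a = proj_A(b)          # a_{k+1} in P_A(b_k)
        if np.linalg.norm(a - b) < tol:   # a_k - b_k -> 0 signals an (approximate) feasible point
            break
    return a, k

# A = horizontal line {y = 0.5}, B = unit circle; they intersect at an angle.
proj_A = lambda x: project_line(x, np.array([0.0, 0.5]), np.array([1.0, 0.0]))
proj_B = lambda x: project_circle(x, np.zeros(2), 1.0)
x_star, iters = alternating_projections(np.array([2.0, 2.0]), proj_A, proj_B)
print(x_star, iters)   # close to (sqrt(3)/2, 0.5) after a handful of iterations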
The structure of the paper is as follows. Section 3 introduces the concept of separable intersection of two closed sets. Then 0-separability is related to existing transversality concepts. In section 4 we discuss Hölder regularity and compare it to older regularity concepts like prox-regularity, Clarke regularity, and superregularity. The central section 5 gives the convergence proof with rate for sets intersecting separably. In section 6 we show that subanalytic sets intersect separably and then deduce the convergence result for subanalytic sets. Section 6 also gives some applications indicating the versatility of our convergence test. In particular, we prove local convergence of an averaged projection method related to [1, Corollary 12], where the authors use the Kurdyka-Łojasiewicz inequality. The final section 7 gives limiting examples. After the initial version [22] of this article, a concept related to our notion of 0-separability, called intrinsic transversality, was announced in [12]. We compare this to our own transversality and regularity concepts in sections 3 and 4. Preparation Given a nonempty closed subset A of R n , the projection onto A is the set-valued mapping P A associating with x ∈ R n the nonempty set P A (x) = {a ∈ A : x − a = d A (x)}, where · is the Euclidean norm, induced by the scalar product ·, · , and where d A (x) = min{ x − a : a ∈ A}. The closed Euclidean ball with center x and radius r is denoted B(x, r). We write a ∈ P A (b) if the projection is potentially set-valued, while a = P A (b) means it is unique. A sequence of alternating projections between nonempty closed sets A, B satisfies b k ∈ P B (a k ), a k+1 ∈ P A (b k ), k ∈ N. We occasionally switch to the following index-free notation, which is standard in optimization: b ∈ P B (a), a + ∈ P A (b), b + ∈ P B (a + ), etc. The sequence of alternating projections is then . . . , a, b, a + , b + , a ++ , b ++ , . . . . We refer to a → b → a + , respectively b → a + → b + , as the building blocks of the sequence, where it is always understood that b ∈ P B (a), a + ∈ P A (b), b + ∈ P B (a + ), etc. Notions from nonsmooth analysis are covered by [25,21]. The proximal normal cone to A at a ∈ A is the set N p A (a) = {λu : λ ≥ 0, a ∈ P A (a + u)}. The normal cone to A at a ∈ A is the set N A (a) of vectors v for which there exist a k ∈ A with a k → a and v k ∈ N p A (a k ) such that v k → v. Tangential and non-tangential intersection In this section we introduce the fundamental concept of separable intersection of sets A, B, which plays the central role in our convergence theory. Definition 1. (Separable intersection). We say that B intersects A separably at x * if (1) holds on some neighborhood of x * for some ω ∈ [0, 2), γ > 0. If it is also true that A intersects B separably, that is, if the analogue of (1) holds for building blocks a → b → a + , then we obtain a symmetric condition, and in that case we say that A, B intersect separably at x * . Writing α = ∠(b − a + , b + − a + ), we may rewrite (1) in the more suggestive form 1 − cos α ≥ γ b + − a + ω (1 ′ ), calling this the angle condition for the building block b → a + → b + . For ω ∈ (0, 2) the interpretation of (1), or (1 ′ ), is that if the angle α between b − a + and b + − a + for two consecutive projection steps b → a + → b + shrinks down to 0 as the alternating sequence approaches x * , then α should not shrink too fast. Namely, through (1 ′ ), the angle is linked to the shrinking distance between the sets. For ω = 0 the meaning of (1 ′ ) is that the angle α stays away from 0. Remark 3. 
Informally, when the angle α = ∠(b − a + , b + − a + ) between two consecutive projection steps shrinks to zero, A, B must in some sense intersect tangentially at x * . In contrast, when α stays away from 0, the case of 0-separability, one could say that A, B intersect transversally, or at an angle. In that case alternating projections are expected to behave well and converge linearly. Tangential intersection is the more embarrassing case, where convergence could be slowed down or even fail. Our concept of ω-separability gives new insight into the case of tangential intersection. There has been considerable effort in the literature to avoid tangential intersection by making transversality assumptions. We mention transversal intersection in [19], the generalized non-separation property in [21], linearly regular intersection in [20], or the notion of constraint qualification in [4]. In the following we relate these notions to 0-separability. Bauschke et al. [4, Definition 2.1] introduce an extension of the Mordukhovich normal cone called the B-restricted normal cone N B A (x * ) to A at x * ∈ A. They define u ∈ N B A (x * ) if there exist a n ∈ A, a n → x * , and u n → u such that u n = λ n (b n − a n ) for some λ n > 0 and b n ∈ B with a n ∈ P A (b n ). They then establish basic inclusions between the restricted normal cone and various classical cones [4, Lemma 2.4]. In particular for any a ∈ A and B one has N B A (a) ⊂ N A (a). Now let A and B be non-empty subsets of R n . In [4, Definition 6.6] the authors say that (A, A, B, B) satisfies the CQ-condition at x * ∈ A ∩ B if N B A (x * ) ∩ (−N A B (x * )) = {0}. This condition is to be understood as a transversality hypothesis, because we have the following Proposition 1. (CQ-condition implies 0-separability). Suppose (A, A, B, B) satisfies the CQ-condition at x * ∈ A ∩ B. Then A, B intersect 0-separably at x * . Proof: According to [4, Definition 2.1] the B-restricted proximal normal cone N B A (a) of A at a ∈ A is the set of vectors u of the form u = λ(b − a) for some λ > 0 and some b ∈ B satisfying a ∈ P A (b). The cone N A B (b) at b ∈ B is defined analogously. Then by [4, Definition 6.1], specialized to the case of two sets, one defines the CQ-number θ δ (A, A, B, B) at x * associated with δ > 0; the CQ-condition implies that the limiting CQ-number satisfies θ(A, A, B, B) < 1. Using this, pick δ > 0 such that θ δ (A, A, B, B) =: 1 − γ < 1. Consider a building block b → a + → b + as in definition 1 with b, a + , b + ∈ U := B(x * , δ). Then we have b ∈ B and a + ∈ A. Hence b − a + ∈ N B A (a + ) and a + − b + ∈ N A B (b + ), so the definition of the CQ-number gives cos α ≤ θ δ (A, A, B, B) = 1 − γ. That shows 1 − cos α ≥ γ > 0 and proves that B intersects A 0-separably at x * with constant γ. The estimate for building blocks a → b → a + is analogous. Example 7.3 shows that the converse of proposition 1 is not true. In fact, 0-separability seems more versatile in applications, while still guaranteeing linear convergence. We conclude by noting that linearly regular intersection in the sense of [20], and transversality in the sense of [19], imply 0-separability. Following [20, section 2, (2.2)], A and B have linearly regular intersection at x * ∈ A ∩ B if N A (x * ) ∩ (−N B (x * )) = {0}. This property is called strong regularity in [17] and the basic qualification condition for sets in [21, Definition 3.2 (i)]. As a consequence of the inclusion N B A (a) ⊂ N A (a) mentioned above, linearly regular intersection implies that (A, A, B, B) satisfies the CQ-condition at x * for any nonempty A and B in R n ; cf. [5]. By Proposition 1 we therefore have: Corollary 1. (Linear regularity implies 0-separability). Suppose A, B intersect linearly regularly at x * ∈ A ∩ B. Then they intersect 0-separably at x * . 
As we mentioned before, in the context of alternating projections linear regularity and the CQ-condition are to be understood as transversality type hypotheses, indicating that the sets A, B intersect at an angle at x * , as opposed to intersecting tangentially. This is confirmed by relating 0-separability to the classical notion of transversality. Following [19, def. 3 where T M (x * ) is the tangent space to M at x * ∈ M. We then have the following Corollary 2. Let A, B be C 2 -manifolds which intersect transversally at x * . Then A and B intersect 0-separably at x * . After the initial version [22] of this work was published, a related concept termed intrinsic transversality was proposed in [12]. Following [12,Def. 2.2], A, B are intrinsically transversal at x * ∈ A ∩ B with constant κ ∈ (0, 1] if there exists a neighborhood U of x * such that for every a + ∈ A ∩ U \ B and every b + ∈ B ∩ U \ A the estimate is satisfied. This relates to 0-separability as follows. We will resume the discussion of separable intersection of sets in section 6. Hölder regularity In this section we introduce the concept of Hölder regularity. We then relate it to other regularity notions like Clarke regularity, prox-regularity, superregularity in the sense of [20], and its extension in [4]. Definition 2. (Hölder regularity). Let σ ∈ [0, 1). The set B is σ-Hölder regular with respect to A at b * ∈ A ∩ B if there exists a neighborhood U of b * and a constant c > 0 such that for every a + ∈ A ∩ U, and every b + ∈ P B (a + ) ∩ U one has where r = a + − b + . We say that B is Hölder regular with respect to A if it is σ-Hölder regular with respect to A for every σ ∈ [0, 1). Remark 4. Using the angle β = ∠(a + − b + , b − b + ) and r = a + − b + , condition (6) can be re-written in the following more suggestive form Geometrically, this means that the right circular cone with axis a + − b + and aperture β = arccos √ cr σ truncated by the ball B(a + , (1 + c)r) and the B-restricted proximal normal cone N B A (a + ) contains no points of B other than b + . In the remainder of this section we relate Hölder regularity to older geometric and analytic regularity concepts. We first consider notions related to 0-Hölder regularity. The case of σ-Hölder regularity with σ > 0 will be considered later. Proposition 3. (0-Hölder regularity from superregularity). Suppose B is (A, ǫ, δ)regular at b * ∈ A ∩ B. Then B is 0-Hölder regular at b * with respect to A with constant c = ǫ 2 . In particular, if B is superregular at b * , then B is 0-Hölder regular with respect to A with constant c that may be chosen arbitrarily small. [4], it remains to prove the first part of the statement. In order to check 0-Hölder regularity, we have to provide a neighborhood U of b * and c > 0 such that (6) is satisfied with σ = 0. We choose U = B(b * , δ 4(1+ǫ 2 ) ) and put c = ǫ 2 . To check (6) pick a + , b + ∈ U such that b + ∈ P B (a + ), a + ∈ A. That gives r = b + − a + ≤ δ 2(1+c) . By the definition of the restricted normal cone we have u : We have to show that b is not an element of the set in (6) for σ = 0. Suppose b ∈ B(a + , (1+c)r). Then we have to show and the claim follows. Remark 5. Example 7.1 shows that the converse of proposition 3 is not true. 
The difference between superregularity and its extension (A, ǫ, δ)-regularity on the one hand, and 0-Hölder regularity on the other, is the following: in (6) we exclude points in the intersection of a restricted right circular cone with vertex b + , axis a + − b + , and aperture β = arccos √ cr σ and the shrinking ball B(a + , (1 + c)r). In contrast, (A, ǫ, δ)-regularity forbids many more points, namely all points in that same cone, but within the fixed ball B(b * , δ). In consequence, this type of regularity is not suited to deal with singularities pointing inwards, like the prototype in example 7.1. We next justify our notion of Hölder regularity by proving that prox-regular sets are σ-Hölder regular for every σ is the largest ball with its centre on b + R + d which touches B in b from outside, i.e., has no points from B in its interior. It was shown in [24, Thm. An immediate consequence is that sets of positive reach in the sense of Federer [15] are prox-regular; see e.g. [24,Theorem 1.3]. Therefore, prox-regularity is a local version of positive, or non-vanishing, reach. We now relax the concept of non-vanishing reach to sets where the reach may vanish at some boundary points, but slowly so. We say that the reach vanishes with exponent σ and rate τ . , then it has slowly vanishing reach at b * with respect to A with rate τ = 0 and arbitrary exponent σ ∈ (0, 1]. and since τ ′ > 0 was arbitrary, this shows that (8) is satisfied with τ = 0. Proposition 5. (Hölder regularity from slowly vanishing reach). Let σ ∈ (0, 1). Suppose B has σ-slowly vanishing reach with rate τ ∈ [0, 1) with respect to A at b * ∈ A∩B. Then B is (1 − σ)-Hölder regular with respect to A with any constant c > 0 satisfying In particular, c may be chosen arbitrarily small. Proof: 1) We have to show that there exists a neighborhood U of b * such that (6) is satisfied with c as in (9) and with exponent 1 − σ. By condition (9) we can choose τ ′ > τ and ǫ > 0 such that By condition (8), and since τ < τ ′ , there exists a neighborhood U of b * such that whenever We will show that U is the neighborhood we need in condition (6). 2) To prove this pick As this is clear for cos β ≤ 0, we may assume cos β > 0. Let us define where r, β are as before. We claim that the ball B(b + + Rd, R) contains b, where as above To prove this, note that by the cosine theorem, applied in the triangle a + , b + , b, we have Since a + − b ≤ (1 + c)r, we obtain which on completing squares turns out to be the same as Here the last equality uses the definition (10) of R. We therefore obtain and using the cosine theorem again, now in the triangle b This gives b ∈ B(b + + Rd, R) as claimed. 3) By the definition (7) of R(b + , d), any radius R ′ < R(b + , d) must give rise to a ball with But as we have shown in part 2), the ball B(b + + Rd, R) contains b, so necessarily R ≥ R(b + , d). Hence by the choice of U in part 1), r σ /R ≤ r σ /R(b + , d) < τ ′ , or what is the same, r σ < Rτ ′ . Substituting the definition (10) of R and multiplying by r −σ , we deduce Now suppose that cos β > √ cr 1−σ , contrary to what we wish to show. Then a contradiction. That proves the result. Since prox-regularity at b * ∈ B implies slowly vanishing reach at b * with respect to any closed set A containing b * , we have the following immediate consequence. Consider the case of a Lipschitz domain B. Here Hölder regularity may be related to a property of the boundary ∂B of B. Proposition 6. Let σ ∈ (0, 1). 
Suppose B is the epigraph of a locally Lipschitz function f : R n−1 → R. Let x * ∈ R n−1 and suppose there exists a neighborhood V of x * and µ > 0 such that for every x 0 ∈ V and every proximal subgradient g ∈ ∂ p f (x 0 ) the one-sided Then B is σ-Hölder regular at (x * , f (x * )) ∈ B with respect to every closed set A containing (x * , f (x * )), and for every constant c > 0 satisfying µ ≤ √ c/(2 + c) σ . . We will show that U is as required. In order to check (6), choose a + ∈ A\B and b + ∈ P B (a + ) such that a + , b + ∈ U. for some t > 0. Using a + − b + = r, we can therefore write Since there is nothing to prove for cos β ≤ 0, we assume cos β > 0. By the definition of B we have b = (x, ξ) for some x ∈ R n−1 and ξ ≥ f (x). Now Here the first inequality uses the fact that ξ ≥ f (x). The second inequality uses the onesided Hölder estimate from the hypothesis. In order to be allowed to use this estimate, we have to assure that x ∈ V . This follows from The third inequality can be seen as follows. We have Hence by the choice of c. That completes the argument. Remark 7. The nomenclature in Proposition 6 can be explained as follows. Lipschitz smoothness [14] of −f at x 0 is a well-known second-order property equivalent to the second difference quotient being bounded below for g ∈ ∂f (x 0 ) and x in a neighborhood of x 0 . The Hölder estimate in Proposition 6 is the analogous but weaker condition ∆ 1+σ (·) ≥ −µ > −∞ for some σ ∈ (0, 1). In analogy with [14] one could call this σ-Hölder smoothness of −f at x 0 . We consider the following natural modification of amenability from [25]. Proposition 7. (Hölder regularity from Hölder amenability). Suppose the closed set B is σ ′ -Hölder amenable at x * for some σ ′ ∈ (0, 1]. Then B is σ-Hölder regular at x * with respect to any closed set A containing x * for every σ ∈ (0, σ ′ ), and with arbitrary constant c. The proof may be adopted from on [20,Prop. 4.8] with minor changes, and we skip the details. This result suggests that Hölder regularity is settled between the weaker superregularity and the stronger prox-regularity. This is true as long as we consider this type of regularity as a property of B alone. We stress, however, that it is the combination with A and the shrinking distance between the sets in (6) which makes our definition 3 truly versatile in applications. This is corroborated by the following observation. , N B Following entirely the argument in [12, page 6], one can now find a smaller neighborhood U of x * such that the following is true: 2) We claim that U is the neighborhood required in σ-Hölder regularity with constant c. To check this, we have to show that the set (6) is empty. We assume that b ∈ U is an element of that set. Then b ∈ P A (a + ) −1 ∩ B and b ∈ B(a + , (1 + c)r). Hence we are in the situation of part 1), which means r Hence the set (6) is empty. Convergence In this section we prove the main convergence result. Alternating projections converge locally for sets which intersect separably, if one of the sets is Hölder regular with respect to the other. The proof requires the following preparatory lemma. Lemma 1. (Three-point estimate). Suppose B intersects A separably at x * ∈ A ∩ B with exponent ω ∈ [0, 2) and constant γ > 0 on the neighborhood U of x * . Suppose B is also ω/2-Hölder regular at x * ∈ A ∩ B with respect to A on U with constant c > 0 satisfying c < γ 2 . Then there exists 0 < ℓ < 1, depending only on γ, c and U, such that for every building block b → a + → b + in U. Theorem 1. (Local convergence). 
Suppose B intersects A separably at x * ∈ A ∩ B with exponent ω ∈ [0, 2) and constant γ and is ω/2-Hölder regular at x * with respect to A and constant c < γ 2 . Then there exists a neighborhood V of x * such that every sequence of alternating projections between A and B which enters V , converges to a point b * ∈ A ∩ B. Proof: 1) By hypothesis there exists a neighborhood U = B(x * , 4ǫ) of x * ∈ A ∩ B such that every building block b → a + → b + with b, a + , b + ∈ U satisfies the angle condition 1 − cos α ≥ γ b + − a + ω , where α = ∠(b − a + , b + − a + ). In addition, by shrinking U if necessary, we may assume that B is ω/2-Hölder regular at x * on U with constant c < γ 2 . Then by the three-point estimate (Lemma 1) there exists ℓ ∈ (0, 1) depending only on c, γ and U, such that a + − b + 2 + ℓ b − b + 2 ≤ a + − b 2 for every such building block. Since a + − b ≤ a − b , we deduce the following four-point estimate for building blocks a → b → a + → b + with b, a + , b + ∈ U. We shall prove by induction that for every k ≥ 1, we have Let us first do the induction step and suppose that hypotheses (16), (17) are true at k − 1 for some k ≥ 2. We have to show that they also hold at k. 2.3) Let us now prove that the hypotheses and this is precisely (17) at k = 1. This concludes the induction argument. 3) Having proved (16), (17) for all indices k ≥ 1, we see from (18) that the series ∞ j=1 b j − b j+1 converges, which means b k is a Cauchy sequence, which converges to a limit b * ∈ B ∩ B(x * , ǫ). Using relation (21) we conclude that a k converges to the same limit b * ∈ A ∩ B. Our next result gives the convergence rate for ω ∈ (0, 2). The case ω = 0, where linear convergence is obtained, will be treated separately in Theorem 3. Passing to the limit M → ∞ gives . Consequently, and substituting this gives on the right of (23) dominates the first term 1 2 (S N −1 −S N ). That means, there exists another constant C ′′ > 0 such that for all N ∈ N. We claim that there exists yet another constant C ′′′ such that Assuming this proved, summation of (24) Hence for yet two other constants C ′′′′ , C ′′′′′ , Since b M − b * ≤ S M by the triangle inequality, that proves the claimed speed of convergence. In order to prove (24) we divide the set of indices into proving (24). In contrast, for N ∈ J we have (24) is also satisfied. Finally, the same estimate for a k follows from Theorem 2. (Local convergence with linear rate). Let A, B intersect 0-separably at x * with constant γ ∈ (0, 2). Suppose B is 0-Hölder regular at x * with respect to A with constant c < γ 2 . Then there exists a neighborhood V of x * such that every sequence of alternating projections that enters V converges R-linearly to a point b * ∈ A ∩ B. Proof: Applying Lemma 1 and Theorem 1 in the case ω = 0, we obtain convergence of the sequence a k , b k to a point b * ∈ A ∩ B from summability of k b k−1 − b k . Now from the proof of Corollary 4, we see that in the case θ = 1 2 equation (23) simplifies to or what is the same Remark 8. Theorem 2 extends the results in [20,Thm. 5.2] and [4,Thm. 3.14] in two ways. Firstly, as seen in example 7.1, 0-Hölder regularity includes sets B which have singularities pointing inwards, where superregularity [20] and its extension in [4] fail. Secondly, 0-separability is weaker than linear regularity or the CQ in [4], see example 7.4. We now obtain the following consequence of Theorem 1, originally proved in [5] for more general families of sets. When specialized to the case of two sets we have Corollary 5. 
(Bauschke et al. [5,Theorem 3.14]). Suppose (A,Ã, B,B) satisfies the CQ-condition (2) at x * ∈ A ∩ B, where P A (∂B \ A) ⊂Ã, P B (∂A \ B) ⊂B. Moreover, suppose for every ǫ > 0 there exists δ > 0 such that B is (A, ǫ, δ) regular at x * . Then there exists a neighborhood V of x * such that every alternating sequence which enters V converges R-linearly to a point in A ∩ B. As already shown in [5] one readily derives Corollary 6. (Lewis, Luke, Malick [20]). Suppose A, B intersect linearly regularly and B is superregular. Then alternating projections converge locally R-linearly to a point in the intersection. The following is now a consequence of Theorem 2, using Propositions 2 and 8. Corollary 7. (Drusvyatskiy, Ioffe, Lewis [12]). Suppose A, B intersect intrinsically transversally at x * . Then there exists a neighborhood U of x * such that every sequence of alternating projections entering U converges R-linearly to a point in the intersection. Remark 9. Drusvyatskiy et al. [12] stress that their approach gives local R-linear convergence under a transversality hypothesis alone, while the older [20,5,22] still need regularity assumptions. However, this statement should be read with care, because Propositions 2 and 8 show that intrinsic transversality amalgamates transversality and regularity aspects. In particular, it is more restrictive than 0-Hölder regularity in tandem with 0-separability, so that Theorem 2 is stronger than the main result in [12]. Subanalytic sets Following [8], a subset A of R n is called semianalytic if for every x ∈ R n there exists an open neighborhood V of x such that for finite sets I, J and real-analytic functions φ ij , ψ ij : V → R. The set B in R n is called subanalytic if for every x ∈ R n there exist a neighborhood V of x and a bounded semianalytic subset A of some R n × R m , m ≥ 1, such that B ∩ V = {x ∈ R n : ∃y ∈ R m such that (x, y) ∈ A}. Finally, an extended real-valued function f : R n → R ∪ {∞} is called subanalytic if its graph is a subanalytic subset of R n × R. We consider the function f : , which completes the proof. Definition 6. Let f : R n → R ∪ {∞} be lower semi-continuous with closed domain such that f | domf is continuous. We say that f satisfies the Łojasiewicz inequality with exponent θ ∈ [0, 1) at the critical point x * of f if there exists γ > 0 and a neighborhood U of x * such that |f (x) − f (x * )| −θ g ≥ γ for every x ∈ U and every g ∈ ∂f (x). Proof: Note that f (a * ) = 0. Therefore there exists a neighborhood U of a * ∈ A ∩ B such that for every a + ∈ A∩U and every g ∈ ∂f (a + ). Now let a → b → a + → b + be a building block with a, b, a + , b + ∈ U. From Lemma 2, . This uses the fact that a + ∈ P A (b). Hence by (26) we have Let us for the time being consider angles α = ∠(b − a + , b + − a + ) smaller than 90 • . Then the minimum value in (27) Since 1 − cos α ≥ 1 2 sin 2 α, estimate (28) implies This shows that we must have θ > 1 2 , because the numerator tends to 0, so the denominator has to go to zero, too, which it does for 4θ − 2 > 0. Let us now discuss the case where α ≥ 90 0 . We claim that the same estimate (29) is still satisfied. Since cos α < 0, the numerator 1 − cos α in (29) is ≥ 1. Moreover, the infimum in (27) is now attained at λ = 0 with the value a + − b + = d B (a + ). Hence estimate (27) Proof: We assume A∩B = ∅, otherwise there is nothing to prove. Consider the function f : Then f has closed domain A and is continuous on A, which makes it amenable to definition 6. Every x * ∈ A ∩ B is a critical point of f . 
Since A, B are subanalytic sets, f is subanalytic. That can be seen as follows. ) shows that f is subanalytic. Now we invoke Theorem 3.1 of [9], which asserts that f satisfies the Łojasiewicz inequality at x * for some θ ∈ (0, 1). Hence (26) is true for every g ∈ ∂f (a + ), and therefore also for every g ∈ ∂f (a + ). Applying Lemma 3, we deduce that B intersects A separably with ω = 4θ − 2. Interchanging the roles of A and B, it follows also that A intersects B separably. Corollary 8. (Local convergence for subanalytic sets). Let A, B be subanalytic. Suppose B is Hölder regular at x * ∈ A ∩ B with respect to A. Then there exists a neighborhood V of x * such that every sequence of alternating projections a k , b k which enters V converges to some b * ∈ A ∩ B with rate b k − b * = O(k −ρ ) for some ρ ∈ (0, ∞). Corollary 9. Let A, B be closed subanalytic sets and suppose B has slowly vanishing reach with respect to A. Let x * ∈ A ∩ B, then there exists a neighborhood U of x * such that every sequence of alternating projections a k , b k which enters U converges to some Recall from [8,27] that a subset A of R n is called semialgebraic if for every x ∈ R n there exists a neighborhood V of x such that (25) is satisfied with φ ij , ψ ij polynomials. Naturally, this means that every semialgebraic set is semianalytic, hence subanalytic. By combining Theorems 1 and 3, we therefore obtain the following result. As a variant of the method of alternating projects consider the averaged projection method. Given closed sets C 1 , . . . , C m , the method generates a sequence x n by the recursion x n+1 ∈ 1 m (P C 1 (x n ) + · · · + P Cm (x n )). Corollary 11. Let C 1 , . . . , C m be subanalytic sets in R d , and let c * ∈ C 1 ∩ · · · ∩ C m . Then there exists a neighborhood U of c * such that whenever a sequence x n of averaged projections enters U, then it converges to some x * ∈ C 1 ∩ · · · ∩ C m with rate x k − x * = O(k −ρ ) for some ρ ∈ (0, ∞). Since B is convex, it is prox-regular hence Hölder regular with respect to A, so by Corollary 8 there exists a neighborhood U = U × · · · × U of (c * , . . . , c * ) ∈ A ∩ B such that every alternating sequence which enters U converges to some (x * , . . . , x * ) ∈ A ∩ B with rate O(k −ρ ) for some ρ ∈ (0, ∞). Now consider an averaged projection sequence x k entering U. It follows that (x k , . . . , x k ) ∈ U, hence x k converges to x * with that same rate. Remark 10. We mention a related averaged projection method in [1,Corollary 12], where the authors use the Kurdyka-Łojasiewicz inequality. The employed technique indicates that results in the spirit of Theorem 3 could be obtained for more general classes of sets definable in an o-minimal structure [1]. We conclude this section with an application of Theorem 1, demonstrating its versatility as a convergence test in practical situations. Let C N be a finite dimensional unitary space, and consider the discrete Fourier transform as a unitary linear operator x → x of C N . The phase retrieval problem [16,13] consist in estimating an unknown signal x ∈ C N whose Fourier amplitude | x(ω)| = a(ω), ω = 0, . . . , N −1, is known. In physical terminology, identifying x means retrieving its unknown phase x(ω)/| x(ω)| in frequency domain. Formally, given a function a(·) : {0, . . . , N − 1} → [0, ∞), we have to find an element of the set B = {x ∈ C N : | x(ω)| = a(ω) for all ω = 0, . . . , N − 1}. 
Since this problem is underdetermined, additional information about x in a different Fourier plane or in the time domain is added. We represent it in the abstract form x ∈ A for a closed set A. Then the phase retrieval problem is to find x ∈ A ∩ B. The famous Gerchberg-Saxton error reduction algorithm [16] computes a solution of the phase retrieval problem by generating a sequence of estimates as follows: Given x ∈ C N , compute x and correct its Fourier amplitude by putting y(ω) = a(ω) x(ω)/| x(ω)| if x(ω) = 0, and y(ω) = a(ω) if there is extinction x(ω) = 0. For short, y = a x/| x| with the convention 0/|0| = 1. Then compute the inverse discrete Fourier transform y of y and build the new iterate x + by projecting y on the set A, that is x + ∈ P A ( y). In condensed notation: Corollary 12. (Gerchberg-Saxton error reduction). Suppose the constraint x ∈ A is represented by a subanalytic set A. Let x * ∈ A ∩ B be a solution of the phase retrieval problem. Then there exists ǫ > 0 such that whenever a Gerchberg-Saxton sequence x k enters B(x * , ǫ), then it converges to a solutionx ∈ A ∩ B of the phase retrieval problem with rate of convergence x k −x = O(k −ρ ) for some ρ ∈ (0, ∞). Proof: With the convention 0/|0| = 1, the mapping x → (a x/| x|) ∼ is an orthogonal projection on the set B = {x ∈ C N : | x(·)| = a(·)}. (See for instance [3, (8), (10)], where the authors consider even the function space case). Therefore the Gerchberg-Saxton algorithm (30) is an instance of the alternating projection methods between the subanalytic set A and the Fourier amplitude set B. We show that B is subanalytic and prox-regular. Local convergence with rate O(k −ρ ) then follows from Corollary 9. As far as subanalyticity of B is concerned, observe that on identifying which is clearly a representation of the form (25), since the discrete Fourier transform x → x is analytic. To show prox-regularity of B, we have to show that the projection on B is singlevalued in a neighborhood of B. With the same identification C N ∼ = R 2N evoked before, the projection on B splits into N projections in R 2 , given as ( x 1 (ω), x 2 (ω)) → a(ω) In the case a(ω) = 0 this is the projection onto the origin, which is clearly single-valued. For a(ω) > 0 this is the orthogonal projection onto the sphere of radius a(ω) in R 2 , which is single-valued except at the origin ( x 1 (ω), x 2 (ω)) = (0, 0). This means the projection on B is unique on the neighborhood U = {x ∈ C N : | x(·)| ≥ a(·)/2} of B, proving prox-regularity of B. Remark 11. The constraint x ∈ A may represent additional measurements, or it may include prior information about the unknown image. In the original work [16] x ∈ A represents Fourier amplitude information from a second Fourier plane, which is a constraint analogous to x ∈ B. The constraint x ∈ A may also represent prior information about the support supp(x) of the unknown signal x in physical domain. It may for instance be known that supp(x) ⊂ S, where S is a subset of {0, . . . , N − 1} with card(S) ≪ N, or with a periodic structure. This is known as an atomicity constraint in crystallographic phase retrieval [13]. For A = {x ∈ C N : x(t) = 0 for t ∈ S}, P A is simply truncation y → y · 1 S . Here the Gerchberg-Saxton error correction method has the explicit form Other choices of the constraint x ∈ A have been discussed in the literature, see e.g. [13]. Our convergence result requires only subanalyticity of A, a condition which is always satisfied in practice. 
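The following Python sketch illustrates the error reduction iteration just described in the special case of the support (atomicity) constraint mentioned above, where P_A is truncation y → y · 1_S. The toy signal, the variable names and the iteration count are our own choices, and the sketch ignores the usual trivial ambiguities of phase retrieval (global phase, shift, conjugate reflection); it is meant only to show the two projections being alternated.

import numpy as np

def project_B(x, a):
    # Projection onto B = {x : |Fx| = a}: keep the phase of the DFT, impose the known
    # amplitude a, with the convention 0/|0| = 1 where the DFT vanishes.
    X = np.fft.fft(x)
    absX = np.abs(X)
    phase = np.ones_like(X)
    nz = absX > 0
    phase[nz] = X[nz] / absX[nz]
    return np.fft.ifft(a * phase)

def project_A(x, support):
    # Projection onto A = {x : x(t) = 0 for t outside S}: truncation x -> x * 1_S.
    return np.where(support, x, 0.0)

def gerchberg_saxton(a, support, x0, iters=500):
    x = x0.astype(complex)
    for _ in range(iters):
        x = project_A(project_B(x, a), support)
    return x

# Toy usage: a sparse real signal with known support S and known Fourier amplitudes a.
rng = np.random.default_rng(0)
n = 64
support = np.arange(n) < 8                       # assumed known support S
x_true = np.where(support, rng.normal(size=n), 0.0)
a = np.abs(np.fft.fft(x_true))                   # known Fourier amplitudes
x_rec = gerchberg_saxton(a, support, rng.normal(size=n))
print(np.linalg.norm(np.abs(np.fft.fft(x_rec)) - a))   # residual in Fourier amplitude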
Note that since B is not Clarke regular at x * = (0, 0), it is not superregular in the sense of [20]. What is more, B is not (A, ǫ, δ)-regular in the sense of [4] at x * = (0, 0), regardless how ǫ, δ > 0 are chosen, because the cone b + + {v : a + − b + , v ≤ ǫ a + − b + v } with vertex at the projected point b + = ( 3 4 x, 3 4 x) ∈ B hits B at points b ′ ∈ B other than b on the opposite side of A, regardless how small ǫ is chosen. And this cannot be prevented by shrinking the neighborhood B(x * , δ). Note that A, B intersect 0-separably at (0, 0), hence alternating projections converge linearly by Theorem 2. This cannot be obtained from the results in [20,4]. Example 7.2. (Regularity cannot be dispensed with). Following [6], consider the spiral z(φ) = (1 + e −φ )e iφ , φ ∈ [0, ∞) in the complex plane which approaches the unit circle S = {|z| = 1} form outside. Define a sequence z n = z(φ n ) with φ 1 < φ 2 < · · · → ∞ such that z n+1 − z n < z n − z n−1 → 0, P {z k :k =n}∪S (z n ) = z n+1 , and such that every z ∈ S is an accumulation point of the z n . In [6] an explicit construction with these properties is obtained recursively as Let A = {z 2n : n ∈ N} ∪ S, B = {z 2n−1 : n ∈ N} ∪ S, then A ∩ B = S. Note that for starting points |z 0 | > 1, the sequence of alternating projections between A and B is a tail of the sequence z n , so none of the alternating sequences converges. Note that ∠(z n+1 −z n , z n−1 −z n ) → π, hence A, B intersect 0-separably at every x * ∈ S = A∩B. The CQ in the sense of [4] is satisfied at every x * ∈ A∩B. Namely, for z ∈ S, N B A (z) = N A B (z) = R + (−iz). Indeed, as a n = P A (b n ) approaches z, the direction u n = (b n − a n )/ b n − a n approaches a direction perpendicular to z, and since the spiral turns counterclockwise, this direction is −iz. Therefore N B A (z) ∩ (−N A B (z)) = {0} for every z ∈ S. Since the sequence z n fails to converge, we conclude that this must be due to the lack of regularity at points in S. Indeed, Hölder regularity fails for every 0 ≤ σ < 1. This can be seen as follows. Since the angle ∠(b − a + , b + − a + ) for the building block b → a + → b + approaches π, the corresponding angle β = ∠(b − b + , a + − b + ) goes to 0, so cos β → 1, and for σ ∈ (0, 1) we cannot find c > 0 such that cos β ≤ √ cr σ . Therefore, in order to assure (6), we would need b ∈ B(a + , (1 + c)r), where r = a + − b + . This however, would imply linear convergence of the alternating sequence, which fails. As a consequence of Proposition 3, the other regularity concepts fail, as does intrinsic transversality. Example 7.3. (Discrete spiral I). We consider a discrete approximation of the logarithmic spiral, generated by 8 equally spaced rays emanating from the origin. Starting on one of the rays, we project perpendicularly on the neighboring ray, going counterclockwise. We label the projected points a 1 , b 1 , a 2 , . . . . This defines two sets Every sequence of alternating projections between A and B not starting at the origin is a tail of the sequence a n , b n and converges to (0, 0). Since α = ∠(b − a + , b + − a + ) = 135 • , A, B intersect 0-separably at x * = (0, 0). We check whether the intersection satisfies the CQ in the sense on [4]. Consider one of the rays on which a point a + is situated. Then u = b − a + ∈ N B A (a + ) is perpendicular to a + − x * , i.e., perpendicular to the ray in question. As u is the same for all a + on that ray, How about regularity at (0, 0)? 
Naturally, A, B are not superregular at (0, 0), because they are not Clarke regular. Concerning (A, ǫ, δ)-regularity of B in the sense of [4], suppose in a building block b → a + → b + we wish to set up a cone with apex b + and axis b + + R + (a + − b + ) by choosing its aperture small enough through the choice of ǫ such that all previous points of A are avoided, then we have to choose smaller and smaller angles β to do this, so this type of regularity fails. 16 √ 2) contains no points of the small square in its interior, the distance of b 1 to the small square being R = 7 16 √ 2, writing R = (1 + c)r we conclude that we can take c = 7 8 √ 2 − 1 > 0 in (6). Now up to a scaling and a rotation the situation is precisely the same for every building block a → b → a + starting in a square of length 2 a . After one 360 • -tour we end up at a ++++ on the same ray as a, and from there on the spiral will stay in that smaller square of length 2 1 16 a = 2 a ++++ . As a consequence of theorem 2, the sequence converges to (0, 0) with linear rate. None of the approaches of [20,19,4] allows to derive this. Example 7.4. (Discrete spiral II). We can modify the above construction by fixing φ ∈ (0, π 4 ) and generating rays kφ, k ∈ N. Turning counterclockwise, and keeping only the projected points, we generate iterates a k , b k with the property that a k has angle 2kφ mod 2π with the horizontal, b k has angle (2k +1)φ mod 2π. We put A = {a k : k ∈ N}∪{(0, 0)}, B = {b k : k ∈ N} ∪ {(0, 0)}, then A ∩ B = {(0, 0)} and P B (a k ) = b k , P A (b k ) = a k+1 by adapting the argument in example 7.3. The sequence represents again a discrete version of the logarithmic spiral, turning inwards counterclockwise. However, if we now choose φ such that φ/(2π) is irrational, there will be no periodicity, and the set of directions a k / a k will be dense in S 1 , and so for b k / b k . We have ∠(b + − a + , b − a + ) = π − φ, which means A, B intersect 0-separably at (0, 0). They intersect at an angle, this angle being π − φ. However, A, B do not intersect linearly regularly in the sense of [20,4]. Indeed, let us fix ψ ∈ [0, 2π) and u = (cos ψ, sin ψ). Then there exist rays 2kφ arbitrarily close to ψ and a k on these rays, projected from b k−1 on ray (2k − 1)φ. That means, u k = (b k − a k )/ b k − a k gets arbitrarily close to the direction u ⊥ = (− sin ψ, cos ψ), so u ⊥ ∈ N B A (0, 0). This shows N B A (0, 0) = R 2 and N A B (0, 0) = R 2 for A = P A (∂B \ A), B = P B (∂A \ B), so linear regularity and extensions fail. [7], any sequence of alternating projections between A and B started at a ∈ A \ S wanders down following the spiral, turning infinitely often around the cylinder with shrinking a n −b n → 0. In particular, every x * ∈ S is an accumulation point of a n , b n , so convergence fails. Since B is clearly Hölder regular with respect to A, we deduce that the angle condition (1) must fail, so in particular A is not subanalytic. This is interesting, as A is the projection of a semianalytic set in R 4 . For a picture see [7]. [4] at 0, hence also 0-separably. Note that B is not (A, ǫ, δ)-regular at 0 in the sense of [4], but it is σ-Hölderregular for every σ ∈ [0, 1). Note that intrinsic transversality fails here, because it uses the cones N A (a), N B (b), which in this case are too large because they coincide with the whole line. We modify this example as follows. Let a n = 2 −n , A = {a n : n ∈ N} ∪ {0}, b n = 1 2 (a n + a n+1 ) − δ n , B = {b n : n ∈ N} ∪ {0}, where δ n < 2 −n (a n − b n ). 
Then a n+1 − b n shrinks only by a factor 1 − δ n → 1 with respect to b n − a n , while shrinkage between a n+1 − b n and a n+1 − b n+1 is by a factor close to 1 2 . This shows that an alternating sequence may converge R-linearly without a fixed shrinkage factor 1 − κ 2 in every half step. Note that Theorem 2 still applies in this case. Example 7.8. Using the same function f and A, B, observe that for ω ≥ 2 the quotient q(x) stays away from 0, so that condition (1 ′ ) is satisfied. This explains why values ω ≥ 2 are not meaningful in definition 1.
2014-09-28T09:42:13.000Z
2013-12-19T00:00:00.000
{ "year": 2013, "sha1": "17941d10df490a654903cd6268ebe2aa0de1a914", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1312.5681", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "17941d10df490a654903cd6268ebe2aa0de1a914", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
32349423
pes2o/s2orc
v3-fos-license
Safety of Transesophageal Echocardiography in Adults: Study in a Multidisciplinary Hospital Method: prospective study of 137 patients who underwent TEE with MZ (Midazolan) used for moderate sedation. We analyzed the following occurrences: complications with topical anesthesia, complications related to the use of MZ, and complications related to the procedure. Univariate and multivariate analyses were used to test the influence of the clinical variables: age, sex, stroke, myocardiopathy, examination duration, mitral regurgitation (MR) and MZ dose. Introduction In recent years, transesophageal echocardiography (TEE) has become a broadly used assessment tool in the investigation of cardiac and noncardiac diseases, being a complementary option to the transthoracic echocardiogram (TTE). The images of the heart obtained through the esophageal technique have better definition and quality, allowing a detailed visualization of the heart structures, such as the atrial appendages, valves and great vessels. Consequently, the TEE provides additional diagnostic information when compared to the TTE [1][2][3]. However, the TEE is a semi-invasive examination, which requires sedation most of the time and can lead to complications related to the procedure itself or to the use of sedation. Some studies have analyzed the risks of performing this procedure [4,5]. The objective of this study was to analyze the aspects of feasibility, safety and complication rates of TEE associated with the routine use of mild and moderate sedation. (Mailing Address: Alexandre Ferreira Cury • Hospital Israelita Albert Einstein - Av. Albert Einstein, 627 / 701, 4 andar, Medicina Diagnostica e Preventiva (MDP) - 05651-901 - Morumbi, São Paulo, SP, Brasil. E-mail: alexandrecury@ig.com.br Manuscript received June 10, 2008; revised manuscript received August 04, 2008; accepted September 02, 2008.) Methods From October 2006 to July 2007, 137 TEE examinations were performed in our institution. All examinations were carried out at the Laboratory of Echocardiography, Emergency Room (ER) or Intensive Care Unit (ICU) of Hospital Israelita Albert Einstein (HIAE). A Philips echocardiography system, model HDI 5000, with a multiplane MPT 7-4 MHz probe and appropriate software for transesophageal echocardiography, was used. Relative contraindications were: late esophageal surgery, previous upper gastrointestinal bleeding, cervical spine disease and dysphagia symptoms; absolute contraindications were: obstruction or critical stenosis of the esophagus, active upper gastrointestinal bleeding, esophageal fistula, ulceration or perforation, esophageal diverticulum and esophageal tumor. All patients had been fasting for 6 hours or more, had no complaints regarding swallowing of food and had no contraindications to the examination. The patients were maintained with a nasal oxygen catheter at 3 l/min and continuous pulse oximetry to monitor oxygen saturation. After peripheral venous access had been attained, all patients received, before the examination started, progressive doses of Midazolan via the I.V. 
route, until mild or moderate sedation was obtained, according to the anesthesia and sedation procedure guidelines of Hospital Israelita Albert Einstein, classified in three levels, according to the following description: ANXIOLYSIS (MILD SEDATION): the state of tranquility and serenity induced by drugs, during which the patient responds normally to verbal commands. Although the cognitive and coordination functions may be impaired, the cardiovascular and respiratory functions are preserved. MODERATE SEDATION ("CONSCIOUS SEDATION"): a depression of consciousness induced by drugs, during which the patient awakes intentionally to a verbal command or slight tactile stimulus. No intervention is necessary to maintain the airway permeable and spontaneous ventilation is adequate. The cardiovascular function is preserved. DEEP SEDATION: a depression of consciousness induced by drugs, during which the patient does not awake easily; however, the patient responds to repeated painful stimuli. The capacity to maintain spontaneous respiration can be impaired. The patient might require assistance to maintain airway permeability and/or respiratory support. The cardiovascular function is often preserved. Local anesthesia of the oropharynx was carried out with 10% spray and 2% gel lidocaine. The heart rate (HR) and blood pressure (BP) were monitored with a Hewlett Packard automatic measurement device. The measurements were carried out in the brachial region every 5 minutes, from the beginning of sedation until the end of the examination. Cardiopulmonary resuscitation material with vasoactive drugs and an orotracheal aspirator was available in the room throughout the examinations. A microbubble test (saline solution agitated in a syringe with 5 ml of 0.9% saline solution + 1 ml of air, connected by a three-way stopcock to an empty syringe, with fast passage of the solution from one syringe to the other) was used to assess cardiac shunt. At the end of the procedure, the presence of bleeding, signs of local trauma, vital signs and complete recovery of the level of consciousness were verified. Flumazenil I.V. was used in some patients to revert sedation, at the discretion of the examiner. All events and complications during the procedures and sedation were recorded in a spreadsheet, as were the regular recordings of BP and oxygen saturation. After the examination, all outpatients were discharged with a companion and advised about the late effects of the sedative and its restrictions. All examinations were recorded on VHS tapes or DVD and the reports were typed and stored in a server database. All patients received prior information about the examination and agreed with the terms of the Free and Informed Consent Form for the procedure and sedation, according to the Technical Norm, Resolution SS-169 of 19/06/1996. Clinical characteristics and procedural features were analyzed: Fisher's exact test (univariate analysis) and correspondence analysis (multivariate analysis) were used to study the association between the variables of interest in the study [6]. 
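For illustration only, the following Python snippet shows how the univariate association between a binary clinical variable and the occurrence of complications could be tested with Fisher's exact test; the 2x2 counts below are hypothetical and are not data from this study.

from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (illustrative counts only):
# rows: Midazolan dose > 5 mg (yes / no); columns: complication (yes / no).
table = [[8, 30],
         [4, 95]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")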
Midazolan was used in all patients, whereas flumazenil was prescribed to 85 patients (62%), according to the discretion of the examiner.It was not necessary to use flumazenil for the urgent reversion of sedation in any of the patients in this series.The mean doses of Midazolan and Flumazenil were 4.29±1.87mg and 0.28±0.24mg, respectively; mean age was 65.04±15.94years and 58% of the patients were males; the mean duration of the examination was 16.42±6.18minutes; EF was 60%±9; the mean EF in the group with myocardiopathy was 40%±4 and the mean EF in the group with severe mitral regurgitation was 44%±5. According to the classification used, we obtained the following results: Complications with the procedure: minor events occurred in three cases and were caused by mild and transient hypoxia, due to the transient obstruction of the upper airway by the base of the tongue or mechanical obstruction by the space occupied by the probe during its introduction.It was not necessary to interrupt the examination, administer flumazenil to reverse sedation or use ventilatory support. They represented approximately 2% of the cases.According to the American Heart Association (AHA), the expected rate of events is approximately 3.3% 7 .Major events (severe ones) were not observed in this series.According to the American Heart Association (AHA), the acceptable rate is around 0.5% 7 . Complications due to the use of Midazolan occurred in 9 cases, with 8 due to mild hypoxia (5.8%), i.e., levels of oxygen saturation between 81 and 90%.All cases of mild hypoxia related to the use of Midazolan quickly responded to the increase in oxygen supply and neither the test interruption nor the sedation reversal with flumazenil was necessary.Arterial hypotension (SAP=80-89 mmHg) occurred in 1 case (0.7% of the cases); it was transitory and the sedation withdrawal and reversal with flumazenil was not necessary.No respiratory failure, paradoxical reaction or intolerance to Midazolan was reported.These events corresponded to 6.5% of the cases, with the acceptable rate in the literature being a maximum of 10% 8 .No complications were observed with the use of topical anesthetic or flumazenil. No significant difference was observed between the doses of Midazolan in the groups with and without severe MR (5.08±2.30x 6.75±2.06,p=0.15) (Table 2), and in the groups with and without myocardiopathy (5.08±2.25 x 5.73±2.94,p=0.37) (Table 3).However, a significant difference was observed regarding Midazolan doses in the group of patients older than 65 years, when compared to those younger than 65 (4.2±1.8 mg x 6.2±2.3 mg, p<0.01). The results presented in Table 1 and Chart 1 allows us to affirm that there is an association between the rate of complications and the following variables: Midazolan dose, severe MR and ejection fraction.No association was observed between events and the following clinical variables: age, sex, previous stroke, test duration > 10 minutes. 
Discussion The complications related to the TEE might be due to the passage and handling of the esophageal probe during the procedure and the use of sedatives and other medications.In this study, the rate of complications related to TEE was low, lower than the one observed in the literature 7,8 .Complications such as bleeding, respiratory failure, heart failure, arrhythmia or death were not observed.In a study about the safety of TEE in approximately 10,000 patients, the authors reported pulmonary and heart complications, as well as bleeding in 0.18% to 0.5% of the cases and death in 0.098% of the cases, similar to the rates observed in series with more than 200,000 patients submitted to gastroduodenoscopy procedures.It is worth mentioning that, in this study 5 , the majority of the patients was awake and did not receive sedation and therefore, did not meet the criteria of moderate sedation used and aforementioned in our study. The failure during the insertion of the esophageal probe was not observed in the 137 procedures and seems to be related to factors such as the patient's cooperation, degree of sedation and the operator's experience.There are reports in the literature of failure rates being related to the number of procedures performed, i.e., centers that performed fewer than 200 examinations/year have failure rates of 3.9±3.2%,when compared to centers that perform more than 200 examinations/year (1.4±0.9%,p<0.05) 5 .However, in this study, as the examinations were performed by experienced physicians supported by an efficient nursing team and a careful use of sedation, no failures were observed. Another aspect is related to the possibility of thermal lesions and mucosal lacerations due to the extended use of the esophageal probe, such as during cardiac surgeries.The literature has reported the presence of thermal lesions and bleeding in anticoagulated patients, and more rarely, mucosal damage in patients with Mallory-Weiss esophagitis [9][10][11] .The curling of the probe during handling is an unusual situation and can cause difficulty at the removal.In these circumstances, the probe must be introduced until the stomach, where the curling can be undone.Compression of adjacent structures has been reported in procedures carried out in children, such airway obstruction 12 .None of the conditions described above was observed in our study. 
Other complications, also related to the procedure, include supraventricular and ventricular arrhythmias, which are most of the time self-limited [13]. These arrhythmias are often caused by the handling of the esophageal probe during the procedure, mainly in patients without sedation or under superficial sedation, and anti-arrhythmic agents are rarely needed. Bradyarrhythmias are also rare and are related to the vagal effect during the procedure. They are usually self-limited and it is rarely necessary to use atropine. There is a report in the literature of death due to ventricular arrhythmia in a patient submitted to TEE. The autopsy revealed a lymphocytic infiltrate in the myocardium, suggesting active myocarditis [14]. Pulmonary complications such as bronchospasm and laryngospasm are mentioned in the literature. They occur in approximately 0.2% of the procedures and are probably caused by probe handling or medication. Hypoxia usually occurs in two situations: 1 - hypoxia related to the procedure (upper airway obstruction by the base of the tongue or mechanical obstruction by the space occupied by the probe) and 2 - hypoxia caused by the sedative [15]. In contrast to the hypoxia related to the procedure, which usually occurs at the beginning of the procedure, soon after the passage of the esophageal probe, the hypoxia related to the use of sedation occurs during the examination, some minutes after the passage of the probe, and is related to hypoventilation secondary to the sedative effect, as Midazolan has a 3-minute onset of action and a 5-minute peak effect, after which its effect declines gradually over 30-40 minutes. It is at the peak of the pharmacological action that hypoxia due to hypoventilation can be observed. In our study, these events occurred in a small percentage of patients and it was not necessary to interrupt any of the examinations; these data are in agreement with those in the literature [13]. Heart failure rarely occurs in patients with advanced myocardiopathy submitted to TEE; it is due to the withdrawal of medication, the endogenous stimulation of catecholamines and the use of sedation, which can contribute to worsening of the ventricular contractile function. Severe bleeding is also rare and the risk is higher in patients with esophageal disease (diverticulum, tumors and stenosis). In a European multicenter study, a death caused by acute bleeding was reported in a patient with esophageal neoplasia. During the passage of the probe, there was laceration of the tumor with significant hematemesis [5]. Another example of severe bleeding after TEE was reported in a patient who received thrombolytic agents for the treatment of thrombosis of a mechanical mitral prosthesis. This patient presented hemothorax due to an esophageal hematoma that ruptured into the thorax [5]. Some variables, such as the use of Midazolan > 5 mg, myocardiopathy and severe MR, were associated with a significantly higher number of complications during the procedure in the uni- and multivariate analyses. Midazolan is a benzodiazepine agent with hepatic metabolism and predominantly renal excretion. Its half-life is around 1-4 hours and it is increased in kidney, heart and liver failure, as well as in obese and elderly patients. Hemodynamic and respiratory events, among others, are known to occur with the use of Midazolan. 
Hypotension and respiratory depression are more common in children and in patients with hemodynamic instability, and occur frequently when midazolam is associated with narcotics. The recommended midazolam dose is 0.5-2 mg I.V. over 2 minutes; the effects must be assessed every 2-3 minutes and the total recommended dose is usually 2.5-5 mg. Patients receiving other central nervous system depressants, elderly patients and those with kidney, heart or liver failure must be monitored, and lower doses are indicated. The effects on the cardiovascular system are usually caused by a decrease in peripheral vascular resistance, myocardial depression and decreased cardiac output 8 .

Severe MR promotes volume overload of the left chambers and, consequently, an increase in left ventricular end-diastolic pressure, which can lead to pulmonary congestion. The mean EF was 40%±4 in the group with myocardiopathy and 44%±5 in the group with severe MR. Therefore, the group with severe MR also presented a reduced mean EF (44%±5), and this can be an associated factor in the occurrence of events. However, no significant difference was observed between the midazolam doses in the groups with and without severe MR (5.08±2.30 mg vs. 6.75±2.06 mg, p=0.15) (Table 2) or in the groups with and without myocardiopathy (5.08±2.25 mg vs. 5.73±2.94 mg, p=0.37) (Table 3).

Conclusion

The present study demonstrates that when TEE is performed by experienced professionals, even in patients under sedation, it is associated with a low risk of events. No major events were reported and there was no need to interrupt any procedure because of minor events, supporting the safety of the procedure.

Potential Conflict of Interest
No potential conflict of interest relevant to this article was reported.

Sources of Funding
There were no external funding sources for this study.

Study Association
This study is not associated with any post-graduation program.

Chart 1 - Correspondence analysis chart. Multivariate analysis showing the occurrence of complications and the variables: ejection fraction (EF), severe MR and midazolam dose > 5 mg.
Transition Semantics - The Dynamics of Dependence Logic We examine the relationship between Dependence Logic and game logics. A variant of Dynamic Game Logic, called Transition Logic, is developed, and we show that its relationship with Dependence Logic is comparable to the one between First-Order Logic and Dynamic Game Logic discussed by van Benthem. This suggests a new perspective on the interpretation of Dependence Logic formulas, in terms of assertions about reachability in games of im- perfect information against Nature. We then capitalize on this intuition by developing expressively equivalent variants of Dependence Logic in which this interpretation is taken to the foreground. Dependence Logic Dependence Logic [17] is an extension of First-Order Logic which adds dependence atoms of the form =(t 1 , . . . , t n ) to it, with the intended interpretation of "the value of the term t n is a function of the values of the terms t 1 . . . t n−1 ." The introduction of such atoms is roughly equivalent to the introduction of non-linear patterns of dependence and independence between variables of Branching Quantifier Logic [7] or Independence Friendly Logic [10,9,15]: for example, both the Branching Quantifier Logic sentence ∀x ∃y ∀z ∃w R(x, y, z, w) and the Independence Friendly Logic sentence ∀x∃y∀z(∃w/x, y)R(x, y, z, w) correspond in Dependence Logic to ∀x∃y∀z∃w(=(z, w) ∧ R(x, y, z, w)), in the sense that all of these expressions are equivalent to the Skolem formula ∃f ∃g∀x∀zR(x, f (x), z, g(z)). As this example illustrates, the main peculiarity of Dependence Logic compared to the others above-mentioned logics lies in the fact that, in Dependence Logic, the notion of dependence and independence between variables is explicitly separated from the notion of quantification. This makes it an eminently suitable formalism for the formal analysis of the properties of dependence itself in a firstorder setting, and some recent papers ( [5,2,4]) explore the effects of replace dependence atoms with other similar primitives such as independence atoms [5], multivalued dependence atoms [2], or inclusion or exclusion atoms [3,4]. Branching Quantifier Logic, Independence Friendly Logic and Dependence Logic, as well as their variants, are called logics of imperfect information: indeed, the truth conditions of their sentences can be obtained by defining, for every model M and sentence φ, an imperfect-information semantic game G M (φ) between a Verifier (also called Eloise) and a Falsifier (also called Abelard), and then asserting that φ is true in M if and only if the Verifier has a winning strategy in G M (φ). As an alternative of this (non-compositional) Game-Theoretic Semantics, which is an imperfect-information variant of Hintikka's Game-Theoretic Semantics for First Order Logic [8], Hodges introduced in [11] Team Semantics (also called Trump Semantics), a compositional semantics for logics of imperfect information which is equivalent to Game-Theoretic Semantics over sentences and in which formulas are satisfied or not satisfied not by single assignments, but by sets of assignments (called Teams). In this work, we will be mostly concerned with Team Semantics and some of its variants. We refer the reader to the relevant literature (for example to [17] and [15]) for further information regarding these logics: in the rest of this section, we will content ourselves with recalling the definitions and results which will be useful for the rest of this work. 
Definition 1.2 (Team) Let M be a first-order model and let V be a finite set of variables. A team X over M with domain Dom(X) = V is a set of assignments from V to M . Definition 1.3 (Relations corresponding to teams) Let X be a team over M , and let V be a finite set of variables. and let v be a finite tuple of variables in its domain. Then X( v) is the relation {s( v) : s ∈ X}. Furthermore, we write Rel(X) for X(Dom(X)). As is often the case for Dependence Logic, we will assume that all our formulas are in Negation Normal Form: Definition 1.4 (Dependence Logic, Syntax) Let Σ be a first-order signature. Then the set of all dependence logic formula with signature Σ is given by φ :: where R ranges over all relation symbols, t ranges over all tuples of terms of the appropriate arities, t 1 . . . t n range over all terms and v ranges over the set Var of all variables. The set Free(φ) of all free variables of a formula φ is defined precisely as in First Order Logic, with the additional condition that all variables occurring in a dependence atom are free with respect to it. TS-dep: φ is a dependence atom =(t 1 , . . . , t n ) and any two assignments s, s ′ ∈ X which assign the same values to t 1 . . . t n−1 also assign the same value to t n ; TS-∨: φ is of the form ψ 1 ∨ ψ 2 and there exist two teams Y 1 and Y 2 such that TS-∧: φ is of the form ψ 1 ∧ ψ 2 , M |= X ψ 1 and M |= X ψ 2 ; TS-∃: φ is of the form ∃vψ and there exists a function F : The disjunction of Dependence Logic does not behave like the classical disjunction: for example, it is easy to see that =(x)∨ =(x) is not equivalent to =(x), as the former holds for the team X = {{(x, 0)}, {(x, 1)}} and the latter does not. However, it is possible to define the classical disjunction in terms of the other connectives: Definition 1.6 (Classical Disjunction) Let ψ 1 and ψ 2 be two Dependence Logic formulas, and let u 1 and u 2 be two variables not occurring in them. Then we write ψ 1 ⊔ ψ 2 as a shorthand for Proposition 1.1 For all formulas ψ 1 and ψ 2 , all models M with at least two elements 1 whose signature contains that of ψ 1 and ψ 2 and all teams X whose domain contains the free variables of ψ 1 and ψ 2 The following four proportions are from [17]: 2 For all models M and Dependence Logic formulas φ, M |= ∅ φ. for all suitable models M and for all nonempty teams X. Furthermore, in Φ(R) the symbol R occurs only negatively. As proved in [13], there is also a converse for the last proposition: Theorem 1.7 (From Σ 1 1 to Dependence Logic) Let Φ(R) be a Σ 1 1 sentence in which R occurs only negatively. Then there exists a Dependence Logic formula φ( v), where | v| is the arity of R, such that for all suitable models M and for all nonempty teams X whose domain contains v. Because of this correspondence between Dependence Logic and Existential Second Order Logic, it is easy to see that Dependence Logic is closed under existential quantification: for all Dependence Logic formulas φ( v, P ) over the signature for all models M with domain Σ and for all teams X over the free variables of φ. Therefore, in the rest of this work we will add second-order existential quantifiers to the language of Dependence Logic, and we will write ∃P φ( v, P ) as a shorthand for the corresponding Dependence Logic expression. Dynamic Game Logic Game logics are logical formalisms for reasoning about games and their properties in a very general setting. 
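As an illustration of the team semantics just recalled, the following short Python sketch (not part of the original development; all names are ours) brute-forces the TS-dep clause and the team-splitting disjunction over a finite team, and reproduces the example above: the team X = {{(x,0)}, {(x,1)}} satisfies =(x) ∨ =(x) but not =(x). Dependence atoms over variables rather than arbitrary terms are used to keep the code short.

```python
from itertools import combinations

def assignment(**values):
    """An assignment as an immutable mapping from variables to values."""
    return frozenset(values.items())

def val(s, v):
    return dict(s)[v]

def satisfies_dep(team, xs, y):
    """TS-dep: any two assignments in the team that agree on xs also agree on y."""
    return all(
        tuple(val(s, x) for x in xs) != tuple(val(t, x) for x in xs)
        or val(s, y) == val(t, y)
        for s in team for t in team
    )

def satisfies_split_or(team, phi1, phi2):
    """TS-v: the team can be written as Y1 u Y2 with phi1 holding on Y1 and phi2 on Y2.
    For downward-closed formulas (dependence atoms are downward closed) it suffices
    to check the splits (Y1, team \\ Y1)."""
    members = list(team)
    for r in range(len(members) + 1):
        for left in combinations(members, r):
            y1 = set(left)
            y2 = set(members) - y1
            if phi1(y1) and phi2(y2):
                return True
    return False

# The example from the text: X = {{(x,0)}, {(x,1)}} satisfies =(x) v =(x) but not =(x).
X = {assignment(x=0), assignment(x=1)}
dep_x = lambda team: satisfies_dep(team, [], 'x')   # =(x): x is constant on the team
print(satisfies_dep(X, [], 'x'))                    # False
print(satisfies_split_or(X, dep_x, dep_x))          # True
```

The split-based check exploits the downward closure of dependence atoms, so only the partitions (Y1, X \ Y1) need to be examined rather than all covers of X.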
Whereas the Game Theoretic Semantics approach attempts to use game-theoretic techniques to interpret logical systems, game logics attempt to put logic to the service of game theory, by providing a highlevel language for the study of games. They generally contain two different kinds of expressions: 1. Game terms, which are descriptions of games in terms of compositions of certain primitive atomic games, whose interpretation is presumed fixed for any given game model; 2. Formulas, which, in general, correspond to assertions about the abilities of players in games. In this subsection, we are going to summarize the definition of a variant of Dynamic Game Logic [16]. 2 Then, in the next subsection, we will discuss a remarkable connection between First-Order Logic and Dynamic Game Logic discovered by Johan van Benthem in [19]. One of the fundamental semantic concepts of Dynamic Game Logic is the notion of forcing relation: Definition 1.8 (Forcing Relation) Let S be a nonempty set of states. A forcing relation over S is a set ρ ⊆ S×Parts(S), where Parts(S) is the powerset of S. In brief, a forcing relation specifies the abilities of a player in a perfect-information game: (s, X) ∈ ρ if and only if the player has a strategy that guarantees that, whenever the initial position of the game is s, the terminal position of the game will be in X. A (two-player) game is then defined as a pair of forcing relations satisfying some axioms: Definition 1.9 (Game) Let S be a nonempty set of states. A game over S is a pair (ρ E , ρ A ) of forcing relations over S satisfying the following conditions for all i ∈ {E, A}, all s ∈ S and all X, Y ⊆ S: Non-triviality: (s, ∅) ∈ ρ i . Determinacy: If (s, X) ∈ ρ i then (s, S\X) ∈ ρ j , where j ∈ {E, A}\{i}. 3 Definition 1.10 (Game Model) Let S be a nonempty set of states, let Φ be a nonempty set of atomic propositions and let Γ be a nonempty set of atomic game symbols. Then a game model over S, Φ and Γ is a triple (S, is a game over S for all g ∈ Γ and where V is a valutation function associating each p ∈ Φ to a subset V (p) ⊆ S. The language of Dynamic Game Logic, as we already mentioned, consists of game terms, built up from atomic games, and of formulas, built up from atomic proposition. The connection between these two parts of the language is given by the test operation φ?, which turns any formula φ into a test game, and the diamond operation, which combines a game term γ and a formula φ into a new formula γ, i φ which asserts that agent i can guarantee that the game γ will end in a state satisfying φ. Definition 1.11 (Dynamic Game Logic -Syntax) Let Φ be a nonempty set of atomic propositions and let Γ be a nonempty set of atomic game formulas. Then the sets of all game terms γ and formulas φ are defined as for p ranging over Φ, g ranging over Γ, and i ranging over {E, A}. We already mentioned the intended interpretations of the test connective φ? and of the diamond connective γ, i φ. The interpretations of the other game connectives should be clear: γ d is obtained by swapping the roles of the players in γ, γ 1 ∪ γ 2 is a game in which the existential player E chooses whether to play γ 1 or γ 2 , and γ 1 ; γ 2 is the concatenation of the two games corresponding to γ 1 and γ 2 respectively. ) be a game model over S, Γ and Φ. 
Then for all game terms γ and all formulas φ of Dynamic Game Logic over Γ and Φ we define a game γ G and a set φ G ⊆ S as follows: • sρ E X iff s ∈ φ G and s ∈ X; • sρ A X iff s ∈ φ G or s ∈ X for all s ∈ S and all X with ∅ = X ⊆ S; DGL-concat: For all game terms γ 1 and γ 2 , γ 1 ; • sρ i X if and only if there exists a Z such that sρ i 1 Z and for each z ∈ Z there exists a set X z satisfying zρ i 2 X z such that DGL-∪: For all game terms γ 1 and γ 2 , where, as before, DGL-atomic-pr: If s ∈ φ G , we say that φ is satisfied by s in G and we write M |= s φ. We will not discuss here the properties of this logic, or the vast amount of variants and extensions of it which have been developed and studied. It is worth pointing out, however, that [20] introduced a Concurrent Dynamic Game Logic that can be considered one of the main sources of inspiration for the Transition Logic that we will develop in Subsection 3.2. 4 [20] gives the following alternative condition for the powers of the universal player: • sρ A X if and only if X = Z 1 ∪ Z 2 for two Z 1 and Z 2 such that sρ A 1 Z 1 and sρ A 2 Z 2 . It is trivial to see that, if our games satisfy the monotonicity condition, this rules is equivalent to the one we presented. Dynamic Game Logic and First Order Logic In this subsection, we will briefly recall a remarkable result from [19] which establishes a connection between Dynamic Game Logic and First-Order Logic. In brief, as the following two theorems demonstrate, either of these logics can be seen as a special case of the other, in the sense that models and formulas of the one can be uniformly translated into models of the other in a way which preserves satisfiability and truth: V ) be any game model, let φ be any game formula for the same language, and let s ∈ S. Then it is possible to uniformly construct a first-order model G F O , a first-order formula φ F O and an assignment Theorem 1.14 Let M be any first order model, let φ be any first-order formula for the signature of M , and let s be an assignment of M . Then it is possible to uniformly construct a game model G DGL , a game formula φ DGL and a state We will not discuss here the proofs of these two results. Their significance, however, is something about which is necessary to spend a few words. In brief, what this back-and-forth representation between First Order Logic and Dynamic Game Logic tells us is that it is possible to understand First Order Logic as a logic for reasoning about determined games! In the next sections, we will attempt to develop a similar result for the case of Dependence Logic. A Logic for Imperfect Information Games Against Nature We will now define a variant of Dynamic Game Logic, which we will call Transition Logic. It deviates from the basic framework of Dynamic Game Logic in two fundamental ways: 1. It considers one-player games against Nature, instead of two-player games as is usual in Dynamic Game Logic; 2. It allows for uncertainty about the initial position of the game. Hence, Transition Logic can be seen as a decision-theoretic logic, rather than a game-theoretic one: Transition Logic formulas, as we will see, correspond to assertions about the abilities of a single agent acting under uncertainty, instead of assertions about the abilities of agents interacting with each other. 
In principle, it is certainly possible to generalize the approach discussed here to multiple agents acting in situations of imperfect information, and doing so might cause interesting phenomena to surface; but for the time being, we will content ourselves with developing this formalism and discussing its connection with Dependence Logic. Our first definition is a fairly straightforward generalization of the concept of forcing relation: Definition 2.1 (Transition system) Let S be a nonempty set of states. A transition system over S is a nonempty relation θ ⊆ Parts(S) × Parts(S) satisfying the following requirements: Informally speaking, a transition system specifies the abilities of an agent: for all X, Y ⊆ S such that (X, Y ) ∈ θ, the agent has a strategy which guarantees that the output of the transition will be in Y whenever the input of the transition is in X. The four axioms which we gave capture precisely this intended meaning, as we will see: where S is a nonempty set of states, E is a nonempty set of possible decisions for our agent and O is an outcome function from S × E to Parts(S). If s ′ ∈ O(s, e), we say that s ′ is a possible outcome of s under e; if O(s, e) = ∅, we say that e fails on input s. Definition 2.3 (Abilities in a decision game) Let Γ = (S, E, O) be a decision game, and let X, Y ⊆ S. Then we say that Γ allows the transition X → Y , and we write Γ : X → Y , if and only if there exists a e ∈ E such that ∅ = O(s, e) ⊆ Y for all s ∈ X (that is, if and only if our agent can make a decision which guarantees that the outcome will be in Y whenever the input is in X). Proof: Let θ ⊆ Parts(S) × Parts(S) be any transition system, let us enumerate its elements {(X i , Y i ) : i ∈ I)}, and let us consider the game Γ = (S, Suppose that (X, Y ) ∈ θ. If X = ∅, then Γ : X → Y follows at once by definition. If instead X = ∅, by non-triviality we have that Y is nonempty too, and furthermore (X, Hence, by monotonicity and downwards closure, (X, Y ) ∈ θ, as required. If instead X = ∅, then by non-creation we have again that (X, Y ) ∈ θ. Conversely, consider a decision game Γ = (S, E, O). Then the set of its abilities satisfies our four axioms: But then the same holds for all s ∈ X ′ , and hence Γ : Non-creation: Let Y ⊆ S and let e ∈ E be any possible decision. Then Non-triviality: Let s 0 ∈ X, and suppose that Γ : X → Y . Then there exists a e such that ∅ = O(s, e) ⊆ Y for all s ∈ X, and hence in particular What this theorem tells us is that our notion of transition system is the correct one: it captures precisely the abilities of an agent making choices under imperfect information and attempting to guarantee that, if the initial state is in a set X, the outcome will be in a set Y . Definition 2.5 (Trump) Let S be a nonempty set of states. A trump over S is a nonempty, downwards closed family of subsets of S. Whereas a transition system describes the abilities of an agent to transition from a set of possible initial states to a set of possible terminal states, a trump describes the agent's abilities to reach some terminal state from a set of possible initial states: 5 Conversely, for any trump X over S there exists a transition system θ such that X = reach(θ, Y ) for any nonempty Y ⊆ S. Proof: Let θ be a transition system. Then if (X, Y ) ∈ θ and X ′ ⊆ X, by downwards closure we have at once that (X ′ , Y ) ∈ θ. Furthermore, (∅, Y ) ∈ θ for any Y . Hence, reach(θ, Y ) is a trump, as required. 
Conversely, let X ⊆ Parts(Parts(S)) be a trump, and let us enumerate its elements as {X i : i ∈ I}. Then define θ as It is easy to see that θ is a transition system; and by construction, where we used the fact that X is downwards closed. We can now define the syntax and semantics of Transition Logic: Definition 2.6 (Transition Model) Let Φ be a set of atomic propositional symbols and let Θ be a set of atomic transition symbols. Then a transition model is a tuple T = (S, {θ t : t ∈ Θ}, V ), where S is a nonempty set of states, θ t is a transition system over S for any t ∈ Θ, and V is a function sending each p ∈ Φ into a trump of S. Definition 2.7 (Transition Logic -Syntax) Let Φ be a set of atomic propositions and let Θ be a set of atomic transitions. Then the transition terms and formulas of our language are defined respectively as where t ranges over Θ and p ranges over Φ. be a transition model, let τ be a transition term, and let X, Y ⊆ S. Then we say that τ allows the transition from X to Y , and we write T |= X→Y τ , if and only if TL-atomic-tr: τ = t for some t ∈ Θ and (X, Y ) ∈ θ t ; TL-test: τ = φ? for some transition formula φ such that T |= X φ in the sense described later in this definition, and X ⊆ Y ; TL-⊗: τ = τ 1 ⊗τ 2 , and X = X 1 ∪X 2 for two X 1 and X 2 such that T |= X1→Y τ 1 and T |= X2→Y τ 2 ; TL-concat: τ = τ 1 ; τ 2 and there exists a Z ⊆ S such that T |= X→Z τ 1 and T |= Z→Y τ 2 . Analogously, let φ be a transition formula, and let X ⊆ S. Then we say that X satisfies φ, and we write T |= X φ, if and only if TL-⊤: φ = ⊤; TL-atomic-pr: φ = p for some p ∈ Φ and X ∈ V (p); TL-⋄: φ = τ ψ and there exists a Y such that T |= X→Y τ and T |= Y ψ. Proof: By induction. We end this subsection with a few simple observations about this logic. First of all, we did not take the negation as one of the primitive connectives. Indeed, Transition Logic, much like Dependence Logic, has an intrinsically existential character: it can be used to reason about which sets of possible states an agent may reach, but not to reason about which ones such an agent must reach. There is of course no reason, in principle, why a negation could not be added to the language, just as there is no reason why a negation cannot be added to Dependence Logic, thus obtaining the far more powerful Team Logic [18,12]: however, this possible extension will not be studied in this work. The connectives of Transition Logic are, for the most part, very similar to those of Dynamic Game Logic, and their interpretation should pose no difficulties. The exception is the tensor operator τ 1 ⊗ τ 2 , which substitutes the game union operator γ 1 ∪ γ 2 and which, while sharing roughly the same informal meaning, behaves in a very different way from the semantic point of view (for example, it is not in general idempotent!) The decision game corresponding to τ 1 ⊗ τ 2 can be described as follows: first the agent chooses an index i ∈ {1, 2}, then he or she picks a strategy for τ i and plays accordingly. However, the choice of i may be a function of the initial state: hence, the agent can guarantee that the output state will be in Y whenever the input state is in X only if he or she can split X into two subsets X 1 and X 2 and guarantee that the state in Y will be reached from any state in X 1 when τ 1 is played, and from any state in X 2 when τ 2 is played. 
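The characterization of abilities in Definition 2.3, together with the TL-⊗ clause above, lends itself to a direct brute-force check over small finite state spaces. The following Python sketch is only illustrative (the data structures and names are ours, not the paper's): a decision game is given by an outcome function O(s, e), a transition X → Y is allowed when a single decision works uniformly on X, and the tensor of two transition terms is checked by splitting the input set, which suffices by downward closure.

```python
from itertools import combinations

# A decision game (S, E, O): states S, decisions E, and an outcome function
# O(s, e) returning the set of possible outcomes (empty set = the decision fails on s).
def allows(S, E, O, X, Y):
    """Definition 2.3: the game allows X -> Y iff some single decision e satisfies
    {} != O(s, e) <= Y for every s in X."""
    return any(all(O(s, e) and O(s, e) <= Y for s in X) for e in E)

def tensor_allows(allows1, allows2, X, Y):
    """TL-tensor: X -> Y is allowed by tau1 (x) tau2 iff X = X1 u X2 with X1 -> Y
    allowed by tau1 and X2 -> Y allowed by tau2; by downward closure it is enough
    to try the splits (X1, X \\ X1)."""
    members = list(X)
    for r in range(len(members) + 1):
        for left in combinations(members, r):
            x1, x2 = set(left), set(members) - set(left)
            if allows1(x1, Y) and allows2(x2, Y):
                return True
    return False

# Toy example: decision 'a' sends even states to {2} and fails on odd states;
# decision 'b' sends odd states to {3} and fails on even states.
S = {0, 1, 2, 3}
E = {'a', 'b'}
O = lambda s, e: ({2} if s % 2 == 0 else set()) if e == 'a' else ({3} if s % 2 == 1 else set())

game = lambda X, Y: allows(S, E, O, X, Y)
print(game({0, 1}, {2, 3}))                      # False: no single decision works on both states
print(tensor_allows(game, game, {0, 1}, {2, 3})) # True: split into {0} and {1}, choose 'a' and 'b'
```

The toy example makes the informal description above concrete: the tensor lets the agent condition the choice between the two components on the initial state, which a single uniform decision cannot do.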
It is also of course possible to introduce a "true" choice operator τ 1 ∪ τ 2 , with semantical condition TL-∪: T |= X→Y τ 1 ∪ τ 2 iff T |= X→Y τ 1 or T |= X→Y τ 2 ; but we will not explore this possibility any further in this work, nor we will consider any other possible connectives such as, for example, the iteration operator TL- * : T |= X→Y τ * iff there exist n ∈ N and Z 0 . . . Z n such that Z 0 = X, Z n = Y and T |= Zi→Zi+1 τ for all i ∈ 1 . . . n − 1. Transition Logic and Dependence Logic This subsection contains the central result of this work, that is, the analogues of Theorems 1.13 and 1.14 for Dependence Logic and Transition Logic. Representing Dependence Logic models and formulas in Transition Logic is fairly simple: 1. If φ is a literal or a dependence atom, φ T L = φ?; Theorem 2.11 For all first-order models M , teams X and formulas φ, the following are equivalent: • M T L |= X→S φ T L . Proof: We show, by structural induction on φ, that the first condition is equivalent to the last one. The equivalences between the last one and the second and third ones are then trivial. One interesting aspect of this representation result is that Dependence Logic formulas correspond to Transition Logic transitions, not to Transition Logic formulas. This can be thought of as one first hint of the fact that Dependence Logic can be thought of as a logic of transitions: and in the later sections, we will explore this idea more in depth. Representing Transition Models, game terms and formulas in Dependence Logic is somewhat more complex: • For every p ∈ Φ, a binary relation V p whose interpretation is {(j, x) : j ∈ J p , x ∈ X j }. where for any transition term τ , variable x and unary relation symbol P , τ DL x (P ) is defined as 6. For all t ∈ Θ, t DL x (P ) is ∃i(=(i) ∧ ∃y(R t (i, x, y)) ∧ ∀y(¬R t (i, x, y) ∨ P y)); For all formulas ) for a new and unused variable y. Theorem 2.14 For all transition models T = (S, (θ t : t ∈ Θ), V ), transition terms τ , transition formulas φ, variables x, sets P ⊆ S and teams X over T DL with X(x) ⊆ S, 7 T Proof: The proof is by structural induction on terms and formulas. Let us first consider the cases corresponding to formulas: 1. For all teams X, T DL |= X ⊤ and T |= X(x) ⊤, as required; 2. Suppose that T DL |= X ∃j(= (j) ∧ V p (j, x)). Then there exists a m ∈ Dom(T DL ) such that T DL |= X[m/j] V p (j, x). Hence, we have that X(x) ⊆ X m ∈ V (p); and, by downwards closure, this implies that X(x) ∈ V (p), and hence that T |= X(x) p as required. Conversely, suppose that T |= X(x) p. Then X(x) ∈ V (p), and hence X(x) = X m for some m ∈ J p . Then we have by definition that T DL |= X[m/j] V p (j, x), and finally that T DL |= X T x (p). By induction hypothesis, this is the case if and only x , that is, by induction hypothesis, if and only if T |= X ψ 1 ∧ ψ 2 . T DL |= X ( τ ψ) DL x if and only if there exists a P such that T DL |= X (τ ) DL x (P ) and T DL |= X[T DL /y] ¬P y ∨ (ψ) DL y . By induction hypothesis, the first condition holds if and only if T |= X(x)→P τ . As for the second one, it holds if and only if X[T DL /y] = Y 1 ∪ Y 2 for two Y 1 , Y 2 such that T DL |= Y1 ¬P y and T DL |= Y2 τ y (ψ). But then we must have that T |= Y2(y) ψ and that P ⊆ Y 2 (y); therefore, by downwards closure, T |= P ψ and finally T |= X(x) τ ψ. Conversely, suppose that there exists a P such that T |= X(x)→P τ and T |= P ψ; then by induction hypothesis we have that T DL |= X (τ ) DL x (P ) and that T DL |= X[T DL /y] ¬P y ∨ (ψ) DL x , and hence T DL |= X ( τ ψ) DL x . 
Now let us consider the cases corresponding to transition terms: 6. Suppose that T DL |= X ∃i(=(i) ∧ ∃y(R t (i, x, y)) ∧ ∀y(¬R t (i, x, y) ∨ P y)). If X = ∅ then X(x) = ∅, and hence by non-creation we have that (X(x), P ) = (∅, P ) ∈ θ t , as required. Let us assume instead that X = ∅. Then, by hypothesis, there exists a m ∈ Dom(T DL ) such that • There exists a F such that T DL |= X[m/i][F/y] R t (i, x, y); From the first condition it follows that for every p ∈ X(x) there exists a q such that R t (m, p, q): therefore, by the definition of R t , every such p must be in X m . From the second condition it follows that whenever R t (m, p, q) and p ∈ X(x) ⊆ X m , q ∈ P ; and, since X(x) = ∅, this implies that Y m ⊆ P by the definition of R t . Hence, by monotonicity and downwards closure, we have that (X(x), P ) ∈ θ t and that T |= X(x)→P t, as required. Conversely, suppose that (X(x), P ) = (X m , Y m ) ∈ θ t for some m ∈ I t . If X(x) = ∅ then X = ∅, and hence by Proposition 1.2 we have that T DL |= X t DL x (P ), as required. Otherwise, by non-triviality, P = Y m = ∅. Let now p ∈ P be any of its elements and let F (s) = p for all p ∈ X[m/i]: then M |= X[m/i][F/y] R t (i, x, y), as any assignment of this team sends x to some element of X m and y to p ∈ Y m . Furthermore, let s ∈ X(x) = X m , and let q be such that R t (m, s(x), q): then q ∈ Y m = P , and hence M |= X[m/i][T DL /y] ¬R t (i, x, y) ∨ P y. So, in conclusion, M |= X t DL x (P ), as required. 7. T DL |= X φ DL x ∧ P x if and only if T |= X(x) φ and X(x) ⊆ P , that is, if and only if T |= X(x)→P φ?. Hence, the relationship between Transition Logic and Dependence Logic is analogous to the one between Dynamic Game Logic and First-Order Logic. In the next sections, we will develop variants of Dependence Logic which are syntactically closer to Transition Logic, while still being first-order: as we will see, the resulting frameworks are expressively equivalent to Dependence Logic on the level of satisfiability, but can be used to represent finer-grained phenomena of transitions between sets of assignments. In what follows, we will do exactly that, first with Transition Dependence Logic -a variant of Dependence Logic, expressively equivalent to it, which is also a quantified version of Transition Logic -and then with Dynamic Dependence Logic, in which all expressions are interpreted as transitions! But why would we interested in such variants of Dependence Logic? One possible answer, which we will discuss in this subsection, is that transitions between teams are already a central object of study in the field of Dependence Logic, albeit in a non-explicit manner: after all, the semantics of Dependence Logic interprets quantifiers in terms of transformations of teams, and disjunctions in terms of decompositions of teams into subteams. This intuition is central to the study of issues of interdefinability in Dependence Logic and its variants, like for example the ones discussed in [4]. As a simple example, let us recall Definition 1.6: where u 1 and u 2 are new variables. As we said in Proposition 1.1, M |= X ψ 1 ⊔ ψ 2 if and only if M |= X ψ 1 or M |= X ψ 2 . We will now sketch the proof of this result, and -as we will see -this proof will hinge on the fact that the above expression can be read as a specification of the following algorithm: 1. Choose an element a ∈ Dom(M ) and extend the team X by assigning a as the value of u 1 for all assignments; 2. 
Choose an element b ∈ Dom(M ) and further extend the team by assigning b as the value of u 2 for all assignments; 3. Split the resulting team into two subteams Y 1 and Y 2 such that (a) ψ 1 holds in Y 1 , and the values of u 1 and u 2 coincide for all assignments in it; (b) ψ 2 holds in Y 2 , and the values of u 1 and u 2 differ for all assignments in it. Since the values of u 1 and u 2 are chosen to always be respectively a and b, one of Y 1 and Y 2 is empty and the other is of the form X[ab/u 1 u 2 ], and since u 1 and u 2 do not occur in ψ 1 or ψ 2 the above algorithm can succeed (for some choice of a and b) only if M |= X ψ 1 or M |= X ψ 2 . As another, slightly more complicated example, let us consider the following problem. Given four variables x 1 , x 2 , y 1 and y 2 , let x 1 x 2 | y 1 y 2 be an exclusion atom holding in a team X if and only if for all s, s ′ ∈ X, s(x 1 x 2 ) = s ′ (y 1 y 2 )that is, if and only if the sets of the values taken by x 1 x 2 and by y 1 y 2 in X are disjoint. By Theorem 1.7, we can tell at once that there exists some Dependence Logic formula φ(x 1 , x 2 , y 1 , y 2 ) such that for all suitable M and X, M |= X φ(x 1 , x 2 , y 1 , y 2 ) if and only if M |= X x 1 x 2 | y 1 y 2 ; but what about the converse? For example, can we find an expression ψ(x, y), in the language of First Order Logic augmented with these exclusion atoms (but with no dependence atoms), such that for all suitable M and X M |= X ψ(x, y) if and only if M |= X =(x, y)? As discussed in [4] in a more general setting, the answer is positive, and one such ψ(x, y) is ∀z(z = y ∨ (z = y ∧ xz | xy)), where z is some variable other than x and y. 8 Why is this the case? Well, let us consider any team X with domain containing x and y, and let us evaluate ψ(x, y) over it. As shown graphically in Figure 1, the transitions between teams occurring during the evaluation of the formula correspond to the following algorithm: On the other hand, one Dependence Logic expression corresponding to x 1 x 2 | y 1 y 2 is where w 1 , w 2 , u 1 and u 2 are new variable. We encourage the interested reader to verify that this is the case by examining the transitions between teams corresponding to the formula: in brief, the intuition is that first we extend our team by picking all possible pairs of values for w 1 and w 2 , then for any such pair we flag -through our choice of u 1 and u 2 -whether w 1 w 2 is different from x 1 x 2 or from y 1 y 2 . This implies that no such pair is equal to both x 1 x 2 and y 1 y 2 , or, in other words, that x 1 x 2 and y 1 y 2 have no value in common. More and more complex examples of definability results of this kind can be found in [4]; but what we want to emphasize here is that all these examples, like the one we discussed in depth here, have a natural interpretation in terms of algorithms which transform teams and apply simple tests to them, as the above one. Hence, we hope that the development of variants of Dependence Logic in which these transitions are made explicit might prove itself useful for the further study of this interesting class of problems. Transition Dependence Logic As stated, we will now define a variant of Dependence Logic which can also be seen as a quantified variant of Transition Logic. We will then prove that the resulting Transition Dependence Logic is expressively equivalent to Dependence Logic, in the sense that any Dependence Logic formula is equivalent to some Transition Dependence Logic formula and vice versa. 
where v ranges over all variables in Var, R ranges over all relation symbols of the signature, t ranges over all tuples of terms of the required arities, n ranges over N and t 1 . . . t n range over the terms of our signature. TDL-dep: φ is a dependence atom =(t 1 , . . . , t n ) and any two s, s ′ ∈ X which assign the same values to t 1 . . . t n−1 also assign the same value to t n ; TDL-∨: φ is of the form φ 1 ∨ φ 2 and M |= X φ 1 or M |= X φ 2 ; TDL-∧: φ is of the form φ 1 ∧ φ 2 , M |= X φ 1 and M |= X φ 2 ; TDL-⋄: φ is of the form τ ψ and there exists a Y such that M |= X→Y τ and M |= Y ψ. As the next theorem shows, in this semantics formulas and transitions are interpreted in terms of trumps and transition systems: Non-triviality: If X = ∅ then M |= X→∅ τ . Proof: The proof is by structural induction over φ and τ , and presents no difficulties whatsoever. Also, it is not difficult to see, on the basis of the results of the previous section, that this new variant of Dependence Logic is equivalent to the usual one: Theorem 3.4 For every Dependence Logic formula φ there exists a Transition Dependence Logic transition term τ φ such that for all first-order models M and teams X. Proof: τ φ is defined by structural induction on φ, as follows: 1. If φ is a first-order literal or a dependence atom then τ φ = φ?; It is then trivial to verify, again by induction on φ, that M |= X φ if and only if M |= X τ φ ⊤, as required. This representation result associates Dependence Logic formulas to Transition Dependence Logic transition terms. This fact highlights the dynamical nature of Dependence Logic operators, which we discussed in the previous subsection: in this framework, quantifiers describe transformations of teams, the Dependence Logic connectives are operations over games, and the literals are interpreted as tests. In fact, one might wonder what is the purpose of Transition Dependence Logic formulas: could we do away with them altogether, and develop a variant of Transition Dependence Logic in which all formulas are transitions? Later, we will explore this idea further; but first, let us verify that Transition Dependence Logic is no more expressive than Dependence Logic. for all first-order models M and teams X. Furthermore, for every Transition Dependence Logic transition term τ and Dependence Logic formula θ there is a Dependence Logic formula U (τ, ψ) such that again for all first-order models M and teams X. Proof: We prove the two claims together, by structural induction over φ and τ . 8. If τ is of the form τ 1 ⊗ τ 2 and v is the tuple of all free variables of θ then where R is a new | r|-ary relation symbol. Indeed, suppose that M |= X U (τ, θ): then there exists a relation R and two subteams X 1 and X 2 of X such that R v); and furthermore, by locality we have that M |= ∀ v(¬R v ∨ θ). Hence, M |= X U (τ 1 ⊗ τ 2 , θ), as required. The intended interpretations of these formulas are rather different, even though they happen to be satisfied by the same teams: and for this reason, Transition Dependence Logic may be thought of as a proper refinement of Dependence Logic even though it has exactly the same expressive power. Dynamic Predicate Logic Dynamic Semantics is the name given to a family of semantical frameworks which subscribe to the following principle ( [6]): The meaning of a sentence does not lie in its truth conditions, but rather in the way it changes (the representation of ) the information of the interpreter. 
In various forms, this intuition can be found prefigured in some of the later work of Ludwig Wittgenstein, as well as in the research of philosophers of language such as Austin, Grice, Searle, Strawson and others ([1]); but its formal development can be traced back to the work of Groenendijk and Stokhof about the proper treatment of pronouns in formal linguistics ( [6]). We refer to [1] for a comprehensive analysis of the linguistic issues which caused such a development, as well as for a description of the ways in which this framework was adapted in order to model presuppositions, questions/answers and other phenomena; here we will only present a formulation of dynamic predicate semantics, the alternative semantics for first-order logic which was developed in the above mentioned paper by Groenendijk and Stokhof. A formula φ is satisfied by an assignment s if and only if there exists an assignment s ′ such that M |= s→s ′ φ; in this case, we will write M |= s φ. We will discuss neither the formal properties of this formalism nor its linguistic applications here. All that is relevant for our purposes is that, according to it, formulas are interpreted as transitions from assignments to assignments, and furthermore that the rule for conjunction allows us to bind occurrences of a variable of the second conjunct to quantifiers occurring in the first one. 9 The similarity between this semantics and our semantics for transition terms should be evident. Hence, it seems natural to ask whether we can adopt, for a suitable variant of Dependence Logic, the following variant of Groenendijk and Stokhof's motto: The meaning of a formula does not lie in its satisfaction conditions, but rather in the team transitions it allows. From this point of view, transition terms are the fundamental objects of our syntax, and formulas can be removed altogether from the language -although, of course, the tests corresponding to literals and dependence formulas should still be available. As in Groenendijk and Stokhof's logic, satisfaction becomes then a derived concept: in brief, a team X can be said to satisfy a term τ if and only if there exists a Y such that τ allows the transition from X to Y , or, in other words, if and only if some set of non-losing outcomes can be reached from the set X of initial positions in the game corresponding to τ . In the next section, we will make use of these intuitions to develop another, terser version of Dependence Logic; and finally, we will discuss some implications of this new version for the further developments and for the possible applications of this interesting logical formalism. Dynamic Dependence Logic We will now develop a formula-free variant of Transition Dependence Logic, along the lines of Groenendijk and Stockhof's Dynamic Predicate Logic. Definition 3.7 (Dynamic Dependence Logic -Syntax) Let Σ be a firstorder signature. The set of all formulas of Dynamic Dependence Logic over Σ is given by the rules τ ::= R t | ¬R t | =(t 1 , . . . , t n ) | ∃v | ∀v | τ ⊗ τ | τ ∩ τ | τ ; τ where, as usual, R ranges over all relation symbols of our signature, t ranges over all tuples of terms of the required lengths, n ranges over N, t 1 . . . t n range over all terms, and v ranges over Var. A formula τ is said to be satisfied by a team X in a model M if and only if there exists a Y such that M |= X→Y τ ; and if this is the case, we will write M |= X τ . 
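To make the transition reading of Dynamic Dependence Logic terms concrete, here is a small brute-force evaluator over finite models. It is a sketch under our own encoding assumptions: dependence atoms are treated as tests that leave the team unchanged, ∃v is read as the team-transition counterpart of the TS-∃ clause recalled earlier (one witness per assignment), and only a few of the constructs from the grammar above are covered. None of the function names come from the paper.

```python
from itertools import product

def ext(s, v, a):
    """Extend / overwrite assignment s (a frozenset of pairs) with value a for variable v."""
    t = dict(s); t[v] = a
    return frozenset(t.items())

def dep_holds(X, xs, y):
    """Dependence-atom test =(xs, y): y is functionally determined by xs in team X."""
    seen = {}
    for s in X:
        d = dict(s)
        key = tuple(d[x] for x in xs)
        if seen.setdefault(key, d[y]) != d[y]:
            return False
    return True

def transitions(term, X, M):
    """Yield teams Y such that X -> Y is allowed by the term (brute-force sketch).
    Terms: ('dep', xs, y) as a test, ('exists', v), ('forall', v), ('seq', t1, t2)."""
    if term[0] == 'dep':
        if dep_holds(X, term[1], term[2]):
            yield X                              # a test leaves the team unchanged
    elif term[0] == 'exists':
        v = term[1]
        members = list(X)
        for choice in product(M, repeat=len(members)):   # one witness per assignment
            yield frozenset(ext(s, v, a) for s, a in zip(members, choice))
    elif term[0] == 'forall':
        v = term[1]
        yield frozenset(ext(s, v, a) for s in X for a in M)
    elif term[0] == 'seq':
        for Z in transitions(term[1], X, M):
            yield from transitions(term[2], Z, M)

def satisfied(term, X, M):
    """M satisfies the term on X iff some transition X -> Y is allowed."""
    return any(True for _ in transitions(term, X, M))

# Example: the term corresponding to "forall x, exists y, test =(x, y)".
X0 = {frozenset({('z', 0)})}
term = ('seq', ('forall', 'x'), ('seq', ('exists', 'y'), ('dep', ['x'], 'y')))
print(satisfied(term, X0, M=[0, 1]))   # True, e.g. by choosing y constantly
```

Satisfaction, as in the definition above, is simply the existence of some allowed transition from the given team.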
It is not difficult to see that Dynamic Dependence Logic is equivalent to Transition Dependence Logic (and, therefore, to Dependence Logic).

Further Work

In this work, we established a connection between a variant of Dynamic Game Logic and Dependence Logic, and we used it as the basis for the development of variants of Dependence Logic in which it is possible to talk directly about transitions from teams to teams. This suggests a new perspective on Dependence Logic and Team Semantics, one which allows us to study them as a special kind of algebra of nondeterministic transitions between relations. One of the main problems that is now open is whether it is possible to axiomatize these algebras, in the same sense in which, in [14], Allen Mann offers an axiomatization of the algebra of trumps corresponding to IF Logic (or, equivalently, to Dependence Logic). Furthermore, we might want to consider different choices of connectives, for example ones related to the theory of database transactions. The investigation of the relationships between the resulting formalisms is a natural continuation of the currently ongoing work on the relationship between various extensions of Dependence Logic, and promises to be of great utility for the further development of this fascinating line of research.

Acknowledgements

The author wishes to thank Johan van Benthem and Jouko Väänänen for a number of useful suggestions and insights. Furthermore, he wishes to thank the reviewers for a number of highly useful suggestions and comments.
Patterns of ranibizumab and aflibercept treatment of central retinal vein occlusion in routine clinical practice in the USA

Background The intravitreal anti-vascular endothelial growth factor treatments ranibizumab and aflibercept have proven efficacy in clinical trials, but their real-world usage in central retinal vein occlusion (CRVO) has not been assessed. We therefore evaluated the treatment patterns of both drugs in a US claims database. Methods The IMS Integrated Data Warehouse was used to identify patients with CRVO in the USA with claims for ranibizumab or aflibercept between 24 September 2012 and 31 March 2014 and with at least 12 months of follow-up. Patients were required to have had no anti-VEGF treatment code for 6 months before index ('treatment-naive'). Mean numbers of injections and of non-injection visits to the treating physician were compared between patients receiving these treatments. Results Patient characteristics were similar for patients receiving ranibizumab (n=206) or aflibercept (n=79) at index. The mean (±SD) numbers of injections received by patients treated with ranibizumab or aflibercept were 4.4±2.8 and 4.7±2.9 (P=0.38), respectively; the total number of patient visits to their treating physician was 7.3±3.7 and 7.0±2.9 (P=0.52), respectively. For patients receiving one or more injections (n=238), the mean interval between injections was 55.1 days (ranibizumab) and 54.2 days (aflibercept; P=0.44). Conclusions Our results suggest that, in routine clinical practice, patients receive a comparable number of injections in the first year of treatment with ranibizumab or aflibercept. This may have implications for commissioning and service development of CRVO care pathways.

Introduction

Macular edema secondary to retinal vein occlusion (RVO) can cause severe visual impairment owing to obstruction of the retinal vasculature, and is the second most common retinal vascular disease. 1,2 Occlusion of the retinal veins causes an increase in retinal capillary pressure, resulting in upregulation of vascular endothelial growth factor (VEGF) expression and a consequent increase in vascular permeability and new vessel proliferation within the iris and anterior chamber. As a result, blood and plasma are discharged into the retina, often causing complications including macular edema and varying degrees of ischemia, potentially leading to severe vision loss. Although occlusion of the central retinal vein (central RVO (CRVO)) occurs less frequently than occlusion of branch veins, it is associated with severe visual outcomes. Anti-VEGF therapy is now the standard of care for CRVO, replacing the previous observation-only approach. [3][4][5] Ranibizumab (Lucentis; Genentech Inc., San Francisco, CA, USA and Novartis Pharma AG, Basel, Switzerland) is a humanized, affinity-matured VEGF antibody fragment that binds to and neutralizes all isoforms of VEGF. Ranibizumab is recommended to be given monthly based on the evidence from clinical trials. 6 The efficacy of ranibizumab for the management of CRVO has been reported in multiple studies including the Randomized Study Comparing Ranibizumab to Sham in Patients with Macular Edema Secondary to CRVO (ROCC) 7 and the Ranibizumab for the Treatment of Macular Edema After CRVO Study (CRUISE); 8,9 intravitreal injections of ranibizumab provided rapid improvement in 6-month visual acuity and macular edema following CRVO, with low rates of adverse events.
7,8 These improvements were largely maintained with a subsequent 6 months of dosing as required (pro re nata (PRN)). 9 Ranibizumab was approved for treatment of macular edema secondary to CRVO by the US Food and Drug Administration (FDA) in June 2010. 10 Aflibercept is a fully human, recombinant fusion protein that targets VEGF-A, VEGF-B, and placental growth factor. Aflibercept binds all isoforms of VEGF-A with high affinity, markedly higher than that of ranibizumab. Like ranibizumab, aflibercept is recommended in the USA to be given as monthly intravitreal injections. 11 Patients should subsequently be monitored regularly, and treatment should be resumed if visual outcomes deteriorate. Two recent clinical trials (VEGF Trap-Eye: Investigation of Efficacy and Safety in CRVO (GALILEO) 12,13 and VEGF Trap-Eye for macular edema secondary to CRVO (COPERNICUS) 14,15 ) have shown that monthly intravitreal aflibercept treatment was well tolerated and improved visual acuity after 6 months significantly more than sham injections; these improvements were maintained with subsequent monthly monitoring and PRN dosing. 12 Aflibercept was approved for the treatment of macular edema secondary to CRVO in September 2012. 16 Despite promising results from clinical trials as described above, real-world usage of aflibercept and ranibizumab in CRVO has not yet been studied. This study therefore aimed to assess the treatment patterns of ranibizumab and aflibercept for the management of macular edema secondary to CRVO in routine clinical practice in the USA using a large, patient-level, physician-entered claims database.

Materials and methods

This retrospective study was based on the analysis of US physician-level claims data from the Integrated Data Warehouse (IDW; managed by IMS Health, Plymouth Meeting, PA, USA), a claims database that encompasses approximately 1 billion professional fee claims per year, representing approximately 80% of practicing eye care specialists (including over 13 000 ophthalmologists) and covering all 50 states. Approximately 95% of claims submitted for payment from these sources are available for analysis within 3 weeks. The study included adult patients with a first medical claim registered in the IDW with a procedure code for intravitreal injection of ranibizumab or aflibercept between 24 September 2012 and 31 March 2014, and with a concomitant diagnosis of CRVO (recorded as a code from the International Classification of Disease 9th Revision Clinical Modification; ICD-9-CM 362.35); this first claim was defined as the patient's index date. Patients were required to have at least 12 months of follow-up data (post index date) within this study period and a minimum of 6 months of available data in the IDW before the index date. The physician administering the index medication was required to have consistently submitted medical claims to the IDW during the 6 months before the index date and during the follow-up period ('physician stability' criteria). Patients were excluded from the analysis if their records indicated that they had received an anti-VEGF injection during the 6 months before the index date (ensuring treatment 'naivety'), or if they received more than one anti-VEGF drug within 12 months after the index date (to avoid the potential confound of a patient being included in both groups). The latter criterion was relaxed in the sensitivity analysis to assess the number of any anti-VEGF injections received by patients starting on ranibizumab and aflibercept.
The primary analysis assessed the number of injections received, non-injection visits made and total visits (i.e., the sum of injection and non-injection visits) made by treatment-naive patients (defined as having received no anti-VEGF treatment claim in the 6 months before the index date) who were treated continuously (i.e., received no other anti-VEGF therapy) with their index therapy for at least 12 months (365 days). Mean dosing intervals (number of days between injections) were determined for the first year of therapy for patients starting on either treatment and receiving at least two injections. Differences between the treatment patterns of ranibizumab and aflibercept were assessed, and reported P-values were adjusted for baseline characteristics. Negative binomial regression was used to compare the effect of patient characteristics on injection and visit estimates for those treated continuously with ranibizumab and aflibercept for at least 12 months. A generalized estimating equation (GEE) model applied at the patient level was used to compare the effect of patient characteristics on dosing interval estimates for patients having received two or more injections. The nesting assumption is reviewed in the Discussion section. Finally, an autocorrelation of order 1 was used for within-cluster correlation. Several sensitivity analyses were performed: we assessed the mean number of injections, non-injection visits, and total visits including anti-VEGFs other than that given at index ('any anti-VEGF'); and we assessed the first 6 months of data to see if there were between-group differences. We also assessed the baseline characteristics of the patients receiving only one injection compared with those receiving multiple injections, to assess whether this patient subset could confound the analyses. For continuous variables, between-group statistical differences were assessed using unpaired Student's t-tests, with P < 0.05 used to define a significant difference. Categorical variables were assessed using Fisher's exact test.

Results

In total, 285 patients were treated continuously with their index drug over 12 months (ranibizumab, n = 206; aflibercept, n = 79; Figure 1). The two treatment groups were comparable in terms of demographics and type of health plan, and almost all patients received treatment from an ophthalmologist (including retinal specialists) (Table 1). The majority of patients in both groups (ranibizumab, 57%; aflibercept, 53%) were female, and their median (interquartile range) ages were 74.0 (67.0-81.0) years and 76.0 (70.0-81.0) years, respectively. Cancer, cardiovascular disease, chronic pulmonary disease, and diabetes mellitus were the only comorbidities listed in the Charlson-Deyo comorbidity index (CCI) 17 that occurred in >5% of patients in either group. For patients treated continuously with ranibizumab or aflibercept, the mean ± SD number of injection visits during the first 12 months of treatment (based on a negative binomial model adjusting for characteristics; Supplementary Table 1) was 4.4±2.8 and 4.7±2.9 (P = 0.38), respectively, and that of non-injection visits was 2.8±2.6 and 2.2±2.1 (P = 0.06), respectively (Figure 2). The total number of visits to the treating physician in the 12 months after the index date was 7.3±3.7 and 7.0±2.9 (P = 0.52) for ranibizumab and aflibercept, respectively. Patients received an injection on the majority of their visits to their prescribing physician (ranibizumab, 63%±26%; aflibercept, 67%±27%).
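The two models named in the Methods (a negative binomial regression for adjusted 12-month counts and a patient-level GEE for dosing intervals) can be sketched in Python with statsmodels as follows. This is an illustrative reconstruction on synthetic data, not the authors' analysis code: all column names and the simulated values are placeholders, and an exchangeable working correlation is used in the GEE sketch for simplicity, whereas the study specified a first-order autoregressive structure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Hypothetical analysis-ready cohort: one row per patient.
cohort = pd.DataFrame({
    "injections_12m": rng.poisson(4.5, n),
    "drug": rng.choice(["ranibizumab", "aflibercept"], n),
    "age": rng.normal(75, 7, n).round(),
    "sex": rng.choice(["F", "M"], n),
    "cci": rng.integers(0, 5, n),
})

# Negative binomial regression of 12-month injection counts on index drug,
# adjusted for baseline characteristics.
nb_model = smf.glm(
    "injections_12m ~ drug + age + sex + cci",
    data=cohort,
    family=sm.families.NegativeBinomial(),
).fit()
print(nb_model.summary())

# Patient-level GEE for intervals between consecutive injections.
intervals = pd.DataFrame({
    "patient_id": np.repeat(np.arange(100), 3),
    "interval_days": rng.normal(55, 15, 300),
    "drug": np.repeat(rng.choice(["ranibizumab", "aflibercept"], 100), 3),
})
gee_model = smf.gee(
    "interval_days ~ drug",
    groups="patient_id",
    data=intervals,
    cov_struct=sm.cov_struct.Exchangeable(),  # the study used an AR(1) within-patient structure
    family=sm.families.Gaussian(),
).fit()
print(gee_model.summary())
```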
For patients receiving one injection or more (n = 238), the mean interval between injections was 55.1 days for patients treated continuously with ranibizumab and 54.2 days for patients treated continuously with aflibercept (P = 0.44; Table 2). Over half of the patients in each group had four or more injections of their index drug within the first year of treatment (ranibizumab, 55.3%; aflibercept, 60.8%; Figure 3). Over 40% of all patients received four doses in the first 6 months of therapy with their index treatment (ranibizumab, 40.3%; aflibercept, 43.0%). Approximately 20% of all patients received five or more doses in the first 6 months of therapy with their index treatment (ranibizumab, 22.3%; aflibercept, 19.0%). When the inclusion criteria were extended to include any additional anti-VEGF treatment claims during follow-up (ranibizumab, n = 261; aflibercept, n = 93), the numbers of all anti-VEGF injections received in the first 12 months of follow-up were 4.7±2.7 and 4.8±2.9 (P = 0.59) for patients starting on ranibizumab and aflibercept, respectively (when adjusting for baseline characteristics). The corresponding numbers of non-injection visits were 3.0±2.6 and 2.3±2.2 (P < 0.05) and the total numbers of visits were 7.6±3.6 and 7.1±2.9 (P = 0.25), respectively. Of patients receiving only one injection during follow-up (ranibizumab, n = 35; aflibercept, n = 12), 78.7% made more than one subsequent non-injection visit to their physician (ranibizumab, 77.1%; aflibercept, 83.3%; Figure 4). When comparing this subset of 47 patients who received only one injection during follow-up with those who received two or more injections, these results were not significantly associated with differences in sex, age, CCI, region, payer type, or the specialty of the prescribing physician, although CCI did approach significance (P = 0.05). The majority of the 285 patients included in the primary analysis received treatment in only one eye throughout follow-up, with only 3.5% of patients receiving bilateral treatment at any point during follow-up (ranibizumab, 3.9%; aflibercept, 2.5%). When the inclusion criteria were relaxed to include any anti-VEGF treatment received during follow-up (n = 354), 4.5% of patients were observed to have received bilateral treatment at some point during follow-up (ranibizumab, 4.6%; aflibercept, 4.3%). Of the 285 patients included in this study, none were found to have a claim relating to glaucoma associated with vascular disorders (ICD-9 365.63), diabetic macular edema (DME; ICD-9 362.07) or neovascular age-related macular degeneration (nAMD; ICD-9 362.50, 362.51, 362.52) in the 6 months before index. One patient had a recorded claim for DME within the first year after index; claims relating to glaucoma associated with vascular disorders and nAMD were not observed during follow-up.

Discussion

This patient-level claims database analysis is, to our knowledge, the first to directly compare the patterns of ranibizumab and aflibercept use when given for treatment of CRVO. The main finding is that the number of injections received and the total number of visits made by patients continuously treated with their index therapy were not significantly different regardless of whether patients started treatment with ranibizumab or aflibercept. There were no discernible demographic differences between patients in the ranibizumab and aflibercept groups. In the USA, both of the anti-VEGF treatments assessed here are recommended to be given as monthly intravitreal injections for the management of macular edema secondary to CRVO.
However, the presented results suggest that very few patients receive injections this regularly throughout the first year of treatment. Potentially, this is due at least in part to improved visual outcomes in the patients receiving these anti-VEGF treatments, as has been seen in clinical trials. However, our time sensitivity analysis shows that even in the first 6 months of treatment most patients do not receive monthly anti-VEGF treatment as recommended by the labels of ranibizumab and aflibercept. Furthermore, we have shown that the likelihood of bilateral treatment is low; less than 5% of patients in both groups were treated bilaterally in the year after index, compared with reports that approximately 10% of those with unilateral CRVO will develop the condition bilaterally. 1 The similarity of the injections given and total visits made by patients in the ranibizumab and aflibercept groups suggests that physicians may be using these treatments similarly in routine clinical practice. These results are in alignment with those observed in other ophthalmic indications, where the numbers of injections and total visits made were similar whether patients received ranibizumab or aflibercept. Another recent US claims database study in patients with nAMD reported that 5.8 (ranibizumab) and 5.5 (aflibercept) injections were given annually. 18 Furthermore, when the results were extended to include the number of any intravitreal anti-VEGF injections received during follow-up, the mean number of injections received in the 12 months after index was similar between treatment groups and also similar to that received by patients receiving continuous treatment with their index drug, suggesting that physicians may be using the two drugs interchangeably. Despite previous reports that a significant proportion of patients with CRVO subsequently experience neovascular glaucoma, 19 we found no reports of glaucoma associated with vascular disorders in the 6 months before index or during follow-up. Other comorbidities such as DME and nAMD were also rare during the study period, with only one observation of a DME code during the follow-up period. The absence of disease overlap indicates that the treatment patterns are representative of patients with unambiguously diagnosed CRVO. Given that current market prices for ranibizumab and aflibercept are similar (US wholesale acquisition costs per 0.5 mg vial: ranibizumab, $1950; aflibercept, $1850) 20 and that injections constitute the majority of the treatment costs associated with these treatments, the observation that the number of injections administered and physician visits for each treatment is very similar suggests that budgetary considerations for both treatments are likely to be similar in routine clinical practice. Therefore, the findings of this study represent important considerations for payers when evaluating the cost effectiveness of these treatments in the real world. These study findings also highlight that the way new therapies are used in practice may differ from recommendations based on the clinical trials, and emphasize the importance of this type of post-approval observational study. Owing to the relative recency of aflibercept approval and the number of inclusion/exclusion criteria required to produce robust results, a large database with rapid upload of data was essential in order to generate sufficient data for analysis.
The IDW is one of the largest claims-based databases in the USA, and 95% of claims are available for analysis within 3 weeks of submission. We believe this to be the largest observational study of its type to directly compare ranibizumab and aflibercept. Our sensitivity analyses support the main findings and suggest that the comparable observations made between these two treatment groups are not confounded by differences between groups in the first 6 months of treatment; of the 55-60% of patients receiving four or more injections in the first year after index treatment, over 40% in each group received these injections in the first 6 months of treatment. As no visual acuity data are available in the claims database, it is not possible to conclude that patients who only received one injection did not need additional ones. However, the fact that the vast majority of those patients had one or more follow-up visits post injection is reassuring. There are several limitations in this study. Like any observational study comparing two treatments, patient inclusion could be subject to selection bias, particularly if there are differences between patients receiving ranibizumab and those receiving aflibercept. However, the similarities of the available baseline characteristics between both groups of patients suggest that this is not the case. Physicians' approaches to treating CRVO could differ from each other. One potential way to address physician-specific effects would have been to run a GEE model with multiple levels of cluster nesting (physician and patient). However, in this data set, 147 physicians injected both ranibizumab and aflibercept (average number of patients per physician: 1.9); 118 (80.3%) of these physicians injected only one or two patients. Therefore, there were not sufficient data to run a meaningful analysis with two levels of nesting. As the aflibercept treatment pool grows, the analysis may become more meaningful. In addition, this study involves a relatively small sample size after application of the inclusion criteria, especially for aflibercept. However, the similarity of the injection and total visit results provides no suggestion that our findings are underpowered, and even with extended analyses the results are unlikely to demonstrate any clinically meaningful differences in injections or total visits. This study uses physician-entered claims codes to assess treatment patterns; as such, a risk of misclassification is inevitable, although we believe any such misclassification applies to both treatment groups equally. For example, the lack of neovascular glaucoma during the study period is unlikely to mean that no such events occurred, and more likely reflects that the diagnosis was unrecorded or missed. Last, we were unable to assess the effect of injection frequency on visual outcomes. This is beyond the scope of this type of study and requires further investigation. In conclusion, this study is the first to directly compare treatment patterns of ranibizumab and aflibercept administered for the management of CRVO in routine clinical practice in the USA. The results suggest that these two therapies are not used as recommended by the labels in the USA and that patients receive similar numbers of injections regardless of the treatment with which they initiate therapy. Further studies are warranted to link these findings to visual outcomes and to evaluate whether the treatment patterns observed in this US study represent those in other countries. 
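As an illustration of the adjustment and clustering issues discussed above, the sketch below shows how a first-year injection-count comparison adjusted for baseline characteristics could be specified as a single-level GEE with clustering on the prescribing physician. It is only a schematic example under assumed variable names (n_injections, index_drug, cci, physician_id) and an assumed input file; it is not the study's actual code and does not reproduce the IDW data structure.

```python
# Schematic: adjusted comparison of first-year anti-VEGF injection counts between
# index drugs, clustering on the prescribing physician (one level of nesting).
# All column names and the input file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per patient: first-year injection count, index drug and baseline covariates.
df = pd.read_csv("crvo_claims_cohort.csv")

model = smf.gee(
    "n_injections ~ C(index_drug) + age + C(sex) + cci + C(region) + C(payer_type)",
    groups="physician_id",                      # cluster on prescribing physician
    data=df,
    family=sm.families.Poisson(),               # count outcome
    cov_struct=sm.cov_struct.Exchangeable(),    # working within-physician correlation
)
result = model.fit()
print(result.summary())
```

With most physicians contributing only one or two patients, as reported above, the within-physician correlation in such a model is weakly identified, which is consistent with the decision not to attempt a second level of nesting.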
Summary
What was known before
- Anti-VEGF treatments ranibizumab and aflibercept have proven efficacy in clinical trials.
- The patterns of usage of these treatments in the real world are not adequately understood.
What this study adds
- Patients receiving ranibizumab or aflibercept treatment for management of macular edema secondary to retinal vein occlusion receive a similar number of injections, and make a similar number of visits to their treating physician.
Conflict of interest Andrew Lotery has attended scientific advisory boards and received educational grants from Novartis Pharma AG, Basel, Switzerland and Bayer HealthCare AG, Leverkusen, Germany. Stephane Regnier is an employee of, and owns shares in, Novartis Pharma AG, Basel, Switzerland.
2016-05-12T22:15:10.714Z
2015-01-09T00:00:00.000
{ "year": 2015, "sha1": "959eee2e23a4eb59c73f726b74ad616a8810474a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/eye2014308.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "959eee2e23a4eb59c73f726b74ad616a8810474a", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
254492956
pes2o/s2orc
v3-fos-license
A comparison of banks and real estate intermediaries as house sellers The foreclosure crisis associated with the banking crisis transformed banks in the hardest-hit countries into real estate brokers. The main novelty of this paper is to study banks as sellers of their own foreclosed properties and compare banks’ sales outcomes with those of traditional agents in the real estate market. We compare the list price, selling price, time on market and price discount of traditional real estate companies (TRECs) and bank-owned real estate companies (BRECs). We find evidence of a higher selling price, higher list price and longer time on market (TOM) for BRECs than for TRECs. Our findings are consistent with BRECs displaying greater patience as well as lower risk aversion. However, these explanations are not enough to fully account for the magnitudes of the coefficients. The empirical estimates suggest that information in the housing market may also be a source of distortions. In fact, the main aim of the sale varies depending on the incentives of the company. BREC sellers are banks that own the properties for sale, so their incentives are to maximize selling prices to reduce the loss charged to their annual results, while TRECs seek to minimize the TOM. Introduction Real estate assets are heterogeneous, displaying a greater variety of attributes than most other goods. Consequently, obtaining and conveying credible information about a property's characteristics are crucial for the success of a real estate transaction. Real estate brokers may benefit buyers by providing information about properties and neighbourhoods in areas where the buyer may have little familiarity, providing advertising services, setting more accurate property list prices and enhancing the value of properties by improving their presentation. An additional benefit is that brokers may help market participants obtain and convey accurate and credible information. In summary, brokers offer sellers potentially useful knowledge and expertise (as well as convenience by showing and advertising the house and helping with paperwork). However, because the relationship between the homeowner and the broker resembles a classical principal-agent problem, brokers may not deploy their services in ways that promote sellers' interests (Bernheim & Meer, 2008). Brokers have strong incentives to sell houses quickly, even at a substantial price reduction (Levitt & Syverson, 2008). The economic literature has extensively analysed the impact of real estate brokers on the outcomes of housing sales. The consensus seems to be that, on average, the use of a real estate broker tends to decrease the number of days that a property remains on the market (Baryla & Ztanpano, 1995;Bernheim & Meer, 2008;Elder et al., 1999;Hendel et al., 2009;Levitt & Syverson, 2008;Rutherford et al., 2005). With respect to price, some authors (Benjamin & Chinloy, 2000;Bernheim & Meer, 2008;Elder et al., 2000;Hendel et al., 2009) find that real estate brokers have no impact on transaction prices. On the other hand, Yavas and Yang (1995) and Violand and Simon (2007) find that brokers affect transaction prices for certain combinations of house characteristics but not for others. Agarwal et al., (2019) shows that agents use information advantages and bought their own houses at prices that are 2.54% lower than those of comparable houses bought by other buyers. 
Hayunga and Munneke (2019) also demonstrate that agents (and investors) hold bargaining power relative to individuals in their purchases. Rutherford et al. (2005) and Levitt and Syverson (2008) find that brokers obtain a higher price for their own properties (2.6-7%), but to do so, they must leave the properties on the market longer. Bernheim and Meer (2013) also present evidence that houses on the Stanford University campus sold directly by the owner fetch a higher price. Gautier et al. (2017) show that flat-fee agents sell properties more rapidly, and the average price is 2.7% higher than that secured by traditional agents. Therefore, the profits of traditional brokers are at least partly driven by rents rather than performance. Finally, Hendel et al. (2009) compare the outcomes of sellers who list their home on a for-sale-by-owner (FSBO) website versus those who use an agent and a multiple listing service (MLS). The MLS shortens the time to sale, but the two services deliver the same price. Thus, real estate markets provide a particularly fertile testing ground for examining the impact of brokers and their information on transaction outcomes. The main novelties of this paper are the study of banks as sellers of their own foreclosed properties and the comparison with other traditional agents in the real estate market. The paper extends the previous literature to analyse a new type of real estate broker that appeared in countries strongly affected by the 2008 financial crisis. The foreclosure crisis associated with the banking crisis has transformed the banks of the hardest-hit countries into real estate brokers. Spain is a clear example of banks adding intermediation in the real estate market to their traditional activities. The very lax mortgage standards of Spanish banks during the expansion of 2001-2007 led to excessive exposure to real estate assets. Following the collapse of the country's property market in 2008, Spanish banks foreclosed many properties. To clean up their balance sheets of property assets, banks created real estate brokers such as Altamira (Santander Bank), Solvia (Banco Sabadell), Bankimia (BBVA) and CXI (Catalunya Caixa), among many others. Thus, from 2009, banks and traditional real estate brokers started competing in the housing market. In this paper, we compare the outcomes (list price, selling price, time on market and price cut) of traditional real estate companies (TRECs) and real estate brokers belonging to banks (BRECs). Both types of company competed in the same housing market. We found evidence of a higher selling price, higher list price and longer time on market (TOM) for BRECs than for TRECs. Our findings are consistent with the greater patience as well as lower risk aversion of BRECs. However, these explanations are not enough to fully account for the magnitudes of the coefficients. The empirical estimates suggest that information in the housing market can also be a source of distortions. In fact, the main aim of the sale varies depending on the incentives of the company. BREC sellers are banks that own dwellings, so their incentive is to maximize selling prices to reduce the loss charged to the annual results, while TRECs aim to minimize the TOM. Additionally, we present evidence for possible explanations from behavioural economics. In particular, higher list prices observed for BRECs than for TRECs are consistent with results from behavioural economics (Kahneman et al., 1986a, 1986b; Thaler, 2015). 
BRECs prefer to set a high list price rather than to reduce list prices. This strategy produces a feeling of fairness in the buyer, who, unaware of the reference price, believes she has obtained a good deal. Hence, this strategy allows BRECs to maintain the loyalty of future customers while maximizing profit (which is consistent with the main incentive of BRECs). Additionally, this strategic behaviour is consistent with the anchor effect and prospect theory. The remainder of the paper is structured as follows. We start by contextualizing the housing market and housing finance in Spain, explaining the role of BRECs in this market. In Sect. 3, we outline our theoretical framework and the hypotheses to test. Section 4 describes our dataset. In this section, we show differences in means for TRECs and BRECs in terms of not only housing characteristics but also outcomes (selling price, list price, price cut and TOM). Then, we present the empirical estimates. The subsequent section describes possible explanations of the results obtained. Finally, we end by summarizing the arguments and presenting some concluding remarks. Spanish credit and housing bubble In the years prior to the great financial crisis, Spain experienced one of the most important housing booms among developed economies. This housing boom was one of the main engines of economic growth in Spain. During this period, more dwellings were built in Spain than in Germany, the U.K., France and Italy combined. According to official statistics from the Department of Public Works, housing production reached 860,000 dwellings in 2006. The average number of originated mortgages was more than 1.1 million per year. These figures are quite remarkable if we consider that the number of households in Spain during that period was 15.5 million. Greater competitive pressure implied that managers of financial institutions could only increase profits drastically by originating a large number of new mortgages. Excessive dependence on the real estate industry, together with a softening of the credit standards (Akin et al., 2014), explains why the economic and financial crisis hit Spain harder than other developed economies. Consequently, €61,495 million was needed to rescue the country's financial system. During this crisis, one of the main problems facing financial institutions was that their balance sheets held not only risky mortgages but also properties at inflated prices (Akin et al., 2014). The majority of BRECs' housing stock came from foreclosures (in the case of properties from families) or bankruptcy (in the case of properties from building companies). Figure 1 shows the evolution of the gross value of foreclosed real estate assets owned by Spanish banks. From 2009 to the second half of 2011, the gross value increased. The reduction in 2012 and 2013 was due to the transfer to SAREB (the bad bank of the Spanish government) of the real estate assets owned by financial institutions that were rescued by the public sector. To illustrate, using a value of property assets of €80,000 million (the average for the whole period), we estimate that the number of dwellings on the balance sheets of financial institutions at the end of 2013 was 245,000 units. This figure represents 28.8% of the inventory of unsold new housing. 
In this scenario, financial institutions began to operate as real estate broker companies (bank real estate companies or BRECs) and compete with traditional real estate companies (traditional real estate companies or TRECs) to sell their housing stock. Those new companies were responsible for a large proportion of housing transactions. Our calculations indicate that the real estate units sold by banks or bank-owned corporations amount to approximately 23% of all transactions in the Spanish real estate market. Levitt and Syverson (2008) found that agent-owned homes are sold for more and remain on the market longer than clients' homes. They show that this result cannot be sufficiently explained within competitive markets without either informational frictions or agency problems. Competitive market explanations include unobserved differences between homes, greater patience and less risk aversion. In this sense, agent distortions through either eluding effort or exploiting an informational advantage are proposed to explain the results. Of the two explanations, Levitt and Syverson (2008) lean towards informational asymmetries. Theoretical framework We can compare the behaviour of TRECs and BRECs using a similar approach to that used by Levitt and Syverson (2008) and Hendel et al. (2009). Traditional real estate agents receive only a small share of the incremental profit when a house sells for a higher value. However, BRECs obtain 100% of the incremental profit of the sale, so they have an incentive to maximize price, while TRECs have an incentive to sell the house quickly. In our case, we observe a lower selling price and TOM for TRECs than for BRECs. The higher risk aversion of TREC agents and the greater patience of BRECs can also explain these results. There is no evidence of a larger shirking of effort by TRECs with respect to BRECs. Using the previous arguments, we can define two hypotheses: Claim 1 For a given dwelling, we should observe a shorter time to sale in TRECs. As mentioned, overall, the economic literature (Baryla & Zumpano, 1995;Elder et al., 1999;Rutherford et al., 2005;Bernheim & Meer, 2008;Levitt & Syverson, 2008;Hendel et al., 2009) concludes that TRECs have an incentive to shorten the TOM. Agency problems (Levitt & Syverson, 2008) are therefore the source of this claim. BRECs' greater patience, given that selling homes is not their core business, and lower risk aversion, given their portfolios are more diversified than those of TRECs, reinforce this claim. As Hendel et al. (2009) point out, a patient seller obtains a higher price. However, changes in capital requirements and provisions (funds to cover eventual future obligations) on foreclosure properties may affect BRECs' incentives to wait for a high transaction price. A BREC's decision on when to sell a property must consider that keeping real estate assets on its balance sheet produces several costs not borne by traditional real estate intermediaries. In particular, there is a time-dependent provision that decreases the net value of such properties with a charge to bank profits. This charge is a proportion of the accounting value of the real estate property. The alternative choice is to sell the property at the market price, which implies charging the loss against profits. 
Assuming that the price at which the bank secures property ownership is the same as the market price at that time, the bank chooses to sell after n periods when the market price is higher than the value of the property on its balance sheet after the provision:
P0(1 − g)^n ≥ P0(1 − dn),
where n is the number of time periods, g is the annual rate of price reduction of residential properties and d is the annual rate of the provision. This relationship implies that, as long as g > 1 − exp(ln(1 − dn)/n), it is preferable in profit and loss terms to keep the property on the balance sheet and pay the provision rather than sell at the market price (a numerical sketch of this comparison is given below). During the period of analysis, the Spanish banking regulation imposed a 10% yearly charge on the original value of properties repossessed by banks that were included on their balance sheets. The schedule of provisions increased only during the first three years; afterwards, the value was set to 70% of the initial property value. Therefore, the optimal n to sell the property increased quite rapidly when the expected reduction in prices increased above 10%. This schedule of provisions was quite different from the situation in the US. Soon after the beginning of the crisis, US banks started offering discounts on foreclosed properties. This was not the case in the Spanish banking system because it was more convenient, in terms of profit and loss (P&L), to pay the provision than to sell at a large discount, and banks expected housing prices to recover soon. It was not until 2015-2018 that banks decided to sell their stock of foreclosed properties to investment funds and hedge funds. Claim 2 For a given dwelling, we should observe higher selling prices among BRECs. The literature on incentives would predict such price differences between BRECs and TRECs. The literature on search and bargaining points out that sellers have the ability to set high asking prices and obtain high selling prices if they are willing to be patient and bargain hard (Genesove & Mayer, 2001; Yavas & Yang, 1995). The effect on price discounts is difficult to predict. Thus, agent-based theory predicts that a higher selling price (for a particular BREC in our case) can be obtained either by raising the list price and setting the price discount at an amount equivalent to (or slightly higher than) the increase or by setting the same list price as that of a TREC and offering a smaller price discount after negotiation. In this case, we use the framework of behavioural economics (Kahneman et al., 1986a, 1986b; Thaler, 2015) to interpret the results. Goods that are bought infrequently and whose quality is difficult to assess are usually marketed by sellers as a "good deal". Sellers have an incentive to manipulate the perceived reference price and create the illusion of a "good deal". Homes fulfil both characteristics, so they can be marketed in this way. We have explained why BRECs are more prone to using this strategy than TRECs and other loss-averse agents. However, both BRECs and TRECs can use it. That said, the strategy has limitations. For example, a very high list price relative to observable characteristics (known in the literature as a higher degree of overpricing) discourages buyers from further investigating the property (Guren, 2018; Ngai & Sheedy, 2013). However, BRECs remain more likely to use this marketing strategy because, first, (linking again with incentive theory) the basic objective of BRECs is profit, while TRECs prioritize the TOM, and second, a "good deal" gives the buyer an impression of fairness, which is more important for BRECs than for TRECs. Buyers seem to appreciate a BREC's effort to be "fair". 
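The hold-versus-sell comparison above can be made concrete with a small numerical sketch. The parameter values below (a flat 10% annual provision on the original value, capped so that the book value never falls below 70% of it, and a range of assumed annual price declines) are illustrative assumptions based on the description in the text, not the exact regulatory schedule.

```python
# Sketch: book value of a repossessed property after provisions versus its expected
# market value if sold after n years. All parameters are illustrative assumptions.

def book_value(p0: float, n: int, d: float = 0.10, cap: float = 0.70) -> float:
    """Original value p0 minus a linear annual provision d, floored at cap * p0."""
    return max(p0 * (1 - d * n), p0 * cap)

def market_value(p0: float, n: int, g: float) -> float:
    """Expected market price after n years if prices fall at annual rate g."""
    return p0 * (1 - g) ** n

p0 = 100.0                                   # initial (repossession/appraisal) value
for g in (0.05, 0.10, 0.15):                 # assumed annual price declines
    for n in range(1, 6):
        mv, bv = market_value(p0, n, g), book_value(p0, n)
        decision = "hold and provision" if mv < bv else "sell"
        print(f"g={g:.0%} n={n}: market={mv:5.1f} book={bv:5.1f} -> {decision}")
```

For a fixed horizon n, the break-even decline rate in this sketch is g = 1 − exp(ln(1 − dn)/n), the threshold stated above; for a one-year horizon it equals the provision rate d, which is consistent with the observation that Spanish banks preferred provisioning once expected price declines exceeded 10%.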
Keeping customers happy by seeming fair is an especially high priority for companies that plan to sell to the same customers for a long time. Since banks may provide the mortgage to finance the property and use it as a cross-selling product, the probability of doing future business with the buyer is higher for BRECs than for TRECs. Therefore, it is important for banks to give customers and the public the impression that their transactions are fair (especially given that banks were the basic culprit behind the financial crisis and their morality was heavily questioned). In this sense, banks have more to lose if buyers feel that they act unfairly. After all, since selling properties is not the core business of BRECs, higher (lower) selling prices can be compensated by cheaper (more expensive) loans to buyers to adjust their balance sheets. We should factor in an additional element. BRECs use appraisal values as a guide to set their list prices. However, it is now well known that houses displayed a high level of overappraisal in Spain during the real estate expansion (Akin et al., 2014). This phenomenon implies that banks should set high list prices. This expectation is consistent with another theoretical explanation from behavioural economics: the anchor effect (the appraisal price is the reference price for BRECs). Finally, prospect theory (Kahneman & Tversky, 1979) also predicts a higher list price. According to this theory, loss-averse agents consider the original purchase as a reference point. Based on mental accounting and the associated need to break even, sellers can set a higher list price, especially in bust periods. In line with the previous arguments, we can develop two new hypotheses: Claim 3 For a given dwelling, we should observe higher list prices among BRECs than among TRECs. Claim 4 For a given dwelling and list price, the discount offered by BRECs will be either equal to or higher than that offered by TRECs. If BRECs' price discount is higher, it will not be enough to compensate for their higher list price compared to that of TRECs. Data We use two datasets. On the one hand, we use a dataset obtained from a housing market intermediary with franchisees in most Spanish provinces. This real estate company also possesses its own mortgage brokerage branch. For instance, this company accounted for 4% of total real estate sales in Spain during 2012. This percentage is the highest market share of any intermediary operating in the Spanish residential real estate market since most transactions in this market still involve a direct negotiation between the homeowner and the buyer. Our data were not collected with the objective of being representative of the entire population of houses sold during the sample period. The intermediaries that provided information are not uniformly represented in Spain (there are more branches in large cities and metropolitan areas around large cities), which does not seem to affect the mean prices. The table below shows a comparison of the appraisal prices of our dataset with those obtained from the Department of Public Works (DPW) for cities where the firm has a very large sample. The comparison corresponds to the second half of 2012. Appraisal prices are the only variable that we can compare with a population variable (in fact, the data of the DPW are not the population of appraisals, but they are quite close). 
The table shows a very small deviation in appraisal prices between our sample and the population (or close to the population) of appraisals compiled by the DPW. The difference is only 3.2% for the average of these cities. Therefore, and given that we are not claiming that our sample is fully representative of the population of all transacted properties of the years under study, we believe there is no reason to expect that the differences would be much larger in places not included in the table (except for sampling variability) (Table 1). On the other hand, we use a dataset from a real estate company belonging to a bank holding 3.4% of the total housing stock held by financial institutions. In fact, this figure is calculated after transferring some of the properties to the SAREB. The SAREB is what is colloquially called a 'bad bank', a special purpose entity backed by the Spanish government to manage and disinvest foreclosure properties and delinquent mortgages that were transferred to it from the four nationalized Spanish financial institutions. Our data belong to one of these financial institutions. The data represent the situation prior to the transfer to the SAREB. At that time, the company was holding more than 9% of the total housing stock held by financial institutions. As with TRECs, although not uniformly represented in Spain, the mean values of turnover and net value are similar to those of all properties held by BRECs. To show this fact, we built Table 2 using information from the financial reports of financial institutions. Our institution is BREC 7, and the mean price is very close to the mean price of all housing stock of BRECs in Spain. The data cover the period from 26/01/2010 to 07/03/2013. We consider only properties that are actually on the market since it takes time for banks to complete the process of actual repossession until the property is ready to be marketed and sold. Adding data from the two databases, we have 7,513 dwellings sold: 951 provided by the bank-owned real estate company and 6,562 provided by the traditional real estate company. For the whole sample, we have information on housing characteristics (size, rooms, bathrooms, availability of a lift) and the transaction (list price, selling price, time on market). In Table 3, we compare the properties sold by the traditional real estate company and the bank-owned real estate company. A key question is whether these properties are comparable. The first two columns present the mean and standard deviation. The last two columns present the differences in means and the t-statistics of the differences. The differences are small but, because of the reasonably large sample sizes, statistically significant. TREC properties generally have more rooms than BREC properties, and a higher proportion has a lift. These differences in the characteristics of the real estate properties sold by BRECs and TRECs require homogenization of their quality. The traditional method used to compare heterogeneous real estate properties is hedonic regression, which is the method we adopt. We now explore the differences in outcomes. In Table 3, we also present the differences in means and t-statistics for the basic outcomes (selling price in thousands of euros, list price in thousands of euros, price discount and time on market) of properties sold through TRECs and BRECs. The results suggest that there is a large positive premium in the selling price (41.2%) for properties sold by BRECs. 
This premium is similar to that observed in list prices (42.2%). From this slightly higher premium in the list price than in the selling price, we can infer that brokers from BRECs use the marketing strategy of increasing the perceived reference price. A fair deal is prioritized by BRECs. In this respect, this upwards bias may also be explained by the use of the appraisal price as a guide to set list prices. Akin et al. (2014) observed higher upwards bias in the case of commercial banks and FROB-owned institutions. Consequently, a slightly larger price discount (lower selling price to list price ratio) is also observed in properties sold by BRECs (14%) with respect to the price observed in properties sold by TRECs (12%). Similarly, the TOM for properties sold by BRECs is 38 days longer than that for properties sold by TRECs. Method The previous results seem to highlight that properties sold by BRECs have a higher TOM, list price, selling price and price discount. However, the numbers in Table 3 suggest some differences in the observed characteristics and locations of houses sold by TRECs and BRECs. If the houses sold by BRECs have more attractive characteristics, then the previous evidence also captures the impact of these features rather than the effect of company type. Additionally, TRECs and BRECs have a different market share in many areas of the country (in fact, BREC properties were located in areas with a high proportion of the housing stock). To control for differences in houses, we construct a hedonic model of prices (for the selling and list price outcomes in logs). For this purpose, we add surface, number of rooms, number of bathrooms, and the availability of a lift as explanatory variables in the model (the characteristics of the house displayed in Table 1). Among the controls, we include the following: postal code (to control for location 1 ), monthly time dummies, two dummy variables for properties sold in Barcelona or Madrid and the percentage of the total housing stock of the city to which the property belongs. The TOM is also a control (see footnote 2 for details after estimation of the baseline model). Controlling for location and time dummies permits us to estimate a fixed effects model with fixed effects of time and location. In doing so, we exploit within-group variation over time since we control for the average differences across postal codes and year (which is a very granular group) in any observable or unobservable predictors, such as differences in quality. The fixed effect coefficients soak up all the across-group actions. What remains is within-group action; that is, we have greatly reduced the threat of endogeneity in the form of omitted variable bias or any mechanism that works across groups. Additionally, we have clustered standard errors at the province level. Finally, we add BREC: a dummy variable that takes a value of 1 if a property is sold by a BREC. In addition, we estimate a model in which the dependent variable is the ratio of selling price to list price and a model in which the dependent variable is the TOM. The controls follow a similar structure, except that in this latter case, the TOM is replaced by the ratio of selling price to list price as the control variable. Additionally, we compute the degree of overpricing (DOP), as in Anglin et al. (2003). The DOP is measured as the percentage deviation from a typical list price given the observable characteristics of the house. 
Specifically, we compute a hedonic regression in which the dependent variable is the list price and use the residual to determine the degree of overpricing. The DOP is included as an explanatory variable in the TOM equation. Therefore, the estimated equation is:
Y_it = α + β X_it + γ BREC_it + λ_t + μ_j + ε_it    (1)
where Y_it is the outcome estimated (selling price, list price, selling to list price ratio or TOM), X_it is the vector of hedonic characteristics and other controls (dummies for Madrid and Barcelona, stock, and either the TOM or, in the TOM equation, the DOP), λ_t is the time fixed effect, μ_j represents the location (postal code) fixed effects and ε_it is the error term (an illustrative sketch of this specification is given below). Results The first column of Table 4 reports the results from the hedonic model in which the dependent variable is the selling price (in logs). We were able to explain 78% of the variation in the logarithm of price. Each additional square metre increases the selling price by 1.16%, while the impact of one additional room or bathroom is 4.09 and 5.52%, respectively. A lift increases the price by more than 23.8%. A location in one of the two major cities (Madrid or Barcelona) has a large positive premium (25.4% for Madrid and 29.3% for Barcelona). BREC properties are sold at a higher premium (approximately 23%). This premium is smaller than that observed in the table of descriptive statistics. In column two, we explore the impact of these variables with respect to list price. The impact of the characteristics is nearly identical. The same occurs with the BREC variable. Once we control for characteristics and location, the premium on properties sold by BRECs observed in the list price is only one point higher than that observed in the selling price. Consequently, the coefficient of BREC is insignificant in the third column. The price discount is the same for properties sold by TRECs and BRECs. In the last column of Table 4, we observe a longer time to sale in the case of BRECs, specifically, 54 more days [footnote 2: In fact, the effect of BRECs on selling price is reflected in the TOM, as the latter is an explanatory variable for price and SP/LP. We estimated the model without adding the effect of TOM on price. These effects are 23.3% (selling price), 28.4% (list price), 3.7% (SP/LP) and 67 days (TOM). Although slightly higher, the effects are similar. However, Yavas and Yang (1995) point out that the listing price affects the TOM and vice versa, resulting in a simultaneity problem between the selling price and the time on market. We follow Yavas and Yang (1995) and Ben-Shahar (2002) and simultaneously estimate the TOM and selling price using the degree of overpricing as the identification variable in the TOM equation. The effect of BRECs on the selling price is 20.6% and on TOM is 38 days.]. In this sense, the possibility of selecting their own assets may explain the outcomes obtained for TRECs. The lower observed list price may result from initially underestimating the property value, fixed with the purpose of selling the property quickly. The lower selling price follows the incentive of minimizing the TOM instead of maximizing the selling price since only a percentage of the amount at which the asset is sold is earned by the TREC. Thus, the differences in outcomes can be explained by different incentives. As shown by Levitt and Syverson (2008), the main aim for TRECs is to minimize the TOM. Because traditional real estate agents receive only a small share of the incremental profit when a house sells for a higher value, they have an incentive to convince their clients to sell their houses too cheaply and too quickly. A higher rotation seems to indicate that the objective is not to maximize the selling price. There may be an incentive for TRECs to accept properties whose owners do not seek an excessively high price and who have a certain degree of flexibility in lowering the list price, obtaining a shorter TOM as a result. BRECs have different incentives. These companies are more patient, and since they wait longer to sell, they can maximize the price obtained. Additionally, real estate brokerage represents only a small part of BRECs' business activity. Overall, agent distortions and BRECs' greater patience and lower risk aversion can explain the results. The results also present evidence for the list price hypothesis based on behavioural economics. BRECs prefer the marketing strategy of setting a higher list price and then reducing this price to that of TRECs rather than setting a list price similar to that of TRECs and then being more reluctant to reduce the price. This strategy gives BRECs a social image of fairness in their transactions based on the possibility of buyers obtaining a "good deal". In fact, BRECs ran many advertising campaigns claiming large discounts (see some examples in Appendix 1). This behaviour may also reflect the fact that BRECs anchor their prices to the original appraisal prices. The initial appraisal values were high to begin with, and BRECs' losses are calculated as the difference between the selling price and the appraisal value. Finally, we analysed heterogeneity. We introduced interactions of the BREC-TREC dummy with hedonic characteristics. In summary, the effect of BRECs on selling prices, list prices and the selling to list price ratio is higher than in the regression without interactions, but hedonic characteristics reduce this effect. For instance, a higher number of rooms reduces the effect of BRECs by 4 percentage points, while a lift reduces it by 9.6 percentage points. Ten additional square metres also reduce the effect of BRECs on selling prices by nearly 4 percentage points. In the case of list prices, 10 additional square metres also reduce the effect of BRECs by 3 percentage points. An additional bathroom increases the effect of BRECs on the selling to list price ratio, while an additional room and the availability of a lift reduce it. Finally, an additional bathroom increases the effect of BRECs on the TOM, while the availability of a lift reduces it. We do not present the effect of the interactions of the BREC dummy with housing stock and dummies for Madrid and Barcelona because all three are insignificant in all regressions (Table 5). Robustness check: competing theoretical explanations As pointed out by Levitt and Syverson (2008) and Hendel et al. (2009), the results observed in the data may have different explanations. Recall that both companies compete in the same market (the client is similar), and we have adjusted for housing quality and location (later, we will delve further into the discussion about unobserved characteristics). These explanations can be divided into two groups: within-competitive market factors (unobserved differences, greater patience of one agent, lower risk aversion of BRECs) and out-of-competitive market factors (shirking, information asymmetries and incentives). 
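Before turning to these competing explanations, equation (1) and the DOP construction described in the Method section can be illustrated with a short sketch. The input file and column names below are assumptions that mirror the variables described in the text (hedonic characteristics, postal-code and month fixed effects, province-clustered standard errors); this is not the authors' actual code.

```python
# Sketch of the equation (1) models: postal-code and month fixed effects, a BREC
# dummy, hedonic controls, and standard errors clustered at the province level.
# Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("transactions.csv")          # pooled TREC/BREC sample (hypothetical)
df["log_sp"] = np.log(df["selling_price"])
df["log_lp"] = np.log(df["list_price"])
df["sp_lp"] = df["selling_price"] / df["list_price"]

hedonics = "sqm + rooms + bathrooms + lift + madrid + barcelona + stock_share"
fe = "C(postal_code) + C(month)"
cluster = {"cov_type": "cluster", "cov_kwds": {"groups": df["province"]}}

# Selling-price and list-price equations (the TOM enters as a control).
m_sp = smf.ols(f"log_sp ~ brec + {hedonics} + tom + {fe}", data=df).fit(**cluster)
m_lp = smf.ols(f"log_lp ~ brec + {hedonics} + tom + {fe}", data=df).fit(**cluster)

# Degree of overpricing (DOP): residual of a list-price hedonic regression,
# included together with the selling-to-list ratio in the TOM equation.
df["dop"] = smf.ols(f"log_lp ~ {hedonics} + {fe}", data=df).fit().resid
m_tom = smf.ols(f"tom ~ brec + {hedonics} + sp_lp + dop + {fe}", data=df).fit(**cluster)

print(m_sp.params["brec"], m_lp.params["brec"], m_tom.params["brec"])
```

The coefficient on the BREC dummy in each model corresponds to the premiums and the additional days on market of the kind reported in Table 4.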
1. Unobserved house characteristics We captured up to 78% of the variance in the selling price, which is a large proportion of the total variation. In any case, we have reasons to expect that the influence of unobserved heterogeneity will be small. First, the differences in observed characteristics are not large. Second, in contrast with Hendel et al. (2009), in our case, sellers cannot choose among several platforms. Individual sellers must sell their homes through TRECs, so sellers' attributes cannot be correlated with the company. In fact, once the list price and the expected selling price are fixed, even BRECs sell many of their homes through TRECs. The gap in unobserved differences is therefore small. However, to check the importance of potential selection on unobservables more accurately, we adopt the approach of Altonji et al. (2005), developed theoretically by Oster (2019). In our case, the value of δ that leads to β = 0 (no relationship) is 5.10 (see Table 12). This result implies that the selection on unobservables would have to be more than five times stronger than the selection on observables to attribute all of the effect of the BREC versus TREC dummy to selection bias, which is highly unlikely. Oster (2019) argues that setting δ to 1, which "formalizes the idea that selection on unobservables is the same as selection on observables", 5 is a good benchmark to check the impact of unobservable variables on the estimation. In our case, this estimation generates a bias-adjusted β of 0.38. If we accept that there is causality only if the identified set excludes zero, which holds in our case, we cannot reject the differential impact of BRECs and TRECs. 2. Greater patience of BRECs Yavas and Yang (1995) point out that a seller who can wait for a higher-paying buyer may set a higher asking price to attract only buyers who would value this property higher than the market value. The outcomes for this seller are a higher list price, a higher selling price and a longer TOM. BRECs may be more patient than TRECs if they suffer lower costs from maintaining homes in the state required for home showing. Additionally, TREC homes can be owned by sellers who are making job-related moves that are time-sensitive. BREC homes come from foreclosures, evictions, defaults, etc. In these homes, there are no families with time restrictions. If BRECs have lower discount rates than TRECs, BRECs will receive a higher price for an otherwise identical home, offset by a longer TOM. However, some findings point in the opposite direction of this explanation. First, the required differences in discount rates needed to explain our results are larger (30.1% 6 ) than those reported in the literature (Genesove & Mayer, 1997). Second, a patient seller searches for the perfect buyer for his dwelling. On the one hand, the perfect buyer is more difficult to find if the dwelling is more atypical. On the other hand, in these cases, higher differences between patient and impatient sellers are expected. Patient sellers will obtain a higher selling price offset with a longer TOM. To test this hypothesis, we construct a measure of atypicality 7 following Haurin (1988) (a sketch of one possible construction is given below), and we add the interaction of this variable with the BREC dummy in the models presented in Table 2. If BRECs are more patient, we expect a positive and significant effect of this interaction. The first two columns of Table 6 show results in the opposite direction. Differences in the TOM, selling price and list price are reduced by atypicality. 
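The atypicality measure used in these interactions can be sketched as follows. This is one plausible construction in the spirit of Haurin (1988), using hedonic-price-weighted absolute deviations from the local (postal-code) mean bundle of characteristics; the exact weighting in the paper may differ, and the column names are again hypothetical.

```python
# Sketch: Haurin (1988)-style atypicality index. A dwelling is more atypical the more
# its value-weighted characteristics deviate from the local average bundle.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("transactions.csv")                  # hypothetical sample
chars = ["sqm", "rooms", "bathrooms", "lift"]

# Implicit (hedonic) prices of the characteristics from a selling-price regression.
beta = smf.ols("selling_price ~ " + " + ".join(chars), data=df).fit().params

# Local mean bundle of characteristics within each postal code.
local_mean = df.groupby("postal_code")[chars].transform("mean")

# Atypicality: sum of absolute value-weighted deviations, scaled by predicted value.
weighted_dev = sum((beta[c] * (df[c] - local_mean[c])).abs() for c in chars)
predicted_value = beta["Intercept"] + sum(beta[c] * df[c] for c in chars)
df["atypicality"] = weighted_dev / predicted_value

# The interaction atypicality x BREC can then be added to the equation (1) models.
```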
TRECs are more common brokers for atypical properties than BRECs. One additional point on the atypicality measure (which is close to a 50% increase in the atypicality index) offsets nearly all of the differences between BRECs and TRECs. 8 Financial regulations may explain this result. Additionally, this result may derive from the fact that TRECs select atypical dwellings based on the expectation of selling them at higher prices, knowing ex ante that a longer TOM is needed. Finally, for special properties (or properties for special buyers), fewer offers are expected. If a BREC is a patient agent, we can expect a longer TOM and higher list and selling prices for properties with fewer offers. For the subsample of BREC transactions, we have information about the number of offers that a property received. A dummy variable is added according to whether this property received more than one offer. For these properties, higher list prices are observed as well as lower price cuts and TOMs, which is the opposite direction of the prediction. (Table 6 reports the effect of atypicality on differences in outcomes among BRECs and TRECs in the models estimated in Table 2; ***p < 0.01, **p < 0.05, *p < 0.1.) 3. Less risk aversion of BRECs In the case that BRECs are less risk averse than TRECs, they place a lower value on an offer today relative to the expectation of a higher future offer. TREC agents depend exclusively on the housing market. In this sense, TRECs' profit is more sensitive to housing price shocks (especially those related to the sales volume) than BRECs' since these shocks affect all TRECs' earnings (and only a small part of BRECs' earnings). Additionally, TRECs are usually risk averse at the moment they accept a new property into their portfolio. They can select which properties match demand by looking at the owners' asking price and characteristics of the property that may make that particular house more appealing to buyers. Finally, banks know that public money may be used to rescue financial institutions after economic and financial crises. This possibility can be an additional factor to take into consideration when discussing differences between BRECs and TRECs in terms of risk aversion. What degree of risk aversion is necessary to explain the BREC-TREC gap? Following Levitt and Syverson (2008) and Kocherlakota (1996), we calculate the coefficient of relative risk aversion, which is triple that of Levitt and Syverson (2008). 9 Therefore, this risk aversion is even more implausibly high according to the usual values in the literature. Out-of-competitive market factors 4. Shirking Among the explanations outside the framework of competitive markets, we first examine the possibility of shirking on effort. TREC shirking may affect offers by reducing the rate at which offers arrive or by generating offers from a lower price distribution. That is, the agent may shirk by hiding offers, reducing the real price of the offer and reducing its variability. Burdett and Ondrich (1985) show that in a labour market setting, some consequences of TREC shirking (lower offer arrival, lower mean and variance in the offer) imply a longer TOM. These predictions contrast with our findings. Furthermore, for shirking to be important, it must be difficult to observe TREC agents' level of effort (Levitt & Syverson, 2008). Many tasks performed by an agent can be easily observed. Another way to test this explanation is to consider that TRECs may exert more effort in the fourth quarter to meet their annual sales targets. 
However, as we pointed out earlier, BRECs commercialize many of their properties through TRECs, so we can consider this effect to be absent for BREC properties. Using the same specification as in Table 4, we interact our BREC dummy with a dummy for the fourth quarter (Table 7). The results do not show evidence of shirking since the interaction is not statistically significant. (Table 7 reports the effect of the fourth quarter on differences in outcomes between BRECs and TRECs in the models estimated in Table 2; ***p < 0.01, **p < 0.05, *p < 0.1.) 5. Information and different incentives Previous potential explanations are inadequate to explain the magnitude of our findings. In this sense, we found room for differences in incentives. BRECs obtain 100% of the incremental profit of the sale, so they have an incentive to maximize price, while TRECs have an incentive to sell the house quickly. Our results support this explanation. Even in the case that TRECs commercialize BREC properties, the evidence indicates that TRECs serve individual sellers differently than BREC sellers, according to their incentives and policies. In addition, in Table 8, we present new evidence that the gap between TRECs and BRECs varies with their incentives. In large cities, the volume of transactions is higher, and the buyer can learn about the reference price simply by gathering information on nearby sales prices. In small cities, however, buyers can obtain less information about homes on the market, and the advantage of TRECs and BRECs with respect to individual sellers is higher, so we should observe a higher gap between TRECs and BRECs. We proxy the informational advantage by estimating outcomes for two groups: large cities and the provinces these cities are in, on the one hand, and remaining cities, on the other. As we show in Table 8, a shorter TOM as well as higher list and selling prices are observed for cities in the second group. That is, where BRECs have a higher information advantage due to the absence of comparable homes, they obtain the target selling price and do so sooner. Additionally, a lower price discount is observed. 6. Explanations from behavioural economics Previous evidence of incentives leaves room for strategic behaviour. Throughout the paper, we have noted that BRECs prefer the marketing strategy of setting a higher list price and later conceding a higher discount. From behavioural economics, we know that this strategy gives BRECs a social image of fairness based on the possibility of buyers obtaining a good deal. However, additional explanations from behavioural economics could produce the same outcomes: higher list and selling prices. First, sellers may know that the asking price could serve as an anchor or heuristic used by a buyer to judge the property value. In this case, buyers may not be able to adjust sufficiently away from the anchor to arrive at the real market price (Northcraft & Neale, 1987). In this sense, strategic behaviour consisting of setting a higher list price is also a better way for BRECs to maximize benefits because TRECs recommend underpricing even in hot markets (Bucchianeri & Minson, 2013). For us, it is impossible to disentangle this anchor effect from the hypothesized fairness effect. Second, according to prospect theory (Kahneman & Tversky, 1979) applied to the housing market (Genesove & Mayer, 2001), loss-averse agents take the original purchase price as their reference point. 
Based on mental accounting and the associated need to break even, sellers may set a higher list price, especially in bust periods. In this case, the original purchase price acts as a reservation price to avoid losses. This theory principally affects BRECs since they commercialize their own assets. In this respect, we exploit additional information in the subsample of BREC transactions. For BRECs, the original purchase price is the appraisal price. Here, we have information about not only the appraisal price but also the net book value of every asset. Two hypotheses can be tested. First, higher list prices may be expected for transactions in which a reference point exists, that is, transactions in which list prices are greater than or equal to the appraisal price (34.66%). On the other hand, higher list prices may be expected for transactions in which BRECs act as a loss-averse agent, that is, for transactions in which the list price is greater than or equal to the net book value (51.32%). Table 9 presents the results for the differences in outcomes for transactions that either include a reference point or in which the BREC acts as a loss-averse agent. In both cases, this fact is captured by including a dummy variable in the baseline models estimated in Table 4 for the BREC subsample. Higher list prices are observed in both cases. This result can be interpreted as evidence for prospect theory since we observe higher list prices with higher appraisal prices or when list prices are equal to or higher than the net value. Therefore, in this case, BRECs act as loss-averse agents. Conclusions In this paper, we examine the relative performance of two competing types of companies: TRECs and BRECs. Our results suggest a higher selling price and a longer TOM for BRECs than for TRECs. Our findings are consistent with explanations related to dynamics within and outside competitive markets. However, within-competitive market drivers either are rejected or are inadequate to explain the magnitudes of the coefficients when more in-depth analysis is performed. The empirical estimates suggest that information in the housing market may also be a source of distortions. In fact, the main aim of the sale varies depending on company incentives. Namely, BRECs own the property, so their incentive is to maximize the selling price, while TRECs seek to minimize the TOM. Individual homeowners are induced by their agents to sell quickly and at a lower price than bank homeowners. On this point, we must add the caveat that this rule cannot be extended ad infinitum. For BRECs, a longer TOM also implies balance sheet and monetary costs in terms of higher provisions and less money to lend. Longer TOMs might occur because BRECs are "forced" to set higher prices for assets to cover losses on properties that are difficult to sell. 10 In this sense, the results can also be interpreted as evidence of misaligned incentives between banks (monetary costs) and bank-owned brokerage companies (maximize benefits). 11 Finally, the higher list prices observed for BRECs are consistent with the behavioural economics framework (Kahneman et al., 1986a, 1986b; Thaler, 2015). BRECs prefer to set a higher list price and later concede a discount rather than to set a lower list price and reduce it only reluctantly. This strategy produces an impression of fairness in the buyer, who, unaware of the reference price, believes she has obtained a good deal. Hence, this strategy permits BRECs to maintain the loyalty of future customers while maximizing profit. 
Additionally, this strategic behaviour is consistent with the anchor effect and prospect theory. Levitt and Syverson (2008) examine why reputation concerns do not discipline real estate agents more effectively. They provide two possible explanations. First, it is difficult for agents to engage in repeat business with a given client. Second, the counterfactual outcome is not observed. Additionally, these authors express surprise that sellers do not more frequently hire independent appraisers to inform them of the value of their homes since the information provided to home sellers is an important part of that service. According to Levitt and Syverson (2008), an appraiser is disinterested in the final transaction price. However, recent evidence (Ben-David, 2011; Akin et al., 2014) shows that appraisers are not independent and introduce an upwards bias in their valuations. As long as banks are expected to keep housing assets on their balance sheets for a long time, data for the coming years will allow us to study whether these differences are constant over time or vary with the cyclicality of the market and the consolidation of banking structures. In terms of limitations, we find evidence of different selling strategies adopted by BRECs and TRECs and that this difference cannot be explained by unobserved heterogeneity. However, we cannot be sure that all the differences found are not due to unobserved heterogeneity. First, the premium price found for BRECs (20%) is large in that it is higher than that previously reported in the literature comparing other agents (Bernheim & Meer, 2013; Hendel et al., 2009; Levitt & Syverson, 2008). This difference also remains when TRECs sell BREC properties. BRECs with large numbers of homes sell some houses through TRECs but control the final price (individual offers received through TRECs are evaluated by the bank). In this paper, we presented some explanations for this strategy (different incentives and explanations from behavioural economics), but there is still some room for unobserved heterogeneity to explain part of this 20%. Appendix 1 Examples of BREC advertisement campaigns See Tables 10, 11 and 12. Table 12 (notes): β is the effect of BRECs on selling prices, δ is the degree of proportionality between observables and unobservables, and mcontrols represent stock, time and location dummies; the value of δ required to produce β = 0 with mcontrols is 5.10, and the bias-adjusted β for δ = 1 is 0.38. The structure of the banks' real estate cost is quite different from that of a traditional broker. Banks must finance their properties at the cost of capital until they sell them. In addition, for every year that they keep the property on their balance sheet, they must charge provisions, thus reducing their profits. Banks also offer better financing conditions for the real estate they sell than on properties sold by other agents in the market. For instance, a bank can offer to finance 100% of the property price instead of the maximum of 80% set for properties not owned by the bank. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-12-10T16:16:03.286Z
2022-12-08T00:00:00.000
{ "year": 2022, "sha1": "252aaf0923404d550304b911de142034ab1cc549", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10901-022-09994-6.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "79c30d744322100c17e164cee2acb48312055a5c", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }